• Gonzako@lemmy.world · 14 days ago

    I’ll be honest, this only matters when running single services that are very expensive. It’s fine if your program can’t be parallelized, as long as the OS does its job and spreads the love around the CPUs.

      • Opisek@lemmy.world · 13 days ago

        I absolutely love how easy Go makes multithreading and communication between threads. Easily one of its biggest selling points.

        • Ethan@programming.dev · 12 days ago

          Key point: they’re not threads, at least not in the traditional sense. That makes a huge difference under the hood.

          • Opisek@lemmy.world · 11 days ago

            Well, they’re userspace threads. That’s still concurrency just like kernel threads.

            Also, it still uses kernel threads, just not for every single goroutine.

            • Ethan@programming.dev · 11 days ago

              What I mean is, from the perspective of performance they are very different. In a language like C, where (p)threads are kernel threads, creating a new thread is only marginally less expensive than creating a new process (on Linux, at least; I’m not sure about Windows). In comparison, creating a new “user thread” in Go is exceedingly cheap: spawning tens of thousands of goroutines is feasible, while spawning tens of thousands of kernel threads is a problem.

              Also, it still uses kernel threads, just not for every single goroutine.

              This touches on the other major difference: there is zero connection between the number of goroutines a program spawns and the number of kernel threads it spawns. A program using kernel threads is relying on the kernel’s scheduler, which adds a lot of complexity and non-determinism. A Go program, by contrast, uses the same number of kernel threads (assuming the same hardware, and that you don’t mess with GOMAXPROCS) regardless of the number of goroutines it runs, and the goroutines are cooperatively scheduled by the runtime instead of preemptively scheduled by the kernel.

      • kbotc@lemmy.world · 14 days ago

        And yet: you’ll still be limited to two simultaneous calls to your REST API, because the default HTTP client was built in the dumbest way possible.

    • groknull@programming.dev · 13 days ago

      I initially read this as “all programmers are single-threaded” and thought to myself, “yeah, that tracks”

    • AndrasKrigare@beehaw.org · 14 days ago

      I think OP is making a joke about Python’s GIL, which makes it so that even if you are explicitly multithreading, only one thread is ever running at a time, which can defeat the point in some circumstances.
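
      A rough sketch of what that looks like in practice (illustrative, not benchmarked; CPU-bound work doesn’t get faster with more threads under the GIL):

      import threading
      import time

      def count(n):
          # Pure-Python CPU-bound work: holds the GIL the whole time
          while n > 0:
              n -= 1

      N = 10_000_000

      # Two runs back to back on one thread
      start = time.perf_counter()
      count(N)
      count(N)
      print(f"sequential: {time.perf_counter() - start:.2f}s")

      # Two threads "in parallel" -- the GIL still runs them one at a time
      start = time.perf_counter()
      t1 = threading.Thread(target=count, args=(N,))
      t2 = threading.Thread(target=count, args=(N,))
      t1.start(); t2.start()
      t1.join(); t2.join()
      print(f"threaded:   {time.perf_counter() - start:.2f}s  # about the same, or worse")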

      • lime!@feddit.nu · 13 days ago

        no, they’re just saying python is slow. even without the GIL python is not multithreaded. the thread library doesn’t use OS threads so even a free-threaded runtime running “parallel” code is limited to one thread.

        • AndrasKrigare@beehaw.org · 13 days ago

          If what you said were true, wouldn’t it make a lot more sense for OP to be making a joke about how, even if the source includes multithreading, all his extra cores are wasted? And wouldn’t it make your original comment, suggesting a coding issue instead of a language issue, pretty misleading?

          But what you said is not correct. I just did a dumb little test:

          import threading
          import time

          def task(name):
              # each thread just sleeps, so the process stays alive
              # long enough to inspect it from another terminal
              time.sleep(600)

          t1 = threading.Thread(target=task, args=("1",))
          t2 = threading.Thread(target=task, args=("2",))
          t3 = threading.Thread(target=task, args=("3",))

          t1.start()
          t2.start()
          t3.start()

          And then ps -efT | grep python shows, sure enough, that the python process has 4 threads (the 3 workers plus the main thread). If you want to be even more certain, you can run strace -e clone,clone3 python ./threadtest.py and see that it makes clone3 syscalls.

          • anton@lemmy.blahaj.zone · 13 days ago

            Now do computation in those threads and realize that they all wait on the GIL, giving you single-core performance on computation and multithreaded performance only on I/O.
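
            A sketch of the contrast (timings approximate; CPython with the GIL assumed):

            import threading
            import time

            def io_bound():
                time.sleep(1)           # releases the GIL while waiting

            def cpu_bound():
                sum(range(10_000_000))  # holds the GIL while computing

            for work in (io_bound, cpu_bound):
                threads = [threading.Thread(target=work) for _ in range(4)]
                start = time.perf_counter()
                for t in threads:
                    t.start()
                for t in threads:
                    t.join()
                # io_bound finishes in ~1s total (the sleeps overlap); cpu_bound
                # takes roughly 4x the single-thread time (the GIL serializes it)
                print(work.__name__, f"{time.perf_counter() - start:.2f}s")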

            • AndrasKrigare@beehaw.org · 13 days ago

              Correct, which is why before I had said

              I think OP is making a joke about Python’s GIL, which makes it so that even if you are explicitly multithreading, only one thread is ever running at a time, which can defeat the point in some circumstances.

              • thisisnotgoingwell@programming.dev · 13 days ago

                Isn’t that what threading is? Concurrency doesn’t need multiple cores; parallelism is when separate threads are running on different cores at the same time. Either way, while the post is meant to be humorous, it’s misunderstanding this difference that keeps people from picking up the topic. It’s really not difficult. Most reasons to bypass the GIL are IO-bound, meaning plain threading is perfectly fine. If things ran on multiple cores by default, it would be a nightmare of race conditions.
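
                For example, even with the GIL, compound operations like += are read-modify-write and not atomic, so shared state still needs a lock (a minimal sketch):

                import threading

                counter = 0
                lock = threading.Lock()

                def add_many():
                    global counter
                    for _ in range(100_000):
                        with lock:       # += alone gives no guarantee
                            counter += 1

                threads = [threading.Thread(target=add_many) for _ in range(4)]
                for t in threads:
                    t.start()
                for t in threads:
                    t.join()
                print(counter)  # reliably 400000 with the lock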

          • lime!@feddit.nu · 13 days ago

            is this stackless?

            anyway, that’s interesting! i was under the impression that they eschewed os threads because of the gil. i’ve learned something.

    • Successful_Try543@feddit.org · 14 days ago

      Does Python have the ability to specify loops that should be executed in parallel, as e.g. Matlab uses parfor instead of for?

        • Panties@lemmy.ca · 14 days ago

          I was telling a colleague about how my department started using Rust for some parts of our projects lately. (normally Python was good enough for almost everything but we wanted to try it out)

          They asked me why we’re not using MATLAB. They were not joking. So, I can at least tell you their reasoning. It was their first programming language in university, it’s safer and faster than Python, and it’s quite challenging to use.

            • Successful_Try543@feddit.org · 14 days ago

              We weren’t doing any resource-intensive computations with Matlab, mainly just using it for teaching FEM, as we had an extensive collection of scripts for that purpose, plus pre- and some post-processing.

      • lime!@feddit.nu · 14 days ago

        python has way too many ways to do that: asyncio, concurrent.futures, threading, multiprocessing
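
        the closest analogue to matlab’s parfor is probably a multiprocessing pool (a sketch):

        from multiprocessing import Pool

        def body(x):
            return x * x          # stand-in for an expensive, independent loop body

        if __name__ == "__main__":
            with Pool() as pool:  # defaults to one worker process per core
                results = pool.map(body, range(100))
            print(results[:5])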

        • danhab99@programming.dev · 14 days ago

          I’ve always hated object-oriented multithreading. Goroutines (green threads) are just the best way 90% of the time. If I need to control where threads go, I’ll write it in Rust.

              • entropicdrift@lemmy.sdf.org · 14 days ago

                Meh, even Java has decent FP paradigm support these days. Just because you can do everything in an OO way in Java doesn’t mean you need to.

            • danhab99@programming.dev · 14 days ago

              If I have to put a thread object in a variable and call a method on it to start it, then it’s OO multithreading. I don’t want to know when the thread spawns, I don’t want to know what code it’s running, and I don’t want to know when it’s done. I just want shit to happen at the same time (90% of the time).
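
              In Python terms, the closest the standard library gets to that is probably concurrent.futures, where you never touch a thread object (a sketch):

              from concurrent.futures import ThreadPoolExecutor

              def work(item):
                  return item * 2   # placeholder for whatever should run concurrently

              # no thread objects to create, start, or join by hand
              with ThreadPoolExecutor() as pool:
                  results = list(pool.map(work, range(10)))
              print(results)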

              • lime!@feddit.nu · 13 days ago

                the thread library is aping the posix thread interface with python semantics.

          • lime!@feddit.nu · 14 days ago

            yup, that’s true. most meaningful tasks are io-bound, so “parallel” basically qualifies as “whatever allows multiple threads of execution to keep going”. if you’re doing number crunching in python without a proper library like pandas that can parallelize your calculations, you’re doing it wrong.

            • WolfLink@sh.itjust.works · 14 days ago

              I’ve used multiprocessing to squeeze more performance out of numpy and scipy. But yeah, resorting to multiprocessing is a sign that you should be dropping into something like Rust or a C variant.
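
              The pattern, roughly (a sketch; pickling overhead means it only pays off for chunky work):

              import numpy as np
              from multiprocessing import Pool

              def heavy(chunk):
                  # placeholder for real numpy/scipy work on one chunk
                  return float(np.linalg.norm(chunk))

              if __name__ == "__main__":
                  data = np.random.rand(8, 500_000)
                  with Pool() as pool:
                      # each row is pickled over to a worker process
                      results = pool.map(heavy, data)
                  print(results)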

  • tetris11@lemmy.ml · 14 days ago

    I prefer this default. I’m sick of having to rein in Numba cores or OpenBLAS threads or other out-of-control software that immediately tries to bottleneck my stack (the usual knobs are sketched below).

    CGroups (Docker/LXC) is the obvious solution, but it shouldn’t have to be
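
    For reference (a sketch; which variable is honored depends on your BLAS build, and they have to be set before the library is imported):

    import os

    # cap the native thread pools before importing numpy/numba
    os.environ["OMP_NUM_THREADS"] = "1"
    os.environ["OPENBLAS_NUM_THREADS"] = "1"
    os.environ["MKL_NUM_THREADS"] = "1"
    os.environ["NUMBA_NUM_THREADS"] = "1"

    import numpy as np  # noqa: E402 -- import after setting the limits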

  • dan@upvote.au · 14 days ago

    Do you mean Synapse the Matrix server? In my experience, Conduit is much more efficient.

    • Lena@gregtech.eu (OP) · 13 days ago

      Yep, I mean as in Matrix. There is currently no way to migrate to conduit/conduwuit. Btw, from what I’ve seen, conduwuit is more full-featured.

    • jimmy90@lemmy.world · 13 days ago

      i wish they would switch the reference implementation to conduit

      there are core components on the client side in rust, so maybe that’s the way for the future

  • SaharaMaleikuhm@feddit.org · 13 days ago

    Oh wow, a programming language that is not supposed to be used for every single piece of software in the world. Unlike JavaScript, for example, which should absolutely be used for making everything (horrible). Node.js was a mistake.

    • Lena@gregtech.eu (OP) · 13 days ago

      Oooooh this is really cool, thanks for sharing. How could I install it on Linux (Ubuntu)? I assume I would have to compile CPython. Also, would the source of the programs I run need any modifications?

      • computergeek125@lemmy.world · 13 days ago

        From memory, I can only answer one of those: the way I understand it (and I could be wrong), your programs theoretically should only need modifications if they have a concurrency-related bug. The global interpreter lock takes a sledgehammer approach to “fixing” concurrency data races. If you have a bug that the GIL was masking, you’ll need to solve that data race using a different control structure once free threading is enabled.

        I know it’s kind of a vague answer, but every program that supports true concurrency will do it slightly differently. Your average script with just a few libraries may not benefit, unless a library itself uses threads. Some libraries that use native compiled components may already be able to utilize the full power of your computer even on standard Python builds, because threads spawned directly in the native code are less beholden to the GIL (depending on how often they need to communicate with interpreted Python code).
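
        If you want to check what you’re running on, CPython 3.13+ exposes this (a sketch; the underscored API is provisional):

        import sys
        import sysconfig

        # 1 on free-threaded ("no-GIL") builds, 0 or None otherwise
        print(sysconfig.get_config_var("Py_GIL_DISABLED"))

        # 3.13+: whether the GIL is actually enabled in this process
        if hasattr(sys, "_is_gil_enabled"):
            print(sys._is_gil_enabled())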

        • Lena@gregtech.eu (OP) · 13 days ago

          Thanks for the answer, I really hope Synapse will be able to work with concurrency enabled.

      • nickwitha_k (he/him)@lemmy.sdf.org · 13 days ago

        In this case, it’s a feature of the language that enables developers to implement greater amounts of parallelism. So, the developers of the Python-based application will need to refactor to take advantage of it.