Now do computation in those threads and realize that they all wait on the GIL, giving you single-core performance on computation and multi-threaded performance only on IO.
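If you want to see this concretely, here's a rough sketch using only the standard library (loop size and sleep duration are arbitrary, so scale them to your machine): four threads doing pure-Python arithmetic take about as long as doing the work serially, while four threads sleeping (standing in for blocking IO) overlap almost completely.

```python
# Minimal sketch: CPU-bound work gains nothing from threads under the GIL,
# while IO-bound work (simulated with time.sleep) overlaps just fine.
import threading
import time

def cpu_bound(n=5_000_000):
    # Pure-Python arithmetic holds the GIL for essentially the whole loop.
    total = 0
    for i in range(n):
        total += i * i
    return total

def io_bound(seconds=1.0):
    # Blocking calls like sleep() or socket reads release the GIL.
    time.sleep(seconds)

def timed(target, workers=4):
    threads = [threading.Thread(target=target) for _ in range(workers)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Expect roughly 4x one worker's runtime: the threads take turns on one core.
    print(f"CPU-bound, 4 threads: {timed(cpu_bound):.2f}s")
    # Expect roughly 1 second total: the sleeps overlap while the GIL is released.
    print(f"IO-bound,  4 threads: {timed(io_bound):.2f}s")
```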
I think OP is making a joke about Python’s GIL, which means that even if you are explicitly multi-threading, only one thread is ever executing Python bytecode at a time, which can defeat the point in some circumstances.
Isn’t that what threading is? Concurrency doesn’t require multiple cores; with Python threads everything effectively runs on one core at a time. Parallelism is when separate threads actually run on different cores simultaneously. Either way, while the post is meant to be humorous, not understanding the difference is what keeps people from picking up the topic. It’s really not difficult. Most workloads where you’d reach for threads are IO bound, and the GIL is released during blocking IO, so plain threading is perfectly fine there. If things ran on multiple cores by default it would be a nightmare of race conditions.
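On the race-condition point, here's a toy sketch (standard library only, counts picked arbitrarily) of why shared mutable state across threads needs explicit synchronization: an unlocked read-modify-write can drop increments, while the same loop under a threading.Lock always lands on the exact total.

```python
# Toy example: a shared counter updated from several threads.
# Without the lock, some increments can be lost (how often depends on
# interpreter timing); with the lock the result is always exact.
import threading

counter = 0
lock = threading.Lock()

def unsafe_increments(n):
    global counter
    for _ in range(n):
        value = counter      # read
        value += 1           # modify
        counter = value      # write -- another thread may have written in between

def safe_increments(n):
    global counter
    for _ in range(n):
        with lock:           # the lock serializes the read-modify-write
            counter += 1

def run(worker, n=200_000, workers=4):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

if __name__ == "__main__":
    expected = 4 * 200_000
    print(f"without lock: {run(unsafe_increments)} (expected {expected})")
    print(f"with lock:    {run(safe_increments)} (expected {expected})")
```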
Now do computation in those threads and realize that they all wait on the GIL, giving you single-core performance on computation and multi-threaded performance only on IO.
Correct, which is why I had said before that most of these use cases are IO bound, where threading is perfectly fine.
Oops, my attention got caught by the code and I didn’t properly read the comment.