#ProjectLoom's virtual threads will make high performance in concurrent systems attainable with much simpler code. But Loom aims even higher: it wants to make that code clearer and more robust by introducing *structured concurrency*. Here's what that's all about. 🧵
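To make the idea concrete, here's a minimal sketch of structured concurrency with the StructuredTaskScope API previewed in later JDKs (the exact API shape has changed across Loom builds, and running it needs preview flags); findUser() and fetchOrder() are hypothetical blocking calls used purely for illustration:

```java
import java.util.concurrent.StructuredTaskScope;

class StructuredConcurrencyDemo {

    String handle() throws Exception {
        // The try-with-resources block scopes the lifetime of both subtasks:
        // neither can outlive it, and a failure in one cancels the other.
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            var user  = scope.fork(this::findUser);   // runs in its own virtual thread
            var order = scope.fork(this::fetchOrder); // runs concurrently

            scope.join();            // wait for both subtasks to finish
            scope.throwIfFailed();   // rethrow the first failure, if any

            // both subtasks completed inside this block - no leaked threads
            return user.get() + " / " + order.get();
        }
    }

    // stand-in blocking operations for the sketch
    String findUser()   { return "user-42"; }
    String fetchOrder() { return "order-7"; }
}
```

The point is that the code structure mirrors the task structure: subtasks start and end inside one block, so errors and cancellation follow the same shape as the code.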
Important: This is about concurrency, not parallelism. See yesterday's thread 👇🏾 for a more detailed distinction, but the gist is that concurrency is about processing lots of tasks that the environment throws at your system at the same time, ideally with high throughput.
@nipafx Yep. It’s really exciting. I need to translate my German article about Loom and structured concurrency github.com/jexp/blog/blob…
@nipafx So many remarkable things about Project Loom! Among them: forward compatibility with existing code and the ease with which one can write new I/O primitives. Elsewhere, one must confront the complexity of async I/O. With Loom, that's handled. Synchronous network code. Amazing!
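That "synchronous network code" point is easy to show. A hedged sketch, assuming the virtual-thread executor from later JDKs (Executors.newVirtualThreadPerTaskExecutor()): plain blocking socket code, one virtual thread per connection, where each blocked read parks the virtual thread instead of tying up an OS thread.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.Executors;

class EchoServer {
    public static void main(String[] args) throws IOException {
        try (var server = new ServerSocket(8080);
             var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            while (true) {
                Socket socket = server.accept();     // blocks cheaply
                executor.submit(() -> echo(socket)); // one virtual thread per connection
            }
        }
    }

    static void echo(Socket socket) {
        try (socket) {
            // straightforward blocking I/O - no callbacks, no futures
            socket.getInputStream().transferTo(socket.getOutputStream());
        } catch (IOException e) {
            // connection dropped; nothing more to do in this sketch
        }
    }
}
```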
@nipafx Can you explain whether, beyond the improvement to the general Java API, it has any advantage over using CompletableFuture?
@nipafx Nice threads on loom, keep it up! 👍 Can you also post something on scope locals: openjdk.java.net/jeps/8263012
@nipafx What about true native tail-call optimization? I thought it was discussed in some proximity to Loom, but I can't remember it now.
@nipafx By structured concurrency, do you mean something like algebraic effects?
@nipafx you can program with as many threads or fibers or whatever they are called - in the end, performance depends on the CPU cores - on the *available* CPU cores