
So each "process" is deployed as a separate binary, and presumably runs as a separate OS process?

If so, this seems somewhat problematic in terms of increased overhead.

How is fast communication achieved? Some fast shared memory IPC mechanism?

Also, I don't see anything about integration with async. For better or worse, the overwhelming majority of networking code has migrated to async, and you won't find good non-async libraries for many things that need networking.



By “distributed” I assumed it meant “distributed” as in on entirely separate machines, thus necessitating that each component run as an independent process.


Currently, Hydro is focused on networked applications, where most parallelism is across machines rather than within them. So there is some extra overhead if you want single-machine parallelism. It's something we definitely want to address in the future, via shared memory as you mentioned.

At POPL 2025 (last week!), an undergraduate working on Hydro presented a compiler that automatically compiles blocks of async-await code into Hydro dataflow. You can check out that (WIP, undocumented) compiler here: https://github.com/hydro-project/HydraulicLift
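HydraulicLift's actual input format isn't documented in the thread, but here is a sketch of the kind of straight-line async-await code such a compiler could restructure into dataflow stages. Everything here is illustrative, not HydraulicLift's real API; the hand-rolled `block_on` is only so the sketch runs on std alone, without an executor crate:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Minimal executor for immediately-ready futures, so this example
// needs no external runtime (tokio, async-std, etc.).
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is stack-pinned and never moved after this point.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// Stand-in for an async operation (e.g. an RPC); name is hypothetical.
async fn double(x: u32) -> u32 {
    x * 2
}

// A sequential async block: each `.await` is a natural cut point a
// dataflow compiler could turn into a pipeline stage between processes.
async fn pipeline(input: u32) -> u32 {
    let a = double(input).await; // stage 1
    let b = double(a).await;     // stage 2
    b + 1                        // stage 3: local post-processing
}

fn main() {
    assert_eq!(block_on(pipeline(10)), 41);
    println!("pipeline(10) = {}", block_on(pipeline(10)));
}
```

The point of compiling such blocks is that the data dependencies between awaits are explicit, so the stages can be placed on different machines without the author restructuring the code by hand.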




