* Dirty schedulers: This allows easy integration of blocking C-based libraries. So, for example, one can wrap something like RocksDB and make it available to the rest of the VM more easily. Or libcurl, among others.
* DTLS: This lets it talk to WebRTC clients.
* Erlang literals are no longer copied when sending messages: This is kind of a sneaky one. By default (with the exception of large binaries), the Erlang VM copies data when it sends messages. Now, module literals (constants, strings, etc.) are another thing that's not copied. There is a hack to dynamically compile configuration values or other tables of constants into a module at runtime. So if you use that hack, you get a nice performance boost.
* code_change, terminate and handle_info callbacks are now optional in the OTP behaviours. This is very nice. I always wondered why I had to write all that boilerplate code.
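That hack can be sketched roughly like this (the module name `config_compile`, the target module `app_config`, and the keys are all made up for illustration; this mirrors what libraries in the mochiglobal/fastglobal family do). It assumes atom keys:

```erlang
-module(config_compile).
-export([load/1]).

%% Compile a proplist of constants into a module (here called
%% app_config) at runtime. Values returned by app_config:get/1
%% live as module literals, so they sit outside process heaps --
%% and, as of OTP 20, are not copied when sent in messages either.
load(Props) ->
    Forms =
        [{attribute, 1, module, app_config},
         {attribute, 2, export, [{get, 1}]},
         {function, 3, get, 1,
          %% One clause per constant: get(Key) -> Value.
          [{clause, 3, [{atom, 3, Key}], [], [erl_parse:abstract(Val)]}
           || {Key, Val} <- Props] ++
          %% Fallback clause: get(_) -> undefined.
          [{clause, 4, [{var, 4, '_'}], [], [{atom, 4, undefined}]}]}],
    {ok, Mod, Bin} = compile:forms(Forms),
    {module, Mod} = code:load_binary(Mod, "app_config.erl", Bin),
    ok.
```

After `config_compile:load([{pool_size, 50}])`, a call to `app_config:get(pool_size)` returns `50` straight from the module's literal pool.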
> * Dirty schedulers: This allows easy integration of blocking C-based libraries. So, for example, one can wrap something like RocksDB and make it available to the rest of the VM more easily. Or libcurl, among others.
Amazing. I'm new to Erlang, but within the last hour I read comments by a seasoned programmer on how Erlang's shiny scheduler has an Achilles' heel when it comes to blocking C-based libraries. I suppose, then, that this truly is a significant advancement.
> how Erlang's shiny scheduler has an Achilles' heel when it comes to blocking C-based libraries.
It's an improvement, but it was already possible to write C-based libraries. Drivers, ports, and NIFs were there before, but for long-running computations you had to do your own thread and queue setup. This just avoids that bit, because the VM writers did it for the user, so to speak (and probably did it the right way).
Also, it is really something advanced and not what most Erlang programmers would end up doing anyway. Once you start writing C code and directly loading it into the VM, the same caveats as before apply - that is, some fault tolerance and safety guarantees go out the window.
Exactly. Half the point of the BEAM is not having that type of code impact the scheduler.
Case in point: you can write a faster JSON parser in C and use it (jiffy)... but it's not desirable to pollute your BEAM for the minor performance gain.
Could you elaborate more on this? I don't know much about Erlang/OTP's support for WebRTC. Does adding DTLS make it possible for Erlang/OTP to talk to WebRTC clients because WebRTC requires DTLS, or does this mean Erlang/OTP now has a complete stack and API for WebRTC communication out of the box?
Thanks for the clarification. I was aware of the DTLS requirement for WebRTC, but wasn't sure about whole-WebRTC-stack support in Erlang/OTP. Thinking about it now, maybe an Erlang/Elixir/whatever-on-BEAM implementation of a STUN/TURN server would make sense...
I thought that the STUN/TURN stuff was just for discovery and used fairly conventional protocols... Isn't the DTLS for actual media transmission - i.e., directly between clients (or via some kind of gateway)?
STUN is for signaling, but the hole-punching business is not 100% guaranteed to work, so you need TURN as the backup relay server. DTLS is needed both for p2p traffic and for relaying traffic through TURN, so your TURN server does need DTLS to do its job.
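For reference, OTP 20's ssl application gained (experimental) DTLS support, so opening a DTLS socket looks roughly like the TLS case. This is a configuration sketch only; the port number and certificate paths are placeholders:

```erlang
%% Configuration sketch: a DTLS listener via OTP 20's ssl application.
%% DTLS support is experimental in this release; paths are placeholders.
ok = ssl:start(),
{ok, ListenSocket} =
    ssl:listen(4433, [{protocol, dtls},
                      {certfile, "cert.pem"},
                      {keyfile, "key.pem"},
                      binary]),
%% ssl:transport_accept/1 and ssl:ssl_accept/1 would follow,
%% as with a TLS listener.
```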
ETS is a fast [1] in-memory K/V store in Erlang's standard library, including a set type with constant-time put/get. New in OTP 20 is an atomic CAS (compare-and-swap) operation.
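A rough sketch of how that CAS can be expressed with `ets:select_replace/2` (the module and function names here are made up; atom keys and integer values are assumed to keep the match spec simple):

```erlang
-module(ets_cas).
-export([cas/4, demo/0]).

%% Atomic compare-and-swap via ets:select_replace/2 (new in OTP 20):
%% replace {Key, Expected} with {Key, New} in a single atomic step.
%% Returns true if the swap happened, false otherwise.
cas(Tab, Key, Expected, New) ->
    %% Match the exact object {Key, Expected}; the body {{Key, New}}
    %% constructs the replacement tuple (the key must stay the same).
    MS = [{{Key, Expected}, [], [{{Key, New}}]}],
    ets:select_replace(Tab, MS) =:= 1.

demo() ->
    T = ets:new(demo, [set, public]),
    ets:insert(T, {counter, 0}),
    true  = cas(T, counter, 0, 1),  %% succeeds: value was 0
    false = cas(T, counter, 0, 2),  %% fails: value is now 1
    ets:lookup(T, counter).
```

The replace only happens if the stored value still equals `Expected`, which is exactly the compare-and-swap contract.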
> * Dirty schedulers: This allows easy integration of blocking C-based libraries. So for example can wrap something like RocksDb and make it available to the rest of the VM easier. Or libcurl and others.
Silly question, but shouldn't Erlang have some good & highly concurrent HTTP libraries?
Dirty schedulers are about functions implemented in C - called NIFs (Native Implemented Functions) - a sort of FFI you'd find in most languages.
Because of the preemptive nature of Erlang, doing lengthy work in those functions can destabilise the system (the "magic value" is said to be around 1 ms). That is because C functions can't be preempted in the middle of execution the way Erlang ones can.
Using dirty schedulers lifts this time limitation, but incurs a higher constant overhead when calling a function on a dirty scheduler, since it means switching OS threads. This tradeoff, however, is perfectly acceptable in a lot of cases.
And yes, Erlang has good & highly concurrent HTTP libraries implemented in Erlang.
Yes. That hasn't changed. All NIFs (regular or dirty) are executed directly in the context of the VM. A safer option would be a port - a regular program you communicate with through stdin/stdout. Ports allow representing such a program (running in a separate OS process) as something equivalent to a native Erlang process.
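A minimal sketch of the port approach, using `/bin/cat` purely as a stand-in for a real external program (the module name is made up):

```erlang
-module(port_demo).
-export([echo/1]).

%% Talk to an external OS process through a port. The external
%% program runs in its own OS process, so a crash there cannot
%% take down the VM -- unlike a misbehaving NIF.
echo(Bytes) when is_binary(Bytes) ->
    Port = open_port({spawn, "/bin/cat"}, [binary, stream]),
    port_command(Port, Bytes),
    receive
        {Port, {data, Data}} ->
            port_close(Port),
            {ok, Data}
    after 1000 ->
        port_close(Port),
        {error, timeout}
    end.
```

Since `cat` just echoes its stdin, `port_demo:echo(<<"hello\n">>)` gets the same bytes back as `{Port, {data, ...}}` messages, which is the general shape of any port-based integration.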
There's also work on supporting writing NIFs in Rust, which gives some degree of additional safety. The relevant project would be: https://github.com/hansihe/rustler
Oh, it does; there are already a good number of decent ones. I just used libcurl as an example. And as someone suggested, one reason to wrap libcurl could be that it supports a lot of corner cases and protocols.
Also here is a detailed list of changes:
http://erlang.org/download/otp_src_20.0-rc2.readme