I wonder if the attributes of Hubris and similar systems -- real-time, lack of dynamism -- will become "ideal attractors" for developers not working in problem domains where these things are absolutely required, especially as the backlash against the complexity at higher layers of the software stack continues to grow. In other words, I wonder if a sizable number of developers will convince themselves that they need an embedded RTOS like Hubris for a project where Linux running on an off-the-shelf SBC (or even PC) would work well enough.
Right now we are going in the opposite direction. Web developers on HN refuse to learn proper embedded programming, and instead stack abstraction on top of abstraction, reaching for MicroPython and Raspberry Pis for every job under the sun.
It is a shame that Arduino/AVR never bothered implementing support for the full C++ standard library. If the full power of C++ were available to the end user, then perhaps alternatives like MicroPython would be less attractive.
Your metaphor that "Python is the new BASIC" stems from desktop computing's use of BASIC to teach beginning programming skills, largely on 8-bit machines.
Once you started with BASIC, you presumably moved on to learning assembly language; and as many of these machines gained C compilers, you might have tried to obtain one.
The entire point of MicroPython is to be a friendly introduction: a prototyping and learning platform.
In no way does MicroPython take full advantage of the hardware, nor could it ever talk to hardware directly.
One should not treat all programming languages the same, as they have different purposes; Python is not fit for the purposes C or C++ serve, such as manual memory management.
The number one lesson a beginning embedded programmer should take away from Arduino is that controlling hardware is about writing specific bit patterns to memory locations. Sorry, this is not something Python can do or was designed for.
Deeply embedded means “embedded Linux won’t suffice”
Your car's braking system had better not be a MicroPython program.
There are actual safety certification regimes in which code must be proven correct, and the Python interpreter itself would not come close to passing, as its complexity is too large.
(Formal verification is the search term you seek.)
Yeah, when somebody says something like 'deeply embedded', the platform that comes to my mind is the Dreamcast VMU, which has a CPU that (AFAICT) doesn't even have a C compiler yet. ("C compiler....the idea was abandoned."--https://dmitry.gr/?r=05.Projects&proj=25.%20VMU%20Hacking) I doubt something written in Rust would be adequate for such a CPU.
> Once you started with BASIC, you presumably moved on to learning assembly language; and as many of these machines gained C compilers, you might have tried to obtain one.
I would have guessed quite a lot of people went from BASIC to Turbo Pascal. But you're talking 8-bit machines; maybe that was only available for 16-bit and up?
If you had bothered to watch the linked video, you'd know that the Arduino folks are the ones who produce the tools in the box.
They picked C++ only because C would be even worse, and because it provided an easy way to have their Arduino-like language without creating their own compiler.
They have no plans to ever provide proper C++ support.
So if that surprises you, go watch a Dan Saks talk about C++ adoption in the embedded domain.
They picked C++ because it's a low-level-capable language with reasonably familiar syntax, and their primary training wheels are the simplified Arduino libraries.
The actual runtime framework, with its "setup" and "loop" functions, is a reasonable proxy for an RTOS, or for the framework an experienced embedded developer would have built as a general runtime system.
There’s nothing wrong with Arduino, except that its SPI SD card library won’t give you good bandwidth. But that’s because they wanted it to be a simple, understandable access library for SD cards; you will need to go further if you want reasonable performance.
I would tend to disagree because MicroPython is so abstracted that it resembles writing regular Python on a server more than it does anything embedded.
Just as an example, the WiFi setup resembles a server far more than an ESP32 with esp-idf. All you do is give it the connection details, and MicroPython seems to handle the details like trying to reconnect in the background. It's not far off from what systemd-networkd or similar provides. esp-idf forces you to handle that yourself, and to think about what you want to happen in that situation.
MicroPython also doesn't support threads afaict, so you don't even have to handle scheduling threads.
I like MicroPython as a way to run Raspberry Pi-like stuff on the cheap, and it's a great learning tool in that sense, but you're still too far from the hardware to really be learning about embedded systems.
Sure, that's happening, but some of us are also tempted to use Rust for applications where an easier to learn, more popular language would be good enough.
I think you are right, and I would add Turing incompleteness to that list. If your problem doesn't require Turing completeness, then a Turing-complete language is probably not the best tool for the job: a non-Turing-complete language can actually give you more leverage in that case. Completeness is a feature of a language, and like all features it comes with tradeoffs. The ability to express more kinds of problems comes at the cost of not being able to exploit the contours of a particular problem (e.g. monotonic data, no cycles) to improve things like speed, debuggability, and parallelization. Exploiting those contours can enable features that seem completely out of reach today: rewinding the state of your program to debug production errors, modifying programs as they're running, automatically parallelizing computations across an arbitrary number of cores, querying the provenance of any variable in your program.
Datalog's incompleteness for example allows it to resolve queries faster than a complete language like Prolog due to the simplifying inferences it can make about the code.
If the predictions are true that we will see more and more specialized hardware due to the end of Moore's Law, then we will see more OS services just look like services running on separate processors. Special purpose hardware doesn't need a batteries included operating system. We could argue whether a modular OS still counts as general purpose, but I'll let you guys do that.
With IPC, latency becomes the elephant in the room. An RTOS can't remove that, but it can help.
I doubt it. Real time adds its own flavor, and a small OS doesn't come with everything you might want. It's useful when it's what you want, not when you don't want something else.
Containerisation already goes some way towards this: each executable is bundled with what it needs and nothing else. The next step is one app per piece of hardware, where a minimal, stripped-down OS just launches your app, effectively a container in hardware. I think Google does this a fair amount.
The jump from this to an RTOS is large, though. The abstractions are different. The limitations are different. You probably need to rewrite everything you depend on. And what do you gain? Mostly just predictable latency, and the ability to run on very limited (but cheap) hardware. Which you need why?
> Containerisation already goes a way towards this, each executable is bundled with what it needs and nothing else.
Maybe in theory, but in practice most people still ship an entire OS userland in their containers (most of the time it's Alpine and that's not too big of a deal, but too many times it's an entire Debian!)