Operating System Development in Rust (github.com/rust-lang)
160 points by krenoten on Jan 11, 2015 | hide | past | favorite | 67 comments


Somewhere in this direction lies the post-C era. The post-buffer-overflow era. And we do need it.

It has been 6 days since the last CERT advisory for a buffer overflow security hole.


I am with you on that.

However, if Rust doesn't get adopted by an OS vendor in their SDK, it will join the ranks of Object Pascal, Extended Pascal, Modula-2, Modula-3, Ada, Oberon, Cyclone, and ATS.

This is just to mention the alternatives in my lifetime, while ignoring the Algol and PL/I variants that existed before C was even born.


Mozilla is an OS vendor.


Yes, so are there any plans to use Rust on Firefox OS, as well as, to provide a Rust based SDK?


I assume Servo would likely power Firefox OS's front end, at least once it is ready.


Servo is a user space application, though.

My point was about Rust's real use case: systems programming.


A) Some systems programming is in user space.

B) A browser is about the beefiest user space program of which I can think at the moment.


To be very clear: Servo is a research project. It is not aimed to replace Gecko. http://paulrouget.com/e/servo/


I'm not sure how having an official OS SDK helps all that much if they also ship a C or C++ SDK. Anyone could write and maintain one for Rust, so what's the difference?


First-class treatment in the SDK tooling and APIs.

Of course, the expectation would be that said OS vendor would give more emphasis to Rust.

For example, C++ earned its place by being bundled alongside C compilers and having things like CORBA, OpenDOC and Windows pushing for it.

> Anyone could write and maintain one for Rust, so what's the difference?

This is how the alternatives to C and C++ faded away from the IT market. The third-party vendors could not keep up with the tools that the OS vendors had in their SDKs, and many developers got fed up with writing FFI code.


Sorry this is so long, I deleted many paragraphs.

Something about this doesn't sound right to me. I think the third-party factors are overwhelming. After all, the basic I/O and other low-level stuff is going to be there on any supported OS. But availability of compilers/VMs/interpreters, and good libraries for GL, Qt, audio, etc. are all at the top of my reasons for not going with a particular language that would otherwise be suited to the task.

I think Rust is in a better position than one would normally expect. People are looking at it for web dev, and web devs tolerate more diversity in their language of choice. The existence of Go, and the frequent (if misguided) comparisons to Go helps, since writing a web-facing app in Go is a sensible thing. I'm not sure if that will work out, but if it does I think it's a trump card. No web language ever died, except things like Cold Fusion and languages that were superseded by offerings from their own vendors.

On the other hand, I think Rust is better suited to game development. I don't know of Carmack being sweet on Rust in particular, but a few years ago he did start talking about how it would be nice if id could use safer languages (though his biggest complaint was about script writing.) More to the point, Rust is growing a collection of gamedev libraries. Piston is ambitiously trying to curate an entire ecosystem. This is a rare phenomenon among the pretenders. Piston is in no shape to take over the world today, but if it continues to grow and polish it could turn a number of heads.


Some of those languages have been the main app and system programming language on platforms where the OS is also written in that language. Oberon and the Modulas at least.


Yes, but they suffered the fate of being tied to OSes that failed to be adopted, so they went along with them.

I am a big fan of Wirth's languages.


The era of unrootable devices, unbreakable DRM, and inescapable walled gardens? Considering that this will basically be "trusted computing done right", I'm quite frankly more scared than anything...

I'm not saying that security holes are nothing to worry about, as I am no less irritated by buggy programs than anyone else, but the alternative might be worse.


We are past the point where exploits are a cheap, democratic weapon. So it's time to close the holes that governments and organized crime are using against us. It's our responsibility to build systems that provide democratizing power. It's also our responsibility to practice exercising our political power. Politics will be important for however long we are human.


As long as I have the choice to disable it, I want trusted computing. For example, I want to be able to run a secure cryptocurrency wallet on a portable device. Also would like to be able to introspect the hardware, even if it's costly and potentially destructive.


The problem is that any entity with the resources to create such a device is incentivized to produce one in which you cannot disable the built-in DRM/security. I think such a device could only be made possible under some sort of new law/regulation (like how net neutrality needs to be enacted into law), and I don't foresee how it can realistically happen. Consumers are already very comfortable with locked-down devices now (iPhone, TiVo, even DRM coffee machines), and I don't see any consumer outrage when device makers introduce obstructions in the name of user safety and security.


> is incentivized to produce one in which you cannot disable the built-in DRM/security

Why is that? I don't see why this has to be true.


Producing a freer device means you cannot force a business model that is more profitable. For example, the PlayStation 3 used to be able to run a version of Linux, but as time went by, Sony removed that capability: they didn't want non-gamers to buy it and use it in clustered computing. Their business model is to sell the machine at cost (or even take a loss) and recoup/profit on game sales.

A device maker that makes completely open machines is going to get out-competed by low-margin knock-offs or generic branded copies from places like China. This is good for the consumer, but not good for a business's bottom line. This is why there are very few branded computers these days (at least desktops). You can count them on your fingers.


Not exactly true. Why would Sony have added Linux functionality in the first place?

* Good will among the community

* More familiarity with Cell software development for clusters (although this was behind a VM)

* Import tax savings in certain territories due to the box being classed as a ‘computer’ instead of a ‘games system’.

Which do you think was most important? (Hint: it's the last one.) Conversely, when the ability to run Linux was pulled, a hypervisor exploit had been discovered. Now the cost equation tipped back the other way. The import tax hit was still there, but the machine had halved in price since launch, so that wasn't as painful. The community had moved away from PS3 clusters as they weren't energy-efficient any more: not as painful. The major pain point from removing it was losing the goodwill of the community, but when balanced against the piracy implications they made their decision.

Personally I think they should have patched the hypervisor holes and kept the functionality, but you can see why they decided to remove it.


Another example: had Apple not removed the TPM from its computers and used it for the firmware crypto checks, the Thunderstrike exploit would have faced quite a hurdle. Such a delicate balance.


TPM wouldn't have helped any since it's a passive component. As long as you get to overwrite the code faster (or more persistently) than it can check itself (with or without support from TPM), you win.


Restrictions placed on Turing machines by their producers are ultimately a political problem and not a technical one. We will indeed have to solve it one day, once those restrictions are no longer buggy.


It is a political problem, but I think it's one that has to be solved very, very soon, because the majority of the population seems to acquiesce quickly to these new restrictions and "safer" (against them) systems without thinking much about the negative aspects. If we let it reach the point where freedom is completely associated with malware, terrorism, or whatever else is used to frighten the population into conformity, it may be too late to oppose.


In general, technical problems can be permanently solved, and political problems cannot be. Where possible, it's always preferable to turn a political problem into a technical problem.


Security holes shouldn't be the solution, but pressure on OEMs to allow control over your own devices.


I agree. But I'm also convinced that in practice there's absolutely no way to pressure OEMs to allow control over our devices in such a way that it would lead to persistent computational openness and freedom.

Walled gardens and eroded digital/computational liberties are here to stay, and even more so in the future. The trend has been that "hacking the device" has become harder and harder, and devices have become more and more closed and controlled. I don't expect this trend to change any time soon. I sure as hell wish it would, but why would it?


Surely, though, if hacking the device were harder - if it weren't something practically anybody could do with a downloadable tool, for the most part - there'd be more demand for unlockable devices.


Exploits come from both broken languages and human stupidity. We can improve broken languages but don't worry, human stupidity is boundless.


The trend is to replace "human stupidity" with automated, provable correctness. One of the most obvious examples of this would be array bounds checking. As programmers become increasingly restricted by their own tools in the name of preventing bugs, and as machines effectively write more (correct) code for them, one does have to wonder at some point: "Are we controlling the machines, or are they controlling us?"

We are living in very interesting times indeed...


I'm thinking more of a hypervisor like Xen in Rust, with support for running Docker-like containers, with no C or C++ code anywhere: 100% subscript-checked down to the hardware level.


The GPLv3 was designed to counteract this threat. Maybe we should license more stuff under it?


> The era of unrootable devices, unbreakable DRM, and inescapable walled gardens?

No such thing. There is no protection once someone has physical access to the hardware.


Oh really? I guess you'll be releasing your bootloader unlock instructions for my phone any day now, then?


Many people seem to be confused by Rust's lack of limitations imposed on you as a programmer. In particular, the fact that so much of what is thought of as a runtime is provided as optional libraries. You can run tasks similar to Goroutines, which are provided as a library. You can use certain types of garbage collection as a library. You have dramatic freedom and accessible choices. The possibilities are incredibly inspiring.


In my tinkering with both Go and Rust, and liking the looks of both, I get vague feelings like Go is to Python as Rust is to Perl. This may be damning praise for some folks, but it makes me lean slightly toward Rust more than Go (slightly). There seems to be almost an anti-library culture in Go (i.e. "use the standard library" is almost a mantra in Go) whereas in Rust, there's more than one way to do it (even this early in its existence).

That said, in many ways neither language is like the one I've compared it to, probably in enough ways that it's not a useful comparison for much other than this one metric (the TIMTOWDIness of the language). But, it's how my head is processing these two quite novel (to me) languages; which would make sense, as I've written more code in Python and Perl than probably anything else, so they're the known things I compare these unknown things to.


You could say that Go is Python and Rust is Haskell, because, well, Rust is basically Haskell cleverly disguised as a C-style language so as to not scare away imperative programmers.


> Rust is basically Haskell cleverly disguised as a C-style language so as to not scare away imperative programmers.

This comment is very condescending. It makes it seem as if C programmers are ignorant and Haskell programmers are enlightened. Haskell is a garbage collected, lazy, pure language which makes it unsuitable for many of the domains C-style languages are used in. These are also qualities not shared by Rust, which may explain why there is less resistance in its uptake by C programmers.


> Haskell is a garbage collected, lazy, pure language which makes it unsuitable for many of the domains C-style languages are used in.

Don't be so sure! [1] You could simply use Haskell as a metalanguage to generate a C program that does what you need, just like they did with copilot. No Haskell runtime needed.

[1] https://github.com/leepike/copilot-discussion


Oh, come on. Don't take things so seriously. There are tons of Haskell programmers who don't know C, too, and Rust is just as much of a way for them to reap the benefits of C as it is anything else.


I'd say Rust is more like C and OCaml had a baby.


An amazing compliment, IMO.


Could you elaborate on that? Rust has a powerful static type system, HOFs, is expression-based, has an unusually good pattern matching engine for its intended area, assumes immutability by default and uses a notation for function signatures typically associated with FP, but I don't think those are enough to call it a "cleverly disguised Haskell".

In fact, it has been my observation that the language has been getting progressively less functional since its early days. Not that I consider this to be bad.


Sure -- basically what you said: http://science.raphael.poss.name/rust-for-functional-program...

Of course it's not exactly Haskell, but it's arguably closer to FP than it is to C, yet manages to be approachable to imperative programmers. I think this is fantastic.


That would be a really tough argument to make. Rust goes to great lengths to give control over memory layout and lacks a LOT of the features of Haskell's type system. It's a lot more like a prettier D than Haskell. Not only that, with every iteration it has been moving farther away from the FP paradigm.


I think you mean GHC's type system, since Rust is arguably quite close to Haskell 98 in spirit, i.e. algebraic types, lambdas, pattern matching, traits = single-parameter type classes.


I think the point is that it seems actively inspired by Haskell -- that the things people like about Haskell are the same things the designers of Rust want in their language.


As far as I can tell, there is some Haskell inspiration in the form of typeclasses (traits in Rust), but almost everything else that resembles FP has a more common denominator in ML. (Even Haskell's type classes have some common lineage with SML Functors)


Reminds me of hop and house (haskell based OS) http://lambda-the-ultimate.org/node/299 ~ 2004


Which library provides goroutine like concurrency in rust?


With 1.0, Rust no longer has a runtime at all, so official support for libgreen was actually dropped. However, between sync::mpsc, sync::TaskPool, and sync::Future you can produce similar results.


All those deal with threads. Rust 1.0 will have multi-threading just like C++, Java, etc. That is very different from how goroutines work and does not offer similar benefits at all (specifically for network services).

libgreen, while it existed, scaled very poorly, and the consensus among Rust developers seemed to be that it could not be improved without compromising some of the core features of Rust, like no GC and zero-overhead calls to C libraries.


Even with Go it's a fairly bad idea not to use Goroutine pools for request handling if you're writing low-latency services. You can get isomorphic results with a future pool and channels.


It is not about whether goroutines need to be pooled. Goroutines work very differently from OS threads. They are scheduled at the user level, using non-blocking system calls to perform network IO across multiple OS threads. They are also partially pre-empted at function calls.

Whether goroutines need to be pooled depends on the application. For example, the default HTTP server creates one per connection and seems to be used without any problem in production in a lot of places. Creating them is much cheaper than creating OS threads. In fact, libgreen threads were faster to spawn than libnative ones in Rust when it existed.

Rust and Go have different trade-offs when it comes to concurrency and each has its benefits and drawbacks.


I'm not saying that a Goroutine is a thread. My point was, you have the ability to write your own systems for your own tradeoffs. Feel free to write a schedulerless coroutine system or an Erlangesque reduction counting scheduler, or even a full reimplementation of the Go scheduler. You can pin Goroutines to threads to ask them to act a little more like them, but that's about as far as you can go. While you are right that Go's system is great for some tasks, there is no reason why Rust can't form a superset of its functionality, in time.


Whether a goroutine-like mechanism is possible in Rust without compromising its core values is not clear, since libgreen attempted it and failed.

Something like async/await is in the cards, and I guess Rust does have plans to implement it sometime in the future, but that will require compiler support and can't be done just in libraries, as you claimed in the first comment of this thread. While it will not have the same runtime characteristics as goroutines, it will provide similar benefits to program structure.

All that said, rust definitely offers a lot. I'm only contending the claim that the language is flexible enough to implement something similar to goroutines purely as a library.


Huh interesting... how does Go do it under the hood?


This is a good read on the Go scheduler. http://morsmachine.dk/go-scheduler


There is nothing that would prevent an implementation of Go's scheduler in Rust, and libgreen duplicated its functionality pretty comprehensively. Rust's scheduler was more advanced than Go's scheduler in some ways—it did work-stealing in a completely lock-free way, for example.

The only fundamental difference between Rust and Go here is in stack management. In Go, goroutines start off with a small stack, and they can grow because the language is now pervasively, precisely garbage collected and all of the pointers into the stack can be rewritten. In Rust, that wasn't an option because it isn't garbage collected; it used to use the old Go approach of split stacks, but the same problems were encountered. There was also significant backlash against the problems that continue to be an issue in Go and were an issue in Rust—the FFI (cgo in Go's case) was slow due to having to perform stack switches, most importantly.


Yes, that's what I meant when I said it's not clear whether goroutines can be replicated as a library in Rust "without compromising Rust's core values". In fact, I mentioned no GC and zero-overhead C calls.

Since libgreen did have massive stacks, and one could not spawn hundreds of thousands of tasks without changing system limits like overcommit, it was not really a comprehensive duplication of goroutines.


> Since libgreen did have massive stacks and one could not spawn 100s of thousands of tasks without changing system limits like overcommit

Well, you could customize the stack size, and we did when running stress tests like that. You don't need to change system settings. The only difference is that you have to know how much stack your "goroutine" is going to need up front.


This is really interesting. One or both of you all should write a blog post on it because I'm eager to learn more.

I love C. Love Go. Love the idea of Rust though I've never tried it, and I just want to know more.


Oh no, flashbacks of C++

Want some simple concurrency? Here try boost::asio... or boost::thread... or... libev... or libuv.

Now every project has a different concurrency model woven into it. Why isn't there a good solution in the stdlib for this common need?


> Now every project has a different concurrency model woven into it. Why isn't there a good solution in the stdlib for this common need?

Because every concurrency/parallelism approach has tradeoffs, there is no one right implementation for all cases, and Rust's intended primary use case is low-level and broad enough that there's not even one approach that's probably good enough for most cases.


Everything mentioned in the parent post is in the standard library.


Also worth checking out:

http://rust-class.org/

Or maybe this, if UEFI is palatable to you:

http://blog.theincredibleholk.org/blog/2013/11/18/booting-to...


For bootstrapping a kernel, I had trouble with cross-compiling. My kernel doesn't yet support a libc-hosted environment, so I use an i686-pc-elf gcc target with -ffreestanding. rustc works nicely if your target environment is similar enough to your dev environment, or if you are developing for Android, for which there's an SDK and rustc port already. I initially tripped up on this because I'm developing on OS X currently but targeting GNU. I just need the rustc equivalent of -ffreestanding. Alternatively, I'll have to wait until I've got a proper i686-myos gcc and libc port.



