Since Rob Pike wrote this note in 2000, there have been many commercially important advances in systems software:
- iOS and Android: Power-efficient mobile systems, wakelocks, etc.
- Docker, Borg and Kubernetes: Container isolation, cluster orchestrators.
- Modern browser environments, with high degrees of isolation.
- Fast JavaScript runtimes and specialized JITs.
- WASM.
- Ruby and metaprogramming, which rose to considerable commercial success and then petered out. But by the standards of 2000, there's some neat language design there.
- Rust, which successfully applied a lot of lesser-appreciated academic ideas including affine types.
- Type systems retrofitted to many dynamic languages, including Typescript. Typescript is really rather remarkable, even though the types are "unsound", in the technical sense.
Some of these trends were already underway in 2000, when Rob Pike was writing. Admittedly, much of the most important implementation work happened in big tech companies, or occasionally startups. Not in academia. But that's largely a question of scale. Even as early as 1990, it was hard for a professor and a few graduate students to directly change the industry. But plenty of the items on the list above do have roots in academic research.
And Rob Pike did get to write Go. He deliberately ignored quite a bit of interesting research to focus on a specific aesthetic vision, which is fine. And a lot of good systems work has been done in Go.
Rob was simply wrong. All his thinking was formed by the explosion of systems research around UNIX in the '70s and '80s, which was extremely productive. But research definitely didn't end then. It just transitioned to another dimension.
> Admittedly, much of the most important implementation work happened in big tech companies, or occasionally startups. Not in academia
> - Rust, which successfully applied a lot of lesser-appreciated academic ideas including affine types.
There's some scepticism going round about a model of technological innovation where academics invent an idea and engineers implement it. While I accept that affine types were known to academics, I recall a precursor to them existed in the form of C++ unique pointers. I'm wondering whether the Rust creators reinvented affine types independently, instead of learning of them from the academic literature.
Also, maybe a better question is how beneficial the literature on affine types was to the Rust creators.
Graydon has forgotten more about PLT than most of us will ever know. (And several of the folks who came after Graydon have PhDs…)
Early Rust didn't have some of the things it has today, but we used to keep a list of papers that Rust either was inspired by or implemented the ideas from on the website, even if it was incomplete.
I also don’t believe that smart pointers in C++ count as affine? Move semantics in C++ work quite differently than in Rust. I could be wrong about that though, but given the whole copy by default thing, I’m not entirely sure that that counts. It might though.
Oh, and in the earliest days, literally Rust’s introduction inside of Mozilla, it was described as “ideas from the past, come to save the future.” Explicitly built on older PLT ideas that hadn’t made it mainstream yet. Rust may feel cutting edge for industry but it’s basically the opposite of avant garde theoretically speaking.
Yeah, thinking about it I would agree with you. I was thinking that it being copyable so easily would be disqualifying, but I wouldn’t say the same about Clone in Rust and it’s not really about explicitness, anyway. Thanks.
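For anyone following along, here's a minimal sketch of the distinction being discussed (just an illustration, not anything from Rust's or C++'s documentation): what makes Rust's moves affine is that using a value after it has been moved is a compile-time error, whereas a moved-from C++ unique_ptr is still a live (now null) object you can accidentally touch at runtime.

```rust
fn consume(s: String) {
    // Ownership of the String has moved into this function.
    println!("consumed: {s}");
}

fn main() {
    let s = String::from("hello");
    consume(s); // `s` is moved here; the binding may not be used again

    // Uncommenting the next line is rejected by the compiler with
    // error[E0382] (use of a moved value) -- this "use at most once"
    // rule is the affine part.
    // println!("{s}");

    // In C++, by contrast, `auto q = std::move(p);` leaves `p` as a
    // valid-but-null unique_ptr, so a later `*p` is a runtime bug
    // rather than a compile error.
}
```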
> There's some scepticism going round about a model of technological innovation where academics invent an idea and engineers implement it. While I accept that affine types were known to academics, I recall a precursor to them existed in the form of C++ unique pointers.
There feels like an implicit value judgment here, that the work of "academics" isn't needed because "engineers" figured it out themselves.
> I'm wondering whether the Rust creators reinvented affine types independently, instead of learning of them from the academic literature.
Why would this ever be a good thing? Is it "cool" somehow to be proud of not taking advantage of the hard-won knowledge of other people who actively hoped their work would get used?
To be a contrarian here and bring up a question: do these things really count? The roots of similar ideas already existed decades ago in old journals for all of these. Maybe the phrase "Systems Software Research is Irrelevant" could be interpreted as saying that all the big foundational ideas have already been found. The low-hanging research is done and now it's all iterating on details. Please tell me why this is incorrect?
You never heard of Self or the Alto, the JVM, BSD jails/Solaris zones/whatever the 360 had, etc? Mobile I'll give you, but the rest have lots of antecedents.
I have been working as a scientist in academic and industrial systems research for most of my life now and I have a different view on this. There are plenty of ideas that float around in academia for years or even decades before they get adopted by the industry. Examples have been mentioned here. People declaring some research area dead are usually proven wrong by time (as was Pike), because nobody can foresee which results will be relevant in the future. I tend to think of it as sort of scientific entrepreneurship, you need 100 startups to get one unicorn.
The point is: (academic) scientists have no interest in productization; they are merely interested in the next hot topic and the next paper, because that's their job. Productization is for product managers and engineers. Occasionally the lines are blurred, but I consider this separation of concerns a benefit, not a problem.
The problem is that there is a research bottleneck with inadequate development of existing ideas, and relatedly a coordination problem where there aren't enough "rich artists" who want to get rid of accidental complexity simply because it is the beautiful and correct thing to do. Stuff like Linux is funded by marginalist types who too often just chase after the current paper cut, and miss the forest for the trees.
At this point, we can't switch whole hog to a new OS very well. But as I argued in https://news.ycombinator.com/item?id=29697474 I think we should dump a bunch of funding into getting CloudABI or something like it (WASI with WASM, don't care what it's called) over the finish line in multiple OSes.
I argue this is a very "non-reformist reform"
- Easy to incrementally adopt e.g. for socket-activated services
- Nonetheless narrows the gap between Unix and cool stuff like Fuchsia/seL4 and WASM's future plans
- A narrower syscall API that isn't just the legacy lowest common denominator will allow multiple implementations of the same interface, which in turn will "unstick" the whole R&D life cycle (see the sketch at the end of this comment):
-- A narrow interface means research experiments can better run real programs
-- Research experiments can show better promise, especially to "practitioners"
-- Developing research projects into production systems is cheaper and less of a jump
This will restore the much-needed "fluidity" where there is a continuum between research prototypes and production systems, and multiple competing production systems.
----
The good news is that "library OS" type designs are finally kinda going mainstream on the research end of things. Going back to "competition", bucking the current winner-take-all trend (LLVM > GCC, Linux > other Unixes), will be prohibitively expensive unless we start reusing code.
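To make the "narrower syscall API" point concrete, here is a rough sketch of what incremental adoption can already look like (assumptions: a Rust toolchain with the wasm32-wasi target installed, the wasmtime runtime, a crate named demo, and a made-up file name). The program is ordinary std code; the capability discipline comes from the WASI layer underneath, which only exposes directories the host explicitly preopens.

```rust
// Build (assumes the wasm32-wasi target is available; newer toolchains
// may call it wasm32-wasip1):
//   cargo build --target wasm32-wasi
//
// Run under a WASI runtime, granting the current directory as the
// program's only filesystem capability (assumes wasmtime is installed):
//   wasmtime --dir . target/wasm32-wasi/debug/demo.wasm
use std::fs;

fn main() -> std::io::Result<()> {
    // This only succeeds if the host preopened a directory containing
    // the (hypothetical) file; there is no ambient filesystem access.
    let contents = fs::read_to_string("notes.txt")?;
    println!("read {} bytes", contents.len());
    Ok(())
}
```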
He left off the word Operating, and everyone assumed he meant something else... there really are no new operating systems these days; they're all clones of Unix in some fashion.
Containers and virtual machines are stopgap measures we've all adopted because nothing new has come along, and the Unix model of security isn't up to the task.
We need to be able to download code from the internet, and run it against X, where X is a folder, or a photo, or a URL, and know that the Operating System will only let that code access the things we gave it, and nothing else.
We don't have capability-based Operating Systems; all we have that's even close is the ham-fisted "capability" flags that allow an "app" on a phone to access our location, or not.
Analogy time: If I'm making a cash purchase, and I owe $14.21, I can pull out exact change, or hand over $20, knowing full well that the other party isn't going to magically be able to sell my house, or otherwise cost me more than that $20 capability I chose to exchange with the hope of getting $5.79 in change.
Linux/Unix has no easy way to say... here's a file, run this code with this file as input. The security model of Linux isn't suitable for any date after the Eternal September of 1993.
The only thing close is to put the code in a virtual machine, or container, to try to limit the scope of the damage a piece of code can do. It's massive overkill, and does the wrong job anyway.
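A rough sketch of the "here's a file, run this code with this file as input" idea, in ordinary Rust rather than anything OS-enforced (the function and file names are invented for illustration): code handed an already-open handle can only use that handle, while code handed a path runs with the process's full ambient authority, which is exactly the gap being complained about.

```rust
use std::fs::File;
use std::io::{self, Read};

// Capability style: the caller opens exactly one file and hands over
// the handle. By construction this function can read that file and
// nothing else -- though real enforcement would have to come from the
// OS or runtime, since ambient APIs are still reachable.
fn word_count(mut input: File) -> io::Result<usize> {
    let mut text = String::new();
    input.read_to_string(&mut text)?;
    Ok(text.split_whitespace().count())
}

// Ambient-authority style: a path argument means this code runs with
// all the filesystem permissions of the process and could just as
// easily open anything else the user can read.
fn word_count_at(path: &str) -> io::Result<usize> {
    word_count(File::open(path)?)
}

fn main() -> io::Result<()> {
    let handle = File::open("report.txt")?; // hypothetical input file
    println!("{} words (via handle)", word_count(handle)?);
    println!("{} words (via path)", word_count_at("report.txt")?);
    Ok(())
}
```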
I think he was right, perhaps someone here can convince me otherwise.
He seems to think the solution is that "the OS" should expand in scope to also run and manage all the myriad little special purpose processors on a modern SoC.
I think that I understand the argument, but things changed with the advent of microservices. Suddenly everything turned into a distributed system, and it is hard to cope without some understanding of the subject matter. This lecture series by Martin Kleppmann has helped me quite a bit here: https://www.youtube.com/watch?v=UEAMfLPZZhE&list=PLeKd45zvjc... Also see his book 'Designing Data-Intensive Applications', which is more focused on discussing practical systems.
> but things changed with the advent of microservices.
Microservices is just a marketing buzzword invented by container orchestration start-ups. Partitioning of large applications across multiple inter-connected processes has been around for decades. Your computer is packed with microservices.
On Linux, type `watch date` and enjoy two microservices running and interacting. On Windows, observe a dozen svchost.exe processes in the Task Manager.
I have the impression that the microservice architecture lowers the... 'activation energy', if you will, of building components that operate in a different paradigm. It's intensely pursuing the idea that a subservice is only its interfaces. And behind each of those curtains, it would be practical to construct something really weird.
Of course, it's cheaper and less intellectually taxing to just pop up a container, so that's what everyone does... so far. I mean, you need a TCP/IP stack somewhere.
Perhaps a microservice component embodied entirely on a GPU is an easily accessible example, or one on those fancy ML-specific processors in the bowels of Google.
I don't know about marketing, but things are much simpler when everything is running on the same machine. There are fewer points of failure, IPC latencies are significantly lower, and you are looking at the same system clock (unless you are looking at rdtsc, which is core-specific). It's not quite the same, in practical terms.
We got some of the distributed aspects when we got multiple processor cores and multiple CPUs, but it's still not exactly the same thing.
Since 2000, we have been building new stuff on top of the already established OS and system-level abstractions, without ever questioning their adequacy in these new configurations. Virtualization and containerization have certainly developed and enabled new modes of deployment and maintenance, but they still play with the decades-old x86 ISA and spin up a Unix-like OS with a decades-old process state model and everything-is-a-byte-stream-mindset. Fundamental computing research should start from the ground up without any concerns for portability and industrial viability. It should demonstrate new abstractions for new classes of problems, without carrying the burden of legacy.
You are right, of course. However, thin clients weren't such a big hit, and a workstation did cost half a fortune. All this was a bit remote, to me at least.
Naturally they were only a reality on university campuses and in big companies; my first thin clients were X Windows IBM terminals and green-phosphor text terminals used to connect to the DG/UX systems used on campus.
System software research sounds like it could benefit from focusing more on the systems than the software. No one is going to be building the next Linux. But they could build the ecosystem, architecture, and incentives for the next Linux to organically form. It's no longer just an engineering problem but an organizational and business one.
> There has been much talk about component architectures but only one true success: Unix pipes. It should be possible to build interactive and distributed applications from piece parts.
Wouldn't Microsoft's COM be another success in that area? It was several years old and in widespread use in Windows at the time the article was written.
Also, while COM is used by Windows developers, I argue the vast majority of non-developer Windows users are unaware of COM. The closest they've come to experiencing end-user component-based software is OLE (the technology that enables embedded documents, e.g. an Excel spreadsheet inside a Word document) and ActiveX (before it became infamous for its security vulnerabilities).
However, even though COM (and alternatively the .NET Common Language Infrastructure) provides the infrastructure to create a marketplace of components that can be mixed and matched just like Unix tools can be strung together, the Windows ecosystem is still dominated by large applications. I believe this has much more to do with the economics of the software industry (companies want to sell integrated solutions, not components) than with any technological limitation. I believe the same factor caused Apple to cancel OpenDoc: my opinion is that Apple in 1997 needed the support of large software vendors like Adobe and Microsoft in order for the Mac to stay alive, and OpenDoc's promise of fostering a marketplace of components was disruptive to vendors of large software applications, particularly the same vendors Apple wanted to continue making Mac applications.
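On the pipes half of the comparison, here's a small sketch of the "piece parts" composition the quoted slide credits as the one real success, done programmatically rather than in a shell (assuming a Unix-like system with ls and wc on the PATH): two unrelated programs composed through nothing but a byte stream.

```rust
use std::process::{Command, Stdio};

// Programmatic equivalent of the shell pipeline `ls | wc -l`: the two
// components know nothing about each other and are glued together by
// an untyped byte stream, which is both the strength and the limit of
// pipes as a component architecture.
fn main() -> std::io::Result<()> {
    let ls = Command::new("ls")
        .stdout(Stdio::piped())
        .spawn()?;

    let wc = Command::new("wc")
        .arg("-l")
        .stdin(Stdio::from(ls.stdout.expect("ls stdout was piped")))
        .output()?;

    print!("{}", String::from_utf8_lossy(&wc.stdout));
    Ok(())
}
```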
Maybe this paper was accurate in 2000, but does not capture the work of systems research since then.
Some examples:
- DHT protocols as used in bittorrent
- SMR protocols, used in all kinds of distributed systems, from blockchains to cloud computing systems
- Cloud computing systems, such as Spark, Mesos, MapReduce
This doesn't even cover work on networks, databases, systems, security, etc.
I think that person is now, in fact, getting paid to redo the firmware as desired. But while commonplace firmware might be uniquely disgusting, the bigger monoliths with even less competition are still the operating systems. Something still needs to be done there.
Idk, we have a bunch of VM optimizations and innovations, unikernels.
I also think PL innovations are going to usher in a new revolution at the bottom of the stack, just by making systems programming more accessible. I mean, Rust has already made inroads into existing OSes, and the increased accessibility will inevitably lead to more exciting experiments soon.
Yes, PL, my home field, I am proud to say, has been leading the way. I am certainly more optimistic than I would have been 20 years ago when Rob Pike wrote this, but the war is not yet won.
I fear some of the accidental complexity is more indicative of economic problems than CS ones. E.g. it feels like the 1000s of bad abstraction dependencies are mirroring https://homesignalblog.wordpress.com/2020/12/25/industrial-s... in the real world. (I am not against larger numbers of dependencies, just the bloat that comes from a gazillion square pegs in round holes, and Conway's law preventing anyone from trying to do anything about it.)
Indeed. A few more examples of fundamental systems advances since 2000: iOS, Android (intents), Chrome (multiprocess), Kubernetes, Cloudflare Durable Objects (collaboration), WASM, Rust, Typescript, Golang, Pytorch, JAX, Bitcoin, Ethereum.
Hmm, doesn't Rob Pike now work at Google, which has developed the Chrome browser (which, in addition to being a successor to the Netscape mentioned in the article, is basically a complete OS and runtime environment), as well as the literal ChromeOS, Android, Fuchsia, and Kubernetes, as well as Dart and Go (and also adopted Kotlin for Android)? Not to mention software systems for special-purpose hardware like TensorFlow, quantum, etc.
Which of those are research projects and which are commercial products/projects (or both perhaps?) is left as an exercise for the reader.
You are exactly correct, and that’s his only (highly belaboured) point - none of those exciting things are coming out of academia, it’s all commercial labs.
In addition to things categorized as “systems research”, there’s a bunch of new stuff like IoT devices which all need their own OS and update mechanisms.
Maybe the classical “write a new OS (because we didn’t understand the economics of hardware)” is no longer the case, but there’s still plenty of research going on: unikernels, containers…
It just looks different than what it used to. Also that paper is from 20 years ago.
> The web happened in the early 1990s and it surprised the computer science community as much as the commercial one. It then came to dominate much of the discussion, but not to much effect...
> Research has contributed little, despite a huge flow of papers on caches, proxies, server architectures, etc
- "The cloud": the systems became multi-computer networks. (Aurora, Mesos, Kubernetes, etc.)
- Web Browser as a platform (and associated technologies, such as Node)
The scale of the problem changed and shifted. Single-machine scheduling is not as interesting, since large-scale computing is about exactly that: scale (even at the cost of some efficiency). You could say that Amazon EC2 is a cloud operating system (thought of as all the separate subsystems that allow a mostly frictionless experience).
Mobile shifted the focus to security and energy consumption.
And the Web changed what a system platform is. The browser is a system abstraction, to the point where things like Chromebooks are viable. Code is now portable, run anywhere.
Systems software research is alive and well. Making new single-machine kernels is just no longer the priority. The community shifted to distributed systems (essentially viewing thousands of computers as one fault-tolerant OS). A lot of it did indeed come from academia. The next shift currently underway is in distributed machine learning (how can you build, train and serve trillion-trillion-parameter models; call it a distributed machine learning OS). A lot of it, again, is coming from academia. The field has grown up, and the old folks are lost in nostalgia because they no longer see new kernels the way they used to.
Rob Pike was drinking the poison chalice well before the rest of us. He wasn't wrong, here, back in 2000. But damn. We've seen net zero systematic gain in computing across the entire industry in TWO FUCKING DECADES. Not a single iota of gain or win. Not a single systems software change has made it into the world. Everything is dead dead dead dead. Applications claim more and more and more; network portals & endless walled gardens claim bigger and bigger chunks of mindshare. General computing has won not a single drop. It's not because Pike was right: it's because we let everything except systems software grow, because we failed utterly to even try to deliver "wide" value in computing. Systems research was already giving the fuck up, its funding was disappearing, & Pike was merely pounding another cynical nail in the coffin of trying for better at a time when we were desperately in need. Few concerted efforts were materializing then, & less & less have materialized along the way.
Perhaps the biggest modern counter-example we might point to is Rust. But in many ways, I feel like Rust justifies many of Pike's complaints:
> If systems research was relevant, we’d see new operating systems and new languages making inroads into the industry, the way we did in the ’70s and ’80s.
Now, I don't think this expectation is fair. But I do think that Rust is primarily different for developers. It hasn't really given rise to anything new, from what I can see. There are no new potentials here, no new possibilities imbued by Rust: it's simply the same, but less, deliberately less, deliberately more choosy: it's a deliberate subset. It will never help us explore further, make better user systems. Only ones which fuck up slightly less. Generally I think the paranoia/fear/sadness-over-software camp already had too strong a hold.
This paper hurt like fuck when I originally read it. It's still not wrong. But it's unnecessary overkill, to a world that was already suffering, and which, I believed then & believe now, must ultimately rise like a phoenix & rediscover itself, re-find purpose, & make itself real. Computing has gotten only ever more fake & bullshit since this decrial, and 99% of what Pike pissed on is exactly what is necessary for computing to regain even the most faint sense of vitality & aliveness. We've been in the deadzone for decades now, decades since this was written. Pike was not wrong when he warned about what doldrums we were in, but it's not systems software that's at fault. It's failure to make real, failure to believe, failure to live in a better world, failure to build with, as opposed to build alternatives-to, that has been dooming us. This call for anti-vision is 100% the black miasma we have had to live inside of for the past two decades; it has perfectly described the nihilism that hope has been trapped inside of. It's fucking time to open the goddamned Pandora's box again, and let hope try to make its way out.
Rather than measure by how many operating systems are being created, I'd prefer measuring by more forward-thinking metrics. The results are still not good. But measure how many cross-website information systems emerge. Count systems like Tim Berners-Lee's Solid or ActivityPub: they are where the puck is. The truth is, operating systems are irrelevant, because they have never succeeded beyond a single computer, and no one cares about a single computer anymore. It's no longer where the puck is. Systems research needs to be connected, communicative today. It needs to be online, and preferably, if at all possible, offline too. The new frontiers of systems research are tech like CRDTs and Git. They're tools like Web Annotations that give us read-annotate-write capabilities to the general computing medium/fabric of the day, like the web. To a lesser degree it's also systems like FreeDesktop & DBus, which help applications & daemons expose themselves to user scripting, and tools like systemd, which is an intensely automatable & advanced new master control program for the OS. I think we're still missing layers to really surface the malleability of this digital matter to the user, we're still not making a real difference, & we need to some day emerge from behind the digital curtain & become real to the world, to make systems research relevant. But systems research is happening, it is important, it has changed everything. Just... alas... very very very little for the user. That still needs to be changed. But new OSes will probably not make the difference. We already have great mediums, great systems; the frontier merely needs research to be bridged back into the mundane everyday experience of the "user".
One of the quagmires with systems research is that it makes “stuff” (performance or usability) easier for the user. Research is supposed to be “scientific”, so you need ample evidence of those claims. This is difficult to obtain for very weird systems because usability is in human-computer interaction, which is hard to measure. If it’s so weird to not be portable and you have not measured humans, it must be impractical. That’s the line of thought that I encounter, at least.
The much more publishable paper is on fixing the existing interface and changing the implementation. Look at a paper like TensorFlow; the value-add for users is huge, but at the end of the day all the results are comparisons to highly tuned assembly programming. There is some notion that each of the ~millions of ML practitioners that use TF would only use it if it had very good performance. The reality is that most practitioners just want a working implementation. In Pike's slides, most users are "grandma", which, in this case, severely distorts the relevant metrics.
Also, CS tends to dislike revisiting problems and is seasonal in its research topics (see the rise and fall of AI and Big Data frameworks). I'd guess most of this "hot topics" agenda is motivated by possible industrial use. The industrial utility of very new things is near zero (basically what the slides say). You basically see research doing an "I can do that, too" whenever something interesting hits the market.
I think an interesting thought experiment is to take something like databases or distributed systems and simulate inventing them today. It's unclear how you'd argue for such big ideas. After all, you can write database queries by hand and buy bigger computers.
Systems research has become like orthodox economics, doing various marginal empirics but missing the bigger, more qualitative macro picture.
Maybe the megacorp that is going to hire you wants to see some benchmarks improved, but frankly, that's not what the world needs, or even what would best benefit the megacorp?
Economics at least has its uncouth grandfather in Political Economy. Computer systems needs to discover its equivalent, something that is not afraid to go full qualitative and tackle the daemon of accidental complexity head on. There is still great work to be done auditing "supply chains" in dependencies and bootstraps, characterizing bad abstractions and proposing better ones, etc. etc. Per what I wrote in the top-level comment about capabilities, I think this work can be "practical" in offering realistic incremental improvements, and not just more baleful essays like this one.
It's not how academia is structured, but backing up criticism with a prototype of the alternative IMO completely solves the credibility problem.
From the vantage of the social sciences, where one can only dream of running prototype towns and cities, this is a gift and systems researchers should consider themselves lucky.
What if there are many alternatives? Each implementation would be a challenge to optimize fairly. You’d be forced to have some open competition to make it convincing.
I'm confused about what you mean. The prototypes need not be performant; they just must be correct and not outrageously naive, to demonstrate that the better abstractions/layering are working.
The questions are: is the code beautiful and does it work?
Not really. The closest I know of is the PL papers, but even there I would say there is too much new stuff without the criticism. But that is more the general development bottleneck / research traffic jam. With few of those able to make it to the "real world", there is less experience to ground such criticism in.
Notably, just about zero of what you've mentioned is apparent or directly empowering to users. They have better services at their disposal as a result of these systems developments, but almost no users have more personal control, are better aware, & are more capable of self-reliantly doing computing than they had been. In my view we've only made users dependent & less capable; we've given up all ideas of systems to the cult of the application. Many many black boxes, all unalike.
The blockchain crazyshit is perhaps regardable somewhat as an exception, but it's stupid fucking crazy expensive & polluting to do anything, and you basically need to be 20x smarter than the average programmer to not footgun yourself. In principle I want to try to allow it as pro-user, innovative & systems-oriented, but it's such a totalized system of constraint & restriction, such a low low low resource world compared to even having a Pentium 1 & a DSL connection, that I don't see it as meaningfully enabling.
As a counterexample to your premise, I'd cite projects like Node-RED or Yahoo Pipes. These are little micro-OS-like systems, environments for cobbling together computing. They're higher level than what we might often think of, but they are easily in domain, to me at least. And they're definitely about empowering!
It's a fault that more systems research isn't about finding more user-facing abstractions. Naked Objects environments. Oberon. Good abstractions & systems don't just have to be for developers.
Very chicken & egg to me! I believe strongly in systems software research. Not just tuning systems & hunting for more throughput or other gains, but making malleable base abstractions & exposing them. Making new means to operate.
In this way, I agree with Pike: systems research often consigns itself to irrelevance. It should be bolder & more visible & more real.
I'll confess I think you're closer to what most people think of. But I also think there's a lack of words, ideas & concepts for trying to better ground computing, and I think that work falls into the systems research bucket. And it's not active enough. We're kind of stuck. We are seeing some good systems work, but mainstream computing beached itself on the shore 20 years ago & it's this discipline that needs to be the one helping shake us loose & getting us into more novel, useful paradigms of computing. The projects I listed (Yahoo Pipes, Node-RED, Oberon, Naked Objects): we need more big-scale tries & there aren't enough.
It takes a person like this to design a language like Go, whose fundamental philosophy is that people cannot and do not want to learn anything new, ever. It takes that kind of cynicism to also view hardware with disdain and horror.
I’m glad I’m not the only person who views the design of go through that lens. It seems to me to be firmly rooted in the belief that developers are better off with weaker tools.
That is absolutely true about Go, but OP is misreading this article: the flagellation is ironic self-flagellation, meant as a "hey, snap out of it; what's happening to our field?".
What it means is that in some twisted stage of grief, Rob Pike went to acceptance, from "Systems Software Research is Irrelevant, is that what we want?", to "Programming Language Research is Irrelevant, who cares!".