
The classic example is the Lisp Machine. Hypothetically a purpose-built computer would have an advantage running Lisp, but Common Lisp was carefully designed so that it could attain high performance on an ordinary "32-bit" architecture like the 68k or x86 as well as SPARC, ARM and other RISC architectures. Java turned out the same way.

It's hard to beat a conventional CPU tuned up with superscalar, pipelines, caches, etc. -- particularly when these machines sell in such numbers that the designers can afford heroic engineering efforts that you can't.



It's also hard to beat conventional CPUs when transistor density doubles every two years. By the time your small team has crafted their purpose-built CPU, the big players will have released a general-purpose CPU on the next generation of manufacturing processes.

I expect that once our manufacturing abilities flatline, we will start seeing more interest in specialized CPUs again, as it will be possible to spend a decade designing your Lisp machine.


I never thought I could make a custom CPU until I came across

https://en.wikipedia.org/wiki/Transport_triggered_architectu...

But when I saw that I thought, yeah, I could implement something like that on an FPGA. It's not so much a language-specific CPU as an application-specific CPU. If you were building something that might be FPGA + CPU or FPGA with a soft core it might be your soft core, particularly if you had the right kind of tooling. (Wouldn't it be great to have a superoptimizing 'compiler' that can codesign the CPU together with the program?)
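The core TTA idea is that the program doesn't name operations at all, only data moves: writing to a function unit's trigger port is what fires the operation. A minimal sketch in Python, with an invented instruction format and unit names purely for illustration:

```python
# Toy transport-triggered "CPU": the program is just a list of moves
# (source -> destination). Writing the adder's trigger port fires the add.
# Everything here (ports, program encoding) is a made-up simplification.

class Adder:
    def __init__(self):
        self.operand = 0   # plain input port
        self.result = 0    # output port

    def trigger(self, value):
        # A move into the trigger port starts the operation.
        self.result = self.operand + value

class Registers:
    def __init__(self):
        self.r = {}

def run(moves, regs, alu):
    for src, dst in moves:
        # A source is either an immediate or a callable reading a port.
        value = src(regs, alu) if callable(src) else src
        dst(regs, alu, value)

regs, alu = Registers(), Adder()

# Compute r2 = 2 + 3 purely as three data transports:
program = [
    (2, lambda R, A, v: setattr(A, "operand", v)),   # 2 -> add.operand
    (3, lambda R, A, v: A.trigger(v)),               # 3 -> add.trigger (fires)
    (lambda R, A: A.result,
     lambda R, A, v: R.r.__setitem__("r2", v)),      # add.result -> r2
]
run(program, regs, alu)
print(regs.r["r2"])  # -> 5
```

Because the "instruction set" is nothing but moves between ports, the hardware side reduces to a bus and some function units, which is what makes it feel tractable on an FPGA.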

It has its disadvantages, particularly that the whole thing will lock up if a fetch is late. For workloads where memory access is predictable I'd imagine you could have a custom memory controller, but my picture of how that works is fuzzy. For unpredictable memory access, though, you can't beat the mainstream CPU -- my biz dev guy and I had a lot of talks with a silicon designer who had some patents for a new kind of 'vector' processor, and he schooled us on how many ASIC and FPGA ideas that sound great on paper can't really fly because of the memory wall.


There's also a matter of scale. By the time you've made your first million custom CPUs, the next vendor over has made a billion generic CPUs, sold them, and then turned the money back into R&D to make even better ones.


John McCarthy had amazing prescience given that Lisp was created in 1958 and the RISC revolution wasn't until at least the mid-1970s.

There's certainly some dynamic language support in CPUs: the indirect branch predictors and target predictors wouldn't be as large if there wasn't so much JavaScript and implementations that make those circuits work well.


Half of the Dragon Book is about parsing and the other half is about reconciling the lambda calculus (any programming language with recursive functions) with the von Neumann/Turing approach to computation. Lisp manages to skip the first.
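It can skip it because s-expressions are already the parse tree. A complete (if minimal) reader fits in a few lines; this is a sketch, not a production reader (no strings, quote syntax, or error recovery):

```python
# Reading Lisp: split on parentheses, then recurse. The result of
# reading *is* the abstract syntax tree, as nested Python lists.

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(read(tokens))
        tokens.pop(0)  # drop the closing ")"
        return form
    try:
        return int(tok)
    except ValueError:
        return tok  # anything non-numeric is a symbol

print(read(tokenize("(+ 1 (* 2 3))")))  # ['+', 1, ['*', 2, 3]]
```

Compare that with the LR tables and grammar machinery the Dragon Book needs for a conventional infix language.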



