
Thank you for this, and more specifically for adding the relevant timestamp, and more personally for Jaggar's answer to the subsequent question about complexity in hardware and software stacks, which, as a layperson, is something I often think about.

I'd love to see the result of a concerted ground-up creation of a hardware and software system that starts from scratch and leaves unresolvable legacy considerations to virtual machines or whatever.

Anyway, totally Off Topic, but thanks for what I'm assuming will be my watching that entire video!



> I'd love to see the result of a concerted ground-up creation of a hardware and software system that starts from scratch and leaves unresolvable legacy considerations to virtual machines or whatever.

This has been tried often (e.g., Thinking Machines and Multiflow, not to mention Itanium). I would love to see this too; I worked for a startup that tried to do this (VLIW, no specex) and taped out a working chip after 6 months. Maybe the startup didn't have the right sales team, but it never made any meaningful sales; the culture in the buyers' market is too conservative.

In the case of the company I worked for, I suspect part of the problem is that conservative buyers look for excuses to say no instead of excuses to say yes. One example: nobody would accept that you could move specex to the compiler, trotting out the old "sufficiently advanced compiler" joke, even though the company could show that LLVM was "advanced enough".


They solved the problem of unpredictable load times at compile time? Using just LLVM? How?


No, they used delayed conditionals instead of specex. Common tactic for dsp


*common tactic for dsp processors, generally unused for gp-cpu designs, because "it's hard to compile with optimization"
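To make the idea concrete, here's a minimal sketch (not from the thread, and not the startup's actual toolchain) of the kind of if-conversion a compiler can do instead of relying on branch speculation: compute both candidate results, which are already available, and pick one with a branchless conditional select, the sort of thing DSP and predicated ISAs do in hardware with cmov/csel-style instructions.

```c
#include <stdint.h>

/* Branchy version: a conventional GP CPU would predict and
 * speculatively execute past the comparison. */
int32_t max_branch(int32_t a, int32_t b) {
    if (a > b)
        return a;
    return b;
}

/* If-converted version: both operands exist up front, and the result
 * is chosen with a branchless select, so no branch prediction or
 * speculative execution is needed. */
int32_t max_select(int32_t a, int32_t b) {
    int32_t mask = -(int32_t)(a > b);   /* all-ones if a > b, else 0 */
    return (a & mask) | (b & ~mask);
}
```

Whether a "sufficiently advanced compiler" can do this profitably across whole programs (not just leaf expressions like this one) is exactly the point being argued above.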


> I'd love to see the result of a concerted ground-up creation of a hardware and software system that starts from scratch and leaves unresolvable legacy considerations to virtual machines or whatever.

https://en.wikipedia.org/wiki/Lisp_machine


Have you seen the Mill videos? Seems to have stalled out but it represents a lot of rethinking of assumptions.

https://millcomputing.com/docs/


I'm not a specialist in CPU design, but to me RISC-V seems like an incremental evolution, nothing really ground-breaking; compared to the alternatives, the main benefit is the openness/licensing/cost. But at the same time, consider that you can get a $5 Raspberry Pi, and embedded ARM chips cost pennies.

The entire effort of supporting it and building the ecosystem will be hard to justify economically at these cost figures.

Mill on the other hand has a lot of interesting ground-breaking ideas that can potentially offer a 10x performance/watt improvement.

Many people have recently been in awe of the roughly 40% performance improvements offered by ARM-based Apple silicon and AWS Graviton CPUs. Just imagine something offering 10x based on a Mill design; I'd love to see it built one day and available on the market.


40% on existing software, and even better performance when emulating old software. Mill is extremely alien, with no indication they're able to get any of the same gains. Regardless of their brilliance, they seem not to know how to run an engineering business, and their choices show it (our new CPU needs a completely new compiler vs adding a new backend, our new CPU needs a new OS, etc.). Their efforts are basically "boil the ocean" levels of engineering, without any sales to justify that they're on a path to success.


Oh, also, this one is maybe a midpoint between the two in interestingness.

https://www.forwardcom.info/



