
And with an "actually open CPU," how does one verify that the silicon in the final package is actually what's in the design and that no "closed and secret designs" have been added by the fabricator?


You need to X-ray the die and confirm the designs match, but even that doesn't guarantee that extra or malicious transistors haven't been inserted to leak data.

You never really know.

https://www.cl.cam.ac.uk/~sps32/ches2012-backdoor.pdf

https://www.usenix.org/legacy/event/leet08/tech/full_papers/...

http://en.wikipedia.org/wiki/Hardware_Trojan


One solution could be to obfuscate the design before it goes to fabrication and then check how long fabrication takes. This assumes that obfuscation is possible in logic design, and that inserting backdoors into an obfuscated design is non-trivial.


Well, it's somewhat a solved problem if your hardware is a uniform mesh of combinational logic and routing (e.g. an FPGA), though that's not exactly energy efficient.

But I think this is a weird diversion: the fact that I can't add (or pay to add) advanced security features to my CPUs, even at substantial (but sane) cost, is a clear reason the current closed ecosystem is inferior to an open source one.

This remains true even if an open CPU design were not cost-effectively auditable at the hardware level; that's an orthogonal issue (and even more so: closed CPU designs are inherently less auditable if hardware backdoors are your concern). An open design doesn't have to be better in every possible way to be better in some.


I think from the perspective of openness and verification a simple 8-bit CPU like a 6502[1] would be ideal - there's not a lot that can hide in 3500 transistors.

[1] http://www.visual6502.org/
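The 3500-transistor point can be made concrete: at that scale you can exhaustively diff a transistor-level netlist recovered from die photos against the reference design (visual6502 publishes netlists in roughly this form). A minimal sketch, with invented example data and a hypothetical (gate, channel1, channel2) tuple format:

```python
# Hypothetical sketch: diff two transistor-level netlists to flag anything
# the fabricator added or removed. Node numbers here are made up.

def diff_netlists(reference, recovered):
    """Return (inserted, missing) transistor sets.

    Each netlist is a set of (gate_node, channel1, channel2) tuples.
    Channel terminals are interchangeable, so we canonicalize their order
    before comparing.
    """
    def canon(t):
        gate, c1, c2 = t
        return (gate, *sorted((c1, c2)))

    ref = {canon(t) for t in reference}
    rec = {canon(t) for t in recovered}
    return rec - ref, ref - rec

reference = {(1, 2, 3), (4, 5, 6)}
# Same design, but with channel order flipped on one device and
# one extra transistor inserted:
recovered = {(1, 3, 2), (4, 5, 6), (9, 7, 8)}

inserted, missing = diff_netlists(reference, recovered)
print(inserted)  # the suspicious extra transistor
print(missing)   # anything removed from the reference design
```

With only a few thousand devices, this comparison is trivial; the hard part remains extracting an accurate netlist from the physical die in the first place.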



