danielfalbo's comments


For a perhaps easier-to-read intro to the topic, see https://ai-2027.com/


Or read your favorite sci-fi novel, or watch Terminator. This is pure BS by a charlatan.


> There are certain tasks, like improving a given program for speed, for instance, where in theory the model can continue to make progress with a very clear reward signal for a very long time.

This makes me think: I wonder if Goodhart's law[1] may apply here. I wonder if, for instance, optimizing for speed may produce code that is faster but harder to understand and extend. Should we care or would it be ok for AI to produce code that passes all tests and is faster? Would the AI become good at creating explanations for humans as a side effect?

And if Goodhart's law doesn't apply, why not? Is it because we're only doing RLVR fine-tuning on the last layers of the network, so the generality of the pre-training isn't lost? And if that's the case, could it be a limitation that keeps the model from being creative enough to come up with a move 37?

[1] https://wikipedia.org/wiki/Goodhart's_law
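
To make "clear reward signal" concrete, here's a toy sketch of what I'm imagining (Python; the function names and the speedup scoring are made up for illustration). Note that nothing in it rewards readability, which is exactly where Goodhart's law would bite:

    import time

    def reward(candidate_fn, tests, baseline_secs):
        # All-or-nothing correctness gate: zero reward unless every test passes.
        if not all(candidate_fn(x) == expected for x, expected in tests):
            return 0.0
        # Speedup over a baseline implementation; bigger is better.
        start = time.perf_counter()
        for x, _ in tests:
            candidate_fn(x)
        elapsed = time.perf_counter() - start
        return baseline_secs / max(elapsed, 1e-9)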


> I wonder if, for instance, optimizing for speed may produce code that is faster but harder to understand and extend.

This is generally true for code optimised by humans, at least for the sort of mechanical, low-level optimisations that LLMs are likely to be good at, as opposed to more conceptual optimisations like using better algorithms. So I suspect the same will be true for LLM-optimised code too.
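
A toy illustration of the difference (my own made-up example, in Python):

    # Mechanical: same algorithm, micro-level tweaks (hoisting the method
    # lookup out of the loop). Slightly faster in CPython, intent blurred.
    def count_hits_mechanical(items, targets):
        contains = targets.__contains__
        n = 0
        for item in items:
            if contains(item):
                n += 1
        return n

    # Conceptual: a better data structure changes the complexity
    # (O(len(items) * len(targets)) becomes O(len(items))), and the
    # result is arguably *more* readable, not less.
    def count_hits_conceptual(items, targets):
        target_set = set(targets)
        return sum(1 for item in items if item in target_set)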


> I wonder if, for instance, optimizing for speed may produce code that is faster but harder to understand and extend.

Superoptimizers have been around since 1987: https://en.wikipedia.org/wiki/Superoptimization

They generate fast code that is not meant to be understood or extended.
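
For a taste of the style, here is a branch-free integer min of the kind superoptimizers discover (a hand-written Python rendition, not actual superoptimizer output):

    def min_branchless(a, b):
        # -(a < b) is -1 (all ones) when a < b, else 0, so the mask
        # selects a or b. Correct and branch-free, but opaque.
        return b ^ ((a ^ b) & -(a < b))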


But their output is (usually) executable code, and it isn't committed to a VCS, so the source code stays readable.

When people use LLMs to improve their code, they commit the LLM's output to Git to be used as source code.


...hmm, at some point we'll need to find a new place to draw the boundaries, won't we?

Until ~2022 there was a clear line between human-generated code and computer-generated code. The former was generally optimized for readability, and the latter was optimized for speed at all costs.

Now we have computer-generated code in the human layer and it's not obvious what it should be optimized for.


> it's not obvious what it should be optimized for

It should be optimized for readability by AI. If a human wants to know what a given bit of code does, they can just ask.


Ehh, I think if it ends up being a half-good architecture, you wind up with a difficult-to-understand kernel that never needs touching.


As soon as I noticed it was down, I came to Hacker News to post about it, but...


The footer animation of koi.ai is so cool.


Building my own static site generator using vanilla Python and SQLite for my personal blog and Notion-like second-brain https://github.com/danielfalbo/prev


Author here. I previously used Next.js for my blog and Notion for my collection of linked books/resources/notes, but I wasn't happy with the compilation time of Next.js for a simple blog and the slowness of Notion. So I built my own solution for both from scratch.

I use Python for the logic (zero dependencies) and SQLite for the data.

I still have to migrate the data, but the core is already online at https://danielfalbo.com

For instance, I have an SQLite table for resources and one for authors, linked together.

The indices for each are automatically generated at https://danielfalbo.com/resources and https://danielfalbo.com/authors

If you open, for example, an author's page, you'll find hyperlinks to all their resources in the first part of the document, and vice versa for the resources. Example: https://danielfalbo.com/resources/fabric-of-reality
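
Roughly, the linkage looks like this (a simplified sketch; the table and column names here are not the exact schema):

    import sqlite3

    con = sqlite3.connect("site.db")
    con.executescript("""
        CREATE TABLE IF NOT EXISTS authors (
            slug TEXT PRIMARY KEY,
            name TEXT NOT NULL
        );
        CREATE TABLE IF NOT EXISTS resources (
            slug        TEXT PRIMARY KEY,
            title       TEXT NOT NULL,
            author_slug TEXT REFERENCES authors(slug)
        );
    """)

    def author_resource_links(author_slug):
        # Hyperlinks to all of an author's resources, rendered at the top
        # of the author's page (and mirrored on each resource's page).
        rows = con.execute(
            "SELECT slug, title FROM resources WHERE author_slug = ?",
            (author_slug,),
        )
        return "\n".join(
            '<a href="/resources/{}">{}</a>'.format(slug, title)
            for slug, title in rows
        )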

The blog will live in the "notes" table, which behaves similarly at https://danielfalbo.com/notes (actually, I'm still deciding whether to split the notes and blog tables or keep them together, but infrastructure-wise it doesn't change anything).

A fun feature is the dynamic calculation of my age at the time any given blog post was written, as seen for example at https://danielfalbo.com/notes/essentials
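
The computation itself is tiny; roughly this (with a placeholder birthday, for illustration):

    from datetime import date

    BIRTHDAY = date(2000, 1, 1)  # placeholder value, not my real birthday

    def age_when(written: date) -> int:
        # Whole years old on the day the post was written.
        before_birthday = (written.month, written.day) < (BIRTHDAY.month, BIRTHDAY.day)
        return written.year - BIRTHDAY.year - int(before_birthday)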

There is also a lightweight live preview feature for nice offline HTML content editing.
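
It's standard library only, in keeping with the zero-dependencies rule: poll file mtimes, rebuild on change, and serve the output directory. A simplified sketch (the lambda stands in for the real rebuild step):

    import threading, time
    from functools import partial
    from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer
    from pathlib import Path

    def watch(src: Path, rebuild):
        # Poll mtimes; rebuild the site whenever anything under src changes.
        last = 0.0
        while True:
            mtimes = (p.stat().st_mtime for p in src.rglob("*") if p.is_file())
            newest = max(mtimes, default=0.0)
            if newest > last:
                last = newest
                rebuild()
            time.sleep(0.5)

    def serve(out_dir="out", port=8000):
        # Serve the generated HTML; refresh the browser to see edits.
        handler = partial(SimpleHTTPRequestHandler, directory=out_dir)
        ThreadingHTTPServer(("127.0.0.1", port), handler).serve_forever()

    if __name__ == "__main__":
        threading.Thread(
            target=watch, args=(Path("content"), lambda: None), daemon=True
        ).start()
        serve()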

I still have to finish migrating everything and might realize there are gaps along the way, but the goal is parity on the relevant features with https://old.danielfalbo.com/weblog and https://danielfalbo.notion.site/3cce7ff647e94470ba1342211337...


For comparison: ASML machines print chips at process nodes down to 2 nm.

https://www.asml.com/en/products/euv-lithography-systems/twi...


And the proteins and DNA that are replicated exactly, atom for atom, each have feature sizes resolved at fractions of a nanometer in three dimensions (and likely in time/dynamics too).


Not 13?


Applies to software too.


Update: https://web.archive.org/ is giving me a 503 Service Unavailable right now.



Thanks! This is about C and low-level code, but what about general-purpose scripting languages like Python or PHP?

