Hacker News | sweetheart's comments

The recent developments of only the last 3 months have been staggering. I think you should challenge your beliefs on this a little bit. I don't say that as an AI fanboy (if those exist); it's just really, really noticeable how much progress has been made on more complex SWE work, especially if you just ask the LLM to implement some basic custom harness engineering.

>The recent developments of only the last 3 months have been staggering.

What developments have been "staggering"? Claude 4.6 vs 4.5? ChatGPT 5.2 vs 5? The Gemini update?

Only the hype has been staggering, along with BS non-stories like "AI agents conspire and invent their own religion".


I'll let you know in 12 months when we have been using it for long enough to have another abortion for me to clean up.

> I started programming when I was seven because a machine did exactly what I told it to

What a poetic ending. So beautiful! And true, in my experience.


This is my first time hearing of Oxide, but I had the same initial thought after reading this blog post then poking through their site. The degree of careful thought put into their policies and culture is really impressive, at least from the outside. Good for them, I hope they continue to be in a position to have that luxury (genuinely).

React's core is agnostic about the actual rendering backend; it's just all the fancy algorithms for diffing and updating the underlying tree. Using it to render a TUI is a very reasonable application of the technology.


The terminal UI is not a tree structure that you can diff. It's a 2D grid of character cells, where every manipulation is a stream of text. Refreshing or diffing that makes no sense.


When doing advanced terminal UI, you might at some point have to lay out content inside the terminal. At some point, you might need to update the content of those boxes because the state of the underlying app has changed. At that point, refreshing and diffing can make sense. For some, the way React organizes the logic to render and update a UI is nice and can be reused in other contexts.


How big does the UI state have to be before it makes sense to bring in React and its related accidental complexity? I'm ready to bet that no TUI has that big of a state.


IMO diffing might have made sense to do here, but that's not what they chose to do.

What's apparently happening is that React tells Ink to update (re-render) the UI "scene graph", and Ink then generates a new full-screen image of how the terminal should look, then passes this screen image to another library, log-update, to draw to the terminal. log-update draws these screen images by a flicker-inducing clear-then-redraw, which it has now fixed by using escape codes to have the terminal buffer and combine these clear-then-redraw commands, thereby hiding the clear.
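
If I understand the fix correctly, it sounds like the "synchronized output" private mode (DECSET 2026). A minimal Node sketch of that idea (the constants and the paintFrame helper are mine, not log-update's actual code):

    // Wrap the clear-then-redraw in "synchronized output" so the terminal
    // applies the whole sequence atomically and never shows the cleared
    // intermediate state. \x1b[?2026h begins the update, \x1b[?2026l ends it.
    const BEGIN_SYNC = "\x1b[?2026h";
    const END_SYNC = "\x1b[?2026l";
    const CLEAR_AND_HOME = "\x1b[2J\x1b[H"; // clear screen, cursor to top-left

    function paintFrame(frame: string): void {
      process.stdout.write(BEGIN_SYNC + CLEAR_AND_HOME + frame + END_SYNC);
    }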

An alternative solution, rather than using the flicker-inducing clear-then-redraw in the first place, would have been just to diff the terminal screen images and draw only the changes (which is something I did back in the day for fun, sending full-screen ASCII digital clock diffs over a slow 9600 baud serial link to a real terminal).
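
For anyone curious what that looks like, here's a rough sketch of a screen-image diff (the Screen type and diffFrames are hypothetical, not from Ink or log-update): walk both grids, and for each run of changed cells emit one cursor-move (ESC[row;colH, 1-based) followed by the new characters.

    type Screen = string[][]; // [row][col] grid of single-character cells

    // Emit only the cells that changed between two equally sized frames.
    function diffFrames(prev: Screen, next: Screen): string {
      let out = "";
      for (let row = 0; row < next.length; row++) {
        let col = 0;
        while (col < next[row].length) {
          if (prev[row][col] === next[row][col]) {
            col++;
            continue;
          }
          // Found a changed run: remember where it starts, collect the new chars...
          const start = col;
          let run = "";
          while (col < next[row].length && prev[row][col] !== next[row][col]) {
            run += next[row][col];
            col++;
          }
          // ...then position the cursor once (1-based) and overwrite the run.
          out += `\x1b[${row + 1};${start + 1}H` + run;
        }
      }
      return out;
    }

    // process.stdout.write(diffFrames(previousFrame, nextFrame));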


Any diff requires a Before and an After. Whatever was done to produce the After can be done to render the changes directly. No need for the additional compute of a diff.


Sure, you could just draw the full new screen image (albeit a bit inefficient if only one character changed), and no need for the flicker-inducing clear before draw either.

I'm not sure what the history of log-update has been or why it does the clear-before-draw. Another simple alternative to the pre-clear would have been just to clear to end of line (ESC[0K) after each partial line drawn.
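
Something like this, roughly (redrawInPlace is a made-up helper, not anything log-update exposes):

    // Rewrite every line in place; \x1b[0K erases from the cursor to the end
    // of the line, so leftovers from a previously longer line disappear
    // without ever blanking the whole screen. \x1b[0J at the end clears any
    // rows left over from a previously taller frame.
    function redrawInPlace(lines: string[]): void {
      let out = "\x1b[H"; // cursor to home, no clear
      for (const line of lines) {
        out += line + "\x1b[0K\r\n";
      }
      process.stdout.write(out + "\x1b[0J");
    }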


Only in the same way that the pixels displayed in a browser are not a tree structure that you can diff - the diffing happens at a higher level of abstraction than what's rendered.

Diffing and only updating the parts of the TUI which have changed does make sense if you consider that the alternative is to rewrite the entire screen every "frame". There are other ways to abstract this: e.g. a library like tqdm for Python may well get away with a significantly simpler abstraction than a tree for tracking what its progress bar widget will update next than Claude does, but it also provides a much simpler interface.

To me it seems more fair game to attack it for being written in JS than for using a particular "rendering" technique to minimise updates sent to the terminal.


Most UI libraries store state in a tree of components. And if you're creating a custom widget, they will give you a 2D context for the drawing operations. Using React makes sense in those cases because what you're diffing is state; the UI library then renders as usual, which is typically done via compositing.

The terminal does not have a render phase (or an update-state phase). You either refresh the whole screen (flickering) or control where to update manually (custom engine, which may flicker locally). But any updates are sequential (moving the cursor, then sending what is to be displayed), not all at once like 2D pixel rendering.

So most TUIs only update when there's an event to do so, or at a frequency much lower than 60fps. This is why top and htop have a setting for that, and why other TUI software offers a keybind to refresh and reset its rendering engine.


The "UI" is indeed represented in memory in tree-like structure for which positioning is calculated according to a flexbox-like layout algo. React then handles the diffing of this structure, and the terminal UI is updated according to only what has changed by manually overwriting sections of the buffer. The CLI library is called Ink and I forget the name of the flexbox layout algo implementation, but you can read about the internals if you look at the Ink repo.


What an amazing blog post. Such a treat to have read this. Thanks for sharing.


https://archive.ph/20251125055632/https://www.theverge.com/e...

I'm actually currently in the process of trying to shift from a "normal" SWE career into indie game development, and starting to navigate this a bit myself. As I become more invested in the indie game space, both as someone who wants to make a living within it and as someone who wants to support other indie devs more and more, I feel like what I care about most is when a game has a clear sense of the individual(s) behind the project. I don't think that this strong sense of identity is antithetical to generative AI use, but I definitely think it can become a crutch that hurts rather than helps.

I say all this, but at the same time can't imagine feeling compelled to do without Cursor for development. To me, there is a remarkable difference between AI being used for the software engineering vs. the art direction. But this is just personal preference, I think. Still, it's hard to know if that will mean I can't also use something like a "Gen-AI Free" product label, or where that line will fall. Does the smart fill tool in Photoshop count as Gen AI? How could it not?

In the end, I think there is (or there _can_ be) real value to knowing that the product you purchased was the result of a somewhat painstaking creative process.


I think the point remains, though, that making it harder to ensure a young child is sitting next to their guardian benefits _no one_. Having learned over the last year what flying with a 2-year-old is like, an increase in the number of toddlers who fly without sitting next to their parents is just going to be a nightmare for the kids, the parents, the other passengers, and the crew. No one should want this, in my opinion. Besides, the parents have the leverage in this situation, I think, in the form of feral toddlers hell-bent on maximizing chaos (and I mean that lovingly and empathetically, but still vaguely as a threat lol).


Such an incredible book! I read it like 8 years ago and think about it often enough that it was on my mind just yesterday :)


One can believe that all people are deserving of love and friendship regardless of who they are or what they've done, and simultaneously believe that replacing social interaction with AI is generally a net harm for any/everyone. No one is bad because they want social stimulation from an AI, but I think it reinforces damaging norms that will leave us all worse off.


> but even just the thought of becoming... irrelevant is depressing

In my opinion, there can exist no AI, person, tool, ultra-sentient omniscient being, etc. that would ever render you irrelevant. Your existence, experiences, and perception of reality are all literally irreplaceable, and (again, just my opinion) inherently meaningful. I don't think anyone's value comes from their ability to perform any particular feat to any particular degree of skill. I only say this because I had similar feelings of anxiety when considering the idea of becoming "irrelevant", and I've seen many others say similar things, but I think that fear is largely a product of misunderstanding what makes our lives meaningful.


Thank you, you really are a sweetheart. And correct. But it's not easy to combat the anxiety.

