I hear a lot from Linux users who found the GTK 2 era on X11 pretty close to perfect. I know I ran Ubuntu back then, and after boot it used far less than 1 GB. The desktop experience was perhaps even slightly more polished than what we have today. Not much has fundamentally changed except the bloat and a regression in UX once they started chasing fads.
I suppose the biggest change in RAM usage is Electron and the bloated world of text editors and other simple apps written in it.
Just stick XFCE on a modern, minimal-ish distribution (meaning not Ubuntu, mainly) and you'll have this with modern compatibility. Debian and Fedora are both good options. If you want something more minimal as your XFCE base, there are other options too.
XFCE is saddled with its GTK requirement, and GTK gets worse with every version. Even though XFCE is still on GTK3, that's a big downgrade from GTK2 because it forces you to run Wayland if you don't want your GUI frame rate arbitrarily capped at 60 fps.
For people wanting the old-fashioned fast and simple GUI experience, I recommend LXQt.
It makes it easier to treat the computer as part of your own body, allowing operation without conscious thought, as you would a pencil or similar hand tool.
Outside of gaming, not much. However, now that I'm used to a 144 Hz main monitor, there is no world where I would go back. You just feel the difference.
So basically, no use if you've never tasted 120+ Hz displays. And don't, because once you do, you won't go back.
I have a 165 Hz display that I use at 60 Hz. Running it at max speed while all I'm doing is writing code or browsing the web feels like a waste of electricity, and might even be bad for the display's longevity.
But for gaming, it really is hard to go back to 60.
Mine supports variable refresh rate, which means that for most desktop tasks (i.e., when nothing is moving) it runs at 48 Hz.
Incredibly, Linux has better support for it on the desktop than Windows: DWM runs full blast, while sway supports VRR on the desktop. Windows will only enable it for games (and only for games that support it). Disclaimer: a Wayland compositor is required.
It’s not enabled by default on e.g. sway because on some GPU and monitor combos, it can make the display flicker. But if you can, give it a try!
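For sway specifically, turning it on is one line in the config (a sketch; `adaptive_sync` is the relevant output option, and the output name will differ per setup):

```
# ~/.config/sway/config
output DP-1 adaptive_sync on
# or enable it on every output:
output * adaptive_sync on
```

If the panel flickers at low refresh rates, just set it back to `off`.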
Windows 11 idles at around 60 Hz in 120 Hz modes on my VRR ("G-SYNC Compatible") display when the "Dynamic refresh rate" option is enabled, and supports VRR for applications other than games (e.g., fullscreen 24 FPS video playback runs at 48 Hz* via VRR rather than mode switching, even with "Dynamic refresh rate" disabled).
* The minimum variable refresh rate my display (LG C4) supports is 40 Hz.
> What use is there in display frame rates above 60 fps?
On a CRT monitor the difference between running at 60 Hz and even a just slightly better 72 Hz was night and day. Unbearable flickering vs a much better experience. I remember having some little utility for Windows that would allow the display rate to be 75 (not 72 but 75). Under Linux I was writing modelines myself (those were the days!) to get the refresh rate and screen size (in pixels) I liked: I was running "weird" resolutions like 832x604 @ 75 Hz instead of 800x600 @ 60 Hz, just to gain a little more screen real estate and a better refresh rate.
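For anyone who never wrote one: a modeline spells out the pixel clock and sync timings by hand in XF86Config/xorg.conf. A sketch using what I believe is the standard VESA 800x600 @ 75 Hz timing (the custom 832x604-style modes were the same idea with hand-tuned numbers):

```
# xorg.conf / XF86Config
# Modeline "<name>" <pixel clock MHz> <hdisp hsyncstart hsyncend htotal>
#                                     <vdisp vsyncstart vsyncend vtotal> <flags>
Modeline "800x600" 49.5  800 816 896 1056  600 601 604 625  +hsync +vsync
```

Get a timing wrong and a CRT would whine at you, or worse; that was part of the fun.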
Now that monitors have moved to flat panels, I sure as heck have no idea whether 60 fps vs 120 fps or whatever changes anything for "desktop" usage. I don't think the problem CRTs had of the image fading too quickly at 60 Hz is still present. But I'm not sure about it.
120 FPS vs 60 FPS is definitely noticeable for desktop use. Scrolling and dragging are night and day, but even simple mouse cursor movement is noticeably smoother.
The whole Linux stack got bigger though - just look at what you need now to compile stuff: cmake, meson/ninja, mesa, llvm and so forth. gtk2 was great; GTK is now a GNOME-only toolkit, controlled by one main corporation. Systemd increased the bloat factor too - and it also gathers users' age data now (https://github.com/systemd/systemd/pull/40954).
I guess one of the few smaller things would be Wayland, but it has so few features that you have to wonder why it is even used.
Runtime-wise we use more garbage-collected languages now. Java and such are great and can be very high performance; the real cost, though, is memory. GC languages need much more memory for bookkeeping, but they also need much more memory to be performant. Realistically, a Java app needs 10x the memory of a similar C++ application to get good performance. That's because GC languages only perform well when most of their heap is unused.
As a side-note, that's how GC languages can perform so well in benchmarks. If you run benchmarks that generate huge amounts of garbage or consistently run the heap at 90%+ usage, that's when you'll see that orders of magnitude slowdown.
Oh also containers, lots more containerized applications on modern Linux desktops.
Programs that manually allocate and deallocate memory to store "huge amounts of garbage" can easily incur more memory management overhead than programs using garbage collection to do the same.
If a Java application requires an order of magnitude more memory than a similar C++ application, it's probably only superficially similar, and not only "because GC".
Well no, because in a language with manual memory management, if you allocate one object, destroy it, and then allocate another, you've used one object's worth of memory.
In Java, that's two objects, and one will be collected later.
What this means is that a C++ application running at 90% memory usage does about the same amount of work per allocation/deallocation as it would at 10% usage. The same IS NOT true for GC languages.
At 90% usage, each allocation and deallocation will be much more work, and will trigger collections and compactions.
It is absolutely true that GC languages can perform allocations more cheaply than manually managed languages. But this is only true at low heap usage. The closer you get to 100% heap usage, the less true it becomes. At 90% heap usage, you're going to be looking at an order of magnitude slowdown. And that's when you get those crazy statistics like 50% of program runtime being spent in GC collections.
So, GC languages really run best with more memory. Which is why both C# and Java pre-allocate much more memory than they need.
And, keep in mind I'm only referring to the allocation itself, not the allocation strategy. GC languages also have very poor allocation strategies, particularly Java where everything is boxed and separate.
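The deferred-reclamation point is easy to see even in CPython, whose cycle collector only reclaims unreachable reference cycles at a later collection pass (a minimal sketch using the stdlib `gc` module; not a Java/C# heap, but the same principle of garbage lingering until collection):

```python
import gc

class Node:
    def __init__(self):
        self.other = None

gc.collect()  # start from a clean slate

a, b = Node(), Node()
a.other, b.other = b, a  # reference cycle keeps both objects alive
del a, b                 # unreachable now, but NOT freed yet

unreachable = gc.collect()  # reclamation only happens at collection time
assert unreachable >= 2     # at least the two Node objects were garbage
```

Between the `del` and the `collect()`, the dead objects still occupy heap, which is exactly the headroom GC runtimes need to perform well.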
I’ve been using CMake since the early 2000s, when I was hacking on the VTK/ITK toolkits. Compiling a C++ program hasn’t gotten any better or worse. FWIW, I always used the curses interface for it.
It's not FUD; the full name, email and the rest were not mandated by Meta and other corporations, which are lobbying for this so they can earn money from users' preferences. Take your spyware somewhere else.
If Meta's business model is not lucrative, that's not my problem.
>which are lobbying for it so they can earn money with users' preferences
Given it's a field where you can put absolutely anything in (and probably randomize, if you want), how is this different than the situation today, where random sites ask you for your birthday (also unverified)? Moreover Meta already has your birthday. It's already mandated for account creation, so claims of "so they can earn money with users' preferences" don't make any sense.
This is against HN guidelines: " Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
>The contents of the field will be protected from modification except by users with root privileges.
Yep. I still develop Gtk2 applications today. It's a very snappy and low resource usage toolkit aimed entirely at desktop computers. None of that "mobile" convergence. I suppose you could put Gtk2 applications into containers of some sort but since Gtk2 has (luckily) been left alone by GNOME for decades it's a stable target (like NES or N64 is a stable target) and there's no need for it.
Most of the bloat these days is from containers, and Canonical's approach to Ubuntu since ~2014 has been very heavy on using upstream containers so they don't have to actually support their software ecosystem themselves. This has led to severe bloat and bad graphical theming and file system access.
Sure, one is connmapperl. It is a server/client application where the server is a GUI map of the world showing all the established IP connections collected from the various clients, located via (local) GeoIP lookup. It stores everything in an SQLite DB and has a bunch of config/filtering options: http://superkuh.com/connmapperl.html Technically a fork of the X11 connmap I made because I couldn't get the original to run on my old X11, but with many, many more features (like offline whois from raw RIR dumps, the DB, the Hilbert mapping, replays of connection history, etc).
Another one is memgaze, a program to visualize Linux process virtual memory spaces as RGB images and explore them using various binary visualization and sonification tools. I.e., you can click a Hilbert map of all processes, then in the new window click around inside the image of that particular process' virtual RAM and listen to it interpreted as an 8-bit WAV, or find and extract images, for example. Or search for strings, run digraph analysis, etc. http://superkuh.com/memgaze-page.html
Or feeed.pl, my very quick and low-resource-usage feed reader for 1000+ feeds, written in Perl/Gtk2 and text only (no HTML, no images, etc). It is really handy for loading .opml files and finding and fixing broken feeds using the heuristics I hard-coded in to find feed URLs. http://superkuh.com/blog/2025-09-13-2.html
These are a few I made in 2025-26 that other people might care to use. But I have a lot more that just scratch my own particular itches. Like a Perl/Gtk2 version of MS Paint that interprets arbitrary loaded and painted images as sound, or the things I use to monitor my ISP uptime/speed, etc.
That's rose-tinted. I remember specifically switching to KDE because GTK apps of the day segfaulted all the time. Unfortunately KDE then screwed things up massively with Plasma (remember the universally loathed kidney bean?) and it's really only recovered recently.
And to say the desktop experience was more polished than what we have now is laughable. I remember that you couldn't have more than one application playing sound at the same time. At one point you had to manually configure XFree86 to make it aware that your mouse had a middle button. And good luck getting anything vaguely awkward like WiFi or suspend-to-RAM working.
The Linux desktop is in a vastly better position now, even taking the Wayland mess into account.
Static/Dynamic analysis tools find vulnerabilities all the time. Almost all projects of a certain size have a large backlog of known issues from these boring scanners. The issue is sorting through them all and triaging them. There's too many issues to fix and figuring out which are exploitable and actually damaging, given mitigations, is time consuming.
Am I impressed Claude found an old bug? Sort of... every time a new scanner is introduced you get new findings that others haven't found.
Static analyzers find large numbers of hypothetical bugs, of which only a small subset are actionable, and the work to resolve which are actionable and which are e.g. "a memcpy into an 8 byte buffer whose input was previously clamped to 8 bytes or less" is so high that analyzers have little impact at scale. I don't know off the top of my head many vulnerability researchers who take pure static analysis tools seriously.
Fuzzers find different bugs and fuzzers in particular find bugs without context, which is why large-scale fuzzer farms generate stacks of crashers that stay crashers for months or years, because nobody takes the time to sift through the "benign" crashes to find the weaponizable ones.
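A toy harness shows why fuzzer output arrives without context: the loop only learns that an input crashed a (hypothetical) parser, not whether the crash matters. All names here are made up for illustration:

```python
import random

def parse_header(data: bytes) -> int:
    # hypothetical buggy target: length-prefixed payload, no bounds check
    if not data:
        raise ValueError("empty input")
    n = data[0]
    return data[1:1 + n][n - 1]  # IndexError when payload is shorter than n

random.seed(0)
crashes = []
for _ in range(1000):
    buf = bytes(random.randrange(256) for _ in range(random.randrange(8)))
    try:
        parse_header(buf)
    except IndexError:
        crashes.append(buf)  # a "crasher" with zero triage information attached
    except ValueError:
        pass  # rejected input, not a crash

# the pile of crashers is all the fuzzer hands you; sorting benign from
# weaponizable is the expensive human step described above
```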
LLM agents function differently than either method. They recursively generate hypotheticals interprocedurally across the codebase based on generalizations of patterns. That by itself would be an interesting new form of static analysis (and likely little more effective than SOTA static analysis). But agents can then take confirmatory steps on those surfaced hypotheticals, generate confidence, and then place those findings in context (for instance, generating input paths through the code that reach the bug, and spelling out what attack primitives the bug conditions generate).
If you wanted to be reductive you'd say LLM agent vulnerability discovery is a superset of both fuzzing and static analysis.
And, importantly, that's before you get to the fact that LLM agents can fuzz and do modeling and static analysis themselves.
There are plenty of static analyzers that do attempt to walk code paths for reachability. Some even track tainted input. And yes, these are often good starting points for developing exploits. I've done this myself.
I’m curious about LLM agents, but the fact that they don’t “understand” is why I’m very skeptical of the hype. I find myself wasting just as much time with them as with a terrible “enterprise” SAST tool, if not more.
> and both of those are a result of the United States' unwillingness to fully fund something like Amtrak.
What kind of funding are we looking at? Is the issue that this is cost-prohibitive for reasons of scale that make this non-competitive for businesses themselves to fund as compared to elsewhere?
Amtrak was created to preserve the last vestiges of passenger rail when private businesses pulled out. It has conflicting missions so it's never going to be competitive in service.
Amtrak does not own its own rail network. It has priority over cargo trains de jure, but in practice cargo takes priority. Many areas have only one set of tracks, and trains can only pull over onto sidings where they exist. Class 1 railroads are capital intensive, so to be more profitable they don't spend any money they don't have to: on more sidings, more train yards, keeping trains short enough to fit onto those sidings, or more than one operator per train. Class 1 railroads are focused on cargo and making money, not helping Amtrak trains go first. The government doesn't care to enforce the law either. https://www.bls.gov/opub/btn/volume-13/tracking-productivity...
Amtrak operates routes that suffer from low demand instead of focusing on the New York to Washington, DC route. It's about counting US Senate votes as much as customer satisfaction or breaking even.
The Federal government heavily subsidized cars starting in the 1950s through the Interstate Highway System. Cars and airliners are considered critical passenger transportation infrastructure, trains are not.
The S-Line project is underway in NC and VA. It will rehabilitate an abandoned line (the former Seaboard Coast Line) to allow faster travel between Raleigh and Richmond. It won't be electrified but will allow trains to run at up to 110 mph/177 kph which is a big improvement over the current 60-70 mph (when the passenger train isn't being delayed by a freight train).
They are currently doing a couple of grade-separation bridge projects in north Raleigh and some minor curve straightening. Since the S-Line is not currently being used they can straighten many of the curves since there won't be any impact to existing operations.
The S-Line right of way is owned by CSX and they will be running freight on it. The budget wasn't there to acquire all of it by NCDOT and VA and dedicate it to passenger service.
This is a really funny comment. In setting the world record for Li's 39-round collision[1] (still unbroken, and one of our favorite papers), he also set some records in sha-224, reaching 40 rounds in that one. Of course, saying sha-224 is "87% of the way" to sha-256 is correct in a sense, and that's why his record is slightly larger in reduced-round full-schedule collisions on that metric, 40 rounds for sha-224 and only 39 in sha-256. At the same time, the fact that he reached only 39/40 rounds on those shows the difficulty of getting through the full 64 rounds, which is what our paper does with a slightly relaxed schedule adherence.
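For context on the sha-224/sha-256 relationship: both run the same 64-round compression function; sha-224 just starts from different initial values and truncates the output to 28 bytes, which a quick check with Python's hashlib illustrates:

```python
import hashlib

d224 = hashlib.sha224(b"abc").digest()
d256 = hashlib.sha256(b"abc").digest()

assert len(d224) == 28 and len(d256) == 32
# Different IVs mean sha-224 is NOT merely truncated sha-256:
assert d224 != d256[:28]
```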
Assuming I know what I want and am somewhat competent at describing it, I would guess ten times the final length should be plenty. If you are exploring different options, you can of course produce an unlimited number of videos. But that is not really what I was referring to; I was thinking more of how many attempts it takes the model to produce what you want given a good prompt. I have never used it and have no idea whether it nails it essentially every time or whether I should expect to run the same prompt ten times to get one good result.