Hacker News | dlcarrier's comments

There are two things that cause this. First, Windows has a variable-size page file, whereas Linux uses fixed-size swap, so Windows can keep growing into free disk space instead of running out of swap. Second, the out-of-memory killer in Linux isn't very aggressive by default: the kernel prefers to over-commit memory rather than kill processes.

As far as I know, Linux still doesn't support a variable-sized swap file, but it is possible to change how aggressively it over-commits memory or kills processes to free memory.
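The relevant knobs are exposed as plain text files under /proc/sys/vm/, so you can inspect them from any language. A minimal sketch in Python (the meanings below are the standard kernel semantics; defaults can vary by distro):

```python
import os

# vm.overcommit_memory: 0 = heuristic (default), 1 = always overcommit, 2 = never
# vm.swappiness: how eagerly the kernel swaps pages out (higher = more eager)

def read_vm_setting(name: str) -> int:
    """Read one vm.* sysctl value from procfs."""
    with open(f"/proc/sys/vm/{name}") as f:
        return int(f.read().split()[0])

if os.path.exists("/proc/sys/vm"):  # Linux only
    print("overcommit_memory =", read_vm_setting("overcommit_memory"))
    print("swappiness =", read_vm_setting("swappiness"))
```

Changing them needs root, e.g. `sudo sysctl vm.overcommit_memory=2`, which makes allocations fail outright rather than letting the OOM killer pick a victim later; put the setting in /etc/sysctl.d/ to make it survive reboots.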

As to why these differences exist, the reasons are more historical than technical. My best guess is that Windows figured it out sooner because it has always existed in an environment where multiple programs are memory hogs, whereas that wasn't common on Linux until the proliferation of web-based everything, which requires hundreds of megabytes to gigabytes of memory for each process running in a Chrome tab or Electron instance, even if it's something as simple as a news article or chat client.

Check out this series of blog posts for more information on Linux memory management: https://dev.to/fritshooglandyugabyte/series/16577


Not in practice: you'll spend most of your time trying to figure out what the API is supposed to do, why it isn't doing it, and what you can do about it. LLMs are surprisingly good at aggregating everyone else's work doing the same, though.

My background is in electrical engineering, but I've done my share of programming, from low-level assembly-language firmware to highly abstracted JavaScript user interfaces. Programming firmware was a very similar process to designing hardware, but the heavily abstracted programming done for software on a modern computer or phone was completely different, and LLMs can play a role in the latter.

With either firmware programming or hardware design, a project starts with a few days to weeks of figuring out what it's going to do, then finding all of the right components to make that happen, figuring out how to connect them together, and finally verifying that they do so.

With hardware design, electrical components need connections from inputs to outputs, whereas with firmware, libraries have calls and returns. What makes it work well is that there is a chain of documentation and testing ensuring that every component and library accepts the inputs it's designed to handle and produces the outputs it's designed to generate. There are a lot more constraints in hardware than in software, but to make up for it, an electrical component as simple as a single 1-cent transistor usually has several pages of documentation (e.g.: https://en.mot-mos.com/vancheerfile/files/pdf/MOT2302B2.pdf) ensuring that any data needed to make a design decision is readily available.

When writing firmware, a single page of documentation for each routine in a library is usually enough, with a description of each input and output, the data formats and ranges, possible error conditions, behavior when inputs are valid, and usually the resources needed to run it. When creating libraries, or finished firmware or hardware designs, documenting the design and testing that it matches the documentation ensures that the end user is able to select the right product and use it reliably.
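A hypothetical sketch of that per-routine standard, with the names, ranges, and timing invented purely for illustration:

```python
def read_adc_mv(channel: int, vref_mv: int = 3300) -> int:
    """Read one ADC channel and return the result in millivolts.

    Inputs:
        channel: ADC channel number, 0-7. Out-of-range values raise
            ValueError rather than reading an undefined register.
        vref_mv: reference voltage in millivolts, 1-5000.

    Output:
        Voltage in millivolts, 0..vref_mv, from a 12-bit conversion.

    Errors:
        ValueError on any out-of-range input.

    Resources:
        No heap allocation; one conversion blocks for roughly 12 ADC
        clock cycles on the assumed hardware.
    """
    if not 0 <= channel <= 7:
        raise ValueError("channel must be 0-7")
    if not 1 <= vref_mv <= 5000:
        raise ValueError("vref_mv must be 1-5000")
    raw = _adc_sample(channel)  # hypothetical hardware read, 0-4095
    return raw * vref_mv // 4095

def _adc_sample(channel: int) -> int:
    # Stand-in for a hardware register read; returns mid-scale here.
    return 2048
```

The point is that every question a caller could ask (ranges, errors, resources) is answered up front, the same way a transistor datasheet answers them.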

The documentation is what makes it possible to get a working design, and it not only speeds up development cycles, allowing for a one-and-done approach instead of constant revisiting as end users discover design discrepancies, it also speeds up automation. Chances are whatever processor you are currently using, as well as every processor involved in the network this comment travels through, was laid out using automated tools that incorporated a pool of designs using a few transistors for small logical tasks, a huge array of information about their timing and performance, and the human-made description of the needed functionality of the processor. There are multiple types of AI algorithms in use, from simple genetic algorithms to complex neural networks, but they're all closed-loop iterative systems that continuously modify a design to optimize it while keeping it within design parameters. LLMs can't produce anything nearly as useful, because their single-pass design makes it practically impossible to follow constraints.
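That closed-loop shape can be sketched as a toy genetic algorithm, nothing like a production EDA tool: mutate candidates, reject any that violate a constraint, keep the best, repeat.

```python
import random

random.seed(0)  # reproducible run for this sketch

def optimize(score, feasible, seed, generations=200, pop=30):
    """Closed-loop iterative optimization: mutate, enforce constraints, select."""
    population = [seed[:] for _ in range(pop)]
    for _ in range(generations):
        candidates = []
        for genome in population:
            child = genome[:]
            i = random.randrange(len(child))
            child[i] += random.choice((-1, 1))  # small mutation
            if feasible(child):                 # constraint checked every pass
                candidates.append(child)
        # Elitist selection: keep the lowest-scoring survivors.
        population = sorted(population + candidates, key=score)[:pop]
    return population[0]

# Example: minimize distance to a target vector while staying within bounds.
target = [7, 3, 9]
best = optimize(
    score=lambda g: sum((a - b) ** 2 for a, b in zip(g, target)),
    feasible=lambda g: all(0 <= x <= 10 for x in g),
    seed=[5, 5, 5],
)
```

The constraint check sits inside the loop, so no infeasible design ever enters the population, which is exactly what a single-pass generator cannot guarantee.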

The extremely abstracted programming done for software that runs on modern computers and phones is a whole different beast, because good documentation is extremely difficult to come by. Errors in documentation compound as more layers are added, and at some point when you are writing an API call for a library, in a framework, running in an interpreter, in a VM, in a web browser, in an operating system, the chance of good documentation is so low that no one even tries. This results in far more work figuring out how to get the tools to do what you want them to do, than figuring out what to ask them to do.

Most programmers I've worked with search for examples of other projects using the pertinent API call and use those to figure out what to do. One thing LLMs are really good at is parsing documentation, and they treat other projects as documentation too, so if you ask one to do something, it can easily figure out which call correlates best with doing that thing. It's not great at figuring out what to do, but it sure can figure out which call to use to do it.

Another factor that makes LLMs usable for programming in overly abstracted software environments is that it's effectively impossible for a human to make something that works reliably (see also: https://xkcd.com/2030/) so the high error rate of LLMs is still competitive.


It's very important that Firefox look like a cheap knock-off of whatever is popular. This ensures that there's no reason for new users to switch to it, whilst also alienating current users, bringing the Mozilla Foundation closer to their goal, which seems to be having zero happy users and zero happy employees, for some reason.

This is what offices exist for. In fields where efficiency matters, you end up with contractors, working remotely, getting paid by the project, and not being tied to one company. This is how lots of engineering and architecture works as well as many other fields.

In a work environment dominated by office social situations, language plays a key role in establishing social status, but there are other forms of posturing, with promotions generally based more on social status than job performance, reinforcing the social hierarchy. Technical buzzwords aren't even the only kind of jargon used in this manner; there's often an entire litany of language used outside of the job functions themselves. For example, human resources has its own language rules.

The author has come across this phenomenon and is attributing it to language alone, but there is far more involved here.


Night owls and early birds are kind of like introverts and extroverts. In general, introverts like being around other introverts and are okay with others being extroverts, but extroverts think something is inherently wrong with everyone who is an introvert, and that they should be fixed.

To the same effect, night owls generally like working on a night owl's schedule and are okay with early birds doing their thing, but early birds think something is inherently wrong with everyone who is a night owl, and that they should be fixed.

I swear the only reason daylight saving time has stuck around as long as it has is that early birds consider the major twice-yearly disruption a worthwhile compromise over letting night owls be.


There's been data since at least 2017 showing that getting up before sunrise is a bad idea: https://aacrjournals.org/cebp/article/26/8/1306/283057/Longi...

I've never understood why client-side execution is so heavy on modern web pages. Theoretically, the cost of executing it is marginal, but in practice, if I'm browsing from a battery-powered device, all that compute not only drains the battery, shortening how long I can use the device between charges, it also adds wear, so I'll have to replace the battery sooner. A lot of web pages are also downright slow, because my phone can only perform tens of billions of operations per second, which isn't enough to responsively arrange text and images (which are composited by dedicated hardware acceleration) through all of the client-side bloat on many modern web pages. If there were that much bloat on the server side, the web server would run out of resources under even moderate usage.

There's also a lot of client-side authentication, even for financial transactions, e.g. iOS and Android locally verifying a user's password, or worse yet a PIN or biometric, then sending approval to the server. Granted, authentication of any kind is optional for credit card transactions in the US, so all the rest is security theater, but if it did matter, this would be the worst way to do it.
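The difference can be sketched in a few lines (hypothetical names; real systems involve attested hardware and signed challenges rather than a bare boolean): the broken shape trusts a verdict computed on the client, while the sane shape verifies the secret server-side.

```python
import hashlib
import hmac
import os

# Anti-pattern: the server trusts a verdict computed on the client.
def server_approve_broken(client_says_authenticated: bool) -> bool:
    return client_says_authenticated  # an attacker just sends True

# Server-side verification: the client sends the credential (over TLS),
# and the server checks it against its own salted hash.
def make_record(password: str) -> tuple:
    """Create the (salt, digest) pair the server stores at enrollment."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def server_approve(password: str, salt: bytes, digest: bytes) -> bool:
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)  # constant-time compare
```

In the first shape the secret never reaches anything the server controls, so the "authentication" is only as strong as the client's honesty.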


It doesn't appear to be there yet, but keep an eye on Libby, where you can borrow it from your local library using a library card.

For example, here's another Werner Herzog book: https://share.libbyapp.com/title/9611895


It's not just LLMs; people do that too. LLMs are trained on real writing in papers published in science, academia, and technology journals, as well as web pages and social media posts.

I've met real hard-working people who have had to change writing styles, because their style is too heavily mimicked by LLMs, so it now takes extra effort to not be accused of cheating.

You can use tools to estimate the likelihood of text being written by an LLM, but as mentioned, they will have lots of false positives in fields that contributed significantly to the training data.

Your best option is to keep track of writers and journalists whose styles you've appreciated and follow them on whatever platforms they write for. Journalists often publish to more than one media outlet, and self-hosting platforms like Substack are growing quickly.


Even the slower ones are more like a Wii U, which is perfectly capable of everything a set-top box needs to do. Really, the hardware acceleration does all of the heavy lifting, and the processor only needs to render text and coordinate what to composite.

It's the bloat of the software layer on top that's slowing things down.

A 1st-generation Chromecast only has 512 MB of RAM and a dual-core 1.2 GHz processor, and it can handle video streaming just fine. Building an interface on top of that doesn't take a lot of resources, if the underlying layers aren't bloated. With current Android/iOS development, they very much are.

