Hacker News | srdjanr's comments

That's true for some jobs, but I'd be very surprised if anyone enjoys cleaning shit, for example

It can be enjoyable in the context of failure analysis: troubleshooting, finding root causes, documenting other people's fuckups, then tracing through the assignment logs to see who interacted with the server last.

Leaving aside the scene from Life of Brian, I have no issue cleaning shit. I've raised children, and they poop; I have livestock, and they shit. Kids will happily frisbee cow pats, and raking sheep shit out from under shearing sheds is a job I've done, as have many others. You end up with a couple of tonnes stacked high on a double-axle trailer, which is great for the garden.

For what it's worth, I don't mind a bit of higher dimensional data reduction when processing raw multi channel data, or geophysical world modelling (magnetic fields, gravity, radiometrics, etc).
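For a flavour of that kind of data reduction, here is a minimal sketch: projecting synthetic multi-channel readings onto a few principal components via SVD. The data, channel count, and component count are all made up for illustration.

```python
import numpy as np

# Hypothetical raw multi-channel survey data: 200 samples, 16 channels.
rng = np.random.default_rng(0)
n_samples, n_channels = 200, 16
X = rng.normal(size=(n_samples, n_channels))

# Centre each channel, then take the top-k components from the SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
scores = U[:, :k] * S[:k]                      # each sample reduced to k numbers
explained = (S[:k] ** 2).sum() / (S ** 2).sum()  # fraction of variance retained
```

On real geophysical channels (magnetics, gravity, radiometrics) the same projection step would follow whatever calibration and gridding the survey needs first.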


I'm heading to the Graeberian world of bullshit jobs, which ironically tends to head in the direction of meaning.

I'm pro "everyone cleans their own shit", but the meaning in being a garbage truck driver could be immense compared to an honest hedge fund manager or a VC in a Patagonia vest.

Hopefully cleaning up our own shit won't be a full-time job. We'll just figure out who's creating too much shit and educate them as a society :D


I don't really understand the point here, other than somewhat interesting play with LLMs. What does this tell us that's in any way applicable, or that points to further research? Genuinely asking.


Regarding tiny packages, I don't think they affect the size of the shipped bundle at all; they only bloat your local dev environment.


Bloat is mostly added by package authors, not website authors, and package authors can't know who's running their code or look at the metrics. I doubt many website authors directly use isEven or polyfills.


What's wrong with a well-protected VM? Especially compared to something whose security selling point is "no one uses it" (per your argument; I don't know how secure this actually is).


Nothing, but "there are already working options" does not necessarily mean we shouldn't try new (and sometimes weird) things


Yeah, but GP was replying to a comment saying "you don't want to run code in a well protected VM", which is of course complete nonsense, and GP was right to question it.


GP says "You don't want to just run that code in ... even a very well protected VM." Why?


Because unless you can fund several teams - kernel, firmware (BIOS, etc.), GPU drivers, QEMU, KVM, extra hardening (e.g. QEMU running under something like bpfilter) - plus a red team, security through obscurity is cheaper. The attack surface is just too large.


What is this "security through obscurity" you're talking about? We're talking about running Linux in a VM inside a browser. That has just as much attack surface (and in some ways more) as running Linux in a hypervisor.


Sure, by your own argument, you should somehow increase the price of people telling other people what to avoid spending money on


Regarding safety, no benchmark showed 0% misalignment. The best we had was "safest model so far" marketing speak.

Regarding predicting the future (in general, but also around AI), I'm not sure why anyone would think anything is certain, or why you would trust anyone who thinks that.

Humanity is a complex system that doesn't always produce predictable output given some input (like AI advancing). And here even the input is very uncertain (we may reach "AGI" in 2 years or in 100).


I guess it generally has a 50/50 chance of drive/walk, but some prompts nudge it toward one or the other.

Btw, the explanations don't matter that much. Since it writes the answer first, the only thing that matters is what it decides for the first token. If the first token is "walk" (or "wa", or however it's split), it has no choice but to make up an explanation to defend that answer.
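The first-token commitment can be illustrated with a toy decoder (not a real model): once the first answer token is sampled, everything after is conditioned on it, so the "explanation" just rationalises whatever came first. The probabilities, prompt nudge, and canned reasons are all invented for the sketch.

```python
import random

# Hypothetical 50/50 prior over the first answer token.
FIRST_TOKEN_PROBS = {"walk": 0.5, "drive": 0.5}

def answer(prompt: str, seed: int) -> str:
    rng = random.Random(seed)
    # A prompt cue shifts the first-token distribution slightly (made-up nudge).
    p_walk = FIRST_TOKEN_PROBS["walk"] + (0.2 if "exercise" in prompt else 0.0)
    first = "walk" if rng.random() < p_walk else "drive"
    # Everything after the first token is conditioned on it: the "model"
    # can only defend the choice it has already committed to.
    reasons = {"walk": "it's close and healthy", "drive": "it's faster"}
    return f"{first}, because {reasons[first]}"

print(answer("should I walk or drive to the shop? I need exercise", seed=1))
```

The point of the sketch: the explanation text is a deterministic function of the already-sampled first token, never the other way around.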


If I had to choose between a large organization and a single person vibe coded app, I'd choose large organization.


Could one solution be to always do two scans, N months apart, before drawing any conclusions (excluding things that can be reliably detected from a single scan)? The initial scan could affect N (if you find something potentially aggressive, you can schedule the second scan sooner). And then do a follow-up every M years.

That should exclude benign or very slowly growing things.
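The two-scan idea amounts to estimating a growth rate. A back-of-envelope sketch, assuming exponential growth: compute a volume doubling time from two measurements taken months apart, and treat long doubling times as likely indolent. The 400-day threshold echoes figures used in some lung-nodule guidelines but is purely illustrative here, not medical advice.

```python
import math

def doubling_time_days(v1_mm3: float, v2_mm3: float, interval_days: float) -> float:
    """Doubling time under exponential growth; infinite if no growth."""
    if v2_mm3 <= v1_mm3:
        return math.inf
    return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

def likely_indolent(v1: float, v2: float, days: float, threshold: float = 400.0) -> bool:
    # Long doubling time -> benign or very slowly growing, per the comment above.
    return doubling_time_days(v1, v2, days) > threshold

# A lesion that exactly doubled over 180 days has a 180-day doubling time.
print(doubling_time_days(100.0, 200.0, 180.0))  # -> 180.0
```

In this framing, the initial scan setting N is just choosing the interval so that an aggressive growth rate would be detectable above measurement noise.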

