Hacker News | canpan's comments

People are just different. I always wonder how we should think about eating and health on a personal level.

I can eat McDonald's and still get perfect blood results. (I don't do that anymore.) I have a friend who does not like any vegetables or fruit, and he is fine. But I also have friends who just look at a bag of sweets and grow fat. Allergies and stomach health can be very specific.

Of course you control a lot. But at the same time, it seems very individual. Maybe an opportunity for personalized AI nutrition coaching?


Steam developing Proton was what made it possible for me to switch fully. No dual boot or anything needed. It's great.

Funnily enough, I also run GOG games through Steam Proton. But I'm looking forward to the GOG client working!


WINE crawled so that Proton could run.

Like even in 2014 WINE worked well enough for most games for me. Proton just made it utterly effortless, and lets me run AAA games like RDR2 and CP2077.


I would say that WINE did 90% of what had to be done, then Proton came and did another 90% so now we are 99% there.

Proton is amazing and it's really three different subprojects that deserve a lot of credit each.

First is Wine itself, with its implementation of Win32 APIs. I ran some games through Wine even twenty years ago but it was certainly not always possible, and usually not even easy.

Second is DXVK, which fills the main gap in Wine, namely Direct3D compatibility. Wine has long had its own implementation of the D3D libraries, but it was not as performant, and more importantly it was never quite complete. You'd run into all sorts of problems because the Wine implementation differed from the native Windows D3D, and that was enough to break many games. DXVK is a translation layer that translates D3D calls to Vulkan with excellent performance, and it basically solves the problem of D3D on Linux.

Then there are the parts original to Proton itself. It applies targeted, high-quality patches to Wine and DXVK to improve game compatibility, brings in a few other modules, and most importantly glues it all together so it works seamlessly and with excellent UX. From the first release of Proton until recently, running Windows games through Steam took just a couple of extra clicks to enable Proton for that game. And now even that isn't necessary: Proton is enabled by default, so you run a game just by downloading and launching it, the same exact process as on Windows.


Steam with Proton is simply incredible.

And now it doesn't even split games into "Linux" vs "Windows"; it simply assumes all games run on Linux. And they mostly do! Though to be fair, I had to tweak a couple to make them run, and Space Marine II absolutely refuses to play past the cutscene, but most other games "just work".


God I hope Valve gets serious with SteamOS and it becomes a competitive target for PC games. They're making amazing progress with the Steam Deck, and I'm so ready to be free from Windows.

Is there something wrong with the many distros that make Steam a really easy install, or ship it out of the box? I mean, Bazzite literally has a fullscreen Steam option out of the box that's pretty close to the SteamOS experience, with broader hardware support.

I'm trying to word this without sounding dismissive of Bazzite simply for not being from a big company with money to throw around. I'm sure the people making it are doing great work. But I just don't get the feeling it's anywhere near the position it needs to be in to be a "real platform" that could disrupt Windows. It has to be looked at from the perspective of publishers, and whether it's worth their money to target a new platform.

Valve has good, stable funds to pay a team full time to build and support SteamOS, which, over a long period of time and with enough user uptake, I think will have better chances of getting publishers on board with ensuring their games work on something that isn't Windows. Hell, they could probably make deals with publishers to say "hey, here's a pile of money to make sure your game works on SteamOS day 1, and put it in all the ads" and get the ball rolling that way.

Gaming is a tough space to crack. I think Valve's money and their history of supporting the most popular gaming platform on PC inspire the trust needed to make their platform a standard target.


The PLATFORM from a game publisher's perspective is still going to be Steam/Proton on Linux... More likely than not, it's all still mostly going to be Win32/64, but with improved Proton testing/targeting... and this goes for SteamOS or Steam on other Linux distros... it's the same.

From your perspective you aren't waiting around for "completion"... in terms of scope, most of it is built on efforts from Fedora/Red Hat, with enough customization to make it friendlier to gamers. Linux distros aren't like Windows; they share a lot and are largely interoperable or compatible within a few major camps.

But very little of this affects what will happen with games. Your experience with Steam on pretty much any Linux distro is likely to be as good or better than Steam on SteamOS.

Edit: to clarify, there are differences between Linux distros... but the fact is that Steam on pretty much any modern, updated distro will be a very similar experience, whether it's "SteamOS" or something else that you aren't having to wait around for. For that matter, you can put together a current AMD system with up to a 9070 XT and run SteamOS today; the hardware is supported and you don't actually have to wait for it if you don't want to. You may find the experience better with a desktop distro if you plan on using it as a desktop as much as a game platform, and more so if you want to run a non-AMD GPU.


The core of Bazzite has nothing to do with whether it's from a big company or not. The complaint doesn't make much sense given that the foundation Bazzite is actually built on is sponsored and developed by Fedora/RHEL.

Maybe I'm downplaying what the Bazzite team is actually doing, but from afar it is Fedora Silverblue with gaming-related tweaks out of the box, probably targeting handhelds and common gaming hardware in testing.

The actual issue of adopting a new operating system is already rearing its head on this thread. "What's Bazzite? What's Silverblue? SteamOS, is that linux? Is that different from this other linux?".

There are too many options for someone who wants to sit down and play a game. Unless a major OEM decides to push Linux on their systems, SteamOS is generally the only real competitor in this space due to reputation and control of the PC gaming market. Time in the market versus timing the market is what comes to mind here.


Paradox-of-choice issues are overblown. Every Linux distro is a repackaging of the same core components and the same software. The PC is standardized for the most part; there is not much commodity hardware that lacks support, and the popular hardware that needs particular support (Nvidia drivers) is catered to by any popular distro out there.

Users are mostly afraid of wasting time trying Linux (any Linux) and having to go back to Windows for reason X, Y, or Z that they didn't even know about. For my partner who doesn't game, reason Z is one particular feature of Microsoft Word (the shrinkwrap application, not 365 Copilot App or whatever) that isn't emulated by LibreOffice or Google Docs. For competitive PC gamers, it's kernel anti-cheat. The Linux desktop story in general has been to slowly whittle down these reasons until there really is no good excuse for users not to switch and for vendors not to support the OS, even through compatibility layers.


The problem I have with this approach is that ultimately you're trading one owning company for another, rather than building to a standard that anyone could build around.

Because someday Valve may no longer be privately owned, and then we're potentially back where we started. If we support strong OSS ecosystems around computers, we don't have to fight this battle over and over again.

Valve slow-rolling SteamOS and being coy about whether it will ever be released as a "standalone, supported" OS is only possible because they're a private company and can build for open source ecosystems.


SteamOS is actively shipping on consumer hardware today; that's the real major difference here. People who don't even know how to install their own operating system are using it.

There isn't a downside to these other distros like Bazzite.


Considering the Steam Machine will come with SteamOS, it looks like they are going all in.

I was amazed that the PC port of Spider-Man: Miles Morales worked perfectly with no tweaking at all. That's the newest AAA game I own (I think), and it runs silky smooth and hasn't had any issues.

It wasn’t that long ago that Wine was only really useful for games that were at least 5-10 years old. Proton is amazing.


I'm not super familiar with the space.

Is the only reason for needing Proton to do DirectX API translations?


Games use plenty of other Win32 APIs. Creating windows, running processes, and opening files all go through APIs.

Something like Wine is needed to do that translation too.
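To make that concrete, here is a hypothetical sketch in plain C (the save_game helper is made up for illustration) of the kind of non-graphics Win32 call a game makes constantly, which Wine has to map onto POSIX equivalents:

    #include <windows.h>

    /* A game persisting state uses Win32 file APIs, not POSIX open()/write().
       Wine implements CreateFileA/WriteFile/CloseHandle on top of Linux syscalls. */
    static BOOL save_game(const char *path, const void *data, DWORD size)
    {
        HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return FALSE;
        DWORD written = 0;
        BOOL ok = WriteFile(h, data, size, &written, NULL) && written == size;
        CloseHandle(h);
        return ok;
    }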


Right, but some games like CS have native Linux clients. Is it that hard to recompile the game to run under Linux?

It often is hard. If you're using Win32 APIs extensively, you'll have to port your code to their Linux counterparts.

There's also the issue of forward compatibility. Sometimes you just can't run an old Linux game on a newer distro, while it works fine in Wine. Or it might partially work: for example, I've managed to run a Linux build of Heroes of Might and Magic III, but didn't get any sound, because it relied on some ancient sound API (pre-ALSA; perhaps OSS?). The Windows version works great in Wine to this day.

For some game engines, though, porting is really easy. There are piracy groups releasing Linux ports of Unity games (ones that don't have an official Linux version) by just replacing the game executable with a compatible one from another game.


Run under which display server protocol?

Can someone explain why this is done? I get the feeling it's normally done when a company is in trouble or soon will be. But they should have more money than ever.

They say it is to focus on innovation, but if you are a smart young person in NL, would you want to work somewhere that just fired 1700 people? And if you already work there and are a top performer, isn't it a good time to rethink? A company I know also wanted to focus; instead of firing people, they sold the parts of the company they felt did not fit their future vision.


Actually, they are eliminating 3000 of the 4500 engineering manager positions. However, of those 3000, they are moving 1400 to an engineering position. The article also says that engineers spend 35% of their time coordinating with their managers and that they want to cut the red tape with this move.

Of course, it's hard to tell how much is PR and how much is reality. However, if there is substance to it, it would make me want to work there even more, since they value engineering culture over management culture. Having more velocity is good.


> of those 3000, they are moving 1400 to an engineering position

Interesting. In old companies, the only way to climb the ladder (get a raise) was to get into management. And then if someone was a bad manager, they might get 'sidemoted' into some position where they could still contribute. Anyway, back in the old days, it was not uncommon to see 'managers' or even 'directors' with no direct reports.


Where do you know the details from? It’s not in the press release. Is that some insider information?

The link was changed. Originally a Dutch news article was linked that had these details.

So are you saying that there are no new positions open for engineers who were actually doing engineering, and that instead some managers who were not doing engineering will now suddenly have to pick up the thread again and start doing engineering?

That would be disappointing for the engineers who were actually doing engineering, as yet again their grade increases would be taken by management types.


I think the press release is actually clear that they felt this was necessary to retain talent:

> Engineers in particular have expressed their desire to focus their time on engineering, without being hampered by slow process flows

I'm guessing ASML had a lot of regrettable attrition and heard this in the exit interviews.


It's well known here in the region around ASML that they are very process-heavy and things move slowly. I'm not keen on working for them.

> would you want to work where they just fired 1700 people

Firing 1700 managers is somewhat different from firing 1700 ICs. Whether managers will want to work there is an open question, but quite a lot of ICs will see the trimmed management layer as a good sign that they'll be free to get shit done.


IC = individual contributor. Wikipedia translating management speak:

> Individual contributor, a business role for an employee without management responsibilities


Thank you, I was lost reading the comments. Funny that all the comments shitting on managers are speaking their language.

I actually abhor the word "individual" as part of IC (which I luckily hadn't encountered before, so I had to look it up). It appears very pejorative to me, as if all cooperation went via management instead of directly between team members.

At ASML, they could also be integrated circuits being baked in an oven to cure.

They trimmed the managerial layer. A smart young person isn't immediately affected by it, and (at least to me) this signals a focus on actual work and a flattening of the organization's structure.

It is done because management needs to show that profits are increasing or they themselves will lose their jobs. Since they do not want to lose their jobs and they do not know how to increase profits, they decided to fire 1700 employees in the hope that lower expenses will translate into larger profits.

They've also done another thing:

>ASML also announced a new share buyback programme of up to €12 billion, to be executed by 31 December 2028.

They have €12 billion they don't know what to do with, so they will give it to shareholders, for a nice gain of less than 1% per year for the next 3 years. Assuming the annual salary cost of each of the 1700 employees is 150K (likely much, much lower), those 12 billion could have paid for their employment for the next 47 years.


I worked for a company that went through 2 cycles like this, and I can report that it had zero effect on us engineers.

My impression was that people were constantly being promoted into management and at some point we just had too many managers and that's why it was done. Of course, when you know this, the question becomes: why allow things to get to this point in the first place?


Presumably because people expect to be promoted periodically, so they pile up on the high end until the symptom gets corrected all at once. A realistic (but quite controversial) solution might be to emulate other companies that have done away with most of the promotion hierarchy. Different roles but more or less standardized pay across all employees and an understanding that promotions aren't a thing. Rather than climbing a ladder you're there to get shit done.

Just have the possibility of a pay increase without taking on management responsibilities.

Once engineering starts out-earning them by a wide enough margin, management will become insecure. /s

I actually am curious why this isn't a more commonplace practice. Why would we build systems that keep accumulating managers at the expense of skilled senior engineers?


Because managers promote managers and managers have vested interest in themselves.

Why? Bad management. Perhaps even bad leadership.

It sounds like:

Layoff --> increase short-term valuation --> increase value per share --> shareholders happy during buyback.

That said, it's true that having a lot of middle management can slow things down. On the other hand, they could indeed have created new entities and new projects, or re-qualified employees, ...


One reason: maximizing investor value. CEOs and executives usually get bonuses after layoffs.

Regarding memory, I recently changed my approach to try not to use dynamic memory, or, if I need it, to allocate it once at startup. Often static memory set up at startup is sufficient.

Instead, I use the stack much more and fix a limit at startup on how much data the program can handle. It adds the need to think about what happens if your system runs out of memory.

Like OP said, it's not a solution for all types of programs. But it makes for very stable software with known and easily tested error states. Also adds a bit of fun in figuring out how to do it.
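A minimal sketch of the pattern in C (the names and the capacity are made up for illustration):

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_CLIENTS 64            /* hard limit, fixed at compile time */

    /* All storage is static; nothing is malloc'd at runtime. */
    static struct { int fd; } clients[MAX_CLIENTS];
    static size_t n_clients = 0;

    /* "Out of memory" becomes an ordinary, easily tested error path. */
    static bool add_client(int fd)
    {
        if (n_clients == MAX_CLIENTS)
            return false;             /* at capacity: reject explicitly */
        clients[n_clients++].fd = fd;
        return true;
    }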


This.

As someone who spent most of their career as an embedded dev: yes, this is fine for (like the parent said) some types of software.

Even in places where you'd think this is a bad idea, it can still be a good approach, for example allocating and mapping all memory up front, up to the limit you are designing for. Honestly, this is how engineering is done - you have specified limits in the design, and you work explicitly to those limits.

So "allocate everything at startup" need not be "allocate everything at program startup", it can be "allocate everything at workflow startup", where "workflow" can be a thread, a long-running input-directed sequence of functions, etc.

For example, I am starting a tiny stripped-down web server for a project, and my approach is going to be a single 4KB[1] block for each request, allocated from a pool (which can expand under pressure up to some maximum) and returned to the pool once the response is sent.

The 4KB includes at most 14 headers (regardless of each header's size), with the remaining space for the JSON payload. The JSON payload is limited to at most 10 fields. This makes parsing everything "allocation-less", because the array holding pointers to the header keys+values is `const char *headers[14]`, and the one for the payload JSON data is `const char *fields[10]`.

A request that doesn't fit in any of that will be rejected. This means that everything is simple and the allocation for each request happens once at startup (pool creation), even while parsing the input.

I'm toying with the idea of doing the same for responses too, instead of writing them out as and when the output is determined during the servicing of the request.

-------------------------

[1] I might switch to 6KB or 8KB if requests need more; whatever number is chosen, it's going to be a fixed, static number.
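If it helps, here is roughly what that request block could look like in C; the numbers come from the comment above, but the exact layout is just a sketch:

    #define REQ_BUF_SIZE 4096          /* one fixed-size block per request */
    #define MAX_HEADERS  14
    #define MAX_FIELDS   10

    /* The pointer tables point into buf, so parsing a request never
       allocates; anything that doesn't fit is simply rejected. */
    struct request {
        char        buf[REQ_BUF_SIZE];      /* raw request bytes */
        const char *headers[MAX_HEADERS];   /* header key/value pointers into buf */
        const char *fields[MAX_FIELDS];     /* JSON field pointers into buf */
        unsigned    n_headers, n_fields;
    };

    /* Blocks come from a pool: acquire on accept, release once the
       response is sent (pool details omitted). */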


In recent years I had to write some firmware code in C, and that was exactly the approach I took. So far I have never needed any dynamic memory, and I was surprised by how far I could get without it.

This is the way. Allocate all memory upfront. Create an allocator if you need to divvy it up dynamically. Acquire all resources up front. Try to fit everything on the stack. Much easier that way.

Only allocate on the heap if you absolutely have to.
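A tiny bump ("arena") allocator is often all that "allocator to divvy it up" needs to be. A sketch in C, with an arbitrarily chosen region size:

    #include <stddef.h>
    #include <stdint.h>

    /* One big region reserved up front; "allocation" is a pointer bump. */
    static uint8_t arena[1 << 20];     /* 1 MiB, chosen arbitrarily */
    static size_t  arena_used = 0;

    static void *arena_alloc(size_t size)
    {
        size = (size + 15u) & ~(size_t)15u;   /* keep 16-byte alignment */
        if (size > sizeof arena - arena_used)
            return NULL;                      /* capacity is the only failure */
        void *p = &arena[arena_used];
        arena_used += size;
        return p;
    }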


Dynamic memory allocation solves the problem of dynamic business requirements.

If you know your requirements up front, static memory initialisation is the way.

For instance, indexing a typed array with an enum is no different from an unordered map of string to int, IF you have all your business requirements up front.
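In C that trade looks something like this (a hypothetical counters example): once the key set is fixed, the "map" collapses into an enum and a plain array:

    /* Fixed business requirements: the full key set is known up front. */
    enum counter { C_LOGINS, C_ERRORS, C_REQUESTS, C_COUNT };

    static long counters[C_COUNT];   /* the whole "map": no hashing, no heap */

    static void bump(enum counter c) { counters[c]++; }
    /* vs. a string-keyed hash map doing the same job, paying allocation
       and lookup cost on every access. */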


I have some firmware that runs an event loop. There is no malloc anywhere. But I do have an area that gets reset after each event handler call. Useful for passing objects up the call stack.

One other thing I tend to do: anything that needs to live longer than the current call stack gets copied into a queue of some sort. I feel it's kind of doing manually what Rust's borrow checker tries to enforce.
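A sketch of that reset-per-event scratch area in C (sizes and names invented):

    #include <stddef.h>
    #include <stdint.h>

    static uint8_t scratch[4096];      /* lives as long as the firmware */
    static size_t  scratch_used = 0;

    static void *scratch_alloc(size_t size)   /* bump-allocate, no free() */
    {
        if (size > sizeof scratch - scratch_used)
            return NULL;
        void *p = &scratch[scratch_used];
        scratch_used += size;
        return p;
    }

    /* The event loop enforces the lifetime: everything handed out by
       scratch_alloc() dies when the handler returns. */
    void event_loop_iteration(void (*handler)(void))
    {
        handler();          /* may call scratch_alloc() freely */
        scratch_used = 0;   /* reset: the whole area is reclaimed at once */
    }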


I've been looking into Ada recently, and it has cool safety mechanisms to encourage this same kind of thing. It even allows you to dynamically allocate on the stack in many cases.

You can allocate dynamically on the stack in C as well. Every compiler will give you some form of alloca().

True, but in many environments where C is used, the stacks may be configured with small sizes and without the possibility of being grown dynamically.

In such environments, it may be necessary to estimate the maximum stack usage and configure big enough stacks, if possible.

Having to estimate maximum memory usage is the same constraint as when allocating a static array as a work area and then using a custom allocator to provide memory when needed.


Sure, the parent was commenting more on the capability existing in Ada in contrast to C. Ada's variable-length local variables are basically C's alloca(). The interesting part in Ada is returning variable-length types from functions and having them automatically managed via the "secondary stack", which is a fixed-size buffer in embedded/constrained environments. The compiler takes care of most of the dirty work for you.

We mainly use C++, not C, and we do this with polymorphic allocators. This is our main allocator for the local stack:

https://bloomberg.github.io/bde-resources/doxygen/bde_api_pr...

… or this for supplying a large external static buffer:

https://bloomberg.github.io/bde-resources/doxygen/bde_api_pr...


> You can allocate dynamically on the stack in C as well. Every compiler will give you some form of alloca().

And if it doesn't, VLAs are still in there until C23, IIRC.


`-Wvla` Friends don’t let friends VLA :)

alloca is certainly worse. Worst-case fixed-size arrays on the stack are also worse. If you need a variable-sized array on the stack, VLAs are the best alternative. Many other languages, such as Ada, have them too.
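For the record, this is the shape of the trade-off in C; a small sketch, with the bound picked arbitrarily:

    #include <stddef.h>

    long sum_doubled(const int *data, size_t n)
    {
        if (n == 0 || n > 1024)   /* always bound a VLA: the danger is the size, */
            return -1;            /* not the feature itself */

        int tmp[n];               /* VLA: scoped like any local, gone at `}` */
        long total = 0;
        for (size_t i = 0; i < n; i++) {
            tmp[i] = data[i] * 2;
            total += tmp[i];
        }
        return total;
        /* alloca(), by contrast, is only released when the function returns,
           which is easy to get wrong inside loops. */
    }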

Another paying user here. Very happy with Kagi, but I would cancel ASAP if there were ads. I don't mind paying more; I just don't want ads. But I cannot really imagine it; they would lose half their competitive advantage. (The other half being having good results.)

For me it would probably mean building a search engine from scratch. For 90% of my search use cases it's pretty straightforward. I mostly visit the same sites.


For finance, I find the other way around interesting. You see many FIRE types preparing TOO MUCH. Obviously you should not live paycheck to paycheck. But if you prepare for a 3%-return FIRE, wasting years, your chance of dying early is a much bigger worry than running out of money.

The tricky part is you don't have a motor. It's an unpowered vehicle.

(The caveat is the start: you will be pulled up with a rope or by another powered plane, or have a starter motor to get up once.)

You can go thousands of miles without propulsion! IF the weather and wind play nice.

So you go up with thermals (rising warm air) or lift from hillsides, and go forward by gliding. Repeat with skill and luck.


Working in that area in Japan, I think I can provide some answers.

Payment: Credit cards are mostly used for B2C, but if you are a B2B SaaS you want invoicing and a local presence (i.e., no tax or currency shenanigans for your customer).

Hands-on sales: Don't expect customers to sign up for a free plan and convert. Your conversion rate will be close to 0, mostly scammers. Instead: contact forms, cold calls, going out to events, lots of drinking.

Regarding language: Many people do not speak English. I think that surprises some, but Japan is big and you can live forever happily speaking only Japanese. So if you don't support Japanese, it's a complete no-go.


> Many people do not speak English

More like no one speaks English, plainly put.


> Hand on sales: Don't expect customers to sign up for a free plan and convert. Your conversion rate will be close to 0. Mostly scammers.

Brutal! Is that true even for Japanese companies with a traditional sales force?


Your sales team can totally utilize a free plan while communicating with the customer. But a free sign-up from the website, without salesperson contact, is something we have only had negative experiences with, so we stopped doing it. (Talking about B2B; B2C might be different.)

In other words, don't bother from outside of Japan unless the SaaS has been fully translated into Japanese? That's what I'm getting from this response. Is that true?

Translation alone is simply not enough. You'll need a local presence, Japanese salespeople, and patience (a lot of all three).

Thank you. I'm already on it.

I came here to say the same. The location is a bit hard to get to. But if you are in the area, also visit the town of Heidelberg; it is close by and worth the trip.

I went there as a child and loved it, in particular the U-boat you can enter. Next time I go to Germany, I plan to visit it again.


The people with the drive to be able to retire early are also the most likely to be bored when it happens.

Working on something fun and novel, like Gemini in his case (mentioned in the article), is the ideal.

