Hacker News | jabl's comments

I don't think the situation is that comparable to Python, since in Python the library has to be present at runtime. And given Python's dysfunctional packaging story, there are potentially a lot of grey hairs saved by not requiring anything beyond the stdlib.

With Rust, it's an issue at compile-time only. You can then copy the binary around without having to worry about which crates were needed to build it.

Of course, there is the question of trust and discoverability. Maybe Rust would be served by a larger stdlib, or some other mechanism for saying "this is a collection of high-quality, well-maintained libraries; prefer these where applicable". Perhaps the thing the blog post author hints at would be a solution without having to bundle everything into the stdlib, we'll see.

But I'd be somewhat wary of shoveling a lot of stuff into the stdlib; it's very hard to get rid of deprecated functionality. E.g. how many command-line argument parsers are there in the Python stdlib? Three, last I counted (getopt, optparse, argparse).


> You can look to Swift for prior art on how this can be done: https://faultlore.com/blah/swift-abi/

> It would be very hard to accomplish.

Since Rust cares very much about zero-overhead abstractions and performance, I would guess if something like this were to be implemented, it would have to be via some optional (crate/module/function?) attributes, and the default would remain the existing monomorphization style of code generation.


Swift’s approach still monomorphizes within a binary, and only has runtime costs when calling code across a dylib boundary. I think Rust could do something like this as well.

If Rust and static linking were to become much more popular, Linux distros could adopt some rsync/zsync like binary diff protocol for updates instead of pulling entire packages from scratch.

Static linking used to be popular, as it was the only way of linking on most computer systems, outside expensive hardware like Xerox workstations, Lisp machines, the ETH Zurich workstations, or what have you.

One of the first pieces of consumer hardware to support dynamic linking was the Amiga, with its Libraries and DataTypes.

We moved away from having a full-blown OS done with static linking, with the exception of embedded deployments and firmware, for many reasons.


Even then, they would still need to rebuild massive amounts of packages on updates. That is nice in theory, but see the number of bugs reported in Debian because upstream projects fail to rebuild as expected. "I don't have the exact micro version of this dependency I'm expecting" is one common reason, but there are many others. It's a pretty regular thing, and it would therefore be burdensome to distro maintainers.

Yeah, I'm not really convinced that this matters at all, tbh.

I thought the entire point of Ladybird was precisely to reinvent the wheel?

This is also the case for Servo, so it makes sense to collaborate.

Servo has a distinct design goal that sets it apart from its predecessor within Mozilla, and has already produced offshoots that have made their way directly into Firefox.

Its purpose is not to reinvent everything. It’s not a hype project.


Servo's original purpose was to reinvent everything for Firefox in order to modernize the codebase and make it more secure and more performant (e.g. the CSS styling engine, the HTML parser, etc.), so it actually fits that purpose pretty well.

There's https://himmelblau-idm.org/ for a Linux client for Entra. Haven't tried it myself though.

Doesn't FreeIPA work with EntraID? I used to use it with Exchange and it worked pretty well (or, as well as any non-Microsoft product that has to integrate with Microsoft products, at least).

Looks nice, all it needs is an OSS server now ;)

Does this evolution of the Vulkan API get closer to the model explained in https://www.sebastianaaltonen.com/blog/no-graphics-api which we discussed in https://news.ycombinator.com/item?id=46293062 ?


Yes, you can get very close to that API with this extension + existing Vulkan extensions. The main difference is that you still kind of need opaque buffer and texture objects instead of raw pointers, but you can get GPU pointers for them and still work with those. In theory I think you could do the malloc API design there but it's fairly unintuitive in Vulkan and you'd still need VkBuffers internally even if you didn't expose them in a wrapper layer. I've got a (not yet ready for public) wrapper on Vulkan that mostly matches this blog post, and so far it's been a really lovely way to do graphics programming.

The main thing that's not possible at all on top of Vulkan is his signals API, which I would enjoy seeing - it could be done if timeline semaphores could be waited on/signalled inside a command buffer, rather than just on submission boundaries. Not sure how feasible that is with existing hardware though.


It's a baby-step in this direction, e.g. from Seb's article:

> Vulkan’s VK_EXT_descriptor_buffer (https://www.khronos.org/blog/vk-ext-descriptor-buffer) extension (2022) is similar to my proposal, allowing direct CPU and GPU write. It is supported by most vendors, but unfortunately is not part of the Vulkan 1.4 core spec.

The new `VK_EXT_descriptor_heap` extension described in the Khronos post is a replacement for `VK_EXT_descriptor_buffer` which fixes some problems but otherwise is the same basic idea (e.g. "descriptors are just memory").


Kind of, but not really.

Rayleigh scattering is elastic (only the direction changes), whereas Raman scattering is inelastic (the energy, i.e. the color, changes in addition to the direction).


If you want to make an argument for a representation of boolean variables other than 0 for false and 1 for true, one could make the case for true being all bits set.

That would make it slightly easier to do things like memset()'ing a vector of booleans, or a struct containing a boolean like in this case. Backwards compatibility with pre-_Bool boolean expressions in C99 probably made that a non-starter in any case.


A 1-bit integer can be interpreted as either a signed integer or as an unsigned integer, exactly like an integer number of any other size.

Converting a 1-bit integer to a byte-sized or word-sized integer, by using the same extension rules as for any other size (i.e. by using either sign extension or zero extension), yields as the converted value for "true" either "1" for the unsigned integer interpretation or the value with all ones (i.e. "-1") for the signed integer interpretation.

So you could have "unsigned bool" and "signed bool", exactly like you have "unsigned char" and "signed char", to choose between the 2 possible representations.


> one could make the case for true being all bits set

Historical note: this was the case in QBasic, where true was defined as -1.


There apparently were quite a number of ISAs where checking the sign bit was more convenient (or faster?) than checking (in)equality with zero.


Some Fortran compilers also did this. MS Powerstation Fortran at least, IIRC.


The (minor, but still) optimization that is enabled by assuming _Bool can contain only 1 or 0 is that negating a boolean value can be done with x ^ 1, without requiring a conditional.

That being said, for just testing the value, using the zero/nonzero test that every (?) cpu has is enough; I'm not sure what is achieved here with this more complex test.


Amusingly(?), the Juha Sipilä character mentioned in the article later became prime minister in Finland from 2015-2019.

