
Couldn’t you just go up your chain to the VP or whatever and use their backing to negotiate and organize at that level? It might not work for random projects, but if Apple is using libsodium for security, this could presumably be pitched as an investment in their own software supply chain.

Using zstd with a custom dictionary tuned for small files probably gets you most of the benefit without giving up independent compression of each file.
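
A minimal sketch of what that looks like with the Rust zstd crate's dictionary helpers (the sample corpus here is invented for illustration; a real one would be your actual small files):

    use std::io;

    fn main() -> io::Result<()> {
        // Dictionary training wants many representative samples; these
        // synthetic records stand in for a real corpus of small files.
        let samples: Vec<Vec<u8>> = (0..1000)
            .map(|i| format!("{{\"user\":\"user{i}\",\"role\":\"viewer\"}}").into_bytes())
            .collect();

        // Train a shared dictionary, capped here at 16 KiB.
        let dict = zstd::dict::from_samples(&samples, 16 * 1024)?;

        // Files stay independently compressed; only the dictionary is
        // shared, so any single file can be decompressed on its own.
        let mut c = zstd::bulk::Compressor::with_dictionary(3, &dict)?;
        let file = br#"{"user":"alice","role":"admin"}"#;
        let compressed = c.compress(file)?;

        let mut d = zstd::bulk::Decompressor::with_dictionary(&dict)?;
        assert_eq!(d.decompress(&compressed, file.len())?, file);
        println!("{} -> {} bytes", file.len(), compressed.len());
        Ok(())
    }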

From this thread it's pretty clear:

1. You've come up with your own unique definition of "zero cost" which isn't what the term means, at least as popularized by C++ where the concept of "zero cost abstractions" comes from. It means zero runtime cost, not zero cognitive cost.

2. Rust generally has more "zero cost abstractions" than C++ in terms of shifting traditional runtime checks into compile-time checks (e.g. while unique_ptr exists in Rust as Box, it's far more rarely used because ownership is easier to transfer).

3. Rust generally has faster defaults in the standard library, so the abstractions are lower-cost than their C++ equivalents (e.g. C++ took forever to add a hash table via unordered_map and then proceeded to really fuck up the definition in ways that inhibit high-performance designs, despite this being raised during standardization, and the committee hasn't revisited the issue since).

4. Rust offers far more opportunities for "zero cost abstractions" than C++. Notably, its ownership rules prevent mutable aliasing, which lets the compiler do more aggressive optimizations than are generally possible in C++. It provides abstractions like NonNull and the NonZero integer types so that unused bit patterns can be leveraged for enums (e.g. Option<NonNull<T>> is the size of a raw pointer, which is harder to pull off in C++; see the sketch after this list). Functional style and iterators are implemented to be aggressively inlined, with overheads like bounds checking elided by the compiler, whereas C++ can't elide as easily and various bounds checks have to be inserted. Could go on and on.
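
A quick, compilable illustration of that niche optimization, using only std types:

    use std::mem::size_of;
    use std::num::NonZeroU32;
    use std::ptr::NonNull;

    fn main() {
        // The compiler uses the forbidden null/zero bit patterns to
        // encode None, so the Option wrapper adds no space at all.
        assert_eq!(size_of::<Option<NonNull<u8>>>(), size_of::<*mut u8>());
        assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());
        assert_eq!(size_of::<Option<Box<u8>>>(), size_of::<*mut u8>());
        println!("Option added zero bytes in every case");
    }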

Anyway, compared with C++, Rust as a base language has simpler cognitive abstractions, more zero cost abstractions, and more efficient abstractions overall, both in the language and the standard library.


This feels like something that’s a neat claim and will work against simple setups, but less accurate for more complicated scenarios (e.g. Tor). Then you’re really just relying on how accurate your knowledge of the proxies is.

Also, I think the README’s logic is slightly off:

> According to Special Relativity, information cannot travel faster than the speed of light. Therefore, if the round trip time (RTT) is 4ms, it's physically impossible for them to be farther than 2 light milliseconds away, which is approximately 600 kilometers.

It calls out the 33% for fiber but ignores that there’s no straight-line path between two points on the network, and there could be wireless, cable, and DSL links somewhere along the hop.

Also, the controlled variable here is latency, not distance. You can always increase latency through buffering, and therefore you can be made to appear farther away than you are. That buffering need not even be intentional: your perceived distance estimate will vary with queuing delays in intermediaries depending on the time of day (itself a fingerprint if you incorporate time-aware measurements, but a source of error if you don’t).
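
To make the bound concrete, here's a small sketch (my own numbers, not the README's): any queuing or buffering delay inflates the RTT and therefore the apparent distance, which is why this can only ever be an upper bound:

    const C_KM_PER_MS: f64 = 299.792; // speed of light in vacuum, km per ms

    // Upper bound on distance implied by an RTT; velocity_factor ~0.67
    // for fiber. Added delay anywhere only ever inflates the estimate.
    fn max_distance_km(rtt_ms: f64, velocity_factor: f64) -> f64 {
        (rtt_ms / 2.0) * C_KM_PER_MS * velocity_factor
    }

    fn main() {
        println!("{:.0} km", max_distance_km(4.0, 1.0));  // ~600 (vacuum)
        println!("{:.0} km", max_distance_km(4.0, 0.67)); // ~402 (fiber)
        // 1 ms of buffering somewhere on the path adds ~100 "km":
        println!("{:.0} km", max_distance_km(5.0, 0.67)); // ~502
    }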

Fingerprinting is hard, and I dislike the framing that it’s absolutely impossible to mask, or that the fingerprint has no false positive and false negative error rates.


About the straight-line path: I did think of that, but apparently I forgot to address it when writing the README :p

The point I was trying to make is that if the RTT is low enough, you know the connection is being made from nearby. It's an upper bound, and with some assumptions you can tighten it, so it's not a way of knowing the exact distance, but rather the maximum distance the connection can be made from. If someone is in Spain but the bound says they can't be more than 400 km from Australia, something went terribly wrong somewhere hehe

In hindsight I think the issue with my explanation is that I was trying to explain the differences when fingerprinting two different protocols, but ended up going with a TCP-only approach since Fastly wouldn't expose the data I needed for the TLS and HTTP RTT. In theory, though, fingerprinting via the RTT difference between protocols, where one protocol is proxied and the other isn't, is impossible to bypass; but this is only theory.
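
If I'm reading the idea right, a minimal sketch of that RTT-difference check might look like this (thresholds and names invented for illustration; this is not the project's code):

    // An L4 proxy completes the TCP handshake itself, so the TCP RTT
    // reflects only the client->proxy leg, while the TLS round trips
    // still have to reach the origin. A large gap suggests a proxy.
    fn looks_like_l4_proxy(tcp_rtt_ms: f64, tls_rtt_ms: f64) -> bool {
        let gap_ms = tls_rtt_ms - tcp_rtt_ms;
        // Invented thresholds; real traffic needs jitter-aware stats
        // over many samples, not a single measurement.
        gap_ms > 20.0 && tls_rtt_ms > tcp_rtt_ms * 2.0
    }

    fn main() {
        assert!(!looks_like_l4_proxy(30.0, 33.0)); // direct connection
        assert!(looks_like_l4_proxy(5.0, 80.0));   // proxy near client
    }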

I think I will edit the README in the future since I don't much like how it turned out. Thanks for the feedback!

By the way, it detects Tor, I tested it ;D


> In theory, though, fingerprinting via the RTT difference between protocols, where one protocol is proxied and the other isn't, is impossible to bypass; but this is only theory.

Alice wants you to think she's in New York when she's really in Taipei, so she gets a VM in New York and runs a browser in it via RDP. How are you detecting this?


I am not detecting that, I am just detecting L4 proxies for now sob

And then gatekeepers criticize them for doing so.

It’s not about battle testing; eBPF has specific restrictions guaranteeing that a loaded program a) won’t lock up your kernel and b) won’t cause a security exploit just by being loaded. Now, Spectre throws a wrench in things, but the framing is weird: why compare it to eBPF instead of just making a mechanism to load kernel modules written in Rust?

> why compare it to eBPF instead of just making a mechanism to load kernel modules written in Rust?

Because it's not just a mechanism to load kernel modules in Rust; it's specifically a mechanism to load them in the same places that eBPF programs are loadable, using the existing kernel machinery for executing eBPF programs, and with some helpers to interface with existing eBPF programs.


eBPF still guarantees that a loaded program won’t crash or hang the kernel. Rex does let you hang the kernel.

As a lover of Rust, ooo boy does this sound like a bad idea. The Rust compiler is not guaranteed to always output safe code against malicious inputs, given that there are numerous known soundness bugs that can be exploited. Unless I’m missing something, this is a security nightmare of an idea.

Also, there are reasons why eBPF programs aren’t allowed to run arbitrarily long, and this just ignores that problem too.


I asked about this when they presented the project at the Linux Plumbers conference. They replied that it's not really intended to be a security boundary, and that you should not let anyone malicious load these programs.

Given this threat model, I think their project is entirely reasonable. Safe Rust will prevent accidental mistakes, even if you could technically circumvent it if you really tried.


eBPF's limitations are as much about reliability as security. The bounded loop restriction, for instance, prevents eBPF programs from locking up your machine.

You could still imagine terminating these programs after some bounded time or cycle count. It isn't as good as static verification, but it's certainly more flexible.
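
A hedged sketch of that idea (my own, not Rex's actual design): charge a "fuel" budget on each loop iteration and abort once it's exhausted, instead of statically proving termination the way the eBPF verifier does:

    // Cycle budget charged per loop back-edge; Err once it's gone.
    struct Fuel(u64);

    impl Fuel {
        fn tick(&mut self) -> Result<(), &'static str> {
            match self.0.checked_sub(1) {
                Some(rest) => { self.0 = rest; Ok(()) }
                None => Err("cycle budget exhausted; killing extension"),
            }
        }
    }

    fn main() {
        let mut fuel = Fuel(1_000_000);
        // Even an unbounded loop now terminates deterministically.
        loop {
            if let Err(e) = fuel.tick() {
                eprintln!("{e}");
                break;
            }
            // ... extension body would run here ...
        }
    }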

If you're doing this kind of "optimistic" reliability story, where developers who stay on the happy path are unlikely to cause any real problems, I don't get what the value of something like this is over just doing a normal Rust LKM that isn't locked into a specific set of helpers.

You can extend kernel functionality without having to develop a whole kernel module? Just because your module has no memory errors does not mean it is working as intended.

Further, if you want to hook into specific parts of the kernel, you might well end up writing far more boilerplate instead of just intercepting the one call you're actually interested in and adding some metadata or doing some access control.

I personally am all for a kernel that can do more things for more people with less bespoke kernel modules or patches.


I guess my point is that the delta between a "whole kernel module" and a "Rex extension" is pretty small.

If nothing else, Rex makes a good central place to evolve a set of helper code for doing eBPF-like stuff in a Rust kernel module. Wouldn't be too surprised if it eventually becomes closer to an embedded DSL.

Sure! Can't disagree with that.

As I understand it, eBPF has also given up on that due to Spectre. As a result you need root to use it on most distros anyway, and the kernel devs aren't going to expand unprivileged use (some systems are stuck on cBPF).

So it's not like eBPF is secure and this isn't. They're both insecure in different ways.


So eBPF for a WAF isn't worth it?

re: eBPF and WAFs: https://news.ycombinator.com/item?id=45951011

From https://news.ycombinator.com/context?id=43564972 :

> Should a microkernel implement eBPF and WASM, or, for the same reasons that justify a microkernel should eBPF and most other things be confined or relegated or segregated in userspace; in terms of microkernel goals like separation of concerns and least privilege and then performance?

"Isolated Execution Environment for eBPF" (2025-04) https://news.ycombinator.com/item?id=43697214

"ePass: Verifier-Cooperative Runtime Enforcement for eBPF" (2025-12) https://ebpf.foundation/epass-verifier-cooperative-runtime-e... .. https://news.ycombinator.com/item?id=46412121


In this comment someone tries to justify the design, citing an LWN article: https://github.com/rex-rs/rex/issues/2#issuecomment-26965339...

I think this is a fair take:

> We currently do not support unprivileged use case (same as BPF). Basically, Rex extensions are expected to be loaded by privileged context only.

As I understand it, a privileged context would be one where one is also able to load new kernel modules, which also don't have any limitations, although I suppose the system could be configured otherwise for some reason.

So this is like a more convenient way to inject kernel code at runtime than kernel modules or eBPF programs are, with some associated downsides (such as being less safe than eBPF; the question about non-termination at the end of the thread seems apt). It doesn't seem like they are aiming to get this into the mainline kernel, and I doubt that could really happen anyway.


Yeah I agree with this assessment. It is not an eBPF replacement for many reasons. But could be a slightly safer alternative to kernel modules.

That's one aspect of the design. Again, the complexity requirements are there for a reason; I haven't seen an explanation of why this eschews them.

Fully agree.

If it has to be native code, it should live in user space, at the very least.


Or at the very least it should be framed as a way to load kernel modules written in Rust. I just don’t understand the framing that this is an alternative to eBPF programs.

I'm considering it now. Aside from correctness verification, the main reason we'd use a limited language for packet inspection is in case the policy is malicious. How often is that the case?

Most people trust most or all of the code running on their machines. They certainly trust their firewall policy not to be malware. If you already trust it, using a better, safe language might be helpful. In many cases, eBPF will be fine.

This isn't the first time this has been done. SPIN was an operating system written in Modula-3 that allowed type-safe linking of code into the kernel, balancing safety and performance.


Can't my eBPF scheduler starve my monitoring processes, or my eBPF firewall rules prevent me from getting security updates?

If Eve gets to load bad eBPF programs on your computer, then I doubt countermeasures in how they run can save you.


Evil eBPF programs can hide their presence from the bpf syscall as well.

Interesting. Any good read you'd recommend on the topic/attack? Thanks.

Look up "eBPF rootkits"

This is a good article about one found in the wild: https://www.synacktiv.com/en/publications/linkpro-ebpf-rootk...


More that they use GPS to synchronize the clocks. Having your own atomic clock doesn’t really improve your accuracy except within the single data center where it’s deployed (although I’m sure there are techniques for synchronizing against nearby atomic clocks + GPS with low bounds to get a really tight bound, so they don’t need one in every data center).

> and allows you to make much stronger guarantees than TrueTime (due to higher precision distributed ordering guarantees, which translate to lower latency and higher throughput distributed writes).

TrueTime is the software algorithm for managing the timestamps. It’s agnostic to the accuracy of the underlying time source: if the source is inaccurate, you get looser bounds and, as you note, higher latency. Google already does everything you suggest for TrueTime while also having atomic clocks in some places.
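
A sketch of the commit-wait idea from the Spanner paper (my simplification, not Google's actual API): now() returns an uncertainty interval, and a commit is only exposed once the earliest possible current time has passed its timestamp, so a wider uncertainty directly means higher commit latency:

    use std::thread::sleep;
    use std::time::{Duration, Instant};

    // TrueTime-style interval: true time lies somewhere within it.
    struct TtInterval {
        earliest: Instant,
        latest: Instant,
    }

    // Fixed epsilon for illustration; real TrueTime derives it from
    // clock drift and time since the last GPS/atomic sync.
    fn tt_now(epsilon: Duration) -> TtInterval {
        let now = Instant::now();
        TtInterval { earliest: now - epsilon, latest: now + epsilon }
    }

    // Commit wait: block until no clock anywhere could still read a
    // time earlier than the commit timestamp.
    fn commit_wait(commit_ts: Instant, epsilon: Duration) {
        while tt_now(epsilon).earliest <= commit_ts {
            sleep(Duration::from_micros(50));
        }
    }

    fn main() {
        let eps = Duration::from_millis(2); // ~paper-era uncertainty
        let commit_ts = tt_now(eps).latest;
        commit_wait(commit_ts, eps); // waits ~2*eps here
        println!("safe to expose commit");
    }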


Yup! I was referring to the original TrueTime/Spanner papers, not whatever's currently deployed. The original paper describes distributed ordering guarantees at millisecond-scale precision, which implies many more transactions in flight in the uncertain state, and coarser distributed ordering than you get with the much tighter upper bound achievable with nanosecond precision and microsecond comms latency...

More than a decade of progress, probably in no small part from Google pushing vendors to improve hardware :)

Amen. :)

Fwiw, in the Bay Area I thought it was a private company, but it turns out it’s government run, with FasTrak operated by the Bay Area Toll Authority (BATA) in partnership with the California Department of Transportation and the California Highway Patrol (not sure why CHP is involved, but they probably get some cut of the revenue stream in exchange for enforcement).

CHP probably provides accident response services for those roads.
