Apparently they've deprecated Postgres support and now only recommend sqlite as the storage backend. I have nothing against sqlite but to me this looks like Tailscale actively signaling what they think the expected use of headscale is.
> Scaling / How many clients does Headscale support?
> It depends. As often stated, Headscale is not enterprise software and our focus is homelabbers and self-hosters. Of course, we do not prevent people from using it in a commercial/professional setting and often get questions about scaling.
> Please note that when Headscale is developed, performance is not part of the consideration as the main audience is considered to be users with a modest amount of devices. We focus on correctness and feature parity with Tailscale SaaS over time. [...]
> Headscale calculates a map of all nodes that need to talk to each other, creating this "world map" requires a lot of CPU time. When an event that requires changes to this map happens, the whole "world" is recalculated, and a new "world map" is created for every node in the network. [...]
> Headscale will start to struggle when, e.g., many nodes with frequent changes cause the resource usage to remain constantly high. In the worst-case scenario, the queue of nodes waiting for their map will grow to a point where Headscale will never be able to catch up, and nodes will never learn about the current state of the world.
I find that quite interesting and it is one of the reasons I've not really considered trying out Headscale myself.
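To make the concern concrete: as the docs above describe it, every change event rebuilds a per-node view of every other node, so the work grows roughly quadratically with the size of the network. A toy illustration of that shape (not Headscale's actual code):

```python
# Toy sketch of the "world map" recalculation described in the quoted docs.
# Not Headscale's real code; it just shows why each event costs O(n^2).
def rebuild_world(nodes, allowed_to_talk):
    maps = {}
    for node in nodes:                          # one map per node...
        maps[node] = [peer for peer in nodes    # ...considering every other node
                      if peer != node and allowed_to_talk(node, peer)]
    return maps

# Every ACL change, key expiry, or node join/leave triggers rebuild_world() again,
# so frequent changes on a large network keep the CPU busy more or less constantly.
```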
Why? Makes perfect sense to me. Designing a product with a specific use case in mind is good. When you've got the limited resources of an open source volunteer project, trying to solve every problem is a recipe for burnout. If it can even be done.
I mean, this is a great advertisement in and of itself. Something being considered "enterprise software" means it will have 90% more features than needed, the code will be a combination of dozens of different mid-level devs' new "perfect" abstractions, and it will only test code paths through all those features that the original enterprise valued. I.e., it is great if you work in an enterprise, as it will generate a lot of work with an easy scapegoat.
I don't understand what these two have to do with anything. The DB use is almost trivial, and SQLite can be embedded. Why would we want wasted effort and configuration complexity on supporting Postgres?
With that kind of logic you wouldn't need Headscale and would just ask your favorite LLM to write a similar tool for you, with your own requirements and nothing else.
No, it's not really necessary to extrapolate the logic any further. You have deemed a very specific and focused task "wasted effort." So the logic leads to putting in the effort you do not find "wasteful" and outsourcing the remainder to the LLM to do this very specific thing.
TIL! My problem with them requiring sqlite was that I assumed it would make a high availability setup either hard or impossible. Maybe that's not true, but definitely off the beaten path for headscale.
Yeah, Headscale people don't hide that it's a toy. I didn't get a homelab full of datacentre-grade equipment because I want to use toy, nonscaling solutions with vastly incomplete feature sets, but for the exact opposite reason.
On a different note; the HN obsession with SQLite these days is getting a bit tiresome.
Just today I had my first uncorrectable memory read error on our servers in 10 years or so (in Sacramento). I'd like to think it's related, because the alternative (buying new DIMMs) is too horrifying to contemplate.
In the performance tests they said they used "consensus among 64 samples" and "re-ranking 1000 samples with a learned scoring function" for the best results.
If they did something similar for these human evaluations, rather than just use the single sample, you could see how that would be horrible for personal writing.
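For reference, those two selection schemes are roughly majority voting and best-of-n re-ranking. A minimal sketch of what they usually amount to (the `generate` and `score` functions here are placeholders, not the paper's actual pipeline):

```python
from collections import Counter

def consensus(samples):
    """'Consensus among N samples': majority vote over N independent generations."""
    return Counter(samples).most_common(1)[0][0]

def rerank(samples, score):
    """'Re-ranking N samples': keep the generation the learned scorer likes best."""
    return max(samples, key=score)

# Hypothetical usage, assuming some sampler and learned scoring model exist:
# answers = [generate(prompt) for _ in range(64)]
# final = consensus(answers)              # or rerank(answers, reward_model)
```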
I don’t understand how that is generalizable. I’m not going to be able to train a scoring function for any arbitrary task I need to do. In many cases the problem of ranking is at least as hard as generating a response in the first place.
> PostgreSQL 17 adds a new connection parameter, sslnegotiation, which allows PostgreSQL to perform direct TLS handshakes when using ALPN, eliminating a network roundtrip. PostgreSQL is registered as postgresql in the ALPN directory.
I'm looking forward to being able to offload PostgreSQL TLS to a standard (non-pg-specific) proxy.
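If you want to try it from the client side, the new keyword goes straight into the libpq connection string. A minimal sketch with psycopg (host/db/user are placeholders, and it needs libpq 17+ underneath):

```python
# Sketch: ask libpq (17+) to start TLS immediately instead of sending the
# cleartext SSLRequest first. The ALPN protocol name is "postgresql", which is
# also what would let a generic TLS proxy recognize and route the connection.
import psycopg

conn = psycopg.connect(
    "host=db.example.com dbname=app user=app "   # placeholders
    "sslmode=require sslnegotiation=direct"      # direct TLS, saves a round trip
)
```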
They’re probably amplifiers rather than repeaters. Optical amplifiers don’t need to decode the signal to work. Here’s Wikipedia on erbium-doped fiber amplifiers:
> A relatively high-powered beam of light is mixed with the input signal using a wavelength selective coupler (WSC). The input signal and the excitation light must be at significantly different wavelengths. The mixed light is guided into a section of fiber with erbium ions included in the core. This high-powered light beam excites the erbium ions to their higher-energy state. When the photons belonging to the signal at a different wavelength from the pump light meet the excited erbium ions, the erbium ions give up some of their energy to the signal and return to their lower-energy state.
I moved away from Cloudflare—to self hosting our network infrastructure—because, while this didn’t happen to us, I was very aware that it could. We had a great deal on Enterprise for a couple of years, but zero guarantees that it would last (and some indications that it wouldn’t). I wanted to stop praying that they wouldn’t alter the deal.
Do you mean to imply that cloud services at higher levels of abstraction are cheaper per unit of compute than simple VMs? I believe you’ll find that the opposite is true.
At the scale discussed here, there are no free lunches.
You absolutely do not have to distribute VMs manually. This [0] is a tiny Python script run as a cron job that migrates VMs in a Proxmox (also free) cluster according to CPU utilization. You could extend it to consider other parameters.
While I don’t personally have experience with more enterprise-y solutions like VMWare, I have to imagine they have more complete solutions already baked-in.
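For the curious, a stripped-down sketch of what such a cron job does against the Proxmox REST API (the linked script [0] will differ in detail; the host, API token, and 20% imbalance threshold here are placeholders):

```python
# Minimal sketch: move the busiest VM off the busiest node in a Proxmox cluster.
# Endpoints are the documented Proxmox VE API; credentials/thresholds are placeholders.
import requests

PVE = "https://pve.example.com:8006/api2/json"
HEADERS = {"Authorization": "PVEAPIToken=root@pam!balancer=SECRET"}

def get(path):
    r = requests.get(f"{PVE}{path}", headers=HEADERS, verify=False)  # self-signed lab certs
    r.raise_for_status()
    return r.json()["data"]

nodes = sorted(get("/cluster/resources?type=node"), key=lambda n: n["cpu"])
vms = [v for v in get("/cluster/resources?type=vm") if v["status"] == "running"]

idlest, busiest = nodes[0], nodes[-1]
if busiest["cpu"] - idlest["cpu"] > 0.2:  # imbalance threshold (placeholder)
    vm = max((v for v in vms if v["node"] == busiest["node"]), key=lambda v: v["cpu"])
    requests.post(
        f"{PVE}/nodes/{vm['node']}/qemu/{vm['vmid']}/migrate",
        headers=HEADERS,
        data={"target": idlest["node"], "online": 1},  # live migration
        verify=False,
    ).raise_for_status()
```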
Not really, because Concorde died in the seventies when the lie flat business seat didn't exist.
BA and AF managed to keep the zombie fleet going very profitably all the way until the end in the early 2000s, and that business wasn't killed by the lie flat business class seat either. It was killed by the impossibility of continuing to operate a tiny fleet of '60s planes forever.
Now if you said that the reason we don't have ANY supersonic passenger jets today is that lie-flat business seats are good enough, that's a more defensible position, but I'd still say that the overland flight restrictions, which limited any SST to just a couple of routes, are a bigger factor.
When I flew on Concorde the one thought I never had was "I wish I had a lie flat seat and half the airspeed".
It is the combination of the lie-flat seat, the very limited range, and the overland restriction.
Cutting a six-hour flight to three hours is not really worth the premium. At the same time, no supersonic airliner has had the range to do transpacific routes, where the time saved would be much greater.
Not sure why you're getting downvoted, this was definitely a key factor that made Concorde into a niche product.
It's not that customers preferred slower and cheaper flights over Concorde—they didn't, Concorde had very healthy average occupancy rates and operating the flights was very profitable for BA and Air France (they got the planes for free, of course).
It's that you can't fly a 1960s plane forever and you also can't amortize the design and development cost of new models with the only addressable market being first class customers travelling between the East Coast and a couple of European capitals (and this was directly caused by the overland flight restrictions).