
I guess we know how he managed to get the funding.

I’ve been considering buying 8x 64GB models and setting them up as equal-priority swap disks (to mitigate the low throughput) for this exact reason.

Can confirm doing so is awesome. Get some slightly bigger ones and partition them for additional use as zil. They're extremely satisfying to use, and depressing to remember that we'll never see their like again.

Do you have any more details? This is such a niche idea, that I’d be buying blind.

Sure! This is more or less how I'm using Optane in my storage box:

Two U.2 x4 to PCIe x16 riser cards, one loaded with 960GB Intel-branded Optanes, one with 1.5TB IBM-branded. PCIe bifurcation is set up in the BIOS to let them all come up properly, where they just show up as regular NVMe. Riser cards like this can easily be swapped for PCIe to SAS / Oculink to U.2 cables, if that would be more accommodating to your chassis.

Once they all come up, partition them for your preferred split of swap and ZFS special. Swap should have them all mounted with the same priority and discard=pages. I also recommend setting up zswap (not zram swap) with lz4 as an additional layer of fast, evictable memory pool, as well as `vm.overcommit_memory=2` and `vm.swappiness=150`. This will effectively give you really good memory tiering for workloads and file cache.
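
Roughly, as shell commands, that part looks like the sketch below (device names and numbers are placeholders for whatever partitions you end up with):

    # equal-priority swap on each Optane partition, with discard enabled
    swapon --priority 100 --discard=pages /dev/nvme0n1p1
    swapon --priority 100 --discard=pages /dev/nvme1n1p1
    # (or the equivalent fstab options: sw,pri=100,discard=pages)

    # zswap as a compressed, evictable pool in front of the swap devices
    echo 1   > /sys/module/zswap/parameters/enabled
    echo lz4 > /sys/module/zswap/parameters/compressor

    # bias toward swapping rather than dropping file cache, and disallow overcommit
    sysctl -w vm.swappiness=150
    sysctl -w vm.overcommit_memory=2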

When adding the other partitions to ZFS, use `-o ashift=12 special mirror dev dev special mirror dev dev ...`. ZFS special covers all metadata, the intent log (sorta write cache), and optionally small files. I like to set it up so <= 8k small files get sent there, but you can probably go higher depending on how much capacity you allocate. My ~24T of allocated data ended up being ~150GB special with 8k small files, and that's with the whole pool configured with deduplication and blake3 for all hashes. Blake3 is fast as heck, but has very long hashes, so from a metadata standpoint, I'm using the most expensive option. I mitigate that a bit by setting metadata redundancy to `some`, since my metadata is effectively RAID10 anyway.
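
As a concrete sketch (pool and device names invented, and blake3 / `redundant_metadata=some` assume a reasonably recent OpenZFS):

    # two mirrored pairs of Optane partitions as the special vdev
    zpool add -o ashift=12 tank \
      special mirror /dev/nvme0n1p2 /dev/nvme1n1p2 \
      special mirror /dev/nvme2n1p2 /dev/nvme3n1p2

    # route <= 8k files to the special vdev, plus the dedup/hash/metadata settings
    zfs set special_small_blocks=8K tank
    zfs set dedup=blake3 tank
    zfs set checksum=blake3 tank
    zfs set redundant_metadata=some tank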

With some extra NVMe/Optane allocated to regular ZFS read cache, and all my spinning-rust data VDEVs also as RAID10, it's almost like having the whole array in memory, or at least on fast flash. Eliminating metadata seeks from your drives, and letting metadata be written nearly instantly to Optane, does wonderful things for spinning rust :)
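
The read-cache part is just another vdev type, e.g. (again with made-up names):

    # spare NVMe/Optane partitions as L2ARC read cache
    zpool add tank cache /dev/nvme0n1p3 /dev/nvme1n1p3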


My approach has been using static analysis to produce a Mermaid diagram of all Classes:Methods and their caller/callees.
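
A fragment of that output might look something like this (class and method names invented), with one node per Class:Method and an edge per caller/callee relationship:

    flowchart LR
      osCreate["OrderService.Create"] --> pcCharge["PaymentClient.Charge"]
      osCreate --> orSave["OrderRepository.Save"]
      ijRun["InvoiceJob.Run"] --> orLoad["OrderRepository.Load"]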

If they made a thin client with these processors, in a mac-mini (mac-tiny?) format, I would be buying a couple on every paycheck.

But that's very wishful thinking.


Mac Neo

Sure, but the screen/keyboard aren't necessary

I believe they use https://bun.com/, not Node.js


You can always join the Orleans Discord


Feels like half of the goal here is to give people more incentive to upgrade over the free tier.


The bit where the death toll was 70 after a week of protests, then the internet was cut and in 3 days it’s closer to 2,000.


TimeMachine has never been so important.


Arq does it better.


TimeMachine is worthless trash compared to restic


Please elaborate


It works on Linux, Windows, macOS, and BSD, so it's not locked to Apple's ecosystem, and you can back up directly to local storage, SFTP, S3, Backblaze B2, Azure, Google Cloud, and more, whereas Time Machine is largely limited to local drives or network shares.

Restic deduplicates at the chunk level across all snapshots, often achieving better space efficiency than Time Machine's hardlink-based approach. All data is encrypted client-side before leaving your machine, while Time Machine encryption is optional. Restic supports append-only mode for protection against ransomware or accidental deletion, and it has a built-in check command to verify repository integrity.
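
The day-to-day usage is only a handful of commands, roughly (repository and bucket names are placeholders, credentials go in environment variables):

    # create an encrypted repository on Backblaze B2 (could also be local, sftp, s3, ...)
    restic -r b2:my-bucket:backups init

    # back up; chunks are deduplicated against every existing snapshot
    restic -r b2:my-bucket:backups backup ~/Documents

    # verify repository integrity
    restic -r b2:my-bucket:backups check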

Time Machine has a reputation for silent failures and corruption issues that have frustrated users for years. Network backups (to NAS devices) use sparse bundle disk images that are notoriously fragile. A dropped connection mid-backup can corrupt the entire backup history, not just the current snapshot. https://www.google.com/search?q=time+machine+corruption+spar...

Time Machine sometimes decides a backup is corrupted and demands you start fresh, losing all history. Backups can stop working without obvious notification, leaving users thinking they're protected when they're not. https://www.reddit.com/r/synology/comments/11cod08/apple_tim...

The shift from HFS+ to APFS introduced new bugs, and local snapshots sometimes behave unpredictably. https://www.google.com/search?q=time+machine+restore+problem...

The backup metadata database can grow unwieldy and slow, eventually causing failures.

https://www.reddit.com/r/MacOS/comments/1cjebor/why_is_time_...

https://www.reddit.com/r/MacOS/comments/w7mkk9/time_machine_...

https://www.reddit.com/r/MacOS/comments/1du5nc6/time_machine...

https://www.reddit.com/r/osx/comments/omk7z7/is_a_time_machi...

https://www.reddit.com/r/mac/comments/ydfman/time_machine_ba...

https://www.reddit.com/r/MacOS/comments/1pfmiww/time_machine...

https://www.reddit.com/r/osx/comments/lci6z0/time_machine_ex...

Time Machine is just garbage for ignorant people.


Almost all of my backup setup is built around restic, including monitoring of backups (alerting when they fail and when they do not run often enough).

It is a very solid setup, with 3 independent backups: local, nearby and far away.

Now - it took an awful lot of time to set up (including writing the wrapper to account for everything). This is advanced IT-level stuff.
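
The wrapper itself can stay pretty small; a simplified sketch (repository, paths and the ping URL are placeholders for whatever monitoring you use):

    #!/bin/sh
    # run the backup, then report success/failure to an external monitor
    set -eu
    export RESTIC_REPOSITORY=/mnt/backup/restic
    export RESTIC_PASSWORD_FILE=/etc/restic/password

    if restic backup /home /etc && restic check --read-data-subset=5%; then
        curl -fsS https://monitoring.example/ping/backup-ok
    else
        curl -fsS https://monitoring.example/ping/backup-fail
    fi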

So Time Machine is not for ignorant people, but something everyone can use. (I never used it, no idea if it's good, but it has to at least work)


One works, one loses your data. Oh well.

Guess there's a lot of money to be made wrapping it with a paid GUI


I am not sure what you are after, to be honest.

Restic is fantastic. And restic is complicated for someone who is not technical.

So there is a need for something that works, even if not in an optimal way, and saves people's data.

Are you saying that Time Machine does not back up data correctly? But then there are other services that do.

Restic is not for the everyday Joe.

And to your point about "ignorant people" - it is as if I said that you are an ignorant person because you do not create your own medicine, or produce your own electricity, or paint your own paintings, or build your own car. For a biochemist specializing in pharma (or Walt in Breaking Bad :)), you are an ignorant person unable to do the basic stuff: synthesizing paracetamol. It is a piece of cake.


But I just want to backup my important files to the cloud


I did the exact same thing for my own sandboxing, through the Proxmox API.


That’s awesome — thanks for sharing!

If you don’t mind me asking:

- Did you use LXC containers, or full VMs for each sandbox?
- How did you handle SSH / network isolation?
- Any tips on making provisioning faster or keeping resources efficient?

We’re using unprivileged LXC + SSH jump hosts on a single VM for cost efficiency. I’d love to hear what tradeoffs you found using the Proxmox API.


My setup is quite purpose built. I use Orleans as the main fabric of our codebase. But since the Orleans cluster is a 'virtual computer' in a sense, you can't rely on anything outside the runtime, since you don't know which machine your code is executing on.

So a Grain calls Proxmox with a generated SSH Key / CloudInit, then persists that to state, then deploys an Orleans client which connects to the cluster for any client side C# execution. There's lots you could do for isolated networks with the LXC setup, but my uses didn't require it.
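
On the Proxmox side, that provisioning step boils down to one create call against the API; a rough sketch of the equivalent from a shell via pvesh, with the node, VMID, template and key all invented:

    # create an LXC container with a generated SSH public key injected
    pvesh create /nodes/pve1/lxc \
      --vmid 4201 \
      --hostname sandbox-4201 \
      --ostemplate local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
      --unprivileged 1 \
      --ssh-public-keys "$(cat /tmp/sandbox-4201.pub)" \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp \
      --start 1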

Proxmox handles the horizontal scaling of the hardware. Orleans handles the horizontal scaling of the codebase.

