
Everyone is linking to their favorite tool, but I wanted to point out to the OP that Monosketch looks awesome. Cool that it's open source as well.

Was hoping this comment would be here. Firecracker and microVMs are a good use case. Also, being able to simply test and develop is a nice-to-have.

Nested virtualization can mean a lot of things. Not just full VMs.


> Firecracker and microVMs are a good use case.

A good use case for what?


We operate a Postgres service on Firecracker. You can create as many databases as you want; we memory-snapshot them after 5 seconds of inactivity and spin them back up in 50ms when a query arrives.

https://www.prisma.io/postgres
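
For anyone curious what that pause/snapshot/resume cycle looks like mechanically, here is a minimal sketch against Firecracker's HTTP API, which is served over a Unix domain socket. The socket and file paths are placeholders and the orchestration around it is simplified; this is not Prisma's actual implementation.

```python
# Sketch of the snapshot/restore cycle described above, using Firecracker's
# HTTP API over its Unix domain socket. Paths and timings are illustrative.
import json
import socket
import http.client


class FirecrackerAPI(http.client.HTTPConnection):
    """Minimal HTTP client for a Firecracker API Unix socket."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

    def call(self, method, path, body):
        self.request(method, path, json.dumps(body),
                     {"Content-Type": "application/json"})
        resp = self.getresponse()
        resp.read()
        return resp.status


def snapshot_idle_vm(api):
    """Pause an idle microVM and write a full snapshot to disk."""
    api.call("PATCH", "/vm", {"state": "Paused"})
    api.call("PUT", "/snapshot/create", {
        "snapshot_type": "Full",
        "snapshot_path": "/srv/snapshots/db-1/vmstate",   # placeholder path
        "mem_file_path": "/srv/snapshots/db-1/memory",    # placeholder path
    })


def restore_vm(api):
    """Load the snapshot into a fresh Firecracker process and resume it."""
    api.call("PUT", "/snapshot/load", {
        "snapshot_path": "/srv/snapshots/db-1/vmstate",
        "mem_backend": {"backend_type": "File",
                        "backend_path": "/srv/snapshots/db-1/memory"},
        "resume_vm": True,
    })


# After ~5 seconds of inactivity, snapshot the running microVM.
snapshot_idle_vm(FirecrackerAPI("/run/firecracker/db-1.socket"))

# When a query arrives, restore into a freshly started Firecracker process
# (listening on its own socket) rather than booting the guest from scratch.
restore_vm(FirecrackerAPI("/run/firecracker/db-1-restored.socket"))
```

The restore goes into a freshly started Firecracker process, which is a big part of why resume latency can be in the tens of milliseconds: guest memory comes back from the snapshot file instead of booting the kernel and Postgres again.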


Nowadays the universal answer to "what? why?" is AI. AI agents need VMs to run generated code in a sandbox, since that code can't be trusted.

I don't think everyone should assume that AI is the answer to all questions. I was asking the person I replied to, thanks.

We are running Sandboxes for AI agents using Firecracker microVMs @ E2B.

The poster you asked can reply too - Postgres and microVMs are worth considering nearly every time at the start.

Beyond encapsulation it greatly increases the portability of the software between environments and different clouds.


Many questions on their forum are similar to our situation: people wondering if they should restart their containers to get things working again, worried about whether they should do anything at all, whether they risk losing data if they do, or whether they should just give everything more time.

Lots of concerns about doing a Restart or Redeploy, since a lot of people have been offline for 4+ hours.

Since there haven't been any responses on the official support forum, maybe this will help someone.

I backed up our deployment first and then did a Restart (not a Redeploy). Our service came back up, thankfully.

Obviously do your own safety check about persistent volumes and databases first.
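
As a generic illustration of that "back up before you touch anything" step: the connection URL and file names below are placeholders, it assumes pg_dump is on the PATH, and it is not Railway-specific tooling.

```python
# Dump the database to a timestamped file and only call it safe to restart
# if the dump succeeded. DATABASE_URL is a placeholder, not a real value.
import subprocess
import sys
from datetime import datetime

DATABASE_URL = "postgresql://user:password@host:5432/appdb"  # placeholder


def backup_database():
    outfile = f"backup-{datetime.now():%Y%m%d-%H%M%S}.dump"
    result = subprocess.run(
        ["pg_dump", "--format=custom", f"--file={outfile}", DATABASE_URL],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        sys.exit(f"Backup failed, aborting restart:\n{result.stderr}")
    print(f"Backup written to {outfile}; safer to try a Restart now.")


if __name__ == "__main__":
    backup_database()
```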


Affected by the outage since about 6:15 AM PT this morning. We're still down as of 9:00 AM PT.

Our existing containers were in a failure state and are now in a partial failure state. Containers are running, but the underlying storage/database is offline.

I'm glad Railway updated their status page, but more details need to be posted so everyone knows what to do now.

Everyone has outages; it's the way of life and technology. Communication with your customers always makes it less painful, and people remember good communication, not the outage. Railway, let's start hearing more from you. The forum is having problems as well. Thanks.


(Angelo from Railway here)

Heard. To be transparent, the delay on ack is usually us trying to determine and correlate the issue. We have a post-mortem going out, but we note that the first report was in our system 10 minutes before it was acked, during which the platform team was trying to see which layer the impact was at.

That said, this is maybe concern #1 for the support team: we want the delta between a customer report and the outage being detected to be as small as possible. The way it usually works is that the platform alarms and pages go out first, and then the platform engineer will page a support engineer to run communications.

Usually the priority is to have the platform engineer focus on triaging the issue and then offload the workload to our support team so that we can accurately state what is going on. We have a new comms clustering system rolling out so that if we get 5 reports with similar content, it pages up to the support team as well. (We will roll this out after we've communicated with affected customers first.)
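
For illustration only, here is a toy version of that "N similar reports pages support" idea. The similarity check, threshold, and page_support_team() hook are stand-ins, not Railway's actual system.

```python
# Group incoming customer reports into clusters of similar text and page
# the support team once a cluster reaches the threshold.
from difflib import SequenceMatcher

PAGE_THRESHOLD = 5
clusters: list[dict] = []  # each: {"representative": str, "reports": [str]}


def page_support_team(cluster):
    # Stand-in for whatever actually notifies the on-call support engineer.
    print(f"PAGE: {len(cluster['reports'])} similar reports: "
          f"{cluster['representative']!r}")


def similar(a: str, b: str, threshold: float = 0.6) -> bool:
    # Naive text similarity; a real system would use something sturdier.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


def ingest_report(text: str):
    for cluster in clusters:
        if similar(text, cluster["representative"]):
            cluster["reports"].append(text)
            if len(cluster["reports"]) == PAGE_THRESHOLD:
                page_support_team(cluster)
            return
    clusters.append({"representative": text, "reports": [text]})


# Hypothetical reports; the fifth similar one triggers the page.
for report in [
    "Database volume unreachable on my deployment",
    "Database volume unreachable on our deployment",
    "Database volume is unreachable on my deployment",
    "Database volume unreachable on my deployment since 6am",
    "Database volume unreachable on my deployment, containers still running",
]:
    ingest_report(report)
```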


Thanks for the reply. Understood.

In situations like this, please dedicate at least one team member to respond as quickly as possible to the Railway Help Station posts. That's where your customers are going for communication and support.


Holy moly, AI-written comments, Batman!

So friggin’ cool. Well done.

They’re concentration camps. What else do you call a place built to hold tens of thousands of people? Why do you need 75,000 beds if you’re not planning to cram people in?

Then what happens after they’re locked in there? Are they processed one by one? Do the math. Even with absurdly optimistic assumptions of one hour per person, eight hours a day, every single day, you’re still talking about more than a year to get through 75,000 people. And that assumes perfect efficiency, no delays, no shortages, no illness.

While all that’s happening, people will get sick, injured, desperate. People will die. And after someone is “processed,” where do they go? Immediately put on a plane and sent back to their home country? Is that realistically happening at scale?

This setup isn’t about processing people. It’s about warehousing them. And when large numbers of people are caged indefinitely under those conditions, deaths get written off as “suicides.”


FTA - The original person posting about the unusual behavior was truly helpful.

https://community.notepad-plus-plus.org/topic/27212/autoupda...

Thankfully the responses weren’t outright dismissive, which is usually the case in these situations.

It was thought to be a local compromise and nothing to do with Notepad++.

Good lessons to be learned here. Don’t be quick to dismiss things simply because they don’t fit what you think should be happening. That’s the whole point: it doesn’t fit, so investigate why.

Most tech support aims to prove the person wrong right out of the gate.


Really cool. Easy flow. Super awesome that you can just start using it. I wish every online app followed this.

I wouldn’t be so confident. The article even references this. Apple has used third-party baseband hardware in the iPhone since the beginning, sourced from other manufacturers. All bets are off regarding security when that’s the case. That includes microphone access.

The article touches on this by saying Apple is making the baseband/modem hardware now, something they should have done from day one, and I’m not sure what took them so long. However, it was clear they didn’t have the expertise in this area and it was easier to just use someone else’s.


So what is the evidence of this being possible? Or is this just pure conjecture on your part?

Patents are why it took them so long.

Yeah but also RF in the real world is hard.

Apple found out the hard way with the iPhone 4. Their secrecy didn't help. People doing real-world testing had a case that made it look like an iPhone 3GS, and that also happened to mitigate the death grip problem. We know this because one was stolen and given to Gizmodo.

And that was only antenna design; they still used a standard RF stack then.

