Helene survivor here. What's wild to me is that, regardless of the small scale of this facility, it's only a few hundred meters from a 1% flood zone: https://msc.fema.gov/portal/search
The address I found for the facility is 9101 Windmill Park Lane Hudson, TX 77064
This seems ill-advised given recent events like Hurricane Harvey.
Industrial buildings are typically built at dock height. Even if they don't do any grading, that would put the building well above any plausible flooding in that area.
My point is that we really don't know what "plausible" is anymore with these storms. That much is clear in the data. It seems silly to be so close to a flood zone with your very expensive DUV/EUV machines. There are probably other places they could have placed this facility.
The price of the carried inventory is still significant; the scale they mention reaching is thousands per day. And that's not including the backlog of components they would keep onsite to ensure production uptime.
Absolutely, but they are not losing a billion+ in EUV machines with year+ lead times in a flood. It'll hurt for sure though and doesn't appear to be the smartest overall move.
It also turns out that, for insurance purposes, you're allowed to use infill to raise a corner of a property that's below the high-water mark above it. At least in some states.
Some of the calculus is not about whether it will flood; it's about whether you'll lose your investment if it does. If an underwriter is willing to cover it, you might go for it anyway.
They will build to a much higher standard than normal US residential construction, as they do with most commercial buildings. Many people do not understand the vast difference between residential construction quality and the quality that mega corps get. I personally watched Apple build their new campus in Austin (I have daily progress pictures of the construction site, I work there), everything is solid concrete. These buildings can withstand any type of hurricane.
Flooding is also something which can be mitigated: build foundations to be taller, work with the topography to avoid the path of water, and build drainage solutions. You should see the drainage field that Apple built for their campus in Austin, it's absolutely massive and can divert an incredible amount of water.
> Many people do not understand the vast difference between residential construction quality and the quality that mega corps get.
It’s not limited to mega corps. Commercial construction is built to a higher standard. Sometimes you can buy commercial-grade hardware and materials for your house if you want.
Larger buildings are also more robust at the foundation because it needs to be so much stronger. That thick concrete is necessary, not a luxury.
No, my YouTube recommendation algorithm just vacillates erratically between recommending esoteric engineering clips from 15 years ago and trying to push me down an alt right reactionary pipeline.
That specific location would probably never flood in the way that you might think. The areas you really need to worry about are downstream of the Addicks and Barker dams:
I don't know what the topography of Houston is like, but here in Toronto, a few hundred meters would move you from the bottom of a deep river valley to the top of it. I would imagine they made sure they could get insurance before building and wouldn't have picked a place with significant risk.
The topography of Houston is that everywhere is a few hundred meters from a flood zone. You are exactly right; the area did not even come close to flooding during Harvey and is a good 30ft higher than the flood zone OP is referencing.
Likely a combination of business-friendly policies (low tax, no employer payroll tax, etc.) and proximity to ports. Houston is the 6th [1] largest port in the USA.
Apple also managed to build a Houston factory quickly: it was announced in Feb 2025 and was starting production by August, which is pretty impressive.
I moved from TX to west coast a few years back. Property taxes down, all other taxes and expenses up; total cost of living much higher now. It's also business friendly enough to make deals on taxes as needed, I can't imagine that will be a problem. I get the hate on TX but tbh outside of the heat, it can be a pretty great place to live across many dimensions.
I think there's more to your sibling's taxes than property taxes. The data tell the opposite story - WI property taxes are higher than TX ones, at least if we look at the medians:
As someone living in Fort Worth and making good money as a Staff SWE, I got a tax refund this year. It was due to paying interest on my house, but still.
I'd recommend asking your sibling to see if they qualify for the homestead exemption; it's significant. You or they can check if they're using it and see their exact property taxes here:
Texas property tax rates are some of the highest in the country. They should be higher than Wisconsin's.
The difference here is really more of an indicator of property values in the respective areas. In major metros in Texas, you're looking at ~2%+ tax rates, which is in fact higher than Wisconsin, even in the metros there.
> As someone living in Fort Worth and making good money as a Staff SWE, I got a tax refund this year. It was due to paying interest on my house, but still.
If you paid more in property taxes, that would indicate you can take a larger federal tax deduction... so, if anything, a tax refund implies you paid a lot in local property tax. Either that, or a boatload in interest (or, both). Neither is indicative of local property tax being low.
Isn't this something for which there's clear and easy-to-obtain aggregate data? What is the average tax burden for someone in WI vs. TX, instead of comparing a single data point from each? I have a feeling it's going to contradict you.
Indeed, and I'm surprised you are the first to mention it. The abatements these tech companies receive are quite substantial and will easily pay for flood damage.
It's actually a linear (more generally, abstract) algebra thing. (All, Differentiable, Smooth, or all sorts of other sets of) functions form a vector space. The derivative is a linear operator (generalized matrix). If you have a linear equation Ax=b, then if you can find some solution X, the general solution set is X+kerA, where kerA (the kernel or nullspace) is the set of all v where Av=0. What's the kernel of the derivative operator (i.e. what has 0 derivative)? Constant functions. So the general solution is whatever particular antiderivative you find plus any constant function.
You can do this sort of "particular solution plus kernel" analysis on any linear operator, which gives one strategy for solving linear differential equations. e.g. (aD^2+bD+cI) is a linear operator (weighted sums and compositions of linear operators are linear), so you can potentially do that analysis to solve problems like af''+bf'+cf=g. In that context you say the general solution is to add a homogeneous solution (af''+bf'+cf=0) to a particular solution (my intro differential equations class covered this but didn't have linear algebra as a prereq so naturally at the time it was just magic, like everything else presented in intro diffeq).
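This "particular plus homogeneous" structure is easy to check symbolically. A quick sketch with sympy — the specific ODE f'' - 3f' + 2f = e^(3x) is just an arbitrary illustration I picked, not anything from the discussion above:

```python
# Illustrative check of "general solution = particular + kernel" with sympy.
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# The linear operator (D^2 - 3D + 2I) applied to f, with forcing term exp(3x).
ode = sp.Eq(f(x).diff(x, 2) - 3*f(x).diff(x) + 2*f(x), sp.exp(3*x))

sol = sp.dsolve(ode, f(x))
print(sol)
# The result is a particular solution plus C1- and C2-weighted homogeneous
# solutions -- exactly the kernel of the operator, spanned by exp(x) and exp(2x)
# (the roots of r^2 - 3r + 2 = 0 are 1 and 2).
```

Substituting the returned expression back into the left-hand side and simplifying confirms that the constants drop out: everything in the kernel contributes zero.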
How would this product compare to a PostgREST based approach (this is the cool tech behind the original supabase) with load balancing at the HTTP level?
PostgREST is a translation layer: you use HTTP methods, inputs and outputs, to interact with Postgres, the database. It's a replacement for SQL, the language, which happens to also have a load balancer.
Their load balancer is still at the Postgres layer though. You can think of it as just an application that happens to speak a specific API. Load balancing applications is a solved problem.
PostgREST doesn't provide a replacement, rather a subset of the SQL language meant to be safe to expose to untrusted (frontend) clients.
Load balancing is not built in currently, but it can be done at the proxy layer, taking advantage of the fact that GET/HEAD requests are always executed in read-only transactions, so they can be routed to read replicas. This is what Supabase does [1], for example.
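As a sketch of that proxy-layer approach (upstream addresses are hypothetical; this goes inside an nginx `http` block and uses only the stock `map`/`upstream` directives):

```nginx
# Everything except GET/HEAD goes to the PostgREST instance on the primary.
upstream pg_rest_primary  { server 10.0.0.1:3000; }

# GET/HEAD are read-only, so they can fan out to instances on read replicas.
upstream pg_rest_replicas {
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

map $request_method $postgrest_upstream {
    default pg_rest_primary;
    GET     pg_rest_replicas;
    HEAD    pg_rest_replicas;
}

server {
    listen 80;
    location / {
        proxy_pass http://$postgrest_upstream;
    }
}
```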
I've found it interesting that systemd and Linux user permissions/groups never come into the sandboxing discussions. They're both quite robust, offer a good deal of customization in concert, and, by their nature, are fairly low-cost.
Unix permissions were written at a time when the (multi-user) system was protecting itself from the user. Every program ran with the same privileges as the user, because it wasn't a security consideration that maybe the program doesn't do what the user thinks it does. That's why the list of classic Unix tools contains nothing for sandboxing programs or anything like that; it was a non-issue.
And today this is... not sufficient. What we require today is to run programs protected from one another. For quite some time I tried to use Unix permissions for this (one user per application I run), but it's totally unworkable. You need a capabilities model, not a user-permissions model.
Anyway I already linked this elsewhere in this thread but in this comment it's a better fit https://xkcd.com/1200/
> And today this is... not sufficient. What we require today is to run programs protected from one another. For quite some time I tried to use Unix permissions for this (one user per application I run), but it's totally unworkable. You need a capabilities model, not a user-permissions model.
Unix permissions remain a fundamental building block of Android's sandbox. Each app runs as its own unix user.
I feel like AppArmor is getting there, very, very slowly. Every package just needs to come with a declarative profile or fall back to a strict default profile.
Nowadays, it's fairly simple to ask for a unit file and accompanying bash script/tests for correctness. I think the barrier in that sense has practically vanished.
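For reference, a minimal hardened unit sketch — the service name and binary path are hypothetical, but every directive here is a standard systemd.exec option:

```ini
# myapp.service -- illustrative sandboxing directives
[Unit]
Description=Sandboxed example service

[Service]
ExecStart=/usr/local/bin/myapp
# Run as a transient, unprivileged user allocated at start
DynamicUser=yes
# Mount /usr, /boot, /etc read-only for this service
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
NoNewPrivileges=yes
# Only IPv4/IPv6 sockets, no raw or netlink
RestrictAddressFamilies=AF_INET AF_INET6
# Allow only the syscalls typical services need
SystemCallFilter=@system-service

[Install]
WantedBy=multi-user.target
```

`systemd-analyze security myapp.service` then reports a per-directive exposure score, which is handy for seeing how much of the sandbox you actually engaged.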
The Linux kernel is riddled with local privilege escalation vulnerabilities. This approach works for trusted software that you just want to contain, but it won't work for malicious software.
Riddled? There are issues from time to time, but it's not like you can grab the latest, patched Ubuntu LTS and escalate from an unprivileged seccomp sandbox that doesn't include crazy device files.
Any sandbox technology works fine until it doesn't. It's not that you could trivially escape the Java sandbox, but Java applets were removed from browsers due to issues being found regularly. In the end, the browser sandbox is one of the few that billions of people use to run arbitrary code every day, without even understanding that. The only comparable technology is QEMU. I don't think there are many hosters who will hand out a user account on a shared server and let you go wild there.
> Java applets were removed from the browsers due to issues being found regularly
Java applets were killed off by MS's attempt at "embrace, extend, extinguish" — bundling an incompatible version of Java with IE — and Sun's legal response to this.
The Linux API surface is massive. And the fact that it's written in C leaves lots of room for vulnerabilities. I don't think you need to reach for a VM, but without a slimmer kernel interface it's difficult to trust the kernel to actually uphold its required duties in the face of adversaries. This is why folks push heavily for microkernels. Chrome has to work incredibly hard to provide reliable sandboxing as a result.
> user permissions/groups never come into the sandboxing discussions
Sometimes *nix user accounts for AI agent sandboxing does come up in discussions. At [0], HN user netcoyote linked to his sandvault tool [1], which "sandboxes AI agents in a MacOS limited user account".
Actually seems like a great idea IMO, to be lightweight, generic, and robust-enough.
It shouldn’t come up because it’s not sufficient. How would systemd prevent local JavaScript code from sending DNS, http, webrtc network requests when it’s opened in the users browser?
True, and they do indeed offer an additional layer of protection (but with some nontrivial costs). All (non-business killing) avenues should be used in pursuit of defense in depth when it comes to sandboxing. You could even throw a flatpak or firejail in, but that starts to degrade performance in noticeable ways (though I've found it's nice to strive for this in your CI).
Make sure you install it via FDroid. Also grab Termux:API to be able to write little apps with bash scripts. Here's one I did which gives a notification based interface to Pandora: https://github.com/ijustlovemath/pbr
"just use postgres from your distro" is *wildly* underselling the amount of work that it takes to go from apt install postgres to having a production ready setup (backups, replica, pooling, etc). Granted, if it's a tiny database just pg-dumping might be enough, but for many that isn't going to be enough.
If you're a 'startup', you'll never need any of that work until you make it big. 99% of startups do not make it even medium size.
If you're a small business, you almost never need replicas or pooling. Postgres is insanely capable on modern hardware, and is probably the fastest part of your application if your application is written in a slower dynamic language like Python.
I once worked with a company that scaled up to 30M revenue annually, and never once needed more than a single dedicated server for postgres.
I don't think any of these would take more than a week to set up. Assuming you create a nice runbook with every step, it would not be horrible to maintain either. Barman for backups, and unless you need multi-master you can use the built-in publication and subscription mechanism. Things can get complicated really fast with scale, but most of the time you won't have enough traffic to need anything complicated.
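For the built-in publication/subscription route, the moving parts really are small. A sketch with made-up names (the publisher also needs `wal_level = logical` in postgresql.conf, and the subscriber must already have the schema):

```sql
-- On the primary (publisher):
CREATE PUBLICATION app_pub FOR ALL TABLES;

-- On the replica (subscriber):
CREATE SUBSCRIPTION app_sub
    CONNECTION 'host=primary.internal dbname=app user=replicator'
    PUBLICATION app_pub;
```

Note this is logical (per-table) replication, which is a different beast from streaming physical replication — fine for read replicas of application data, not a byte-for-byte standby.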
The one problem with using your distro's Postgres is that your upgrade routine will be dictated by a 3rd party.
And Postgres upgrades are not transparent. So you'll have a one-to-two-hour task every 6 to 18 months, with only a small amount of control over when it happens. That's OK for a lot of people, and completely unthinkable for some others.
Why would your distro dictate the upgrade routine? Unless the distro stops supporting an older version of Postgres, you can continue using it. Most companies I know of wouldn't dare do an upgrade of an existing production database for at least 5 years, and when it does happen... downtime is acceptable.
And if you want supabase-like functionality, I'm a huge fan of PostgREST (which is actually how supabase works/worked under the hood). Make a view for your application and boom, you have a GET-only REST API. Add a plpgsql function, and now you can POST. It uses JWT for auth, but usually I have the application on the same VLAN as the DB, so it's not as rife for abuse.
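To give a feel for how little SQL that takes, a hypothetical sketch (schema and table names made up; PostgREST exposes views as GET endpoints and functions under POST /rpc/&lt;name&gt;):

```sql
-- A view becomes a read-only resource: GET /active_users
CREATE VIEW api.active_users AS
    SELECT id, name FROM internal.users WHERE active;

-- A plpgsql function becomes a write endpoint: POST /rpc/add_user
CREATE FUNCTION api.add_user(p_name text) RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO internal.users (name, active) VALUES (p_name, true);
END;
$$;
```

Pointing PostgREST at the `api` schema then serves both without any application code in between.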
Is this the same NIOS that runs on FPGA? We wrote some code for it during digital design in university, and even an a.out was terribly slow, can't imagine running a full kernel. Though that could have been the fault of the hardware or IP we were using.
"This is the io_uring library, liburing. liburing provides helpers to setup and teardown io_uring instances, and also a simplified interface for applications that don't need (or want) to deal with the full kernel side implementation."