Discouraging higher education, IMO, isn't a bad thing. Academia is too much of a walled garden, and one that is too easy to enter.
This has resulted in higher ed becoming a de facto requirement for many professions that could have been open to professionally trained or experienced people. Employers need to draw their baseline requirements somewhere, and if expensive credentials are too ubiquitous, it's understandable that they'd rather select from those who acquired them than from those who didn't (or couldn't afford to).
And it's frankly disgusting how many doors remain closed to you unless you have access to an .edu email address. If more people with academic interests didn't acquire one, it might open the door for the many who discover their academic interests later in life but can no longer find a way into that garden.
IMO it's exactly the right layer, just like for ECC memory.
There's a lot of potential for errors when the storage controller processes the data and turns it into analog magic for transmission.
In practice this is a solved problem, but only until someone makes a mistake; then there's a lot of trouble debugging it, between the manufacturer flatly denying their mistake and everyone else getting hung up on the usual suspects.
Doing all the ECC work right on the CPU gives you the full protection against bitrot, plus resilience against any transmission errors, for free.
And if all goes just right, we might even get better instruction support for ECC out of it. That'd be a nice bonus.
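To make the end-to-end point concrete, here's a minimal sketch (Python, with a hypothetical checksum-appended block layout) of sealing data on the CPU before it leaves for storage and verifying it after the round trip; any corruption in the controller, bus, or NAND surfaces as a mismatch:

    import zlib

    def seal(payload: bytes) -> bytes:
        # Checksum computed on the CPU, before any controller touches the data.
        return payload + zlib.crc32(payload).to_bytes(4, "little")

    def unseal(block: bytes) -> bytes:
        # Verify after the full round trip through controller, bus, and NAND.
        payload, stored = block[:-4], int.from_bytes(block[-4:], "little")
        if zlib.crc32(payload) != stored:
            raise IOError("end-to-end check failed: bitrot or transmission error")
        return payload

(A real implementation would use proper ECC to correct, not just detect, errors, but the placement of the check is the point.)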
> There's a lot of potential for errors when the storage controller processes and turns the data into analog magic to transmit it.
That's a physical layer, and as such it should obviously have end-to-end ECC appropriate to the task. But the error distribution shape is probably very different from that of bytes in NAND at rest, which in turn is different from that of DRAM and PCI.
For the same reason, IP does not do error correction, but rather relies on lower layers to present error-free datagram semantics to it: Ethernet, Wi-Fi, and (managed-spectrum) 5G all have dramatically different properties that higher layers have no business worrying about. And sticking with that example, once it becomes TCP's job to handle packet loss due to transmission errors (instead of just congestion), things go south pretty quickly.
> And sticking with that example, once it becomes TCP's job to handle packet loss due to transmission errors (instead of just congestion), things go south pretty quickly.
Outside of wireless links (where some degree of FEC is necessary regardless), this is mostly because TCP's checksum is so weak. QUIC, for example, handles this much better, since the packet's authenticated encryption doubles as a robust error-detecting code. And unlike TLS over TCP, the connection is resilient to these failures: a TCP segment that is corrupted but still passes the TCP checksum will kill the TLS connection on top of it instead of triggering a retransmission.
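A toy demonstration of that point with the pyca/cryptography library (not QUIC itself; the fixed nonce is a simplification, QUIC derives it from the packet number): flip one bit in an AEAD-protected packet and the 16-byte Poly1305 tag catches it, where TCP's 16-bit checksum easily might not:

    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()
    aead = ChaCha20Poly1305(key)
    nonce = b"\x00" * 12  # simplified; never reuse a nonce per key in practice

    packet = aead.encrypt(nonce, b"application data", None)

    corrupted = bytearray(packet)
    corrupted[5] ^= 0x01  # a single bit flipped "in transit"
    try:
        aead.decrypt(nonce, bytes(corrupted), None)
    except InvalidTag:
        print("corruption detected; the transport can retransmit just this packet")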
Ah, I meant go south in terms of performance, not correctness. Most TCP congestion control algorithms interpret loss exclusively as a congestion signal, since that's what most lower layers have historically presented to it.
Other than that, I didn't realize that TLS has no way of just retransmitting broken data without breaking the entire connection (and a potentially expensive request or response with it)! Makes sense at that layer, but I never thought about it in detail. Good to know, thank you.
ECC memory modules don’t do their own very complicated remapping from linear addresses to physical blocks like SSDs do. ECC memory is also oriented toward fixing transient errors, not persistently bad physical blocks.
You can still do this for boot code if the error isn't severe enough to make the whole boot fail. The "fixing it by plugging it in somewhere else" step could then be simple enough to be fully automated.
ZFS has "copies=2", but iirc there are no filesystems with support for single disk erasure codes, which is a huge shame because these can be several orders of magnitude more robust compared to a simple copy for the same space.
The only reasons nuclear is more expensive than any alternative are absurd regulations, reporting duties, the practice of financing these projects with borrowed money at high interest, and the fact that many of the companies running these projects are career parking spots and accelerators for the social circles around politicians and the bureaucratic aristocracy.
Complexity-wise they're about halfway between gas and coal.
The plant and equipment required to maintain a stable nuclear reaction and extract its heat is far more complex than that required to control a coal or natural gas firebox.
This is reflected in the staffing: 1GW of nuclear generation in the US requires about 700 FTEs to operate on average. The average for coal generation is about a third of that number, and for a combined-cycle gas plant it's about 60 FTEs.
And nuclear fission produces low-grade heat (around 320°C) compared to coal (around 550°C) or natural gas (over 1300°C). Nuclear plants are thus less thermally efficient and require huge cooling towers and much larger turbines to extract the thermal energy, which are, of course, more expensive and complex to build and maintain.
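Back-of-the-envelope Carnot ceilings from those temperatures (assuming roughly 30°C cooling water) show the gap; real plants sit well below these limits, but the ordering, and the relative cooling burden, follows directly:

    # Carnot limit: eta = 1 - T_cold / T_hot, temperatures in kelvin.
    T_COLD = 30 + 273.15  # assumed cooling-water temperature

    for name, t_hot_c in (("nuclear (PWR)", 320), ("coal", 550), ("gas", 1300)):
        eta = 1 - T_COLD / (t_hot_c + 273.15)
        print(f"{name}: Carnot limit ~{eta:.0%}")
    # nuclear ~49%, coal ~63%, gas ~81%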
Kind of my fault; I was specifically thinking only about the power plant + fuel part.
Of course nuclear is much more complex as a whole, because it comes with at least two, sometimes three, different business segments attached by default: production and sale of rare isotopes; on-site laboratories and research; and recycling of spent fuel.
It's hard to beat gas. The small double-digit-MW plant in my town literally has only one on-site full-time employee. My guess is that the only reason the FTE figure hits even 60 (I didn't check) is that there are so many small installations.
Coal involves a lot of on-site fuel processing just for its own demand, and the mostly very sensible environmental regulations add a lot of complexity to processing the flue gases, which adds A LOT of moving parts.
Nuclear can be built simply enough that people are literally thinking about dropping it down a mile-deep hole barely the width of a US-standard human. On the "hands off" scale it can't beat gas (barely anything but solar, geothermal, and nuclear thermal electric can), but it could beat coal and hydro, and possibly even wind, via scale. Just how often should one have to send a report to some oversight body on the number of functional overhead lights, or on whether a change in microclimate displaced some rare insect species, before one can say: "You didn't read the last 20, you're not getting another one."
> Of course nuclear is much more complex as a whole, because it comes with at least two, sometimes three different business sections attached by default: Production and sale of rare isotopes, on-site laboratories and research and recycling of spent fuel.
That misses my point. Managing fuel and waste is more complex for nuclear. Producing heat with a nuclear reactor is more complex than producing it with coal or gas. And extracting useful energy from the heat is also more complex (given the low-grade heat that reactors provide).
At every step of the way you have more complexity in engineering and operations.
These engineering realities are independent of the regulatory environment or other activities occurring around the plant.
The only reason? Is solar constantly getting cheaper not also part of the reason? Is there any price solar could decline to at which you would begin to credit solar's low price as part of the reason?
The rules were updated on Oct 6 to allow media outlets to report using any information, even if classified and unapproved for release, as long as they didn't solicit it and weren't given it on the premise that it wouldn't be released.
So if they were to be approached by a whistleblower or happened to hear the right conversation or find the right documents, it'd be fair game.
I experimented with cuckoo tables a lot, but sadly never managed to beat quadratic probing. It's honestly quite depressing just how hard it is to beat "having your data right next to each other".
The one thing cuckoo tables do much better than anything else I've tried is load factor. Insertions get slow well above 90%, but as long as your buckets are large enough or you have enough inner tables, it'll do fast lookups even at a perfect 100%.
But you'll have a hard time beating getting all the data you'll need for 99% of your lookups within a single cache line.
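For reference, a minimal two-table cuckoo sketch (illustrative only; assumes distinct keys, no resizing): lookups probe exactly two slots, and inserts evict and relocate on collision. The catch is that those two probes are two independent cache misses, which is exactly where the "everything within one cache line" quadratic-probing layout wins:

    class CuckooTable:
        def __init__(self, size=1024, max_kicks=64):
            self.size = size
            self.max_kicks = max_kicks
            self.tables = [[None] * size, [None] * size]

        def _slot(self, which, key):
            # Two independent hash functions, one per inner table.
            return hash((which, key)) % self.size

        def get(self, key):
            for which in (0, 1):  # at most two probes, ever
                entry = self.tables[which][self._slot(which, key)]
                if entry is not None and entry[0] == key:
                    return entry[1]
            raise KeyError(key)

        def put(self, key, value):
            entry, which = (key, value), 0
            for _ in range(self.max_kicks):
                slot = self._slot(which, entry[0])
                self.tables[which][slot], entry = entry, self.tables[which][slot]
                if entry is None:
                    return
                which ^= 1  # the evicted entry retries in the other table
            raise RuntimeError("eviction cycle: grow or rehash")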
Why would the malware industry benefit from digital message privacy?
If you're the victim, just hand over the relevant chats yourself. Otherwise, just follow the money. And if the attackers are sitting in a country whose banks you can't get to cooperate, intercepting chat messages from within that country won't do you any good either.
Also, if someone has malicious intent and is part of a criminal network, the people within that network would hardly feel burdened by all digital messages on all popular apps being listened in on by the government. These people will just use their own private applications. Making one is like 30 minutes of work, or starts at $50 on Fiverr.
"Follow the money"? Yes, let's decide that no bank may have anything to do with crypto from next year on, and that we won't do business with other banks that accept crypto. That would stop fraud much more effectively than Chat Control.
For the vast majority of cryptocurrencies, tracing transactions is trivial. And even currencies like XMR are hardly as anonymous as people think.
The strict regulations around technically anonymous cryptocurrencies require you to actively make trackable arrangements with your financial service providers. VERY few people will ever do this, so if anything suspicious were to occur, all you've achieved is putting yourself on the suspect list preemptively.
Sure, if you want to read the messages, but the whole point is that that's rarely necessary and the price isn't worth the minimal gain.
Of the serious criminals, the only ones you'll catch are those with little technical knowledge (everyone else will just use their own applications), and the Venn diagram of those with little tech knowledge and those whose digital privacy practices could deceive law enforcement resembles AA cups against a pane of glass.
Regarding EncroChat: it is no surprise that an (unintentional?) watering hole gathered up a bunch of the tech-illiterate. The fallacy is believing those people wouldn't have been caught if they hadn't been allowed to flock to a single platform for a while.
Would some people not have been caught until much later, or even not at all? Sure, but if LE did its job (instead of ignoring, or even covering up, well-known problem areas and organizations for years to decades), only low-priority ones.
Is that little gain worth creating a tool that allows Iran or similar countries to check every family's messages if they suspect some family member might be gay?
Hard nope.
> Or just downvoting me.
Don't worry, I rarely do that and that's not just because I can't...
With very few exceptions, politicians know perfectly well what they are doing.
Each of them has a large budget to hire several staffers who present every issue to them in the way they understand best; each government has a huge apparatus of departments with domain experts analyzing every situation from every reasonable perspective; and if all of that fails, barely any university or institute would fail to respond to a request for input from an elected official.
Any and all ignorance of any significant politician is by choice. You can't push the oppression a decision causes to the maximum you'll get away with at that point in time without first understanding the issue.
No offense, but an org-mode user's opinion about LaTeX is about equivalent to a masochist's opinion on letting a child play with Lego in the living room.
There's no point in being able to buy an outrageously fancy toilet with remittances if there's no sewer to hook it up to.