Hacker News | tpurves's comments

Ships of that era and earlier had castles on both ends, fore and aft. It's just the forward one that was retained as a sailing term, even after foredecks no longer looked like castles. The aft castle became a quarterdeck, a poop deck, a cockpit or a bridge, etc.

Meanwhile, a built-up and elevated stern 'castle' is an advantageous place to put the steering and command position: close to the rudder, with visibility over the whole ship, its rig, and where the ship is going, while maximizing mid-ship area for cargo. If you have to pick one end or the other, the stern is the more comfortable end of the ship, being the most sheltered from wave action and weather. Being elevated and fortified also helps as a fighting/defensive position, but that is less important for modern cargo ships. 'Anticipation' isn't quite the right word, as shipbuilders have always worked within the same basic design considerations and trade-offs; the sea itself continues to enforce the same fundamental constraints.


[this is a reply to fourseventy] Looking up the violent crime rates for migrants in places like MN, they're effectively zero. As a rule, migrants and immigrants don't commit crimes at anything close to the rate of native-born US citizens.

Meanwhile in Minneapolis, the overwhelming majority of violent crimes (including aggravated assaults, theft, murders and sexual assaults) are being committed by ICE agents.


Your comment above should make clearer that the large-scale healthcare fraud you mentioned was not specific to Minnesota. It was nationwide, and involved mass identity theft and large-scale corporate white-collar crime.

Dell is cooked this year for reasons entirely outside their control. DRAM and storage/drive shortages are sending the costs of those components to the moon, and Dell's inventory-light supply chain and narrow margins put them in a perfect storm of trouble.

I can't wait for all the data center fire-sales when the whole "AI" boom goes bust. Ebay is going to be flooded with tech.

> I can't wait for all the data center fire-sales when the whole "AI" boom goes bust. Ebay is going to be flooded with tech.

I think a lot of the hardware in these "AI" servers will instead get re-purposed for more "ordinary" cloud applications. So I don't think your scenario will happen.


Yep, hyperscalers go on and on about "fungible" datacenter capacity in their earnings calls as a hedge against a sudden decrease in demand. I could see a scenario where there would be an abundance of GPU capacity, but I'm sure we'd find uses for that too. For instance, there are classic data-retrieval workloads that can be accelerated using GPUs.

Anything but admitting that AI king is naked, here on HN...

What? No, this is a pretty relevant comment about a situation directly caused by AI.

Consumer PCs and hardware are going to be expensive in 2026 and AI is primarily to blame. You can find examples of CEOs talking about buying up hardware for AI without having a datacenter to run it in. This run on hardware will ultimately drive hardware prices up everywhere.

The knock on effect is that hardware manufacturers are likely going to spend less money doing R&D for consumer level hardware. Why make a CPU for a laptop when you can spend the same research dollars making a 700 core beast for AI workloads in a datacenter? And you can get a nice premium for that product because every AI company is fighting to get any hardware right now.


> Why make a CPU for a laptop when you can spend the same research dollars

You might be right, but I suspect not. Even if hardware companies are willing to do without laptop sales, data centers need the power efficiency as well.

Facebook has (well had - this was ~10 years ago when I heard it) a team of engineers making their core code faster because in some places a 0.1% speed improvement across all their servers results in saving hundreds of thousands of dollars per month (sources won't give real numbers but reading between the lines this seems about right) on the power bill. Hardware that can do more with less power thus pays for itself very fast in the data center.
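As a back-of-envelope sketch of why a tiny fleet-wide speedup is worth real money: a 0.1% speedup means roughly 0.1% fewer servers needed for the same work, which saves both amortized hardware cost and power. All the numbers below (fleet size, server cost, power draw, electricity rate) are illustrative assumptions, not Facebook's real figures.

```python
# Value of a 0.1% fleet-wide speedup. Illustrative assumptions only.
fleet_size = 1_500_000        # assumed number of servers
server_cost = 5_000           # assumed USD per server
amortization_months = 36      # assumed 3-year depreciation
watts_per_server = 500        # assumed average draw incl. cooling overhead
price_per_kwh = 0.07          # assumed industrial electricity rate, USD

speedup = 0.001               # 0.1% more work per server -> 0.1% fewer servers
servers_avoided = fleet_size * speedup
hardware_saving = servers_avoided * server_cost / amortization_months
power_saving = (servers_avoided * watts_per_server / 1000) * 24 * 30 * price_per_kwh
print(f"hardware: ~${hardware_saving:,.0f}/mo, power: ~${power_saving:,.0f}/mo")
```

Under these assumptions the avoided-hardware term alone lands in the hundreds of thousands of dollars per month, consistent with the "reading between the lines" estimate above.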

Also, cooling chips internally is often a limit on speed, so if you can make your chip just a little more efficient, it can do more. Many CPUs will disable parts of the CPU not in use just to save that heat; if you can use more of the CPU, that translates to more work done, which in turn makes you better than the competition.

Of course the work must be done, so data centers will sometimes have to settle for whatever they can get. Still they are always looking for faster chips that use less power because that will show up on the bottom line very fast.


See also, Crucial exiting the marketplace. That one hit me out of left field, since they've been my go-to for RAM for decades. Though I also see that as a little bit of what has been the story of American businesses: "It's too much trouble to make consumer products. Let's just make components or sell raw materials, or be middlemen instead. No one will notice."

Dell is doing very well on Server sales if I recall correctly. Should offset any PC sales slump.

So it was RAM a couple months ago and now storage/drives are going to the moon also?

It was RAM a couple months ago, and it continues to be RAM. Major RAM manufacturers like SK Hynix are dismantling NAND production to increase RAM manufacturing, which is leading to sharp price increases for solid-state storage.

So Zen 6/7 will have a core design and a CCD design. But like past gens, these will be packaged into different products with different sockets and packages (everything from monolithic APUs to sprawling multi-chiplet server CPUs).

So saying that Zen 6/7 supports AM5 on desktop doesn't necessarily exclude the Zen 6/7 product family from supporting other new/interesting sockets on desktop (or mobile) as well. There may be products for both AM6 and AM5 from the same Zen family.

Medusa Halo and the Zen 7-based 'Grimlock Halo' version might be the interesting ones to watch (if you like efficient, Apple-style big APUs with all the memory bandwidth).


Undoubtedly each new model from OpenAI has numerous training and orchestration improvements, etc.

But how much of each product they release is also just a factor of how much they are willing to spend on inference per query in order to stay competitive?

I always wonder how much is technical change vs turning a knob up and down on hardware and power consumption.

GPT-5.0 for example seemed like a lot of changes made more for OpenAI's internal benefit (terser responses, a dynamic 'auto' mode to scale down thinking when not required, etc.)

Wondering if GPT-5.2 is also a case of them, in 'code red' mode, just turning what they already have up to 11 as the fastest way to respond to fiercer competition.


I always liked the definition of technology as "doing more with less". 100 oxen replaced by 1 gallon of diesel, etc.

That it costs more does suggest it's "doing more with more", at least.


Good luck reproducing and feeding diesel the way you can with oxen and related species.

Humanity won't always be able to tap into this highly compressed energy stock, which was generated through processes taking literally geological timescales to complete.

That is, technology is more about which alternative trade-offs we can leverage to organize differently with the resources at hand.

Frugality can definitely be a possible way to shape the technologies we want to deploy. But it's not all possible technologies, just a subset.

Also, better technology doesn't necessarily bring societies to moral and well-being excellence. Improving technology for efficient genocide, for example, will bring human disaster as the obvious outcome, even if it's done in the greenest, most zero-carbon way and grows more forests than the specifications promised.


Either way, you are right to point out that it's important to only try a pattern like this if your clients are highly trusted (and/or have additional compensating controls against DDoS threats). It would be beneficial if the OP made more explicit what their client/server relationships are, and also flagged the risk you mentioned, so that general audiences don't go implementing such a solution in the wrong places.


Thanks for calling this out. Here is a better comparison. Before Google was founded, the market for online search advertising was negligible. But the global market for all advertising media spend was on the order of 400B (NYT 1998). Today, Google's advertising revenue is around 260B / year or about 60% of the entire global advertising spend circa 1998.

If you think of OpenAI as a new Google, as in a new category-defining primary channel for consumers to search and discover products, well, 2% does seem pretty low.


>Today, Google's advertising revenue is around 260B / year or about 60% of the entire global advertising spend circa 1998.

Or about 30% of the global advertising spend circa 2024.

I wonder if there is an upper bound on what portion of the economy can be advertising. At some point it must become saturated. People can only consume so much marketing.


Advertising is, in many markets, like a tax or tariff: something all businesses need to pay. Think of selling consumer goods online - you need ads on social media to bring in customers. Spending 10% of COGS on ads is a no-brainer. 20% too. Maybe it could go as high as 50%, if companies don't really have an alternative and all their competitors are doing it too? They'll just pass the bill on to the consumer anyway...


But that occurred with a new form of media that people now spend more of their time on than they did before Google. It implies AI is growth in time spent. I think the trend is more likely that AI will replace other media.


I hate to be that guy, but before Google was around, it was the first wave of the commercial internet - for all of what, five years? Online search was a thing; in fact it was THE thing across many vendors, and all relied on advertising revenue. Internet revenue was still ramping up toward the dotcom era in those few years. Google's ad revenue vs. 1998 global ad spend - is that inflation adjusted? Global market development since then, internet economy expansion, even the sheer number of people alive... completely different worlds.

What might stand up from the comparison is that Google introduced a good product people wanted to use, and an approach to marketing that was innovative at the time because it was unobtrusive. The product drove the traffic. It was quite a while before Google figured it all out, though.


What you are describing has been proposed before, for example within the context of projects like Breakthrough Starshot. In that case the idea is to launch thousands of probes, each weighing only a few grams or less, and accelerate them to an appreciable fraction of the speed of light using solar sails and (powerful) earth-based lasers. The probes could reach Alpha Centauri within 20-30 years. There seems to be some debate, though, about whether cross-links between probes to enable relaying signals are ever practical from a power and mass perspective vs. a single very large receiver on earth.
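The 20-30 year figure is easy to sanity-check: Alpha Centauri is about 4.37 light-years away, and Starshot-style proposals target cruise speeds in the 0.15-0.2c range (the exact speed depends on laser power and sail mass, which are assumptions here).

```python
# Trip time to Alpha Centauri at a given fraction of light speed.
distance_ly = 4.37            # distance to Alpha Centauri in light-years
for frac_c in (0.15, 0.20):
    years = distance_ly / frac_c   # light-years / (fraction of c) = years
    print(f"{frac_c:.2f}c -> {years:.0f} years")
```

At 0.2c the trip takes about 22 years; at 0.15c about 29, bracketing the quoted range.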


Indeed. I think the main reason to send thousands of probes is increasing the odds that they will survive the trip and also be in the right position to gather usable data to transmit back.

Also, once you have built the infrastructure of hundreds or thousands of very powerful lasers to accelerate the tiny probes to incredible speeds, sending many probes instead of a few doesn't add much to the cost anyway.


The Sun as a focusing lens. "Just" 500 AU away.

Voyager could be overtaken in several years if we launched today a probe with a nuclear-reactor-powered ion thruster - all technology that exists today - which could get to 100-200 km/s in 2-3 stages (and if we stretch the technology a bit into tomorrow, we could get 10x that).
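A rough overtake estimate, assuming Voyager 1 is ~165 AU out moving at ~17 km/s and the new probe quickly reaches a 150 km/s cruise speed (the probe's acceleration phase is ignored, so this is a lower bound on the timeline):

```python
# How long a fast probe takes to catch Voyager 1. Rough assumed figures.
AU_KM = 1.496e8               # kilometers per AU
voyager_au = 165.0            # assumed current distance of Voyager 1
voyager_kms = 17.0            # Voyager 1 heliocentric speed, km/s
probe_kms = 150.0             # assumed cruise speed of the new probe

gap_km = voyager_au * AU_KM
closing_kms = probe_kms - voyager_kms
years = gap_km / closing_kms / 3.156e7   # seconds per year
print(f"~{years:.1f} years to catch up")
```

With these numbers the probe closes the gap in roughly six years, consistent with "several years."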


For anyone interested, this is approximately the wait/walk dilemma, specifically the interstellar travel subset: https://en.wikipedia.org/wiki/Wait/walk_dilemma#Interstellar...

I was listening to an old edition of the Fraser Cain weekly question/answer podcast earlier where he described this exact thing. I think he said someone had run the numbers, in the context of human-survivable travel to nearby stars, on how long we should wait, and the conclusion was that we should wait about 600 years.

Any craft for human transport to a nearby star system that we launch within the next 600 years will probably be overtaken before arrival at the target star system by ships launched after them.


I guess there's a paradox in that we'd only make the progress needed to overtake if we are still launching throughout those 600 years and iteratively improving and getting feedback along the way.

Because the alternative is everyone waiting on one big 600-year government project. Hard to imagine that going well. (And it has to be government, because no private company could raise funds with its potential payback centuries after the investors die. For that matter, I can't see a democratic government selling that to taxpayers for 150 straight election cycles either.)


We can get lots of iterative practice on interplanetary ships, so not much paradox there.

And the research doesn't need to be anywhere near continuous. It's valid to progress through bursts here and there every couple of decades.

And a lot of what we want is generic materials science.


Yes, my understanding is that the 600 year figure was arrived at assuming that there is iterative progress in propulsion technology throughout the intervening years. But at the end of the day, it is just some number that some dude on YouTube said one time (although Fraser Cain is in fact not just some dude, he's a reliable space journalist (and you can take that from me, some dude on the Internet))


From what I understand, a solar-lens telescope could only point at a single destination.

Btw 500 AU is 69 light hours.
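That conversion checks out: one AU is about 499 light-seconds, so 500 AU works out to roughly 69 light-hours.

```python
# Converting 500 AU to light-hours.
light_seconds_per_au = 499.0      # light travel time for 1 AU, in seconds
hours = 500 * light_seconds_per_au / 3600
print(f"500 AU is about {hours:.0f} light-hours")
```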


What these proposals like to forget (even if addressing everything else) is that you need to slow down once you arrive if you want to have any time at all for useful observation once you reach your destination.

What's the point of reaching alpha centauri in 30 years if you're gonna zip past everything interesting in seconds? Will the sensors we can cram on tiny probes even be able to capture useful data at all under these conditions?


Jupiter is 43 lightminutes from the Sun.

If we shoot a thousand probes at 0.1c directly at the Alpha Centauri star, they should have several hours within a Jupiter-distance range of the star to capture data. Seems like enough sensors and time to synthesize an interesting image of the system when all that data gets back to Earth.
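The "several hours" window can be sketched directly: a probe crossing a sphere of radius 5.2 AU (Jupiter's orbital distance) at 0.1c, on an assumed straight-line pass through the middle, spends about half a day inside it.

```python
# Time a 0.1c probe spends within Jupiter-orbit distance (5.2 AU) of the star.
AU_M = 1.496e11               # meters per AU
C = 2.998e8                   # speed of light, m/s
chord_m = 2 * 5.2 * AU_M      # diameter of the 5.2 AU sphere
hours = chord_m / (0.1 * C) / 3600
print(f"~{hours:.0f} hours inside Jupiter-distance range")
```

That comes out to roughly 14 hours, so if anything the parent comment's "several hours" is conservative.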


Could the probe just fire off some mass when it got there?


Any mass that it fires would have a starting velocity equal to that of the probe, and would need to be accelerated to an equal velocity in the opposite direction. It would be a smaller mass, so it would require less fuel than decelerating the whole probe; but it's still a hard problem.

Be careful with the word "just". It often makes something hard sound simple.


Not trying to oversimplify. But suppose 95% of the probe's mass was intended to be jettisoned ahead of it on arrival by an explosive charge, and would then serve as a reflector. That might give enough time for the probe to be captured by the star's gravity...?


It seems to me that building a recording device that can survive in space, that is very light, and that won't break apart after receiving the impact of an explosive charge strong enough to decelerate it from the speeds that would take it to Alpha Centauri is... maybe impossible.

We're talking about ~4.4 light-years. To reach it in 20-30 years, you need an appreciable fraction of the speed of light - say 1/10th of it. The forces to decelerate from that are pretty high.

I did a quick napkin calculation (assuming the device weighs 1 kg and sheds 0.1c): that's close to 3,000 kilonewtons if it has 10 seconds to decelerate. The thrust of an F100 jet engine is around 130 kN.

I am not an aeronautics engineer, so I could be totally wrong.
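The napkin number holds up. As a sketch (same assumptions as above: a 1 kg probe shedding 0.1c over 10 seconds, using Newton's second law F = m * dv / t):

```python
# Force required to stop a 1 kg probe from 0.1c in 10 seconds.
C = 2.998e8                   # speed of light, m/s
mass_kg = 1.0                 # assumed probe mass
delta_v = 0.1 * C             # ~3.0e7 m/s to shed
t_s = 10.0                    # assumed deceleration time
force_n = mass_kg * delta_v / t_s
print(f"~{force_n / 1e3:,.0f} kN")   # vs ~130 kN for an F100 jet engine
```

That is about 3,000 kN, matching the figure above; of course, spreading the deceleration over longer than 10 seconds lowers the force proportionally.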


You're just describing a very inefficient rocket (bad specific impulse).

A rocket works the same way (accelerating mass to provide thrust), just far more efficiently and in a more controlled fashion.


If I recall correctly, Breakthrough Starshot was not meant as a communication relay as he describes.


It wasn't intended as a communications relay, but it was intended to have 2-way communication. I went down a rabbit hole reading arXiv papers about it. Despite their tiny size, the probes could phone home with a smaller laser - according to the papers I read, spinning the photons a certain way would differentiate them from other photons, and we apparently have the equipment to detect and pick up those photons. The point of the communication would be for them to send back data and close-up images of the Alpha C system. Likewise, they could receive commands from earth by having dozens of probes effectively act as an interferometry array.


I bet you that this hasn't been proposed, though: https://www.youtube.com/watch?v=GfClJxdQ6Xs

I found that video very interesting! Especially the second half about apparent superluminal speed.


really wonderful explanation


It seems like Strix Halo is a pipe-cleaner part of sorts, and the real deal may have to be Medusa Halo. That one could be a monster. The bad news is that it sounds like it's a long way off (sometime in 2027), so who knows what the Apple M5 or M6 Max could look like by then as competition.


Did you perhaps intend to post this on https://news.ycombinator.com/item?id=45968611


I think this is on the wrong thread.

