Hacker News | CodeWriter23's comments

When I was frustrated that my code was misbehaving, my high school computer lab instructor would tell me, "It's doing exactly what you're telling it to do."

Once I mastered the finite number of operations and behaviors, I knew how to tell "it" what to do and it would work. The only thing different about vibe coding is the scale of operations and behaviors. It is doing exactly what you're telling it to do. Expectations also need to be aligned: don't think you can hand over architecture and design to the LLM; that's still your job. The gain is that the LLM will deal with the proper syntax, API calls, etc., and work as a research tool on steroids if you also (from another mentor later in life) ask good questions.


"I really hate this damn machine. I wish that they would sell it. It never does what I want it to, only what I tell it."


> customers of a depreciating SaaS product surely churn after a 1-3 years, so they wouldn't make enough of a return

You might think that. Then there's Earthlink and AOL still collecting $5 or $6/mo per mailbox as their cash cow.


They recently bought AOL too!


I was unaware, thanks for the update.


Apple is the company that just over 10 years ago made a strategic move to remove Intel from their supply chain by purchasing a semiconductor firm and licensing ARM. Managing 'painful' transitions is a core competency of theirs.


I think you’re correct that they’re good at just ripping the band-aid off, but the details seem off. AFAIK, Apple has always had a license with ARM, and a unique one at that, since they were one of the initial investors when it was spun out from Acorn. In fact, my understanding is that Apple is the one that insisted they call themselves Advanced RISC Machines Ltd. because they did not want Acorn (a competitor) in the name of a company they were investing in.


Correct, from the ARM Wikipedia entry:

> The new Apple–ARM work would eventually evolve into the ARM6, first released in early 1992. Apple used the ARM6-based ARM610 as the basis for their Apple Newton PDA.


Which acquisition are you referring to? Apple bought PA Semi in 2008 and Intrinsity in 2010.


PA; Intrinsity wasn't front of mind for me. My point is, Apple has proven they can buy their way into vertical integration. Let's look at the history.

68K -> PowerPC, practically seamless

Mac OS 9 -> BSD / OS X with excellent backward compatibility

PowerPC -> x86

x86 -> ARM

Each major transition bit off orders of magnitude more integration complexity. Looking at this continuum, the next logical vertical integration step for Apple is fabrication. The only question in my mind is whether Tim has the guts to take that risk.


Doesn't Apple have an ARM "Architectural License" arising from being one of the original founding firms behind ARM, which they helped create back in the 90s for the Apple Newton? That license allows them to design their own ARM-compatible chips. The companies they bought more recently gave them the talent to use their existing license, but they always had the right to design their own chips.


/whoosh


You could buy a house and a '69 Charger for $25K in the '60s with a tidy sum left over.


$50k in 2016 dollars.


You're correct, but for some reason heavily downvoted at the moment (Edit: no longer the case!). Relevant excerpt backing this up:

> the sugar industry paid the Harvard scientists the equivalent of $50,000 in 2016 dollars

I.e. it was something more like 6k-7k in terms of dollars at the time of payment.


Did you know the average bribe accepted by a politician is something like $5K? (This was from a few years back, so probably higher now.) So yeah, this is totally within bribe limits.

As an unrelated note, it really is depressing to think about how easy it is to buy off politicians and how much money the bribers have vs. an average person.


The average home price in the late 60s was $25k, so even if that is equivalent to $50k in 2016 dollars, $25k could still get you further than today in some specific areas.


Some clarification, as the actual numbers and the random $25k figure keep getting compared in the wrong contexts in this chain (it originally arose as a misunderstanding that the $50k was already in terms of 2016 dollars instead of the original 1960s payment https://news.ycombinator.com/user?id=CodeWriter23):

~$6,000-$7,000 is the amount the researchers were paid off with in the mid 60s. This is roughly equivalent to ~$50,000 in 2016 when using CPI-U figures.

$25,000 in the mid 60s would be equivalent to ~$193,000 by the same measure, and does not relate to $50,000 in 2016 in any way.

But your core point stands: the items in the CPI-U basket do not adjust equally, which is why it's a basket in the first place. The median housing price in 2016 was ~$300,000, so there's a fair bit of variance from ~$193,000... but not nearly as much as mixing the numbers from the different comparisons made it sound.


Ah missed that.


$25,000 in 1969 has the same buying power as approximately $220,000 to $226,000 today

In terms of 2016 dollars, from Gemini:

> In 2016, $25,000 from 1969 was worth approximately $163,490.

> Based on the Consumer Price Index (CPI), $1 in 1969 had the same purchasing power as $6.54 in 2016. This represents a total inflation increase of roughly 554% over that 47-year period
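
(Sanity check on the arithmetic: $25,000 × 6.54 = $163,500, which matches the ~$163,490 figure above to within rounding.)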


People are just downvoting you rather than discussing for some reason. It drives me bonkers when I see that happen here... :).

rendaw was pointing out that the $50k in the article & parent comment was in terms of 2016 dollars, not that the mid-60s $25k in CodeWriter23's comment converts to $50k in 2016.

I.e. the researchers would not be getting anything close to a house + Charger + spare change for just half the $50k amount. They got more like $6k-$7k at the time of payment in the mid 60s. Which is still a good chunk of change for the time... just not the amount it was made to sound like.


I doubt that the $50k was given to the researchers as personal pay. It was likely a “research grant” that was used to fund the research and/or get swallowed up as “overhead” by the university.


And you probably earned under $10k/yr.


take me back


[flagged]


You're really reducing a whole economic situation to a currency issue?


It's not just a currency issue; inflation is by definition a reduction in the purchasing power of a fixed wage, and the issue we're facing is that the purchasing power of people's wages is less. If their wages were denominated in a unit of account that wasn't continuously losing value, they wouldn't be continuously losing purchasing power.

The reason you may not know it's an issue is because inflation in our current system isn't just a loss of purchasing power, it's a transfer of purchasing power to those who first receive/spend the newly created money: the banking/financial system. So of course the system invested a lot of money, time and effort in convincing you that it's a good thing to continuously donate a fraction of your purchasing power to the finance industry every year.


I can't remember a bigger HN blackpill than this getting downvoted.


The first paragraph is doing a tricky little sleight of hand. Yeah inflation reduces the power of a fixed wage. Nobody has that kind of fixed wage. The issues with wages and prices we face are not caused by inflation, which is really easy to compensate for.

The second part is just confusing. Inflation benefits the first to "receive/spend" new money? Receiving and spending are opposites, and inflation benefits anyone that's spending whether they got that money first or fiftieth.


> inflation is by definition a reduction in the purchasing power of a fixed wage

So what? Nominal wages can go up just fine. They do that all the time.

> it's a transfer of purchasing power to those who first receive/spend the newly created money

No. That would only be true if economic actors were too stupid to anticipate expected inflation. People ain't that stupid.


The US had 0-1% inflation a year until the Federal Reserve. I blame the Fed and currency, yes. Look up the "what happened in 1970" charts; it's when we got off the gold standard.


It's a confluence of various factors. Explosive population growth, for example. The modern economy (in which fiat currency plays a pivotal role) relies on that, of course, as the lending system is a bet on future growth. If that fails, the whole thing can enter a state of catastrophic failure. But population growth came first. Fiat currency, bureaucratization, etc. were adopted as reactions to increasingly explosive populations and unchecked rationalism developing the absolutely ridiculous modern state system.

If you want demons to point a finger at, you're going to have to look further back in time than the 20th century. Then and now we're just doing a frantic tap dance to keep what we inherited from catching on fire.


Huh, what? Population increased a lot in the 19th century, and many countries did not have fiat currencies back then; and the price level mostly went down slowly as the population grew.

(Modern day 2%-ish stable inflation is mostly fine for the economy, even if it technically erodes the value of money in the long term. The classic pre-WW1 gold standard was also fine-ish. The Frankenstein gold-standard-ish system they had until the 1970s was bad. And so was the rampant inflation that followed for a while.)


I specifically mentioned that population growth precedes fiat currency. Where's your confusion? I'm explicitly telling you to broaden your perspective and look at overarching political currents across the centuries succeeding the renaissance. For instance many countries also were not so extensively bureaucratized, particularly in how they interfaced with the public, until the late 19th century and early 20th century.

Political evolution is spread over many years and is structurally anisotropic. Metallism's death was inevitable by the 18th century at best, but don't misunderstand that to mean it was going to happen immediately. It's also just a symptom. The enlightenment's political revolution is a manifold spread across centuries. Don't just look at the symptoms, you won't understand anything and it will lead you to half-baked conclusions.


No, fiat currency has allowed our money supply to track closer to our GDP, preventing currency shortages and price manipulation by foreign adversaries, giving us the most stable economy the world has ever experienced over the last 50 years. Yes, it can be abused (and some Asian countries have taken this to dangerous extremes), but it’s better than all the alternatives so far.


The gold standard didn't even last half a century before collapsing.

Gold is way too inelastic to work as a basis for currency in an industrial economy.


The Bozo Explosion is in full force at Apple now. They should bring Forstall back. Engineers need to be humiliated for making stupid decisions.


I get where you're coming from, but the word humiliation is not constructive in a professional setting. Reasonable decisions can easily look stupid without context and hindsight is always 20/20. Being responsible for your actions should be the norm, but ridicule is not the right way to get there.


The Jobs I and Jobs II eras at Apple prove it is constructive.


Why would one want to do that instead of using claude-zai -c from the start? All this is pretty new to me, kick a n00b a clue please.


Claude is smarter than this model. So spilling over to a less preferred model when you run out of quota is a thing.


"Oddball string instructions", as an assembler coder bitd, they were a welcome feature as opposed to running out of registers and/or crashing the stack with a Z-80.


The Z80 had LDIR, which was a string copy instruction. The byte at (HL) would be read from memory, then written to (DE); HL and DE would be incremented and BC decremented, and this repeated until BC became zero.

LDDR was the same but decremented HL and DE on each iteration instead.

There were versions for doing IN and OUT as well, and there was an instruction for finding a given byte value in a string, but I never used those so I don't recall the details.
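
For anyone who never wrote Z80 assembly, here's a minimal C sketch of those LDIR semantics (the flat 64K memory array and the function name are just illustrative, not any real emulator's API):

    #include <stdint.h>

    /* Rough C model of Z80 LDIR: copy bytes from (HL) to (DE), incrementing
       both pointers and decrementing BC until it reaches zero. Note the
       do/while: BC is tested after the decrement, so BC = 0 on entry copies
       a full 64K, as on the real chip. */
    static void ldir(uint8_t mem[65536], uint16_t *hl, uint16_t *de, uint16_t *bc)
    {
        do {
            mem[(*de)++] = mem[(*hl)++];   /* (DE) <- (HL), bump both pointers */
        } while (--(*bc) != 0);            /* repeat until BC wraps to zero */
    }

    /* LDDR is the same loop with HL and DE decremented instead of incremented. */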


LDIR? We used DMA for that.

I was referring to LODSB/W (x86) which is quite useful for processing arrays.


LDIR sounds great on paper but is implemented terribly, making it slower than a manually unrolled loop.

https://retrocomputing.stackexchange.com/questions/4744/how-...

The repeat is done by decrementing PC by 2 and re-fetching the whole instruction each iteration: 21 cycles per byte copied :o

To be fair, Intel made the same flawed implementation of REP MOVSB/MOVSW in the 8088/8086, reloading the whole instruction per iteration: REP MOVSW is ~14 cycles/byte on the 8088 (9+27/rep) and ~9 cycles/byte on the 8086 (9+17/rep), roughly the same cost as the non-REP versions (28 and 18). The NEC V20/V30 improved that by almost 2x, to 8 cycles/byte on the V20 or with unaligned access on the V30 (11+16/rep), and 4 cycles/byte with fully aligned access on the V30 (11+8/rep), the non-REP cost being 19 and 11 respectively. The V30 pretty much matched the Intel 80186's 4 cycles/byte (8+8/rep, 9 non-REP). The 286 was another jump, to 2 cycles/byte (5+4/rep). The 386 was the same speed, the 486 much slower for small rep counts but under a cycle for big REP MOVSD, the Pentium up to 0.31 cycles/byte, MMX 0.27 cycles/byte (http://www.pennelynn.com/Documents/CUJ/HTML/14.12/DURHAM1/DU...), and then in 2009 AVX doing block moves at full L2 cache speed, and so on.

In the 6502 corner there was nothing until the 1986 WDC W65C816's Move Memory Negative (MVN) and Move Memory Positive (MVP) at 7 cycles/byte. Slower than unrolled code, and 2x slower than unrolled code using the zero page. A similarly bad implementation (no loop buffer), re-fetching the whole instruction every iteration.

In 1987 the NEC TurboGrafx-16/PC Engine's 6502 clone by Hudson Soft, the HuC6280, added Transfer Alternate Increment (TAI), Transfer Increment Alternate (TIA), Transfer Decrement Decrement (TDD), and Transfer Increment Increment (TII) at a theoretical 6 cycles/byte (17+6/rep). I saw one post a long time ago claiming block transfer throughput of ~160KB/s on a 7.16 MHz NEC-manufactured TurboGrafx-16 (a hilarious 43 cycles/byte), so I don't know what to make of it, considering the NEC V20 inside an OG 4.77MHz IBM XT does >300KB/s.

    CPU / Instruction   Cycles per Byte
    Z80 LDIR 8-bit              21
    8088 MOVSW 8bit             ~14
    6502 LDA/STA 8bit           ~14
    8086 MOVSW                  ~9
    NEC V20 MOVBKW 8bit         ~8
    W65C816 MVN/MVP 8bit        ~7  block move
    HuC6280 T[DIAX]/TIN 8bit    ~6  block transfer instructions
    80186 MOVSW 16bit           ~4
    NEC V30 MOVSW               ~4
    80286 MOVSW                 ~2
    486 MOVSD                   <1
    Pentium MOVSD               ~0.31
    Pentium MMX MOVSD           ~0.27 http://www.pennelynn.com/Documents/CUJ/HTML/14.12/DURHAM1/DURT1.HTM


Only the Z80 refetched the entire instruction; x86 never did it this way. Each bus transfer (read or write) takes multiple clocks:

    CPU                        Cycles  per              theoretical minimum per byte for block move
    Z80 instruction fetch      4       byte
    Z80 data read/write        3       byte             6
    80(1)88, V20               4       byte             8
    80(1)86, V30               4       byte/word        4
    80286, 80386 SX            2       byte/word        1
    80386 DX                   2       byte/word/dword  0.5
LDIR (etc.) are 2 bytes long, so that's 8 extra clocks per iteration. Updating the address and count registers also had some overhead.
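
Putting rough numbers on that, using the per-transfer costs above (the internal bookkeeping figure is just inferred as whatever remains to reach the documented 21 cycles):

    instruction refetch (2 bytes x 4 cycles)      8
    data read + data write (3 + 3)                6
    pointer/counter/PC bookkeeping (internal)    ~7
    total per byte                               ~21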

The microcode loop used by the 8086/8088 also had overhead; this was improved in the following generations. Then it became somewhat neglected, since compilers / runtime libraries preferred to use sequences of vector instructions instead.

And with modern processors there are a lot of complications due to cache lines and paging, so there's always some unavoidable overhead at the start to align everything properly, even if then the transfer rate is close to optimal.


This is correct, but it should be noted that the 2-cycle transfers of the 286/386SX/386DX could normally be achieved only from cache memory (if the motherboard had cache), while DRAM accesses needed at least 1 or 2 wait states, lengthening the access cycles to 3 or 4 clock cycles.

Moreover, the cache memories used with the 286/386SX/386DX were normally write-through, which means they shortened only the read cycles, not the write cycles. Such caches were very effective at diminishing the performance impact of instruction fetching, but they brought little or no improvement to block transfers. The caches were also very small, so any sizable block transfer would flush the entire cache, and then all transfers would be done at DRAM speed.


A 0 wait state 286 was pretty standard affair for 8-10MHz and some 12MHz gray boxes. Example: https://theretroweb.com/motherboard/manual/g2-12mhz-zero-wai...

"12MHz/0 wait state with 100ns DRAM."

Another: https://theretroweb.com/chip/documentation/neat-6210302843ed...

"The processor can operate at 16MHz with 0.5-0.7 wait state memory accesses, using 100 nsec DRAMs. This is possible through the Page Interleaved memory scheme."


I seem to recall Musk saying something about OpenAI being over-valued/under-funded earlier this year. Of course he was summarily booed off the stage by the startup crowd.


He says a lot of things. Just because some of them end up being true by happenstance doesn't make him a prophet.


As he should be booed off the stage in any respectable society. Regardless of whether he is right or wrong about AI.


> I disapprove of what you say, but I will defend to the death your right to say it.

This is _actually_ what a respectable society does.


Who are you to regulate my booing? Free speech for me, not for thee?


Very simple: if booing is used to prevent another person from being heard/being able to properly articulate their ideas in public, that's a violation of _their_ freedom of speech.

Again, I might have misunderstood what booing means though (which explains the downvotes at least...)


Booing is also covered by free speech.


I might be misunderstanding what booing means then. My understanding is covering another person's voice with shouts in order to sabotage their speech. It might indeed be part of what some societies define as free speech, but I'd consider it more of a cowardly form of violence.

If with "booing" you mean "disrespect whatever good idea a person has because it also has very bad ideas", then I wonder who we will end up respecting. Even I have ideas I end up discovering bad. Should I boo myself and ignore everything else I say?

If I am missing another definition of booing then I am sorry.


That is exactly what booing is, but citizens are allowed to boo. I can boo you, you can boo me. If you are booing me then I can walk away, and likewise you can walk away from me. If I'm booing you during a public performance that is indeed rude but then I need to be thrown out by security, which is perfectly allowed and expected.

Citizens, i.e. each other, are not the problem when it comes to free speech, ever. The only entity which needs to be defended against is the one that has a monopoly on violence, which is of course the government.


> but I'd consider it more of a cowardly form of violence.

Booing being a form of violence is the hottest take I've seen this week.


> I'd consider it more of a cowardly form of violence

If you think that the act of booing is a form of violence, then what do you think about _actual_ hate speech?


Well, Elon Musk is working hard to suppress the freedom of others and openly supports authoritarian movements.

Using free speech to boo him when he is getting celebrated is defending free speech.


Musk managed to attract so much negative attention that he is going to be booed no matter what.

I happen to agree with him re OpenAI...


Elon Musk is suing OpenAI over the for-profit conversion so that clouds his judgement.


Costco. Go to a supervisor in a red vest and ask which other Costco has an item that's stocked out, and you'll see. No idea what the backend is, but the app they use is a terminal emulator that looks straight out of the late 80's.


Here's a photo for anyone curious:

https://mastodon.social/@nixCraft/111839478303640635

It's also worth noting that the original mainframe hardware has likely been virtualized at this point. I used to work for a company that was doing a lot of that around 15 years ago.


> that the original mainframe hardware has likely been virtualized at this point

The AS/400 is a minicomputer; the high end of the line overlaps the low end of the mainframe range.

When I did some consulting work out there many years ago, they had a network of the largest as400's that IBM makes, connected together in one image.

Regarding virtualization: it would have to be on IBM's POWER processors. IBM does offer cloud services running AS/400; I have no info on whether Costco is using that or not.


It's a network of high-end AS/400s; the software is custom.

They've burned multiple hundreds of millions of dollars on multiple projects trying to re-develop and move off the AS/400s, but they just pulled the plug on their most recent project a year or two ago.

The biggest issue with adoption of the new system (based on insiders I've talked to) is that the existing system is very efficient for people who know how to use it, and the newer GUI-based systems just don't match it.


Looks like an AS/400


AS400 in their case


AS400 I think

