When I was frustrated that my code was misbehaving, my high school computer lab instructor would tell me, "It's doing exactly what you're telling it to do."
Once I mastered the finite number of operations and behaviors, I knew how to tell "it" what to do and it would work. The only thing different about vibe coding is the scale of operations and behaviors. It is doing exactly what you're telling it to do. Expectations also need to be aligned: don't think you can hand over architecture and design to the LLM; that's still your job. The gain is that the LLM will deal with the proper syntax, API calls, etc., and work as a research tool on steroids if you also (per another mentor later in life) ask good questions.
Apple is the company that just over 10 years ago made a strategic move to remove Intel from their supply chain by purchasing a semiconductor firm and licensing ARM. Managing 'painful' transitions is a core competency of theirs.
I think you’re correct that they’re good at just ripping the band-aid off, but the details seem off. AFAIK, Apple has always had a license with ARM and a very unique one since they were one of the initial investors when it was spun out from Acorn. In fact, my understanding is that Apple is the one that insisted they call themselves Advanced RISC Machines Ltd. because they did not want Acorn (a competitor) in the name of a company they were investing in.
The new Apple–ARM work would eventually evolve into the ARM6, first released in early 1992. Apple used the ARM6-based ARM610 as the basis for their Apple Newton PDA.
P.A. Semi and Intrinsity weren't front of mind for me. My point is, Apple has proven they can buy their way into vertical integration; let's look at the history.
68K -> PowerPC, practically seamless
Mac OS 9 -> BSD / OS X with excellent backward compatibility
PowerPC -> x86
x86 -> ARM
Each major transition bit off orders of magnitude more integration complexity. Looking at this continuum, the next logical vertical integration step for Apple is fabrication. The only question in my mind is whether Tim has the guts to take that risk.
Doesn't Apple have an ARM "architectural license" arising from being one of the original founding firms behind ARM, which they helped create back in the 90s for the Apple Newton? That license allows them to design their own ARM-compatible chips. The companies they bought more recently gave them the talent to use their existing license, but they always had the right to design their own chips.
Did you know the average bribe accepted by a politician is something like $5K? (This was from a few years back, so probably higher now.) So yeah, this is totally within bribe limits.
As an unrelated note, it really is depressing to think about how easy it is to buy off politicians, and how much money the bribers have versus an average person.
The average home price in the late 60s was $25k, so even if that is equivalent to $50k in 2016 dollars, $25k could still get you further than today in some specific areas.
Some clarification, since the actual numbers and the unrelated $25k figure keep getting compared in the wrong contexts in this chain (it originally arose as a misunderstanding that the $50k was already in 2016 dollars rather than the original 1960s payment https://news.ycombinator.com/user?id=CodeWriter23):
~$6,000-$7,000 is the amount the researchers were paid off with in the mid 60s. This is roughly equivalent to ~$50,000 in 2016 when using CPI-U figures.
$25,000 in the mid 60s would be equivalent to ~$193,000 by the same measure, and does not relate to $50,000 in 2016 in any way.
But your core point stands: the items in the CPI-U basket do not adjust equally, which is why it's a basket in the first place. The median housing price in 2016 was ~$300,000, so ~$193,000 is a bit of variance... but not nearly as much as mixing the numbers from the different comparisons made it sound.
$25,000 in 1969 has the same buying power as approximately $220,000 to $226,000 today
In 2016 terms, from Gemini:
> In 2016, $25,000 from 1969 was worth approximately $163,490.
> Based on the Consumer Price Index (CPI), $1 in 1969 had the same purchasing power as $6.54 in 2016. This represents a total inflation increase of roughly 554% over that 47-year period
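As a quick sanity check, the conversions above can be reproduced in a few lines. The multipliers here are the ones quoted in this thread, not authoritative CPI-U data:

```python
# Sanity-check of the inflation figures quoted in this thread.
# The multipliers are taken from the comments above, not from
# official CPI-U tables.

# $1 in 1969 ~= $6.54 in 2016 (figure quoted above)
factor_1969_to_2016 = 6.54
print(round(25_000 * factor_1969_to_2016))    # 163500, i.e. the ~$163,490 above

# Multiplier implied by the mid-60s figures:
# ~$6,500 at the time ~= ~$50,000 in 2016
factor_mid60s_to_2016 = 50_000 / 6_500        # ~7.69
print(round(25_000 * factor_mid60s_to_2016))  # 192308, i.e. the ~$193,000 above
```

The two multipliers differ because they anchor on different start years (1969 vs. the mid 60s), which is exactly the mix-up the parent comment describes.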
People are just downvoting you rather than discussing for some reason. It drives me bonkers when I see that happen here... :).
rendaw was pointing out that the $50k in the article & parent comment was in 2016 dollars, not that the mid-60s $25k in CodeWriter23's comment converts to $50k in 2016.
I.e., the researchers would not be getting anything close to a house + charger + spare change for just half the $50k amount. They got more like $6k-$7k at the time of payment in the mid 60s. Which is still a good chunk of change for the time... just not the amount it was made to sound like.
I doubt that the $50k was given to the researchers as personal pay. It was likely a "research grant" that was used to fund the research and/or get swallowed up as "overhead" by the university.
It's not just a currency issue; inflation is by definition a reduction in the purchasing power of a fixed wage, and the issue we're facing is that the purchasing power of people's wages is less. If their wages were denominated in a unit of account that wasn't continuously losing value, they wouldn't be continuously losing purchasing power.
The reason you may not know it's an issue is because inflation in our current system isn't just a loss of purchasing power, it's a transfer of purchasing power to those who first receive/spend the newly created money: the banking/financial system. So of course the system invested a lot of money, time and effort in convincing you that it's a good thing to continuously donate a fraction of your purchasing power to the finance industry every year.
The first paragraph is doing a tricky little sleight of hand. Yeah inflation reduces the power of a fixed wage. Nobody has that kind of fixed wage. The issues with wages and prices we face are not caused by inflation, which is really easy to compensate for.
The second part is just confusing. Inflation benefits the first to "receive/spend" new money? Receiving and spending are opposites, and inflation benefits anyone that's spending whether they got that money first or fiftieth.
The US had 0-1% inflation a year until the Federal Reserve. I blame the Fed and fiat currency, yes. Look up the "what happened in 1970" charts; it's when we got off the gold standard.
It's a confluence of various factors. Explosive population growth, for example. The modern economy (of which fiat currency plays a pivotal role) relies on that of course, as the lending system is a bet on future growth. If that fails the whole thing can enter a state of catastrophic failure. But population growth has more precedence. Fiat currency, bureaucratization, etc. were adopted as reactions to increasingly explosive populations and unchecked rationalism developing the absolutely ridiculous modern state system.
If you want demons to point a finger at, you're going to have to look further back in time than the 20th century. Then and now we're just doing a frantic tap dance to keep what we inherited from catching on fire.
Huh, what? Population increased a lot in the 19th century, and many countries did not have fiat currencies back then; and the price level mostly went down slowly as the population grew.
(Modern-day 2%-ish stable inflation is mostly fine for the economy, even if it technically erodes the value of money in the long term. The classic pre-WW1 gold standard was also fine-ish. The Frankenstein gold-standard-ish system they had until the 1970s was bad. And so was the rampant inflation that followed for a while.)
I specifically mentioned that population growth precedes fiat currency. Where's your confusion? I'm explicitly telling you to broaden your perspective and look at overarching political currents across the centuries succeeding the renaissance. For instance many countries also were not so extensively bureaucratized, particularly in how they interfaced with the public, until the late 19th century and early 20th century.
Political evolution is spread over many years and is structurally anisotropic. Metallism's death was inevitable by the 18th century at best, but don't misunderstand that to mean it was going to happen immediately. It's also just a symptom. The enlightenment's political revolution is a manifold spread across centuries. Don't just look at the symptoms, you won't understand anything and it will lead you to half-baked conclusions.
No, fiat currency has allowed our money supply to track closer to our GDP, preventing currency shortages and price manipulation by foreign adversaries, giving us the most stable economy the world has ever experienced over the last 50 years. Yes, it can be abused (and some Asian countries have taken this to dangerous extremes), but it’s better than all the alternatives so far.
I get where you're coming from, but the word humiliation is not constructive in a professional setting. Reasonable decisions can easily look stupid without context and hindsight is always 20/20. Being responsible for your actions should be the norm, but ridicule is not the right way to get there.
"Oddball string instructions": as an assembler coder back in the day, they were a welcome feature, as opposed to running out of registers and/or crashing the stack on a Z-80.
The Z80 had LDIR, which was a string copy instruction. The byte at (HL) would be read from memory, then written to (DE); HL and DE would be incremented and BC decremented, and this repeated until BC became zero.
LDDR was the same but decremented HL and DE on each iteration instead.
There were versions for doing IN and OUT as well, and there was an instruction for finding a given byte value in a string, but I never used those so I don't recall the details.
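The LDIR/LDDR behavior described above can be sketched in Python. This is just an illustration of the register semantics, not a cycle-accurate model:

```python
# Rough model of the Z80 LDIR/LDDR semantics described above:
# copy the byte at (HL) to (DE), step both pointers, decrement BC,
# and repeat until BC reaches zero.

def ldir(mem, hl, de, bc, step=1):
    """step=+1 models LDIR, step=-1 models LDDR."""
    while True:
        mem[de] = mem[hl]
        hl += step
        de += step
        bc = (bc - 1) & 0xFFFF  # BC is a 16-bit pair; starting at 0 copies 65536 bytes
        if bc == 0:
            break
    return hl, de, bc

mem = bytearray(32)
mem[0:4] = b"Z80!"
hl, de, bc = ldir(mem, hl=0, de=8, bc=4)
print(bytes(mem[8:12]))   # b'Z80!'
print(hl, de, bc)         # 4 12 0
```

The `& 0xFFFF` wrap also captures the real quirk that a BC of zero means 65536 iterations, since the hardware only tests BC after decrementing it.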
The repeat is done by decrementing PC by 2 and re-loading the whole instruction in a loop: 21 cycles per byte copied :o
To be fair, Intel did the same failed implementation of REP MOVSB/MOVSW in the 8088/8086, reloading the whole instruction per iteration: REP MOVSW is ~14 cycles/byte on the 8088 (9+27/rep) and ~9 cycles/byte on the 8086 (9+17/rep), about the same cost as the non-REP versions (28 and 18).
The NEC V20/V30 improved this by almost 2x, to 8 cycles/byte on the V20 or with unaligned access on the V30 (11+16/rep), and 4 cycles/byte with fully aligned access on the V30 (11+8/rep), the non-REP cost being 19 and 11 respectively. The V30 pretty much matched the Intel 80186's 4 cycles/byte (8+8/rep, 9 non-REP).
The 286 was another jump, to 2 cycles/byte (5+4/rep). The 386 was the same speed, the 486 much slower for small rep counts, under a cycle for big REP MOVSD. The Pentium got down to 0.31 cycles/byte, MMX 0.27 cycles/byte (http://www.pennelynn.com/Documents/CUJ/HTML/14.12/DURHAM1/DU...), then 2009's AVX doing block moves at full L2 cache speed, and so on.
In the 6502 corner there was nothing until the 1986 WDC W65C816's Move Memory Negative (MVN) and Move Memory Positive (MVP) at 7 cycles/byte: slower than unrolled code, and 2x slower than unrolled code using the zero page. A similarly bad implementation (no loop buffer), re-fetching the whole instruction every iteration.
In 1987, the NEC TurboGrafx-16/PC Engine's 6502 clone by Hudson Soft, the HuC6280, had Transfer Alternate Increment (TAI), Transfer Increment Alternate (TIA), Transfer Decrement Decrement (TDD), and Transfer Increment Increment (TII) at a theoretical 6 cycles/byte (17+6/rep). I saw one post a long time ago claiming block transfer throughput of ~160KB/s on a 7.16 MHz NEC-manufactured TurboGrafx-16 (a hilarious 43 cycles/byte), so I don't know what to think of it, considering the NEC V20 inside an OG 4.77MHz IBM XT does >300KB/s.
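The (setup + per-rep) figures above convert to effective cycles/byte straightforwardly. A small sketch, assuming each MOVSW repetition moves 2 bytes and the setup cost amortizes over a large count:

```python
# Effective cycles per byte for a "setup + per_rep" string instruction,
# using the figures quoted above. Assumes word-sized (2-byte) moves;
# for large counts the setup cost becomes negligible.

def cycles_per_byte(setup, per_rep, bytes_per_rep, count):
    total = setup + per_rep * count
    return total / (bytes_per_rep * count)

# 8088 REP MOVSW, (9+27/rep): ~13.5 cycles/byte for big blocks (the "~14" above)
print(cycles_per_byte(9, 27, 2, 1000))   # ~13.5
# 80286, (5+4/rep): ~2 cycles/byte
print(cycles_per_byte(5, 4, 2, 1000))    # ~2.0
```

The same formula applied to MOVSB would use `bytes_per_rep=1`, which is why the byte variants come out roughly twice as expensive per byte.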
Only the Z80 refetched the entire instruction; x86 never did it this way. Each bus transfer (read or write) takes multiple clocks:
CPU cycles per bus transfer, and the theoretical minimum cycles per byte for a block move:

  CPU                    cycles/transfer  width            min cycles/byte
  Z80 instruction fetch  4                byte             -
  Z80 data read/write    3                byte             6
  80(1)88, V20           4                byte             8
  80(1)86, V30           4                byte/word        4
  80286, 80386 SX        2                byte/word        1
  80386 DX               2                byte/word/dword  0.5
LDIR (etc.) are 2 bytes long, so that's 8 extra clocks per iteration. Updating the address and count registers also had some overhead.
The microcode loop used by the 8086/8088 also had overhead, this was improved in the following generations. Then it became somewhat neglected since compilers / runtime libraries preferred to use sequences of vector instructions instead.
And with modern processors there are a lot of complications due to cache lines and paging, so there's always some unavoidable overhead at the start to align everything properly, even if then the transfer rate is close to optimal.
This is correct, but it should be noted that the 2-cycle transfers of 286/386SX/386DX could normally be achieved only from cache memory (if the MB had cache), while for DRAM accesses at least 1 or 2 wait states were needed, lengthening the access cycles to 3 or 4 clock cycles.
Moreover, the cache memories used with the 286/386SX/386DX were normally write-through, which means they shortened only the read cycles, not the write cycles. Such caches were very effective at diminishing the impact of instruction fetching on performance, but they brought little or no improvement to block transfers. The caches were also very small, so any sizable block transfer would flush the entire cache, after which all transfers would be done at DRAM speed.
"The processor can operate at 16MHz with 0.5-0.7 wait state memory accesses, using 100 nsec DRAMs. This is possible through the Page Interleaved memory scheme."
I seem to recall Musk saying something about OpenAI being over-valued/under-funded earlier this year. Of course he was summarily booed off the stage by the startup crowd.
Very simple: if booing is used to prevent another person from being heard/being able to properly articulate their ideas in public, that's a violation of _their_ freedom of speech.
Again, I might have misunderstood what booing means though (which explains the downvotes at least...)
I might be misunderstanding what booing means, then. My understanding is that it means covering another person's voice with shouts in order to sabotage their speech. It might indeed be part of what some societies would define as free speech, but I'd consider it more a cowardly form of violence.
If by "booing" you mean "disrespect whatever good ideas a person has because they also have very bad ideas," then I wonder who we will end up respecting. Even I have ideas I later discover are bad. Should I boo myself and ignore everything else I say?
If I am missing another definition of booing then I am sorry.
That is exactly what booing is, but citizens are allowed to boo. I can boo you, you can boo me. If you are booing me then I can walk away, and likewise you can walk away from me. If I'm booing you during a public performance that is indeed rude but then I need to be thrown out by security, which is perfectly allowed and expected.
Citizens, i.e. each other, are never the problem when it comes to free speech. The only entity which needs to be defended against is the one that has a monopoly on violence, which is of course the government.
Costco. Go to a supervisor in a red vest and ask which other Costco has the item that's out of stock, and you'll see. No idea what the backend is, but the app they use is a terminal emulator that looks straight out of the late 80s.
It's also worth noting that the original mainframe hardware has likely been virtualized at this point. I used to work for a company that was doing a lot of that around 15 years ago.
> that the original mainframe hardware has likely been virtualized at this point
The AS/400 is a minicomputer; the high end of this line overlaps the low end of mainframes.
When I did some consulting work out there many years ago, they had a network of the largest AS/400s that IBM makes, connected together in one image.
Regarding virtualization: it would have to be on IBM's POWER processors. IBM does offer cloud services running AS/400; I have no info on whether Costco is using that or not.
It's a network of high-end AS/400s; the software is custom.
They've burned multiple hundreds of millions of dollars on multiple projects trying to re-develop and move off the AS/400s, but they just pulled the plug on their most recent project a year or two ago.
The biggest issue with adoption of the new system (based on insiders I've talked to) is that the existing system is very efficient for people knowledgeable in how to use it, and the newer GUI-based systems just don't match it.