
Intel has been "dead" before. AMD has "beaten it" before.

AMD knocked it out of the park with x86_64, which allowed a seamless transition to 64-bit. Intel ended up having to license the x86_64 implementation from AMD.

AMD beat Intel with their K6 and similar series of chips where, just like this time around, they were able to get way more performance per tick out of the CPU. Intel was supposedly dead in the water due to their toaster-era P4 chips, which ran hot as hell and consumed way more power to get the same job done. AMD started making serious inroads into the server CPU market with early Opteron processors. Following that era came the Centrino line of mobile processors, which took a different approach to CPU architecture from the P4 and set things up for the Core 2 Duo series of processors and on into the i7s and the like.

I'm highly skeptical that Intel is any more dead now than it was then. It has a track record of going away and completely changing the whole story all over again, and they've got the financial resources to keep on doing so.



That was Intel as a powerful incumbent pushing out one major competitor in the same market. AMD made better chips for years, but couldn't displace Intel due to Intel's entrenched advantages (market share, legal leverage, etc.). Now the market is moving and Intel remains entrenched.

This is generally how dominant companies die. ExxonMobil doesn't die because someone built a better oil company. It dies when someone builds a better battery.


I believe your analogy is flawed unless there has been some massive shift in the chip industry that invalidates a large body of pre-existing knowledge about the industry.

How is the shift from yesterday's chip industry to today's as dramatic as going from oil to batteries? That seems like a much more significant leap than x86 to ARM. What am I missing?


You're too focused on the technology aspect, when that is only the linchpin. There are a number of contributing factors here, but I will summarize.

1. CPUs are diminishing in importance. They aren't the bottleneck for most applications. Whatever the new hot tech is, it's probably limited by GPU, RAM, or storage. ARM doesn't have to be better than Intel, they just have to be good enough and more ubiquitous. The top of the market will go GPU and the bottom will go ARM, and the middle will be an ever shrinking x86 market share. The few places that will need heavy CPU resources will be the same people who can apply pressure to Intel's margins.

2. Intel can't force ARM chips out of the market, because they aren't playing the same game as AMD. The licensing business model of ARM has allowed them to separate Intel from their traditional allies while also pooling the efforts of Intel's competitors.

3. The next generation will know ARM. By discontinuing its hobby chips now, Intel is handing the next generation of 'learners' over to ARM. ARM-based training/learning boards are proliferating fast. Right now "everyone knows/runs x86", but that will change.

The process of chip making will look very similar in the future, but the brand of the CPU will matter less every year. Intel's not "dead in five years", but Intel will definitely cross the point of no return in that timeframe. Shifting a big company's focus is more difficult than growing another company that already has the right focus.

Back to the analogy: batteries wouldn't invalidate oil. There are a multitude of other areas where petrochemicals are used. Batteries would shift the market enough to make it difficult for ExxonMobil to follow.


Thank you. This is the kind of explanation I was looking for. Poor phrasing on my part.


Doesn't Intel license some ARM architecture, and fab some ARM chip?


>massive shift in the chip industry

I would argue that there has been, specifically that Moore's law isn't working as it once did, so the competition is catching up. It's becoming a commodity space now that smaller process nodes are hard to reach, and the gains from them are paltry.


Another thing is that the increasing use of bytecode and managed languages makes CPUs less relevant.

For example, in the mobile space, for applications fully written in Java, .NET, Swift[1], or JavaScript, what the CPU looks like doesn't matter at all.

[1] - When using LLVM bitcode as deployment target, although it is leaky.
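Python makes a convenient stand-in for the managed-language point (illustrative only; Java and .NET behave analogously): the compiled artifact is architecture-neutral bytecode, and only the VM underneath is ported per CPU.

```python
import platform

# The same source compiles to the same architecture-neutral bytecode
# whether the interpreter runs on x86 or ARM; only the VM is ported.
src = "def add(a, b):\n    return a + b\n"
code = compile(src, "<managed>", "exec")

ns = {}
exec(code, ns)             # run the bytecode on whatever VM we have
print(ns["add"](2, 3))     # 5, on any ISA
print(platform.machine())  # the host ISA, irrelevant to the result
```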


Intel makes x86 (Exxon makes oil). Most things are moving to ARM (batteries). Intel (Exxon) now loses the market it built its entire business on as the world moves on to different products.

Exxon can make batteries and Intel can make things like ARM chips. But they aren't really because they aren't good at it.


Yeah, but what I'm saying is how is the scale of a move of x86 to ARM comparable to oil to batteries?

From the outside, looking in, it seems like the two leaps are on massively different scales which I feel is important when talking about things that will kill a large corporation.

I guess my key question is what makes ARM so much different than x86 that it invalidates Intel's existing knowledge? Batteries have a completely different product life cycle than oil. For one, batteries don't burn away and they recharge. This creates completely different business models. Is there some major difference between ARM chips and x86 chips that I am missing?


There are two things about x86 vs <pick an architecture... right now it's ARM> that are significantly different from what they were 10+ years ago:

1) Binary compatibility doesn't matter nearly as much as it used to. This was everything in the 80's, 90's and still important into the early 00's. It's why people were willing to pay top dollar for Intel CPUs for decades. First it was to run DOS apps like Lotus 1-2-3, WordPerfect and then it was to run Windows apps including Microsoft Office which were the primary thing most people had PCs for back then. Most mainstream computing users (business and personal) would be hard pressed to come up with a specific need that requires x86. Thanks to the web, Linux, Apple, mobile, etc. x86 is just another architecture that can be used, not the architecture it once was.

2) Competing products are anywhere from a fraction to an order of magnitude less expensive than Intel's offerings. Unless you absolutely need maximum performance, cheap and good enough is where the majority of the market is.

Look at what Intel has been banking on, first with their failed attempt in mobile and now with their failed attempt at IoT: they thought that because they had 1 (the thing that doesn't matter much anymore) they could disregard 2 (the thing that does). Intel sure looks like it's having a Kodak moment: the market that exists today is much smaller in terms of $/CPU or $/perf or $/watt, but Intel refuses to do what it needs to do to adapt.


Also, stupidly a lot of these Intel IoT lines weren't actually binary compatible with existing x86 code for various reasons. Some of them had a hardware bug where the LOCK prefix locked up the hardware, some were outright missing less-used instructions, and the documentation of what was actually supported was terrible.


The Galileo supported i386 instructions only, no MMX or anything later. The Joule, Edison, and Minnowboard Max are all Atom processors, and support full, modern instruction sets.


Part of the reason is probably more business related.

Intel's CEO/CFO looked at the balance sheet every quarter and decided whether they should continue to invest in ARM SoCs (a few years back, before they sold that division to Marvell).

x86 had 50%+ margins and the #1 market position.

ARM was #4, 5, or 6 in market position behind TI, Qualcomm, and Freescale, and losing $$$ every quarter.

One needed to invest huge amounts of $$$ continuously, fundamentally in IP for GSM and LTE, a mobile OS team, and an SoC team, and there was almost zero chance of catching up with the #1 and #2 players of the time, Qualcomm and TI, 8-10 years ago.


I'm not an electrical engineer or hardware expert, but there must be something, because Intel has been trying to compete with ARM for many years with no success. It doesn't seem like it's something you can just pivot to (or Intel is just really, really bad at pivoting).


I agree. And I think by dead, we mean Intel becomes irrelevant. Intel will likely still be around in 20 years' time, much like ExxonMobil will still be here even if batteries suddenly make a major leap, but they will just be irrelevant. It's like how IBM is dying: it has been dying for YEARS, but it is not dead yet.

And I just want to mention to the parent: Intel was never painted "dead" in the K6 era, because the K6 didn't actually beat Intel. It was the Athlon and Athlon 64. And even in that era Intel wasn't dead by any means. AMD had at best 30% market share, and everyone knew Intel could continue to play the pricing discount game for as long as they wanted.

Right now the PC industry is shrinking. As a matter of fact, it is shrinking faster than expected, contrary to what numbers you may have read. That is because one specific segment, PC gaming, is booming, especially in the SEA region. That is why you see "lots" of gaming laptops appear; when I would have wanted the same 10-15 years ago, they simply weren't there. And this segment has helped the numbers look not as bad as they are.

Microsoft and Apple are fully aware of how Chromebooks are taking over in education. And Microsoft knows that if this continues, there may be a generation of people who don't know Windows and, what could be even worse, don't use Office, especially Excel! Windows netbooks and notebooks are unable to compete with Chromebooks on pricing because of Intel. Unless Microsoft and AMD strike a deal that brings a cut-down Xbox chip to that price point, Microsoft is forced to go with ARM to compete with Chromebooks.

I think Intel is well positioned in the server market. Their biggest threat there isn't ARM but AMD, which lowers their margins.

Assuming we don't see a killer app on the PC that requires a huge jump in CPU performance, the next generation of AMD APUs will likely make a killing in the consumer market: AMD's Vega GPU alongside Zen. And then next year you get Zen 2 + Vega on 7nm.

Intel should have opened up their fabs. At the very least they should have worked with Apple, ensuring 300M of those SoCs don't go to Samsung or TSMC. But since they have been struggling to make this decision, TSMC and Apple are now pretty much lined up all the way to 2019, which is TSMC's 7nm+.

With the current CEO I don't have much faith in Intel. I really wish it were Patrick Gelsinger who had become CEO.


"and everyone knew Intel could continue to play the pricing discount game for as long as they wanted"

let's not forget the "don't build with AMD chips or else" game, which they got sued over, though too late to matter


Don't forget back in the 90s when IBM was also cranking out its own x86 chips.


False similarity: as the EU case showed, the reason Intel remained on top then was that they pressured everyone, from OEMs to hosting providers, against going AMD.

I doubt they have the clout to pull that one again, especially when you consider how the courts would react to them doing it again.


Also keep in mind, the Pentium M was sort of a random side project that ended up arriving at the right time. Basically, it was a return to the Pentium 3 style of doing things, designed by an Intel team in Israel to compete in mobile. I'm not convinced Intel is going to be so fortunate again.


I don't know; I could see something like the MIC product line or the Larrabee work making a similar pivot for Intel.


"AMD beat Intel with their K6..."

I cannot speak for the K6 vs the original Pentium, but the K6-2 was what I had as a kid, and even though it was cheap and cheerful it seemed handily bested by the P2. Synthetic benchmarks which took advantage of 3DNow! were roughly even, but games were pretty poor - with otherwise equivalent hardware (128MB RAM, Voodoo3) my friend's Pentium II 300MHz outperformed my K6-2 366 @ 400-ish MHz comfortably in every game we played. The K6-3 maybe edged the P3 according to the magazines I devoured at the time, but I don't think it was very popular, and it seemed like a stopgap until the Athlon came out. The Athlon genuinely bested the P3 and P4 on price, power and performance for a good while. I still have fond memories of picking up a sub-GHz AXIA-core Athlon for under 100 GBP and taking it to 1GHz and slightly beyond. That was pretty fun :-)


You are correct, the K6 did not beat Intel. It was the Athlon/Athlon64 that had better IPC than the Pentium 4.


When my Athlon died, it seemed quick and easy to get a shockingly fast Celeron; I think they were nearly 3GHz? I couldn't believe how poorly that PC ran compared to the Athlon chugging along at a lowly 2GHz. Got another Athlon after a few months, and that firmly put the GHz war to bed for me.


> my friend's Pentium II 300MHz outperformed my K6-2 366 @ 400-ish MHz comfortably in every game we played.

How old is the Intel compiler again? Both the Pentium 2 and the K6 had the MMX extension. Code compiled with the rather popular Intel compiler checked the CPU vendor ID to force programs into badly optimized code paths with extensions disabled on competitors' products. It was a nice undocumented feature until 2005, and it makes any observed performance difference suspect.
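The dispatch pattern being described can be sketched like this (an illustrative Python model, not the compiler's actual code; the vendor strings are the real CPUID values, the function names are mine):

```python
def select_code_path(vendor: str, has_mmx: bool) -> str:
    """Dispatcher in the style attributed to the old Intel compiler:
    the fast path is gated on the vendor ID, not the feature flag."""
    if vendor == "GenuineIntel" and has_mmx:
        return "mmx-optimized"
    return "generic"  # scalar fallback

def select_code_path_fair(vendor: str, has_mmx: bool) -> str:
    """What a purely feature-based dispatcher would do instead."""
    return "mmx-optimized" if has_mmx else "generic"

# A K6-2 reports MMX support, but the vendor check still sends it
# down the slow path; a feature-based check would not.
print(select_code_path("AuthenticAMD", True))       # generic
print(select_code_path_fair("AuthenticAMD", True))  # mmx-optimized
```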


Ahhhhh this is a great point!


> AMD beat Intel with their K6 and similar series of chips where, just like this time around, they were able to get way more performance per tick out of the CPU.

There was an interesting submission the other day about the performance of Ryzen vs the i7, and how its AVX2 instruction support isn't what it's cracked up to be[1]. I'm not really qualified to assess the source or claims accurately, so I'll let others read it themselves and come to their own conclusions, but it was interesting.

1: https://hashcat.net/forum/thread-6534-post-35415.html


Those benchmarks mean absolutely nothing, because they obviously knew almost nothing about the processors they were trying to use.

1. "Ryzen's AVX2 support is a bold-faced lie" - to say this only shows complete ignorance. It was publicly known for many years that Ryzen would have only 128-bit AVX units, compared to the 256-bit AVX units of Haswell and its successors.

Nevertheless, using AVX-256 is still preferable on Ryzen, to reduce the number of instructions, even if the top speed per core is half of that reached by Intel.

2. The benchmark results just show incompetence. While the top speed per core is half, the number of cores is double, so you just need to run twice as many threads for a Ryzen to match the speed of the Intel.

It is true that an i7 7700K will retain a small advantage, because of higher IPC and higher clock frequency, but the advantage for correct programs is small, not like the large advantages of those incompetent benchmarks. I have both a 3.6 GHz /4.0 GHz Ryzen and a 3.6 GHz / 4.0 GHz Skylake Xeon, so I know their behavior from direct experience.

While the 4-core Intel retains a small advantage in AVX2 computations over the 8-core Ryzen, there are a lot of other tasks, e.g. source program compilation, where Ryzen has almost double the speed, so you should choose your processor depending on what is important for you.

3. The most stupid benchmark results are for SHA-1 and SHA-256. Ryzen already implements the SHA instructions that are also implemented in Intel Apollo Lake processors (to boost the GeekBench results against ARM) and will also be implemented in the future Intel Cannonlake processors (whose 2-core version is expected to be introduced this year).

If they had benchmarked a correct program that uses the SHA instructions, Ryzen would have trounced any Kaby Lake processor.
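Availability of those SHA instructions is advertised through a CPUID feature flag (leaf 7, sub-leaf 0, EBX bit 29, per Intel's documentation). A minimal sketch of the check, with made-up register values:

```python
SHA_EXT_BIT = 1 << 29  # CPUID leaf 7 (ECX=0), EBX bit 29: SHA extensions

def supports_sha_ni(leaf7_ebx: int) -> bool:
    """True if the CPU advertises the dedicated SHA-1/SHA-256 instructions."""
    return bool(leaf7_ebx & SHA_EXT_BIT)

# Hypothetical EBX values: one CPU with the bit set, one without.
print(supports_sha_ni(1 << 29))  # True  (e.g. Ryzen, Apollo Lake)
print(supports_sha_ni(0))        # False (e.g. Kaby Lake)
```

A correct hashing program would query this bit and take the hardware-SHA path when it returns True, which is the case the benchmark missed.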


It's complicated.

Skylake/Kaby Lake have two full-fledged 256-bit vector units.

Ryzen has four partial units. There are two 128-bit adders, and two 128-bit multipliers.

Intel's best case is a constant stream of 256-bit FMA instructions. They can do two per cycle, while AMD can do one.

The more plain adds and multiplies, the better Ryzen does. The same for 128-bit vector instructions. With enough of both, it can actually do significantly more work per cycle.
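Taking the unit counts above at face value, a rough peak-throughput calculation looks like this (single precision, FMA counted as two FLOPs per lane; a back-of-the-envelope sketch, not a benchmark):

```python
LANES_128 = 128 // 32  # 4 single-precision lanes per 128-bit unit
LANES_256 = 256 // 32  # 8 lanes per 256-bit unit

# Intel: two full 256-bit units, each issuing one FMA per cycle.
intel_fma_flops = 2 * LANES_256 * 2  # FMA = mul + add -> 32 FLOPs/cycle

# Ryzen: the two 128-bit adders and two 128-bit multipliers pair up,
# giving the equivalent of two 128-bit FMAs per cycle.
ryzen_fma_flops = 2 * LANES_128 * 2  # 16 FLOPs/cycle

# Separate add and mul streams: Ryzen keeps all four 128-bit units busy,
# Intel keeps its two ports busy with one op each.
ryzen_mixed_flops = 2 * LANES_128 + 2 * LANES_128  # 16 FLOPs/cycle
intel_mixed_flops = 2 * LANES_256                  # 16 FLOPs/cycle

# On pure FMA streams Intel leads 2:1; on mixed add/mul code the FLOP
# counts even out, while Ryzen retires four 128-bit vector instructions
# per cycle against Intel's two ports.
print(intel_fma_flops, ryzen_fma_flops, intel_mixed_flops, ryzen_mixed_flops)
```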


I may be somewhat qualified to speculate... Based on my experience with both Intel's and AMD's OpenCL implementations for their CPUs, I suggest that Intel has a much better vectorizing compiler than AMD. The benchmarks they are running have different compilers. If it were simple C code compiled by GCC for each CPU, the comparison would be better. It would be interesting to see the results for AMD's OpenCL compiler on the i7 and Intel's compiler on Ryzen.


This has been known since Ryzen's release - it struggles a lot with AVX2-heavy codecs such as VP9 and x265. If hashcat was compiled to take advantage of AVX2 then this is expected.


I don't doubt Intel's AVX2 will be faster, but I suspect there is more to it than what this guy is saying. I suspect there is a big element of overspecialization for Intel CPUs, and that these programs will need to be tweaked for Ryzen's AVX implementation.


What do you think about Intel's monolithic dies vs. AMD's approach of packaging smaller dies together onto a socket-mount and linking them with "Infinity Fabric" into a single CPU? Seems to me yields would be a lot better with AMD's approach while allowing it to scale to huge core counts (I think the 32-core EPYC is four 8-core dies in a package?). Do you think Intel's integrated graphics is a big strategy tax that's holding them back?


Don't forget the whole illegally strong-arming manufacturers thing at the same time Centrino came out.

https://en.m.wikipedia.org/wiki/Advanced_Micro_Devices,_Inc.....


This time there's ARM, which was there before but wasn't anywhere near Intel in the performance stakes. ARM-based chips will probably end up in lower-end products and take money away from both AMD and Intel. Servers are where Intel makes its money, and if EPYC is looking as good as it promises, they are going to have a tough time there as well.


Intel used to make the best ARM CPUs on the market; check out XScale.


Sure, and then Intel discontinued it. Just like the i860.


used to being the operative phrase there.


of course it is. just saying that they could if they wanted.


Could they, though? If all the dozens of people who could really drive a new big ARM project are at Apple, Samsung, Qualcomm, and NVidia, they might struggle to assemble a team that could make the needed breakthroughs to differentiate it. They also don't seem to have much interest from a business standpoint, preferring instead to sell $2k+ Xeons all day... and I hope EPYC gives them a real competitor.


Intel manufactured 32-bit ARMv5 chips via a unit originally acquired from DEC in '97. It was sold to Marvell 10 years ago. It's hard to see how any of this would help Intel create competitive ARMv8 server-class chips, but you're right: given the market hold of their x86 chips, they clearly have the means to do it. However, it would probably take a significant amount of time, and they would have real competition. Sadly, I don't think they'll be investing in server-class 64-bit ARMv8 chips, and that's a mistake. CPUs are a commodity now, and the architecture wars are largely irrelevant with a heavy push towards OSS stacks. It's a race to the bottom, and ARM chips are going to be simpler for the same amount of performance, and that ultimately means cheaper.


I think, based on their past, that's a very optimistic "could".

I see no reason to believe that they could do so, organizationally, even if there was a desire to.


Nothing is ever so simple, and I agree with you that the sensationalism that pairs with schizo tech journalism, which seems to dash from one extreme to the next, is pretty tiring, with media analysts frequently glossing over important details that don't fit their current world view.

However, mobile devices are the same kind of disruption to PCs as the latter were to the UNIX workstations and servers of yore. Intel missed that transition and failed to see the threat. They tried in the early 2010s to establish an x86 Android presence, but it didn't work out. Funny, but things might have been different if XScale hadn't been sold off; who knows... mobile/handheld mostly stagnated back when it was WinCE/Palm, even though Palm invented the HPC and Microsoft practically invented the smartphone.

IoT/IoE is another major disruptor, on many fronts, and Intel doesn't seem to get it. NFV is probably going to converge on ARM64, due to the "good enough" factor, TCO, specialized I/O accelerators, cheap customized server SoCs, and cutthroat competition. Windows on ARM64 opens up VDI opportunities, as does the proliferation of "smart" Android-powered devices. Open source ended up being the Colt of the computing world (chip vendors create computers, but Linux made them equal), so in the cloud, architecture is irrelevant, only the bottom line... it's a race to the bottom and we all get to win, but Intel might not make it.


The commonality is all of those cases were competing on the x86 instruction set playing field. Intel will probably still win at that game. But now there are many areas of computing where x86 doesn't matter. (As noted, even Windows is moving to support ARM.)


Windows is moving to support ARM, but the applications are not, which was the original problem with Windows on ARM.

And in the business world backwards compatibility is everything.


With Surface RT I would have agreed wholeheartedly with this. But Microsoft is doing it right this time around by introducing x86 emulation, which gives you a fast, efficient ARM platform that has amazing battery life most of the time, and which can still open up that odd business application you need that hasn't recompiled itself for the new platform yet.

The critical thing to observe here is that, while obviously x86 applications will be slower on this platform on paper, most consumers will not notice or care. This creates a better product for casual use and arguably a better product for business use, one where the power efficiency of ARM creates a direct benefit to the consumer that Intel can't match (battery life and efficiency) for which the occasional odd performance issue in some heavy "Desktop" app is a small price to pay.


This is only an issue for applications written in C and C++ that aren't recompiled or that rely on x86-specific opcodes; the .NET ones will just run, unless they rely on native code, of course.

Hence they are doing a JIT this time around, but as we all know, Intel isn't happy about it.


The x86 emulation is completely irrelevant. Microsoft didn't want Win32 applications to be recompiled to run on Windows RT. They have now changed their mind. The only benefit of emulating x86 is for software that is no longer supported. If it's mission critical, then those businesses that rely on x86 backwards compatibility will stay with x86 hardware, because paying $600 more for a computer doesn't even register on their radar. The only reason I'd care about x86 emulation is to run old video games.


It is completely relevant. Basically, you can now replace aging Windows PCs with VDI hosted on ARM clouds, and people won't know or be able to tell the difference. You will now have x86 Win32 apps, arm64 Win32 apps, and UWP running on what's just another SKU of Win10. And since every piece of software for Windows must support 32-bit x86, there is full backwards compatibility. You won't need to wait for some mission-critical bit of software to be ported, and in the Windows world there is plenty of mission-critical abandonware, trust me. Now it gets the AOT/JIT treatment. Microsoft just made enterprise a reality for ARM64 servers by crossing the proverbial Rubicon. Suddenly, if you use Windows and Microsoft products, you can still buy into ARM data centers and clouds.


It doesn't have to be about the cost of a computer but e.g. getting field workers onto Windows tablets with decent battery life (but still being able to use the ancient in-house timeclock app)


It is not only business; they also need a way to bring over those XP users.


OSS (FOSS/FLOSS) and the current push toward boutique vertical software stacks largely make ISA wars irrelevant. Right now, if it runs Linux and does virtualization, it is good enough, and organizations will buy gear based on other characteristics. And Microsoft realized they don't want to be a purveyor of x86 goods either, probably because they don't want Windows to turn into another OpenVMS or AIX.


While true, the open source, mobile OS, and IoT waves were less relevant when those situations happened.

Nowadays it seems they will get cornered on desktop and server CPUs, unless they happen to buy ARM licenses or try to reinvent CPUs with built-in FPGAs.




