Really, they need to work on the power draw and heat of x86 so the chips can be used in mobile devices without a fan and without dying in three hours. Stationary devices seem to be chugging along with x86 comfortably, but the chips are currently impractical for anything else.
It seems they’re finally taking that seriously, though, so it’s good to see. They never really had any incentive to put much effort into making x86 more efficient for consumer devices, since their server chips have much, much higher profit margins.
Lunar Lake and AMD’s Z1 are a good start, and it’s interesting to see where this goes.
It’s amazing what a modern process node and not cranking clock speeds to high hell will do.
You forgot about ditching more of the chipset etc. in favour of integrating everything into the CPU die.
The new Intel chips have already addressed that, at least for notebook-class devices.
Realistically, there wasn’t much reason for Intel and AMD to be super power-efficient, simply because there wasn’t any competition for quite a while. It took Apple Silicon to show how powerful ARM can be and how easy the transition could be.
Apple took all the old tricks Intel was always way too cheap to use, and turned them up to 11.
Nothing magic, nothing special, just balls and the willingness to spend silicon.
You’re not going to see phones with x86. The architecture just isn’t going to scale down like that. Not if you want something faster than a Pentium III.
ISA doesn’t matter as much as most people think it does. It’s all about how you implement it.
How do you hire people who can implement it right? There are three companies that can make x86. One is failing, one gave up years ago, and the third is kicking ass but seems uninterested in this part of the market. All the people who know how to do x86 well already work for one of them. And the company nobody talks about gave up because, by 2010, it lacked the ability to make a worthwhile product.
It’s an incredibly difficult ISA to work with, and all the talent is already busy. Due to its closed nature, there is little hope of significantly growing that talent base. Not unless you want the early 2000s version of x86-64, which is patent free.
Asus Zenfones used to use Intel Atom x86 processors.
And were they any good?
My car runs Android Automotive^1 on an Intel Atom and performance is trash. I would hate to have a phone on the same platform.
^1 As in, the car runs Android directly, not Android Auto running from a phone.
They were comparable to the rest of the phones at the time. Not great, not terrible. Compared to anything in 2024, they were obviously trash, but that’s mostly because we’ve made 10 years of progress since then.
The Samsung Galaxy line is 15 years old, and it was excellent: the first Android device that I felt had decent performance. I think my first one was a Galaxy S III, which would have been 2012.
The Zenfone that’s out now uses a Snapdragon. There’s probably a good reason for that.
It actually can. The thing we learned is that the unpleasant bits of x86 scale well: we used to spend 30% of the die on uop decode, but that’s now just 1-2%, because we blow so much more silicon on registers and cache.
Also, we can play games like soft-deprecating instructions and features, so they still exist but are stupidly slow in microcode.
We used to think only RISC could run fast at low power, but our current CISC-decoded-to-RISC designs work fine; Intel just got stupidly lazy.
Apple just took all the tradeoffs Intel was too cheap to spend silicon on and turned them up to 11. We could have had them earlier, but all the ARM vendors were basically buying IP and didn’t invest in physically optimized designs. Now that TSMC is the main game in town (the fallback to GlobalFoundries was nice for price), there’s a lot more room to rely on their cell libraries.
Intel got so insanely arrogant, just like Boeing and all the other catastrophic American failures right now. We just need to correct for that, and we can be decent again.
It’s hardly just Intel. There are two other x86 licensees out there. One gave up. The other is kicking ass, but Apple didn’t go with them, either.
Meanwhile, Intel themselves kept the 80486 alive until 2007 as an embedded processor. It outlasted the Pentium III by a few months. It was never as popular as PIC or ARM or Z80 devices, but it found some kind of niche.
I’ll grant that in theory, it could be done. But why? There are millions of smartphones running fine on ARM, and they don’t have any backwards-compatibility ties to x86. Why pick an ISA that can only legally be designed by three companies? Why pick an ISA that hasn’t been as well tested on mobile device OSes? ARM will hand a license to anyone who shows up with some cash, and if you want to take the plunge into a different ISA, RISC-V is sitting right there. There doesn’t seem to be a single real benefit to x86 over what mobile device makers have now, and plenty of reasons not to.
No, it doesn’t make sense to do it.
I worked on platform enablement for armv8, bringing the whole ecosystem to 64-bit ARM. It was an Everest: so much code expected x86, with lots of hidden asm and other assumptions like the memory model.
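For a flavor of the memory-model part, here’s a minimal sketch in C (illustrative, not code from that work): a publish pattern written with a plain flag store happens to hold on x86, because TSO keeps the two stores in order, but on armv8 the flag can become visible before the payload unless you ask for release/acquire ordering explicitly.

    #include <stdatomic.h>

    int payload;           /* data being published */
    _Atomic int ready;     /* publication flag */

    void publish(void)
    {
        payload = 42;
        /* x86-era code often used a plain store here and got away
           with it; armv8 needs the release to order the two writes */
        atomic_store_explicit(&ready, 1, memory_order_release);
    }

    int consume(void)
    {
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;              /* acquire pairs with the release above */
        return payload;    /* now guaranteed to observe 42 */
    }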
But once it was done, we did it again for RISC-V in no time; all the hard work was already done, and it was basically setting defines, maybe swapping tsc for rdcycle (now rdtime).
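To make the counter bit concrete, that part of a port boils down to a per-arch shim like the one below (a rough sketch for illustration, not the actual code; the helper name is made up):

    #include <stdint.h>

    /* Read a raw cycle/time counter, the kind of helper that x86-only
       code assumed via rdtsc. Each architecture has its own instruction. */
    static inline uint64_t read_counter(void)
    {
    #if defined(__x86_64__)
        uint32_t lo, hi;
        __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    #elif defined(__aarch64__)
        uint64_t v;
        /* armv8 generic timer's virtual counter */
        __asm__ volatile("mrs %0, cntvct_el0" : "=r"(v));
        return v;
    #elif defined(__riscv) && __riscv_xlen == 64
        uint64_t v;
        /* rdcycle was the first mapping; rdtime is the portable choice
           now that user-mode rdcycle can trap */
        __asm__ volatile("rdtime %0" : "=r"(v));
        return v;
    #else
    #error "port me: no counter for this architecture"
    #endif
    }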
Architectures don’t really matter anymore, and the overhead of supporting a new one is pretty minor. RISC-V will probably win because it’s basically free, and single-thread performance isn’t as critical on client devices: a lot of the work goes to the GPU, and servers do the other heavy lifting. Qualcomm scared everybody too, and China is going its own way, which means even more RISC-V.
Basically, nothing matters except cost now. We’ll figure out how to run things on a potato; we’ve gotten good at it.