Intel Gets Put On Notice
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Apr 3, 2018, 07:49 AM
 
Rumor has it Apple’s ditching Intel for self-made ARM chips come 2020.

I’m sooooo hosed.
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Apr 3, 2018, 09:31 AM
 
Did you build a Hackintosh?

I'm not looking forward to going back to the processor wars and universal binaries and all of that crap.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Apr 3, 2018, 10:40 AM
 
This rumor pops up every now and then. Personally, I think that it is a lot less likely now that Apple has re-committed to making a Mac Pro. I can see them making laptops and even an iMac with a scaled-up ARM chip, but the Mac Pro breaks the budget.

You'd have to develop a chip with

* lots of cores. iMac is fine with 4, but a Mac Pro needs to have at least 16, and probably a lot more. This makes things like core communication and cache coherency a lot more complex.
* ECC support.
* PCIe lanes. Lots of PCIe lanes, and they need to be certified to follow a standard too, if you are to be able to put standard expansion cards in there.
* Way higher TDP

No other Mac Apple makes needs any of these, and Apple has said that the Mac Pro is a tiny part of the Mac market. If Apple could cut that small segment loose - as I think they tried to do - I could see the ARM-Mac idea being viable, but as it is? Nah.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Apr 3, 2018, 12:19 PM
 
Originally Posted by P View Post
... Apple has said that the Mac Pro is a tiny part of the Mac market.
I've refused to buy the trash can because it is un-upgradable. I've been waiting it out for a good Mac Pro. If other people are following the same plan, then the Mac Pro is a tiny slice because it is unattractive.

Apple should gauge the market size based on cheesegrater sales, not on current sales. And they should go back to 2010, as Apple held back on Mac Pro upgrades after that, again making people wait for a better choice. The 2012 spec bump came a year late, with no price adjustment for stale tech. And no GPU changes after two years.
( Last edited by reader50; Apr 3, 2018 at 01:00 PM. )
     
subego  (op)
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Apr 3, 2018, 02:48 PM
 
Originally Posted by Laminar View Post
Did you build a Hackintosh?

I'm not looking forward to going back to the processor wars and universal binaries and all of that crap.
Thought about it. Decided it was too risky for anything mission critical.
     
subego  (op)
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Apr 3, 2018, 02:54 PM
 
Originally Posted by P View Post
Apple has re-committed to making a Mac Pro.
Kinda, sorta, not really. The Cookie said one thing about it.

I’m guessing he doesn’t let someone in the supply chain do nothing for 5 years, make vague promises about two years down the line, and go “yeah, I can build a business model around that”.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Apr 3, 2018, 06:23 PM
 
Now that OS X has eGPU support I can see a roadmap including PCIe with ARM chips. I suspect Apple would quite like to add Thunderbolt support to iPad and iPhone to shift 4K movies about if nothing else.

Maybe the new look Mac Pro will have stackable render modules that allow the addition of lots of cores via extra CPUs. This would be a nice way to differentiate from Wintel server hardware.
I have plenty of more important things to do, if only I could bring myself to do them....
     
mindwaves
Registered User
Join Date: Sep 2000
Location: Irvine, CA
Status: Offline
Apr 3, 2018, 07:46 PM
 
Most definitely going to happen. In fact, it already has with the T1 and T2 chips in the MacBook Pro and iMac Pro, respectively. Come 2020, it will be more of the same -- a T-X coprocessor performing sub-functions such as sleep and security roles, and probably not providing the main oomph. The old Macs had math coprocessors, after all. If they really do ditch Intel and have their T-X chips performing the main duties, then all apps will need to be recompiled, as during the G5-to-Intel transition. It won't be as brutal as before, and it may happen gradually as developers start compiling for ARM instead.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Apr 3, 2018, 08:52 PM
 
First of all, of all journalists Mark Gurman probably has the best sources inside Apple. So if he writes that Apple is planning to do this, I'd take that very seriously. Of course, Apple may not follow through with its plans for various reasons, but I am inclined to believe the reporting. I think what will have tipped the balance is Intel's roadmap: even Intel's public roadmap is a mess, and I don't see any parts that deliver on what Apple needs. Here is an incomplete list:

(1) Stagnant performance. Apple won't say this publicly, but I am sure they will mention that their in-house SoCs not only have significant performance improvements year-over-year, but that they are actually faster than the Intel CPUs in many of Apple's mobile Macs.

(2) On the Windows side ARM-based laptops have significantly longer battery life.

(3) Connected to that: Intel has no big.LITTLE on its road map (i. e. pairing smaller, more power-efficient cores with faster, higher-power cores; a toy scheduling sketch follows after this list). That's table stakes now.

(4) There is pretty much no way to integrate custom logic (e. g. for machine learning, image processing or security). Apple's integration of the T1 and T2 with an Intel processor is ingenious, but would become unnecessary if it created its own SoCs.

(5) Intel has lost the process advantage. Intel can argue its 14 nm++ vs. other people's 10 nm processes, but it is no longer a generation ahead.

(6) In many of the mobile Intel CPUs the performance of the integrated GPU is lackluster.
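To make the big.LITTLE idea in (3) concrete, here is a toy Python sketch of how a scheduler might place work on such a design. The core counts and the 0.3 load threshold are invented purely for illustration; real schedulers are far more sophisticated.

```python
# Toy illustration of big.LITTLE scheduling (e.g. 2 big + 4 LITTLE cores):
# light tasks stay on small, power-efficient cores; demanding tasks migrate
# to big, fast cores. All numbers here are made up for illustration only.

def place_task(estimated_load):
    """Pick a cluster for a task, given a rough load estimate (0.0-1.0)."""
    # Background work (mail checks, audio playback, idle UI) stays on the
    # LITTLE cluster; bursty or sustained heavy work goes to the big cores.
    return "LITTLE" if estimated_load < 0.3 else "big"

tasks = {"mail fetch": 0.05, "music playback": 0.10,
         "web page render": 0.45, "video export": 1.00}
for name, load in tasks.items():
    print(f"{name:16s} -> {place_task(load)} cluster")
```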

Unless Intel has shown Apple dramatically different road maps than what it recently revealed to the public, Apple eventually has to make a hard choice. Apple, on the other hand, knows its internal CPU and GPU road map exactly and can compare the two. Three years from now, if we extrapolate moderate growth in CPU and GPU speeds for Apple's SoCs, they will be significantly faster than Intel's mobile CPUs. Something has got to give eventually.
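To put rough numbers on that extrapolation, here is a quick sketch; the year-over-year rates are placeholder assumptions, not anyone's actual roadmap figures.

```python
# Compound assumed year-over-year single-thread gains over three years.
# 1.00 = today's performance; both growth rates are placeholder assumptions.
apple_yoy = 1.20   # assumed ~20%/year for Apple's SoCs
intel_yoy = 1.05   # assumed ~5%/year for Intel's mobile CPUs

years = 3
apple = apple_yoy ** years   # ~1.73x
intel = intel_yoy ** years   # ~1.16x
print(f"After {years} years: Apple ~{apple:.2f}x, Intel ~{intel:.2f}x, "
      f"ratio ~{apple / intel:.2f}x")
```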

We have hashed out the arguments here before, and for me it boils down to the question whether having, say, a MacBook with 20 hours of battery life and a unified basis for its OS is a worthwhile tradeoff for having potentially slower desktop CPUs. (I still believe that one way Apple could more easily justify developing desktop processors is to think of its server farms.)

I'd be over the moon if Apple released a MacBook (Pro) with 15-20 hours of honest-to-goodness battery life, better real-life performance and faster GPU performance. One of the greatest limitations of my 13" MacBook Pro is that it only has two cores, and after getting a Mac Pro with 12, then 8 cores, I know what a dramatic improvement in usability additional cores can make. I know that Intel has announced suitable 4-core CPUs for the 13" MacBook Pro today, although I would still say that they are not as good a fit as a big.LITTLE setup.
( Last edited by OreoCookie; Apr 3, 2018 at 10:16 PM. )
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Apr 3, 2018, 08:53 PM
 
The software transition is already about to start with this rumoured platform to run iOS apps on OS X. Many apps will shift to run on both and by 2020 only the stragglers will be left behind to run under some new x86 emulation. MS Office will still be running from its OS 9 code base well into the 2030s no doubt.

I wonder what happens to VMWare and Parallels if ARM takes over the line completely.

Part of me thinks it would be slightly mad to build an all new Mac Pro just before you shift your CPU architecture to a totally different setup but being modular, maybe Apple plans to give people a choice and keep the Xeons around for virtualisation and other assorted heavy lifting?
That would give computer scientists the option of x86, ARM or GPU based clusters which would be cool. And more importantly, different.
I have plenty of more important things to do, if only I could bring myself to do them....
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Apr 3, 2018, 09:00 PM
 
I'm not understanding why Apple doesn't offer AMD CPUs on some lines. Push Intel with extra competition, like they used to switch GPU sources regularly.

For some years, AMD has lagged on performance. But the Zen architecture has been working well. There's constant Ryzen advertising on NewEgg for example.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Apr 3, 2018, 10:12 PM
 
Originally Posted by Waragainstsleep View Post
MS Office will still be running from its OS 9 code base well into the 2030s no doubt.
Actually, as far as cross-platform apps go, Office has become a model citizen: all versions of Microsoft Office are based on the same core code, which was rewritten from scratch, with a custom interface put on top. That means there will likely be no problem in switching underlying CPU architectures — Office already runs on ARM. However, if Apple moves towards new, Swift-based, unified APIs for a lot of the functionality, then they may have to rewrite the interface code. But that'd be the case even if Apple stuck to x86-64 on its Macs.
Originally Posted by Waragainstsleep View Post
Part of me thinks it would be slightly mad to build an all new Mac Pro just before you shift your CPU architecture to a totally different setup but being modular, maybe Apple plans to give people a choice and keep the Xeons around for virtualisation and other assorted heavy lifting?
That would give computer scientists the option of x86, ARM or GPU based clusters which would be cool. And more importantly, different.
That is the billion dollar question: Apple could punt here and stick to x86-64 for the foreseeable future. Or even use someone else's server grade ARM v8 SoCs as an interim solution.
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Apr 3, 2018, 10:16 PM
 
Originally Posted by reader50 View Post
I'm not understanding why Apple doesn't offer AMD CPUs on some lines. Push Intel with extra competition, like they used to switch GPU sources regularly.
If I look at my list of issues with Intel's strategy, a lot of them apply verbatim to AMD. On a per-core basis, Apple's high-performance cores are faster. And the only place where AMD has an edge as far as CPU performance is concerned is the 16+ core market, or when many PCIe lanes are needed — a very small segment. Apple could switch to AMD CPUs for its Mac Pro, but I don't see AMD's road map as being strong enough to compare against Apple's own.
I don't suffer from insanity, I enjoy every minute of it.
     
iMOTOR
Mac Elite
Join Date: Jan 2003
Location: San Diego
Status: Offline
Apr 4, 2018, 09:15 PM
 
Pretty much any advantage that Intel-x86 has just becomes more moot as transistors get closer to the atomic limit. We have basically hit the ceiling on Moore's law already, and the x86 was never the superior architecture.

I think the direction Apple is really headed in is to develop their own non-IP-licensed RISC architecture from scratch, or perhaps put billions toward an open-source architecture like RISC-V.
     
mindwaves
Registered User
Join Date: Sep 2000
Location: Irvine, CA
Status: Offline
Apr 4, 2018, 10:45 PM
 
Originally Posted by iMOTOR View Post
Pretty much any advantage that Intel-x86 has just becomes more moot as transistors get closer to the atomic limit. We have basically hit the ceiling on Moore's law already, and the x86 was never the superior architecture.
True. In terms of absolute numbers, you could buy a 2 GHz PC 10+ years ago, and today you can also buy a 2 GHz PC.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Apr 5, 2018, 03:26 AM
 
Plus the ceiling has been around 4GHz for a decade or so too.
I have plenty of more important things to do, if only I could bring myself to do them....
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Apr 7, 2018, 12:48 PM
 
Originally Posted by reader50 View Post
I've refused to buy the trash can because it is un-upgradable. I've been waiting it out for a good Mac Pro. If other people are following the same plan, then the Mac Pro is a tiny slice because it is unattractive.

Apple should gauge the market size based on cheesegrater sales, not via the current sales. And they should go back to 2010, as they held back on Mac Pro upgrades after that. Again, making people wait for a better choice. The 2012 spec bump came a year late, with no adjustment in price for stale tech. And no GPU changes after two years.
Apple has stated that Mac sales are 80-20 laptop-desktop, and all they've said about Mac Pro sales is that they're "single-digit percentage". The Mac Pro never outsold the iMac, so it can never have been more than 10% of the total sales, and likely a lot less, even when the cheesegrater sold well.

Do you remember my old sig line, about the single-socket MP being the most overpriced Mac since the IIvi? I wasn't really exaggerating. Those single-socket MPs were insanely overpriced. I don't think the Mac Pro has sold well since there were G4s in them, and that is because they were really a lot faster than the iMacs back then.

Originally Posted by subego View Post
Kinda, sorta, not really. The Cookie said one thing about it.

I’m guessing he doesn’t let someone in the supply chain do nothing for 5 years, make vague promises about two years down the line, and go “yeah, I can build a business model around that”.
Apple invited journalists to Cupertino just to talk about how they are making a new MP. It is coming, although not this year.

https://daringfireball.net/2017/04/the_mac_pro_lives

Originally Posted by OreoCookie View Post
First of all, of all journalists Mark Gurman has probably the best sources inside Apple. So if he writes that Apple is planning to do this, I'd take that very seriously. Of course, Apple may not follow through with its plans for various reasons, but I am inclined to believe the reporting. I think what will have tipped the balance is Intel's roadmap: even the Intel's public roadmap is a mess, and I don't see any parts that deliver on what Apple needs. Here is an incomplete list:
I think that Intel has responded to Apple's complaints, though. The new laptop CPUs with AMD graphics are one indication of that; hiring Raja Koduri to develop new graphics is another.

(1) Stagnant performance. Apple won't say this publicly, but I am sure they will mention that their in-house SoCs not only have significant performance improvements year-over-year, but that they are actually faster than the Intel CPUs in many of Apple's mobile Macs.
I think that a little too much hay has been made from this. Apple has an excellent CPU in the A7, but they have mostly been tweaking that ever since. The A8 was a dud. The A9 was great, but it was great because Apple got access to a new process that was much better (16nm FinFET instead of the flat 20nm and 28nm of previous chips). The A10 is... what? I haven't seen a good analysis of how fast it is. The main thing it did is the big.LITTLE thing.

(2) On the Windows side ARM-based laptops have significantly longer battery life.
That specific CPU is extremely slow compared to anything Apple ships in a Mac, though. It doesn't follow that a CPU that is fast enough will also give that fantastic battery life, and note that the iPad in that test has worse battery life.

(3) Connected to that: Intel has no big.LITTLE on its road map (i. e. smaller, more power efficient cores are paired with faster, higher-power cores). That's table stakes now.
Well, Intel doesn't have a roadmap right now... Still not sure that big.LITTLE is a good idea. You'll note that Apple stayed away from that idea for the longest time

(4) There is pretty much no way to integrate custom logic (e. g. for machine learning, image processing or security). Apple's integration of the T1 and T2 with an Intel processor is ingenious, but would become unnecessary if it created its own SoCs.
Integrate to what extent? On die? I wonder how well it would work to put more stuff on the die. Apple's AX series chips are usually around 150 mm². That is fairly big in a situation where yields are trending down, and what things do you want that need to be closer than a PCIe lane away? If it is enough to put it on package, Intel has EMIB, as we've discussed before.

(5) Intel has lost the process advantage. Intel can argue its 14 nm++ vs. other people's 10 nm processes, but it is no longer a generation ahead.
Agree. The 10nm launch has been a disaster for Intel. They claimed in a financial statement that they're now shipping 10nm chips for revenue, but they haven't officially launched, and the 8th generation is filling up the lineup. The only real spot left for Cannonlake is the 5W Y-series chips, and that is a logical spot to launch them, but they are insanely late.

(6) In many of the mobile Intel CPUs the performance of the integrated GPU is lackluster.
Sure, but there are at least good performance options available now. The Iris Plus stuff is not too shabby, there is a Vega M option now, and Intel is clearly focusing on this for the future.

Unless Intel has shown Apple dramatically different road maps than what it recently revealed to the public, Apple eventually has to make a hard choice. On the other hand, Apple knows its internal CPU and GPU road map exactly and can then compare. Imagine three years from now if we extrapolate moderate growth in CPU and GPU speeds for Apple's SoCs, they will eventually be significantly faster than Intel's mobile CPUs. Something has got to give eventually.
What inherent advantage does Apple have over Intel to make their chips significantly faster a few years down the line? The ARM ISA is easier to decode, yes, but that only leads to a slightly lower base power consumption. Will Apple have access to a better process than Intel? Unlikely - even if Intel has lost the advantage, we're a long way from TSMC et al being better at anything. Intel has flubbed the launch of the 10nm process, and 14nm was late, but they executed pretty much like clockwork for four node switches before that. Meanwhile, the entire foundry business messed up 20nm. The only one to even ship a 20nm process was TSMC, and they cancelled all but the mobile SoC variant, so some chips (notably nVidia's GPUs) had to be redesigned on 28nm. On 14nm, GF failed so badly that they had to license Samsung's process. GF failed at 32nm as well. Everyone fails every now and then, and the fact that Intel failed on 10nm doesn't mean that they will fail on 7nm.

We have hashed out the arguments here before, and for me it boils down to the question whether having, say, a MacBook with 20 hours of battery life and a unified basis for its OS is a worthwhile tradeoff for having potentially slower desktop CPUs. (I still believe that one way Apple could more easily justify developing desktop processors is to think of its server farms.)
Yes, we did this already, let's not go back there.

I'd be over the moon if Apple released a MacBook (Pro) with 15-20 hours of honest-to-goodness battery life, better real-life performance and faster GPU performance. One of the greatest limitations of my 13" MacBook Pro is that it only has two cores, and after getting a Mac Pro with 12, then 8 cores, I know what a dramatic improvement of usability additional cores can make. I know that Intel has announced suitable 4-core CPUs for the 13" MacBook Pro today, although I would still say that they are not as good a fit as a big.LITTLE setup.
I actually think that the new 28W chips announced today are the best news out of Intel for a long time. I'm not going to upgrade, but I would have loved to have one of them.

More cores are great, but there are very real diminishing returns. I would love 4, but I'm not sure that I could make use of 6 in a very efficient manner.
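To put rough numbers on those diminishing returns, here's a quick Amdahl's-law sketch; the 80% parallel fraction is just an assumed workload, not a measurement:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallel fraction of the work and n the number of cores.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.80  # assumed: 80% of the workload parallelizes perfectly
for n in (2, 4, 6, 8, 16):
    print(f"{n:2d} cores -> {amdahl_speedup(p, n):.2f}x")
# 2 -> 1.67x, 4 -> 2.50x, 6 -> 3.00x, 8 -> 3.33x, 16 -> 4.00x:
# going from 4 to 6 cores buys noticeably less than going from 2 to 4.
```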

Originally Posted by reader50 View Post
I'm not understanding why Apple doesn't offer AMD CPUs on some lines. Push Intel with extra competition, like they used to switch GPU sources regularly.

For some years, AMD has lagged on performance. But the Zen architecture has been working well. There's constant Ryzen advertising on NewEgg for example.
Zen is a massive step forward, but it doesn't suit Apple all that well. You could put one in the iMac to give it six or eight cores at the top, but that's mostly it. The mobile chips are quads at 15W with graphics far weaker than what Apple is shipping in the 13" non-TB MBP, the only place where that TDP CPU is used. If the Mac mini were still a thing, it could do well there, but I don't think Apple will risk antagonising Intel over that tiny product.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
subego  (op)
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Apr 7, 2018, 01:10 PM
 
I saw the Mac Pro deal, gives me some hope.

As for cores, with video in After Effects, if I have the RAM to accommodate it, each core works on its own frame. More cores essentially equals faster rendering.
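That frame-per-core pattern, as a rough sketch (render_frame is a stand-in placeholder here, not After Effects' actual API):

```python
# Frame-parallel rendering sketch: each worker process takes its own frame,
# so (given enough RAM) render throughput scales roughly with core count.
from multiprocessing import Pool

def render_frame(frame_number):
    # Placeholder for the real per-frame work (compositing, effects, encode).
    return f"frame {frame_number:04d} rendered"

if __name__ == "__main__":
    frames = range(240)  # e.g. 10 seconds at 24 fps
    with Pool() as pool:  # defaults to one worker per CPU core
        for result in pool.imap_unordered(render_frame, frames):
            pass  # collect or write out each finished frame here
```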
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Apr 7, 2018, 04:21 PM
 
There is definitely a market for a laptop with huge battery life and without massive horsepower for enterprise users. Only needs to run MS Office nice and smoothly. Maybe the Air will go to ARM and leave the heavy lifting to the MB and MBP for a bit longer.
I have plenty of more important things to do, if only I could bring myself to do them....
     
BLAZE_MkIV
Professional Poster
Join Date: Feb 2000
Location: Nashua NH, USA
Status: Offline
Apr 8, 2018, 11:16 AM
 
Most business apps are web-based anyway, so all the horsepower is cloud-side.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Apr 8, 2018, 07:00 PM
 
Plenty of small business and sales guys who like Apple's style just want regular laptops for doing their spreadsheets. The old school ones aren't that taken by the cloud versions.
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Apr 8, 2018, 07:54 PM
 
@P

I've added my numbering back for clarity.
Originally Posted by P View Post
[(1)] I think that a little too much hay has been from this. Apple has an excellent CPU in the A7, but they have mostly been tweaking that ever since. The A8 was a dud. The A9 was great, but it was great because Apple got access to a new process that was much better (16nm FinFET instead of the flat 20nm and 28nm of previous chips). The A10 is... what? I haven't seen a good analysis of how fast it is. The main thing it did is the big.LITTLE thing.
I don't quite get what you are claiming here: there have been consistent year-over-year gains, e. g. Apple's claim that the A11 is on average 30 % faster than the A10 is confirmed by Geekbench. I'm sure Apple's internal CPU core roadmap has explicit performance targets that they will compare to those of Intel. Phones shouldn't be as fast as, or faster than, laptops.

(Just to be clear, Geekbench is just one benchmark, and we can discuss whether it is a good benchmark that reflects “average use”, whatever that means. It is unfortunate that Anand lives in the belly of the beast now, and that there aren't any forensic deep dives that analyze not only the fast cores of the A11, but also the slow ones. Nevertheless, I think it is still clear that up until now Apple's SoCs have been getting faster at a significantly higher rate than Intel's CPUs.)
Originally Posted by P View Post
[(2)] That specific CPU is extremely slow compared to anything Apple ships in a Mac, though. It doesn't follow that a CPU that is fast enough will give that fantastic battery life, and note that the iPad in that test has worse battery life.
Two things here: first of all, Apple's SoCs are significantly faster but still offer at least comparable power efficiency here. So my point is rather that there is a sizable subset of people clamoring for a laptop with honest-to-goodness 20-hour battery life (12 hours under decent load). This is the same type of people who wanted a MacBook Air when the first two generations were released — despite their (lack of) performance. Because of the success of the MacBook Air, the TDP of the default CPUs for such machines has been falling over time (the latest Intel CPUs destined for the 13" MacBook Pro have a TDP of 28 W, down from 35 W). That is in addition to the now popular SKU of 15 W TDP CPUs. The trend line here is clear.
Originally Posted by P View Post
[(3)] Well, Intel doesn't have a roadmap right now... Still not sure that big.LITTLE is a good idea. You'll note that Apple stayed away from that idea for the longest time
Yes, but now Apple has fully embraced the idea. I always thought that they initially weren't convinced, saw that the idea worked in the ARM SoC space, and then started to design a big.LITTLE implementation themselves. That seems to jibe with the timeline, at least. To me it seems inevitable for mobile computers, and I am sure Apple thought about this very carefully before committing to this strategy. Intel clearly has no plans for big.LITTLE; they have just cut their efforts to produce Atom-based SoCs and scaled back their efforts for the Atom line in general.
Originally Posted by P View Post
[(4)] Integrate to what extent? On die? I wonder how well it would work to put more stuff on the die. Apple's AX series chips are usually around 150mm2. That is fairly big in a situation where yields are trending down, and what things do you want that need need to be closer than a PCIe lane away? If it is enough to put it on package, Intel has EMIB, as we've discussed before.
EMIB is not a panacea here: Apple would have to work with Intel to at least produce a custom package. While this is not entirely unprecedented (the MacBook Air had a custom package of a vanilla Intel chip, if memory serves), it'd mean that they would have to work closely with Intel on this.
Originally Posted by P View Post
[(5)] Agree. The 10nm launch has been a disaster for Intel. Then claimed in a financial statement that they're now shipping 10nm chips for revenue, but they haven't officially launched, and the 8th generation is filling up. The only real spot left for Cannonlake is the 5W Y-series chips, and that is a logical spot to launch them, but they are insanely late. [...] Will Apple have access to better process than Intel? Unlikely - even if Intel has lost the advantage, we're a long way from TSMC et al being better at anything. Intel has flubbed the launch of the 10nm process, and 14nm was late, but they executed pretty much like clock work for four node switches before that. At the same time, the entire foundry business messed up 20nm at the same time. The only one to even make a process for it was TSMC, but they cancelled all but the mobile SoC process, and some chips (notably nVidia's GPUs) had to be redesigned on 28nm. On 14nm, GF failed so badly that they had to license Samsung's process. GF failed at 32nm as well. Everyone fails every now and then, and that Intel failed on 10nm doesn't mean that they will fail on 7nm.
For the longest time Intel had a process advantage that meant if all other things were equal, Intel CPUs would be faster and more power efficient. I don't see Intel gaining a process advantage by leap frogging the other big players in the fab business, at the very least that is unlikely. The most probable outcome is that Intel's process is roughly on par with that of TSMC and Samsung. Intel now has to compete on a level playing field, and at least in the low power space, it doesn't compete well.
Originally Posted by P View Post
[(6)] Sure, but there are at least good performance options available now. The Iris Plus stuff is not too shabby, there is a Vega M option now, and Intel is clearly focusing on this for the future.
Vega M is a high-power solution, not suitable for 15 W TDP and lower — and I reckon that these are the CPUs that sell the most in Apple products. (I'm quite sure the MacBook Air, MacBook and 13" non-Touch Bar MacBook Pro are Apple's best-selling notebooks by quite a margin.) And after all these years, Intel's promise to get better at graphics seems too little, too late. I noticed their recent hiring decisions, but I don't expect anything homegrown to come to fruition for at least another two, if not three, years.
Originally Posted by P View Post
What inherent advantage does Apple have over Intel to make their chips be significantly faster a few years down the line? The ARM ISA is easier to decode, yes, but that only leads to a slightly lower base power consumption.
I agree that this is probably just a contributing factor, not a main reason.
Originally Posted by P View Post
I actually think that the new 28W chips announced today are the best news out of Intel for a long time. I'm not going to upgrade, but I would have loved to have one of them.
For the short term, yes, finally Intel gives me the option of putting four cores in a 13" MacBook Pro. But I don't think this has any bearing on the long-term viability of sticking to Intel.
Originally Posted by P View Post
More cores are great, but there are very real diminishing returns. I would love 4, but I'm not sure that I could make use of 6 in a very efficient manner.
2 big + 4 LITTLE cores seems like a good compromise here, not just for an iPhone, but also for notebook workloads. Or better, Apple could give us a choice like it does with the iMac Pro and Mac Pro, where users have the option to upgrade to a SoC with more big cores at a lower base clock. If anything, that should make it easier for Apple to make SoCs for all of its notebooks.

Moreover, Apple may want to produce more iOS-based devices, such as a “touch-based iMac” aka a “21"-27" iPad” with CPU power to boot. I don't see Apple porting iOS to x86 for those machines. Hence, Apple's efforts to develop CPUs and SoCs suitable for an iMac would bootstrap “desktop iOS” devices. The latter is just speculation on my part, but if Apple is serious about touch-based OSes being the future, we will eventually need “desktop-level” iOS devices.


To summarize, I think all of the points you raise are valid individually. However, looking at the situation as a whole, I see most things shifting in favor of a switch to ARM and away from Intel. On top of the technical reasons there are recent business decisions: Intel has just announced a focus on the high-margin server market. Microsoft has just announced it will de-emphasize Windows — which weakens the Wintel alliance. Can you think of indicators moving in Intel's favor here? The best counterarguments I can see are of the form “Apple would have to replace/license/develop a replacement for technology X”, where X could stand for something like Thunderbolt (which featured prominently in Panzarino's article on the development efforts surrounding the Mac Pro). And given enough effort, these could be solved by Apple.
I don't suffer from insanity, I enjoy every minute of it.
     
mindwaves
Registered User
Join Date: Sep 2000
Location: Irvine, CA
Status: Offline
Apr 8, 2018, 09:05 PM
 
Considering Apple has officially said that a new Mac Pro will come in 2019 (6 years after the trashcan Mac?), we will see an A13X processor in it. Will it handle the main functions of the OS? Probably not, but I am often wrong.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Apr 9, 2018, 11:10 AM
 
Originally Posted by OreoCookie View Post
@P

I've added my numbering back for clarity.

(1)I don't quite get what you are claiming here: there have been consistent year-over-year gains, e. g. Apple's claims that the A11 is on average 30 % faster than the A10 is confirmed by Geekbench. I'm sure Apple's internal CPU core roadmap has explicit performance targets that they will compare to those of Intel. Phones shouldn't be as fast or faster than laptops.
What I'm saying is that Apple made a fantastic design in the A7, which was a huge boost over anything before it. The A8 was a dud - Apple claimed 25% faster, but some of this was compiler improvement and some was a higher clock, so the only real improvement was single-digit percentage. The A9 was great, but it was great because Apple finally got access to a FinFET process and could rev the clocks (which also came at a cost, because batteries couldn't keep up, but that would be less of an issue in a laptop). What they have done since is very minor and focused around extra low-power cores (and graphics, let's not forget).

This idea that Apple delivers fantastic improvement year after year is a bit of an illusion. The A7 was super-impressive, and the 16nm FinFET process really closed the gap with Intel, but those are the two improvements. They have rejiggered the caches, and yes, they did reduce the main memory latency at one point (it is still way worse than Intel's, though), but mostly Apple is maintaining. I have a hard time seeing the A12 "big core" being significantly faster than the A11 "big core" clock-for-clock.

(Just to be clear, Geekbench is just one benchmark, and we can discuss whether it is a good benchmark that reflects “average use”, whatever that means. It is unfortunate that Anand live in the belly of the beast now, and that there aren't any forensic deep dives that not only analyze the fast cores of the A11, but also the slow ones. Nevertheless, I think it is still clear that up until now Apple's SoCs have gotten faster significantly faster than Intel's CPUs.)
Geekbench is a pretty terrible test, to be honest, but we don't have anything better, so let's look at it. Single-core scores are up 17% on a 6% clockspeed increase, so IPC is up 10%, maybe? Some of that is probably compiler tricks again, but still, 10% is not so shabby. It is also what Intel gets on its "tocks", or redesigns. What Apple has done, more than anything else, is increase the clock. The A11 "big core" clocks are as high as 2.5 GHz. The A5, the last non-Apple designed core, ran at 800 MHz. They got these clock increases in large part because they got access to better processes - TSMC et al caught up with Intel. They can't continue to increase this way unless TSMC outdoes Intel going forward, and I have a hard time seeing that.
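That back-of-the-envelope number is easy to check with the figures above:

```python
# Implied IPC gain: divide the performance ratio by the clock ratio.
score_gain = 1.17   # A11 vs A10 single-core Geekbench, per the post above
clock_gain = 1.06   # quoted clock-speed increase
ipc_gain = score_gain / clock_gain
print(f"Implied IPC improvement: ~{(ipc_gain - 1) * 100:.0f}%")  # ~10%
```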

(2)Two things here: first of all, Apple's SoCs are significantly faster but still offer at least comparable power efficiency here. So my point is rather that there is a sizable subset of people clamoring for a laptop with honest-to-goodness 20-hour battery life (12 hours under decent load). This is the same type of people who wanted a MacBook Air when the first two generations were released — despite their (lack of) performance. Because of the success of the MacBook Air, the TDP of the default CPUs for such machines has been falling over time (the latest Intel CPUs destined for the 13" MacBook Pro have a TDP of 28 W, down from 35 W). That is in addition to the now popular SKU of 15 W TDP CPUs. The trend line here is clear.
The current iPad Pro gets a 10 hour battery life on a battery that is 41 Wh in the 12.9" version. Let's say that we take this computer and put it into a 13" case similar to the current 13" MBP. Let's also pretend that everything we add to the computer - keyboards, extra ports, slightly larger display etc - don't use any power, and that whatever OS we use is as power-efficient as current iOS. We would then need an 82 Wh battery to get 20 hours of battery life. The current 13" MBP has a battery just above or just below 50 Wh, depending on the variant. Our 20h iPad-book would thus be significantly thicker than the current MBP - in fact, it would be slightly thicker than the old 2012 MBP. That is about how thin and light that 20h MBP would be.
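Spelling that arithmetic out, with the same figures as above and the same generous assumption that everything else draws no power:

```python
# Scale the iPad Pro battery (41 Wh for a rated 10 h) to a 20 h target,
# assuming the added laptop hardware draws no extra power (generous).
ipad_battery_wh = 41.0
ipad_hours = 10.0
target_hours = 20.0

avg_draw_w = ipad_battery_wh / ipad_hours    # ~4.1 W average draw
needed_wh = avg_draw_w * target_hours        # ~82 Wh
print(f"Average draw ~{avg_draw_w:.1f} W, battery needed ~{needed_wh:.0f} Wh "
      f"(vs. roughly 50 Wh in the current 13-inch MBP)")
```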

If we instead focus on the 12h number you put in, we need a battery that is close enough to the current 50Wh as makes no difference. Now, Apple's claims of 10h battery life for the MBP don't exactly hold up in court, so maybe this theoretical iPad-book gets slightly better battery life, but it isn't a fantastic difference, and it is no slimmer than the current MBP.

I agree that laptop TDPs have been trending down for a while, but TDP does not correspond exactly to battery life. The iPad CPU uses less power than Intel's CPUs, but it isn't an order of magnitude, and the Intel laptop chips are still faster.

(3) Yes, but now Apple has fully embraced the idea. I always thought that they initially weren't convinced, saw that the idea worked in the ARM SoC space and then started to design a big.LITTLE implementation themselves. That seems to jive with the time line at least. To me it seems inevitable for mobile computers, and I am sure Apple thought about this very carefully before committing to this strategy. Intel clearly has no plans for big.LITTLE, they have just cut their efforts to produce Atom-based SoCs and scaled back their efforts for the Atom line in general.
Intel has been responding to Apple's wishes lately, and remember that Intel hasn't launched a new design since Skylake in 2015. If Intel either saw big.LITTLE as a good idea at about the same time as Apple, or if Apple started leaning on them by the time they themselves figured it out, Intel could never have got that design done in time for Skylake in 2015 - Apple themselves launched it with the A10 in 2016. It would always have had to be done for the next major revision at the earliest, and the next major revision after Skylake isn't here yet. The lack of big.LITTLE isn't an indication that Intel isn't considering it.

There is also one more thing. Intel has Hyperthreading, and Apple does not. This means that Intel can handle a task with lots of threads on a dualcore better than Apple can, and they can easily go to four cores (as they are doing). IBM has also demonstrated that Hyperthreading (which they call SMT because Hyperthreading is a trademark, but it works the same way) can work fine with 8 threads per core. 4 cores with 8 threads per core means more threads than I think any sane single-user task will ever need. This means that half the reason for the extra cores in the A11 is gone for Intel. What remains is the use case that you have a single low-power core that remains awake at extremely low power to check emails or whatever, but that isn't particularly compelling for a laptop. My phone will tell me when I have email, even if I then check it on the laptop.

(4) EMIB is not a panacea here: Apple would have to work with Intel here to at least produce a custom package. While this is not entirely unprecedented (the MacBook Air had a custom package of a vanilla Intel chip if memory serves), it'd mean that they would have to closely work with Intel on this.
Which they are, on Kaby Lake-G. Intel designed EMIB for exactly these things.

I also think that there are very few things that need it. A PCIe lane goes far.

(Note that Intel added 4 more of those to the 28W CPUs in the latest release. For all of you in the commentariat who went insane because the right-hand ports on the touchbar MBP had less bandwidth to the CPU than the left-hand ports - you can calm down now. This change means that there should now be enough PCIe lanes for all of the ports.)

(5) For the longest time Intel had a process advantage that meant if all other things were equal, Intel CPUs would be faster and more power efficient. I don't see Intel gaining a process advantage by leap frogging the other big players in the fab business, at the very least that is unlikely. The most probable outcome is that Intel's process is roughly on par with that of TSMC and Samsung. Intel now has to compete on a level playing field, and at least in the low power space, it doesn't compete well.
But if you think that Intel will have a process that is roughly on par with TSMC and Samsung, there will be no massive improvements for Apple A-something chips going down the line, because a significant part of those improvements were driven by process improvements. Intel lost its advantage, and that is what we have seen, but you can't double-count that difference.

Vega M is a high power solution, not suitable for 15 W TDP and lower — and I reckon that these are the CPUs that sell the most in Apple products. (I'm quite sure the MacBook Air, MacBook and 13" non-Touch Bar MacBook Pro are Apple's best selling notebooks by quite a margin.) And after all these years Intel's promise to get better at graphics seems too little too late. I noticed their recent hiring decisions, but I don't expect anything homegrown to come to fruition because of that for at least another two, if not three years.
For 15W CPUs, Intel has the Iris Plus 640. That is not too shabby a GPU, and it only arrived at that TDP with Skylake. It is in any case hard to compare to the iPad GPUs, which use tile-based deferred rendering, which cannot be used for something that needs to support DirectX and full OpenGL games. I think that Apple's push of Metal is in part so it will be able to use those GPUs on a Mac down the line, but it will cut off a lot of PC game ports if Apple goes that way.

2 big + 4 LITTLE cores seem like a good compromise here, not just for an iPhone, but also for notebook workloads. Or better, Apple could give us a choice like they do in the iMac Pro and Mac Pro, where it gives users the option to upgrade to a SoC with more big cores at a lower base clock. If anything, that should make it easier for Apple to make SoCs for all of Apple's notebooks.
A quadcore with HT is better than this at everything. More faster cores, more total threads. It will eat battery when loaded down, but even my iPad Pro will eat battery when I do something even remotely demanding.

Moreover, if Apple wants to produce more iOS-based devices such as a “touch-based iMac” aka a “21"-27" iPad” with CPU power to boot. I don't see Apple porting iOS to x86 for those machines. Hence, Apple's efforts to develop CPUs and SoCs suitable for an iMac would bootstrap “desktop iOS” devices. The latter is just speculation on my part, but if Apple is serious about touch-based OSes being the future, we would eventually need “desktop-level” iOS devices.
I don't think it is hard to develop a CPU that would work in an iMac. In a Mac Pro, maybe harder, but iMac should be doable with an iPad Pro CPU.

To summarize, I think all of the points you raise are valid individually. However, looking at the situation as a whole I see most things shift in favor of a switch to ARM and away from Intel. Compounded with the technical reasons are recent business decisions: Intel has just announced to focus towards the high margin server market. Microsoft has just announced to de-emphasize Windows — which weakens the Wintel alliance. Can you think of indicators moving in Intel's favor here? The best counter arguments I can see are of the form “Apple would have to replace/license/develop a replacement for technology X” where X could stand for something like Thunderbolt (which was featured prominently in Panzerino's article on the development efforts surrounding the Mac Pro). And given enough effort from Apple, these could be solved by Apple.
I can see Apple moving to ARM. I just don't see that the reasons for doing so are fantastically stronger now than they were when this idea was first floated some ten years ago. I think that the reasons for it boil down to exactly one thing - Intel messed up the 10nm launch badly. There is no guarantee that they will mess up the next transition, or that TSMC and GF won't.

Remember - Apple didn't ditch PowerPC when the G4 failed to reach 500 MHz. They didn't ditch PowerPC when the Motorola G5 project died. They did it when there was literally no other option than paying blackmail prices to IBM, Intel was showing a great roadmap, and the code for Rosetta was available for purchase. The Mac is far better off today than it was when the PowerPC era was at its darkest. Tim Cook doesn't do things for emotional reasons, and he has access to better roadmaps than we do. The fact that Intel keeps making chips that Apple clearly requests is an indication that Apple is antsy but that Intel is working to keep them. When that was the case around the last switch - when IBM came knocking - Apple stayed. It was when IBM wasn't interested anymore that Jobs took the call from Otellini.

So I think that the switch MAY happen, but I think that it is far from assured. What I think will make it happen for sure is if Chromebooks start to eat Windows laptops from below. They might - it is textbook low-end disruption - but it hasn't happened yet.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Apr 9, 2018, 08:53 PM
 
I'm surprised Google hasn't thrown more advertising weight at Chromebooks. Some of them are a very viable corporate/admin alternative now and typically cheaper.
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Apr 9, 2018, 09:45 PM
 
Originally Posted by P View Post
What I'm saying is that Apple made a fantastic design in the A7, that was huge boost from anything before it. The A8 was a dud - Apple claimed 25% faster, but some of this was compiler improvement and some was a higher clock, so the only real improvement was single-digit percentage. The A9 was great, but it was great because Apple finally got access to a FinFET process and could rev the clocks [...] Single-core scores are up 17% on a 6% clockspeed increase, so IPC is up 10%, maybe? Some of that is probably compiler tricks again, but still, 10% is not so shabby. It is also what Intel gets on its "tocks", or redesigns. What Apple has done, more than anything else, is increase the clock. The A11 "big core" clocks are as high as 2.5 GHz.[...]
I think you are oscillating between two valid performance measures, IPC and performance. Of course, Apple did reap the benefits of process shrinks and spent them accordingly, as a mix of power savings and clock speed increases. You do have a point that compilers also got better, and that it is hard to separate this from all the other factors.

On the user end what matters is total performance. So they will see 17 % rather than 10 %. IPC matters if you want to see how the architecture would scale with frequency (e. g. because you have a higher thermal budget and can afford to up the clocks). But we should clearly separate both points and not mix them in our discussion of performance.

Even if I look at the figures you present, I still think they are consistent with the claim that Apple's SoCs have been getting faster at a higher rate than Intel's for the last few years. For example, the last huge performance jump I remember was with the introduction of Sandy Bridge. The last tock release I remember had only 2.5-5 % improvements in IPC. (Specifically, Anandtech found an improvement when going from Broadwell to Skylake of 2.4-2.7 % depending on the RAM you use in your system.) That was because Intel wanted to make Skylake more power efficient, and forwent outright performance improvements for performance-per-watt improvements.
Originally Posted by P View Post
Geekbench is a pretty terrible test, to be honest, but we don't have anything better, so let's look at it.
I would gladly base the discussion on a better benchmark, but I don't know of any. The best alternative is the SPEC CPU marks that Anandtech ran on the A9X. AFAIK the A10(X) and A11 have not yet been SPEC-marked, so we don't know what gains were made generation-over-generation and in what areas. However, even SPEC CPU mark is a rather old benchmark with a very sciency workload, so it is not the be-all and end-all of benchmarks either.

As an aside, do you have an idea why Anandtech stopped running SPEC CPU marks on mobile CPUs? When they managed to run SPEC int on the iPad Pro, I thought they'd include it in their standard test suite and compare each subsequent generation of Apple SoCs — and those of other manufacturers as well! Seems weird.

Edit: Apparently, Anandtech has been benchmarking SoCs for Android using SPEC CPU mark, although I still have not seen SPECmarks of the A10, A10X or A11.
Originally Posted by P View Post
The current iPad Pro gets a 10 hour battery life on a battery that is 41 Wh in the 12.9" version. Let's say that we take this computer and put it into a 13" case similar to the current 13" MBP. Let's also pretend that everything we add to the computer - keyboards, extra ports, slightly larger display etc - don't use any power, and that whatever OS we use is as power-efficient as current iOS. We would then need an 82 Wh battery to get 20 hours of battery life. The current 13" MBP has a battery just above or just below 50 Wh, depending on the variant. Our 20h iPad-book would thus be significantly thicker than the current MBP - in fact, it would be slightly thicker than the old 2012 MBP. That is about how thin and light that 20h MBP would be.
This is a valid argument and a very delicate point. Let me preface my comment by saying that I was not very precise when I was talking about laptops with 15-20 hour battery life.

I did not mean to suggest that simply swapping a Core iX or a Core mX for an Apple A11X (or later) would immediately deliver on that promise. The discussion of battery life in particular is quite tricky overall, because “wifi browsing” battery benchmark figures and real-life figures are very different. The 2017 MacBook gets 809 minutes in Ars Technica's wifi browsing battery life test. The older A9X-based iPad Pro gets 508 minutes. On the other hand, if you look at the benchmark under load, Ars Technica's WebGL test, the iPad Pro lasts almost twice as long (372 minutes vs. 205 minutes). So now the delicate question is: which of the two “actually” gets better battery life? A colleague of mine who owns a MacBook (and totally loves it) gets 5 hours with his mix of applications.

In the more compute-intensive test, the CPU accounts for a higher share of the overall power draw. Some of the differences between the two platforms you compare, the MacBook and the 12.9" iPad Pro, make it a little hard to quantify what that is (the iPad has a larger screen and higher resolution which, in turn, taxes the GPU more; and the two run different OSes). But it seems to me that under load Apple's SoCs are more power efficient and yet deliver comparable performance. And that is the basis for my claim that an Apple A11X-or-later-powered MacBook would last significantly longer in real life than an Intel-based one. Arriving at a 20-hour battery life in a MacBook form factor would probably still require several generations' worth of engineering.
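One way to make “which of the two actually gets better battery life” concrete is to weight the two Ars Technica measurements by how much of the time is spent under load; a small sketch, where the 30% load share is just an assumed usage mix:

```python
# Combine light-load and heavy-load runtimes into one estimate for a given
# usage mix, via a weighted harmonic mean (drain rates add up, runtimes don't).
def mixed_runtime(light_min, heavy_min, heavy_share):
    return 1.0 / (heavy_share / heavy_min + (1.0 - heavy_share) / light_min)

# Ars Technica figures quoted above (minutes): wifi browsing vs. WebGL load.
macbook = mixed_runtime(809, 205, heavy_share=0.3)   # 2017 MacBook
ipad = mixed_runtime(508, 372, heavy_share=0.3)      # A9X iPad Pro
print(f"At an assumed 30% load mix: MacBook ~{macbook:.0f} min, "
      f"iPad Pro ~{ipad:.0f} min")
```

Under that (assumed) mix the iPad Pro comes out ahead even though it loses the light-load test; the answer really does depend on the workload.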
Originally Posted by P View Post
I agree that laptop TDPs have been trending down for a while, but TDP does not correspond exactly to battery life. The iPad CPU uses less power than Intel's CPUs, but it isn't an order of magnitude, and the Intel laptop chips are still faster.
I don't think I have claimed anything being an order of magnitude faster (I'm a physicist, so an order of magnitude is a factor of 10, not a factor of 2). The main reason why Apple's SoCs are (and probably can be) so much more efficient than what Intel is offering is the simple fact that Apple has to target one TDP with its chips and can optimize for that. (The X-variant contains more cores, and usually has only marginally higher clocks.) Intel's Core cores (ugh, I still hate that name), on the other hand, have to cover everything from 5 W TDP to 165 W TDP. That is a huge gulf, and it is why, especially at the extreme ends, Core cores become less optimal designs.

In terms of CPU architecture techniques, “everybody is cooking with water”, as we say in German, and Apple does not have magic pixie dust that makes its architectures faster. For more complex OoO designs, the RISC vs. CISC stuff is negligible. But being able to target its designs more narrowly is what IMHO gives Apple the upper hand. And yes, that will mean it will eventually have to introduce new custom core designs to cover higher-TDP optimization points, designs which make different trade-offs in their architecture. That is also the reason why big.LITTLE makes sense for a lot of workloads: the smaller cores are optimized for a lower-power, higher performance-per-watt point.
Originally Posted by P View Post
The lack of a big.LITTLE isn't indication that Intel isn't considering it.
No, it doesn't. But I don't see any indication from Intel that it is developing the necessary technology (e. g. by pushing the development of Atom). Intel might still surprise us, but I don't think this is likely, because I reckon Intel would want to use the LITTLE core in other applications (SoCs for NASes and such).
Originally Posted by P View Post
There is also one more thing. Intel has Hyperthreading, and Apple does not. This means that Intel can handle a task with lots of threads on a dualcore better than Apple can, and they can easily go to four cores (as they are doing).
Hyperthreading helps with core utilization, and is one of the last things that Apple has not yet integrated into its SoCs. It's an obvious gain, and one that Apple could in principle add to the A12 or later. (There may be legal reasons, too; Apple might need to work around someone else's IP so as not to violate any patents.) So I don't see Hyperthreading as an Intel-specific advantage, just as a gain that Apple has not put into its SoCs — yet.
Originally Posted by P View Post
What remains is the use case that you have a single low-power core that remains awake at extremely low power to check emails or whatever, but that isn't particularly compelling for a laptop. My phone will tell me when I have email, even if I then check it on the laptop.
Here, I disagree. While I am typing, my screen is mostly static and my computer is doing “nothing really”. That sounds like the perfect workload for several slower cores that can deal with iTunes and the like.
Originally Posted by P View Post
Which they are, on Kaby Lake-G. Intel designed EMIB for exactly these things.
Kaby Lake-G to me is not about Intel giving Apple the ability to integrate other dies onto the package, but an admission by Intel that it can't offer fast graphics to its customers. Hell had to freeze over for this to happen — and I agree that it was in all likelihood at Apple's insistence. But I don't see that as the beginning of a burgeoning custom Core iX line made just for Apple. It might happen, but then I think Apple would at least have a hand in designing the package. Plus, it'd take Apple time to integrate the EMIB interconnect into its own co-processor designs.
Originally Posted by P View Post
But if you think that Intel will have a process that is roughly on par with TSMC and Samsung, there will be no massive improvements for Apple A-something chips going down the line, because a significant part of those improvements were driven by process improvements. Intel lost its advantage, and that is what we have seen, but you can't double-count that difference.
I don't understand what difference I am supposedly double counting. I'm just remarking that Intel lost a performance advantage afforded by it being one process node ahead of the competition. Nothing else.
Originally Posted by P View Post
For 15W CPUs, Intel has Iris Plus 640. That is not too shabby a GPU, and it only arrived at that TDP with Skylake. It is anyway hard to compare to the iPad GPUs, which use tile-based deferred rendering, which cannot be used for something that should support DirectX and full OpenGL games. I think that Apple's push of Metal is in part so it will be able to use those GPUs on a Mac down the line, but it will cut off a lot of PC game ports if Apple goes that way.
Again, I would love to base the discussion on better benchmarks. So I agree that a quantification of the performance here is hard, but the more important point is the performance delta accrued over the years by the GPU built into Apple's SoCs. The story here is again that the growth is much larger than on the Intel side.
Originally Posted by P View Post
A quadcore with HT is better than this at everything. More faster cores, more total threads. It will eat battery when loaded down, but even my iPad Pro will eat battery when I do something even remotely demanding.
For high performance compute with a workload that parallelizes well, you are correct. But for other workloads that are more typical of mobile devices, I don't think this is necessarily true: when going from 4 (virtual = real) to 8 (virtual = 2x real) cores you are already deep into diminishing returns territory. HT helps with core utilization, but that's not that helpful if you can only keep <= 4 cores busy at a given time.
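To put rough numbers on those diminishing returns, here is a quick Amdahl's-law sketch. The parallel fractions are made up for illustration, not taken from any benchmark.

[CODE]
/* Amdahl's law: speedup(n) = 1 / ((1 - p) + p/n), where p is the parallel
 * fraction of the work and n the number of hardware threads.
 * The fractions below are made up to illustrate the 4 -> 8 thread step. */
#include <stdio.h>

static double speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    const double fractions[] = { 0.50, 0.75, 0.90 };
    for (int i = 0; i < 3; i++) {
        double p = fractions[i];
        printf("p = %.2f: 4 threads -> %.2fx, 8 threads -> %.2fx\n",
               p, speedup(p, 4), speedup(p, 8));
    }
    return 0;
}
[/CODE]

Even with 75 % of the work parallel, the jump from 4 to 8 threads only takes you from roughly 2.3x to 2.9x, and HT threads are not full cores, so the real-world gain is smaller still.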
Originally Posted by P View Post
I don't think it is hard to develop a CPU that would work in an iMac. In a Mac Pro, maybe harder, but iMac should be doable with an iPad Pro CPU.
If you want strictly better performance in an iMac, you need to invest quite a bit more into the design of a custom SoC. At the very least you need a very speedy interconnect to feed the GPUs that are necessary to run 5K displays. I don't think an iPad Pro SoC is fast enough here, especially on the graphics end. But I agree, this is a problem with a clear solution, and a solution that Apple can — in principle — afford.
Originally Posted by P View Post
I can see Apple moving to ARM. I just don't see that the reasons for doing so are fantastically stronger now than they were when this idea was first floated some ten years ago.
You don't think iOS completely changes the story here?

Honestly, without the pressure from iOS, I would be quite skeptical. If you look at past CPU architecture transitions, what helped with the Intel transition was that Apple would suddenly have access to the most-used architecture. That brought lots of advantages with it (e. g. the massive efforts in the open source community developing and optimizing compilers, the fact that you could natively run Windows and Linux if your heart so desires, etc.). The leverage ARM has is that iOS and Android are based almost exclusively on that architecture and all the development effort goes into that. Given the ubiquity of touch, I believe it is inevitable that Apple will release larger and larger touch-based devices. I would like a 27" touch-based “iPad” once the software is there.

Add to that Intel's and Microsoft's recent restructuring efforts (no doubt in reaction to slumping PC sales), and it is no longer clear whether Intel's future road maps align as neatly with Apple's as they did in 2005. Back then Intel refocused its CPU design efforts to be mobile first — and it worked wonderfully. Now, it is much less clear. (Now I can't even keep straight which *lake core is in which product, and what Core generation that would be. )
( Last edited by OreoCookie; Apr 10, 2018 at 01:05 AM. Reason: Added link to SPECmark results for Android phone SoCs)
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Apr 9, 2018, 09:47 PM
 
Originally Posted by Waragainstsleep View Post
I'm surprised Google hasn't thrown more advertising weight at Chromebooks. Some of them are a very viable corporate/admin alternative now and typically cheaper.
I have seen few Chromebooks out in the real world, although I have heard they are popular in US schools. Plus, Chrome OS is a red-headed stepchild in Google's OS strategy, so it is not clear what is going to happen in the long term. To me, the idea that you do “everything in the browser” seems very misguided.
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Apr 10, 2018, 06:16 AM
 
Originally Posted by OreoCookie View Post
I have seen few Chromebooks out in the real world, although I have heard they are popular in US schools. Plus, Chrome OS is a red-headed stepchild in Google's OS strategy, so it is not clear what is going to happen in the long term. To me, the idea that you do “everything in the browser” seems very misguided.
To an end user, the effect is similar to the setup where they remote into a VM hosted by their employers. Except their employers don't need the basement full of servers just to provide people access to Word and Excel.

For anyone doing anything remotely fancy I can see how it would seem dreadful but never underestimate the Powerpoint/Excel/Word/Browser crowd. There are still many millions of them.

PEWB. That's an acronym worth establishing. "How many PEWBs do you have {in your company}?"
I have plenty of more important things to do, if only I could bring myself to do them....
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Apr 10, 2018, 11:35 AM
 
Originally Posted by OreoCookie View Post
I think you are oscillating between two valid performance measures, IPC and performance. Of course, Apple did reap the benefits of process shrinks and spent it accordingly as a mix of power savings and clock speed increases. You do have a point that also compilers got better, and that it is hard to separate this from all the other factors.

On the user end what matters is total performance. So they will see 17 % rather than 10 %. IPC matters if you want to see how the architecture would scale with frequency (e. g. because you have a higher thermal budget and can afford to up the clocks). But we should clearly separate both points and not mix them in our discussion of performance.
Well yes, I suppose I do. My point is that if we are trying to predict the future, we have to separate process improvements from architecture improvements. I don't think that there will be any more process improvements by TSMC relative to Intel - that is, I think TSMC has caught up to Intel and will not overtake them - and I don't think that Apple will be able to rev clocks that much higher with the battery problems they have. This leaves the architecture improvements as the only thing that can happen down the line, and I think Apple has been more or less even with Intel here.

Originally Posted by OreoCookie View Post
Even if I look at the figures you present, I still think they are consistent with the claim that Apple's SoCs have gotten faster faster than Intel's for the last few years. For example, the last huge performance jump I remember was with the introduction of Sandybridge. The last tock release I remember had only 2.5-5 % improvements in IPC. (Specifically, Anandtech found an improvement when going from Broadwell to Skylake of 2.4-2.7 % depending on the RAM you use in your system.) That was because Intel wanted to make Skylake more power efficient, and forwent outright performance improvements for performance-per-watt improvements.
That analysis is faulty, as it compares Broadwell with eDRAM to Skylake without it. If we compare to Haswell instead and look at IPC generation to generation, it is more like 6%. This is a little unfair in that we get one notch of process improvement between mature 22nm and first-gen 14nm, but I'm not sure that that process is any better. 6% is less than 10%, but it is much more comparable. Ivy to Haswell was 11%, that is slightly more than the 10% we supposedly have from A10 to A11.

(I also suspect that the benchmark increase of A10 to A11 is much more about getting to max turbo quicker than actual IPC, but that distinction may be irrelevant here.)

Originally Posted by OreoCookie View Post
I would gladly base the discussion on a better benchmark, but I don't know of any. The best alternative is the SPEC CPU marks that Anandtech ran on the A9X. AFAIK the A10(X) and A11 have not yet been SPEC marked, so we don't know what gains were made generation-over-generation and in what area. However, even SPEC CPU mark is a rather old benchmark with a very sciency workload, so even that is not the be-all-end-all of benchmarks.
I agree, SPEC has its flaws. They have recently made a new version available (SPEC2017) that is supposed to be better. I have not seen a good analysis of it yet. For general purpose benches, it is probably better to look at Javascript benches right now.

Originally Posted by OreoCookie View Post
As an aside, do you have an idea why Anandtech stopped running SPEC CPU marks on mobile CPUs? When they managed to run SPEC int on the iPad Pro, I thought they'd include it in their standard test suite and compare each subsequent generation of Apple SoCs — and those of other manufacturers as well! Seems weird.

Edit: Apparently, Anandtech has been benchmarking SoCs for Android using SPEC CPU mark, although I still have not seen SPECmarks of the A10, A10X or A11.
There hasn't been any deep analysis of Apple SoCs at all lately. Not sure why. Various other forums I lurk in have also noticed this.

Originally Posted by OreoCookie View Post
This is a valid argument and a very delicate point. Let me preface my comment by saying that I was not very precise when I was talking about laptops with 15-20 hour battery life.

I did not mean to suggest that simply swapping a Core iX or a Core mX for an Apple A11X (or later) would immediately deliver on that promise. The discussion of battery life in particular is quite tricky overall, because “wifi browsing” battery benchmark figures and real-life figures are very different. The 2017 MacBook gets 809 minutes in Ars Technica's wifi browsing battery life test. The older A9X-based iPad Pro gets 508 minutes. On the other hand, if you look at the benchmarks under load, Ars Technica's WebGL test, the iPad Pro lasts almost twice as long (372 minutes vs. 205 minutes). So now the delicate question is: which of the two “actually” gets better battery life? A colleague of mine who owns a MacBook (and totally loves it) gets 5 hours in his mix of applications.

In the more compute-intensive test, the CPU accounts for a higher percentage of the overall power draw. Some of the differences between the two platforms you compare, the MacBook and the 12.9" iPad Pro, make it a little hard to quantify what that is (the iPad has a larger screen and higher resolution which, in turn, taxes the GPU more; and both run different OSes). But it seems to me that under load Apple's SoCs are more power efficient and yet deliver comparable performance. And that is the basis for my claim that, in real life, an Apple A11X- or later-powered MacBook would last significantly longer than an Intel-based one. Arriving at a 20-hour battery life in a MacBook form factor would probably still require several generations' worth of engineering.
I was trying to compare the 12.9" iPad to the 13.3" MBP, because I figured that the other factors were comparable. I know of course that the Intel CPU in those is either a 15W or a 28W model while the iPad Pro is more comparable to the 5W Y series, but the other factors are more similar.

As for the power draw... We don't have to guess how much power the x86 chips use. Intel has a little program called Intel Power Gadget:

https://software.intel.com/en-us/art...ower-gadget-20

Install it and run a few tasks - it is quite enlightening. For one thing, turbo makes the power draw shoot up. For another, the power requirements vary widely with the task. Integer tasks draw almost nothing, and will stay at full-core turbo constantly. Floating-point vector math, especially 256-bit AVX, uses so much power that the CPU can't even stay at its base clock (this is documented by Intel, but mostly hidden). At the same time, those high-power tasks are where Intel runs circles around ARM chips (and where Skylake crushes Broadwell and Haswell, BTW).
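If you want something to feed it, here is a toy pair of loops to run while Power Gadget is open (my own example, nothing to do with the tool itself). The integer loop barely moves the package power; the 256-bit AVX loop should make it jump and, on many chips, pull the clocks below base.

[CODE]
/* Toy workload for watching power draw: a scalar integer loop vs. a
 * 256-bit AVX floating-point loop. Compile with: gcc -O2 -mavx power_toy.c */
#include <immintrin.h>
#include <stdio.h>

static unsigned long long scalar_int(long iters)
{
    unsigned long long acc = 1;
    for (long i = 0; i < iters; i++)
        acc = acc * 6364136223846793005ULL + 1442695040888963407ULL; /* cheap LCG step */
    return acc;
}

static float avx_fp(long iters)
{
    __m256 a = _mm256_set1_ps(1.0001f);
    __m256 b = _mm256_set1_ps(0.9999f);
    for (long i = 0; i < iters; i++)
        a = _mm256_add_ps(_mm256_mul_ps(a, b), b);  /* 8-wide FP multiply-add */
    float out[8];
    _mm256_storeu_ps(out, a);
    return out[0];
}

int main(void)
{
    long iters = 2000000000L;  /* long enough to show up clearly in the power trace */
    printf("int result: %llu\n", scalar_int(iters));
    printf("avx result: %f\n", avx_fp(iters));
    return 0;
}
[/CODE]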

So if Intel wanted to make a CPU that got better battery life, they could easily do so. Limit the max clock to 2.5 GHz or so, what Apple usually maxes its CPUs at, make sure Hyperthreading is always on, but disable AVX. Since Intel's best mobile chips still beat the A11X even on Geekbench, Intel can take that clockspeed decrease and remain competitive.

But does this make sense? How often do you want to be 10 or 15 or 20 hours away from a power outlet and run CPU-intensive applications?

All of that goes for CPU tasks, though. I think that it is easier to say that Apple's GPU is more efficient than Intel's (which is also what shows up on the WebGL test), but that is much harder to do something about. The tile-based deferred rendering that mobile graphics use IS more efficient, but that doesn't help much if you can't run existing OpenGL (or ported DirectX) applications on it. Metal is the way forward there, I think, and the fact that the windowserver now uses Metal indicates that Apple sees a great future for it.

Note that Vega - which is likely in the next 15" MBP - supports rapid packed math, something that Metal makes good use of if it exists. Wouldn't surprise me if Intel has put that feature in Arctic Sound or Jupiter Sound (codenames for Gen 12 and 13 of Intel's graphics, the Raja Koduri-designed ones).

Originally Posted by OreoCookie View Post
I don't think I have claimed something being an order of magnitude faster (I'm a physicist, so an order of magnitude is a factor of 10, not a factor of 2). The main reason why Apple's SoCs are (and probably can be) so much more efficient than what Intel is offering is the simple fact that Apple has to target one TDP with its chips and optimize for that. (The X-variant contains more cores, and usually has only marginally higher clocks.) Intel's Core cores (ugh, I still hate that name) on the other hand have to cover everything from 5 W TDP to 165 W TDP. That is a huge gulf, and it is why Core cores become less optimal designs, especially at the extreme ends.
This is a problem for Intel - and Icelake apparently extends the range even further upwards, if the latest leaks are to be believed - but Intel is reacting here as well. Skylake cores have varying amounts of L2 cache depending on whether it is the desktop/laptop or the server implementation, and the L3 cache protocol is different (inclusive versus victim cache). The desktop and laptop chips appear to have traded some performance away for better power efficiency.

Originally Posted by OreoCookie View Post
In terms of CPU architecture techniques, “everybody is cooking with water” as we say in German, and Apple does not have magic pixie dust which makes its architectures faster. For more complex OoO designs, the RISC vs. CISC stuff is negligible. But being able to more specifically target its designs is what IMHO gives Apple the upper hand. And yes, that will mean it will have to eventually introduce new custom core designs to cover higher TDP optimization points, which make different trade-offs in their architecture. That is also the reason why for a lot of workloads big.LITTLE makes sense: the smaller cores are optimized for a lower power, higher performance-per-watt point.
But even if Apple manages to get better performance/watt out of its more tightly targeted designs, is that enough? Will they get 5% more performance? 10%? It can't be a lot more than that, and the risk is that the foundries flub their next node transition.

Originally Posted by OreoCookie View Post
No, it doesn't. But I don't see any indication from Intel that it is developing the necessary technology (e. g. by pushing the development of Atom). Intel might still surprise us, but I don't think this is likely, because I reckon Intel would want to use the LITTLE core in other applications (SoCs for NASes and such).
Atom was focused on getting a mobile phone deal, and that never happened. Intel has clearly given up on that and is selling the Atoms as rebranded Pentiums. I wonder if the most recent design (Silvermont) is the one to go for, though - the older in-order Atom core might be a better fit, IF they are interested.

Originally Posted by OreoCookie View Post
Hyperthreading helps with core utilization, and is one of the last things that Apple has not yet integrated into its SoCs. It's an obvious gain, and one that Apple could in principle add to the A12 or later. (There may be legal reasons, too: Apple might need to work around someone else's IP so as not to violate any patents.) So I don't see Hyperthreading as an Intel-specific advantage, just as a gain that Apple did not put into its SoCs — yet.
Hyperthreading is notoriously hard to get right, though. It might be that Apple took a look at it, decided that a big.LITTLE design was easier to get right and suited their purposes better, so they did that instead.

(I predicted that Apple would include HT or something like it with the A10, but I don't think that they will do it now that they have gone as wide as they have in A11).

Originally Posted by OreoCookie View Post
Here, I disagree. While I am typing, my screen is mostly static and my computer is doing “nothing really”. That sounds like the perfect workload for several slower cores that can deal with iTunes and the like.
Only if you can keep using the low-power core for a long time, because the switch isn't going to be instantaneous. The main reason that low-power chips use in-order execution is that you can't cut power to that big reorder buffer. Yes, that is a power draw, but compared to what a laptop uses, it really isn't significant - just check that Power Gadget I linked above. Intel CPUs will go to "S0i1" or "S0i3" mode, which is a silly way of saying that the core will drop to as little power as it would use in S1 (standby) or S3 (suspend) even while the rest of the computer is still active. In your case, it would be in S0i1 mode. Intel doesn't say what that means, but I suspect that it does include cutting the power to the reorder buffer, and then there is no power advantage at all to using an in-order core.

Originally Posted by OreoCookie View Post
Kaby Lake-G to me is not about Intel giving Apple the ability to integrate other dies onto the package, but an admission by Intel that it can't offer fast graphics to its customers. Hell had to freeze over for this to happen — and I agree that it was in all likelihood at Apple's insistence. But I don't see that as the beginning of a burgeoning custom Core iX line made just for Apple. It might happen, but then I think Apple would at least have a hand in designing the package. Plus, it'd take time to integrate EMIB interconnect into its designs for the co-processor.
Kaby Lake-G isn't about that, but EMIB is. I think they developed EMIB for a reason. It must have been a rather major project, not just some skunkworks thing.

Note that EMIB isn't being used to connect the GPU to the CPU - it is used to connect the GPU to its HBM2 memory. The CPU/GPU connection is plain ol' PCIe, 8 lanes of it. It seems obvious to me that you could save energy by using something like EMIB - after all, Intel developed a special solution for the connection between CPU and PCH on its U-series CPUs, and that is really just a 4 lane PCIe connection. So yes, Kaby Lake-G was a bit of a last minute fix (or it would at least have been Coffee Lake), but I think that it is clear that Intel made EMIB for this specific situation.

What will happen with high-performance graphics is very unclear. AMD's graphics department hasn't done well since nVidia's Maxwell launch, with the exception of the cryptocoin craze, but they're necessary for MS, for Sony, for Apple, and now for Intel, because none of those companies is going near nVidia for a custom GPU at the moment. I don't know who nVidia pissed off the most, but they're pretty clearly frozen out - their only deal remotely close to this is the off-the-shelf bargain SoC in the Nintendo Switch.

Originally Posted by OreoCookie View Post
I don't understand what difference I am supposedly double counting. I'm just remarking that Intel lost a performance advantage afforded by it being one process node ahead of the competition. Nothing else.
What I'm saying is that Intel losing that process node advantage is why Apple seems to be advancing faster than Intel over the last few years. It isn't two different reasons why Apple might switch to ARM - it is one and the same.

Originally Posted by OreoCookie View Post
Again, I would love to base the discussion on better benchmarks. So I agree that a quantification of the performance here is hard, but the more important point is the performance delta accrued over the years by the GPU built into Apple's SoCs. The story here is again that the growth is much larger than on the Intel side.
Maybe it doesn't make sense to compare the graphics performance between Intel and Apple in that case. The tasks are different, and Intel is clearly behind on graphics performance tech in general. They do OK when they throw hardware at the problem, but they need better tech. Licensing AMD's tech and hiring their chief GPU designer at least sounds like they're taking the problem seriously.

Originally Posted by OreoCookie View Post
For high performance compute with a workload that parallelizes well, you are correct. But for other workloads that are more typical of mobile devices, I don't think this is necessarily true: when going from 4 (virtual = real) to 8 (virtual = 2x real) cores you are already deep into diminishing returns territory. HT helps with core utilization, but that's not that helpful if you can only keep <= 4 cores busy at a given time.
But how is a 2 big + 4 LITTLE core design any better from this perspective?

Originally Posted by OreoCookie View Post
If you want strictly better performance in an iMac, you need to invest quite a bit more into the design of a custom SoC. At the very least you need a very speedy interconnect to feed the GPUs that are necessary to run 5K displays. I don't think an iPad Pro SoC is fast enough here, especially on the graphics end. But I agree, this is a problem with a clear solution, and a solution that Apple can — in principle — afford.
I wasn't going to bring up the interconnect - I think that a significant part of the reason that Intel CPUs use more power than Apple's SoCs is that they have 12 high-speed lanes of communication bandwidth that Apple doesn't - but you're right, you would need PCIe or something very much like it to handle communication. I would guess that Apple wants to integrate the GPU into the SoC somehow (after all, look at the performance Sony and MS are squeezing out of the SoC of the PS4 and Xbone), but they still need high-speed communication lanes for external ports, networking, storage etc.

Originally Posted by OreoCookie View Post
You don't think iOS completely changes the story here?

Honestly, without the pressure from iOS, I would be quite skeptical. If you look at past CPU architecture transitions, what helped with the Intel transition was that Apple would suddenly have access to the most-used architecture. That brought lots of advantages with it (e. g. the massive efforts in the open source community developing and optimizing compilers, the fact that you could natively run Windows and Linux if your heart so desires, etc.). The leverage ARM has is that iOS and Android are based almost exclusively on that architecture and all the development effort goes into that. Given the ubiquity of touch, I believe it is inevitable that Apple will release larger and larger touch-based devices. I would like a 27" touch-based “iPad” once the software is there.

Add to that Intel's and Microsoft's recent restructuring efforts (no doubt in reaction to slumping PC sales), and it is no longer clear whether Intel's future road maps align as neatly with Apple's as they did in 2005. Back then Intel refocused its CPU design efforts to be mobile first — and it worked wonderfully. Now, it is much less clear. (Now I can't even keep straight which *lake core is in which product, and what Core generation that would be. )
I think that the risk and cost of a transition like this is large, and that Apple will not do it unless the benefits going forward are massive. I don't see that they are, at this point. I don't think that you can extend the trendline of past performance into the future and see ARM being significantly better than x86 any time soon.

I also think that the transition might happen as bigger and bigger iOS devices eat the Mac market rather than the Mac going ARM. I would HATE that, but I can see it happening, and if I'm honest... what I do on my MBP I could probably do on a 13" iPad Pro with a keyboard. I just don't want to live in a world where I can't run what software I want on my machine, where emulators are banned because of Jobs' whim a decade ago. I'm mostly OK with it on my phone because I can do those things on another device, but if you remove the last device that I control, we have a problem.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Apr 10, 2018, 11:38 AM
 
Originally Posted by Waragainstsleep View Post
For anyone doing anything remotely fancy I can see how it would seem dreadful but never underestimate the Powerpoint/Excel/Word/Browser crowd. There are still many millions of them.
This sounds an awful lot like the argument about replacing MS Office - you can do it because nobody uses more than 15% of the functions in Word. Trouble is, not everyone uses the same 15%. The browser does solve some of this, though.

Originally Posted by Waragainstsleep View Post
PEWB. That's an acronym worth establishing. "How many PEWBs do you have {in your company}?"
++
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
May 9, 2018, 05:07 AM
 
Semi-relevant to the discussion:

https://www.bloomberg.com/news/artic...amid-cost-cuts

Qualcomm - who make Snapdragon chips, the most common ARM SOC used on Android phones - has been working on ARM server chips, and now appears to be throwing in the towel.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
mindwaves
Registered User
Join Date: Sep 2000
Location: Irvine, CA
Status: Offline
Reply With Quote
May 9, 2018, 05:54 AM
 
Very interesting. I remember a few years back when some companies were planning servers built around many ARM processors (say around 300), multi-core chips meant to run server tasks. One even came to my company for a demo. That company is probably bankrupt now, as ARM server chips have failed to come to fruition despite their claimed low power draw. I wish I remembered the company name.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
May 9, 2018, 08:49 PM
 
Another relevant nugget of information is Intel’s announcement that mass production of chips in their next-gen 10 nm process node is further delayed to 2019. TSMC on the other hand has just started with their 7 nm process. For about a year that will put TSMC two process nodes ahead of Intel. Interesting times.
I don't suffer from insanity, I enjoy every minute of it.
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Reply With Quote
May 10, 2018, 12:08 AM
 
It's ridiculous that they're still using 193nm UV to generate 7nm features. The industry needs to get EUV (13.5nm) lithography rolling. The process size has already lapped it before general adoption - EUV will have to use the same multi-exposure tricks right out of the gate. Just fewer of them initially.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
May 10, 2018, 07:57 AM
 
Yeah, but to make the understatement of the year, it is really, really difficult to make EUV lasers with high enough intensity and stability. Nevertheless, it is remarkable that Intel is soon two process shrinks behind when it was at least one generation ahead of everyone. How times are changing.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
May 11, 2018, 04:53 AM
 
I was going to post the Intel-and-lasers thing earlier but decided that it might be too esoteric; anyway: Intel is shipping and selling Cannonlake 10nm CPUs, but they’re dualcores with the smallest graphics or without graphics, and Intel isn’t promoting the fact. The reason the yields are terrible is very much related to the lasers - Intel has decided to forgo EUV lasers and do its features with quadruple patterning. Multiple patterning reduces yields, because you can’t align things perfectly every time, which is why Intel is struggling.

Everyone else going to 10nm or below is using EUV at least partially. I think TSMC does some features with EUV and then others with UV and double patterning, and they want to go to EUV 100% with 7nm.

So the question is what Intel does with this delay. Is it a 1 year delay to make a new process using EUV and just skip wide Cannonlake deployment, or do they think that they can get their multiple patterning to work? Intel has to move to EUV next time anyway, so why spend time on other solutions now rather than getting a head start on the EUV problems?

My guess is that Cannonlake is dead, and the long delay now is because Intel is doing something else for what is coming out in 2019. They may call it 10nm still - everyone lies about the names by now anyway - but I think it will be something new.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
May 14, 2018, 08:41 AM
 
The 10nm insanity keeps going. Intel is now apparently shipping 2+0-style Cannonlake i3s (i.e., dualcores without integrated graphics) to Lenovo, who put them in low-end laptops. i3-8121U is the name, and Intel hasn't even put it into its database. This is the opposite of a paper launch - they do actually exist, but Intel pretends they don't.

Meanwhile, the heads have begun to roll. Intel's Chief Marketing Officer, the guy responsible for the silly naming, "PC does what?" campaign and a few more blunders, is out as of today.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
May 14, 2018, 09:12 PM
 
Intel was very careful in its wording (and I am glad I stuck to that, too) when they claimed that they will launch 10 nm mass production in 2019.

On the other hand, Intel has become very careless when talking about core generations, code names and the like. I almost don't want to dive into that anymore, because it has become a full time job to figure out which CPUs (usually denoted with i[x]-[yyyy](decorative label)) have which cores and are fabbed with which process. I wonder whether they are having trouble internally keeping track of this insanity as well.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
May 15, 2018, 09:34 AM
 
Ark.intel.com. There is no way to figure it out other than that.

Note that the guy that just got fired is the guy that developed the insane naming system, so there is maybe some hope there.

Fun fact: the next generation of Intel desktop chips is called Whiskey Lake. Now granted, there is an actual Whiskey Lake in Oregon, in the area where Intel usually picks its names, but the joke is that it is a reference to Intel’s failure to perform lately.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
May 17, 2018, 09:59 PM
 
What's next, Lake Titicaca?

But seriously, Intel naming schemes have progressively become inscrutable, and Intel's decision to decouple “codenames” and generations of its core designs adds insult to injury.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
May 18, 2018, 06:39 AM
 
The plan was Skylake - Cannonlake - Ice Lake - Sapphire Rapids, but then they kept pushing new names in between Skylake and Cannonlake. Coffeelake is a real change, but Kaby Lake was just a rename of a new stepping, and Whiskey Lake will likely be a rename of Coffeelake. I wonder if there is a Coke Lake or something to sneak in in 2019.

Cannonlake did launch, btw - snuck out, more like - and the main news is that it has LPDDR4 support (finally!) and AVX-512. Note that there are still no 8th gen Y-series parts, from either process.

More rumors about the problems Intel is having, btw: Intel has switched the metal layers from copper to cobalt, because cobalt has much smaller issues with electromigration, so it doesn't have to be isolated. It does reduce conductivity a bit, but you can compensate by making the wires thicker - they're still going to be thinner than the copper wires with the isolation.

The rumor is that Intel's problems stem from this transition, more specifically the heat expansion. If so, it may not be quite as easy as replacing the lasers, but Intel could back down to copper and still be equivalent to TSMC and friends.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
May 22, 2018, 03:05 PM
 
And now there is a new Intel hardware bug, a speculative store-to-load forwarding issue that can be exploited to leak information. More patches that cost some performance...
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
May 23, 2018, 06:33 AM
 
The question at this point is whether Intel wouldn’t do better if it scrapped its plans and redesigned its next cores to fix these security issues bit by bit: perhaps just simple fixes in the next core, and then more and more security features in subsequent generations?

I’m curious to what extent Apple has changed its plans for the next A1x cores here, because the generic idea behind these attacks makes pretty much all designs with speculative execution susceptible, even if each core from each maker may need a custom attack.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
May 23, 2018, 11:39 AM
 
Personally, I think that the key is to obfuscate the timing results for processes in userspace. All of these attacks rely on the cache being faster than main memory, and if you can’t detect that, the attacks fail.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
BLAZE_MkIV
Professional Poster
Join Date: Feb 2000
Location: Nashua NH, USA
Status: Offline
Reply With Quote
May 23, 2018, 06:31 PM
 
You can pseudo-randomize which pieces of cache are used by which threads, like frequency hopping in radio. That should have only minor performance penalties.
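Roughly like this, as a toy model (keyed set-index scrambling; purely illustrative, not how any shipping CPU maps addresses):

[CODE]
/* Toy model of per-thread cache set scrambling (illustration only): the set
 * index is XORed with a keyed hash, so two threads cannot easily tell which
 * sets they share. */
#include <stdint.h>
#include <stdio.h>

#define SET_BITS 10u                      /* 1024 sets in this toy cache */
#define SET_MASK ((1u << SET_BITS) - 1u)

static uint32_t scrambled_set(uint64_t phys_addr, uint32_t thread_key)
{
    uint32_t index = (uint32_t)(phys_addr >> 6) & SET_MASK;   /* 64-byte lines */
    uint32_t hash  = ((uint32_t)(phys_addr >> 16) * 2654435761u) ^ thread_key;
    return (index ^ hash) & SET_MASK;     /* per-thread remapping of the set */
}

int main(void)
{
    uint64_t addr = 0x7f1234567840ULL;
    printf("thread A -> set %u\n", scrambled_set(addr, 0xA5A5A5A5u));
    printf("thread B -> set %u\n", scrambled_set(addr, 0x3C3C3C3Cu));
    return 0;
}
[/CODE]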
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
May 23, 2018, 08:08 PM
 
I was thinking further, namely making things like transactional memory something that is much higher on the agenda and not just a feature reserved for workstation and server workloads.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
May 27, 2018, 04:25 AM
 
Right now, transactional memory is one of the tricks used to make Spectre-like attacks happen. How would using it widely help here?
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
May 27, 2018, 07:43 AM
 
Originally Posted by P View Post
Right now, transactional memory is one of the tricks used to make Spectre-like attacks happen. How would,using it widely help here?
Transactional memory access was just an example of a focus shift from performance to security.

Edit: I should add that I wasn't aware that TSX was instrumental in Spectre-type assaults.
( Last edited by OreoCookie; May 27, 2018 at 06:37 PM. Reason: Clarification)
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
May 28, 2018, 07:29 AM
 
Well, it works like this: Spectre 1 relies on a speculative read that is never retired, but which can be detected because you can see in the cache which read was made speculatively. Meltdown relies on making an illegal read in thread 1, having the kernel kill thread 1 and then figuring out what that thread "saw" in another thread (thread 2). You can do the same thing with TSX as well - just wrap the entire illegal read request in a "try" block, and then have the try fail so the read is never retired (because retiring the illegal read would cause a segmentation fault and kill the faulting thread). Using the same cache timing sniffing, you can figure out the data that you weren't allowed to read. Ingenious, really - and it doesn't rely on a bug at all, because the processor does exactly what it should according to specification.
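The "cache timing sniffing" step is the simple part. Here is a bare-bones sketch of that measurement primitive on x86 (just timing a cached load against a flushed one, not an exploit):

[CODE]
/* Bare-bones cache-timing probe: the measurement primitive behind
 * flush+reload style attacks, not an exploit by itself.
 * Compile with: gcc -O2 probe.c */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, __rdtscp, _mm_lfence */

static uint64_t time_access(volatile uint8_t *p)
{
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;                         /* the load we are timing */
    _mm_lfence();                     /* make sure the load has completed */
    return __rdtscp(&aux) - start;
}

int main(void)
{
    static uint8_t probe[4096];

    probe[0] = 1;                     /* bring the line into the cache */
    uint64_t hot = time_access(&probe[0]);

    _mm_clflush((void *)&probe[0]);   /* evict it */
    _mm_lfence();
    uint64_t cold = time_access(&probe[0]);

    /* Typically "hot" is tens of cycles and "cold" is hundreds; that gap
     * is the side channel the Spectre/Meltdown variants read secrets through. */
    printf("cached: %llu cycles, flushed: %llu cycles\n",
           (unsigned long long)hot, (unsigned long long)cold);
    return 0;
}
[/CODE]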
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
 