
Mac transition to ARM to be announced at WWDC?
Thorzdad
Moderator
Join Date: Aug 2001
Location: Nobletucky
Status: Online
Jun 9, 2020, 02:26 PM
 
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Jun 9, 2020, 02:36 PM
 
I'd rather Apple transitioned to AMD: standard CPU sockets that you can buy upgrades for in the future, with a strong roadmap.

Apple's penchant for proprietary hardware has about done it for me. I won't buy into a walled garden again. I did it in the PPC days. Expensive upgrades were available from 3rd parties. Once Apple moved on, the upgrades stopped appearing. In this case, Apple would be the sole source for their custom CPUs. This probably rules out 3rd party upgrades entirely.
     
ShortcutToMoncton
Addicted to MacNN
Join Date: Sep 2000
Location: The Rock
Status: Offline
Jun 9, 2020, 05:28 PM
 
Oof. This doesn’t sound like great long term news for my new Mini....
Mankind's only chance is to harness the power of stupid.
     
OAW
Addicted to MacNN
Join Date: May 2001
Status: Offline
Jun 9, 2020, 07:55 PM
 
If this means running a VM in Parallels is going to be crippled or killed, then it's a non-starter for me. Unfortunately, there are some legacy applications that I simply must be able to run in Windows 10 at work. It's simply not an option for me to forego this capability. For the last 5 years I've been using a 2015 and then a 2017 12-inch MacBook because I simply love the ultra-portability and silence of the machine. But this WFH thing because of #TheRona has me driving an external monitor, MS Teams video-conferencing, running a Windows 10 VM, Office 365, Safari, Mail, Calendar, etc. at the same time ... and that is really pushing this machine beyond any reasonable expectation of good performance. So I broke down and ordered a 2020 13-inch MacBook Pro*. And I'll have to hold onto it for years into the foreseeable future, and not upgrade on my usual timetable, if Apple doesn't handle this ARM transition properly.

OAW

* - It's going to feel like a brick but it'll be on a table for quite some time to come so I'm not as worried about that. But the fan noise though .... we shall see.
     
mindwaves
Registered User
Join Date: Sep 2000
Location: Irvine, CA
Status: Offline
Jun 9, 2020, 08:37 PM
 
I also wish that Apple went with AMD, or at least offered the option of AMD inside. ARM inside a laptop doesn't bring much these days. Intel battery life is already pretty good; it lasts a full day for me. The only thing it might do is assist in graphics (just a guess). They can't make a device any smaller or thinner than they are now, because you still need a keyboard and structural support. Maybe an Intel/AMD chip with an ARM coprocessor (like the T1, but beefier, like an A14X) might be good.
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Jun 9, 2020, 10:18 PM
 
OAW, it sounds like you could use a real desktop for that much WFH. With the external monitor pushing the GPU + extensive CPU loads ... maybe a modern 13" MBP can handle it.

I just tested my 2011 MBP 15" quad-core as a desktop replacement for a couple of days while I was messing around with upgrades on my MP. Even with an eGPU box driving my dual monitors, the fan noise was unavoidable, though I was getting more fan noise from the eGPU box + RX580 than from the notebook.

Let us know how it goes. It would be quite interesting if the laptop does handle it quietly.
     
Mac User #001
Mac Elite
Join Date: Mar 2007
Location: WI, United States
Status: Offline
Jun 9, 2020, 11:06 PM
 
I haven't fully formed my thoughts on the transition, but I will say I am shocked to see so many against it. I thought this was something more users wanted. In my mind, Apple can get away with ARM in any of the baseline models because so many people have no idea what it all even means, but I will agree it sounds like a dangerous move for Pro-level users.
I have returned... 2020 MacBook Air - 1.1 GHz Quad-Core i5 - 16 GB RAM
     
Doc HM
Professional Poster
Join Date: Oct 2008
Location: UKland
Status: Offline
Jun 10, 2020, 05:47 AM
 
I agree most consumers don't give a fig who makes the processor inside their machine. They buy it, use it, throw it away and buy another, as long as it's quick and the battery lasts OK. The megahertz wars are long behind us now. The vast majority of my customers couldn't tell you who made the chip inside their laptop, or whether they bought the i3, i5 or i7 version, or even what the clock speed is.
Running custom ARM chips will just make it easier for Apple to tie all the other components of the system, like the T2 chip, into one working whole, which is what they like and the customers really don't care about.

I also agree that for pro users it may be sub-optimal. I'm sure Apple sees the actual Mac Pro as separate from all this, as it's pretty substantially different inside anyway. I would like to think they'll run a "Pro Power" line of laptops, and I can see them doing this at an (even) higher price point, to cater for mobile pro users. I don't know what sales of the iMac Pro are like (I've never seen one in a customer's house in the wild), but that may end up being part of a three-pronged "Pro" line using different chips, with the Mac Pro, iMac Pro and maybe MacBook Pro catering to the niche Pro market?
This space for Hire! Reasonable rates. Reach an audience of literally dozens!
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Jun 10, 2020, 08:36 AM
 
Originally Posted by Mac User #001 View Post
I haven't fully formed my thoughts on the transition but I will say I am shocked to see so many against it.
Many of us have lived through the 68k-PPC transition, then the PPC-Intel transition where they killed off Rosetta after 5 years, and now we're seeing Apple ditching support for all 32-bit apps. There's an assumption that the transition to ARM will include an eventual phasing out of support for legacy apps and products that many users need. Couple that with the likelihood of very limited hardware upgradability, and you have a recipe for Apple pressuring people into upgrades far sooner than they otherwise would.
     
Thorzdad  (op)
Moderator
Join Date: Aug 2001
Location: Nobletucky
Status: Online
Jun 10, 2020, 11:41 AM
 
On the up side (if there is one), the bigger software companies, like Adobe and Microsoft, have been getting their Intel/Mac products working on ARM for a few years now, so they probably have the nuts-n-bolts down, at least.
( Last edited by Thorzdad; Jun 10, 2020 at 12:01 PM. )
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Jun 10, 2020, 12:09 PM
 
Upside:

* Apple ARM chips are now significantly faster than x86 chips, and the ISA is part of the reason for that. ARM itself is also making CPUs (Cortex-X1) that almost match an Apple A12.
* Power draw is lower, with all that that means for cooler chips and lighter laptops.
* Intel is always at their best with their back to the wall, and this might be just what they need to stop sandbagging.
* It seems likely that we will now finally get Macs with 4G and 5G networking.

Downside:

* I seriously can’t even with Apple doing another transition to make old software obsolete.
* I have an iPad that is thin and light, I don’t need my laptop to be that thin and light.
* Apple’s current trend is locking down the Mac more and more. They’re already past my comfort point, and this is a route to move even further.
* Apple’s CPU design team is second to none, but their GPU designs are unproven. Are they adding PCIe lanes to add AMD GPUs? They lose a lot of the power advantage if they do. This means that integrated Apple graphics is the likely answer, and that is a concern for me. Their entire idea (tile-based deferred rendering) remains incompatible with PC graphics. It is probably technically better, but it is incompatible anyway. I fear that this will lead to another dark age of Mac games.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Jun 10, 2020, 12:10 PM
 
Originally Posted by Thorzdad View Post
On the up side (if there is one), the bigger software companies, like Adobe and Microsoft, have been getting their Intel/Mac products working on ARM for a few years now, so they probably have the nuts-n-bolts down, at least.
Office has been on ARM for years. Photoshop is there as well now (the iPad version), so Adobe have clearly been working on it.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 10, 2020, 09:11 PM
 
Originally Posted by Thorzdad View Post
On the up side (if there is one), the bigger software companies, like Adobe and Microsoft, have been getting their Intel/Mac products working on ARM for a few years now, so they probably have the nuts-n-bolts down, at least.
Adobe has ported quite a bit of its software to iOS, and their apps were rewritten a few years ago to be portable. Ditto for Microsoft Office; that's why it is on iOS today. Plus, there is Office for Windows on ARM. I think portability is not as big of a problem as it used to be for major pieces of software.
Originally Posted by reader50 View Post
I'd rather Apple transitioned to AMD. Standard CPU sockets, that you can buy upgrades for in the future. With a strong roadmap.
Most PCs sold today — that includes Macs, obviously — are portables, and they can't be upgraded. Plus, AMD is constrained by the x86 architecture and its business model. Yes, you can ask AMD to make custom chips for you if you are willing to pay, but its cores are still less efficient than comparable ARM cores.
Originally Posted by reader50 View Post
Apple's penchant for proprietary hardware has about done it for me. I won't buy into a walled garden again. I did it in the PPC days. Expensive upgrades were available from 3rd parties. Once Apple moved on, the upgrades stopped appearing. In this case, Apple would be the sole source for their custom CPUs. This probably rules out 3rd party upgrades entirely.
I don't think the situation is analogous to PowerPC at all. PowerPC had, in its later years, only two companies that sold machines, Apple and IBM. The ecosystem, and thus, the investment into things like tools, software and the like was small. ARM, on the other hand, is the world's most popular ISA. Many companies are pouring billions of dollars into the ecosystem, money spent to develop compilers, libraries, cores, etc. So I don't think it's entirely accurate to say Apple would be making proprietary hardware. And given that Windows runs on ARM, Apple could in principle also develop a version of Boot Camp for its ARM Macs. (I don't know whether they will, though.)
Originally Posted by P View Post
Upside:
I would add the following items to the list:
* The ability for more custom silicon for e. g. accelerating machine learning and other specialized functions.
* Reduce complexity on its Macs by absorbing the T-series chips into the SoC.
* In case Apple can't/doesn't want to offer a workstation-class chip right away, it can choose amongst several vendors that offer core-count and PCIe-lane monsters right now. There are 64-96-core chips out there with as many as 128 PCIe 4.0 lanes and support for up to 4 TB RAM or so. That should be enough for a while.
* Longer battery life. Pretty much my only gripe about my 16" MacBook Pro — apart from price — is that it is a step down in terms of battery life. And the battery can't get any larger (without violating TSA rules, i. e. you wouldn't be able to take it on a plane with you).
* It opens the door for multimode machines from Apple, that could morph from a touch-centric to mouse-centric device. This is pie-in-the-sky thinking at the moment, but it seems like a natural next step within the next, say, 10 years or so.

Neutral:
* Most people won't care, because they don't know: as long as you upgrade your software and you don't rely on a specialized or obsolete piece of software and/or virtual x86 environments, you won't notice. Drivers might be an issue. I mostly rely on standard software or cross-platform software that already runs on ARM (LaTeX, python, various command line tools), for example. (A quick way to check whether your own tools already run on ARM is sketched at the end of this post.)

Downsides:
* Driver support will initially suck. That was the only thing I noticed when I moved from PowerPC- to x86-based Macs, e. g. the driver/software for my hardware photo colorimeter would no longer work. Two, three years later, the company finally offered a free upgrade, but by that time, I had lost the colorimeter. My office printer is fine (it knows Postscript), but my office scanner (a Fujitsu ScanSnap) might be unsupported for a while.
* Not being able to run x86 software natively will suck for some people.
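To put the "already runs on ARM" point above in concrete terms, here is a minimal sketch (my own illustration, nothing Apple-specific) that simply asks the Python standard library which architecture the interpreter and OS report; the same kind of check applies to any cross-platform tool that ships ARM builds.

```python
# Minimal sketch: report which architecture the interpreter and OS are running on.
# Illustrative only; 'arm64' is what ARM-based Macs report, 'x86_64' is Intel.
import platform

def describe_host() -> str:
    machine = platform.machine()         # e.g. 'x86_64' on Intel Macs, 'arm64' on ARM Macs
    system = platform.system()           # e.g. 'Darwin' on macOS
    version = platform.python_version()  # interpreter version, e.g. '3.8.2'
    return f"{system} on {machine}, Python {version}"

if __name__ == "__main__":
    print(describe_host())
    if platform.machine() in ("arm64", "aarch64"):
        print("This interpreter is already a native ARM build.")
    else:
        print("This interpreter is an x86 build; an ARM Mac would need a native or emulated version.")
```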
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Jun 10, 2020, 09:42 PM
 
I just read a lengthy post on MR forums from a chap saying that given their current performance per watt compared to Intel, Apple's chips could scale up to workstation class pretty spectacularly. He also pointed out they have a bunch more engineering to do that doesn't apply to iPhones or iPads but we don't know what they've been up to in secret yet.

I doubt they have 64-core workstation CPUs based on A13X cores, but there do appear to be some reasonably obvious choices for the path they take.

They could transition the consumer Macs to ARM first. MacBook Air, maybe a new 12" or 14" MacBook, Mac Mini and an entry level iMac. Then they can gradually move the rest of the range over as their chips get up to speed.
One thing that could be interesting is whether they could include an x86 chip and an ARM chip in the same machine, for a while at least. Perhaps the MacBook Pro and iMac Pro could run x86 for heavy-lifting jobs, but when you're just checking your email or watching a movie, they sleep the x86 and switch over to the ARM. Sounds like it would take some doing software-wise, but it would be cool if they could pull it off.
I wonder if they could scale up some models sooner rather than later by just including several ARM chips. Four A13Xs in a MacBook Pro clocked 50-100% higher than in an iPhone could be pretty potent.
If Apple does have a workstation class ARM CPU on its way, could we see a new Xserve? Would offer something different to Dell and HP servers. Low power data centre workhorses? Could be interesting.

Finally, when they changed from PPC to Intel, they changed the names of the portables and the workstation (though not the iMac or Mac Mini). I wonder what new names we might see?
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 11, 2020, 01:00 AM
 
Originally Posted by Waragainstsleep View Post
I doubt they have 64 core workstation CPUs based on A13X cores but there does appear to be some reasonable obvious choices for the path they take.
Don’t doubt it: you can already buy Xeon-class ARM chips for servers and workstations from several vendors. While they are not based on the A13X for obvious reasons, they offer comparable or better performance than Intel Xeon and AMD parts for heavily multithreaded workloads. Ampere makes an 80-core CPU based on ARM’s Neoverse N1, a core specifically made for servers. It supports 4 TB RAM per socket and has 128 PCIe 4.0 lanes (which are twice as fast as PCIe 3.0). As a point of comparison, the 28-core CPU in the top-end Mac Pro supports up to 2 TB RAM and has 64 PCIe 3.0 lanes. Amazon has a 64-core version based on the same N1 core, which is purported to be extremely power efficient (1 W per core if run at full tilt). ThunderX3 is another ARM-based server CPU with 96 custom cores and “only” 64 PCIe 4.0 lanes (so double the throughput of a current Intel Xeon W). The Xeon costs $7k in bulk, by the way.

In principle, Apple could buy these chips now to tide them over until they can produce ARM chips for their entire Mac line-up.
Originally Posted by Waragainstsleep View Post
If Apple does have a workstation class ARM CPU on its way, could we see a new Xserve? Would offer something different to Dell and HP servers. Low power data centre workhorses? Could be interesting.
Yes, that’d be interesting. They could (and IMHO should) build their own custom servers for their datacenters, too.
Originally Posted by Waragainstsleep View Post
Finally, when the changed from PPC to Intel, they changes the name of the portables and the workstation (though not the iMac or Mac Mini). I wonder what new names we might see?
I really hope so. I never got to like MacBook (Pro). PowerBook was so much cooler.
I don't suffer from insanity, I enjoy every minute of it.
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Jun 11, 2020, 01:38 AM
 
I wish they’d bring back iBook/Mac, Powerbook/Mac. Made more sense.

MacBook still sounds stupid.
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Jun 11, 2020, 01:39 AM
 
Also, I don’t see them ditching x86 and discrete GPUs on Pro machines without a revolt.

They may think they can, and rely on iOS/iPad OS ports, but no.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 11, 2020, 04:47 AM
 
Originally Posted by Brien View Post
Also, I don’t see them ditching x86 and discrete GPUs on Pro machines without a revolt.
What makes you think discrete GPUs are incompatible with ARM? They are not.
Originally Posted by Brien View Post
They may think they can, and rely on iOS/iPad OS ports, but no.
What do you mean? Presumably, Apple has had all of its mac OS apps compiled on ARM for years and years now. I think the earliest rumor of an ARM-based Mac I can recall was a MacBook Air with an A4 or A5. (They did the same with Intel.) So you would not rely on iOS ports.
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Jun 11, 2020, 05:36 AM
 
Originally Posted by OreoCookie View Post
Don’t doubt it
When I say I doubt it, I mean I doubt they have their own in-house 64-core ARM chip ready to sell/demo right now. The main reason I doubt it is the new Mac Pro. If they announced a 64+ core ARM workstation chip next week, sales of the new MP would likely tank, and many buyers would be furious at having dropped tens of thousands on a machine that was about to be so drastically superseded so fast.


What about the idea of an OS being able to use two different CPU architectures at once? Is that absolute madness? Or to switch between them dynamically like they did with the GPUs in MacBook Pros?
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 11, 2020, 08:16 AM
 
Originally Posted by Waragainstsleep View Post
When I say I doubt it, I mean I doubt they have their own in-house 64-core ARM chip ready to sell/demo right now. The main reason I doubt it is because of the new Mac Pro. If they announced a 64+ core ARM workstation chip next week, sales of the new MP would likely tank, many would likely be furious at dropping tens of thousands on a machine that was going to be so drastically superseded so fast.
Do you want to rip off the bandaid on your hairy arm slowly, or do it in an instant? I get your argument, but while the current Mac Pro is the fastest Intel Mac you can get, it is not the fastest 1-socket x86 machine, nor the fastest machine CPU-wise if no holds are barred. For $6k-$7k AMD will sell you a 64-core Epyc that will run circles around Intel's 28-core CPU. IMHO that's the biggest ding against the Mac Pro: it wasn't the fastest machine of its kind, period — but the fault wasn't with Apple.

While Apple could use a third-party chip “today”, I don't expect that they will start the transition with the Mac Pro. Realistically, they will do it exactly like the x86 transition and start with their laptops. They could put the A12X or one of its successors into a MacBook Air and have a machine that is significantly faster than the current Intel-based MacBook Air.

To me the more interesting thing is what core count Apple goes for. My current machine has 8, and I would reckon that for mainstream users 6-8 cores will be plenty, especially if some of them are higher-efficiency cores. A current 8-core machine would probably be replaced with an 8+4-core machine. But at a certain point, adding more cores won't give you more performance. It's nice that there are 64-core CPUs out there, but keeping 64 cores fed is not easy. So even for a Mac Pro replacement, I don't know whether it makes sense for Apple to go for that many cores.
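As a rough illustration of why keeping 64 cores fed is hard, Amdahl's law caps the speedup when only a fraction of a workload parallelizes. The 90% figure below is made up for illustration, not a measurement:

```latex
% Amdahl's law: S(n) is the speedup on n cores when a fraction p of the work parallelizes.
% With p = 0.90 (illustrative): S(8) ~ 4.7x, S(64) ~ 8.8x.
% Going from 8 to 64 cores (8x the silicon) yields less than 2x the throughput here.
\[
  S(n) = \frac{1}{(1 - p) + \dfrac{p}{n}}, \qquad
  S(8) \approx 4.7, \quad S(64) \approx 8.8 \quad (p = 0.9)
\]
```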
Originally Posted by Waragainstsleep View Post
What about the idea of an OS being able to use two different CPU architectures at once? Is that absolute madness? Or to switch between them dynamically like they did with the GPUs in MacBook Pros?
For what purpose? While there have been examples of this historically (e. g. a Mac emulator for Amigas, x86 DOS cards for Macs and Acorn Risc PCs), I don't think this makes sense. Emulation will be fine to tide you over, because nowadays I don't expect it to be as difficult as it was before to switch to ARM. As I have said, all the big software houses have made their code pretty portable, I expect Adobe and Microsoft to release ARM versions of their software relatively quickly.

With GPU switching you really have a tangible advantage, it's a clumsier version of big and little cores sharing work.
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 11, 2020, 10:27 AM
 
One more thing: I read this post, which made me think that Apple will likely use this transition to get rid of older, deprecated technologies. As a result, the ARM transition will kill off some older software products even though that may not have anything to do with the ARM transition itself. OpenGL and OpenCL were mentioned as natural candidates.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Jun 11, 2020, 05:10 PM
 
Originally Posted by OreoCookie View Post
I would add the following items to the list:
* The ability for more custom silicon for e. g. accelerating machine learning and other specialized functions.
Maybe. It isn’t clear that this is something that needs to be in close proximity to the main CPU - it might as well be outside it.

* Reduce complexity on its Macs by absorbing the T-series chips into the SoC.
I don’t think they will do that. The T1/T2 chips remain powered, although at very low power, when the main CPU is off. Apple uses this trick in iPhones as well, with the M7 and its successors.

* In case Apple can't/doesn't want to offer a workstation-class chip right away, it can choose amongst several vendors that offer core count and PCIe-lane monsters right now. You have 64-96 core chips out there with as much as 128 PCIe 4.0 lanes and support for up to 4 TB RAM or so. That should be enough for a while.
Sure - but those are vendors selling tiny tiny volumes for a specialized market (cloud computing). I doubt Apple would rely on them. Still, they’re there, fair point.

* Longer battery life. Pretty much my only gripe about my 16" MacBook Pro — apart from price — is that it is a step down in terms of battery life. And the battery can't get any larger (without violating TSA rules, i. e. you wouldn't be able to take it on a plane with you).
Except I don’t think Apple would do that - I think they would make it thinner. They’re fine with 10 hours on their own made-up benchmark. More battery life absolutely, but I’m doubtful.

* It opens the door for multimode machines from Apple, that could morph from a touch-centric to mouse-centric device. This is pie-in-the-sky thinking at the moment, but it seems like a natural next step within the next, say, 10 years or so.
This is my worst nightmare. Now, I don’t think ARM does anything to bring that closer, but those multi mode machines are what I’m afraid of. I have an iPad and it is fine - I don’t want them to compromise the Mac even further to get it to be a multi-mode machine.

Originally Posted by OreoCookie View Post
What makes you think discrete GPUs are incompatible with ARM? They are not.
No, but they are, arguably, incompatible with Apple’s integrated GPUs. Apple uses GPUs based on tile-based deferred rendering. If they change Quartz to work with that, the current immediate mode discrete GPUs may not work. In any case, there will be more work to support both rendering modes.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 11, 2020, 07:12 PM
 
Originally Posted by P View Post
Maybe. It isn’t clear that this is something that needs to be in close proximity to the main CPU - it might as well be outside it.
It’d be more efficient if they put the accelerators on the same silicon. Plus, Apple could easily re-use e. g. the Neural Engines it puts into iPhones and iPads — or perhaps beef them up by increasing the number of “cores” (or whatever the functional blocks in the Neural Engine are called).
Originally Posted by P View Post
I don’t think they will do that. The T1/T2 chips remain powered, although at very low power, when the main CPU is off. Apple uses this trick in iPhones as well, with the M7 and its successors.
Perhaps I am wrong here, but I was under the impression that the Mx-series bit was part of the A1x-series SoC. Or are those on separate chips?
Originally Posted by P View Post
Sure - but those are vendors selling tiny tiny volumes for a specialized market (cloud computing). I doubt Apple would rely on them. Still, they’re there, fair point.
My point was more to show that not only could there be ARM-based CPUs that compete toe-to-toe with Intel Xeons, you can already buy them — or, in Amazon’s case, use them. You are right that these CPUs are optimized for server workloads, and I don’t think many people will need 64+ cores — or more than 32 cores, for that matter. Nevertheless, more memory bandwidth, massive PCIe bandwidth, ECC RAM: those are features that are available on ARM today.
Originally Posted by P View Post
Except I don’t think Apple would do that - I think they would make it thinner. They’re fine with 10 hours on their own made-up benchmark. More battery life absolutely, but I’m doubtful.
I’m fine with 10 hours — if I get 10 hours for normal workloads. Sure, when I turn down the screen brightness and quit all apps, I got 13 hours on my previous 13” MacBook Pro when working on the plane (dark environment, no wifi, no Bluetooth, etc.), but more like 7-9 hours under moderate workloads. On my 16” I get 5-6, I think. (I haven’t flown with it yet.) So if the difference between “TDP” and real-life TDP becomes zero again (apparently 125 W = 275 W ) and battery life becomes less workload-dependent, we might get longer real-world battery life and thinner machines.
Originally Posted by P View Post
This is my worst nightmare. Now, I don’t think ARM does anything to bring that closer, but those multi mode machines are what I’m afraid of. I have an iPad and it is fine - I don’t want them to compromise the Mac even further to get it to be a multi-mode machine.
I don’t know. It depends on the implementation. And perhaps my “dream” will be achieved by a sophisticated version of Hand-Off rather than what I awkwardly dubbed a “multi-mode machine”. My broader point is that having your entire line-up running on a single architecture may make things possible that we are not thinking of now.
Originally Posted by P View Post
No, but they are, arguably, incompatible with Apple’s integrated GPUs. Apple uses GPUs based on tile-based deferred rendering. If they change Quartz to work with that, the current immediate mode discrete GPUs may not work. In any case, there will be more work to support both rendering modes.
Stupid question: isn’t iOS based on a version of Quartz, too? Currently, I think you can use nVidia, AMD and Intel GPUs concurrently. While they work on the same rendering model, their architectures are quite different, too.

@P
If you were working for Apple and did not want to let this opportunity go to waste, what technologies would you deprecate? What changes to the architecture of mac OS would you make?
I don't suffer from insanity, I enjoy every minute of it.
     
ghporter
Administrator
Join Date: Apr 2001
Location: San Antonio TX USA
Status: Offline
Jun 11, 2020, 08:48 PM
 
Technical considerations aside, will ARM processors make enough of a difference in computing power and battery life that Joe Mac User will rush to throw down the cash for a new machine - with a new OS version that is incompatible with all of his existing software?

If you make my 13” MBP run like a workstation, I might be persuaded to buy new software for that level of power. If you make it run complex video apps (games, simulations, etc.) like a Mac Pro with a fantastically good video system, I might go for that too.

On the other hand, if you simply put out a machine that’s lighter and runs for another hour longer than a current laptop, it just isn’t worth it for me to go there. And don’t even try to sell me on how freakin’ thin the thing is...

Honestly, if they want to compete with ChromeBooks (as it would seem), where’s the “OS X-book”? Light, long battery life, and unconstrained by a software stovepipe...My ChromeBook is way more flexible than my iPad while being much lighter than even my pretty darn light MBP. I can see a new line that’s aimed at competing head-to-head with ChromeBooks. But for their main line of computers? Nah.

Glenn -----OTR/L, MOT, Tx
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 11, 2020, 11:44 PM
 
Originally Posted by ghporter View Post
Technical considerations aside, will ARM processors make enough of a difference in computing power and battery life that Joe Mac User will rush to throw down the cash for a new machine - with a new OS version that is incompatible with all of his existing software?
The revenue model for a lot of software is now based on subscriptions, so that removes some barriers that we used to have. When the PowerPC —> x86 transition happened, people did not always upgrade e. g. their Adobe suite of applications (because of the expense, and because they did not see enough value in the upgrade).
Originally Posted by ghporter View Post
If you make my 13” MBP run like a workstation, I might be persuaded to buy new software for that level of power. If you make it run complex video apps (games, simulations, etc.) like a Mac Pro with a fantastically good video system, I might go for that too.
That's quite an interesting question. Basically, Apple has many different reasonable points on the power-performance curve it can aim for. My new 16" is more powerful than the trashcan Mac Pro I used to have on my desk. The only thing the Mac Pro still has going for it is noise: it is quieter. It is getting hot here in Japan, so when I push my machine, I can hear the fan. With a lower-powered ARM CPU Apple could make a quieter laptop. (I'd also like to have a battery saver mode just like on my iPhone.)
Originally Posted by ghporter View Post
On the other hand, if you simply put out a machine that’s lighter and runs for another hour longer than a current laptop, it just isn’t worth it for me to go there. And don’t even try to sell me on how freakin’ thin the thing is...
TL;DR I think we are in for a nice performance boost with the ARM transition.

As a lower bound for the performance, we should look at the current iPad Pro, which has 4+4 cores. Its 4 fast cores are roughly as fast as what is in the current 13" MacBook Pro, and its four slower cores, while no match for the big ones, can still do some useful work. So even if the ARM-based 13" MacBook Pro sticks with the SoC of the iPad Pro, e. g. an A13X (based on iPhone 11-gen tech) or an A14X (based on iPhone 12-gen tech), I reckon even the fast cores alone will be quite a bit faster than whatever Intel has to offer.

But I personally think Apple will put the iPad Pro SoC in the successor to the MacBook Air and put something more high powered into the 13" Pro. One rumor I have read is that Apple has developed an 8+4-core chip for mobile Macs, and that would monster whatever is in the Intel-based 13" Pro — provided you can make use of that many cores. For the 16" this is a bit more difficult as here you also have discrete graphics. P raised a good point: is it possible to use a discrete GPU with Apple's integrated graphics (due to the differences in architecture)? While I still expect this 8+4-core chip to outperform Intel's 8-core notebook CPUs, we are reaching the point of diminishing returns when it comes to core count for even many more sophisticated workloads.
Originally Posted by ghporter View Post
Honestly, if they want to compete with ChromeBooks (as it would seem), where’s the “OS X-book”? Light, long battery life, and unconstrained by a software stovepipe...
I would love this: imagine a MacBook Air that starts at $500-$600. Perhaps you guys think I'm dreaming, but Apple makes a very good iPad for $300. Yes, it has a smaller screen and uses older tech, but a cheap-but-good entry-level machine would be great.
Originally Posted by ghporter View Post
My ChromeBook is way more flexible than my iPad while being much lighter than even my pretty darn light MBP. I can see a new line that’s aimed at competing head-to-head with ChromeBooks. But for their main line of computers? Nah.
I'm curious, why is your ChromeBook more flexible? Even if we bracket apps that benefit from touch- or pen-centric user interfaces, I can't see the appeal of web-based apps.
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Jun 12, 2020, 01:02 AM
 
Originally Posted by OreoCookie View Post
Do you want to rip off the bandaid on your hairy arm slowly or do it in an instant? I get your argument, but while the current Mac Pro is the fastest Intel Mac you can get, it is not the fastest 1-socket x86 machine nor the fastest machine CPU-wise if no holds are barred. For $6k-$7k AMD will sell you a 64-core Epyc, that will run circles around Intel's 28-core CPU. IMHO that's the biggest ding of the Mac Pro, it wasn't the fastest machine of its kind period — but the fault wasn't with Apple.
I feel like we're in agreement, but you sound like you're disagreeing with me. I agree the Mac Pro will be the last machine to go to ARM; that's what I was saying. They won't go from running zero Macs on ARM to starting with the most complex and mission-critical hardware. Even if the MP was a couple of years old, they wouldn't start there. As it is: "Yes, we made you wait 7 years, then sold you a machine for $13k+ that could be toasted by a ~$6k Threadripper before it even shipped. Yes, we also decided to screw you further by charging you an extra $500 to lay it on its side, or $400 for the castors off Steve Jobs' old sofa. But don't worry, we just made it even more obsolete right now than you reasonably expected it to be in 5 years." That's really not going to go down well.

Given the performance and cooling needs of the Threadrippers (and I assume EPYCs) compared to the Intel workstation CPUs, and the even better performance/watt of ARM, the current Mac Pro's cooling seems likely to be wildly over the top. All that time and effort designing it, they aren't going to throw it away a few months down the line.


Originally Posted by OreoCookie View Post
For what purpose? While there have been examples of this historically (e. g. a Mac emulator for Amigas, x86 DOS cards for Macs and Acorn Risc PCs), I don't think this makes sense. Emulation will be fine to tide you over, because nowadays I don't expect it to be as difficult as it was before to switch to ARM. As I have said, all the big software houses have made their code pretty portable, I expect Adobe and Microsoft to release ARM versions of their software relatively quickly.

With GPU switching you really have a tangible advantage, it's a clumsier version of big and little cores sharing work.
I guess I just thought this might have benefits if they did something immediate: shift to an A13X to save power on simpler tasks, shift back to the big 6-8 core Intel for heavy lifting when you need it. It's probably not worth the bother, as you say.


I do wonder whether everyone here is placing too much emphasis on core counts. I figure there are three or four categories of customer when it comes to cores:

1: 'Posh Office' machines. These make up probably 75% or more of Mac users. These users are doing things they could do with a ~$300 PC but in style. Surfing the web, email, MS Office, Netflix, Photos, music, not much else. Students, home users, kids, parents and grandparents (who have other machines for work), office admins, reception desks etc. These people barely benefit from a second core if indeed they do.

2: People running a VM. Mostly the above use but with Windows running one or two specific apps for work. These people will see a benefit to having an extra core or two but that's probably about it.

3: Light Photoshoppers. People doing pretty basic design or photo work. Maybe some light video editing. They will use cores if they have them but only sometimes and they aren't going to break the bank to get lots more. Having 6-8 is nice for them sometimes but probably not a deal breaker as long as they can check emails while something is rendering or encoding.

4: Hardcore users. These people need all the cores they can possibly get and will spend whatever they can afford to get them. I don't see that there is any customer who would say "32 cores is perfect for me, I don't need any more than that right now". You can either make do with 8 or you'll take every last core you can get. If there is anyone in between, no company is catering specifically to them. Because there isn't enough of them.

Someone brought up Chromebooks. Apple has no interest in competing with those. The profit in a Chromebook is in Google tracking the variations of your inside leg measurement with the weather. Apple makes money on hardware. If they wanted to build a cheap Mac, they could do so. I expect the entry level iMacs cost about $100 a unit to build at the moment with those goddamned shitty hard drives in them.
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 12, 2020, 07:07 AM
 
Originally Posted by Waragainstsleep View Post
I feel like we're in agreement but you sound like you're disagreeing with me. I agree the Mac Pro will be the last machine to go to ARM, that's what I was saying.
It wasn’t my intention to come across as combative. Sorry if I did, though.
I was honestly trying to ask how long Apple should take with this transition. The PowerPC-to-Intel transition was pretty flawless in my mind, and that took about a year, if memory serves. And to be honest, the most likely scenario is that Apple takes a page out of AMD’s playbook and couples chiplets to a memory/PCIe controller.
Originally Posted by Waragainstsleep View Post
Given the performance and cooling needs of the Threadrippers (and I assume EPYCs) compared to the Intel workstation CPUs, and the even better performance/watt of ARM, the current Mac Pro's cooling seems likely to be wildly over the top. All that time and effort designing it, they aren't going to throw it away a few months down the line.
Seeing as how they’ve painted themselves into a thermal corner, I’d rather Apple errs on the side of cooling overkill this time.
(Intel’s “TDP” recommendations get a bit ridiculous; they can be off by a factor of ~3 in terms of max power draw.)
Originally Posted by Waragainstsleep View Post
I guess I just thought this might have benefits if they did something immediate. Shift to an A13X to save power on simpler tasks, shift back to the big 6-8 core Intel for heavy lifting when you need it. Its probably not worth the bother as you say.
Except that the ARM cores Apple makes are already faster than Intel’s cores, so offloading work to slower cores wouldn’t make much sense — unless you are executing x86 code.
Originally Posted by Waragainstsleep View Post
I do wonder whether everyone here is placing too much emphasis on core counts. I figure there are three or four categories of customer when it comes to cores: [snip]
This is an excellent point. In my mind, having 4-6 cores should cover everyone’s needs. Do you need 4 cores for posh office machines? Usually not. But when I went from a 2-core 13” MacBook Pro to an 8-core Mac Pro as my primary machine at work, the difference was huge: even if 1-2 cores were pegged because, say, a Chrome tab hogged one core and Time Machine wanted another, the Mac Pro did not slow down at all. So an iPhone-like 2+2 configuration with a higher clockspeed may suffice for Macs. I’d upgrade the GPU, though.

I don’t think you’d need more than 8 fast cores at the moment. I’d rather Apple invested the left-over die area in the GPU and/or various accelerators. These can really make much more of a difference.
Originally Posted by Waragainstsleep View Post
4: Hardcore users. These people need all the cores they can possibly get and will spend whatever they can afford to get them. I don't see that there is any customer who would say "32 cores is perfect for me, I don't need any more than that right now". You can either make do with 8 or you'll take every last core you can get. If there is anyone in between, no company is catering specifically to them. Because there isn't enough of them.
To be honest, this is an excellent point, and something that is worth discussing for the eventual Mac Pro replacement. Just like with Apple letting you choose amongst Xeons that balance clockspeeds and core counts in different ways, you’d probably need that with an ARM-based Mac Pro as well. Even feeding 8 cores for most of us is not easy.
Originally Posted by Waragainstsleep View Post
Someone brought up Chromebooks. Apple has no interest in competing with those. The profit in a Chromebook is in Google tracking the variations of your inside leg measurement with the weather. Apple makes money on hardware. If they wanted to build a cheap Mac, they could do so. I expect the entry level iMacs cost about $100 a unit to build at the moment with those goddamned shitty hard drives in them.
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Jun 12, 2020, 11:50 AM
 
I don't know what it costs Apple to produce an A13X chip compared to buying an i5 or i7 from Intel, but could there be any value in building a dual-CPU machine? Given the thermal specs involved, it seems Apple could take the existing A13X, give it a heatsink from a current MBP (as opposed to the sliver of foil you get in an iPhone), with or without a fan, clock it up 50% or so without too much bother, and drop two of them onto one board for a mid-range iMac or MBP. That said, are these A13X SoCs suitable for dual-proc applications? Seems like such a machine would be an absolute beast with decent battery life.
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 12, 2020, 02:29 PM
 
Originally Posted by Waragainstsleep View Post
I don't know what it costs Apple to produce an A13X chip in terms of buying an i5 or i7 from Intel but could there be any value in building a dual CPU Machine?
It is safe to say that Apple's chips are much cheaper for them than Intel's.

Regarding a dual-CPU machine: unless Apple has kept a big secret from everyone, the A12X (I don't think there is an A13X yet) does not contain the logic for symmetric multiprocessing. And if it doesn't, it'd be cheaper and simpler to simply add those cores on a single chip/package. That's what AMD does. The A12X has 4+4 cores, and you are proposing 8+8. If the rumor mill is to be believed, Apple will build an 8+4-core chip, which falls in between. (Probably having 4 small cores is sufficient …)

But looking at the landscape out there, in principle Apple can scale up to 64+ cores if it wanted to. It is all a matter of speccing its CPUs in the way it thinks best fits the corresponding markets. For mainstream customers, 4+4 cores is plenty, especially if Apple can up the clockspeed. Adding more cores makes the chip faster for some workloads; raising clockspeeds makes it faster for all workloads. Even for the Pro market, going beyond 8+4 cores will have quickly diminishing returns. So even on an iMac, I don't think Apple needs to aim higher than this. Instead, they should moderately raise clockspeeds and add specialized accelerators where necessary. A beefy Neural Engine will be much faster and much more efficient at machine learning than running the same workload on the CPU or GPU cores.
Originally Posted by Waragainstsleep View Post
Given the thermal specs involved it seems Apple could take the existing A13X, put a heatsink from a current MBP (as opposed to the sliver of foil you get in an iPhone), with or without a fan, and clock them up 50% or so without too much bother, and drop two of them onto one board for a mid range iMac or MBP. That said, are these A13X SoCs suitable for dual proc applications? Seems like such a machine would be an absolute beast with decent battery life.
I don't think it is as easy as taking existing chips (apart from the MacBook Air, obviously). As I said above, dual-processor configurations should be entirely unnecessary: nowadays you can get 64+ cores from a single socket, so unless your processing needs are truly extraordinary, there is zero benefit to going to a dual-socket config. What configuration Apple will go with is interesting to speculate about, and IMHO it is determined by Apple's goals.

MacBook Air: I would expect Apple to simply re-use the iPad Pro variant of their SoC. That would eliminate the fan from the machine (which would be very appealing to me), have much lower power consumption than the current Intel CPUs and go from 2 or 4 cores to 4+4. The performance boost will be significant. It will stick to USB-C rather than Thunderbolt.

MacBook Pros: I think 8+4 cores is sufficient. That is 4 extra cores, which, granted, are slower, but they are also more power efficient. They will still retain active cooling and more powerful memory controllers and run at moderately higher clocks. Also the built-in GPU will be beefed up. The 16" should retain discrete graphics. Both models feature Thunderbolt rather than just USB-C. This means you need to have support for sufficiently many PCIe lanes on your chip. In my fever dreams I see ECC RAM, but that's not going to happen.

iMac: 8+4 cores is probably sufficient here as well. Apple could pair it with an external graphics card and perhaps increase clocks compared to the MacBook Pro versions. But other than that, I don't think average users need more than 12 cores. Even 8 is more than plenty. Perhaps Apple might offer a budget config with 4+4 cores. Thunderbolt should be present for the higher-end model, but perhaps not the budget model.

iMac Pro: Now things are getting interesting. Powerful discrete graphics is a must, as is ECC RAM. So you could add a much faster memory controller (more channels) and plenty of PCIe channels for Thunderbolt connectivity. Perhaps here coupling two 8+4 chiplets to a memory/IO chip like AMD does would be a way to offer up to 24 cores. 16 fast cores should be competitive with 18 Intel cores. And you'd have 8 slower, high-efficiency cores to spare for “background tasks”.

Mac Pro: This is going to be a big question mark. In principle, the sky is the limit. Of course, there will be lower-end configs that share the CPU of the iMac Pro, but offer expandability. Seeing as the competition sells 64-core CPUs right now, I think Apple should shoot for 64 fast cores. So perhaps you have 16-, 32- and 64-core models. Perhaps they will offer a 24-core model. The RAM ceiling should be >1.5 TB.

Apart from this, I see a role for more specialized hardware. Apple's Afterburner card is very intriguing, especially if someone figures out other applications for it. (It consists of FPGAs, so it is a reprogrammable chip. Currently it only accelerates certain video codecs, but in principle it could be used for other things.)
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Jun 12, 2020, 05:42 PM
 
Originally Posted by OreoCookie View Post
It’d be more efficient if they put the accelerators on the same silicon. Plus, Apple could easily re-use e. g. the Neural Engines it puts into iPhones and iPads — or perhaps beef them up by increasing the number of “cores” (or whatever the functional blocks in the Neural Engine are called).
More efficient as in lower latency, sure. Less efficient in terms of yield on a big chip. With everyone and their dog moving to some sort of chiplet design at least, I question if there is any value here.

Perhaps I am wrong here, but I was under the impression that the Mx-series bit was part of the A1x-series SoC. Or are those on separate chips?
I had to look this up. The M7 and M8 at least were separate chips. These days they appear to be integrated on the CPU and just power gated, so OK, you could do that and put the T3 or whatever into the Apple CPU.

I’m fine with 10 hours — if I get 10 hours for normal workloads. Sure, when I turn down the screen brightness and quit all apps, I got 13 hours on my previous 13” MacBook Pro when working on the plane (dark environment, no wifi, no Bluetooth, etc.), but more like 7-9 hours under moderate workloads. On my 16” I get 5-6, I think. (I haven’t flown with it yet.) So if the difference between “TDP” and real-life TDP becomes zero again (apparently 125 W = 275 W ) and battery life becomes less workload-dependent, we might get longer real-world battery life and thinner machines.
I get that, but I don’t think Apple cares about better battery life than they have. They aim for 10 hours and they think they have met that now. If they make a more efficient CPU, they can reduce the battery - that is how they design things.

The 125W = 275W is because motherboard manufacturers cheat (they change the power limit). If Intel says the TDP is 125W and you don’t cheat, it is 125W.

Stupid question: isn’t iOS based on a version of Quartz, too? Currently, I think you can use nVidia, AMD and Intel CPUs concurrently. While they work on the same rendering model, their architectures are quite different, too.
All iOS devices use TBDR - all Macs use immediate rendering (IR). If you have an app that programs shaders (any game, at least), this is a fundamental difference. Making a game work on both TBDR and IR is in all respects the same as porting it to a completely new architecture. The difference between AMD and NVidia is tiny in comparison. NVidia did a lot of work to implement a tile-based immediate renderer in Maxwell, because they could not just get a game written for immediate rendering to work on TBDR.
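For anyone who hasn't dealt with the two models, below is a deliberately toy sketch (nothing to do with Metal or any real driver; the "triangles" are just hard-coded pixel lists) of the ordering difference described above: immediate mode shades each draw call as it arrives, while a tile-based deferred renderer bins geometry into tiles first and then shades each tile once.

```python
# Toy illustration only: contrasts the ordering of immediate-mode rendering (IR)
# with tile-based deferred rendering (TBDR). Real GPUs are vastly more complex.

TILE = 4  # tile edge length in pixels (made-up number)

# Two "triangles", each just a list of covered pixels (made-up data, they overlap at (2, 2)).
triangles = [
    ("A", [(1, 1), (2, 2), (3, 1), (2, 1)]),
    ("B", [(2, 2), (6, 6), (6, 1), (5, 2)]),
]

def immediate_mode(tris):
    """Shade fragments draw call by draw call; overlapping pixels get shaded twice (overdraw)."""
    framebuffer, shaded = {}, 0
    for name, pixels in tris:            # one pass per draw call, in submission order
        for px in pixels:
            framebuffer[px] = name       # later draws simply overwrite earlier ones
            shaded += 1
    return framebuffer, shaded

def tile_based_deferred(tris):
    """Bin all geometry into tiles first, then shade each surviving fragment once per tile."""
    bins = {}
    for name, pixels in tris:            # pass 1: binning, no shading yet
        for px in pixels:
            tile = (px[0] // TILE, px[1] // TILE)
            bins.setdefault(tile, {})[px] = name   # later geometry overwrites earlier fragments
                                                   # (standing in for depth resolution)
    framebuffer, shaded = {}, 0
    for tile, fragments in bins.items(): # pass 2: shade one tile at a time (fits in on-chip memory)
        for px, name in fragments.items():
            framebuffer[px] = name
            shaded += 1
    return framebuffer, shaded

if __name__ == "__main__":
    print("IR fragments shaded:  ", immediate_mode(triangles)[1])    # includes the overdraw
    print("TBDR fragments shaded:", tile_based_deferred(triangles)[1])
```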

@P
If you were working for Apple and did not want to let this opportunity go to waste, what technologies would you deprecate? What changes to the architecture of mac OS would you make?
The question to ask is - why the Mac? If iOS devices exist and are faster and cheaper, why make a Mac? What features make it essential? Clearly not CPU performance. In my view, it must be that it is open to let us do what we want, run the programs we want, store files as we like, develop programs, etc. I don’t know who said it, but the complexity of the Mac lets the iOS devices be simple. But those complex things are all the things Apple seems hell bent on taking away now. It is harder than ever to run a program that Apple didn’t sign, and you can forget about drivers. Their OS says that “not updating is deprecated”, indicating that they’re going to start forcing updates - and those updates keep taking away features. You can’t even access your own files, in your home directory, in Catalina without registering an exception.

So then the purpose of the Mac is not to be the complex device that makes iOS devices easier. Then I don’t know what it is.

As for what I would do: I would copy the 68k to PPC transition and port the OS as it stands to ARM. Make installing the x86 libraries optional - make them download if you need them - but they should remain available. Simple x86 emulation isn’t hard. It doesn’t need to be fast, and it doesn’t need to support every single extension Intel has come up with. It just needs to work - and since the OS itself will be running natively this time as it didn’t back then, the hit will be less anyway.
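To make the "simple x86 emulation isn't hard" claim above a bit more concrete, here is a toy fetch-decode-execute loop for an invented three-instruction machine (not x86, and nothing like Rosetta internally); the point is only that the basic structure of an emulator is straightforward, and speed comes from translation and caching layered on top.

```python
# Toy fetch-decode-execute loop for an invented 3-instruction machine.
# Purely illustrative of how an emulator is structured; not x86 and not Rosetta.

def run(program, registers=None):
    regs = {"r0": 0, "r1": 0, "pc": 0}
    if registers:
        regs.update(registers)
    while regs["pc"] < len(program):
        op, *args = program[regs["pc"]]   # fetch + decode
        regs["pc"] += 1
        if op == "load":                  # load an immediate value into a register
            reg, value = args
            regs[reg] = value
        elif op == "add":                 # add the source register into the destination
            dst, src = args
            regs[dst] += regs[src]
        elif op == "halt":
            break
        else:
            raise ValueError(f"unknown opcode: {op}")
    return regs

if __name__ == "__main__":
    # r0 = 40; r1 = 2; r0 += r1
    demo = [("load", "r0", 40), ("load", "r1", 2), ("add", "r0", "r1"), ("halt",)]
    print(run(demo))   # {'r0': 42, 'r1': 2, 'pc': 4}
```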

The rendering mode is an issue. Perhaps it would be best to license a GPU from someone (the obvious answer is AMD) and integrate it and don’t change the rendering model. If not, go all in on TBDR and make your GPU so powerful that you can truly wow with it, and challenge people to port their apps and games to it.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Jun 12, 2020, 09:27 PM
 
I do fear that pretty soon Apple will restrict macOS to App Store only, and make it impossible to disable Gatekeeper, FileVault, SIP, etc.

To be fair, Windows 10 isn’t much better; I foresee a lot of new Linux users if these trends continue.

And IMO, I agree with regard to GPUs. Unless Apple can build something in-house that would knock the socks off a Tesla, Quadro, or other comparable high-end graphics card, I think it would be a major setback for pro users to abandon discrete graphics.

If they REALLY want to get some goodwill to offset the switch, make nice with Nvidia.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 12, 2020, 11:19 PM
 
Originally Posted by P View Post
More efficient as in lower latency, sure. Less efficient in terms of yield on a big chip. With everyone and their dog moving to some sort of chiplet design at least, I question if there is any value here.
That all depends on OS-level support of this specialized hardware. But it is definitely an area where you could choose to invest die area and power in. And Apple’s tight integration between software and hardware could make that easier.
Originally Posted by P View Post
I get that, but I don’t think Apple cares about better battery life than they have. They aim for 10 hours and they think they have met that now. If they make a more efficient CPU, they can reduce the battery - that is how they design things.
The problem I have with Apple’s quoted battery life figures is that they were like official EU fuel efficiency numbers – they did not reflect battery life in real world usage. No doubt one of the problems is the large power gamut of Intel CPUs. On my iPads on the other hand, the quoted battery life numbers have been very close to lived reality. So I’d want 10 hours of “average” use first — and then Apple can feel free to make the battery smaller and my machine lighter.
Originally Posted by P View Post
The 125W = 275W is because motherboard manufacturers cheat (they change the power limit). If Intel says the TDP is 125W and you don’t cheat, it is 125W.
That is not the situation as I understand it. Intel knows full well what it is doing and I was under the impression that Intel actually specifies how much power motherboards have to be able to supply to the CPU, and it is a multiple of the official TDP. Intel e. g. specifies “tau”, which is the time the CPU can spend in “TDP overdrive” (PL2 if memory serves, which is 2x the “TDP”). So if you add some padding to 2 x max TDP of the supported CPUs, you arrive at 275-300 W.
Originally Posted by P View Post
All iOS devices use TBDR - all Macs use immediate rendering (IR). If you have an app that programs shaders (any game, at least), this is a fundamental difference.
I think I understand the differences in architectures (my brother used to have a Kyro 2 aeons ago). My rather simple question was whether Quartz, the graphical layer of OS X, supports tile-based deferred rendering already or if this is a first. If it is the latter, we may have to deal with teething issues here.
Originally Posted by P View Post
The question to ask is - why the Mac? If iOS devices exist and are faster and cheaper, why make a Mac? What features make it essential? Clearly not CPU performance. In my view, it must be that it is open to let us do what we want, run the programs we want, store files as we like, develop programs, etc. I don’t know who said it, but the complexity of the Mac lets the iOS devices be simple. But those complex things are all the things Apple seems hell bent on taking away now.
Platforms should deprecate and remove old technologies to avoid cruft. Otherwise you end up with Windows. The art is in the speed at which you remove or replace APIs and the like with something new. Apple’s OpenGL implementation is ancient (it is still 3.2 and hasn’t adopted all of the 4.x features, and never will), so removing it seems a question of time.

Reading about some of the things Apple did change with newer versions of macOS (checking scripts you execute for malware, which introduces delays, Unix tools getting into arguments with SIP, etc.), it strikes me that a lot of these pain points come from security features being retrofitted to an existing OS. The ARM transition would allow for more clean-sheet solutions.
Originally Posted by P View Post
So then the purpose of the Mac is not to be the complex device that makes iOS devices easier. Then I don’t know what it is.

As for what I would do: I would copy the 68k to PPC transition and port the OS as it stands to ARM. Make installing the x86 libraries optional - make them download if you need them - but they should remain available. Simple x86 emulation isn’t hard. It doesn’t need to be fast, and it doesn’t need to support every single extension Intel has come up with. It just needs to work - and since the OS itself will be running natively this time as it didn’t back then, the hit will be less anyway.
Emulation should be supported for a while. I think the biggest problems will be drivers, though, and I am not sure whether emulation will work there. (When going from PowerPC to x86, drivers were not covered by Rosetta; at least in my case my photo colorimeter was the only (temporary) casualty.)
Originally Posted by P View Post
The rendering mode is an issue. Perhaps it would be best to license a GPU from someone (the obvious answer is AMD) and integrate it and don’t change the rendering model. If not, go all in on TBDR and make your GPU so powerful that you can truly wow with it, and challenge people to port their apps and games to it.
This is a sensible suggestion. We know for a fact that Apple can scale up its CPU cores to match the best x86 cores out there. With GPUs there is a giant gulf between nVidia’s and AMD’s top-end GPUs and Apple’s fastest GPUs. Licensing AMD’s GPUs could be a sensible option. I reckon they’ll want to stick with AMD for all computers that sport discrete GPUs anyway, so they could use the exact same driver here.
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jun 13, 2020, 07:40 AM
 
Does the ARM version of Windows rely on all the 3rd party apps being compiled for ARM?
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 13, 2020, 09:46 AM
 
Originally Posted by Waragainstsleep View Post
Does the ARM version of Windows rely on all the 3rd party apps being compiled for ARM?
No, but AFAIK Microsoft only includes emulation for 32 bit x86 apps. I don't know whether this is a limitation of the Qualcomm ARM chips Microsoft uses or a matter of software.
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jun 13, 2020, 02:33 PM
 
Since Windows 10 doesn't prohibit the use of 32-bit apps like Catalina does, I don't suppose that matters too much.
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 13, 2020, 11:59 PM
 
Originally Posted by Waragainstsleep View Post
Since Windows 10 doesn't prohibit the use of 32-bit apps like Catalina does, I don't suppose that matters too much.
I don't use Windows, much less Windows on ARM, but the lack of 64 bit support seems to be a major pain point. (Keep in mind that Microsoft is missing technologies like fat binaries that allow you to distribute executables for several chip architectures at the same time.)

I would expect that Apple includes emulation, and since it has kicked 32 bit apps to the curb, that must mean 64 bit emulation.
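To make concrete what I mean by fat binaries: a universal binary is just a small index in front of several per-architecture executables. Here is a rough sketch that lists the slices; the layout and constants are from my recollection of <mach-o/fat.h>, so treat them as an assumption rather than gospel:

[code]
import struct

# Rough sketch: list the per-architecture slices of a universal ("fat")
# Mach-O binary. Assumed layout (from memory of <mach-o/fat.h>): a
# big-endian fat_header (magic, nfat_arch) followed by fat_arch records
# (cputype, cpusubtype, offset, size, align).
CPU_NAMES = {0x01000007: "x86_64", 0x0100000C: "arm64"}

def list_slices(path):
    with open(path, "rb") as f:
        magic, nfat = struct.unpack(">II", f.read(8))
        if magic != 0xCAFEBABE:           # FAT_MAGIC; this sketch ignores the 64-bit variant
            return "not a fat binary"
        slices = []
        for _ in range(nfat):
            cputype, _sub, offset, size, _align = struct.unpack(">iiIII", f.read(20))
            slices.append((CPU_NAMES.get(cputype, hex(cputype)), offset, size))
        return slices

# e.g. list_slices("/usr/bin/true") on a Mac that ships universal binaries
[/code]

The loader just picks the slice that matches the CPU, which is why the same app bundle could run on PowerPC and Intel back then, and presumably on Intel and ARM this time around.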
I don't suffer from insanity, I enjoy every minute of it.
     
ghporter
Administrator
Join Date: Apr 2001
Location: San Antonio TX USA
Status: Offline
Reply With Quote
Jun 14, 2020, 04:32 PM
 
Originally Posted by OreoCookie View Post
I'm curious, why is your ChromeBook more flexible? Even if we bracket apps that benefit from touch- or pen-centric user interfaces, I can't see the appeal of web-based apps.
USB, SD card storage, and longer battery life are a few points where the ChromeBook beats the iPad.

On the other hand, I'm limited with the ChromeBook to cloud storage with Google, rather than either iCloud or One Drive (I use both of these).

So for me it's all what I'm using the device for. At work, several folks that need computer access but aren't expected to stay at a desk use ChromeBooks for pretty much everything they do.

Glenn -----OTR/L, MOT, Tx
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jun 14, 2020, 08:44 PM
 
I'm wondering if ARM across the range will end up with MacOS rebranded as iOS Pro. Then maybe the MacBook Pro becomes the ProBook and the iMac/Pro becomes iDesk/iPro. Then Mac Pro becomes ProDesk or maybe ProStation. Mac Mini becomes iServer and a new Xserve is the ProServe. Cue a whole new set of very silly riots....
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 14, 2020, 09:52 PM
 
Originally Posted by ghporter View Post
USB, SD card storage, and longer battery life are a few points where the ChromeBook beats the iPad.
I'm surprised to hear battery life mentioned; I can get through an entire workday on a charge no problem with my iPad, so that's “long enough”. How much longer battery life would you want to have?
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 14, 2020, 09:59 PM
 
Originally Posted by Waragainstsleep View Post
I'm wondering if ARM across the range will end up with MacOS rebranded as iOS Pro.
I don't think so. I think Apple will want to keep the lines separate and probably will stick to the successful PowerPC -> x86 transition playbook: Apple kept the visual design of all their machines the same; from the outside, the Mac Pro and the PowerMac G5 looked virtually identical (unless you were specifically looking for the small differences). Ditto for PowerBooks -> MacBook Pro. Most customers couldn't tell the difference — which was the point IMO.
Originally Posted by Waragainstsleep View Post
Then maybe the MacBook Pro becomes the ProBook and the iMac/Pro becomes iDesk/iPro. Then Mac Pro becomes ProDesk or maybe ProStation. Mac Mini becomes iServer and a new Xserve is the ProServe. Cue a whole new set of very silly riots....
Overall, these are some great naming ideas.

I like ProBook. What would you call the smaller machine, iBook? (I love the name iBook, that brings back great memories. My iBook 12" G3 800 was and still is my favorite Mac.)

iMac can stay, that's an iconic, great name in my book. Mac Pro is alright, too, but if we have a ProBook, then it should be the Pro Mac.

I also like your idea to emphasize the server aspect of the Mac mini.
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jun 15, 2020, 06:03 AM
 
I've always thought Apple could do some really awesome stuff with the Mac Mini if they were just a little more interested in offering some killer small business server apps.

For example, being able to run a private FaceTime and/or iMessage server. In my head, this piece of software would allow people to log their personal accounts into it as guests of a sort, as well as creating accounts on private domains so small companies (or large ones for that matter) could maintain control of their own secure communications.
By logging in with your iCloud/iMessage/FaceTime account, the iServer or ProServe would act as an intermediary relay client, so as far as Apple sees it, it's a client, but your other devices in the house, office or pocket would be able to connect to it on the local network. So say there are four of you living in the family home, all logged into the iServer by your router. A call comes in for Alice via FaceTime. Alice is in the kitchen where she has her phone, her Mum's laptop is on the table where she's working, and there's an old iPad stuck to a cupboard for recipes and such. She can pick whichever device she wants to answer the call. It should leverage Face ID and/or Touch ID to authorise receiving the call. Useful in a home, super handy in an office, as you now have a PBX for video chat.
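Roughly what I'm picturing, as a hypothetical sketch (none of this exists, the names are made up, and the real thing would obviously sit on top of Apple's actual FaceTime infrastructure):

[code]
# Hypothetical sketch of the relay idea: the iServer keeps a registry of
# devices per account on the local network and fans an incoming call out
# to all of them; whichever device accepts first takes the call.
class Device:
    def __init__(self, name):
        self.name = name

    def ring(self, call):
        print(f"{self.name}: incoming {call}")

class RelayServer:
    def __init__(self):
        self.devices = {}                         # account -> local devices

    def register(self, account, device):
        self.devices.setdefault(account, []).append(device)

    def incoming_call(self, account, call):
        # In the real product this would only complete after Face ID / Touch ID
        # on whichever device answers; the server then bridges that device
        # to the outside FaceTime session.
        for device in self.devices.get(account, []):
            device.ring(call)

server = RelayServer()
server.register("alice", Device("Alice's phone (kitchen)"))
server.register("alice", Device("Mum's laptop"))
server.register("alice", Device("recipe iPad"))
server.incoming_call("alice", "FaceTime call from Bob")
[/code]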

The iServer would also cache iTunes content for everyone, there would be scope for adding camera accessories for AppleTVs, and using Apple devices to create things like smart mirrors so you could answer a call at your dressing table like in a sci-fi movie. There is room to expand on Profile Manager a lot. Location tracking would be one killer feature for me. Proper MDMs are priced for multinationals. Crazy expensive for a small business.

Such a product might seem like a waste to Apple since they abandoned the small business server market already, but I think it would give a lot of households an excuse to buy a bunch more Apple kit than they already do if Apple fleshed out its product lines as indicated. And it would work for small businesses, and in some cases big ones too.
I have plenty of more important things to do, if only I could bring myself to do them....
     
Thorzdad  (op)
Moderator
Join Date: Aug 2001
Location: Nobletucky
Status: Online
Reply With Quote
Jun 15, 2020, 08:45 AM
 
Here's an interesting piece over on TidBITS about the ARM transition. Unlike with most websites, definitely do read the comments. There are some thoughtful posts concerning how this might affect developers and the science/research fields, both of which have seen Macs become the platform of choice.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jun 15, 2020, 10:58 AM
 
Originally Posted by OreoCookie View Post
The problem I have with Apple’s quoted battery life figures is that they were like official EU fuel efficiency numbers – they did not reflect battery life in real world usage. No doubt one of the problems is the large power gamut of Intel CPUs. On my iPads on the other hand, the quoted battery life numbers have been very close to lived reality. So I’d want 10 hours of “average” use first — and then Apple can feel free to make the battery smaller and my machine lighter.
Oh I agree, certainly - but I don't think this change will lead to better battery life, because better battery life beyond 10 hours (by Apple's metric) comes below "being thin enough to slice cheese" on Apple's priority list.

That is not the situation as I understand it. Intel knows full well what it is doing and I was under the impression that Intel actually specifies how much power motherboards have to be able to supply to the CPU, and it is a multiple of the official TDP. Intel e. g. specifies “tau”, which is the time the CPU can spend in “TDP overdrive” (PL2 if memory serves, which is 2x the “TDP”). So if you add some padding to 2 x max TDP of the supported CPUs, you arrive at 275-300 W.
Close, but not exactly. By default, the chip clock speed is limited by both a max clock (for a certain number of active cores) and a power limit. The idea is that if you're running simple integer additions, you get the max turbo clock, but if you're running 256-bit vector FMAs, you get limited by the power limit. PL1 is the long-term average power limit - it is equal to the TDP. PL2 is the "turbo boost" power draw, which is 25% higher than PL1 in every case I've checked (with the caveat that I haven't bothered reading the specs for the desktop chips in a while, but they were 25% higher for Skylake/Kaby Lake at least, and they still are on the laptop chips). Tau is the length of time that the chip can stay in PL2. PL3 and PL4 are other power limits that govern how much power the chip can draw for shorter times, even a single cycle, to avoid turning the VRMs into small puffs of blue smoke.

What motherboard manufacturers do is change PL1, PL2 and tau so that these power limits are never what constrains the actual clock - the CPU will run right up to its turbo boost clock limit. At this point, you might very well see the power draw hit 275W if you run code that is heavy enough. If you run light code, you will stay comfortably below the stated TDP.
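If it helps, here is a back-of-the-envelope model of that behaviour. It uses a simplified "turbo budget" instead of the real moving-average controller, and the numbers are invented, so treat it purely as an illustration:

[code]
# Simplified PL1/PL2/tau model: spend an energy budget above PL1 while it
# lasts, then settle at PL1. The real controller uses a moving average;
# all numbers here are made up for illustration.
def simulate(demand_w, pl1=125.0, pl2=156.0, tau=28.0, dt=1.0):
    headroom = (pl2 - pl1) * tau              # joules available above PL1
    trace = []
    for d in demand_w:                        # requested package power per step
        cap = pl2 if headroom > 0 else pl1
        p = min(d, cap)
        headroom = min(headroom + (pl1 - p) * dt, (pl2 - pl1) * tau)
        trace.append(p)
    return trace

heavy = [200.0] * 120                         # two minutes of AVX-heavy load
print(simulate(heavy)[0], simulate(heavy)[-1])        # 156.0 then 125.0: the spec behaviour
print(simulate(heavy, pl2=4096.0, tau=9999.0)[-1])    # limits lifted: stays at 200.0
[/code]

The last line is the motherboard "cheat" in a nutshell: raise PL2 and tau far enough and the power limit simply never kicks in.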

So what is Intel to do here? If they limit the max turbo clock, they will punish users who don't cheat the power limits while running light code. Lock down the power limit? That seems to be the only thing that would work, but you know that the same enthusiasts who are currently whining about the power draw will go ape if you do that, because they think they can handle the increased power draw. They could also conceivably put the squeeze on motherboard manufacturers, but that risks those manufacturers focusing on AMD instead.

So they stay quiet, the same way they did when manufacturers misreported the base clock to trick locked CPUs to overclock. It may be dishonest, but they don't lie, exactly - their chips do exactly what they say when you run them according to specs. If you ignore those specs, you're on your own.

I think I understand the differences in architectures (my brother used to have a Kyro 2 aeons ago). My rather simple question was whether Quartz, the graphical layer of OS X, supports tile-based deferred rendering already or if this is a first. If it is the latter, we may have to deal with teething issues here.
I can not find any evidence of Quartz running on top of a TBDR GPU ever. Maybe Apple built that support years ago in anticipation of this, but in either case, I'm not super worried about Quartz. I'm worried about all the apps that use the GPU. Either those apps need to support both modes, OR Apple can't use their PowerVR-derived GPUs OR discrete GPUs go the way of the dodo.

Platforms should deprecate and remove old technologies to avoid cruft. Otherwise you end up with Windows. The art is in the speed at which you remove or replace APIs and the like with something new. Apple’s OpenGL implementation is ancient (it is still 3.2 and hasn’t adopted all of the 4.x features, and never will), so removing it seems a question of time.
That reasoning makes some sense in an era when I can choose to update or not, but increasingly, I can't. Everyone keeps pushing updates more and more aggressively, and Apple has signaled that they will also start forcing updates. If I upgrade my Mac to Catalina, I lose access to a lot of apps because Apple has decided that they don't want to bother compiling their APIs for 32-bit x86 anymore. This, along with a couple of other lost features and a general warning from the community that Catalina is unusable garbage, means that I won't be installing it. Because why should I? It doesn't bring any new features that I'm interested in. If Apple decides to just start updating my Mac without me agreeing to it, that means they're removing features I paid for. Weirdly enough, I don't want that.

Reading about some of the things Apple did change with newer versions of macOS (checking scripts you execute for malware, which introduces delays, Unix tools getting into arguments with SIP, etc.), it strikes me that a lot of these pain points come from security features being retrofitted to an existing OS. The ARM transition would allow for more clean-sheet solutions.
Solutions to a problem I don't have. MacOS is supposed to be about me running whatever I want on my box, because if I can't, I might as well get an iPad next time. Other than the RAM (which Apple keeps skimping on in iPads for some reason) the hardware is good enough.

Emulation should be supported for a while. I think the biggest problems will be drivers, though, and I am not sure whether emulation will work there. (When going from PowerPC to x86, drivers were not covered by Rosetta; at least in my case my photo colorimeter was the only (temporary) casualty.)
I personally think that there will be no more drivers (in the sense of kexts) at all. It's remarkable that an OS that tries to prevent me from accessing files that I have stored on my desktop allows kexts at all.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 15, 2020, 08:12 PM
 
Originally Posted by P View Post
Oh I agree, certainly - but I don't think this change will lead to better battery life, because better battery life beyond 10 hours (by Apple's metric) comes below "being thin enough to slice cheese" on Apple's priority list.
I have the impression that Apple has gotten a little better in the last few years; their “standard cycle” has become closer to “average” use. As far as I can tell, the vast power gamut of Intel’s CPUs still contributes to the remaining discrepancy, though.
Originally Posted by P View Post
Close, but not exactly. By default, the chip clock speed is limited by both a max clock (for a certain number of active cores) and a power limit. The idea is that if you're running simple integer additions, you get the max turbo clock, but if you're running 256-bit vector FMAs, you get limited by the power limit. PL1 is the long-term average power limit - it is equal to the TDP. PL2 is the "turbo boost" power draw, which is 25% higher than PL1 in every case I've checked (with the caveat that I haven't bothered reading the specs for the desktop chips in a while, but they were 25% higher for Skylake/Kaby Lake at least, and they still are on the laptop chips). Tau is the length of time that the chip can stay in PL2. PL3 and PL4 are other power limits that govern how much power the chip can draw for shorter times, even a single cycle, to avoid turning the VRMs into small puffs of blue smoke.
I don’t think that contradicts what I wrote before: Intel changed what TDP means in a substantial way. We are not talking about adding a few % around the edges; PL2 allows double the power draw for a limited period of time. Clearly, overclockers and enthusiasts will want to fiddle with tau and the voltages and run in higher-power states for longer. But it is Intel that supports PL2 in the first place, and IMHO it is disingenuous to pretend that what Intel dubs TDP still means what TDP used to mean. Just imagine you spec your voltage regulators for “TDP” plus a safety margin of, say, 30-50 %, and stick an Intel chip in it. Hilarity would ensue once you run some AVX512 workload on all of your cores. You may say that TDP stands for thermal design power, but it has been used for years and years to spec other components such as power supplies.
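Just to put rough numbers on the voltage regulator example (illustrative figures only, using the 2x ratio I mentioned):

[code]
# Illustrative only: VRMs sized for "TDP plus a 50 % safety margin" versus
# a PL2 of twice the TDP (the ratio I referred to above).
tdp = 125.0
vrm_budget = tdp * 1.5        # 187.5 W of regulator headroom
pl2 = 2.0 * tdp               # 250.0 W the chip is allowed to pull for a while
print(vrm_budget < pl2)       # True: the "safety margin" is already gone
[/code]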

I only keep harping on about this because we use TDP to compare chips with one another. If Intel’s PL2 numbers are closer to what other manufacturers call TDP, then we should use those instead.
Originally Posted by P View Post
So what is Intel to do here? If they limit the max turbo clock, they will punish users who don't cheat the power limits while running light code. Lock down the power limit? That seems to be the only thing that would work, but you know that the same enthusiasts who are currently whining about the power draw will go ape if you do that, because they think they can handle the increased power draw. They could also conceivably put the squeeze on motherboard manufacturers, but that risks those manufacturers focusing on AMD instead.
I think Intel should stop calling it TDP. We all know why Intel is doing what it is doing, to try and keep up performance-wise. It’s fine if power consumption isn’t a concern, performance is performance, and higher clocks make everything faster.
Originally Posted by P View Post
So they stay quiet, the same way they did when manufacturers misreported the base clock to trick locked CPUs to overclock.
That strikes me as a “Technically, I did not lie, I just did not tell you the truth.”
Originally Posted by P View Post
I can not find any evidence of Quartz running on top of a TBDR GPU ever. Maybe Apple built that support years ago in anticipation of this, but in either case, I'm not super worried about Quartz. I'm worried about all the apps that use the GPU. Either those apps need to support both modes, OR Apple can't use their PowerVR-derived GPUs OR discrete GPUs go the way of the dodo.
That’s an important point, and I reckon we will hear about it next week.
Originally Posted by P View Post
That reasoning makes some sense in an era when I can choose to update or not, but increasingly, I can't. Everyone keeps pushing updates more and more aggressively, and Apple has signaled that they will also start forcing updates. If I upgrade my Mac to Catalina, I lose access to a lot of apps because Apple has decided that they don't want to bother compiling their APIs for 32-bit x86 anymore. This, along with a couple of other lost features and a general warning from the community that Catalina is unusable garbage, means that I won't be installing it. Because why should I? It doesn't bring any new features that I'm interested in. If Apple decides to just start updating my Mac without me agreeing to it, that means they're removing features I paid for. Weirdly enough, I don't want that.
I think we should keep two things separate here: Apple’s lackluster Catalina upgrade and whether it is prudent to remove old, deprecated technologies from the OS. While I haven’t run into as many problems with Catalina as others, I agree, there is not much there to justify upgrading to it from the previous version. But let’s shelve that.

Any deprecation of legacy technology will lead to some software no longer being supported. I have lost one of my dearest pieces of software to the transition, Apple Aperture. Yes, deprecating old technologies can lead to problems. But so does keeping them around forever; that has led to complete stagnation on the Windows side. Microsoft has made umpteen pushes to replace their old APIs with newer ones, and their efforts got nowhere for the most part. IMHO this is a balancing act. The deprecation of 32-bit essentially started with 10.5, when Apple decided not to move Carbon to 64-bit. That was 10 versions ago, so I think devs have had enough time to adjust. To some degree, this will always have to be a case-by-case discussion. Apple’s strategy seems sound, and at least from where I sit, their speed is OK.

However, recently the payoff has been less clear. Catalina was not a good upgrade. The apps that were new were not good (basically all the Catalyst apps are dumpster fires), and standard apps like Mail and the Finder haven’t seen any love in years. But that’s a different discussion IMHO.
Originally Posted by P View Post
I personally think that there will be no more drivers (in the sense of kexts) at all. It's remarkable that an OS that tries to prevent me from accessing files that I have stored on my desktop allows kexts at all.
Agreed. Apple has telegraphed this change very clearly over the last few years. And I have to say, the only time I have had persistent stability problems with OS X, it was because of some third-party kext. Cisco’s VPN kext almost bricked my MacBook Pro when I upgraded a few years back. (I didn’t think about it beforehand, and Cisco always seemed surprised anew every year when a new version of OS X was released.)
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jun 16, 2020, 05:33 AM
 
I've been keeping an eye on all the rumours to do with the 16" MBP leading up to WWDC, and I wonder if there is a hint or two to be gleaned now. I just ordered one as I got an 11% discount on it, but then noticed it still has the 9th-generation Intel chips while the 2020 13" has the new ones, so I was worried that an update might be imminent. As it is, Apple quietly added a GPU update option to it yesterday, so I figure there is no way they will replace the model at WWDC now.
Good for me, but it seems odd that the smaller unit would have newer (and from the sounds of it significantly better) CPUs than the flagship model, even for only a few months. Makes me wonder if Apple has decided not to bother transitioning the 16" to 10th gen Intel.

Now I'm expecting a monster AMD or ARM based one around Christmas.
( Last edited by Waragainstsleep; Jun 16, 2020 at 06:52 AM. )
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 16, 2020, 09:46 AM
 
I own a 16" MacBook Pro and this machine is amazing. I just wish I had gone for the 32 GB option, even though I am proooobably OK with 16 GB. But I love everything about it: the keyboard is amazing, and to my surprise I even like the Touch Bar. It is fast: going from 2 to 8 cores is just wow. The graphics are fast, too (I love having many windows open, and the integrated graphics on my 13" would often choke). You won't regret it, this machine will last you quite a few years.
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jun 16, 2020, 04:41 PM
 
I didn't go for 32GB either, just too much extra. I didn't opt for the extra video memory either.

I'm coming from a late 2013 15" retina, so it's a massive upgrade for me.
I have plenty of more important things to do, if only I could bring myself to do them....
     
ghporter
Administrator
Join Date: Apr 2001
Location: San Antonio TX USA
Status: Offline
Reply With Quote
Jun 16, 2020, 06:31 PM
 
Originally Posted by OreoCookie View Post
I'm surprised to hear battery life mentioned; I can get through an entire workday on a charge no problem with my iPad, so that's “long enough”. How much longer battery life would you want to have?
I don’t run my iPad down to nothing in a normal day. But the ChromeBook seems to do more on the first 50% of battery life than my 3rd Gen iPad Air.

Glenn -----OTR/L, MOT, Tx
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Reply With Quote
Jun 16, 2020, 09:15 PM
 
Originally Posted by P View Post
[...] Everyone keeps pushing updates more and more aggressively, and Apple has signaled that they will also start forcing updates. [...] If Apple decides to just start updating my Mac without me agreeing to it, that means they're removing features I paid for. Weirdly enough, I don't want that.
We are probably <5 years from mandatory software/app updates across the board, i.e. Android/iOS/Mac/Windows. I don’t agree with it either, but clearly that and the death of local storage are the roadmap.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 16, 2020, 09:50 PM
 
Originally Posted by Brien View Post
We are probably <5 years from mandatory software/app updates across the board, i.e. Android/iOS/Mac/Windows. I don’t agree with it either, but clearly that and the death of local storage are the roadmap.
I don't think local storage will die at all; rather, it will become a local buffer. More and more storage is synced across devices, and this is really a game changer as far as I am concerned.

Regarding updates, I don't think much will change compared to where we are now: updates are free or included in your subscription. When you bought licenses for software, one big obstacle to updating was users' unwillingness to fork over money for an upgrade. That is no longer an issue. Apart from updates breaking compatibility, I see no reason not to assume that users will update by default. Why do you think software vendors will make updates mandatory? How would they enforce it?
I don't suffer from insanity, I enjoy every minute of it.
     
 