Welcome to the MacNN Forums.



Mac transition to ARM to be announced at WWDC? (Page 3)
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 25, 2020, 08:57 PM
 
Originally Posted by P View Post
...in 2008, because that is when they implemented this. Motherboard manufacturer cheating came much later, of course.
To be fair, though, the difference in 2008 was much less significant.
Originally Posted by P View Post
No, TDP means what it always did. If the chip is at PL1, it will draw an average of TDP watts over time. It can be higher for a short period of time, but the average over time will be this, and the chip will underclock to make that happen. If the chip goes to PL2, the power draw will be 25% higher for a period of time, but as soon as PL2 ends, the power draw will be reduced so that the total average eventually goes back to TDP. This means that if your chip's base clock is 3 GHz and max turbo is 5 GHz and you have a task that loads the CPU 100% for a period of time longer than tau, it will drop below 3 GHz for a time to "pay back the power debt".
I think I understand how it works, I’m just saying that given the reality of Intel’s chips, their use of TDP is about as inadequate as advertising their chips with their max boost frequencies. It is to make them appear in the best possible light. Regarding cooling, I don’t think enthusiasts will want to spec their cooler by the TDP, but rather aim much higher — be it in the hopes of increasing tau or increasing stability or both.
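For the curious, the PL1/PL2/tau averaging described above can be sketched in a few lines of Python. Every number here (PL1, PL2, tau, the throttled draw) is an illustrative placeholder, not the spec of any real SKU; the real mechanism tracks the budget with an exponentially weighted running average roughly along these lines.

```python
# Toy model of PL1/PL2 power limiting. All numbers are illustrative,
# not those of any real chip. The budget is tracked as an exponentially
# weighted moving average (EWMA) of power draw with time constant tau.

PL1 = 65.0   # long-term limit ("TDP") in watts, hypothetical
PL2 = 81.0   # short-term limit, 25% above PL1
TAU = 28.0   # averaging time constant in seconds
DT = 1.0     # simulation time step in seconds

def step_ewma(avg, draw, dt=DT, tau=TAU):
    """Advance the running average of power draw by one time step."""
    return avg + (dt / tau) * (draw - avg)

avg = 0.0
history = []
for t in range(120):
    # Draw PL2 while under budget; once the running average reaches PL1,
    # drop below PL1 to "pay back the power debt".
    draw = PL2 if avg < PL1 else 0.9 * PL1
    avg = step_ewma(avg, draw)
    history.append((t, draw, avg))
```

Running this shows the chip sitting at PL2 until the running average catches up with PL1, then oscillating just around PL1: exactly the throttle-below-base-clock phase described above.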
Originally Posted by P View Post
Right. But that isn't really Intel's fault, is it? Every PSU calculator I have seen adds a nice 25% margin of error to handle power spikes, and that should be enough to handle PL3 and PL4.
Intel is running out of road and is pushing frequencies (and thus, thermals) as much as it can. On a desktop that’s less of an issue, but on a notebook, it just eats battery for breakfast. The only downgrade when going from my 13” to my 16” was battery life. Even if I occupy only two cores on average, this thing chews through my battery much, much more quickly. So I don’t think this is just about speccing power supplies. On phones this is even worse when batteries age, because they may not be able to draw enough current. My old iPhone 5 would always die when launching Strava (a fitness app), because for some reason that would peg the CPU and the battery could not supply enough current.
Originally Posted by P View Post
OK, so what is the TDP for the A12X? Is it the same in an iPad as in the DTK? Probably not, right?
Apple hasn’t published TDP numbers for either, so I don’t see how this is relevant. If Apple gave official TDP numbers for their parts and the power draw would be twice as high for longer periods, I’d ding them, too.
Originally Posted by P View Post
TDP is the average power draw over time unless someone has fiddled with the MSR. The issue is that people DO fiddle with the MSR to set PL1, PL2 and tau to something that Intel didn't intend.
My only point of contention is “… that Intel didn’t intend.” Intel knows exactly what is going on and at best looks the other way. As you remark in (b), Intel could lock down its settings (e. g. by hardcoding a tau_max), but it doesn’t. Surely, one of the main reasons is that it is way behind at the moment and needs every small boost it can get.
Originally Posted by P View Post
Intel is stuck between a rock and a hard place. They used to just advertise base clock, but then it got to be a problem because the idiots among us didn't understand why quadcore had a lower clock than the dualcore. So they started advertising the max turbo as well, and that's where we are.
Intel’s marketing has become very dishonest. I realize the difficulty of explaining the various speeds, especially since turbo clocks have gotten more and more fine-grained. But I’d just say, Intel should take a page out of P’s book and follow (a)–(c). Yes, things are complicated: we have a preferred core in some of our chips that can clock the highest, we have different turbos depending on how many cores are active, etc. Educate people about the tradeoff between high core counts and high frequencies. Just be honest — even when you are behind. That’s what AMD, ARM and Apple are doing right: when they release performance estimates or benchmarks, you know you can trust that they are in the right order of magnitude.
Originally Posted by P View Post
As for the long deprecation discussion - I get that Apple deprecates things. My issue is that if you combine this with enforced updates, things get dark very fast. If you have to do that, leave me a long-term support version where you don't remove features and just make security updates
Out of curiosity: how has Apple fared compared with your pre-Keynote expectations?
They have surprised me by ditching very little, they did not even ditch OpenGL and OpenCL, which have been deprecated for years now. In some places the Apple Silicon-based Macs are even an improvement (with per-installation security settings, for instance).
I don't suffer from insanity, I enjoy every minute of it.
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Jun 26, 2020, 12:51 AM
 
I have to agree. I half-expected them to drop support for non-App Store apps, make SIP undefeatable, or some BS.
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Jun 26, 2020, 12:54 AM
 
Originally Posted by OreoCookie View Post
Integrated GPUs do not preclude Apple from adding discrete GPUs. Our current 16" MacBook Pros have just such a configuration, a built-in GPU that shares the memory with the CPU and a discrete GPU. And since Apple explicitly mentions that it will support PCIe devices and how they will access memory (from 9:12 onwards of the video you linked to), it would seem plausible that they will continue to support dual-GPU configurations just like they do now. Perhaps this will only apply to the iMac Pro and the Mac Pro in the future, but I don't think the publicly available information precludes that from happening.
I would expect them to keep at least one top-end 16” with a dGPU. But who knows, maybe they can make an iGPU that will blow the doors off a Radeon M.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Jun 26, 2020, 02:51 AM
 
Originally Posted by Waragainstsleep View Post
Yeah, but because they perform better thermally, they get throttled much less than they used to in Apple's skinny MacBook chassis, and I gather it makes a bigger difference than expected.
No, this is people not understanding how turbo boost and throttling work.

If your CPU throttles thermally in an OEM computer such as a laptop, you return it to the manufacturer and say “this is broken”. It should never throttle thermally, and it doesn’t. What it does is hit the power limit that Oreo and I have been beating into the ground. Making the chip thinner doesn’t change the power limit - what it does do is let Intel use something called TVB.

TVB is Intel doing what people thought they did all along - run faster when they’re cooler. Max clock goes up a little when the temperature is below 50C or 70C or whatever - but it goes up by 100MHz, or in some cases 200MHz, and it is already included in the reported max turbo. Put another way, TVB means that the max turbo drops when the chip gets hot.
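As a minimal sketch of that behaviour (the clock, bonus, and temperature numbers below are made up for illustration, not any real SKU's):

```python
# Sketch of Thermal Velocity Boost (TVB): the advertised max turbo
# already includes a small bonus that is only granted while the die is
# below a temperature threshold. All numbers are illustrative.

MAX_TURBO_MHZ = 5000    # advertised max turbo, TVB bonus included
TVB_BONUS_MHZ = 100     # bonus available only when the die is cool
TVB_THRESHOLD_C = 70    # cutoff temperature, varies by model

def max_turbo(die_temp_c):
    """Highest turbo clock reachable at the given die temperature."""
    if die_temp_c < TVB_THRESHOLD_C:
        return MAX_TURBO_MHZ
    # Hot die: the advertised max turbo is out of reach.
    return MAX_TURBO_MHZ - TVB_BONUS_MHZ
```

Note that cooling has no effect on the power limit in this picture; it only determines how often the die sits below the threshold where the advertised clock is actually reachable.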
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 26, 2020, 02:58 AM
 
Originally Posted by Brien View Post
I would expect them to keep at least one top-end 16” with a dGPU. But who knows, maybe they can make an iGPU that will blow the doors off a Radeon M.
Integrated GPUs have a bad rep, because usually it meant lackluster performance. All next-gen consoles use shared memory and integrated graphics — and they aren't slow by any means, quite the contrary. Given the constraints (mostly price and power), this is as good as it gets given the technology AMD has to offer right now. So I could see Apple going along the same route. Moreover, there are cases where you have “integrated” GPUs, which are discrete chips but are on the same module as the CPU and perhaps even an ultra-fast RAM module for the GPU. Intel makes a processor with an AMD GPU just like that.

In the end, battery-powered devices are always limited by power consumption and power dissipation. This is especially true for GPUs, which you can “easily” make faster by including more and more hardware. (Most GPUs are split into building blocks, e. g. compute elements and what not, so you can substantially increase performance by increasing the number of building blocks.)

If I were in Apple's shoes, I'd start the transition with the MacBook Air, the 13" MacBook Pro, the Mac mini and the regular iMac, in that order. Then I'd go for the iMac Pro and the Mac Pro.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Jun 26, 2020, 03:17 AM
 
Originally Posted by OreoCookie View Post
I think I understand how it works, I’m just saying that given the reality of Intel’s chips, their use of TDP is about as inadequate as advertising their chips with their max boost frequencies. It is to make them appear in the best possible light. Regarding cooling, I don’t think enthusiasts will want to spec their cooler by the TDP, but rather aim much higher — be it in the hopes of increasing tau or increasing stability or both.
It is common in enthusiast circles to spec a powerful cooler because it lets the HSF be quieter. A tower cooler is so much more efficient than the top-down bs Intel bundles that it can handle a much higher load, and modern GPUs drown it out anyway. Thus nobody noticed when motherboard manufacturers started cheating. Some of them try to do that now for Ryzen as well, but they seem to be getting called out.

Intel is running out of road and is pushing frequencies (and thus, thermals) as much as it can. On a desktop that’s less of an issue, but on a notebook, it just eats battery for breakfast. The only downgrade when going from my 13” to my 16” was battery life. Even if I occupy only two cores on average, this thing chews through my battery much, much more quickly. So I don’t think this is just about speccing power supplies. On phones this is even worse when batteries age, because they may not be able to draw enough current. My old iPhone 5 would always die when launching Strava (a fitness app), because for some reason that would peg the CPU and the battery could not supply enough current.
What happens is that the CPU will draw closer to TDP when pegged, even when the workload is lighter. That didn’t use to be the case.

The power draw from batteries should be handled by PL3 and PL4 and is something else entirely.

Apple hasn’t published TDP numbers for either, so I don’t see how this is relevant. If Apple gave official TDP numbers for their parts and the power draw would be twice as high for longer periods, I’d ding them, too.
That’s my point - Intel gets flak for publishing a number, and nobody else even tries to.

My only point of contention is “… that Intel didn’t intend.” Intel knows exactly what is going on and at best looks the other way. As you remark in (b), Intel could lock down its settings (e. g. by hardcoding a tau_max), but it doesn’t. Surely, one of the main reasons is that it is way behind at the moment and needs every small boost it can get.
I think they mainly do it because they left it open back in 2008, and closing it now would lead to another freak out like when they locked the BCLK. Intel does that when they’re on top, and they’re not, so they don’t do it now.

Intel’s marketing has become very dishonest. I realize the difficulty of explaining the various speeds, especially since turbo clocks have gotten more and more fine-grained. But I’d just say, Intel should take a page out of P’s book and follow (a)–(c). Yes, things are complicated: we have a preferred core in some of our chips that can clock the highest, we have different turbos depending on how many cores are active, etc. Educate people about the tradeoff between high core counts and high frequencies. Just be honest — even when you are behind. That’s what AMD, ARM and Apple are doing right: when they release performance estimates or benchmarks, you know you can trust that they are in the right order of magnitude.
I don’t think anyone publishes near as much data as Intel. AMD has their PBO BS, and got called out for not hitting advertised turbo on Zen 2. Apple says nothing. Intel has to publish some things for its OEMs, and the root problem is that people misunderstand what they’re saying.

The problem for Intel is that they can’t pull up a benchmark and just say “look here!” because they lose all of them right now. If they could just pull up a SPEC result, we wouldn’t be here.

But I’m not sure about dishonest. Nothing they say is untrue, they just rely on people misunderstanding what they say - but so does everyone else in the business.

Out of curiosity: how has Apple fared compared with your pre-Keynote expectations?
They have surprised me by ditching very little, they did not even ditch OpenGL and OpenCL, which have been deprecated for years now. In some places the Apple Silicon-based Macs are even an improvement (with per-installation security settings, for instance).
My immediate reaction was that they gave us the bitter pill in Catalina so people wouldn’t tie the end of x86 support and new nonsensical restrictions to the ARM transition. I still want to know what the plans for future GPUs are. I suspect that Apple plans to make even wider integrated GPUs as an option and leave the discrete ones for the Mac Pro, and maybe a 16” option.

But I shouldn’t complain. It was straightforward: a solid transition plan and a timeline for releases. They did most things right.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 26, 2020, 03:56 AM
 
Originally Posted by P View Post
That’s my point - Intel gets flak for publishing a number, and nobody else even tries to.
Intel's main competition — AMD and ARM server CPU manufacturers — do publish TDP numbers, and they seem to be reliable.
Originally Posted by P View Post
I don’t think anyone publishes near as much data as Intel. AMD has their PBO BS, and got called out for not hitting advertised turbo on Zen 2. Apple says nothing. Intel has to publish some things for its OEMs, and the root problem is that people misunderstand what they’re saying.
I don't think this is true. When e. g. AMD publishes performance and efficiency metrics (as well as other specs like TDP), those seem to be borne out by independent benchmarks. Ditto when Apple claims that their new SoC is x % faster than its predecessor, independent benchmarks performed by the usual suspects usually agree with those claims.
Originally Posted by P View Post
But I’m not sure about dishonest. Nothing they say is untrue, they just rely on people misunderstanding what they say - but so does everyone else in the business.
I think you are letting Intel off rather easily. “Not lying, but not telling the whole truth” is also a form of dishonesty in my book. What is worse, even if you are educated enough to differentiate, Intel does not publish the necessary information.
Originally Posted by P View Post
My immediate reaction was that they gave us the bitter pill in Catalina so people wouldn’t tie the end of x86 support and new nonsensical restrictions to the ARM transition. I still want to know what the plans for future GPUs are. I suspect that Apple plans to make even wider integrated GPUs as an option and leave the discrete ones for the Mac Pro, and maybe a 16” option.

But I shouldn’t complain. It was straightforward: a solid transition plan and a timeline for releases. They did most things right.
Agreed, they did prepare things ahead of time, but that's exactly what you would want Apple to do. Regarding GPUs, I'm very curious here, too. Your point about the differences in architecture is a very good observation, and it is interesting to see what Apple will do. (A nice anecdote: my brother had a Kyro 2 back in the day, which was a discrete GPU based on the tile-based deferred rendering paradigm. It took two decades, but it could make its comeback next year.) But I am not worried, I'm excited.
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Jun 26, 2020, 07:19 AM
 
I wonder whether Apple might choose a different path with discrete GPUs. I gather the Afterburner is basically a set of FPGAs on a card, programmed to be awesome at ProRES and ProRES Raw work. Maybe they'll offer different versions of it for other tasks to augment the integrated GPU cores?
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 26, 2020, 10:36 AM
 
Originally Posted by Waragainstsleep View Post
I wonder whether Apple might choose a different path with discrete GPUs. I gather the Afterburner is basically a set of FPGAs on a card, programmed to be awesome at ProRES and ProRES Raw work. Maybe they'll offer different versions of it for other tasks to augment the integrated GPU cores?
That's a possibility, as are other hardware accelerators. One of the few benchmarks I have seen by Apple was a comparison of some machine learning workload, where they compared it running on x86 and on an ARM-based Mac, but on specialized hardware. They only showed bar graphs, but the story was quite clear.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Jun 26, 2020, 03:57 PM
 
Originally Posted by OreoCookie View Post
Intel's main competition — AMD and ARM server CPU manufacturers — do publish TDP numbers, and they seem to be reliable.
AMD publishes something, but I hear noise about that as well - and they notably failed to reach promised turbo targets. ARM server manufacturers I know nothing about, but note that Intel server chips don’t have the rumors their desktop chips do (because server manufacturers don’t cheat in the same way).

I don't think this is true. When e. g. AMD publishes performance and efficiency metrics (as well as other specs like TDP), those seem to be borne out by independent benchmarks. Ditto when Apple claims that their new SoC is x % faster than its predecessor, independent benchmarks performed by the usual suspects usually agree with those claims.
AMD failed to hit advertised turbo, and they do, in fact, draw more power than advertised in desktop boards - because motherboard manufacturers are cheating:

https://www.anandtech.com/show/15839...-kill-your-cpu

That they didn’t previously was because AMD didn’t make good enthusiast CPUs, so there was little competition for those mobos.

Apple makes claims comparing to their own previous chips, but they’re notably vague and don’t specify any test conditions. It is old now, but they certainly shaded the truth with the A8, which was nowhere near 25% faster than the A7 in equal conditions. Obviously they don’t have to cheat when their chips do great, but with a dud like the A8, they do.

I think you are letting Intel off rather easily. “Not lying, but not telling the whole truth” is also a form of dishonesty in my book. What is worse, even if you are educated enough to differentiate, Intel does not publish the necessary information.
Perhaps - but let me tell you a story. There exists a large manufacturer of gearboxes. That manufacturer sells a gearbox that, through various middlemen, ends up in a vehicle. Said gearbox fails in one specific application, and is replaced. Over time, said vehicle manufacturer starts noticing that those gearboxes fail often, and makes a more formal complaint. After a long investigation, it turns out that said vehicle manufacturer had reprogrammed the control chip in said gearbox to change a certain parameter that governs performance. When challenged on this topic, their only statement was that they needed to do that to keep up with what their competitor was shipping.

If this sounds oddly specific, it is because it happened at work a few weeks ago. I’m being vague because you would 100% recognize the brands of everyone involved in the story and I don’t want that, but it did happen - and this piece of information changed the story for everyone in the room. From blaming the gearbox manufacturer, suddenly it looked like it was the vehicle manufacturer who was at fault.

What really, eh he, grinds my gears here, is that I think that something else is wrong with that gearbox. I don’t think that the reprogramming was the cause, but it will be very hard for me to push said manufacturer on the issue now.

So, what is different about what happened here and what the motherboard manufacturers did? I can see one thing - Intel knew what was happening, while this gearbox manufacturer was visibly shocked when the information came to light. Does that really make it Intel’s fault?

Agreed, they did prepare things ahead of time, but that's exactly what you would want Apple to do. Regarding GPUs, I'm very curious here, too. Your point about the differences in architecture is a very good observation, and it is interesting to see what Apple will do. (A nice anecdote: my brother had a Kyro 2 back in the day, which was a discrete GPU based on the tile-based deferred rendering paradigm. It took two decades, but it could make its comeback next year.) But I am not worried, I'm excited.
I have been thinking about this. There are good reasons to think that TBDR is the better way to build GPUs in the post-Dennard scaling era, and nVidia has clearly been trying to get there while remaining compatible with existing programs. What if Apple’s plan is to cover the gap in software after all? OpenGL will be layered on top of Metal 2, in the way that people are running DirectX on top of Vulkan on Linux. It won’t be the best performing, but then it never was, and it should make older applications work, at least.

The issue with that is that this is essentially Rosetta for GPUs. It will be a one-way thing. Those regular GPUs built for immediate mode rendering would then not work. Another shim in the opposite direction? Seems you lose a lot of performance. Future discrete GPUs are TBDR, but large? I don’t think AMD has any tech like that left (it was sold to Qualcomm as Adreno a decade or so ago) so who would build it that isn’t Apple? Maybe there will always be a TBDR GPU in the system, but “regular” immediate mode GPUs can be used for specific tasks, i.e. compute?

I don’t know, but this feels “Appley”. What if this was the plan for a long time, and the long delay in Mac Pro updates was because Apple really did plan to kill it off to get rid of that problem? If so, the last idea sounds like a way to fix it.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
ghporter
Administrator
Join Date: Apr 2001
Location: San Antonio TX USA
Status: Offline
Jun 26, 2020, 09:04 PM
 
So let's see if I really get this...

Thermal management requires, among other things, some sort of heat sink. Sometimes (often?) the heat sink has active cooling via a fan of some kind - the fan on my Early 2015 13" MBP is usually so quiet that I don't really notice it.

But the aluminum body of an Apple laptop also helps manage heat by acting as another, radiant/passive heat sink. The whole bottom of my MBP gets warm when it's working hard, fan on or not.

Point: thinner means less metal case, which in turn means less case to assist with thermal management. Does this mean that a CPU/SoC with a given TDP might max out thermally when installed in a thinner, lighter laptop case than if installed in a thicker, heavier case? Aluminum is a great heat conductor, but is a thinner overall case structure going to have enough of an impact on dissipation to be noticeable to users?

Glenn -----OTR/L, MOT, Tx
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Jun 27, 2020, 12:42 AM
 
This may be a dumb comment, but I assume the ratio of surface area to volume of the aluminum matters.
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Jun 27, 2020, 02:27 AM
 
Surface area matters for heat dissipation. Shrinking the cases further will have minimal effect because the height is already small, while the top/bottom area can't change much due to the screen.

Metal volume matters for thermal inertia - how much heat it can take up before letting the chip get too hot. Thinning the metal means the fans must kick in sooner, with CPU/GPU throttling following shortly thereafter.

A thinner laptop has less room inside for fans & ducts, so fans are smaller and less effective.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 27, 2020, 02:37 AM
 
Originally Posted by subego View Post
This may be a dumb comment, but I assume the ratio of surface area to volume of the aluminum matters.
Surface area is what matters for heat dissipation, volume is what determines heat capacity.

Think of it like a tall glass vs. a soup plate vs. a saucer. Fill all three with hot water. The water in the tall glass will stay hot for longer than the water in the soup plate or the saucer. The saucer will cool the fastest, because its surface area-to-volume ratio is the largest. But then the total heat capacity of the water in the saucer is quite small. So if you have a given amount of liquid that you want to cool down, a soup plate is better, because you can cool down more liquid in one go.

So all other things being equal, a thinner enclosure would decrease the max TDP of the chips.
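A lumped-capacitance toy model makes the glass-versus-saucer point concrete. Every number below (footprint, thickness, convection coefficient, power) is made up purely for illustration:

```python
# Lumped-capacitance sketch of a laptop case absorbing CPU heat.
# All numbers are illustrative, not measurements of any real machine.
# Heat capacity scales with the metal's volume; dissipation to the
# room scales with its surface area.

ALU_DENSITY = 2700.0        # kg/m^3
ALU_SPECIFIC_HEAT = 900.0   # J/(kg*K)
H_COEFF = 10.0              # W/(m^2*K), rough natural convection
T_AMBIENT = 25.0            # deg C

def warm_up(area_m2, thickness_m, power_w, seconds, dt=1.0):
    """Case temperature after absorbing `power_w` for `seconds`."""
    mass = ALU_DENSITY * area_m2 * thickness_m
    capacity = mass * ALU_SPECIFIC_HEAT          # J/K
    temp = T_AMBIENT
    for _ in range(int(seconds / dt)):
        dissipated = H_COEFF * area_m2 * (temp - T_AMBIENT)
        temp += (power_w - dissipated) * dt / capacity
    return temp

# Same footprint, half the metal: the thin case heats up faster.
thick = warm_up(area_m2=0.1, thickness_m=0.002, power_w=15, seconds=600)
thin = warm_up(area_m2=0.1, thickness_m=0.001, power_w=15, seconds=600)
```

Halving the metal halves the thermal inertia, so the thin case is noticeably hotter after the same ten minutes, even though both creep toward the same steady-state temperature, which is set by surface area alone.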
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Jun 27, 2020, 03:09 AM
 
Originally Posted by P View Post
AMD publishes something, but I hear noise about that as well - and they notably failed to reach promised turbo targets.
I wasn't aware that this was a big issue. I thought everyone was super happy with Zen and Zen 2.
Originally Posted by P View Post
That they didn’t previously was because AMD didn’t make good enthusiast CPUs, so there was little competition for those mobos.
AMD didn't have the numbers back then, they were the value option. Now they have become the enthusiasts' and the value option.
Originally Posted by P View Post
Apple makes claims comparing to their own previous chips, but they’re notably vague and don’t specify any test conditions. It is old now, but they certainly shaded the truth with the A8, which was nowhere near 25% faster than the A7 in equal conditions. Obviously they don’t have to cheat when their chips do great, but with a dud like the A8, they do.
My memory is a bit hazy on the details, that was a while ago, but if memory serves the main issue of the A8 was not performance, but relative lack of efficiency (since the battery size increased significantly when Apple went with the iPhone 6 form factor). In any case, even then I haven't seen complaints that Apple's performance claims were outlandish. Peeking at the benchmarks, it seems the improvement is ~20 % on average, which would be a bit lower than 25 %. Nevertheless, I don't recall widespread accusations that Apple has been making overly optimistic performance claims over the years. (Of course, it's tricky to reduce performance to a single number.)
Originally Posted by P View Post
Perhaps - but let me tell you a story. There exists a large manufacturer of gearboxes. That manufacturer sells a gearbox […] those gearboxes fail often, and makes a more formal complaint. After a long investigation, it turns out that said vehicle manufacturer had reprogrammed the control chip in said gearbox to change a certain parameter that governs performance. When challenged on this topic, their only statement was that they needed to do that to keep up with what their competitor was shipping.

If this sounds oddly specific, it is because it happened at work a few weeks ago.

[…]

So, what is different about what happened here and what the motherboard manufacturers did? I can see one thing - Intel knew what was happening, while this gearbox manufacturer was visibly shocked when the information came to light. Does that really make it Intel’s fault?
That's it, Intel knows what motherboard manufacturers are doing and is turning a blind eye. A better analogy is the Dieselgate scandal, where everyone was in the know (but unfortunately, almost no one was prosecuted). (Of course, I am not claiming the same level of criminality or anything, it just continues in the same spirit as your analogy.) Intel needed every bit of performance that mainboard manufacturers could squeeze out of a system, just like the suppliers for car manufacturers needed the contracts.
Originally Posted by P View Post
I have been thinking about this. There are good reasons to think that TBDR is the better way to build GPUs in the post-Dennard scaling era, and nVidia has clearly been trying to get there while remaining compatible with existing programs. What if Apple’s plan is to cover the gap in software after all? OpenGL will be layered on top of Metal 2, in the way that people are running DirectX on top of Vulkan on Linux. It won’t be the best performing, but then it never was, and it should make older applications work, at least.

The issue with that is that this is essentially Rosetta for GPUs. It will be a one-way thing. Those regular GPUs built for immediate mode rendering would then not work. Another shim in the opposite direction? Seems you lose a lot of performance. Future discrete GPUs are TBDR, but large? I don’t think AMD has any tech like that left (it was sold to Qualcomm as Adreno a decade or so ago) so who would build it that isn’t Apple? Maybe there will always be a TBDR GPU in the system, but “regular” immediate mode GPUs can be used for specific tasks, i.e. compute?

I don’t know, but this feels “Appley”. What if this was the plan for a long time, and the long delay in Mac Pro updates was because Apple really did plan to kill it off to get rid of that problem? If so, the last idea sounds like a way to fix it.
That's an interesting idea, and perhaps this is one thing they will do. Watching the videos, Apple seems quite convinced of their ideas. I've watched a few more WWDC sessions today, and during one on Metal for Apple Silicon-based Macs they made it quite explicit that the GPUs in Macs with Apple Silicon are to be treated like discrete graphics and not like integrated graphics — just like on iOS. Yet another not-very-subtle dig at Intel, no doubt. (Another one was in their Apple Silicon Macs session where they illustrated asymmetric multiprocessing: Intel's “big” cores were drawn the same size as Apple's E (for efficient) cores.)
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jun 27, 2020, 06:47 AM
 
In any Apple device you have the aluminium case which is conducting heat from inside and radiating it away to the environment. You have the heat sink on the CPU/GPU doing the same into the interior cavity of the device, and then there is the size of that cavity and the airflow into and out of it which can be bolstered with a fan. In an iPad or iPhone, that's all you have. The heatsink is tiny and thin, the internal cavity is tiny and the airflow in/out is minimal.
With the MacBooks, the heatsinks are bigger, the cavity is bigger, the case is bigger, there is a fan and heat is also lost through the keyboard which is essentially a lot of large holes in the top of the unit so the airflow compared to an iDevice is massive. Remember the iPhone is waterproof. In short, the cooling in any current MacBook chassis is much, much better than it is in an iPhone or iPad. I think that's why people are so excited to see what these first gen Apple Silicon Macs are going to be capable of doing.
I have plenty of more important things to do, if only I could bring myself to do them....
     
ghporter
Administrator
Join Date: Apr 2001
Location: San Antonio TX USA
Status: Offline
Reply With Quote
Jun 27, 2020, 10:50 AM
 
I hadn’t thought about the openings in the keyboard. I have a decorative/crumb blocking cover on my MBP’s keyboard.... I may be taking that off to see if that makes a noticeable difference in when the fans come on or rev up.

The cover came with a plastic case. I got it to protect the outside of the case from scuffs in a pack or case. The bottom of the case in particular has stand-offs to provide a little extra air flow. This is another area where I may experiment just to see how much that helps.

I’d noticed with my previous MBP (from back in 2006!) that leaving the laptop flat on a surface makes it warm up faster. Those little foot things Apple puts on the bottom don’t give it much clearance for air, so I got a folding gadget that raises the base and also angles it for better typing (I’ll post a picture sometime), and wound up almost never using the MBP without it - both for my wrist and for the heat.

Glenn -----OTR/L, MOT, Tx
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Reply With Quote
Jun 27, 2020, 11:21 AM
 
Originally Posted by reader50 View Post
Surface area matters for heat dissipation. Shrinking the cases further will have minimal effect because the height is already small. While the top/bottom area can't change much due to the screen.

Metal volume matters for thermal inertia - how much heat it can take up before letting the chip get too hot. Thinning the metal means the fans must kick in sooner, and CPU / GPU throttling shortly thereafter.

A thinner laptop has less room inside for fans & ducts, so fans are smaller and less effective.
Originally Posted by OreoCookie View Post
Surface area is what matters for heat dissipation, volume is what determines heat capacity.

Think of it like a tall glass vs. a soup plate vs. a saucer. Fill all with hot water. The tall glass will remain hot for longer than the water in the soup plate or the saucer. The saucer will cool the fastest, because the surface area-to-volume ratio is the largest. But at least here, the total heat capacity of the water in the saucer is quite small. So if you have a given amount of liquid that you want to cool down, a soup plate is better, because you can cool down more liquid in one go.

So all other things being equal, a thinner enclosure would decrease the max TDP of the chips.
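The glass-vs-saucer analogy above is essentially lumped-capacitance (Newtonian) cooling. Here is a minimal sketch of that model; the areas, volume, and convective coefficient are made-up illustrative values, not measurements of any real enclosure:

```python
import math

def cooling_temp(t, T0, T_env, h, area, volume, rho=1000.0, c=4186.0):
    """Lumped-capacitance cooling: T(t) = T_env + (T0 - T_env) * exp(-h*A*t / (rho*V*c)).
    Defaults are rough water properties; h is the convective coefficient in W/(m^2 K)."""
    k = h * area / (rho * volume * c)
    return T_env + (T0 - T_env) * math.exp(-k * t)

# Same 0.25 L of 90 °C water in a 20 °C room, two different exposed areas:
V = 0.25e-3                           # m^3
glass_area, saucer_area = 0.01, 0.05  # m^2, illustrative values only
glass_after_10min = cooling_temp(600, 90, 20, h=10, area=glass_area, volume=V)
saucer_after_10min = cooling_temp(600, 90, 20, h=10, area=saucer_area, volume=V)
assert saucer_after_10min < glass_after_10min  # larger area-to-volume ratio cools faster
```

With the volume held fixed, the larger surface area always cools faster; with the area held fixed, a larger volume takes longer to heat up, which is the thermal-inertia point about thinner cases made above.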
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jun 27, 2020, 02:19 PM
 
Something else I missed regarding heat, is that in an iPad or iPhone, the screen typically generates some heat as well. Not usually a lot but definitely some.
I have plenty of more important things to do, if only I could bring myself to do them....
     
mindwaves
Professional Poster
Join Date: Sep 2000
Location: Irvine, CA
Status: Offline
Reply With Quote
Jun 30, 2020, 08:51 PM
 
Benchmarks are out, and they show the A12Z scoring 27% lower on Rosetta applications than an equivalent iPad Pro. Not bad, I would say, considering optimization still needs to be done (got to take out the debug code /s), it is a Rosetta application, and the OS was not originally designed to run on this chip. And come this fall, an A14X will be running the show, which will undoubtedly be much faster.
{{{ mindwaves }}}
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 30, 2020, 09:39 PM
 
@mindwaves
I think it is important to emphasize that this is more a benchmark of Rosetta than of macOS running on an A12Z. Once Geekbench is ported to macOS on ARM, I doubt the A12Z's Geekbench numbers will be lower than they are on the iPad. If anything, they might be better, as the Mac mini enclosure might have a larger cooling capacity. Now we know the price tag emulation carries, and for a large class of applications (think something like Microsoft Word), this will be just fine.
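To make the arithmetic explicit: a score that is 27% lower under emulation implies the native score is about 1/(1 − 0.27) ≈ 1.37× the emulated one. A quick sketch; the input score is a made-up placeholder, not an actual Geekbench result:

```python
def estimated_native(emulated_score, penalty=0.27):
    """If emulation loses `penalty` of native performance,
    then native ≈ emulated / (1 - penalty)."""
    return emulated_score / (1.0 - penalty)

# Hypothetical single-core score of 800 under Rosetta 2:
print(round(estimated_native(800)))  # 1096 with these illustrative numbers
```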
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jul 1, 2020, 07:22 AM
 
The A12Z in the DTK also has a ton more RAM than the iPad. It should beat it by a bit.
I have plenty of more important things to do, if only I could bring myself to do them....
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jul 1, 2020, 07:23 AM
 
I wonder if the A14X might include some subsystem for helping speed Rosetta up. Is that feasible?
I have plenty of more important things to do, if only I could bring myself to do them....
     
ghporter
Administrator
Join Date: Apr 2001
Location: San Antonio TX USA
Status: Offline
Reply With Quote
Jul 1, 2020, 01:40 PM
 
With all of that power, would these new chips be able to emulate an Intel CPU enough to run (what will be) legacy OS/apps? Apple did that when they transitioned to Intel, so I would think that the experience of not (entirely) torquing off much of their user base would be part of a business plan.

Of course, for all we know the planners behind this are making it up as they go, so there’s no guarantee that there actually is a transition plan...

Glenn -----OTR/L, MOT, Tx
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jul 1, 2020, 03:38 PM
 
Isn't that what Rosetta 2 is doing? Running Intel apps on ARM?
I have plenty of more important things to do, if only I could bring myself to do them....
     
ghporter
Administrator
Join Date: Apr 2001
Location: San Antonio TX USA
Status: Offline
Reply With Quote
Jul 1, 2020, 05:27 PM
 
That’s what I thought, but I have other questions. And I didn’t fully state my question. Will an ARM machine with a Rosetta 2 layer between it and legacy code be enough faster to Joe User that he’ll jump on it? And will Rosetta 2 (as issued) be as user friendly/transparent as Rosetta was with PowerPC apps on Intel machines? Or will it be a little nag and tell you “this could run so much faster if you paid for new software”...

Glenn -----OTR/L, MOT, Tx
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jul 1, 2020, 08:54 PM
 
The benchmarks imply there will be a ~25-30% performance tax on x86 apps running on ARM Macs. Maybe I'm just an optimist, but I have a feeling Apple has something special up their sleeve. Not a surprise in terms of any extra tricks or such, just that they have their own early-run Mac chips in their internal labs and they already know they are impressive. If they plan to ship ARM-based Macs this year, those chips must already be in production by now, or pretty close to it. I think they've got something that's going to genuinely wow people in a way they haven't quite managed in a good few years. There's something about their team and the announcements at the moment that's above and beyond the usual stony-faced, uniform press releases you normally get out of them when people know something is coming.
I have plenty of more important things to do, if only I could bring myself to do them....
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jul 1, 2020, 08:59 PM
 
The DTK Mac Mini runs Geekbench under Rosetta 2 and scores very nearly the same as the Intel i3 Mac Mini running it natively. So the final version Macs with Apple Silicon should still be faster than the previous generations with Intel, at least for the entry level Macs.
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jul 1, 2020, 11:10 PM
 
Originally Posted by Waragainstsleep View Post
The DTK Mac Mini runs Geekbench under Rosetta 2 and scores very nearly the same as the Intel i3 Mac Mini running it natively. So the final version Macs with Apple Silicon should still be faster than the previous generations with Intel, at least for the entry level Macs.
Yup, agreed, that bodes very well for ARM-based Macs. We should be careful not to have overinflated expectations, but even on mobile, the cores used in the A12Z are almost two generations old. And it stands to reason that Apple may beef up its A14 cores for its Macs compared to its iOS siblings (similar to ARM's strategy with its Neoverse N1 and the A76).
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jul 1, 2020, 11:16 PM
 
Originally Posted by ghporter View Post
That’s what I thought, but I have other questions. And I didn’t fully state my question. Will an ARM machine with a Rosetta 2 layer between it and legacy code be enough faster to Joe User that he’ll jump on it? And will Rosetta 2 (as issued) be as user friendly/transparent as Rosetta was with PowerPC apps on Intel machines? Or will it be a little nag and tell you “this could run so much faster if you paid for new software”...
Obviously, few have had hands-on experience at this point, but all signs point to yes. According to the Keynote and the State of the Union, pretty much all x86-based code will be translated to ARM. (There are only a few exceptions, kernel extensions being one of them.) And if we take the relative Geekbench scores as a basis, a 30% performance penalty would mean that a new ARM-based Mac will likely be faster than your current Mac even when it has to emulate x86 code. A lot of code will just run natively, though, everything from Safari to Mail.

Apple's migration strategy seems very solid: a lot of the people who were around for the PowerPC-to-x86 transition are around now. From a dev-tools standpoint, Apple is in a better position, too. In the mid-2000s, quite a few big developers (Adobe, Microsoft) were still on Metrowerks rather than Apple's toolchain; now pretty much everybody is on Xcode. They have fat binaries, an emulation layer, and a sensible transition schedule. I wouldn't be surprised if a lot of applications just cross-compiled. (Say, something like OmniFocus, I wonder how difficult it is to migrate that to ARM …)
I don't suffer from insanity, I enjoy every minute of it.
     
Thorzdad  (op)
Moderator
Join Date: Aug 2001
Location: Nobletucky
Status: Offline
Reply With Quote
Jul 2, 2020, 05:57 AM
 
[set curmudgeon_mode=1]
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jul 2, 2020, 08:22 AM
 
Worth noting that article states they don't know if the A12Z will ship in a Mac, but it won't: Craig Federighi categorically stated as much on a Daring Fireball podcast.

I wonder if the Mac chips will be socketed. I'm betting not but you never know.
I have plenty of more important things to do, if only I could bring myself to do them....
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Reply With Quote
Jul 2, 2020, 12:23 PM
 
I doubt it, even in the Mac Pro.
     
Doc HM
Mac Elite
Join Date: Oct 2008
Location: UKland
Status: Offline
Reply With Quote
Jul 2, 2020, 12:44 PM
 
For the first time in a long time I "could" drop money on a new 16in MBP. However, I'll hold off now until maybe 6 months after they go ARM. Don't want to be in on the end of the old, and history has taught us that Apple's first-out-of-the-box kit can quickly become rev A'd once all the stupid design flaws become apparent in real use.
This space for Hire! Reasonable rates. Reach an audience of literally dozens!
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jul 2, 2020, 11:38 PM
 
Originally Posted by Waragainstsleep View Post
I wonder if the Mac chips will be socketed. I'm betting not but you never know.
If I had to wager, perhaps on the Mac Pro you'll have a socketed chip, perhaps also the iMac Pro. But for the rest, I don't think so.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jul 6, 2020, 12:03 PM
 
Originally Posted by Waragainstsleep View Post
The A12Z in the DTK also has a ton more RAM than the iPad. It should beat it by a bit.
It also has a cooler rated for 65W instead of the 4.5W or whatever the iPad Pro has. Geekbench is a shit test that doesn’t hit the power limit, but in real work, there should be a difference.

Originally Posted by Waragainstsleep View Post
I wonder if the A14X might include some subsystem for helping speed Rosetta up. Is that feasible?
We don’t know anything of how Rosetta 2 works. I would say that having a cache big enough to match x86 + space to run whatever emulation they have would help, but I think it is already that big. There might also be something to do about the stupid x86 4K page size (I’m envisioning a TLB that can handle more than one page size) but I’m just spitballing here.

Originally Posted by ghporter View Post
That’s what I thought, but I have other questions. And I didn’t fully state my question. Will an ARM machine with a Rosetta 2 layer between it and legacy code be enough faster to Joe User that he’ll jump on it? And will Rosetta 2 (as issued) be as user friendly/transparent as Rosetta was with PowerPC apps on Intel machines? Or will it be a little nag and tell you “this could run so much faster if you paid for new software”...
So far, Rosetta 2 works like Rosetta 1: it just works, and it is fast enough.

Originally Posted by Waragainstsleep View Post
I wonder if the Mac chips will be socketed. I'm betting not but you never know.
Why would they? It costs money in packaging, it costs a little bit of latency, and you can’t get CPUs from anyone but Apple anyway. Might as well ask for the GPU to be socketed. Just replace the entire motherboard instead.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jul 6, 2020, 03:57 PM
 
Originally Posted by P View Post
Why would they? It costs money in packaging, it costs a little bit of latency, and you can’t get CPUs from anyone but Apple anyway. Might as well ask for the GPU to be socketed. Just replace the entire motherboard instead.
I agree it doesn't fit their current tendencies but I guess it depends what sort of variations there will be in their CPU lineup. If they just use the same chips for the whole model range and disable/enable cores to fit their price points, no need for a socket. If the 24 core CPU is a very different one to the 12 Core then maybe it saves costs on assembly.
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jul 6, 2020, 08:07 PM
 
Originally Posted by P View Post
It also has a cooler rated for 65W instead of the 4.5W or whatever the iPad Pro has. Geekbench is a shit test that doesn’t hit the powerlimit, but in real work, here should be a difference.
Do we actually know what the DTKs look like on the inside? Is the cooling system the same? No doubt there is much better cooling, but the original Intel transition kits looked like a PowerMac G5 on the outside, but did not have the same cooling capacity.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jul 7, 2020, 03:15 AM
 
On the topic of TBDR:

https://twitter.com/never_released/s...278789633?s=19

Seems it is all Apple GPUs going forward.

Originally Posted by OreoCookie View Post
Do we actually know what the DTKs look like on the inside? Is the cooling system the same? No doubt there is much better cooling, but the original Intel transition kits looked like a PowerMac G5 on the outside, but did not have the same cooling capacity.
No, of course not. They can’t make use of the 65W anyway. It is more a question of the A13 increasing the power draw momentarily (to just over 6W), so Apple can make use of a little bit more power. I suspect that they did.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jul 7, 2020, 04:46 AM
 
Originally Posted by P View Post
On the topic of TBDR:

https://twitter.com/never_released/s...278789633?s=19

Seems it is all Apple GPUs going forward.
Interesting, thanks for the find
Originally Posted by P View Post
No, of course not. They can’t make use of the 65W anyway. It is more a question of the A13 increasing the power draw momentarily (to just over 6W), so Apple can make use of a little bit more power. I suspect that they did.
Just to be clear: I also remarked earlier that the native Geekbench scores on the DTK may be higher due to improved cooling. But as you correctly point out, I think we would only see this on longer, sustained workloads that really push the A12Z, and it isn’t clear to what degree Geekbench does this — yet.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jul 7, 2020, 08:29 AM
 
I wonder if the plan for discrete GPUs is that all ARMacs have an integrated Apple GPU and that the plan is to use the sort of dual GPU setup that the 15"/16" MBPs use for the desktops as well. You could then use the AMD GPU specifically for OpenGL graphics or even for compute only.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Reply With Quote
Jul 7, 2020, 10:28 AM
 
I don’t know, that tweet seems like iGPU only.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jul 7, 2020, 04:14 PM
 
Lots of rumour sites are pushing Kuo's predictions that MBP13 and iMac will be the first two Apple Silicon Macs, but the MBP13 was just refreshed and we know now from leaked benchmarks that the iMac is one of the remaining Intel Macs to be released. I can't see them releasing two iMacs before Christmas especially if the second will demolish the first for performance. That leaves the Air or the MBP16. The MBP16 would match what they did with the Intel transition and would be about a year after the previous model, plus it wouldn't hurt sales of the smaller ones that much or the iMacs thanks to its price point. (Can't say I'd be thrilled having just bought a 16).

The other possibility is that Apple will shake up the whole product matrix. Who knows what that would look like?

I'd like to know whether the DTK uses LPDDR4 RAM or regular DDR4. People are assuming the former, but there's no real need to do so.
I have plenty of more important things to do, if only I could bring myself to do them....
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jul 7, 2020, 07:02 PM
 
Originally Posted by Brien View Post
I don’t know, that tweet seems like iGPU only.
They can certainly do GPUs for compute in any case. What I’m thinking is that the desktop will be TBDR, and therefore Apple GPU only, but that full screen apps may use OpenGL or even Metal 2 on a separate GPU. There has to be something, because they finally launched a new Mac Pro a few months ago, and they wouldn’t do that if it was just going to die.

Originally Posted by Waragainstsleep View Post
Lots of rumour sites are pushing Kuo's predictions that MBP13 and iMac will be the first two Apple Silicon Macs, but the MBP13 was just refreshed and we know now from leaked benchmarks that the iMac is one of the remaining Intel Macs to be released. I can't see them releasing two iMacs before Christmas especially if the second will demolish the first for performance. That leaves the Air or the MBP16. The MBP16 would match what they did with the Intel transition and would be about a year after the previous model, plus it wouldn't hurt sales of the smaller ones that much or the iMacs thanks to its price point. (Can't say I'd be thrilled having just bought a 16).

The other possibility is that Apple will shake up the whole product matrix. Who knows what that would look like?

I'd like to know if the DTK uses LPDDR4 RAM or regular DDR4? People are assuming the former but there's no real need to do so.
The thing is that the iMac and the 13” are the ones due for a redesign. Apple could do that and leave the old model for sale as an Intel version, like they have done so many times before, and drop it later when the rest of the line transitions.

I suspect that the DTK uses LPDDR4 simply because I don’t think Apple’s memory controller supports regular DDR4 and its higher voltage.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Reply With Quote
Jul 7, 2020, 08:25 PM
 
I would like to see a 12” ultraportable, 14” non-pro and 14/16/18” Pros. If I were making the decisions...
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jul 7, 2020, 09:19 PM
 
Originally Posted by P View Post
On the topic of TBDR:

https://twitter.com/never_released/s...278789633?s=19

Seems it is all Apple GPUs going forward.
A follow-up on the link. I remember this slide from one of the presentations, and I don’t think it says what the tweet claims it does. Taken at face value, it states that Apple Silicon-based Macs will support both TBDR GPUs (Metal GPU Family Apple) and immediate-mode rendering GPUs (Metal GPU Family Mac 2). That is, it seems to explicitly state that immediate-mode GPUs will be supported on ARM-based Macs. Or am I reading the chart wrong?
Originally Posted by P View Post
I suspect that the DTK uses LPDDR4 simply because I don’t think Apple’s memory controller supports regular DDR4 and its higher voltage.
I suspect the DTK sticks as closely as possible to the iPad Pro, so I tend to agree.

@Brien
“Integrated” GPU does not mean slow these days, and shared memory does offer quite a few advantages, especially if you use the GPU and other co-processors for compute on a common set of data. All current consoles do this, and they are not slow.
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Reply With Quote
Jul 8, 2020, 05:02 AM
 
I had thought LPDDR4 sacrificed performance for low power, but it seems like it's better all round than regular DDR4. The 2020 13" outperforms the 16" quite handsomely in memory tests.

You have a point with the iMac redesign but we know there is another Intel iMac coming, the benchmarks leaked for it. Would they do a complete redesign that accommodates hot Intel/AMD chips knowing they could redesign for their own much cooler chips within a year or less and make them even thinner and quieter? I know they did that with the last G5 iMacs with them only being sold for 4 months, but that wasn't a radical new design and the change in cooling requirements wasn't all that drastic. If there even was one. It seems wasteful to redesign twice or to put the new lower power hardware into an Intel hotbox case.

In looking into the benchmarks for the DTK to try to glean something about the RAM it's using (Geekbench doesn't seem to know), I notice Geekbench lists the CPU as having 4 cores. This implies it didn't make use of the 4 efficiency cores when running the multicore test. I don't know exactly how capable they are; are they negligible? Or would they boost the multicore score further if Geekbench knew how to use them?
I have plenty of more important things to do, if only I could bring myself to do them....
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jul 8, 2020, 06:44 AM
 
Originally Posted by OreoCookie View Post
A follow-up on the link. I remember this slide from one of the presentations, and I don’t think it says what the tweet claims it does. Taken at face value, it states that Apple Silicon-based Macs will support both TBDR GPUs (Metal GPU Family Apple) and immediate-mode rendering GPUs (Metal GPU Family Mac 2). That is, it seems to explicitly state that immediate-mode GPUs will be supported on ARM-based Macs. Or am I reading the chart wrong?
That is a possibility for sure. I’m far from certain here. My read on it was that ARMacs will support the older Metal interface (Metal GPU Family Mac 2) meant for current Intel and AMD GPUs (as a transition layer) but it doesn’t say that Apple will support the new interface (Metal GPU Family Apple) for current (immediate mode) GPUs.

Maybe the idea is that if all Macs have an Apple GPU, they will use that GPU exclusively when running apps that require the newer interface, but can still use the older-style GPU when running an app using the older interface.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jul 8, 2020, 06:50 AM
 
Originally Posted by Waragainstsleep View Post
I had thought LPDDR4 was sacrificing performance for low power but it seems like its better all round than regular DDR4. The 2020 13" outperforms the 16" in quite handsomely in memory tests.
The only downside LPDDR has over the regular is that it is always soldered. Other than that, it is essentially “DDR Pro” now.

Originally Posted by Waragainstsleep View Post
You have a point with the iMac redesign but we know there is another Intel iMac coming, the benchmarks leaked for it. Would they do a complete redesign that accommodates hot Intel/AMD chips knowing they could redesign for their own much cooler chips within a year or less and make them even thinner and quieter? I know they did that with the last G5 iMacs with them only being sold for 4 months, but that wasn't a radical new design and the change in cooling requirements wasn't all that drastic. If there even was one. It seems wasteful to redesign twice or to put the new lower power hardware into an Intel hotbox case.
The thing is that being better at performance per watt is fine on paper, but when you make a new iMac, it needs to be faster than the old one in absolute terms. Whatever CPU Apple puts in there will have to run faster than the 2.6 GHz or whatever the iPad does. I think we’re talking 4 GHz at least, as well as more cores, and that will draw some power. Maybe the idea is to make something that can handle 65W in a pinch but runs cooler and quieter at 35W, or whatever the new CPUs will run at.


Originally Posted by Waragainstsleep View Post
In looking into the benchmarks for the DTK to try and glean something about the RAM its using (Geekbench doesn't seem to know), I notice Geekbench lists the CPU as having 4 cores. This implies it didn't make use of the 4 efficiency cores when running the multicore test. I don't know exactly how efficient they are, are they negligible? Or would they boost the multicore score further if Geekbench knew how to use them?
They’re not negligible; they’re effectively the A6 core from the iPhone 5 with a higher clock and a more modern cache system. But yes, Geekbench doesn’t use them, because Rosetta 2 apparently doesn’t use them. Remember that these scores are run in emulation.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jul 8, 2020, 08:17 AM
 
Originally Posted by P View Post
That is a possibility for sure. I’m far from certain here. My read on it was that ARMacs will support the older Metal interface (Metal GPU Family Mac 2) meant for current Intel and AMD GPUs (as a transition layer) but it doesn’t say that Apple will support the new interface (Metal GPU Family Apple) for current (immediate mode) GPUs.
No, I don't think they will. But to me it says that Apple intends to make drivers for GPUs from other manufacturers (presumably just AMD) so that, e.g., you can use an eGPU with an AMD-based graphics card via Thunderbolt or in the iMac Pro.
Originally Posted by P View Post
Maybe the idea is that if all Macs have an Apple GPU, they will use that GPU exclusively when running apps that require the newer interface, but can still use the older-style GPU when running an app using the older interface.
Well, certainly if we are looking into the far future, I guess Apple could eventually cover all its own GPU needs and deprecate support for AMD GPUs.
I don't suffer from insanity, I enjoy every minute of it.
     
 