Apple Silicon Macs (Page 3)
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Dec 9, 2020, 12:40 AM
 
So maybe no discrete GPUs after all.

Sounds like the top end 16” will slaughter every other notebook ever.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Dec 9, 2020, 01:51 AM
 
Originally Posted by Brien View Post
So maybe no discrete GPUs after all.
Remember, integrated ≠ slow. Consoles have very fast integrated graphics, which rival current-gen, mid-range discrete desktop graphics cards (at prices for which you would never be able to build a gaming PC with the same performance). The GPU in the M1 rivals mid-range discrete mobile GPUs. Scaling GPUs is relatively easy, so I think the performance will just be a question of how much larger the GPU in the M-chip for the 16" will be.
Originally Posted by Brien View Post
Sounds like the top end 16” will slaughter every other notebook ever.
I suspect CPU-wise it'll be the fastest notebook, period. GPU-wise, it'll be right up there, but perhaps a gaming laptop with a quasi-desktop GPU might be able to compete.
I don't suffer from insanity, I enjoy every minute of it.
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Dec 13, 2020, 07:35 PM
 
I mentioned the idea of moving the Mac Pro and all of the wires out of the living room, and my wife was very pleased with the idea. My biggest hang-up right now is the lack of controller support and Steam support. Once it can handle Rocket League natively with a controller and everything else via Steam Link, I’m all in on a Mini.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Dec 13, 2020, 09:59 PM
 
What concerns me with these rumors is GPU compute. The report states that it will have 64 or even 128 GPU cores at the top end. Sounds like a lot, right? Let’s see how far that gets us in pure numbers. The M1 has 2.6 TFLOPS of compute power on 8 cores. Let’s make the naive assumption that we can just scale this up by a factor of 16. This gets us 41.6 TFLOPS. That’s a lot, but it doesn’t quite catch the newest AMD Navi cards at 46 TFLOPS. I actually don’t think Apple will hit that number in pure TFLOPS because it will not be able to clock a big chip as high as a small one - but still, a lot. So why worry?
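To make the naive-scaling arithmetic explicit - a back-of-the-envelope sketch, assuming perfectly linear scaling (real GPUs scale sub-linearly, so treat these as upper bounds):

Code:
// Naive linear scaling of the M1's GPU compute (2.6 TFLOPS on 8 cores).
#include <cstdio>

int main() {
    const double m1_tflops = 2.6;
    const int m1_cores = 8;
    const int configs[] = {16, 32, 64, 128};
    for (int cores : configs)
        std::printf("%3d cores -> %4.1f TFLOPS\n",
                    cores, m1_tflops * cores / m1_cores);
    // 128 cores -> 41.6 TFLOPS, just shy of the ~46 quoted for big Navi.
}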

Memory bandwidth. The M1 uses a 128-bit bus clocked at 4.266 GHz effective. That is probably the highest it can be clocked with LPDDR4X - it is already a very high clock. But if Apple actually needs that bandwidth today, it will need 16 times that bandwidth for 16 times the cores. That is just over 1 TB/s, more than any consumer GPU today and more than twice that AMD card I just compared to. Some compute cards come close to that figure, but they do so using either very wide GDDR6 buses or GDDR6X clocked high. NVidia is the only customer for GDDR6X right now (and it is a bit of a dud anyway). Doing this with GDDR6 means a 512-bit 16GHz bus - wider than anyone has done in years, and clocked way higher than anyone has ever gone with a bus that wide. No, to get that bandwidth, there is only one option: HBM.
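The bandwidth side is the same kind of arithmetic - peak GB/s = (bus width in bits / 8) * (effective transfer rate in GT/s) - with the figures from above:

Code:
// Peak-bandwidth estimates for the configurations discussed above.
#include <cstdio>

double gbps(double bus_bits, double gtps) { return bus_bits / 8.0 * gtps; }

int main() {
    const double m1 = gbps(128, 4.266);                       // ~68 GB/s
    std::printf("M1, 128-bit LPDDR4X @ 4.266 GT/s: %7.1f GB/s\n", m1);
    std::printf("16x that:                         %7.1f GB/s\n", 16 * m1);
    std::printf("512-bit GDDR6 @ 16 GT/s:          %7.1f GB/s\n", gbps(512, 16));
}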

4 stacks of HBM2 should do it. Trouble is, that isn’t cheap. Add in a chip that is going to be over 600mm2 just for the GPUs, and that interposer to hold all of it is going to be immense. TSMC has shown massive interposers, but this is looking to be absurdly expensive. The only way this works is with chiplets. Which Apple has never done, and nobody has managed to do with GPUs yet.

All of this comes together in that I don’t buy it. If it happens, it’s no time soon. So much easier to just add in some PCIe lanes and connect an AMD GPU. Apple can still use an on-chip GPU for drawing images to the screen. God knows they’ve never cared about gaming before, so I don’t see why they should start now. They will make a variant bigger than the current M1 - perhaps even much bigger - but I don’t think it will compete with NVidia and AMD top-of-the-line cards.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Dec 14, 2020, 12:02 AM
 
Originally Posted by P View Post
What concerns me with these rumors is GPU compute.
I think another issue to worry about is the programming model. If you really want to use GPU compute, you need to program for it specifically. And the reason nVidia is entrenched in that market isn't just the edge in performance it has had over AMD in recent years, but also CUDA.
Originally Posted by P View Post
The report states that it will have 64 or even 128 GPU cores at the top end. Sounds like a lot, right? Let’s see how far that gets us in pure numbers. The M1 has 2.6 TFLOPS of compute power on 8 cores. Let’s make the naive assumption that we can just scale this up by a factor of 16. This gets us 41.6 TFLOPS. That’s a lot, but it doesn’t quite catch the newest AMD Navi cards at 46 TFLOPS. I actually don’t think Apple will hit that number in pure TFLOPS because it will not be able to clock a big chip as high as a small one - but still, a lot. So why worry?

Memory bandwidth. The M1 uses a 128-bit bus clocked at 4.266 GHz effective.
Agreed.
Although even getting into the ballpark of AMD's high-end Navi cards (coming from zero) would already be an achievement. Intel has been trying to get a foot in the door of the GPU compute and discrete graphics business with, ahem, more moderate results.

Ideally, I would hope for Apple to offer support for AMD graphics cards at least, in addition to their homegrown solutions. GPU compute problems are usually optimized for the target architecture, so even if Apple's solution were equally powerful on paper in terms of raw compute, that wouldn't automatically mean comparable performance.
Originally Posted by P View Post
No, to get that bandwidth, there is only one option: HBM.
Isn't Apple's current solution on the M1 essentially the same as HBM? (AFAIK HBM is a standard, and there is no reason to believe Apple is using that exact standard.)
Originally Posted by P View Post
4 stacks of HBM2 should do it. Trouble is, that isn’t cheap. Add in a chip that is going to be over 600mm2 just for the GPUs, and that interposer to hold all of it is going to be immense. TSMC has shown massive interposers, but this is looking to be absurdly expensive. The only way this works is with chiplets. Which Apple has never done, and nobody has managed to do with GPUs yet.
If money were the only problem, I'd be less worried here: not only does Apple seem to care less about money spent fabbing things (hence their large-compared-to-the-competition phone SoCs), but these will also be workstation-class parts with margins to fund this stuff. The technological challenge, though, is going to be non-trivial.
Originally Posted by P View Post
All of this comes together in that I don’t buy it. If it happens, it’s no time soon. So much easier to just add in some PCIe lanes and connect an AMD GPU. Apple can still use an on-chip GPU for drawing images to the screen. God knows they’ve never cared about gaming before, so I don’t see why they should start now. They will make a variant bigger than the current M1 - perhaps even much bigger - but I don’t think it will compete with NVidia and AMD top-of-the-line cards.
I think it makes sense for Apple to eventually use its own GPU designs, but IMHO they don't have to do it right away. In any case, I think giving people the option to go for an AMD graphics card would be welcome one way or the other. And it'd free Apple from the pressure of having to compete neck-and-neck with whatever AMD and nVidia have on the market by then.

There are also two other aspects I am curious about:

(1) Will Apple scale up the Neural Engine of the Mac Pro?
(2) Will Apple feature support for FPGA compute more prominently?

The FPGA card seems to be hugely worth it for the select few Mac Pro customers who can make use of it. But it'd be interesting if Apple invested in API support. It seems the FPGA* market is on the rise, with both Intel and AMD investing tens of billions into it. I have added the asterisk because e. g. Intel's eASICs are, as far as I understand, technically not FPGAs, but somewhere in between FPGAs and custom, hard-wired logic.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Dec 14, 2020, 05:33 AM
 
Originally Posted by OreoCookie View Post
I think another issue to worry about is the programming model. If you really want to use GPU compute, you need to program for it specifically. And the reason nVidia is entrenched in that market isn't just the edge in performance it has had over AMD in recent years, but also CUDA.
If only they had thought about that ten years ago, and started another open API for that and not just given up when it got hard...

There are APIs out there where you just replace the "CUDA" prefix in your code and compile against another library - much like how Android reimplements the Java APIs. Apple could do the same thing, unless SCOTUS finds those reimplementations illegal in the Google-Oracle trial. That would be a disaster for so many reasons, but it is a bit too large of a topic for this discussion.
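For a concrete picture, AMD's HIP is probably the best-known example of this prefix-swap approach - a minimal sketch with an illustrative saxpy kernel, error handling omitted; the comments note which cuda* call each hip* call replaces:

Code:
// CUDA-style code with the cuda* prefix swapped for hip*.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    hipMalloc((void**)&x, n * sizeof(float));  // was: cudaMalloc
    hipMalloc((void**)&y, n * sizeof(float));
    hipMemset(x, 0, n * sizeof(float));        // was: cudaMemset
    hipMemset(y, 0, n * sizeof(float));
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // same launch syntax
    hipDeviceSynchronize();                    // was: cudaDeviceSynchronize
    hipFree(x);                                // was: cudaFree
    hipFree(y);
    std::printf("done\n");
}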

Originally Posted by OreoCookie View Post
Agreed.
Although even getting into the ballpark of AMD's high-end Navi cards (coming from zero) would already be an achievement. Intel has been trying to get a foot in the door of the GPU compute and discrete graphics business with, ahem, more moderate results.
Well, they haven't, it is just a rumor. And there are two reasons to believe that this may not be as easy. One of them is scaling - AMD's GCN design fell down hard when it went too wide. 64 CUs (let's call that a "GPU core", even though it isn't the same thing) was the absolute ceiling. They have now managed to widen it and the new one can go wider, but that was hard to do. The Imagination tech that Apple based this on was promised to go as wide as 16 cores - it is a far step to 128 cores. The other is process - Apple has the best process right now, but the yield on a 600 mm2 chip on a cutting-edge node is going to be abysmal. They need to have something in the 300mm2 range as the absolute max.

I just don't buy a 128-core model. I can buy designing the new GPU setup to go as wide as 128 cores for future use, but not making a chip like that now.

Originally Posted by OreoCookie View Post
Isn't Apple's current solution on the M1 essentially the same as HBM? (AFAIK HBM is a standard, and there is no reason to believe Apple is using that exact standard.)
Not really. HBM is based on the idea of going very wide at modest clocks, and that you need an interposer to go that wide. To make the RAM that wide, you stack multiple RAM chips on top of each other. An HBM2 stack is 1024 bits wide.

What Apple is doing is using 16-bit RAM chips - 8 of them - and clocking them through the roof. This avoids the interposer as well as the need to stack the RAM, but there is only so much you can do with the clock speed. This makes the memory a lot cheaper, and it gets better latency as well. You could certainly trade latency for bandwidth to increase the effective clock, but there is only so much you can do before you're reinventing HBM2. Apple doesn't have a unique advantage here.
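Both designs are points on the same peak-bandwidth formula, just approached from opposite ends - a sketch with representative numbers (assuming HBM2 at 2.0 GT/s effective):

Code:
// Very wide at modest clocks (HBM2) vs narrow at very high clocks (M1).
#include <cstdio>

double gbps(double bus_bits, double gtps) { return bus_bits / 8.0 * gtps; }

int main() {
    std::printf("one HBM2 stack, 1024-bit @ 2.0 GT/s: %6.1f GB/s\n",
                gbps(1024, 2.0));      // 256 GB/s
    std::printf("M1, 8x16-bit LPDDR4X @ 4.266 GT/s:   %6.1f GB/s\n",
                gbps(8 * 16, 4.266));  // ~68 GB/s
}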

Originally Posted by OreoCookie View Post
If money were the only problem, I'd be less worried here: not only does Apple seem to care less about money spent fabbing things (hence their large-compared-to-the-competition phone SoCs), but these will also be workstation-class parts with margins to fund this stuff. The technological challenge, though, is going to be non-trivial.
The issue for me is that the volumes for Apple's workstations are so low that Apple has already tried to kill them once, and they were wobbling for a long time before that. I see nothing to say that the volumes are going up, so the ability to pay for a bigger chip or a wider mask must be limited.

Originally Posted by OreoCookie View Post
There are also two other aspects I am curious about:

(1) Will Apple scale up the Neural Engine of the Mac Pro?
Good question.

Originally Posted by OreoCookie View Post
(2) Will Apple feature support for FPGA compute more prominently?
The FPGA card seems to be hugely worth it for the select few Mac Pro customers who can make use of it. But it'd be interesting if Apple invested in API support. It seems the FPGA* market is on the rise, with both Intel and AMD investing tens of billions into it. I have added the asterisk because e. g. Intel's eASICs are, as far as I understand, technically not FPGAs, but somewhere in between FPGAs and custom, hard-wired logic.

I would guess so. FPGA is clearly a segment on the rise, and it fits Apple's MO to add support for something that can bring massive performance increases in return for better software support.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Dec 14, 2020, 08:42 AM
 
Originally Posted by P View Post
If only they had thought about that ten years ago, and started another open API for that and not just given up when it got hard...
True.
Originally Posted by P View Post
That would be a disaster for so many reasons, but it is a bit too large of a topic for this discussion.
Oh yeah, that’ll be a disaster. And given the average age of the judges, I’m not very optimistic.
Originally Posted by P View Post
Well, they haven't, it is just a rumor. And there are two reasons to believe that this may not be as easy. One of them is scaling - AMD's GCN design fell down hard when it went too wide. 64 CUs (let's call that a "GPU core", even though it isn't the same thing) was the absolute ceiling. They have now managed to widen it and the new one can go wider, but that was hard to do. The Imagination tech that Apple based this on was promised to go as wide as 16 cores - it is a far step to 128 cores.
Sure, we are going by a rumor here, and I am not naïve enough to think that all Apple needs to do is copy and paste GPU cores onto a die. I was just thinking through a hypothetical, and having seen Intel struggle literally for years before bringing something merely OK to market now just shows how momentous a task this is.

(I totally agree with your “core” argument, this is meaningless. These are just building blocks, which can be easily replicated.)
Originally Posted by P View Post
Apple has the best process right now, but the yield on a 600 mm2 chip on a cutting-edge node is going to be abysmal. They need to have something in the 300mm2 range as the absolute max.
That depends on what you call abysmal.
Going by this yield calculator, the yield of a 20 x 30 mm^2 chip on a 300 mm wafer with the defect density (0.09) of TSMC’s 7 nm (≠ 5 nm) process is about 60 %. I took the values from one of Ian Cutress’s videos. Evidence suggests that a 5 nm wafer costs about $17,000 apiece, which would peg the cost per chip at $17,000/88 = $193. That sounds doable to me. Of course, you’d have to add packaging and all that, which could very well double the cost (that’s the fudge factor Cutress used on the hypothetical and real Intel chips he compared).
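For the curious, such calculators typically implement a classic defect-density model - a sketch using the Poisson variant (the calculator may use Murphy's model instead, and the gross-die count depends on its edge handling, so this only roughly reproduces the $193 figure):

Code:
// Poisson yield model: yield = exp(-die_area * defect_density).
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.141592653589793;
    const double die_mm2 = 20.0 * 30.0;  // 600 mm^2
    const double d0 = 0.09;              // defects per cm^2 (7 nm figure)
    const double wafer_cost = 17000.0;   // $ per wafer (reported 5 nm price)
    const double yield = std::exp(-(die_mm2 / 100.0) * d0);  // ~0.58

    // First-order dies-per-wafer estimate for a 300 mm wafer.
    const double r = 150.0;
    const double gross = pi * r * r / die_mm2
                       - pi * 2.0 * r / std::sqrt(2.0 * die_mm2);
    std::printf("yield ~%.0f%%, ~%.0f gross, ~%.0f good, ~$%.0f per good die\n",
                100 * yield, gross, gross * yield, wafer_cost / (gross * yield));
}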
Originally Posted by P View Post
I just don't buy a 128-core model. I can buy designing the new GPU setup to go as wide as 128 cores for future use, but not making a chip like that now.
I don’t think making such a chip is the difficult part; that can be solved with money. My back-of-the-envelope calculation does not rule out a 600 mm^2 chip, methinks. I think the actual issue is the memory architecture. Right now the big advantage of shared memory is that you don’t have to copy data around: you don’t need drivers that take care of this, memory bandwidth isn’t being eaten away, and no energy is wasted. That all changes once Apple makes the GPU discrete.
Originally Posted by P View Post
Not really. HBM is based on the idea of going very wide at modest clocks, and that you need an interposer to go that wide. To make the RAM that wide, you stack multiple RAM chips on top of each other. An HBM2 stack is 1024 bits wide.
I meant the larger idea, which is to put memory on-package.
Originally Posted by P View Post
The issue for me is that the volumes for Apple's workstations are so low that Apple has already tried to kill them once, and they were wobbling for a long time before that. I see nothing to say that the volumes are going up, so the ability to pay for a bigger chip or a wider mask must be limited.
I think the reasoning is that a Mac Pro-type computer will allow them to introduce/try new, expensive technologies like FPGA/eASICs. Currently, they are hampered by volume in some respects. E. g. I remember when Apple lagged behind when it came to certain display technologies, and the purported reason was that these technologies were not available in iPhone volume. But ultimately, I think Apple wants to make something like the Mac Pro.
Originally Posted by P View Post
I would guess so. FPGA is clearly a segment on the rise, and it fits Apple's MO to add support for something that can bring massive performance increases in return for better software support.
That’s probably the biggest advantage Apple’s SoCs have over Intel’s offerings, IMHO — the integration of all these accelerators, and spending die area in the way Apple thinks will get the most bang for the buck.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Dec 14, 2020, 10:07 AM
 
Originally Posted by OreoCookie View Post
Oh yeah, that’ll be a disaster. And given the average age of the judges, I’m not very optimistic.
OK, this is TAN, but a) the mean age of the judges has dropped significantly and b) losing Ruth Bader Ginsburg is absolutely awful for all sorts of issues, but it is probably a win for this specific case. She was a copyright maximalist and would absolutely have voted for Oracle. I am not well versed enough in how the various judges would vote on something like this to game it out, but I actually think that the chances have improved somewhat.

Originally Posted by OreoCookie View Post
That depends on what you call abysmal.
Going by this yield calculator, the yield of a 20 x 30 mm^2 chip on a 300 mm wafer with the defect density (0.09) of TSMC’s 7 nm (≠ 5 nm) process is about 60 %. I took the values from one of Ian Cutress’s videos. Evidence suggests that a 5 nm wafer costs about $17,000 apiece, which would peg the cost per chip at $17,000/88 = $193. That sounds doable to me. Of course, you’d have to add packaging and all that, which could very well double the cost (that’s the fudge factor Cutress used on the hypothetical and real Intel chips he compared).
Ah, but the chip isn't 600mm2 on 7nm. We need the defect density on 5nm to come down to the same level before this makes sense. And it misses the cost of developing the chip. Estimates show that a new chip costs half a billion to develop on 5nm. That half a billion needs to be paid for by the customers of those chips. If the volume is that of the iPhone, not a problem. If the volume is Macs, now that is a different story. Apple doesn't report unit sales any more, but the estimate (Gartner etc) is about 20 million a year or just under. According to Gruber, Mac Pro sales are "low single digit percent" of the total Mac volume. One percent of 20 million is 200'000 units. Should we guess that the Mac Pro sells 500'000 units a year, and that it gets a new GPU every other year like the iPad? That is $500 per chip in development costs alone. Any Mac Pro-specific chip at all is debatable from a cost perspective.

The only way I can square this circle is by going to chiplets. Then we could have a chiplet with say 16 cores (or even 18, if we do it the console way and assume that a couple are bad and we always want 16 per chiplet) and use 1 in the base models. We can then continue to 2, 4 or possibly even 8 in the bigger models without paying the development cost every time.
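Rough numbers on why that helps, using the assumptions above ($0.5B development, 500'000 units a year over a two-year life, 16 of 18 cores enabled per chiplet):

Code:
// One chiplet design, amortized across every tier instead of one big die.
#include <cstdio>

int main() {
    const double dev_cost = 500e6;      // $ to develop one 5nm chip
    const double units = 500000.0 * 2;  // 500k/yr over a 2-year life
    std::printf("one-off big die: $%.0f development cost per unit\n",
                dev_cost / units);      // $500

    const int configs[] = {1, 2, 4, 8}; // chiplets per package
    for (int c : configs)               // 18 physical cores, 16 enabled
        std::printf("%d chiplet(s) -> %3d GPU cores\n", c, 16 * c);
}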

Originally Posted by OreoCookie View Post
I don’t think making such a chip is the difficult part; that can be solved with money. My back-of-the-envelope calculation does not rule out a 600 mm^2 chip, methinks. I think the actual issue is the memory architecture. Right now the big advantage of shared memory is that you don’t have to copy data around: you don’t need drivers that take care of this, memory bandwidth isn’t being eaten away, and no energy is wasted. That all changes once Apple makes the GPU discrete.
There is a fundamental problem in that designs that give very high memory bandwidth tend to do so by sacrificing latency (that is the fundamental trick of GDDR). A CPU wants low-latency memory, and Apple has spent a lot of effort on reducing memory latency in the last few generations (they're still behind Intel on that in the A14; I haven't seen such a test of the M1). Getting your main memory up to GDDR6-level bandwidth fundamentally means losing latency.

There is a low-latency HBM standard in the works. It sounds like it might be a solution here, but now we're on the bleeding edge of tech again. This is getting extremely expensive, and that tiny Mac Pro volume has to pay for it.

Originally Posted by OreoCookie View Post
I meant the larger idea, which is to put memory on-package.
Well yes, but doing it with the low bus widths used here isn't uncommon. I believe Apple has been doing that for iPad SoCs for some time. The trick is the material of the interposer - for HBM, that has to be an integrated circuit of reasonably modern process. For something like what Apple is doing, it can be any old circuit board, as far as I can tell.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Dec 14, 2020, 09:43 PM
 
Originally Posted by P View Post
OK, this is TAN, but a) the mean age of the judges has dropped significantly and b) losing Ruth Bader Ginsburg is absolutely awful for all sorts of issues, but it is probably a win for this specific case. She was a copyright maximalist and would absolutely have voted for Oracle. I am not well versed enough in how the various judges would vote on something like this to game it out, but I actually think that the chances have improved somewhat.
I’m not very hopeful to be honest. Perhaps because it is 2020 and a lot of things have chipped away at my optimism. But so far copyright has only expanded, and I don’t see any appetite amongst conservatives especially to change that.
Originally Posted by P View Post
Ah, but the chip isn't 600mm2 on 7nm.
Brain fart, please swap 5 nm and 7 nm: the estimates are for 5 nm, not 7 nm.
Originally Posted by P View Post
We need the defect density on 5nm to come down to the same level before this makes sense.
Of course, I have extrapolated a little here by assuming that in 1–2 years’ time, TSMC will have the defect density down to the same levels as on 7 nm. You can play with the defect density if you wish, and clearly this will increase chip cost. But for a workstation-class part, it still seems within the realm of possibility. Intel’s 28-core parts are larger at 698 mm^2, which AFAIK is close to the theoretical maximum determined by the size of the reticle. I know the horrendous cost of that part, but that’s with a very healthy margin slapped on top of it.

However, like you I think a better option for Apple will be to take a page out of AMD’s playbook and use chiplets. This way they could use, say, 1–4 32-core GPU modules and connect them to an IO chip. That would let them offer powerful GPUs in various Macs.
Originally Posted by P View Post
And it misses the cost of developing the chip. Estimates show that a new chip costs half a billion to develop on 5nm.
Point taken, I completely agree that this does not capture all of the cost, especially the up-front cost for development, masks and other things like packaging. Once chiplets are involved, cost for packaging goes up. Nevertheless, I think even a 600 mm^2 part is feasible if Apple wants to pay for it. IMHO the margins are there.
Originally Posted by P View Post
That is $500 per chip in development costs alone. Any Mac Pro-specific chip at all is debatable from a cost perspective.
Still, I think that is doable for a workstation part. However, I think Apple will most likely want to go the chiplet route and put powerful GPUs in some of its iMacs, the iMac Pro and the Mac Pro. That’d spread costs across several product lines, including those with higher volumes.
Originally Posted by P View Post
The only way I can square this circle is by going to chiplets. Then we could have a chiplet with say 16 cores (or even 18, if we do it the console way and assume that a couple are bad and we always want 16 per chiplet) and use 1 in the base models. We can then continue to 2, 4 or possibly even 8 in the bigger models without paying the development cost every time.
Agreed. I’ve read the teaser on Semiaccurate’s rumors/analysis of the M2, and you could interpret the last paragraph as an indication that Apple might move to chiplets sooner rather than later. They can re-use the same chiplets for a lot of their chips and re-allocate them according to need (= demand, binning). Yields are higher, too, and in principle some chiplets can be manufactured on a cheaper process node (which is something to consider for AMD, but perhaps not for Apple). Chiplets would also open the door to Apple offering parts with very different accelerator/GPU configurations.
Originally Posted by P View Post
There is a fundamental problem in that designs that give very high memory bandwidth tend to do so by sacrificing latency (that is the fundamental trick of GDDR). A CPU wants low-latency memory, and Apple has spent a lot of effort on reducing memory latency in the last few generations (they're still behind Intel on that in the A14; I haven't seen such a test of the M1). Getting your main memory up to GDDR6-level bandwidth fundamentally means losing latency.
Feeding 16+ cores and a healthy GPU and a Neural Engine becomes a non-trivial endeavor. There are lots of ways this can go, which makes it exciting for me to speculate. Apple could offer some “reasonable” amount of RAM in the Mac Pro as standard, say, 32 or 64 GB, which is on-package or close enough, and then expandable RAM slots would serve as a slower cache in between the on-package RAM and the SSD.
Originally Posted by P View Post
There is a low-latency HBM standard in the works. It sounds like it might be a solution here, but now we're on the bleeding edge of tech again. This is getting extremely expensive, and that tiny Mac Pro volume has to pay for it.
Given the Mac Pro’s price, high margins and low volume, I think this would be the perfect product to introduce bleeding-edge tech like HBM. That would also be another reason for Apple to keep the Mac Pro, just like a halo car you’d be able to introduce very expensive tech, see if it works and introduce it in lower-end products later on if it is worth it and after prices have come down. Of course, there would have to be a set of mature APIs to go along with it, too.

Even though we are all excited about Apple’s CPU prowess, I think in the long run the biggest advancements will come from the use of accelerators and perhaps something like FPGAs. Presumably, Apple could dedicate some die area to FPGA circuitry, but it’d have to have mass market use cases.
Originally Posted by P View Post
Well yes, but doing it with the low bus widths used here isn't uncommon. I believe Apple has been doing that for iPad SoCs for some time. The trick is the material of the interposer - for HBM, that has to be an integrated circuit of reasonably modern process. For something like what Apple is doing, it can be any old circuit board, as far as I can tell.
The M1 is the spawn of a mobile SoC, and you can tell. It’ll be interesting to see what Apple does once it reaches into the toolbox of desktop/workstation-class tech. HBM (and its successors) seems like a sensible upgrade from LPDDR4 for desktop chips.
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep  (op)
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Dec 15, 2020, 07:38 AM
 
I can't be bothered to quote everything I'm going to say but hopefully there won't be that much so it should be easy enough to follow along.

In terms of the volumes of Mac Pros sold: if Apple can come up with something for workstation-class machines that is half as big an improvement as the M1 is for consumer machines, they will sell much bigger volumes than they have previously. Maybe not enough to cover the expensive development costs of new chips that you are talking about, but I don't believe Apple will insist that Mac Pro sales cover the development of Mac Pro chips by themselves. Firstly, they are playing long games here, and any tricks and techs they can whip up for these Mac Pros will likely trickle down into the other Macs over time. Second, as that happens, the economy of scale improves enormously. Third, I haven't heard about it recently, but I assume Apple still has enormous piles of cash it can't repatriate without paying way too much tax, so they will be happy to spend this abroad. In places like Taiwan, perhaps. Finally, Apple has never shied away from spending on R&D in the last decade or two.


Could Apple build an SoC that's just a GPU? By which I mean like an M1 (only bigger), but without the CPU cores - just GPU cores and memory? If they can do that, then maybe they could do something like SLI, only faster, via PCIe 4 or 5, and put two or more of these discrete GPU SoCs on the board next to the CPU? Just a thought.

The other thing is that despite the extraordinary memory efficiency of these Apple Silicon chips, a Mac Pro is going to need more RAM (at least optionally) than the 16GB or even 32GB that are likely to be available on the smaller Macs by the time the Mac Pro is updated. I can't see how you could stack 1-2TB of any kind of RAM onto an SoC, can you? Maybe the Pro will require a very different architecture from the others.
I have plenty of more important things to do, if only I could bring myself to do them....
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Dec 15, 2020, 02:43 PM
 
Originally Posted by OreoCookie View Post
I’m not very hopeful to be honest. Perhaps because it is 2020 and a lot of things have chipped away at my optimism. But so far copyright has only expanded, and I don’t see any appetite amongst conservatives especially to change that.
Looking over the list of important cases, it hasn’t expanded over the last few years, has it? In fact, that last case about the copyrightability of laws was decided 5-4, with the 5 newer justices prevailing over the 4 veterans to limit copyright.

Originally Posted by OreoCookie View Post
Brain fart, please swap 5 nm and 7 nm: the estimates are for 5 nm, not 7 nm.

Of course, I have extrapolated a little here by assuming that in 1–2 years’ time, TSMC will have the defect density down to the same levels as on 7 nm. You can play with the defect density if you wish, and clearly this will increase chip cost. But for a workstation-class part, it still seems within the realm of possibility. Intel’s 28-core parts are larger at 698 mm^2, which AFAIK is close to the theoretical maximum determined by the size of the reticle. I know the horrendous cost of that part, but that’s with a very healthy margin slapped on top of it.
I know it is doable. NVidia makes even larger chips, including paying TSMC for a special modification of a process so they can make those absurd 815mm2 Volta chips. It’s just expensive, and given that Apple was already wobbling on the Mac Pro as a product, I remain sceptical.

Originally Posted by OreoCookie View Post
However, like you I think a better option for Apple will be to take a page out of AMD’s playbook and use chiplets. This way they could use, say, 1–4 32-core GPU modules and connect them to an IO chip. That would let them offer powerful GPUs in various Macs.
I think that if you do it anyway, go wide. AMD uses 8 core complex dies and 1 central I/O die for its EPYC chips - that’s what I want to see, because I don’t want the base 16” MBP running away and being too expensive because of the GPU cost either. Just the thought of HBM in that model is scary, because there is no cheap HBM anywhere.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Dec 15, 2020, 02:56 PM
 
Originally Posted by Waragainstsleep View Post
Could Apple build an SoC that's just a GPU? By which I mean like an M1 (only bigger), but without the CPU cores - just GPU cores and memory? If they can do that, then maybe they could do something like SLI, only faster, via PCIe 4 or 5, and put two or more of these discrete GPU SoCs on the board next to the CPU? Just a thought.
Of course they can, but why, when they can just pay AMD to make one?

Originally Posted by Waragainstsleep View Post
The other thing is that despite the extraordinary memory efficiency of these Apple Silicon chips, a Mac Pro is going to need more RAM (at least optionally) than the 16GB or even 32GB that are likely to be available on the smaller Macs by the time the Mac Pro is updated. I can't see how you could stack 1-2TB of any kind of RAM onto an SoC, can you? Maybe the Pro will require a very different architecture from the others.
Strictly speaking, the memory isn’t on top - it sits next to the SoC for cooling reasons, yet within the package. But how much you could fit is a relevant point. Right now, HBM2e comes in stacks of as much as 16GB per capsule using 8-layer stacks. 4 capsules is common enough, and NVidia’s A100 has 6 stacks. That’s 96GB that we know we can do. How much further this can be scaled remains to be seen. Of course you could add two more stacks, but that’s probably it. There has been talk of 12-layer stacks, but nobody has done that yet. Now we’re at 192GB, and we’re trying new things. Maybe we can double again if there is space enough for another shrink in the dies, but that still isn’t even half a TB. No, we’re not getting to 2TB.
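Spelling out that capacity ladder (the last two rungs are speculative, as noted):

Code:
// GB per stack times number of stacks, per the steps above.
#include <cstdio>

int main() {
    struct Step { const char* what; int gb_per_stack; int stacks; };
    const Step steps[] = {
        {"8-layer HBM2e, 6 stacks (A100-style)", 16, 6},  //  96 GB: shipping
        {"8-layer HBM2e, 8 stacks",              16, 8},  // 128 GB: likely max
        {"12-layer stacks, 8 stacks",            24, 8},  // 192 GB: talked about
        {"another density doubling, 8 stacks",   48, 8},  // 384 GB: speculative
    };
    for (const Step& s : steps)
        std::printf("%-38s %4d GB\n", s.what, s.gb_per_stack * s.stacks);
    // Even the optimistic end stays well short of 1-2 TB.
}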
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Dec 15, 2020, 08:05 PM
 
Originally Posted by P View Post
Looking over the list of important cases, it hasn’t expanded over the last few years, has it? In fact, that last case about the copyrightability of laws was decided 5-4, with the 5 newer justices prevailing over the 4 veterans to limit copyright.
I was thinking longer-term than that. Sure, there are occasional wins, but on average it is a loss I’d say. There seems to be a net expansion of copyright across time with no one lobbying government to curb copyright. Copyright in academia is getting ridiculous, too, with journals having convinced some funding agencies that double dipping is ok (i. e. authors have to pay an extra fee to make their work “open access”).
Originally Posted by P View Post
I know it is doable. NVidia makes even larger chips, including paying TSMC for a special modification of a process so they can make those absurd 815mm2 Volta chips. It’s just expensive, and given that Apple was already wobbling on the Mac Pro as a product, I remain sceptical.
I was speaking about feasibility, and that it wouldn’t be unprecedented to produce dies of this size. Clearly the smart money is on a chiplet or chiplet-type solution; I totally agree with you there. Apple doesn’t need a brute-force solution.
Originally Posted by P View Post
I think that if you do it anyway, go wide. AMD uses 8 core complex dies and 1 central I/O die for its EPYC chips - that’s what I want to see, because I don’t want the base 16” MBP running away and being too expensive because of the GPU cost either. Just the thought of HBM in that model is scary, because there is no cheap HBM anywhere.
Agreed. That’d cut down on development cost substantially. It’d be interesting to see whether they put CPU and GPU on separate dies or have one for each chiplet. With regards to HBM specifically, you are right that this is an expensive technology, and I don’t expect Apple to tackle this right away. At least for the 16” MacBook Pro I expect Apple to stick to LPDDR. If Apple wants to use HBM on a larger scale, I expect them to do so with plans to roll it out across a larger set of products, i. e. I expect them to make long-term contracts like they did with e. g. flash memory to supply iPod nanos. Or perhaps they can roll their own solution?
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Dec 15, 2020, 08:14 PM
 
Originally Posted by P View Post
Of course they can, but why, when they can just pay AMD to make one?
Ultimately, that's the big question. To be honest, I was initially expecting that Apple would do just that, so the rumors surprised me a bit. AMD's latest GPUs are competitive with nVidia's offerings, so Apple isn't lagging behind the way it was in the CPU department (comparing its homegrown CPU cores with Intel's offerings). You can find me in earlier threads arguing that Apple will use AMD GPUs for its higher-end machines, so the best thing I can come up with is that Apple seems to think it can offer something better. But Apple doesn't have a track record with high-end GPUs, and all we have is speculation (= fun!).
Originally Posted by P View Post
Strictly speaking, the memory isn’t on top - it sits next to the SoC for cooling reasons, yet within the package. But how much you could fit is a relevant point. Right now, HBM2e comes in stacks of as much as 16GB per capsule using 8-layer stacks. 4 capsules is common enough, and NVidia’s A100 has 6 stacks. That’s 96GB that we know we can do.
Just one thing to add here: nVidia compute cards are supremely expensive. Unless prices have dropped substantially (I am not actively following this market), higher-end cards are of the order of $30,000 apiece. And prices started at $10k. Of course, nVidia is making a very healthy margin here. Such a solution won’t be cheap at present.

If you reduce the HBM RAM capacity to e. g. 32 GB, perhaps 64 GB if you are pushing it, I think this could be a nice compromise for a workstation part. Perhaps some people wouldn't even need to add any RAM (which could act as a higher-level cache). And for people who run applications with high memory demands, I think more, slower memory is still a net gain over less, faster memory and having to swap.
( Last edited by OreoCookie; Dec 16, 2020 at 01:40 AM. )
I don't suffer from insanity, I enjoy every minute of it.
     
Waragainstsleep  (op)
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Dec 15, 2020, 08:26 PM
 
Originally Posted by P View Post
Of course they can, but why, when they can just pay AMD to make one?
Like the dual-GPU cards in the current Mac Pros? Maybe that's the plan: an Apple GPU SoC they can stack two or more of on a single PCIe 4 or 5 card, so you can get up to 4 cards in your 2022 Mac Pro - and of course they could always drop one or two more right on the motherboard.
I have plenty of more important things to do, if only I could bring myself to do them....
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Dec 16, 2020, 12:40 AM
 
You still have the TBDR issue, though.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Dec 16, 2020, 05:53 AM
 
Originally Posted by OreoCookie View Post
Just one thing to add here: nVidia compute cards are supremely expensive. Unless prices have dropped substantially (I am not actively following this market), higher-end cards are of the order of $30,000 apiece. And prices started at $10k. Of course, nVidia is making a very healthy margin here. Such a solution won’t be cheap at present.
Nvidia is expensive, period - but that is margin. AMD sold a 16GB Radeon VII with four stacks of HBM2 for $700. It was at that point widely reported that each stack cost $80, for $320 in total, two years ago. Memory manufacturers tend to keep prices constant and increase capacity. Since the capacity per layer has doubled, that probably means that 8GB costs $80 right now. Higher stacks are probably more expensive. Still, it means that 32GB should be reasonable by now for a four-stack model.
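Spelled out, with the key assumption being that the price per stack stays roughly flat while the capacity per stack doubles:

Code:
// HBM2 cost guess based on the ~$80-per-stack figure reported for Radeon VII.
#include <cstdio>

int main() {
    const double usd_per_stack = 80.0;
    std::printf("then: 4 stacks x 4 GB = 16 GB for ~$%.0f\n", 4 * usd_per_stack);
    std::printf("now:  4 stacks x 8 GB = 32 GB for ~$%.0f\n", 4 * usd_per_stack);
}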

Originally Posted by Brien View Post
You still have the TBDR issue, though.
For graphics, yes, but this discussion was at least initially about GPU compute, ie using the GPU to render stuff or whatnot. TBDR doesn't affect that.

Originally Posted by Waragainstsleep View Post
Like the dual-GPU cards in the current Mac Pros? Maybe that's the plan: an Apple GPU SoC they can stack two or more of on a single PCIe 4 or 5 card, so you can get up to 4 cards in your 2022 Mac Pro - and of course they could always drop one or two more right on the motherboard.
This doesn't sound like something Apple would do. They would make one big package of all of their chips and sell it for an absurd price.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
BLAZE_MkIV
Professional Poster
Join Date: Feb 2000
Location: Nashua NH, USA
Status: Offline
Dec 20, 2020, 10:55 AM
 
x86 or x86-64? x86 has been dead for quite some time.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Dec 21, 2020, 02:53 PM
 
No, x86 isn’t dead, but it is seriously hurting. Remember that there are still x86 chips that are faster than any ARM on single-threaded code. They get there by using way more power, but they exist. (I happen to have just used one to build myself a new PC.)

I do wonder what the likes of Dell are going to do. Call up Qualcomm and beg for Snapdragons?
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Dec 21, 2020, 08:20 PM
 
Originally Posted by P View Post
No, x86 isn’t dead, but it is seriously hurting. Remember that there are still x86 chips that are faster than any ARM on single-threaded code. They get there by using way more power, but they exist. (I happen to have just used one to build myself a new PC.)
Care to share? I am sure a lot of us want to geek out on the specs.

I thought the M1 was essentially neck-and-neck with Zen 3, one being faster in integer tasks, the other having an edge in FP. Of course, there may be individual tasks where one has a clear edge over the other. Of course, that picture changes if you swap in non-Apple cores.

In any case, it seems to me that x86 will get into trouble in the server world first/more quickly. Amazon, Microsoft and others have been contributing for years now to make sure that standard server software just runs on ARM. And this year the first really competitive ARM-based server chips have been released. Amazon is pushing its ARM-based server instances and I wouldn’t be surprised if Apple found uses for its Mac Pro ARM SoC in its data centers.

Arguably, this will be a much bigger problem for Intel and AMD than losing the consumer business, since that’s where they make a whole lot of money. (It’ll probably be a much bigger problem for Intel, since if you want to stick with x86, AMD is clearly the way to go at this point.)

Looking at my brother (an IT tech consultant specialized in server infrastructure), it seems that we should not underestimate the inertia, though. His company has a client that pays big bucks every year, but refuses to overhaul its aging infrastructure. How aging you may ask? They still have some networked equipment that requires coax cables. Try debugging that!
Originally Posted by P View Post
I do wonder what the likes of Dell are going to do. Call up Qualcomm and beg for Snapdragons?
I don’t think they are in good shape. It is antithetical to these companies to be vertically integrated, so they are literally years and years behind. I don’t think they are used to pressuring their suppliers to build something to suit their needs. Qualcomm is a bigger mystery: they could be the company for ARM-based chips, but they’re not. Eventually, a company will fill that niche, methinks. I reckon one of the companies that make these great server SoCs might release a derivative for desktops and workstations at one point. A giga processor ultralite if you wish (that’s a reference for old Mac heads).
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Dec 23, 2020, 06:29 AM
 
Originally Posted by OreoCookie View Post
Care to share? I am sure a lot of us want to geek out on the specs.
Ryzen 9 5900X. There is really nothing else that is special about it - B550, 16GB DDR4 (3600MHz), 1TB M.2 SSD, various doodads from my old box including the Vega 56 GPU - no chance of scoring a new GPU this fall. I decided to make life a little harder for myself with a smaller box, and because I was tired of everything being black or white, I went with a red one - a Raijintek Styx. It is old enough that it actually has space for a slot-loading optical drive - I reused one from an old iMac. It is unfortunately missing a USB-C port on the front panel, but I do like what it looks like.

Originally Posted by OreoCookie View Post
I thought the M1 was essentially neck-and-neck with Zen 3, one being faster in integer tasks, the other having an edge in FP. Of course, there may be individual tasks where one has a clear edge over the other. Of course, that picture changes if you swap in non-Apple cores.
Apple M1 has a bit of a weird floating point profile, actually - it has lots of scalar FPU units (four in parallel, IIRC), because JavaScript uses doubles for everything. Modern CPUs don’t usually bother with that because it’s more efficient to use vectors if you need to process that many floats per cycle (using vectors means that your data is organized in memory, which means that it is much quicker to access - using scalar operators means waiting for memory access more often).
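A toy illustration of the memory point (not an M1 benchmark): the same reduction over doubles, once through pointers the way dynamic-language objects tend to be laid out, once over a contiguous array that a compiler can vectorize:

Code:
// Pointer-chasing (scalar, latency-bound) vs contiguous (vectorizable) sums.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 16;
    std::vector<double> flat(n, 1.0);
    std::vector<const double*> boxed(n);
    for (int i = 0; i < n; ++i)
        boxed[i] = &flat[(i * 7919) % n];       // scattered access pattern

    double a = 0.0, b = 0.0;
    for (int i = 0; i < n; ++i) a += *boxed[i]; // scalar FP units shine here
    for (int i = 0; i < n; ++i) b += flat[i];   // this loop vectorizes
    std::printf("%.0f %.0f\n", a, b);           // both print 65536
}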

Originally Posted by OreoCookie View Post
In any case, it seems to me that x86 will get into trouble in the server world first/more quickly. Amazon, Microsoft and others have been contributing for years now to make sure that standard server software just runs on ARM. And this year the first really competitive ARM-based server chips have been released. Amazon is pushing its ARM-based server instances and I wouldn’t be surprised if Apple found uses for its Mac Pro ARM SoC in its data centers.

Arguably, this will be a much bigger problem for Intel and AMD than losing the consumer business, since that’s where they make a whole lot of money. (It’ll probably be a much bigger problem for Intel, since if you want to stick with x86, AMD is clearly the way to go at this point.)

Looking at my brother (an IT tech consultant specialized in server infrastructure), it seems that we should not underestimate the inertia, though. His company has a client that pays big bucks every year, but refuses to overhaul its aging infrastructure. How aging you may ask? They still have some networked equipment that requires coax cables. Try debugging that!
IBM is still selling its z-series mainframes, which are compatible with the System/360 from the sixties. There will be a market for that sort of thing for x86 for a very long time, but it may die in other sectors.

Originally Posted by OreoCookie View Post
I don’t think they are in good shape. It is antithetical to these companies to be vertically integrated, so they are literally years and years behind. I don’t think they are used to pressuring their suppliers to build something to suit their needs. Qualcomm is a bigger mystery: they could be the company for ARM-based chips, but they’re not. Eventually, a company will fill that niche, methinks. I reckon one of the companies that make these great server SoCs might release a derivative for desktops and workstations at one point. A giga processor ultralite if you wish (that’s a reference for old Mac heads).
Right now, enterprise computing is moving towards the big customers making their own computers from components. There is space for one, maybe two companies to sell complete enterprise solutions to medium-sized companies that won’t want to put everything in the cloud. Dell had a good path towards being the one, until this ARM transition really got going.

AMD has apparently revived the K12 project - ARM processor based on the Zen backend. I guess that they’re seeing the same opportunity that we are. That would work for the current OEMs.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Dec 24, 2020, 02:22 AM
 
Originally Posted by OreoCookie View Post
I don’t think they are in good shape. It is antithetical to these companies to be vertically integrated, so they are literally years and years behind. I don’t think they are used to pressuring their suppliers to build something to suit their needs. Qualcomm is a bigger mystery: they could be the company for ARM-based chips, but they’re not. Eventually, a company will fill that niche, methinks. I reckon one of the companies that make these great server SoCs might release a derivative for desktops and workstations at one point. A giga processor ultralite if you wish (that’s a reference for old Mac heads).
Nvidia? They did just buy ARM Holdings, after all.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Dec 24, 2020, 04:03 AM
 
Originally Posted by Brien View Post
Nvidia? They did just buy ARM Holdings, after all.
No, they’re trying to buy ARM. Chances are they’re not going to be allowed to - China is signaling that they don’t want ARM under US export restrictions - but they’re clearly planning to do something like this. The thing is, NVidia doesn’t play well with others. They’ve managed to piss off everyone they partner with for years, so I don’t think the Dells of the world want to work with them if they can avoid it.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Dec 24, 2020, 08:26 PM
 
Originally Posted by P View Post
Ryzen 9 5900X. There is really nothing else that is special about it - B550, 16GB DDR4 (3600MHz), 1TB M.2 SSD, various doodads from my old box including the Vega 56 GPU - no chance of scoring a new GPU this fall. I decided to make life a little harder for myself with a smaller box, and because I was tired of everything being black or white, I went with a red one - a Raijintek Styx. It is old enough that it actually has space for a slot-loading optical drive - I reused one from an old iMac. It is unfortunately missing a USB-C port on the front panel, but I do like what it looks like.
Sounds like a nice small form factor desktop to me.
Originally Posted by P View Post
Apple M1 has a bit of a weird floating point profile, actually - it has lots of scalar FPU units (four in parallel, IIRC), because JavaScript uses doubles for everything. Modern CPUs don’t usually bother with that because it’s more efficient to use vectors if you need to process that many floats per cycle (using vectors means that your data is organized in memory, which means that it is much quicker to access - using scalar operators means waiting for memory access more often).
But the M1 doesn’t seem slower than Zen 3 in general single-threaded benchmarks, though; they are roughly the same.
Originally Posted by P View Post
IBM is still selling its z-series mainframes, which are compatible with the System/360 from the sixties. There will be a market for that sort of thing for x86 for a very long time, but it may die in other sectors.
Agreed, x86 is going to be the Cobol of ISAs.
I think it’ll take about 10-15 years for x86 to slowly fade from the mass market. This is just the beginning. Plenty of servers are more than powerful enough for a lot of applications already. Perhaps in some pockets of SMB it’ll live longer. (My brother’s tale of having to deal with problems bridging Gigabit Ethernet to coax networking equipment is fresh in my mind.) If your SMB has a database application and does some file sharing, etc., you already don’t need a super fast system, and plenty of companies are only replacing equipment once it breaks. In that market, it is more about consolidation.
Originally Posted by P View Post
Right now, enterprise computing is moving towards the big customers making their own computers from components. There is space for one, maybe two companies to sell complete enterprise solutions to medium-sized companies that won’t want to put everything in the cloud. Dell had a good path towards being the one, until this ARM transition really got going.
As far as I can tell, Dell’s enterprise strategy seems to be to provide great service. No-questions-asked 24-hour replacement service and the like is very popular. But now they can’t even compete.
Originally Posted by P View Post
AMD has apparently revived the K12 project - ARM processor based on the Zen backend. I guess that they’re seeing the same opportunity that we are. That would work for the current OEMs.
I hope AMD manages to bring this to fruition; they have the chops to instantly become a serious player in the ARM enterprise market. They have enterprise-class memory controllers that can feed 64 cores, interconnects, etc. It’ll be a while, though.
Originally Posted by Brien View Post
Nvidia? They did just buy ARM Holdings, after all.
I don’t think nVidia is a good candidate. On the one hand, the ARM ecosystem lives off of the fact that ARM does not want to go into the SoC business. Plenty of smaller companies already go to RISC-V for smaller service cores in e. g. SSD controllers. On the other, nVidia is, well, not well-liked. People have to pay through the nose for compute cards. I do not get the impression that people who use nVidia’s high-end products really love the company; they begrudgingly use it.
I don't suffer from insanity, I enjoy every minute of it.
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Dec 25, 2020, 02:18 AM
 
That is true, nVidia is user hostile to boot as well.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Dec 25, 2020, 08:53 AM
 
Originally Posted by Brien View Post
That is true, nVidia is user hostile to boot as well.
They are not very cooperative with the Linux/open source community either, which is surprising, because a lot of supercomputers use nVidia GPUs or nVidia compute cards these days. Linus Torvalds has had some choice words for them (NSFW). That’s one big reason why a lot of companies who rely on ARM IP are very skeptical.
I don't suffer from insanity, I enjoy every minute of it.
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Dec 25, 2020, 11:00 AM
 
Not to mention the telemetry BS.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Dec 26, 2020, 05:37 PM
 
Originally Posted by OreoCookie View Post
Sounds like a nice small form factor desktop to me.
That was the plan! I have some ideas for improving the cooling a little, and I want another BIOS update before I start fiddling with the finer details, but it is already a nice box. Since it is Christmas, it has mostly been gaming, but there is a real difference there - at least in my never-ending strategy games.

But the M1 doesn’t seem slower than Zen 3 in general single-core benchmarks, though; they are roughly the same.
They’re close to even, and Zen 3 is relying heavily on its brute force tricks - massive L3 and 5GHz turbo clocks - to get there. It DOES win on integer, however, and the losses it has on floating point math are largely inconsequential to me (they appear to be things that are best run on a GPU anyway, quite frankly).

I hope AMD manages to bring this to fruition; they have the chops to instantly become a serious player in the ARM enterprise market. They have enterprise-class memory controllers that can feed 64 cores, interconnects, etc. It’ll be a while, though.
This is an excellent point. Infinity Fabric is awesome, and Apple has nothing like it right now. AMD can use that and its skills with chiplets to make a manycore monster.

I do not get the impression that people who use nVidia’s high-end products really love the company; they begrudgingly use it.
This is my feeling as well, and they keep burning their partners. A company that didn’t think twice about burning Microsoft, Sony and Apple isn’t going to pull its punches with Dell.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Dec 26, 2020, 09:58 PM
 
Originally Posted by P View Post
They’re close to even, and Zen 3 is relying heavily on its brute force tricks - massive L3 and 5GHz turbo clocks - to get there. It DOES win on integer, however, and the losses it has on floating point math are largely inconsequential to me (they appear to be things that are best run on a GPU anyway, quite frankly).
True, and for a desktop power is less of a concern (at least when we talk about the relatively low TDPs of Ryzen desktop parts). A few years ago, CPUs were producing a lot more heat — my cheese grater Mac Pro was a very nice foot warmer.
Originally Posted by P View Post
This is an excellent point. Infinity Fabric is awesome, and Apple has nothing like it right now. AMD can use that and its skills with chiplets to make a manycore monster.
That’ll be the secret sauce. Apple did do some engineering in that direction, but that was almost 20 years ago. (They developed the first chipset for the G5, although that was, interestingly enough, based in part on AMD’s interconnect at the time — Groundhog Day?) The other thing I don’t know how to judge is IP: what patents are connected to chiplets? Engineering these days seems to be not about finding the best solution, but about finding the best solution that does not violate existing patents.
Originally Posted by P View Post
This is my feeling as well, and they keep burning their partners. A company that didn’t think twice about burning Microsoft, Sony and Apple isn’t going to pull its punches with Dell.
It seems to have something to do with nVidia’s company culture: if they had reached out and contributed to the open source community early on, I am sure they would have had a loyal following (rather than reluctant customers) in the compute market. That’s also why I share the pessimism regarding the ARM purchase: ARM’s corporate culture seems antithetical to nVidia’s. I hope I am wrong.
I don't suffer from insanity, I enjoy every minute of it.
     
Dex13
Mac Elite
Join Date: Dec 2002
Location: Bay Area of San Jose
Status: Offline
Reply With Quote
Dec 31, 2020, 03:31 PM
 
Originally Posted by mindwaves View Post
I really hate synthetic benchmarks. I wish more people would post real-world tests like duplicating files (huge files and thousands of small files), etc. I also wish more people would post non-video-editing or Photoshop-related tasks (e.g., compiling code, web-browsing battery tests, file processing). Not everyone who buys a Mac runs their own video studio.
this thing slaps.
chrome started to behave a couple of weeks in and everything is perfect.
i don't believe there is a better computer atm for the price, especially the macbook air.

60 + tabs in chrome, typical postman garbage (just searching our database, creating/modifying/deleting/cleaning in bulk)
nobody is compiling seti, but no one has complained about anything taking longer than the intels either
one note, numbers, excel, textwrangler all open currently w/ multiple windows + slack (barf)
zoom doesn't even make this thing sweat, even w/ my video backgrounds

10/10 would do it again
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Reply With Quote
Jan 12, 2021, 02:50 PM
 
About ready to pull the trigger on an M1 Mini. This would replace the Mac Pro in the living room with the intent of cleaning up the area of cords, wires, hard drives, and general clutter.

Hesitations:
- Gaming
---- No booting into Windows
---- No controller support in Steam (yet)
---- Could theoretically Steam Link into the downstairs computer and run games through there
- Storage - I could splurge on the 512GB model, but that still wouldn't give me enough space for my iTunes and Photo libraries
---- May have to keep one external hard drive locally for media storage
---- Existing MP might become file server in the basement for Time Machine backups
- Video - The MP is connected to my current display via DP and optionally to our TV via HDMI
---- Looks like I can get a USB-C to DP or HDMI cable that will let me keep this situation going

So here's my conundrum. I have a PC I built back in 2012 running an i5-2500K and an R7 265 that serves as the movie/gaming computer in the theater.

The Mac Pro has a better graphics card, an RX 580, and can game better, especially when booted into Windows.

So in the end, in the basement I need:
- Gaming, preferably Windows
- Light web browsing/movie watching, platform agnostic
- File server, preferably Mac

Do I throw the RX 580 into the old Windows box and keep that for gaming, then dedicate the MP to being a file server? The MP is running Catalina; will that even boot with the original HD 5870 card?

Do I sell the upgraded MP for ~$800 and use the proceeds to buy a graphics card for the PC and an old Mini for a file server?
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Reply With Quote
Jan 13, 2021, 01:23 PM
 
Can the iTunes/Photos libraries go on the server?
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Reply With Quote
Jan 13, 2021, 02:16 PM
 
Are you asking if it's technically possible, or if I'm okay with doing it that way?

I think the answer to both questions is yes. The iTunes library serves basically no purpose at this point with streaming where it is.

The Photos library I'm a bit more sensitive about as it dates all the way back to the advent of iPhoto in '02 and has basically the only copies of all of the photos I've taken since then.

Frankly, I'd like a better solution for media backup than this, as I've had the library go corrupt before and appear to have lost important photos, like my son's birth/first few weeks of life.
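
If nothing else, a checksum manifest of the originals would catch silent corruption before it gets rotated into the backups. Here's a minimal Python sketch, with the caveat that the folder name is an assumption (newer Photos libraries keep masters in "originals"; the old iPhoto-era layout used "Masters"):

[CODE]
import hashlib
import json
from pathlib import Path

# Hypothetical location; adjust for your library's layout.
LIBRARY = Path.home() / "Pictures" / "Photos Library.photoslibrary" / "originals"

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Hash a file in 1 MB chunks so huge videos don't eat RAM."""
    h = hashlib.sha256()
    with path.open("rb") as fh:
        while block := fh.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest(root: Path) -> dict:
    """Map each file's path (relative to the library) to its SHA-256 digest."""
    return {
        str(f.relative_to(root)): sha256_of(f)
        for f in sorted(root.rglob("*"))
        if f.is_file()
    }

if __name__ == "__main__":
    manifest = build_manifest(LIBRARY)
    Path("photos_manifest.json").write_text(json.dumps(manifest, indent=2))
    print(f"Hashed {len(manifest)} files")
[/CODE]

Re-run it every so often and diff the manifests; any file whose hash changes without you having touched it is a red flag.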
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Reply With Quote
Jan 13, 2021, 03:03 PM
 
I was asking a bit of both. I know iTunes can do it, but Photos is a POS.

Lightroom is probably overkill, but it stores everything as files in folders the way iTunes does. So, in the same way, nothing happens to the actual files if the library explodes.

Assuming it’s doable, that gets the extra hard drive out of the living room and negates the need for a bigger SSD, but having the Pro run Time Machine against itself is a little iffy. That should really be on a different computer... ideally, in a different room.
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Reply With Quote
Feb 3, 2021, 02:14 PM
 
M1 is officially snappy™.

Getting files transferred over from the Pro. Since I've always used Migration Assistant, I have stuff on here from 20 years ago on my Ruby iMac G3. Maybe it's time to let some of that go.

Right now I'm considering selling both the Mac Pro and the gaming PC. Then buy an older Mini to act as the file/Time Machine server with all of my existing external drives, and build a new gaming PC to just do gaming/theater room shenanigans. 8yo is saving up for a VR setup so I want something to support that.

In the meantime, the Mini seems to game great - Rocket League is smooth. I loaded up the only other real Mac game in my library, Tomb Raider (2013), and it played that on Ultra with no issues. Interestingly, Steam recognized my wired XBox 360 controller as "GAME FOR WINDOWS," and it worked in Tomb Raider without issue.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Feb 3, 2021, 07:52 PM
 
Originally Posted by Laminar View Post
Right now I'm considering selling both the Mac Pro and the gaming PC. Then buy an older Mini to act as the file/Time Machine server with all of my existing external drives, and build a new gaming PC to just do gaming/theater room shenanigans. 8yo is saving up for a VR setup so I want something to support that.
I don’t know whether that makes financial sense. Last time I checked the prices for used Mac minis were crazy. Perhaps you’re better off just getting another M1-based Mac mini.
I don't suffer from insanity, I enjoy every minute of it.
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Reply With Quote
Feb 4, 2021, 10:29 AM
 
There's a 2012 i5 on Facebook locally for $125. Looks like I need at least a 2014 for Big Sur compatibility; those go for around $260. I don't know if I could even get a good NAS box for $260.

Minecraft works, though it crashed a few times. It definitely doesn't work in "Fabulous" graphics mode, but it warned me that it wasn't compatible when I changed the setting.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Feb 4, 2021, 07:16 PM
 
Originally Posted by Laminar View Post
There's a 2012 i5 on Facebook locally for $125. Looks like I need at least a 2014 for Big Sur compatibility; those go for around $260. I don't know if I could even get a good NAS box for $260.
The prices are much lower than I thought.
As far as NAS boxes go, my go-to brand is Synology. An entry-level 2-bay model sells for £160 on Amazon. My Synology DS214+ has been one of the most reliable pieces of technology I have ever owned. It’s been literally flawless: not once did it crash, and it only rebooted when it updated itself or power was lost. The price you pay is that it is more limited — which for some people is a plus.

Of course, a Mac mini would likely be more powerful. The only caveat is that you probably want to run Big Sur if you want to use the new version of Time Machine, which is based on APFS rather than HFS+. It is much, much faster and I reckon also more reliable.
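
If you want to verify which format your backup volume is actually using, something like this works. It's only a sketch: the volume name is hypothetical and the plist key name is an assumption, so inspect diskutil's output on your own machine:

[CODE]
import plistlib
import subprocess

def volume_filesystem(volume: str) -> str:
    """Ask diskutil for machine-readable volume info and pull out the filesystem."""
    out = subprocess.run(
        ["diskutil", "info", "-plist", volume],
        check=True, capture_output=True,
    ).stdout
    info = plistlib.loads(out)
    # Key name may vary by macOS version; dump `info` to see what you get.
    return info.get("FilesystemType", "unknown")

fs = volume_filesystem("/Volumes/TimeMachine")  # hypothetical volume name
print("Backup volume filesystem:", fs)
if fs != "apfs":
    print("Not APFS, so you won't get Big Sur's faster Time Machine format.")
[/CODE]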
I don't suffer from insanity, I enjoy every minute of it.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Reply With Quote
Feb 5, 2021, 12:06 AM
 
+1 Synology. Very happy with their stuff. Worst I’ve had happen is a creaky fan after running for several years without stopping.
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Reply With Quote
Feb 5, 2021, 01:47 AM
 
Originally Posted by Laminar View Post
There's a 2012 i5 on Facebook locally for $125. Looks like I need at least a 2014 for Big Sur compatibility; those go for around $260. I don't know if I could even get a good NAS box for $260.
According to the MacRumors Big Sur patching thread, a 2012 mini can have BS installed today. It may or may not have graphics acceleration - the Macmini6,1 is listed in both lists: the ones with acceleration and the ones without. The Wi-Fi drivers may be missing as well. If you use it for TM / NAS duty over ethernet, neither issue would matter.

I have a budget QNAP NAS, used only for basic TM and NAS duty. It wasn't hard to set up and use. Never had a Synology, so I cannot offer a comparison.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Feb 5, 2021, 02:10 AM
 
Originally Posted by reader50 View Post
According to the MacRumors Big Sur patching thread, a 2012 mini can have BS installed today. It may or may not have graphics acceleration - the Macmini6,1 is listed in both lists: the ones with acceleration and the ones without. The Wi-Fi drivers may be missing as well. If you use it for TM / NAS duty over ethernet, neither issue would matter.
Having had a Mac that needed a frankensteined OS, I wouldn't want to do that again. With such a machine, I really want an install-and-forget solution. That's what's so great about my Synology: it never gave me any trouble. Ever. Zero. It is the most boring purchase ever, but when you are talking about backups and storage, that's a Good Thing™.

I reckon you could simply never update that machine, but as soon as you want to use it for other things, you may be forced to. (Perhaps you want to use the server Mac mini to compile your Xcode project in the background or some such.)
Originally Posted by reader50 View Post
I have a budget QNAP NAS, used only for basic TM and NAS duty. It wasn't hard to set up and use. Never had a Synology, so I cannot offer a comparison.
Just to add one more comment: when you ask around about which companies make quality NASes, you'll most likely hear Synology and QNAP as the top two mentions. I don't have personal experience with QNAP, but at the time I went for Synology simply because I liked their design miles better. So QNAP seems to be an equivalent (albeit, in my eyes, not as pretty a) choice.
I don't suffer from insanity, I enjoy every minute of it.
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Reply With Quote
Feb 5, 2021, 02:20 AM
 
Originally Posted by OreoCookie View Post
I reckon you could simply never update that machine, but as soon as you want to use it for other things, you may be forced to.
I use a patched installer on my Mac Pro for unsupported OS installs. Non-reboot updates are safe to apply in place. For updates that require a reboot:

I have a 2nd clean install on a flash drive. Plug that in, and apply the update to it. If my Mac required it, I would next reapply patches to the USB install. Reboot into the flash drive.

If all is well, apply the update to the main Mac. Reapply patches if needed. Reboot back into the main Mac install.

The above procedure assumes you can download the update files. This works through Catalina, but Apple stopped offering them as of BS. You'd probably have to do a (patched) OS reinstall each time you need an update that requires a reboot.
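
One sanity check worth adding before each step: confirm which volume you're actually booted from, so the test update doesn't land on the main install by mistake. A minimal Python sketch (assuming diskutil's plist output uses the usual VolumeName key; inspect it if your version differs):

[CODE]
import plistlib
import subprocess

# Ask diskutil about "/" (the currently booted volume) in plist form.
out = subprocess.run(
    ["diskutil", "info", "-plist", "/"],
    check=True, capture_output=True,
).stdout
info = plistlib.loads(out)

# "VolumeName" is the usual key; dump `info` if your macOS version differs.
print("Booted from:", info.get("VolumeName", "unknown"))
[/CODE]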
     
Doc HM
Professional Poster
Join Date: Oct 2008
Location: UKland
Status: Offline
Reply With Quote
Feb 5, 2021, 08:58 AM
 
Originally Posted by subego View Post
+1 Synology. Very happy with their stuff. Worst I’ve had happen is a creaky fan after running for several years without stopping.
Have done plenty of NAS installs at clients. Whatever the budget, size, etc., I always go for Synology. 100% reliable, excellent performance and features, and the setup is a snap.

Used a Terrastore RAID as well, but that was a TB3 RAID, not a NAS. Impressed with that as well; no issues so far. 64TB RAID for a 4K video producer.
This space for Hire! Reasonable rates. Reach an audience of literally dozens!
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Reply With Quote
Feb 5, 2021, 10:20 AM
 
Originally Posted by OreoCookie View Post
Having had a Mac that needed a frankensteined OS, I wouldn't want to do that again.
Yeah, that's one of the forces driving me out of the cMP. I already had to hack it to get Catalina running, and I can't go back to the old Apple-supplied graphics card because it's no longer supported, so the whole thing is on borrowed time.
     
BLAZE_MkIV
Professional Poster
Join Date: Feb 2000
Location: Nashua NH, USA
Status: Offline
Reply With Quote
Feb 5, 2021, 11:11 AM
 
Originally Posted by Doc HM View Post
I always go for Synology. 100% reliable, excellent performance and features, and the setup is a snap.
I had a Synology NAS years ago and they refused to patch it after Apple made a change that broke the file sharing.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Feb 5, 2021, 06:17 PM
 
Originally Posted by BLAZE_MkIV View Post
I had a Synology NAS years ago and they refused to patch it after Apple made a change that broke the file sharing.
When was that? My Synology is from 2014 (DS214+) and it's still getting regular updates to the latest version of DSM.
I don't suffer from insanity, I enjoy every minute of it.
     
BLAZE_MkIV
Professional Poster
Join Date: Feb 2000
Location: Nashua NH, USA
Status: Offline
Reply With Quote
Feb 7, 2021, 10:48 PM
 
It was a CS406 in 2009. If I recall correctly, they switched CPUs and abandoned the previous platforms.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Feb 7, 2021, 11:54 PM
 
Originally Posted by BLAZE_MkIV View Post
It was a CS406 in 2009. If I recall correctly, they switched CPUs and abandoned the previous platforms.
Ah, OK, so that was before my Synology time. The only feature they did not port to ARM was btrfs support. I reckon the ARM CPUs they use are too wimpy and that they do not include enough RAM. It would have been nice, though.
I don't suffer from insanity, I enjoy every minute of it.
     
 