Welcome to the MacNN Forums.



Mac Pro sails into the distance
Doc HM
Mac Elite
Join Date: Oct 2008
Location: UKland
Status: Offline
Apr 16, 2018, 12:58 PM
Apparently some time in '19 now. Probably. Also 2019 for the new Apple display. Mostly because Apple don't currently build anything that will actually run the new display.
This space for Hire! Reasonable rates. Reach an audience of literally dozens!
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Apr 16, 2018, 01:18 PM
I'd hoped Apple would just make an updated motherboard for the cheesegrater, and work on a fancier model on the side. Like they did for the PowerMac + Cube. That way, if the fancy model has too many limitations, the market could pick the fully expandable standard model.

But it appears Apple wants to do fancy-only, with the inevitable limitations. When it does arrive, I'm sure it will lack something. No 3.5" drive bays, perhaps. Or only a single CPU socket. Or, like the Cube, only one PCIe slot. Or maybe they'll solder down the RAM.
( Last edited by reader50; Apr 16, 2018 at 06:59 PM. Reason: typo)
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Apr 16, 2018, 01:45 PM
Fancy and limited?

Or fancy and completely modular? Something like a homebuilt processor for UI functions and the basic OS, coupled with an extra Intel processor configurable for the task at hand…?

Internal 3.5" bays are not going to happen, I'd think.

Thunderbolt 4?
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Apr 16, 2018, 05:57 PM
Apple has said “modular”, so no soldered down RAM. Single socket is very possible - the leaks that are coming out about Ice Lake-SP indicate a frankly ludicrous 8 memory channels and 1TB RAM without even going fully buffered. They’re already up to 28 cores in Skylake-SP, so it will likely be more.

3.5” bays are likely dead. I keep saying that there is no advantage to having them internal anymore, and nothing is happening to change my mind on that.

Personally, I think that the look-wow feature will involve Optane for insanely fast cache storage.

TB “3.1” is already out in the form of new controllers, but the only thing they add is DisplayPort 1.4. Not sure where PCIe 4.0 is in time, but it ought to be close?
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Apr 16, 2018, 06:50 PM
High core counts per CPU ought to cause severe clockspeed ceilings, due to limited thermal budget. I'd expect 2x 6-cores to outperform 1x 12-core. However, browsing the wikipedia charts for recent Intel CPUs doesn't show any significant clockspeed penalties as the cores go up. I don't see how they're doing it, but that would remove my main objections to a single socket.

3.5" drive bays are a different matter, mostly because the price of flash memory has remained stuck for over three years now, with no strong indication it's going to change soon. This has limited SSD sizes to about 4TB; the prices go insane if you go larger than that.

Current prices from NewEgg:

2TB SSD $400-500 ($225 per TB)
2TB HDD for $50-60 ($27 per TB) {1/8 price}

4TB SSD $1500 ($375 per TB)
4TB HDD $90-100 ($24 per TB) {1/15 price}

8TB HDD $220-230 ($28 per TB)
10TB HDD $310-320 ($31 per TB)
12TB HDD $420 ($35 per TB)

16-20 TB HDDs expected "soonish"

HDDs are dropping in price per TB, while getting steadily bigger in the 3.5" size. The 2.5" size appears stuck around 4-5TB. For mass storage, 4K/8K video editing, etc -- the only game in town is 3.5" HDDs. Unless flash prices drop dramatically, 3.5" HDDs will be around for some time.
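As a quick sanity check on the $/TB arithmetic above (a throwaway sketch; the dollar figures are just the midpoints of the NewEgg ranges quoted in this post):

```python
# Price-per-TB check for the NewEgg figures quoted above.
# Each entry: (capacity in TB, midpoint of the quoted price range in $).
drives = {
    "2TB SSD": (2, 450),    # $400-500
    "2TB HDD": (2, 55),     # $50-60
    "4TB SSD": (4, 1500),
    "4TB HDD": (4, 95),     # $90-100
    "8TB HDD": (8, 225),    # $220-230
    "10TB HDD": (10, 315),  # $310-320
    "12TB HDD": (12, 420),
}

per_tb = {name: price / tb for name, (tb, price) in drives.items()}
for name, dollars in per_tb.items():
    print(f"{name}: ${dollars:.0f}/TB")

# SSD premium over HDD at the same capacity point:
print(f"2TB: {per_tb['2TB SSD'] / per_tb['2TB HDD']:.1f}x")  # roughly the 1/8 ratio above
print(f"4TB: {per_tb['4TB SSD'] / per_tb['4TB HDD']:.1f}x")  # roughly the 1/15 ratio above
```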

As to internal bays vs external - I've found internal ones more practical. Especially quieter. My TM array was in an external beside my MacPro for years. It rarely ran, because I hated the drive and fan noise. I finally moved it to another room, and converted it into network access via a spare laptop. It's slower now. But since I no longer hear it, I finally have frequent backups.

The cheesegrater has a larger case to damp out noise and vibration. Making the bays internal allows full speed without needing to put the drives at a distance, or in large hush cabinets.
( Last edited by reader50; Apr 16, 2018 at 10:26 PM. Reason: math, brevity)
And.reg
The Mighty
Join Date: Feb 2004
Location: Well the sports issue was within arm's reach but they closed up shop and kicked me out. And I'm out of toilet paper.
Status: Offline
Apr 16, 2018, 07:01 PM
Any chance that Apple will allow for an option to have liquid cooling in a modular?
This one time, at Boot Camp, I stuck a flute up my PC.
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Apr 17, 2018, 12:33 AM
One question is how Apple will balance the number of PCIe lanes it spends on internal expansion (e. g. dedicated to PCIe SSDs) against external expansion (in the form of Thunderbolt ports). And this may be connected to whether the system will also support dual-CPU configurations: Xeons for single-CPU systems tend to support fewer PCIe lanes. Moreover, with a dual-CPU system you could have more fast cores, because the high-core-count Xeons have much lower base clocks.

Overall, I hope that the design brief starts with configurability: let the user decide how many cores at what speed he or she wants and whether one or two GPUs are appropriate, and include plenty of connectivity (such as two 10 Gbit/s ethernet ports). For that reason I hope that Apple allows dual-CPU configurations; it simply gives buyers more options.

As far as 3.5" drive bays are concerned, I think they don't have to be internal; Apple could offer a third-party drive array on the Apple Store, like they did for some time after they discontinued the XRAID.

@And.reg
That depends on what kind of liquid cooling you mean: you can have all components liquid-cooled, but then you need custom coolers and you cannot easily upgrade e. g. graphics cards afterwards. Or you can have closed liquid CPU coolers, like Apple used on some of the G5s.

If you want something easily upgradable, that speaks against liquid cooling, me thinks.
I don't suffer from insanity, I enjoy every minute of it.
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Apr 17, 2018, 06:26 PM
Apple used to have reliability problems with its liquid coolers, but modern closed loop systems are generally more reliable. As long as there is a PCIe slot and at least one 120mm fan slot, nothing stops you from installing one yourself.

Skylake-SP has 48 PCIe lanes in total, and Intel offers that even for cheapish Xeons. Shouldn’t be a problem to get that. Apple could also get a dual-socket capable model and just use one of them, there is nothing in Intel’s specs to prevent that.

https://ark.intel.com/Search/Feature...essPortsMax=48

Of course, two Xeons means 96 lanes from the CPUs, but then you have to contend with inter-socket traffic.

Apple can also put the flash storage on the PCH, like Intel clearly intends and like they have done on all Macs for the last few years (with the reservation that I haven't looked up how the iMac Pro does it), and not use any PCIe lanes from the CPU at all. 2x PCIe x16 + 4x TB3 from the CPU and all storage from the PCH seems like an appropriate combination to me. Alternatively, give 16 lanes total to the internal graphics, one x16 or two x8, and use the saved lanes for some more TB3.
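For what it's worth, that split adds up exactly to the 48 CPU lanes (quick sketch only; I'm reading "4x TB3" as four controllers at x4 each):

```python
# Lane budget for the proposed CPU-lane split (PCIe 3.0 lanes).
gpu_lanes = 2 * 16   # two x16 graphics slots
tb3_lanes = 4 * 4    # four Thunderbolt 3 controllers at x4 each
cpu_lanes = 48       # Skylake-SP lane count

used = gpu_lanes + tb3_lanes
print(used, cpu_lanes - used)  # 48 used, 0 spare
```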
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Apr 17, 2018, 07:12 PM
Originally Posted by P View Post
3.5” bays are likely dead. I keep saying that there is no advantage to having them internal anymore, and nothing is happening to change my mind on that.
One less enclosure, power supply, and external cable?
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Apr 18, 2018, 12:33 AM
Originally Posted by P View Post
Apple used to have reliability problems with its liquid coolers, but modern closed loop systems are generally more reliable. As long as there is a PCIe slot and at least one 120mm fan slot, nothing stops you from installing one yourself.
To add to that: I don't think liquid cooling the overclocker's way (where you have a single reservoir for CPU, GPU and all other things that need cooling as opposed to a closed solution for a single component) is in the cards, simply because you don't actually need it and it would make upgrades much more difficult.
Originally Posted by P View Post
Skylake-SP has 48 PCIe lanes in total, and Intel offers that even for cheapish Xeons. Shouldn’t be a problem to get that. Apple could also get a dual-socket capable model and just use one of them, there is nothing in Intel’s specs to prevent that. [...] Of course, two Xeons means 96 lanes from the CPUs, but then you have to contend with inter-socket traffic.
Compared to AMD's Zen that's a paltry amount (they feature 128 PCIe 3.0 lanes, although in 2-processor configurations, that doesn't double as half of each CPU's PCIe lanes are used for interprocessor communication). Of course, I don't expect Apple to ship the Mac Pro with an AMD server CPU.

48 lanes isn't really that much in the grand scheme of things: I can easily spend 68 lanes without even including USB 3.0 and other IO:

2 x 16 lanes for graphics
4 x 4 lanes for PCIe SSD storage
4 x 4 lanes for Thunderbolt 3 ports
2 x 2 lanes for two 10 Gbit ethernet ports

You could of course make compromises here and not give each SSD 4 lanes, or give one of the GPUs only 8 lanes, for example. But if you build a machine that is designed to be as fast as possible with as few bottlenecks as possible, I think there are good arguments for having at least a dual-CPU option.
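Tallying that wish list (same arithmetic as above, just as a quick sketch):

```python
# Tallying the hypothetical lane budget above (PCIe 3.0 lanes).
budget = {
    "2 GPUs @ x16": 2 * 16,
    "4 PCIe SSDs @ x4": 4 * 4,
    "4 Thunderbolt 3 ports @ x4": 4 * 4,
    "2 10GbE ports @ x2": 2 * 2,
}
total = sum(budget.values())
print(total)        # 68 lanes wanted
print(total - 48)   # 20 more than a single Skylake-SP provides
```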
Originally Posted by P View Post
Apple can also put the flash storage on the PCH, like Intel clearly intends and like they have done on all Macs for the last few years (with reservation that I haven’t looked up how the iMac Pro does it) and not use any PCIe lanes from the CPU at all. 2x PCIe x16 + 4x TB3 from the CPU and all storage from the PCH seems like an appropriate combination to me. Alternatively, give 16 lanes total to the internal graphics, one x16 or two x8, and use the saved lanes for some more TB3.
For an iMac Pro that is intended to only use a single SSD (well, technically, two SSDs in a RAID0), this might be a solution. But if you, say, allow for four PCIe SSDs, I doubt connecting them via the PCH is the most performant solution.
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Apr 18, 2018, 08:24 AM
Originally Posted by subego View Post
One less enclosure, power supply, and external cable?
For a stationary workhorse?
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Apr 18, 2018, 11:07 AM
Originally Posted by subego View Post
One less enclosure, power supply, and external cable?
You don’t need the external power supply anymore, as USB 3.0 (and even more so USB-C) provides enough power for an external drive. The enclosure can be as small as the drive itself, so if you put it inside the computer chassis, it has to grow by the same amount.
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Apr 18, 2018, 11:24 AM
Originally Posted by OreoCookie View Post
To add to that: I don't think liquid cooling the overclocker's way (where you have a single reservoir for CPU, GPU and all other things that need cooling as opposed to a closed solution for a single component) is in the cards, simply because you don't actually need it and it would make upgrades much more difficult.
"Closed loop" liquid cooling means that each thing to be cooled has its own pump and its own radiator. It is probably more common than "rolling your own" and connecting everything to the same pump and radiator these days. I don't think Apple will do that, but it should be possible to do it yourself if you want to.

Originally Posted by OreoCookie View Post
Compared to AMD's Zen that's a paltry amount (they feature 128 PCIe 3.0 lanes, although in 2-processor configurations, that doesn't double as half of each CPU's PCIe lanes are used for interprocessor communication). Of course, I don't expect Apple to ship the Mac Pro with an AMD server CPU.
No, but Intel is likely to take the hint and add more lanes as well.

Originally Posted by OreoCookie View Post
48 lanes isn't really that much in the grand scheme of things: I can easily spend 68 lanes without even including USB 3.0 and other IO:

2 x 16 lanes for graphics
4 x 4 lanes for PCIe SSD storage
4 x 4 lanes for Thunderbolt 3 ports
2 x 2 lanes for two 10 Gbit ethernet ports

You could of course make compromises here and not give each SSD 4 lanes or give one of the GPUs only 8 lanes, for example, but if you build a machine that is designed to be as fast as possible with as few bottlenecks as possible, I think there are good arguments for having at least a dual CPU option.

Originally Posted by OreoCookie View Post
For an iMac Pro that is intended to only use a single SSD (well, technically, two SSDs in a RAID0), this might be a solution. But if you, say, allow for four PCIe SSDs, I doubt connecting them via the PCH is the most performant solution.
The point about connecting storage to the PCH is that then the RAID work can be done there and not in the CPU. Giving each drive 4 lanes with no connection closer than the main memory does not seem efficient. It only provides an advantage if you're saturating all four drives with independent tasks, which seems unlikely in practice. Alternatively, add a second RAID controller with a wider datapath than DMI 3.0, say 8 lanes total, but I don't see anyone doing that on the market right now, so I doubt that the x4 connection in DMI 3.0 is a limitation in practice.

Ethernet is also integrated in the PCH - Intel supports 4x 10Gbit connections from the Purley PCH.
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Apr 18, 2018, 11:57 AM
Originally Posted by Spheric Harlot View Post
For a stationary workhorse?
Yup. I’m crazy anal about this. My rigs are clean, goddammit.
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Apr 18, 2018, 01:04 PM
Originally Posted by P View Post
Apple used to have reliability problems with its liquid coolers, but modern closed loop systems are generally more reliable.
It looks to me like Apple solved those problems by the end of the G5 run. My G5 Quad still works today, with zero coolant issues. 13 years after manufacture.

I remember forum posts about early G5 coolant failures in the duals. But those kinds of posts slowly petered out as the last models became dominant.
And.reg
The Mighty
Join Date: Feb 2004
Location: Well the sports issue was within arm's reach but they closed up shop and kicked me out. And I'm out of toilet paper.
Status: Offline
Apr 18, 2018, 02:16 PM
I was just asking because many eGPU setups use liquid cooling just for the GPU, so I was curious whether Apple would consider the space needed for easy install/removal of a liquid-cooled PCIe GPU (or dual GPUs).
Thorzdad
Moderator
Join Date: Aug 2001
Location: Nobletucky
Status: Offline
Apr 18, 2018, 02:25 PM
I'm really diggin' that Persian mouse pad.
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Apr 18, 2018, 05:12 PM
Thank you!

I found the company more than 20 years ago now, and they’ve been my mousepad of choice ever since.

https://www.mouserug.com

They also sell direct on Amazon.
Thorzdad
Moderator
Join Date: Aug 2001
Location: Nobletucky
Status: Offline
Apr 18, 2018, 05:43 PM
Hmmmm...Their website says Apple’s Magic Mouse can have problems adapting to the rugs. Nuts.
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Apr 18, 2018, 08:34 PM
Originally Posted by P View Post
The point about connecting storage to the PCH is that then the RAID work can be done there and not in the CPU. Giving each drive 4 lanes with no connection closer than the main memory does not seem efficient. It only provides an advantage if you're saturating all four drives with independent tasks, which seems unlikely in practice. Alternatively, add a second RAID controller with a wider datapath than DMI 3.0, say 8 lanes total, but I don't see anyone doing that on the market right now, so I doubt that the x4 connection in DMI 3.0 is a limitation in practice.
On “closed systems” like the iMac Pro, where Apple designs everything up to and including the nuts and bolts, this is exactly the way to go, but I am not sure whether a more standard approach isn't more versatile. On the iMac Pro, the T2 is the SSD controller. If you want to be able to plug in vanilla PCIe SSDs, I am not sure this is the best option. I take your point that it isn't very likely that all SSDs will saturate a 4x link with IO, but on the other hand, the Mac Pro is supposed to be a machine for rather extreme applications. Plus, a workstation-level RAID controller is a rather delicate thing. (I remember that one of the few times data was lost on a Sun Storage array, it was due to bugs in the RAID controller's firmware.)

Fact remains that Intel is quite stingy compared to the competition in the server and workstation space when it comes to PCIe lanes, and your proposal feels like a workaround. (Which is why Apple might do it.) Even if you just insist on 2 x 16 lanes for graphics and 4 x 4 lanes for Thunderbolt 3, you have used up all of Intel's PCIe lanes.
Originally Posted by P View Post
Ethernet is also integrated in the PCH - Intel supports 4x 10Gbit connections from the Purley PCH.
That'll put a lot of pressure on the PCH interface, won't it? IO, networking, USB 3.0, etc. All of those have become bandwidth hogs by themselves. And while I grant that it might not be likely to saturate 4 SSDs, I think it is quite likely you can stress the USB subsystem, one 10 GBit networking port and 2 SSDs at the same time. (Just connect an SSD via USB 3.0, and read/write from a network share at the same time.)
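To put rough numbers on that scenario (back-of-envelope only; real throughput depends on protocol overhead, and DMI 3.0 is electrically a PCIe 3.0 x4 link):

```python
# Back-of-envelope: can DMI 3.0 (electrically PCIe 3.0 x4) carry the
# combined load of the scenario above?
def pcie3_gbps(lanes):
    # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding, result in GB/s
    return 8.0 * lanes * (128 / 130) / 8

dmi_capacity = pcie3_gbps(4)  # ~3.94 GB/s each direction

demand = {
    "USB 3.0 SSD": 5.0 / 8 * (8 / 10),  # 5 Gbit/s link, 8b/10b encoding
    "10GbE share": 10.0 / 8,            # ~1.25 GB/s
    "2 NVMe SSDs": 2 * pcie3_gbps(4),   # two x4 drives flat out
}

print(f"DMI 3.0 capacity: {dmi_capacity:.2f} GB/s")
print(f"Aggregate demand: {sum(demand.values()):.2f} GB/s")
```

Even with generous rounding, the aggregate demand is more than double what the x4 link can move.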
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Apr 18, 2018, 09:27 PM
Originally Posted by Thorzdad View Post
Hmmmm...Their website says Apple’s Magic Mouse can have problems adapting to the rugs. Nuts.
Forgot about that!

This is true. The Magic Mice shaped like Pro Mice are fine. That’s one in the pic. Any model past that stutters.
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Apr 18, 2018, 09:29 PM
Originally Posted by P View Post
You don’t need the external power supply anymore, as USB 3.0 (and even more so USB-C) provides enough power for an external drive. The enclosure can be as small as the drive itself, so if you put it inside the computer chassis, it has to grow by the same amount.
Forgot about this too... good point!
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Apr 19, 2018, 05:10 AM
Originally Posted by reader50 View Post
It looks to me like Apple solved those problems by the end of the G5 run. My G5 Quad still works today, with zero coolant issues. 13 years after manufacture.

I remember forum posts about early G5 coolant failures in the duals. But those kinds of posts slowly petered out as the last models became dominant.
Liquid cooling is more finicky, as a rule. There is a reason that pro systems tend to avoid it. Add in the fact that it limits upgradeability, and I don't think Apple will do a single liquid cooling loop.

Originally Posted by And.reg View Post
I was just asking because many eGPU setups use liquid cooling just for the GPU so I was curious if Apple would consider the space needed for an easy install/removal of a liquid-cooled PCI GPU (or dual GPU).
I don't see why not, it is really just a PCIe slot and a 120mm fan slot. You can do it in a cheesegrater, I think?

Originally Posted by OreoCookie View Post
On “closed systems” like the iMac Pro where Apple designs everything up to and including the nuts and bolts this is exactly the way to go, but I am not sure whether a more standard approach isn't more versatile. On the iMac Pro their T2 is the SSD controller. If you want to be able to plug in vanilla PCIe SSDs, I am not sure this is the best option. I take your point that it isn't very likely that all SSDs will saturate a 4x link with IO, but on the other hand, the Mac Pro is supposed to be a machine for rather extreme applications. Plus, a workstation-level RAID controller is a rather delicate thing. (I remember that one of the few times data was lost on a Sun Storage array was due to bugs in the RAID controller's firmware.)

Fact remains that Intel is quite stingy compared to the competition in the server and workstation space when it comes to PCIe lanes, and your proposal feels like a workaround. (Which is why Apple might do it.) Even if you just insist on 2 x 16 lanes for graphics and 4 x 4 lanes for Thunderbolt 3.0, you have used up all of Intel's PCIe lanes.
Note that it is 4 lanes for each pair of TB3 ports, so your setup has 8 total TB3 ports - two more than the trashcan. Yes, this means that if you saturate two ports next to each other, the link will be congested, but that is the way Intel designed the TB3 controller.

But the main difference is that I don't think two GPUs need 16 lanes each. Every test I can find of gaming on fewer lanes shows that x4 lanes gives you >98% of the performance, and x8 is indistinguishable from x16. Compute tasks are even less demanding, generally - miners give each GPU a single x1 lane. Apple has moved to x8 lanes for the GPU in the MBP. I can well see them dedicating x16 lanes to GPU total - x16 for one, or 2 times x8 for two - if they run out of lanes. x16 lanes made a lot more sense back with PCIe 1.0 and less onboard memory, but it isn't really needed now.

If you want to let people add their own SSDs, you need a controller with a documented interface. Intel ships one that is good enough for everyone else - why not use it?

Originally Posted by OreoCookie View Post
That'll put a lot of pressure on the PCH interface, won't it? IO, networking, USB 3.0, etc. All of those have become bandwidth hogs by themselves. And while I grant that it might not be likely to saturate 4 SSDs, I think it is quite likely you can stress the USB subsystem, one 10 GBit networking port and 2 SSDs at the same time. (Just connect an SSD via USB 3.0, and read/write from a network share at the same time.)
Intel simulates things like this. It wouldn't be hard to make the link to the PCH x8 lanes if they saw a need for it - it is a separate PCH for the Xeons anyway - and they haven't bothered. In my mind, that indicates that the current x4 link is fast enough for now.

Also note that USB 3.1 would typically be handled by the TB3 controllers with your proposed setup (like on the current MBP), so that load would not go over the PCH.
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Apr 19, 2018, 05:12 AM
Originally Posted by subego View Post
Forgot about that!

This is true. The Magic Mice shaped like Pro Mice are fine. That’s one in the pic. Any model past that stutters.
Logitech's mice are better, IMO. Have an MX Anywhere 2 as a travel mouse now - excellent little thing.
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Apr 19, 2018, 06:56 AM
Originally Posted by P View Post
Note that it is 4 lanes for each pair of TB3 ports, so your setup has 8 total TB3 ports - two more than that the trashcan. Yes, this means that you if you saturate two ports next to each other, the link will be congested, but that is the way Intel designed the TB3 controller.
I thought that this was optional, and the reason why e. g. the 15 inch MacBook Pro's left and right Thunderbolt ports are created differently.
Originally Posted by P View Post
But the main difference is that I don't think two GPUs need 16 lanes each. Every test I can find of gaming on fewer lanes shows that x4 lanes gives you >98% of the performance, and x8 is indistinguishable from x16. Compute tasks are even less demanding, generally - miners give each GPU a single x1 lane. Apple has moved to x8 lanes for the GPU in the MBP. I can well see them dedicating x16 lanes to GPU total - x16 for one, or 2 times x8 for two - if they run out of lanes. x16 lanes made a lot more sense back with PCIe 1.0 and less onboard memory, but it isn't really needed now.
Again, my argument isn't about the best way to work within a limited PCIe budget; my initial point was that if you wanted to design a machine that is as fast and as flexible as possible, allowing for dual-CPU configurations still makes sense, even if your aim is not to maximize the number of CPU cores but to, say, have more aggregate IO bandwidth, or the same number of cores as a single-CPU config but at higher clocks. Configurability should be one of the key design goals: this way Apple does not have to put a laser focus on a particular use case.
Originally Posted by P View Post
If you want to let people add their own SSDs, you need a controller with a documented interface. Intel ships one that is good enough for everyone else - why not use it?
Isn't that what PCIe SSDs that use NVMe do? If Apple used their own controller, presumably it could rely on the same protocol, but then you have to manage the RAID controller.
Originally Posted by P View Post
Intel simulates things like this. It wouldn't be hard to make the link to the PCH x8 lanes if they saw a need for it - it is a separate PCH for the Xeons anyway - and they haven't bothered. In my mind, that indicates that the current x4 link is fast enough for now.
On the other hand, they are shipping Xeons with only 48 lanes when the competition has 128.
Originally Posted by P View Post
Also note that USB 3.1 would typically be handled by the TB3 controllers with your proposed setup (like on the current MBP), so that load would not go over the PCH.
Ok, that's a good point, I forgot about that. (Although that is assuming there are no traditional USB-A-type ports.)
Thorzdad
Moderator
Join Date: Aug 2001
Location: Nobletucky
Status: Offline
Apr 19, 2018, 07:14 AM
Originally Posted by subego View Post
Forgot about that!

This is true. The Magic Mice shaped like Pro Mice are fine. That’s one in the pic. Any model past that stutters.
I'm using the wireless Magic Mouse that came with my late-2009 iMac. So, no rug for me.
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Apr 19, 2018, 12:01 PM
Originally Posted by OreoCookie View Post
I thought that this was optional, and the reason why e. g. the 15 inch MacBook Pro's left and right Thunderbolt ports are created differently.
I guess it’s optional in that you can just not wire up one of the two outputs to a physical port, but other than that, no. All TB controllers are made this way.

The left and right ports thing is because of something else. On the 13” TB (not the 15”), Apple has run out of PCIe lanes. It spends 4 on the storage and 4 on the left TB controller, which leaves only 4 for the right controller and WiFi (and possibly the T1) to share. This means that, somewhere, there won't be enough bandwidth. Most likely Apple has just wired up two lanes to the right TB3 controller, although there is a chance that they’re muxing them somehow.

Also note that all of these 12 lanes are from the PCH, so they’re all stuck behind the equivalent of 4 lanes to the CPU anyway. The 15” (and the trashcan MP) run them from the CPU instead.

Originally Posted by OreoCookie View Post
Again, my argument isn't about the best way to work within a limited PCIe budget; my initial point was that if you wanted to design a machine that is as fast and as flexible as possible, allowing for dual-CPU configurations still makes sense, even if your aim is not to maximize the number of CPU cores but to, say, have more aggregate IO bandwidth, or the same number of cores as a single-CPU config but at higher clocks. Configurability should be one of the key design goals: this way Apple does not have to put a laser focus on a particular use case.
Of course you get more lanes in total if you add more sockets, but there is a cost to that. Lots more power and heat to remove, traffic between the sockets means that you have to consider which slot and port you use, and most importantly - it gets expensive. Crazy expensive.

Also, why not add 4 sockets in that case? The relevant Xeons support that now. What is the limit?

Originally Posted by OreoCookie View Post
On the other hand, they are shipping Xeons with only 48 lanes when the competition has 128.
Sure, but Epyc has 128 because it is 4 dies in one package, and each die has 32 lanes. I don’t think any customers asked for 128 lanes. 32 lanes makes sense for an octocore high-end desktop chip (especially as the chipset has to use some of them), and everything else flows from that. Could they have got away with 24 per die? Maybe, but if you’re building for the future, it makes sense to have a few more.

Originally Posted by OreoCookie View Post
Ok, that's a good point, I forgot about that. (Although that is assuming there are no traditional USB-A-type ports.)
You can run USB 3.1 Gen 2 over USB-A. It doesn’t make much sense to use a TB3 controller to do that as you would lose a TB port, but you can.

If Apple puts any USB-A ports on the new MP (and they should), I think they will be USB 3.0 from the PCH. Current Intel PCHes don’t support more bandwidth than that over USB (although that will likely change by next year), and Apple never shipped a faster USB-A port, so it makes sense to keep the backwards-compatible ports “bug compatible”, so to speak.
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Apr 20, 2018, 12:30 AM
Originally Posted by P View Post
I guess it’s optional in that you can just not wire up one of the two outputs to a physical port, but other than that, no. All TB controllers are made this way.
I was under the impression that it depends on the precise model of the controller, but that you were not forced to use two outputs. Alpine Ridge supports x2 and x4 PCIe links, but I seem to remember that the 13" MacBook Pro muxes an x4 link between the right-hand Thunderbolt controller and other hardware (including wifi).
Originally Posted by P View Post
The left and right ports thing is because of something else. On the 13” TB (not the 15”), Apple has run out of PCIe lanes.
I mixed up the sizes, but in the end it was due to a lack of PCIe lanes.
Originally Posted by P View Post
Of course you get more lanes in total if you add more sockets, but there is a cost to that: a lot more power and heat to remove, traffic between the sockets means that you have to consider which slot and port you use, and most importantly, it gets expensive. Crazy expensive.
Crazy expensive is relative once you are in workstation territory. One of my professors had a decked-out Alpha workstation that cost 40,000 Marks (about $30k at the time); a single CPU alone cost 5k. Prices have come down quite a bit since then with the move to Intel, but seeing as the iMac Pro scales up to $14k, I wouldn't be surprised if you could configure a Mac Pro for $20k. Is that expensive for a workstation? It depends. And if you shell out even $10k, is it unreasonable to expect to be able to configure two CPUs? I don't think so.
Originally Posted by P View Post
Also, why not add 4 sockets in that case? The relevant Xeons support that now. What is the limit?
Most workstation lines top out at two CPUs; I think that is the accepted optimum. And I can see real use cases where you would want a two-socket system, e.g. for workloads that benefit from faster single-core performance: two lower-core-count CPUs at higher clocks instead of one CPU with many slower cores.
Originally Posted by P View Post
Sure, but Epyc has 128 because it is 4 dies in one package, and each die has 32 lanes. I don’t think any customers asked for 128 lanes. 32 lanes makes sense for an octo-core high-end desktop chip (especially as the chipset has to use some of them), and everything else flows from that. Could they have got away with 24 per die? Maybe, but if you’re building for the future, it makes sense to have a few more.
I think the extra lanes give you extra flexibility. Granted, 128 lanes is a lot, but with 128 lanes you don't have to play the game of sharing lanes, or work out whether, for certain use cases, the lack of total IO bandwidth becomes a problem.
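To illustrate that lane-sharing game, here is a toy budget tally. The device list and link widths are a hypothetical high-end configuration of my own, not anything Apple has announced:

```python
# Illustrative workstation PCIe budget. Device names and lane widths are
# hypothetical examples, not a real product configuration.
devices = {
    "GPU 1": 16,
    "GPU 2": 16,
    "NVMe SSD 1": 4,
    "NVMe SSD 2": 4,
    "Thunderbolt controller 1": 4,
    "Thunderbolt controller 2": 4,
    "10GbE NIC": 4,
}

total = sum(devices.values())
print(f"lanes wanted: {total}")  # lanes wanted: 52

for budget in (48, 128):
    if total <= budget:
        print(f"{budget} lanes: everything gets full width")
    else:
        print(f"{budget} lanes: over budget by {total - budget}; "
              "something must share lanes or drop to a narrower link")
```

With 48 lanes the example config is already over budget, so something has to share; with 128 lanes every device gets its full link width with room to spare.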
I don't suffer from insanity, I enjoy every minute of it.
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Reply With Quote
Apr 20, 2018, 12:33 PM
 
I’d be curious to see whether this ends up being an x86-64- or an ARM-based product.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Apr 20, 2018, 10:09 PM
 
Originally Posted by Brien View Post
I’d be curious to see whether this ends up being an x86-64- or an ARM-based product.
Since Apple has confirmed it plans to release the Mac Pro in 2019, I don't think designing a workstation-class CPU that quickly is in the cards. If they did opt to go that route, I think the Mac Pro would be the last product to make the switch. Also, I would expect Apple to start using its own ARM-based silicon in its data centers first, and that this would eventually leak.
I don't suffer from insanity, I enjoy every minute of it.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Reply With Quote
Apr 20, 2018, 10:27 PM
 
I see a catch-22 with trying to build a pro machine off a different architecture.

Let’s take the 1st gen Adobe suite on ARM as an example.

It’ll be gimped. No one will want to use it. Since no one wants to use it, Adobe won’t put enough resources into making it better, which means fewer people will want to use it.

Apple would have to subsidize it in some way.
     
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Reply With Quote
Apr 21, 2018, 06:58 AM
 
Originally Posted by subego View Post
I see a catch-22 with trying to build a pro machine off a different architecture.

Let’s take the 1st gen Adobe suite on ARM as an example.

It’ll be gimped. No one will want to use it. Since no one wants to use it, Adobe won’t put enough resources into making it better, which means fewer people will want to use it.

Apple would have to subsidize it in some way.
Why would a fully-fledged macOS running on a different hardware core result in gimped anything? It's still a Mac, not a different platform.

Adobe will drag their heels and be among the very last to actually recompile for the new platform, as they were in the Intel transition. They will try to slam Apple as hard as possible with negative PR, in an attempt to get Apple to either reconsider or convince their users to switch platforms. But as long as a significant portion of their income is generated on the Mac platform...
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Apr 21, 2018, 09:04 AM
 
Also this time there is hope that Adobe will make the switch faster: they have a subscription model, which means people will update more quickly. Moreover, they now use Apple’s development tool chain and have ported parts of their apps to iOS (= ARM). It’ll take them a while, but I think it’ll be quicker than the transition to x86. (Don’t forget that they were late to the Mac OS X party, too.)
I don't suffer from insanity, I enjoy every minute of it.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Reply With Quote
Apr 21, 2018, 12:33 PM
 
Originally Posted by Spheric Harlot View Post
Why would a fully-fledged macOS running on a different hardware core result in gimped anything? It's still a Mac, not a different platform.
It wouldn’t, and that’s not what I said.

Originally Posted by subego View Post
Let’s take the 1st gen Adobe suite on ARM as an example.

It’ll [the 1st gen Adobe suite on ARM] be gimped
IOW,

Originally Posted by Spheric Harlot View Post
Adobe will drag their heels
Precisely.
     
   