|
|
Mac Pro config
|
|
|
|
Senior User
Join Date: Jan 2001
Location: Southern CA
Status:
Offline
|
|
So I'm looking at getting a Mac Pro within the next few months. I'm going to use it for Adobe CC, Final Cut Pro X, Aperture, and some light Modo and Lightwave work. I would love a fully spec'd machine, but at $9K that isn't happening. I'm planning to start off with the base system, since I'm going to replace the RAM with third-party memory and max it out. I'm also going with 500GB of flash storage and getting an external Thunderbolt RAID from OWC. Now, I'm debating whether to max out the GPUs with the D700s or go with the D500s and the 6-core upgrade for about the same price (+/- $100). I'm thinking about future-proofing: if I max out the GPUs, I can always upgrade the processor down the road, since that is replaceable. The GPUs are custom-made for the Mac Pro and may not be replaceable.
Thoughts?
|
Who'sDaMac?
|
|
|
|
|
|
|
|
Posting Junkie
Join Date: Oct 2005
Location: Houston, TX
Status:
Offline
|
|
I'd go with the better CPU... not much is fully GPU accelerated today, just bits here and there.
|
|
|
|
|
|
|
|
|
Professional Poster
Join Date: Mar 2003
Location: Down by the river
Status:
Offline
|
|
Originally Posted by mduell
I'd go with the better CPU... not much is fully GPU accelerated today, just bits here and there.
But that's today, in a couple years the pro apps may be much more GPU accelerated...kind of like how everything was single-threaded and now quite a few CPU-intensive apps are multithreaded. I'd personally go with the upgraded GPUs.
|
"Like a midget at a urinal, I was going to have to stay on my toes." Frank Drebin, Naked Gun 33 1/3: The Final Insult
|
|
|
|
|
|
|
|
Moderator
Join Date: May 2001
Location: Hilbert space
Status:
Offline
|
|
That's a tough one: It depends on how much you're relying on Final Cut Pro, because right now this is the only app where the GPUs make a night-and-day difference.
On the other hand, the D500 + 6-core variant is probably the most balanced configuration, and I'd probably go for that.
|
I don't suffer from insanity, I enjoy every minute of it.
|
|
|
|
|
|
|
|
Mac Elite
Join Date: Oct 1999
Location: Montréal, Québec (Canada)
Status:
Offline
|
|
How come multi-threading isn't the norm? I mean more than 15 years ago BeOS was using pervasive multithreading, and multiple-CPU machines were becoming more frequent... and now with Apple's Grand Central Dispatch, developers have no excuses... That and OpenCL which makes it easy to tap the GPU power for regular computing...
|
|
|
|
|
|
|
|
|
Posting Junkie
Join Date: Oct 2005
Location: Houston, TX
Status:
Offline
|
|
Originally Posted by cgc
But that's today, in a couple years the pro apps may be much more GPU accelerated...kind of like how everything was single-threaded and now quite a few CPU-intensive apps are multithreaded. I'd personally go with the upgraded GPUs.
Please spare us the same banter we heard 3-5 years ago. GPU acceleration is improving slowly; keyword: slowly.
Originally Posted by FireWire
How come multi-threading isn't the norm? I mean more than 15 years ago BeOS was using pervasive multithreading, and multiple-CPU machines were becoming more frequent... and now with Apple's Grand Central Dispatch, developers have no excuses... That and OpenCL which makes it easy to tap the GPU power for regular computing...
The possibility and the tools haven't been the issues for decades... algorithms still remain a big blocker (especially with the very high latencies of the GPUs) as well as hardware limits (not enough bandwidth or memory in the right places, exacerbated by the algorithms and latencies).
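The overhead point can be felt even on a CPU. A toy sketch (Python, with a thread pool standing in for a GPU's far larger transfer latency; `tiny_task` is a made-up stand-in for a work item too small to be worth dispatching):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def tiny_task(x):
    # Far less work than the cost of handing it to another worker.
    return x * x

data = list(range(10_000))

# Plain serial loop.
t0 = time.perf_counter()
serial = [tiny_task(x) for x in data]
t_serial = time.perf_counter() - t0

# Same work dispatched through a 4-worker pool: identical results,
# but the per-item dispatch overhead typically swamps the work itself.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(tiny_task, data))
t_parallel = time.perf_counter() - t0

assert serial == parallel
print(f"serial {t_serial * 1e3:.1f} ms, threaded {t_parallel * 1e3:.1f} ms")
```

The algorithm has to hand off chunks big enough to amortize the dispatch cost, which is exactly the restructuring work that is hard.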
(Last edited by mduell; Jan 20, 2014 at 02:45 AM.)
|
|
|
|
|
|
|
|
|
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status:
Offline
|
|
Originally Posted by FireWire
How come multi-threading isn't the norm?
If I give you ten copies of the same book, how much faster will you be able to read than if I only gave you one book?
It's a hard problem, and it is not necessarily solvable. There are subsets that are just as single-threaded as the example above. We have moved from single threads to usable every-day apps utilizing 4 threads, so it's not like we're standing still, but it's been hard. Trying to move to the thousands of threads that a GPU can support is going to be even harder.
(Sidenote: nVidia is currently on a spree claiming that its latest, as-yet-unreleased, mobile GPU is a 192-core GPU, and that its bigger desktop chips have thousands of cores. That is BS. It supports 192 threads if you squint a bit, but those 192 threads have to be scheduled together, so it is still only one core. AMD is marginally more honest when it claims that its newest Kaveri CPUs have 12 "compute cores". That's correct in the sense that you can schedule 12 independent tasks, but those 12 cores are anything but equal: 4 are regular x86 cores that share 2 FPUs between them, and the other 8 are graphics CUs. Using this math, a D300 has 20 cores and a D700 has 32 cores, and each of those cores supports 64 threads - again with the squinting.)
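The reading-the-book analogy is just Amdahl's law in disguise. A quick sketch (Python, with a hypothetical workload that is 90% parallelizable) shows why even thousands of GPU threads don't help much once any serial part remains:

```python
# Amdahl's law: best-case speedup when only a fraction p of the work
# can be parallelized across n workers.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelized, 1000 threads top out
# near 10x -- the serial 10% dominates.
for n in (4, 12, 1000):
    print(f"{n:4d} workers -> {amdahl_speedup(0.9, n):.2f}x")
# 4 workers  -> 3.08x
# 12 workers -> 5.71x
# 1000 workers -> 9.91x
```

The 90% figure is illustrative; real pro-app pipelines vary wildly, which is why per-app benchmarks matter more than core counts.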
|
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
|
|
|
|
|
|
|
|
Fresh-Faced Recruit
Join Date: Jan 2014
Location: Heart of Dixie where eagles wage war and tides roll red to win BCS x4 in a row.
Status:
Offline
|
|
Originally Posted by I'mDaMac
So I'm looking at getting a Mac Pro within the next few months. I'm going to use it for Adobe CC, Final Cut Pro X, Aperture, and some light Modo and Lightwave work. I would love a fully spec'd machine, but at $9K that isn't happening. I'm planning to start off with the base system, since I'm going to replace the RAM with third-party memory and max it out. I'm also going with 500GB of flash storage and getting an external Thunderbolt RAID from OWC. Now, I'm debating whether to max out the GPUs with the D700s or go with the D500s and the 6-core upgrade for about the same price (+/- $100). I'm thinking about future-proofing: if I max out the GPUs, I can always upgrade the processor down the road, since that is replaceable. The GPUs are custom-made for the Mac Pro and may not be replaceable.
Thoughts?
Just a suggestion: in deciding whether to maximize your compute performance via CPU(s), GPU(s), or both, first rank your applications by how important they are to you. Then explore whether the most important application(s) benefit more from CPU or GPU compute performance. If it's CPU performance, try to pin down whether core speed or core count matters more - then you'll be better able to select your CPU(s) wisely. If it's GPU parallel-computing performance, try to pin down whether it's OpenCL (where AMD/ATI excels) or CUDA (Nvidia cards also support OpenCL, but they're the only ones that support CUDA). It could be that your important software demands a wise selection of both CPU(s) and GPU(s). Making your final decision(s) based on the needs of your important application(s) and your budget will help to make your days happier.
(Last edited by Tutor; Jan 20, 2014 at 08:35 AM. Reason: added: Making ... .)
|
15 self-built multi-OS CUDA rigs and 4 Mac Pros with a total GTX Titan RD Octane Rendering Equivalency of >55. AKA-TheRealTutor.
Benches: CB11.5-48.5; CB15-3,791; GB2-58,027; GB3-71,691; LuxMark/Sala-12,330.
|
|
|
|
|
|
|
|
Professional Poster
Join Date: Mar 2003
Location: Down by the river
Status:
Offline
|
|
Originally Posted by mduell
Please spare us the same banter we heard 3-5 years ago. GPU acceleration is improving slowly; keyword: slowly.
Why don't you spare us the condescending know-it-all attitude? I'd bet Apple wouldn't put dual GPUs in a machine and then decide not to optimize their "professional" applications to utilize them. THAT is why I predict much better support for off-loading tasks to the GPUs, and that's why I'd opt for the higher-end GPUs out of the gate.
|
"Like a midget at a urinal, I was going to have to stay on my toes." Frank Drebin, Naked Gun 33 1/3: The Final Insult
|
|
|
|
|
|
|
|
Moderator
Join Date: May 2001
Location: Hilbert space
Status:
Offline
|
|
Originally Posted by FireWire
How come multi-threading isn't the norm? I mean more than 15 years ago BeOS was using pervasive multithreading, and multiple-CPU machines were becoming more frequent... and now with Apple's Grand Central Dispatch, developers have no excuses... That and OpenCL which makes it easy to tap the GPU power for regular computing...
Multithreading is a hard problem and not every workload can be multithreaded. The same goes for utilizing the GPU for computational tasks. You really need to optimize your application by hand in most cases, and not every developer has the expertise to do so.
One way to improve the situation is for Apple to optimize its own APIs so that they take advantage of OpenCL and multiple cores. For instance, web rendering engines can and do utilize the GPU (that's the case on Windows Phone at least; I don't know whether, and how much, WebKit can make use of the GPU).
That's one reason why Apple focuses on dual-core CPUs in many of its products: most other ARM SoC vendors offer 4+ weaker cores while Apple stuck to 2 beefy cores. Some SoCs are coming around, e.g. nVidia's flagship K1 offers a variant with 2 beefy homegrown cores (in addition to the 4+1-core Cortex-A15 version). Ditto for Apple's notebooks: in most cases you don't utilize 4 cores well, and the die area is better spent on improved graphics rather than more CPU cores.
The question is even more complicated when it comes to the Mac Pro: for many people the 4- and 6-core versions are faster than the 12-core version, because the 12-core version has lower turbo clocks. Ditto for which graphics option is best; there is simply no universal answer at this point.
However, it seems to me that Apple is getting serious about GPU acceleration, and if they can manage to offload, say, chunks of WebKit to the GPU, that could lead to significant performance improvements for the average user.
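For the curious: the appeal of an API like GCD's dispatch_apply is that the developer states *what* can run in parallel and the runtime decides how to schedule it. A rough Python thread-pool analogue of that pattern (`process_frame` is a made-up stand-in for per-item work such as one image tile):

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(i):
    # Stand-in for independent per-item work (e.g. one image tile).
    return i * 2 + 1

# Like dispatch_apply: submit every index, let the pool pick worker counts.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(process_frame, range(8)))

print(results)  # [1, 3, 5, 7, 9, 11, 13, 15]
```

The hard part, as noted above, is that most real workloads don't decompose into such neatly independent items.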
|
I don't suffer from insanity, I enjoy every minute of it.
|
|
|
|
|
|
|
|
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status:
Offline
|
|
From what I can find, Webkit uses the GPU for 2D canvas rendering, video & animations (<video>, Flash & CSS), CSS transforms & transitions, and the entire compositing step.
|
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
|
|
|
|
|
|
|
|
Senior User
Join Date: Jan 2001
Location: Southern CA
Status:
Offline
|
|
Originally Posted by cgc
But that's today, in a couple years the pro apps may be much more GPU accelerated...kind of like how everything was single-threaded and now quite a few CPU-intensive apps are multithreaded. I'd personally go with the upgraded GPUs.
That's what I was leaning towards.
I understand that many programs today won't take advantage of GPU acceleration, but I am thinking down the road and attempting to "future-proof" a bit. I typically try to use Apple's pro applications over their third-party counterparts (FCPX over Premiere, Aperture over Lightroom, etc.), and Apple has been pretty good about updating its apps to showcase its hardware. I've already read an article showing that the processor is upgradeable, so I'm thinking down the road I can always pop in a new one, but the GPUs will be a question mark unless Apple releases an upgrade path, which it hasn't shown much interest in doing in the past.
|
Who'sDaMac?
|
|
|
|
|
|
|
|
Senior User
Join Date: Jan 2001
Location: Southern CA
Status:
Offline
|
|
Originally Posted by Tutor
Just a suggestion: in deciding whether to maximize your compute performance via CPU(s), GPU(s), or both, first rank your applications by how important they are to you. Then explore whether the most important application(s) benefit more from CPU or GPU compute performance. If it's CPU performance, try to pin down whether core speed or core count matters more - then you'll be better able to select your CPU(s) wisely. If it's GPU parallel-computing performance, try to pin down whether it's OpenCL (where AMD/ATI excels) or CUDA (Nvidia cards also support OpenCL, but they're the only ones that support CUDA). It could be that your important software demands a wise selection of both CPU(s) and GPU(s). Making your final decision(s) based on the needs of your important application(s) and your budget will help to make your days happier.
Thanks, solid advice.
|
Who'sDaMac?
|
|
|
|
|
|
|
|
Fresh-Faced Recruit
Join Date: Oct 2004
Location: Ottawa, Canada
Status:
Offline
|
|
You may want to consider maxing out your SSD first. This will likely give you the biggest performance boost overall. RAID enclosures connected via Thunderbolt are speed-capped by the (likely 7200 rpm) mechanical drives inside them, so you want the biggest internal drive you can afford. Unless you do mainly Final Cut work, you may find that a maxed-out iMac offers better bang for the buck. That said, I received my Mac Pro last Monday (January 13th); it is a six-core machine with 32GB and a 1TB SSD. It may be overkill for my use (mainly Photoshop, InDesign, and Acrobat), but I love it.
|
|
|
|
|
|
|
|
|
Mac Elite
Join Date: Mar 2004
Location: Truckee, CA
Status:
Offline
|
|
In the past at least, Aperture was heavily GPU-dependent: the very best G5 tower would not run Aperture v1 well without a graphics card upgrade. We are at Aperture v3 now, and I would be surprised if the Aperture team has removed its ability to take advantage of GPU power.
Apple should answer questions like the OP's regarding usage, at least for Apple's own pro apps. Forcing us to speculate on forums is absurd.
-Allen
|
|
|
|
|
|
|
|
|
Fresh-Faced Recruit
Join Date: Dec 2013
Status:
Offline
|
|
The issue is not really CPU vs GPU... nor even whether apps are written to take advantage of the GPU... because if they do take advantage of the GPU, you will get that advantage across ALL of the D300, D500 and D700 dual-GPU options. The issue, then, is whether apps that are written to take advantage of the extra GPU specifically require double-precision numerics. The D500 and D700 are about 4 times faster for double-precision numerics than the D300... but apparently the D300 is actually faster (it has a higher clock speed) than the D500 for standard single-precision numerics.
Unfortunately... I am not aware of any apps that take advantage of, or require, double-precision numerics on a GPU at the moment. One would think that a program like Mathematica would be a natural contender, but I think it will be years before we see Wolfram rewrite Mathematica to automatically invoke the GPU for double-precision numerics... rather than use the CPU. Such graphics cards would need to be far more standardised to justify going to all that trouble for some video cards on just one Mac, in my view.
So, I ordered the 6-core CPU, with the 1TB SSD, 32GB of RAM... and the D300.
I must admit to still twitching back and forth in my mind about changing to a D500... but I am certainly not going to make any changes now, lest they throw me to the back of the queue.
|
|
|
|
|
|
|
|
|
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status:
Offline
|
|
Originally Posted by tehwoz
The issue is not really CPU vs GPU... nor even whether apps are written to take advantage of the GPU... because if they do take advantage of the GPU, you will get that advantage across ALL of the D300, D500 and D700 dual-GPU options. The issue, then, is whether apps that are written to take advantage of the extra GPU specifically require double-precision numerics. The D500 and D700 are about 4 times faster for double-precision numerics than the D300... but apparently the D300 is actually faster (it has a higher clock speed) than the D500 for standard single-precision numerics.
That's not what Apple reports. On the tech specs page, the D300 has 2 TFLOPS single-precision while the D500 has 2.2 TFLOPS. AnandTech gets slightly different numbers - they're almost identical - but the D300 doesn't win.
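For reference, the peak figures fall out of stream-processor count × clock × 2 FLOPs per cycle (one fused multiply-add per SP). A quick sketch using the commonly reported D-series figures - treat the SP counts, clocks, and DP ratios below as approximate, not Apple-official:

```python
# Peak throughput = stream processors x clock (GHz) x 2 FLOPs (one FMA)
# per cycle. Figures are the commonly reported (approximate) specs for
# the FirePro D-series, per GPU.
gpus = {
    #        (stream processors, clock GHz, DP:SP rate ratio)
    "D300": (1280, 0.800, 1 / 16),
    "D500": (1526, 0.725, 1 / 4),
    "D700": (2048, 0.850, 1 / 4),
}

for name, (sps, ghz, dp_ratio) in gpus.items():
    sp_tflops = sps * ghz * 2 / 1000
    dp_tflops = sp_tflops * dp_ratio
    print(f"{name}: ~{sp_tflops:.2f} TFLOPS SP, ~{dp_tflops:.2f} TFLOPS DP")
```

This matches the thread's claims: the D300 and D500 are close in single precision (~2.05 vs ~2.21 TFLOPS), while the D500's 1/4-rate double precision beats the D300's 1/16 rate by roughly 4x.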
|
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
|
|
|
|
|
|
|
|
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status:
Offline
|
|
Originally Posted by MagiMac
You may want to consider maxing out your SSD first. This will likely give you the biggest performance boost overall. RAID enclosures connected via Thunderbolt are speed-capped by the (likely 7200 rpm) mechanical drives inside them, so you want the biggest internal drive you can afford.
Nothing stopping you from connecting an SSD to Thunderbolt.
|
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
|
|
|
|
|
|
|
|