MacNN Forums (http://forums.macnn.com/)
-   Mac Desktops (http://forums.macnn.com/mac-desktops/)
-   -   Mac Pro config (http://forums.macnn.com/65/mac-desktops/507564/mac-pro-config/)

 
I'mDaMac Jan 17, 2014 05:35 PM
Mac Pro config
So I'm looking at getting a Mac Pro within the next few months. I'm going to use it for Adobe CC, Final Cut Pro X, Aperture, and some light Modo and Lightwave work. I would love a fully spec'd machine, but at $9K that isn't happening. I'm planning to start with the base system, since I'm going to replace the RAM with third-party memory and max it out. I'm also going with 500GB of flash storage and an external Thunderbolt RAID from OWC. Now, I'm debating whether to max out the GPUs with the D700s or go with the D500s and the 6-core upgrade for about the same price (+/- $100). I'm thinking about future-proofing: if I max out the GPUs, I can always upgrade the processor down the road, since that is replaceable. The GPUs are custom-made for the Mac Pro and may not be replaceable.

Thoughts?
 
mduell Jan 18, 2014 05:50 PM
I'd go with the better CPU... not much is fully GPU accelerated today, just bits here and there.
 
cgc Jan 19, 2014 09:22 AM
Quote, Originally Posted by mduell (Post 4264690)
I'd go with the better CPU... not much is fully GPU accelerated today, just bits here and there.
But that's today; in a couple of years the pro apps may be much more GPU-accelerated... kind of like how everything used to be single-threaded and now quite a few CPU-intensive apps are multithreaded. I'd personally go with the upgraded GPUs.
 
OreoCookie Jan 19, 2014 09:56 AM
That's a tough one: It depends on how much you're relying on Final Cut Pro, because right now this is the only app where the GPUs make a night-and-day difference.

On the other hand, the D500 + 6-core variant is probably the most balanced configuration, and I'd probably go for that.
 
FireWire Jan 20, 2014 12:36 AM
How come multithreading isn't the norm? I mean, more than 15 years ago BeOS was using pervasive multithreading, and multiple-CPU machines were becoming more common... and now, with Apple's Grand Central Dispatch, developers have no excuse... That, and OpenCL, which makes it easy to tap GPU power for general-purpose computing...
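For what it's worth, the pattern those tools hand you really is simple on paper. Here's a rough sketch of the "fan out, then reduce" shape that GCD's dispatch_apply encourages, written in Python with concurrent.futures standing in for dispatch queues (same idea, different API):

```python
# A minimal sketch of the fan-out/fan-in pattern that GCD's
# dispatch_apply gives you, using Python's concurrent.futures
# as a stand-in for dispatch queues.
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    # Stand-in for a CPU-bound kernel applied to one slice of the data.
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(work, chunks))  # fan out across 4 workers

total = sum(partials)  # fan in (reduce)
print(total)
```

The hard part, as the replies below get into, isn't writing this; it's that most real workloads don't decompose into independent chunks this neatly.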
 
mduell Jan 20, 2014 01:29 AM
Quote, Originally Posted by cgc (Post 4264731)
But that's today; in a couple of years the pro apps may be much more GPU-accelerated... kind of like how everything used to be single-threaded and now quite a few CPU-intensive apps are multithreaded. I'd personally go with the upgraded GPUs.
Please spare us the same banter we heard 3-5 years ago. GPU acceleration is slowly improving; keyword: slowly.

Quote, Originally Posted by FireWire (Post 4264774)
How come multithreading isn't the norm? I mean, more than 15 years ago BeOS was using pervasive multithreading, and multiple-CPU machines were becoming more common... and now, with Apple's Grand Central Dispatch, developers have no excuse... That, and OpenCL, which makes it easy to tap GPU power for general-purpose computing...
The possibility and the tools haven't been the issue for decades... algorithms remain the big blocker (especially given the very high latencies of GPUs), along with hardware limits (not enough bandwidth or memory in the right places, exacerbated by those algorithms and latencies).
 
P Jan 20, 2014 03:29 AM
Quote, Originally Posted by FireWire (Post 4264774)
How come multi-threading isn't the norm?
If I give you ten copies of the same book, how much faster will you be able to read than if I only gave you one book?

It's a hard problem, and it is not necessarily solvable. There are workloads that are just as inherently single-threaded as the example above. We have moved from single threads to everyday apps that usefully utilize 4 threads, so it's not like we're standing still, but it's been hard. Trying to move to the thousands of threads that a GPU can support is going to be even harder.

(Sidenote: nVidia is currently on a spree to claim that its latest, as-yet-unreleased mobile GPU is a 192-core GPU, and that its bigger desktop chips have thousands of cores. That is BS. It supports 192 threads if you squint a bit, but those 192 threads have to be scheduled together, so it is still only one core. AMD is marginally more honest when it claims that its newest Kaveri CPUs have 12 "compute cores". That's correct in the sense that you can schedule 12 independent tasks, but those 12 cores are anything but equal: 4 are regular x86 cores that share 2 FPUs between them, and the other 8 are graphics CUs. By this math, a D300 has 20 cores and a D700 has 32, and each of those cores supports 64 threads - again with the squinting.)
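The squint-math is easy to check against the published stream-processor counts (1280, 1536, and 2048 per GPU for the D300, D500, and D700 - figures from AMD's specs rather than this thread), since each GCN compute unit is 64 lanes wide:

```python
# Each GCN compute unit ("core" in the squint-math) is 64 lanes wide,
# so compute units x 64 should reproduce the published shader counts.
LANES_PER_CU = 64
cus = {"D300": 20, "D500": 24, "D700": 32}  # compute units per GPU
shaders = {name: n * LANES_PER_CU for name, n in cus.items()}
print(shaders)  # {'D300': 1280, 'D500': 1536, 'D700': 2048}
```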
 
Tutor Jan 20, 2014 07:19 AM
Quote, Originally Posted by I'mDaMac (Post 4264629)
So I'm looking at getting a Mac Pro within the next few months. I'm going to use it for Adobe CC, Final Cut Pro X, Aperture, and some light Modo, Lightwave work. I would love a fully spec'd machine but at $9K that isn't happening. I'm planning to start off with the base system since I'm going to replace the RAM with third party and max it out. I'm also going with 500GB of flash storage and getting an external TB RAID from OWC. Now, I'm debating whether to max out the GPU's with the D700's or go with the D500's and the 6-core upgrade for about the same price (+/-$100). I'm thinking about future proofing and if I max out the GPU's I can always upgrade the proc down the road since that is replaceable. The GPU's are custom-made for the Mac Pro and may not be replaceable.

Thoughts?
Just a suggestion: in considering whether to maximize your compute performance via CPU(s), GPU(s), or both, first rank your applications in order of their importance to you. Then explore whether your most important applications benefit more from CPU or GPU compute performance. If it's CPU performance, try to pin down whether core speed or core count matters more - then you'll be able to select your CPU(s) more wisely. If it's GPU parallel-computing performance, try to pin down whether it's OpenCL (where AMD/ATI excels) or CUDA (which only Nvidia cards support, although they support OpenCL as well). It could be that your important software demands a wise selection of both CPU(s) and GPU(s). Making your final decision based on the needs of your important applications and your budget will help make your days happier.
 
cgc Jan 20, 2014 08:47 AM
Quote, Originally Posted by mduell (Post 4264777)
Please spare us the same banter we heard 3-5 years ago. GPU acceleration is slowly improving; keyword: slowly.
Why don't you spare us the condescending know-it-all attitude? I'd bet Apple wouldn't put dual GPUs in a machine and then decide not to optimize their "professional" applications to utilize them. THAT is why I predict much better support for off-loading tasks to the GPUs, and that's why I'd opt for the higher-end GPUs out of the gate.
 
OreoCookie Jan 20, 2014 10:01 AM
Quote, Originally Posted by FireWire (Post 4264774)
How come multithreading isn't the norm? I mean, more than 15 years ago BeOS was using pervasive multithreading, and multiple-CPU machines were becoming more common... and now, with Apple's Grand Central Dispatch, developers have no excuse... That, and OpenCL, which makes it easy to tap GPU power for general-purpose computing...
Multithreading is a hard problem and not every workload can be multithreaded. The same goes for utilizing the GPU for computational tasks. You really need to optimize your application by hand in most cases, and not every developer has the expertise to do so.

One way to improve the situation is for Apple to optimize its own APIs so that they take advantage of OpenCL and multiple cores. For instance, web rendering engines can and do utilize the GPU (that's the case on Windows Phone at least; I don't know whether, and how much, WebKit makes use of the GPU).

That's one reason why Apple focuses on dual-core CPUs in many of its products: most other ARM SoC vendors offer 4+ weaker cores, while Apple has stuck to 2 beefy cores. Some SoC vendors are coming around; e.g. nVidia's flagship K1 comes in a variant with 2 homegrown beefy cores (in addition to a 4+1-core Cortex A15 version). Ditto for Apple's notebooks: in most cases you don't utilize 4 cores well, and the die area is better spent on improved graphics rather than more CPU cores.

The question is even more complicated when it comes to the Mac Pro: for many people, the 4- and 6-core versions are faster than the 12-core version, because the 12-core version has lower turbo clocks. Ditto for which graphics option is best; there is simply no universal answer at this point.

However, it seems to me that Apple is getting serious about GPU acceleration, and if they can manage to offload, say, chunks of WebKit to the GPU, that could lead to significant performance improvements for the average user.
 
P Jan 20, 2014 10:13 AM
From what I can find, WebKit uses the GPU for 2D canvas rendering, video & animations (<video>, Flash & CSS), CSS transforms & transitions, and the entire compositing step.
 
I'mDaMac Jan 20, 2014 02:30 PM
Quote, Originally Posted by cgc (Post 4264731)
But that's today; in a couple of years the pro apps may be much more GPU-accelerated... kind of like how everything used to be single-threaded and now quite a few CPU-intensive apps are multithreaded. I'd personally go with the upgraded GPUs.
That's what I was leaning towards.

I understand that many programs today won't take advantage of the GPU acceleration, but I am thinking down the road and attempting to "future proof" a bit. I typically try to use Apple's pro applications over their third-party counterparts (FCPX over Premiere, Aperture over Lightroom, etc.), and Apple has been pretty good about updating their apps to showcase their hardware. I've already read an article showing that the processor is upgradeable, so I'm thinking down the road I can always pop in a new one, but the GPUs will be a question mark unless Apple releases an upgrade path, which is something they haven't shown much interest in doing in the past.
 
I'mDaMac Jan 20, 2014 02:43 PM
Quote, Originally Posted by Tutor (Post 4264786)
Just a suggestion: in considering whether to maximize your compute performance via CPU(s), GPU(s), or both, first rank your applications in order of their importance to you. Then explore whether your most important applications benefit more from CPU or GPU compute performance. If it's CPU performance, try to pin down whether core speed or core count matters more - then you'll be able to select your CPU(s) more wisely. If it's GPU parallel-computing performance, try to pin down whether it's OpenCL (where AMD/ATI excels) or CUDA (which only Nvidia cards support, although they support OpenCL as well). It could be that your important software demands a wise selection of both CPU(s) and GPU(s). Making your final decision based on the needs of your important applications and your budget will help make your days happier.
Thanks, solid advice.
 
MagiMac Jan 21, 2014 08:04 PM
Mac Pro config
You may want to consider maxing out your SSD first. This will likely give you the biggest performance boost overall. Drives connected via Thunderbolt are speed-capped by the (likely 7200 rpm) drives attached to them, so you want the biggest internal drive you can afford. Unless you do mainly Final Cut work, you may find that a maxed-out iMac offers better bang for the buck. That said, I received my Mac Pro last Monday (January 13th); it is a six-core machine with 32GB and a 1TB SSD. It may be overkill for my use (mainly Photoshop, InDesign, and Acrobat), but I love it.
 
SierraDragon Jan 21, 2014 09:19 PM
In the past, at least, Aperture was heavily GPU-dependent. The very best G5 tower would not run Aperture v1 well without a graphics card upgrade. We are at Aperture v3 now, and I would be surprised if the Aperture team had removed Aperture's ability to take advantage of GPU power.

Apple should answer questions like the OP's regarding usage at least of Apple's pro apps. Forcing us to speculate on forums is absurd.

-Allen
 
tehwoz Jan 22, 2014 09:32 AM
The issue is not really CPU vs GPU... nor even whether apps are written to take advantage of the GPU... because if they do take advantage of the GPU, you will get that advantage across ALL of the D300, D500 and D700 dual GPUs. The issue, then, is whether apps that are written to take advantage of the GPUs specifically require double-precision numerics. The D500 and D700 are about 4 times faster at double-precision numerics than the D300... but apparently the D300 is actually faster (it has a higher clock speed) than the D500 for bog-standard single-precision numerics.

Unfortunately... I am not aware of any apps that take advantage of, or require, double-precision numerics on a GPU at the moment. One would think that a program like Mathematica would be a natural contender, but I think it will be years before we see Wolfram rewrite Mathematica to automatically invoke the GPU for double-precision numerics rather than use the CPU. Such graphics cards would need to be far more standardised to justify going to all that trouble just for the video cards on one Mac, in my view.
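The "about 4 times" figure falls out of the chips' FP64 rates: the Tahiti-based D500/D700 run double precision at 1/4 of their single-precision rate, while the Pitcairn-based D300 runs at 1/16 (rate ratios from AMD's GCN specs, so treat this as back-of-envelope):

```python
# Back-of-envelope FP64 throughput from Apple's advertised FP32 peaks
# and the GCN FP64:FP32 rate ratios (1/4 for Tahiti, 1/16 for Pitcairn).
sp_tflops = {"D300": 2.0, "D500": 2.2, "D700": 3.5}    # per GPU, Apple specs
fp64_rate = {"D300": 1 / 16, "D500": 1 / 4, "D700": 1 / 4}

dp_tflops = {g: sp_tflops[g] * fp64_rate[g] for g in sp_tflops}
print({g: round(v, 3) for g, v in dp_tflops.items()})
# {'D300': 0.125, 'D500': 0.55, 'D700': 0.875} -> D500 is ~4.4x the D300
```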

So, I ordered the 6-core CPU, with the 1TB SSD, 32GB of RAM... and the D300.

I must admit to still twitching back and forth in my mind about changing to a D500 ... but I am certainly not going to make any changes now, lest they throw me to the back of the queue. :\
 
P Jan 22, 2014 05:33 PM
Quote, Originally Posted by tehwoz (Post 4265040)
The issue is not really CPU vs GPU... nor even whether apps are written to take advantage of the GPU... because if they do take advantage of the GPU, you will get that advantage across ALL of the D300, D500 and D700 dual GPUs. The issue, then, is whether apps that are written to take advantage of the GPUs specifically require double-precision numerics. The D500 and D700 are about 4 times faster at double-precision numerics than the D300... but apparently the D300 is actually faster (it has a higher clock speed) than the D500 for bog-standard single-precision numerics.
That's not what Apple reports. On the tech specs page, the D300 has 2 teraflops of single precision while the D500 has 2.2 teraflops. Anand gets slightly different numbers - they're almost identical - but the D300 doesn't win.
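Working backwards from those peak numbers and the shader counts (1280 ALUs for the D300, 1536 for the D500, per AMD's specs) actually reconciles the two claims: the D300 is clocked a bit higher, but the wider D500 still wins on total throughput:

```python
# Peak FP32 = ALUs x 2 ops/cycle (FMA) x clock, so the implied clock is
# peak / (2 x ALUs). FLOPS figures are Apple's; ALU counts are AMD's.
specs = {"D300": (2.0e12, 1280), "D500": (2.2e12, 1536)}

implied_mhz = {g: flops / (2 * alus) / 1e6 for g, (flops, alus) in specs.items()}
for gpu, mhz in implied_mhz.items():
    print(f"{gpu}: ~{mhz:.0f} MHz")
# D300: ~781 MHz, D500: ~716 MHz -- higher clock, but lower peak throughput
```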
 
P Jan 22, 2014 05:34 PM
Quote, Originally Posted by MagiMac (Post 4264996)
You may want to consider maxing out your SSD first. This will likely give you the biggest performance boost overall. Drives connected via Thunderbolt are speed capped by the (likely 7200 rpm) drives that are connected, so you want the biggest internal drive you can afford.
Nothing stopping you from connecting an SSD to Thunderbolt.
 