
Is my graphics card faster than my computer?
l008com
Addicted to MacNN
Join Date: Jan 2000
Location: Stoneham, MA, USA
Aug 5, 2001, 03:48 AM
 
I don't understand this. Go to http://www.apple.com/powermac and you'll see 11.8 Gigaflops for the now dual-800 G4. Wowee! Then you scroll down and you see that the GeForce video card can do 76 Gigaflops!!! That's almost 7 times faster than the computer itself. I don't understand this; why not have the graphics card do everything then? Why not have nVidia make your processors instead of Motorola??? There's gotta be something about this whole thing I don't know, right?
     
Nimisys
Banned
Join Date: Apr 2001
Location: San Diego, CA
Aug 5, 2001, 04:21 AM
 
Originally posted by l008com:
I don't understand this. Go to http://www.apple.com/powermac and you'll see 11.8 Gigaflops for the now dual-800 G4. Wowee! Then you scroll down and you see that the GeForce video card can do 76 Gigaflops!!! That's almost 7 times faster than the computer itself. I don't understand this; why not have the graphics card do everything then? Why not have nVidia make your processors instead of Motorola??? There's gotta be something about this whole thing I don't know, right?
The GF3 is hyper-tuned for just one thing: VIDEO. The CPU is tuned for anything... kinda like a semi and a sports car. The semi might have more horsepower, but which would be faster in a race? Now how about for pulling large weights? See my point? The video card has been optimized and hardware-accelerated for graphics and graphics only, where the CPU has not. Next, read the Ars Technica explanation of FLOPS benchmarking, because unless you feed the exact same code, bit for bit, to the machines and know EXACTLY how many operations are in it, the FLOPS rating can't be carried over between processors.
     
l008com  (op)
Addicted to MacNN
Join Date: Jan 2000
Location: Stoneham, MA, USA
Aug 5, 2001, 04:28 AM
 
Can I have a more technical explanation? Because I just don't get it. Bit in, bit out? Speed is speed, and to be 7 times faster, even if you have to modify each line of code to run on graphics hardware, it would still run faster. What makes graphics hardware so graphics-specific? Can't you use the OpenGL language to make processor-intensive parts of your application run on the graphics card? Like on GeForce cards, for example?
     
Cipher13
Registered User
Join Date: Apr 2000
Aug 5, 2001, 05:09 AM
 
The GPU in the GF3 is not capable of the same things a normal CPU can do.
It's extremely limited; its instruction set is so simple and basic that you couldn't get it to run apps on its own, like an operating system. Not very well, anyway.
It's also reliant on the CPU to feed it.
     
l008com  (op)
Addicted to MacNN
Join Date: Jan 2000
Location: Stoneham, MA, USA
Aug 5, 2001, 02:15 PM
 
But for a program that is completely mathematical, like the dnet client, wouldn't you still be able to have the graphics card 'do the math' a lot faster than the CPU?
     
Nimisys
Banned
Join Date: Apr 2001
Location: San Diego, CA
Aug 5, 2001, 03:59 PM
 
Yes and no... the graphics card's internals are designed to handle polygons and fill rate and such...

Internally, GPUs and CPUs aren't close at all... read Ars Technica for CPU internals, and look for the GF3 and Radeon early intros for theirs...

Also, most video cards run at 250-350 MHz, not the GHz levels of today's CPUs. Secondly, FLOPS work by knowing how many operations were fed in and comparing that with the time the machine took to spit them back out, so unless you feed the exact same code to the processors, their FLOPS figures are meaningless to compare. As such, the FLOPS the GPU scores are completely and utterly different from those of the CPU. They are two entirely different entities designed for two completely different jobs.
     
Atomic Beat Boy
Junior Member
Join Date: Jun 2001
Location: UK
Aug 5, 2001, 04:56 PM
 
The GF3 is hard-wired to do 3D graphics and nothing but 3D graphics. The 76 GFlop/s score is probably from some completely specialised part of the chip that handles a job which theoretically requires a lot of calculations but does it all in one go; it's incapable of anything else. The G4 is designed to do absolutely anything.

Trying to run Photoshop on a GF3 is like trying to watch TV on a lightbulb.
     
Sean7
Forum Regular
Join Date: May 2001
Location: South Wales, UK
Aug 5, 2001, 08:06 PM
 
Look at it this way: your graphics card is a turbo, your CPU is your engine.
osx.vr9.com
     
l008com  (op)
Addicted to MacNN
Join Date: Jan 2000
Location: Stoneham, MA, USA
Aug 5, 2001, 08:14 PM
 
Good example. It seems to me that anything that can do a computation can do a computation. I'm not talking about running Photoshop on your graphics card, I'm talking about running specific processor-intensive tasks on it. Like a dnet app that would run a core on each of my processors, and a third core on my graphics card.
     
<WizOSX>
Guest
Aug 5, 2001, 10:00 PM
 
A graphics card is designed to receive instructions and then use those instructions to draw on the screen. It is not designed to send instructions back to the CPU. The CPU does all the computations about where things should be placed on the screen, how they should move, etc.

An analogy, I suppose, would be that an engine in a car drives the wheels but the wheels do not send energy back to the engine. Cars like the Toyota Prius do, however, have the ability to send some energy back. Similarly, one could design a graphics card with a processor that could do calculations when a particular program has very limited graphics needs, but then that graphics card would be much more expensive and not as fast at doing graphics.
     
l008com  (op)
Addicted to MacNN
Join Date: Jan 2000
Location: Stoneham, MA, USA
Aug 5, 2001, 10:04 PM
 
So a graphics card can't send any data back to the computer?
     
Cipher13
Registered User
Join Date: Apr 2000
Aug 6, 2001, 12:53 AM
 
Originally posted by l008com:
So a graphics card can't send any data back to the computer?
It doesn't need to.
It does its stuff, and puts the data onscreen.

As for your dnetc question, you could design a processor specifically for dnetc alone, and it would probably fly...
But, it wouldn't be able to do anything else.

This is like comparing RAM and hard drives...
They're just different.
Forget the fact that they both process stuff; RAM and HDs both store stuff... and for that matter, so does a pen and pad...
     
l008com  (op)
Addicted to MacNN
Join Date: Jan 2000
Location: Stoneham, MA, USA
Aug 6, 2001, 12:59 AM
 
Yes, and even though RAM and hard drives are meant for different things, you can still use RAM as a hard drive if you want to, and you can use a hard drive as RAM.
     
Nimisys
Banned
Join Date: Apr 2001
Location: San Diego, CA
Aug 6, 2001, 03:57 AM
 
Originally posted by l008com:
Yes, and even though RAM and hard drives are meant for different things, you can still use RAM as a hard drive if you want to, and you can use a hard drive as RAM.
Because they're both input/output... the video card is input-only: whatever the computer puts into it comes back out on the monitor, not into the system. The same can't be said for RAM/HDD.
     
<sdgdsgdsdsg>
Guest
Aug 6, 2001, 04:20 AM
 
Hmm...I'm not sure if this is 100% correct, but it sounds plausible.

First, what is a FLOPS? It stands for floating-point operations per second.

Say the GeForce does a texture mapping task that needs 76 billion floating point operations, and has the hardware that can do that task in one second. That's 76 GigaFLOPS, right?

So the G4 itself, doing that same task, would take 6.5 times as long, because it can only do 11.8 billion floating point operations per second.

BUT -- The GeForce can ONLY do that texture mapping task at 76 billion ops/sec. Can it do an FFT at that speed? NO. Heck, it might not even be able to do an FFT at all -- because it can't do 76 billion ARBITRARY operations per second.

HOWEVER -- The G4 can do LOTS of things at 11.8 billion ops/sec. (I think that number relies on using AltiVec, but AltiVec is still a heck of a lot more flexible than a GeForce.)
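To put rough numbers on that reasoning, here's a back-of-the-envelope sketch in C. (The figures are just the peak numbers quoted in this thread, and the 76-billion-operation "job" is hypothetical; this isn't taken from any spec sheet.)

#include <stdio.h>

int main(void)
{
    /* Peak figures quoted in this thread */
    const double g4_gflops      = 11.8;  /* dual 800 G4, AltiVec peak     */
    const double geforce_gflops = 76.0;  /* GeForce3, fixed-function peak */

    /* A hypothetical rendering job needing 76 billion float operations */
    const double job_gigaflop = 76.0;

    printf("GeForce3: %.2f s\n", job_gigaflop / geforce_gflops);  /* 1.00 s */
    printf("G4:       %.2f s\n", job_gigaflop / g4_gflops);       /* 6.44 s */
    printf("ratio:    %.1fx\n",  geforce_gflops / g4_gflops);     /* 6.4x   */
    return 0;
}

And that 6.4x only holds for the one kind of job the GeForce is wired to do.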

I think that's generally correct, but I could have a detail or two wrong.

Alex
     
GravitationX
Fresh-Faced Recruit
Join Date: Jan 2001
Location: Manassas, VA, USA
Aug 6, 2001, 09:33 AM
 
I assume by FFT you mean fast Fourier transform, and not Final Fantasy Tactics. Yeah, the graphics card of a computer is unable to do calculations unrelated to graphics. This is why it is so much faster: all its pathways are hardwired in. And that's correct; for the most part it cannot send processed data back to the CPU (except as processed images).
Kittens, cats, sacks, and wives,
How many were going to Saint Ives?
     
rogerkylin
Dedicated MacNNer
Join Date: Apr 2001
Location: Columbia, MD
Aug 6, 2001, 09:52 AM
 
I suppose another bad analogy for the graphics card's specialization is the game console.

In terms of performance and graphics, the specialization of the chips in a game console provides an excellent gaming experience for 10% of the cost of an entire computer, but it cannot handle the full array of applications available to a computer.
     
l008com  (op)
Addicted to MacNN
Join Date: Jan 2000
Location: Stoneham, MA, USA
Aug 6, 2001, 06:32 PM
 
Originally posted by GravitationX:
Yeah, the graphics card of a computer is unable to do calculations unrelated to graphics.
Can you explain this to me? Why not, what prevents it? Technical info appreciated.
     
Atomic Beat Boy
Junior Member
Join Date: Jun 2001
Location: UK
Aug 6, 2001, 07:26 PM
 
Alright then, quoting from William Stallings's highly technical book, "Computer Organization and Architecture":

"[When designing computer chips,] there is a small set of basic logic components that can be combined in various ways to store binary data and to perform arithmetic and logical operations on that data. If there is a particular computation to be performed (eg processing graphics), a configuration of logic components specifically designed for that purpose can be constructed. We can the process of connecting together the various components in the desired configuration as a form of programming. The resulting "program" is in the form of hardware and is termed a hardwired program.

"... But now consider the alternative. Suppose we construct a general purpose configuration of arithmetic and logic functions. This set of hardware will perform a variety of functions on data depending on control signals applied to the hardware. In the original case (the GeForce3), the system accepts data (graphics commands) and produces results (the screen image). With general purpose hardware (the G4), the system accepts data and control signals (any program) and produces results (anything). Thus instead of rewiring the hardware for each new program, the programmer merely needs to supply a new set of control signals."

To massively oversimplify the above: to use the GeForce3 for anything but graphics, you would need to completely rewire it. The G4, on the other hand, contains lots of additional hardware that allows it to be "rewired" by software to do anything (in fact, a large proportion of the chip is this additional hardware). The reason the GeForce appears so much faster is that it has very little control hardware, so it can be made up almost entirely of functional (arithmetic and logic) hardware, more of it than the G4 has.

As a side note, the idea behind RISC processors (like the G4) is to reduce the amount of control hardware so that much more of the chip can be devoted to actually running the program and processing data.
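As a toy illustration of the hardwired-versus-general-purpose distinction Stallings is drawing, here's a small sketch in C (just an analogy; it has nothing to do with the actual GF3 or G4 internals):

#include <stdio.h>

/* "Hardwired" unit: the computation is fixed when the hardware is
 * built. It does exactly one thing, with no decisions to make. */
static double hardwired_scale_add(double a, double b)
{
    return 2.0 * a + b;                  /* the wiring IS the program */
}

/* "General-purpose" unit: the same data paths, but a control signal
 * (the opcode) selects what happens. New behaviour means feeding in
 * new control signals, not rewiring anything. */
enum op { OP_ADD, OP_MUL, OP_SCALE_ADD };

static double general_purpose(enum op control, double a, double b)
{
    switch (control) {                   /* the control hardware lives here */
    case OP_ADD:       return a + b;
    case OP_MUL:       return a * b;
    case OP_SCALE_ADD: return 2.0 * a + b;
    }
    return 0.0;
}

int main(void)
{
    printf("%f\n", hardwired_scale_add(3.0, 4.0));            /* 10.000000 */
    printf("%f\n", general_purpose(OP_SCALE_ADD, 3.0, 4.0));  /* 10.000000 */
    return 0;
}

The switch is the price of flexibility: the general version spends transistors (and time) deciding what to do before doing it, which is roughly why the fully general chip posts the smaller peak number.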

I hope this answers your question.

(edit: fixed dumb typo)

[ 08-06-2001: Message edited by: Atomic Beat Boy ]
     
l008com  (op)
Addicted to MacNN
Join Date: Jan 2000
Location: Stoneham, MA, USA
Aug 6, 2001, 07:30 PM
 
Not really. That wasn't very technical; all it said was that graphics cards are "hardwired" to specifically do graphics tasks. Pretty much the same thing everyone has been saying.
     
Randycat2001
Forum Regular
Join Date: Jan 2001
Location: Victorville, CA
Aug 6, 2001, 08:02 PM
 
My understanding is that the graphics processing unit is a computer unit in and of itself. Instead of having a RAM area where a program can be stored to tell it what to do, the "program" is permanently stored in silicon (like the game on an N64 cartridge). So this means the graphics processing unit can only do the one thing it was designed to do; it cannot be changed or altered because there is simply nowhere to store a program for it to read. Having said that, there is nothing stopping a 3D card designer from implementing a volatile RAM area where you could put your own program, along with a high-speed, two-way bus so you can get your data back out of the unit. As it stands now, there is no reason for them to bother, because virtually no one would use their card like that. Even if they did, the graphics processing unit would still be limited to the kind of 3D calculations it is accustomed to doing. The only practical way to do this is to design the graphics unit from the ground up to do a wider range of operations, along with the two-way bus to communicate over and the resident RAM to hold this user-defined program (which brings up the additional difficulty of making that resident RAM fast enough that the GPU doesn't end up waiting for the next instruction in the program, which would effectively kill the throughput benefits; that's why the "program" is frozen in silicon and logic circuits in the first place: to allow the GPU to do one task really, really fast).

The current generation of cards is designed only to accept data (the data bus protocol is intentionally one-way, and that way happens to be "in"), process that data in a specific manner, and then send it to your screen. Ironically, the new generation of video cards is adopting greater programmability (though still nowhere near the level of a general processor). This is seen in the higher features of nVidia's GF3 and the GS in the current Sony PS2. The programmer is allowed to define their own miniature programs or algorithms that run within the GPU itself. This can only be applied to inbound video data on its way to your screen, AFAIK. There is still no way to stream data back out of the unit, and the program is still subject to the limited range of operations the GPU offers. As far as that goes, however, the range of unique video effects is limited only by the creativity of the programmer and the ability to put it into code the GPU can understand. That's the exciting thing about the latest round of video cards: virtually endless new tricks for as long as programmers are willing to support such cards.

Well that's my take on it...

Perhaps there is a future for fast yet programmable hardware-assist modules within a desktop computer (in essence, a co-processor [raises pinky to lower lip like Dr. Evil]). Essentially, this is what we are waiting for when it comes to the OS X Aqua graphics, which need to be offloaded from the CPU and accelerated by many orders of magnitude, but are not supported by current video card architectures. Of course, that is just another high-performance part which must be integrated and costed into the computer product, and the question is why not just have the CPU do it if such operations can be integrated as little programs in the OS? So we end up where we are right now. We'll see if this really takes hold in the future though... (Anybody remember long ago when we had simple CPUs, then the hot thing was to add a math co-processor to offload certain calculation-intensive chores, then the co-processor was integrated back into the CPU as a sub-unit, and now we have CPUs with integrated SIMD units to, yet again, speed up certain calculation-intensive chores, and now we are talking about another outboard processing unit based on the speedy video-processing architecture? Chances are that will be integrated back into the CPU core one day as well. It's a roundabout cycle that never ends as progress marches on.)
What's the deal with Star Wars severed limbs?
     
kaboom
Forum Regular
Join Date: Jan 2001
Aug 6, 2001, 08:14 PM
 
I'm beginning to believe that you know exactly what we're talking about and are just screwing around with everyone now.

Ok, my turn...
Let me see if I can explain it as I understand it from other posts here.
Let's say that you and another person both have genius IQs (insert obvious joke here).
Now let's say that this other person has studied ONE field and ONE field only. We'll say, biology. This person has eaten, slept and dreamed biology for his whole life. It's all he knows. Literally.
You, on the other hand, have studied various things throughout your career. Physics, chemistry, computer science, astronomy and so on.
Both of you have extremely high IQs, but the other person knows biology inside and out and can whip out biology facts faster than any other person on the planet. You may know some biology, but you also have to retain information on chemistry, physics, etc. He's dedicated, you're not. But you're both really smart.

The graphics processor on your card only has x number of instructions to deal with. No more. It is dedicated to doing those processes and those processes only. It doesn't have to know about any other instructions. Let's pick an arbitrary number. We'll say that the GPU only knows 10 instructions. That's all it needs to know. Well, a CPU needs to know 1000 instructions. Sure, it's all math, but when you're dedicated to doing only 10 different equations, you'll be faster than a chip that needs to do 1000 different equations.

I can only pray that I have been helpful in this endeavor.
     
jonahfish7
Fresh-Faced Recruit
Join Date: Mar 2001
Aug 6, 2001, 08:37 PM
 
Well, how about this question?

MacOS X uses Quartz to run the basic 2D graphics of the computer. Quartz is based on OpenGL. The GeForce3 speeds up OpenGL graphics.

So, with that lead in, wouldn't having a GeForce3 speed up the MacOS X operating system because the graphics for the operating system would be sped up by the GeForce 3?
     
SS3 GokouX
Dedicated MacNNer
Join Date: Jul 2001
Location: The Land of More :(
Aug 6, 2001, 08:53 PM
 
No, Quartz is not based on OpenGL, it's based on PDF. Quartz and PDF are 2D graphics, OpenGL is 3D graphics.

"And I will rule you all with an iron fist! You! OBEY THE FIST!" -Invader Zim
     
robotic
Forum Regular
Join Date: Jan 2001
Location: San Luis Obispo, California, USA
Aug 6, 2001, 08:56 PM
 
"MacOS X uses Quartz to run the basic 2D graphics of the computer. Quartz is based on OpenGL. The GeForce3 speeds up OpenGL graphics."
Where did you get the idea that Quartz is based on OpenGL? I've never heard that.

-robotic
     
l008com  (op)
Addicted to MacNN
Join Date: Jan 2000
Location: Stoneham, MA, USA
Aug 6, 2001, 09:18 PM
 
You guys are all looking at the task from a very different perspective than me. The one-way bus is a brick wall, that I understand. But assuming the graphics card could communicate back to the computer... here's how I see it...
My graphics card has a very fast processor on it. It also has 32 MB of RAM on it. These great things sit on a fast PCI bus (AGP actually, but who's counting) connecting everything together. Now, these are certainly designed to do graphics. They even have their own language, OpenGL. It just seems to me that with all these great pieces, it's impossible not to be able to link them together in a way slightly different than was intended. I'm not really talking about running OS X on my graphics card, I'm talking about programs that use little or no graphics power using the graphics card in addition to the main CPU to speed up whatever is being done. So for instance, as I said before, you could run a dnet core on each G4, and write another core 'creatively' using OpenGL, sort of transforming the data that needs to be computed into graphics-like chunks. When it all comes down to it, if you break it down enough, all processors do the same thing.
dnet FAQ
     
<sdgdsgdsgdsg>
Guest
Aug 6, 2001, 11:42 PM
 
From the dnet FAQ which you reference:

"It is doubtful that any 3D Accelerator cards would even contain any functionality that could be utilized in a meaningful way for accurate mathematical computation. Presumably most all of the operations provided by video cards are designed for eventual display rendering, and not general purpose math output."

I think that about sums it up.

All processors are NOT created equal. This is similar to the whole Pentium vs. G4 issue, only taken to the extreme. Different processors are designed differently, and this impacts their performance on different tasks.

The end of that FAQ does contain a fun mind game -- yes, if you sit there and figure it all out, you might be able to get your GeForce to calculate a couple of RC5-64 blocks. But it would be EXTREMELY inefficient, to the point of being a worthless exercise, because it's not designed to do that.

Conversely, there are government projects to break encryption that use custom-designed chips that are EXTREMELY fast at the factor-a-big-number task, but would make crappy video cards.

Alex
     
blakespot
Dedicated MacNNer
Join Date: Jul 2001
Location: Alexandria, VA
Aug 7, 2001, 09:21 AM
 
Can you imagine a Beowulf cluster of GeForce3 GPUs??? Man...


:-)


blakespot
iPodHacks.com -- http://www.ipodhacks.com
     
<Steve S>
Guest
Aug 7, 2001, 02:52 PM
 
The problem here is that someone is taking a number off of a spec sheet from a graphics card and trying to apply it to another use. I'm not even going to attempt another analogy at this point.

My advice to the original poster is to try to program the video card for some other task and see how far you get. After you do your research, you'll (hopefully quickly) realize that you don't have a sufficient instruction set to complete your task.

It's important to understand how video cards work. A program is written; for this example, say it's a game. At some point, the game spins through its logic, manipulates the data of the scene you are viewing, then proceeds to visualize that data as graphics you can view. This is usually accomplished through an API such as OpenGL. OpenGL (still on the CPU for the moment) receives a specific instruction to draw a graphic effect on the screen. The API works in conjunction with the OpenGL drivers for the video card. If the specific graphic effect you are trying to use in OpenGL happens to exist on the video card (hard-wired), then the instruction is handed off to the video card to finish the operation. Otherwise, it is performed in software by the main CPU. This of course is an oversimplification of what happens, but you get the picture.
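To make that handoff concrete, here's a minimal CPU-side sketch in C. (GLUT is assumed here purely for window and context setup, and the triangle is made up for illustration; the point is only that the program issues drawing commands through the API and driver.)

/* cc handoff.c -lglut -lGLU -lGL   (typical Unix-style link line) */
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);      /* CPU asks the driver to clear the frame   */

    glBegin(GL_TRIANGLES);             /* drawing commands go to the OpenGL driver */
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
    glEnd();                           /* the driver hands off whatever the card
                                          can do in hardware; the rest falls back
                                          to the CPU                               */

    glFlush();                         /* push the command stream to the hardware  */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("handoff sketch");
    glutDisplayFunc(display);          /* GLUT calls display() when drawing is needed */
    glutMainLoop();
    return 0;
}

Notice there is no call here that hands back the numeric result of a computation; the only output of the drawing commands is the picture.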

To answer your question: yes, the video card is capable of sending data back to the CPU. Most often, with low-memory video cards, textures are swapped back and forth between the video card and main memory, for example. Also, status results of functions performed are returned to the CPU, etc. However, that type of communication is predefined and limited in scope.

An interesting trend is starting with cards like the GeForce 3, though. This generation of video cards is semi-programmable. For example, the GeForce 3 allows the programmer to use customized shaders on the video card. As video cards become more flexible, maybe entire subroutines, as you suggested, will be able to run on the video card. For the time being, the available instruction set is way too limited to create a program (even a small one) to run on it.

Steve
     
Whisper
Junior Member
Join Date: May 2001
Location: Walnut Creek, CA
Aug 7, 2001, 03:50 PM
 
Ok, my turn!

Basically, graphics CPUs only understand the "draw me a pretty picture" command, whereas something like the G4 understands a couple hundred commands. The one command that graphics CPUs have is next to useless for anything other than drawing pretty pictures. It's not that they can't add numbers or anything, but there's no way to get the results of just an add or just a multiply or just a whatever. They can only give you the pretty picture at the end. Make sense?

Now if you're clever (it's near the bottom), you can represent data in such a way that drawing a picture out of non-picture parts will speed some things up, but the only trick I know of has to do with 3D rendering anyway. I suppose it's theoretically possible to do tricks like that with non-graphics applications, but I really doubt you'll be solving differential equations or multi-variable calculus problems on your GF3 any time soon.
-Whisper
     
pjkim
Junior Member
Join Date: Oct 2000
Location: Dallas
Aug 7, 2001, 10:02 PM
 
This thread is inane. Yes, the newer graphics cards (and Playstation 2s) are faster at raw hardwired FLOPs than most CPUs.

Yes, a Colt 45 is faster and gets better gas mileage than any car. Do you want to sit on a Colt and ride it to work?

Can you do ANY meaningful work if you rip your graphics card out of your computer and hook it up to just your monitor, keyboard and power supply?

Let this thread die.
     
SkiBikeSki
Grizzled Veteran
Join Date: Sep 2000
Location: Florida
Aug 7, 2001, 11:41 PM
 
Let's compare a CPU and a GPU, no names or brands, simply generic.

Can it do...

... graphics? Yes and yes.
... memory access? Yes and yes.
... draw on screen? Yes and yes.
... access storage? Yes and no!
... receive input from input devices? Yes and no!
... communicate over networks or a modem? Yes and no!
... drive other devices? Yes and no!
... use any versatile programming language? Yes and NO!

The GPU is far too limited to become a CPU! And if it did, it would lose its 76 GFLOPS! It's a fun dream to toy with, but just that, a dream. So be happy with the gigaflops in your G4, and use your GF3 for graphics, and only graphics.
-- SBS --
     
   
 