What is the Difference Between the Data and Functionality of a CPU and GPU?


A CPU contains a small number of "cores" (usually 1, 2, or 4). A quad-core CPU, for example, is essentially four CPUs on a single chip that cooperate in limited ways, such as sharing a cache of RAM.

With each core occupying roughly a quarter of the available silicon area (excluding the area consumed by the cache and some other bits and pieces), each CPU core can be extremely complex.

A GPU contains hundreds of cores (maybe 500), but there is a price to pay for that: each one gets only about 1/500th of the silicon area.


 

tl;dr

So the tl;dr answer is that GPU cores are very simple compared with CPU cores, which is why you can have so many more of them. For tasks that do not need a complex core and that can be split into many simple, identical pieces, programs can run 100 times faster on the GPU than on the CPU. But this speed comes at a price: what you can do is limited in some very specific ways.

 

Long Answer:

So, as you would expect, each GPU core is very simple compared with a CPU core. But some of the simplifications are not a problem for the device's intended use, which is to produce 3D computer graphics very fast.

An important distinction here is between so-called "SIMD" and "MIMD" machines (standing for "Single Instruction, Multiple Data" and "Multiple Instructions, Multiple Data").

In a MIMD machine (which is what CPU cores are), each processor can be running different instructions, even a completely different program, and working on different parts of the data in memory. They are individual, (more or less) complete computers.

In a SIMD machine (which is closer to what GPU cores are), all of the processors run the exact same sequence of instructions in perfect lock-step, but work on different pieces of data.

So, imagine you are drawing an image on the screen, and for each pixel in the image some calculation has to be done. There are a few million pixels, and each pixel has exactly the same math performed on it.

With a SIMD machine, we put each core to work on a different pixel, but they all perform exactly the same steps.

By eliminating the need for each core to individually fetch and decode the stream of machine-code instructions, we save a lot of silicon area. You can imagine having one copy of that circuitry that broadcasts the instructions to all 500 cores.
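
To make the "same steps, different pixel" idea concrete, here is a minimal CUDA sketch (the kernel name and the brightness math are invented for illustration, not taken from any particular program). Every thread executes exactly the same instruction stream; the only thing that differs is which pixel it works on:

    // Hypothetical example: scale the brightness of every pixel by the same factor.
    // Every thread runs the identical instruction sequence; only `i` differs.
    __global__ void scale_brightness(float *pixels, float factor, int num_pixels)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // which pixel this thread owns
        if (i < num_pixels)
            pixels[i] = pixels[i] * factor;             // same math, different data
    }

    // Launch enough blocks of 256 threads to cover every pixel:
    // scale_brightness<<<(num_pixels + 255) / 256, 256>>>(d_pixels, 1.2f, num_pixels);

The hardware only has to fetch and decode that instruction stream once per group; the cores simply apply it to their own data.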

In modern GPUs it's more like having small groups of cores working in SIMD fashion, where each group can run a different instruction sequence. So you are somewhere between SIMD and MIMD, but you still get the savings in silicon.

They also have simple instruction sets, and because they are never allowed to write to the same memory locations they read data from, the circuitry that caches data for them can be much simpler.

They also do not run an operating system, have no I/O circuitry, no memory management, no interrupts, no timers, no interprocess communication, no disk drives, no keyboard or mouse... well, there is a lot of stuff that just isn't there!

So if you have a calculation that does exactly the same thing to a lot of different data, then using the GPU instead of the CPU makes a lot of sense. But if you only need to do the calculation once (or even a dozen times), or if the calculation is super complicated, then the benefits of the GPU are limited and using the CPU makes more sense. Let's imagine some things that would make sense on the GPU:

  • 3D graphics, obviously, where scenes are made from triangles and every triangle inside a 3D object needs exactly the same math done to it.
  • 2D graphics, again, where every pixel of the image on the screen needs the same math.
  • Password cracking, where you want to try encrypting a lot of candidate passwords to see whether any of them produce the same encrypted result as the one stored on the target system.
  • Weather forecasting, where you want to run the same equations for each small chunk of the atmosphere, say within a 1,000 x 1,000 x 10 km box.
  • Neural networks and AI, where there are a bazillion simulated neurons, each of them doing the same thing but on different data and with different "weights" (sketched below).
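
As a rough sketch of that last item, "same code, different data and different weights" can look like this in CUDA (the kernel, array layout, and activation function are assumptions made for the example):

    // Hypothetical sketch: each thread computes the output of one simulated neuron.
    // All neurons run the same loop; only their weights and output slot differ.
    __global__ void neuron_layer(const float *inputs, const float *weights,
                                 float *outputs, int num_inputs, int num_neurons)
    {
        int n = blockIdx.x * blockDim.x + threadIdx.x;  // which neuron this thread is
        if (n >= num_neurons) return;

        float sum = 0.0f;
        for (int i = 0; i < num_inputs; ++i)
            sum += inputs[i] * weights[n * num_inputs + i];  // this neuron's own weights

        outputs[n] = tanhf(sum);  // the same activation applied by every thread
    }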

If you can use a GPU in your application, the speedup can be incredible. It is not uncommon to see a 100x or even 1000x speedup by using the GPU instead of the CPU. But most software simply cannot run on a GPU.

Learning to program a GPU is a strange experience. You'll find that the specialized programming languages (HLSL, Cg, and GLSL) are almost identical to each other and feel a lot like writing code in C, but the programs you write are rarely long: more than a few hundred lines is unusual, and many really useful ones are only ten or so lines.

Things that would make code more efficient on a CPU can make it less efficient on a GPU, because of the SIMD nature. For example, suppose some piece of math is not needed 99% of the time. On a CPU, it is often worth wrapping an "if" statement around it so you skip the math 99 times out of 100. But on a SIMD machine, if even one of the cores does need to do the math, then all of them have to step through that code (throwing away the results) while that one core does what needs to be done. So the "if" statement can actually slow things down rather than speed them up!
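
Here is a hedged CUDA illustration of that point (the kernel and the 99%/1% condition are made up for the example). On a CPU the "if" skips the work almost every time; on a GPU, if even one thread in a group (a warp) takes the expensive branch, the rest of the group effectively waits for it:

    // Hypothetical sketch of why an "if" may not save time on a GPU.
    __global__ void maybe_expensive(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        // Only ~1% of elements need the expensive path...
        if (data[i] > 0.99f)
        {
            // ...but if any thread in a warp lands here, the other threads of
            // that warp sit idle until it finishes, so little time is saved.
            for (int k = 0; k < 1000; ++k)
                data[i] = sinf(data[i]) + cosf(data[i]);
        }
        else
        {
            data[i] = data[i] * 2.0f;  // cheap path
        }
    }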

There are also profound implications for memory access and caching, and a lot of other subtle issues.

Another strange thing is that there is no permanent storage other than the end result of the program. So if you are drawing a picture and want to remember the result of a calculation so you can use it again a second from now, you may not be able to; you will need the CPU to help you in that case.

There is often a dance between the CPU and GPU, where you send data back and forth so that each can play to its strengths, and deciding which code to run where can turn out to be quite subtle, too.
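
A minimal host-side sketch of that dance, reusing the hypothetical scale_brightness kernel from earlier (the function and buffer names are placeholders; the copy and allocation calls are CUDA's standard runtime API): data goes to the GPU, the GPU does the parallel part, and the result comes back so the CPU can carry on.

    // Hypothetical sketch of the CPU/GPU "dance": copy over, compute, copy back.
    #include <cuda_runtime.h>

    void process_on_gpu(float *host_data, int n)
    {
        float *dev_data;
        size_t bytes = n * sizeof(float);

        cudaMalloc(&dev_data, bytes);                                    // GPU-side buffer
        cudaMemcpy(dev_data, host_data, bytes, cudaMemcpyHostToDevice);  // CPU -> GPU

        scale_brightness<<<(n + 255) / 256, 256>>>(dev_data, 1.2f, n);   // parallel work
        cudaDeviceSynchronize();                                         // wait for the GPU

        cudaMemcpy(host_data, dev_data, bytes, cudaMemcpyDeviceToHost);  // GPU -> CPU
        cudaFree(dev_data);
    }

Each of those copies costs time, which is part of why a calculation you only run once is often not worth moving to the GPU at all.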

 

FINALLY:

There was a research project in the '80s called "processor per pixel", which would have had perhaps a million very slow, simple cores: one for each pixel on the screen. It turned out not to be very practical, for reasons relating to how RAM chips are made, but the idea was a good one.