
Thread: Using GPUs as CPUs... (and games)

  1. Top | #1
    Veteran Member excreationist's Avatar
    Join Date
    Aug 2000
    Location
    Australia
    Posts
    1,206
    Archived
    4,886
    Total Posts
    6,092
    Rep Power
    78

    Using GPUs as CPUs... (and games)

    The original Ultimate Epic Battle Simulator could involve tens of thousands of characters at reasonable frame rates. It used vertex animation rather than skeletal animation though. It relied a lot on the GPU. The sequel apparently uses bones and this is all handled on the GPU. It even has pretty good path-finding AI (maybe "A Star"). It can now involve more than a million characters at once....



    It uses the Unity engine, and here is a related tutorial that I'm working through about "compute shaders".... (though at the moment there is a CPU bottleneck)


    It says that using GPUs as CPUs can result in a performance improvement of 1000x..... that's quite a big jump!
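
    To give a feel for what that looks like, here is a rough sketch of the same idea in CUDA terms (the tutorial itself uses Unity's HLSL compute shaders; the Agent struct and the kernel names below are made up for illustration, not the tutorial's code). Each character's update runs as one GPU thread, which is what lets the GPU step huge crowds in parallel:

    Code:
    #include <cuda_runtime.h>

    // Hypothetical per-character state, not the game's or the tutorial's actual layout.
    struct Agent { float x, y, vx, vy; };

    // One GPU thread advances one character, roughly the same dispatch pattern
    // a Unity compute shader uses, just written as a CUDA kernel here.
    __global__ void stepAgents(Agent* agents, int count, float dt) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < count) {
            agents[i].x += agents[i].vx * dt;
            agents[i].y += agents[i].vy * dt;
        }
    }

    // Host side: launch enough threads to cover every agent.
    void stepAll(Agent* d_agents, int count, float dt) {
        int threads = 256;
        int blocks = (count + threads - 1) / threads;
        stepAgents<<<blocks, threads>>>(d_agents, count, dt);
    }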
    Last edited by excreationist; 04-29-2021 at 07:06 AM.

  2. Top | #2
    Contributor
    Join Date
    Nov 2017
    Location
    seattle
    Posts
    6,876
    Rep Power
    21
    To be pedantic, CPU is an old term; it referred to the old mainframes and minicomputers, where the central processing unit was made up of a lot of discrete digital components.

    Today it is a microprocessor, an integrated circuit that contains everything that was once done on multiple circuit boards. A GPU is a microprocessor with additions to simplify graphics processing.

  3. Top | #3
    Veteran Member excreationist's Avatar
    Join Date
    Aug 2000
    Location
    Australia
    Posts
    1,206
    Archived
    4,886
    Total Posts
    6,092
    Rep Power
    78
    Quote Originally Posted by steve_bank View Post
    To be pedantic, CPU is an old term; it referred to the old mainframes and minicomputers, where the central processing unit was made up of a lot of discrete digital components.

    Today it is a microprocessor, an integrated circuit that contains everything that was once done on multiple circuit boards. A GPU is a microprocessor with additions to simplify graphics processing.
    It seems like CPU is still used - e.g. for the main chip on a motherboard, like the Apple M1, etc. Though like you said, they do use the term microprocessor (e.g. the i7), still with some mention of CPU ("processor" is also often used).

    BTW GPUs can have thousands of cores....

  4. Top | #4
    Contributor
    Join Date
    Nov 2017
    Location
    seattle
    Posts
    6,876
    Rep Power
    21
    For a simulation of such magnitude you have to consider the energy requirement.

    You could derive a watts-per-megaFLOPS metric, FLOPS meaning floating-point operations per second, a computer speed metric.

    Server farms take a lot of power and generate heat. It is actually a real problem.
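
    As a trivial sketch of that metric (the function name and the figures in the comment are illustrative placeholders, not measurements of any particular chip):

    Code:
    // Watts per megaFLOPS: lower is better.
    // e.g. wattsPerMegaflops(39.0, 2.6e6) for a hypothetical 39 W part
    // sustaining 2.6 TFLOPS (= 2.6e6 MFLOPS).
    double wattsPerMegaflops(double watts, double megaflops) {
        return watts / megaflops;
    }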

  5. Top | #5
    Veteran Member excreationist's Avatar
    Join Date
    Aug 2000
    Location
    Australia
    Posts
    1,206
    Archived
    4,886
    Total Posts
    6,092
    Rep Power
    78
    Quote Originally Posted by steve_bank View Post
    For a simulation of such magnitude you have to consider the energy requirement.

    You could derive a watts-per-megaFLOPS metric, FLOPS meaning floating-point operations per second, a computer speed metric.

    Server farms take a lot of power and generate heat. It is actually a real problem.
    The Apple M1 chip is related:

    MAC MINI MODEL           Idle power (W)   Max power (W)
    2020, M1 (8 cores)              7               39
    2018, 6-core Core i7           20              122
    2014, 2-core Core i5            6               85

    The M1 uses 1/2 to 1/3 of the power.... and the new M1 iMac is 11.5mm thick because it doesn't need much cooling....

  6. Top | #6
    Administrator lpetrich's Avatar
    Join Date
    Jul 2000
    Location
    Eugene, OR
    Posts
    15,276
    Archived
    16,829
    Total Posts
    32,105
    Rep Power
    95
    To understand this issue better, let us look at Flynn's taxonomy

    Does the processor execute a single instruction stream (SI) or multiple ones (MI)?
    Does the processor work on a single data stream (SD) or multiple ones (MD)?
    This gives four permutations: SISD, SIMD, MISD, MIMD.


    The basic architecture of a CPU is SISD: single instruction stream, single data stream.

    MIMD systems - multiple instruction streams, multiple data streams - are generally implemented as multiple SISD systems that share main memory and I/O devices. Multiple CPU chips and multicore ones are all MIMD systems.

    SIMD systems - single instruction stream, multiple data streams - are common for speeding up calculations in problems where one has to do the same calculation on several different data items. Problems like computer-graphics rendering. One has to do the same rendering calculations on several different pixels at a time. That is why GPU's are usually SIMD systems.

    The main downside of SIMD is very limited flow of control. If one wants to do some if-then-else conditional calculation, one has to calculate both branches for all the data streams, then select which one according to some condition code calculated for each data stream. By contrast, a SISD CPU will calculate the condition code in advance, then use it to select which branch to calculate.
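
    A minimal CUDA sketch of that "compute both, then select" pattern (hypothetical kernel, just to illustrate how a short conditional is typically predicated on SIMD hardware):

    Code:
    __global__ void conditionalStep(const float* x, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            // Both branch results are computed for every lane; a per-lane
            // condition then selects which one is kept, rather than jumping
            // per data element the way a SISD CPU would.
            float whenPositive = sqrtf(x[i]);
            float whenNegative = -x[i];
            out[i] = (x[i] >= 0.0f) ? whenPositive : whenNegative;
        }
    }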


    MISD systems are rare, and they are mainly used as a way of achieving fault tolerance by having different CPU's doing the same calculations, like in the Space Shuttle guidance computers.

  7. Top | #7
    Administrator lpetrich's Avatar
    Join Date
    Jul 2000
    Location
    Eugene, OR
    Posts
    15,276
    Archived
    16,829
    Total Posts
    32,105
    Rep Power
    95
    Despite general-purpose CPU's all being SISD, many of the higher-performance CPU's over the last few decades have had SIMD capabilities alongside their SISD features. This was MMX ("MultiMedia eXtensions") in Intel-x86 chips in the mid-1990's, and later SSE ("Streaming SIMD Extensions") in those chips. Much like SSE are AMD's 3DNow!, PowerPC's AltiVec, ARM's Neon, etc.
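
    As a host-side (CPU) sketch of what those extensions look like in use, here is a hypothetical helper built on a few SSE intrinsics: the floats are added four lanes at a time while an ordinary SISD loop keeps the flow of control:

    Code:
    #include <immintrin.h>   // SSE intrinsics

    // Assumes n is a multiple of 4; unaligned loads and stores keep the example simple.
    void addFloats(const float* a, const float* b, float* out, int n) {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);              // load 4 floats from a
            __m128 vb = _mm_loadu_ps(b + i);              // load 4 floats from b
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));   // add lane by lane, store 4 results
        }
    }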

    Adding SIMD to SISD chips gives the processing performance of SIMD alongside the flow of control of SISD, so it's the best of both worlds.

    When SISD and SIMD systems sit side by side, the SISD ones have to be the masters, commanding the SIMD ones, because SIMD is poor at flow of control.
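
    A small CUDA sketch of that arrangement (hypothetical names): the host CPU keeps all the flow of control and simply hands each data-parallel pass to the device:

    Code:
    #include <cuda_runtime.h>

    __global__ void scale(float* v, float s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] *= s;   // the data-parallel (SIMD-style) work
    }

    void run(float* d_v, int n) {
        // Flow of control stays on the SISD host: it decides how many passes
        // to make and launches the SIMD-style kernel for each one.
        for (int pass = 0; pass < 3; ++pass) {
            scale<<<(n + 255) / 256, 256>>>(d_v, 2.0f, n);
        }
        cudaDeviceSynchronize();
    }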

  8. Top | #8
    Veteran Member excreationist's Avatar
    Join Date
    Aug 2000
    Location
    Australia
    Posts
    1,206
    Archived
    4,886
    Total Posts
    6,092
    Rep Power
    78
    This talks about SIMD and GPUs...

    https://www.quora.com/If-a-GPU-has-a...-only-12-cores

    A GPU does not have a thousand cores. It has a couple of dozen, which have a bunch of SIMD lanes. (Those are the “warps”: threads in a warp are really SIMD lanes in a single processor.) All this “massively parallel” stuff is just NVidia marketing.
    Oh, and how many cores does a CPU have? The Intel Xeon Phi “Knights Landing” has 68 cores....
    .....That’s why NVidia coined the term “SIMT” Single Instruction Multiple Thread: a GPU executes only a single (sub)program, but it spreads it over many threads. Like having just one core, but with an enormous SIMD width.
    Don’t get me wrong: I think GPUs are great and CUDA is a brilliant idea, but NVidia marketing-speak gets on my nerves.
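
    A tiny CUDA sketch of what the answer means by threads really being SIMD lanes (hypothetical kernel): each thread can read its position within its warp, and threads sharing a warp execute in lockstep:

    Code:
    __global__ void recordLane(int* lane, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            // warpSize is a CUDA built-in (32 on current NVIDIA GPUs);
            // this "lane" index is the SIMD lane the answer is describing.
            lane[i] = threadIdx.x % warpSize;
        }
    }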
