UrbanScan: 3D modeling of urban scenes

  • "Datasets for PPR"
  • "How StereoScan Works"
  • "PPR Reconstructions"

GPU Computing

Current desktop CPUs (central processing units) consist of a few cores (typically 4) optimized for sequential execution, while GPUs (graphics processing units) are massively parallel architectures with thousands of smaller processing units (also called cores) and very high memory bandwidth. These GPU architectures are becoming increasingly efficient at compute-intensive workloads, offering large speedups over CPUs. The main obstacle used to be the difficulty of developing parallel programs for GPUs; the CUDA programming interface and framework lowered this barrier by introducing a new parallel programming style based on extensions to the C language, making GPU computing accessible to a much broader community of programmers and enabling new application models.
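As a minimal sketch of this programming style (an illustrative vector-add example, not part of UrbanScan itself), a CUDA program marks GPU functions with `__global__` and the CPU launches them over a grid of parallel threads:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// GPU kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the sketch short; explicit cudaMalloc/cudaMemcpy
    // transfers are the more common pattern.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // The CPU launches the kernel; the GPU runs n threads in parallel.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // 1.0 + 2.0 = 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The triple-angle-bracket launch syntax is the C extension mentioned above: it tells the runtime how many thread blocks, and how many threads per block, to spawn on the GPU.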

(Figure: workload distribution between CPU and GPU)

GPU vs CPU

In this section, we present the most important technical characteristics of each architecture, which illustrate some of the reasons for using parallel processing on the GPU. A simple comparison is given below:

  • CPU (e.g. Core i7-4770K)
    • Small core count (4 cores running 2 threads each)
    • Executes instructions sequentially
    • Low memory bandwidth (~30 GB/s)
    • Higher clock frequency
    • Optimized for low-latency access to cached data sets
    • Orchestrates data transfers and the execution of parallel kernels on the GPU
  • GPU (e.g. GTX TITAN X)
    • Huge core count (3072 cores)
    • Executes multiple instructions in parallel
    • High memory bandwidth (~336.5 GB/s)
    • Lower clock frequency
    • Optimized for data-parallel, throughput computation
    • Architecture tolerant of memory latency
    • Runs the compute-intensive tasks

CPU Analogy: a small team of experts, each able to carry out a complex task very quickly, one after another.

GPU Analogy: a large crowd of workers, each slower individually, but completing a huge number of simple tasks at the same time.