GPUs are highly scalable processor architectures with potentially thousands of cores and, compared to CPUs, memory bandwidth that is an order of magnitude higher.
On the other hand, they offer lower single-thread performance.
GPUs are therefore well suited to processing the scalable, data-parallel parts of a program, while the CPU handles the remaining sequential parts.
For example, the training and inference of Deep Neural Networks benefit greatly from GPUs.
For this purpose, our institute operates a CUDA cluster. The dominant programming language for Nvidia GPUs is CUDA.
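To illustrate this division of labour, the sketch below shows a minimal CUDA program (a standard SAXPY kernel, not code from one of our applications; all names are illustrative): the host code on the CPU handles the sequential part, namely allocation, data transfer and the kernel launch, while the GPU executes the data-parallel part with one thread per vector element.

  // Minimal sketch: standard SAXPY (y = a*x + y), not taken from our codebase.
  #include <cstdio>
  #include <vector>
  #include <cuda_runtime.h>

  // Data-parallel kernel: each GPU thread processes exactly one element.
  // This is the scalable part that maps well onto thousands of cores.
  __global__ void saxpy(int n, float a, const float* x, float* y) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) {
          y[i] = a * x[i] + y[i];
      }
  }

  int main() {
      const int n = 1 << 20;
      std::vector<float> x(n, 1.0f), y(n, 2.0f);

      // The host (CPU) does the sequential work: allocation and transfers.
      float *d_x, *d_y;
      cudaMalloc((void**)&d_x, n * sizeof(float));
      cudaMalloc((void**)&d_y, n * sizeof(float));
      cudaMemcpy(d_x, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
      cudaMemcpy(d_y, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);

      // Launch the scalable part on the GPU: one thread per element.
      int threads = 256;
      int blocks = (n + threads - 1) / threads;
      saxpy<<<blocks, threads>>>(n, 3.0f, d_x, d_y);

      // Copy the result back (implicitly waits for the kernel to finish).
      cudaMemcpy(y.data(), d_y, n * sizeof(float), cudaMemcpyDeviceToHost);
      printf("y[0] = %f\n", y[0]);  // expected: 3*1 + 2 = 5

      cudaFree(d_x);
      cudaFree(d_y);
      return 0;
  }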
The following is a selection of our institute's GPU applications:

  • SIFT feature extraction
  • Stereo matching via Semi-Global Matching
  • Pixelwise segmentation to enhance our online calibration