GPU Content on InfoQ
Presentations
Pitfalls of Unified Memory Models in GPUs
Joe Rowell explores the use of unified memory on modern GPUs, the low-level details of how unified memory is realized on an x86-64 system, and some of the tools available for understanding what is happening on a GPU.
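For readers new to the model the talk examines, here is a minimal CUDA sketch of unified memory, assuming the standard cudaMallocManaged API: one allocation is visible to both CPU and GPU, and the driver migrates pages on demand. The kernel name and sizes are illustrative, not taken from the talk.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative kernel: doubles each element of a managed array.
    __global__ void scale(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    int main() {
        const int n = 1 << 20;
        float *data = nullptr;

        // One allocation, visible from both host and device; pages migrate on demand.
        cudaMallocManaged(&data, n * sizeof(float));
        for (int i = 0; i < n; ++i) data[i] = 1.0f;   // first touched on the CPU

        scale<<<(n + 255) / 256, 256>>>(data, n);     // pages migrate to the GPU
        cudaDeviceSynchronize();                      // required before the CPU reads again

        printf("data[0] = %f\n", data[0]);            // back on the CPU
        cudaFree(data);
        return 0;
    }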
-
Fuelling the AI Revolution with Gaming
Alison Lowndes talks about the hardware and software that make up NVIDIA's GPU computing platform for AI, spanning PC to data center, cloud to edge, and training to inference.
-
Taming GPU Threads with F#
Daniel Egloff overviews Alea, an F# alternative to CUDA C/C++ and OpenCL C++, showing how to write GPU scripts and perform dynamic compilation in F#.
-
Machine Learning at Netflix Scale
Aish Fenton discusses Netflix's machine learning algorithms, including distributed neural networks on AWS GPUs, providing insight into offline experimentation and online A/B testing.
-
Extreme Speedups and GPGPU: A Tale of Two Practical Uses of Reified Trees
Olivier Chafik discusses how to make practical use of reified trees in Scala, with two applications: run-time (re)compilation for extreme speed, and conversion to another language (OpenCL).
-
Excel Coding Errors Are Destroying World Economies and F# (with Tsunami) Is Here to Stop Them!
Matthew Moloney discusses using F# and .NET inside Excel, demonstrating big data processing, cloud computing, GPGPU acceleration, and compiling F# Excel UDFs.
-
Accelerating the Web: How GPUs Make Browsers Fast
Jarred Nicholls explains how browsers leverage the GPU to speed up complex web pages by drawing primitives, compositing layers, and using tiled backing stores.
-
Integrating GPUs in Application Development - From Concept to Deployment
Graham Brooks discusses using GPUs in application development, explaining how GPUs can be used for general-purpose programming and how continuous integration can be applied.
-
GPUs in Finance
Andrew Sheppard overviews the driving forces behind the adoption of GPUs in the financial industry, and explains the use of the Monte Carlo technique on GPUs.
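As a hedged illustration of the Monte Carlo pattern the talk covers, the sketch below estimates pi with the cuRAND device API, one random stream per thread; a pricing kernel would follow the same structure. The seed, grid shape, and sample count are illustrative choices, not figures from the talk.

    #include <cstdio>
    #include <cuda_runtime.h>
    #include <curand_kernel.h>

    // Each thread draws `samples` random points and counts those inside the unit circle.
    __global__ void monte_carlo(unsigned long long seed, int samples, unsigned int *hits) {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        curandState state;
        curand_init(seed, tid, 0, &state);

        unsigned int local = 0;
        for (int i = 0; i < samples; ++i) {
            float x = curand_uniform(&state);
            float y = curand_uniform(&state);
            if (x * x + y * y <= 1.0f) ++local;
        }
        atomicAdd(hits, local);    // accumulate per-thread counts
    }

    int main() {
        const int threads = 256, blocks = 256, samples = 1000;
        unsigned int *hits;
        cudaMallocManaged(&hits, sizeof(unsigned int));
        *hits = 0;

        monte_carlo<<<blocks, threads>>>(1234ULL, samples, hits);
        cudaDeviceSynchronize();

        double total = (double)threads * blocks * samples;
        printf("pi ~ %f\n", 4.0 * (*hits) / total);
        cudaFree(hits);
        return 0;
    }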
-
Introduction to CUDA C
Cyril Zeller introduces NVIDIA CUDA development, showing how to write and execute C programs on the GPU, and how to manage GPU memory and communication between CPU and GPU.
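A compact sketch in the spirit of that introduction, assuming the canonical vector-add example rather than the talk's exact code: allocate device memory with cudaMalloc, copy inputs with cudaMemcpy, launch a kernel, and copy the result back to the CPU.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Classic introductory kernel: one thread per output element.
    __global__ void vector_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1024;
        size_t bytes = n * sizeof(float);
        float a[n], b[n], c[n];
        for (int i = 0; i < n; ++i) { a[i] = i; b[i] = 2.0f * i; }

        // Explicit memory management: separate device allocations and copies.
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
        cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

        vector_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
        cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost);   // copy result back to the CPU

        printf("c[42] = %f\n", c[42]);                        // expect 126
        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        return 0;
    }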