GPU Content on InfoQ
Articles
Level up Your Java Performance with TornadoVM
GPUs, FPGAs, and multi-core CPUs are present in almost every computing system today. These devices increase performance and let workloads run more efficiently, but most frameworks for programming them are limited to C or C++. At QCon Plus, Juan Fumero spoke about TornadoVM, a high-performance computing platform for the JVM that offloads Java code to heterogeneous hardware accelerators at runtime.
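As an illustration of this offload model, here is a minimal vector-addition sketch using TornadoVM's classic TaskSchedule API (package, class, and method names follow older releases and may differ in current versions):

    import uk.ac.manchester.tornado.api.TaskSchedule;
    import uk.ac.manchester.tornado.api.annotations.Parallel;

    public class VectorAdd {

        // Plain Java method; TornadoVM JIT-compiles it at runtime for the target device.
        // @Parallel marks the loop as safe to parallelize on the accelerator.
        public static void add(float[] a, float[] b, float[] c) {
            for (@Parallel int i = 0; i < c.length; i++) {
                c[i] = a[i] + b[i];
            }
        }

        public static void main(String[] args) {
            int size = 1 << 20;
            float[] a = new float[size];
            float[] b = new float[size];
            float[] c = new float[size];
            java.util.Arrays.fill(a, 1.0f);
            java.util.Arrays.fill(b, 2.0f);

            new TaskSchedule("s0")
                .task("t0", VectorAdd::add, a, b, c) // task to offload
                .streamOut(c)                        // copy the result back to the host
                .execute();                          // compile and run on a GPU/FPGA if one is available
        }
    }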
-
AI, ML and Data Engineering InfoQ Trends Report - August 2021
How AI, ML and Data Engineering are evolving in 2021 as seen by the InfoQ editorial team. Topics discussed include deep learning, edge deployment of machine learning algorithms, commercial robot platforms, GPU and CUDA programming, natural language processing and GPT-3, MLOps, and AutoML.
-
Accelerating Deep Learning on the JVM with Apache Spark and NVIDIA GPUs
In this article, the authors discuss how to combine the Deep Java Library (DJL), Apache Spark v3, and NVIDIA GPU computing to simplify deep learning pipelines while improving performance and reducing costs. They also compare the performance of this solution on GPU versus CPU hardware, using Amazon EMR and the NVIDIA RAPIDS Accelerator.
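As a hedged sketch of the GPU-enabled Spark side of such a pipeline, the following Java snippet enables the RAPIDS Accelerator on a Spark 3.x session; it assumes the rapids-4-spark plugin jar and CUDA-capable GPUs are available on the executors (for example, an EMR cluster with GPU instances), and the input path is hypothetical:

    import org.apache.spark.sql.SparkSession;

    public class GpuSparkJob {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("dl-inference-on-gpu")
                .config("spark.plugins", "com.nvidia.spark.SQLPlugin")  // RAPIDS SQL plugin
                .config("spark.rapids.sql.enabled", "true")             // run DataFrame/SQL ops on GPU
                .config("spark.executor.resource.gpu.amount", "1")      // one GPU per executor
                .config("spark.task.resource.gpu.amount", "0.25")       // four tasks share a GPU
                .getOrCreate();

            // DataFrame operations supported by the plugin run on the GPU;
            // unsupported operations fall back to the CPU transparently.
            spark.read().parquet("s3://my-bucket/images-metadata/")     // hypothetical input path
                 .groupBy("label").count()
                 .show();

            spark.stop();
        }
    }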
-
Evolution of Azure Synapse: Apache Spark 3.0, GPU Acceleration, Delta Lake, Dataverse Support
At Microsoft Build 2021, Microsoft announced significant improvements to the Azure Synapse Apache Spark pool, covering performance, data querying, and integration capabilities. This article outlines the improvements and provides context.
-
TornadoVM: Accelerating Java with GPUs and FPGAs
The proliferation of heterogeneous hardware is a challenge for programming languages such as Java that target CPUs. TornadoVM extends the Graal JIT compiler to take advantage of GPUs and FPGAs, providing a flexible, high-level programming model while still enabling high performance and features such as live task migration.
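To illustrate the task-migration idea, here is a minimal sketch of TornadoVM's dynamic reconfiguration, again using the classic TaskSchedule API; the Policy names follow older releases and may differ in current versions:

    import uk.ac.manchester.tornado.api.Policy;
    import uk.ac.manchester.tornado.api.TaskSchedule;
    import uk.ac.manchester.tornado.api.annotations.Parallel;

    public class DynamicReconfiguration {

        public static void square(float[] in, float[] out) {
            for (@Parallel int i = 0; i < in.length; i++) {
                out[i] = in[i] * in[i];
            }
        }

        public static void main(String[] args) {
            float[] in = new float[1 << 16];
            float[] out = new float[in.length];
            java.util.Arrays.fill(in, 3.0f);

            TaskSchedule ts = new TaskSchedule("s0")
                .task("t0", DynamicReconfiguration::square, in, out)
                .streamOut(out);

            // Passing a Policy asks the runtime to profile the task across the available
            // devices (multi-core CPU, GPU, FPGA) and migrate it to the best performer.
            ts.execute(Policy.PERFORMANCE);
        }
    }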
-
Joint Forces: From Multithreaded Programming to GPU Computing
In this IEEE article, authors Frank Feinbube, Peter Tröger, and Andreas Polze discuss two major hardware trends in the desktop parallel programming space: multi-core CPU architectures and graphics processing units (GPUs). They also cover best practices for GPU code optimization, such as algorithm design, memory transfer, control flow, instructions, and precision.
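For the multi-core CPU side of that comparison, here is a minimal Java sketch (not taken from the article) of a data-parallel loop spread across CPU cores with the standard library:

    import java.util.stream.IntStream;

    public class CpuParallelSaxpy {
        public static void main(String[] args) {
            float[] a = new float[1 << 20];
            float[] b = new float[1 << 20];
            float[] c = new float[a.length];
            java.util.Arrays.fill(a, 1.0f);
            java.util.Arrays.fill(b, 2.0f);

            // Per-element work distributed across CPU cores by the common Fork/Join pool.
            // A GPU version maps the same per-element work onto thousands of device threads,
            // but additionally has to manage host-to-device memory transfers.
            IntStream.range(0, c.length)
                     .parallel()
                     .forEach(i -> c[i] = 2.0f * a[i] + b[i]);

            System.out.println("c[0] = " + c[0]);
        }
    }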