-
Scaling Uber’s Batch Data Platform: a Journey to the Cloud with Data Mesh Principles
Some months ago, Uber began migrating its batch data analytics and machine learning platform to the cloud, on Google Cloud Platform (GCP). In a recent post on its engineering blog, Uber provided additional detail on the batch data cloud migration, which incorporates crucial data mesh principles.
-
Maybe WebAssembly Is the Next Evolutionary Step From Containers: Fermyon at InfoQ Dev Summit Munich
During her presentation at the inaugural edition of the InfoQ Dev Summit Munich, Danielle Lancashire, principal software engineer at Fermyon and co-chair of the CNCF wasm-wg, pointed to WebAssembly as a greener alternative to containers and a potential evolution of the current containerised approach to serverless computing.
-
NVIDIA Unveils NVLM 1.0: Open-Source Multimodal LLM with Improved Text and Vision Capabilities
NVIDIA unveiled NVLM 1.0, an open-source multimodal large language model (LLM) that performs strongly on both vision-language and text-only tasks. Unlike many current multimodal models, NVLM 1.0 improves on text-only tasks after multimodal training. The model weights are now available on Hugging Face, and the training code is set to be released shortly.
-
InfoQ Dev Summit Munich: How to Optimize Java for the 1BRC
The fastest Java solutions completed the 1 Billion Row Challenge (1BRC) in 1.5 seconds. 1BRC creator Gunnar Morling detailed the winning optimizations at InfoQ Dev Summit Munich 2024. General optimizations applicable to all Java applications cut the runtime from 290 seconds to 20 seconds; getting to 1.5 seconds required niche optimizations that most Java applications should forego, with GraalVM as a possible exception.
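For context, the challenge itself is simple to state: aggregate one billion "station;temperature" lines into per-station minimum, mean, and maximum values. A naive Python sketch of that task is shown below (the file name and output format are illustrative, and this is not the official Java harness); the winning entries replace exactly this line-by-line approach with memory-mapped I/O, custom parsing, and parallelism.

```python
# Naive sketch of the 1BRC aggregation task: per-station min/mean/max.
# "measurements.txt" and the output format are illustrative assumptions.
from collections import defaultdict

def aggregate(path="measurements.txt"):
    # per station: [min, max, sum, count]
    stats = defaultdict(lambda: [float("inf"), float("-inf"), 0.0, 0])
    with open(path, encoding="utf-8") as f:
        for line in f:
            station, _, value = line.rstrip("\n").partition(";")
            t = float(value)
            s = stats[station]
            s[0] = min(s[0], t)
            s[1] = max(s[1], t)
            s[2] += t
            s[3] += 1
    return {k: (v[0], v[2] / v[3], v[1]) for k, v in sorted(stats.items())}

if __name__ == "__main__":
    for station, (lo, mean, hi) in aggregate().items():
        print(f"{station}={lo:.1f}/{mean:.1f}/{hi:.1f}")
```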
-
OpenAI Developer Day 2024 (SF) Announces Realtime API, Vision Fine-Tuning, and More
On October 1, 2024, OpenAI's San Francisco DevDay unveiled several new features, including a Realtime API that enables low-latency voice interactions and supports function calling. Model distillation and vision fine-tuning let developers customize models for a wider range of applications. Upcoming events in London and Singapore will expand on these announcements.
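The Realtime API is WebSocket-based. A minimal, text-only sketch is shown below; the endpoint, beta header, model name, and response.create event shape follow the launch announcement and may change, so treat it as an illustration rather than reference code.

```python
# Minimal Realtime API sketch (text only). Endpoint, headers, model name, and
# event shapes are assumptions based on the launch announcement and may change.
import json
import os

from websocket import create_connection  # pip install websocket-client

url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01"
ws = create_connection(url, header=[
    f"Authorization: Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta: realtime=v1",
])

# Ask the model to produce a response over the open socket.
ws.send(json.dumps({
    "type": "response.create",
    "response": {"modalities": ["text"], "instructions": "Say hello."},
}))

# The server streams events back; print a few and close.
for _ in range(10):
    print(ws.recv())
ws.close()
```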
-
The Linux Kernel to Support Real-Time Scheduling Out-of-the-Box
Linux 6.12 will officially include real-time processing support in the mainline kernel, thanks to a pull request that enables PREEMPT_RT on the architectures that support it. While aimed at applications requiring deterministic timing guarantees, such as avionics, robotics, automotive, and communications, it could also improve the user experience on the desktop.
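PREEMPT_RT changes how quickly the kernel can preempt running work for high-priority tasks; applications still opt in to real-time scheduling policies themselves. A minimal sketch, assuming a Linux system and sufficient privileges (root or CAP_SYS_NICE), of requesting SCHED_FIFO for the current process:

```python
# Request a real-time scheduling policy for the current process (Linux only).
# Requires root or CAP_SYS_NICE; priority 50 is an arbitrary illustrative value.
import os
import time

def go_realtime(priority=50):
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
    except PermissionError:
        print("Need root/CAP_SYS_NICE for SCHED_FIFO; staying on the default policy.")

if __name__ == "__main__":
    go_realtime()
    print("Scheduler policy:", os.sched_getscheduler(0))
    # On a PREEMPT_RT kernel, wakeup latency for a task like this is far more
    # deterministic than on a standard kernel.
    for _ in range(3):
        time.sleep(0.001)
```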
-
Setting up a Data Mesh Organization
A data mesh organization consists of producers, consumers, and the platform. According to Matthias Patzak, the mission of the platform team is to make the lives of producers and consumers simple, efficient, and stress-free. Data must be discoverable and understandable, trustworthy, and shared securely and easily across the organization.
-
Hugging Face Upgrades Open LLM Leaderboard v2 for Enhanced AI Model Comparison
Hugging Face has recently released Open LLM Leaderboard v2, an upgraded version of its benchmarking platform for large language models. The leaderboard was created to provide a standardized evaluation setup for reference models, ensuring reproducible and comparable results.
-
JFrog Integrates Runtime Security for Enhanced DevSecOps Platform
JFrog has introduced JFrog Runtime to its suite of security capabilities, adding real-time vulnerability detection to its software supply chain platform. This update is aimed at developers and DevSecOps teams working with Kubernetes clusters and cloud-native applications.
-
Data Teams Survey: Lag in DataOps and Value Delivered
We report on Jesse Anderson's 2024 Data Teams Survey, which showed a lag in DataOps capabilities, slow LLM adoption, and a concerning decline in the perceived value created by data teams. It called out the importance of teams that combine data science, engineering, and operations capabilities. We also cover Petr Janda's recent podcast on the need for more engineering rigour to reach parity with other teams.
-
Uno Platform 5.4 Improves App Performance
The Uno Platform has unveiled its latest update, version 5.4, packed with over 290 new features and enhancements. As part of this update, the team has prioritised addressing concerns raised by enterprise clients, alongside improving the overall performance of applications built on the Uno Platform.
-
MongoDB 8.0 Now Available with Performance Gains and Enhanced Sharding
MongoDB has announced the general availability of MongoDB 8.0, introducing significant performance enhancements and new features. Highlights include embedded sharding configuration servers, expanded support for queryable encryption, and the capability to move collections across shards without requiring a shard key.
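The headline ability to move a collection between shards without a shard key is exposed as an admin command. A minimal PyMongo sketch, assuming a sharded 8.0 cluster and the moveCollection command with a toShard parameter as described for 8.0 (the connection string, namespace, and shard name are illustrative):

```python
# Move an unsharded collection to another shard (MongoDB 8.0+).
# Connection string, namespace "app.events", and shard name "shard02"
# are illustrative assumptions, not taken from the announcement.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # mongos router of the cluster

# moveCollection relocates the collection without requiring a shard key.
result = client.admin.command({"moveCollection": "app.events", "toShard": "shard02"})
print(result)
```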
-
Breaking down Python 3.13’s Latest Features
Python 3.13 introduces a revamped interactive interpreter with multi-line editing, an experimental free-threaded mode, and an experimental just-in-time (JIT) compiler. The update also removes several outdated standard-library modules and adds a command-line interface to the random module.
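The free-threaded mode is the most far-reaching of these changes: on a free-threaded build (the separate python3.13t binary), CPU-bound threads can actually run in parallel. A small sketch to contrast the two builds:

```python
# Contrast CPU-bound threading on a standard vs. free-threaded 3.13 build.
# Run once with the regular python3.13 and once with python3.13t.
import sys
import time
from concurrent.futures import ThreadPoolExecutor

def burn(n):
    # Pure-Python CPU-bound work; on a GIL build the threads serialize.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(burn, [3_000_000] * 4))
    elapsed = time.perf_counter() - start
    # sys._is_gil_enabled() may not exist on every build, hence the fallback.
    gil = getattr(sys, "_is_gil_enabled", lambda: True)()
    print(f"GIL enabled: {gil}, 4 CPU-bound threads took {elapsed:.2f}s")
```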
-
PayPal Adds GenAI Support with LLMs to Its Cosmos.AI MLOps Platform
PayPal extended its MLOps platform Cosmos.AI to support the development of generative AI applications using large language models (LLMs). The company incorporated support for vendor, open-source, and self-tuned LLMs and provided capabilities around retrieval-augmented generation (RAG), semantic caching, prompt management, orchestration, and AI application hosting.
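Of those building blocks, semantic caching is perhaps the least familiar: instead of keying a cache on the exact prompt string, a response is reused when a new prompt is semantically close to one already answered. A generic, hypothetical sketch of the idea (embed, call_llm, and the 0.9 threshold are placeholders, not Cosmos.AI internals):

```python
# Generic semantic-cache sketch: reuse an LLM response when a new prompt is
# close enough in embedding space to one already answered. The embed and
# call_llm callables and the 0.9 threshold are placeholders.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

class SemanticCache:
    def __init__(self, embed, call_llm, threshold=0.9):
        self.embed, self.call_llm, self.threshold = embed, call_llm, threshold
        self.entries = []  # list of (embedding, response)

    def ask(self, prompt):
        vec = self.embed(prompt)
        for cached_vec, response in self.entries:
            if cosine(vec, cached_vec) >= self.threshold:
                return response  # semantic cache hit: skip the LLM call
        response = self.call_llm(prompt)
        self.entries.append((vec, response))
        return response
```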
-
University of Chinese Academy of Sciences Open-Sources Multimodal LLM LLaMA-Omni
Researchers at the University of Chinese Academy of Sciences (UCAS) recently open-sourced LLaMA-Omni, an LLM that can operate on both speech and text data. LLaMA-Omni is based on Meta's Llama-3.1-8B-Instruct LLM and outperforms similar baseline models while requiring less training data and compute.