Machine Learning Content on InfoQ
-
Modern Compute Stack for Scaling Large AI/ML/LLM Workloads
Jules Damji discusses which infrastructure should be used for distributed fine-tuning and training, how to scale ML workloads, how to accommodate large models, and how CPUs and GPUs can be utilized.
-
Platform and Features MLEs, a Scalable and Product-Centric Approach for High Performing Data Products
Massimo Belloni discusses the lessons learned over the last couple of years in organizing the Data Science team and the Machine Learning Engineering efforts at Bumble Inc.
-
Going beyond the Case of Black Box AutoML
Kiran Kate covers the basics of AutoML and then presents Lale (https://github.com/IBM/lale), an open-source, scikit-learn-compatible AutoML library that implements Gradual AutoML.
-
Simplifying Real-Time ML Pipelines with Quix Streams
Tomáš Neubauer discusses Quix Streams, an open-source Python library that helps data scientists and ML engineers build real-time ML pipelines.
-
Improve Feature Freshness in Large Scale ML Data Processing
Zhongliang Liang covers the impact of feature freshness on model performance, discussing various strategies and techniques that can be used to improve feature freshness.
-
Needle in a 930M Member Haystack: People Search AI @LinkedIn
Mathew Teoh explores how LinkedIn's People Search system uses ML to surface the right person you're looking for.
-
PostgresML: Leveraging Postgres as a Vector Database for AI
Montana Low provides an understanding of how Postgres can be used as a vector database for AI and how it can be integrated into your existing application stack.
-
Introducing the Hendrix ML Platform: an Evolution of Spotify’s ML Infrastructure
Divita Vohra and Mike Seid discuss Spotify’s newly branded platform, and share insights gained from a five-year journey building ML infrastructure.
-
Strategy & Principles to Scale and Evolve MLOps @DoorDash
Hien Luu shares their approach to MLOps, and the strategy and principles that have helped them to scale and evolve their platform to support hundreds of models and billions of predictions per day.
-
Declarative Machine Learning: a Flexible, Modular and Scalable Approach for Building Production ML Models
Shreya Rajpal discusses declarative ML systems, and how they address key issues that help shorten the time taken to bring ML models to production.
-
LLMs in the Real World: Structuring Text with Declarative NLP
Adam Azzam discusses why building machine learning pipelines to extract structured data from unstructured text is a popular problem within an unpopular development lifecycle.
-
Fabricator: End-to-End Declarative Feature Engineering Platform
Kunal Shah discusses how their ML platform team designed Fabricator by integrating various open-source and enterprise solutions to deliver a declarative, end-to-end feature engineering framework.