Big Data Content on InfoQ
-
QCon SF 2024 - Incremental Data Processing at Netflix
Jun He gave a talk at QCon SF 2024 titled “Efficient Incremental Processing with Netflix Maestro and Apache Iceberg.” He showed how Netflix used the system to reduce processing time and cost while improving data freshness.
-
Setting up a Data Mesh Organization
A data mesh organization consists of producers, consumers, and the platform. According to Matthias Patzak, the mission of the platform team is to make the lives of producers and consumers simple, efficient, and stress-free. Data must be discoverable and understandable, trustworthy, and shared securely and easily across the organization.
-
Measuring and Reducing the Environmental Impact of Software
Software applications often manage large amounts of data; most of them are internet-based applications and incorporate artificial intelligence. According to Coral Calero, these three aspects improve the capabilities and functionality provided by software, but they also increase the amount of energy needed. We need to measure the energy consumption of software to control its environmental impact.
-
Uber’s Journey to Modernizing Big Data Infrastructure with Google Cloud Platform
In a recent post on its official engineering blog, Uber disclosed its strategy to migrate its batch data analytics and machine learning (ML) training stack to Google Cloud Platform (GCP). Uber runs one of the largest Hadoop installations in the world, managing over an exabyte of data across tens of thousands of servers in each of its two regions.
-
How Data Mesh Platforms Connect Data Producers and Consumers
A challenge that companies often face when exploiting their data in data warehouses or data lakes is that ownership of analytical data is weak or non-existent, and quality can suffer as a result. A data mesh is an organizational paradigm shift in how companies create value from data, one that puts responsibility back into the hands of producers and consumers.
-
Uber Migrates 1 Trillion Records from DynamoDB to LedgerStore to Save $6 Million Annually
Uber migrated all its payment transaction data from DynamoDB and blob storage into a new long-term solution, a purpose-built data store named LedgerStore. The company was looking for cost savings and had previously limited DynamoDB to storing only hot data (no more than 12 weeks old). The move resulted in significant savings and simplified the storage architecture.
-
QCon London: Lessons Learned from Building LinkedIn’s AI/ML Data Platform
At the QCon London 2024 conference, Félix GV from LinkedIn discussed the AI/ML platform powering the company’s products. He specifically delved into Venice DB, the NoSQL data store used for feature persistence. The presenter shared the lessons learned from evolving and operating the platform, including cluster management and library versioning.
-
Netflix Uses Metaflow to Manage Hundreds of AI/ML Applications at Scale
Netflix recently published how its Machine Learning Platform (MLP) team provides an ecosystem around Metaflow, an open-source machine learning infrastructure framework. Thanks to the various integrations it has built for Metaflow, Netflix now runs hundreds of Metaflow projects maintained by multiple engineering teams.
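Metaflow structures an ML application as a graph of steps defined in plain Python. As a rough illustration of the open-source API the article refers to, the toy flow below shows how steps and artifacts fit together; the flow itself is purely illustrative, not one of Netflix's internal projects:

```python
# Minimal Metaflow flow; run with `python hello_flow.py run`.
# The flow is a toy example, not a Netflix project.
from metaflow import FlowSpec, step


class HelloFlow(FlowSpec):

    @step
    def start(self):
        # Values assigned to self become artifacts that Metaflow persists
        # and passes to downstream steps automatically.
        self.message = "hello from Metaflow"
        self.next(self.end)

    @step
    def end(self):
        print(self.message)


if __name__ == "__main__":
    HelloFlow()
```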
-
Spotify's Approach to Leverage Recursive Embedding and Clustering to Enhance Data Explainability
One of the main challenges for any online business is getting actionable insights from its data for decision-making. Spotify shares the methodology and experience behind its solution: clustering diverse data sets with a method that combines dimensionality reduction, recursion, and supervised machine learning.
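The article does not include code, but the general shape of recursive embedding and clustering can be sketched as follows. The algorithm choices (PCA and k-means) and all names and parameters here are assumptions for illustration, not Spotify's actual pipeline:

```python
# Hypothetical sketch of recursive embedding + clustering.
# PCA and KMeans are stand-ins; Spotify's pipeline may use different algorithms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA


def recursive_cluster(features, depth=0, max_depth=2, min_size=50, n_clusters=5):
    """Embed the data into a lower-dimensional space, cluster it, and recurse
    into each cluster until it is small enough or the depth limit is reached."""
    if depth >= max_depth or len(features) <= min_size:
        return [features]

    # Dimensionality reduction before clustering.
    embedding = PCA(n_components=min(10, features.shape[1])).fit_transform(features)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedding)

    clusters = []
    for label in np.unique(labels):
        subset = features[labels == label]
        clusters.extend(recursive_cluster(subset, depth + 1, max_depth, min_size, n_clusters))
    return clusters


# Example usage on random data.
leaf_clusters = recursive_cluster(np.random.rand(1000, 32))
print(f"{len(leaf_clusters)} leaf clusters")
```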
-
Netflix Creates Incremental Processing Solution Using Maestro and Apache Iceberg
Netflix created a new solution for incremental processing in its data platform. The incremental approach reduces the cost of computing resources and execution time significantly as it avoids processing complete datasets. The company used its Maestro workflow engine and Apache Iceberg to improve data freshness and accuracy and plans to provide managed backfill capabilities.
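A core idea behind the approach is that Iceberg tracks table changes as snapshots, so a workflow can process only the data appended since its last run instead of the full table. A rough sketch of such an incremental read using Iceberg's Spark options is shown below; the table name and snapshot IDs are placeholders, and this is not Netflix's actual Maestro integration:

```python
# Sketch of reading only newly appended rows from an Iceberg table between two
# snapshots (requires the Iceberg Spark runtime and a configured catalog).
# Table name and snapshot IDs are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("incremental-read-sketch").getOrCreate()

incremental_df = (
    spark.read.format("iceberg")
    # Read only rows appended after this snapshot...
    .option("start-snapshot-id", "1234567890")
    # ...up to and including this snapshot.
    .option("end-snapshot-id", "1234567999")
    .load("demo.db.events")
)

# Downstream processing now touches only the new slice of data, which is what
# keeps compute cost and execution time low compared to full reprocessing.
incremental_df.groupBy("event_type").count().show()
```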
-
AWS Announces European Sovereign Cloud for Government Agencies and Regulated Industries
AWS has recently announced that it is working on a European Sovereign Cloud, a new European region that will be operationally independent of all existing AWS regions. No availability date has been provided for the new option that targets government agencies and regulated industries that store sensitive data and run critical workloads in the European Union (EU).
-
Distributed Materialized Views: How Airbnb’s Riverbed Processes 2.4 Billion Daily Events
Airbnb created Riverbed, a Lambda-like data framework for producing and managing distributed materialized views. The framework supports over 50 read-heavy use cases where data is sourced from multiple data sources within the company’s service-oriented architecture (SOA) platform. It uses Apache Kafka and Apache Spark for online and offline components, respectively.
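As a rough illustration of the streaming half of such a system, the sketch below keeps an aggregated view continuously updated from a Kafka topic using Spark Structured Streaming. The topic name, schema, and console sink are illustrative assumptions, not Airbnb's actual Riverbed implementation:

```python
# Minimal sketch of a streaming "materialized view": consume change events from
# Kafka, aggregate them, and keep the result continuously updated.
# Requires the spark-sql-kafka connector; names and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("materialized-view-sketch").getOrCreate()

event_schema = StructType([
    StructField("listing_id", StringType()),
    StructField("event_type", StringType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "listing-events")
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# The "view": per-listing event counts, updated as new events arrive.
view = events.groupBy("listing_id").agg(count("*").alias("event_count"))

query = (
    view.writeStream.outputMode("update")
    .format("console")  # in practice this would be a serving store
    .start()
)
query.awaitTermination()
```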
-
QCon San Francisco 2023 Day 1: Architectures, Data Engineering, Infra Languages, Staff+ Skills
The 17th annual QCon San Francisco conference was held at the Hyatt Regency in San Francisco, California. The five-day event, organized by C4Media, consisted of three days of presentations and two days of workshops. Day one, held on October 2nd, 2023, included a keynote address by Suhail Patel and presentations from four conference tracks and two sponsored tracks.
-
Managing 238 Million Memberships at Netflix: Surabhi Diwan at QCon San Francisco
During the first day of QCon San Francisco 2023, Surabhi Diwan, a senior software engineer at Netflix, presented on how Netflix manages its 238 million memberships. The talk was part of the “Architectures You’ve Always Wondered About” track. At Netflix, Diwan works on the backend of membership engineering, which is critical for both signups and streaming.
-
Grammarly Replaces Its In-House Data Lake with Databricks Platform Using Medallion Architecture
Grammarly adopted the medallion architecture while migrating from its in-house data lake, which stored Parquet files in AWS S3, to the Delta Lake lakehouse. The company created a new event store for over 6,000 event types from 40 internal and external clients and, in the process, improved data quality and reduced data-delivery time by 94%.
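The medallion pattern refines data through successive bronze (raw), silver (cleaned), and gold (business-level) layers. A compact sketch of that flow with Delta Lake and PySpark is shown below; the paths, columns, and cleaning rules are hypothetical, not Grammarly's pipeline:

```python
# Illustrative medallion-architecture sketch (bronze -> silver -> gold) with
# Delta Lake. Requires the delta-spark package; paths and columns are made up.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, to_date

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw events landed as-is.
bronze = spark.read.json("s3://example-bucket/raw/events/")
bronze.write.format("delta").mode("append").save("/lake/bronze/events")

# Silver: validated, de-duplicated records with typed columns.
silver = (
    spark.read.format("delta").load("/lake/bronze/events")
    .where(col("event_type").isNotNull())
    .dropDuplicates(["event_id"])
    .withColumn("event_date", to_date(col("event_timestamp")))
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/events")

# Gold: business-level aggregate ready for analytics.
gold = silver.groupBy("event_date", "event_type").agg(count("*").alias("events"))
gold.write.format("delta").mode("overwrite").save("/lake/gold/daily_event_counts")
```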