Big Data Everywhere 2014 Content on InfoQ
-
High Performance Computing Contributions to the World of Big Data
Sharan Kalwani presents the history of HPC and the technologies and trends that have contributed to creating the world of big data, covering the HPC applications that gave rise to big data technologies.
-
A Distributed Transactional Database on Hadoop
John Leach explains using HBase co-processors to support a full ANSI SQL RDBMS without modifying the core HBase source, showing how Hadoop/HBase can replace traditional RDBMS solutions.
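As a rough illustration of the mechanism described (not Splice Machine's actual code), an HBase RegionObserver coprocessor can intercept writes inside the region server without any change to the core HBase source; the class, table layout and column names below are hypothetical, sketched in Scala against the 0.98-era Java API.

    import org.apache.hadoop.hbase.client.{Durability, Put}
    import org.apache.hadoop.hbase.coprocessor.{BaseRegionObserver, ObserverContext, RegionCoprocessorEnvironment}
    import org.apache.hadoop.hbase.regionserver.wal.WALEdit
    import org.apache.hadoop.hbase.util.Bytes

    // Hypothetical observer: rejects any Put that lacks an "sql:txn_id" column.
    // Runs server-side as a coprocessor, so the core HBase source stays untouched.
    class TxnColumnCheck extends BaseRegionObserver {
      private val family = Bytes.toBytes("sql")
      private val qualifier = Bytes.toBytes("txn_id")

      override def prePut(ctx: ObserverContext[RegionCoprocessorEnvironment],
                          put: Put, edit: WALEdit, durability: Durability): Unit = {
        if (!put.has(family, qualifier))
          throw new java.io.IOException("write rejected: missing sql:txn_id")
      }
    }

Coprocessors like this are attached through the table descriptor or site configuration rather than compiled into HBase itself.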
-
Why Would You Integrate Solr and Hadoop?
Yann Yu discusses how Solr and Hadoop complement each other, and how to use Solr as a real-time, analytical, full-text search front-end to data stored in Hadoop.
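A minimal sketch of the query side of such a pairing, assuming a Solr 4.x core (here collection1) whose index was built from data sitting in Hadoop; the URL, core name and field names are placeholders.

    import org.apache.solr.client.solrj.SolrQuery
    import org.apache.solr.client.solrj.impl.HttpSolrServer
    import scala.collection.JavaConverters._

    object SolrSearchFrontEnd {
      def main(args: Array[String]): Unit = {
        // SolrJ 4.x client pointed at a core indexed from HDFS data (placeholder URL)
        val server = new HttpSolrServer("http://localhost:8983/solr/collection1")

        val query = new SolrQuery("body:hadoop") // full-text query on a hypothetical "body" field
        query.setRows(10)

        val results = server.query(query).getResults
        results.asScala.foreach(doc => println(doc.getFieldValue("id")))

        server.shutdown()
      }
    }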
-
1.5 Million Log Lines Per Second: Building and Maintaining Flume Flows at Conversant
Mike Keane presents how Conversant migrated to Flume, managing 1000 agents across 4 data centers, processing over 50B log lines per day with peak hourly averages of over 1.5 million log lines/sec.
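For flavor, a minimal producer-side sketch that pushes events into such a flow through Flume's RPC client API; the host, port and payload are placeholders and error handling is omitted.

    import java.nio.charset.Charset
    import org.apache.flume.api.RpcClientFactory
    import org.apache.flume.event.EventBuilder

    object LogLineSender {
      def main(args: Array[String]): Unit = {
        // Connect to a Flume agent's Avro source (placeholder host and port)
        val client = RpcClientFactory.getDefaultInstance("flume-agent.example.com", 41414)
        try {
          val event = EventBuilder.withBody("one log line", Charset.forName("UTF-8"))
          client.append(event) // returns once the agent has accepted the event
        } finally {
          client.close()
        }
      }
    }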
-
The Big Data Imperative: Discovering & Protecting Sensitive Data in Hadoop
Jeremy Stieglitz discusses best practices for a data-centric security, compliance and data governance approach, with a particular focus on two customer use cases.
-
Why Spark Is the Next Top (Compute) Model
Dean Wampler argues that Spark/Scala is a better data processing engine than MapReduce/Java because tools inspired by mathematics, such as functional programming, are ideal for working with data.
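The stock illustration of that argument is word count: in Spark's Scala API it is a short chain of function compositions, where classic MapReduce needs separate mapper and reducer classes plus driver boilerplate. A minimal sketch, with placeholder HDFS paths:

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCount {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("WordCount"))

        sc.textFile("hdfs:///data/input")            // placeholder input path
          .flatMap(_.split("\\s+"))                  // split lines into words
          .map(word => (word, 1))                    // pair each word with a count of 1
          .reduceByKey(_ + _)                        // sum the counts per word
          .saveAsTextFile("hdfs:///data/wordcounts") // placeholder output path

        sc.stop()
      }
    }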
-
Customer Analytics on Hadoop
Bob Kelly presents case studies on how Platfora uses Hadoop to do analytics for several of its customers.
-
Unleash the Power of HBase Shell
Jayesh Thakrar shows what can be done with irb, how to exploit JRuby-Java integration, and how the HBase Shell can be used in Hadoop Streaming to perform complex, large-volume batch jobs.
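The shell's power comes from the fact that JRuby scripts can call the same HBase Java client classes directly; a rough equivalent of such a scripted scan, sketched against the 0.98-era client API with a hypothetical table and column:

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.{HTable, Scan}
    import org.apache.hadoop.hbase.util.Bytes
    import scala.collection.JavaConverters._

    object AccessLogScan {
      def main(args: Array[String]): Unit = {
        // Same client classes the JRuby shell exposes; table and column are hypothetical
        val table = new HTable(HBaseConfiguration.create(), "access_logs")
        val scan = new Scan()
        scan.addColumn(Bytes.toBytes("d"), Bytes.toBytes("url"))

        val scanner = table.getScanner(scan)
        try {
          scanner.asScala.foreach { result =>
            println(Bytes.toString(result.getValue(Bytes.toBytes("d"), Bytes.toBytes("url"))))
          }
        } finally {
          scanner.close()
          table.close()
        }
      }
    }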
-
Leading a Healthcare Company to the Big Data Promised Land: A Case Study of Hadoop in Healthcare
Mohammad Quraishi presents the implementation of a Big Data initiative, detailing preparation, goal evaluation, winning executive buy-in, and post-implementation evaluation.
-
TSAR: How to Count Tens of Billions of Daily Events in Real Time Using Open Source Technologies
Gabriel Gonzalez introduces TSAR (TimeSeries AggregatoR), a service for real-time event aggregation designed to deal with tens of billions of events per day at Twitter.
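TSAR itself is a declarative DSL on top of Twitter's stack, but the underlying idea, bucketing an event stream by time and key and keeping rolled-up counts, can be sketched in plain Scala; the Event shape and the one-minute bucket are assumptions, not TSAR's actual API.

    // Plain-Scala sketch of time-bucketed event counting; not TSAR's DSL.
    case class Event(name: String, timestampMillis: Long)

    object MinuteCounts {
      private val MinuteMillis = 60 * 1000L

      // Roll events up into (minute bucket, event name) -> count
      def aggregate(events: Seq[Event]): Map[(Long, String), Long] =
        events
          .groupBy(e => (e.timestampMillis / MinuteMillis, e.name))
          .map { case (key, group) => key -> group.size.toLong }

      def main(args: Array[String]): Unit = {
        val now = System.currentTimeMillis()
        val sample = Seq(Event("impression", now), Event("impression", now + 10), Event("click", now))
        aggregate(sample).foreach { case ((minute, name), count) =>
          println(s"minute=$minute event=$name count=$count")
        }
      }
    }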
-
Building a Data Pipeline with the Tools You Have - An Orbitz Case Study
Steve Hoffman and Ken Dallmeyer share their experience integrating Hadoop into the existing environment at Orbitz, creating a reusable data pipeline for ingesting, transporting, consuming and storing data.
-
SQL on Hadoop - Pros, Cons, the Haves and Have Nots
Ted Dunning discusses the different options for running SQL on Hadoop including pros and cons.