-
Building a CI System with Java 9 Modules and Vert.x Microservices
Java 9 modules and Vert.x microservices are compatible for building applications, as showcased by this example application, which implements a minimal but working CI system.
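As a taste of the combination, here is a minimal sketch (not taken from the article's code) of a Java 9 module descriptor plus a single Vert.x verticle exposing an HTTP endpoint; the module name, endpoint, and port are assumptions:

```java
// module-info.java (module and dependency names are assumptions, not from the article)
module ci.build.status {
    requires io.vertx.core; // Vert.x core, via its assumed automatic module name
}

// BuildStatusVerticle.java — one microservice of the CI system, sketched minimally
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class BuildStatusVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // Expose a single HTTP endpoint reporting a (hard-coded) build status
        vertx.createHttpServer()
             .requestHandler(req -> req.response()
                                       .putHeader("content-type", "application/json")
                                       .end("{\"status\":\"passing\"}"))
             .listen(8080);
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new BuildStatusVerticle());
    }
}
```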
-
Migrating Batch ETL to Stream Processing: A Netflix Case Study with Kafka and Flink
At QCon New York, Shriya Arora presented “Personalising Netflix with Streaming Datasets” and discussed the trials and tribulations of migrating a Netflix data-processing job from traditional batch-style ETL to stream processing with Apache Flink.
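To illustrate the shape of such a job, here is a minimal sketch of a Flink streaming job consuming from Kafka; the topic name, broker address, and transformation are placeholders (not Netflix's actual job), and the connector class name varies with the Flink/Kafka versions in use:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaToFlinkSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.setProperty("group.id", "etl-migration-sketch");

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Consume raw events from a Kafka topic and apply a per-event transformation;
        // the map stands in for whatever per-record processing the batch job did.
        env.addSource(new FlinkKafkaConsumer<>("play-events", new SimpleStringSchema(), props))
           .map(String::toUpperCase)
           .print();

        env.execute("batch-etl-to-streaming-sketch");
    }
}
```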
-
The Kubernetes Effect
Successfully designing, implementing, and running applications on Kubernetes requires knowledge of its primitives and an awareness of design principles and practices. This article provides an overview of Kubernetes and guidance on how to best use it.
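As one concrete illustration of working with those primitives, the sketch below creates a Deployment using the fabric8 Kubernetes Java client; this is an assumed example, not code from the article, and the name, labels, image, and namespace are all placeholders:

```java
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class DeploymentSketch {
    public static void main(String[] args) {
        // Build a Deployment (a core Kubernetes primitive) programmatically
        Deployment deployment = new DeploymentBuilder()
            .withNewMetadata().withName("hello").endMetadata()
            .withNewSpec()
                .withReplicas(2)
                .withNewSelector().addToMatchLabels("app", "hello").endSelector()
                .withNewTemplate()
                    .withNewMetadata().addToLabels("app", "hello").endMetadata()
                    .withNewSpec()
                        .addNewContainer()
                            .withName("hello")
                            .withImage("nginx:1.15")
                            .addNewPort().withContainerPort(80).endPort()
                        .endContainer()
                    .endSpec()
                .endTemplate()
            .endSpec()
            .build();

        // Apply it to the cluster pointed at by the local kubeconfig
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            client.apps().deployments().inNamespace("default").createOrReplace(deployment);
        }
    }
}
```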
-
Advice on Starting Your Own Software Company
No matter how great your idea is, there are a lot of down-to-earth things that should be considered and carefully planned if you want to found a software company and ensure its survival. Why didn't YouTube's predecessor ever achieve the success of today's favorite? Why did timing save Airbnb? Coming up with a good idea and making it actually work are two different things.
-
Is Project Treble the Answer to Android Updates?
While iOS updates can usually be installed on all supported devices the day they are released, Android updates are annoyingly slow to roll out. As a result, fragmentation has been a major problem in the Android world for several years. Project Treble is an attempt to remedy this entire set of problems. This article will introduce its architecture and discuss its chances of success.
-
Q&A on the Book The Startup Way
The book The Startup Way by Eric Ries explores how large organizations can use startup techniques to innovate and accelerate growth. It provides methods for creating a transformation roadmap towards an entrepreneurial way of working: experiment and collect data, roll out entrepreneurial practices throughout the organization, and tackle supporting systems like legal, finance, and HR.
-
DevOps and Cloud InfoQ Trends Report - January 2018
This article, following on from the Culture and Methods piece we published last week, provides a summary of how we currently see the operations space, which for us is mainly DevOps and cloud.
-
Why and How Database Changes Should Be Included in the Deployment Pipeline
Eduardo Piairo explains why databases and applications should coexist in the same deployment pipeline, and describes different scenarios and the steps to achieve this.
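For example, a pipeline stage that migrates the database schema alongside an application deploy might look like the sketch below, which uses Flyway as one possible tool (the article does not mandate a specific one); the connection details are placeholders:

```java
import org.flywaydb.core.Flyway;

public class MigrateDatabaseStep {
    public static void main(String[] args) {
        // Connection details are placeholders; in a real pipeline they would come
        // from the environment of the stage being deployed to.
        Flyway flyway = Flyway.configure() // fluent API of Flyway 5.1+
                .dataSource("jdbc:postgresql://localhost:5432/app", "ci_user", "secret")
                .load();

        // Applies any pending versioned SQL migrations, in order, exactly once
        flyway.migrate();
    }
}
```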
-
Exploring the Fundamentals of Stream Processing with the Dataflow Model and Apache Beam
At QCon San Francisco 2016, Frances Perry and Tyler Akidau presented “Fundamentals of Stream Processing with Apache Beam” and discussed Google's Dataflow model and its associated open-source implementation, Apache Beam.
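For a flavor of the model, here is a minimal Beam (Java SDK) sketch that windows an input into one-minute fixed windows and counts elements per window; the input path is an assumption, not taken from the talk:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.joda.time.Duration;

public class WindowedCountSketch {
    public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        pipeline
            // Input path is a placeholder; any bounded or unbounded source works
            .apply("ReadEvents", TextIO.read().from("gs://my-bucket/events-*.txt"))
            // "Where in event time?" — one-minute fixed windows
            .apply("FixedWindows", Window.<String>into(FixedWindows.of(Duration.standardMinutes(1))))
            // "What is computed?" — a per-element count within each window
            .apply("CountPerElement", Count.perElement());

        pipeline.run().waitUntilFinish();
    }
}
```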
-
Scaling Agile – Big Room Planning
This third article in the series about making scaled agile work explores how to do big room planning: two days of planning every three months, together with all program and team members, providing an overview of all the work to be done in the next quarter. Towards the end of the two days, team and program objectives for the quarter are agreed upon, and risks are discussed and mitigated.
-
Playing with Messaging Chatbots in the Omnichannel Contact Center
The proliferation of messaging platforms is forcing companies to shift towards an omnichannel strategy, where they need to be able to reach people on their preferred channels. In this article we will develop an omnichannel messaging chatbot that offers two-way communication over SMS and Facebook using the Twilio Studio visual tool.
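The article builds the bot visually in Twilio Studio; for readers who prefer code, the sketch below shows an equivalent programmatic SMS send with Twilio's Java helper library, with placeholder phone numbers and credentials read from the environment:

```java
import com.twilio.Twilio;
import com.twilio.rest.api.v2010.account.Message;
import com.twilio.type.PhoneNumber;

public class SendSmsSketch {
    public static void main(String[] args) {
        // Credentials come from the environment; phone numbers are placeholders
        Twilio.init(System.getenv("TWILIO_ACCOUNT_SID"), System.getenv("TWILIO_AUTH_TOKEN"));

        Message message = Message.creator(
                new PhoneNumber("+15558675310"),  // destination
                new PhoneNumber("+15017122661"),  // your Twilio number
                "Thanks for reaching out! Reply HELP to talk to an agent.")
            .create();

        System.out.println("Sent message SID: " + message.getSid());
    }
}
```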
-
What Do Data Scientists and Data Engineers Need to Know about GDPR?
Andrew Burt on the implications of GDPR for data collection, storage, and use in any organization dealing with customer data in the EU. Burt explains the minimum an organization needs to do to pass the GDPR test, as well as how to take the opportunity to improve its overall data governance.