The processing API lets the developer quickly assemble complex distributed processes without having to "think" in MapReduce, and to efficiently schedule them based on their dependencies and other available metadata.

The core concepts of the Cascading API are pipes and flows. A pipe is a series of processing steps (parsing, looping, filtering, etc.) that defines the data processing to be done, and a flow is the association of a pipe (or set of pipes) with a data source and a data sink. In other words, a flow is a pipe with data flowing through it. Going one step further, a cascade is the chaining, branching, and grouping of multiple flows.
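As a concrete illustration, here is a minimal word-count flow sketched against Cascading's classic 1.x-era API (the input and output paths are placeholders, and exact class names and signatures may vary between Cascading releases):

```java
import java.util.Properties;

import cascading.flow.Flow;
import cascading.flow.FlowConnector;
import cascading.operation.aggregator.Count;
import cascading.operation.regex.RegexSplitGenerator;
import cascading.pipe.Each;
import cascading.pipe.Every;
import cascading.pipe.GroupBy;
import cascading.pipe.Pipe;
import cascading.scheme.TextLine;
import cascading.tap.Hfs;
import cascading.tap.SinkMode;
import cascading.tap.Tap;
import cascading.tuple.Fields;

public class WordCount {
  public static void main(String[] args) {
    // A pipe is the processing definition: split each line into words,
    // group by word, then count the tuples in each group.
    Pipe pipe = new Each("wordcount", new Fields("line"),
        new RegexSplitGenerator(new Fields("word"), "\\s+"));
    pipe = new GroupBy(pipe, new Fields("word"));
    pipe = new Every(pipe, new Count());

    // Taps bind the pipe to a concrete data source and data sink on HDFS.
    Tap source = new Hfs(new TextLine(new Fields("offset", "line")), "input/docs");
    Tap sink = new Hfs(new TextLine(), "output/wordcounts", SinkMode.REPLACE);

    // A flow is the pipe plus its source and sink; Cascading plans it into
    // one or more MapReduce jobs and runs them against the Hadoop cluster.
    Flow flow = new FlowConnector(new Properties())
        .connect("wordcount", source, sink, pipe);
    flow.complete();
  }
}
```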
There are a number of key features provided by this API:
- Dependency-Based 'Topological Scheduler' and MapReduce Planning - Two key components of the Cascading API are its ability to schedule the invocation of flows based on their dependencies, with the execution order being independent of construction order and often allowing portions of flows and cascades to be invoked concurrently, and its ability to intelligently convert the steps of the various flows into MapReduce invocations against the Hadoop cluster (see the scheduling sketch after this list).
- Event Notification - The various steps of the flow can perform notifications via callbacks, allowing the host application to report on and respond to the progress of the data processing (see the listener sketch after this list).
- Scriptable - The Cascading API has scriptable interfaces for Jython, Groovy, and JRuby, making it readily accessible from popular dynamic JVM languages.
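As a rough sketch of the dependency-based scheduling mentioned above, a CascadeConnector can join several flows into a cascade; Cascading then orders the flows by their source/sink dependencies rather than by the order in which they were passed in. The flow parameters here are assumed to be built along the lines of the earlier word-count example:

```java
import cascading.cascade.Cascade;
import cascading.cascade.CascadeConnector;
import cascading.flow.Flow;

public class RunCascade {
  public static void run(Flow importFlow, Flow wordCountFlow, Flow reportFlow) {
    // The connector inspects each flow's sources and sinks and builds a
    // dependency graph; construction order does not dictate run order.
    Cascade cascade = new CascadeConnector()
        .connect(importFlow, wordCountFlow, reportFlow);

    // Flows with no unmet dependencies may run concurrently; a downstream
    // flow starts once the flows producing its inputs have completed.
    cascade.complete();
  }
}
```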
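And for the event notifications described above, a hedged sketch of a callback: Cascading exposes a FlowListener interface that receives lifecycle events from a running flow (the logging below is purely illustrative):

```java
import cascading.flow.Flow;
import cascading.flow.FlowListener;

// A listener that simply logs each flow lifecycle event it receives.
public class LoggingFlowListener implements FlowListener {
  public void onStarting(Flow flow) {
    System.out.println("starting: " + flow.getName());
  }

  public void onStopping(Flow flow) {
    System.out.println("stopping: " + flow.getName());
  }

  public void onCompleted(Flow flow) {
    System.out.println("completed: " + flow.getName());
  }

  public boolean onThrowable(Flow flow, Throwable throwable) {
    System.err.println("failed: " + flow.getName() + " - " + throwable);
    return false; // false means the exception is rethrown to the caller
  }
}

// Usage: flow.addListener(new LoggingFlowListener()); flow.complete();
```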