Facebook published an academic paper and blog post detailing Torchnet, its new Lua-based open-source project centered around deep learning and built on the previously open-sourced Torch library.
Laurens van der Maaten of the Facebook Artificial Intelligence Research laboratory (FAIR) noted in an interview that Torchnet can be applied to tasks such as image recognition and natural language processing, and that its approach is similar to that of the Blocks and Fuel Python libraries for the Theano framework. He also noted:
It makes it really easy to, for instance, completely hide the costs for I/O [input/output], which is something that a lot of people need if you want to train a practical large-scale deep learning system.
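Concretely, that I/O hiding comes from Torchnet's parallel dataset iterator, which assembles batches in background threads while the model computes. The sketch below is adapted from the example in FAIR's paper, with a placeholder load function standing in for real disk I/O:

```lua
local tnt = require 'torchnet'

local iterator = tnt.ParallelDatasetIterator{
   nthread = 4,                                  -- number of background loader threads
   init    = function() require 'torchnet' end,  -- run once in each thread
   closure = function()
      -- each thread builds its own dataset; the load() closure is where
      -- expensive I/O (e.g. decoding images from disk) would normally go
      return tnt.BatchDataset{
         batchsize = 128,
         dataset = tnt.ListDataset{
            list = torch.range(1, 60000):long(),
            load = function(idx)
               return {
                  input  = torch.randn(784),                 -- placeholder sample
                  target = torch.LongTensor{(idx % 10) + 1}, -- placeholder label
               }
            end,
         },
      }
   end,
}

-- batches are prepared in the background while the loop body computes
for sample in iterator:run() do
   -- sample.input is a 128x784 batch; sample.target holds the labels
end
```

Because each worker thread evaluates the closure and loads samples independently, the loop consuming the iterator rarely blocks on reads.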
The abstractions provided by Torchnet are reportedly not limited to Torch and could also be applied to Caffe and TensorFlow, because Torchnet makes few assumptions about the underlying learning framework.
The Torch-7-based Torchnet reportedly provides reusable abstractions and boilerplate logic for machine-learning concerns such as asynchronous data loading and multi-GPU computation. Torch 7 is built in Lua and performs algebraic computations on CPUs using OpenMP/SSE and on GPUs via CUDA. According to FAIR, Torch 7 is one of the primary frameworks for research in deep machine learning. FAIR envisions Torchnet as a community-driven, plugin-based platform, and noted the following on the motivations and vision for the project:
The open source Torch already has a very active developer community that has created packages for optimization, manifold learning, metric learning, and neural networks, among other things. Torchnet builds on this, and it is designed to serve as a platform to which the research community can contribute, primarily via plugins that implement machine-learning experiments or tools...We envision Torchnet to become a community-owned platform that, next to the core implementation of Torchnet, provides a collection of subpackages in the same way that Torch does.
According to FAIR's paper, Torchnet provides five main abstractions that allow for efficient reuse and optimization of logic that would otherwise be custom code in any number of projects. The Dataset abstraction provides a size() function that returns the number of samples in the dataset and a get() function that returns individual samples. The DatasetIterator abstraction allows for looping over a dataset and manages asynchronous parallelism. The Engine abstraction implements the interaction between a model, a DatasetIterator, and a loss function, and provides both a training and a testing function. The Engine also provides hooks through which users can inject experiment-specific code, such as updating performance Meters; implementing the hooks as closures reportedly allows Torchnet to share logic between the code used for training and for testing models. The Meter abstraction typically implements two functions, add(output, target) and value(), and provides the ability to measure performance properties such as:
the time needed to perform a training epoch, the value of the loss function averaged over all examples, the area under the ROC curve of a binary classifier, the classification error of a multi-class classifier, the precision and recall of a retrieval model, or the normalized discounted cumulative gain of a ranking algorithm.
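The paper's MNIST logistic-regression example illustrates how these abstractions fit together. The condensed sketch below follows that example, with a small placeholder dataset in place of MNIST: an SGDEngine drives the model and loss over a DatasetIterator while hooks update two Meters.

```lua
require 'nn'
local tnt = require 'torchnet'

-- placeholder data: 1,000 random 784-dimensional "images" with fake labels
local iterator = tnt.DatasetIterator{
   dataset = tnt.BatchDataset{
      batchsize = 32,
      dataset = tnt.ListDataset{
         list = torch.range(1, 1000):long(),
         load = function(idx)
            return {
               input  = torch.randn(784),
               target = torch.LongTensor{(idx % 10) + 1},
            }
         end,
      },
   },
}

-- a simple logistic regressor and its loss function
local net       = nn.Sequential():add(nn.Linear(784, 10))
local criterion = nn.CrossEntropyCriterion()

-- the engine drives training; meters accumulate performance statistics
local engine    = tnt.SGDEngine()
local lossMeter = tnt.AverageValueMeter()          -- loss averaged over examples
local errMeter  = tnt.ClassErrorMeter{topk = {1}}  -- top-1 classification error

-- hooks are closures, so the same logic serves training and testing
engine.hooks.onStartEpoch = function(state)
   lossMeter:reset()
   errMeter:reset()
end
engine.hooks.onForwardCriterion = function(state)
   lossMeter:add(state.criterion.output)
   errMeter:add(state.network.output, state.sample.target)
end
engine.hooks.onEndEpoch = function(state)
   print(string.format('epoch %d: avg. loss %.4f; error %.2f%%',
      state.epoch, lossMeter:value(), errMeter:value{k = 1}))
end

engine:train{
   network   = net,
   criterion = criterion,
   iterator  = iterator,
   lr        = 0.2,
   maxepoch  = 3,
}
```

Per the paper, calling engine:test{network = net, criterion = criterion, iterator = testIterator} on a held-out iterator reuses the same hooks, which is how the closure-based design shares metering logic between training and testing.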
Lastly, the Log abstraction provides the ability to log an experiment's output as raw text or JSON.
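A rough sketch of the Log abstraction, following the usage shown in the Torchnet repository's documentation: a Log tracks a fixed set of keys, and view functions attached to its onFlush event decide how buffered values are written out (here as raw text; per the paper, JSON output is supported analogously).

```lua
local tnt = require 'torchnet'
local logtext = require 'torchnet.log.view.text'

local log = tnt.Log{
   keys = {"loss", "accuracy"},
   onFlush = {
      -- append both keys to a log file...
      logtext{filename = 'log.txt', keys = {"loss", "accuracy"},
              format = {"%10.5f", "%3.2f"}},
      -- ...and echo the loss to the console
      logtext{keys = {"loss"}},
   },
}

log:set{loss = 0.1, accuracy = 97.0}
log:flush()   -- write the buffered values through the attached views
```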