In a recent article, Alex Payne, organizer of the Emerging Languages Camp, provides insight into how the language landscape has changed in the last five years and how it might change in the future. InfoQ talked with him.
From his vantage point, Alex identifies three main themes that help tell the recent story of language evolution and suggest where languages are headed:
- The importance of tooling.
- Using virtual machines as an "implementation strategy".
- Language polyglotism.
As Alex recounts, when Go co-creator Rob Pike wrapped up his Emerging Languages talk in 2010, he was asked why he had shipped a language that "seemingly ignored the last 30+ years of programming language theory research." Sure enough, Go's conservative character has not prevented its wide adoption, says Alex. At the other extreme, Scala "has struggled with tooling from its inception," leading many Scala developers to "swim back to the familiar shores of Java" or to move on to Clojure, Go, and Rust.
How do you explain that a language's adoption does not follow from its feature set?
I guess a succinct way of putting this is, “it doesn’t matter how feature-rich your language is if programming in it is unapproachable.” A number of my industry friends work in Clojure and ClojureScript. Clojure has a reputation for being a thoroughly modern language: functional, with high-performance immutable data structures, support for “gradual typing” via `core.typed`, logic programming via `core.logic`, etc. It’s also notoriously difficult just to get a working Clojure development environment up and running, all the more so if you’re using ClojureScript. People have bundled up their Emacs configurations, written tutorials, offered project templates for the Leiningen build tool, developed IDE plugins, and more. Still, because the tooling around Clojure is largely community-provided, it’s a constant moving target, and last month’s build setup is unlikely to work today if you’re keeping up with the latest libraries.
By contrast, the Go team ships practically all the tooling necessary to use the language productively, leaving the easier and less error-prone work of integration with specific tools up to the community. This means that it’s easier to gain actual experience in Go. Clojure is both harder to get set up with *and* likely unfamiliar in terms of language semantics and programming paradigms. Clojure has more to offer the programmer, but it’s hard to get productive with.
Why is tooling a weak point for many otherwise advanced languages?
Certainly, the more complicated the language, the harder it is to develop tools for it. Tooling is also an afterthought for most language designers, though that seems to be changing. Really, I think it’s an issue of stewardship. As noted above, most languages leave their tools (editor support, package managers, linters, etc.) up to the community. Some third-party language tools will be good, and some will be bad; either way, they’re not held to any standard of quality or completeness by the language authors themselves. This lack of central planning also leads to poor coordination between the various projects necessary to provide a good developer experience.
Do we understand the value of tooling better today than we did five years ago?
I’m not sure. Microsoft and Apple have understood this perfectly well for many years now, and while the tools they provide aren’t perfect, they’ve clearly enabled a proliferation of software for their respective platforms.
This year's Emerging Languages Camp broadened its focus to include tooling. How did that work out?
Under the umbrella topic of "the future of programming," we encouraged presenters to submit talks about projects that improve the programming process at any level. We ended up with a number of tooling-related talks, including a Photoshop-like application for designing 3D shaders and a brilliant Eclipse plugin that offers code completion via dynamic analysis.
We’re not yet sure if we’ll maintain the same focus next year, but the event went well and attendees seemed to appreciate the tooling-oriented presentations.
What has been the value of using VMs as an “implementation strategy” for languages, as you put it, and why might this no longer fit the bill?
Back in 2008, I made a prediction that "VMs designed for high-level languages (ex: Java, Erlang, and JavaScript) were going to flourish as host platforms for higher-level languages." This prediction has been confirmed by the birth and success of languages such as Clojure, CoffeeScript, and Scala, but the VM as an implementation strategy is starting to show its limits. This can be seen in the diminishing interest in CoffeeScript, in Scala being set aside in favor of Java, and even in the Clojure community's periodic noises of longing for a port to LLVM.
If I’m being held to another prediction, it’s that the languages of five years from now will be untethered from a host VM, as Rust, Julia, and Go are. The low-power, high-connectivity Internet of Things world we’re moving towards leaves far less room for fat VMs.
You wrote that it is now almost common sense to assume a certain degree of polyglotism among professional developers. Why is this important?
As long as we have heterogeneous networked systems, we’ll have programming language polyglotism. Most software developed today interacts with a network. That typically entails a client/server architecture. Differing technologies have “won” the client and the server, respectively (with the tenuous exception of JavaScript). This necessitates some degree of polyglotism.
Even modern front-end web development can be a polyglot affair, meshing markup, templating, and styling languages with multiple dialects of JavaScript. There have been attempts to unify all this, but most developers seem content to work in several specialized languages.
Since 2010, the Emerging Languages Camp has brought together programming language creators, researchers, and enthusiasts who aim to advance the state of the art in programming languages. This year's conference took place on September 17, 2014, in St. Louis.