Joe Duffy on Uniqueness and Reference Immutability for Safe Parallelism

We interviewed Joe Duffy, author of Concurrent Programming on Windows, about his research into the use of type systems to ensure safe parallelism. This work was presented in the paper titled Uniqueness and Reference Immutability for Safe Parallelism. We asked for this interview because there seemed to be a lot of misconceptions about this research project.

InfoQ: In what ways do people misunderstand your research?

Joe: First, I want to say that I sincerely appreciate the community feedback and discussion. This is exactly why we published in the first place. Overall I would say the reception has been very good, and I look forward to my team publishing more in the years to come. It’s great to finally see some light after years of hard work.

That said, I do think there are some common misunderstandings. This is to be expected: the research is quite new, and with papers like these there often isn't much context set ahead of time.

  • "Implicit parallelism is impossible. You guys are fools for trying."

    This assumes that our primary goal was actually to enable implicit parallelism. As I explain on my blog, this was not the main driving force. It was a tantalizing possibility - one which I'm happy to say we have achieved in some limited circumstances - but the applications of the type system go far beyond this.

    For better or for worse, I am acutely aware of the many decades of work on auto-parallelization and the related auto-vectorization. I've worked on these and related problems for the better part of my career. And unfortunately, I've personally hit the same wall that many other researchers have. Traditional control-flow-driven programs written in a classic C style simply do not auto-parallelize well, especially compared to a hand-tuned parallel program. Of course, there are some opportunities and effective techniques, but they aren't widespread enough to really change the game.

    One goal of our work, however, is to enable programmers building large, complex systems - such as operating systems, web servers, and device drivers - to have their cake and eat it too. They can use age-old C-like constructs where needed for efficiency - like simple for loops, mutable arrays, and bit fields - and yet reap the benefits of safe concurrency, isolation, and immutability where profitable. These same for loops over mutable data structures can often be trivially rearranged to exploit data parallelism (see the sketch after this list). The key here is not that the restructuring is fully automated, but rather that the type system guarantees safety. Such refactoring simply cannot introduce races or deadlocks, by construction.

    An alternative reaction, by the way, is "Implicit parallelism is our savior! Moore's Law is back!" Though more positive, this response is equally wrong. Parallelism gets easier in this system, but it still isn’t free.

  • "This has all been done before: see Haskell, Rust, etc."

    Language design these days is different than it was 30 years ago. You are more likely to create a successful language by tastefully incorporating the many good ideas from the past, and adding unique value by delivering a nice, easy-to-use syntax and great frameworks and tools. Certainly I think we have some clever ideas in the paper, such as the simplicity of converting between isolated and immutable state, and how naturally this reflects the way many C# programs are written (e.g., with the "builder" pattern for immutable state). However, I very much appreciate the influence of prior art.

    I will be the first to admit that Haskell was a major inspiration for our work. The initial idea behind this work actually originated from me wondering what the equivalent of Haskell's pure functional approach with its state monad would look like if we turned it on its head. In other words, let’s try to embed pure functional programming inside an otherwise mutable, imperative language. I was working on the CLR and .NET Frameworks at the time, so C# was a natural starting point. I've always been open about this. In fact, over the years I've discussed the ideas regularly with some of Haskell's original designers.

    And I love what Rust is doing. It's absolutely wonderful to see their work progressing all in the open.

    That said, just as Haskell and Rust have many ideas that our language doesn't, we have some that they don't. In fact, everyone stands on the shoulders of giants. It's the nature of computer science, and of science generally. There is no shame in this - it's a beautiful thing.

  • "Nobody needs mutable, imperative code. Just use functional programming."

    I agree that some teams and problem spaces are amenable to pure functional programming. This is particularly true for scientific programs. However, if we're talking about a functional language other than Haskell, this reaction completely misses a major point of the paper.

    In LISP, Scheme, ML, OCaml, F#, etc. you are "discouraged" from mutating state. But you can do it if you really want. And this mutation, though often visible in the annotations on data structures, is invisible to any overall analysis of the program. This includes interprocedural contracts, like whether a particular method can mutate state and, if so, what state in particular. Simply put, I cannot look at the argument to a map function and say for sure whether it would be free of race conditions if I ran it in parallel. With Haskell, of course, you can, thanks to the wonder that is monads (setting aside unsafePerformIO).

    Of course, in C-style programming languages like C, C++, Java, and C#, it's way worse. You are not encouraged to write immutable and side-effect-free code, and there is obviously no type system support when you decide to do so. It's a complete mess to write any serious parallel program. I guess 'const' offers a glimmer of hope for C++ programmers, but it is used inconsistently and can easily be cast away.

    That's a key difference in our system. We take the approach of functional languages and encourage immutability and side-effect freedom, but we still allow mutation, whether it be for convenience or efficiency. And when you mutate, the type system knows. There is absolutely no way to cheat the type system. We have our own equivalent of unsafePerformIO; however, we do not allow 3rd parties to use it. Instead, we build safe abstractions like memoization and caches, and encapsulate the unsafety.

    The net result is that you can have a truly safe parallel programming experience. And it's not just parallelism. Erlang suffers from races "in the large" due to message passing interleaving. The same type system techniques can be used for safe message passing also.

    So the point that many people miss is that you can have the best of imperative programming, and the best of functional programming, and they can work together quite seamlessly.
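
To make the earlier point about loop restructuring concrete, here is a minimal sketch in standard C# - not the research language - where .NET's Parallel.For stands in for the safe rearrangement. In today's C#, nothing in the type system proves this refactoring is race-free; the programmer must check by hand. In the research type system, the loop body's freedom from shared-state side effects would be verified, making the rearrangement safe by construction.

    using System;
    using System.Threading.Tasks;

    class LoopSketch
    {
        static void Main()
        {
            var input = new double[1_000_000];
            var output = new double[input.Length];

            // Classic C-style loop over mutable arrays. Each iteration reads
            // shared input and writes only its own output slot.
            for (int i = 0; i < input.Length; i++)
                output[i] = Math.Sqrt(input[i]);

            // The same loop trivially rearranged to exploit data parallelism.
            // Standard C# cannot verify that the body touches no shared
            // mutable state; the research type system can.
            Parallel.For(0, input.Length, i => output[i] = Math.Sqrt(input[i]));
        }
    }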

InfoQ: You say you have a restricted version of unsafePerformIO. Can you go into more details about how you use it?

Joe: The most important point is that we don't give it to 3rd parties. Just like we don't give the equivalent of C# 'unsafe' to 3rd parties.

Instead, we build the system using this capability. In a few nooks and crannies, we need to "cheat," but we still respect some golden rules of the system when we do. I think Tony Hoare once called these "benevolent side-effects." That is, there are side-effects happening, but they are happening at a layer where they aren't actually detectable by an external observer. They are benevolent because they are done in good nature, and are not meant to violate the overall principles of the system.

For example, some of our parallelism libraries alias state across parallel workers, technically breaking the language's rules for isolation and side-effects. But we build these APIs to abide by certain rules: no mutable state is ever visible in parallel simultaneously, isolated state remains isolated, and so on. Another large class of abstraction is memoization, where we offer standard library facilities for lazy evaluation, caches, and memoized state. And finally, we have found that "write only" abstractions like events and perf-counters are easier to build using some of these techniques.
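
As an illustration of the memoization case, here is a minimal sketch in standard C# (ConcurrentDictionary stands in for the restricted unsafe capability described above; the Memoizer class is our own illustration, not from the research language). Externally the abstraction behaves like a pure function; internally it mutates a cache - a benevolent side-effect no caller can observe, provided the wrapped function is itself pure.

    using System;
    using System.Collections.Concurrent;

    public sealed class Memoizer<TArg, TResult> where TArg : notnull
    {
        private readonly ConcurrentDictionary<TArg, TResult> _cache = new();
        private readonly Func<TArg, TResult> _compute;

        public Memoizer(Func<TArg, TResult> compute) => _compute = compute;

        // Same argument, same result, no effects visible to the caller: the
        // internal cache mutation is encapsulated, so observers see purity.
        public TResult Invoke(TArg arg) => _cache.GetOrAdd(arg, _compute);
    }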

Our theory is that there are a finite number of patterns we need to support. Certainly over time, we have found the uses shrinking, rather than growing. Clearly it is critical to verify that these are done safely, since an error here amounts to opening up a type-safety hole just as with C# 'unsafe'. We rigorously review each and every one of them, and we have even done some work to formalize what it means to be “safe,” given that typechecking is basically nonexistent for such cases. I'd very much like to invest in verification research here down the road. Verifying these ad-hoc techniques is nowhere near as easy, elegant, or rock-solid as verifying a transitively sound type system based on subtyping.

InfoQ: A lot of this research seems like it would dove-tail nicely with the Code Contracts project, especially with aspects such as the Pure attribute. Have you looked at that at all?

Joe: That’s a great observation.

Our language does offer first-class contracts along the lines of what Eiffel and Spec# provide. And the language uses its inherent knowledge of side-effects to ensure that such contracts are side-effect-free.

Back in the early days when I was building this type system, I had originally toyed with something very much like .NET's [Pure] attribute. But once you start to write reasonably interesting code, you quickly run into limitations. My favorite example is enumeration. If I have an immutable List, then the enumerator I get back needs to be mutable - since I must MoveNext on it - and yet it somehow needs to know to project out immutable elements. Without a complete type system, you are left with something that feels very "bolted on" and limiting.
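
A rough illustration of the enumeration problem, in standard C# (the type names here are hypothetical). The enumerator must be mutable - MoveNext advances its position - while handing out only immutable views of the elements; a single [Pure] annotation cannot express that split, but a full permission-aware type system can.

    public sealed class ImmutableIntList
    {
        private readonly int[] _items;
        public ImmutableIntList(int[] items) => _items = (int[])items.Clone();

        public Enumerator GetEnumerator() => new Enumerator(_items);

        public struct Enumerator
        {
            private readonly int[] _items;
            private int _index;  // mutable state: MoveNext must advance this

            internal Enumerator(int[] items) { _items = items; _index = -1; }

            public bool MoveNext() => ++_index < _items.Length;

            // Elements come back as values over read-only storage, so the
            // collection stays immutable even as the cursor mutates.
            public int Current => _items[_index];
        }
    }

Because GetEnumerator follows C#'s enumerator pattern, foreach works over ImmutableIntList directly, yet the only mutation anywhere is the cursor's private index.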

InfoQ: Are there any aspects of this project that you think are especially rare or unique?

Joe Duffy: I'm surprised that we don't see more of the marriage of the best of functional with the best of imperative programming, actually. It's true that with languages like F# - and indeed most major functional languages dating back 30 years - you've got the ability to mutate state and, sometimes, even employ familiar C-style control flow constructs. But it's rare that the mutation is ever tracked in the type system in a way that allows symbolic analysis and type-driven behavioral differences.

Haskell of course achieves this thanks to monads, but I fear there are other reasons most mainstream developers shy away from Haskell - many of them surface-level, like syntax.

The fact that in our language we can literally write methods that overload on side-effect freedom is game-changing, I think. I can have overloads of LINQ operators where one set runs in parallel automatically, simply by virtue of the lack of side-effects. And it turns out that side-effect-free queries are rather common, given the legacy of SQL and the fact that anything else is quite difficult to reason about due to the deferred evaluation and lack of evaluation order guarantees. So it's not just about parallelism, but also guiding programmers to use safe patterns. And yet, we can still have those rare overloads for developers who need to do the occasional mutating operation inside their LINQ query.

These patterns pop up all over the place in framework design. Just think about sorting comparators.
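
Here is a minimal sketch of what overloading on side-effect freedom might look like, in standard C#. Real C# cannot overload on purity, so the hypothetical PureFunc delegate stands in for what the research language's type system infers automatically: the pure overload is free to run the query in parallel, while the ordinary one preserves sequential, in-order evaluation.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical marker: in the research language, purity is inferred by
    // the compiler rather than declared through a separate delegate type.
    public delegate TResult PureFunc<in T, out TResult>(T arg);

    public static class QueryOps
    {
        // Selector may have side effects: keep sequential, ordered evaluation.
        public static IEnumerable<TResult> Select<T, TResult>(
            IEnumerable<T> source, Func<T, TResult> selector)
            => source.Select(selector);

        // Selector is known side-effect-free: safe to parallelize automatically.
        public static IEnumerable<TResult> Select<T, TResult>(
            IEnumerable<T> source, PureFunc<T, TResult> selector)
            => source.AsParallel().AsOrdered().Select(x => selector(x));
    }

In real C#, a lambda converts to both delegate types, so call sites would have to disambiguate explicitly (e.g., wrapping a method in new PureFunc<int, int>(...)); in the research language the compiler picks the overload from the inferred effects, which is precisely the point.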

I think the most interesting aspect of our project is that we've built an entire operating system using the language, doing simultaneous "co-innovation" all the way. Too often, languages are created in a vacuum, and it takes a long time to find out what works and to fine-tune things. The iterations take a long time when you need to ship a release to customers, wait for them to build stuff, and only then get the feedback; we find out almost instantly. After working this way for several years now, I simply can't imagine going back. This doesn't just apply to the language, but to the entire development platform.

InfoQ: You built an entire operating system? That seems like a lot of work just to prove the effectiveness of a type system. What inspired you and your colleagues to undertake that project?

Joe: To be fair, the OS came first. We simply realized over time that, to truly achieve our goals, we needed to innovate in the language. Co-development of an entire system is a powerful thing.

InfoQ: Aside from the idea of separating mutable state from other code, are there any other aspects of Haskell that you are looking at incorporating?

Joe: Haskell is a breeding ground of incredibly inspiring type systems work.

It's hard to point at any one thing that's been as inspiring as the initial state monad idea. However, it's fair to say that I've read hundreds of papers about Haskell during the time we were developing our language, and that nearly every paper had an impact on me in some way.

I am wholly jealous of Haskell's gorgeous higher-order parametric polymorphism, including type kinds, and often fantasize about ways to incorporate those ideas into C#'s generics type system.

There have also been some interesting papers over the years about how to achieve what looks like mutable statics, but in a monads- and capability-friendly way. These helped the creative process as we worked to incorporate more immutability and capability-based objects in place of mutable statics.

The various error models - and there are quite a few, ranging from monadic errors to simple syntax - are also quite interesting, particularly in the areas of composition and programming model.

Finally, I have also followed all of the Data Parallel Haskell work very closely for years - particularly some of the recent work around GPGPU - and we have found many kernels of great ideas in Concurrent Haskell that have influenced how we do message passing. I still remember presenting my team’s work on parallel LINQ at a conference in Nice in 2007 alongside some Haskell guys, and suddenly realizing we were working on the same thing, just in two very different type systems. We went out for beers that night, and I actually think that’s when the ideas really started germinating. Ever since then, I’ve been trying to marry the best of both worlds.

InfoQ: You said that “Erlang suffers from races "in the large" due to message passing interleaving. The same type system techniques can be used for safe message passing also.” I'm not sure what you meant by this. Are you saying that this research can address the interleaving problem? If so, can you expand a bit on it?

Joe: It's funny. Most people define "race condition" very narrowly, covering just the traditional thread-based instruction-level interleaving you see in classic multithreaded programs. The reality, however, is that a race condition can happen at much coarser-grained levels. For example, take two processes that are reading from and writing to the same file. Clearly the outcome of this situation depends on timing - often the underlying OS scheduler - and can lead to subtle interactions.

It turns out that most shared resources like this have a concurrency model. Files can be locked for exclusive access; databases use transactions; and so on.

But what happens when these processes start sending messages to one another? Sure, the processes may be isolated (a.k.a., "shared nothing"), but the interactions between them very much depend on timing and scheduling. In other words, there are races! There have been interesting papers over the years in the context of Erlang that talk about this.
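
A small sketch of such a "race in the large," in standard C# (System.Threading.Channels standing in for Erlang-style mailboxes; the scenario is ours, for illustration). The two producers share no memory at all, yet the order in which their messages land in the mailbox depends entirely on scheduling, so the consumer's observed history can differ from run to run.

    using System;
    using System.Threading.Channels;
    using System.Threading.Tasks;

    class CoarseGrainedRace
    {
        static async Task Main()
        {
            var mailbox = Channel.CreateUnbounded<string>();

            // Two isolated "processes": shared-nothing, message passing only.
            var a = Task.Run(() => mailbox.Writer.TryWrite("deposit 100"));
            var b = Task.Run(() => mailbox.Writer.TryWrite("withdraw 100"));
            await Task.WhenAll(a, b);
            mailbox.Writer.Complete();

            // Which message arrives first is up to the scheduler: a race at
            // message granularity rather than instruction granularity.
            await foreach (var msg in mailbox.Reader.ReadAllAsync())
                Console.WriteLine(msg);
        }
    }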

I can't disclose all of the details, but I can give a hint. Thinking about state management at coarse-granularity in the same light as we treat state management at fine-granularity, to enable race-free task and data parallelism, can be quite illuminating and clarifying in the context of message passing too.

About the Interviewee

Joe Duffy is lead architect on an OS incubation project at Microsoft, where he is responsible for its developer platform, programming language, and concurrency programming model. Prior to this role, he founded the Parallel Extensions to .NET effort, and served as its architect. Joe is an author and frequent speaker, most recently writing Concurrent Programming on Windows (Addison-Wesley), and currently working on Notation and Thought, a fun historical journey through the world of programming languages. When not working or writing, he can be found traveling with his wife, Kim, writing music, guzzling wine, or reading about the history of mathematics. You can read more of his thoughts on his blog.
