First, thanks for having me here. Erik Meijer invited me to give a talk here, and I've known Erik from many years back, when we were involved in the early golden years of functional programming. I've been at Yale for 26 years now, things are going very well, and I still do Haskell and functional programming research, and that's why I'm here.
Yes, the fact that I wrote the book that way means that that was what I thought would be the best way to teach those concepts. The reason for teaching monads later rather than earlier is that I figured most people reading the book would already have some experience with imperative programming and would expect to see some notion of imperative computation somewhere. I actually introduced that very early, in the 3rd chapter or so out of twenty-some chapters, but I just call them actions. I refer to them as IO actions, and people who have familiarity with imperative programming have no trouble with that.
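For readers who haven't seen one, here is a minimal IO action in do-notation (my illustration, not an excerpt from the book); to an imperative programmer it reads like ordinary sequential code:

```haskell
-- A minimal IO action: reads a name and greets the user.
-- It reads like imperative code, even though under the hood
-- it is built from monadic operations.
main :: IO ()
main = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("Hello, " ++ name ++ "!")
```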
What I didn't introduce early was the notion of formally what a monad is, and of course to do that you also have to know what type classes are, to really do it in the most generic sense. I didn't even use the word "monad" early in the book, because it's a scary word, and there is a running joke in the functional programming community that we should have called them "warm fuzzy things" from the very beginning instead of monads. Maybe that's right, I don't know.
That's one thing, but in the case of type classes, I actually think I could have introduced the notion of the context of a type class earlier than I did, before the formal introduction of what a type class is. In other words, the idea of a type signature that carries a context, or constraint, relating to a type class can be introduced before you introduce what a class declaration is, what instance declarations are, and so forth.
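For example, a constrained type signature can be read on its own, long before class and instance declarations are explained. A minimal sketch (my example, not the book's):

```haskell
import Prelude hiding (elem)

-- The context "Eq a =>" is the constraint: elem works for any
-- type a whose values can be compared for equality. A reader can
-- make sense of such signatures well before seeing the class and
-- instance declarations that lie behind Eq.
elem :: Eq a => a -> [a] -> Bool
elem _ []     = False
elem x (y:ys) = x == y || elem x ys
```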
I think it might have been a good idea to do that, because one of the complaints I hear from people, not just readers of my book but people learning Haskell generally, is that you get type errors in your programs, and the error messages are naturally going to refer to type classes and so on. If you don't understand those error messages, then you're out of luck. I would probably do that a little differently if I rewrote the book. Going back to the original question: yes, that was all done on purpose.
I think it's a great success. I don't think a language's success has to be measured just by how widely the language itself is used or accepted; its impact on other languages and on programming systems in general matters too, and there Haskell's impact is great. There are lots of examples of that kind of influence on language design, including new languages created from scratch recently, like Scala. The other thing, which I hear over and over again (in fact I just heard it over lunch), is someone saying "I've been programming in Haskell; I don't use it in my everyday programming, but I love the language. I don't use it at work, but I understand it and I appreciate the ideas, and I'm finding that it's affecting how I program in Java or how I program in C#", and that's pretty cool.
I feel good about Haskell. Especially in recent years, I think it's safe to say that it's a success. Even if it were to die completely at this point, its influence has already manifested itself. I'm not saying we are going to quit and go home now, but it is a success. I think so.
It can definitely be used in the mainstream, and it is being used in the mainstream, more and more. We finally have the libraries, the implementations, and the programming environments to support it properly there. That was an impediment for a long time, but people like Simon Peyton Jones and his group at Microsoft have done an unbelievable job creating the tools that people need to use the language in industry. Performance used to be another big worry, but the compilers are getting better and better as well, so I really see no reason not to use Haskell, to be honest. We are starting to see more and more people use it, so who knows?
5. Can you tell us more about higher-order programming?
Can I tell you more about higher-order programming? It is a frame of mind, a way of thinking, a way of understanding that everything is first class: when you do something at one level, you can abstract it away to a higher level. Once you get that idea, it pops up everywhere. If I were suddenly unable to use Haskell and forced to use, say, Java or C# (although those languages are now moving in a more functional direction), it wouldn't be so much the types I would miss, or the lazy evaluation; I could just absolutely not survive without higher-order functions.
It's such a critical abstraction in computation, and it leads to such great ways of thinking about problems and writing programs, that I don't see how people get by without higher-order functions. I could do without the other features, though I would miss them, but not higher-order functions. When you say "higher-order programming", of course, that can mean other things as well. There are higher-order types and so forth, and Haskell has actually gotten a little bit crazy lately with its type system and higher-orderedness, if you will.
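To make the point concrete, here is the canonical illustration (my sketch): two first-order definitions that share a shape, and the single higher-order function that abstracts it:

```haskell
-- Two first-order functions with exactly the same shape...
sumList :: [Int] -> Int
sumList []     = 0
sumList (x:xs) = x + sumList xs

prodList :: [Int] -> Int
prodList []     = 1
prodList (x:xs) = x * prodList xs

-- ...and the higher-order function that abstracts the pattern
-- once and for all (the Prelude calls this foldr):
fold :: (a -> b -> b) -> b -> [a] -> b
fold _ z []     = z
fold f z (x:xs) = f x (fold f z xs)

sumList', prodList' :: [Int] -> Int
sumList'  = fold (+) 0
prodList' = fold (*) 1
```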
We are only recently discovering the power of that, and I don't think we've seen the end of the story there. Higher-order functions are pretty straightforward in the sense that once you have lambda, you can abstract away everything. The type system features aren't quite as easy, because of decidability issues and other things. I don't think we have the design quite right yet; there are still extensions and ideas being developed. Nevertheless, as we have moved into higher-order types, their power is becoming evident there too. It's clearly something we want and are going to continue to develop, and who knows where it will end up?
Animation, in the graphics sense, is one instance of what we call functional reactive programming in general. The idea is due to Conal Elliott originally, and when he brought it over to Haskell, we started to collaborate a bit and developed Fran, a functional reactive animation language. At Yale, we also started looking at other applications of the idea, because it is a very general one, and Conal knew this at the very outset, although his main application originally was animation.
We took the idea and went into robotics, and now we are doing signal processing and sound synthesis; we've also used it for parallel programming. It's a pretty powerful and general idea. It basically begins with this: instead of thinking of the values of variables in a program as static, so that a given variable x has the same value throughout this iteration of the loop or this invocation of the function, you think of it as changing at every infinitesimal moment in time, so it is truly a continuous value.
That's the abstraction that we tried to present. Of course, that's not how we implement it underneath, but when you do that, so many things fall naturally into the paradigm. To give a quick example besides animation: an image is normally thought of as a static value, but a time-varying image is an animation. There are tons of other examples, and one we have been using very recently is graphical user interfaces, in particular for computer music applications doing sound synthesis. You very typically have knobs and sliders that represent values controlling some sound.
Each of those values is just one of these signals, as we call them, or behaviors: a time-varying quantity that is captured very elegantly in this framework. When you lift everything up, every operation you do, whether it be addition and the other arithmetic operations or stateful operations like integration and differentiation, you get a very different style of programming. The real challenge, since the whole world isn't continuous, is how to integrate that with the things that happen at discrete moments in time. You do click a mouse, you do press a key; how is all that integrated?
It turns out there is a very elegant way to do it, which is to introduce a notion of a discrete event, and to think of these events as streams: just as we had continuously changing values, we now have streams of potential event occurrences. You develop operators that allow you to mediate between the discrete and the continuous, and you have what we call switchers, which, on the basis of an event, cause continuous behaviors to change. It works out very nicely and yields a thoroughly higher-order style of programming.
To implement it in Haskell, you absolutely depend on things like higher-order functions, not just to implement it but to conceptualize it. It's the right way to think about it, and it works out very nicely.
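For the curious, here is a denotational sketch of that model in Haskell. It follows the classic Fran design (behaviors as functions of time, events as occurrence streams), but it is only a conceptual model; Fran's actual implementation uses a more efficient representation:

```haskell
-- A denotational sketch of the classic FRP model, not an
-- implementation. Assumes finite, time-ordered event streams.
type Time = Double

-- A behavior is conceptually a value at every point in time.
newtype Behavior a = Behavior (Time -> a)

-- An event is a (time-ordered) stream of occurrences.
newtype Event a = Event [(Time, a)]

-- "Lifting" turns ordinary values and functions into
-- pointwise operations on behaviors:
lift0 :: a -> Behavior a
lift0 x = Behavior (const x)

lift1 :: (a -> b) -> Behavior a -> Behavior b
lift1 f (Behavior b) = Behavior (f . b)

lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
lift2 f (Behavior b1) (Behavior b2) = Behavior (\t -> f (b1 t) (b2 t))

-- Lifted arithmetic: time-varying numbers behave like numbers.
instance Num a => Num (Behavior a) where
  (+)         = lift2 (+)
  (*)         = lift2 (*)
  (-)         = lift2 (-)
  negate      = lift1 negate
  abs         = lift1 abs
  signum      = lift1 signum
  fromInteger = lift0 . fromInteger

-- A switcher: act like the initial behavior until the event
-- occurs, then act like the behavior the occurrence carries.
switcher :: Behavior a -> Event (Behavior a) -> Behavior a
switcher (Behavior b0) (Event occs) = Behavior $ \t ->
  case [bh | (te, bh) <- occs, te <= t] of
    []  -> b0 t
    bhs -> let Behavior b = last bhs in b t
```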
Yes, I think so. There has been a lot of recent work in that area. I'm not nearly enough of an expert to comment very deeply on the issue, but all the same concepts we were talking about fit very nicely into this model, including the fact that you want web applications to be stateless; the idea of a continuation-based notion of a web application also falls out very naturally.
As I said, people are still developing those sorts of things. One of the areas I find interesting there is not just web applications but Internet transactions in general, where you have overlapping, very timing-oriented sequences of events; things may happen concurrently, and something might time out, or something might happen that causes you to want to cancel other things that have been going on. That very quickly introduces another important thing going on in the Haskell community, and that is concurrency.
One of the pleasant surprises that came out of dealing with all the problems of implementing higher-order functions is that, by the time you have all those mechanisms in place, adding concurrency turns out to be trivial. Just to deal with lazy evaluation and higher-order functions, you have to have a way to encapsulate computations: suspensions, delays, whatever you want to call them; closures. Once you have that mechanism in place, concurrency is just another form of the same idea. Simon Peyton Jones and his group have done a great job providing a thread library for Haskell that makes that kind of thing really easy.
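As a taste of how lightweight this is in practice, here is a minimal sketch using GHC's Control.Concurrent (forkIO and MVar are the real primitives; the workload is made up):

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Fork a lightweight thread that does some work and hands its
-- result back through an MVar (a synchronizing mailbox).
main :: IO ()
main = do
  result <- newEmptyMVar
  _ <- forkIO $ do
    threadDelay 100000                       -- pretend to work (0.1 s)
    putMVar result (sum [1 .. 1000000 :: Integer])
  putStrLn "Main thread keeps going while the worker runs..."
  answer <- takeMVar result                  -- block until the worker is done
  print answer
```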
8. What is the difference in abstraction power between functional programming and object-oriented programming?
They are very different. There is a lot of baggage that comes with OOP that doesn't really exist in the functional world. It gets a little confusing in that Haskell's type classes resemble classes in the object-oriented sense, but there is no state associated with them. I think one of the problems with OOP languages, and this is just my personal opinion, is that they conflate several different kinds of issues; I don't think the perfect OOP language has been designed yet.
They too easily conflate issues of state, encapsulation, objects, and inheritance, and you therefore see objects in languages such as Java being used in lots of different ways. The fact that they are being used for really different conceptual ideas suggests that maybe something isn't quite right there. Again, that's just my opinion. And I don't think that learning object-oriented programming is easy; in fact, I've taught Java and C# enough at Yale to know that it's not easy, and I know where students get confused.
There is a trend recently in teaching these languages to de-emphasize the object-oriented aspects until later in the semester, at least, so that students can grasp the basic ideas of computation first. Obviously there are some good ideas in object-oriented programming as well, and I don't mean to minimize that. To achieve certain kinds of effects, such as inheritance, in a functional language, one has to work a little bit harder. You can simulate it with higher-order functions, and I have done that; I do it in my computer music library Haskore, for example.
It could perhaps be expressed more easily in an object-oriented language. That isn't to say I'd be ready to switch over to object-oriented languages, but there are some good things there.
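One common way to simulate inheritance with higher-order functions is to represent an "object" as a record of functions and a "subclass" as a function that takes the parent and overrides some fields. The names below (Player, defaultPlayer, loudPlayer) are hypothetical, loosely inspired by the idea rather than taken from Haskore's actual code:

```haskell
-- An "object" is a record of functions and data...
data Player = Player
  { playNote :: Int -> String
  , volume   :: Int
  }

-- ...a "base class" is a default record...
defaultPlayer :: Player
defaultPlayer = Player
  { playNote = \p -> "note at pitch " ++ show p
  , volume   = 80
  }

-- ...and a "subclass" is a function that inherits everything
-- from its parent but overrides selected methods:
loudPlayer :: Player -> Player
loudPlayer parent = parent
  { playNote = \p -> playNote parent p ++ " (loud!)"
  , volume   = 127
  }

main :: IO ()
main = putStrLn (playNote (loudPlayer defaultPlayer) 60)
```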
Certainly, they can be used wherever they are appropriate. One of the challenges is understanding monads well enough to know where their use is appropriate and where it isn't, and being able to teach people the skills to do that, getting them to understand what a monad really is. Unfortunately, a lot of people pretty much equate monads with what is known as the state monad, a particular kind of monad, because it relates most strongly to IO and imperative computation. But the state monad is just one kind of monad.
Getting a good handle on what the abstract notion of a monad really is, I think, is a challenge. That said, I do think that monads should be understood and can have an impact in helping people understand their software, but there are things even beyond monads. The really important thing to understand is that a monad captures, in some sense, an abstract kind of computation, one that can be described very formally in terms of the monadic operators and the monadic laws, but it's not the only kind of computation.
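Concretely, the interface in question is small. This is essentially the Monad class of the Haskell 98 Prelude, reproduced here with its laws as comments:

```haskell
import Prelude hiding (Monad, return, (>>=))

-- The abstract interface: any type constructor m supporting
-- these two operators...
class Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b

-- ...is a monad, provided the three monad laws hold:
--   return x >>= f   ==  f x                          (left identity)
--   m >>= return     ==  m                            (right identity)
--   (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)      (associativity)
```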
For example, a more recent thing that has become popular in certain contexts is something known as arrows, an idea due to John Hughes. Arrows can be seen as a generalization of monads: you can encode any monad as an arrow, but not the other way around. Haskell now has arrow syntax in addition to the monad syntax. So here we go: we have yet another abstract notion of computation. We need to understand it; we need to understand when it's appropriate to use a monad and when it's appropriate to use an arrow.
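The encoding he mentions is standard: Control.Arrow provides it under the name Kleisli. A self-contained sketch (Kleisli and composeK are redefined locally; only (>=>) comes from the library):

```haskell
import Control.Monad ((>=>))

-- The standard embedding of a monad into the arrow world:
-- an arrow from a to b is a function a -> m b.
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

-- Arrow composition of Kleisli arrows is just monadic
-- (Kleisli) composition, so every monad yields an arrow:
composeK :: Monad m => Kleisli m a b -> Kleisli m b c -> Kleisli m a c
composeK (Kleisli f) (Kleisli g) = Kleisli (f >=> g)

-- For example, two IO steps composed into a single arrow:
echo :: Kleisli IO () ()
echo = Kleisli (const getLine) `composeK` Kleisli putStrLn
```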
Of course, the question naturally arises: "Is there something else?" There are other things. Applicative functors are an example of something moving in the other direction: an even simpler concept than monads that is more appropriate in certain contexts. In certain situations one's tendency might be "Oh, I'm going to use a monad!" when it turns out an applicative functor would have done just fine. The applicative functor has a simpler set of algebraic laws and a simpler set of operators; it's easier to use in some sense, so it would make more sense to just use that. Why use a sledgehammer to crack an egg?
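A small sketch of that point (my example): combining two independent Maybe values needs only the Applicative operators, even though the monadic version also works:

```haskell
import Control.Applicative ((<$>), (<*>))

-- Neither computation's result is needed to decide what the
-- other does, so the simpler Applicative interface suffices.
addFields :: Maybe Int -> Maybe Int -> Maybe Int
addFields mx my = (+) <$> mx <*> my

-- The monadic version works too, but reaches for a heavier tool:
addFields' :: Maybe Int -> Maybe Int -> Maybe Int
addFields' mx my = mx >>= \x -> my >>= \y -> return (x + y)

main :: IO ()
main = print (addFields (Just 2) (Just 3))   -- prints: Just 5
```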
At the same time, someone may have an application that really demands an arrow, and instead they try to force a monad into it, and they end up with something that either doesn't work or can't express what they want. As for coming to grips with all that, the functional programming community itself is still sorting out the details. Papers are being published pretty much non-stop these days on those kinds of ideas, and they need to be sorted out.
In the meantime, that doesn't mean the mainstream shouldn't get excited about monads. Maybe it's not as scary a word anymore, and people can see that good things have happened in other contexts, and they may try to use monads in new contexts, which is great. That should happen. What I'm saying is that there is even more out there. Will all that stuff eventually end up in the mainstream? I don't know, but it's all exciting stuff for a programming language researcher such as myself.
I haven't had huge amounts of experience doing that, but I have an example that I talked about earlier today. I have worked with musicians, for example, using our computer music DSLs Haskore and HasSound, and they have loved it, at least in the circumstances where I've been there to observe and perhaps hand-hold a little bit. It's been a very positive experience.
I would say that's the biggest area where I've actually worked with complete novices. In most of the other areas I've worked in, take robotics: you are not going to find a roboticist who has never programmed before. For example, Greg Hager and his research group (Greg is now at Johns Hopkins; he was at Yale): when he was at Yale, we worked with him on a DSL called Frob, for controlling robots using the functional reactive programming paradigm that I was describing earlier.
Up to that point he had done all his programming in C and C++. He'd never programmed in Haskell before, so he was using Haskell for the first time in the context of this embedded DSL called Frob. He liked it quite a bit, as did a couple of his students, and they continued working with Frob even after they left Yale. It wasn't just a one-time down-the-hallway collaboration; they found it to be a productive way to program.
To be honest, I think we need to have more of those kinds of experiences with non-programmers, and certainly with non-functional programmers, and see how it goes. In the context of music, one of the things I'm doing right now, which I think is going to be a challenge but is something we need to see more of, is writing a book on a subject matter that is not programming. Along the way it may teach programming, but the main purpose of the book is to teach some other subject; in my case, computer music.
I'm actually taking my previous textbook, The Haskell School of Expression, and rewriting it in the context of computer music, introducing computer music ideas very thoroughly. Doing that means potentially lots of prose about things that have nothing to do with Haskell, while along the way teaching the language thoroughly, not just in a superficial way, but really getting readers to understand the power of functional programming, so that once they understand Haskell and the computer music ideas, they can really do some cool things. You don't see that happening very often. We could do more of it if we spent the time and energy. We'll see how this effort pans out.
I feel very good about Haskell, first of all. Its recent success is a pleasant surprise, and a satisfying one after all these years of occupying a niche as a research language without getting a lot of attention in the mainstream. I wouldn't be sitting here right now, having this interview, if it hadn't gotten the recent attention that it has. That said, even if it hadn't gotten that attention and I wasn't sitting here, I don't think I would be too upset, because I feel good about the language and I use it all the time. It's still my favorite language; if I'm going to program something, that's my language of choice.
As we discussed earlier, I feel great about the fact that Haskell has influenced a lot of other languages, but there are a lot of cool languages out there and a lot of cool frameworks that are very powerful. The best compliment is for someone to mimic your work, and people are stealing ideas from Haskell, which is great; but there is no reason we can't steal ideas from Ruby on Rails or whatever new technology comes along and really has an impact on the community. I think Scala is a really cool language.
We'll see more of those kinds of things coming along. What's great right now is that everybody is borrowing everybody else's ideas, and there is a lot of symbiotic development going on. The whole programming language enterprise is very interesting. I used to have a lot of funding from DARPA, and there was a time when DARPA designed the Ada programming language. The purpose of Ada was to be the end-all and be-all of programming languages: that was supposed to be it, no more languages after Ada. That didn't work.
Then DARPA had a program where they were going to try to standardize on a prototyping language. Again: "Let's all get together and design one language, and that will be the end of it. We're done!" That didn't work either. Over my 26 years I've heard numerous times from DARPA and the other funding agencies that "programming language research is dead" or "compiler research is dead; we're done!" It's not done. It keeps evolving, and it keeps getting, if anything, more exciting. There is some great stuff going on these days in lots of different directions, not just Haskell, and that's all good stuff.