Transcript
Cormack: Welcome to the modern compilation targets track. I'm here to talk about the modern platform and why languages are so important to everything we do now: to highlight some of the things that are happening with compilers and languages, what we're building and what you can build, and to give you a flavor of why things are changing. What's going on, why are we running this track, and what are we going to be working on and building in the next few years?
I'm an engineer at Docker in Cambridge, UK. I've been speaking and track hosting at QCon for quite a lot of years now. I have broad interests, and I've been working a lot in security, so I have a bit of a security bent: system software, applications and what's going on with applications, containers, and things like that. You can find me everywhere as Justin Cormack, easy to find if you remember my name. If you don't know Cambridge, this is just down the road from where I live. Basically, Cambridge is a tech village. It is a fun place. This is where, just over 50 years ago, pulsars were discovered, not using these telescopes. It's an interesting mix of the countryside and the high tech, which is always fun.
For a while, I was running the operating systems track at QCon. I did the language track in New York. Now I'm doing the modern compilation targets track. Why have I moved this way? I don't find it's really a big change. These are both areas that I've been really interested in for a long time. Part of my interest in new and developing operating systems has been around what we can do with programming languages and new programming languages. We've literally spent decades building operating systems in C. This is really a terrible idea, and we should have stopped doing it a long time ago. There are loads of new developments going on with programming languages in the operating system space and other areas that have been really fascinating. A lot of the drivers for change are similar in some ways. Some of the things I was talking about in operating systems a few years ago are also driving the changes in languages.
Drivers of Change
With languages, compilers, and compilation targets, has anything really happened recently? I'd say, yes, actually, a lot has happened. I'll talk a lot about performance, because it's an area that is really important and really interesting. I think we underestimate how much performance drives what we do with computers. It also affects all sorts of other things. Hardware is changing because of performance, the end of Moore's law, and those reasons. Hardware is a thing that, as programmers, we sometimes ignore. Hardware has changed a lot. There is a saying that C was based on the PDP-11, which is not quite true, but there's some truth in it. We're going to talk a lot in this track about different architectures like vectors, and GPUs, and FPGAs, and things like that. Hardware is fundamentally changing, programming languages are changing because of this, and how we use them is changing. We're also going to talk about tiny devices. ARM ships 20 billion microcontrollers a year, and someone has to program these things. This is a really interesting area, which we're going to talk about. Security is something that I've been working on quite a lot. There are some real changes in security. Software is being attacked faster than ever, and programming languages can help us with this, which is useful.
In the programming environment, we've had three gigantically successful language projects: GCC, LLVM, and OpenJDK. Those are gigantic projects and have been immensely successful. GCC itself really witnessed the birth of commercial open-source software. Cygnus stands for "Cygnus, Your GNU Support", because everyone loves recursive acronyms, especially back then. It was really the first commercial open-source company, founded in 1989 and acquired by Red Hat a decade later. Their main business was building GCC ports for all the many hardware platforms that existed at the time. They were so successful that most other commercial compiler vendors closed down over the following years, because open source was immensely more efficient: they could just add a backend and reuse the same compiler frontend. LLVM was a university project originally but was supported by Apple, and later by almost everyone in the ecosystem; it has become immensely successful in providing competition to GCC. OpenJDK has really created the whole JVM ecosystem and has been immensely successful, and really innovative in performance, and so on. These projects have allowed a huge number of other languages to thrive on top of them, as C and C++ compilers and JVM backends. Lots of languages, like Rust, for example, use LLVM. Some languages build their own stuff. These huge projects have made proper high-performance compilers very accessible to all sorts of new projects and experimentation. They've really molded the environment we're in.
Performance
Performance is something that I talk about a lot. One of my favorite quotes from a long time ago is, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems": historically, you end up doing a lot of compute and then you run into other problems. The big performance change that we've really seen in the last 5 years is that I/O got really fast. Cheap 25 gigabit and 100 gigabit Ethernet have changed the world. This was actually a forecast from a few years ago about network controllers. Amazon invested enormously in building out 25 Gigabit Ethernet, which is what they use in all their new machines. They created this whole market segment and made it really cheap. I think the majority of servers shipping now have 25 gig or 100 gig Ethernet. This is really fast compared to the gigabit we were using just a decade ago. SSD, NVMe, NVDIMM: we've got really fast storage now, doing millions of I/O operations a second, which is a huge change from hard drives, which would do tens of operations a second just a few years ago. We've got loads more CPU cores to drive these things, but clock speeds have not really gone up all that much.
We've had a few orders of magnitude of I/O performance improvement relative to our CPUs in the last few years. Even if you look at your laptop, USB has gone from something you would plug your mouse into, 20 years ago, to something that runs at 10 gigabits; Thunderbolt and those types of things are even faster than that. Back in the 2000s, there was the famous C10K problem: can we have 10,000 connections to a server? We have people doing 10 million connections to a server now. These order-of-magnitude changes are really big, and they demand high performance. Every CPU cycle counts. Even for just 10 gigabit Ethernet, you've only got about 130 clock cycles per packet, for a small packet. For 100 gigabit Ethernet, it's 13. You really can't do much work in that time. There's a lot of optimization and performance work that's really important for these things.
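To make that cycle budget concrete, here is a minimal back-of-the-envelope sketch in Go. The figures assume a 2 GHz clock and minimum-size 64-byte frames, which occupy 84 bytes on the wire once the preamble and inter-frame gap are counted; both are illustrative assumptions, not numbers from the talk:

```go
package main

import "fmt"

func main() {
	const clockHz = 2e9    // assumed CPU clock: 2 GHz
	const wireBytes = 84.0 // 64-byte frame + preamble + inter-frame gap
	for _, gbps := range []float64{10, 100} {
		pps := gbps * 1e9 / (wireBytes * 8) // packets per second at line rate
		fmt.Printf("%3.0f GbE: %.1f Mpps, ~%.0f cycles per packet\n",
			gbps, pps/1e6, clockHz/pps)
	}
}
```

Under those assumptions this works out to roughly 130 cycles per small packet at 10 GbE and roughly 13 at 100 GbE, matching the numbers above.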
Storage has gone through the same thing as networking. We've got really fast storage. We're ending up with flash storage in memory form factors, and things like that, which are incredibly high performance compared to what we had before.
Power Consumption of Computers
To drive all this stuff, there's a whole bunch of issues. First of all, power consumption is something that is really important to think about. Data centers consume about 3% of all the electricity we use, and it's growing. That's a worrying trend. Some estimates suggest it might go up to 10% or 20%, which is just scary if you think about the amount of energy we would need to produce to do that. While large cloud providers are more efficient than traditional data centers, a lot of their power still comes from fossil fuels. We've got to transition to renewable energy in the next few years. We need to make our use of computers more efficient, not do things we don't need to do, and make sure that renewable energy is the source of what we do. Power consumption is important, and obviously performance and power consumption are linked: if you're wasting your CPU cycles doing things slowly, you use a lot more power.
I highly recommend this paper. It is not at all well known; I'm thinking I might actually do a Papers We Love talk on it. It's a fascinating paper where they take an image processing problem, video conversion, and work out exactly how to implement it more efficiently. First, they just implement it on a CPU, then they vectorize it. Then they start building specialized hardware to solve the problem. You can get a 10 times performance increase just by vectorizing it and using the hardware more efficiently. Then there are further factors of a few hundred on top of that if you build specialist components to solve the specific problem you have. That's going to be a difficult problem for us to solve. We're not used to building custom hardware to solve problems, and we don't have programming languages that easily map to this problem. This is a research area. I'm going to talk about some of the issues involved in that.
What Can We Do Now?
What we can do is use the hardware we have now that's more efficient and more performant. All modern server CPUs have vector processing; AVX-512, for example, was actually an Intel GPU design that they built into the CPU instead. A factor of 10 in performance there is entirely possible. GPUs and FPGAs are things that many computers have, and you can use them; they are much higher performance for the right algorithms. The programming languages we use don't natively generate code for these things, in general. If you program for a GPU, you have to use special toolkits, and things like that. Juan and Jose are going to talk about this, a really exciting talk about making Java JIT compile for GPUs and FPGAs, which is a real breakthrough. This is practical research that actually works and that you can use now, which I find really exciting. You can get factors of 10 or more in performance just by doing this. This is something we should all be thinking about in terms of language design, and what we're outputting.
JIT compiling is a great solution for this. When you think about programming languages, why don't our programming languages natively support these platforms? Because they were all designed for the PDP-11 back in the day and we haven't really rethought much since. There's a whole lot of handwritten vector code now, and some amazing projects showing what can be done and what the speed-ups are. There's the simdjson project, which I love. Daniel Lemire has done a load of work on performance and is obsessive about using AVX-512 in particular, which is quite a fun instruction set for doing actual practical things. JSON parsing is not what you think of when you think of vectorization; you think of numerical computation. Actually, it turns out that lots of problems, especially with the new instruction sets we have, can be solved this way, all sorts of perfectly normal things. The programming model is weird, and interesting, and difficult, and not something that many people do. We don't have general-purpose software toolchains that will generate this. This is handwritten code with intrinsics and directly written assembly, and other horrible things to write. We really ought to be doing better at making this accessible. The Julia programming language for numerical computation has been doing really interesting work exploring this space as well. That's another area that's really making progress, actually putting this in people's hands and making it usable.
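Go doesn't expose vector intrinsics, but you can get a flavor of the branch-free, data-parallel style with SIMD-within-a-register (SWAR) tricks in plain Go. This is only a sketch of the idea behind simdjson's byte classification, which really uses AVX2/AVX-512 intrinsics in C++ over much wider lanes; the quoteMask helper here is hypothetical:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math/bits"
)

const (
	ones  = 0x0101010101010101
	highs = 0x8080808080808080
)

// quoteMask sets the high bit of every byte in w that equals '"',
// using the classic branch-free zero-byte detection trick.
func quoteMask(w uint64) uint64 {
	x := w ^ (ones * '"') // bytes equal to '"' become 0x00
	return (x - ones) & ^x & highs
}

func main() {
	chunk := []byte(`{"ab":1}`) // one 8-byte lane; AVX-512 handles 64 bytes at once
	w := binary.LittleEndian.Uint64(chunk)
	fmt.Println("quote bytes in chunk:", bits.OnesCount64(quoteMask(w))) // 2
}
```

The point is that one word-wide expression classifies eight bytes with no branches at all, which is exactly the shape of computation vector units are good at.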
Another thing that we've been doing a lot recently, and we haven't thought of it as a thing, but it's very much a thing, is using programming languages to output other programming languages. I work a lot in the cloud computing space; with Kubernetes we deal with a lot of YAML. No one likes writing YAML. We've been starting to develop frameworks, particularly TypeScript-based ones such as Pulumi and jk, whose end product is YAML, because then you can just write a program with abstractions and output YAML. There's a great talk about Pulumi by Joe Duffy from QCon New York, if you're interested. The same thing happens with hardware. There's the Chisel programming language, which is basically Scala with a set of libraries designed to output hardware descriptions. It's being used by a lot of the RISC-V processors. You can parameterize and write abstract functions, and the output is a description of gates, which you convert to Verilog and turn into hardware. This form of hardware design has been happening for quite a while. TensorFlow is basically a set of libraries for outputting computation graphs for ML. PyTorch and TensorFlow are basically using Python and Swift as ways to output computational structures, which are themselves languages. Language technology is just ending up everywhere in the stacks we're using. With things like TensorFlow, it's not "I'm writing a program", it's "I'm writing a program that generates code". There's this level of indirection, and it turns out to be essential for optimizing that code and outputting it to different backends, different hardware, different TPUs and GPUs, those types of things.
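Here is a minimal sketch of that pattern, in Go purely to stay consistent with the other examples here (Pulumi and jk are TypeScript-based, and the cut-down manifest below is hypothetical, not a complete Kubernetes Deployment):

```go
package main

import "fmt"

// deployment abstracts the repeated YAML: change a parameter and
// regenerate everything, instead of hand-editing near-identical documents.
func deployment(name, image string, replicas int) string {
	return fmt.Sprintf(`apiVersion: apps/v1
kind: Deployment
metadata:
  name: %s
spec:
  replicas: %d
  template:
    spec:
      containers:
      - name: %s
        image: %s
`, name, replicas, name, image)
}

func main() {
	// The program is the abstraction; YAML is just its end product.
	for _, svc := range []string{"api", "worker"} {
		fmt.Println("---")
		fmt.Print(deployment(svc, svc+":v1", 3))
	}
}
```

The design point is the level of indirection: the loop and the function parameters live in a real language, and the repetitive output format becomes a compilation target.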
The small is really important too. Servers are interesting and fun, but ARM is shipping 20 billion chips a year, and many other people are shipping microcontrollers as well. These have turned into actual, normal 32-bit RISC machines, with performance like computers from a few decades ago. There are a lot of people working on really fun problems like scavenging power from the environment, picking up solar or RF or something, to run really low-power machines. There are projects I've heard about that are thinking of shipping billions or trillions of microcontrollers for all sorts of really exciting purposes. We need to program these things. Historically, they were programmed in C or assembly directly. We can't program billions of microcontrollers in low-level languages; we really need high-level languages. These are weird environments with weird constraints. I'm really excited for Ron's talk about TinyGo. Go is a programming language that I program in a lot. It's had great success in the cloud and in containers. It turns out it's also a great language for programming microcontrollers.
Accelerators on Microcontrollers
Like servers, microcontrollers are taking on all sorts of interesting new tasks. Encryption is something that microcontrollers have always had a problem with; I remember Chris did a talk years ago about the problems involved in that. We're now adding hardware accelerators for doing encryption and for doing ML, running ML models on microcontrollers. I've talked to people who are building toys who want hardware support for encryption on the microcontrollers they're using, because the toy has to talk to an app so that the app can control what the toy does, and you can't sanely talk to something without encryption and security anymore. People want to run AI models. It's possible to run AI models that do voice recognition on really tiny, very low-power microcontrollers. There's a whole lot of work on 1-bit models, which take numerical precision down as low as it can possibly go; then you can run them on really small, very low-power controllers. The same trends apply to microcontrollers: we need high-level programming languages and frameworks to work with these things, because you don't want to write them by hand in assembly.
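To show why 1-bit models fit such small chips, here is a sketch of the arithmetic at the heart of binarized neural networks (a general technique, not any specific product mentioned here): with weights and activations quantized to ±1 and packed into a word, a 64-element dot product collapses to an XNOR and a popcount, both cheap even on a microcontroller:

```go
package main

import (
	"fmt"
	"math/bits"
)

// binDot computes the dot product of two 64-element ±1 vectors, where a
// set bit encodes +1 and a clear bit -1. Agreeing bits contribute +1 and
// differing bits -1, so dot = 2*popcount(xnor(a, b)) - 64.
func binDot(a, b uint64) int {
	agree := bits.OnesCount64(^(a ^ b)) // XNOR: 1 wherever the bits match
	return 2*agree - 64
}

func main() {
	fmt.Println(binDot(^uint64(0), ^uint64(0))) // identical vectors: 64
	fmt.Println(binDot(^uint64(0), 0))          // opposite vectors: -64
}
```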
What Is So Different About The Wasm Platform?
WebAssembly has had a lot of hype. Why is it different from other run-everywhere platforms we've had, such as the JVM? Obviously, the huge thing about WebAssembly is that it ships in the browser, so suddenly we can run lots and lots of programming languages in the browser in a way that doesn't mean compiling everything to JavaScript, but is actually a native backend. The JVM was removed from browsers a long time ago; this is the first point since then at which we can run the languages we like in the browser. This is causing a lot of innovation, and loads of interesting new platforms are being built. Pretty much every programming language now has a WebAssembly backend: you can write in any language you like and run it in the browser. It's a new, open design with an open process. Unlike the JVM, it was not designed for one specific language; it was designed to be quite general purpose. There are people working on extensions for it, and building new platforms on it. Cloudflare Workers is one really interesting thing: you can just run code on the edge in a CDN using WebAssembly. Colin is going to talk about building your own compiler. I think compilers are great fun. If you're interested in languages, it's great fun to learn about compilers. Colin's talk is going to be really hands-on and interesting. You're going to learn how to do it yourself, and you'll also understand how WebAssembly works from that. Highly recommended.
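As a taste of how routine this has become, here's a minimal sketch using Go's standard WebAssembly target (TinyGo offers a much smaller-footprint alternative):

```go
// Build for the browser with:
//   GOOS=js GOARCH=wasm go build -o main.wasm
// then load main.wasm via the wasm_exec.js glue file that ships
// with the Go distribution.
package main

import "fmt"

func main() {
	fmt.Println("hello from Go compiled to WebAssembly")
}
```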
Multi and Mixed Language Support
The dream of WebAssembly is to have every language interoperate by converting it to WebAssembly, and to be able to share libraries written in any language. It's a really hard problem, something we haven't ever done before. It's hard because languages are different. Python doesn't understand the linearity restrictions in Rust. Every language has its own garbage collector, or doesn't have a garbage collector at all, and things like ownership are really hard problems. The JVM got around this by forcing every language into one model: there were extensions over time, but everything used the same object model and garbage collection model. For WebAssembly, the dream is to actually make all programming languages interoperate. We're a long way from this. It's a really fun problem in language design, and there's a lot of work going on on it. It's a dream that you could just pick a library you want from one language and use it with another language totally seamlessly. It's a really exciting possibility in language technology. I know we're years away from this working, but little bits of it are going to start to happen gradually, and we're going to get towards it. I think it's a really exciting opportunity with languages.
The Rise of Automation in Security Testing
Security and safety are really important. Fuzz testing is something that has gradually broken into the mainstream. The first really successful fuzz tester was called American Fuzzy Lop, which is a very cute breed of rabbit, released in 2013. Fuzzing is now available as a service: Microsoft, Google, and various other people offer fuzzing as a service for open-source projects and commercial projects. There are tools such as Semmle, which GitHub bought last year, which isn't a fuzzer: it analyzes security issues you find, and finds other ones in the same code base or in different code bases, which is really exciting. Security testing used to be people manually trying to hack things to see what happens. Now we've got this tooling, and it's working. If you look at the number of CVEs filed, it's gone up a lot in the last few years. CVEs are not necessarily the best measure of anything, but I think automation and better tooling have simply been finding a lot of these CVEs. If you look at what the issues are, we've had a lot of memory safety issues: buffer overflows, heap buffer overflows, use after free. These are all memory safety issues, a large number of which come from C and, to some extent, C++ code.
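For a feel of what this tooling looks like today, here is a minimal sketch of a coverage-guided fuzz test using Go's built-in fuzzer, which belongs to the same family of tools as American Fuzzy Lop; parseHeader is a hypothetical function with a planted off-by-one bug for the fuzzer to find:

```go
package parse

import "testing"

// parseHeader returns the text before the first ':'. The planted bug:
// it reads one byte past the separator without checking the bound.
func parseHeader(s string) string {
	for i := 0; i < len(s); i++ {
		if s[i] == ':' {
			_ = s[i+1] // panics when ':' is the last byte
			return s[:i]
		}
	}
	return s
}

// Run with: go test -fuzz=FuzzParseHeader
func FuzzParseHeader(f *testing.F) {
	f.Add("key: value") // seed corpus
	f.Fuzz(func(t *testing.T, s string) {
		parseHeader(s) // the fuzzer reports any input that panics
	})
}
```

In Go the bug surfaces as a clean, caught panic; in C the same mistake would be an out-of-bounds read, which is exactly the class of memory safety issue at stake here.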
Something like 70% of all the security issues filed as CVEs are memory safety issues, which is ridiculous given that we haven't released a programming language that isn't memory safe for maybe 20 or 30 years. We have all this critical infrastructure written in C, and we've just got to replace it with stuff written in a language that is less than 20 or 30 years old. Any language: it doesn't really matter. Memory safety is the big security problem, and we already know how languages can help. You can literally just use JavaScript and your memory is safe, or anything else. You don't have to obsess about Rust or new languages; you can use anything to be memory safe.
Beyond Memory Safety
There are more problems than just memory safety. I think safe concurrency, concurrency that doesn't have bugs in it, is really important. We're writing a lot of programs that do a lot of concurrency. We've done lots of asynchronous coding and things like that, and if you've done that, you've discovered things like deadlocks and race conditions, which are just really annoying. It turns out that languages can help us with problems like that. Sophia is going to talk about a language you probably haven't heard of called Pony, which is a really fun new language. She's going to talk about the type and safety issues in making a massively concurrent language that hasn't got any of these concurrency issues. It's a really fun language. It has also been quite influential on other languages, and it's being used in quite a lot of high-performance applications. Let's use our type systems to avoid these other classes of issues, many of which are security issues as well: once you get past all the memory safety issues, you find all the other security issues, and race conditions can often be turned into security issues too.
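To make the contrast concrete: Pony's reference capabilities reject code like the following sketch at compile time, whereas Go (used here just for illustration) compiles it happily and only catches the race dynamically, if you remember to run with the race detector:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	counter := 0
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter++ // unsynchronized read-modify-write: a data race
		}()
	}
	wg.Wait()
	// Frequently prints less than 1000; `go run -race` flags the race.
	fmt.Println(counter)
}
```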
Languages Everywhere
Languages are appearing everywhere. This is the recent RedMonk language ranking. We have a lot of programming languages, and these are only the most commonly used ones; there are actually thousands and thousands of programming languages. Why have we got so many? Languages are our tools as programmers. When we try to do different things, we build different languages to do those things. That's really great. We've been trying to do more and more complicated and complex things. We've expanded the scope of the problems we're trying to solve, and how easy we want these things to be. Language turns out to be an essential tool for handling complexity. I've not talked much about things around ML and configuration, where language is incredibly important. We want to start doing things that are more complicated. We started with things like IDEs, having our editors understand languages. Now we want to do more with our languages and our code, such as analyzing it and finding bugs. There are tools that reduce the failing test cases your fuzz tester finds into minimal, understandable examples. We want to manipulate code with code. There are all sorts of things like that we want to do, and the languages we have are not necessarily that great for doing them with. A lot of the experimentation has been around these things that we need to do.
What's coming up next? There's a blog post by Graydon Hoare, the original author of Rust, which has a bunch of really great ideas; I recommend you read it. If you're a C or C++ programmer, undefined behavior is something that causes a lot of bugs. Undefined behavior is this terrible concept that originally existed for good reasons: we don't know what your machine does, so we'll say the language can do whatever the machine natively does, because that'll be faster. It turns out that now undefined behavior is something that compiler writers use to screw you over. They say: if you write this code, which looks perfectly reasonable to you, it's undefined, therefore we can do anything. We will just delete your code, or do other things that cause terrible bugs you won't understand. A classic example is a compiler silently removing a null-pointer check because the pointer was already dereferenced earlier, and dereferencing null is undefined. And we won't tell you about this, because we're doing it for performance reasons. This has been a total design disaster in programming languages. I think it's something we've really got to get rid of, because it's going to cause more issues; it's just not helpful. We actually need to be able to reason about what code does and think about code. I'm a big fan of trying to make formal reasoning tools more accessible, so people can understand their code better. You can't reason about code with undefined behavior, because it doesn't actually mean anything. I think that's something we've really got to change.
We're going to make language technology more accessible. I think it's really important that people can build languages into their tooling. We've built these big projects like LLVM, which are great because they're accessible in one sense: you can build them into your technology. They're designed as libraries; LLVM was built as a set of libraries rather than a monolithic toolchain like GCC. That's great, but it's big and complicated, and I think it's actually difficult to use. You can build languages and compilers that are really small and simple to understand. Learn how to build a WebAssembly compiler later and you'll understand: it's not some big, magic thing. Small compilers and small tools are still possible in the language space. We've neglected them a bit, and I think we need to go back to them. TinyGo is an example as well. I've always been a big Lua fan. Small languages are great: small languages that are understandable and that you can embed into other projects are really great. Safe languages like Pony are a really exciting frontier. I think Rust has reinvigorated everyone's interest in safety, in actually making it possible to write things that you can tell are correct, because there's nothing worse than code that doesn't work right. Let's work with modern hardware. I think the TornadoVM work is very important, because hardware has changed and we've got to work with it. We've got to change our mental model of what hardware is like. The bare-knuckles performance track at this QCon is always really good from that point of view: people who are obsessed by what hardware does. I think all programmers need to understand a little more about hardware and how it's changing, and learning about those orders-of-magnitude differences in performance that are available is really important.
Language Shapes the Way We Think
Diversity in ways of communicating and thinking can be divisive; I think that's what the biblical Tower of Babel story was about, people not being able to communicate. But the great thing about programming languages is that they shape the way we think. We can learn lots of languages, we can understand different languages, and we can learn things from them. A long time ago I learned Haskell. It was very influential in how I think about things as a programmer, and I thought it was incredibly useful, yet I have never written a production line of Haskell in my life. As a thinking tool, it was really valuable. There are many programming languages like that, which help us think about different ways of solving problems. I think it's really important that we go out and learn programming languages, and think about: why is TensorFlow so different from other things? Why are we doing these new things with languages? What would a native GPU programming language be? Why is it so difficult to work with these things?
There are so many things going on with programming languages now that it's a really exciting time. Language communities are becoming very friendly, I think. Rust is one of the great examples of a community that has grown up through open source, and open participation, and open specification. It's not sitting there with a commercial background or anything like that. There are many other languages like that too. You can join in, and you can participate, and you can shape the future of programming languages in a way that was difficult a few years ago. I think I'd really encourage you if you're interested in that to find a language community or language communities and participate, and understand what the problems they're trying to solve are. These are hard and interesting problems. Languages give us leverage, and if you can build tools that everyone can use that makes their life better, it's an amazing thing to do. I'd really encourage you to think about this. Hopefully, you can get some inspiration from the talks in the track.
Questions and Answers
Participant: What are some of the other languages that are not going to be represented at the conference that you think are especially interesting? Languages that would like to have attended, of course, but were not able to make it?
Cormack: There were lots, actually. When I did the language track at QCon New York, we had a great Rust talk; I encourage you to watch the videos. We had a great talk about making npm safe, about security in JavaScript, which was really amazing. A bunch of my friends have become obsessed with Elixir and Erlang, and those languages have a load of really different ideas that other languages still haven't adopted. We have a lot to learn from them if you're interested in the cloud native world and in updating code: most languages don't even think about the problem of updating code. There's a whole set of stuff about the UX of deploying languages that Dark is exploring. Dark is really early stage, but it's something I'd like to spend more time on. There were a bunch of people who couldn't make it to speak on the whole area of interop with WebAssembly. There is a project called the SOIL project, which has just been set up recently, which is all about how we're going to build tooling to do multi-language work effectively with WebAssembly. There's a new language that a bunch of the Pony team are working on at Microsoft Research, which was announced recently; again, Microsoft is doing a lot of work on safety and languages.
Participant: Language Generate Rust, or something like that?
Cormack: No. We were going to have a talk about the new stuff that's going on in the OCaml area; OCaml is an ML-family language, and there's a lot of work going on in ML. They're trying to do stuff that's really different. There's a lot. There were a whole bunch of talks I really wanted to have that we couldn't fit in the track. Another time.
Participant: I've found myself over the last year or so doing a lot of work with very small DSLs. You didn't really mention that?
Cormack: I think small DSLs are something that people keep coming back to. I concentrated on the bigger language use cases, but small DSLs are ways of helping people express a problem; they're very specific to that problem, and really important. Part of the reason I didn't cover them is that I've recently been interested in the whole let's-use-TypeScript-for-this trend, which is recent, particularly in the spaces I've been working in. There has long been a small-DSL movement: Ruby and that community have always been involved in it, and the Lisp and Scheme communities, and lots of others. I do think embedding languages as the interface through which users experience your product is a really valuable thing to do, because it gives them a lot more power to do things that are different, scriptable, and programmable, in a way that exposes the domain in a really understandable way. Yes, I probably should have covered some of that as well.
Participant: Looking at the graph that you showed for the different languages and the popularity, to me it looks like it's almost as if things are getting more fragmented. Are there certain areas in languages where things are consolidating in some sense?
Cormack: It's interesting, because those numbers are based on cumulative lines of code, so they move quite slowly. One interesting thing is that some programming languages have been adopted into lots of new areas. Python has been amazingly resilient in the things people have done with it. It's gone from being a web programming language and a teaching language to a language for ML and big data, which is an unusually big transformation for any programming language. It has proved very adaptable in that way, even if, in many ways, it's being used strangely in ML: you're not really writing Python, you're using Python to drive other things. JavaScript, ever since it broke out of the web with Node, has become a very universal and adaptable language for all sorts of problems as well. Obviously, it's very popular, and the use cases for JavaScript, TypeScript, and their variants are very broad.
Languages do change in those ways. Some of them adapt; some of them don't adapt as well, and just get stuck in niches. It's interesting to see whether Kotlin and Swift will break out of the single-platform niche. They're both languages people like, but their communities are very rooted in Android and iOS, and it's hard to see whether they will break away or not. Kotlin is showing some signs of becoming a broad language; Swift seems to have stalled in that at the moment. It's a huge investment to build a language. It takes decades and a lot of work, building community, and libraries, and everything. To build a truly successful language costs tens or hundreds of millions in investment, which you often don't really see. Some of it is people's time, given away in open source; some of it is real, hard money. If you look at the amount Apple has spent on Swift, or Google has spent on Go, it's a lot of money.
Participant: In infrastructure software at least, we've seen a swing from scripting languages, Perl to Ruby, then to Go, and now everything's being rewritten in Rust. Is that an oscillation, or is that the direction of travel? Are we going to see a renaissance of scripting languages? If so, what do you expect that might look like?
Cormack: It's really fascinating, because we had swung so far in the other direction, towards interpreted languages. I think it's partly driven by tooling: Rust picked up LLVM, so you get a great compiler infrastructure just off the shelf. I think it's partly a reaction of people wanting types again, because type safety makes large-scale program refactoring and those kinds of things easier. I think part of it is that it took languages like Java quite a long time to adapt to the container world of restricted memory and not owning the whole machine; even basic things like the JVM knowing about cgroup memory limits were a barrier to use in those areas. Some of it is fashion: there's definitely a lot of fashion in languages. Docker was very influential in initially making Go break out beyond Google. I think those things can't be ignored. I think we're going to stay this way, because performance matters more now. Also, Go showed that you could have a compiled language that still compiled in milliseconds, so you didn't have to wait an hour for a C++ build cycle. That developer friendliness was actually what people wanted from interpreted languages: it was just instant, there. Go showed it was possible, and that's been incredibly influential on other languages. Short compile times are something that, once you've had them, you can never go back from.