
Generally AI - Season 2 - Episode 6: the Godfathers of Programming and AI

Hosts discuss the Godfather of AI, Geoffrey Hinton, who developed pivotal algorithms like backpropagation, contributed to the visualization of high-dimensional data with t-SNE, and inspired a resurgence in neural networks with AlexNet's success. They then turn to John von Neumann, whose impact spanned mathematics, the Manhattan Project, and game theory, but most importantly: the von Neumann computer hardware architecture.

Key Takeaways

  • Geoffrey Hinton was a pioneer in neural networks, developing key technologies like backpropagation, dropout, and t-SNE.
  • Today Hinton expresses concerns over AI’s societal impact, advocating for caution as it advances.
  • Hinton's work has now resulted in his winning a Nobel Prize.
  • John von Neumann's contributions to technology and science were diverse, spanning fields from computer architecture to game theory.
  • His von Neumann architecture, which allows stored programs, is foundational to modern computing.

Transcript

Editor’s note: less than a month after this podcast was recorded, Geoffrey Hinton won the Nobel Prize for Physics for his research in neural networks.

Roland Meertens: For the Fun Facts, Anthony, did you know that Russia observes the Day of the Programmer every year?

Anthony Alford: What day is that?

Roland Meertens: Well, good that you're asking. It is held on the 256th day of the year, so it falls on September 13 in common years, but on September 12 in leap years.

Anthony Alford: I like that. That's clever.

Roland Meertens: I think it's one of the few interesting days that switches dates depending on whether it's a leap year or not. In China, meanwhile, they hold Programmers' Day on October 24th, because you can write it as 10/24.

Anthony Alford: Okay.

Roland Meertens: That's the fun fact. Welcome to Generally AI, an InfoQ podcast. Today I, Roland Meertens, am going to talk with Anthony Alford.

Anthony Alford: Yes, good to be here.

Geoffrey Hinton: the Godfather of AI [01:17]

Roland Meertens: It's the last episode of this season, and we are going to talk about famous programmers and we are going back from who is currently still relevant back to foundations of computer science. And I want to talk with you, Anthony, about the Godfather of AI.

Anthony Alford: Oh.

Roland Meertens: Do you know who is commonly referred to as the Godfather of AI?

Anthony Alford: I do not. Well, when you say it, I'll go, "Oh yes, of course". I'm sure, but tell me.

Roland Meertens: It is Geoffrey Hinton.

Anthony Alford: Of course.

Roland Meertens: So I wanted to talk about Geoffrey Hinton basically because he pushed the ideas of neural networks through and through over the years, and he also really persevered through the AI winters. So he wasn't just a one-day, one-hit wonder, or a one-century wonder. He really managed to make amazing contributions to the field of AI over multiple decades.

Anthony Alford: That AI winter was pretty bad stuff.

Roland Meertens: Yes.

Anthony Alford: I think we have an episode about that.

Roland Meertens: Absolutely. I also, by the way, tried to find where the name Godfather of AI comes from and why he is called that, but I just can't find the first reference. In the AI community, though, everybody seems to agree that this is his nickname, so that's what we're sticking with.

Anthony Alford: He certainly deserves it, I think.

Roland Meertens: Yes. And in this podcast it will become clear that this is a name which is rightfully given to him. And the first thing which is interesting about him is that his real interest is not computer science, but rather the question of how the brain works. So when we're talking about famous programmers in these episodes, we often think of people who are really dedicated to the art of programming, and he is more excited about the question of: how can we study the brain?

Anthony Alford: Well, think about that though. Really the purpose of computers and programming is to solve some problem. We do tend to get very meta in this community, but ultimately a lot of people got into computers trying to solve a particular problem.

Roland Meertens: But before AI was a thing, I would always explain it to friends as: what can computers learn from humans, and what can humans learn from computers? Nowadays, you don't need to explain what AI is anymore. I really got into AI because I liked that in-between, and it seems that he is the same. By the way, he studied experimental psychology at Cambridge, finishing in 1970. So that's quite an interesting field of study. And he got a PhD in artificial intelligence from Edinburgh in 1978.

Anthony Alford: So they were already offering a PhD program in AI back then.

Roland Meertens: As you can see here, he got one, he managed to get one.

Anthony Alford: That's great.

Roland Meertens: I know that in the Netherlands, where I studied, they have been offering degrees in AI for over 30 years, but there they renamed a couple of degrees. For example, the program I did used to be called cognitive psychology. And I think by renaming a lot of programs to AI, you get a certain sense of: this is clearly a field, instead of just a couple of people doing experimental psychology somewhere.

Anthony Alford: I think that's pretty good recognition of where the field has come.

Backprop and Other Inventions [04:54]

Roland Meertens: No, definitely. The fact that you don't have to explain to people what you're doing, I think, is enough recognition on its own. So one of the biggest contributions of Geoffrey Hinton is his paper from 1986 called Learning Representations by Back-propagating Errors. It's a Nature paper. This is the famous back-propagation algorithm, which we use right now to train large neural networks.

Anthony Alford: And did he invent that?

Roland Meertens: So he is actually the second author. So together with two other people, they came up with this.

Anthony Alford: Very cool.

Roland Meertens: That rightfully stakes a claim to being the Godfather of AI. And it's great, because now, using this back-propagation algorithm, we can learn from a lot of examples and then generalize to an unseen set of examples.

Anthony Alford: Right.
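[Ed: for readers who want to see the idea concretely, here is a minimal sketch of back-propagation training a tiny two-layer network on XOR. The network size, learning rate, and iteration count are illustrative choices, not from the 1986 paper.]

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error derivative back through
    # each layer with the chain rule -- the core of the 1986 paper.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```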

Roland Meertens: Actually, what is pretty cool about his persistence is that back in the day there were really two schools of thought in AI. One was: how can we represent things, and what are the grammars to learn from them? The other field was more: how can we just learn automatically, without having to represent anything? And I think for a very long time the grammar-based and rule-based field was ahead. Now the "just learn from examples" field is very clearly winning.

Anthony Alford: It's funny, some colleagues and I were just recently joking about Prolog, which was definitely one of those symbolic logic programming languages that was a darling of the other branch of AI that you mentioned.

Roland Meertens: No, indeed. Definitely. Something else he worked on, but is not very well known for, is stochastic neighbor embeddings.

Anthony Alford: Interesting.

Roland Meertens: Well, if I say it like this, you probably think, "I don't know this". Right? It's okay to admit it. So he thought of a probabilistic approach to place objects described by high-dimensional vectors into a low-dimensional space, and you might know this better from the follow-up paper, t-SNE, which he authored together with a Dutch researcher, Laurens van der Maaten.

Anthony Alford: Really?

Roland Meertens: Yes. And if I'd say t-SNE, you of course say, "Yes".

Anthony Alford: I’d say gesundheit.

Roland Meertens: Yes. I love probabilistic approaches that place similar high-dimensional objects close together. So this is a technique, for people who don't know it, to visualize high-dimensional data in two or three dimensions. And these two papers were written in 2002 and 2008, again well before the big deep learning revolution. Another paper I found from him, which I really liked, is his paper on a mixture of experts. He had an interesting insight: if you have multiple neural networks that are going to be experts on different topics, and you pick the most expert-like one, how can you best get the answer from them, and how can you best train them?
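[Ed: a minimal sketch of what t-SNE is used for, via the scikit-learn implementation; the dataset and parameter values are illustrative choices, not from the paper.]

```python
# Project scikit-learn's 64-dimensional digits dataset down to 2-D
# with t-SNE and plot it; similar digits end up close together.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

digits = load_digits()  # 1,797 images of handwritten digits, 64 features each
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=42).fit_transform(digits.data)

plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target,
            cmap="tab10", s=8)
plt.title("t-SNE projection of the digits dataset")
plt.show()
```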

Anthony Alford: And we see mixture-of-experts networks now; I wouldn't say they're common, but there's a lot of popularity behind them.

Roland Meertens: Yes. Some of the big tech companies are now using them, which again wouldn't have been possible if it weren't for the work of Hinton. To put years like 2002 and 2008 in context: I would wager that at that point in time many people, including me, believed that neural networks were not going anywhere. So when I followed a neural networks course in 2010, it felt like an old-fashioned thing which nobody really believed in, even though I was distributing the training of my networks over eight computers at a university.

Anthony Alford: Really?

Roland Meertens: Yes.

Anthony Alford: Nice.

The Deep Learning Revolution [08:43]

Roland Meertens: It was still a terrible network, but for most of the world this would really change in 2012, when two PhD students of his, Ilya Sutskever and a certain Alex Krizhevsky, wrote a neural network to recognize objects in images.

Anthony Alford: That would've been AlexNet.

Roland Meertens: Yes, indeed.

Anthony Alford: That's very bold to name it after yourself like that.

Roland Meertens: I really like boldness. So this network won the ImageNet competition with such a large margin that other labs jumped on neural networks. In the previous years, the error rate in recognizing the thousand categories of the ImageNet competition, they were measuring the top-five error rate, had dropped from 28% to 25%. So humanity was making progress in recognizing objects. And with AlexNet, it went directly to 16%.

Anthony Alford: Wow.

Roland Meertens: By the way, since then, top-one accuracy went from about 58% with AlexNet to, I think, 92% for the best network now.
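[Ed: the "top-five error rate" counts a prediction as correct if the true label is among the model's five highest-scoring classes. A small illustration, with made-up scores:]

```python
import numpy as np

def topk_error(scores, labels, k=5):
    # scores: (n_samples, n_classes); labels: (n_samples,)
    topk = np.argsort(scores, axis=1)[:, -k:]        # k best classes per sample
    hits = (topk == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

rng = np.random.default_rng(0)
scores = rng.random((1000, 1000))   # 1,000 fake samples, 1,000 ImageNet classes
labels = rng.integers(0, 1000, size=1000)
print(topk_error(scores, labels, k=5))  # ~0.995: random scores are nearly always wrong
print(topk_error(scores, labels, k=1))  # top-1 error, the stricter metric
```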

Anthony Alford: That's remarkable. Now, I know people have their issues with ImageNet, and there are other datasets like COCO. Just as an aside: do you think object recognition is, maybe we wouldn't call it solved, but pretty solved?

Roland Meertens: I think it's solved enough to kickstart any problem you might have and solve it temporarily. Maybe not at the financial and practical scale you want, and there are still caveats, but it's definitely solved for most practical purposes.

Anthony Alford: The one that's built into my iPhone, for example, is pretty good. You can do a search on your images for dog, mountain, things like that, and that's pretty cool.

Roland Meertens: No, definitely. For a lot of practical use cases, you might want to have an understanding of what's in the image. Computer vision is fairly easy nowadays, especially if you have access to the ChatGPT API.

Anthony Alford: Back in the day, in between Hinton's successes, in the AI winter, I remember when computer vision was doing things like finding edges using convolutions and doing Hough transforms and, gosh, I don't even remember what.

Roland Meertens: I think the XKCD comic about how checking whether a photo contains a bird would need a research team and five years is still relevant.

Anthony Alford: Absolutely.

Roland Meertens: The generation nowadays might not understand this joke anymore. Anyways, AlexNet had 60 million parameters, 650,000 neurons, five convolutional layers, and three fully connected layers. This was a significant milestone, which demonstrated the power of deep learning in computer vision. It was also the start of larger data sets.

So ImageNet, created by Fei-Fei Li, was already amazing because they had more than a million images, but this was also the start of more compute power and thus larger networks, which can get a generic understanding of the world without overfitting. When I was preparing this, I was reading all these papers, and a really fun snippet I encountered in the AlexNet paper was: "Our network takes between five and six days to train on two 3GB GPUs. All of our experiments suggest that our results can be improved simply by waiting for faster GPUs and bigger data sets to become available".

Anthony Alford: And guess what?

Roland Meertens: Yes. Now we are talking about AI scaling laws like there's no tomorrow.

Anthony Alford: Definitely that was an anticipation of it. I wonder…probably that's an idea that people have had for a while.

Roland Meertens: It's quite interesting that they already had the insight that simply having more compute and more data would improve things. That's actually something Hinton also says in one of the lectures I watched in preparation: he always thought, "Oh, we need more data and more compute, but we also need more innovations". And now he's like, "Oh, we actually mostly just needed more data and more compute".

Anthony Alford: You're probably familiar with that essay called The Bitter Lesson.

Roland Meertens: I am familiar.

Anthony Alford: That's what it is. It's just more data, more compute. Don't try to be clever.

Roland Meertens: It is a bitter lesson. Another interesting thing in this AlexNet paper is that they talk about using ReLU activation, rectified linear units: yet another technique out of Hinton's group. They also show in the AlexNet paper how much faster their network is able to learn compared to the previously most common activation function, tanh.

The other thing is that they prevent overfitting using dropout, yet another paper from Hinton, which came out that same year. In other words, at this point in time the lab of Hinton is absolutely on fire, partly because of these large margins over more classical computer vision methods.
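[Ed: a minimal PyTorch-style sketch combining the two techniques just mentioned, ReLU activations and dropout; the layer sizes and dropout probability are illustrative, not AlexNet's.]

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 512),
    nn.ReLU(),          # rectified linear unit: max(0, x)
    nn.Dropout(p=0.5),  # randomly zeroes half the activations during training
    nn.Linear(512, 10),
)
```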

The other options you had for recognizing things in images were really things like SIFT, Scale-Invariant Feature Transform. Other labs basically looked at this and were like, "Oh, wow, this is way beyond what we can do". So they pivoted towards his approach. Nowadays, as you already mentioned, we keep scaling data, compute, and networks, and we are still improving our understanding of the world without overfitting.

Also, his research group in Toronto made massive breakthroughs in speech recognition as well. So he was really going at all the fields at once, and we all know what happened after that competition: all the big tech companies were already looking at his work. I went through everything on his personal website, and one thing I found funny is that in his grants overview, he basically notes that Microsoft kept sending him unsolicited gifts of 10 or 20K. So he just wrote down in the grants: "Unsolicited grant from Microsoft for 20K".

Anthony Alford: I guess if you can survive the winter long enough, it'll just fall into your lap.

Existential Threats of AI [15:31]

Roland Meertens: Anyways, in 2013, Hinton actually joined Google part-time as a researcher, and notably, he quit Google last year. First of all, fair enough: he's now 76 years old. He's still very active as a researcher, but the reason he quit is that he wants to freely express his concerns about the existential threats he's seeing with AI. So I watched a couple of his lectures to figure out what his fears are. Do you want to guess his concerns?

Anthony Alford: I do remember reading something, but let's let you tell it.

Roland Meertens: So the first one is, of course: will AI replace our jobs? AI is getting better and better at intellectual labor, and that worry is of all ages. But we always replaced manual labor with intellectual labor, and now we go from intellectual labor to... we don't know what yet. And he's more like, "Oh, this will probably make the rich richer and the poor poorer again". Fakes are another problem: is it still possible to know whether something is true or not? It is just too easy to generate more content.

Anthony Alford: I think that's a pretty important concern actually.

Roland Meertens: And that's why people have to keep listening to the InfoQ podcast.

Anthony Alford: Oh, I was going to say, we've already put our voices out there, so if you get a phone call from me in the middle of the night asking for money, it's probably not me.

Roland Meertens: I will transfer it to you. Another problem is lethal autonomous weapons. You are living in America, and you can comment on this.

Anthony Alford: No, I don't think I want to comment on that. No matter where you live, I think that's scary.

Roland Meertens: I agree. The last thing is discrimination and bias, but he also had an interesting insight here: you can maybe solve this problem by showing that your machine learning solution has less bias than the system you are trying to replace. And the benefit, if you're talking about bias and AI, is that you can actually measure the bias of a model, instead of having to try to measure the bias of a bank employee, of a person.

By the way, he thinks that AI will be immensely helpful in areas like healthcare. So he also sees benefits, and he thinks we should continue research. But he still sees a long-term existential threat, where superintelligence will basically be used by bad actors to manipulate their electorates. And if that goes too far, AI may take over.

His other reasoning is that if you have an intelligent agent that can create subgoals, one of the subgoals to achieve more is probably to get more control. And as long as this superintelligent AI is able to talk to people, it'll be able to persuade us as humans. We are too weak to resist the sweet-talking of AI. So yes, he has an existential fear about AI, and he thinks that by not being with a big tech company, he can actively talk about his fears of big tech companies and AI.

Anthony Alford: That's certainly one of those topics that is guaranteed to be an interesting discussion no matter what. And I always say, personally, I'm not sure how likely this existential threat is from AI, but when someone like Geoff Hinton speaks, I will listen with respect because he obviously knows a lot more about it than I do.

Roland Meertens: I can definitely recommend listening to some of his lectures. It's definitely a treat. I will put some of my favorite ones in the show notes together with some of my favorite papers. So take a look at the show notes to find those.

Overtaken by Events [19:12]

Anthony Alford: So he's no longer with Google. What's he working on now?

Roland Meertens: I assume he's still working at his research lab in Toronto. At some point in 2017, he came out with capsule networks, which were a more hierarchical approach to knowledge representation. They didn't really go anywhere, or at least nobody's talking about them anymore. Of course, if you work for that long, you have a lot of ideas which don't end up being groundbreaking, like Boltzmann machines, which I think is one of the things he worked on a lot. I didn't incorporate them into the podcast because frankly I don't understand them. [Ed: after this recording, Hinton won a Nobel Prize for his work on Boltzmann machines.]

Anthony Alford: That might be why they didn't go anywhere. That's the fully connected one, right? Where it's like a circle.

Roland Meertens: Yes, like a circle, where there are connections between all the neurons. Anyways: rightfully the Godfather of AI. It is Geoffrey Hinton.

Anthony Alford: Very cool. Thank you, Roland.

John von Neumann: the Godfather of Programming [21:10]

Anthony Alford: All right. Well, it's my turn. And you remember last season I talked about Claude Shannon.

Roland Meertens: Yes.

Anthony Alford: And one of the things that I really loved about Claude Shannon was that he made multiple groundbreaking contributions to technology, somewhat like Geoff Hinton. And if you think about it, this pattern really ought not to be surprising. We're talking about very smart, very talented people, and really nobody is a one-hit wonder. It's just that I think most people are known for their biggest hit. So here's an example: what is Einstein known for?

Roland Meertens: Theory of relativity.

Anthony Alford: Right. But you know he won a Nobel Prize for his work on the photoelectric effect.

Roland Meertens: I did not know that.

Anthony Alford: That's right. So he did not get a Nobel Prize for relativity. Likewise, Linus Torvalds: he's known for Linux, but most programmers probably also know he created Git, the source control system. Today we're going to talk about someone who maybe not everyone has heard of the way everybody knows Einstein, but he did make several important contributions to computer science, among other things. And in my opinion, he's one of the most fascinating intellectual figures of the 20th century. I'm talking about John von Neumann.

So von Neumann was born in Budapest, Hungary in 1903, just a few days after the Wright brothers made their historic first flight. I live in North Carolina, so we take credit for that one.

Roland Meertens: Everything is before or after the first flight.

Anthony Alford: Yes. Von Neumann was, very early on, recognized as a prodigy. According to Wikipedia, at the age of six he could divide eight-digit numbers in his head. He could speak ancient Greek, and he could memorize entire books. By the time he was a teenager, he was publishing mathematical papers and making serious contributions to the field.

But his father didn't think that studying mathematics was a great career strategy. So he convinced his son to get a practical degree in chemical engineering. So after finishing high school, in 1923 von Neumann enrolled in a two-year Chem.E. program, but at the same time he enrolled in a PhD program in mathematics, and he finished them both at the same time.

Roland Meertens: Impressive.

Von Neumann: Everywhere Except the Movies [23:34]

Anthony Alford: That's pretty impressive. He then did a postdoc year at Göttingen under David Hilbert, and then he taught at the University of Berlin. In 1930, he came to America, and in 1933 he was appointed to a position at Princeton's Institute for Advanced Study. You may remember that from our episode about Claude Shannon and Alan Turing; both of them spent a little time there.

Roland Meertens: Yes.

Anthony Alford: It's also a familiar setting from a couple of movies. I love to tie pop culture and movies in. So the IAS can be seen in Oppenheimer and A Beautiful Mind.

Roland Meertens: Does von Neumann have a role in Oppenheimer as well, or not?

Anthony Alford: I'm getting to that. Spoiler alert: the IMDb page for the Oppenheimer movie does not list von Neumann.

Roland Meertens: Oh, because I think you see him at some point, right?

Anthony Alford: Maybe. But they don't credit anyone as playing von Neumann. So the Oppenheimer movie, of course, is mostly about Robert Oppenheimer and his role at the Manhattan Project. But you may remember there's also a plot line with Robert Downey, Jr.'s character, Lewis Strauss, where Oppie becomes head of the IAS. And then A Beautiful Mind is about John Nash. He's an important figure in the development of game theory, which again, spoiler alert, we're going to talk about in a minute. And that movie is mostly set at the IAS. And von Neumann's not in that movie either.

Roland Meertens: Okay. Famously absent from big movies.

Anthony Alford: It's pretty strange. But he was indeed a part of those stories. So he came to the IAS in 1933 and joined the School of Mathematics. And by the way, there's a great book called Turing's Cathedral, written by George Dyson, the son of physicist Freeman Dyson. It talks about some of von Neumann's work there on computers. I'm getting ahead of myself.

Anyway, according to Freeman Dyson, who was also there, he said, "The School of Mathematics has a permanent establishment, which is divided into three groups. One consisting of pure mathematics, one consisting of theoretical physics, and one consisting of Professor von Neumann". So he was definitely in a class of his own.

Now, he had done his graduate work in mathematics, he was making contributions to mathematics, and he was technically on the faculty of mathematics, but he did so much more. There's a Wikipedia page listing the things named after him: it has more than 70 entries. And in the Wikipedia article about him, there are major sections on mathematics, physics, economics, computer science, and defense work.

Roland Meertens: So he did a bit of everything.

Anthony Alford: He did. He was definitely what you might call a polymath. So in physics and defense work, he worked on the Manhattan Project. He was the leading authority on the shaped charges used in the Fat Man bomb, and that's the design we see detonated in the movie.

Talking about existential threats earlier: we've managed to live with nuclear weapons for quite a long time, so hopefully AI will also be controlled. Part of that control, at least in this country, was that nuclear technology was placed under the control of the Atomic Energy Commission, and von Neumann was a member. He also worked on our intercontinental ballistic missile program. You may remember that in Oppenheimer, the movie, Oppenheimer had some regrets about what they had done. I'm not sure von Neumann did, but if he did, he was going to make sure that at least he was in charge.

But it's his contribution to economics that ties him in with John Nash and A Beautiful Mind. Von Neumann invented game theory in collaboration with an economist named Oskar Morgenstern. They did it as a side project during the war. So in their spare time, they wrote a book called Theory of Games and Economic Behavior.

And this provided a mathematical model of two-player, zero-sum games with perfect information; that covers a lot of games, like chess. They also included work on games with imperfect information and games of more than two players. Now, John Nash famously extended game theory with his equilibrium concept for non-cooperative games. That's the famous scene in the movie where they're in the bar; all the great ideas come to you in the bar. So he was in the bar with the other mathematicians.
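[Ed: a toy illustration of the zero-sum setting, with a made-up payoff matrix. With pure strategies, each player can guarantee a "security level"; von Neumann's minimax theorem says that with mixed strategies the two levels always coincide. The matrix below happens to have a saddle point, so they coincide already.]

```python
import numpy as np

payoff = np.array([[4, 2, 3],    # row player's winnings; the column
                   [1, 0, 2]])   # player pays out, so wants them small

row_security = payoff.min(axis=1).max()  # best worst case for the row player
col_security = payoff.max(axis=0).min()  # best worst case for the column player
print(row_security, col_security)        # 2 2 -> the value of this game is 2
```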

Roland Meertens: I also like how, in a world dominated by men, the best way to explain an idea was still to refer to beautiful women. That already is sexist on its own.

The Von Neumann Architecture [28:27]

Anthony Alford: But we're talking about famous programmers. And the question is: was von Neumann a programmer? Well, he worked with computing machinery quite a lot during the war, not only on the Manhattan Project, but also on ballistics and other military projects. He is credited with inventing the merge-sort algorithm. So that's programming.

Roland Meertens: That's interesting. I didn't know that.
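[Ed: since merge sort came up, a compact recursive Python version of the algorithm credited to von Neumann: split the list, sort each half, then merge the two sorted halves.]

```python
def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # merge: always take the smaller head
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```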

Anthony Alford: Yes. In Donald Knuth's book, he's credited with that. But probably we know von Neumann better as a famous computer hardware designer. So he led the project at the IAS to build a computer. And remember that Wikipedia list of things named for von Neumann? There's one called the von Neumann architecture, and this is still the high-level design of most modern computers, which he described in 1945.

So the architecture is: a processing unit with both an ALU and registers, a control unit that includes an instruction register and a program counter, memory for storing both data and instructions, and then external storage and I/O. Now, we don't think of it as unusual today, but early machines like the ENIAC, and the computers used during the war, didn't really have stored programs. The program code was not stored in the machine's memory.

Roland Meertens: Then where was it stored?

Anthony Alford: It was hard-coded physically, often with wires or patch cables. So imagine your Moog synthesizer with all these patch cables: doing that to program a computer.

Roland Meertens: Okay. So each computer would only be like a one-purpose machine.

Anthony Alford: It could be reprogrammed, but you had to physically reprogram it.

Roland Meertens: Oh, okay. That sounds like a lot of work if you just want to start Slack.

Anthony Alford: The idea is kind of similar to the old video game consoles, only there reprogramming meant you swapped out a ROM: same kind of idea. But the program stored in memory, that's the crux of the von Neumann architecture. There's a different architecture called the Harvard architecture, which has separate address and data buses for instructions versus data. And, tangent: I actually worked with one of those back in the '90s when I was doing digital signal processing. Analog Devices had a DSP chip called the SHARC, the Super Harvard ARChitecture. Anyway, I'm getting off-

Roland Meertens: Is anyone still doing different architectures, or not?

Anthony Alford: Probably, but I think the idea pretty much everyone is going with is the von Neumann architecture, where data and instructions are just stored in memory interchangeably. One of the nice things about that is that the program is potentially self-modifying, and so these machines can be considered universal Turing machines. There's no hard evidence, but most people think that von Neumann was actually aware of Turing's work and that it influenced this design.
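[Ed: a toy sketch of the stored-program idea: instructions and data share one memory, and a program counter drives a fetch-decode-execute loop. The instruction set here is invented for illustration.]

```python
memory = [
    ("LOAD", 7),     # 0: acc = memory[7]
    ("ADD", 8),      # 1: acc += memory[8]
    ("STORE", 9),    # 2: memory[9] = acc
    ("PRINT", 9),    # 3: print memory[9]
    ("HALT", None),  # 4: stop
    None, None,      # 5-6: unused
    20, 22, 0,       # 7-9: data, living in the *same* memory as the code
]

pc, acc = 0, 0  # program counter and accumulator
while True:
    op, arg = memory[pc]  # fetch and decode
    pc += 1
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "PRINT":
        print(memory[arg])  # prints 42
    elif op == "HALT":
        break
```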

Roland Meertens: When did he design his architecture?

Anthony Alford: He described it in a paper in 1945. The team began working on it at the Institute in 1946. It was not operational until 1951.

Roland Meertens: So this is after Turing had already published some ideas, but Turing was working on other physical machines during the war.

Anthony Alford: More or less at the same time too, I think. And as they say, the rest is history. I didn't have a lot of time to talk about von Neumann's life outside his work. By all accounts, he was an extremely interesting person, something of a bon vivant: nattily dressed, he loved music, he loved parties. I mentioned he had a great memory; he could recite entire books. He was particularly fascinated with what we call the Byzantine Empire, the Eastern Roman Empire, which lasted until 1453.

Roland Meertens: Byzantine or Bayesian?

Computers are Alien Technology [32:39]

Anthony Alford: I can't come up with a good joke for that; you're going to have to let me get back to you. Maybe we can edit it in. So von Neumann was certainly unique, but interestingly, he was just one of a remarkable group of about a dozen Hungarian-American scientists called The Martians. These were mostly mathematicians and physicists, and they were pretty much contemporaries. Several of them did appear in the Oppenheimer movie. For example, there was Edward Teller, who helped develop the fusion bomb, and there was Leo Szilard, who co-wrote the letter that convinced FDR to launch the Manhattan Project.

So they were called The Martians because of the Fermi Paradox. Enrico Fermi, another key figure in Oppenheimer, once posed the question: why don't we see any evidence of extraterrestrial intelligence? The universe is big; somewhere there must be some intelligence, something that evolved superintelligence; we should have seen them by now. And Szilard joked, "Well, we do have them here. They just call themselves Hungarians".

Roland Meertens: Okay, nice.

Anthony Alford: Edward Teller, whose initials are actually ET, when he heard this, he acted very concerned that someone had leaked this secret information. So to sum up, if it ever feels like computers are alien technology, the answer is, they are.

Roland Meertens: You would wonder: now, with neural networks, can't you create some architecture where you separately load in the weights and then the data, but keep just the inference part fixed, or something like that?

Anthony Alford: If we want to riff on the Oppenheimer theme: everybody in our community probably knows Richard Feynman, and in the eighties he worked briefly at a startup in the Boston area, Thinking Machines, that was making basically hardware neural networks: the Connection Machine.

Roland Meertens: Feynman was working on Connection Machines?

Anthony Alford: Yes.

Roland Meertens: Interesting. This guy worked on everything.

Anthony Alford: We could do a whole podcast series on Feynman, but we don't need to because he's got a book.

Roland Meertens: The book's quite good.

Anthony Alford: "Surely you're joking". It is a great, great read. Anyway, I guess I should say there are people who have tried to do hardware patterns that look more like the structure of neural networks, to more and less success.

Roland Meertens: One recommendation, by the way, for people who maybe studied experimental psychology instead of computer science: I really like the book But How Do It Know? The Basic Principles of Computers for Everyone by J. Clark Scott, which really goes into how computers load things into memory, how they process things, et cetera. So I can recommend that.

Anthony Alford: We'll have to put that in the notes for sure.

Roland Meertens: Cool. Thank you very much.

Anthony Alford: A pleasure as always.

Words of Wisdom [35:35]

Roland Meertens: All right. Words of wisdom. Is there anything you learned in the last couple of weeks, which you want to share, a fun fact, something useful?

Anthony Alford: I didn't learn anything recently, but when you mentioned that the Chinese celebrate Programmers Day on 10/24, it reminded me of a little thing from an Isaac Asimov mystery. Now, everyone knows Isaac Asimov for science fiction, but he also loved to write mysteries. And in one of his mystery stories, somebody says that "Halloween is precisely equal to Christmas".

Roland Meertens: Yes. Because octal 31 is equal to decimal-

Anthony Alford: 25.

Roland Meertens: ... 25. Okay. It's one of these jokes which you have to first compute to then really understand.

Anthony Alford: It's great though.
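[Ed: the arithmetic behind the joke, for anyone who wants to check it: OCT 31 is DEC 25.]

```python
print(int("31", 8))  # 25: the octal digits "31" parsed into decimal
print(3 * 8 + 1)     # the same computation by hand
```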

Roland Meertens: It is a fun joke. In terms of fun facts, I have one fun fact which is not related to computers, but it will upset some developers who are working on taxonomy, logic, or rule-based systems. And that is: do you know what a capybara is, this animal?

Anthony Alford: I do. It's the largest rodent.

Roland Meertens: The largest rodent. Basically it's a massive guinea pig. So the Vatican has classified capybaras as fish so that Christians could eat them during Lent.

Anthony Alford: Okay.

Roland Meertens: So the whole story is that during Lent, Christians are not allowed to eat meat from certain animals. And clergymen in Venezuela between the 16th and 18th centuries were like, "Hey, capybaras live in the water. They have webbed feet, and apparently they taste like fish". So they sent a request to the Pope, and the Vatican granted the request in 1784. So if you're ever in South America and see a capybara, just know that they are classified as fish by the Vatican.

Anthony Alford: Well, again, in the ‘80s, there was supposedly an incident where someone in Ronald Reagan's government declared that ketchup was a vegetable for some government purpose.

Roland Meertens: I think this is way more recent. Wasn't this under the Obama administration or something?

Anthony Alford: Oh, no. Well, it could very well have happened more than once.

Roland Meertens: Pizza is a vegetable.

Anthony Alford: Oh, well, it's also quite likely that it never happened, and it's just one of those stories that you hear, but...

Roland Meertens: I think the White House at some point voted that pizza should count as a vegetable during school lunches.

Anthony Alford: As long as it doesn't have pineapple on it. We could call it whatever we want.

Roland Meertens: It's actually funny because you think, "Okay, this doesn't make sense". Then again, it's the American healthy lifestyle.

Anthony Alford: Hey!

Roland Meertens: But they actually explained that if you look at the amount of vitamins in the tomato sauce, which goes on the pizza, it is actually way healthier than some of the other things which are normally classified as vegetables. So they were like, "You know what? It should count towards the vitamin intake we want the children to have".

Anthony Alford: Why not?

Roland Meertens: But as I said, this is going to upset the developers working on any rule-based system or logic-based systems.

Anthony Alford: Oh, man.

Roland Meertens: Anyway, I think that is the ending of this episode. Thank you very much everybody for listening.

Anthony Alford: It's the end of the season. This was the season finale.

Roland Meertens: This is the season finale. I hope you enjoyed listening to the entire season. And it was a pleasure to make it with you, Anthony.

Anthony Alford: Yes, and you as well.

Roland Meertens: If people are listening and they think, "I want to do something, what do I do now?" I think the best tip is to share this episode with your friends, host listening parties. Ask them which episode was their favorite, share it with your colleagues. Go to the random channel on Slack and say, "Please listen to this".

Anthony Alford: And if you're at QCon San Francisco coming up, come say hi.

Roland Meertens: If you recognize our voices, please do not be shy. Don't think, "Oh, man, I don't want to come say hi". Please come say hi. I would love to actually meet people who listen to this.

Anthony Alford: As would I.

Roland Meertens: Also, for people who don't know how this is recorded, we are sitting in completely different time zones, all alone in our rooms.

Anthony Alford: You make it sound sad.

Roland Meertens: It is very sad. QCon is that time of the year when you can finally meet the people you are actually creating the content for, because normally you have no clue who's actually reading what you're producing or listening to what you're producing.

Anthony Alford: Very true.

Roland Meertens: Show notes are on infoq.com. And thank you very much for listening.

Anthony Alford: All right.
