
AI, ML and Data Engineering InfoQ Trends Report

This is a re-post from August 2021.

Each year, the InfoQ editors discuss the current state of AI, ML and data engineering to identify the key trends that you as a software engineer, architect, or data scientist should watch. We curate our discussions into a technology adoption curve with supporting commentary to help you understand how things are evolving. We also explore what we believe you should be considering as part of your roadmap and skills development. 

Key Takeaways

  • We see more and more companies using deep learning algorithms. We have therefore moved deep learning from the Innovator to the Early Adopter category. Related to this, there are new challenges in deep learning, such as deploying algorithms on edge devices and training very large models.
  • Although adoption is increasing at a slow pace, there are more commercial robotics platforms available now. We see some use outside of academia, and believe more use cases will be discovered in the future.
  • GPU programming remains a promising technology that is currently underused. Besides deep learning, we believe it has more interesting applications.
  • Deploying machine learning in a typical compute stack is becoming easier with technologies such as Kubernetes. We see an increase in tools that automate more and more parts of the pipeline, such as the data collection and retraining steps.
  • AutoML is a promising technology that can help refocus data scientists on the actual problem domain, rather than on optimizing hyperparameters.

Transcript

Introduction [00:05]

Roland Meertens: Hello, and welcome to the InfoQ podcast. On today's special episode, we have a panel to discuss this year's trends in artificial intelligence, machine learning and data engineering. In the trend report, we are going to talk about deep learning, edge deployments of AI models, commercial robot platforms, GPU and CUDA programming, natural language processing and GPT-3, and machine learning deployment using containers and Kubernetes.

Roland Meertens: And last but not least, MLOps and DataOps. But before we do that, let's introduce the speakers for today. My name is Roland Meertens, I'm an editor at InfoQ and a Product Manager at Annotell, where I work on data for self-driving cars. I am joined here today by Srini.

Srini Penchikala: Thanks, Roland. Hi, everybody. My name is Srini Penchikala. I am the lead editor for the AI, ML and Data Engineering community at InfoQ. My recent experience has been in architecting, designing and implementing software solutions for data management applications, especially in the data analytics space, using technologies like Apache Cassandra, Spark, and streaming frameworks like Apache Kafka and Pulsar. It's great to be part of this panel. Thank you.

Roland Meertens: I'm also joined by Anthony.

Anthony Alford: Hello, I'm Anthony Alford. I'm an editor for the AI and ML space at InfoQ. And I'm also a Senior Manager of Development at Genesys, where we're working on cloud-based contact center software.

Roland Meertens: Thank you. I'm also joined by Rags.

Raghavan Srinivas: Hi, my name is Raghavan Srinivas, but I go by Rags. And like everybody else on this panel, I'm also an editor in the AI and ML space. I've done some work in big data in general, and also in the AI and ML space. One of the panels that I wrote very early on was about the difference between DL, ML, AI, big data and so on. And I still find it very useful to fall back on, especially as a developer who's trying to understand the space. I'm glad to be here.

Roland Meertens: Thank you. And we have a special guest, Kimberly.

Kimberly McGuire: Hi, my name is Kimberly McGuire. I'm currently a developer at Bitcraze, which develops the Crazyflie quadcopter. My background is mostly in robotics, and in the last few years, I did my PhD on achieving autonomy on edge devices, namely very, very lightweight quadcopters. Thanks for having me here.

Deep learning moves to Early Adopters [02:25]

Roland Meertens: Great. Thank you all very much for being here. Should we just get started with the first topic of the evening, which is deep learning, which we think is moving to the Early Adopter category? Does anyone have thoughts about the latest trends in deep learning and frameworks?

Anthony Alford: I can start. One of the things that we definitely notice is that there are two major players in the deep learning framework space. And that's, of course, PyTorch, which came out of Facebook, and TensorFlow, which came out of Google. An interesting trend alongside that is that PyTorch seems to have become the dominant player in the academic research space, while TensorFlow is the leader in the commercial or enterprise space.

Roland Meertens: Yes, you're right. But does that mean that if you want to learn more about deep learning, you should start with PyTorch and then later move to TensorFlow? Or should you have a preference for one or the other?

Anthony Alford: I don't know that a preference is necessarily required. It looks like both frameworks stay fairly even in terms of features and roadmaps. I think either one is probably okay. That said, I do hear people say that PyTorch is a bit easier to pick up. And it may come down to what your requirements are in terms of production performance, being able to operationalize, and maybe scaling your training and your inference.

Raghavan Srinivas: A related question there is: if I'm a developer and I've already made my investment in PyTorch, is there any way I can transfer my models to TensorFlow, or vice versa? There are ongoing efforts, I think from Facebook and Microsoft, like the ONNX framework. How far along is it? And is it really feasible to go from one model format to another? Because I really don't want to be locked into one, no matter how good or bad that particular platform or framework is, right? I want to be able to simply move my models around. Is that something which is theoretically possible, but not practically feasible? Or where are we in the evolution?

Anthony Alford: I think that's a good point. You mentioned ONNX, and that is definitely one of the things that we see out there. I think it's especially popular for the case where you've trained a model and maybe you want to deploy it on different platforms. And that's where we get to talking about AI at the edge: being able to deploy on mobile, versus an edge device, versus in the cloud with a lot of resources. ONNX is definitely something that we see in that space.
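To make the ONNX workflow Rags and Anthony discuss concrete, here is a minimal sketch of exporting a PyTorch model to the ONNX format; the stock torchvision ResNet is an illustrative choice, not something the panel names.

```python
# Export a PyTorch model to ONNX so other runtimes can load it.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)
model.eval()

# An example input fixes the input shape for the exported graph
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18.onnx", opset_version=11)
```

The resulting .onnx file can then be loaded by ONNX Runtime, or converted further for mobile and edge runtimes.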

Roland Meertens: Maybe something I also see is that more and more people, or more and more companies, are collecting and storing their data in such a way that it is easy for deep learning to learn from it. In the past, you had different algorithms for different types of data. Nowadays, a lot of machine learning projects assume that you're going to use deep learning, so people are storing data in a way that is easy for deep learning projects to use. Do you guys also see that?

Anthony Alford: I haven't really looked at anything in particular there. Although I know that, for example, frameworks like TensorFlow and PyTorch are building abstraction layers to help in that case, and they're including a lot of public data sets in their distributions as well. So, an abstraction layer over the data set is definitely something that's being built into these frameworks.
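As a small illustration of the abstraction layers Anthony mentions, here is a sketch using torchvision's bundled MNIST loader; the dataset choice is ours, purely for illustration.

```python
# Load a public dataset through the framework's built-in abstraction.
import torchvision
from torch.utils.data import DataLoader

train_set = torchvision.datasets.MNIST(
    root="data", train=True, download=True,
    transform=torchvision.transforms.ToTensor(),
)
loader = DataLoader(train_set, batch_size=64, shuffle=True)

images, labels = next(iter(loader))
print(images.shape)  # torch.Size([64, 1, 28, 28])
```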

Roland Meertens: And then there's the size of the data sets. I think you've been writing a lot about this lately, Anthony: training on massive data sets just gives you better results nowadays, and more and more companies are leveraging that to get better models into production.

Anthony Alford: That's right. That's one thing that OpenAI found when they researched the power laws of scaling: surprise, more data gives better results. What we're seeing is that all of these frameworks are building in support for distributed training. So distributed data-parallel training, as well as model-parallel training, so that you can train even bigger models.

Anthony Alford: We're starting to see a lot of these frameworks add tools for large-scale training to their ecosystems, such as Mesh TensorFlow for training TensorFlow models on a very large cluster of machines. Of course, Microsoft has been working on their DeepSpeed framework for PyTorch, and Facebook has their own called FairScale. We're seeing a lot of that: as the frameworks get more mature, they're starting to spread out and look at ways to scale the training.

Roland Meertens: So we can basically say that deep learning is moving from Innovator to Early Adopter on the topic map? But then the new topic is large-scale deep learning. For example, another framework is Horovod, with which you can just train on many GPUs and many machines at the same time. You saw that in the presentation by Andrej Karpathy, where Tesla bought thousands of GPUs to do all their training on. I wonder how many other companies have such massive server farms for training their models?
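For readers who want to see what the Horovod approach Roland mentions looks like, here is a minimal data-parallel training sketch with PyTorch; the tiny linear model and the learning-rate scaling are our illustrative choices, not the panel's.

```python
# Data-parallel training with Horovod: launch one process per GPU.
import torch
import horovod.torch as hvd

hvd.init()
torch.cuda.set_device(hvd.local_rank())

# A tiny placeholder network, purely for illustration
model = torch.nn.Linear(10, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Average gradients across all workers on each step
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters()
)
# Ensure every worker starts from the same initial weights
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

# Run with, e.g.: horovodrun -np 4 python train.py
```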

Raghavan Srinivas: So, one of the other things with respect to the delineation of these frameworks is: is it better to work with pre-trained models that you don't need to train, or models that you have to train yourself? Because typically, it again goes back to how much data you've got, right? You really need a large amount of data to be able to train the model. And just managing that data is a big undertaking, right? Is there any advice that we can give developers with respect to pre-trained models versus models that you train yourself, anything to note there?

Anthony Alford: I think that's a very good point. In fact, if you look at these large language models like GPT-3, which we're going to speak about in a minute, the P stands for pre-trained. So, the use case for a lot of applications is: you take this pre-trained model, trained with resources and data sets that you don't have access to, and you fine-tune it for your application, or use it as an input into something that you fine-tune. And I do think that is definitely a trend that we're seeing now, not only for natural language but also for vision: we're seeing pre-trained vision models that you can take and fine-tune on your specific application.
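Here is a minimal sketch of the fine-tuning pattern Anthony describes, using a pre-trained torchvision ResNet; the ten-class head is a made-up example task.

```python
# Fine-tune a pre-trained vision model: freeze the backbone, retrain the head.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False  # keep the pre-trained weights fixed

# Replace the final layer for a hypothetical 10-class task;
# only this new layer will be trained.
model.fc = torch.nn.Linear(model.fc.in_features, 10)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```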

Raghavan Srinivas: And especially for somebody who's starting out new in this field, you don't want to be dealing with just about every challenge. At least you can keep that part constant and then work on your own problem domain, right?

Anthony Alford: Absolutely.

Roland Meertens: On the data side, something I think is interesting is that Andrew Ng is now hosting a data-centric AI competition, where he takes a fixed model and says: this is the model architecture you're going to use. And then the challenge becomes: how do you make the best data set to train this model on? I think it's really interesting, because we're getting to programming 2.0, where instead of defining an algorithm and saying, "If this, then that," you basically show examples and say: if you see this example, then this class, or this action, has to be taken.

Roland Meertens: I think that's a really interesting new way of programming. We just program by example, by collecting massive data sets with all the examples we want to capture. And deep learning just seems to be able to learn and capture it. In that sense, maybe the only major challenge remaining is deploying it, whether in your data center or on the edge. Shall we talk a bit about that?

Edge deployment of deep learning applications is a challenge [09:52]

Anthony Alford: I was going to say that's a very good segue.

Kimberly McGuire: Thank you, Roland. Yes, edge deployments. If you just look at the number of papers from the last year that mention edge AI or TinyML or something like that, that has definitely increased a lot. So, I think this is definitely something to talk about as well.

Kimberly McGuire: At Bitcraze ourselves, we're working on something like this: a very lightweight add-on that is able to run deep learning and deep neural networks. But I must say that for users who are just starting out with it, it is definitely very much of a challenge. If you've never had any experience with anything like TensorFlow before and you start with an embedded system right off the bat, that's usually a very big gap to bridge.

Kimberly McGuire: I see that a lot of people are definitely on the edge of that gap. And I guess there are differences between the devices: the ones you can run on your phone, or perhaps a Raspberry Pi, or the even smaller microprocessor-style ones, or smaller still. Can those even already run some form of AI? You definitely see that scale going down. What do you think, Anthony?

Anthony Alford: Well, I was going to ask if you're seeing the use of these frameworks like Rags mentioned, ONNX, or maybe Caffe? Those are attempts to take models that were trained, say, on very large clusters or in the cloud, and deploy them on the edge. Is that something that you're seeing people take advantage of?

Kimberly McGuire: Well, I'm not sure if I've seen those types of models. Mostly, I would say, people take something that has been implemented on MobileNet, for instance, and then actually quantize it down even smaller.

Anthony Alford: So, that was what I was going to ask next. Are you seeing people do things like pruning and quantization, where you take a model, or maybe distillation? So, you're definitely seeing that quantization, then?

Kimberly McGuire: Oh, yeah, definitely. At least that's the case with the GAP8, a chip that I've been working on myself for the company. You have this TinyML, TensorFlow Lite model, but you still have to do five extra steps in order to make it work on an extreme edge AI device. So, there are more and more levels of extremes: you have to do more quantization. Like Anthony already said, maybe it goes from 64-bit floats to 32-bit floats, perhaps.

Kimberly McGuire: And then you have to make it even smaller, down to eight bits, so that the quantization is really super efficient. And you also see another step that takes the model that you have quantized and distributes it over the memory that you have. If you have a multi-core system, of course, you have to cut it up and implement it in such a way that you're able to run it in parallel.

Kimberly McGuire: At least the GAP8 has that. I'm not sure about the other types of TinyML microprocessors that exist out there. I think, for instance, the Coral, which is also a Google product, also has multiple cores. But, for instance, the ESP32 from Espressif has only a single core, and it is already able to do some simple face detection. And that is only the "Hello, world" example. So, that's what is going on.
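As a rough sketch of the quantization steps Kimberly walks through, here is post-training int8 quantization with the TensorFlow Lite converter; the saved-model path and calibration data are hypothetical placeholders.

```python
# Convert a trained model to an int8 TensorFlow Lite model.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("my_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_data_gen():
    # Yield a few sample inputs so the converter can calibrate int8 ranges;
    # calibration_batches is a placeholder for your own sample data.
    for batch in calibration_batches:
        yield [batch]

converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

tflite_model = converter.convert()
open("model_int8.tflite", "wb").write(tflite_model)
```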

Anthony Alford: For these models that have been quantized or otherwise shrunk, are you seeing a big loss in performance or accuracy? For example, with the face detection, does the performance drop when you have to shrink these models? Or is it good enough that it doesn't really matter?

Kimberly McGuire: I would say it's usually good enough, because you have to realize that on these small systems, especially with the AI-deck I was talking about, the model has to fly; it has to be carried by a 20-gram drone, let's say. So, they cannot really have a very big camera to begin with. If you shrink down the resolution of the camera itself, I don't think the quantization will really be noticed; maybe at eight bits a little bit, at 16 bits definitely not. I think that also says something about the type of sensors such an edge device is able to handle.

Srini Penchikala: Hey, Anthony and Kimberly, I have a question for you. So, outside of IoT and mobile applications, do you guys see any other use cases where AI and ML are being done at the edge?

Kimberly McGuire: Yes, I guess the examples I can give are on small drones. That's at least the research that I see: work that, for instance, lets a drone recognize the shape of a road and follow it. And that's already from a year ago, so I guess by now it will only have grown in terms of capabilities.

Kimberly McGuire: And there was also a gas-seeking drone that was released somewhere last year, and I think it was even a simple neural network on an STM32 processor. That was also pretty impressive. So, those are the examples I have. They're mostly from the robotics field that I'm currently working in.

Srini Penchikala: Drone technology has been evolving a lot in recent years.

Kimberly McGuire: Exactly. They're getting smaller, and they're becoming smarter and smarter as well. But there's still far to go.

Srini Penchikala: I think we should call it "AI/ML on the fly," right?

Anthony Alford: Srini, he's got the best joke.

Raghavan Srinivas: I know we're going to get to Docker and Kubernetes in a bit, but do you think Kubernetes is moving to the edge? That seems to be where the community is heading. Is that going to help here, or does it not really matter for AI and ML at the edge?

Anthony Alford: I don't know. Kimberly, do you run containers on these drones? Do they have the resources?

Kimberly McGuire: No, they don't have the resources to do that. I think that's mostly for the drones that are able to run some Linux on there. Ours run really simple C-based firmware, and it's a little bit too complicated to build a framework on there that can actually do that, to put a container on there.

Kimberly McGuire: I use a Docker container myself for the full toolchain, to even be able to quantize these kinds of things in the first place, because usually these tools are not very easy to install; I always have a lot of trouble. That's why I use containers for that, but definitely not on the drones or the chips themselves.

Roland Meertens: At the QCon conference in November, Toshika gave a talk about Audi and their self-driving cars. And she said that they are running Docker on these Audis when they are testing things. I guess that's as much on the edge as you're going to get.

Kimberly McGuire: You mean the Audi cars?

Roland Meertens: Yes, the cars. When they are building prototypes of cars at Audi, they are running Docker on the vehicles themselves, which I thought was an interesting fun fact we discovered during the Q&A.

Srini Penchikala: I agree with you. Yes.

Kimberly McGuire: Where does the edge start?

Srini Penchikala: I agree with you guys.

Kimberly McGuire: Or the edge ends and the cloud starts?

Roland Meertens: In the past, it was really cool to deploy things in the cloud. And nowadays, it's really cool to have things running on your own laptop, and you can go on the edge and be really cool.

Anthony Alford: If there's no wire, it's on the edge, right?

Kimberly McGuire: Yes, if it's running on a one cell LiPo battery, that's only... Something like that.

Srini Penchikala: Yes, I agree with you guys. I think the next frontier for Docker and Kubernetes is edge computing, right? I mean, they are popular, but they're still traditional, heavyweight infrastructure right now. I think we definitely need a Docker Lite or Kubernetes Lite or whatever. I know there is a product called KubeEdge. Rags, you may have heard about it.

Raghavan Srinivas: K3s, it's affectionately called K3s because Kubernetes is quite [inaudible 00:17:02]. But I think the points that have been made in the panel are still relevant, in the sense that if you're running it on an Audi car, you have plenty of space to run a lot of containers. But if you're running it on a drone, probably not. Or if you're running on an even smaller device, it's going to be a big challenge. But I think, Srini, you're right that we will see a lot of uptick with Kubernetes on the edge, and that is going to automatically make its way into just about every other field, including AI and ML, I think.

Srini Penchikala: Yes, definitely. The Linux Foundation has been doing a lot of work in this area, Rags. They have projects like Akraino and also KubeEdge, which is different from K3s. They're still in the initial stages, but I believe they are going to address edge computing and containerization together in the future.

Roland Meertens: Maybe one thing which is interesting is that you also see that people or companies are adapting their hardware. Apple's M1 chip has special tensor accelerators, and I think that iPhones nowadays have special chips to accelerate neural networks. So, companies are adapting their hardware, their products, to support machine learning, which I think is a massive difference from before the deep learning hype, when you just had to work with the hardware you had. I think we are going to see a lot more there in the coming years. But maybe that brings us to the next topic. Shall we talk a bit about commercial robot platforms? And where do we see robots coming into our lives?

Commercial robot platforms for limited applications have become more popular [18:38]

Kimberly McGuire: Yes. And in what sense? In the household? I guess: where do we want the robot to be a part of our lives, and where do we need the intelligence?

Roland Meertens: I think in the households nowadays, many people have a Roomba already. Or at least I have one. And my friends have one because it's just very convenient.

Kimberly McGuire: Or a Roborock?

Roland Meertens: Yes. So, there are many different brands, let's not be sponsored by one yet. You do see that these devices are getting more intelligent. But also, I think last year was really the year of the Spot robot by Boston Dynamics, which is now being used by police departments, is being used by the army, and is being used for surveillance. So, you do see that there are now more and more robotic platforms out there doing useful tasks, also used by traditionally non-robotics companies.

Kimberly McGuire: But I guess that's also because Spot used to be very inaccessible for the research communities. Nowadays, I think it's Clearpath that offers Spot as well: Clearpath obviously offers solutions to research institutes, and they now also offer the possibility to get a Spot. So, that's actually pretty cool.

Srini Penchikala: Speaking of robots, I'm excited to see what the upcoming Olympics are going to show us, how they're going to use robots. It's going to be in Tokyo, and obviously, Japanese companies have been using robotics. I'm curious to see what we're going to see as part of the Olympic Games starting later this week.

Roland Meertens: It would be nice to have a robot Olympics, where you just [inaudible 00:19:56] robots, or the highest-jumping robots.

Raghavan Srinivas: And I think with robots, you don't need to worry about COVID-19, right?

Kimberly McGuire: Yes, that's true. They don't get the virus, they get different viruses. 

Raghavan Srinivas: They get different viruses. One other thing, and I don't know if this is opening up a can of worms, is self-driving cars. Is that something that we want to talk about here? Or do we have another topic for it?

Kimberly McGuire: I guess they are being worked on at research institutes. But my feeling is that the field has taken a step back now. Also, Tesla said that it was actually much more difficult than they initially thought. So, that maybe sets back the trust that people have in autonomous driving, which is too bad. But I think Roland is probably more of an expert on this than I am.

Roland Meertens: Yes, in terms of self-driving, I think we are seeing massive successes. We see that, for example, Waymo has driverless cars on the road now, and other companies are testing driverless cars as well. So, that means you can already have very high confidence in the driving capabilities of your car. And I think the next challenge is the same old problem in software engineering, and that is scaling. Can you scale to a larger area with your car? And can you prove that it's safe?

Roland Meertens: So it's again about the traditional topics InfoQ normally writes about: scaling software, and proving and making sure that it always works. I think those are now the challenges again in self-driving. If we manage to solve that, we unlock so many things; if we can have delivery robots going from point A to point B, I think that's great. But it's about scaling and making sure it works at massive volumes. Shall we move on to GPU programming?

GPU and CUDA programming allows parallelisation of your problem [21:53]

Raghavan Srinivas: Absolutely. So here, we are talking about CUDA. This has been around for a while, so this will probably be short. Essentially, CUDA is the way to program GPUs. And GPUs, for those who have not heard of them at all, are basically different from CPUs: CPUs use the von Neumann architecture, whereas GPUs are intended to be massively parallel at the processor level. To deal with that parallelism, or to be able to take advantage of it, you have to program in a different way.

Raghavan Srinivas: If you're doing vectorization, you need to be able to specify how many threads and how many blocks you use, and how you do that in parallel. That's really what CUDA programming is all about. And there are plenty of resources from Nvidia, who basically came up with this concept of the GPU, that talk about how you can take your regular program and adapt it to CUDA, to be able to run it on GPUs.

Raghavan Srinivas: CUDA does get some flak; there are things that it could probably do better. For example, the developer needs to know the number of blocks and needs to have an idea of how to set this up. It's not easy for programmers to pick this up just like that, although there are plenty of resources, like I said.

Raghavan Srinivas: And despite all that, I think CUDA is here to stay, because GPUs are everywhere; if you don't have access locally, you can get access on just about any cloud that you want. And you can do some really, really cool things with GPUs, obviously, and CUDA is really the way to go. But like I said, the evolution of the language itself has stayed at about the same level for a long time, and maybe there is some opportunity to come up with something different there. Those are my thoughts. What do you guys think?

Roland Meertens: The way I heard someone describe CUDA and parallel programming is that if you want to work the land, you could either use a cow or a thousand chickens. A CPU is like the cow: it can do very heavy things, one thing at a time. The chickens can each do only tiny things, but they can do a thousand things at a time.

Roland Meertens: And I think that is a really good metaphor for CUDA programming. You see that Nvidia is releasing larger and larger GPUs. You can now get A100 instances on Amazon, with I don't even know how many CUDA cores, and a massive amount of memory. I see more and more people moving whole databases into GPU memory.

Roland Meertens: So, you can do massive operations on your whole database at once. And I think the biggest success so far in terms of CUDA programming is really deep learning, with TensorFlow and PyTorch, which use it heavily. Has anyone seen any other really interesting applications of GPU programming?

Raghavan Srinivas: No, I think those are good. And I think the cow-and-chickens analogy is pretty apt, in the sense that it's very hard to write bad programs for a CPU, because the compiler is going to organize, reorganize, optimize, and re-optimize everything. But it's very easy to write bad programs for the chickens, because the tools are not there yet. I think there's definitely an opportunity there to optimize it better.

Roland Meertens: I do see that people are writing Python bindings, which already makes it a bit more accessible. For people listening here, I would say: don't get discouraged, because writing programs in CUDA is, I think, the most fun I've had since I started playing around with deep learning. It is really satisfying when you finally have a program that is doing a thousand things at the same time.
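To make the thread-and-block model concrete, here is a minimal vector-add kernel using Numba's CUDA support, one example of the Python bindings Roland mentions; the library choice is ours, not the panel's.

```python
# A thousand chickens at work: each thread adds one element.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)       # this thread's global index
    if i < out.size:       # guard threads past the end of the array
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.ones(n, dtype=np.float32)
b = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block  # cover all n elements
vector_add[blocks, threads_per_block](a, b, out)
print(out[:3])  # [2. 2. 2.]
```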

Srini Penchikala: Also, Roland, I think GPUs are definitely getting more attention in vision-related machine learning use cases, like image recognition or video analytics. That's where I think the power really comes through.

Roland Meertens: And also, the whole way of processing data in a parallel fashion, maybe even specific to deep learning applications, you see that more and more. And we already talked about this before, but more and more devices have special chips to run these things: tiny GPUs on which you can do lots of parallel operations, which is fantastic.

Srini Penchikala: I want to ask a question, maybe to Anthony and Kimberly, or maybe Rags also. We mentioned self-driving cars earlier. Do you guys see these autonomous vehicles being equipped with GPUs to process the data more efficiently?

Anthony Alford: Well, almost certainly, I think so. Compared with the edge devices we mentioned, something like a car has way more electrical power to support something like a GPU. I certainly think that if autonomous vehicles are going to be solved, we will most likely solve it by throwing compute power at it.

Raghavan Srinivas: But I think the compute power is not going to be all at the edge, for sure. It's going to be centralized as well. In other words, one of the big things to think about with self-driving cars is how quickly the alerting mechanisms from everywhere else, other than just that edge device, can help you, or the other way around.

Raghavan Srinivas: I think, Srini, to answer your question: GPUs are going to make an impact just about everywhere, but especially in the context of self-driving cars. I'm sure more and more of the intelligence, the central intelligence, if you will, is going to run on GPU power.

Raghavan Srinivas: And one other thing, changing context a little bit: we talked about Docker and Kubernetes, and Docker and Kubernetes also make it a lot easier to incorporate GPU or CUDA programming, because there is support for CUDA with Docker, and there's a toolkit that really helps you with that. It makes it a lot easier.

Raghavan Srinivas: If you're smart like Roland, you're probably going to do CUDA. But if you're like me, then what I'm going to do is I'm going to use Docker and incorporate the CUDA code from there and still be able to get the same advantages.

Srini Penchikala: Yes, sounds like that's "CUDA as a Service," I guess.

Raghavan Srinivas: CUDA as a service. Yes, if you want to think about that.

Semi-supervised Natural Language Processing performs well on benchmarks [27:48]

Roland Meertens: So, let's maybe talk a bit about natural language processing and GPT-3. Anthony, you have written a lot of articles on this topic. What do you think is currently going on in the natural language processing space?

Anthony Alford: Well, one thing that's certain is that people have ditched recurrent neural networks like the LSTM, and the transformer is the clear winner, especially very big transformers. GPT-3 is not even the biggest now; everybody's trying to one-up GPT-3 with their own model. So, the next trend that we're going to see, I think, is attacking problems like sequence length.

Anthony Alford: So right now, these transformer models have a maximum sequence that you can input into them; basically, the longest sentence that you can write as the input. The other issue is that they get a lot of criticism for the resources that are needed to train these large models. Nobody knows for sure, but the estimate is that it cost maybe $12 million to train GPT-3. And that was only one of the models that OpenAI trained for that research project. So, these models cost a lot of money, and that money is a proxy for energy consumption. That's a big criticism.

Anthony Alford: The other one is that now you've got this giant model with billions of parameters, and inference speed is an issue. When you input something into it at runtime to try to get an output, it can take a while. If you're working on a real-time application, maybe that's not fast enough.

Anthony Alford: And then one more topic is cross-language applications. We're seeing a lot of research there, where Facebook and Google and the other big players are training these natural language processing models on multiple languages at one time, and it turns out that actually helps the performance. So, these are all some interesting trends that I've noticed.
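As a small illustration of the sequence-length limit Anthony raised above, here is a sketch using the Hugging Face transformers library, an assumption on our part; the podcast names the problem, not this library. GPT-2 stands in for GPT-3, which is not openly downloadable.

```python
# Inspect and respect a transformer's maximum input length.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
print(model.config.n_positions)  # 1024 tokens: GPT-2's maximum sequence

# Longer inputs must be truncated to fit the model's window
inputs = tokenizer(
    "A very long document ...", truncation=True,
    max_length=model.config.n_positions, return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```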

Roland Meertens: Well, I think what's especially interesting with GPT-3: you're talking about multi-language processing, but it also handles multiple tasks?

Anthony Alford: Yes.

Roland Meertens: So, you have one model and it's like an API: a natural language API for whatever you want it to be. You can use it to make chatbots, which is one of the things I did. But at some point, I also used GPT-3 to correct my spelling when I was learning Swedish. I would enter what I thought was a Swedish sentence, and it just corrected my grammar.

Roland Meertens: If I had to build that in the past, it would have been very difficult. But with GPT-3, you just give it three examples of correct sentences, and then it understands what it has to do with the next sentence you feed it. I think it's just amazing that you have one model, not one per language, not one per application.
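Here is a rough sketch of the few-shot pattern Roland describes, written against the OpenAI completion API as it looked around the time of this podcast; the Swedish examples are made up for illustration.

```python
# Few-shot prompting: show a few corrections, then ask for the next one.
import openai

prompt = """Correct the Swedish sentence.
Input: Jag har en hund stor. Output: Jag har en stor hund.
Input: Hon springa snabbt. Output: Hon springer snabbt.
Input: Vi gick till affären igår. Output:"""

response = openai.Completion.create(
    engine="davinci", prompt=prompt, max_tokens=20, temperature=0
)
print(response.choices[0].text.strip())
```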

Anthony Alford: Another thing is multiple media types, or multiple modes. Lately, we've been seeing language-plus-vision models, and they're all using this transformer. In fact, the transformer is now being used for things besides natural language; it's being used for vision, in vision transformers. And we're seeing very powerful and interesting results from these models that combine language and images. For example, OpenAI, again, trained a generative model where you could give it even a nonsensical text input, like "an avocado in the form of a chair," and it'll create several pictures that really do look like an avocado in the form of a chair. It's very surreal. So, they called it DALL-E, after Salvador Dali, the surrealist painter. It seems like the transformer model has taken over a lot of applications besides just NLP.

Roland Meertens: Especially the fact that these transformers work for natural language processing and for image processing. What I'm seeing is, and we already said this at the start of the podcast: if you just have more data and more compute power, your understanding of the world through the eyes of a neural network becomes better. GPT-3 is already way better than GPT-2 and the first GPT, in terms of understanding and in terms of text generation.

Roland Meertens: It really makes me wonder what's next. I think you also wrote an article, Anthony, about Google, who managed to get 3 billion images into a semi-supervised neural network?

Anthony Alford: Right.

Roland Meertens: And apparently, that really improves the recognition performance of that neural network. It's crazy.

Anthony Alford: And another thing in that same direction: as the models get better and better, it's going to be hard to measure how good they are, because there are already models that outperform humans on certain benchmarks. For example, on the SuperGLUE benchmark, there are natural language processing models that do better than people.

Anthony Alford: That said, maybe we need to realize that benchmarks are not the full story. Benchmarks are good for comparing natural language processing models. But just because something is great at a benchmark, we're not done. Even things that perform well on benchmarks may fail spectacularly in a lot of ways. And I don't think we need to think very hard for some examples, like Microsoft Tay; I don't know if we should bring that up.

Anthony Alford: These models are very good at the benchmarks, because that's the target during training: to perform well on the benchmark. But it doesn't tell the whole story. They're quite good, and they're quite good in a lot of cases. But in some cases, they may not be.

Raghavan Srinivas: I completely agree. And let me look at it from the other side. In the early '90s, I was at Digital; most of the people listening to this podcast have probably never heard of this company. But they had the fastest chip at that time, and they wanted to show off what you can do with natural language processing.

Raghavan Srinivas: And a product from a company called Dragon was used at that time, which is now really the underpinning of much of today's voice recognition. But to get to the point: when I talk to Alexa or Siri or whoever, the chance of it getting things right is maybe 50%. My kids think I have an accent. I don't.

Raghavan Srinivas: But I think it goes back to what you were saying: it might work fine on benchmarks, but in practical cases, can we call it done yet? Obviously, it's gotten much, much better; maybe from 60 to 80%, maybe 85. But it's still not quite matching the benchmarks. Is it a lack of data? Is it a lack of compute power? A lack of what?

Roland Meertens: I think it really depends on what application you're looking at. Because right now, if you look at the generative capabilities, the conversational capabilities of GPT-3, or the text-writing capabilities, I sometimes already struggle to know whether something was written by a generative AI or by a human with mediocre language skills.

Anthony Alford: I was going to say that it's not the machines passing the Turing test. It's the humans failing it.

Roland Meertens: I think we even had articles submitted to InfoQ where we as editors were wondering whether someone was just getting started with machine learning and had problems expressing themselves, or whether it was an article generated by GPT-3, because it was switching from topic to topic pretty quickly, being quite superficial, making some claims but not really backing them up, even when they were true. And I didn't know if it was generated or not. Like you say: is it a human who's not good at writing, or is it computer-generated? I think that will be a way bigger problem in the future than we're expecting.

Kimberly McGuire: Could I perhaps draw a parallel? I don't know if it makes any sense, but it makes me think of AlphaGo: at one point, they ran out of good players for AlphaGo to play against. So, they made it constantly play against itself, improving it by having it win against earlier versions. But with language that is difficult, because what is the game? What are the rules by which you win? Like, "Oh, I've written a very good book." It's so complex, because one person will think one book is better than another.

Roland Meertens: Especially for books: I've been reading some older books lately, and for the more poetic ones, I wouldn't see the difference between the book and something generated by GPT-3. It actually ruins books for me.

Kimberly McGuire: Or poetry. It ruins poetry for you.

Roland Meertens: Yes, let's say GPT-3 ruined poetry for me, because apparently, you can generate it.

Kimberly McGuire: And soon it will also ruin arts and painting.

Srini Penchikala: Oh, yeah.

Kimberly McGuire: No, but this is something that crossed my mind.

Srini Penchikala: I have a question for y'all. I've been reading about this new GitHub Copilot, the AI pair-programmer tool. It looks pretty interesting. I have not tried it yet, but I don't know, Anthony, maybe you have. Does it use something like GPT-3 behind the scenes, or under the hood, or some similar technology?

Anthony Alford: I have not looked into it that closely, but I imagine it does. Most of these text generators do something like that: you give it an input sequence, and it tries to complete the sequence for you.

Srini Penchikala: Yes, this one seems to be more powerful. I know it basically completes the code for you, given the context and some rules.

Roland Meertens: I think it is powered by exactly the same architecture as GPT-3. So far, I didn't have a chance to try it. So, Nat Friedman, if you're listening, please give me access.

Kimberly McGuire: I also signed up for that same Copilot, but I don't have access to it yet.

Roland Meertens: I'm sure Nat Friedman is listening.

Srini Penchikala: Yes, it's in a limited preview only right now. There you go.

MLOps and DataOps allow easy training and retraining of algorithms [37:19]

Roland Meertens: Should we maybe move on to one of the remaining, or seemingly remaining, challenges in machine learning: deployment?

Srini Penchikala: Sounds good. I'm going to lead the discussion there. I know Rags already mentioned containers and cloud platforms like Kubernetes. I've definitely been doing a lot of work in this area lately. Kubernetes started as a cloud platform for application deployments: mainly web apps, mobile apps, and APIs that are stateless in nature and compliant with what they call the 12-factor architecture. Those were the first candidates to be deployed to Kubernetes.

Srini Penchikala: The advantage of Kubernetes is that it supports all kinds of workloads, whether it's application functionality or data processing: batch processing, streaming data processing, or machine learning. It has support for these comprehensive use cases, and no wonder it has become the standard cloud platform.

Srini Penchikala: All the vendors are supporting it, including Amazon, Microsoft, Red Hat, you name it. Everybody has Kubernetes in their infrastructure roadmaps. After the applications, the databases started using it: Cassandra and CockroachDB, for example, have taken advantage of the cloud platform. You can deploy these databases as containers onto the cloud platform and scale up and down, and it comes with built-in monitoring and all that good stuff.

Srini Penchikala: The so-called Database as a Service has become a reality with Kubernetes and other container orchestration platforms. And now I'm seeing more and more machine learning solutions getting onto the Kubernetes platform. We have Apache Spark, which can be containerized and run as a Kubernetes-hosted application. And the Kubeflow framework has been getting a lot of attention. I want to hear from the others: have you guys used anything like AWS SageMaker or OpenShift? And what use cases are you using Kubernetes for in the machine learning space?
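As a sketch of the kind of stateless, containerizable workload Srini describes, here is a minimal model-serving endpoint; FastAPI, the file name, and the joblib-saved model are our illustrative assumptions, not tools named by the panel.

```python
# A tiny prediction service that can be containerized and run on Kubernetes.
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # placeholder path to a trained model

class Features(BaseModel):
    values: List[float]

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}

# Run with: uvicorn main:app --host 0.0.0.0 --port 8080
```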

Raghavan Srinivas: Just to add to what Srini talked about: if you think about DevOps, just about everybody is doing DevOps, because even though you may not have a full-fledged CI/CD lifecycle, you still gain a lot of advantages in terms of being more efficient, more productive, and so on.

Raghavan Srinivas: And the main thing about the DevOps lifecycle is that there is a constant feedback loop: you deploy your application, you monitor the application, and depending on what is happening with the application, you go back, automatically make the changes, and deploy again. And if we can somehow keep the human out of this loop, things are going to be a lot faster.

Raghavan Srinivas: And the same idea is what MLOps is about as well: to have that same cycle, and to take advantage of how DevOps has taken the enterprise by storm. So, maybe we can do the same thing with ML as well. Obviously, there are a lot more moving parts with ML than with DevOps, because there is this concept that you first have to train the model and then deploy the model. And then, if some parameters change, if the data changes, you go back, retrain the model, and deploy it again.

Raghavan Srinivas: And again, the idea is to be able to look at how efficient your model is, how efficient your algorithm is, and to be able to constantly tweak this, and so on. I think Kubeflow, which you mentioned, Srini, is great for orchestrating some of these. And the nice thing about Kubeflow, again, is that it can run anywhere Kubernetes runs, and it can scale, and so on.

Raghavan Srinivas: With respect to 12-factor apps, they have evolved to be more data-driven apps, and that makes it even more challenging from an ML and MLOps perspective. I think there is a lot happening in this area. I think Databricks also has one, what is it called? Some flow.

Srini Penchikala: MLflow or something, right?

Raghavan Srinivas: Yes, something like that. And it's the same idea as well: to be able to deploy your models and train them, and to do it as automatically as possible. I think this is an area where we'll see a lot of activity, at least in the next couple of years.
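For the curious, here is a minimal sketch of experiment tracking with MLflow, the Databricks project referred to above; the model and metric are toy examples of our own.

```python
# Track parameters, metrics, and the trained model with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    mlflow.log_param("C", 1.0)
    model = LogisticRegression(C=1.0, max_iter=200).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # persist the model artifact
```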

Srini Penchikala: How about you, Kimberly? I know you have some experience with containers, right?

Kimberly McGuire: Yes, but not from the point of view of machine learning. I have seen, though, something that is actually pretty recent: for ROS, they released a Docker image that we are actually running on a Raspberry Pi at our office for testing packages and seeing how things work. Every time somebody commits to one of our repositories, it runs, also on the ROS side of things.

Kimberly McGuire: And that's actually quite handy. At least it makes it easier to get started: you don't have to set up the entire environment, you can just get the Docker container and get started. But in terms of machine learning, I haven't gotten that far yet; I've only just discovered containers myself, so I'm getting there. Maybe it's also good to keep that in mind when we talk about education in machine learning today. But let's see if we get to that topic.

Srini Penchikala: I think all the technologies we talked about today, GPUs, edge computing, and all that, come together with MLOps and Kubernetes. They are two additional dimensions providing speed and efficiency to application developers. Kubernetes provides a scalable, resilient cloud platform on which you can run as many containers as you need. And MLOps brings operational process efficiency and a culture of best practices that teams can leverage, so there are no manual steps between the data scientists and the data engineers. It automates all the steps, as Rags also mentioned, and it complements and completes the cycle.

Kimberly McGuire: I guess this also comes back to the discussion that we had before about autonomous cars: we talked about autonomous cars having their GPUs in there. But they also need to be able to respond pretty quickly to somebody jumping in front of the car, of course.

Kimberly McGuire: And something like the transformer networks is not going to be fast enough to respond to something like that, I would assume, because it needs to be real time. But there are processes in autonomous driving that need to have a little bit more depth to them, let's say, a little bit more cognition.

Kimberly McGuire: If you're able to combine both, the quicker processing closer to the sensors on the edge, connected as well with the cloud modules, that is definitely, I guess, the best of both worlds, as you said.

Srini Penchikala: Exactly.

Kimberly McGuire: I can definitely see an advantage in that.

Raghavan Srinivas: So, one of the things with DevOps is that it was very simple to make the argument, well, maybe not simple, but you knew that there was this wall between the Devs and the Ops. In ML DevOps, or MLOps, or whatever you want to call it, there is the data scientist who is really at the center, plus data engineers, and so on.

Raghavan Srinivas: It's got to be harder to do this from an ML perspective compared to DevOps. What do you see as the rewards of this, maybe further down the road? Is it going to be harder to have the same Kumbaya moment, the data scientists, the developers, the system administrators, the Ops, all of them coming together? Because obviously, there are more moving parts here.

Anthony Alford: Well, I think that's a very good point. One thing that we've seen in the past, anyway, is a distinction between the data scientist and the data engineer, where the data scientist is the one who's looking at the problem domain, looking at the data, and applying, I guess, science to determine what's a good model architecture, what's good data, all these sorts of things.

Anthony Alford: Whereas the data engineer is the person who helps the data scientist figure out how to set up the infrastructure, training pipelines, deployment pipelines, and things like that. So, maybe the data scientist is more like the Dev, and the data engineer more like the Ops. And I really think that the trend we're seeing with MLOps is to try to automate away that data engineer; not necessarily eliminate the role, but make it easy for the data scientist to do both roles, just as DevOps was trying to make it easier for the developer to also do the Ops role.

Anthony Alford: I think that's what we're seeing, especially as we get to the next topic, AutoML. But definitely, the purpose of these pipelines and the other things we've been talking about, like Kubeflow and MLflow, is to help automate that infrastructure, from the data scientist creating the model all the way to production.

Raghavan Srinivas: It makes sense. You wouldn't want a PhD scientist like Kimberly or somebody to be installing Kubernetes, although she's perfectly capable. Data scientists and others may not even be willing to go that way. And the idea is: let the data scientists focus on the problem domain, and make this infrastructure pretty much standardized, or commoditized, however you want to call it.

Anthony Alford: I strongly agree.

Srini Penchikala: I agree too. That's a good point, Rags and Anthony. I think MLOps is going to add even more value than DevOps, because DevOps involves only two teams, the developers and operations, whereas MLOps brings in a third community, the data scientists, so there is more need for automating the overall process.

Srini Penchikala: And also, machine learning is by nature iterative in terms of how it works, and we need faster feedback cycles. So, I think machine learning is a great fit for automation, bringing CI/CD into the mix in the name of MLOps.

AutoML allows for automating part of the ML life cycle [46:20]

Roland Meertens: Shall we maybe talk a bit about whether we can automate even more? Can we automate the whole machine learning lifecycle with AutoML?

Anthony Alford: Well, that's certainly the goal. Of course, AutoML is so meta: the machine is learning how to do something, but we have the data scientist directing it; well, let's automate what the data scientist does. Srini and I did a virtual panel talking to several people researching AutoML, and of course, none of them thought there would ever be no need for humans. There's always going to be someone defining the problem, bringing industry-specific knowledge, and translating the business problem into a machine learning problem.

Anthony Alford: But like you talked about, you don't want to hire a PhD to install Kubernetes, and you don't need to have your data scientist worrying about, "Well, I've got to try out all these different hyperparameters." As you probably all know, when you're training a model with machine learning, you have a bunch of different hyperparameters: with your neural network, maybe, how many layers, how many steps per iteration, what's your learning rate?

Anthony Alford: A great application of AutoML is to try those for you, so hyperparameter search is a very common use case. And one thing that we've noticed, especially with the commercial providers: all the commercial cloud providers, Google, Azure, AWS, now have an AutoML solution. And along with that, they're bringing in a way to track these different experiments. They call them experiments: when you run AutoML, it tries a bunch of different things, and you need to track that, the accuracy and the performance of the resulting models, so that you can pick the best one, and so that five months later you can remember what you tried, and things like that. It eliminates some of the grunt work, and now you can tackle the next problem. Srini, did you have any thoughts on that as well?

Srini Penchikala: Yes, that's a great point, Anthony. AutoML automates those routine tasks, because we don't want data scientists spending a lot of time manually running the same input data against different algorithms. These AutoML solutions, for example DataRobot, or the one from Google, run the data through all the algorithms and provide a recommendation: "Hey, this algorithm or this machine learning model is better for this type of data with these features."

Srini Penchikala: So definitely, I think that's a really good addition to the overall machine learning tool set. It doesn't eliminate the need for human participation in the machine learning process, but it automates a lot of those routine tasks that we would rather have the machines take care of. Definitely a really good innovation in this space.

Roland Meertens: I think what it does do is shift the focus, shift what your biggest problem is, from finding the best model to finding the best data: ensuring that your data is of high quality, that your data set is balanced, and that it contains all the possible edge cases for your application. And I think that's really interesting. I'm really excited to see what happens in that field in the next couple of years, if we get more active learning, et cetera, to get the best data for your machine learning application, while the model can just be found automatically. I think it's great if we as a field manage to abstract that part away and have a new problem to focus on.

Anthony Alford: Absolutely, but there's always a trade-off. I mentioned that AutoML can try a bunch of different models for you. We talked about how much money it costs to train GPT-3; now imagine AutoML running hundreds or thousands of different parallel training jobs on GPT-3.

Anthony Alford: What we also see is that these AutoML solutions try to be smart in their search. Of course, search, restricting the search space, and optimizing search, that's an old problem. We're seeing a lot of those techniques brought into this, and we're looking at things like Bayesian optimization of the search algorithms in AutoML.
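As one concrete example of the smart hyperparameter search Anthony describes, here is a sketch using Optuna, whose default sampler is a Bayesian-style optimizer; the library and the search space are our illustrative choices, not the panel's.

```python
# Hyperparameter search: each trial is a tracked experiment.
import optuna
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

def objective(trial):
    # The search space that AutoML-style tools explore for you
    n_estimators = trial.suggest_int("n_estimators", 10, 200)
    max_depth = trial.suggest_int("max_depth", 2, 16)
    clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth)
    return cross_val_score(clf, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```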

What to learn to become a machine learning engineer [50:21]

Roland Meertens: Now that we know that soon there will be no more need for machine learning engineers to define models, shall we talk a bit about how to get educated in machine learning?

Kimberly McGuire: Yes.

Roland Meertens: What do you all think: nowadays, when you want to start with machine learning, how would you start? Maybe Kimberly, do you have any ideas?

Kimberly McGuire: Yes, for me, it's a little bit difficult, actually, I would say, because I had machine learning during my master's; there were quite a few courses. But that's a long time ago, and it didn't really apply to something like the drones that I flew; they didn't have neural networks in there, still very simple state machines.

Kimberly McGuire: I do have the background knowledge, but sometimes, like now on this podcast, it feels like speaking Swedish: I can understand Swedish quite well, but I cannot really respond very well. I definitely have to train my machine learning speech.

Kimberly McGuire: But if you're not a complete beginner, I feel it's even more difficult to find the right information to get up to speed. For instance, when you were talking about transformers: back when I was applying machine learning in my master's thesis, it was a neural network with some recurrent connections. And now that's out of the window; it's going so fast. It's a bit difficult to get back into it when you already have some background from the past.

Kimberly McGuire: My boyfriend wants to get started with machine learning right from the start, and I actually envy that, because then you can really follow those tutorials from the very beginning all the way to the end. But there's no really good side entrance for people who already have some background but don't want to start from the beginning.

Roland Meertens: I think we can all recommend infoq.com as a source for learning things.

Kimberly McGuire: Yes, I'll definitely check it out. But this is more my own experience, I would say.

Raghavan Srinivas: Unlike Kimberly, I am a developer. I really don't know much about CNNs, DNNs, classification, or regression, and I'm not very strong on statistics either. I think there are plenty of resources on the web, and you're right, it's always a tough job to figure out which one works best for you.

Raghavan Srinivas: My undergrad was in mechanical engineering, and I like to see things visually. Doing something with Python notebooks and Google Colab, and watching things as they evolve, makes it a lot easier for me. Those are some of the resources I've used.

Raghavan Srinivas: If you go to any of these cloud AI sites, there are plenty of examples that you can play around with, like computer vision and so on. Granted, they're pretty simplistic, but at least they give you an idea of how to get started, and of what exactly regression and classification mean.

Raghavan Srinivas: You don't have to know everything, but you should at least get to a point where you can figure out which domain your problem belongs to. So really, I think Google Colab, GitHub, and Python notebooks are probably the best way to do it.

Raghavan Srinivas: And one of the cool things today is that you can stand up GPUs, even multiple GPUs, in the cloud and play around with CUDA programming or anything you want. Those are off the top of my head. And if you look through some of these frameworks, such as TensorFlow, PyTorch, Theano, CNTK from Microsoft, or Keras, all of them have fairly simple programs to get started with, which give you an idea of the breadth. Of course, going deep in any one area is going to take some time.

Kimberly McGuire: Yes, true. I guess it's also difficult because I usually cannot let go of the application I already have in mind. For instance, what I want right now is a pre-made mobile neural network that detects some feature, so I can retrain the input layers and turn it into a TensorFlow Lite model. That's just because I already have this specific idea. I know it's possible, but searching for exactly that in any of those tutorials is actually quite a pain. Maybe I just have to let it go, take a couple of steps back, spend a couple of days on it, and see how the story evolves.

Raghavan Srinivas: Maybe you have to write a model to be able to extract this.

Kimberly McGuire: Exactly, I'll put all my trust in GPT-3 now.
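For anyone wanting to try the conversion Kimberly describes, the TensorFlow Lite step itself is small once you have a trained Keras model; here is a minimal sketch, where the model and file names are placeholders.

```python
import tensorflow as tf

# Load a trained Keras model; the file name here is a placeholder.
model = tf.keras.models.load_model("feature_detector.h5")

# Convert it to the TensorFlow Lite flat-buffer format for mobile deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional: shrink for edge devices
tflite_model = converter.convert()

# Write the converted model to disk for a mobile or embedded runtime to load.
with open("feature_detector.tflite", "wb") as f:
    f.write(tflite_model)
```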

Roland Meertens: Maybe one tip for people who really don't have any computer vision or machine learning experience: Google has a website called teachablemachine.withgoogle.com, where you can very easily train a computer to recognize your own images, sounds, or poses. I think that's a perfect way to introduce not only children but everyone to training a model. You can just use your webcam to capture the examples you want to classify, and then you can download the model and load it locally into your own Python program or something.
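As a sketch of that last step, assuming an image project exported in Teachable Machine's Keras format, loading and running it locally might look like the following; the file names, the 224x224 input size, and the pixel scaling are assumptions about that export and may need adjusting.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the files from the Keras export (assumed default file names).
model = tf.keras.models.load_model("keras_model.h5")
class_names = [line.strip() for line in open("labels.txt")]

# Resize a test photo to the assumed 224x224 input and scale to [0, 1];
# the exact normalization may differ depending on the export.
image = Image.open("example.jpg").convert("RGB").resize((224, 224))
x = np.asarray(image, dtype=np.float32)[np.newaxis, ...] / 255.0

# Run the model and print the most likely class label.
probs = model.predict(x)[0]
print(class_names[int(np.argmax(probs))])
```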

Roland Meertens: So, you can really go from collecting data to having a model deployed on your local laptop within, I would say, literally one lunch hour; you can have your own machine learning model live on a device. I think that's a fantastic way to get started. Maybe another tip is to participate in a Kaggle competition. It's really good if you can learn by doing, because then you have some intrinsic motivation to climb the Kaggle leaderboards.

Roland Meertens: And you can search out tips and tricks to improve your model and your training. After the competition, you can even see what kind of tips and tricks the winners used to improve their models. So, you're really getting feedback on what you should have done to get a better model.

Anthony Alford: Yes, I agree. I found Kaggle very helpful. And I also think, if someone is starting out, maybe the best answer is not to go to TensorFlow and build a deep neural network; really, just start with something simple, like scikit-learn and some linear regression or logistic regression. The way I've started talking to, let's say, "regular software developers" is to just tell them: machine learning is just something that writes a function for you.

Anthony Alford: So, consuming machine learning is 99% no different from consuming any third-party library in your code; just think of it as a function. In your case, Kimberly, you're looking for somebody who's already written that function for you. That's probably the mindset most people need to have: thinking about machine learning as something that creates a function for you to call. And if you want to learn more, start with something like Kaggle; do some logistic regression on the Titanic data set or something like that. I think that's very helpful for forming the basic understanding, and you can build on that.
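For the curious, a minimal sketch of that exercise; here the Titanic data comes from seaborn's bundled copy rather than Kaggle's CSV, and the feature choice is deliberately simple.

```python
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load the Titanic dataset and drop rows with missing ages for simplicity.
df = sns.load_dataset("titanic").dropna(subset=["age"])

# A few numeric features; a real Kaggle entry would engineer many more.
X = df[["age", "fare", "pclass"]]
y = df["survived"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Machine learning writes a function for you": fit() produces predict().
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```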

Srini Penchikala: So, Anthony, as you mentioned, there are some frameworks that are more developer-focused, like Apache Spark, and they abstract all of these complex machine learning algorithms behind the scenes. If you want to do a regression, like you said, it's just one line of code in the Python or Java API. So, those tools can also be a good start for developers who want to learn more about machine learning.
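A minimal sketch of that in PySpark's ML API, with a tiny made-up DataFrame standing in for real data; the fit itself really does come down to one line.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("regression-demo").getOrCreate()

# A tiny in-memory DataFrame standing in for real data.
df = spark.createDataFrame(
    [(1.0, 2.0, 5.1), (2.0, 1.0, 6.9), (3.0, 4.0, 13.2)],
    ["x1", "x2", "label"],
)

# Spark ML expects features packed into a single vector column.
features = VectorAssembler(inputCols=["x1", "x2"], outputCol="features").transform(df)

# The regression fit is one line.
model = LinearRegression(featuresCol="features", labelCol="label").fit(features)
print(model.coefficients, model.intercept)
```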

Srini Penchikala: On the education discussion, I want to come at it from a different angle. If any listeners of this podcast are new to machine learning, want to get into the machine learning space, and want to start a career in it, my suggestion is that they need to pick which machine learning discipline they want to specialize in. I can see four different areas: they can be data scientists, data engineers, data analysts, or they can work on the data operations side.

Srini Penchikala: So, based on their interests, skill sets, and background, once they pick one of these four areas to become an expert in, then everything you all mentioned makes a lot of sense. If it's a data engineer, we want them to learn about programming and how to run these different algorithms from the programming side. But if they want to be data scientists, they need to learn more statistics, which I'm not good at. It's a different way of learning.

Srini Penchikala: So, as you all mentioned, there are so many resources available out there, and time is the only limit. Whether it's Google Colab, Udemy or Pluralsight courses, or, most importantly, our own InfoQ, there are so many different resources they can check out.

Roland Meertens: I think that's a great note to end on: read InfoQ and visit qcon.com. This was a great discussion. I want to thank Srini, Anthony, Rags, and Kimberly for joining today and sharing these great insights. The trends report, with relevant links and a summary of everything we discussed today, is available on InfoQ.com, and I hope you will join us soon for another InfoQ podcast.

Anthony Alford: Thanks, Roland. This was a lot of fun.

Srini Penchikala: Thanks, Roland.

Raghavan Srinivas: Thank you.

Kimberly McGuire: Thanks.

Srini Penchikala: Thank you. Great discussion.

