Panel: What Does the Future of Computing Look Like

Summary

This panel dives into advancements that will redefine how we interact with technology, exploring new concepts and discussing their potential to transform the world.

Bio

Julia Lawall is Senior Scientist @INRIA. Matt Fleming is CTO @Nyrkiö, Former Linux Kernel Maintainer @Intel and @SUSE. Joe Rowell is Founding Engineer @poolside.ai, Low-Level Performance Engineer, Cryptographer and PhD Candidate @RHUL. Thomas Dullien is a Performance & Security Engineer.

About the conference

Software is changing the world. QCon London empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Dullien: We will discuss a little bit about how the hardware has changed from the mental model that we all used to have when we started computing, and what that implies for things like operating system interfaces, what this implies for things like observability, what this implies for benchmarking. Because one of the themes I think we've seen in the track was that, quite often, software is optimized for both the hardware platform and a use case that is no longer current. Meaning, we build some software, we design it for a particular use case, for a particular hardware, and then by the time that anybody else gets to use that software, it's years later, and everything has changed.

The computing infrastructure is undergoing more change at the moment than it used to undergo, let's say, from 2000 to 2020 with the arrival of accelerators, with the arrival of heterogeneous cores, with the extreme availability of high I/O NVMe drives and so forth. We'll discuss a little bit how this is impacting everything. When was the last time that you saw something in software that was clearly designed for hardware that is no longer a reality?

Fleming: Data systems, various kinds of databases, streaming applications that don't respect the parallelism required to achieve peak performance from disks.

Rowell: I see a lot of code on a regular basis that's written that assumes a very particular model of x86 that has not existed for the last 30 years and will not exist for the next 30.

Lawall: There's a lot of applications that, when they want to be parallel, just make themselves n threads on n cores, without regard to the fact that maybe those are hyper-threads, and it's not always useful to overload the actual physical cores, not taking into account P cores and E cores, performance and energy saving cores, not taking into account perhaps virtualization issues. Perhaps we need some better model to allow the applications to figure out what is the best way to get performance out of the given machine.
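
As a small, hedged illustration of the first point, the sketch below (Linux-specific, error handling omitted, structure invented for illustration) shows that the usual "one thread per online CPU" count includes SMT siblings, while counting only the first sibling of each core via the standard sysfs topology files gives the physical core count. It only addresses the hyper-threading part of what Julia mentions, not P/E cores or virtualization.

    /* Minimal sketch: logical CPUs include hyper-threads; counting a core only
     * for its lowest-numbered SMT sibling gives the physical core count. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long logical = sysconf(_SC_NPROCESSORS_ONLN);   /* includes hyper-threads */
        int physical = 0;

        for (long cpu = 0; cpu < logical; cpu++) {
            char path[128];
            /* thread_siblings_list names all SMT siblings of this CPU, e.g. "0,32". */
            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu%ld/topology/thread_siblings_list",
                     cpu);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;
            long first = -1;
            if (fscanf(f, "%ld", &first) != 1)
                first = -1;
            fclose(f);
            if (first == cpu)               /* only count the first sibling */
                physical++;
        }
        printf("logical CPUs: %ld, physical cores: %d\n", logical, physical);
        return 0;
    }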

Existing Programming Languages and GPU Throughput Computing

Dullien: Joe's talk covered code mixing regular CPU-bound code and GPU code. I don't think any of us ever expected that we would be running one C program in one address space on which two entirely different CPUs would be operating at the same time. What are everybody's thoughts: are the programming languages and paradigms we use at the moment actually well adapted to this, or will we need different programming languages to deal with the arrival of GPU throughput computing in our servers?

Fleming: I'm a big believer in modularization being one of the superpowers of this industry. I think we'll find a way to basically compartmentalize this type of hardware, so that we only have to deal with one language at a time. I think we'll maybe get new languages for specific things, but not an all-encompassing language. We'll still manage to treat these things as composable units.

Rowell: I think I feel the same way, in so far as I think a lot of the issues I raised in my talk can be properly handled by good coding discipline. For example, there's been a trend in some C projects lately to not actually use pointers, but to instead hand out a handle, essentially, and then use that in relation to some base object. This probably would have solved some of the issues that I had, because if you have a different object, suddenly that handle has no meaning. I think you could still use these things. I think we need to start treating those things as antipatterns, rather than as something that we should just do because we want to, essentially.
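
A minimal sketch of the handle pattern Joe is describing, with every name, type, and size below invented for illustration: instead of a raw pointer, callers get an index plus a generation counter, so a handle left over from a destroyed object is detected rather than silently aliasing recycled memory.

    /* Illustrative handle-instead-of-pointer pattern. */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct { uint32_t idx; uint32_t gen; } handle_t;

    struct slot { uint32_t gen; int in_use; double value; };

    #define POOL_SIZE 256
    static struct slot pool[POOL_SIZE];

    handle_t pool_alloc(void)
    {
        for (uint32_t i = 0; i < POOL_SIZE; i++) {
            if (!pool[i].in_use) {
                pool[i].in_use = 1;
                return (handle_t){ .idx = i, .gen = pool[i].gen };
            }
        }
        return (handle_t){ .idx = UINT32_MAX, .gen = 0 };   /* pool exhausted */
    }

    void pool_free(handle_t h)
    {
        if (h.idx < POOL_SIZE && pool[h.idx].gen == h.gen) {
            pool[h.idx].in_use = 0;
            pool[h.idx].gen++;            /* invalidate all outstanding handles */
        }
    }

    double *pool_get(handle_t h)          /* NULL means the handle is stale */
    {
        if (h.idx >= POOL_SIZE || !pool[h.idx].in_use || pool[h.idx].gen != h.gen)
            return NULL;
        return &pool[h.idx].value;
    }

A caller holds a handle_t rather than a double*, calls pool_get each time it needs the object, and treats NULL as "this object no longer exists".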

Lawall: I would also think that maybe it's not new programming languages. Maybe we don't necessarily want to expose all of these details to the programmer. A programming language is the way the person thinks about the software that they're trying to develop, and maybe it's more the software modularity, as has been mentioned, that should take care of these issues of where this code should be running, on what kind of hardware, using what kind of resources, and so on.

The Wall Between Pure Software Engineering and Underlying Hardware

Dullien: What I got from the entire row was that it's more about the actual software engineering and less about the programming language. I think what we can see a little bit is also that, for a lot of my youth, I was told, don't worry about the low-level details, you write software at a high level, the application level, and the magical compiler engineers and the magical electrical engineering wizards will, in the end, make everything fast. I think what I'm observing, at least, is that we're seeing a little bit of a dissolution of that wall between pure software engineering, in the sense of application programmers and the idea that they don't need to know anything about the underlying hardware, as the hardware becomes more complicated or more heterogeneous. What are your thoughts on that wall and the dissolution of that wall?

Fleming: I think a lot of this may be cyclical, so you have peaks and waves where hardware accelerates and maybe you don't have to care, and then times where you do have to care. I think we're in that time now where it's very important for programmers to know how the hardware works and to take full advantage.

Rowell: I think I agree with the sentiment, but I actually think it's a slightly different issue, which is, you get to these situations where you can't express efficiently what it is you want to say, or at least the compiler is not clever enough, and so you write something that does exactly what you want, but then any future compiler will never be able to write something better. It's like with Richard's talk. Because everything was written in terms of shifts, the compiler was never going to go, that should be a multiplication instead, I know better. I think you end up locking yourself into a situation where you've written something that's good now but won't be good forever. If any of you are clever enough to do this, I'd really love to see someone write a tool that will automatically go through and try to undo all of these performance tweaks and then see what you get out the other side, because I think that would be a really interesting case study.
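
A tiny illustration of the lock-in Joe describes, with the functions and the constant invented for the example, not taken from Richard's talk: the first version states the intent and leaves the compiler free to pick the best sequence for the target, while the second bakes in a shift-and-add trick from an era when multiplies were slow.

    #include <stdint.h>

    /* Scaling by 10: the compiler can choose a multiply, LEA tricks, or shifts. */
    uint64_t scale_intent(uint64_t x)  { return x * 10; }

    /* Hand-optimized form; depending on the compiler it may or may not be
     * recognized as a multiply again, so the old decision tends to stick. */
    uint64_t scale_handopt(uint64_t x) { return (x << 3) + (x << 1); }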

Lawall: Maybe it's naive on my part, but I would hope that compilers would figure this out over time. Once people know what they want, or once the evolution of the hardware stabilizes and reaches a plateau for a certain amount of time, then presumably the compiler should be able to step up. That's what has happened in the past. It may definitely be necessary to give some options or something like that, like favor a GPU for this, but hopefully the compilers will fill in the gap.

Dullien: Historically, we've had both successes on the sufficiently smart compiler and we had some failures on the sufficiently smart compiler front. It's interesting to see that CUDA is somewhere in the middle. Because CUDA is C code, but it's also a special dialect of C code that provides extra information.

Making The Linux Scheduler More User Configurable

Another thing I wanted to talk about is the heterogeneity of workloads, with Meta, or Google, or whoever having very specific workload problems that may not be shared by everybody else in the industry. The Linux kernel is trying to serve many masters in terms of the scheduler, and so is, for example, the C++ language. We're seeing a push on the language design front from Google, or Meta, or whatever, to change parts of the spec. Also, in terms of operating systems, we're seeing a push from these giants to provide certain features, like a configurable scheduler, that can also help them with their problems. I would just love to hear a little bit about the efforts to make the Linux scheduler more user configurable.

Lawall: You have your particular software, it has particular scheduling needs, and so you'd like to write a scheduling policy that is particularly tailored to your piece of software. The difference between Meta and Google is that Meta is keeping you at the kernel level. You write your code in BPF, and then you obtain a kernel module. It's basically like putting the scheduler in a kernel module, but you're doing it in a safer way, in the sense that you're writing BPF. On the other hand, with the Google effort, it's written at user level. You write your code in whatever language you like, perhaps you use your debugger, your traditional development environment, and then they have efficient messages that are sent down to the kernel to cause whatever things you requested to happen.

Both of these efforts reflect a frustration, perhaps, that the current scheduler that tries to do everything for everyone is actually not succeeding at being optimal for particular jobs that these companies are particularly interested in. Not just Google and Meta, but other companies also obviously have particular jobs that they want to work well. Some kind of database, you could imagine, has some very particular scheduling requirements. It seems like it's very hard to resolve this distance between, we have very specific requirements, we want very high performance and so on, and the goal of being completely general. There's an effort to open things up.

Then there's also the question of, what do you make available? Do you make everything available in terms of what you can configure? If you allow people to configure everything, then maybe they should just be writing their own scheduler in C and integrating it into the kernel. If you try to think in advance about what people will likely want to update, then you may miss something that people actually need, and then it will be somehow unsatisfactory, because if they're missing some expressivity, then they won't be able to use the approach at all. These things are just evolving. We haven't reached a perfect version at the moment.

One aspect is just to somehow speed up the evolution time, to be able to write and maintain policies that are adapted to particular software. Another aspect is to speed up the testing time. We talked a little bit during my talk about how maybe we don't want to recompile the kernel; it takes some time, and it's a bit obscure how to do it. Once you learn how to do it, it's pretty easy, but I cannot deny that it takes some time. There are also certain kinds of execution environments that require a lot of setup time, and so to do your testing, the whole development and test cycle can get very long.

If you can just update things dynamically from the user level, then you don't have the reboot time and the cost of restarting your entire execution environment. It definitely shows an interest in the different communities in actually thinking about the scheduler and even thinking about other operating system components. You can think about, how could you dynamically change the memory manager? There's actually been a long history of asking how applications can specify their own networking policies. This idea that you should just bypass at least some of the kernel and manage these resources on your own is starting to spread to other resource management problems. It's something interesting that's evolving, and so we'll see how it goes in the coming years.
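
For a rough flavor of the kernel-level, BPF-based approach Julia describes, here is a heavily simplified FIFO-style policy in the spirit of the published sched_ext example schedulers. The helpers and macros (BPF_STRUCT_OPS, scx_bpf_dispatch, scx_bpf_consume, SCX_OPS_DEFINE, SCX_DSQ_GLOBAL) come from the sched_ext tooling and have changed between kernel versions, so treat this as a sketch of the shape of such a policy, not as working code for any particular release.

    /* Sketch of a minimal FIFO policy in the style of the sched_ext example
     * schedulers; helper and macro names vary by kernel version. */
    #include <scx/common.bpf.h>   /* headers shipped with the sched_ext tools */

    char _license[] SEC("license") = "GPL";

    /* A task that becomes runnable is appended to the global dispatch queue. */
    void BPF_STRUCT_OPS(fifo_enqueue, struct task_struct *p, u64 enq_flags)
    {
        scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
    }

    /* A CPU that runs out of work pulls the next task from the global queue. */
    void BPF_STRUCT_OPS(fifo_dispatch, s32 cpu, struct task_struct *prev)
    {
        scx_bpf_consume(SCX_DSQ_GLOBAL);
    }

    SCX_OPS_DEFINE(fifo_ops,
                   .enqueue  = (void *)fifo_enqueue,
                   .dispatch = (void *)fifo_dispatch,
                   .name     = "fifo");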

Introspection (Pain Points)

Dullien: When it comes to performance work, I think we've all run into the issue that the systems we're working on are not as inspectable as we would like them to be. Can you name a tool that you don't have yet that you would like to have? Can you think about a situation where the last time you tried to resolve an issue you wish you had better introspection into x or something to do y? Is there something, like an itch you would like to scratch when it comes to introspection?

On my main CPU, I know how the operating system can help me profile stuff. Everything related to GPUs is super proprietary: 250 layers of licensing restrictions, NDAs, and so forth from NVIDIA to do anything. I would just like to have a clean interface to actually measure what a GPU is doing.

Fleming: I'd like to have basically all the tools I have now, but have them tell me the cost of running certain functions, like monetary cost in the cloud. Like, this function costs this amount of money. This network request costs this amount of money. Maybe it's because I've been using them so long, but I think the interfaces for the tools that I have are pretty good; they're just missing that aspect of the price performance problem, which is, what is the price?

Rowell: The first thing I'd really like is a magic disassembler that takes every piece of proprietary code and tells me what it does. I spend a lot of time working with GPUs, and you get to a point very quickly where you have no idea what's going on. In fact, even if you open it in, say, GDB or something, you will see you do have the functions. You do have the stack, but the names of these functions do not correlate to anything that you would think they would. They're like _CUDA54 or things like that. They're completely opaque. The second thing I'd really quite like, actually, is better causal profiling.

Causal profiling is this idea that your system is very complicated, and so rather than sampling your call stack constantly, what you do is you slow down one of the threads, and you see how much that would change the overall behavior of your application. The point is that rather than just speeding up a single hotspot, you're actually working out what the performance dependencies are in your program. Every time I've tried to use one of these, especially in a GPU context, it's actually ended up being harmful to my ability to understand what's going on. Having a better version of that would be really good for me.
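
A crude sketch of the experiment Joe describes, with the workload, thread names, and numbers all invented for illustration: run the same two-thread workload with and without an artificial delay injected into one thread and compare end-to-end times. Real causal profilers such as Coz are far more sophisticated and simulate speedups by delaying everything else, but the intuition is the same: measure dependencies, not just hotspots.

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static long extra_delay_us;            /* injected into worker A only */

    static void *worker_a(void *arg) {
        (void)arg;
        for (int i = 0; i < 100; i++) {
            usleep(1000);                  /* stand-in for real work */
            if (extra_delay_us)
                usleep(extra_delay_us);    /* the experiment's perturbation */
        }
        return NULL;
    }

    static void *worker_b(void *arg) {
        (void)arg;
        for (int i = 0; i < 100; i++)
            usleep(1500);                  /* the other pipeline stage */
        return NULL;
    }

    static double run_once(long delay_us) {
        extra_delay_us = delay_us;
        struct timespec t0, t1;
        pthread_t a, b;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        pthread_create(&a, NULL, worker_a, NULL);
        pthread_create(&b, NULL, worker_b, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        double base = run_once(0), perturbed = run_once(500);
        /* If worker A is off the critical path, the delta stays near zero. */
        printf("baseline %.3fs, with A slowed %.3fs\n", base, perturbed);
        return 0;
    }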

Lawall: I was actually really inspired by what Matt talked about with change point detection, which can show you where the change points are in your execution, and somehow being able to zoom in immediately onto those change points and figure out what changed at those change points, and what are the different resources and so on that are involved in that. You have a long execution, maybe it runs for hours or something like that, and you find that its overall execution time is slower than you expected, so the ability to zoom in on exactly the place where things started going badly would be very nice.

Computer Science/Developer Ed and Emphasis on Empirical Science

Dullien: One thing I observed in Matt's talk was that, at the end of the talk, somebody asked, have you worked with a data scientist on your problem? One thing that haunts me personally when doing performance work: my background is originally pure mathematics with a minor in computer science, and it turns out that there's very little empirical science, like hypothesis testing and statistics, if you choose that education. What are your thoughts on, does computer science education or software developer education need more emphasis on empirical science in order to deal with the complexity of modern systems? Because the reality is, in my computer science studies, it was all about, here's an abstract model of computation, here's some asymptotic analysis, and so forth.

The fact that a modern computer is a bunch of interlocking systems that cannot be reasoned about from first principles but need empirical methods just wasn't a thing. With the increased complexity of modern hardware, do we need a change in computer science education to have more focus on empirical methods to understand your systems?

Fleming: Yes. I think if you're doing interesting work, sooner or later, you come across a problem that nobody or very few people have hit before. It doesn't always happen a lot, but eventually you run into a compiler issue, a library issue, something that there is no Stack Overflow answer to, or GitHub Issue for. I think this ability to quickly move through the problem space comes down to hypothesis testing and being able to cut off certain branches of the decision tree as you're moving through. In my experience, like I was never taught to do this. I've not seen a lot of people demonstrate this, apart from the really good debuggers and engineers. I think it's something that the whole industry would benefit from.

Rowell: I think that we're actually sitting in a very exciting period of time. For those of you who have grown up in the UK, you'll know that for a long time, computer science education in the UK was basically Excel. You sat down, you went to class, and it was simply called ICT, which was how you made PowerPoints and stuff like that. As a result, at university level, there wasn't really much background that you could assume. Actually, if you had done any computer science before, maybe it would be your third year of university before you learned something that was truly novel to you. I really hope that in future years, this changes. I hope that we take this opportunity, with computer science actually being taught at a younger age, to update what we're teaching in further education.

Lawall: When I talk with people, I see a lot of the feeling that we have to think through this somehow. Definitely, thinking through things, trying to understand what the algorithms are and so on, is very important, but I think there needs to be more of a balance between trying to reason through things and trying to do experiments, and more thought about how we can do those experiments and how we can get out the relevant information. Because I think maybe people tend to try to just think things through independently, because it's very hard at the moment to get actual information out of the huge amount of information that's collected if you try to trace your code. We talked in the beginning about accelerators and so on. Things are going to get even more complicated with different kinds of hardware available, and teasing apart all that different information to show you, like in Joe's talk, that your memory is going badly because you're sharing it with your GPU. Something is going badly, but what is it? It seems currently very hard to do at very large scale.

Path Dependence in Tech

Dullien: There's often a path dependence in technology, where, for example, we write some code, we use a compiler to compile this code, and then we design the next CPU so it runs the code that we've already compiled more quickly, which locks us into one path, because now trying anything else will make us slower. Have you encountered something that looks like a path dependence that nobody would ever build again in the same way if they could start from scratch in computing?

Fleming: I think I've seen the consequences of that, rather than actual clear, bona fide examples of it. I think this comes back to people's inability to assess things from first principles, particularly performance and systems. They assume that what they have today doesn't need revisiting, that what is there is there and it doesn't need to be changed. This has a lot of implications for secondary systems, where actually, if you redesigned them, you would get more performance more cheaply. I don't think we take a hard enough look at stuff like that.

Rowell: There's a very famous example that came up recently, which is floating-point control words. If you ever look at any of the floating-point specifications, there are various flags that control how rounding is done, how things are discarded, and stuff like that. I think it was last year we found out that if you ran one program that was compiled with, essentially, "I don't care, do whatever", it set that flag for all of the programs running on your CPU. That's completely unbelievable. Of course, it's a legacy from when we didn't actually care about this so much, when maybe you had one program or it was a system-wide decision. I don't think anyone would ever design it like that now. I think it's just asinine.
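
As a small demonstration of how ambient this state is, the sketch below uses only the standard <fenv.h> interface: changing the rounding mode in one place changes the result of every later floating-point operation in that thread of the process, which is why a library flipping such state behind your back is so nasty. This shows the general per-process nature of the floating-point environment, not the specific incident Joe refers to.

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    int main(void)
    {
        volatile double x = 1.0, y = 3.0;   /* volatile prevents constant folding */

        fesetround(FE_DOWNWARD);
        double down = x / y;

        fesetround(FE_UPWARD);
        double up = x / y;

        fesetround(FE_TONEAREST);            /* restore the default before leaving */
        printf("%.20f\n%.20f\n", down, up);  /* the two results differ */
        return 0;
    }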

Lawall: At least, I think operating systems are very much a collection of heuristics. This goes back to what I was saying before about the user level scheduling. The existing operating systems are large collections of heuristics. It's very hard to tweak those. You can add more heuristics, but it's hard to think about actually changing them, because you don't know exactly why they are the way they are anymore, and changing them might break something that's critical somehow, so people would like this kind of programmability so they can just throw away all those heuristics and start with their own thing. I think, in general, we're stuck with this because we've always done it this way, and we need to maintain the performance that we had, so we can't actually go to new design strategies or something like that.

Dullien: We have a bit of a lock-in to a local maximum that might not be a global maximum anymore.

Building/Developing with the Future in Mind

Given that constants or magic parameters, chosen at one point in time for one hardware platform and then left to expire, are so ubiquitous, is it a sensible idea to try annotating the parts of source code that are likely going to expire with an expiry date? One of my favorite examples is a compression algorithm called Brotli; Brotli is in your browser these days. When it was created, the author of Brotli trained, essentially, a dictionary of common words based on the web corpus at the time, to compress the web corpus better.

At that point in time, Brotli got much better results than the competitors, but that was more than 10 years ago, and the web corpus has changed since. Nowadays, the Brotli spec contains this large collection of constant data that is of no use anymore, but it can't be swapped out because it's the standard now. What are your thoughts on how we, on the software engineering side, can better manage things that are likely going to expire in the future?

Fleming: I've definitely seen this problem. I've seen issues in the Linux scheduler where ACPI tables from 2008 CPUs were used; when AMD EPYC came out, the values pulled out of the table were completely irrelevant to a machine that was built 10 years later. I don't know that people really think this way, and I think that's the problem: the thing I'm building now is designed for the performance of the systems of today. I don't think people would necessarily annotate or write documents in that way, though they should. If they did, I have this feeling that the annotations would be lost over time. I think it's a much bigger problem than it would seem. It's a mindset shift.

Rowell: I think I'd go a step further and say that it's unknowable. I'll give you an example. If you've ever written any C++ code, you will know that in any method, you have an implicit this pointer everywhere. Actually, the fact that you have a pointer everywhere means that you can never pass these objects in registers. You always have to push them onto the stack because you have to be able to take their address. I don't think, when they designed the language, they would have known that this would actually have an impact on performance. It's the same with verification. We have a tendency to fix design points in our space to make it easier for us to reason about them. I think that's unknowable in a way. It's the same with security. I think we end up fixing certain things to make it easier to understand. I would agree. I would love it if my code refused to compile past a certain date, so that I could go back and fix things.
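
There is no portable way in C to compare against the current date at compile time, but as a hedged sketch of the "refuse to compile past a certain date" wish, the build system could pass the current year (for example -DBUILD_YEAR=$(date +%Y)) and each tuned constant could carry a review-by year. BUILD_YEAR, REVIEW_BY, and all the dates and constants below are made up for illustration.

    #ifndef BUILD_YEAR
    #error "pass -DBUILD_YEAR=<current year> on the compiler command line"
    #endif

    /* Turns an expired tuning decision into a compile error. */
    #define REVIEW_BY(year) \
        _Static_assert(BUILD_YEAR <= (year), \
                       "tuned constant has expired: re-measure and update")

    /* Batch size chosen for 2015-era NVMe queue depths; revisit by 2027. */
    REVIEW_BY(2027);
    enum { IO_BATCH = 32 };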

Lawall: It's not just constants, it's any kind of design decision. If you have some labels like, this is a P core relevant process, and this is an E core relevant process, that might change over time as well. I think there somehow need to be more specifications of what the purpose was, more explanation in some way of the purpose of doing this particular computation or classification, and so on. On the other hand, you could say, but developers will never want to do these things.

There's Rust, for example, and Rust requires one to put all kinds of strange annotations on one's types and things like that. Maybe there's hope that in the future developers will start to become more aware that there's some knowledge in their head that they transmit onto paper, that there's an information loss between the head and the paper, that the lost information is not going to be easy to reconstruct later, and that this is going to be an important thing in the future.

Dullien: That makes the point not so much for better programming languages as for really expressive type systems. The advantage of strong type systems is that documentation gets out of date, but a type system will at some point let the compiler refuse to compile.

Where Performance Data Collection and System Transparency Converge

Joe, you mentioned security. As somebody working between security and performance: there was a famous attempt at backdooring a compression algorithm, or a compression library, to then get a backdoor into OpenSSH, which would have been the dream of every attacker and fairly disastrous for every defender. First of all, the person that noticed this noticed it because it created 500 milliseconds of extra lag during an SSH login. Then that person took essentially performance tools as a first step to investigate what was going on and to analyze this backdoor.

For me, who used to work in security and now works in performance, it was very gratifying to see a little bit of convergence: more introspectable systems are systems that are easier to reason about from a performance standpoint, but they also help you deal with security incidents. What are your thoughts on, first of all, the convergence between gathering performance data that can also be used for other purposes, such as security, and second, the importance of transparency in systems? We mentioned CUDA and NVIDIA's closed ecosystem before. What are your thoughts on this?

Fleming: Having this openness in open-source software is important, because security and performance are about different tradeoffs: you can usually increase security at the expense of functionality, and performance has a similar tradeoff, where you get more performance for a specific use case. I think the ability for people to understand what's going on in their systems because of these tradeoffs is critical. Also, repeatability, to me, is lumped into this whole open thing as well, and maybe verification too: you need to be able to verify that the claims made by somebody are true, or that you can see the things other people are seeing.

Rowell: I'd agree with all of that. I think it's very interesting when you consider that actually both of these things are observability problems and also performance problems in different ways. Just to give you some information on my background: prior to doing performance work, I did cryptography. In cryptography, you very much want your algorithms to always run at exactly the same speed no matter what. The reason why is because there are these very clever attacks where if you use certain instructions, you can essentially leak private information. That's also a performance problem, but it's a very different kind of performance problem. It's not maximum throughput. It's not having always the best you can possibly do. It's just, don't leak information. In that sense, observability is good and also bad, because if you can observe that, that's the backdoor. It's clear that observability is really very important to all of this.
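
As a textbook illustration of the constant-time point, not something taken from the talks, compare an early-exit comparison with a data-independent one: memcmp-style code stops at the first mismatching byte, so its timing tells an attacker how many leading bytes of a secret they guessed correctly, while the data-independent version below (ct_equal is an illustrative name) does the same amount of work no matter what the data is.

    #include <stddef.h>

    /* Returns 1 if the two buffers are equal, with no data-dependent branches
     * or early exits, so the running time does not depend on the contents. */
    int ct_equal(const unsigned char *a, const unsigned char *b, size_t n)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }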

Lawall: Performance is one thing that you could observe; you could observe other things as well, and maybe other issues can arise that might indicate security considerations. I would be inclined to put more emphasis on specifications, and on being able to continue to ensure that the specifications match what the software does, as a way of somehow ensuring that things are going in the same direction. Definitely, one needs to bring everything together.

Performance Engineering - Looking into the Future

Dullien: Suppose you were to give somebody who is embarking on a career in performance engineering some advice about what to look into for the next couple of years, which is always terrible, because the nature of any advice is telling people about a world that doesn't exist yet, and you have no idea what's going to happen. If you try to give yourself advice, or somebody who's coming into the field now, what would your advice be about where to put emphasis in the next couple of years with regard to performance engineering?

Lawall: Take inspiration from the people who work on data, because there's a lot of opportunity to draw incorrect conclusions and, if you have bad methodologies, to believe that the performance is improving or decreasing or whatever based on insufficient information. I think the data science people have a lot of interesting answers that we should be looking into.

Fleming: It's kind of a golden answer, or evergreen answer, which is: look for the places where people haven't reassessed systems in a long time. I think that's the interesting place to be. This happens in cycles. Databases are having a resurgence at the moment; there are people reevaluating the way you design databases. I would urge them to look for adjacent types of systems where maybe we haven't reevaluated the way they're designed.

Rowell: I think I'll go in a slightly different direction by repeating the phrase that if you have a hammer, everything is a nail. I'd recommend that you all just learn random things. Because actually, in my own personal experience, oftentimes I've tried to apply standard tools like perf, or pahole, or whatever, to look at a problem, and normally the insight that has helped me has been something completely random in the back of my head that I never would have told anyone else to learn. It's important when you're doing anything that has such general impact to try to be a generalist in some way. Try to learn about as many different things as you can.

Dullien: It's one of the nice things about the full-stack nature of performance work: you get an excuse to scavenge in everybody's library.

Unikernels

Unikernels used to be a thing that was quite heavily discussed a couple of years ago. The idea of a unikernel is essentially specializing a kernel to a particular workload, with some support from the hypervisor, to then run an operating system specialized for that workload. What has happened to it? Where has this gone? Is this still a thing?

Lawall: From what I know about the work on unikernels, the idea is more about taking out certain components that are not relevant. If you are not doing networking, it's better to take out the networking code, because that code might be vulnerable, or might be doing some polling, which is time consuming, and so on. Things like scheduling, and perhaps also memory management, are very tightly intertwined subsystems that are not designed in a very modular way.

I think it would be hard to meaningfully just extract things from the scheduler to get a scheduler that perhaps has no preemption, or something like that, if you don't need preemption. It seems like the direction people might be going in is to provide some kind of rewritten, specialized scheduler for this particular purpose. There, you would want to be adding more interfaces to core operating system services, but with the caveats that I mentioned before: the interface might not give all the expressivity that is wanted.

Fleming: I think the unikernel folks would argue that unikernels are definitely still in vogue. In a world where most of our software runs on cloud systems that we don't own but rent, and that come with various operating system images that maybe we don't inspect very well, I think there's a case for building custom operating system images. From a performance perspective, you get a non-negligible amount of noise from services that run as part of the base OS image. That's one of the reasons I'm looking at unikernels now: it's a nice idea to basically strip all that out, like Julia said, and have just the application and the essential libraries and operating system components required. I think there's still a case for this.

Dullien: One of the things I've seen a little bit is not so much unikernel deployment in production in large numbers, but people trying to get the kernel out of the way: having user space talk directly to the NIC with kernel bypass, or having user space talk more or less directly to the storage infrastructure. We might not necessarily get the real unikernels as we imagine them, but we may get a system where more pieces of the system talk to each other directly, without going through the kernel on the way, by just having shared memory between them.

Tools For Diagnosing and Debugging Surprising Production Problems

Any recommendations on tooling for diagnosing and debugging surprising production problems?

Fleming: I don't have a specific recommendation for a tool. In my experience, you need to have either the ability to replay the traffic or have something that's continuously on and is low overhead, which sounds like a weird answer, but you need one or the other. Because this idea that you can diagnose things after the fact without having enough information, in my experience, just doesn't work, and you will miss performance regressions. I don't have a recommendation, but something low overhead that is on all the time, or the ability to replay traffic with shadow traffic or something.

Rowell: I'd echo that point. In my experience with any continuous profiling, for me, it's really been about going, "That looks weird. That's slower than I expected", and then trying as hard as I possibly can to make a reproducible case. Actually, it turns out most of the time I do need to go and replay network traffic. That would be my advice: I tend to use it to catch problems first and then try very hard to reproduce them.

Dullien: Having a continuous profiler in any form, and then in combination with bpftrace, there used to be a Kubernetes plugin called kubectl trace, which essentially allows you to schedule a bpftrace program on any node. You have a continuous profiler to provide profiling data continuously all the time, and then you can dig into a particular node by putting a kprobe somewhere in the kernel to measure what's going on. I found that combination to be very useful, of course, not solving all my problems, but it solves the first 40% of my problems, and then I've got new ones.

 


 

Recorded at:

Jan 02, 2025
