
Not Just Memory Safety: How Rust Helps Maintain Efficient Software


Summary

Pietro Albini discusses how Rust's type system can be used to ensure correctness and ease refactorings, how procedural macros reduce code duplication, how to introduce parallelism, and Rust's tooling.

Bio

Pietro Albini is a member of the Rust project, currently contributing to the Infrastructure Team (previously led by him), the Release Team, and the Security Response WG. He currently works at Ferrous Systems as the technical lead of Ferrocene, bringing Rust to safety critical industries.

About the conference

Software is changing the world. QCon London empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Albini: The Rust programming language has been on a meteoric rise lately, and there are good reasons for that. Rust started at Mozilla Research as a small research project to see if a new language could help improve how Firefox is developed and how it performs. Today, 9 years since the first stable release of Rust, we have a large project of 300 people, both volunteers and people paid by companies, contributing to Rust and improving it.

We have Rust being adopted everywhere. We have the worldwide developer community liking Rust, wanting to use Rust. In the Stack Overflow Developer Survey, for the past 8 consecutive years, Rust has been marked as the most loved language. The main reason for Rust to exist is memory safety, because low-level programming languages like C or C++ are plagued by memory safety issues: things like buffer overflows, use-after-frees, segmentation faults, and null pointer dereferences.

All of those problems are things that Rust is designed to prevent, things that you cannot introduce in your program with Rust, because otherwise your program stops compiling. Microsoft did some research on the vulnerabilities in their products, and around 70% of them were security vulnerabilities due to memory safety issues, things that Rust would prevent. You just completely eliminate 70% of your vulnerabilities if you use Rust as your low-level programming language. Should you care about that? If you're at this conference, you're probably already using a memory safe language. All high-level programming languages like Java, Python, JavaScript, Go, and Ruby are already memory safe. They are not plagued by the issues present in C or C++. Why should you care about Rust? Rust is not just memory safety; Rust also enables efficient and maintainable code.

It can help you squeeze every single bit of performance, while still maintaining all of the tooling and developer experience that you know and love. There was a very recent example of uv, a reimplementation of the Python package manager pip, written in Rust. They ran some benchmarks on it, and it was between 10x and 115x faster than pip. This is a best-case scenario. It's not like every single thing you write in Rust is going to get that kind of speedup. Still, Rust empowered this team to create such performant software. The Home Assistant project, a Python project to manage IoT devices in your home, saves 215 hours of CI time every month just by switching its package manager to uv, just thanks to the efficiency of Rust. A single project saving that much compute is staggering.

Background

In this talk I want to cover: how can you leverage Rust? What are the parts of Rust that could be interesting for your project, for your company? What are the reasons why you should adopt it? I am Pietro. I am a longtime member of the Rust project. Nowadays I'm active in the Rust project's infrastructure, release, and security response work to make sure that Rust gets delivered to you. Before that, I was the lead of the Rust infrastructure team, and I served on the Rust Core team for 2 years. At my day job, I'm the technical lead of Ferrocene, a distribution of Rust for safety critical software like automotive or aerospace.

Why Rust?

I want to start with the elephant in the room: Rust is not an easy language to learn. If you look at Rust, you'll see everywhere online that people say it's hard to learn. That is true, because Rust forces you to use a different programming model. One of the core pillars of Rust, the reason why Rust can ensure memory safety, is the concept of single ownership of data: an object, a value, can only be owned by a single part of your code at a given point in time. You cannot have the ownership spread between multiple parts of your code.

Then you can lend out references if you want other parts of your code to access them. The problem is, most programming languages don't enforce that. If you come to Rust, you're going to hit a big wall of having to internalize this new model, having to think again about how you architect your software. Once you do that, Rust will click, and you will be able to use Rust productively and take advantage of all of the good things about Rust without slowing down your team. Google, which is a large user of Rust and has been rewriting more of its services in Rust, ran an internal survey recently.

This was announced by the director of engineering of Google Android at a conference: Rust teams at Google are as productive as the ones using high-level programming languages like Go, and more than twice as productive as teams using C++. You can get all of this after you learn Rust. You can get all of this without slowing down your team.

Also, there is another reason why I think Rust is hard to learn. This is something that every person trying Rust, me included, is guilty of: we all make Rust harder to learn for ourselves, because Rust allows you to squeeze every bit of performance. Rust offers all of the tools to create reliable and efficient software, and doing that requires some of the parts of Rust that are harder to learn. If you want to learn Rust, what I can recommend is: don't start by writing the most efficient algorithm possible, even though Rust tempts you to do that. Start by writing normal code.

Then as you get more familiar with Rust, as you get familiar with the concepts that allow you to write efficient software, you can dive into it. Don't prematurely optimize, making Rust harder to learn for yourself. Also, you're not alone when learning Rust. There are of course helpful communities online that can help you. There are a lot of resources, both freely available material and books you can learn from. There are commercial training providers for upskilling your team. And there is not just that, because with Rust, every one of us has a pair programmer: the compiler.

This is not an exaggeration, because working with the Rust compiler feels like having a senior developer next to you. Because Rust puts so much focus on good and actionable compiler errors. Rust is pushing the industry forward in what good error messages are. We are seeing now other compilers taking inspiration from Rust, working on their error messages. For us, we care about them so much that we have people in the Rust team just focused on improving error messages. If you find an error message that is confusing, that is not as great as it could have been, that is a compiler bug. A bug you should report and that the Rust project takes seriously and fixes.

Let's see an error message. This is an error message related to ownership, the concept of having a single owner. What happened here is that we tried to use the data in multiple places. We can see that Rust points out what the actual problem is: we moved the variable data into the print function, even though it was already moved somewhere else before. It points out where it was moved before, so you can go there and figure out how to refactor it to not move it. It points out where the data is defined and which type it has, so if you need to refactor how the type works, you know where to do that. Also, the compiler knows that the type can be cloned: you can duplicate it to create a new copy to pass along. Since the compiler knows it, it suggests that you can clone the data if the performance impact is negligible for you.
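The situation that error describes can be sketched like this (the function and variable names are illustrative, not the talk's slide code):

```rust
// Takes ownership of its argument, like the print function in the error example.
fn consume(data: String) -> usize {
    data.len()
}

fn main() {
    let data = String::from("hello");
    // Passing `data` directly would move it, and using it afterwards would be
    // a compile error ("value moved here"). Cloning, as the compiler suggests,
    // hands the function its own copy.
    let len = consume(data.clone());
    assert_eq!(len, 5);
    assert_eq!(data, "hello"); // still usable: we only moved the clone
}
```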

The Rust project is so confident in its compiler errors that we even allow the compiler to automatically fix some of them. You can invoke the compiler with a flag where, for the errors we are the most confident about, it just tweaks your source code to fix the error for you, making warnings and errors disappear. Rust also ships with Cargo, the official build system that lets you build your project and fetch and build dependencies. That is something high-level programmers are probably accustomed to, but having a unified tool used by everyone is a first if you come from a background like C or C++. Cargo can fetch dependencies from your company's internal registry, or from the public ecosystem via the crates.io package registry, where anyone can publish a new dependency; you just add a line to your Cargo.toml to fetch it and include it in your program.

Rust also ships with all of the tools you would expect, a static analyzer called Clippy, the rustfmt code formatter, if you want to ensure a consistent style across your project. Top notch IDE integration with rust-analyzer that can hook into multiple editors, and the rustup version manager, which allows you to install Rust, upgrade Rust, and use different versions of Rust across multiple projects.

Rust In Action - Using Macros

Let's now see Rust in action, some ways that Rust can make you write more efficient, performant, and maintainable code. I'm going to focus on two things. One is how to use macros and the type system to make your code more robust without sacrificing efficiency. The other is how you can leverage concurrency to increase the efficiency and performance of your code. Let's start with the first one. Rust has macros, a feature that you can use to generate repetitive code. Maybe you can create a macro to declare two different functions from a template, or write a reusable pattern to reduce duplication. Rust actually has two different kinds of macros. What we call declarative macros are defined inline in the file, and are similar to preprocessor macros you might find in other languages.

It also has procedural macros, which are external code generator tools that can hook into the language itself. I want to focus on procedural macros, and in particular on derive macros, which are a way to generate complex default implementations for your Rust traits. Rust traits are practically the same as the interfaces or type classes you might find in other programming languages. For some traits, you might have to write a custom implementation for every type because the behavior changes between objects. For others, there is a default implementation that makes sense for most types, which you can then override. While for very simple default cases you can just write a default method, for others you might need to look into the type and generate a complex custom default depending on the shape of your data.

Let's look at an example: the Clone derive. Rust has the Clone trait, which allows you to duplicate an object; we saw it before in that error message I showed you. The implementation of clone, if you think about it, is fairly simple: you just take all of the fields in your struct, and you clone each of them recursively. This gives you a full clone, a full copy of your data. If you want to implement that in Rust, you can just put the derive(Clone) attribute at the top of your type, and that's it. The compiler will invoke the code generator, and it will generate the best implementation of clone for your type.

Then you can just invoke the clone method. This is done without runtime reflection; Rust doesn't have reflection, so you couldn't use it anyway. This means that the macro looks at your type and generates the best code specific to what you're doing. If we look at what the macro actually generated behind the scenes, we can see that it implemented the Clone trait for the type. Inside of it, it defines the clone method, and for every single field, it just duplicates it. This is exactly the code you would write yourself if you were to implement clone.
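A minimal sketch of this, with an illustrative struct (the types and fields are mine, not the talk's slide): the derived impl and the hand-written one are equivalent.

```rust
// The derive generates a Clone impl equivalent to the hand-written one below.
#[derive(Clone, Debug, PartialEq)]
struct Config {
    name: String,
    retries: u32,
}

// Roughly what #[derive(Clone)] expands to: clone each field recursively.
// (Shown on a second type to avoid a conflicting impl.)
struct ManualConfig {
    name: String,
    retries: u32,
}

impl Clone for ManualConfig {
    fn clone(&self) -> Self {
        ManualConfig {
            name: self.name.clone(),
            retries: self.retries.clone(),
        }
    }
}

fn main() {
    let a = Config { name: "db".to_string(), retries: 3 };
    let b = a.clone();
    assert_eq!(a, b); // a full, independent copy

    let m = ManualConfig { name: "db".to_string(), retries: 3 };
    assert_eq!(m.clone().retries, 3); // same behavior, written by hand
}
```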

This is what Rust calls zero-cost abstractions, which doesn't mean abstractions that are free, that have no cost at all. What it means is that those abstractions are as fast, as performant, as efficient as code you would manually write and optimize for that type yourself. This is so powerful because it allows you to use abstractions to simplify your code without making any sacrifice in efficiency.

Clone is a built-in derive. It's available in every Rust toolchain; you don't need to do anything to use it. You can also write your own. You can write custom derives for your project if they make sense for your code structure, or you can fetch derives from the third-party ecosystem. The most popular one is Serde. Serde is a generic serialization and deserialization framework for Rust. It's so ergonomic that it's basically the default. Most Rust libraries support Serde. It's what everyone recommends, except in very niche cases where it doesn't fully fit. With Serde, you can just put derive(Serialize) and derive(Deserialize) at the top of your type, and you have created fully optimized serialization and deserialization code.

This doesn't do any intermediate step like putting the data into a dynamic map accessed by strings, which has a runtime cost, and a correctness cost because you need to ensure you're accessing the right strings. The implementation generated by Serde just translates the data from the format you want, like JSON in this example, into the structs you have, into the typed representation of it. This is a macro, so it doesn't use reflection. It's all code generated at compile time, which the compiler then fully optimizes for your type. This is a zero-cost abstraction. It gives you a full deserializer and a full serializer, as performant as one you would hand-write yourself, but without all of the maintainability nightmares of manually implementing serializers for every single type of yours.

How do you represent different variants of data? In that example, we might want to have two different kinds of data depending on the type of message we were receiving in our application, for example. In Rust, you can do that with Enums. You might say, but Enums are just types that have multiple variants, you can't have data in them. That is correct in most programming languages, but in Rust, Enum variants can have data attached to them. These are called algebraic data types, or sum types, in the programming languages field. It's not an innovation of Rust. You have them in a lot of more niche functional programming languages, and even in popular ones like Scala or Haskell. Enums allow you to attach data of different types, different variants of data, depending on the variant of the Enum you're in.

In this case, we have a very simple Enum that defines the configuration of the database for your application. You might want to choose between SQLite and Postgres, both to ease local development and to have a reliable system in production. The configuration for the two of them couldn't be more different, because SQLite accepts a path to a file, while Postgres requires a URL, username, and password. With Enums, you can attach that information directly in the Enum.

You can put it in the same place as the variant, which makes it impossible to represent invalid state. With this, the compiler ensures for you that you are never going to be able to create a SQLite variant with a username and password. This greatly increases maintainability. It's something that I dearly miss every single time I'm using a language that doesn't have them. Once you start using them, you are going to want them everywhere.
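The enum just described might look like this (the field names are an illustrative reconstruction of the slide, not taken from it):

```rust
// Each variant carries exactly the data it needs.
enum DatabaseConfig {
    Sqlite { path: String },
    Postgres { url: String, username: String, password: String },
}

fn main() {
    let dev = DatabaseConfig::Sqlite { path: "dev.db".to_string() };
    let prod = DatabaseConfig::Postgres {
        url: "db.example.com".to_string(),
        username: "app".to_string(),
        password: "secret".to_string(),
    };
    // A Sqlite variant holding a password cannot even be written down:
    // the invalid state is unrepresentable in the type.
    assert!(matches!(dev, DatabaseConfig::Sqlite { .. }));
    assert!(matches!(prod, DatabaseConfig::Postgres { .. }));
}
```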

Also, with Enums, you can only access the inner data after checking whether it's the correct variant. You can't just do database.password, because you don't know whether it's a Postgres variant that actually has a password or a SQLite one that doesn't. The compiler prevents you from doing that, which also helps ensure correctness. There are multiple ways to check if the variant is correct before accessing it, and the most flexible one is pattern matching, which allows you to check whether the shape of the data corresponds to what you expect.

If so, it allows you to bind the inner fields of that data structure as variables you can use. Pattern matching is also not a Rust innovation. It's present in a lot of languages, and due to how flexible, expressive, and powerful it is, it's permeating through mainstream languages. We have seen Java and Python implement pattern matching recently, and we're seeing interest in C++ to implement it as well. How does it actually look? We can see here a match statement that checks the database we defined before. Depending on the kind of database you have, it connects to it in a different way.

Patterns are evaluated top to bottom. If we try to manually evaluate one: first we check whether we want an ephemeral SQLite database. In this case, we first check whether the Enum variant is the SQLite variant, and only in that case, we check whether the path equals ":memory:", which is SQLite's convention for creating a database in memory, not in a file. If that's the case, Rust invokes the ephemeral storage function, which is the body of the pattern. It is only called if it's an in-memory SQLite database. If it doesn't match either the SQLite variant or the path, you go to the next pattern.

There, as before, we first check whether it's the SQLite variant. If so, since we haven't put any constraint on the path, the compiler creates a local variable called path that you can then pass to the function. This is how Rust guarantees correctness: Rust only allows you to access path if you first checked that it was SQLite. Then, if it wasn't SQLite either, it does the same thing: it checks whether it's Postgres, and if so, it binds the three inner fields as variables that we then use to connect to the database and authenticate. This is a simple example of a match statement. You can write far more complex patterns that are still very similar to how you would define the data in the first place. They are an extremely concise way to express the check you want.
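The match just walked through can be sketched like this; the enum and the connect helper are illustrative reconstructions (the real slide's connect functions are stubbed here to return descriptions instead of opening connections):

```rust
enum DatabaseConfig {
    Sqlite { path: String },
    Postgres { url: String, username: String, password: String },
}

fn connect(config: &DatabaseConfig) -> String {
    match config {
        // Checked first: the SQLite variant whose path is ":memory:".
        DatabaseConfig::Sqlite { path } if path == ":memory:" => {
            "ephemeral in-memory storage".to_string()
        }
        // Any other SQLite config: `path` is bound as a local variable.
        DatabaseConfig::Sqlite { path } => format!("sqlite file at {path}"),
        // The Postgres variant binds all three inner fields.
        DatabaseConfig::Postgres { url, username, password: _ } => {
            format!("postgres at {url} as {username}")
        }
    }
}

fn main() {
    let mem = DatabaseConfig::Sqlite { path: ":memory:".to_string() };
    assert_eq!(connect(&mem), "ephemeral in-memory storage");

    let file = DatabaseConfig::Sqlite { path: "app.db".to_string() };
    assert_eq!(connect(&file), "sqlite file at app.db");

    let prod = DatabaseConfig::Postgres {
        url: "db.example.com".to_string(),
        username: "app".to_string(),
        password: "secret".to_string(),
    };
    assert_eq!(connect(&prod), "postgres at db.example.com as app");
}
```

Removing the Postgres arm makes this fail to compile with the exhaustiveness error described below.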

Crucially, match statements in Rust are exhaustive, which means that you cannot forget to handle some variants. You can of course put a default arm if you don't care about some of the variants. If, in the example before, we remove Postgres, our pair programmer will tell us what's wrong. In this error, it tells us that we're not checking all of the patterns in our match, because the Postgres variant is not covered. It points out where the database Enum is defined, so that you can go there, look at the documentation, and maybe see what you need to do to handle it. It even gives us the syntax to add the new match arm, which in this case just contains a todo, because it doesn't know how to implement the actual code, but it still points you in the right direction. This just scratches the surface of procedural macros and Enums.

You can write procedural macros to do basically anything. There are procedural macros that check at compile time whether your database queries are correct. There are procedural macros that generate an ORM for your tables, all at compile time. Enums are just so powerful that you are going to miss them everywhere. You can use Enums to represent state machines that cannot represent invalid states. You can use them to ensure the layout of your data, and ensure the compiler can check that layout, all without losing any efficiency or performance, while still drastically increasing maintainability.

Leveraging Concurrency to Increase Efficiency and Performance

We have seen how we can structure our code to be more maintainable without losing efficiency. What if we actually want to increase our performance? The best way to do that is through concurrency, because compared to three decades ago, nowadays everything has multiple cores. We go from server chips with 128 cores to computers that have at least 8 cores basically everywhere. Even our mobile phones have so much compute power. Rust allows you to tap into that. Let's first see something that is not parallel: a Rust iterator. Iterators are popular in basically every functional language, and allow you to access, transform, and collect data in a functional way. This is a very simple iterator that takes a list of numbers, creates an iterator out of them, converts them with a slow_computation function, which we assume is going to be slow, and then sums everything together. If you have a lot of numbers, and slow_computation is really slow, this is going to impact the performance of your program. How can you fix that? Rust has a third-party library that you can pull in called Rayon, which enables parallel iterators.
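A sequential version of that pipeline might look like this (slow_computation here is a stand-in, since the talk's actual function isn't shown):

```rust
// Stand-in for the expensive per-element work in the example.
fn slow_computation(n: u64) -> u64 {
    n * n
}

fn main() {
    let numbers: Vec<u64> = (1..=5).collect();
    // Sequential: each element goes through slow_computation one at a time.
    let total: u64 = numbers.iter().map(|&n| slow_computation(n)).sum();
    assert_eq!(total, 55); // 1 + 4 + 9 + 16 + 25
}
```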

With parallel iterators, you just import Rayon and replace the iter method with par_iter, which will spread the map across multiple threads automatically behind the scenes, without you having to do anything. This is also not a Rust innovation. For example, Java has parallel streams, but it's scary to use them because you are introducing concurrency into a program that was never designed for it. Let's imagine this was in a 10-year-old code base, 10,000 lines of code, that was never designed for parallelism.
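Rayon is a third-party crate, so as a standard-library-only sketch of roughly what par_iter automates behind the scenes, here is the same sum split across scoped threads by hand (Rayon's real work-stealing scheduler is far more sophisticated than this fixed chunking):

```rust
fn slow_computation(n: u64) -> u64 {
    n * n
}

fn main() {
    let numbers: Vec<u64> = (1..=100).collect();

    // What `numbers.par_iter().map(slow_computation).sum()` does for you,
    // spelled out with scoped threads: one thread per chunk of the input.
    let total: u64 = std::thread::scope(|s| {
        numbers
            .chunks(25)
            .map(|chunk| {
                s.spawn(move || chunk.iter().map(|&n| slow_computation(n)).sum::<u64>())
            })
            .collect::<Vec<_>>() // spawn all threads before joining any
            .into_iter()
            .map(|handle| handle.join().unwrap())
            .sum()
    });

    // Same result as the sequential version: the sum of squares 1..=100.
    assert_eq!(total, 338_350);
}
```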

Trying to do this would just cause extremely hard to debug concurrency issues. Isn't that scary? Not with Rust. One of the things that is actually unique to Rust, not present in any other programming language, is what we call fearless concurrency. That means having the confidence to write parallel code, the confidence that you're not making mistakes, that you're not introducing data races, and, crucially, the confidence when adding parallelism to existing code, to an existing ancient code base that was never designed for it. This is an extraordinary claim.

Let's see how it actually works in practice. We're going to use this simple example to avoid throwing too much into it. In this example, we first create some data wrapped into Rc, a reference counted pointer that tracks how many copies of the data exist, so that when the count goes to zero, the data can be removed. Inside there is a RefCell, which is a way to bypass some of Rust's restrictions and move their checks to runtime, and inside of it, the actual piece of data. Then we create a runnable closure, which calls the process function on that data, and we spawn that in a different thread.

If you actually try to do this, the compiler will not let you compile it, because this program has a crucial concurrency bug. Rc is not thread safe. The counter Rc uses to track how many copies exist is not incremented atomically, so you can have data races and introduce wrong behavior. The compiler detected this and pointed out that you cannot do that. This is thanks to the Send trait, an interface that is automatically implemented by the compiler for you. You never have to add it manually.

With the Send trait, Rust figures out that if all of the fields in your struct are Send, your type is also Send. If one of the fields is not Send, your type is not Send either. Rc, for example, is not Send because internally it uses abstractions that are not thread safe. That is how Rust catches this: it figures out there is an Rc, figures out it's not Send, and prevents you from doing this.
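The compiler's reasoning can be made visible with a small helper bound; this is a common trick rather than code from the talk:

```rust
use std::rc::Rc;
use std::sync::Arc;

// A function that only compiles for types the compiler has marked as Send.
// It passes the value through so we can also check it at runtime.
fn assert_send<T: Send>(value: T) -> T {
    value
}

fn main() {
    assert_eq!(assert_send(42u32), 42);       // plain data is Send
    assert_eq!(*assert_send(Arc::new(7)), 7); // Arc counts atomically, so it is Send
    assert_send(vec![1, 2, 3]);               // a type built from Send fields is Send

    // This line would not compile, exactly like the example above:
    // assert_send(Rc::new(42)); // error: `Rc<i32>` cannot be sent between threads safely
    let _fine_on_one_thread = Rc::new(42); // Rc is fine as long as it never crosses threads
}
```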

Let's fix it. Let's replace Rc with Arc, which is exactly the same thing as Rc but uses atomic operations internally to avoid thread safety issues. The fact that Rc and Arc are different types is great, because atomic operations are more expensive for the processor. You don't have to use atomics everywhere in your program just on the off chance that one specific type is going to be moved to a different thread. You can just use Rc, and the compiler will point out exactly the places that you need to make thread safe, so you incur the performance cost of thread safe operations only there. If you actually try to compile that code, you get another error, because in this case RefCell cannot be shared between threads safely. RefCell allows multiple parts of your code to mutate the data.

If you actually share it between different threads, you have different threads that can mutate the same data at the same time, and that is a data race. You really don't want to debug that. In this case, again, the compiler tells you that you cannot do that: you need to replace RefCell with something thread safe. That is checked with the counterpart of the Send trait, the Sync trait, which represents types that can be accessed concurrently by multiple threads. Again, this is implemented automatically by the compiler with the same rules as the Send trait. If we now fix it by replacing RefCell with a lock, our code will compile, and we will have made it thread safe.
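The fixed version of the example might look like this (process is a stand-in for the talk's function, and the data is an arbitrary Vec):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for the `process` function in the example.
fn process(data: &mut Vec<i32>) {
    data.push(42);
}

fn main() {
    // Rc<RefCell<...>> replaced by its thread-safe counterparts:
    // Arc for atomic reference counting, Mutex for synchronized mutation.
    let data = Arc::new(Mutex::new(vec![1, 2, 3]));

    let handle = {
        let data = Arc::clone(&data); // a second atomic handle for the thread
        thread::spawn(move || {
            process(&mut data.lock().unwrap());
        })
    };
    handle.join().unwrap();

    assert_eq!(*data.lock().unwrap(), vec![1, 2, 3, 42]);
}
```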

This is a toy example. It was fairly trivial for a Rust programmer to see what the problem was here. But imagine you have a very large, ancient code base and you want to add parallelism just to the hot loop to make it faster. You don't have to go and manually look at all of the types to make sure they're thread safe. The compiler does it for you. This is because of all the functions in Rust that can introduce concurrency, like the function to spawn a new thread that we saw, or Rayon, which creates a parallel iterator and processes data across threads.

They all require that their inputs implement the Send and Sync traits. That's why Rust allows you to fearlessly add parallelism, to do fearless concurrency, to throw parallelism at something: you know that the compiler has your back and will prevent all of the thread safety issues you would otherwise encounter in any other programming language. Rust also has its own little twist on how locks work to make concurrency even easier to implement, because in most programming languages, locks protect pieces of code, not data.

In practice you actually want to use locks to protect data, but the language cannot guard the data itself. You need to manually know every single place where you access or modify the data, and manually lock and unlock the lock around it. With Rust, locks encapsulate the data they protect. You actually store the data inside the lock itself. The Rust type system only lets you access the inner data once you acquire the lock. As soon as you release the lock, the type system will prevent your program from compiling if it tries to access the data while it's not locked anymore.

This is not actually valid Rust code; I created the unlock method to make it clearer to understand. Notice that you still have to prevent deadlocks yourself. The Rust compiler cannot stop your program from compiling if you attempt to create a deadlock. Still, a deadlock, while probably not the easiest thing to debug, because it's fairly hard to pin down exactly why it happens, is the easiest to detect: you can clearly see that something is wrong. A data race corrupting data when maybe 1 time in 1000 two threads access it at the same moment is way harder to detect. Rust prevents that with Send and Sync.
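In real Rust there is no explicit unlock call: locking returns a guard, the guard is the only handle to the inner data, and the lock is released automatically when the guard goes out of scope. A minimal sketch:

```rust
use std::sync::Mutex;

fn main() {
    // The data lives inside the lock: there is no way to reach the
    // counter without going through `counter.lock()`.
    let counter = Mutex::new(0u32);

    {
        let mut guard = counter.lock().unwrap(); // acquire the lock
        *guard += 1; // the guard is the only way to touch the inner data
    } // guard dropped here: the lock is released automatically

    // A reference obtained through the guard cannot outlive it, so code
    // that touches the data after unlocking simply does not compile.
    assert_eq!(*counter.lock().unwrap(), 1);
}
```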

The way that locks are designed enables fearless concurrency. It enables you to add parallelism to your code. It enables you to squeeze every bit of performance you can get from the hot path of your application, the part that does most of the data processing, without having to worry about it. Rust has even more to offer. There is native support for Async/Await, which is actually even more interesting the more you look at it. And you have a powerful type system, beyond Enums, that allows you to design extremely maintainable software.

Rust's Interoperability with Other Languages

Rust makes it really easy to interoperate with other languages. If you want to take advantage of Rust, you can, but you don't have to rewrite all of your code, all of your legacy application that was developed over 20 years. Rust was designed to be able to slowly replace the parts of your code base that could benefit the most, maybe the parts that would benefit from concurrency, or the parts that need the performance, without having to deal with memory safety issues. That is actually the approach Mozilla took to introduce Rust in Firefox, because Rust was originally created by Mozilla just to improve Firefox.

When Mozilla launched Firefox Quantum in 2017, they rewrote the Firefox CSS engine. This led to a 30% speedup in loading amazon.com. This was not because the old code was inefficient; it was because the old code was single threaded. It was a legacy, single-threaded C++ application, and Mozilla deemed it basically impossible to add parallelism to it without all of the protections and confidence that Rust gives you. With Rust, Mozilla was able to replace just a small part of Firefox, add parallelism to it, and get such a big speedup. There are multiple ways to interoperate with Rust.

The way that is compatible with most languages is through the C FFI, because most languages nowadays have the ability to call into a C library. Rust can expose a C interface: you can create a C interface for your library or your part of the program, so that you can call from your application into Rust, and from Rust into your application. There is excellent tooling to generate bindings and reduce the boilerplate you have to write.

Possibly the most incredible one is PyO3, which allows you to integrate with Python and create Python native modules in Rust. This is a fully working PyO3 module, shown on the slide, where we create a new function that sums two numbers and returns a string. Then we create a module with this function. If you compile this, you will have a fully working Python module that you can just import. The module is written in Rust, without any boilerplate, and without having to worry about any of the memory safety issues you would face if you were to write it in C.

Conclusion

This is what I recommend: identify which parts of your code base would benefit the most from Rust. Don't start blindly rewriting everything, because that is often not the best approach. Think hard about where you can get the most benefit from Rust, and selectively start introducing it. I hope I showed you that Rust is useful beyond memory safety, and that you can leverage it to increase your programs' efficiency and performance, even if you're using a high-level programming language.

I think a key to Rust's success is not just its support for parallelism or its ability to be memory safe, but the fact that Rust successfully managed to merge two different worlds of the programming ecosystem that were completely separate before. Rust brings a lot of innovations to low-level programmers. It allows them to avoid security vulnerabilities, improve safety, and leverage the modern developer experience and the convenience of modern tooling that every other programmer benefits from.

For higher-level programmers, people who were not familiar enough with C and C++ or maybe scared of using them, Rust makes it possible to squeeze out every single bit of performance without compromising on tooling, developer experience, or ergonomics. This is why Rust empowers everyone to build reliable and efficient software. This is why Rust has been the most loved language for 8 years in a row.

Questions and Answers

Participant 1: In your experience, who learns Rust better: an expert C++ developer or a junior developer?

Albini: I don't think there is much difference, because, in the end, Rust requires you to use a different programming model than what you were used to in C++. An expert developer may have to learn to let go of the C++ idioms they were used to, but an expert C++ developer could also appreciate even more, and actually understand, why the protections Rust gives you are there. They should be doing the same things in C++, but without the compiler as a pair programmer to help them achieve that. It depends on the person's style of learning, but realistically I don't think there is going to be much difference.

Participant 2: Where do you see Rust not being used?

Albini: There is a simple answer, which is actually where I tend not to use Rust: very quick scripts and small one-off programs, because Rust is also notoriously slow to compile. It's getting faster, and there is a lot of effort to make it faster, but if you just need to write a quick script, maybe to do some setup in CI or something, it's probably not worth it. The more complex answer is that different parts of the Rust ecosystem have different levels of maturity, because Rust is young.

Even though the first stable release was 9 years ago, in the world of programming languages that is fairly young: it's a fairly young ecosystem. Depending on how much effort was put into each part of the ecosystem, the experience you're going to get can be drastically different, and it might not be worth using Rust there. For example, in the game development space, there are some excellent projects already showing how game development can benefit from Rust, but the ecosystem is not yet as complete as what you might get with C++.

With UI development, there are some libraries that allow you to create nice-looking UIs that are fairly easy to implement, but it's still not as mature as Qt or other environments. It really depends on where you want to use Rust. It's up to every one of us to look at the current ecosystem and see whether it's enough for us, because even though it might not be as polished, it might still be enough for what we need to do; or to see whether it's worth contributing to the ecosystem to push it forward, or whether adopting Rust there just isn't a reasonable business decision. It really depends on where you want to adopt Rust.

Participant 3: I'm still trying to wrap my head around procedural macros. I might be wrong, but I have not seen them in other programming languages; they're not as common as functions. Is there an application you can think of where functions can't be used and procedural macros would be a good fit?

Albini: A place where functions cannot be used and procedural macros can is, for example, the Clone implementation we saw before, because the implementation depends on knowing which fields the struct has: it depends on knowing the shape of the data you want to clone. That is information you don't have access to at runtime. Even if Rust had reflection support, it would probably be unacceptable from a performance standpoint in a lot of places. Procedural macros really shine where the code you need to generate changes depending on the shape of the data you have.

A good way to think about it is that procedural macros are what you would use where you would reach for reflection in other languages. If you were in Java and wanted to use reflection, in Rust you would use procedural macros instead. On one hand that is worse, because procedural macros are harder to write: you need to actually parse the struct and then generate code from it, rather than just invoking a reflection method. On the other hand, they bring the efficiency and maintainability benefits I mentioned before.
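To make this concrete, here is a sketch of what the standard `#[derive(Clone)]` macro saves you from writing by hand (the `Point` struct is a hypothetical example): the derive macro inspects the struct's fields at compile time and emits roughly this implementation, with no runtime reflection involved.

```rust
struct Point {
    x: i32,
    y: i32,
}

// Roughly what `#[derive(Clone)]` would generate for `Point`:
// the macro sees the struct's shape at compile time and emits a
// field-by-field clone, so no reflection happens at runtime.
impl Clone for Point {
    fn clone(&self) -> Self {
        Point {
            x: self.x.clone(),
            y: self.y.clone(),
        }
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p.clone();
    println!("{} {}", q.x, q.y);
}
```

Because the generated code is ordinary Rust, it compiles down to a plain field copy, which is the efficiency advantage over reflection-based approaches.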


Recorded at:

Sep 10, 2024
