Sure, yes. I am a network geek from way back. I have designed a lot of different protocols, implemented a lot more than I have designed, and been around the Internet Engineering Task Force for about 20 years. I have designed and built messaging systems, messaging-oriented middleware, all kinds of different communication mechanisms, things like that. And my background is actually hardware. I grew up an electrical engineer and kind of moved over to software as I, you know, matured a bit. So, yeah.
2. And why are you here at QCon?
I was asked to talk about something that a colleague, Martin Thompson, and I have been working on: an open source project, sponsored by the Chicago Mercantile Exchange and the company that I work for, Informatica, dealing with the serialization of financial information. It is called Simple Binary Encoding, on FIX. FIX is the Financial Information eXchange protocol; it is used all over. So what this is is a simple binary encoding of that, but it is actually quite a bit more than that. So I came to give a talk on that, to sort of introduce it, and we are hopeful that in a few weeks we will be open sourcing it and it will be out there in the community.
Sure. We have actually digressed a little bit. You know, if you look at the IETF and standards efforts and protocols, they are fairly mature: the processes of how best to design them, how best to evolve them. We have had a lot of practice with this, with lots of different protocols, so there are very good best practices there. But the average user, and the average user of things like serialization, has not been able to really use much of that. And that's really because, honestly, we have been a little bit insular in the networking space. We talk about protocols and interactions in ways that, if you don't know the nomenclature, may leave you off to the side, unable to use much of that sphere. So being able to bring in the experience of designing protocols, the things that we know don't work and the things that we know do work very well, and bringing that to the problem domain of serialization, was what attracted me to looking at this.
And so, you know, the problem of serialization is one of performance. I actually come from selling middleware into high frequency trading firms, and I have done that for the last ten years, actually 15 years. It is a highly performant environment; in high frequency trading, as well as foreign exchange trading and exchanges, speed is a competitive advantage. That doesn't mean you want the absolute fastest, there are a lot of different trade-offs, but speed is not something that you can leave on the table, because if you just ignore it, you are going to get beat, and when you get beat, you lose money. That's not something anyone wants, especially in financial services. Looking at speed, there are a few areas where things can give you an advantage. One of them is how you serialize data. It is remarkable, but it is true: in some systems you may see upwards of 25 or 30% of the time and latency of an application taken up just by serializing primitive data types onto the wire. But also within storage of data: storing data, and how you store it, is more than just compressing it and having it there. It is how it is encoded, how you can access it, and also being very efficient with encoding and decoding, things like that. These are all things that have been done for a long time, going back even to RPC and XDR, and that is going back quite far; those are the basic concepts. But there are better techniques that we have now, just based on experience, that we can use. And that's kind of what we have done.
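As a concrete illustration of that kind of encoding (a minimal sketch only, not the actual SBE codec; the message layout and field names here are invented), a flat, fixed-offset binary encoding in Java might look like this:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class FlatEncodeSketch {
    // Offsets are fixed and known up front, as a schema would define them.
    static final int TIMESTAMP_OFFSET  = 0;   // int64, nanoseconds
    static final int INSTRUMENT_OFFSET = 8;   // int64 id
    static final int PRICE_OFFSET      = 16;  // int64 mantissa, implied decimals
    static final int QTY_OFFSET        = 24;  // int32
    static final int MSG_LENGTH        = 28;

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(MSG_LENGTH)
                                   .order(ByteOrder.LITTLE_ENDIAN);

        // Encoding is just primitive puts at known offsets:
        // no objects, no reflection, no intermediate string form.
        buf.putLong(TIMESTAMP_OFFSET, System.nanoTime());
        buf.putLong(INSTRUMENT_OFFSET, 42L);
        buf.putLong(PRICE_OFFSET, 1_234_500L); // 123.4500 with 4 implied decimals
        buf.putInt(QTY_OFFSET, 100);

        // Decoding reads straight out of the buffer; there is no parse step.
        System.out.println("qty = " + buf.getInt(QTY_OFFSET));
    }
}
```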
You may not be, if you don't have any performance problems, and that may not be latency; it could be throughput, it could be, you know, just being able to read in data from a file. You may not be. But when you are looking at performance problems, you start to profile and measure where the time is being spent, where CPU cycles are going, different things like that. Once you start looking, you may find that serialization takes up an inordinate amount of processing that you just did not realize. And in that case, there are good techniques that you can use to really bring that down dramatically.
5. And I suppose those aren't leveraging JSON (JavaScript Object Notation)?
Well, JSON is much better than something like XML, which is very verbose and requires string parsing, which is very CPU-intensive compared to what you could do. So JSON may be fine; if you are coming from something like XML, then JSON may actually give you what you need. But you can go further than that. So you may or may not need to, based on what your use cases are.
It never hurts to understand that. I mean, the more you understand how the actual hardware works, and the way networks work, and disks and everything, the more you can get out of them. It is my firm belief, and maybe it is because I am kind of an academic and I actually like to learn about how these things work; I do take things apart, mentally as well as physically. And that's why the idea of mechanical sympathy from a colleague I have worked with, Martin Thompson, which he got from racing, is so appropriate to what we do in the performance space. Knowing how these things work, being able to design with that in mind and leverage it, what you can get out of systems that way is just phenomenal. So, you know, you don't have to, because there are people like Martin and myself and Peter Lawrey and a list of other people who actually do this for a living. They spend their time thinking about this.
But they can encapsulate things, like in Martin's case the Disruptor and different things that he has done, and I am working with him on other things as well, as well as Peter's work with Chronicle and all kinds of other ways of looking at this; it resonates with providing these abstractions that can be used. I mean, if you look at the collections within Java, for example, they are not bad, but once you start getting to the upper end of performance, there are techniques which can dramatically change how they are done. The .NET and C# collections are ten times faster than something like Java's HashMap, just because of the way they are using open addressing. It is technique. So there are techniques which rely on how a CPU works, how caches work, how you can access data, things like that. Knowing how those things work and being able to leverage them can be a difference. And in high frequency trading, when speed is a competitive advantage, being able to use some of those is something that you just can't turn down, because it does have that big of an impact on your literal bottom line.
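A rough sketch of the open-addressing idea (not any particular library's implementation): keys and values live in flat arrays and collisions are resolved by probing the next slot, so lookups walk contiguous, cache-friendly memory instead of chasing node pointers the way a chained map such as java.util.HashMap does. This toy version assumes a power-of-two capacity, non-negative keys, and that the table never fills:

```java
import java.util.Arrays;

// Minimal open-addressing (linear probing) int-to-int map.
public final class IntIntMap {
    private static final int EMPTY = -1;  // sentinel: keys must be >= 0
    private final int[] keys;
    private final int[] values;
    private final int mask;

    public IntIntMap(int capacityPow2) {
        keys = new int[capacityPow2];
        values = new int[capacityPow2];
        mask = capacityPow2 - 1;
        Arrays.fill(keys, EMPTY);
    }

    public void put(int key, int value) {
        int i = key & mask;
        while (keys[i] != EMPTY && keys[i] != key) {
            i = (i + 1) & mask;  // linear probe: next slot, same cache line or the next
        }
        keys[i] = key;
        values[i] = value;
    }

    public int get(int key) {
        int i = key & mask;
        while (keys[i] != EMPTY) {
            if (keys[i] == key) return values[i];
            i = (i + 1) & mask;
        }
        return EMPTY;  // not found
    }
}
```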
Absolutely, yes. Measurement is always key. Performance, or optimization, is an exercise in measuring and in determining if one technique is better than another. You have to be very careful about that, because it is always easy to micro-benchmark something that in the real world does not matter. The common thing is looking at how caches are being accessed and optimizing for the use of the caches, but then, when you put it into a larger system where the cache is being polluted and totally blown out, it doesn't really matter what you did at that point. But if you take a whole system's view and can fit them into that, specific micro-benchmarks can be very useful. So you have got to take a bit of a whole system's approach, but if you do, the advantages you have are pretty stark.
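On the JVM, a harness like JMH is the usual way to do that measuring. A minimal sketch (the benchmark body here is just a placeholder workload), with the caveat from above that a micro-benchmark's clean caches may flatter code that behaves differently inside a whole system:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class SerializeBench {
    private final String ascii = "1234567890";

    // Measures one candidate in isolation; returning the result
    // keeps the JIT from dead-code-eliminating the work.
    @Benchmark
    public int parseAscii() {
        return Integer.parseInt(ascii);
    }
}
```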
Well, the first thing is, I am extremely biased about how protocols work. Protocols come in two generic forms. And think about this, if you are not familiar with protocols, just in terms of writing to disk. You can write in binary, in other words a binary representation of the data: integers come in varying sizes, and even if you don't really care about size you may use something like an int64, a 64-bit integer, which is very common, or you may use more appropriately sized types of values. Or the data may come in the form of ASCII, where you have taken the representation of something like an integer, converted it into a string, and written it out to a file. We do that all the time. If it is a human reading it, that makes total sense; if it is a machine reading it, it does not make as much sense, right, because then it has to be converted from a string back into an integer before you can do anything useful with it.
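A small Java sketch of the two forms being contrasted, using one hypothetical value:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class BinaryVsAscii {
    public static void main(String[] args) throws IOException {
        int value = 1234567890;

        // Binary: a fixed 4-byte, big-endian representation.
        ByteArrayOutputStream bin = new ByteArrayOutputStream();
        new DataOutputStream(bin).writeInt(value);

        // ASCII: the same value as a 10-byte decimal string; a machine
        // must parse it back to an int before it can compute with it.
        byte[] ascii = Integer.toString(value).getBytes(StandardCharsets.US_ASCII);

        System.out.println("binary bytes: " + bin.size());    // 4
        System.out.println("ascii bytes:  " + ascii.length);  // 10
    }
}
```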
So one thing is to look at what the representation of the data on disk, or sent over the wire, really looks like, and ask: does it need to be a string? Because the cost of actually parsing a string for an integer is high. If you don't think it is costly, try to implement it sometime and just look at the way you have to loop and everything else to get that (there is a sketch of that loop below). There are techniques that can dramatically reduce it, but a lot of the time that is not done and we use libraries, and sometimes those libraries are not the most efficient thing out there. So look at whether you need to have it in an ASCII form.

One common thing you will see within a lot of protocols in the IETF, HTTP aside, is that protocols are mostly binary. Look at the lower level protocols: IP, TCP, UDP, these are all binary, because they are read by machines. And yet a lot of machine-to-machine data is ASCII. Log files are ASCII. Take log files from webservers, for example: does a human really read through those? Not really. Do we have machines that read through them, like Hadoop, MapReduce, all kinds of different things? Absolutely. So why have them as ASCII? If you look down through that, it is a very interesting question, right? From one view of it, we are wasting a lot of processing marshalling and demarshalling to ASCII. But it makes total sense for certain use cases. So it is a delicate thing. When I have talked to people about what they are doing and how they can improve performance in this little area, it is something as simple as just questioning: does that really need to be ASCII, or can it be binary? Because in Java, for example, writing out a value like an integer is one call, no matter if it is ASCII or binary, and it will be portable. If you are in C or C++, it actually doesn't get much easier than writing out an actual integer: it is a memcpy. It may not even be that; you could plug it into a struct and just copy the struct. And C# is the same way. You have got a lot of ways of doing this really, really simply.

I have taught protocols many times. For about a decade I taught computer communications, and one of the things I would have my students do, at the start, was play with existing protocols, most of them ASCII based. But as they move through the course they get into binary, and I start showing them techniques and different things like that, and by the end, when you ask them to design a protocol, very few actually use an ASCII protocol. They are mostly binary, because they have been exposed; now they know how easy it is, and to them it is not arcane anymore. If you had asked them at the beginning, it would have seemed weird and they would not have been sure how to do it. So that's one thing to look at: do you need it to be ASCII or not? Once you start looking at binary, all kinds of things open up. It is actually easier, in my mind; maybe I have done it for so long that it has kind of been absorbed, but for doing things like versioning and compatibility and a lot of other stuff, there are really good techniques that are already there.
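Here is that loop, sketched for the simplest case (unsigned decimal only, no sign or overflow handling): a branch and a multiply-add per digit, where the binary form is a single fixed-width read:

```java
public class AsciiParse {
    // Parse an unsigned decimal integer out of a byte buffer.
    static int parseAsciiInt(byte[] buf, int offset, int length) {
        int result = 0;
        for (int i = offset; i < offset + length; i++) {
            byte b = buf[i];
            if (b < '0' || b > '9') {
                throw new NumberFormatException("not a digit at index " + i);
            }
            result = result * 10 + (b - '0');  // data-dependent multiply-add per digit
        }
        return result;
    }
}
```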
Just to give an example of versioning: IPv4 versus IPv6. The first four bits of the packet give you the version number, and the rest of the packet after that is composed based on it, so it is a big IF-statement, if you want to look at it that way. The version and the type, if I remember right, this is all off the top of my head, are those first eight bits, and those two together will tell you everything you need for the next several bytes. It doesn't get much easier than having a version field be the first piece of data that shows up. And, you know, just as to the evolution of protocols, as they evolved out of the sort of stumbling around of implementing the internet, there are some really hideous things out there that protocol designers did, things like RIP, the Routing Information Protocol; just look at it sometime, it is pretty hideous. But we have got much better at planning for future features and at how you lay things out. It is all data layout; it is not that complex, but there are really good approaches, and they have all evolved with the idea of performance in mind, because it is expensive to implement these things in hardware. How you do that really effectively has been an ongoing debate for a lot of years, 15 years or so. So leveraging that knowledge, bringing it around so that other people can use it, and using it in other domains, and I don't even know where a lot of that stuff could apply because I have done it for so long, but I think it does apply in a lot of different places. Using those techniques to improve performance changes what the state of the art is. It is kind of fun.
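In code, that IF-statement is about as small as it sounds. A sketch of version-first dispatch (parseV4 and parseV6 are hypothetical stubs; the high nibble of byte 0 carries the version in both IPv4 and IPv6):

```java
public class VersionDispatch {
    static void dispatch(byte[] packet) {
        // High four bits of the first byte are the version field.
        int version = (packet[0] >> 4) & 0x0F;
        switch (version) {
            case 4:  parseV4(packet); break;  // IPv4 layout from here on
            case 6:  parseV6(packet); break;  // IPv6 layout from here on
            default: throw new IllegalArgumentException("unknown version " + version);
        }
    }

    static void parseV4(byte[] packet) { /* hypothetical IPv4 decoder */ }
    static void parseV6(byte[] packet) { /* hypothetical IPv6 decoder */ }
}
```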
Harry: Todd, thanks a lot for your time.
Alright, thank you.