
Fantastic Front-End Performance Tricks & Why We Do Them


Summary

Jenna Zeigen covers the state of the art in front-end performance optimizations, from minimizing file size to preventing thrashing, digging into the way the internet and browsers work to explain why each of these practices is important.

Bio

Jenna Zeigen is a Senior Software Engineer at Slack.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Thanks for coming to my talk, "Fantastic Front-End Performance Tricks and Why We Do Them." These slides are up on the internet already at that link. Also, feel free to tweet at me @zeigenvector on Twitter with questions, comments, compliments, anything of that nature. So yes, I am a senior frontend engineer on Slack's Search, Learning and Intelligence team, based in New York City. I flew all the way here to QCon SF from New York, and I'm happy to be back in San Francisco. It's nicer weather here.

Other related things I do in New York include organizing EmpireJS, which is an annual JavaScript conference that happens in Manhattan, as well as BrooklynJS, which is a monthly JavaScript meetup that happens in, you guessed it, Brooklyn. BrooklynJS is the oldest of the borough JS siblings, of which San Francisco's WaffleJS is one. So maybe some of you have been to WaffleJS. It's BrooklynJS's fifth anniversary this month, which I think is pretty cool. I haven't been organizing it that long, but it's a factoid I'm proud to share.

So, based on all that, I know firsthand the effort that it takes to pull off a conference like this. Actually, this is far bigger than any event I've ever organized. So could we get a round of applause for all of the organizers, volunteers, and our track host?

Just so I get a sense of my lovely audience, who would consider themselves mainly a frontend engineer? Cool. Backend? Nice mix. Both? Oh, wow. There we go. Both, that hit it. Neither? How about neither? I'm a frontend kid, that's why I'm up here talking about frontend performance, and hopefully everyone finds something to learn in here, though some of you might already know some of these tricks. I think that's all right. So yes, regardless, we're about to dig into a lot of information, and there's going to be a lot going on here, and a bunch of people on the internet agree as well. There are going to be a lot of tips, advanced and beginner, but all rooted in the ways that networks, the browser, and human brains work. So for me, these tips become a lot more interesting when we understand why we're doing them. And so I'll be sprinkling that context in along the way, and hopefully it helps to make these tips a little bit more interesting for you as well.

The goal of frontend performance from the user's perspective is twofold. First, we want to allow our users to get what they need out of the website or application as soon as possible. So I'll be talking about some tips and tricks for loading performance. We also want to ensure that our users have a smooth experience when they're interacting with our website. Once everything is loaded, and they're scrolling around, and they're interacting with our site, we want that experience to be delightful. So I'll talk about some render performance tricks as well. So from the loading side, we want to make sure everything is getting to our users quickly. So first, we want to keep things small. We also want to keep things smart. And on the rendering side, we want to keep things smooth. But how do websites work, right? This is some of that crucial information, that crucial context that's going to help us root these tips and tricks in what's going on. So I'm going to do a super quick overview of how a browser gets its information from the server and turns it into a website. Because the context really makes these tips really pop.

How Do Websites Work?

First, a little bit about packets and protocols. For those who are not familiar, a packet is a unit of data that gets sent over a network, like, say, the internet, and it contains a payload of information. Communications protocols are the ways that entities like computers transmit information among themselves over these networks. So TCP, which is going to be a super important protocol as we discuss these tips and tricks, stands for Transmission Control Protocol. It's the protocol that underlies HTTP, the Hypertext Transfer Protocol, which is the protocol that powers the web. It uses packets to send information from one computer to another. TCP's job is to deliver a reliable, ordered, and error-checked stream of packets across the network from one computer to another.

TCP also has something called congestion control. These requests are going to start small and then ramp up in size, to ensure that the network can support them. And this is going to help the computers send the things that they need at exactly the right speed, based on each other's bandwidth and the health of the network itself. So at this point, the browser and the server have established a TCP connection, and in fact more than one connection might be open. In fact, maybe six. And this is because in the HTTP/1.1 world we're living in, the browser has a lot of stuff to do and not that much leeway. And the browser knows it has a lot to do, so it's going to try and plan ahead and open up these extra TCP connections and keep them prewarmed.

At this point, the browser is going to get all the information that it needs to build a web page by sending HTTP requests over the network to the server, and the server will send back the HTML document. As the browser parses the HTML document, it will likely come across references to other files, like CSS, JavaScript, images, fonts, and more, in those script tags, image tags, link tags, etc. And each of these files, in this HTTP/1.1 world, is going to need its own HTTP request to go and get it from the server. So the HTML is going to get parsed and turned into a structure called the DOM, or Document Object Model. The CSS is also going to get parsed and turned into a structure called the CSSOM, or CSS Object Model. And then by the DOM and the CSSOM's powers combined, we get a render tree, which is then used to compute the layout of each visible element, and serves as input to the paint process, which is what's going to paint the pixels to the screen. So I'll stop here. We're like halfway through the story, but I think we have good enough context to talk about a good chunk of render tricks. Of loading tricks, sorry.

Remember, the goal here at this point is to get the important stuff to our users as soon as possible. And this, remember, is all in service of user experience, which, yes, might in turn affect conversions, and money, and stuff, but user experience. The performance community has come up with a bunch of metrics by which to approximate this, because we all love numbers and measurable things and graphs, and these are what you can put on graphs.

So the first thing that your users are going to be looking for is some feedback that something happened, right? They navigate to your application, and they just want some feedback that the internet's working. So the milestones we're looking for first, are the time to first paint and the time to first contentful paint. The next thing that users will be looking for is that something useful is happening. They've gone to this website, or to this application, to do something, so they want to make sure that they will be able to at least get a glimpse of that something. So the metric for this is the time to first meaningful paint.

The next thing is that the user is going to want to probably interact with this website somehow, so the metric for this is how long it's going to take for the application to become usable, or the time to interactive. So in general, we want these numbers to be low; we want to optimize the critical rendering path. The general theme of this first section is to get the important packets to the user as soon as possible.

Keep Things Small

So again, to do this, we will keep things small. In general, we want to be sending as few packets as possible, because that means getting all the information to our users as soon as possible. Though, the total size of your website should not be the end-all be-all performance metric. So one thing that we can do in service of keeping things small is to minify your HTML, CSS, and JavaScript. Minification is the process of removing all of the unnecessary characters from your source code without changing its functionality. So between these three languages, HTML, CSS, and JavaScript, these unnecessary characters will include white space, new line characters, comments, and sometimes block delimiters. And these are generally used to add readability to the code for humans, but they're not required for it to execute. So while it's super important to make sure that all of these things, comments, white space, are there for when humans are trying to read your code and edit it and add to it and make it better, when it's time to pass your code off to a computer to read, the computer doesn't care about all your comments and your white space. So you're going to want to take it out in order to get all of that to the computer as fast as possible. And Uglify is a great tool for minification.

Here's an example of some unminified code; you see an array and a for loop. And here it is all squished up and minified. It's smaller. The snippet was small to begin with, but you can see the effect that even just removing all of the white space had.
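The slide itself isn't reproduced in the transcript, so here is a minimal sketch of that kind of before-and-after, with made-up variable names, roughly what a minifier like UglifyJS would produce:

```js
// Before: readable source with white space and a comment.
var fruits = ['apple', 'banana', 'cherry'];

// Log each fruit to the console.
for (var i = 0; i < fruits.length; i++) {
  console.log(fruits[i]);
}

// After: the same code, minified onto one line with the comment stripped.
// var fruits=["apple","banana","cherry"];for(var i=0;i<fruits.length;i++)console.log(fruits[i]);
```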

So another thing that you can do to keep things small is to compress your HTML, CSS, and JavaScript. Compression helps a lot in getting your page to the user as soon as possible. gzip is a really popular compression method for these web things. gzip works by replacing repeated substrings with references to where a decompressor can find that substring. And this is great for HTML, because, you know, think about how many times the string "div" appears in an HTML document. In fact, gzip generally reduces the response size by about 70%. There are even some newer, better compression methods coming up, such as Brotli and Zopfli. I don't really know that much about these, but I'm sure the internet does if you are curious.
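As a hedged illustration of where this happens in practice, here is one way to turn on gzip in a Node/Express server using the compression middleware; in production this is just as often handled by a reverse proxy or CDN instead:

```js
// Sketch of gzip at the application layer (assumes Express and the
// `compression` package are installed).
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression());             // negotiates gzip/deflate based on Accept-Encoding
app.use(express.static('public'));  // compressed HTML, CSS, and JS served from here

app.listen(3000);
```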

Optimizing images is also something that's really important. Images like JPEGs and PNGs can be huge files, especially for mobile devices, and even SVGs, which are gaining so much popularity in the frontend world, have extra information in them put there by the programs that create them, like Illustrator or Sketch. They also have a lot of white space in them, the same white space that we took out with minification, but minifiers don't touch these files, so image optimizers take it out instead. Running these images through an optimizer such as SVGO for SVGs, or TinyPNG, removes extra information like white space from SVGs, or extra data from your JPEGs and PNGs, that humans will never notice because our brains aren't that good at telling the difference. SVGs can and should also be gzipped.

You should also make sure that you are using the right image type for the style of your image. Vector graphics, like SVGs, are really good for simple geometric illustrations like line drawings, while raster images like JPEGs and PNGs are generally bigger files, but they're the best type for photographs.

And then some even more advanced image tips, if possible, you might want to try to use responsive images with srcset and sizes within the humble image tag, and then its even more Pokemon-volved sibling, the picture element. So here's an example of a souped up image tag. Miso is my parents' cat. So, see in this tag, there's some extra stuff that you might not recognize. So srcset is going to define the set of images that you want the browser to select between and how big each image is, and the sizes attribute defines the set of the media conditions, such as screen widths, that indicate what image size would be the best to choose, such as when certain media conditions are true.
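The souped-up image tag from the slides isn't in the transcript; this is a hedged reconstruction of what srcset and sizes look like, with hypothetical file names:

```html
<!-- The browser picks a candidate from srcset based on the sizes media
     conditions and the device's pixel density. File names are made up. -->
<img src="miso-800.jpg"
     srcset="miso-400.jpg 400w, miso-800.jpg 800w, miso-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 50vw"
     alt="Miso the cat">
```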

And then the even bigger version is the picture tag. Its source tags define the set of images, again, that we want the browser to select between and how big each image is, and the media attribute defines the set of media conditions, such as screen size, that indicate which image would be best to choose. The img tag also must be there as a fallback, or else nothing will show up. Then if you're feeling even more extra, you might want to check out the WebP and responsive JPEG file types. You can use these within the picture element with a more conventional fallback. I don't know much about WebP or responsive JPEGs, but I figured they were worth mentioning for you more advanced folks.
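Again, the slide isn't reproduced here, but a minimal picture element along the lines described looks like this, with hypothetical file names:

```html
<!-- Each <source> carries its own media condition; the <img> is the
     required fallback that also supplies the alt text. -->
<picture>
  <source media="(min-width: 1200px)" srcset="miso-wide.jpg">
  <source media="(min-width: 600px)" srcset="miso-medium.jpg">
  <img src="miso-small.jpg" alt="Miso the cat">
</picture>
```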

So caching is also really important, and it happens at many places within the web request-response lifecycle, including at the DNS level. But we're going to focus on how we can use it to continue to download as few things as possible. One thing you want to do is ensure that proper Expires, Cache-Control: max-age, and other HTTP cache headers have been set. In general, resources should be cacheable either for a very short amount of time, if they're likely to change, or indefinitely if they are static and not likely to change; you can just change the version in their URL, and you'll end up getting a fresh version of that file when you need it. In fact, Cache-Control: immutable is a header that's gaining popularity, and it was designed for fingerprinted static resources.
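As a hedged sketch of the two policies being described, these are the kinds of response headers involved; the exact max-age values are illustrative, not recommendations from the talk:

```
# Likely-to-change resource: cache briefly, then revalidate.
Cache-Control: max-age=300, must-revalidate

# Fingerprinted static resource (e.g. app.3f2a9c.js): cache for a year
# and tell the browser it will never change in place.
Cache-Control: max-age=31536000, immutable
```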

And caching is really important because if that file is already on your user's computer, it doesn't need to do a round trip, you're downloading fewer bytes, and everything will end up getting to your user a lot faster. You also want to carefully consider the impact of libraries and frameworks on both site size and dev productivity. You really don't want to be sacrificing dev productivity in service of performance if you don't have to. You might not need a framework, but you really might need a framework, and that's okay. But if you do, choose carefully, and consider at least the total cost of the size and the initial parse time before choosing an option.

On a smaller scale, a lot of libraries have split what was previously a monolith into discrete modules. Right now I'm thinking of Lodash, but there are probably a lot of others that have taken this route. So taking Lodash as an example, it's likely you're not using every single module in Lodash, those multitudes of really helpful functions; you might be only using a few. So import only what you need, and not the entire library. Also consider whether the browsers that your application is targeting have native versions of the functions you're pulling from Lodash, like map and its other functional friends. Also, as Laurie said in their talk earlier, 97% of the code in modern web applications comes from npm. And I'm really happy for them that npm is getting that much usage, but 97% of code that's not your own is a lot. So maybe you don't need to be using that much code that is not your own, maybe you do. Who knows? Just check it out.
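A minimal sketch of the difference, assuming Lodash and an ES-module setup; the native Array.prototype.map line shows the "maybe you don't need it at all" option:

```js
// Grabs the whole library, so the whole library can end up in your bundle:
// import _ from 'lodash';

// Grabs just the one module you actually use:
import map from 'lodash/map';

const doubled = map([1, 2, 3], (n) => n * 2);

// Or skip the dependency entirely if your target browsers have the native version:
const tripled = [1, 2, 3].map((n) => n * 3);
```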

Tree shaking is a good way to get rid of the code that you're not using programmatically, and it's going to reduce the size of the files that you're sending over the wire. Tree shaking is a method for dead code elimination, though Rich Harris, the developer of Rollup, which was the module bundler that really popularized this term within the community, prefers to call it "live code inclusion", because it's actually a little bit more descriptive. To do this, module bundlers like Webpack and Rollup are going to rely on the import and export statements in ES6 to detect if modules are being imported and exported for use within your JavaScript files. And then they're going to go through and figure out what's actually getting used and only include that in your bundle.
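Here is a tiny, hypothetical example of the import/export analysis being described; the bundler can see that only one of the two exports is ever used:

```js
// math.js — both functions are exported...
export function add(a, b) {
  return a + b;
}

export function subtract(a, b) {
  return a - b;
}

// app.js — ...but only add() is imported, so a tree-shaking bundler
// (Rollup, or Webpack in production mode) can drop subtract() from the bundle.
import { add } from './math.js';

console.log(add(2, 3));
```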

And in the font realm, fonts are really big. So, can you get away with using UI system fonts? Maybe not. Design is actually important for user experience, so having a good design matters, and, well, you probably won't make your designers happy if you say, "No, I'd rather just use Helvetica or Arial because it's on everyone's computer." But if you do need to keep using that font, chances are high that the web fonts you're using include glyphs and other characters that you aren't actually using, and font weights that you're not using. So if you're super fancy, you might be able to ask the type foundry to subset that web font, or you might be able to subset it yourself if you're using an open source font. For example, you might want to include only the Latin characters with some special accent glyphs in order to minimize your file size.
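One hedged way to do the self-serve version of this is an @font-face rule with unicode-range, assuming you've already generated a Latin-only file from an open source font; the font and file names here are hypothetical:

```css
/* The browser only downloads this file when the page actually uses
   characters in the declared ranges. */
@font-face {
  font-family: "My Web Font";
  src: url("my-web-font-latin-subset.woff2") format("woff2");
  unicode-range: U+0000-00FF, U+0131, U+0152-0153; /* basic Latin plus a few accented glyphs */
  font-weight: 400;
  font-style: normal;
}
```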

Another thing that you can do is put your files on a CDN, which means that they will be as close to your users in proximity as possible. And right now, we're talking not about keeping file size small, but about keeping round trip time small. So despite the speed of light being fast, it's actually a rate-limiting factor in getting packets from you, your servers, to them, your clients on their computers.

Keep Things Smart

That was a bunch of stuff about keeping things small, but what about keeping things smart? And by smart, I mean thinking about the order that you're loading things in. I probably should have said "unblocked," but that doesn't start with "sm," so here we are. One of the really popular methods of keeping things smart is combining your files together. This includes concatenating your JavaScript together and your CSS together, using a tool like Webpack, Rollup, or even the Asset Pipeline. However, this might come back to bite you with caching, because over time you're likely to change a file that's getting concatenated in, and that will invalidate the cache for the whole bundle. Because of this, some developers like having two bundles, one for their vendor assets, and one for the code that they're changing, their application code. So consider if that's something that would work for you.
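A minimal sketch of the two-bundle idea, assuming Webpack 4 or later; Rollup and the Asset Pipeline have their own equivalents:

```js
// webpack.config.js (partial) — pull everything from node_modules into a
// separate "vendors" bundle that rarely changes, so its cache survives
// your application-code deploys.
module.exports = {
  // ...entry, output, loaders...
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          chunks: 'all',
        },
      },
    },
  },
};
```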

Another thing that you can do is make CSS sprites or icon fonts, so there aren't so many small images, each with their own request. But also be aware that sprite sheets can get too large and actually backfire on you: you bundle a lot, a lot, a lot of really small images into one file that ends up being too big. But why are we smooshing all these files together, right? Remember how we opened six TCP connections in the beginning? Well, that is the limit on the number of parallel downloads that you can have from the same domain, and even that is a hack around limitations of HTTP/1.1. This is a huge bottleneck. So, many of the optimizations that we hear about today stem from trying to get around this parallel download limit.

So another thing that you can do here is serve your images and other files from multiple host names, or sub-domains, so you can get around this per-hostname parallel download limit. And this is called domain sharding. And now, one of the most classic performance techniques that you can do is putting your scripts at the bottom of the page, right before the closing body tag. Why is this? Well, the HTML parser is doing its parsing, and as it's going along, it's going to run into a JavaScript include. And at that point, further parsing of the HTML document is going to stop. And this is because JavaScript has the ability to manipulate the DOM using document.write(), and any HTML that gets added must then get fed back into the main parser. And that's a no-no. So to prevent this from happening, parsing of the document is going to halt until the script has been downloaded, if it isn't already on your user's computer, and executed. And while the script is downloading, the browser won't start any other downloads, even on different host names.

There are also more spec-approved ways of getting around this blocking. You could, maybe, add a defer attribute to the script tag, in which case it will not halt document parsing, and will execute only after the document is parsed. Similarly, you could add an async attribute to your script tag, which will cause the script to be downloaded asynchronously, in parallel, without blocking the parser. So with this, you're kind of promising that you won't call document.write(), or if you do, that you're okay with it getting thrown out the window, which is a dumb pun.
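For reference, the three flavors look like this; the file names are hypothetical:

```html
<!-- Classic include: parsing stops while this downloads and executes. -->
<script src="app.js"></script>

<!-- defer: downloads in parallel, executes in document order after parsing finishes. -->
<script src="app.js" defer></script>

<!-- async: downloads in parallel, executes as soon as it arrives, order not guaranteed. -->
<script src="analytics.js" async></script>
```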

So if you're using a bundler like Webpack or Rollup, you can also take advantage of code splitting, which is an advanced trick. This feature allows you to split your code up into chunks, which then get loaded asynchronously on demand. Therefore, not all of your CSS and JavaScript has to be downloaded, parsed, and compiled right away. You could even go a step further with this for really expensive scripts, and load them asynchronously only when they are needed, with a shiny new API called the Intersection Observer API. This provides a way to see if part of your page is within the viewport. So your user's scrolling along, and they get to that certain section that needs that really expensive JavaScript, which they might not actually ever get to. But if they do, then you can fire off the callback from the Intersection Observer and load in that script.
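A minimal sketch of combining the two ideas, with hypothetical module and element names; dynamic import() is what tells bundlers like Webpack or Rollup to split that module into its own chunk:

```js
// Load an expensive module only when its section scrolls into the viewport.
const section = document.querySelector('#chart-section'); // hypothetical element

const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      observer.disconnect();
      // Fetched, parsed, and compiled only now, on demand.
      import('./expensive-chart.js').then((chart) => chart.render(section));
    }
  });
});

observer.observe(section);
```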

On the other hand, style sheets are supposed to go at the top of your HTML document, in the head tag. Remember that CSS is required for building the render tree, because the CSS object model is half of the information for the render tree. So you're going to want to kick off the request for all that CSS information as soon as possible, so the download of the style sheet happens as soon as possible. Putting the style sheets near the bottom of the document prohibits progressive rendering in many browsers; these browsers are going to block rendering in order to avoid having to redraw elements on the page when they get new information about the CSS. So from a UX perspective, this is going to help avoid the flash of unstyled content. And if that's not enough of a reason, if you're just more of a rules kid, the HTML spec clearly specifies that CSS is supposed to go in the head tag. So there you go, ruling.

But also within the past few years, it's been getting more and more popular to inline your critical CSS into the head of the document, and experts say that this will get you top performance. Your critical CSS is the minimum blocking set of CSS that we need to make the page appear for the user. The idea here is that we should prioritize everything that will get the user that first screen of content, also called the above-the-fold content, and you want to do this within the first few TCP packets. By inlining CSS, you're cutting out the round trip that would otherwise be needed to go and grab the styles. But due to the limited size of the packets exchanged in an early TCP transaction, your critical CSS budget is very small, at around 16 kilobytes. So why don't people do this more often? It's kind of hard to figure out exactly what that critical CSS is, so unless you really need to get this juice, people tend not to do it.
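A minimal sketch of what the inlining looks like in the document head; the rules themselves are hypothetical placeholders for whatever your above-the-fold styles actually are:

```html
<head>
  <!-- Critical, above-the-fold rules inlined so they arrive with the HTML. -->
  <style>
    body { margin: 0; font-family: sans-serif; }
    .header { height: 60px; background: #611f69; }
  </style>
  <!-- Everything else still loads as a normal stylesheet. -->
  <link rel="stylesheet" href="main.css">
</head>
```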

So I've been talking a lot about this HTTP/1.1 thing, right? It seems to be a huge pain. Well, there's another version. It's been taking a little bit of time to take off. A talk on HTTP/2 could be a talk on its own, a whole 50 minutes, but here's a little HTTP/2 101. So, what is HTTP/2? It's a spec that grew out of Google's work on SPDY, which has since been deprecated, and all that work has been folded into the HTTP spec. And it serves to fix a bunch of these little pesky things with HTTP/1.1 that we've been needing to optimize around, right? The HTTP/1.1 spec came about before the internet was super fancy, so we didn't really expect to be doing all these things and sending a lot of files back and forth. So HTTP/2 is here to help us not have to think about all these tricks so much.

So yes, a lot of the things that I just talked about, especially with load performance, will become obsolete when HTTP/2 is implemented more across the board. Fancy features include request and response multiplexing. This is perhaps the biggest win of them all. You know how we had those six parallel downloads, six parallel TCP connections that we had to open, and that was kind of a hack? No longer, because HTTP/2 allows the client and server to break down the HTTP message into independent frames, interleave them, and then reassemble them on the other side, all in a single TCP connection. And this opens up the possibility for the browser to fire off multiple requests at the same time on the same connection, and receive the responses asynchronously. With this, we no longer have to concatenate our files together. So no more JavaScript and CSS bundles, no more sprite sheets, and no more domain sharding, because we're going to use that TCP connection really efficiently.

Another thing that HTTP/2 is going to do is header compression. So this means that we're not going to have repeated headers getting passed back and forth. So in older HTTP, the headers are always sent in plain text, adding upwards of 800 bytes of overhead, or even maybe a kilobyte if we're using HTTP cookies. And that's actually a lot of bytes. So in HTTP/2, the compression format is going to generally reduce the transfer size using Huffman encoding, and it's also going to require that the client and server maintain a list of already passed around headers, so that future transmissions can refer to that instead of sending the entire string over the wire.

Another thing is server push technologies. So the server is going to be able to send multiple files for a single request from the client. So you can do that request for the HTML, and the server will send all of the included JavaScript, CSS files along with it, because it knows that those requests would otherwise be coming. So this is going to get rid of the need to inline files in HTML documents, because it can just send them over with server push, and this is actually better, because it allows these accesses to be multiplexed, cached, and even declined by the browser.

Keep Things Smooth

We've gone through a bunch of loading tips and tricks, and now for keeping things smooth. So getting your page to load super quickly and getting all the critical information there is super important, but it's only half of the performance story. You also have to make sure that when your user is interacting with your page, that that experience is delightful. So let's pick back up with this, "how do websites work?" story. So you might remember when we last left our heroes, we were combining the DOM and the CSSOM trees together into the render tree. And at that point, the render tree contained information about what nodes are visible and the computed styles. But we haven't yet figured out where these elements are going to go, how big they're supposed to be, and this is all part of the layout process, or reflow.

Layout is a recursive process. It starts at the root renderer, which corresponds to the HTML tag in the HTML document. Then the browser is going to recursively traverse the render tree, calculating each node's position and size within the device's viewport. In this phase, all relative measurements are converted into exact pixel locations on the screen. Once we know the location and dimensions of all the elements in the render tree, we can then start representing everything as pixels on the page: the painting phase. In this stage, the render tree is traversed again recursively, and each renderer's paint method is called to display the content. This process is going to involve the browser making layers from the bottom up, in 12 phases, determining what color to paint each pixel. The paint process creates a bitmap for each layer, and then these bitmaps get handed to the GPU for the compositing phase, or, more technically, smooshing all of the layers together.

Repaints can occur at 60 frames per second, or every 16.66 milliseconds. Sixty hertz also happens to be the frame rate of the essential human visual field, or the flicker fusion rate of cones. I don't think that this is a coincidence. It very well may be, but that would be really surprising to me, you know? So maybe they did that on purpose. Just to recap, because I'm going to come back to this, getting the pixels to appear on the screen happens in three main phases: layout, painting, and compositing.

But what about JavaScript? Your renders are going to get constrained by the speed of your JavaScript. That 60 times per second thing is dream land; it's only going to happen under the most perfect of conditions. JavaScript is single-threaded, running on your browser's main thread, along with all of the style, layout, and painting calculations, all of that complicated stuff that we talked about. It's all going to get thrown in along with all of the complicated JavaScript stuff that you're doing. So everything that gets called in JavaScript is going to go onto its call stack. Synchronous calls go right on, and async callbacks, like those from web APIs, get thrown onto a callback queue, and are then able to be moved onto the stack with those synchronous calls by the event loop, once that stack is cleared.

There's also a render queue that tries to get its renders done those 60 times per second. But renders can't happen if there's anything at all on the JavaScript call stack. And, bonus content, that's why nothing can render if you're in an infinite loop, because that call stays on the call stack, preventing those renders from getting initiated.

I hope that that detour into how the internet works will help put the rest of the things I have to say into perspective. Now, back to your regularly scheduled performance tips. So if you're effecting visual changes via JavaScript, you're going to want to try and do that at exactly the right time. Wrapping your updating function in something called window.requestAnimationFrame tells the browser that you wish to perform an animation, and requests that the browser call your updating function at just the right time, right before the next repaint. And this particularly applies to scroll handlers, window resize handlers, things like that, things that are going to get fired a lot as your user is doing them. So wrapping them in requestAnimationFrame is going to debounce them until the next time there will be a paint.
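A minimal sketch of that requestAnimationFrame debounce; the work inside the frame callback is a stand-in for whatever visual update you actually need to do:

```js
// Coalesce a noisy scroll event into at most one update per frame.
let frameQueued = false;

function onScroll() {
  if (frameQueued) return;          // an update is already scheduled for the next paint
  frameQueued = true;
  window.requestAnimationFrame(() => {
    // Hypothetical visual work, e.g. exposing the scroll position to CSS.
    document.body.style.setProperty('--scroll-y', window.scrollY + 'px');
    frameQueued = false;
  });
}

window.addEventListener('scroll', onScroll, { passive: true });
```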

Here's an example actually from my website. I thought it'd be really cute to programmatically draw SVG triangles on the bottom of my page. And then someone might make their browser window a little wider and fill in that gap. And then I asked my friend to do a code review for me, and of course, the first thing that he does is move the window, bigger and smaller really, really quickly, and he was like, "Jenna, this is all janky." And the way to solve it was just to wrap the drawTriangles method in window.requestAnimationFrame. That little bit of code fixed that problem, so it's super cool.

Cognitive science time. Under normally scheduled circumstances, our minds take those choppy, discrete inputs, you know, those 60 frames per second, and create a seamless experience of movement that we perceive, in a phenomenon called apparent motion. But if the timing of your updating is off, if you're moving something along and you drop a frame, so something jumps from here to here, you won't see that as motion. And if you drop frames, you'll see it as jumpy instead of continuous. And, sidetrack, we think this happens because we evolved in the natural world, so we expect things to be moving through the world according to the laws of physics, not watching things get updated on a screen. So we have evolved to resolve those inconsistencies and see it as motion, instead of something just moving around on a screen. Which I think is pretty cool, that that's how we see our screens updating at 60 hertz.

Back to your computer. If you're doing computationally intensive stuff with JavaScript that isn't pushing pixels around, chances are that you can move it to a web worker. So, web workers are a way for you to run JavaScript in background threads. And web workers are pretty versatile, but they do have some limitations, like they don't have access to the DOM. So you can't do DOM manipulation with them. They also don't have access to some of the things that live on the window object, but you do have access to things like cache, or web sockets, or XMLHttpRequest and fetch, and even WebGL.
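A minimal sketch of handing work to a worker and getting the result back; the file names and the sum computation are stand-ins for whatever heavy lifting you need to move off the main thread:

```js
// main.js — spin up a background thread and post it some data.
const worker = new Worker('worker.js');

worker.postMessage({ numbers: [1, 2, 3, 4, 5] });
worker.onmessage = (event) => {
  console.log('result from worker:', event.data);
};

// worker.js — no DOM access in here, just computation.
onmessage = (event) => {
  const sum = event.data.numbers.reduce((total, n) => total + n, 0);
  postMessage(sum);
};
```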

Why would you want to do this? As I mentioned, any JavaScript computations on the browser's main thread have to complete within that 16 millisecond window to ensure an unblocked next paint. So moving computations that might take longer off the main thread, so they aren't gumming up the crucial call stack, can help you have a fighting chance of getting all of your update calls to happen within 16 milliseconds.

CSS things. If you need to move something or make it disappear, you might want to consider using transform or opacity as opposed to position and display. And this all goes back to those three stages that we had: layout, painting, and compositing. The properties I suggested can be handled by the compositor alone. Because you don't have to do the layout and the painting, just the compositing, it's going to happen faster and smoother; layout and painting are really expensive operations, so you don't want to do them if you don't have to. But the caveat here is that these elements, in order to take advantage of this magic, have to be in their own compositor layer. And you can do this really easily by adding will-change: transform to a moving element's list of properties. But you're going to want to use this magic sparingly, because there's a memory hit from maintaining each layer, and this is particularly pricey on devices with limited memory, such as mobile devices.
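A minimal sketch of the compositor-friendly approach, with hypothetical class names:

```css
/* Promote the element to its own layer ahead of time (use sparingly:
   every layer costs memory), and animate only transform and opacity. */
.panel {
  will-change: transform;
  transition: transform 200ms ease-out, opacity 200ms ease-out;
}

/* Slide it off screen and fade it, instead of animating left/top or toggling display. */
.panel--hidden {
  transform: translateX(-100%);
  opacity: 0;
}
```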

In addition, reducing CSS selector complexity is one more thing that you can do, and using a CSS methodology is a great way to help control selector nesting. The deeper the styles go, the longer it takes for the browser to figure out whether an entry in your CSS matches something in the DOM. So, many of the popular methodologies rely on giving an element that needs to be styled a single class based on strict naming rules, so it's easy for the browser to figure out if a rule should be applied or not. Does it have that class? Yes. Okay, apply the CSS. I personally have become partial to BEM, block element modifier, because that's something I've used at work. But there are also a bunch of other methodologies, like SMACSS, SUIT, and object-oriented CSS.

Here's an example of something really complicated. I could not tell you off the top of my head what that is actually asking for, so I'll read it off my speaker notes. Is this an element with the class of title which has a parent that happens to be the negative-nth-child-plus-one element with a class of box? Complicated for humans, and time-consuming for browsers: it's so time-consuming for the browser because it has to gather a lot of information about the element and its siblings in order to figure out if it matches. This, on the other hand, is a lot easier for humans, because you also want to consider the impact of your choices on the people you're writing code with. It's also a lot simpler for browsers.
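The slide isn't in the transcript; this is a hedged reconstruction of the kind of selector being described, next to the single-class, BEM-style alternative:

```css
/* Complicated: to decide whether this matches, the browser has to look
   at the element's ancestry and that parent's siblings. */
.box:nth-last-child(-n+1) .title {
  font-weight: 700;
}

/* Simpler: one class, one cheap membership check. */
.final-box__title {
  font-weight: 700;
}
```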

Always Measure First

I've talked about a lot of tips and tricks, but like in all things, premature optimization is not good. So you want to make sure that you actually need to optimize, that you actually need to use these performance tricks. Check and see if you actually have a performance bottleneck. Assess your performance. Again, as I've said a bunch of times, assessing performance could be a talk in and of itself. There are so many tools out there, and I'm sure there are talks and other resources already on the internet that you should really go check out. But here's a super minified version of what exists just in Chrome for you to assess performance.

There are two general camps of performance assessment. One is synthetic measurement: using a controlled environment to see, "hey, do I have a problem?" This is really good for when you're building something and you realize it takes two seconds to load, and you want to go and check just on your computer, "hey, I'm doing some debugging, why is this such a problem?" So tools like the Network, Performance, and Memory panels in your browser, as well as Chrome's Lighthouse in the Audits tab, are really great for synthetic testing just on your laptop, or on your machine.

But this is only half of the story because it's, again, a controlled environment. Unless you throttle your network connection or use any of the other cool things that you can do just in Chrome, you're giving your code the best fighting chance it will ever have, under perfect conditions against your dev server. So, again, synthetic testing is only half of the story, because it's a controlled environment. But there are some pretty cool tools in Chrome and Firefox for doing that initial performance assessment.

Then there are real user measurements: how is your code doing in the wild? This is for when you get a lot of tickets like, "This is slow," and you might want to add in some real user testing markers, for which the Resource Timing and Navigation Timing APIs are really great friends. The Navigation Timing API collects metrics for the document as a whole, while Resource Timing collects them for individual resources, like your CSS files, or JavaScript files, or images. And these amazing APIs live right on the window.performance object in your browser. Because they exist in most modern browsers, they can be used to gather metrics from users in the wild. So just collect them and send them back home for aggregation, and then you might be able to get some pretty cool graphs that will say, "hey, yes, we actually do have a performance issue out there in the wild."
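A minimal sketch of collecting a couple of these numbers and sending them home; the /metrics endpoint is hypothetical:

```js
// Real-user measurement with the Navigation Timing and Resource Timing APIs.
const [nav] = performance.getEntriesByType('navigation');
const resources = performance.getEntriesByType('resource');

const metrics = {
  // How long until the document itself was completely loaded.
  domComplete: nav ? nav.domComplete : null,
  // Per-resource durations for CSS, JS, images, fonts, and so on.
  resources: resources.map((r) => ({ name: r.name, duration: r.duration })),
};

// Ship it back for aggregation (hypothetical endpoint).
navigator.sendBeacon('/metrics', JSON.stringify(metrics));
```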

So, again, really, measure first. This is, again, only the tip of an iceberg, just what you can do in your own browser and in your users' browsers. So take a step back, and see what your app and your users need. Things like code splitting and lazy loading are advanced and time-consuming, and you and your users might not need them; implementing them might take time away unnecessarily, time you could use to do other more interesting or important things.

To recap, do these three things to have great performance: keep things small, keep things smart, and keep things smooth. If you are interested in learning more, check out my slides online. These are actually links and here are all of the songs that I butchered. There's a lot. You can find my slides at that link, tweet things at me there.

 


 

Recorded at:

Jan 22, 2019
