
The Latest in the World of Web Engineering (Featuring AI)


Summary

Tejas Kumar gives an overview of web engineering in relation to AI: AI engineering, intelligent answer engines, updates on CSS, HTML, and JavaScript, and personal health and productivity.

Bio

Tejas Kumar is an international keynote speaker with an engineering background spanning 23 years, from design to front-end to back-end to DevOps. Today, Tejas speaks to developer communities worldwide, equipping them to do their best work.

About the conference

Software is changing the world. QCon London empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Kumar: My name is Tejas. I've been building on the web for over 20 years at various places, either as a consultant or as an employee. I've had the privilege of working at some really great places and helping move the needle toward great engineering. What I'm here to talk to you about is the web in 2024. What is the state of the web? It's a lay of the land. The plan is to stay at the top contour: we probably won't go too deep, but we will go broad, to orient you in where we are today.

What is in the zeitgeist? What are people talking about? Why are they talking about it? The goal is not to tell you to go use this or go use that, but to give you enough detail that you don't feel like an imposter when you hear a conversation about HTMX, or whether you should use Astro. You'll know enough to engage confidently, as opposed to, like Willian was saying, feeling that everything is different now. That's the goal. If you leave here with at least one thing you maybe didn't know or haven't had the time to explore, I'll consider that a win.

AI

We can't get started without talking about AI. Marc Andreessen from Andreessen Horowitz, the investment firm, famously said years ago that software is eating the world. Indeed, software has eaten the world. We don't have clothing stores anymore; we have online places that sell clothes. We don't have travel agencies anymore; we have websites that sell tickets. This is the state of today, and AI is going in the same direction. AI is starting to eat the world. If we don't agree on that, we're really not seeing reality the same way. Almost everybody is adding chatbots; that's a cheap use of AI. It's important for me to tell you this: AI is accessible to everybody, and most of us may already be AI engineers, we just may not know it.

I speak to a ton of people, and I ask them: do you consider yourself an AI engineer? Do you feel like you can play in this space? Do you feel like you can contribute, that you can work with or in AI? I would ask you the same here: if you feel like that, can you show me? This is what we're here to fix, because when I ask people whether they could be AI engineers, the response is no: I feel like I need a background in lambda calculus; I feel like I need to understand machine learning, backpropagation, and hidden layers; I don't write Python. But that's machine learning, not AI engineering; these are fundamentally different things. I'm here to tell you that you may already be an AI engineer and not even know it. The reason I tell you this is that AI engineering, machine learning engineering, and data science are all very different things.

What is AI Engineering?

That raises the question: what is AI engineering? Is this new? Indeed, it is. OpenAI, the company behind ChatGPT, posted the first job listing ever with the AI engineer title. It's a very new field. What does it mean? To answer this, let's look not at my opinion, because I'm not really that big of a deal, but at the take of someone who is: Andrej Karpathy. Andrej is the former director of AI at Tesla and previously worked at OpenAI, the company behind ChatGPT. The dude is just an absolute unit. If anyone knows AI and machine learning, all of it, it's Andrej Karpathy. He's got hours of free workshop material on YouTube if you want to learn. What does he have to say? This is in response to an article posted by my friend Shawn Wang, Swyx, host of the Latent Space: AI Engineer podcast, about the rise of the AI engineer role.

Karpathy responded, "I think this is mostly right," and gave a huge list of points about why the article about the rise of the AI engineer is right. I want you to zero in on one specific thing; let me highlight it for you. He says, "There's probably going to be significantly more AI engineers than there are ML engineers". He's making that distinction, and he's also distinguishing ML engineers from LLM engineers, large language model engineers. He says, "One can be quite successful in the role of AI engineering without ever training anything". That makes the distinction: AI engineering, machine learning engineering, and even large language model engineering are fundamentally different. If they're different, then what is AI engineering? AI engineering is the application of engineering, that is, problem solving. If we go one layer deep, what is engineering? Engineering is the application of technology to solve problems, period.

What is technology? Technology is the application of knowledge. So we have knowledge, then technology, then engineering, all to solve problems. It's just problem solving. What is AI engineering, in this case? It's problem solving with AI. Some of you may be web developers. Some of you may be JavaScript engineers; you write browser JavaScript. You may know the browser API called Fetch: make a network request, POST, GET, PUT, whatever. AI engineering can be reduced to a Fetch call to OpenAI's API with a JSON payload; you get a response, and you use that response to solve a problem. This is the working definition of AI engineer. Let me ask you again: are you an AI engineer? I want you to really take this home, because there's a ton of money in this, with salaries going up to like $600,000 a year. I want you to be able to play in that field. I want to make this accessible to you. It's not about money; it's about access to information. This is the working definition of AI engineering, not just mine: Andrej, Shawn Wang, Swyx, all of us in the space, this is what we're going off of.
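To make that concrete, here's a minimal sketch of that Fetch call, assuming a Node-style environment with an OPENAI_API_KEY variable; the payload shape follows OpenAI's chat completions API, but treat this as an illustration, not production code.

```typescript
// A minimal sketch: AI engineering reduced to a Fetch call.
// Assumes an OPENAI_API_KEY environment variable is set.
async function ask(question: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4-turbo", // or whichever model fits your budget
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // use this to solve your problem
}
```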

Demo (AI Engineering in the Modern Age)

Let me show you a demo of AI engineering in the modern age. Before I do, I want to preface: ChatGPT is not fundamentally an AI innovation. This is a big mistake people make. GPT-3.5, the model behind the first ChatGPT that came out in 2022, was already around, and nobody cared, because there was no user interface to it. Then OpenAI said, "We believe in GPT-3.5; how do we tell people?" and invested in a React app with a chat UI that talks to the model over HTTP. ChatGPT is fundamentally a user interface, a user experience innovation, not an AI innovation. Sometimes we need web engineering, frontend engineering, to facilitate the storytelling. That went on to be revolutionary. That is AI engineering. It's a UX innovation. I want to be very clear on that.

Let's look at this demo. What are we going to do? Let me just show you my website, which is built with AI as a first-class citizen. I have a podcast, and it goes very deep: each episode is like an hour and a half, an hour 28 minutes. It's very long. A lot of people go, "I don't have that kind of time. Did you break it up into chapters? I just want one answer". How do we do it? With AI engineering, we can. There's a new Core Web Vital on the web. We had First Input Delay, FID, as a Core Web Vital.

This is an indicator of how long it takes for your website to respond to the first input. It's being replaced with a new Web Vital called INP, Interaction to Next Paint: after you interact, how long does your page take to paint the next state of the UI? FID is going away in favor of INP. We discuss this in the podcast. Maybe someone just wants to know about INP, so they come over here, and we have this beautiful search where you can just ask: what is INP? What this is going to do, across every podcast episode, multiple hours of audio, is find where we talked about this. It says, we discussed INP with Lazar from Sentry, and INP stands for Interaction to Next Paint. It teaches you right here, and it keeps going. If you then want to learn more, you tap on this, and it takes you to the episode.

"INP is new. What is INP? Do you remember what it stands for? I forget".

Nikolov: "Yes, it's Interaction to Next Paint".

Retrieval Augmented Generation (RAG), and Reinforcement Learning with Human Feedback (RLHF)

Kumar: Then you just go from there. This is AI engineering right here: I didn't train a model, didn't do any data science. So how does it work? I want you to take this with you so you can think about applying AI to solve problems, aka AI engineering. This is interesting, because all we're using is web technology. Let me show you how this is built. The goal here is education. We use a number of techniques. One technique is called Retrieval Augmented Generation. Anyone heard of this, RAG? It's a great technique. How it works is, you retrieve a bunch of data. In the case of the podcast, I have a huge database of everything ever said on my podcast, hundreds of thousands of rows of data. What I do is a similarity search. Someone inputs a query: what is INP? We turn that into numbers; these are called embeddings. We turn text into numbers. Then we go to my database; it's a vector database called Xata.

There are a number of them you can use. We go to the vector database and say: given these numbers from the input query, find results that are close to them in your vector space. It comes back with, OK, the question is about INP; here are eight results from your podcast about INP. We take those results and give them as a system prompt to OpenAI's GPT-4 Turbo API. You just tell it: you are a podcast curator, here are 10 results, now generate new text based on this context. We augment the generated output with data we retrieve: Retrieval Augmented Generation. That's how it's built. I can see the dots connecting for some of you. You're like, I could do that: do a search, get results, say, ChatGPT, here's a bunch of stuff, now say it nicely. That's all we're doing. This is AI engineering. I'm being reductive on purpose because I want it to be accessible. I want you to feel empowered.
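Here's a minimal sketch of that RAG pipeline. The embed, vectorSearch, and chatCompletion helpers are hypothetical stand-ins for your embedding API, your vector database (Xata, pgvector, whatever), and the Fetch call from earlier:

```typescript
// Hypothetical helpers standing in for an embedding API, a vector database,
// and a chat-completion wrapper around the Fetch call shown earlier.
declare function embed(text: string): Promise<number[]>;
declare function vectorSearch(v: number[], opts: { topK: number }): Promise<{ text: string }[]>;
declare function chatCompletion(args: { system: string; user: string }): Promise<string>;

async function answerFromPodcast(query: string): Promise<string> {
  // 1. Retrieve: embed the query and find nearby rows in vector space.
  const queryEmbedding = await embed(query);
  const results = await vectorSearch(queryEmbedding, { topK: 8 });

  // 2. Augment: hand the retrieved snippets to the model as a system prompt.
  const system =
    "You are a podcast curator. Answer using only this context:\n" +
    results.map((r) => r.text).join("\n---\n");

  // 3. Generate: new text, grounded in the retrieved data.
  return chatCompletion({ system, user: query });
}
```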

Second is RLHF, reinforcement learning with human feedback. Some of you may have seen the thumbs up and thumbs down on ChatGPT. What you're doing is telling ChatGPT: this is good, that is bad. OpenAI can then use that data to reinforce the responses; in fact, that's what they do in GPT-3.5 and GPT-4, so the quality gets better. How this works is, you take a bunch of prompt/response pairs with a grade (this one was good, this one was bad), then you give the model new prompts, get the responses, and you basically have a loss function: of the new generations, which ones are good? You reduce the loss, the rate of bad responses, over time.
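The web-engineering half of RLHF, collecting those graded pairs from a thumbs up/thumbs down UI, can be sketched like this; the storage is hypothetical, and the actual reinforcement step happens later, on the model side:

```typescript
// Graded prompt/response pairs, as collected from a thumbs up/down UI.
interface Feedback {
  prompt: string;
  response: string;
  grade: "good" | "bad";
}

const graded: Feedback[] = []; // in practice: a database, not an array

function recordFeedback(fb: Feedback): void {
  graded.push(fb); // each click on thumbs up/down lands here
}

// The "loss" described above: the rate of bad responses, to be driven down.
function badResponseRate(): number {
  if (graded.length === 0) return 0;
  return graded.filter((f) => f.grade === "bad").length / graded.length;
}
```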

Answer Engines

There's a trend on the web today away from search engines. Google is struggling, really struggling. Google has a new large language model called Gemini, and their Hail Mary this time is to deeply integrate Gemini with all of Google Workspace. Very soon, if you use Google Workspace, that is, Google Calendar, Google Docs, Gmail, you're going to see Gemini deeply integrated. What that means is, you're just sitting, chilling at dinner, and this AI assistant from Google says: just so you know, you're meeting your friend in two weeks, and you don't have a location in your calendar event. I noticed it's been three weeks since you've had Indian food. Maybe go to the Indian place; here's a reservation. Do you want to approve it or not? This deep integration of Gemini is how Google gets back some market share, because they're losing. Google is dying in the AI race; they're too big to move fast enough. What's taking their place is answer engines. Anyone using an answer engine instead of a search engine? Perplexity is what's killing Google these days. Great tech company. Indian CEO. I can show you a little demo here.

Perplexity AI: what do we want to search for? How do I do RAG? It's going to go to Perplexity and ask, and what it's going to do is search the web and get a bunch of search results. It retrieves results from a web search and uses that retrieved material to augment the generation from a model you can choose. It's doing RAG; it's RAG as a business. You could build this yourself; I just want you to know that. It crawls the web, gets results, and gives them to you. How do I do RAG? There we go. The cool thing about this is it can search inside YouTube. They've got tons of venture capital to build this really well. There you go. I think it misunderstood me; it's confidently wrong. The cool thing is you can stop it and ask a follow-up: no, I mean Retrieval Augmented Generation.

I want to show you something here, because you can do this conversational style and ask follow-ups, and it failed again. It will give you follow-up ideas. Here's the thing I wanted to show you: you can choose the model you generate with. This is really cool, because it's like a hack. Claude 3 is blocked in the European Union, but through Perplexity, you can actually access this model, which is the most comprehensive model on Earth. Some say this model is actually sentient.

Is AI Going to Take Your Job?

Is AI going to take your job? I feel like this is what you want to know. It's not clickbait; it's a real question. Is it? We just saw it be confidently wrong, but it's worth considering. The answer, obviously, is: maybe. Nobody knows. There's been innovation in something called Devin, but it's too expensive to take anyone's job right now. Through techniques like RAG, I think it is going to take some jobs, and if we don't acknowledge that, we're being delusional to the point where we're going to miss out. I'll tell you why, with an actual real-world case study. Anyone use Prisma, frontend web people? Drizzle, or some type of ORM? With Prisma, you need to learn a DSL, a domain-specific language: you write a Prisma schema in the Prisma schema language. There's a learning curve, because it's not TypeScript, it's not JSON, it's not TOML (Tom's Obvious, Minimal Language). It's a new language.

Some of us had to learn Prisma and thought, why do I have to learn this? It's a barrier. What if we didn't have to? What if AI lowered the barrier to entry so much that you didn't even need an employee or a team? This is what we're seeing with some RAG-based tools. Let me show you one: there's a similar company that requires a domain-specific language, but from that schema, it generates and provisions full backend infrastructure.

We're talking database, authentication layer, monitoring, from HTTP all the way through the database, distributed tracing, three APIs: REST, JSON, GraphQL, whatever you want. It handles all your backend concerns, so you just build a UI around it and go. From a single schema, it generates all of this for you, deployed, APIs and everything. In theory, that means if I'm an early startup and my costs are low, I don't hire people for this; I use this instead. However, people still have to learn the schema language, and there's work being done to remove that barrier.

This scared me, and I wanted to show it to you. The company is called Keel; they're actually based in the UK. Keel: a single schema generates a full backend. There's this thing called KeelGPT, which I made because I was curious. You say, write me a backend for a to-do app. This uses RAG on all of their documentation, plus reinforcement learning with not just human feedback but machine feedback, and it generates a full schema for me. There we go.

For a basic backend to-do application, it just knows a schema language that almost no one on Earth knows, because Keel has been in stealth. It's explaining the schema design, and at some point, it gives you a full schema. Here we go: it's writing a language that nobody knows. This is going to generate a database, an API, an auth layer, distributed tracing. What is the role of a founding backend engineer in an early startup with low costs, then? Non-existent. I think it will take some jobs. Here, it wrote this thing in a language we don't know, and if you git push it, it will be deployed on AWS and Neon, and it will be functional.

However, there's one final step, and you can see it here. It's aware that it can hallucinate, so it says, let me double-check by validating this with an external service. It runs a validator that will tell you whether the schema is accurate. If it is, it says, great, you can now commit this to a GitHub repo, deploy, and you've got your backend. We give it the rights to talk to that service, and it validates itself. If there's an error, it self-corrects until the schema is valid. Absolutely insane. Let's see what happens here. Talk to it, and now: are you valid? There you go. This will become backend infrastructure. Absolutely wild. The job of the backend engineer who was going to do all this work is going away; I think some of us really need to wake up to that. I hope I've made the case for AI in the space today. I think it's valuable, because I want you all to identify as AI engineers, contribute, and play in the space.

The Biggest Technical Problems in AI Engineering Today

What are some of the biggest problems in AI engineering today? Really, I want to reframe this as: where can you get involved? All of you are engineers, and we solve problems together. If you wanted to get involved, where? There are really two places. One is reducing cost. This stuff is expensive. My website, the one I showed you with the podcast search, runs GPT-4 Turbo, which costs about €30 per roughly 750,000 words. That's a lot of money, and it's used pretty heavily; people want to learn a lot of things. I can't spend €100 per day on my website without struggling a bit.

Cost reduction is key, and there are already techniques for it. One is semantic caching: you get a prompt, and if it's close enough to a previous prompt, you serve the cached response, using vector search to measure closeness. That's already a thing. With some fairly advanced cost reduction like this, my models run at about 7 cents per day at this point in time, which is pretty cool.
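A minimal sketch of semantic caching, with a hypothetical embed helper and an invented similarity threshold; the point is just that prompts close enough in vector space can share one paid response:

```typescript
// Hypothetical embedding helper; any embedding API works here.
declare function embed(text: string): Promise<number[]>;

interface CacheEntry {
  embedding: number[];
  response: string;
}
const cache: CacheEntry[] = [];

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function cachedAnswer(prompt: string, generate: () => Promise<string>): Promise<string> {
  const e = await embed(prompt);
  const hit = cache.find((c) => cosine(c.embedding, e) > 0.95); // threshold invented
  if (hit) return hit.response; // close enough: serve cached, pay nothing
  const response = await generate(); // only genuinely new prompts hit the model
  cache.push({ embedding: e, response });
  return response;
}
```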

Another cost reduction technique is fine-tuning a cheaper model on GPT-4's responses. There's a model called Mistral 7B; it's the cheapest well-known model today that is also open source. I have a lot of people asking questions, and I get responses from GPT-4. I take those and fine-tune Mistral 7B on them, and sometimes I serve the cheaper model's response, and you just don't know, because they're so close. That's something we do. The second problem where you can get involved is memory. Every large language model, every machine learning algorithm since the beginning of time, is limited by memory. I don't mean RAM; I mean the size of the context window. You can't have a ton of context. You can't have a person's entire life as context, and that would be so cool. Imagine having Jarvis: you would need my entire life in context for it to know me intimately. There's work being done; we have some very large context windows, and there are more strategies at work today in solving this problem. One example is, instead of vector databases, specialized memory databases for large language models. I'd encourage you to look there, but it's very young, and there's really not a lot of data at this point in time. That's been the AI piece.

JavaScript

Let's explore the state of JavaScript on the modern web today. Some stuff about JavaScript. We have Object.groupBy built into the platform now. This is incredible. You don't need Lodash. You don't need to ship extra stuff to your users, because they get this for free. Object.groupBy is very similar to _.groupBy from Lodash, except the API is slightly different, but it's all documented on MDN. It's supported in every major browser, and Node.js from version, I believe, 22 onwards. Use it. The big problem is we import a lot of stuff. We ship a lot of code that nobody needs. This is part of the platform. It's excellent.
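A quick illustration, with invented episode data:

```typescript
// Object.groupBy is built into the platform (ES2024); no Lodash required.
const episodes = [
  { title: "INP deep dive", topic: "performance" },
  { title: "Signals explained", topic: "reactivity" },
  { title: "Core Web Vitals", topic: "performance" },
];

const byTopic = Object.groupBy(episodes, (e) => e.topic);
// byTopic.performance -> two episodes, byTopic.reactivity -> one
```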

Number two, the Array prototype gets three new methods: toSorted, toSpliced, and toReversed. You might be going: we already had Array.sort, we already had Array.splice, what? You might be one of those people going, who cares, I can already sort. There are benefits to the new methods. What are they? They're functional: they don't change the input array. They're immutable. Why does immutability matter? This comes down to software design principles. The reason we have toSorted, and the reason the people behind JavaScript felt so strongly about this, is that it's very easy to shoot yourself in the foot with Array.sort, because a lot of people don't know that it changes the input array.

You have an input array, you call .sort, and you reference it somewhere else, say index three, but index three has moved, and now you have bugs in your software. toSorted prevents that by giving you a new array and leaving the original alone: no inadvertent changes. Second, it encourages you to think in terms of purity. What is purity? A pure function does nothing else: for a given input, it has a given output, and that's it, no side effects. When we mutate an array, we're performing a side effect that may go unnoticed as you move through larger code bases.

Number three, it really depends on the scope of your project. Array.toSorted is slower than Array.sort. It has to be, because anything functional and immutable is slower: you need to copy the original array to a new memory address, and all of this costs resources. You don't always have to use toSorted; you don't always have to write functional, pure code. It is slower, so if you're doing work on embedded systems, maybe use mutation; it's fine. You just now have the choice of both, as the sketch below shows.
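Here's the difference in a few lines:

```typescript
const scores = [3, 1, 2];

const copy = scores.toSorted(); // [1, 2, 3]: a brand-new array
console.log(scores);            // [3, 1, 2]: the original is untouched

scores.sort();                  // sorts in place
console.log(scores);            // [1, 2, 3]: the original changed underneath you
```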

This would not be a lay of the JavaScript land if we didn't talk about frameworks, and really make the distinction between a framework and a library. Some people say React is a library, not a framework; others say React is a framework, not a library. We're not here to get into debates. I want to talk to you about frameworks, and I want to tell you this. This is something I posted on social media that became very popular.

It is my informed professional opinion that Astro is the single best tool for building high-quality websites and web apps at this point in time, period. Nothing else in the web engineering space, specifically frontend, comes close. Fred Schott is the author of Astro; it's just an absolutely brilliant project. I would be remiss if I didn't tell you why. Anyone using Astro in production? Anyone using React in production? React is probably the worst choice for performant websites and apps in 2024. It's the slowest.

Everybody knows this, because its reactivity model relies on recursively re-calling your component functions, and as we just saw with Array.toSorted, functional, pure code tends to be slower. Also, if you have a high-level parent component with a number of expensive children, and the parent component's state changes, then in React every single child from there down the tree is recomputed unless you explicitly wrap it in memo. It's bad. You shouldn't have to think about memo. That's not just my opinion; even the React core team is working on a compiler that automatically adds memo for you behind the scenes. That's how aware everyone is of this issue. React is the slowest library in terms of application code, the code you will write.

The authors of React have done a great job of making it as fast as it can be. So let's go one level up: what's the faster way? There are libraries like SolidJS. Qwik is actually the fastest. Svelte, Angular: they are all faster than React because of their reactivity model, which is fine-grained, as opposed to React's coarser model. What does fine-grained mean? They all use a reactive primitive called a signal for reactive updates.

A signal is very cool, because when you read a signal, you implicitly subscribe your little portion of DOM, not even your component, to that signal. When the signal updates later, when you call the setter, just that little piece of the DOM changes. Your function components, in Solid, Angular, whatever, are never called again. There is no re-rendering in these frameworks, just tiny, incremental, fine-grained updates. That's why they're all faster. However, these are all libraries.
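Here's what that looks like in SolidJS: reading count() in the JSX subscribes just that text node, and setCount updates only it. The Counter function runs once and is never re-run:

```tsx
import { createSignal } from "solid-js";

function Counter() {
  const [count, setCount] = createSignal(0);
  // This function body runs exactly once; only the text node
  // that reads count() updates when the signal changes.
  return (
    <button onClick={() => setCount(count() + 1)}>
      Clicked {count()} times
    </button>
  );
}
```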

What is Astro's deal? Astro is not a library, it's a framework. What's the difference? A framework gives you a frame within which you can work; a library exports functions. Why is React a library? React doesn't really care about your directory structure. React doesn't care about your routing. React doesn't care whether you're server rendering or client rendering. It doesn't care. It exports a function called createElement; every time you have a little angle bracket, like an HTML element in JSX, that calls React.createElement. React exports useEffect, useState. It doesn't have opinions. It doesn't give you a frame for working. Therefore, React is a library.

A framework says: you have this folder structure, use server rendering. It has opinions. It's a frame within which you can work. Next.js: framework. React: library. SolidStart: framework. Solid: library. Angular, for sure: framework. Astro is a framework. It says, structure your code like this; a component looks like this. As a framework, Astro has the concept of reactive islands. Astro is fully static by default. It doesn't have signals, it doesn't have client-side reactivity, it has absolutely nothing; it just has support for HTML. When you want to make something interactive, you run astro add react, astro add svelte, astro add solid, astro add whatever.

Then it does all the work for you to make a little portion of your application a React component. With Astro, your entire site can be static, but your little contact form is a React component, and another piece, say your little burger menu, could be an Angular component. You can mix different UI libraries if you want, and have just tiny reactive portions, as the sketch below shows.
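A minimal sketch of an Astro page with islands; the component names are invented, and here one island is React and the other Svelte:

```astro
---
// The code fence: build-time imports, no client JavaScript yet.
import ContactForm from "../components/ContactForm.jsx"; // a React island
import BurgerMenu from "../components/BurgerMenu.svelte"; // a Svelte island
---
<!-- Static HTML by default: ships zero JavaScript. -->
<h1>My mostly-static site</h1>

<!-- Islands hydrate independently, each on its own trigger. -->
<ContactForm client:visible />
<BurgerMenu client:idle />
```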

Let me show you a demo, because my website, the one with the AI search, that's built with Astro. I can make this really real for you. This is the source code of my website. If we look at the code base here, you have source, you have pages, and these are just Astro pages. The index page, this is basically what an Astro component looks like. You import a bunch of stuff. This is called a code fence. All of this is imports in JavaScript. We export a JavaScript const. After the code fence, this is all just static HTML, just static, nothing special.

The AI search component is interactive; all the rest is static. Some stuff is interactive, like AskPodcast. This client:visible signals to Astro: don't hydrate this up front; when the component enters the viewport, then load the JavaScript. Otherwise we ship absolutely no JavaScript. My website is like Lighthouse 100; it is extremely fast, as you've seen. Only when we come to AskPodcast does it render. What does that look like? Let's take a look back at the website. Let's go back here. If we load this from the top, let's get that out of the viewport.

This podcast AI thing is not in the viewport anymore. If we hard reload, you see how quick that is, and this is without cache. Ridiculous. As I scroll to AskPodcast, this whole component just loaded. You didn't see it, but it just loaded; there was no JavaScript until I scrolled down. That's the power of Astro. This component is obviously interactive; I can type and search. How does that work? Here you can see it's not .astro, it's .tsx. This is a SolidJS component. If we go here, this whole function is a SolidJS component. createSignal is imported from SolidJS, and Astro speaks SolidJS.

This Ask component is an island in my static ocean; hence, islands. This is the whole power of Astro. I could have implemented that island in Solid, I could implement another island in Angular, and Astro will just put what needs to be where it needs to be. With Astro, the question of which UI library to use becomes: nobody cares. If you have a team that prefers Vue, use it. If you have a team that prefers jQuery, use it. Astro gives you a frame for success; you make that decision separately.

Anyone heard of vlt? It aims to replace npm. npm, the Node package manager, was created to serve packages, then acquired by GitHub, and it's like lame now, for real. It has stagnated; it hasn't seen meaningful innovation in years. Its creators, the founders of npm, said, this sucks, we need to fix it. The same people who created npm have a new company called vlt, at vlt.sh, literally aiming to be the replacement for npm. I would pay attention to that, and I'd encourage you to play with it.

You'll see more of it. Bun is supposed to be a drop-in replacement for Node.js. It's not; it's not fully compatible with Node.js. But it is a JavaScript runtime that is monumentally faster, because it's written in Zig, a much faster language. It does all the things Node.js does, faster. It also includes a test runner, so you don't need to bring in something like Jest. It includes a lot of the things you'd usually reach for in a Node.js environment. Tauri is a way to build desktop applications, like Electron, using web technology.

Your UI is built with Astro or Solid or React, but when you need to access the file system or do some backend stuff, you're not talking to Node.js, which is very heavy; you're doing it directly via Rust. The backend is Rust, the frontend is web technologies. As a result, you can create highly performant but tiny applications. Compare Electron: every Electron app ships the whole Chromium browser and the whole Node.js runtime, two very beefy things. That's why Slack, an Electron app, is so slow. Tauri doesn't ship any browser runtime, no Chrome, no Chromium; instead, it uses the operating system's native web renderer, whatever is built into Windows, macOS, or Linux. And Rust has essentially no runtime overhead. It's all very minimal and very secure. That's the lay of the land for JavaScript.

CSS

This wouldn't be getting you up to date with the web platform without talking about CSS. CSS is actually moving faster than any other part of the web platform today. It is growing so fast: the level 4 CSS modules are out, and they're incredible. A big feature in CSS is View Transitions. Anyone using View Transitions in production? It's not quite production ready; you're using a polyfill? View Transitions is a game changer, because we've always wanted to create mobile-style UIs on the web.

For example, you have a little playlist of songs with album art and titles. When you tap on one, instead of doing a lame slide, what if the album art grows in place? You get the Spotify native-app experience on the web. View Transitions enables this. It works using a bunch of pseudo-elements: when you add a transition to a component, you get all of these, and you can animate them independently. You can say, for example, instead of the root group, I want the album cover and the title in our example to be their own group, and animate them independently of the rest of the page when a View Transition happens. You can create an effect where some elements scale up and stick around while other elements exit the viewport, with fine-grained control over all of it. That, at this point in time, is supported everywhere except Firefox and Safari.
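A minimal sketch of pulling one element into its own transition group, using the standard View Transitions pseudo-elements (selector names invented):

```css
/* Give the album cover its own group so it animates independently. */
.album-cover {
  view-transition-name: album-cover;
}

/* Target that group's pseudo-element during a transition. */
::view-transition-group(album-cover) {
  animation-duration: 300ms;
}

/* Triggered from JavaScript with:
   document.startViewTransition(() => updateTheDOM()); */
```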

Container queries are very new as well. This is really cool, because you can style children based not on the size of the viewport, as with responsive web design, but on something closer. We've done responsive; we've done media queries. Imagine a media query that's relative not to the viewport (min-width: 1024px) but to the size of some parent element. Crazy. It's a game changer. Children can change their sizes based on the next div up, or three divs up, as opposed to the entire viewport. That's what container queries are, and you write them like this (see the sketch below): you have, for example, a post, and you say, I have a container called sidebar; when the sidebar changes, I want to reorient myself. You can say: when the sidebar is bigger than 700 pixels, the card gets a bigger font size. It's not relative to the viewport; it's relative to the sidebar. This is really cool. We're seeing a trend here of increasing granularity.
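The example described above looks roughly like this in CSS (selector names and the font size are invented):

```css
/* Make the sidebar a queryable container. */
.sidebar {
  container: sidebar / inline-size;
}

/* The card responds to the sidebar's width, not the viewport's. */
@container sidebar (min-width: 700px) {
  .card {
    font-size: 1.5em;
  }
}
```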

Instead of the entire viewport being the source of responsiveness, we get more granular control, down to a little div. It's similar with signals: instead of a big component being the source of truth for state, little reactive primitives are. This is an encouragement for your daily job as you build on the web: what can you make more granular, to give more control to the engineers on your team? There's another trend here, which is unidirectional data flow. Container queries can work because they only read the dimensions of their parent. Otherwise, you'd have a recursive loop: you resize the parent, the child changes, and that causes changes in the parent. If you had two-way data binding between parent and child in CSS, you'd have the potential for infinite loops. Container queries only read parent state; they cannot affect parent state. That's why it works.

We also have new units with container queries. Instead of vw and vh, viewport width and viewport height, we have cqw and cqh, container query width and container query height. You might be thinking, why not cw and ch? That's because the ch unit in CSS already exists; it's based on the width of a character in your typeface, so it conflicts. Are container queries going to replace media queries, since you could theoretically just make the entire viewport a container? The answer is no, because container queries are specifically about the size of a container; that's it. A media query is about media. You'd use a media query for things like prefers-reduced-motion, prefers-color-scheme: dark, or "is this print or screen?" Media queries are about media; container queries are about containers. Container queries are supported in every major browser at this point.

The :has selector is really great. It allows you to style a parent based on whether it contains a particular child or not. You can say, if my h1 has an h2 right after it, then adjust the margin; otherwise, don't. It's so cool, and it's supported everywhere. If you're not using :has to style a parent based on its children, you can now. In fact, :has is really great for forms: if a form field is invalid, you can style the entire form based on that. Absolutely brilliant.
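Both examples in CSS (the exact property values are invented):

```css
/* Style the parent based on a child: an h1 immediately followed by an h2. */
h1:has(+ h2) {
  margin-bottom: 0.25em;
}

/* Style the whole form when any field inside it is invalid. */
form:has(input:invalid) {
  border: 2px solid crimson;
}
```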

We also have new viewport units. This is really awesome, because building for mobile on the web has always been painful: 100vh falls apart, ending up under the address bar or above the address bar. We now have small viewport, large viewport, and dynamic viewport units. Small viewport fits in 100% of the viewport with all the browser chrome visible; large viewport fits in 100% of the viewport as if the browser chrome were hidden; and dynamic changes between the two. It's worth noting that when 100dvh transitions from "browser chrome in the way" to "not in the way", it's not guaranteed to be a 60-frames-per-second animation, so it can be a bit janky. This is so cool: solved by CSS. You don't need to do viewport detection with JavaScript or anything like that. This is also supported everywhere.
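In practice it's just a unit swap:

```css
.hero {
  height: 100svh; /* small viewport: always fits, even with browser UI visible */
}

.overlay {
  height: 100dvh; /* dynamic viewport: tracks browser UI, but can animate jankily */
}
```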

HTML

HTML, the GOAT of the platform, the original document markup language. Is there anything new in HTML these days? Yes. We have a popover attribute now, which is so cool, because you can do tooltips natively on the platform. The web platform folks saw this as a source of pain for developers, and the response is: we give you this behavior natively. Popovers are a thing now. You can also anchor them to elements; there are primitives for that. It's very exciting. You also have a way to do modals natively. You do need a little JavaScript, but it's supported in the runtime; you don't need a JavaScript library for modals. You have a dialog element, and you can show or hide it with JavaScript.
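A minimal sketch of both, using the platform's popover attribute and dialog element:

```html
<!-- Native popover: no JavaScript at all. -->
<button popovertarget="tip">Help</button>
<div id="tip" popover>A native, tooltip-style popover.</div>

<!-- Native modal: a little JavaScript, no library. -->
<dialog id="confirm">
  <p>Are you sure?</p>
  <button onclick="this.closest('dialog').close()">Close</button>
</dialog>
<script>
  // showModal() brings focus handling, Escape-to-close, and a backdrop for free.
  document.getElementById("confirm").showModal();
</script>
```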

The cool thing about the dialog element is it handles all the accessibility for you: keyboard and tab accessibility, press Escape to close, layering it over your content. It handles all of those edge cases we sometimes overlook, straight from the platform. Very cool. Then there's Select Menu, which isn't even called Select Menu anymore; it has a new name. It's a way to style dropdowns. If you've ever tried to style a dropdown, it's impossible, because Safari will force it to look a certain way; you have no control. You're getting control with this, whatever it may be called by now. All of this is work done by Open UI, a W3C community group that recognizes these issues and wants to give you more control. I'd encourage you to follow it. It's on the Open UI website; you can find that where you find things.

HTMX

The last thing we'll talk about is HTMX. Anyone heard of HTMX? It's a great way for backend engineers who don't know JavaScript to build interactive web pages. It's really cool: you include one script tag, and HTML gets superpowers. HTML is great, but it's very selective about which elements can do what in terms of interaction. What does that mean? An image element sends an asynchronous fetch request to get its image data, but it's about the only element that can do that.

Buttons are clickable, but divs are not clickable by default; buttons respond to an onclick handler, but divs don't behave the way you'd expect. Why is that? HTMX extends HTML and gives you a bunch of hx- attributes. With these attributes, you can make any element interactive, have any element talk to a remote data source, and place the response wherever you want in the DOM. With one script tag, it gives you the ability to create applications like you would in React or Solid or Astro. You can actually use Astro with HTMX, and that's a really great combo.
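A minimal sketch; the /quote endpoint is invented, but hx-get, hx-target, and hx-swap are real HTMX attributes:

```html
<!-- One script tag, and HTML gets superpowers. -->
<script src="https://unpkg.com/htmx.org@1.9.12"></script>

<!-- Clicking the button GETs /quote and swaps the response into #quote. -->
<button hx-get="/quote" hx-target="#quote" hx-swap="innerHTML">
  Get a quote
</button>
<div id="quote"></div>
```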

Summary

We spent a lot of time on AI on purpose, because I want to make it accessible to you. When I walked in and asked, are you an AI engineer, you wondered why you were here. When I asked again later, many more hands went up. I'm excited about that. I want to see you playing in the space. I want to see you build things like my podcast search. We talked about JavaScript. We talked about immutability. We talked about reactivity.

Finally, we caught our breath with CSS, HTML, and HTMX. If we didn't cover something, or if things change, or if you want to know more, I have this podcast on my website, built with AI, as you saw. You're welcome to check it out and use that feature for the latest. It also has a search across all the talks I've ever done; there are almost 50 of them, and you can find them exactly this way. That's available to you if you want to play with it. It costs a fair amount of money, but it's a great engineering challenge to make it more affordable for myself.

 


Recorded at:

Nov 05, 2024
