This is part of As days pass by, by Stuart Langridge.

You really don't need all that JavaScript, I promise

JavaScript is your behaviour layer; the way to add interactivity to your sites, to provide a slick and delightful user experience, to make everything fast and easy and clean. But at some point everything changed: the tail started to wag the dog instead, and development became JavaScript-first. We'll talk about how you maybe shouldn't rely on JS as much as you're told to, and some practical strategies for how to build sites without reaching for a JS framework as first, last, and only tool for making the web happen.

@JSCampRo has been outstanding, but @sil closing out by making it incredibly clear the costs of JS is :chefkiss:

— Alex Russell (@slightlylate) 24 September 2019

Given at JSCamp.ro, Tuesday 24th September 2019.

Transcript

This is a (lightly edited for clarity) transcript of the talk. As such, it's a little stream-of-consciousness, but it should be readable enough.

Dear Niamh,

Hello, darling daughter!

I thought I'd take this time to write to you while I'm here on my own with a comfortable chair and nobody's listening.

I'm in Romania! It's great here!

But I wanted to take the time in this email to take a little time to enlarge on a discussion that we had about websites; I said they weren't as good as they could be, and the web was slow, and it was complicated, and I said:

You really don't need all that JavaScript, I promise.

And you said: what does that mean, daddy?

This is what it means.

Quite a lot of people have talked today about performance; you've heard stuff from Phil, from Noam, heard stuff from Alex first thing this morning, so you've already heard what happens if you don't care about performance. What happens is you make Alex sad, and then he comes around your house and sets fire to you with his mind. But there are other reasons to care about performance (if you need a better reason than this!), so let's talk about Zach Leatherman. He tested a client-side React site displaying a tweet against a plain HTML file displaying a tweet, to time them and see what difference there was.

Oh no, sorry, I read that wrong: he tested a React site displaying a tweet against an HTML file rendering all twenty-seven thousand of his tweets. 8.5 megabytes of HTML. Which, I wonder, was faster?

Let's let Zach speak for himself. Which has a better first meaningful paint time: raw HTML, an 8.5MB HTML file with the full text of all 27,000 tweets, or a client-rendered React site with one tweet on it? The answer: the 8.5MB of HTML renders faster, by 200 milliseconds.

Now, to be clear, don't serve people 8.5MB of HTML. Don't do that. But, nonetheless, take another example: code highlighting. Remy Sharp was doing some work with code highlighting, syntax highlighting, so he was publishing code examples on his website and wanted them to be syntax-highlighted because that makes them easier to read. And he was experimenting with: do we do the highlighting client-side with prism.js or something like that, or do we do it server-side first? Now, obviously, doing it server-side means that you're serving quite a lot more HTML, because instead of serving just the code, you're serving all the markup as well, spans and code tags with different highlighting and styles in them. But what he found was, in total, even given the fact that you're serving a bunch more HTML, the total transfer size is still 10KB smaller. And even though you're serving a lot more HTML, the parsing time -- the amount of time it took to read that greater quantity of HTML -- was basically unchanged. To quote Remy, "with practically zero impact on parsing time" even though we know the parsed HTML is twice the size when you're server-side rendering.

So it is tempting, honestly, to outsource the work that needs to be done. You say: instead of running it on my server, on my computer, which I have to pay for, and I have to wait for... how about I outsource it to the world's biggest distributed computer, which is the web? But the problem is that the work that needs doing doesn't go away. You're just running it on someone else's computer. You're making everyone else do it instead of you, and honestly their computers are not as good as yours. As Alex says, "the takeaway here is that you literally can't afford desktop or iPhone levels of JS if you're trying to make good web experiences for anyone but the world's richest users".

But we've already heard quite a lot about performance, so let's talk about the next thing.

One thing you could do here, one thing that's quite often suggested, is this: serve less stuff. If you put less on the wire in the first place then it takes less time to download, it takes less time to render, everything's good. And fortunately, if you want to have less on the wire, then you have a very willing partner in this, which is the network itself. Because the network hates you and it wants you to suffer.

So let's talk about availability. Have people heard of GDS? The UK's Government Digital Service? In the UK there's a web team, a technology team, who work for the government and have, to some extent, oversight of government websites, and try to make them better. One of the things they did was a survey looking at their analytics, and they discovered that 1.1% of people hitting their sites weren't getting the JavaScript enhancements. And so you think: well, OK, 1%, but y'know, maybe that's just the way it's got to be. That's not that many. And if people turn off JavaScript or whatever, there's not really a lot we can do about that, and we've got deadlines to meet, and we've got targets to hit, and if we're only losing 1% of people, that's OK.

The point is: it's not like this. It's not one person who doesn't get your JavaScript, and 99 people who are all fine. Quoting from that blog post: "the proportion of people that have explicitly disabled JavaScript or use a browser that doesn't support it only make up a small slice of the people who don't run it".

Some of you may have seen this: I built a thing called "Everyone has JavaScript, right?", and one of the things that it does is that it walks you through all the hoops that have to be jumped through before the JavaScript that you've written actually ends up doing its job on your user's computer. So, until the page loads, your JavaScript is not running. Did the HTTP request work? Did it complete? Was the JavaScript killed by your ISP or by the hotel Wi-Fi or the airport Wi-Fi? In the UK, Sky are one of the larger ISPs, and at one point, they blocked jQuery. By accident, to be sure, but they blocked jQuery. Which was obviously hilarious for everyone who just loaded it from the CDN and assumed it would work. That blew up loads of stuff.

And the point here is that what this actually means is: it's not 1% of users who never get your stuff and 99% of people who do. It's 1% of visits. So the person not getting all the JavaScript you're serving to them is not someone on some incredibly ancient not-really-a-smartphone in another country with hardly any connection or anything like that. It's you, and it's your customers, in a cellar bar or on a train or while they're waiting for the network to come back up. They don't have JavaScript turned off. They're not Opera Mini users somehow browsing from 2003. These are your actual customers, who will find that your stuff just doesn't work sometimes. And what's worse is when someone tries your site, your app, your service, and it doesn't work... and they toggle their phone into and out of airplane mode, and think: it works now, great! Maybe that's OK. The second time... maybe that's also OK. But then the third time, the fourth time... after that people start thinking: this site is rather unreliable. I can't really put a finger on why I feel that... but if after four times or so of it not working, you go away and you don't come back, then eventually what happens is that all your users have gone away and they haven't come back.

But there's another reason, and I think for this room probably a more important reason, why the modern web isn't necessarily something you want to embrace with both hands. And it's this: it's unnecessarily difficult. The world that we're building, that we're inviting new developers into, that you're all building, is really bloody hard, and I wish it wasn't.

Can you keep up with all this stuff? I can't keep up with all this stuff.

And what's more important about this is that it's not just me saying this. It's not me standing on stage and haranguing you only because I'm too dim to understand this, but everyone else is fine.

Drew McLellan -- built 24ways.org, half of the team building noti.st and Perch -- he says: "Increasingly there seems to be a sense of fatigue within our industry. Just when you think you've got a handle on whatever the latest technology is, something new comes out to replace it." Who recognizes some truth in that statement?

Owen Williams: "I've discovered how many others have felt similarly, overwhelmed by the choices we have as modern developers, always feeling like this is something we should be doing better. The web is incredible these days, but I'm not sure every app needs to be a React-based single page application, because the complexity is so great in a product's infancy."

Garrett Dimon again, saying exactly the same thing: "SASS, JavaScript dialects, NPM, and build tools solve some shortcomings with front-end technologies, but at what cost? I’d argue that for every hour these new technologies have saved me, they’ve cost me another in troubleshooting or upgrading the tool due to a web of invisible dependencies. In the meantime, I could have broken out any text editor and FTP program and built a full site with plain HTML and CSS in the time it took me to get it all running. And the site would have been easier to maintain in the long-run."

Rachel Andrew, the other half of the noti.st team: "I can still take a complete beginner and teach them to build a simple webpage with HTML and CSS, in a day. We don’t need to talk about tools or frameworks, learn how to make a pull request or drag vast amounts of code onto our computer via npm to make that start. We just need a text editor and a few hours. This is how we make things show up on a webpage."

In 2007 I gave a talk in London where I complained that people were writing links on their pages which look like this (a big pile of JavaScript), a span, but "enhanced" with Ajax, loading everything with Microsoft.XMLHTTP. And I said: this is rubbish, don't do this. This is what a link looks like (<a href="somewhere.html">to somewhere</a>). Do that instead. And it's great: the world appears to have listened to me, twelve years on, because now links don't look like the big pile of JavaScript code, which is good; they look like this instead: a big pile of React code. So, y'know, thanks.

But more seriously, why are we doing all of this? I don't think people do stuff on the web just because it annoys me. I don't actually think that. Most of the time. Why are we doing this? Let's take a step back.

There are reasons, good reasons mostly, why this stuff is important. You've got things like component reuse. The idea of being able to pick up a component someone else's already written and debugged and tested, and dropping it into your site so you don't have to spend that time yourself is really handy. You can pick up whole libraries of existing code. Your whole team is working from a consistent starting point. You've normalized the differences cross-browser or cross-platform, and everyone's working from the same place, and that's also really handy. You can apply some organizational structure to your application, which you can define in code rather than it being maintained in someone's head. You're following best practices, so other people are suggesting that this is a good idea and you're following those practices from the rest of the industry. And that's all good. All this stuff is important.

But I think that these are after-the-fact rationalizations. I think we didn't say: we want all of those things, so let's build frameworks. I think we said: we've built frameworks, now, justify why they're a good idea. And that's what we came up with.

So, given that we've decided as an industry to use these things, what do we get out of it?

You have to understand, Neo, that most of these people are not ready to be unplugged. And many of them are so inured, so hopelessly dependent on the system that they will fight to protect it.

Are you listening to me, Neo? Or are you looking at the framework in the red dress?

Look again.

I think the reason people started inventing client-side frameworks is this: that you lose control when you load another page. You click on a link, you say to the browser: navigate to here. And that's it; it's now out of your hands and in the browser's hands. And then the browser gives you back control when the new page loads.

Now, in the old days, when you clicked on a link, the browser went white and didn't show you anything until the new page loaded. That has changed a bit now, so it doesn't turn the page white until the new content starts coming through. But there is still that loss of control, that gap in the middle. You lose control over the experience. And as designers, as developers: we don't like losing control over the experience. And to be honest with you, we are right to not like it. So people wanted to keep control over that experience: they don't want to say, OK, go off to the browser and then, browser, when you've taken us to that place that I asked you to, give me control back. Page loads are terrible. They have to be avoided. They must be avoided. And I think that's a lot of what laid under a lot of the initial work on this.

So you say to yourself, I have a plan. Instead of me letting the user click on a link that then goes to the browser and the browser giving me back control when the new page loads, what about implementing that myself?

Instead of letting the browser handle navigation, I will handle navigation, so as to avoid the loss of control. What I'll do is, I'll XHR the page off the server, and then I'll innerHTML it into the current page. You're getting around the loss of control by handling loading yourself. Don't trust the browser to do it, do it yourself. Implement it internally.
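That pattern, sketched very roughly; every name here is invented for illustration, and I'm using fetch() rather than raw XHR for brevity, but the shape is the same:

```html
<!-- A hand-rolled "never reload the page" navigation sketch.
     All element names and URLs here are illustrative only. -->
<main id="content"><p>The current page's content.</p></main>
<script>
document.addEventListener('click', function (e) {
  var link = e.target.closest('a');
  if (!link) return;
  e.preventDefault();                 // take control away from the browser
  fetch(link.href)                    // pull the next page off the server
    .then(function (r) { return r.text(); })
    .then(function (html) {
      // swap the new page's content into the page we're already on
      var doc = new DOMParser().parseFromString(html, 'text/html');
      document.getElementById('content').innerHTML =
        doc.getElementById('content').innerHTML;
    });
});
</script>
```

And at that point you've taken on responsibility for everything the browser used to do for that navigation: error handling, history, scroll position, the lot.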

And then once you've done that, you then start thinking: that's really annoying, because when I pull back a bunch of HTML and then I swap out the page I'm in with this new one, a bunch of my HTML is the same! The header is the same, and the sidebar, that hasn't changed, and that seems really wasteful. So you invent the virtual DOM. And then you say, but when I put all my virtual DOMed new HTML in place, it doesn't change the URL! So you clamour at the browser manufacturers to invent the HTML History API, and you do client-side routing. And at that point: you are a framework. You have reinvented all of this stuff. And so the key point about this is: it's all built one step on top of another, dependent on the previous. And each individual step is quite logical, given what's come before. So it's a pyramid, where each thing is built on top of the previous thing. But the problem is: that pyramid is superbly balanced on a tiny, tiny point.

What if you could control page loading without having to implement loading yourself?

This is what <portal> is for. Has anyone come across the portal element? It's really new. Basically a portal is an iframe, but you can tell the browser: make the thing in this iframe be the main page. It's an iframe you can navigate into.

This is an image from the web.dev site, where they show a new page loading in an "iframe" in the bottom corner, and then the user clicks on it and you navigate into it. But the key point about this is that the URL changes too. And that's not HTML History API trickery: the browser genuinely navigates into the page which is inside that iframe.

(insert a brief demo)

We load the JSCamp.ro site in our portal, and then I say navigate into it, and we immediately navigate into the thing showing in that iframe. And the URL has changed. Browsing context has changed. We've actually navigated into that iframe: I didn't just load the same URL as that iframe in my main page. I told the browser: do it.

So if I now bring this out of the way again, this is the code: so you create a portal element, just the same way you'd create an iframe, and stick it in the body (which you don't have to do; I'm making it visible in all these demos because otherwise it's not much of a demo). I click on a button, and it navigates into the portal, and that's all it does. I'm trying to demonstrate at least a little bit how it works, but the key thing here is a magic new function: portal.activate(). What that does is, it says to the browser: navigate into that portal. Make the thing that's displayed in that portal be the main page.
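For reference, the shape of that code is roughly this, per the experimental Portals proposal as it stood at the time (Chrome-only; the URL and the button are placeholders, and the API details may well have changed since):

```html
<button id="go">Navigate</button>
<script>
// Create a portal just the same way you'd create an iframe...
var portal = document.createElement('portal');
portal.src = 'https://example.com/';
document.body.appendChild(portal);  // visible only so the demo shows something

// ...and on a button click, navigate into it.
document.querySelector('#go').addEventListener('click', function () {
  portal.activate();  // the browser genuinely navigates to the portal's page
});
</script>
```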

But the key point here is: in between you creating the portal and setting its src, and you activate()ing it, you've still got control. So you can do whatever you want in that gap, such as animate it.

(insert another brief demo)

I create a portal, and again it loads in the bottom corner, and then I say: navigate into it. This time it should zoom up and fill the screen, and then it navigates into the portal. OK, not the most earth-shattering demo, but bear with me: that code is not doing anything magic at all. This is all stuff that... there's no new portal API which makes it zoom, or anything. It's all just bog-standard CSS. I create a portal in the bottom corner (well, actually I create it at full screen and then scale it down to the bottom corner) and then I have an expand class which expands out to full size, and then I set things up thus: create the portal; add a URL to it; and then when someone clicks on the navigate button, I add the expand class, and at the end of the transition, activate the portal. Because you've got a new element which really exists in your DOM, you can do all the same stuff with it that you do currently. You don't need special APIs from it, frankly, other than the activate() API.
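Roughly sketched, that demo looks like this; again this is the experimental proposal, and the selectors, class names, and timings are all invented for illustration. The only portal-specific call is activate(); everything else is ordinary CSS transitions:

```html
<style>
portal {
  position: fixed;
  width: 100%; height: 100%;
  bottom: 0; right: 0;
  transform: scale(0.25);               /* full-size, scaled into the corner */
  transform-origin: bottom right;
  transition: transform 0.4s;
}
portal.expand { transform: scale(1); }  /* zoom back out to full screen */
</style>
<button id="navigate">Navigate</button>
<script>
var portal = document.createElement('portal');
portal.src = 'https://example.com/';
document.body.appendChild(portal);

document.querySelector('#navigate').addEventListener('click', function () {
  portal.addEventListener('transitionend', function () {
    portal.activate();  // only navigate once the zoom has finished
  }, { once: true });
  portal.classList.add('expand');
});
</script>
```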

And one of the nice things about this is that it solves actual problems. So rather than give you more examples, this is an actual problem I ran into that I wanted to try and solve this way.

So you've got a website; a documentation site for a web library. (I'm not going to tell you which one, because I haven't actually proposed this solution to the people yet, and they might be a bit annoyed if I proposed it in a talk!) What you've got here is two panes: the left-hand pane is basically a table of contents, and the right-hand pane is the content. So down the left-hand side you've got all the different bits of the documentation, and each one of them is a header and a link. So you can click on it and then it will show that, in the right-hand pane. That seems to me a perfect case for a completely static site; you don't need any dynamic stuff here. It's just completely static.

So I said: make them all separate pages. Generate them once and then you're done! No problem. And they said: there is a problem with that, and the problem is this: on the left-hand pane, if you scroll down in that left-hand pane and then click on a link, when it loads the new page, it loses the scroll position, because that's a new page. It will scroll that left-hand pane back up to the top. And I said: that's a good point.

And then I thought about how else you might solve this problem. And, honestly, the first thing that came to my mind was <frameset>. But there is actually a reason beyond age why you wouldn't want to use framesets for this, and that's that they don't change the URL. So when you click on a link, the page that the browser is looking at is the "master" frameset page. So clicking on a link doesn't change the URL you're looking at. This is one of the reasons we sacked off framesets in the first place: sending a documentation link to someone else will always send them a link to the "root" of the documentation, not the page you were actually looking at, which is rubbish.

So we could do the whole thing with an SPA, which is what the site developers were talking about doing: we'll build the whole thing with some kind of client-side framework to do this, just in order to solve that problem of the scrolling left-hand pane. And the reasons against that are: literally everything I've said up to now in this talk.

I suspect those of you who are JavaScript programmers will have immediately thought of half a dozen hacky ways to solve this problem, right? When you click on a link, what it actually does is, it reads the scroll position and then stores it in localStorage, and then when a page loads it looks in localStorage and pulls the scroll position back out and scrolls to it. Or you patch all of the links at load time with ?scrollposition=303 or something like that. And the problem with this is: that's just sad, people. Don't do it.

The way I thought we could do it is with the <portal> element, and postMessage. One of the advantages with an SPA, one of the reasons that single page apps exist, is that you never lose control, as I mentioned. The fundamental point is that you get to see, in an SPA, both the page you are on and the page you are going to, at the same time. If you don't have an SPA, you don't get that -- there's basically no way of having that -- and that's a really valuable property to have in situations like this. In loads of situations, you want to know where you're going to, and what you've got now, at the same time, so they can talk to one another; they can interact. But in order to get it, up until now, you've basically had to re-implement everything the browser does, inside the browser, just so we don't lose control when navigating.

But with <portal>, you open up your new page in a portal, and then send it a message, because they both exist at the same time. You say: my scroll position is this; it then moves its left-hand scrollbar, and then sends back "I'm done!" and you say: browser, navigate into it. Done.

(a two-pane scroll demo)

Again not very graphically exciting, this, but: we've got links down the side ("first thing", "second thing", "third thing"). I click on "second thing" in the left-hand pane and it navigates to the "second thing" page in the main pane. I scroll down further and then click on "third thing" and it navigates to the "third thing" page, and my scroll position is still correctly set. These, first/second/third, are all completely separate pages, and none of those pages contain any JavaScript except one link out to one JavaScript file, which does the portal stuff. Which I shall now explain.

So there's no code in the pages, no change to the pages; it's the literal definition of progressive enhancement. If you don't support portals, then clicking on one of the sidebar links is just a link; nothing changes there, just <a href="third.html"> in the page. To drive it, there's this bunch of code, which I'm not gonna get you to read. Essentially what we do is put click handlers on the links in the left-hand pane, so when you click on one, it reads the href out of the clicked link, creates a portal, and then tells it to activate, with a scroll position that it's read. And then the receiving page (which has loaded the same library): when it receives the portal activate event (which means: "you're in a portal, now you're becoming the real page"), it also gets given the scroll position, and it says: scroll to that. And that's it; that's all you've got to do, and it's progressively enhanced. If you don't have support for the portal element then you're great.
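To make the shape of that shared library concrete, here's a rough sketch. Everything in it is invented for illustration, and activate({data}) and the portalactivate event are from the experimental proposal, so the exact names may have changed; the important thing is the feature detection, which is what makes it a progressive enhancement:

```html
<script>
// Loaded by every page. If portals aren't supported, the sidebar
// links stay plain <a href> links and everything still works.
if ('HTMLPortalElement' in window) {
  var sidebar = document.querySelector('nav');  // the left-hand pane

  sidebar.addEventListener('click', function (e) {
    var link = e.target.closest('a');
    if (!link) return;
    e.preventDefault();
    var portal = document.createElement('portal');
    portal.src = link.href;
    document.body.appendChild(portal);
    portal.addEventListener('load', function () {
      // hand the scroll position over as we navigate into the new page
      portal.activate({ data: { scrollTop: sidebar.scrollTop } });
    });
  });

  // The receiving page: restore the scroll position it was handed.
  window.addEventListener('portalactivate', function (e) {
    if (e.data && 'scrollTop' in e.data) {
      document.querySelector('nav').scrollTop = e.data.scrollTop;
    }
  });
}
</script>
```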

Now one thing I should warn you about: this is hilariously unsupported at the moment. Do not even think about using this in production. As far as I'm aware, it's only in Chrome; it's only in the really really latest Chrome -- I've been genuinely worried running up to the date of this talk that I wouldn't actually get a version of Chrome which had the portal stuff in it before September the 24th. I only actually confirmed this was all available at the weekend -- and it's still quite buggy. To give you an example of that (and this one's brilliant), if you've got the devtools up and you navigate into a portal, it takes the devtools away! Which makes it really easy to debug problems, thanks. I know it's a bug, and they're working on it, and it'll be fixed; no problem at all. I don't even know if this is on a standards track, as opposed to just sort of "proposed" as a standard. But I think it's a bloody brilliant idea and I hope the other browsers adopt it.

But this is the point. When I said there that this is progressive enhancement stuff... most of the time you just want to add some sprinkles of activity to an existing page. The majority of websites aren't, and don't need to be, single page applications. And it's not me saying that: it's React saying that. "React has been designed from the start for gradual adoption, and you can use as much or as little React as you want." That's a page on React's own website explaining how to use React, just pulling the components that you need to enhance an existing page. You do not have to hand all control over to your framework.

This is a very small example: this is Vue rather than React (because I don't want to point fingers at just one thing) but this is a perfectly simple page which just happens to use some Vue components in it. But Vue doesn't own the page. What I'm not doing is writing a Vue app and then mounting it on <div class="app"></div> and serving people a completely empty HTML body, because you don't need to do that. Just use components where you need them; use static fast rendering, cached at the edge, HTML; the way that's been talked about, all the way throughout today.
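What that page might look like, roughly (this uses Vue 2, which was current at the time, and the component here is invented for illustration): Vue enhances one small island, and everything else is plain HTML that works with no JavaScript at all.

```html
<!-- An ordinary, fully server-rendered page... -->
<h1>Documentation</h1>
<p>All of this content is plain HTML, readable without JavaScript.</p>

<!-- ...with one small interactive island that Vue enhances. -->
<div id="search">
  <input v-model="query" placeholder="Filter the docs...">
  <p>Searching for: {{ query }}</p>
</div>

<script src="https://cdn.jsdelivr.net/npm/vue@2"></script>
<script>
// Vue owns this one element, not the page.
new Vue({ el: '#search', data: { query: '' } });
</script>
```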

The other thing is that HTML is actually quite a lot smarter than it used to be in the old days. You've got the impression that, HTML 3.2, this is what it looked like, and it would never ever change. Those were the building blocks that we had, to build the web, forever. But it's changed now, quite a lot. Take datalists. One of the Holy Grails in the early days was having an input box that was also a drop-down list, which HTML just flat-out doesn't allow. So people end up implementing their own component, because they want this basic thing which is available in literally every native toolkit. You didn't have it, so you had to implement an entire thing yourself, and it was a total nightmare. Now HTML just does it. This is an input box, but it's also a drop-down.
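Here's the whole of that "entire component" in standard HTML (the option values are just examples): an input that you can type into freely, with a drop-down of suggestions, and no JavaScript anywhere.

```html
<!-- An input box that is also a drop-down: no JavaScript required. -->
<label>Favourite language:
  <input list="languages" name="language">
</label>
<datalist id="languages">
  <option value="HTML">
  <option value="CSS">
  <option value="JavaScript">
</datalist>
```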

Fieldsets. If you've got a bunch of stuff in your form which is the "second page", or it's stuff which is not relevant based on how they're filling in the form, you can just disable a whole field set all in one go. One line of JavaScript.
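For example (the form fields and the checkbox wiring are invented, but the disabling really is one line): setting disabled on a fieldset disables every control inside it at once.

```html
<form>
  <fieldset>
    <legend>Billing address</legend>
    <input name="street" placeholder="Street">
    <input name="city" placeholder="City">
  </fieldset>
  <label><input type="checkbox" id="same"> Same as delivery address</label>
</form>
<script>
// The one line of JavaScript: disable or enable the whole fieldset.
document.querySelector('#same').addEventListener('change', function (e) {
  document.querySelector('fieldset').disabled = e.target.checked;
});
</script>
```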

One of my favorite things is CSS scroll snap. How many of you have implemented a carousel at some point on websites? Keep your hands up if you wanted to implement it? Yeah. Nonetheless, y'know, business requirements will out. Now, CSS basically does all that work for you. So if I scroll through here, you see I scrolled on to the next item, and it snaps correctly into position, for each one. There is no JavaScript here at all: this is just a list inside a constrained box, with scroll-snap-type set on it. And obviously this is a trivially simple stupid example, but you can see how you could then use that to build something larger, because it's already built into CSS. The browser's doing the work for you.
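A minimal version of that demo (the class name is mine; scroll-snap-type and scroll-snap-align are the standard properties doing the work):

```html
<style>
ul.carousel {
  display: flex;
  overflow-x: auto;
  scroll-snap-type: x mandatory;  /* the browser does the snapping */
  list-style: none;
  padding: 0;
}
ul.carousel li {
  flex: 0 0 100%;
  scroll-snap-align: start;       /* each item snaps into position */
}
</style>
<ul class="carousel">
  <li>Slide one</li>
  <li>Slide two</li>
  <li>Slide three</li>
</ul>
```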

Ana Tudor says (and this is an important point), "Just because you don't understand a simple little CSS solution, it doesn't mean it's "weird"/ "crazy". Goes the other way as well. Adding a ton of extra elements just for the sake of having a pure CSS solution when updating a value from the JS would suffice is just silly." And it is! I'm not for a moment saying, don't use JavaScript. I love JavaScript! It's excellent! It's the programming language of the web. I love it to bits. Don't avoid it. But don't necessarily use it for literally everything. Have it be a tool in your toolbox... just have there be other tools in your toolbox as well.

It's good to stay in touch with the latest changes, the latest moves, in our industry; we're in a fast-moving industry! But stay in touch with all of the stuff that's relevant to your job: the HTML, and the CSS, and the JavaScript. We've seen today things like CSS Houdini coming up, which is going to be amazing, and that's fab: stay in touch with that as well! Not just what JavaScript frameworks are providing for you.

Or, if you don't want to, don't! Keeping up with this stuff is hard, and you don't have to, and you shouldn't have to. "Not knowing new stuff is normal," says @flaki: "Not using new and shiny tech is okay. Feeling bad for all of these is common. You are not alone. But you are also completely and perfectly fine, and you are adequate." If you feel overwhelmed by some of this, so does everyone else, as you saw earlier. I don't want to show you a bunch of similar comments from people just to hammer the point home. It's more that the people I'm quoting, these are dedicated serious professionals on the web who've been doing it for years and years and years; they're people who have written books, and spoken on conference stages, and they feel as overwhelmed as I do, as some of you I suspect do. And it's okay to feel like that.

But let's fight the right fight. Don't fight against the web, and the things that browsers are doing and doing well. Don't say, we'll essentially serve a huge great big app on top of it, and ignore what the browser does: that the browser is just a "delivery mechanism" for our application. Don't do that. Because, honestly, there's so much in our industry and in the rest of the world that needs fighting against. There's so many ways that we could stand in solidarity with one another, with the rest of the web, with the rest of the world. There's so many people who need our help. If you want to fight, great! Fight that.

Web frameworks are a great thing. They are a fantastic, brilliant way to prototype stuff; to get things out into people's hands before the web gets around to it. Because we've seen, in talks earlier today, that the web takes a long time to standardize things. And that's because they do a lot of work: you've got to spend time when you're standardizing a feature making sure that the accessibility story is good, and that it's available everywhere, and the fallback story is good, and everyone's included, and performance is great, and there aren't performance problems, and that does take time, it really does. And so there will be times, after someone's come up with an idea but before the web has properly embraced and standardized it, when you want to use it. And great! Go ahead and do that! That's fabulous! Use extra stuff if you need things that aren't currently in the web platform. But if you're always finding yourself doing things that aren't in the web platform... is that really right? I mean, are you honestly on the bleeding edge at all times? To build a login form? Really?

So, my darling daughter, you asked... and you probably got more than you bargained for!

It's valuable what we do, as an industry. We have the power to bring knowledge to the whole world; to connect people together (when that's what they want); to be the greatest repository of information that's ever been known. And I want to keep it that way. That's what I think is important about the web. Thank you, my darling daughter.

I love you.

Daddy.