Web development is going to see a big push towards no-build libraries and frameworks (probably an overcorrection?)
Unless https://nodejs.org can add built-in support for https://www.typescriptlang.org, that might be the start of a real shift towards https://deno.land
There are plenty of benefits to these types of tools. I won't go into the reasons why you would use one here, mainly because the tools do a good job highlighting benefits themselves. Maybe I'll write a future post going down that rabbit hole, but the tl;dr; is that metaframeworks are great to start projects quickly if you're okay with scrapping the codebase and starting over later if the project actually scales.
Picking a metaframework for your next project defines your team structure and hiring goals.
The lines between backend and frontend blur, to the point that there may not be a distinction anymore.
Infrastructure, maintainability, testability, and scaling considerations all change.
Almost every technical hire you make will likely need to be a full-stack engineer; bonus points if they specialize a bit.
Your SQL specialist will need to understand bundlers, JS tooling, and may need to know what "import server", "use server", and "use client" all do.
Your HTML/CSS and accessibility experts will need to know how to work in your JS component framework of choice. If you pull in Tailwind because styling in JS can be painful, they'll need to know that as well.
If you have, or plan to have, a public API it may need to be entirely separate from your own frontend project. This isn't a bad thing (I personally prefer separate backend and frontend codebases), but it flies in the face of a core benefit of a metaframework.
Now that RPCs are new stew rather than 3-day-old halibut (RIP Anthony Bourdain), your project is likely using APIs created by the bundler without versioning or documentation.
Bug reproducibility is going to be tricky too.
Application flow now moves between SSR, client rendering, and functions marshaled back to the server via RPC calls. When a user hits an error, where do you look? How do you recreate it locally? And how do you automate a test?
I'll assume that you're deploying to a serverless or edge environment. In that case, you likely can't recreate the production environment locally if you wanted to.
Developers won't be able to locally reproduce the deployed hardware, production OS, or the production network. The same goes for end users' hardware and network conditions: reproducing client rendering and network issues will be tricky or impossible.
I'm sure there's more I'm missing here, and I haven't touched on the pros though I know there are some.
The moral of the story, though, is to avoid diving into a brand new metaframework just because it's the most hyped online. These tools are brand new and are an amalgamation of so many different concepts (both new and repurposed) that we don't yet know what we don't know.
Picking your tech stack can have a profound impact in the long term. Choose your dependencies carefully, sticking with "boring" tech as much as possible.
Throwing in a couple carefully picked bets on newer tech is totally reasonable, but those bets really shouldn't be so all-encompassing that you need to reconsider your organizational structure or hiring practices.
If a new hire dev needs to manage everything from the database to CSS, of course they'll reach for magic solutions that promise to do everything.
I can't help but think these tools were reactions to the industry abandoning specialization.
Well, OpenAI and the flood of similar large language models may have finally signaled the beginning of the end for the web as we've known it for over 20 years.
Let's start with a bit of history. I'll try to keep this short, but no promises.
tl;dr; Iterations of the internet are defined by the business model driving the machine. We're entering a new phase where advertising is being replaced with scrubbing content to train large language models (LLMs).
I usually distinguish this as the time when content online was largely a one-way street - authors of a site could publish new content to their site but readers couldn't really interact or respond to it. Sites often had hit counters showing how many times the page was requested, but that was pretty much the extent of what you knew about your audience.
Web 1.0 was the wild west of the web. HTML and CSS were simple enough that anyone interested could learn how to get their ideas online. We hadn't defined what a "good" user experience was and people took that freedom to try out wacky ideas. We even had the now defunct `<marquee>` element to scroll banners of text across the screen. AKA the good old days.
Web 2.0 came along in the early 00s, after the dot-com bubble, when everyone realized that the web still does in fact cost money and online businesses eventually have to make money.
No, not the search engine. Google the advertising platform. Sure, Google built a better mousetrap with their search algorithm, but it only exists to sell ads and collect extremely detailed and personal data on its users (which then helps sell more valuable ads).
This opened the door for online businesses to build an actual revenue model. That revenue model was, you guessed it, ads. Countless websites popped up creating new content online with the main goal of bringing readers back to view more ads while unknowingly providing more personal data.
This is the world we've lived in for a couple decades, culminating in both users and some browser vendors implementing features that try to protect users from the very ad model that was Web 2.0. That battle has been brewing for a while, but it seems as though we're finally seeing a light at the end of that tunnel with a new business model hitting the streets.
OpenAI opened Pandora's box in more ways than they're given credit for. Not only did they make impressive (and dangerous) gains of function compared to previous machine learning tools, they opened the door for a new way of monetizing online content.
Reddit users protested when Reddit began locking down their API; developers shut down projects when Twitter began charging for API access. Both, though, are just a sign of what's to come.
Online advertising isn't the same business it used to be. It's even harder to really track value gained from online ads and the game of cat and mouse with ad blockers will never end. Luckily for online businesses, OpenAI created value where it didn't previously exist.
If you can throw up walls around user-generated content and control access, you can sell that to anyone wanting to train an LLM. Today LLMs are mostly trained on data from a couple years ago, but eventually that will change, and when it does Twitter and Reddit will be sitting on a gold mine of user content heavily focused on current events and trends.
Where Web 2.0 focused on content creators selling ads by knowing intimate details of every visitor, Web 3.0 will focus on getting users to create as much content as possible inside of a walled garden.
I'm honestly not sure whether I think this new model is better or worse than the advertising model. I never liked the ad model; though I could fight back by trying to block ads, I really had no chance at any semblance of privacy online. User tracking is woven so deeply into the fundamentals of the modern web that it's effectively impossible to truly be anonymous online. At the end of the day I may be able to make targeting me for ads a bit harder, but ironically I may only be making ads less interesting to me without escaping ads altogether.
What I can say, though, is that I don't like the idea of content I post online being used to train for-profit machine learning tools. I rarely use social media and whenever possible I try to go the IndieWeb approach of writing on my own site and syndicating out to those other platforms.
This might have to change going forward though, and as much as I like the open web we may have finally made it untenable to post content publicly. I may change nothing in the short term, but I have toyed with hiding all of my old posts and only publishing to a private RSS feed. To that end, I'm curious what the LLM bots actually scrape and whether publishing only to RSS and avoiding an HTML page would dodge their algorithms for now.
HTMX is so compelling because it actually writes sane frontends for once.
The backend is the final say in state, authorization, and error handling. Expecting the frontend to optimistically duplicate all this logic is an impossible task.
Once it's shipped though, you have much less tech debt if you stuck with "boring" tech.
I've got a hunch there's a strong correlation between # of dependencies and faith in the product-market fit
To that end, I'd love to see a "feature complete" flag similar to deprecation warnings.
Would be handy to know a dependency is still actively maintained and the API won't break on me later
Assuming that intent matters depends on being able to first predict the outcome.
We still can't even say how or why existing AI do what they do; we definitely can't predict the outcome of new changes.
The lyrics absolutely are populist, but there's nothing left/right about it.
If anything this is a protest song that should bring us all together, shining a light on how far we've slid away from governments that work for the people rather than control them.
The real question, though, has been whether web developers should use the platform or reinvent the wheel for every site.
No one ever used a single page application and said "eww gross, this feels like a native app!"
The idea behind building an MPA is to use what the browsers give us, and so far that has been a full page refresh with the pros and cons that go along with it.
The concern is with recreating support for page history, the URL bar, caching, etc.
Reinventing browser specs assumes that spec authors didn't know what they were doing or browser developers shipped buggy implementations that don't meet user needs – and aren't worth improving.
Newer specs like View Transitions and shared elements blur the line between MPA and SPA.
Multi-page apps will be able to ship full HTML pages and achieve page transitions that feel seamless, complete with elements that persist state across pages.
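For the curious, the cross-document proposal (still in flux as I write this, so treat the syntax as a sketch) boils down to a CSS opt-in plus a shared name for any element that should persist:

```css
/* Both pages opt in to cross-document view transitions (proposed syntax) */
@view-transition {
  navigation: auto;
}

/* An element with the same view-transition-name on the next page
   morphs between the two states instead of hard-cutting */
.site-logo {
  view-transition-name: site-logo;
}
```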
So back to the original debate. We got lazy in the web dev community and latched the "use the platform" debate onto a limitation with page transitions and full refreshes.
This never was the real debate at all as far as I can tell, just a convenient line to draw for catchy names.
The question is whether sites should abandon core built-in features just to rebuild them from scratch.
And if the browser specs don't meet today's needs, should we discuss new specs rather than throw out the old ones and take on the burden of reimplementing and supporting the same exact features?
Moral of the story:
The web is built on specs. They can be slow moving and there is a time and a place for polyfilling and testing proposed specs in userland code, but the goal should always be to clarify patterns and solidify those in open specs that can be supported long term.
I've always understood government-backed securities, i.e. bonds and treasury notes, to be considered an extremely safe/boring investment. They pay a fixed interest rate over a fixed period of time - you know exactly how much you put in up front, how much you end up with, and unless the US government allows itself to default there's effectively zero risk. So when a bank fails and the explanation is that it held toxic government securities it catches my attention. When it happens to be a bank that is deeply ingrained in the very industry I'm a part of, I start to ask questions.
Silicon Valley Bank (SVB) got caught up in what can only be described as a modern day bank run. It was a bank run because they went insolvent after too many depositors withdrew cash all at once and modern because we've never seen a bank run fueled by a combination of social media, an extremely well capitalized and tight knit industry, and the speed with which digital bank transfers can be completed. Yes people did in fact line up at bank branches to try to withdraw their money, but almost all of it went out over the series of interconnected tubes we popularized in the 90s.
It's been widely reported that SVB held a collection of risky government securities that, through a technicality, could be reported on corporate financials at face value even though they were billions of dollars under water. Basically, if those bonds were held to maturity the bank would get all its money back plus a small profit, and finance rules allow banks to mark such securities at face value. Unfortunately for SVB, they were running low on cash and needed to sell the securities. Because the Fed has increased interest rates dramatically over the last year, no one actually wants to buy old bonds that pay well under market rate, and they weren't worth anything close to face value.
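To make that concrete with made-up numbers (not SVB's actual book): say you bought a 10-year, $100 bond at face value paying a 1.5% coupon, and market rates then jump to 5%. A buyer would only pay what makes the old bond's yield match the new rate:

$$P = \sum_{t=1}^{10} \frac{1.50}{(1.05)^t} + \frac{100}{(1.05)^{10}} \approx 11.58 + 61.39 \approx \$73$$

Held to maturity it still returns the full $100, but sold today it fetches roughly 73 cents on the dollar.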
The other issue often raised is a regulatory change in 2018. The Dodd-Frank Act was passed after the 2008 financial crisis in an attempt to better regulate banks that are, for better or worse, too big to fail. The important bit for SVB was that banks with over $100 billion in assets were required to pass a series of stress tests designed to see how a bank would weather different storms. One such test was a sharp increase in interest rates. This may have helped raise alarm bells earlier if the threshold hadn't been raised in 2018 to only include banks with $250 billion in assets - SVB had around $200 billion shortly before it collapsed.
This can't be the whole story though, right? Why would a bank really hold onto assets that they know are losing more and more value as interest rates go up? Even if they took the Fed at its word a year ago when it didn't expect to raise rates, after a few rate hikes you'd surely shore up your books. Well, it gets interesting when you come across a seemingly tiny rule change in 2013 that I have found only a few references to so far.
This one gets really in the weeds of how banking and finance works - I'm going to do my best to gloss over the details and hit the high points. This isn't a full accounting of the change and I may miss a few things, bear with me! The Economist had a [great article](https://www.economist.com/finance-and-economics/2023/03/21/americas-banks-are-missing-hundreds-of-billions-of-dollars) (archived link) that touches on the ideas below with more thorough explanation in case you're interested.
Historically, banks weren't able to go straight to the Federal Reserve to buy securities on a whim. The Fed may occasionally make a lot of securities available, but standard practice for banks involved what is called the repo market. A bank that has cash on hand and needs to park it in something considered safe and liquid buys government-backed securities from other banks - this is a repo transaction.
In 2013, the Fed was already a few years into near-zero interest rates and started to model what might happen when they began to raise rates. The Fed can only change the interest rate it charges banks; what it can't directly change is the rates banks charge each other. The Fed began to worry that if they raised rates there would be a delay in the rates banks charged each other, removing some of the Fed's control over the market recovery. They came up with a simple fix - if banks can buy securities directly from the Fed whenever they want, then bank-to-bank interest rates will have to more closely follow the Fed's rate. Why buy from a bank if the Fed pays you better rates?
This rule change created what was called a reverse-repo market, and may very well have been what really set SVB up for failure.
Well on the surface, that little rule change doesn't seem to matter - all the Fed did was ensure they had a tighter control over rate increases. What could go wrong?
When one bank buys a bond from another bank on the repo market the money does in fact leave one bank but it doesn't actually leave the banking system - commercial banks still collectively have the same level of liquidity (i.e. cash on hand).
When one bank buys a bond directly from the Fed in the reverse-repo market, that cash leaves the bank and is parked on the Fed's books. That money has left the banking system - the total liquidity of commercial banks decreased.
Again, so what? Well, this may very well mean that SVB did exactly what the Fed wanted them to do. When the Fed started to raise interest rates, SVB considered buying securities directly from the Fed to get the best rates.
It just so happens that in 2020 and 2021 the Fed was also busy printing money to the tune of more than $4 trillion. All of that money ends up in banks, and because banks are bound by regulations that limit how much cash they can keep on hand, they have to invest it somewhere.
Put these two factors together: the Fed (at the request of Congress) gave banks trillions of dollars. The banks needed a safe place to put that money: government bonds. And the Fed had already changed the rules to ensure that in times of interest rate hikes, it would almost certainly pay the best rates.
Sure, the bank has to be on the hook at some level, but they were playing the game designed by the Fed. The Fed wanted banks to turn to them for securities in times of increasing interest rates. Federal bonds are supposed to be secure, safe investments. SVB may have even taken the Fed at its word when it continued to announce that the $4 trillion it printed couldn't cause inflation and that no rate hikes were expected. Is that really their mistake though, trusting the Fed's official announcements and predictions?
None of this is to say there was some grand conspiracy where the Federal Reserve was purposely trying to set banks up to fail. Any of the decisions and rules changes along the way seem reasonable in isolation. The question here is whether the bank really was as negligent as is often claimed, or was the Fed negligent in recognizing the potential risk of the changes they made along the way?
Should SVB have diversified its portfolio to cover the bad investments? Probably, but only if they had good reason to believe they would be running low on liquidity and be forced to sell those investments early.
I could go on and on about how all roads lead back to the Federal Reserve here. Suffice it to say there's more to such a complex situation than the two big narratives that have gotten air time so far.
In the meantime, I'm still trying to answer the simple question of how all SVB deposits were made whole over a single weekend when the bank wasn't sold or liquidated. As of the beginning of this year the FDIC's Deposit Insurance Fund only had $128.2 billion in capital. The best estimates I've seen peg uninsured SVB deposits in the ballpark of $150 billion.
How the heck was that money put back in place given that the only public explanation was that taxpayers wouldn't be on the hook and banks would eventually pay out of pocket? Did they honestly just click a few buttons and re-enable account balances, or did they fund it from some other part of the Fed's books? The answer is probably horribly mundane and boring, but I'm really curious to know how such a massive depositor bailout was pulled off that quickly.
I don't even know what to make of that. Hitting the gas and brake pedals at the same time is only a good idea if you're making a Tokyo Drift sequel.
Check out the `astro-webfinger` announcement post for a quick rundown of what the heck the integration even does!
The first release of `astro-webfinger` was focused mainly on static sites, letting you create a vanity username for your fediverse (usually Mastodon) account from your own domain. The main caveat was that the Webfinger spec can't really be supported on a static site: the spec uses query parameters to search for account details, and that requires a live server to work.
For example, fire up your Mastodon frontend of choice and search for `@tony@tonysull.co` - you'll find my account details. Now search `@spam@tonysull.co` - it resolves to the same account! That's because the actual search query is ignored; the static `.well-known` file always points to my indieweb.social account.
One Vite plugin refactor later and `astro-webfinger` now fully supports server-side rendering and multiple accounts!
Static builds are still supported (and encouraged!); the API hasn't changed there and the 2.0 update should be seamless.
If you do need to alias multiple ActivityPub accounts, or just don't like that search doesn't really work statically, we've got you covered.
import webfinger from 'astro-webfinger'
export default defineConfig({
/**
* BYO server-side rendering adapter
* https://docs.astro.build/en/guides/server-side-rendering/
*/
adapter: {},
output: 'server',
site: 'https://tonysull.co',
integrations: [
webfinger({
tony: {
instance: 'indieweb.social',
username: 'tony',
},
spam: {
instance: 'myinstance.social',
username: 'fake'
}
}),
],
})
Don't forget to add an SSR adapter for Astro! See Enabling SSR in Your Project for more details.
What's going on in the example above?
First, we need to make sure Astro knows the production domain for the project.
site: 'https://tonysull.co'
integrations: [
webfinger({
tony: { /* */ },
spam: { /* */ }
}),
],
Next, we tell `astro-webfinger` that there are two supported ActivityPub accounts. These account names are combined with the production domain — `tony@tonysull.co` and `spam@tonysull.co`.
webfinger({
tony: {
instance: 'indieweb.social',
username: 'tony',
},
spam: {
instance: 'myinstance.social',
username: 'fake'
}
}),
Finally, we provide the account details for the real fediverse accounts. Note that the local usernames don't have to match the account they redirect to, like `spam@tonysull.co` redirecting to `fake@myinstance.social`. Redirects can also point to different instances; in this case one points to `indieweb.social` and the other points to `myinstance.social`.
A lot more details can be added to a Webfinger file, including custom aliases and links. Those aren't supported today - I'm definitely not a power user in the fediverse so I haven't had a need for them, but I could see that being helpful to others.
Already using Mastodon and looking for a way to let people search for you with your super fancy custom domain? Head over to npm for setup instructions.
Reach out if you find a bug, have a feature request, or find me on Mastodon to get in touch!
Getting started in web development? Do yourself a favor and learn HTML, CSS, and JS first.
You'll need those skills whether you end up in a WordPress project or building web apps with NextJS + react + Tailwind + ...
It's really interesting to see JS frameworks shift back towards server rendering.
What if we leaned into web components and treated the DOM as our state?
Interactivity is managed by custom element attributes.
Site logic might boil down to really thin event handlers that `querySelector()` a node and toggle an attribute.
This would leave a lot of state logic we're used to today without a home...
Have state that doesn't make sense in the DOM? It belongs on the server.
HTML partials would be really interesting here. Leave complex business logic on the server, only asking the browser for enough resources to handle basic user interactivity.
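A minimal sketch of the DOM-as-state idea (the element and markup here are hypothetical):

```js
// Hypothetical <nav-menu> element whose "open" attribute IS the state.
// Assumes markup like <nav-menu><ul>...</ul></nav-menu> plus a #menu-button.
class NavMenu extends HTMLElement {
  static get observedAttributes() {
    return ['open']
  }
  attributeChangedCallback() {
    // React to state changes by reading the DOM itself
    this.querySelector('ul').hidden = !this.hasAttribute('open')
  }
}
customElements.define('nav-menu', NavMenu)

// The entire "app logic": query a node, toggle an attribute
document.querySelector('#menu-button').addEventListener('click', () => {
  document.querySelector('nav-menu').toggleAttribute('open')
})
```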
tl;dr; Request routing is a complicated issue that's a fundamental feature of monolithic JavaScript frameworks. Whether your tool of choice is a static site generator like 11ty, a server-first framework like SvelteKit, or something in the middle like Astro, they're all having to deal with request routing. What would a shared router spec look like if it needed to work well enough for every framework?
🌶️ If you can't describe how a router works in two sentences or less, it's too complex. A developer who isn't familiar with the code should be able to go from a URL to all the code required to actually render the page.
Above all, when I'm looking for a router I want simple. Like really simple. Ever hear the argument that filing taxes should be easy enough that it fits on a single postcard? Router API docs should be even easier.
The number of code paths and potential edge cases balloons as complexity grows. Request routing is too fundamental in a server framework to risk unnecessary complexity.
Dynamic routes are key if you want reusable code; ideally one route for something like `/blog/:slug` can be used to render every blog post on the site.
This can be handled a few different ways, and even skipped if you're using local files and are cool with referencing a layout every time, but this is a must for anything using a remote data source like a CMS.
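Stripped of framework specifics, dynamic route matching is a small amount of code. A rough sketch, not any particular framework's implementation:

```js
// Match a URL against a pattern like '/blog/:slug' and extract
// named parameters, or return null when the route doesn't match
function matchRoute(pattern, url) {
  const patternParts = pattern.split('/').filter(Boolean)
  const urlParts = url.split('/').filter(Boolean)
  if (patternParts.length !== urlParts.length) return null
  const params = {}
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      params[patternParts[i].slice(1)] = urlParts[i]
    } else if (patternParts[i] !== urlParts[i]) {
      return null
    }
  }
  return params
}

matchRoute('/blog/:slug', '/blog/hello-world') // { slug: 'hello-world' }
matchRoute('/blog/:slug', '/about') // null
```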
This one seems like a no-brainer: servers need redirects. This can be as simple as an API for one route to tell the framework a request actually needs to be handled elsewhere.
This can also get a bit more clever with automatic 404 handling where a fallback route is rendered when the request doesn't match any known routes. This does lead to a bit of convention overhead though, more on that later.
Say a bug is reported on a blog index page, for some reason draft posts are still being shown in production. I want to know exactly where to go to start debugging, and ideally that means the logic for rendering that URL should all start from one file.
There's been a trend recently for frameworks to split route logic across multiple files: `+layout.whatever`, `+loader.gql`, `+page.nothtml`, `+styles.idontlikecss`. Where do I start for that bug? Is it in the loader? Is a filter function in the page broken? Is the data actually being loaded by a parent route, passed down through some nested layout logic or a `<Provider>`?
I get why splitting this logic across files is helpful for frameworks. It can make bundling more efficient, it helps define conventions that draw the line between client & server, and it looks cool as hell in a file tree.
I've used plenty of routers over the years and by far the nicest developer experience I've had is with routers that start each route from one file. I can go straight there and see where the data is loaded, how the data is processed, and where it's rendered to HTML. No bouncing between files, no remembering the "right" convention for where data loading lives in the file structure or component tree. Simple.
Which leads us to...convention. Solutions and API designs based on convention always throw up red flags for me. Adding a random character like `$` after a function name to imply some bundling magic seems like a nice idea, but that's one more rule I have to memorize — a rule that's based entirely on convention (i.e. arbitrary) and has no meaning behind it that I can learn from.
Why a `$` character? Why not `&`, `_`, or `%`? What does a random character in my function name have to do with bundling? Nothing. Absolutely nothing. It's just a rule you have to remember.
Don't get me wrong, convention can also be helpful and arguably necessary, but it should be a rare last resort. Every convention has to be memorized and kept ready while you're coding, because while you can name your function whatever you want (no convention), you must add a `$` to it for bundling.
In the case of routing, the `+layout`, `+page`, `+somethingbroke`, etc. is pure convention. The names kind of make sense at least, but why do they have to be separate files? Can't I architect my project in the way that works best for me and my team?
If framework libraries are meant to be a component-based approach for rendering DOM, why the heck are components so often used for things other than rendering DOM?
Defining routes, context providers, etc. as components really blurs the line in a confusing way IMO. If a component is just logic and doesn't actually render anything, it really shouldn't be a UI component at all. This pattern has always felt a bit more like a workaround for a more fundamental problem with a component framework, or JSX itself.
I know this one is very much my own opinion, but it's my wish list and I'll cry if I want to.
This one's really a new addition to the list thanks to the work Tanner Linsley has been doing on TanStack Router.
The idea of being able to automatically type check a URL and its parameters is pretty compelling. Heck, I'd be happy with just the basics of verifying string vs. number in a URL or comparing the requested language against an enum of supported locales.
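This isn't TanStack Router's actual implementation, but TypeScript's template literal types hint at why this is possible at all: the parameter names can be inferred straight from the path string.

```ts
// Infer parameter names from a path pattern at compile time
type PathParams<Path extends string> =
  Path extends `${infer _Pre}:${infer Param}/${infer Rest}`
    ? { [K in Param | keyof PathParams<`/${Rest}`>]: string }
    : Path extends `${infer _Pre}:${infer Param}`
      ? { [K in Param]: string }
      : {}

// Resolves to { lang: string; slug: string }
type BlogParams = PathParams<'/:lang/blog/:slug'>
```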
I'm biased here: I work for Astro and have done a decent amount of work on our router. I prefer file-based over config-based routing, and Sapper got me onboard with the `[slug]` filename convention for URL parameters years ago.
That's not the only solution though, and frankly there might be very good reasons that a universal routing API would need to be config-based. What we really need is to accept that routing is so core to an app that we can't keep piling on features and conventions.
There's plenty to complain about the standards process for updating web specs, but it's served us pretty well so far. I'd love to see frameworks follow a similar model, aligning on a single routing solution rather than reinventing the wheel eight different ways.
Google announces that JavaScript execution will be disabled again for search crawlers
Even if your site builds to static HTML/CSS/JS, and it probably should, you still have to consider routing. Should your about page live at `/about.html` or `/about/index.html`? How will you handle redirects, especially on a localized site that supports multiple languages? If you're using a static site generator (SSG) like Astro or 11ty, what conventions and rules does the build system use to go from page templates to `.html` files?
Jump into a server-side rendered (SSR) app and you need to consider how errors and 404s are handled — an unhandled error in an SSG might break the build, but an unhandled error when server rendering will likely end up returning a 500 error or showing the visitor a blank page. Ultimately routing concerns for SSR are really similar to modern SSGs though; case in point, Astro added SSR support without any meaningful changes to how its router worked.
Then comes the real elephant in the room — single page applications (SPAs). The debate over SPAs has been going on for damn near a decade, but at the end of the day if SPAs are your thing then go for it! SPAs add a lot of complexity in client-side routing, but we're talking about server-side routing today so let's gloss right over that back button.
There's a (probably annoying) nuance in these two words.
Complicated problems can be hard to solve, but they are addressable with rules and recipes, like the algorithms that place ads on your Twitter feed. Complex problems involve too many unknowns and too many interrelated factors to reduce to rules and processes. - Theodore Kinni, MIT Sloan Management Review
Specs and browser standards landed on a set of rules and considerations, but these specs were really written more to manage the networking considerations rather than how a server internally handled routing. This was possible because routing is complicated but not complex.
Routing in a JavaScript-based web framework is similarly complicated; unfortunately we haven't yet circled the wagons to define a framework routing standard, and the constant push to add more features has left us with a pile of complex solutions.
What template/component/function should be used to render the URL? Is the URL even valid? How are URL parameters matched for dynamic routes like `/blog/post-123`? What happens if two templates match the same URL? What's the "right" developer experience (DX)? These are really tricky questions to answer because they end up rooted more in tradeoffs and opinions than anything else.
So what are we to do? The most clear answer here is to start from the top and write a list of rules for how routing works in our framework.
Debate file-based routing vs. config-based routing, then pick one...or go nuts and support both.
Whiteboard all the syntaxes we can think of for URL parameter matching. Maybe a regex-able string like `/blog/:slug` does the trick. Or a file naming convention like `/pages/blog/[slug].html`. We are in JavaScript land after all, maybe an object to define routes will do the trick?
import BlogIndexPage from './routes/BlogIndexPage'
import BlogPostPage from './routes/BlogPostPage'
const routes = [{
path: "/blog",
component: BlogIndexPage,
children: [{
path: [":slug"],
component: BlogPostPage,
}]
}]
Any solution here will have the possibility of naming collisions where multiple routes match the same URL, i.e. `/blog/latest` would match `/blog/[slug].html`. That's probably not what we want, since the template would have to know about this and handle `latest` as a special slug. Time for a set of rules defining priority order.
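One hedged sketch of what such a priority rule could look like: compare routes segment by segment and let static segments always beat dynamic ones.

```js
// Rank candidate routes so '/blog/latest' wins over '/blog/:slug'
function compareRoutes(a, b) {
  const aParts = a.split('/').filter(Boolean)
  const bParts = b.split('/').filter(Boolean)
  for (let i = 0; i < Math.max(aParts.length, bParts.length); i++) {
    // Treat a missing segment as dynamic (lowest priority)
    const aDynamic = aParts[i]?.startsWith(':') ?? true
    const bDynamic = bParts[i]?.startsWith(':') ?? true
    if (aDynamic !== bDynamic) return aDynamic ? 1 : -1
  }
  return 0
}

const ordered = ['/blog/:slug', '/blog/latest'].sort(compareRoutes)
// => ['/blog/latest', '/blog/:slug']
```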
Ok cool, now how do we validate URL parameters to make sure the blog post slug was valid?
With file-based routing this would be handled in the template itself, likely throwing an error or redirecting, which both add features and complexity. Maybe we get fancy and support regex-like syntax in the file naming convention, a la `/pages/[lang(en|sp)]/blog/[slug].html`.
Config-based routing might open a few doors here, what if each route can have a validation function?
import BlogIndexPage from './routes/BlogIndexPage'
import BlogPostPage from './routes/BlogPostPage'
import { getPost } from "./db/definitely-not-mysql.js"
const routes = [{
path: "/blog",
component: BlogIndexPage,
children: [{
path: [":slug"],
component: BlogPostPage,
check: async ({ slug }) => {
const post = await getPost(slug)
return !!post
}
}]
}]
When the validation fails we probably want to handle that gracefully. Does our router need a special 404 template? Redirects might be important here, so `/blog/fake-post` can redirect to the main blog page instead of a 404 — does the router automatically redirect to the closest parent route, or expose a redirect convention/helper function? Can we at least stick to specs here and use a standard `Response` object?
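Leaning on the platform could look something like this (a sketch: `getPost` is the loader from the example above, `renderPost` is a hypothetical stand-in for the template):

```js
import { getPost } from './db/definitely-not-mysql.js'

// Hypothetical stand-in for whatever renders the page to HTML
const renderPost = (post) => `<h1>${post.title}</h1>`

export async function handleBlogPost({ slug }) {
  const post = await getPost(slug)
  if (!post) {
    // Graceful fallback: a plain spec-standard redirect to the blog index
    return new Response(null, {
      status: 302,
      headers: { Location: '/blog' },
    })
  }
  return new Response(renderPost(post), {
    headers: { 'Content-Type': 'text/html' },
  })
}
```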
It's pretty common for a site design to reuse layouts/wrappers for sections of the site. i.e. every page has the same header/footer and every blog page has the same sidebar recommending latest posts in addition to the global header/footer.
That sure feels related to routing; to avoid code duplication our router really should build that formula in so layouts are nested by default. But wait, we need an escape hatch, right? `/blog/latest` and `/blog/:slug` should reuse the same layout, but that might not make sense for `/blog/:slug/edit`.
Time for a bit more complexity. Should a page be able to eject its parent layouts with some kind of boolean flag or API? Maybe even a `use myownlayout;` pragma?
Do we tweak the file naming convention to group routes by shared layout, something like `/pages/[lang(en|sp)]/blog/(public)/[slug].html` where the `(public)` is ignored in the URL and only used for folder structure? Config-based routing may be easier here, if we're okay with additional ambiguity around route collisions and more priority rules.
import RootLayout from './layouts/RootLayout'
import BlogLayout from './layouts/BlogLayout'
import BlogIndexPage from './routes/BlogIndexPage'
import BlogPostPage from './routes/BlogPostPage'
import BlogPostEditPage from './routes/BlogPostEditPage'
import { getPost } from "./db/definitely-not-mysql.js"
const routes = [{
path: "/",
layout: RootLayout,
children: [
{
path: "/blog",
layout: BlogLayout,
component: BlogIndexPage,
children: [{
path: ":slug",
component: BlogPostPage,
check: async ({ slug }) => {
const post = await getPost(slug)
return !!post
}
}]
}, {
path: "/blog/:slug/edit",
component: BlogPostEditPage
}
]
}]
I went out of my way here to avoid grabbing examples from any specific framework or router. Regardless of what Twitter might lead you to believe, when it comes to open source frameworks it's a small world of passionate, dedicated individuals working to solve real-world problems and make everyone else's job just a bit easier. The last thing we need is yet another Framework A vs. Framework B debate.
What we really need is to rally the troops and stop solving the same problem fifteen different ways. Routing is effectively table stakes for a modern framework at this point, so why reinvent the wheel? We'd be better off with one standard way of handling request routing on the server, even if that means foregoing some of the nice convenience features we have today in the name of a simple paradigm.
Rant over. I'll leave you with this hot take.
🌶️ If you can't describe how a router works in two sentences or less, it's too complex. A developer who isn't familiar with the code should be able to go from a URL to all the code required to actually render the page.
https://dev.to/oxharris/rethinking-the-modern-web-5cn1
Also highlights what brought me to Astro in the first place: [HTML, CSS, JS]
> Build_Step
> [HTML, CSS, ...JS?]
👇 🧵
Logical properties like `padding-inline` or `block-size` fundamentally won't work
i.e. it only really works with left-to-right (LTR) languages (see the CSS sketch after this thread)
That's fine for a smaller project, but a show stopper for a properly translated site like Astro's docs
Code splitting seems to be a real challenge regardless of framework
Utility classes used on any page often bleed out to every page on the site
Call it a bundler challenge or even an optimization for page transitions, but I don't want complex /admin styles on my landing page
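For context on the logical properties point, here's the difference in plain CSS: physical properties hard-code a side, logical ones follow the writing direction.

```css
/* Physical: always pads the left edge, wrong for RTL languages */
.card {
  padding-left: 1rem;
}

/* Logical: pads the "start" edge, flips automatically for RTL */
.card {
  padding-inline-start: 1rem;
}
```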
This photo is straight out of the original Duo's camera app. Heavy-handed with the fake bokeh, but I really didn't do it any favors pointing straight at the evening sun.
The naming collision issues here really highlight the risk of leaning on class names vs CSS variables.
Open Props avoids that whole headache
I always hated that bitcoin was described as decentralized, ignoring that having more than one head says nothing for privacy or security.
At risk of butchering Cory's original article, I'll try to summarize my takeaway here.
When a new platform is launched it needs users, and all of the focus is on growing the user base by solving a problem and making people happy. That inevitably hits a wall, though, and focus shifts to making monetizers happy - think YouTube adding support for video monetization and cash donations. Once that revenue machine is firing on all cylinders it's time for the platform to shift to its final challenge: making shareholders happy. The final sign of a doomed platform is looping back to focusing on users, almost certainly after the increasingly obvious cash grab starts to erode market sentiment.
So what does this have to do with the web? It may not be a platform run by a multibillion dollar corporation, but the pattern is still there. The early web was largely a grass roots effort. Sure, companies were jockeying for position in the future but a vast majority of online content was put there by individuals. That was the honeymoon phase for the web, all things considered it didn't take long for user growth to balloon.
What came next revolves around the dot-com bubble. Piles of cash were thrown at companies moving online; when that boom went bust, those left standing were wondering how to actually drive revenue...simple! Advertising. The internet moved from a focus on making users happy to a focus on helping monetizers make money.
I could go on for hours about how centralization was inevitable at that point, but suffice it to say a handful of big tech companies account for over half of all global internet traffic. Google, Netflix, Facebook, Apple, Amazon, and Microsoft are all publicly traded corporations that are legally bound to do what is best for their shareholders. You honestly can't blame them at this point, but the internet has entered the last major step of extracting as much value as possible for shareholders.
The IndieWeb is often brought up as a bit of whimsical nostalgia for Web 1.0. We link to the original Space Jam website, begrudge walled gardens, and throw around phrases like "own your content".
Cory's "Enshittification" pattern offers an interesting lens though. The web isn't a platform driven by a corporation beholden to shareholders. There isn't the ratchet of falling profits and a shrinking user base to push the platform back to appeasing users.
At its core the IndieWeb is an attempt to reclaim the internet, to pull focus back to the users. We need more people publishing their own content on their own websites. We need to claw back online shopping from Amazon; ordering from a company's own site will push more businesses to build their own e-commerce shops.
...the water's nice! If you're interested in building your own site, avoid the rabbit hole of deciding what tools to use and Just Ship It™. Not interested in managing your own site? Give Ghost a look for an easy way to get started (I'm not affiliated with Ghost at all, I just like what they do). It really doesn't matter how you set up your site, just get out there and share your thoughts and ideas on your own site instead of feeding the algorithmic social media machine.
Why publish schemas to NPM?
Content should be portable (see Markdown); if we can standardize schemas, our content can be used across different sites and themes 🤔
Two paragraphs in and I realized how much I have to say about analytics! This will likely be the first in a series covering everything from my love/hate relationship with Google Analytics to what data you really need to be collecting. Subscribe to my RSS Feed for updates!
I avoided adding analytics to my own sites for years, mainly out of concerns for visitors' privacy. I've also never enjoyed working with Google Analytics on client projects - the dashboard is much too complicated for my taste and frankly I've never seen a company actually gain meaningful value from such detailed (and complicated) analytics data.
I've been using Fathom Analytics on my own sites for a couple years now and really grew to appreciate the no nonsense approach to what data is collected and how it's presented to me. Sure there aren't as many features as Google Analytics, but that's because Fathom actually cares about user privacy (and isn't also in the search or ads business).
It's no surprise I prefer to use Astro these days. Go ahead and pull up the developer console on our site...
That's right, we use Fathom on Astro's homepage (as well as our docs and astro.new).
GDPR-compliance is no joke, and getting that right in the analytics business takes some serious attention. Fathom Analytics supports true data isolation, meaning all of your EU traffic is processed and stored in Europe.
Ad blockers and similar browser plugins often block known analytics scripts in the name of user privacy, and rightfully so! I happily follow privacytools.io recommendations every time I'm setting up a new machine or installing a new browser. The goal is to protect privacy though, and Fathom has that built right in!
Fathom supports custom domains, making it easy to serve the analytics script from your own subdomain. This gets around the broad net used by most ad blockers. Normally I don't like those kinds of games, but I've spent years focusing on my own online privacy setup and feel comfortable that using Fathom analytics really doesn't violate the privacy that many of the ad blockers are intending to protect.
uBlock Origin is the main holdout here. It's a long story, but basically they've decided to chase Fathom Analytics and now block any `script.js` loaded from any subdomain. This is mind-boggling to me - it doesn't seem unreasonable for a site to bundle its own `script.js` and host it on a custom CDN subdomain...
Already using Fathom Analytics and looking for an easier to maintain setup? Check out the `astro-fathom` integration I just published to npm.
Reach out if you find a bug, have a feature request, or just need to vent about online privacy!
Mastodon has been in the spotlight recently, largely as a reaction to concerns over the future of Twitter. I'm still not sold on Mastodon as a protocol, or as a user experience for that matter, but it's built on a collection of excellent protocols.
🌶️ Hot take 🌶️ I'm also not sold on federation as the right answer, or push-based designs in general. I can dive down that rabbit hole in a future article if you're interested!
tl;dr; Just looking for the code? I published `astro-webfinger` to make it easy to add Webfinger support to an Astro site.
What exactly is Webfinger? Think of it as a way to add metadata to an email address. There's a bit more to it, but for the sake of Mastodon support it's just used as a way for one server to discover a Mastodon profile by email address.
When you search for a user's profile on Mastodon you usually search for something similar to an email address, ex: you can find me on Mastodon by searching for `tonysull@indieweb.social`. Your Mastodon server sends a request to the `indieweb.social` server, specifically to the `.well-known/webfinger` endpoint. For example, searching for my account will send a request to `indieweb.social/.well-known/webfinger?resource=acct:tonysull@indieweb.social`.
The response will include metadata about my account like my profile's homepage and the URL for reading a feed of my posts and activity.
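As a rough illustration (abbreviated, and the exact fields vary by server), the response looks something like this:

```js
const res = await fetch(
  'https://indieweb.social/.well-known/webfinger?resource=acct:tonysull@indieweb.social'
)
const jrd = await res.json()
// jrd is roughly:
// {
//   subject: 'acct:tonysull@indieweb.social',
//   aliases: ['https://indieweb.social/@tonysull'],
//   links: [
//     { rel: 'self', type: 'application/activity+json',
//       href: 'https://indieweb.social/users/tonysull' }
//   ]
// }
```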
Running your own Mastodon instance is hard. There are a few services out there that will host a server for a small monthly fee, though as of now many of them are out of resources and aren't accepting new customers.
Hopefully one day I'll be writing about how to fully integrate my own site into Mastodon, but for now the Webfinger metadata is just a bit of JSON...let's self-host it!
Why? For one thing, self-hosting it now means that I can change Mastodon servers later without having to go back and fix any links to my profile that I've already published. @toot@tonysull.co will always link to my current Mastodon profile and will always be searchable, no matter how often I server-hop.
I published the `astro-webfinger` integration to make it easy to add Webfinger support.
If you're curious how to do this yourself I highly recommend Lindsay Wardell's Integrate Mastodon with Astro post. It not only goes into detail on writing your own Webfinger support, but also dives into using Mastodon APIs to pull your Mastodon activity back into your own site!
If you've worked with Astro integrations before this will feel very familiar.
# npm
npm i astro-webfinger
# yarn
yarn add astro-webfinger
# pnpm
pnpm i astro-webfinger
To configure this integration, pass a `config` object to the `webfinger()` function call in `astro.config.mjs`.
import webfinger from 'astro-webfinger'
export default defineConfig({
integrations: [
webfinger({
instance: 'myinstance.social',
username: 'myusername',
}),
],
})
That's it! The integration will add a `/.well-known/webfinger` route to your build.
If you're already using Server-Side Rendering (SSR) in your project it will also include the correct `Content-Type` header.
Currently, `astro-webfinger` will return your Mastodon profile regardless of the username that was actually searched, ex: search for `fake@tonysull.co` and you will still discover my Mastodon profile.
A future release of `astro-webfinger` will add an SSR mode that allows you to configure which usernames should be recognized in search results. This will also allow you to alias multiple Mastodon profiles from your own domain.
Translation: "Do you know how to use a glass?"
Today, we are proud to launch The Astro Showcase: a place to explore beautiful community websites built with Astro. Use the showcase for inspiration, or just to check out what's possible in Astro. We are proud to be powering the teams behind these awesome sites, including Google Firebase, Trivago, The Guardian, Daily Dev Tips and more.
We launched Astro almost a year ago with the goal of delivering lightning-fast performance with a modern developer experience. Astro makes it easy to ship only what you need - 100% static HTML by default, bring your own framework to sprinkle in interactivity only where you need it.
Today, we're excited to announce two new catalogs to help speed up development: Themes and Integrations.
With our new Theme Catalog, it's never been easier to go from idea to live traffic. And when it's time to add your favorite tools, libraries and services into Astro, our Integration Catalog has got you covered. Extend Astro with a single `astro add` command.
Astro has always maintained a collection of official example projects and starter templates. These were great learning resources, but they were also limiting: 1 official blog theme, 1 official docs theme, etc. etc.
Meanwhile, our amazing community of developers had already begun to build and share fully-designed themes on our community Discord. Do you keep meaning to start a personal blog, but never seem to find the time? Grab a copy of the Astro Ink theme and start writing! With built-in support for dark mode, automated publishing for draft posts, and client-side search, you'll skip weeks of hacking and jump straight into sharing your content.
We created the Astro Themes Catalog to showcase these amazing community-developed themes alongside our official set of starter kits.
Visit astro.build/themes to get started with any official or community theme.
Interested in releasing your own theme? We’re here to help! Check out our publishing best-practices for help getting started and instructions to get your theme listed on our catalog. Need help? Join the #themes channel on Discord to chat with other Astro theme creators.
We're launching the next Astro Hackathon on Monday, April 11! Cash prizes will be awarded for a wide range of categories. Full details coming soon!
For example, let's say you're having performance trouble with 3rd-party scripts on your page. This isn't surprising, since sending too much JavaScript can lock up the main thread and block the `window.onload` event (even if the script is marked as `async` or `defer`). Search our Integrations catalog to find the official Partytown integration for Astro, and add it with a single command:
Check out the full docs for details on how to build your own integrations.
We can't wait to see what you come up with! Add your own themes to the catalog, publish your own components and integrations, and join our Discord to say hello!
I've honestly not given custom elements much of a chance since the early iterations of the spec, and it's about time I give them a proper chance again. Like many web developers I really love the vision of having a standard for building native custom components without reaching for the usual JavaScript frameworks.
tl;dr; Web components aren't the magic bullet I'd hoped for, but they've come a long way in the last couple years. When paired with Astro's new resolve API you end up with a dead simple way to quickly author simple pure JavaScript web components, bundle them for production, and hydrate them on the client. Check out a live demo or jump right into the source code on GitHub.
No! There are great options if you're ready to go all-in on frameworks though; I strongly recommend you check out webcomponents.dev's detailed breakdown of all the different ways it can be done.
At the end of the day though, the frameworks are just going to compile down to JS (I haven't seen any WASM implementations yet). Frameworks like Lit let you skip the boilerplate and can help avoid some of the gotchas along the way, but what does a basic web component even look like?
Web components can be daunting - shadow roots, `<template>`s, and extending `HTMLElement` aren't exactly old hat for most web developers. Let's break down the basic structure first, then jump into a full example.
Fair warning, I'm by no means an expert on web components - please hit me up on Twitter if I misrepresent something here!
One of the more contentious parts of the spec, and the cause of many of the limitations, is the shadow DOM. The idea is to encapsulate each custom element from the rest of the DOM - if you've ever worked with `iframe`s this will sound familiar.
The key here is that code outside of the custom element can reach down into the shadow DOM and change things, styling for example. It doesn't work the other way around though, elements and styles inside the shadow DOM can't reach outside and affect the outside world.
Sounds great, until you want a web component to change its style based on the content around it - theming can be tricky and force you to jump through hoops.
Thankfully, the shadow DOM is actually opt-in and you can extend `HTMLElement` without losing access to the rest of the DOM.
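A tiny sketch of that opt-out (the element here is hypothetical): skip `attachShadow()` entirely and render into the regular "light" DOM, where page-level styles apply as usual.

```js
// No shadow root at all, so page CSS can style this element freely
class PlainBadge extends HTMLElement {
  connectedCallback() {
    this.textContent = this.getAttribute('label') ?? 'badge'
  }
}
customElements.define('plain-badge', PlainBadge)
```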
`<template>`s
Web components are meant to be reusable, and for that to be possible you need to be able to define a template with the element's initial HTML elements and styles.
This can be done a few different ways, but the most common way is to use template literals right in your web component's JS file. I'll be using one of the excellent examples from webcomponents.dev as a starting point.
const template = document.createElement('template')
template.innerHTML = `
<style>
/* your styles */
</style>
<span id="count"></span>
`
Feels a little weird writing HTML in a template literal, right? It gets the job done though, and in my opinion plain JS web components really shine with small components so this shouldn't get too crazy to maintain.
All this really does is create a new `<template>` tag, just like if you directly included it in your `index.html`. The template contains all the initial styling and HTML used to initialize the component.
This is where it gets really interesting. Ever wonder why you can't make your own `<select>` or `<input>` elements? Well now you can (kind of)! I wouldn't recommend trying to actually replace existing HTML tags - I don't know if that would even work and it sounds like a nightmare for accessibility tools. But you can make your own `<my-counter>` component; that's definitely not part of the HTML specs.
class MyCounter extends HTMLElement {
constructor() {
super()
this.count = 0
// open mode keeps all elements accessible to the outside world
this.attachShadow({ mode: 'open' })
}
// ...
}
// tell the browser to use this class for all `<my-counter>` elements
customElements.define('my-counter', MyCounter)
Notice the `open` mode there? I mentioned earlier that you can avoid the one-way encapsulation of the shadow DOM; that's all it takes. It's a shame having to turn off one of the key features of custom elements, but theming and styling really can be a big problem for real world apps!
I'll leave it up to you to check out the full source code on GitHub. I also recommend checking out the examples from webcomponents.dev as well to see what all I had to change. Spoiler: not much!
One huge benefit of Astro is the heavy focus on minimizing, or even completely avoiding, the amount of JavaScript used on a site. I've written before about how important simplicity is in web development, so I'll spare you the rant here.
For me, the big promise of web components is the ability to easily share basic elements across multiple projects without being tied to one specific framework. I'm not ready to build an entire PWA in web components, but when it comes to the base-level building blocks for a site I'd love to share a single `<nv-button>`, `<nv-spinner>`, etc.
Maybe one of these days I'll find the time to build a full OpenUI toolkit to use for all of our client projects...
const template = document.createElement('template')
Well that didn't take long, literally the first line of code breaks our Astro build 🤣
Astro is a static site generator; the entire build runs in Node.js. That means we can't actually touch the browser-only `document` object.
// Just create a shared string here, no more document reference
const template = `
<style>
/* your styles */
</style>
<span id="count"></span>
`
class MyCounter extends HTMLElement {
constructor() {
super()
const elem = document.createElement('template')
elem.innerHTML = template
this.count = 0
this.attachShadow({ mode: 'open' }).appendChild(
elem.content.cloneNode(true)
)
}
}
customElements.define('my-counter', MyCounter)
There we go! Don't touch the `document` object at all until the constructor is called. Note that this really could/should be cleaned up to lazily initialize `elem` once and reuse it across instances, but for the sake of this demo I kept the code easier to follow.
Astro just recently released version 0.19, and one of the cool new features is the Astro.resolve() API. With it, you can take a relative URL to another file in your src/ directory and resolve it to the built file path.
This is handy for images, Astro.resolve('../images/penguin.png'), but we're going to take it a step further and use this new API to pull in our web component's JS file.
In the demo project, the web component is defined in src/components/my-counter.js. Inside the homepage at src/pages/index.astro,
<head>
<title>Welcome to Astro</title>
<script type="module" src={Astro.resolve('../components/my-counter.js')}
></script>
</head>
<body>
<my-counter></my-counter>
</body>
That's all there is to it! From there Astro will be aware of the JS file, bundle it during production builds, and replace the Astro.resolve call with the URL needed to load in the component.
Follow us on Twitter or subscribe to our RSS feed so you don't miss a future post covering more complex web components written with Lit!
Web components aren't a magic bullet, but I found this experience much less frustrating than the last time I tried it out. To be fair, that was probably back in 2017 when the spec was still an early work in progress.
I'm still not sure that I'd go through the effort to build an entire site in custom web components just yet, but I won't actually be surprised if that's a great option in the not too distant future.
Until then, browser support is surprisingly good and web components can be a great solution to reusable base components. Whether you're managing multiple projects or just preparing for the next big shakeup in frontend frameworks, it's worth giving native custom elements a second look in 2021.
]]>The beauty of embracing open formats like Markdown and JSON on top of Git is that a CMS you designed more than 5 years ago, works perfectly with the latest modern web tooling.
— Forestry CMS (@forestryio) August 5, 2021
Now go check @astrodotbuild to ship faster websites 🚀 https://t.co/0o6OgCfkgx
Follow @forestryio on Twitter if you don't already! Their git-based CMS tools are a huge win for any Jamstack project and should you have any questions or issues, their team is extremely responsive and helpful.
tl;dr; The web development landscape is constantly in flux. It's easy to see the latest tools, languages, and frameworks get hyped on your favorite blogs and podcasts, but at the end of the day you probably need to build quality sites that can be easily maintained for years. More often than not, the boring tools make your life easier in the long run!
It wasn't so long ago that sites were edited by hand - good old fashioned HTML, CSS, and (maybe) JS. A site contained more <marquee>s than pages (RIP, marquee!), and querySelector wasn't a thing.
Browsers are still fundamentally the same today. Yes, they're much bigger and more feature packed, but at the end of the day a browser takes markup in the form of HTML, styles in the form of CSS, and JavaScript for interactivity. We have more APIs to play with, and more caching problems to manage, but it's the development workflow that has really gotten complicated.
Jumping ahead a few years (or 20), it's now not uncommon to start a project by deciding on the 5+ tools and platforms you'll be bolting together. Do you want static site generation (SSG), server-side rendering (SSR), or client-side rendering (CSR)? Are you standing up and maintaining a backend server, or jumping to a serverless backend? Will plain CSS do the trick, or is SASS/SCSS/LESS your preferred approach? What about this CSS-in-JS and styled components madness?!?
Oh wait, you also need to host it somewhere. Does the project need continuous integration (CI) with automated testing, automated deployments, staging environments, etc.? Better schedule another few days to decide on those tools, set up accounts, and bootstrap the basics together before you start coding!
No! Sure, all these tools and processes were created for good reason. CI can save a huge amount of time. Transpilers, build tools, and linters can help catch common mistakes and enforce team coding standards. Frontend frameworks can really be a lifesaver, especially if you need to build interfaces with complex interactivity and state management.
But all these advancements can turn into a real nightmare, too. Ever tried onboarding a new hire that is an experienced web developer but not familiar with the latest React trends like styled components or hooks? Is the time spent ramping up on these abstractions really worth the effort for a simple marketing site or statically hosted e-commerce store?
The tech stack you choose can be a much larger commitment than it first seems. I would be very surprised if React was no longer used a couple years down the road, but what about the 15 libraries you pull into that React app?
An everything-but-the-kitchen-sink™ app platform like Nuxt.js, Next.js, or SvelteKit can really let you hit the ground running, but will it still be supported a few years down the road? Will developers still be excited to work with it, or will it be the butt of too many developers' jokes like WordPress?
Back to the entire reason we're here, sticking with "boring" tech may not be such a bad idea. The JSON spec is around 20 years old, git is 16 years old, and markdown is 17 years old. Though the HTML/CSS/JS standards are a constant work in progress they go back as far as 1993 (ok, 1995 for JavaScript).
Even better, these are all open standards that may evolve but will never go away.
This very site is built with a shiny new SSG, Astro. Isn't that a bit hypocritical? I don't think so, but hear me out!
Take a look at the source code for this project and you'll see that the real meat of the site is based entirely on the standards above. SCSS is used for styling, but honestly the only reason we even used that was because CSS nesting isn't supported yet.
Most of the site's pages are authored in markdown and various bits of data are stored in JSON.
Page templates and components are in .astro files that really just consist of JavaScript in the form of frontmatter, styles are either in global .css files or component-level <style> blocks, and markup is mostly HTML with a bit of JSX-like JavaScript inline for conditionals and for-each loops.
It would take a matter of a few hours to migrate the entire site to a different SSG, or even an entirely different platform like Next.js.
So yes, we're technically using a few new build tools to wire templates together, but at the end of the day it's easy to use Astro and forget that you aren't using the basic, web-native HTML/CSS/JS stack.
It's a very common mistake, especially for developers newer to web development, to get stuck on the hamster wheel chasing the latest toy. Don't avoid innovation, just be strategic and selective in where you invest your time and efforts!
There's always a new tool to try or pattern to learn, but at the end of the day it's much more satisfying to ship a product than to try out new tools on sample projects that never see the light of day!
I feel like I bashed on React here - don't get me wrong, React is an excellent and extremely powerful tool. React still has a huge lead in usage compared to the other frontend frameworks and, especially if you're in the job market, it's much easier to find job listings that require React experience than listings that don't mention React at all. But if you find yourself running in circles, getting frustrated with a build pipeline that's fighting you or magic hooks that you don't understand, take a step back and learn the basic (boring) 20 year old tools. Ship fast, don't break the build!
]]>tl;dr; We received some great feedback from Forestry after the original demo was released. I had been holding off on revisiting that post until a few Astro features were released. Check out the live demo, or dive into the main diff that includes most of the updates listed below.
Specifically, Astro's Collections API was updated to handle even more use cases. Oh yeah, and their docs site was launched! It's hard to believe this project was only announced a few months ago, the community has really grown quickly and countless hours were put in to build a great looking (and localized!) documentation site.
Awesome post 👏
— Forestry CMS (@forestryio) June 29, 2021
We should give a try to @astrodotbuild 🚀
Minor feedback:
1. You could set /images as your default media folder instead of the default /uploads to further reduce the diff.
2. Authors could be stored as JSON file(s) instead of Markdown if you don't need a body.
Forestry's CMS is extremely flexible, and honestly it's crazy the feature set they're able to offer while storing all your data in your own git repo. No, this isn't a paid post. I'm a user of Forestry and a big fan of the git-based CMS approach!
One of many options when configuring Forestry is the default folder for media uploads. I definitely had an eye towards minimizing the diff in the original demo, I'm just in the habit of using a /uploads directory for user uploaded content. Old dogs, new tricks, and all that.
This was excellent feedback, and worth digging into a little further.
I originally had author information stored in separate markdown files, /data/authors/don.md and /data/authors/sancho.md. This honestly didn't make that much sense, markdown is a great way to combine properties and content (usually built to HTML). The blog demo doesn't have any author-specific content, just a few properties like name and image.
Given that the site doesn't need to pull any HTML content for the author, it makes much more sense to store that data in a simple JSON file. Let's get rid of the author markdown files entirely, replacing them with src/data/authors.json:
{
"don": {
"name": "Don Quixote",
"image": "/uploads/don.jpg"
},
"sancho": {
"name": "Sancho Panza",
"image": "/uploads/sancho.jpg"
}
}
Forestry supports this out of the box, once you set up the sidebar to include the new JSON file it recognizes that the file is a map and it just works. I honestly expected this to fight me a little bit, and was pleasantly surprised when I had no issue removing references to the old markdown files. I was even able to reuse the same content model.
I did need to update the content model for posts to reference the new JSON file instead of a markdown file, but a few clicks in the settings menu and it was all hooked back up.
Forestry's instant previews run your development server in a docker container and allow you to preview CMS updates in realtime. That's one of those features that can push plenty of projects to use a hosted CMS platform, very cool to see live previews working so seamlessly in a git-based CMS.
One issue I ran into when deploying the first demo was that Astro only supported node 14+. Instant Previews allow you to customize which docker image is used for your development server, but I couldn't quite get it to work with an early version of Astro and ran out of time. As of a couple weeks ago, though, Astro supports node 12 out of the box!
After updating the demo project, setting up instant previews was as simple as going back to Forestry's default preview settings. I had tried a custom docker container with no luck, but the included node 12 + yarn image worked like a charm with the latest version of Astro.
The original collections API in Astro was designed before the beta was publicly released, and it turns out there were a few use cases that were more common than expected.
There aren't any monumental changes here, you can dig through the merged RFC if you're curious. A few of the API names were updated to be more clear, and the API was updated to work with the newer Astro.props API.
You can check out the diff here to see exactly what I had to do to update the $posts.astro route for the new API. Personally, I'm a fan of the newer design and think the code is a bit cleaner and easier to read.
Astro has been moving quickly since its public beta launched! I was glad to see how easy it was to clean up the demo a bit and take even better advantage of Forestry. If you haven't worked with a git-based CMS before I highly recommend you take an afternoon and give it a try. It may not be right for every project, but the developer experience of having all your CMS data right on your local machine just can't be beat!
]]>tl;dr; Our homepage makes fewer requests (10 vs. 21), is smaller (77kB vs. 123kB), and our JavaScript footprint went down from 31kB to a whopping 2kB!
A huge majority of the web is static content, and our site is no different. There are exactly two components with any type of client-side interactivity, the mobile menu and our theme toggle. As you might expect, those components aren't exactly complicated and can be built with only a few lines of JavaScript.
Our site was previously written in SvelteKit and, especially considering it's still in beta, we were very happy with SvelteKit. The fact is, though, that webapp frameworks like SvelteKit, Next.js, and Nuxt.js are focused on building highly interactive frontends rather than mostly static sites.
Don't get me wrong, SvelteKit earned its place at the top of my list when building large, dynamic webapps. But for most projects, I will happily build with Astro and sprinkle in Svelte components when needed.
Surprisingly, not that much! Most of the Svelte components we had were only used for layout purposes, and because Svelte sticks so closely to plain old HTML/CSS/JS it was trivial to convert most of them to Astro components.
The main change will be related to conditional rendering syntax, like hiding/showing an element or looping over an array of data. For example,
In Svelte...
{#if open} ... {/if}
<ul>
{#each items as item}
<li>{item}</li>
{/each}
</ul>
becomes...
{open && (
...
)}
{items.map((item) => <li>{item}</li>)}
Coming from Svelte this feels a little too React-y for my taste, but it is really handy to be able to use JavaScript right in the template. I haven't actually tried this yet, but I assume you could sort() or filter() the array right in your template!
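If it works like the rest of the template syntax, sorting inline would presumably look something like this (copying the array first, since sort() mutates in place):

<ul>
  {[...items].sort().map((item) => <li>{item}</li>)}
</ul>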
Nothing! Our site is pretty straightforward with little more than static homepage content and a blog, those really are table stakes for any static site generator.
When it came time to port over the mobile menu and theme toggle, we decided to use the Svelte components as-is initially. That's the real benefit of Astro, static site generation without having to give up the component framework when needed. Note that we did recently move those components to Astro as well, helping to bring our JS size below 2kB.
This site wasn't using any of SvelteKit's server endpoints, but it is important to remember that Astro is currently focused on static site generation.
Astro doesn't build any kind of server but if you happen to deploy to Netlify, their Functions product is a great fit for Astro. Subscribe to our RSS feed so you don't miss an upcoming post adding Netlify Functions to populate our site's web mentions!
Working with Astro was refreshingly simple. React, Svelte, Vue, etc. have their place in modern web development but it's important to remember that they aren't always necessary. Our theme toggle does nothing more than add/remove a class from the document element, is that really enough to warrant another dependency and ~9kB in extra JS?
Astro is still in beta! The framework has been moving extremely fast, a quick glance at the changelog shows 13 releases so far in the month of July. A few APIs have had small changes and the Collections API is in the middle of a refactor, don't be surprised if you occasionally have to make a few updates to keep up with the latest Astro release.
It's easy to skip right past the simple solution in software projects. Full-featured frameworks can be a huge time saver when starting a dynamic web application, but if all you need is a marketing site you may just be better off with a simple statically built site powered by Astro!
Have more questions that we missed here? Reach out on Twitter!
]]>Animations don't have to be all or nothing though, and it really can be best to start small and slowly grow and polish the user experience. Swup is an excellent starting point, offering all the basics you need to hook up client-side routing with page transition animations. Even better, Swup's API supports plugins with over a dozen pre-made plugins for the most common use cases and a simple API to roll your own custom plugin.
tl;dr; Adding page transition animations to Astro's blog-multiple-authors example was surprisingly simple. Check out the source code here or jump right into the live demo!
There's very little setup here since we're starting with an existing Astro starter. First, let's use degit to create a local copy of the blog example:
npx degit snowpackjs/astro/examples/blog-multiple-authors demo-astro-swup
There are great docs on swup's site so I won't get too in the weeds, but it's worth having a basic idea of how the routing will work.
When a visitor clicks a link on your site, swup will hijack that request and attempt to load the next page in the background. By default swup looks for an element with the ID #swup on every page (this is configurable). When navigating to a new page, swup loads the new content in the background and swaps out all the old content inside the #swup element for the content in the new page.
During the navigation, CSS classes like is-animating and is-leaving are added, giving you hooks to trigger the actual transition animations.
With that said, it'll be a little easier to add swup to every page on your site if they are all using a shared layout component in the Astro project. See this commit if you're curious exactly what I moved around to create the common layout component in this project.
Most importantly, in src/layouts/Main.astro:
<main id="swup" class="wrapper">
<slot />
</main>
This ensures that the #swup element is on every page and wraps all page-specific content that should be replaced during a page navigation.
Swup is extremely configurable, but one NPM dependency and a couple lines of code is really all it takes to get started.
npm install --save-dev swup
Initialize swup in /public/app.js
...
import Swup from 'swup'
const swup = new Swup()
Finally, import app.js in the shared layout...
<html>
<body>
...
<script type="module" src="/app.js"></script>
</body>
</html>
Ok, so that won't actually animate anything yet. Check out swup's theme docs if you want to write your own CSS animations, but they do include a few basic themes on NPM. We'll stick with the basic slide theme for now, which you can see in action on the live demo.
npm install --save-dev @swup/slide-theme
import Swup from 'swup'
import SwupSlideTheme from '@swup/slide-theme'
const swup = new Swup({
plugins: [new SwupSlideTheme()],
})
Astro supports scoped styles out of the box. This can be a huge win for performance and avoiding unnecessary CSS in the browser, but we'll need swup to update any linked stylesheets when navigating to a new page.
This is actually a problem I ran into with barba.js. I couldn't for the life of me figure out how to update the page's <head> when navigating. It seems like this may have been possible in v1 but is no longer supported in v2.
Thankfully, swup has this covered with their head plugin.
npm install --save-dev @swup/head-plugin
import Swup from 'swup'
import SwupHeadPlugin from '@swup/head-plugin'
import SwupSlideTheme from '@swup/slide-theme'
const swup = new Swup({
plugins: [new SwupHeadPlugin(), new SwupSlideTheme()],
})
Plugins to the rescue again! This plugin will replace the old <head> with the version in the new page when navigating - it even has config options should you need to persist some styles or meta tags on every page!
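I haven't needed it yet, but going off the head plugin's README, persisting assets would look roughly like this (treat the option name as an assumption and double-check it against the plugin docs):

const swup = new Swup({
  plugins: [
    // persistAssets keeps existing <link>/<style> tags in place instead
    // of reloading them on every navigation (option name per plugin docs)
    new SwupHeadPlugin({ persistAssets: true }),
    new SwupSlideTheme(),
  ],
})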
Page transitions can really ruin the experience for anyone visiting your site with an accessibility tool like a screen reader. Many accessibility tools depend on browser events to know when the page content has changed, with swup manually changing out the DOM content a reader may never realize anything changed.
Again, there's a handy plugin to take care of this accessibility problem. It isn't a magic bullet and I strongly recommend you manually test all of your sites with a screen reader - if nothing else it's enlightening to see how different using the web can be if you are visually impaired!
npm install --save-dev @swup/a11y-plugin
import Swup from 'swup'
import SwupA11yPlugin from '@swup/a11y-plugin'
import SwupHeadPlugin from '@swup/head-plugin'
import SwupSlideTheme from '@swup/slide-theme'
const swup = new Swup({
plugins: [new SwupA11yPlugin(), new SwupHeadPlugin(), new SwupSlideTheme()],
})
Take a look at our Progressively enhancing Svelte with JavaScript post for a more detailed explanation of why progressive enhancement is so important, but suffice it to say a site should never require JavaScript to be usable.
In the case of swup transitions, if JavaScript fails or is disabled for any reason we have a perfectly functional static site that can navigate to new pages like normal. Swup doesn't change your links or <a> tags at build time, so if app.js never loads or hits an exception you're left with exactly the same static site that Astro built in the first place!
There's plenty more you could do here to really get crazy with animations. Pull in gsap to create intricate animation timelines or add transition animations to SVGs. Though I haven't tried it yet, I see no reason why you couldn't have multiple instances of swup targeting different portions of your page.
Follow us on Twitter if that's your thing, and reach out to share the animations you come up with for your own site!
]]>When designing the tech stack for an e-commerce project, the web of competing priorities can start to feel like an M. C. Escher painting.
Online stores directly drive revenue through their site, making conversion rates a top priority. That often leads to the need for realtime tracking and user-specific product recommendations. Cool! We know we'll need a backend feeding the frontend realtime data.
What happens, though, when your marketing campaign goes viral and site traffic takes off like a rocket? Wait a second, so is this when we're supposed to reach for cloud-based autoscaling? Or a serverless solution?
Static sites aren't a one size fits all solution, but they have some very real benefits especially at smaller scales. It seems crazy to use a static site for an online store though...right?
I'd argue that a vast majority of online shops could see huge benefits if they switched to a static setup. Their store is always available, fast, and easily indexed by Google. Most importantly, if their next Instagram post goes viral the rush of shoppers won't take down their site and throw away all those potential sales.
We have used Snipcart on quite a few projects over the years, including Kamfly, and have been extremely happy with the results.
Snipcart is a drop-in shopping cart solution complete with support for subscriptions and digital goods. Their admin dashboard gives you easy access to customer and product analytics, inventory management, abandoned cart re-engagement campaigns, and even supports multiple currencies and tax solutions.
Even better, Snipcart discovers your product details right from your HTML. You include data- attributes defining things like a product's price and options and Snipcart handles the rest. And don't worry, Snipcart verifies all product details before accepting an order to make sure there wasn't any funny business.
Enough already, let's get to the solution! Fair warning, this demo is a fairly direct port of Kamfly's demo. That project was originally written in 11ty, another static site generator that provides an excellent developer experience for fans of JavaScript.
The UI design of the demo menu may look familiar to a few of you reading this. My first job out of college was as a Software Development Engineer in Test (SDET) at Microsoft. I worked on the UI team responsible for Windows Phone 7-10 and was always partial to the subway-inspired design!
I didn't want to stray too far from the original 11ty project, and that meant storing all the menu data in local JSON files.
You can see the full details in GitHub, but the basic solution here takes advantage of glob imports.
import.meta.glob('../data/menus/*.json')
The code above is a bit of import magic that allows you to use a glob pattern to load multiple files at once.
With import.meta.glob the result is an object mapping from filename to an async function that loads the file. In case you want to load all the data immediately, import.meta.globEager will be your best friend.
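Here's a rough sketch of the difference between the two, using this demo's menu data:

// import.meta.glob is lazy - each value is an async loader function
const lazyMenus = import.meta.glob('../data/menus/*.json')
for (const path in lazyMenus) {
  const mod = await lazyMenus[path]() // load the file on demand
  console.log(path, mod.default)
}

// import.meta.globEager loads every matching file up front
const menus = import.meta.globEager('../data/menus/*.json')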
Snipcart makes this extremely simple.
<button class="header__cartbtn snipcart-checkout">
<img class="header__cart" src="/icons/cart.svg" alt="open cart" />
<span class="snipcart-items-count header__cartcount">0</span>
</button>
That's all there is to it! Snipcart checks your page for specific classes. In this case snipcart-checkout tells Snipcart this is the button that should open the shopping cart when clicked. The .snipcart-items-count span is another special class telling Snipcart to update this span with the current number of items in the cart.
Snipcart looks for another special class to discover products on your site, snipcart-add-item.
<button
class="menuitem__cartbtn snipcart-add-item"
data-item-id={item.slug}
data-item-name={item.display_title || item.title}
data-item-price={item.price}
data-item-url={url}
data-item-image={item.image}
{...modifiersMap}></button>
Most of this should be pretty straightforward, we're adding data-item properties to tell Snipcart what the product is. But what about that modifiersMap thing?
Snipcart allows you to add custom options to products, think dropdowns for size: Medium or color: Gray. To do this their API supports custom fields.
The string formatting for custom fields can be a bit confusing at first glance, but it's hugely powerful. Not only can you tell Snipcart what options are available, you can even define extra charges for specific options.
<button
...
data-item-custom1-name="Frame color"
data-item-custom1-options="Black|Brown[+100.00]|Gold[+300.00]"
>
...
</button>
In this case, Snipcart would give the user a dropdown option for "Frame color". The color black is free, upgrading to brown adds $100 to the product's price and upgrading to gold adds $300.
This one was a bit tricky to figure out at first, hopefully I'll help save you some time here.
I realized pretty quickly that the {...spread} operator was going to be my friend. Because Astro allows JavaScript (and TypeScript!) inside the template's --- fence, it really is just a matter of massaging the product options into an object.
const modifiersMap = {
"data-item-custom1-name": "Sauce",
"data-item-custom1-options": "None|BBQ|Buffalo",
"data-item-custom2-name": "Side",
"data-item-custom2-options": "Fries|Onion Rings[+1.50]"
}
I'll leave it to you to check the full implementation in GitHub, but once we have the map of option properties it really is as simple as {...spread}ing those properties onto the <button> in Astro.
A huge thanks is in order to both the Snipcart and Astro teams. I had the easy job of tying together two excellent projects to build a completely static restaurant ordering site!
]]>Our go-to content solutions:@NetlifyCMS for internal projects (when we just need it to work)@forestryio for client projects (when a clean, user friendly UI is a must)@fauna (when a file based CMS just won't cut it)
— Navillus (@navillus_dev) June 20, 2021
What tools are you always reaching for?
Every CMS has its benefits and best uses, but it's rare that we have to reach outside these three tools. We've already tried out Netlify CMS with Astro and, not surprisingly given how closely the Astro static site generator sticks to plain old HTML/CSS/JS, it was a breeze to get all set up. This begs the question, how does Forestry hold up to this brand new framework?
tl;dr; Check out the live demo, GitHub repo, or jump straight to the diff comparing the demo to the original Astro blog example.
Forestry CMS sits in a very special niche - a git-based CMS that is designed with non-technical users in mind. Being git-based, it allows for an extremely simple developer experience.
If you're working on a Jamstack project with a statically built front end and expect most of the content updates to be done via the CMS rather than markdown directly, I can't recommend Forestry enough.
Before you ask - no, this isn't a paid post. Navillus isn't affiliated with Forestry, we're just big fans of the product.
I started this demo with Astro's blog example. Let's break down the changes that were required to set up the CMS.
/public/uploads
This one is more personal preference than anything else. Many CMS tools drop all your image assets into the same location and let you visually pick images from a gallery.
The only time I've really found it useful to separate image assets into different directories was to transform specific images depending on their use, and we won't be bothering with image optimization for this demo.
Note that this also requires updating any image references from /images to /uploads. In our case, that meant updating /src/pages/about.astro and each blog post in /src/pages/posts.
The example repo includes a /src/data/authors.json file which is a basic JSON object/map of the two demo authors. This structure doesn't make as much sense for a file-based CMS.
Instead, let's store each author in a separate markdown file in the /src/data/authors directory. Later we can point Forestry to that folder, define the property types available for each author, and allow CMS users to create new authors without touching JSON.
While you're here, make sure each author's image property is pointing to the /uploads directory instead of /images.
This really was the only tricky bit to work out.
import authorData from '../data/authors.json'
A few different pages and templates need to load the author data, and they all expected to find a JSON map. Now that each author has a separate markdown file we need to fix how that data is loaded.
let allAuthors = Astro.fetchContent('../data/authors/*.md')
let authorData = allAuthors.reduce((acc, next) => {
return {
...acc,
[`src/data/authors/${next.slug}.md`]: next,
}
}, {})
What's with the src/data/authors/${next.slug}.md code? We'll be setting up Forestry soon, but one thing to note now is how Forestry handles content relationships by default.
By the end of this post you'll be able to create authors in the CMS and link an author to each blog post. That's right, this git-based CMS handles relational data out of the box! (Sorry, no table joins, it's just a git repo!)
Forestry references other CMS objects by absolute path, based on the project's root directory. In the case of authorData above, we're mapping from each author's filepath to the author's data. There are other ways you could manage the data here, but for the demo this is easier since you won't need to update the page templates otherwise.
This is the easy, and fun, part! Once you have the project building locally and pushed to GitHub, head over to Forestry and follow their importing walkthrough.
Forestry's repo onboarding process is really impressive. Once linked to your repo, Forestry walks you through the steps of defining your data types (in our case just Author and Post).
I'll leave it to you to play around with Forestry's content model settings. Take a few minutes to poke around in the different options, especially for defining different property types and data validation.
Bonus points: Forestry offers live previews that run your actual dev server and give previews of content updates before publishing. I've had luck running Astro with node v14, and out of the box Forestry only offers v12. They support custom docker images from DockerHub though, so go nuts and set up your own preview server!
Keep an eye out for a part two of this Forestry demo! I want to revisit the best way to edit the home and about pages, or create new pages for that matter, right in the CMS.
And yes, if the tweet at the top of this post didn't give it away, we'll be posting an Astro + FaunaDB demo soon enough!
]]>Most developers coming from React have probably tried adding a className property to their first Svelte component.
Don't get me wrong, the power of dynamically adding classes to a component can save your butt. The term className is a dodge in the JSX world though since class is a reserved word in JavaScript. Here's a quick trick to make your Svelte components feel even more like plain old HTML with a proper class property!
tl;dr; Check out the REPL example to see a working example.
With Svelte, you may think it's as simple as
<script>
export let class = "";
</script>
<div {class}>...</div>
Unfortunately you run smack into the same reserved word problem - Svelte treats the script as regular JavaScript (or TypeScript) and won't allow a variable named class.
If you're like me you probably hit this once, banged your head against the nearest wall a couple times, then moved on to the obvious fix of renaming the prop to className.
<script>
export let className = ''
</script>
<div class="{className}">...</div>
Writing Svelte is so close to HTML that you can almost forget there's a framework at all...until you come across something like <Button className="send-btn"/>. Those four extra characters stick out like a sore thumb, flashing a big neon sign to remind you this isn't actually HTML.
<script>
let className = ''
export { className as class }
</script>
<div class="{className}">...</div>
That's all there is to it! Internally the property is named className, avoiding JavaScript's reserved word issue. Externally, though, the component has an optional property named class.
<Button class="send-btn" />
Doesn't that just feel right? No more JSX-like className property screaming "is this React!?!"
React is a very powerful framework and web development wouldn't be where it is today without it. But when it comes to workarounds like renaming class to className, or camel casing CSS properties for that matter, it can feel freeing when you can move past those quirks and get back to the basics.
git.
tl;dr; Check the live demo or dive right into the code on Github.
We're huge fans of the git-based CMS idea at Navillus. When the entire site is built to be deployed as a static site it really doesn't make much sense to need to pull from a remote server to load content, and you're already working in git!
Our go-to content solutions:@NetlifyCMS for internal projects (when we just need it to work)@forestryio for client projects (when a clean, user friendly UI is a must)@fauna (when a file based CMS just won't cut it)
— Navillus (@navillus_dev) June 20, 2021
What tools are you always reaching for?
For those that follow us on Twitter, today's blog post won't be much of a surprise. Netlify CMS is the first content management tool we reach for on side projects and internal tools. It's dead simple to set up, and it deploys alongside our site as static HTML. And before you ask, yes we are loading the admin panel's JS bundle from a CDN, but you can actually install that via NPM if you prefer.
This demo is based on the excellent eleventy-netlify-boilerplate demo. If you're interested in 11ty as well, I strongly recommend you take a look at that repo to learn best practices when setting up an 11ty project!
Our goal today is to highlight the Astro-specific details when integrating with Netlify CMS, so I won't be diving too far into the initial setup. Check out Netlify CMS's excellent docs for adding the CMS to your own site for a quick rundown.
For this demo, I decided to load the netlify-cms library from CDN, but as mentioned in the docs, you can install from NPM instead. In that case, Snowpack will handle bundling the JS in production builds.
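Going the NPM route instead would look roughly like this, per the Netlify CMS docs (note the package for manual initialization is netlify-cms-app):

// /admin/index.js - bundled with the rest of your JS
import CMS from 'netlify-cms-app'

// the CDN script auto-initializes; from NPM you kick it off yourself
CMS.init()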
When including /admin/index.html and /admin/config.yml, you can simply copy those files from the docs to your Astro project's /public folder. Astro includes everything in the /public directory as static assets, for example your /public/admin/index.html file will be available when navigating to yoursite.com/admin.
First things first, let's set up support for blog posts.
Once you have the CMS and Netlify Identity all setup, it's time to start adding content. If you take a look at our demo repo, you'll see that all of the blog posts are saved to the /src/pages/posts directory.
For Netlify CMS, the key is to make sure that your config.yml is pointing to the correct folder.
collections:
# Our blog posts
- name: 'blog' # Used in routes, e.g., /admin/collections/blog
label: 'Post' # Used in the UI
folder: 'src/pages/posts' # The path to the folder where the documents are stored
create: true # Allow users to create new documents in this collection
slug: '{{slug}}' # Filename template, e.g., YYYY-MM-DD-title.md
fields: # The fields for each document, usually in front matter
- { label: 'Title', name: 'title', widget: 'string' }
- { label: 'Publish Date', name: 'date', widget: 'datetime' }
- {
label: 'Author',
name: 'author',
widget: 'string',
default: 'Anonymous',
}
- { label: 'Summary', name: 'summary', widget: 'text' }
- { label: 'Tags', name: 'tags', widget: 'list', default: ['post'] }
- { label: 'Body', name: 'body', widget: 'markdown' }
In this excerpt from the demo's config.yml, note that folder is pointing to the correct directory.
Loading local data is handled with the Astro.fetchContent API.
export let collection: any
export async function createCollection() {
return {
/** Load posts, sort newest -> oldest */
async data() {
const allPosts = Astro.fetchContent('./posts/*.md')
return allPosts.sort((a, b) => new Date(b.date) - new Date(a.date))
},
/** Set page size */
pageSize: 10,
}
}
That's really all there is to it! The fetchContent API takes care of loading all matching markdown files. I left out RSS feed support here for brevity, but you can find that in the demo repo here.
<Layout {title} {description}>
<h1>{title}</h1>
{collection.data.map(post => <PostPreview post={post} />)}
</Layout>
Here the $blog.astro template is taking the data loaded above and rendering a list of post previews. If you have experience with React (or JSX) this will feel very familiar. Brackets {} are used to escape plain old JS into the template, mapping over the posts loaded in data() and passing the data off to the PostPreview component.
Take a look at one of the sample blog posts in the demo repo. It defines the template used to render a blog post in frontmatter, just like you may have used in 11ty, Jekyll, or really any other static site generator out there.
Astro is still in beta and one of the big updates coming down the pipe is an update to dynamic routing. We'll skip past the routing setup for now as that may very well change in the near future, but feel free to poke around in the demo repo or ask us questions on Twitter!
I won't go into detail here on how the /author/:id or /tags/:tag routes work for now, but keep an eye out for a follow-up blog post once the routing APIs are finalized!
Frontend frameworks have taken over much of the web, but the question remains - do we really need all that JavaScript in the browser?
Client-side rendering can be an extremely powerful tool, and in some cases can solve problems that are nearly impossible to solve strictly with server rendering. A vast majority of the web is pretty simple though - mostly static content with a bit of interactivity sprinkled in.
The Navillus site is a perfect example. It has a mobile menu taking advantage of Svelte's excellent built-in transitions, and we recently added a dark mode toggle. That's it for interactions on our site though, the rest is entirely static.
Here's a screenshot of devtools with our homepage loaded. A grand total of ~78KB transferred, including all the HTML, CSS, JavaScript, images, and fonts. Don't get me wrong, there's more to web design and development than just minimizing bundle size, but performance matters and small wins can add up quickly.
When designing our site we had performance in mind from the beginning. You may notice that we purposely don't have any images on our homepage, and the only icons used are actually taking advantage of the SVG sprites technique to load all the icons in a single file.
As of last week, though, we officially swapped our site over to Astro. Why go through all that effort to trust our site with an early beta of a totally new project? Well, why not!
We didn't actually plan to go live with an Astro version of our site, expecting to kick the tires and keep an eye on the project. The build was surprisingly straightforward though, and the results don't lie.
The main bit of JavaScript on our site is the mobile menu - we hadn't actually added the dark theme toggle yet when first starting the Astro project. We already posted about how we built our progressively enhanced menu, but the key is that we wanted to use Svelte to easily work with or without JavaScript. If the JS loads, we pull in Svelte's slide transition to nicely animate the menu in and out.
With Astro, we can build 98% of our site with little more than HTML templates and markdown while adding in a tiny amount of JavaScript to improve the experience. No client side routing, no frontend framework dynamically rendering the page, no hydration issues or content flickers.
Note: Astro isn't opinionated about what component library you prefer. They shipped with Vue, React, Svelte, and Preact out of the box and more are being added as you read this.
One of the most interesting ideas about Astro is partial hydration. By default a component's JavaScript won't even load in the browser, it will only be used on the server. That's not always enough though, and Astro gives you a few options for how to load it on the client.
<Component:load />
<Component:idle />
<Component:visible />
Say it's a component that really needs to be available as soon as possible, just tack :load onto the component and it will be loaded on the client when the page loads.
Maybe the component is common but not needed immediately, adding :idle instead will tell Astro to wait for the browser to initially load the page and go idle before pulling down the component, boosting performance on initial load.
They have you covered for components further down the page too, :visible will take advantage of the IntersectionObserver API to only load the component if a user scrolls down to it. This might be useful for a contact form at the bottom of a marketing page where you need custom JS validation logic but don't want to risk any performance hiccups when every conversion matters.
<Component:media />
This is only a proposed feature still open on Github and the syntax may very likely change, but the concept here will be a huge win.
Take our mobile menu again. We're using :idle to allow our whole page to load before enhancing the menu, but on desktop that menu is never even used. With :media or similar we could make sure that the menu's JS is only loaded on mobile viewports - why even bother loading it otherwise?
Keep your eye on Astro, follow their Github repo (or star it if you prefer), and check out the community Discord linked on their homepage to give feedback and share your ideas for the future of Astro!
]]>Sooner or later, every Svelte project has a component that needs to work directly with a raw HTMLElement.
This can be done in a one-off way with the bind:this approach to getting a local reference to a DOM element, but there's a better way. The use: directive allows for reusable logic to be pulled out of the component itself.
Let's take a look at a really common use for this - listening for clicks/taps outside an area of the UI. I used this recently for Kamfly's modals, instead of adding a one-off click listener in the Modal component I created a clickOutside action that can be reused elsewhere. This can also help make testing a whole lot easier, testing the async nature of a full blown Modal component can be tricky, testing just the action itself is much less complicated!
tl;dr; Check out the final example in this Live REPL demo.
Svelte's official tutorial has an excellent walk through of what actions are, but basically you can think of actions as lifecycle hooks for DOM elements. An action works much like onMount or onDestroy, but is tied to the DOM element itself rather than an entire Svelte component.
It's easier to reach for bind:this when the component you're working on needs to interact with a DOM element. I won't jump into the DRY design principles debate, instead just look at the testing challenges.
Many projects end up needing a modal at some point, and they're almost always ignored in automated testing. When the component itself is responsible for hiding/showing the modal, managing a stack of multiple open modals, etc. the logic gets tricky fast.
Testing asynchronous behavior like animations or modals hiding/showing based on user input can be a chore. What if you could move some of that logic out to a reusable function?
action = (node: HTMLElement, parameters: any) => {
update?: (parameters: any) => void,
destroy?: () => void
}
That's really all there is to it! An action in Svelte takes in the DOM element and (optionally) user defined parameters. The action function can run initialization logic on the element before returning update and destroy handlers (both optional).
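For a concrete (if contrived) example, here's about the smallest useful action I can think of - not from the clickOutside demo, just to ground the signature:

// focus an element as soon as it's added to the DOM
export function autofocus(node) {
  node.focus()

  return {
    destroy() {
      // nothing to clean up here, but listeners would be removed in this hook
    },
  }
}

// used in a component as <input use:autofocus />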
use:clickOutside
Let's look at a really common UX pattern - a modal is open on screen and you want it to hide when the user clicks/taps outside the modal (often on some kind of grayed out background).
This isn't a big deal to do in the Modal component itself - add an extra DOM element for the grayed out background and add a click handler to it. But what if you want a similar interaction for your mobile menu? That component is almost certainly different, if you copy the code over now you have duplicated JS in your site and you need complicated test coverage testing the close functionality for both components.
export default (node, _options = {}) => {
const options = { onClickOutside: () => {}, ..._options }
function detect({ target }) {
if (!node.contains(target)) {
options.onClickOutside()
}
}
document.addEventListener('click', detect, { passive: true, capture: true })
return {
destroy() {
document.removeEventListener('click', detect, { capture: true })
},
}
}
There's a few things going on here, let's break that down.
First, the action is taking in optional parameters (_options). The action expects options.onClickOutside to be a callback function, if one wasn't provided it's defaulted to a noop function.
The detect function does the real work of checking to see if a click event on the page was inside the original DOM element. The event listener is using passive to avoid scrolling performance issues, and capture so the listener runs during the capture phase and sees every click, even if another handler stops the event from bubbling.
Finally, the action returns a destroy callback that will clean up the event listener when the element is being removed from the DOM.
export default (node, _options = {}) => {
const options = { include: [], onClickOutside: () => {}, ..._options }
function detect({ target }) {
if (
!node.contains(target) ||
options.include.some(i => target.isSameNode(i))
) {
options.onClickOutside()
}
}
/** Same as above */
}
A little extra functionality here allows components to pass in other DOM elements that should also trigger the callback even though they're technically "inside" the action's element - think a close button inside the modal itself.
UI testing will be a topic for another day, but you can see how much easier this will be to test. Instead of having to write tests that can reliably wait for a Modal component to animate into view before testing the click away scenario, tests could work more closely to the REPL demo (below). DOM elements for a test wouldn't need to animate at all, they just need to accept a click event so the test can count how many times the onClickOutside handler was called!
That's all, folks! Check out the Live REPL demo for a working example.
]]>Yes, it's another CSS framework. No, we don't want to see it being used for the next 10 years. If we have our way, chisel.css will make clear just how easy it could be to have modern designs with out-of-the-box HTML elements.
We've included the most common browser resets, similar to normalize.css or sanitize.css, along with modern styling based on CSS custom properties.
Ask any woodworker that's dabbled in hand tools and fine joinery what tool they'd keep if they could have only one, it'd be the chisel. Most people never even think about it, but when it comes down to it almost every tool used on wood is basically just a chisel anyway.
Take a close look at a saw (be careful!). See all those tiny chisels we call saw teeth? Now look at a drill bit - yep, it's pretty much a chisel blade twisted around a stick.
We realized we kept starting with the same basic CSS resets and element styles for virtually every Navillus project, after pasting it a half dozen times we figured it was time to standardize. The project quickly took aim at building one CSS framework that can be a complete reset for both browser vendor issues and horribly styled HTML elements.
Change a few variables here and there, but the goal is for chisel to help every site look good immediately, ready for you to throw in your custom CSS (or to wrap a chisel around a stick if you still can't find that damn 3/8" drill bit).
Check out the docs for full details, but it really is as simple as
npm i -s chisel.css
or from a CDN
<link href="https://unpkg.com/chisel.css" rel="stylesheet" />
Minified and compressed the entire bundle is ~3.2KB, with full support for dark mode and basic app theming.
We recently updated navillus.dev to use chisel as a basis for styling, you can see the complete pull request on GitHub. I'm always a fan of seeing a pull request full of red, gotta love code cleanup!
Granted we're a bit biased here, most of the common CSS we copy into every Navillus site made its way into chisel. The power of CSS variables lets you make some pretty drastic changes though, with very little effort.
Want a different primary color? No problem, change --chisel-primary and you're all set. Prefer a different type scale? That's simple too, check the docs for our variables like --chisel-h1 or --chisel-p.
Dark theme support is one that gave us more headaches than expected. We wanted chisel to support both the browser's native prefers-color-scheme as well as a fallback option. We landed on a custom HTML data property, data-chisel-theme. In the future we plan to ship predefined color palettes that will also take advantage of the HTML dataset property.
So what went wrong? If you take a look at v0.4.0, we added custom CSS properties for component-level styling similar to --chisel-button-bg. This worked great for making it simple to custom style buttons, say if you want a button variant like .button-hollow. Each component property defaulted to one of the main color palette styles, a la --chisel-button-bg: var(--chisel-primary).
At least on some browsers, CSS variable scope isn't always what you'd expect.
[data-chisel-theme='dark'] {
--chisel-primary: #002244;
}
You'd think this would also update any CSS properties referencing this variable. Nope! If --chisel-button-bg is only defined on the :root scope it isn't necessarily going to be updated to the new color code. This seems like a bug to me, and I have it on my list to dig through the spec to see if that's actually expected behavior, but in the meantime we will avoid it.
Yes, we could use component variables if we redefined --chisel-button-bg in the dark theme selector as well. That'd lead to way too much bloat and extra CSS though, it wasn't worth the extra KB.
Chisel is still very much a work in progress. Head over to the GitHub repo to file bugs, request new features, and star the project to follow the latest updates!
]]>Structured data allows you to give search bots scrubbing your page a helping hand by adding metadata that describes your business, product, or even your blog posts.
JSON+LD is a lightweight specification for including structured data as a simple JSON object, something we're all comfortable with in the frontend world.
tl;dr; Check out this Svelte REPL example for a working example in JS. Keep reading to get full TypeScript validation for your Schema objects!
Let's take a look at a basic example first. I pulled this straight from the JSON-LD Playground - it's an excellent reference for quickly messing around with the different schema and data types.
{
"@context": "http://schema.org/",
"@type": "Person",
"name": "Jane Doe",
"jobTitle": "Professor",
"telephone": "(425) 123-4567",
"url": "http://www.janedoe.com"
}
Nothing crazy going on there (yet). When a search bot comes by to index a page with this JSON, it's guaranteed to find Professor Jane Doe's contact information. This contact information is probably included elsewhere on the page, say in a sidebar or page footer, but chances are the way the HTML is setup could prevent the search from connecting the link from a phone number to the person's name.
This is where JSON+LD really stands out compared to other structured data formats that require special HTML attributes.
<script type="application/ld+json">
{
"@context": "http://schema.org/",
"@type": "Person",
"name": "Jane Doe",
"jobTitle": "Professor",
"telephone": "(425) 123-4567",
"url": "http://www.janedoe.com"
}
</script>
Yep, that's really all there is to it. Wrap the JSON object in a script tag, mark it as a type of application/ld+json and you're all set. There's way more to the different kinds of schemas you may want to include - check out the JSON-LD docs for a much better walk through of all the details than I would ever fit into a single blog post.
The beauty of Svelte is how closely it follows standard web technologies. Components are written in a single file with a script tag for all the JavaScript, a style tag for the component's CSS, and one or more HTML elements for the actual DOM elements.
While that makes authoring Svelte components a nearly frictionless experience for web developers, it can throw a wrench in the JSON+LD department. Tooling built for Svelte like the VS Code plugin, and even the Svelte compiler itself, can cause headaches if you try to write an application/ld+json script in your components.
This may very well get cleaned up in the future, but for now you'll likely run into warnings and errors related to having more than one script tag in a component. A Google search will lead you down a rabbit hole of different combinations of Svelte's @html expressions and string literals to get it to compile.
There's a much better way to do this though, and even better it can be combined with TypeScript to add extra type validations for your JSON schema objects.
The first part of the magic trick is getting around issues related to parsing extra script tags in your Svelte component. This is where many will reach for some combination of {@html} and string literals.
I prefer to move this out of the Svelte component all together. With the logic in a plain old JavaScript file it can be unit tested and will never run into issues with tools trying to parse .svelte files.
export function serializeSchema(thing) {
return `<script type="application/ld+json">${JSON.stringify(thing)}</script>`
}
There's not much magic going on here, the function just takes in a JavaScript object and spits out the ld+json script tag for it.
We'll lean on the schema-dts package for full Schema.org type definitions.
import type { Thing, WithContext } from 'schema-dts'
export type Schema = Thing | WithContext<Thing>
export function serializeSchema(thing: Schema) {
return `<script type="application/ld+json">${JSON.stringify(
thing,
null,
2
)}</script>`
}
schema-dts defines types for all of the different schema objects, see below for a more detailed example with an Organization object. This is a huge win, it's easy to accidentally structure the JSON wrong or have a typo in one of the property names. Setting it up to use TypeScript definitions we can make sure that our JSON objects are validated at build.
Here's the Organization object used by this very site.
import type { Organization, WithContext } from 'schema-dts'
import site from '$data/site.json'
export const organizationSchema: WithContext<Organization> = {
"@context": "https://schema.org",
"@type": "Organization",
"@id": `${site.url}#organization`,
url: site.url,
name: site.company.name,
description: site.description,
sameAs: [`https://twitter.com/${site.social.twitter}`],
logo: `${site.url}/favicon.svg`,
}
Data specific to our site is pulled in from a local JSON file. This could be data exposed through a Git-based CMS like Forestry or pulled down from a headless CMS like Sanity. The important thing here is that our Organization object is defined in TypeScript, verified at build-time, and can be unit tested if you want to make sure the site.json data is hooked up properly.
What is WithContext doing? That's a clever setup from schema-dts, the top-level JSON+LD object should have a @context property. You can nest objects though, and they even have support for using an object graph format for multiple objects. Any nested objects, or every object in the graph, doesn't need @context. WithContext is a TypeScript wrapper for another type, in the example above I could have removed @context from the object and it would be a valid Organization type.
Finally, let's add the JSON+LD into the DOM. Most of our projects end up with an LDTag.svelte component similar to this:
<script lang="ts">
import { serializeSchema } from '$utils/json-ld'
import type { Schema } from '$utils/json-ld'
export let schema: Schema
</script>
<svelte:head>
{@html serializeSchema(schema)}
</svelte:head>
Multiple Svelte components on a page can inject their own application/ld+json script into the document's head. This works great with SvelteKit too - a layout or route component could inject page-level schemas.
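Using the component is then a one-liner wherever a schema is needed - a quick sketch, with illustrative import paths:
<script lang="ts">
  import LDTag from '$components/LDTag.svelte'
  import { organizationSchema } from '$utils/schemas'
</script>

<LDTag schema={organizationSchema} />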
Take a look at our Svelte REPL example to see a full working JavaScript example.
]]>Every web developer should try using a screen reader at some point. If you've ever tried tabbing through a website you'll get the general idea: the screen reader (or your keyboard's focus) moves one by one through every element on the page. We can modify that behavior with ARIA attributes, tabindex, and certain CSS properties, but one not-so-obvious solution is to actually hide links on the page that only show up once they are given focus by the keyboard or screen reader.
Curious what that looks like? Go over to GitHub's main site and logout if you are currently signed in (or use a private tab). Click in the address bar to put focus just above the page's content, then start tabbing down. You'll likely tab through some of the browser's chrome first, but as soon as focus gets to the site's content you'll see a big blue "Skip to content" button appear in the top left corner.
Even on a small site the navigation header likely has 6 to 10 interactive elements (brand logos, page links, login/logout buttons). Visually that isn't a big deal, navigation headers are almost always designed to pack all that content into a relatively small portion of the screen. Functionally, though, that's 6 to 10 different elements the user would need to walk through before even hearing the main title in the hero section. That's a huge barrier to entry!
Back to the GitHub example: when "Skip to content" has focus and is visible on screen you can still tab past it and walk through every link in their navigation. The real trick is what happens if you hit Enter on that hidden link - the window's focus skips past the navigation header and puts focus into the hero section. In the case of GitHub, after you click "Skip to content" the next Tab will focus on the email signup form rather than the logo in their navigation header.
The trick here requires two elements on the page. The very first focusable element on the page will be a new <a> that is the hidden link itself. We want this to work whether JavaScript is enabled or not, so let's make sure to progressively enhance this feature. Actually implementing this hidden button can be a bit confusing at first. Let's take it one step at a time.
<body>
<a href="#start-of-content" class="sr-only sr-only-focusable">
Skip to content
</a>
<header>
<!-- Your header content goes here -->
</header>
<div id="start-of-content" class="sr-only"></div>
<section>
<!-- Your awesome hero section -->
</section>
</body>
Nothing too crazy here. Your exact implementation might look different, but the key is adding a new anchor tag at the top of the page and a new div immediately after the header. The <a> doesn't necessarily have to be the first child of the page's body, but it's important that it is the first focusable element in the DOM.
.sr-only {
border: 0;
padding: 0;
margin: 0;
position: absolute !important;
height: 1px;
width: 1px;
overflow: hidden;
/* IE6, IE7 - a 0 height clip, off to the bottom right of the visible 1px box */
clip: rect(1px 1px 1px 1px);
/* maybe deprecated but we need to support legacy browsers */
clip: rect(1px, 1px, 1px, 1px);
/* modern browsers, clip-path works inwards from each corner */
clip-path: inset(50%);
/* added line to stop words getting smushed together (as they go onto separate lines and some screen readers do not understand line feeds as a space) */
white-space: nowrap;
}
.sr-only.sr-only-focusable:focus {
width: auto;
height: auto;
padding: 0;
margin: 0;
overflow: visible;
clip: auto;
clip-path: initial;
white-space: normal;
}
Update: the .sr-only class was updated based on some excellent feedback from Inclusivity Hub on dev.to
You have likely seen sr-only helper classes elsewhere - most CSS frameworks like Bootstrap and Tailwind include them out of the box. If you haven't seen sr-only-focusable before, that's just an extra helper class that allows the element to be visible only when it has :focus.
Update: Looking for a no-JS solution? Skip the JS logic and point your "Skip to Content" link straight to your <main id="main"> block!
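Roughly, that no-JS variant looks like this (a sketch assuming your content lives in a <main> element):
<body>
  <a href="#main" class="sr-only sr-only-focusable">Skip to content</a>
  <header>
    <!-- navigation links -->
  </header>
  <main id="main">
    <!-- page content -->
  </main>
</body>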
This feature already works as-is, no JavaScript at all. There is a catch though: clicking anchor tags with href="#start-of-content" will update the URL in the address bar. That's not always ideal and could even break your page if you are using hash routing.
function onSkipToContent(event) {
// Stop the event's default behavior
// In this case, don't let it actually change the page's URL
event.preventDefault()
// Find the hidden target div
const target = document.getElementById('start-of-content')
if (!target) {
return
}
// Find the next element in the DOM
const content = target.nextElementSibling
if (content instanceof HTMLElement) {
// Make sure the content div can't be tabbed to again, then give it focus
content.setAttribute('tabindex', '-1')
content.focus()
}
}
// Find the hidden "Skip to content" link and hook up the click event
const link = document.querySelector('a[href="#start-of-content"]')
if (link) {
  link.addEventListener('click', onSkipToContent)
}
This will look a little different depending on how your frontend JS is set up and whether you are using a framework like React or Svelte. The concept should be easy to transfer though - you just want to take over the <a> element's click event and manually send the window's focus down the page.
Why bother with nextElementSibling? Well, it may not be necessary, but we added that div as a bookmark and don't want it to actually take focus in the window. Instead, we want to find that bookmark and give focus to whatever comes next in the DOM.
Adding two elements and a bit of CSS gives us a fully functional "Skip to content" link for keyboard and screen reader users. Another dozen or so lines of JavaScript and we progressively enhanced the link to work as expected with any JavaScript framework or routing scheme the site might be using.
Stay tuned for more accessibility posts in the future!
]]>My trusty Kobo Aura H20 has a browser in it, but JavaScript is disabled and frankly there's no way the hardware could handle modern JS-heavy sites. Ride a subway in New York and you'll quickly realize how spotty your cell signal can be, and how broken a site will be if the markup loaded but the 1.3MB of JavaScript didn't finish before your connection dropped out.
tl;dr; Click here for a working example in the Svelte REPL. Or try out the menu on this site - disable your JavaScript and shrink the window down to get the mobile view, and the menu still works!
The hamburger menu is still king of mobile navigation designs, for better or worse. With a hidden menu, the last thing you want is for a visitor to not even be able to move around your site because they chose to disable JS. Even worse, your site might hit an unhandled exception that breaks your JavaScript entirely.
The menu is almost certainly going to be interacted with, making design and animations a high priority there. The key is to build your menu so that it works with HTML and CSS only. Your JavaScript should enhance the menu with things like notification badges or animations that can't be done well in CSS.
We'll take a look at how this can be done in Svelte, but the same concepts can be applied to any framework.
Go ahead and try a working example right here - this site's menu works similarly. Once scripts are loaded the menu takes advantage of Svelte's built-in slide transition. Disable JS in your browser though and our menu still works - it doesn't animate in, but visitors can open it and navigate around no problem.
<script>
import { slide } from 'svelte/transition'
let menuOpen = false
</script>
<header>
<div class="header__top">
<h1>Navillus</h1>
<button on:click={() => menuOpen = !menuOpen}>
☰
</button>
</div>
{#if menuOpen}
<nav transition:slide>
<!-- NAV LINKS GO HERE -->
</nav>
</header>
Nothing too crazy going on here yet. If you aren't familiar with Svelte, transition:slide adds an entry and exit animation to the <nav> element.
HTML checkboxes have a :checked pseudo-class - let's take advantage of that to show/hide our menu from CSS.
<header>
<div class="header__top">
<h1>Navillus</h1>
<label for="toggle">☰</label>
</div>
<input id="toggle" type="checkbox" class="sr-only"/>
<nav>
<!-- NAV LINKS GO HERE -->
</nav>
</header>
<style>
#toggle ~ nav {
display: none;
}
#toggle:checked ~ nav {
display: block;
}
</style>
What's going on there exactly? Instead of a <button> there's a <label> tied to the checkbox. In CSS, the nav element is hidden by default and only shown when the checkbox is :checked.
<script>
import { onMount } from 'svelte'
import { slide } from 'svelte/transition'
let menuOpen = true
let mounted = false
onMount(() => {
menuOpen = false
mounted = true
})
</script>
<header>
<div class="header__top">
<h1>Navillus</h1>
<label for="toggle">☰</label>
</div>
<input id="toggle" type="checkbox" class="sr-only" class:js={mounted} bind:checked={menuOpen} />
{#if menuOpen}
<nav>
<!-- NAV LINKS GO HERE -->
</nav>
{/if}
</header>
<style>
#toggle:not(.js) ~ nav {
display: none;
}
#toggle:not(.js):checked ~ nav {
display: block;
}
</style>
Let's break that one down step by step.
First, menuOpen is set to true initially and reset to false in Svelte's onMount lifecycle hook. If onMount is called we have JavaScript support and can enhance the menu; until then we stick to the HTML/CSS approach.
We also added in a second boolean flag for mounted, and with class:js={mounted} we're telling Svelte to add the js class to our checkbox once the component's scripts have mounted.
Finally, the CSS has been updated to change the menu's display only as long as the checkbox doesn't have the js class. That's the real magic: let CSS handle the show/hide functionality until JavaScript has mounted and Svelte's slide transition is ready to animate the menu.
Oops! It's very easy to inadvertently create accessibility bugs. When manually testing I realized the hidden checkbox wasn't binding to our menuOpen state in Svelte. The code block above was updated June 2, 2021 to include bind:checked={menuOpen} to make sure keyboard users can toggle the menu after Svelte hydrates.
Svelte's use:action feature can be used to make this a bit more reusable. Actions deserve their own full blog post, but in short, an action in Svelte is a function you can add in markup that will get called with the HTML node once it is created.
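As a rough sketch of the idea (the working version lives in the REPL example below), an action can add the js class for us - since actions only ever run client-side once the node exists, reaching the function body at all means JavaScript is available:
// a hypothetical enhance action - Svelte calls it with the DOM node on creation
export function enhance(node) {
  // if this runs at all, JavaScript must be enabled
  node.classList.add('js')
}
The checkbox then becomes <input id="toggle" type="checkbox" class="sr-only" use:enhance bind:checked={menuOpen} /> and the mounted flag goes away.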
Check out the REPL example for a working version of a custom enhance action replacing the mounted and onMount() logic above.
]]>Call it Jamstack 2.0. The term "Jamstack" has always been a bit of a marketing success for a general architecture idea, but most developers took it to mean 90's era websites with nothing but static files on a CDN. The latest frontend framework and serverless infrastructure features are combining to shake up the game - Jamstack is growing up.
Early websites really were nothing more than static documents, and browsers were glorified document viewers. JavaScript wasn't a thing and styling support was basic (and vendor specific). Developers wrote CSS and HTML by hand, more often than not banging their head against a wall trying to use nested <table>s to lay out the entire page.
The next big evolution on the web brought us server-side rendering with monolithic backend frameworks built on Ruby and PHP. The ability to serve up content customized to the person viewing it was huge, and the option to programmatically render pages allowed sites to grow much larger than anyone would have done manually writing every page in markup.
Then things really got interesting. The backend became a bottleneck and logic started moving back to the browser with client-side JavaScript. This trend really took off with libraries like Angular, React, and Vue. There was one problem though: returning a mostly empty HTML page with a handful of <script> tags is terrible for search rankings. Until recently, Google's search bots didn't even run client-side JavaScript and would have no idea what was in a purely client-side rendered site.
Jamstack is really a pretty generic term at the end of the day - any site that uses JavaScript, APIs, and Markup fits the definition. Until recently that was usually assumed to mean static sites only: pages built with SSG tools like Jekyll or 11ty, hosted entirely on a CDN. That was more a sign of the tooling limitations at the time though, not a limitation of what Jamstack was functionally meant to be.
How can my site be static when it needs to show user-specific content like order history or messages?
This question inevitably comes up when talking about Jamstack, and rightfully so. What is the use in pre-rendering a page if all the content changes for logged in users? There is an SEO benefit to serving real content to anonymous users, but you definitely don't want your valuable registered users to see the entire page flash and re-render when the pre-built page is replaced with content specific to them.
We're starting to see JavaScript frameworks like Gatsby and SvelteKit give us the option to partially pre-render sites. Maybe you want to pre-render 90% of a page's content but the last 10% needs to be filled in with user-specific content. Maybe you want to pre-render the most popular pages of your site but leave the rarely requested URLs to render when requested.
Combine these framework features with infrastructure like Cloudflare Workers and Netlify Functions and things really get interesting.
Have hundreds or thousands of blog posts on a site? No problem - build the handful of main pages once and delay building each blog post until a visitor requests it. Even better, you can take advantage of your CDN to cache the built blog post so the serverless function doesn't have to fire back up for the next visitor wanting to read that post.
I'm really excited to see how well frontend framework features like incremental builds tie together with building and caching at the edge with serverless functions. It's a tight needle to thread and there's definitely a risk debugging could be a nightmare, but if done well the value to both developers and site visitors could be huge.
Projects like Astro seem to be pushing it even further, taking advantage of native ESM imports to lazy load javascript components on the fly. Astro hasn't even been released yet, so we'll have to wait and see how it works, but the concept of combining the best of static site generators like 11ty with frontend libraries like React and Svelte might just be crazy enough to work!
]]>Check out the repo for details.
This is ultimately just a custom store built on top of svelte/store. Like the rest of Svelte, the built-in stores are excellent building blocks that aim to give you all the tools you need without trying to solve every single scenario out of the box.
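For context, the custom store pattern from the Svelte docs looks roughly like this - not how svelte-entity-store is implemented internally, just the general shape of wrapping a writable and exposing domain-specific methods:
import { writable } from 'svelte/store'

// the general custom-store pattern: hide raw set/update behind
// subscribe plus your own domain methods
function createListStore<T>(initial: T[] = []) {
  const { subscribe, update } = writable<T[]>(initial)
  return {
    subscribe,
    add: (item: T) => update((items) => [...items, item]),
    clear: () => update(() => []),
  }
}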
The goal with svelte-entity-store is to provide a simple, generic solution for storing collections of entity objects. Throwing an array of items into a basic writable store doesn't scale well if you have a lot of items and need to quickly find or update one item in the store.
npm i svelte-entity-store
Check out /examples for a working TodoMVC demo based on SvelteKit. More to come!
<script lang="ts">
import { entityStore } from 'svelte-entity-store'
// Define your entity interface
interface TodoItem {
id: string
description: string
completed: boolean
}
// Write a getter function that returns the ID of an entity (can be inlined in the constructor also)
// Currently number and string values are valid IDs
const getId = (todo: TodoItem) => todo.id
// Initialize the store
// (optional) the constructor accepts an Array as a second param
// ex: if you rehydrate state from localstorage
const store = entityStore<TodoItem>(getId)
// Get a derived store for every active todo
const activeTodos = store.get((todo) => !todo.completed)
// toggle a todo
function toggle(id: string) {
store.update((todo) => ({ ...todo, completed: !todo.completed }), id)
}
// clear completed todos
function clearCompleted() {
store.remove((todo) => todo.completed)
}
</script>
{#each $activeTodos as todo (todo.id) }
<!-- ... render your UI as usual -->
{/each}
Creating an instance of the store is pretty straightforward. Svelte has excellent TypeScript support these days, but it isn't a must. Using svelte-entity-store in plain old JavaScript is very similar - just skip the interface definition and <Type> casting.
import { entityStore } from 'svelte-entity-store'
// Define your entity interface
interface TodoItem {
id: string
description: string
completed: boolean
}
// Write a getter function that returns the ID of an entity (can be inlined in the constructor also)
// Currently number and string values are valid IDs
const getId = (todo: TodoItem) => todo.id
// Initialize the store
// (optional) the constructor accepts an Array as a second param
// ex: if you rehydrate state from localstorage
const store = entityStore<TodoItem>(getId)
Nothing too crazy there so far. For TypeScript we define the model interface - you could also use type if that's your thing. The store also needs to know how to get unique IDs for each item. Right now string and number IDs are supported, but this may be extended later.
No problem! The store accepts an optional second parameter.
const items = [
// ... array of TodoItem's to populate the store with
]
const store = entityStore<TodoItem>(getId, items)
Pass in an array of initial items to avoid having to call store.set(items) immediately. This is particularly handy if you are rehydrating the store from cache or localStorage, similar to the TodoMVC example.
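For example, rehydrating from localStorage might look something like this (the 'todos' key and the stored JSON shape are app-specific assumptions):
// pull any cached todos out of localStorage before creating the store
const cached = localStorage.getItem('todos')
const items: TodoItem[] = cached ? JSON.parse(cached) : []
const store = entityStore<TodoItem>(getId, items)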
The store's get() method returns readable stores to access the entities. get() has multiple overloads to serve different uses like grabbing a single entity, a filtered list of entities, or everything in the store.
Properly overloading methods in TypeScript was an interesting challenge to get autocomplete and similar tooling to work properly. It's too much to go into here but should turn into a blog post of its own soon.
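To give a rough idea of the shape, the declarations look something like this (a simplified sketch, not the actual source):
import type { Readable } from 'svelte/store'

// simplified sketch of get()'s overload signatures on the store instance
interface EntityStore<T> {
  get(): Readable<T[]>
  get(id: string | number): Readable<T | undefined>
  get(ids: Array<string | number>): Readable<T[]>
  get(filter: (entity: T) => boolean): Readable<T[]>
}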
// Get one entity by ID
const item = store.get('abc-123')
// Get a list of entities by ID
const items = store.get([123, 456, 789])
// Get a list of entities that match a filter function
const activeItems = store.get(todo => !todo.completed)
// Or get every entity in the store
const allItems = store.get()
Because the store is returning derived stores, the full power of Svelte's reactivity model just works.
<script lang="ts">
  // Create a computed property based on the get() results
  $: activeItemsCount = $activeItems.length
</script>
<!-- Directly access a single entity -->
<h1>{$item.description}</h1>
<!-- Or loop over multiple entities -->
<ul>
  {#each $allItems as item (item.id)}
    <li class:completed={item.completed}>
      {item.description}
    </li>
  {/each}
</ul>
Much like get(), set() has a few different overloads. Calling set will blow away any old entity state - if you need to keep some of the old entities' state, check out update() instead (below).
// Replace the existing entity with ID 123, or add it if the ID doesn't exist yet
store.set({
id: 123,
description: 'Todo #1',
completed: true,
})
// Or add/replace multiple todos at once
store.set([
// ... multiple todo objects
])
Sometimes you just need to change part of an entity without worrying about the entire object. update() solves this, and it should look very familiar.
function toggleTodo(todo: TodoItem) {
return {
...todo,
completed: !todo.completed,
}
}
// Update a single entity by ID
store.update(toggleTodo, 123)
// In case you already have the entity object and don't want to call getId,
store.update(toggleTodo, todoObj)
// The same goes for lists of IDs or entities
store.update(toggleTodo, [123, 456])
store.update(toggleTodo, $activeTodos)
// What if you want to only update entities that meet a filter condition?
store.update(toggleTodo, todo => todo.completed)
// Go crazy with it and run the update against every entity in the store
store.update(toggleTodo)
Removing will look very similar to update().
// You can remove single entities by ID
store.remove(123)
// By list of IDs
store.remove([123, 456])
// Or remove every item that matches a filter
const isCompleted = (todo) => todo.completed
store.remove(isCompleted)
Writing the examples here for update and remove, I realized there's no reason that remove shouldn't let you pass in entity objects the same way update does. Time for a GitHub issue.
I went a little overboard trying out a few new (to me) CI and testing tools. On the plus side, the v1.0.0 release of the store has 100% test coverage and a working TodoMVC example.
All testing is done with lukeed's excellent uvu test framework. I've mostly reached for Jest the last few years, but I don't think I'll be turning back. uvu was simple to set up and it really does fly compared to other testing frameworks.
Take a peek at some of the svelte-entity-store tests. It was particularly interesting figuring out a clean way to test store subscriptions, i.e. to make sure subscribers get updated state or that subscribers aren't called if an API call didn't actually change the store at all.
I purposely didn't add sorting support for v1.0. It can be done without too much headache...
const allItems = store.get()
$: sortedItems = [...$allItems].sort((a, b) => (a < b ? 1 : -1))
but ideally that's built right into the store itself. There's an issue tracking sorting functionality; the best solution is probably to add an optional sort parameter to all get() overloads.
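If that lands, usage might look something like this (a purely hypothetical API sketch - nothing like this exists in v1.0):
// hypothetical: an optional comparator alongside the existing filter overload
const sorted = store.get(
  (todo) => !todo.completed,
  (a, b) => a.description.localeCompare(b.description)
)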
What'd I miss? File issues for feature requests!
]]>