Website Optimization SEO Tips to Improve User Experience
Everybody loves a fast-loading website, but without a solid understanding of how content is crawled, indexed, and ranked, third-party assets and bloated code can bring all the wrong kinds of attention to your content (and prevent it from appearing in the SERPs).
There are few things more important to Google than UX (except, of course, better content). So knowing how to approach website optimization for improved user experience (and optimal organic search placement) should always be a priority.
This post is about how we revisited Google’s ranking factors for user experience and the steps we took to make improvements to our website.
Technical SEO vs. Content SEO
In the world of SEO marketing, there’s an ongoing debate over which approach to SEO is the most impactful: “technical” or “content.” The answer seems to be entirely subjective, with the bias leaning toward whichever SEO superpower the muses have bestowed upon any given guru on any given day, and the extent to which Google rewards their efforts.
There’s no denying that you can publish the most relevant and engaging content on the planet, but if an apathy toward ranking factors keeps it out of the SERPs, no one will ever see it.
Conversely, you can tweak the bejeebers out of your web vitals but without content, there’s no reason for Googlebots to list your business anywhere near the first page.
Page Experience vs. User Experience
This August marks one year since Google introduced their page experience criterion. And in the spirit of transparency, it’s taken us this long to conform. But hey, we’re in good company when it comes to procrastinating.
On the bright side, revisiting the table stakes for web performance has created the opportunity to reprioritize our own SEO roadmap (especially on the technical side) with the emphasis on refining the content delivery environment for improved UX.
To that end, we began our own foray down the road of ‘optimal page experience’ by being aware of Google updates. Search Engine Roundtable keeps a current list of updates, as does Google themselves, with the former offering more of a layman’s perspective on what it could mean for your website.
However, “awareness” and “action” are two different animals, and we don’t live and die by updates simply because there are too many of them. (Google’s Matt Cutts has admitted to an average of 350-400 updates every year, a figure confirmed on Google’s Dev site.) I mean, we simply have too many other things to do.
We do however, care about staying compliant with core updates and meeting (or surpassing) essential ranking signals, keeping the technical conflicts of content delivery to a minimum.
Our goal is to present optimized content that will be crawled, indexed, and ranked with as little impedance to ‘web performance’ as possible.
So after waiting the better part of a year, here’s how our procrastination paid off…
Measuring Web Performance (Tools and Metrics)
We started by measuring page performance (speed and web vitals) to see where the problem areas and bottlenecks were. Then we made the appropriate corrections and re-tested until we were happy with the results (knowing that this is a work in progress).
The tools we used are free (keep reading).
These tools measure website performance based on one of two benchmarks: realtime (“field”) data, or synthetic (“lab”) data.
Google’s Martin Splitt explains why “field data is a better indicator for how real users are experiencing your website than lab data.”
And Google’s John Mueller confirms that Google uses field data for search rankings.
Enough said. We used PageSpeed Insights as our go-to resource.
The Discovery Process
This is kind of a love/hate thing. Yes, it’s fun to see the visual results of what’s working (love this) but it’s equally disheartening to see what isn’t (hate this). And even though the reporting tools provide specific solutions (love this), it requires a little savvy to know how to implement said changes, especially when they have to work in tandem with other solutions (hate this).
But with a little juju and a lot of experimenting, when you see your results “in the green” (love this), you feel like you can fix anything.
Google would have the average website owner believe that managing a website is a walk in the park.
In reality, interpreting performance reports and taking the appropriate measures to make improvements can be a little tricky. Even for an agency with a web design background.
Following up on the reporting recommendations takes time: researching the ‘best options’ that ‘best practices’ dictate, and then troubleshooting conflicts between those options to make sure that everyone plays nice together. Just sayin’.
On the bright side, correcting for one metric had correlating ramifications on others so it’s a little like a three-fer.
Tools We Used to Measure Web Performance
There are four powerhouse tools for measuring web performance, two that use lab data and two that use a combination of both lab and field data. In regard to these four options, GTmetrix explains why performance scores differ between web.dev, PSI, WebPageTest, and GTmetrix.
We focused our attention on measuring field data since this is clearly what the future of Google algorithms will measure.
PageSpeed Insights (PSI). This is the only way to access real-time user data apart from BigQuery, CrUX, or Data Studio (tools that we don’t currently utilize on any great scale). NOTE, however, that if your website does not have sufficient real-world speed data for the CrUX report to generate results, this report won’t accurately reflect real-world user experience.
PSI gives us both a high level at-a-glance view of how our pages are doing, as well as recommendations for corrections. It also combines lab and field data, suggesting more realistic results.
Despite the fact that PSI is limited to Chrome data, we feel that the browser holds enough of a market share that it provides a pretty good representation of UX data.
Plus, clicking on the links specific to FCP, TBT, LCP, and CLS returns corresponding Opportunities and Diagnostics. Very helpful for prioritizing corrections.
WebPageTest. This testing tool combines lab and field data as well. There’s no grading system (or option to print the reports) but it does an incredibly thorough job at isolating data in different dropdown categories, especially Performance Summary, Details, Web Vitals, Opportunities and Experiments, Optimization, and well, everything.
Their color-coded Web Vitals chart is over-the-top useful in presenting an at-a-glance view of the processing stages and isolating connection bottlenecks. Clicking on individual steps shows everything you need to know about problem areas.
Their Optimization and Image Analysis is particularly eye-opening. Can’t say enough good things about this useful tool!
We Didn't Use These Lab Tools
GTmetrix (lab/synthetic data only). Lab data isn’t bad, but it doesn’t accurately reflect how users interact with our site. However, their servers run out of Vancouver, so viewing a report from here makes us look really good. When results were slow to show improvements and we felt like we needed a warm hug, we just looked at our GTmetrix report and everything was right in the world.
The Steps We Took to Measure Web Performance and Improve SEO Performance
Eleven months after the Page Experience update, we began our optimization efforts in the most obvious areas.
- Optimizing for Images
This one improvement was responsible for an immediate increase in page performance, mostly due to increased page load speed.
- Optimizing for Mobile friendliness
We inherently build everything for mobile out of the gate, so we didn’t expect to dedicate much effort to this. And happily, we didn’t have to. Image optimization and correcting for core web vitals were the predominant strategies we focused on, and those largely took care of mobile friendliness on their own.
- Optimizing for Core Web Vitals
These are the primary signals that measure page experience so naturally, we spent much of our time here.
Admittedly, image optimization was something we took for granted. While we were already using an image compression plugin, we didn’t fine-tune it for optimal efficiency. This hit home when our Largest Contentful Paint (LCP) score came back high (images are a huge contributor to this score).
After experimenting with a few popular automation options, we settled on the solution with the best results for the following outcomes:
- Faster page loads
- Automatic conversion to a faster web format (WebP)
Manual best practices for optimizing images
Simple things we know work include:
- Using ALT tags (with relevant keywords)
- Adding descriptions (with relevant keywords)
- Keeping the image filename short. Knowing that file structure is a ranking factor, we pay attention to file names (length, and relevance to the image).
- Including a default image for social shares
Ongoing best practices for optimizing images will be handled automatically
- Automatic (lossy) compression
- Removing EXIF data
- Delivering images over a CDN. Image CDNs can yield a 40-80% reduction in image file sizes!
- Adding missing dimensions (also prevents cumulative layout shift)
- Including images in a sitemap
- Lazy loading images (except for specified images above the fold)
- Using image placeholders
- Disabling hotlinking (consumes bandwidth and increases CPU)
- Downgrading image quality for slower connections
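Several of the manual practices above reduce to a few lines of markup. Here’s a minimal sketch (the file names and CDN domain are hypothetical placeholders, not our actual setup):

```html
<!-- Sketch only: file names and the CDN domain are placeholder examples. -->
<!-- Explicit width/height reserve space so the layout doesn't shift;     -->
<!-- loading="lazy" defers below-the-fold images; the <picture> element   -->
<!-- serves WebP to browsers that support it and falls back to JPEG.      -->
<picture>
  <source srcset="https://cdn.example.com/img/cedar-fence-bike.webp" type="image/webp">
  <img src="https://cdn.example.com/img/cedar-fence-bike.jpg"
       alt="Red mountain bike leaning against a cedar fence"
       width="1200" height="675"
       loading="lazy">
</picture>
```

Note that images above the fold should skip `loading="lazy"` so they render as early as possible.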
For us, mobile friendliness is more of a lifestyle thing than an after-thought. Google Console’s URL Inspection tool confirms that our site is already mobile-friendly (but of course we already knew that). Everything we do is built around responsive web design.
We considered optimizing for AMP (not to be confused with mobile responsiveness) however, after following the evolution of associated signals and non-signals, we decided against it.
And since AMP is not a Google ranking factor…
At the end of the day, AMP is a non-issue when it comes to page experience. As long as we optimize our performance metrics by meeting (and surpassing) core web vital thresholds, it makes more sense to focus on known user-centric ranking factors, with UX and core web vitals commanding our attention.
Testing Core Web Vitals
In a nutshell, Core Web Vitals are a subset of Google Web Vitals – metrics for evaluating a website’s impact on visitors.
Collectively, core web vitals and ‘other’ web vitals provide a framework for defining and measuring thresholds for designing an optimal user experience. They also provide website owners a frame of reference for improving their websites. Without them, there would be no way of knowing how to jockey for position within the competitive landscape of search.
And of course, Google’s John Mueller confirms that Core Web Vitals metrics are used as a ranking signal. So, there’s that.
Primary Core Web Vitals
For our initial purposes, we focused on the three primary core web vitals:
- Largest Contentful Paint (LCP) – accounts for 25% of the Performance Score
- First Input Delay (FID) – directly correlates with the lab data version (TBT), accounting for 30% of the Performance Score
- Cumulative Layout Shift (CLS) – accounts for 15% of the Performance Score
Focusing on hitting these thresholds at the 75th percentile of page loads has the potential to significantly improve the performance of other vital metrics without direct intervention.
High Level Test Results
Troubleshooting performance reports isn’t for the faint of heart. It involves an above-average understanding of the characteristics of performance metrics as well as how to improve on them. It’s one thing to see the list of opportunities and diagnostics, but it’s quite another to fine-tune the solutions. Especially when we hear Mueller affirm that all three core web vitals must be met in order to benefit from their associated ranking signal. That’s right, Google wants a package deal.
What We Learned about Core Web Vitals
Many of the web vitals are interconnected. In the midst of optimizing for the three main core web vitals, we conveniently managed to improve the score of others. Good to know.
Eventually, we’ll work our way through the list in Opportunities and Diagnostics to solidify our web performance. For now, we’re nailing down the top three metrics for Page Experience.
Largest Contentful Paint (LCP)
LCP measures the time it takes the largest content element above the fold to render within the user’s browser. Nothing below the fold is considered.
Typical content affecting this metric includes images and fonts.
A fast LCP is a strong indicator of a good page experience. We’re aiming for the optimal range of 2.5 seconds or less.
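If you’re curious what your own browser reports, LCP can be observed with the standard PerformanceObserver API; a minimal sketch (the console logging is just for illustration, not part of any plugin we use):

```html
<script>
// Minimal sketch: log each LCP candidate as the page loads.
// The final LCP value is the last entry emitted before user input;
// we're aiming for 2500 ms or less.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate:', Math.round(entry.startTime), 'ms', entry.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
</script>
```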
Common factors of a poor LCP grade
- Slow Server Response Times
- Slow Resource Load Times
Ongoing Best Practices for Automatically Correcting LCP
- Time to First Byte (TTFB) measures how long it takes a server to deliver the first byte of a response to the browser. Improving this metric directly affects LCP. Contributing factors that improve TTFB might include:
- Using a faster web host
- Using a CDN – serves your website from the closest global server to a visitor’s browser, as opposed to where your local server is located
- Setup reliable caching
- Optimizing a website’s database (usually entails eliminating old and unused content)
- Ensuring plugins, addons, etc. are regularly updated
- Reducing queries
- Using the latest version of PHP
- “Eliminate render blocking resources” is a common reporting suggestion. Prioritizing the way that different scripts are loaded eliminates this bottleneck. Contributing factors that reduce or eliminate this might include:
- Inline critical CSS. The CSS involved in content displayed above-the-fold is given priority by placing it within the page’s HTML structure.
You can identify which files are problematic via the GTmetrix waterfall chart or by inspecting the page you’re testing and viewing the code coverage tab to isolate the resource (identified by the URL).
Conducting a simple plugin audit goes a long way to curb the impact that unused scripts have on LCP. Simply delete or deactivate plugins or plugin features that aren’t being used.
- Delay JS files. Prevent JS from loading until there’s some kind of user interaction.
- Minify CSS and JS Files – mostly involves eliminating all those helpful comments you used when creating the files as well as white space, line breaks, and other irrelevant bits.
- Image optimization. Already covered.
- Compress Files. A simple solution for faster transmission of website files between server and browser.
- Preload Critical Assets. This is a strategic move for fonts, links, and sometimes, images, video, and CSS/JS.
- Pre-connect 3rd-party integrations (eg. Facebook, YouTube, Google Analytics, etc.). When must-have integrations are hosted on 3rd-party servers, reducing the time it takes to connect to them helps increase the deliverability of your content and reduce the LCP score.
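A few of the items above (critical CSS, deferred JS, preloading, and pre-connecting) boil down to a handful of lines in the document head. A sketch, assuming placeholder asset names:

```html
<head>
  <!-- Sketch only: asset names are placeholder examples. -->

  <!-- Inline critical (above-the-fold) CSS so first paint doesn't wait on a stylesheet. -->
  <style>/* critical above-the-fold rules go here */</style>

  <!-- Preload the one font variant actually used above the fold. -->
  <link rel="preload" href="/fonts/brand-regular.woff2" as="font" type="font/woff2" crossorigin>

  <!-- Pre-connect to must-have third-party origins before their assets are requested. -->
  <link rel="preconnect" href="https://www.google-analytics.com">

  <!-- Load the full stylesheet without blocking render (swaps to 'all' once fetched). -->
  <link rel="stylesheet" href="/css/site.css" media="print" onload="this.media='all'">

  <!-- Defer non-critical JS until after the document is parsed. -->
  <script src="/js/site.js" defer></script>
</head>
```

In practice, a caching/optimization plugin handles most of this for you; the markup just shows what those settings produce.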
Cumulative Layout Shift (CLS)
Measures the amount of unintended movement (literally, shifting up/down or sideways) during the time it takes for a page to load, reflecting a level of difficulty (and annoyance) of user engagement with your site.
If you’ve ever had to wait for content to find its place before a page fully loads and you’re able to interact with it, you know what I’m talking about.
Typical content affecting CLS includes images, video, ads, embeds, and iframes without dimensions (think, Instagram feeds).
A low CLS score indicates a happy experience. We’re aiming for a result of 0.1 or less.
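As with LCP, the browser will report layout shifts directly through PerformanceObserver; a minimal sketch for watching your own score accumulate (illustration only):

```html
<script>
// Minimal sketch: accumulate layout shifts that weren't caused by user input.
// Google's "good" threshold for the cumulative score is 0.1 or less.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      cls += entry.value;
      console.log('CLS so far:', cls.toFixed(3));
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
</script>
```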
Common factors of a poor CLS grade
- Large web fonts can cause a delay in the presentation of content (Flash of Unstyled Text (FOUT) or Flash of Invisible Text (FOIT)) because they’re so large that they haven’t finished loading before they’re needed to display copy. Many font files include glyphs and variations in weight and style, which is why they can be so large.
- Failure to allocate media sizes
- Dynamically injected content (eg. animations).
- Actions waiting for a network response before updating DOM (eg. ads, IG feed, banners)
- Some plugins
Ongoing Best Practices for Automatically Correcting CLS
- Preload locally hosted fonts. Discover which font files are problematic via the GTmetrix Waterfall “fonts” section and use this URL to preload only the variant that’s being used (not additional styles, etc.).
- Optimize fonts. In addition to preloading fonts, unload them, set a fallback font, replace fonts with system fonts (faster), etc. Ensuring fonts load in a timely manner will prevent CLS.
- Add missing media dimensions. This can be done with a plugin or page builder.
- Add fixed position to animations
- Use a CDN
- Delay JS
- Optimize CSS delivery
- Disable Asynchronous CSS in caching. Loading CSS asynchronously can cause FOUC (Flash Of Unstyled Content) where the page’s contents are loaded before the CSS, resulting in slow page loads and layout shifts (CLS).
- Use critical CSS. If you want the benefits of loading CSS asynchronously but want to avoid FOUC, use critical CSS for above-the-fold loading.
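To make the font and dimension fixes concrete, here’s a sketch of what they look like in markup (font, image, and video names are hypothetical placeholders):

```html
<!-- Sketch only: font, image, and video names are placeholder examples. -->
<style>
  @font-face {
    font-family: 'Brand';
    src: url('/fonts/brand-regular.woff2') format('woff2');
    /* font-display: swap shows the fallback font immediately instead of
       invisible text (FOIT) while the web font downloads. */
    font-display: swap;
  }
  /* A metrically similar system fallback keeps the swap subtle. */
  body { font-family: 'Brand', Georgia, serif; }
</style>

<!-- Explicit dimensions reserve the slot so nothing jumps when media arrives. -->
<img src="/img/promo-banner.jpg" alt="Promo banner" width="800" height="200">
<iframe src="https://www.youtube.com/embed/VIDEO_ID"
        width="560" height="315" title="Product demo"></iframe>
```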
First Input Delay (FID)
Measures the latency (delay) between when a user initially interacts with a web page (clicks, taps, key presses) and when their browser is actually able to begin responding to that interaction (it doesn’t measure the processing time itself), representing another potentially frustrating experience.
This potential latency is usually a reflection of any background processing taking priority over a user’s interaction. Naturally, this bottleneck reflects poorly on UX so by optimizing as many processes as possible, we self-correct the FID score.
This metric is based solely on UX, so it’s measured as field data. TBT is the lab data version of the metric and directly correlates. Interestingly, the metric is returned as “TBT” when using PageSpeed Insights.
Typical content affecting FID primarily involves heavy JS execution.
We’re aiming for a result of 100 milliseconds or less.
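Like the other two vitals, FID is observable in the browser itself; a minimal sketch (logging is illustrative only):

```html
<script>
// Minimal sketch: report the page's first input delay.
// The delay is the gap between the interaction and when the browser
// could start running its event handler; we're aiming for 100 ms or less.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const fid = entry.processingStart - entry.startTime;
    console.log('FID:', Math.round(fid), 'ms (' + entry.name + ')');
  }
}).observe({ type: 'first-input', buffered: true });
</script>
```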
Ongoing Best Practices for Automatically Correcting FID
Efforts to improve FID share the same best practices as the lab metric Total Blocking Time (TBT).
- Defer unused JS/CSS
- Delay JS execution time
- Minify JS/CSS
- Reduce the impact of 3rd-party code (eg. ads, social integrations, analytics, fonts, reCAPTCHA, maps, etc.)
- Pre-connect 3rd-party assets
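The “defer” and “delay” items above look roughly like this in markup. A sketch, assuming a hypothetical chat widget as the heavy third-party asset:

```html
<!-- Sketch only: script URLs are placeholder examples. -->
<!-- defer: download in parallel, execute after parsing (keeps the main thread free). -->
<script src="/js/app.js" defer></script>

<!-- Delay-until-interaction: don't even fetch heavy third-party code
     until the user does something, so it can't block their first input. -->
<script>
  let widgetLoaded = false;
  const loadChatWidget = () => {
    if (widgetLoaded) return;
    widgetLoaded = true;
    const s = document.createElement('script');
    s.src = 'https://widget.example.com/chat.js'; // hypothetical third-party asset
    document.body.appendChild(s);
  };
  ['mousemove', 'touchstart', 'keydown'].forEach((evt) =>
    window.addEventListener(evt, loadChatWidget, { once: true, passive: true }));
</script>
```

Optimization plugins that offer “delay JS execution” are doing a more robust version of this same trick.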
Summing Up Our Web Performance Testing
WordPress has a lot of things going for it, not the least of which is a community of developers specializing in various aspects of its architecture. This makes it considerably easier to address the optimization corrections we needed to bring our website up to par.
We used a combination of plugins to work synergistically with each other to resolve heavy CSS/JS loads, eliminate render-blocking resources, reduce DOM size, etc. Getting them to all play nice is the tricky part.
As alluded to above, corrections score differently for different tools but the focus is to look at the diagnostics, not the numbers. The goal being, to get the three core web vitals in the green.
Apart from image optimization, the main bottleneck to optimize for (one that positively affects other web vital scores) involves optimizing JS, CSS, and HTML in any one of a number of ways, and tweaking according to how one solution affects a different property.
Clearly, optimizing our website and besting our performance scores is an ongoing effort and will involve implementing best practices for core web vitals (fonts, cookies, carousels, tags, 3rd-party assets, scrolling, CSS, etc.), watching and tweaking the corrections we recently implemented, and ensuring that we stay consistent in the way we manually add new content.
And now that we have a solid handle on optimizing the infrastructure (technical SEO) to create an optimal user experience, we can better focus on the way we introduce new content (content SEO).
Elite Digital Marketing is a full-service digital marketing agency dedicated to helping businesses succeed in their competitive marketplace. We specialize in developing high impact, cost-effective marketing initiatives that deliver tangible, measurable results.