Monitoring, Testing, and Maintaining Speed

I optimized a client’s site in 2021. Got it down to a 1.9-second load time. They were thrilled. I moved on to other projects. Six months later they called me, frustrated. “The site is slow again.” Load time? 4.7 seconds.

What happened? Two plugin updates introduced new JavaScript. They’d added a chat widget. A new team member installed a page builder “just to try it” and forgot to remove it. And nobody had been watching.

Speed optimization isn’t a project with a start and end date. It’s a practice, like keeping a house clean: stop paying attention and things pile up. This chapter gives you the systems to monitor, test, and maintain speed over months and years without it becoming a full-time job.

Setting Up Monitoring

You need to know when your site gets slower. Not after a client complains. Not after rankings drop. The moment it happens.

Free Monitoring Tools

Google Search Console is the baseline. It shows you Core Web Vitals data from real Chrome users who visit your site. This is field data, not lab data. It tells you what actual visitors experience. The downside is the data is delayed by about 28 days (it uses a rolling 28-day window), so you won’t catch problems immediately.

UptimeRobot (free tier) pings your site every 5 minutes and alerts you if it goes down. It also tracks response time over time, so you can see trends. If your average response time creeps from 200ms to 500ms over a few weeks, you’ll see it in the chart. The free tier gives you 50 monitors, which is more than enough for most people.

Google PageSpeed Insights API. If you’re technical, set up a cron job that hits the API daily and logs results. I have a script that tests my top 5 pages every morning and emails me if any score drops below 70. Catches problems within 24 hours instead of weeks.
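If you want to build something similar, here’s a minimal sketch in Python against the real PSI v5 endpoint. The `PSI_API_KEY` environment variable, the page list, and the 70-point threshold are illustrative choices, not fixed names; scheduling and email alerts are left to cron and your mailer of choice.

```python
import json
import os
import urllib.parse
import urllib.request

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def extract_score(psi_response):
    """Pull the 0-100 performance score out of a PSI API response."""
    raw = psi_response["lighthouseResult"]["categories"]["performance"]["score"]
    return round(raw * 100)  # the API reports it as 0.0-1.0

def psi_score(url, api_key, strategy="mobile"):
    """Fetch the Lighthouse performance score for one URL."""
    query = urllib.parse.urlencode({"url": url, "strategy": strategy, "key": api_key})
    with urllib.request.urlopen(f"{PSI_ENDPOINT}?{query}") as resp:
        return extract_score(json.load(resp))

def daily_check(pages, api_key, threshold=70):
    """Print an ALERT line for any page that drops below the threshold."""
    for page in pages:
        score = psi_score(page, api_key)
        marker = "ALERT" if score < threshold else "ok"
        print(f"{marker} {score:3d} {page}")

# From your cron entry point, something like:
# daily_check(["https://example.com/"], os.environ["PSI_API_KEY"])
```

Pipe the output into `mail` (or anything else) and you have the same 24-hour safety net without waiting weeks for field data.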

Paid Monitoring Worth the Money

DebugBear ($12/month and up) is what I use for client sites. It runs scheduled Lighthouse tests, tracks field data, alerts on regressions, and shows exactly which metrics changed and when. The timeline view lets you pinpoint the exact day something went wrong and correlate it with plugin updates.

For most site owners, Search Console plus UptimeRobot is enough. If you manage multiple sites or speed is critical to revenue, invest in DebugBear.

Google Search Console Core Web Vitals Report

This is your reality check. Lab tests tell you what could happen. The Core Web Vitals report in Search Console tells you what actually happened to real users.

How to Read It

Navigate to Core Web Vitals under the Experience section. You’ll see separate reports for mobile and desktop. Each URL is classified as “Good,” “Needs improvement,” or “Poor” for each Core Web Vital metric.

LCP (Largest Contentful Paint): Good is under 2.5 seconds. Needs improvement is 2.5-4 seconds. Poor is over 4 seconds. If most of your pages are “Needs improvement,” focus on hero images, server response time, and render-blocking resources.

INP (Interaction to Next Paint): Good is under 200ms. This replaced FID (First Input Delay) and measures how quickly your site responds to any user interaction, not just the first one. High INP almost always means too much JavaScript on the main thread.

CLS (Cumulative Layout Shift): Good is under 0.1. If you’re seeing high CLS in the field but not in lab tests, the problem might be ad loading, dynamic content injection, or third-party scripts that behave differently under real network conditions.
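If you script your own checks, the thresholds above are easy to encode. One caveat: the “Poor” cutoffs for INP (500ms) and CLS (0.25) aren’t stated above; they are Google’s published thresholds, included here to complete the table.

```python
# (good, poor) boundary per metric, from this section plus Google's
# published "Poor" cutoffs for INP and CLS.
THRESHOLDS = {
    "lcp": (2.5, 4.0),    # seconds
    "inp": (200, 500),    # milliseconds
    "cls": (0.1, 0.25),   # unitless layout-shift score
}

def classify(metric, value):
    """Bucket a field-data value the way the Search Console report does."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "Good"
    if value <= poor:
        return "Needs improvement"
    return "Poor"
```

Feed it your monitoring numbers and you can flag regressions in the same vocabulary the Core Web Vitals report uses.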

What to Act On

The report groups URLs with similar issues. Click on a group to see which URLs are affected and which metric is failing. Fix the root cause, not individual pages.

One important thing: don’t panic over yellow (“Needs improvement”) scores. Google only gives a ranking penalty for “Poor” URLs. Yellow is worth improving, but it’s not an emergency. Focus on getting everything out of the red first, then work on turning yellow to green.

The data takes 28 days to update after you make changes. This is frustrating but unavoidable. Make your changes, verify them in lab tests, and then wait for the field data to confirm. Don’t make more changes in that 28-day window unless you’re sure of what you’re doing. Overlapping changes make it impossible to know what worked.

PageSpeed Insights vs. GTmetrix vs. WebPageTest

These three tools test your site’s speed, but they do it differently. Using the wrong one for the wrong purpose gives you misleading data.

PageSpeed Insights

What it does: Runs Lighthouse in a controlled environment AND shows real-user field data from the Chrome User Experience Report (CrUX).

When to use it: When you need to see both lab performance and real-user data in one place. When you want to know what Google sees. When you’re troubleshooting Core Web Vitals issues.

Limitations: Tests from a single location (usually US). The lab test uses mobile throttling that can be harsher than typical user conditions. Scores fluctuate between runs (5-10 points of variation is normal).

GTmetrix

What it does: Runs Lighthouse from multiple global locations. Shows a waterfall chart, video recording, and historical data if you create a free account.

When to use it: When you need to test from a specific geographic location. When you want consistent waterfall analysis. When you need to compare before-and-after with visual recordings.

Limitations: The free tier only tests from Vancouver, Canada. The scores can differ from PageSpeed Insights because the testing conditions are different. Don’t chase GTmetrix scores specifically. Use it as a diagnostic tool.

WebPageTest

What it does: The most detailed testing available. Custom connection speeds, multiple test runs, filmstrip comparisons, resource timing breakdowns, first-party vs. third-party analysis.

When to use it: When you need deep diagnostic data. When you’re trying to figure out exactly why something is slow. When you want to compare two URLs side by side with a visual filmstrip.

Limitations: The interface is complex. Results take longer. Not great for quick checks.

My Recommendation

Use PageSpeed Insights for your regular checks. It’s fast, it shows real-user data, and it’s what Google uses. Use GTmetrix when you want a waterfall chart and location-specific testing. Use WebPageTest when you’re debugging a specific issue and need granular data.

Don’t obsess over score differences between tools. They test under different conditions. A site that scores 78 on PageSpeed Insights and 85 on GTmetrix isn’t faster on GTmetrix. It’s just tested differently.

The Monthly Speed Audit

I do this on every site I manage. It takes 15 minutes and catches problems before they compound.

The 15-Minute Monthly Check

Minutes 1-3: Run PageSpeed Insights on your homepage and one inner page. Compare scores to last month. If they dropped more than 10 points, investigate. If they’re stable or improved, keep moving.

Minutes 4-6: Check Google Search Console Core Web Vitals. Are any URLs newly in the “Poor” category? Did the number of “Good” URLs decrease? Look at the trend line, not just the current number.

Minutes 7-9: Review recent plugin and theme updates. Open your WordPress dashboard and note which plugins and themes have updated since last month. Did any of those updates add features that might load additional assets? Read the changelogs for anything that mentions new scripts, styles, or API calls.

Minutes 10-12: Check your page weight. Open your homepage in Chrome DevTools, Network tab, and look at the total transferred size. Compare this to your baseline (you did write down your baseline in Chapter 11, right?). If it’s grown by more than 100KB, find out what was added.
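If eyeballing the Network tab month to month feels error-prone, export a HAR file from DevTools and total the bytes in a script. A sketch: `_transferSize` is Chrome’s non-standard field for on-the-wire size, so it falls back to the HAR-standard `bodySize`, and the helper names are my own.

```python
import json

def total_transfer_kb(har_path):
    """Sum compressed transfer sizes for every request in a HAR export."""
    with open(har_path) as f:
        har = json.load(f)
    total = 0
    for entry in har["log"]["entries"]:
        resp = entry["response"]
        # Chrome records wire size in _transferSize; other tools use bodySize.
        size = resp.get("_transferSize", resp.get("bodySize", 0))
        if size > 0:  # -1 means "unknown" in the HAR spec
            total += size
    return total / 1024

def over_baseline(current_kb, baseline_kb, allowance_kb=100):
    """True if the page grew past its baseline by more than the allowance."""
    return current_kb - baseline_kb > allowance_kb
```

The 100KB allowance mirrors the rule of thumb above; tighten it if your budget is stricter.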

Minutes 13-15: Spot-check mobile performance. Enable mobile throttling in Chrome DevTools and load your most important page. Does it feel fast? Does anything shift or jump during load? Trust your gut here. If something feels slow, it probably is.

Write down your numbers. Put them in a spreadsheet. You want to track trends over time, not just snapshots.
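Any spreadsheet works, but even a plain CSV you append to once a month gives you the trend line. A minimal sketch; the file name and column names are my own choices, not from any tool.

```python
import csv
import os
from datetime import date

AUDIT_LOG = "speed-audit.csv"
FIELDS = ["date", "psi_home", "psi_inner", "page_weight_kb", "requests"]

def log_audit(psi_home, psi_inner, page_weight_kb, requests, path=AUDIT_LOG):
    """Append this month's audit numbers, writing a header on first run."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "psi_home": psi_home,
            "psi_inner": psi_inner,
            "page_weight_kb": page_weight_kb,
            "requests": requests,
        })
```

Twelve rows of this and a regression stops being a feeling and becomes a visible slope.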

Common Speed Regressions

After maintaining hundreds of sites, I’ve catalogued the usual suspects. When your site gets slower, check these first.

Plugin Updates

A plugin update that “adds exciting new features” often means new JavaScript and CSS. Social sharing plugins are repeat offenders here. They add new animation effects, new button styles, new share networks, and each one comes with more code.

Theme Updates

Theme updates can change how assets are loaded, add new Google Fonts, or introduce new template structures. After any major theme update, re-test your speed. I’ve seen theme updates add 150KB of new CSS because they shipped a new feature the user never even turned on.

Content Changes

Your content team adds a Twitter embed. A YouTube video. An iframe. Each embedded resource brings its own JavaScript and CSS. Five Twitter embeds on one page can add 500KB+ of JavaScript. Monitor popular pages after content updates.

New Tracking Scripts

Marketing wants a heat mapping tool. Sales wants a chat widget. The CEO wants a holiday popup. Each one is “just a small script.” Each one adds 50-200KB of JavaScript. After a year, you’ve accumulated a megabyte of marketing scripts that nobody audits.

I’ve cleaned up sites with five analytics tools running simultaneously. Three A/B testing scripts. Two chat widgets (one nobody knew was still there). Combined overhead: over 2MB of JavaScript. Removing duplicates cut page weight by 40%.

Testing Methodology

Getting consistent, meaningful speed measurements is harder than it sounds. Here’s how to avoid the common pitfalls.

Run Multiple Tests

Never make decisions based on a single test. Speed measurements vary between runs. Server load, network conditions, and caching state all affect results. Run at least three tests and use the median (not the average, because one outlier can skew an average).
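Python’s standard library makes this trivial; here’s a small helper you can drop into any test script to see why the median is the better summary.

```python
from statistics import mean, median

def summarize_runs(values):
    """Median resists the single outlier that would drag the average."""
    return {"median": median(values), "mean": round(mean(values), 2)}

# Three LCP runs (seconds) with one outlier: median stays at 2.3,
# while the mean is pulled up to 3.73.
print(summarize_runs([2.1, 2.3, 6.8]))
```

Base your decision on the median; if the outlier keeps recurring across test days, that itself is worth investigating.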

Clear Cache Between Tests

If you’re testing changes, clear your server cache, CDN cache, and browser cache before each test. Otherwise you might be comparing a cached version of the old site with an uncached version of the new one.

Test at Consistent Times

Server load varies throughout the day. A test at 3 AM will be faster than one at 3 PM because fewer people are using the server. Pick a consistent time for your regular tests. I do mine at 10 AM on weekdays.

Use Private/Incognito Mode

Browser extensions can affect page load. Ad blockers remove scripts. Password managers inject CSS. Test in a clean incognito window with no extensions.

Test the Right Pages

Don’t just test your homepage. Test your most-visited page, your heaviest page, and a random inner page. The homepage is often the most optimized page on the site because that’s what everyone tests. Your blog archive from 2019 with 47 unoptimized images and 3 embedded videos? That’s probably the slow page your visitors actually hit.

The Speed Budget

A speed budget is a set of limits you define and enforce. It’s the difference between saying “we care about performance” and actually doing something about it.

Setting Your Budget

Define maximum values for these metrics:

Total page weight: I recommend under 1.5MB for a blog, under 2.5MB for a content-heavy site, under 3MB for WooCommerce.

Number of requests: Under 50 for a simple site, under 80 for a complex one. Every request above your budget needs justification.

Largest Contentful Paint: Under 2.5 seconds on mobile. This is Google’s threshold for “Good.”

Total JavaScript size: Under 200KB compressed for a blog, under 400KB for an interactive site. This is the metric most sites blow right past.

Total CSS size: Under 100KB compressed. If you’re loading 300KB+ of CSS, something is wrong (usually a page builder or an unused theme framework).

Enforcing the Budget

Write these numbers down. Put them in a shared document. When someone wants to add a new script, plugin, or embed, check it against the budget. “This chat widget adds 180KB of JavaScript. Our JavaScript budget is 200KB and we’re currently at 150KB. If we add this, we’re at 330KB, which is 130KB over budget. What can we remove to make room?”

This sounds strict. It is. That’s the point. Without a budget, every addition seems small and reasonable. “It’s just 50KB.” “It’s just one more script.” But they accumulate, and a year later you’re wondering why your site loads in 6 seconds.
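The chat-widget conversation above is easy to mechanize. A sketch using the budget numbers from this section; the function and key names are illustrative.

```python
# Compressed-size limits in KB, from the budget defined above.
BUDGET_KB = {"javascript": 200, "css": 100, "page_weight": 1500}

def check_addition(resource, current_kb, addition_kb, budget=BUDGET_KB):
    """Report whether a proposed script/style fits the budget, and by how much."""
    limit = budget[resource]
    projected = current_kb + addition_kb
    if projected <= limit:
        return f"ok: {projected}KB of {limit}KB {resource} budget"
    over = projected - limit
    return f"over budget by {over}KB: find {over}KB to remove"

# The chat-widget example from the text: 150KB used, 180KB proposed.
print(check_addition("javascript", current_kb=150, addition_kb=180))
```

Run it in whatever pre-launch checklist you already have; the point is that the answer comes from the written-down numbers, not from whoever argues loudest.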

Building Speed Into Your Workflow

Most sites get slow because performance is treated as a cleanup task instead of a building standard. Here’s how to flip that.

Before Publishing

Check page weight before hitting publish on any new post or page. If you just added 5 images, make sure they’re compressed and properly sized. If you embedded a video, make sure it’s lazy-loaded. If you added a new section with custom CSS, make sure it’s not duplicating existing styles.

This takes 60 seconds per post. It saves hours of retroactive optimization later.

Before Activating Plugins

Test every new plugin’s performance impact before going live. Install it on a staging site. Run speed tests before and after activation. Check how many scripts and styles it enqueues. Check Query Monitor for new database queries.

If the plugin adds 300KB to every page for functionality you only need on one page, look for one that conditionally loads its assets. Or find a different solution entirely.

Before Deploying Theme Changes

Test every theme change for performance impact. I test in this order: staging site, speed test, visual check on mobile, then deploy. Five extra minutes prevents those “why is the site slow?” calls.

The Long Game

I’ve maintained client sites for 5+ years. The ones that stay fast have a clear performance owner, someone who checks speed monthly and says “no” when something will blow the budget.

They batch-update plugins monthly, test after, and roll back anything that causes problems. They review their plugin list annually. That email marketing plugin you stopped using? Still installed, still loading assets. That temporary fix from a migration? Still running.

The sites that get slow? Nobody watches the numbers. Nobody questions new additions. And one day, someone googles their own site, waits 8 seconds, and wonders what happened.

Your Speed Optimization Action Plan

If you’ve read this entire course and don’t know where to start, work through this list from top to bottom. Don’t skip ahead.

First: Get your hosting right. If your TTFB is over 600ms on an empty page, move hosts before doing anything else.

Second: Set up page caching. One plugin, configured properly. This alone cuts load times 50-80%.

Third: Compress and properly size your images. Convert to WebP, set dimensions, lazy-load below-the-fold. Easiest big win.

Fourth: Clean up your JavaScript. Defer, remove, delay. Biggest impact on mobile.

Fifth: Run a plugin audit. Remove what you don’t need. Replace heavy with light.

Sixth: Server-level optimizations. OPcache, compression, caching headers, real cron jobs.

Seventh: Fix Core Web Vitals. LCP, CLS, INP. Use PageSpeed Insights to identify problems.

Eighth: Set up monitoring. UptimeRobot for uptime. Monthly speed checks. Quarterly plugin reviews.

Ninth: Create your speed budget. Define limits. Write them down. Enforce them.

Tenth: Build performance into your workflow. Check before you publish. Test before you activate. Monitor after you change.

Start with the first three. They’ll give you the biggest improvements. Work through the rest over the following month. Complete all ten and your site will be faster than 90% of WordPress sites on the internet.


Chapter Checklist

  • [ ] Set up UptimeRobot (or similar) for uptime and response time monitoring
  • [ ] Review your Core Web Vitals report in Google Search Console
  • [ ] Run PageSpeed Insights on your top 3 pages and record baseline scores
  • [ ] Schedule a monthly 15-minute speed audit on your calendar
  • [ ] Document your speed budget (page weight, request count, JS size, LCP target)
  • [ ] Test your latest content in incognito mode with mobile throttling
  • [ ] Review plugins updated in the last 30 days for performance changes
  • [ ] Check for orphaned scripts and tracking tools no longer in use
  • [ ] Create a pre-publish checklist that includes image optimization and page weight
  • [ ] Save this chapter’s action plan and start working through it top to bottom

Chapter Exercise

Set up your complete monitoring system this week. Sign up for UptimeRobot and add your site. Open Google Search Console and review your Core Web Vitals report. Run PageSpeed Insights on your five most important pages and record the scores in a spreadsheet. Then schedule a recurring 15-minute calendar reminder for the first Monday of every month labeled “Speed Audit.” During your first monthly audit, run through the 15-minute check described in this chapter. Compare your numbers to what you recorded today. If anything degraded, investigate why and fix it before the next month.