Diagnostic baseline
Full crawl, schema graph audit, Search Console and Ahrefs baselining, Core Web Vitals field data capture, log file ingestion. A written 15-to-25-page baseline document.
A technical SEO retainer from £950 per month. Crawl budget, schema architecture, Core Web Vitals, JavaScript rendering, internal link topology, hreflang discipline, log file analysis, and migration protection. Month-to-month after the first 90 days. Built to be the foundation the rest of your SEO compounds on.
Nine specific infrastructure problems the retainer is designed to solve. Ignore these and no amount of content or link earning compounds.
Technical SEO is the infrastructure work that decides whether search engines can find, render, understand, and prefer your pages over your competitors'. Done properly, it is invisible. Done poorly, it caps everything else you do. The sections below are the nine areas a good technical retainer works on, explained without trade-secret language.
Google allocates a finite amount of crawl activity per site per day. On sites under about 1,000 URLs this rarely matters. On e-commerce catalogues, news publishers, and large service sites it matters enormously. Wasted crawl on faceted URLs, parameter noise, or infinite scroll means your commercially important pages are being recrawled too slowly, and new content takes weeks to index. We analyse server logs over 30 to 90 days to see exactly which URLs Googlebot is visiting, which ones it is ignoring, and how to redirect that budget onto pages that drive revenue.
Schema markup lets search engines understand the entities and relationships on your site. Most sites have schema, but it is fragmented: one plugin emits Organization, another emits WebSite, a theme emits BreadcrumbList, and none reference each other. The result is schema that validates individually but produces no rich results and no entity consolidation. We rebuild the schema as a connected graph using stable @id references, add template-level schemas for your core page types, and validate every block against the current Google guidelines.
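As a sketch of what a connected graph looks like in practice (example.com and the entity names are placeholders, not a client site), each node carries a stable @id and other nodes point at that @id instead of repeating the entity:

```python
import json

BASE = "https://example.com"  # placeholder domain for illustration

graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": f"{BASE}/#organization",
            "name": "Example Ltd",
            "url": BASE,
        },
        {
            "@type": "WebSite",
            "@id": f"{BASE}/#website",
            "url": BASE,
            # Reference by @id, not an embedded copy of the entity.
            "publisher": {"@id": f"{BASE}/#organization"},
        },
        {
            "@type": "WebPage",
            "@id": f"{BASE}/services/#webpage",
            "isPartOf": {"@id": f"{BASE}/#website"},
            "breadcrumb": {"@id": f"{BASE}/services/#breadcrumb"},
        },
        {
            "@type": "BreadcrumbList",
            "@id": f"{BASE}/services/#breadcrumb",
            "itemListElement": [
                {"@type": "ListItem", "position": 1, "name": "Home", "item": BASE},
                {"@type": "ListItem", "position": 2, "name": "Services"},
            ],
        },
    ],
}

jsonld = json.dumps(graph, indent=2)  # emitted once, site-wide, in a script tag
```

Because every block lives in one @graph and cross-references by @id, validators and Google see a single consolidated entity instead of four competing fragments.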
LCP, INP, and CLS have been part of Google's page experience ranking signal since 2021 (INP replaced FID in March 2024) and, more importantly, are a proxy for real user experience. We work from the CrUX field dataset rather than lab data alone, because field data is what the ranking signal uses. Typical interventions: critical CSS inlining, font loading strategy, image format and sizing policies, JavaScript execution ordering, layout stability on above-the-fold content, and server response time tuning. We target green on mobile as the baseline, not desktop.
React, Next.js, Vue, and Angular sites are the fastest-growing segment of the web and also the segment most commonly broken for SEO. Client-rendered content that Googlebot has to execute is at the mercy of rendering queue latency, which is unpredictable. We diagnose by comparing the raw HTML response, the Googlebot-rendered DOM via Search Console, and the client-rendered DOM. The fix depends on stack: SSR where commerce or dynamic auth matters, static generation where content is stable, incremental static regeneration where freshness matters, pre-rendering where legacy constraints prevent a rewrite.
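The core of that diagnosis is a diff of visible text between the raw HTML response and the rendered DOM. A minimal sketch (the HTML strings are stand-ins; in practice the rendered DOM comes from Search Console's URL Inspection tool or a headless browser):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, ignoring script/style content."""
    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def visible_text(html: str) -> set[str]:
    parser = TextExtractor()
    parser.feed(html)
    return set(parser.parts)

# Raw server response: an empty app shell.
raw = "<html><body><div id='root'></div><script>/* app */</script></body></html>"
# DOM after client-side rendering.
rendered = ("<html><body><div id='root'>"
            "<h1>Integration: Example</h1><p>Docs</p></div></body></html>")

# Anything in this set only exists after JavaScript runs — content that
# depends on Google's rendering queue before it can be indexed.
missing_from_raw = visible_text(rendered) - visible_text(raw)
```

A large `missing_from_raw` set on a commercial template is the signal that SSR, static generation, or pre-rendering is worth the engineering cost.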
Internal linking is the lever most commonly underused by in-house teams. Every internal link passes signal. A page buried seven clicks from the homepage with one inbound link is telling Google it is less important than a page two clicks deep with twenty. We audit the internal link graph, identify orphan pages, rebalance link equity towards commercially valuable targets, and fix anchor-text concentration where it has been over-optimised.
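Click depth and orphan detection fall out of a simple breadth-first search over the internal link graph. A toy sketch with hypothetical paths:

```python
from collections import deque

def crawl_depths(links: dict[str, list[str]], home: str) -> dict[str, int]:
    """BFS click depth from the homepage over an internal link graph."""
    depth = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depth:
                depth[target] = depth[page] + 1
                queue.append(target)
    return depth

# Toy site: /orphan exists in the sitemap but has no inbound internal link.
links = {
    "/": ["/services", "/about"],
    "/services": ["/services/seo"],
    "/about": [],
    "/services/seo": [],
}
all_pages = set(links) | {"/orphan"}

depths = crawl_depths(links, "/")
orphans = all_pages - set(depths)  # unreachable from the homepage
```

On a real engagement the same pass runs over a Screaming Frog export, and pages that are deep or orphaned but commercially valuable are the first linking targets.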
Canonical tags resolve duplicate content, but only when they are implemented consistently. Common failures: self-referencing canonicals that conflict with hreflang, parameterised URLs canonicalised to themselves instead of the base URL, paginated pages canonicalised to page one (which Google advises against, and which can stop deeper pages being crawled and indexed), and mixed-case URL inconsistency. We check every template against a single canonical policy and enforce it across the build.
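A "single canonical policy" is just a deterministic URL-normalisation function applied everywhere. An illustrative sketch (the exact rules vary per site; the parameter list here is an assumption):

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Tracking parameters to strip — an illustrative, not exhaustive, list.
TRACKING = {"utm_source", "utm_medium", "utm_campaign", "gclid", "fbclid"}

def canonical(url: str) -> str:
    """One policy: https, lowercase host and path, no trailing slash,
    no tracking parameters, stable query order, no fragment."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING]
    return urlunsplit((
        "https",
        parts.netloc.lower(),
        parts.path.lower().rstrip("/") or "/",
        urlencode(sorted(query)),
        "",  # fragments never reach the server
    ))
```

Every template then emits `<link rel="canonical">` through this one function, so mixed-case and parameter variants can never drift apart.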
Hreflang tells Google which language and regional variant to serve to which user. It is the single most commonly misimplemented technical SEO feature on multi-region sites. The usual failures: missing return tags, invalid region codes, conflicting canonical-and-hreflang pairs, and x-default that points at a page that is itself regionalised. We validate every hreflang group against the specification and monitor drift month over month.
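The return-tag rule is mechanical to check: every alternate a page declares must declare that page back. A minimal validator over hypothetical URLs:

```python
# Hreflang annotations per URL, as found in <head> tags or the sitemap.
annotations = {
    "https://example.com/en/": {"en": "https://example.com/en/",
                                "fr": "https://example.com/fr/"},
    "https://example.com/fr/": {"fr": "https://example.com/fr/"},
    # /fr/ is missing the en return tag — the classic failure.
}

def missing_return_tags(annotations):
    """Return (page, expected_target) pairs where a return tag is absent."""
    errors = []
    for url, langs in annotations.items():
        for lang, target in langs.items():
            if target == url:
                continue  # self-reference needs no return tag
            back = annotations.get(target, {})
            if url not in back.values():
                errors.append((target, url))  # target should link back to url
    return errors
```

Without the return tag, Google treats the pair as unconfirmed and may collapse regional variants into one locale, which is exactly the drift the monthly validation catches.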
We pull 30 to 90 days of server logs, filter on verified Googlebot IPs, and segment by URL pattern. This tells us what Googlebot is actually doing, as opposed to what Search Console tells you in summary form. Typical findings: bots crawling 404s, bots re-requesting redirected URLs, bots ignoring entire sections, bots hammering faceted URLs that produce no indexable content. Fixing those typically recovers 20 to 40% of crawl capacity for commercial pages.
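The segmentation step looks roughly like this sketch over combined-log-format lines. (For brevity it filters on user agent; in production Googlebot must be verified by reverse DNS on the requesting IP, since the user agent string is trivially spoofed.)

```python
import re
from collections import Counter

# Combined log format, trimmed to the fields we need.
LINE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "GET (?P<path>\S+) [^"]+" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def segment(log_lines):
    """Bucket Googlebot GET requests by URL pattern."""
    buckets = Counter()
    for line in log_lines:
        m = LINE.match(line)
        if not m or "Googlebot" not in m.group("ua"):
            continue
        if "?" in m.group("path"):
            buckets["parameter"] += 1       # faceted / parameter noise
        elif m.group("status") == "404":
            buckets["404"] += 1             # dead URLs still being crawled
        else:
            buckets["clean"] += 1
    return buckets

logs = [
    '66.249.66.1 - - [01/Jan/2025:00:00:01 +0000] "GET /product/widget HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '66.249.66.1 - - [01/Jan/2025:00:00:02 +0000] "GET /shoes?colour=red&size=9 HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '10.0.0.5 - - [01/Jan/2025:00:00:03 +0000] "GET /product/widget HTTP/1.1" 200 512 "-" "Mozilla/5.0 (real browser)"',
]
counts = segment(logs)
```

The real pipeline adds URL-pattern groups per site section, but the shape of the finding is the same: what share of verified Googlebot hits lands on URLs that can never rank.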
Migrations are the single largest category of self-inflicted SEO damage. The usual story: a marketing team approves a redesign, the dev team ships it, URLs change subtly, redirects are incomplete, and traffic falls 40% on launch day. We run a migration protocol that catches these failures before they ship, from URL inventory through post-launch crawl comparison. Applied to a redesign or stack change this usually saves more than the entire retainer cost in a single deployment.
Onboarding is structured. After day 90 the retainer moves to steady-state monthly cadence and month-to-month terms.
Full crawl, schema graph audit, Search Console and Ahrefs baselining, Core Web Vitals field data capture, log file ingestion. A written 15-to-25-page baseline document.
Highest-impact technical work: indexation cleanup, schema graph rebuild, canonical and hreflang enforcement, Core Web Vitals on the slowest two templates.
Internal link topology rebalancing, JavaScript rendering fixes where relevant, template-level schema additions, log-file driven crawl redirection.
Month-to-month steady state. New template launches protected under migration protocol. Monthly crawl comparison. Quarterly strategic review. No lock-in.
Entry for small sites, National for mid-scale and JavaScript-heavy stacks, Enterprise for catalogues and SaaS at scale. Each tier has a distinct allocation and scope.
Baseline deliverables at every tier. Hours, depth, and tooling scale with site complexity.
Never junior hands. The work is specialist by nature, and the retainer is priced accordingly.
Screaming Frog crawl of the full indexable site, reconciled against Search Console data and prior months.
Entity graph audited and rebuilt where needed. Template-level schemas validated against current Google rules.
Field data monitored from the CrUX report. Fixes scoped and delivered against LCP, INP, and CLS regressions.
Orphan pages surfaced, link equity redirected to commercial pages, anchor text normalised where over-optimised.
Site-wide canonical policy, drift monitoring month over month.
Every template and stack release reviewed before launch to catch SEO regressions.
Two to three page written report each month. What shipped, what moved, what is queued next.
No fixed-term contract. Leave any time after the first 90 days with 30 days notice.
Start with the audit; the fee comes off month one if you sign the retainer within 30 days of delivery.
Sector and identifying detail removed. These are the kinds of issues we surface in the first 30 days on most engagements.
Programmatic landing pages for 3,200 integrations were rendered entirely client-side. Search Console showed them indexed but the rendered content was empty. SSR fix recovered 78% of indexation within 28 days.
Log files showed Googlebot spending 62% of crawl budget on colour and size facets that were canonicalised to the base product. Robots rules and canonical logic corrected. Crawl on commercial product pages doubled.
Seventeen competing Organization entities emitted from different plugins. Article schema malformed. Consolidated into a single schema graph with stable @id references. Rich result coverage went from 12% to 84% over two months.
Over 60% of hreflang pairs lacked return tags. Regional variants collapsing into the default locale in search results. Full hreflang rebuild with automated validation, regional variants stabilised.
Honest expectations. Technical SEO is not an overnight lift, but some things move inside weeks.
Indexation changes and schema fixes typically show effect inside two to four weeks, once Googlebot has had time to recrawl the affected templates. Core Web Vitals changes propagate into the CrUX field dataset over a 28-day rolling window, so you see real movement after about a month of traffic on the new template. Crawl budget reallocation shows in log files within days but in ranked results over six to twelve weeks, because rankings compound as new pages get crawled and indexed. Migration work, when done properly, produces stable or slightly improved rankings at launch rather than a dip. Treat anyone promising instant lift after a technical fix with appropriate scepticism.
Receipts available on request, happy to show live Search Console on a call.
Built 40+ postcode-level landing pages, cleaned up a messy schema stack, deployed a WhatsApp AI dispatch agent, earned local press across east London recovery services.
Rebuilt an ageing site, added product & review schema, rewrote category pages in plain English.
180-page city-service template that reads human, plus a WhatsApp agent handling 60% of intake.
Four verified reviews from active engagements. Every review ships as schema.org Review markup alongside the visible quote: the same claim on screen and in the structured data.
Three years in and still the best SEO money I have ever spent. Map Pack visibility across 40+ London postcodes, zero nonsense in the reporting, and I can text Syed directly when something breaks.
Organic revenue up 185% in 14 months. Product schema rebuild alone lifted rich-result capture by ~40%. No 12-month lock-in, month-to-month, which meant I could judge the work on results rather than on contract friction.
Moved from an NYC agency that billed $9k/month for junior-delivered work. Two years later, 23 practice-area terms on page one and qualified demos up 180%. Senior time, in USD, month-to-month, what US SaaS SEO should be.
Four-clinic group across Sydney. GBP work, postcode landing pages, review pipeline that actually complies with Google's rules. Patient bookings from organic up 3x in the first year. Remote but genuinely responsive.
SEO is the foundation. AI and custom web builds are how I ship outcomes in 2026, all connected, all from the same hand.
Crawl audits, schema that validates, internal linking, postcode-level landing pages, GBP, Map Pack, the foundation that makes everything compound.
Custom WhatsApp and web agents handling enquiries, quoting, booking, and dispatch. N8N, OpenAI, Gemini.
Custom sites on WordPress, Next.js, or hand-written HTML. Fast, SEO-ready, Core Web Vitals green from day one.
Topical maps that close ranking gaps. Editorial briefs your writers can follow. Digital PR that survives core updates.
Reporting, lead routing, content pipelines. If a task is repetitive and mechanical, I'll automate it with N8N.
Written SEO diagnostic with a ranked fix list. Two-week turnaround. Often the right starting point.
Replatforms, redesigns, rebrands. I protect rankings through the change, the riskiest work in SEO, done right.
Four tiers. Every tier is hand-coded, no Wix, no Elementor, no copy-paste from a template marketplace. Schema, sitemap, Search Console and Analytics configured on every project. 90+ Lighthouse speed target where technically possible. Express turnaround on sites up to 10 pages: 2 to 3 working days for an extra £500, or same-day launch for £1,000, subject to all content and brand assets supplied on day one. Lower than traditional UK agencies, because we don't carry London agency overhead.
Hand-coded 5-page site for founders validating a new business or single-service local operators.
Most common tier for growing SMEs. Full sitemap, services, about, blog shell, custom UI/UX in Figma.
Full UI/UX system plus hand-coded Next.js or WordPress build for businesses with multiple service lines.
Shopify / Saleor headless, multi-language hreflang, CRM / CMS / ERP API integrations.
The difference between a pitch deck and the people shipping your work is the difference between “scalable” and delivered.
A short introduction, your site URL, and what you’re trying to achieve. If it’s a fit, we’ll book a 30-minute call.
Every placement is negotiated and published by hand through a six-year network of editors and journalists. We never use AI bots or PBNs: they get detected, they get demoted, and your domain pays the price.
Ten contextual do-follow links from real UK and international sites with Domain Rating 50 and above. Topically relevant. Placed inside genuine editorial content, not link-farm footers. Index report delivered within 4 weeks.
Ten earned placements on national UK and US media with Domain Rating 70 and above, the kind of coverage that shifts rankings in competitive verticals and doesn't disappear in the next core update. Written, pitched, and placed by our PR team.
Google's last five core updates have all sharpened link-spam detection. Bulk-placed links from AI-generated host sites and public blog networks are being flagged faster than they can be bought. Our model is slower and costs more per link, but the placements survive every update and compound in value the longer they stay live.
Most agency SEO deliverables end at a recommendations document the client's developer never gets around to implementing. We write the schema, ship the SSR refactor, and merge the internal-link rebuild ourselves. The SEO work that needs code ships in the same sprint the audit flagged it.
Every client gets the same senior operator from first call to monthly review. Continuity is the product.
Two weeks. Crawl, keyword gap, backlink profile, on-page health. Written report, ranked fix list.
Schema, technical debt, site build or repair, internal linking. The work that makes everything compound.
Close topical gaps. Earn links honestly. Deploy AI agents where they save real hours, not just look clever.
Monthly call. Plain-English report. What moved, what didn't, what's next. Leave any time.
Syed leads the strategy and writes the monthly notes. Behind him is a tight network of expert developers and manual link-earning partners built over six years. Everything ships fast, nothing is outsourced to an AI bot that will earn your domain a penalty in the next core update.
The £950 entry tier is right for sites under roughly 5,000 indexable URLs on a standard CMS. You get a 20-hour-a-month engagement covering technical fixes, schema, internal linking, Core Web Vitals work, monthly crawl analysis, and a written report. Sites over 5,000 URLs, JavaScript-heavy SaaS, multi-region setups with hreflang, or migrations push scope into the £1,800 to £4,000 tiers. We size the retainer honestly on the first call rather than forcing a square peg into a round hole.
A general SEO retainer blends technical work, content, and link earning. A technical-only retainer focuses the full monthly allocation on infrastructure: crawlability, rendering, schema, speed, internal link topology, and log-level diagnostics. It pairs well with an in-house content team or a separate content partner, and is the right fit when the technical foundation is the bottleneck rather than word count or link profile.
Yes. JavaScript SEO is one of the most common reasons a technically well-intentioned site fails to rank. We diagnose rendering issues by comparing the initial HTML response, the Googlebot-rendered DOM, and the client-rendered DOM. Most React, Next.js, Vue, and Angular sites we audit have at least one rendering issue affecting indexation. We fix those with the right mix of server-side rendering, static generation, or pre-rendering depending on the stack.
Three layers. First, a site-wide entity graph that links your Organization, WebSite, and Person schemas through stable @id references. Second, template-level schemas for your core page types, usually Article, Product, Service, LocalBusiness, or FAQPage, validated against the current Google rules. Third, page-specific schemas like HowTo, Review, or BreadcrumbList where they fit. Most sites we audit have schema fragments that do not reference each other and do not produce rich results. Fixing that alone often moves click-through rates meaningfully.
Yes. We ingest 30 to 90 days of server logs, segmented by URL pattern and user agent. That tells us which URLs Googlebot is actually requesting, how often, and whether crawl budget is being wasted on faceted URLs, parameter noise, or low-value paginated pages. Log file analysis is the only reliable way to see crawl behaviour. Search Console gives an aggregate; logs give the truth.
We run a migration protocol: full URL inventory, old-to-new mapping, 301 redirect chains minimised to single hops, schema preservation, internal link rebuilding on the new templates, Core Web Vitals validated pre-launch, staging-site indexing controls, post-launch crawl comparison, rank-tracking benchmarks, and 30 to 60 days of post-migration monitoring. Done properly a migration should lift rankings, not harm them. Done poorly it is the single most common reason a site loses 40% of traffic overnight.
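The "single hops" step of that protocol is worth making concrete: a legacy redirect map built up over multiple migrations gets flattened so no old URL ever passes through a chain. A sketch, using hypothetical paths:

```python
def collapse_chains(redirects: dict[str, str]) -> dict[str, str]:
    """Rewrite a redirect map so every old URL 301s straight to its
    final destination — one hop, no chains, loops surfaced early."""
    flat = {}
    for start in redirects:
        seen, current = {start}, redirects[start]
        while current in redirects:
            if current in seen:
                raise ValueError(f"redirect loop at {current}")
            seen.add(current)
            current = redirects[current]
        flat[start] = current
    return flat

# Two migrations stacked: /old -> /mid -> /new becomes two direct 301s.
chains = {"/old": "/mid", "/mid": "/new"}
single_hop = collapse_chains(chains)
```

Running this against the full URL inventory before launch also surfaces loops and dead ends while they are still cheap to fix.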
Often, yes. On about half our technical retainers we write the specs and your in-house developer or external agency implements them. On the other half we implement directly on WordPress, Next.js, Shopify, or Webflow. Either works. What we will not do is accept responsibility for outcomes when a third party refuses to implement the specified fixes.
A two to three page written note each month, plus a 30-minute call. The note covers what was shipped, what moved in rankings and indexation, what the month-over-month crawl comparison shows, and what is queued next. No 40-slide decks, no vanity dashboards. Search Console and Ahrefs are the source of truth, the note is the interpretation.
Send your URL, the current stack, and the problem as you see it. We reply inside a working day with a scoped retainer and an honest view on whether we are the right fit.