Coding for SEO: How I Rank My Single-Page Applications
Moving from WordPress to Next.js tanked my SEO, so I fixed it. Here is a deep dive into SSR, Metadata APIs, pSEO, and JSON-LD to rank SPAs on Google.
Coming from a WordPress background, I got spoiled. Seriously. You install Yoast or RankMath, toggle a few switches, and you’re basically 90% of the way there. WordPress handles the permalinks, the sitemaps, the robots meta tags. It just works.
Then I pivoted. I wanted more control. I wanted raw speed. I wanted to build complex SaaS applications, not just brochure sites. So I moved to the Next.js, Supabase, and Tailwind stack.
And my SEO tanked.
Suddenly, I was staring at a blank div element where my content was supposed to be. That is the single-page application (SPA) trap. Google’s crawlers have gotten smarter, sure, but relying on them to execute your heavy JavaScript just to read your H1 tag is a losing game. If you want to rank your Indie Hacker project or your SaaS, you have to code for SEO. It’s not a plugin anymore. It’s architecture.
Here is how I engineer my Next.js apps to actually show up in search results.
The Rendering Dilemma: CSR vs. SSR vs. SSG
This is where most devs screw up. They treat a marketing site like a dashboard.
If you use standard React (Client-Side Rendering), you are serving a skeleton. The browser downloads a minimal HTML file, then fetches a massive JS bundle, executes it, and finally paints the text. For a user with a fast connection, this is fine. For Googlebot, which rations its crawl budget across millions of pages a day, it’s a waste.
I stick to a strict rule: Anything public-facing gets Server-Side Rendered (SSR) or Statically Generated (SSG).
In the Next.js App Router (which you should be using), components are Server Components by default. This is a massive win. The HTML is generated on the server - or at build time - and sent to the browser fully formed.
When I'm building a landing page or a blog post, I want the server to do the heavy lifting. I want the crawler to see this immediately:
```html
<h1>The Best SaaS Boilerplate for 2024</h1>
<p>Stop wasting time setting up authentication...</p>
```
Not this:
```html
<div id="root"></div>
<script src="bundle.js"></script>
```
If I'm building a dynamic SaaS dashboard behind a login, I use Client Components ('use client'). Google doesn't index that stuff anyway. But for the marketing pages? Strictly server-side.
The Metadata API: Dynamic SEO Tags
In the old days of the Pages router, we used next/head. It was clunky.
The App Router’s Metadata API is beautiful. It lets me define SEO tags right next to the logic that fetches the data. This is crucial for dynamic pages - like when I’m doing programmatic SEO (more on that later).
Here is how I set up a dynamic blog post page. I fetch the data from Supabase, and I use that same data to populate the title, description, and Open Graph images.
```tsx
import { createClient } from '@/utils/supabase/server';
import { Metadata } from 'next';

// 1. Fetch data for the page content
async function getPost(slug: string) {
  const supabase = createClient();
  const { data } = await supabase
    .from('posts')
    .select('*')
    .eq('slug', slug)
    .single();
  return data;
}

// 2. Generate Metadata dynamically
export async function generateMetadata(
  { params }: { params: { slug: string } }
): Promise<Metadata> {
  const post = await getPost(params.slug);

  if (!post) {
    return {
      title: 'Post Not Found',
    };
  }

  return {
    title: post.title,
    description: post.excerpt,
    openGraph: {
      title: post.title,
      description: post.excerpt,
      type: 'article',
      url: `https://myapp.com/blog/${params.slug}`,
      images: [
        {
          url: post.cover_image,
          width: 1200,
          height: 630,
        },
      ],
    },
    twitter: {
      card: 'summary_large_image',
      title: post.title,
      description: post.excerpt,
      images: [post.cover_image],
    },
  };
}

// 3. The actual page component
export default async function BlogPost({ params }: { params: { slug: string } }) {
  const post = await getPost(params.slug);
  // render post...
}
```
Notice the openGraph and twitter objects. Social sharing is part of SEO now. If your link looks like trash on Twitter or LinkedIn, nobody clicks. If nobody clicks, your traffic is low. If traffic is low, Google thinks your page is irrelevant.
Programmatic SEO: The Indie Hacker Cheat Code
I don’t have the budget to hire a content team. I can't write 500 articles a month. But I can write code.
Programmatic SEO (pSEO) is the art of generating thousands of landing pages based on a dataset. Think about TripAdvisor. They don't write a unique article for "Hotels in Dublin" and "Hotels in Cork". They have a database of hotels and a template.
I do this with Supabase and Next.js dynamic routes.
Let’s say I’m building a tool for freelancers. I want to rank for "Invoice Template for [Profession]".
- The Database: I create a table in Supabase called professions (Designer, Developer, Plumber, Writer).
- The Route: I create a file structure: app/templates/[profession]/page.tsx.
- The Content: I write one killer sales copy template, and I dynamically inject the profession name.
"Download the best Invoice Template for Plumbers."
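The injection itself is plain string interpolation. Here is a minimal sketch of the copy helpers; the helper names, the naive "s" pluralization, and the product name are my illustrative assumptions, not code from the article:

```typescript
// Hypothetical helpers that turn a profession slug into page copy.
function titleCase(slug: string): string {
  return slug.charAt(0).toUpperCase() + slug.slice(1);
}

function headlineFor(professionSlug: string): string {
  // Naive pluralization ("s" suffix) -- fine for Designers, not for every noun.
  return `Download the best Invoice Template for ${titleCase(professionSlug)}s.`;
}

function metaTitleFor(professionSlug: string): string {
  return `Invoice Template for ${titleCase(professionSlug)}s | My SaaS Tool`;
}
```

In the real page component, these would feed both the h1 and the generateMetadata title for each route.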
But here is the trick. You can't just swap the word. That’s "thin content" and Google will penalize you. You need unique data points.
I use GPT-4 via the OpenAI API to generate a unique 200-word intro for each profession and store it in my Supabase database. Then I pull that into the page.
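Here is a sketch of that enrichment step. The prompt wording, the model name, and the Supabase column are my assumptions; it calls the OpenAI chat completions REST endpoint directly via fetch (the official SDK works just as well), with the API key read from the environment:

```typescript
// Builds the prompt for one profession. Wording is an illustrative assumption.
function buildIntroPrompt(profession: string): string {
  return `Write a unique, 200-word intro for a landing page about invoice templates for ${profession}s. Mention pain points specific to this profession.`;
}

// Calls the OpenAI chat completions endpoint. OPENAI_API_KEY comes from env.
async function generateIntro(profession: string): Promise<string> {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4',
      messages: [{ role: 'user', content: buildIntroPrompt(profession) }],
    }),
  });
  const json = await res.json();
  return json.choices[0].message.content;
}

// Then persist it next to the profession row, e.g. (column name assumed):
// await supabase.from('professions').update({ intro }).eq('slug', slug);
```

I run this once as a script, not on every request, so each pSEO page ships with its unique intro already in the database.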
To pre-render these pages, you need generateStaticParams. It tells Next.js every slug that exists, so all of them get built at build time (SSG), which makes them blazing fast.
```tsx
import { createClient } from '@/utils/supabase/server';

export async function generateStaticParams() {
  const supabase = createClient();
  const { data: professions } = await supabase.from('professions').select('slug');

  return professions?.map((profession) => ({
    profession: profession.slug,
  })) || [];
}
```
Now, when I run npm run build, Next.js generates static HTML for every single profession in my database. I deploy, and boom - I have 500 pages indexed for specific long-tail keywords.
The Sitemap: Automating Discovery
WordPress updates your sitemap.xml automatically. In custom code land, you have to build it.
Do not write a manual sitemap. You will forget to update it.
Next.js has a sitemap.ts file convention now. It allows you to return an array of URLs that the crawler should visit. I connect this directly to my Supabase database. If I add a new blog post or a new pSEO page, the sitemap updates automatically.
```tsx
import { MetadataRoute } from 'next';
import { createClient } from '@/utils/supabase/server';

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const supabase = createClient();
  const baseUrl = 'https://emmanuelasika.com';

  // Get all blog posts
  const { data: posts } = await supabase.from('posts').select('slug, updated_at');

  const blogUrls = posts?.map((post) => ({
    url: `${baseUrl}/blog/${post.slug}`,
    lastModified: new Date(post.updated_at),
    changeFrequency: 'weekly' as const,
    priority: 0.8,
  })) || [];

  return [
    {
      url: baseUrl,
      lastModified: new Date(),
      changeFrequency: 'yearly',
      priority: 1,
    },
    ...blogUrls,
  ];
}
```
This code lives at app/sitemap.ts. When a crawler hits /sitemap.xml, Next.js runs this function and serves valid XML. Zero maintenance.
Structured Data (JSON-LD): Speaking Google's Language
You know those rich results in Google? The ones with the star ratings, the recipe times, or the "Software App" details? That’s Schema.org markup.
Google parses HTML, but it loves JSON-LD. It’s a script tag that explicitly tells the search engine: "This is an Article" or "This is a SaaS Application."
I add this to every single page. For a SaaS product, it looks like this:
```tsx
export default function Page() {
  const jsonLd = {
    '@context': 'https://schema.org',
    '@type': 'SoftwareApplication',
    name: 'My SaaS Tool',
    applicationCategory: 'BusinessApplication',
    operatingSystem: 'Web',
    offers: {
      '@type': 'Offer',
      price: '19.00',
      priceCurrency: 'USD',
    },
  };

  return (
    <section>
      {/* Add JSON-LD to your page */}
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}
      />
      <h1>My SaaS Tool</h1>
    </section>
  );
}
```
Using dangerouslySetInnerHTML feels wrong, but it’s standard practice for injecting JSON-LD. This helps you take up more screen real estate in the search results. More pixels equals more clicks.
Canonical URLs: Preventing Self-Sabotage
Cloud engineering teaches you about redundancy. In SEO, redundancy is bad.
If your site can be accessed via www.myapp.com, myapp.com, and myapp.vercel.app, Google gets confused. Which one is the real one? It splits your page rank authority between them, effectively killing your ranking.
You need a Canonical URL. This is a tag that says, "No matter how you got here, THIS is the official URL."
In the metadata object I showed earlier, I always include this:
```tsx
import { Metadata } from 'next';

export const metadata: Metadata = {
  // ... other tags
  alternates: {
    canonical: 'https://myapp.com/official-page-slug',
  },
};
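For dynamic routes I don't hardcode that string. Here is a sketch of a small normalizer that collapses every host alias onto the one official origin; the alias hostnames and the helper name are my assumptions for illustration:

```typescript
// Hypothetical normalizer: every alias host maps to the canonical origin.
const CANONICAL_ORIGIN = 'https://myapp.com';
const ALIAS_HOSTS = ['www.myapp.com', 'myapp.com', 'myapp.vercel.app'];

function canonicalFor(requestUrl: string): string {
  const url = new URL(requestUrl);
  if (ALIAS_HOSTS.includes(url.hostname)) {
    // Rebuild on the canonical origin, dropping query strings and the like.
    return `${CANONICAL_ORIGIN}${url.pathname}`;
  }
  return requestUrl;
}
```

The output of a helper like this is what goes into alternates.canonical for each dynamic page.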
This is also critical if you cross-post. I often write on my own blog and then syndicate to Medium or Dev.to. If I don't set the canonical link on Medium back to my site, Medium (with its high domain authority) will outrank me for my own content. That hurts.
Performance as a Ranking Factor (Core Web Vitals)
Google cares about user experience (UX). They measure this via Core Web Vitals.
- LCP (Largest Contentful Paint): How fast does the main content load?
- CLS (Cumulative Layout Shift): Does the page jump around while loading?
Next.js helps, but you can still break it.
1. Font Optimization
Don't load Google Fonts via a standard link tag. It causes a layout shift when the font swaps. Use next/font. It downloads the font at build time and hosts it with your assets. Zero layout shift.
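A minimal sketch of that setup in the root layout; the choice of Inter is mine, and any next/font/google export works the same way:

```tsx
// app/layout.tsx (font choice is an illustrative assumption)
import { Inter } from 'next/font/google';

// Downloaded at build time and self-hosted; no runtime request to Google.
const inter = Inter({ subsets: ['latin'], display: 'swap' });

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en" className={inter.className}>
      <body>{children}</body>
    </html>
  );
}
```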
2. Image Optimization
I never use the standard <img> tag. I use next/image.
Why? Because it automatically serves WebP or AVIF formats (which are lighter), lazy loads images that are off-screen, and prevents layout shift by reserving the space before the image loads.
```tsx
import Image from 'next/image';

<Image
  src="/hero.png"
  alt="Dashboard Screenshot"
  width={800}
  height={600}
  priority // Use this for the LCP image (above the fold)
/>
```
Adding priority to the main hero image is a quick win. It tells the browser to load that image ASAP, improving your LCP score.
The Architecture of Internal Linking
One thing I missed from WordPress was the widget system for "Related Posts."
Internal linking is how you pass authority around your site. If your homepage has high authority, linking to your new blog post passes some of that "juice" to it.
In my Next.js apps, I build components specifically for this.
For example, at the bottom of a "UseCase" page, I’ll query Supabase for 3 other random Use Cases and display them. This creates a spiderweb of links that keeps the crawlers on my site longer. Deeper crawl depth means more indexed pages.
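A sketch of the picker behind that component; the row shape and the Fisher-Yates shuffle are my choices for illustration, and in the real component the rows would come from a Supabase select:

```typescript
// Hypothetical row shape for a use-case page.
type PageLink = { slug: string; title: string };

// Picks `count` random pages, never linking a page to itself.
function pickRelated(pages: PageLink[], currentSlug: string, count = 3): PageLink[] {
  const candidates = pages.filter((p) => p.slug !== currentSlug);
  // Fisher-Yates shuffle, then take the first `count`.
  for (let i = candidates.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [candidates[i], candidates[j]] = [candidates[j], candidates[i]];
  }
  return candidates.slice(0, count);
}
```

The component then just maps the result to anchor tags pointing at each slug.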
But don't just link randomly. Link contextually. If I'm writing about Supabase, I link to my PostgreSQL article.
Why I Do This
It sounds like a lot of work. And it is.
But here is the reality of being an Indie Hacker: You don't have ad money. You are competing against VC-backed giants who can burn cash on Google Ads.
SEO is the great equalizer.
If I write better code, structure my data better, and serve my pages faster, I can win. I can rank a side project built in a weekend above a bloated corporate site loaded with tracking scripts.
Coding for SEO isn't just about pleasing the algorithm. It's about building a robust, accessible, and high-performance application. The fact that Google rewards you for it is just the bonus.
So stop trusting create-react-app. Get into the server-side weeds. Control your metadata. Own your traffic.
Read Next
React vs. Vue: Why I Stuck with the React Ecosystem
I moved from WordPress to Cloud Engineering and had to choose a frontend. Here is why I picked React over Vue: ecosystem, job market, and the power of Next.js.
How to Structure a Modern SaaS Codebase
Stop building spaghetti code. Here is a battle-tested, feature-first directory structure for Next.js, Supabase, and Shadcn that scales from MVP to IPO.