Automating the Boring Stuff: My GitHub Workflow

Oct 2, 2025
Emmanuel Asika

I used to drag-and-drop files. Now I let robots handle it. Here is my complete CI/CD workflow for Next.js and Supabase, from Husky hooks to Semantic Release.

I used to be the guy who dragged and dropped files into FileZilla. It feels embarrassing to admit that now, considering I’m deep into a Master’s in Cloud Computing and architecting systems on AWS, but we all start somewhere. The issue wasn't just the manual effort. It was the anxiety. The constant worry that I missed a file, or that I overwrote a config, or that the live site would just turn white while I frantically refreshed.

That anxiety is useless. It doesn't help you ship. It just slows you down.

Now that I'm building SaaS products with Next.js and Supabase, and aiming to ship fast as an Indie Hacker, I don't have time for manual nonsense. If a task takes more than two minutes and I have to do it more than twice, I automate it. That’s the rule.

My GitHub workflow isn’t just about "Continuous Integration." It’s about preserving my sanity. It’s about being able to merge a Pull Request from my phone while I’m on a bus in Dublin, knowing for a fact that the system won't break.

Here is exactly how I set up my repo, from pre-commit hooks to automated changelogs.

The Philosophy: Robots are Better at Consistency

Humans are great at creative problem solving. We are terrible at repetitive tasks. We get tired, we get bored, we miss details. A script doesn't get tired. It runs the exact same way at 3 AM as it does at 10 AM.

When I shifted from freelancing to engineering scalable systems, the biggest mindset shift was treating the pipeline as a product. The pipeline is the factory. If the factory is broken, it doesn't matter how good the car design is.

In my current stack (usually a T3 stack or standard Next.js with Tailwind and Supabase), the workflow handles three main pillars:

  1. Hygiene: Formatting, linting, type-checking.
  2. Integrity: Testing, building.
  3. Delivery: Versioning, changelogs, deployment.

Let’s break down the layers.

Layer 1: The Local Bouncer (Husky)

Automation starts before the code even leaves my machine. There is nothing more annoying than pushing code, switching context, and then seeing a red X on GitHub three minutes later because of a missing semicolon or an unused variable.

I use Husky to manage git hooks. It acts like a bouncer. You can't get into the repo unless you follow the dress code.

Here is how I set it up in a standard Next.js project:

npm install --save-dev husky lint-staged
npx husky init

In my package.json, I configure lint-staged. This is crucial. I don't want to lint the entire codebase every time I commit a small fix. I only want to lint the files that are currently staged for commit. It makes the process instant.

"lint-staged": { "*.{js,jsx,ts,tsx}": [ "eslint --fix", "prettier --write" ] }

Then, inside the .husky/pre-commit file, I just add:

npx lint-staged

Now, if I try to commit garbage code, the terminal yells at me immediately. I fix it right there. No context switching. No broken builds in the cloud. It forces a level of quality that keeps the main codebase clean.

Layer 2: The GitHub Actions Core

This is where the magic happens. GitHub Actions is free for public repos and has a generous tier for private ones. If you aren't using it, you are working too hard.

I structure my workflows into specific jobs. I don't like one giant monolithic workflow file because it’s a pain to debug. I usually have a ci.yml for pull requests and a deploy.yml for merges to main.

The CI Workflow

This runs on every Pull Request. Its only job is to tell me: "Is this code safe to merge?"

Here is a stripped-down version of what I use for my Next.js projects:

name: CI

on:
  pull_request:
    branches: [ "main" ]
  workflow_dispatch:

jobs:
  build-and-test:
    name: Build & Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'

      - name: Install Dependencies
        run: npm ci

      - name: Lint
        run: npm run lint

      - name: Type Check
        run: npm run type-check # I add a script in package.json: "type-check": "tsc --noEmit"

      - name: Build Project
        run: npm run build
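For reference, here are the npm scripts that workflow assumes exist in package.json. The type-check script is the custom one mentioned above; lint and build are standard Next.js scripts:

```json
{
  "scripts": {
    "lint": "next lint",
    "type-check": "tsc --noEmit",
    "build": "next build"
  }
}
```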

A note on npm ci vs npm install: Always use npm ci in your pipeline. It installs the exact versions specified in your package-lock.json. npm install can sometimes update minor versions if the semver ranges in your package.json allow it, leading to that dreaded "it works on my machine but fails in CI" scenario. npm ci is strict. We like strict.

Caching is King: Notice the cache: 'npm' line in the setup step. Node modules are heavy. Without caching, the workflow has to download the internet every single time it runs. With caching, it grabs the modules from the previous run if the lockfile hasn't changed. This cuts my build times from 4 minutes down to about 90 seconds. Speed matters.

Layer 3: Database Migrations with Supabase

Since I’m betting heavily on Supabase for the backend, the database schema needs to be part of the version control. You cannot just click around in the Supabase dashboard creating tables and then expect your production app to work.

I use the Supabase CLI for this. Locally, I run migrations. In GitHub Actions, I need to ensure those migrations are valid.
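My local loop looks roughly like this. The migration name here is just a placeholder; the commands themselves are standard Supabase CLI:

```shell
# Create a new timestamped SQL file in supabase/migrations/
npx supabase migration new add_profiles_table

# Edit the generated SQL, then rebuild the local dev database
# by replaying every migration from scratch
npx supabase db reset

# Once it works locally, apply the migration to the linked remote project
npx supabase db push
```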

I add a step to my CI pipeline to check for type safety against the database.

- name: Supabase Type Gen Check
  env:
    SUPABASE_ACCESS_TOKEN: ${{ secrets.SUPABASE_ACCESS_TOKEN }}
    PROJECT_REF: ${{ secrets.SUPABASE_PROJECT_ID }}
  run: |
    npx supabase gen types typescript --project-id "$PROJECT_REF" > types/supabase.ts
    # Then check if git detects changes.
    # If types changed but weren't committed, fail the build.
    git diff --exit-code

This is a bit aggressive, but it prevents the UI code from drifting away from the Database reality. If I change a table column but forget to regenerate the TypeScript types, the pipeline fails. It saves me from runtime errors later.

Layer 4: Automated Semantic Release

This is the part most indie hackers skip, but it is the most satisfying part of the workflow. I hate writing changelogs. I hate trying to remember if this update is version 1.1.2 or 1.2.0.

I use Semantic Release to handle this.

It works based on commit messages. This is why I use the "Conventional Commits" standard.

  • fix: button alignment -> This triggers a Patch release (v1.0.0 -> v1.0.1)
  • feat: add dark mode -> This triggers a Minor release (v1.0.0 -> v1.1.0)
  • feat!: remove legacy api (or a BREAKING CHANGE: note in the commit footer) -> This triggers a Major release (v1.0.0 -> v2.0.0)
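To keep myself honest about the commit format, the Husky setup from Layer 1 can also enforce it at commit time. This is an optional extra, assuming commitlint with its conventional-commits config:

```shell
npm install --save-dev @commitlint/cli @commitlint/config-conventional

# Tell commitlint to use the Conventional Commits ruleset
echo "module.exports = { extends: ['@commitlint/config-conventional'] };" > commitlint.config.js

# Add a commit-msg hook so a badly formatted message is rejected immediately
echo 'npx --no -- commitlint --edit "$1"' > .husky/commit-msg
```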

Here is the workflow config for release.yml. This only runs when code hits the main branch.

name: Release

on:
  push:
    branches:
      - main

jobs:
  release:
    name: Release
    runs-on: ubuntu-latest
    permissions:
      contents: write # Needed to create tags/releases
      issues: write
      pull-requests: write
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Install Dependencies
        run: npm ci

      - name: Semantic Release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: npx semantic-release

When this runs, it analyzes my commits since the last release, calculates the new version number, creates a GitHub Release, generates a changelog automatically, and posts it.
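Semantic Release has sensible defaults, but I like pinning the behavior explicitly in a config file. A minimal .releaserc.json sketch for a project that publishes GitHub Releases rather than an npm package might look like this:

```json
{
  "branches": ["main"],
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/github"
  ]
}
```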

It turns a 20-minute administrative task into zero minutes. It also makes the project look professional. When you look at the repo, you see clean version tags and detailed release notes. It builds trust, even if I'm the only one working on it.

Layer 5: Dependabot (The Necessary Evil)

Security isn't a feature; it's a requirement. Especially when you are dealing with cloud infrastructure. Dependencies rot. Vulnerabilities are found in npm packages daily.

I enable Dependabot, but I configure it to not annoy me to death. I don't need a notification every time a dev-dependency has a minor patch.

Here is my .github/dependabot.yml:

version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
      day: "saturday"
    open-pull-requests-limit: 10
    groups:
      # Group all minor and patch updates together to reduce noise
      dependencies:
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"

By grouping updates, Dependabot opens one PR on Saturday that updates multiple packages, rather than spamming me with 15 emails on a Tuesday morning. I review the changelog, let the CI pipeline run (which verifies the tests still pass), and merge. Done.
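If you trust the CI gate enough, you can go one step further and auto-merge the grouped PR once checks pass. Here is a sketch using GitHub's official dependabot/fetch-metadata action; it assumes "Allow auto-merge" is enabled in the repo settings and branch protection requires the CI checks:

```yaml
name: Dependabot Auto-Merge

on: pull_request

permissions:
  contents: write
  pull-requests: write

jobs:
  auto-merge:
    runs-on: ubuntu-latest
    if: github.actor == 'dependabot[bot]'
    steps:
      - name: Fetch Dependabot metadata
        id: metadata
        uses: dependabot/fetch-metadata@v2
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}

      # Only auto-merge patch and minor bumps; major bumps still get a human review
      - name: Enable auto-merge
        if: steps.metadata.outputs.update-type != 'version-update:semver-major'
        run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```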

The "Why" Behind the Effort

Some of you might look at this and think, "Emmanuel, this is over-engineering. Just git push and go to bed."

But here is the thing about freedom, which is what we are all chasing as indie hackers and engineers: freedom requires discipline.

If my deployment process is manual, I am tethered to my laptop. I have to remember steps. I have to be careful.

If my deployment process is automated, I am free to focus on the code. I am free to focus on the architecture. I can break things locally, but I can't break the build pipeline because the pipeline protects itself.

When I'm working on Cloud coursework or trying to get a SaaS MVP out the door, cognitive load is my most expensive resource. I refuse to spend it on running npm run build manually.

Automating the boring stuff allows me to operate like a team of five, even when it's just me.

So, stop dragging files. Stop running tests manually. Script it once, set up the YAML, and let the GitHub robots do the heavy lifting. You have better things to build.

#automating #IndieHacker
