
How I Built Pass My Essay to Solve the AI Detection Problem

Oct 14, 2025
9 min read
Emmanuel Asika

I built a SaaS to bypass AI detection using Next.js, Supabase, and Stripe. A technical deep dive into algorithms, serverless architecture, and shipping fast.

I didn't build "Pass My Essay" because I wanted to help people cheat. I built it because the current state of AI detection is broken. It is a probabilistic guessing game that is punishing honest students and writers. You spend hours researching, writing, and editing, only for some black-box algorithm to flag your work as 88% AI-generated because you used Grammarly to fix your comma splices.

That paranoia is real. I've felt it. And as a developer moving from the freelance grind to building my own scalable systems, I saw a technical challenge I couldn't ignore.

So I built a SaaS to fix it.

Here is exactly how I built Pass My Essay, the tech stack I chose, the cloud architecture decisions involved, and the code that powers the humanization engine.

The Problem with Detection (And the Opportunity)

To build a solution, you have to understand the adversary. AI detectors like GPTZero or Turnitin work primarily on two metrics: perplexity and burstiness.

Perplexity measures how surprising the text is to a language model. To a standard LLM like GPT-4, a sentence with low perplexity is highly predictable. Humans are chaotic. We write weird sentences. Our perplexity is high.

Burstiness measures the variation in sentence structure. AI writes with a monotonous rhythm. Subject-verb-object. Subject-verb-object. Humans vary their cadence. Short sentence. Long, winding explanation that drags on for a bit. Then a punchline.

My goal wasn't just to "rewrite" text. It was to inject high perplexity and burstiness without losing the original meaning. It is a data transformation problem.
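To make "burstiness" concrete, here is a rough proxy I can sketch in TypeScript: the variance of sentence lengths. This is my own illustrative metric, not a formula any detector publishes, but it captures the idea.

```typescript
// Rough burstiness proxy: variance of sentence lengths in words.
// High variance = varied cadence = more "human" rhythm.
function burstiness(text: string): number {
  const sentences = text
    .split(/[.!?]+/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
  if (sentences.length === 0) return 0;

  const lengths = sentences.map((s) => s.split(/\s+/).length);
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  return (
    lengths.reduce((sum, len) => sum + (len - mean) ** 2, 0) / lengths.length
  );
}
```

Monotonous subject-verb-object output scores near zero; prose that mixes one-word fragments with long, winding sentences scores much higher.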

The Stack: Leaving PHP Behind

I spent years in the WordPress ecosystem. I know PHP, I know the hooks, I know the loop. But for this project, I needed speed, reactivity, and a modern developer experience. I didn't want to manage a VPS or worry about caching plugins.

I went with what I call the "Indie Cloud Stack":

  1. Framework: Next.js 14 (App Router)
  2. Language: TypeScript
  3. Styling: Tailwind CSS + Shadcn UI
  4. Backend/Auth: Supabase
  5. Hosting: Vercel
  6. Payments: Stripe

This stack gives me freedom. I can spin up a new feature in an hour. The App Router in Next.js 14 took some getting used to - server components are a paradigm shift - but the performance benefits on the edge are worth it.

Designing the Architecture

Since I'm currently deep-diving into cloud computing for my Master's, I look at every project through the lens of scalability and cost. I couldn't just wrap the OpenAI API and call it a day. That is too expensive and too slow for long-form essays.

I needed a stateless architecture. Here is the high-level flow:

  1. User inputs text (up to 2,000 words).
  2. Frontend sends a POST request to a protected API route.
  3. Server validates the session (Supabase Auth).
  4. Server checks the user's credit balance in the database.
  5. If valid, the text is chunked.
  6. Chunks are sent to the processing engine (a mix of custom prompts and fine-tuned models).
  7. Result is aggregated and returned.
  8. Database transaction deducts credits.
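Step 5 deserves a note: long essays blow past token limits, so the text has to be split before anything hits the model. A simplified sketch of paragraph-boundary chunking (the production pipeline may handle overlap and edge cases differently):

```typescript
// Split text into chunks of roughly `maxWords` words,
// breaking on paragraph boundaries so sentences stay intact.
function chunkText(text: string, maxWords = 400): string[] {
  const paragraphs = text.split(/\n\s*\n/).filter((p) => p.trim().length > 0);
  const chunks: string[] = [];
  let current: string[] = [];
  let currentWords = 0;

  for (const para of paragraphs) {
    const words = para.trim().split(/\s+/).length;
    // Flush the current chunk if adding this paragraph would overflow it.
    if (currentWords + words > maxWords && current.length > 0) {
      chunks.push(current.join('\n\n'));
      current = [];
      currentWords = 0;
    }
    current.push(para.trim());
    currentWords += words;
  }
  if (current.length > 0) chunks.push(current.join('\n\n'));
  return chunks;
}
```

Chunking on paragraph boundaries matters: splitting mid-sentence makes the rewritten halves disagree with each other when you stitch them back together.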

The Database Schema

Supabase makes this incredibly easy. I'm using Postgres, but I interact with it like a Firebase developer. Here is the core schema for the user credits system. I keep it simple.

```sql
create table profiles (
  id uuid references auth.users on delete cascade,
  email text,
  credits_balance int default 0,
  created_at timestamp with time zone default timezone('utc'::text, now()),
  primary key (id)
);

-- RLS must be switched on before any policy takes effect
alter table profiles enable row level security;

-- RLS policy so users can only read their own data
create policy "Users can view own profile"
  on profiles for select
  using ( auth.uid() = id );
```

The beauty of Supabase is Row Level Security (RLS). I don't have to write complex middleware to check if User A is allowed to see User B's data. The database engine handles that logic.

The Engine: How We Humanize

The core value proposition is the rewriting logic. I can't give away the entire proprietary prompt chain, but I can explain the engineering behind it.

Standard LLMs are trained to be helpful and concise. To pass AI detection, we need the model to be verbose and slightly unpredictable.

I created a specialized API route in Next.js to handle this. One major issue I faced was Vercel's execution timeout. On the Pro plan, you get about 60 seconds for a serverless function. If a user submits a 2,000-word essay, processing can take longer than that.

The solution? Streaming responses or asynchronous processing.

For the MVP, I stuck with streaming. It provides immediate feedback to the user so they know the system hasn't crashed. Here is how I set up the API route handler using the Vercel AI SDK:

```typescript
import { OpenAIStream, StreamingTextResponse } from 'ai';
import { Configuration, OpenAIApi } from 'openai-edge';

// Use the edge runtime for speed
export const runtime = 'edge';

const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(config);

export async function POST(req: Request) {
  const { content, mode } = await req.json();

  // Prompt engineering logic based on 'mode' (Standard vs Aggressive)
  const systemPrompt =
    mode === 'aggressive'
      ? 'Rewrite this text with high lexical diversity...'
      : 'Paraphrase this text naturally...';

  const response = await openai.createChatCompletion({
    model: 'gpt-4',
    stream: true,
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: content },
    ],
    temperature: 0.9, // Higher temp = more randomness = less detection
  });

  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
```

Notice the runtime = 'edge' config. This is crucial. It allows the function to start up instantly without the cold start times of standard Node.js serverless functions. However, edge functions have limitations - they can't run standard Node libraries. You have to be careful with your imports.

The Frontend: Building Trust with UI

When you are selling a tool that promises to save a student's grade or a writer's job, the UI needs to look professional. It can't look like a hacky script.

I used Shadcn UI. It is not a component library you install as a dependency; it's a set of components you copy and paste into your project. This gives you total control over the code.

For the editor, I needed a split-screen view: Original on the left, Humanized on the right. I used a simple grid layout with Tailwind.

```tsx
<div className="grid grid-cols-1 md:grid-cols-2 gap-6 h-full">
  <div className="p-4 border rounded-lg bg-white shadow-sm">
    <textarea
      className="w-full h-full resize-none focus:outline-none"
      placeholder="Paste your AI text here..."
      value={input}
      onChange={(e) => setInput(e.target.value)}
    />
  </div>
  <div className="p-4 border rounded-lg bg-gray-50 shadow-inner relative">
    {isLoading ? (
      <div className="absolute inset-0 flex items-center justify-center">
        <Spinner />
      </div>
    ) : (
      <div className="prose">{output}</div>
    )}
  </div>
</div>
```

It is simple code, but it works. The goal is to reduce friction. Paste. Click. Done.

Handling Payments with Stripe (The Tricky Part)

Integrating payments is where most Indie Hackers get stuck. I didn't want a subscription model initially; I wanted a credit-based system (Pay-as-you-go). This feels fairer to students who might only need the tool once a month.

The flow looks like this:

  1. User clicks "Buy 5000 Words".
  2. Redirect to Stripe Checkout.
  3. User pays.
  4. Stripe sends a checkout.session.completed webhook to my backend.
  5. My backend updates the Supabase database.

The webhook handler is critical. If this fails, you took the user's money and gave them nothing. That is a support nightmare. You must handle idempotency (making sure you don't credit them twice for the same event).

Here is a stripped-down version of my webhook handler:

```typescript
import Stripe from 'stripe';
import { supabaseAdmin } from '@/lib/supabaseAdmin';

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function POST(req: Request) {
  // App Router gives us the raw body directly; no body-parser workarounds needed
  const rawBody = await req.text();
  const signature = req.headers.get('stripe-signature')!;

  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(
      rawBody,
      signature,
      process.env.STRIPE_WEBHOOK_SECRET!
    );
  } catch (err) {
    return new Response(`Webhook Error: ${(err as Error).message}`, { status: 400 });
  }

  if (event.type === 'checkout.session.completed') {
    const session = event.data.object as Stripe.Checkout.Session;
    const userId = session.metadata!.userId;
    const creditsPurchased = parseInt(session.metadata!.credits, 10);

    // Atomic update of credits via a Postgres function
    const { error } = await supabaseAdmin.rpc('increment_credits', {
      user_id: userId,
      amount: creditsPurchased,
    });
    if (error) console.error('Credit update failed:', error);
  }

  return new Response(null, { status: 200 });
}
```
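The stripped-down handler skips the idempotency guard I mentioned. Stripe retries webhooks on failure, so the same event can arrive twice. One minimal approach: insert the event id into a table with a unique constraint and treat a conflict as "already handled." The `stripe_events` table name and the injected insert function below are my own illustration, not a Supabase built-in.

```typescript
// Idempotency guard: record each Stripe event id exactly once.
// The insert function is injected; in the app it would wrap something
// like supabaseAdmin.from('stripe_events').insert({ id: eventId }).
async function alreadyProcessed(
  insertEvent: (id: string) => Promise<{ error: { code: string } | null }>,
  eventId: string,
): Promise<boolean> {
  const { error } = await insertEvent(eventId);
  // Postgres unique-violation code 23505: we have seen this event before.
  if (error?.code === '23505') return true;
  if (error) throw new Error('Could not record event: ' + error.code);
  return false;
}
```

In the webhook, run this check before the credit RPC and return 200 early when it reports a duplicate, so Stripe stops retrying.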

I use a Postgres function (increment_credits) via RPC. This is safer than reading the current balance and adding to it in the application logic, which can lead to race conditions.
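For completeness, here is roughly what such an `increment_credits` function looks like in Postgres. This is a simplified sketch, not my exact production function:

```sql
-- Atomically add credits to a user's balance.
-- Doing the arithmetic inside the database avoids
-- read-modify-write races in application code.
create or replace function increment_credits(user_id uuid, amount int)
returns void as $$
  update profiles
  set credits_balance = credits_balance + amount
  where id = user_id;
$$ language sql;
```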

The Cloud Engineering Perspective: Scaling

Right now, Pass My Essay runs on Vercel's infrastructure. It's serverless. It scales down to zero when nobody uses it (costing me nothing) and scales up instantly when traffic hits.

But as I learn more about AWS and Azure in my Master's, I'm thinking about the next step. If traffic explodes, Vercel gets expensive. The markup on bandwidth and serverless execution is the price you pay for convenience.

The future architecture would likely involve moving the heavy lifting (the actual text processing) to a containerized service on AWS ECS or perhaps Azure Functions. This would give me more control over the timeout limits and allow me to implement more complex, multi-stage processing pipelines without worrying about the 60-second HTTP limit.

For example, I could use a message queue (like Amazon SQS). The user submits the text, gets a "Processing" ID, and the frontend polls for the result. This decouples the user interface from the backend processing. It's robust. It's how enterprise systems are built.
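The frontend half of that queue-based flow is small. A sketch of the polling side, with the status fetcher injected so it is easy to test (the job-status shape and any `/api/jobs/:id` endpoint are hypothetical, not something I have shipped):

```typescript
// Poll a job-status endpoint until the job completes or we give up.
// `getStatus` is injected; in the browser it would wrap fetch()
// against a hypothetical /api/jobs/:id route.
async function pollResult(
  getStatus: (id: string) => Promise<{ status: string; result?: string }>,
  jobId: string,
  intervalMs = 2000,
  maxAttempts = 30,
): Promise<string> {
  for (let i = 0; i < maxAttempts; i++) {
    const job = await getStatus(jobId);
    if (job.status === 'complete' && job.result) return job.result;
    if (job.status === 'failed') throw new Error('Processing failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for result');
}
```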

But for now? The Indie Hacker mindset says: stick with what ships. Vercel is fine.

Overcoming the "AI Wrapper" Stigma

There is a lot of noise on Twitter/X about SaaS apps just being "wrappers" around OpenAI. People say, "Why should I pay you when I can use ChatGPT?"

Here is the reality: Convenience is a product.

Yes, you could spend 45 minutes prompt engineering ChatGPT to bypass detection. You could tweak the temperature, provide few-shot examples, and iterate until it works. Or, you can pay Pass My Essay $5 to do it in 3 seconds.

I am not selling the AI model. I am selling the workflow. I am selling the result.

Furthermore, the "wrapper" argument ignores the backend complexity. The credit system, the auth, the history tracking, the fine-tuning parameters injected into the system prompt: that is the software engineering. The LLM is just the database; the application is the value.

Deployment and Reality Check

When I finally pushed the deploy button, nothing happened. No fireworks. No viral tweet immediately.

That is the reality of software engineering that tutorials don't show you. You build the system, you configure the DNS, you verify the SSL, and then you sit there. The code is only 20% of the battle. The rest is distribution.

But technically? It held up. The switch from WordPress to Next.js was the right call. The app feels instant. The navigation is snappy. Supabase handles the auth flow seamlessly, sending magic links and handling password resets without me writing a single line of email logic.

What's Next?

I am looking at fine-tuning open-source models (like Llama 3 or Mistral) to run on my own GPU instances. This would reduce my dependency on OpenAI and allow for even more specific "humanization" patterns that proprietary models refuse to generate due to safety guardrails.

Building Pass My Essay was a crash course in modern full-stack development. It forced me to understand edge runtimes, database security policies, and payment webhooks on a deep level.

If you are a developer stuck in tutorial hell or clinging to older stacks like PHP because it's comfortable: stop. Build something real. Pick a problem you have, even if it's just wanting to bypass an annoying detector, and engineer a solution.

The code is the easy part. Deciding to ship is where the real work happens.

#how #IndieHacker
