MeatButton

Your App Was Fast. Now It Takes 10 Seconds to Load. Here's Why.

For anyone whose app worked fine until it got real users

You built your app. It worked great. Pages loaded instantly. Then real users showed up — a few hundred, maybe a thousand — and everything slowed to a crawl. Pages that used to load in half a second now take five, eight, ten seconds. Sometimes they time out completely.

You didn't change anything. The code is the same. The server has plenty of CPU and memory. So what happened?

Almost always, it's the database. Specifically, it's the queries your app sends to the database. They worked fine when the database had a few hundred rows. Now it has a hundred thousand rows, and those same queries are choking.

This isn't a mystery. There are a handful of common reasons this happens, and all of them are fixable. Let's walk through them in plain English.

The most common cause: no indexes

This is the big one. If your database is slow, start here.

Think about a physical book. If you want to find every mention of "authentication" in a 500-page book, you have two options. You can read every single page from start to finish, scanning for the word. Or you can flip to the index in the back, look up "authentication," and go directly to pages 34, 112, and 387.

A database index works the same way. Without an index, when your app asks the database "find me all the orders for customer #4829," the database has to look at every single row in the orders table. Every one. If you have 50 orders, that's fine — it takes a fraction of a millisecond. If you have 500,000 orders, it has to scan all 500,000 rows to find the 12 that belong to customer #4829. That takes a while.

With an index on the customer column, the database jumps straight to those 12 rows. Instead of examining half a million rows, it examines 12. The query goes from several seconds to a few milliseconds.

The rule is simple: if you filter by a column, sort by a column, or join on a column, that column should probably have an index. The usual candidates are foreign key columns (customer_id, user_id, post_id), status or type columns you filter on, and timestamp columns you sort by.

Adding an index is usually one line:

CREATE INDEX idx_orders_customer_id ON orders (customer_id);

That one line can turn a 5-second query into a 5-millisecond query. It's that dramatic.
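To see the effect for yourself, here's a small runnable sketch using Python's built-in sqlite3 module. The table and column names are made up for the demo; the same principle applies to PostgreSQL and MySQL:

```python
import sqlite3
import time

# Throwaway in-memory database with a made-up orders table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
db.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 10_000, 9.99) for i in range(200_000)],
)

query = "SELECT COUNT(*) FROM orders WHERE customer_id = 4829"

def best_of_three(sql):
    # Take the best of three runs to smooth out timing noise.
    best = float("inf")
    for _ in range(3):
        start = time.perf_counter()
        db.execute(sql).fetchone()
        best = min(best, time.perf_counter() - start)
    return best

no_index = best_of_three(query)    # full scan: reads all 200,000 rows

db.execute("CREATE INDEX idx_orders_customer_id ON orders (customer_id)")

with_index = best_of_three(query)  # index lookup: jumps straight to the matches

print(f"full scan: {no_index * 1000:.2f} ms, indexed: {with_index * 1000:.3f} ms")
```

On a typical machine the gap is two to three orders of magnitude, and it only widens as the table grows.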

The N+1 problem: death by a thousand queries

This one is sneaky because each individual query is fast. The problem is how many of them you're running.

Here's the scenario. You have a blog. You load the 50 most recent posts. That's one query. Fast. Then, for each post, your app makes a separate query to load the author's name. That's 50 more queries. Then for each post, another query to count the comments. That's another 50. You started with one query and ended up with 101.

Now multiply that by 30 users loading the page at the same time. That's 3,030 queries hitting your database every time the page loads. The database spends more time on per-query overhead and network round trips than on actual work.

This is called the N+1 problem. You load N items (50 posts), then make 1 additional query per item. It should be 2 queries total: one to get the posts, and one to get all the authors for those posts in a single batch. Instead it's 51.

If you're using a framework like Rails, Django, Laravel, or an ORM like Prisma or Sequelize, you've probably got N+1 queries happening without realizing it. The ORM makes it easy to write code that looks clean but generates terrible SQL behind the scenes.

The fix is called eager loading — telling your ORM to load related data in batches instead of one at a time. Every major ORM supports this:

Rails: Post.includes(:author)
Django: Post.objects.select_related('author')
Prisma: prisma.post.findMany({ include: { author: true } })
Sequelize: Post.findAll({ include: Author })

Same result. Two queries instead of 51 — and the comment counts batch the same way.
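To see the difference concretely, here's a runnable sketch in Python with sqlite3 (toy schema, made-up names) that counts the queries each approach actually issues:

```python
import sqlite3

# Toy blog schema: 50 posts spread across 10 authors.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
""")
db.executemany("INSERT INTO authors VALUES (?, ?)", [(i, f"author {i}") for i in range(10)])
db.executemany("INSERT INTO posts VALUES (?, ?, ?)", [(i, i % 10, f"post {i}") for i in range(50)])

# Count every SELECT the database receives.
queries = []
db.set_trace_callback(lambda sql: queries.append(sql) if sql.startswith("SELECT") else None)

# The N+1 pattern: one query for the posts, then one per post for its author.
posts = db.execute("SELECT id, author_id FROM posts").fetchall()
for _, author_id in posts:
    db.execute("SELECT name FROM authors WHERE id = ?", (author_id,)).fetchone()
n_plus_1 = len(queries)  # 51 queries

# The batched version: one query for the posts, one for all their authors.
queries.clear()
posts = db.execute("SELECT id, author_id FROM posts").fetchall()
ids = {author_id for _, author_id in posts}
placeholders = ",".join("?" * len(ids))
authors = dict(db.execute(
    f"SELECT id, name FROM authors WHERE id IN ({placeholders})", tuple(ids)
).fetchall())
batched = len(queries)  # 2 queries

print(n_plus_1, batched)
```

The IN (...) batch query is essentially what eager loading generates for you behind the scenes.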

Loading everything when you need almost nothing

Your users table has 25 columns. Name, email, address, bio, profile picture URL, settings, preferences, two-factor config, password hash, and a bunch of timestamps. When you display a list of usernames on a page, your app loads all 25 columns for every user, even though you only need the name.

This is what SELECT * does — it grabs everything. When the table is small, nobody notices. When you're loading 500 user records across 25 columns, you're moving a lot of data between the database and your app for no reason. That takes time and memory.

The fix: select only what you need.

-- Instead of this
SELECT * FROM users;

-- Do this
SELECT id, name FROM users;

In an ORM, use .select() or .only() or whatever your framework calls it. You're telling the database "I only need these two columns, don't bother with the rest."
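A rough way to see the cost, sketched with sqlite3 and a made-up users table padded with wide text columns (string length stands in for bytes over the wire):

```python
import sqlite3

# Hypothetical users table where bio and settings make each row wide.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, bio TEXT, settings TEXT)")
db.executemany(
    "INSERT INTO users (name, email, bio, settings) VALUES (?, ?, ?, ?)",
    [(f"user {i}", f"u{i}@example.com", "x" * 2000, "y" * 500) for i in range(500)],
)

# SELECT * drags every column back for every row.
wide = db.execute("SELECT * FROM users").fetchall()
wide_size = sum(len(str(row)) for row in wide)      # rough payload size

# Selecting only what the page needs moves a fraction of the data.
narrow = db.execute("SELECT id, name FROM users").fetchall()
narrow_size = sum(len(str(row)) for row in narrow)

print(wide_size, narrow_size)
```

Same 500 users on screen either way; one version moves over a megabyte, the other a few kilobytes.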

No pagination: loading 10,000 rows at once

Your app has a page that shows "all orders." When you had 50 orders, that was fine. Now you have 15,000 orders and the page tries to load every single one of them at once. The database has to gather all 15,000 rows, your app has to process them, serialize them to JSON, send them over the network, and the browser has to render them.

Nobody is reading 15,000 rows on a screen. Show 25 or 50 at a time and add pagination. This isn't just a UI improvement — it's a database performance improvement. Loading 50 rows is orders of magnitude faster than loading 15,000.

SELECT * FROM orders ORDER BY created_at DESC LIMIT 50 OFFSET 0;

For the next page:

SELECT * FROM orders ORDER BY created_at DESC LIMIT 50 OFFSET 50;
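Those two queries differ only in the OFFSET, so the pattern is easy to wrap in a helper. A minimal sketch in Python with sqlite3 (the orders schema here is illustrative):

```python
import sqlite3

def page_query(page: int, per_page: int = 50):
    """Build the LIMIT/OFFSET pair for a 1-indexed page number."""
    offset = (page - 1) * per_page
    sql = "SELECT * FROM orders ORDER BY created_at DESC LIMIT ? OFFSET ?"
    return sql, (per_page, offset)

# Demo against a throwaway table of 15,000 orders.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at INTEGER)")
db.executemany("INSERT INTO orders VALUES (?, ?)", [(i, i) for i in range(15_000)])

sql, params = page_query(1)
first_page = db.execute(sql, params).fetchall()   # the 50 newest orders
sql, params = page_query(2)
second_page = db.execute(sql, params).fetchall()  # the next 50

print(len(first_page), first_page[0][0], second_page[0][0])
```

Whatever page the user asks for, the database only ever hands back 50 rows.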

If your app has any endpoint that returns an unlimited number of rows, that's a ticking time bomb. It works today. It won't work when the table grows.

How to find which queries are slow

You know your app is slow, but you need to figure out exactly which queries are causing the problem. Here's how.

Browser DevTools: start from what users see

Open your browser's developer tools (F12), go to the Network tab, and reload the page. Look at the timing column. If an API call takes 3 seconds, that tells you which endpoint is slow. Now you know where to look in your code.

PostgreSQL: pg_stat_statements

If you're on PostgreSQL, enable the pg_stat_statements extension. It tracks every query your database runs and tells you how often each one runs and how long it takes. This is the single most useful tool for finding slow queries.

-- Enable it (once)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Find the slowest queries
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;

This shows you the 20 slowest queries, how many times each one was called, and how long it took on average. Often you'll find one or two queries that are responsible for 90% of the slowness.

MySQL: the slow query log

MySQL has a built-in slow query log. Enable it and set a threshold — say, any query taking longer than 1 second:

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;

Then check the log file. It records every query that exceeds your threshold, including how long it took and how many rows it examined. If a query examined 200,000 rows to return 5 results, that's a query that needs an index.

EXPLAIN: see what the database is actually doing

Once you've found a slow query, put EXPLAIN in front of it:

EXPLAIN SELECT * FROM orders WHERE customer_id = 4829;

This shows you the database's plan for running the query. In PostgreSQL, the key thing to look for is whether it says Seq Scan (reading every row) or Index Scan (using an index); in MySQL, a full scan shows up as type: ALL. If you see a full scan on a big table, you need an index on whatever column you're filtering by.

Why AI writes queries that don't scale

If AI helped build your app — whether it's Copilot, ChatGPT, Cursor, or a vibe-coded project — there's a good chance your slow queries came from AI-generated code. Here's why.

AI writes the simplest query that works. When you ask AI to "load all blog posts with their authors," it generates code that works perfectly with your 10 test records. It doesn't think about what happens with 100,000 records because it doesn't know how big your data will get. It has no concept of scale.

AI doesn't add indexes. Creating indexes is a separate step from writing queries. AI generates the query or the migration that creates the table, but it rarely adds the indexes that make those queries fast. It solves the problem you asked about (make this feature work) and ignores the problem you didn't ask about (make it fast at scale).

ORMs hide the SQL. When AI writes ORM code, nobody sees the actual SQL that gets generated. The code looks clean and simple — Post.find_all().map(p => p.author.name) — but behind the scenes it's firing off hundreds of queries. You'd never write 100 queries by hand. But the ORM makes it invisible, and AI doesn't flag it.

AI doesn't profile. A human developer who notices a page is slow will look at the query log, run EXPLAIN, and find the bottleneck. AI doesn't monitor running applications. It generates code and moves on. The performance problems only surface later, in production, with real data and real users.

This is one of the most common patterns we see: an app that was built with AI assistance, worked great in development, and fell apart under real load. The code isn't wrong — it's just not optimized. And optimization is something AI consistently skips.

Quick checklist: making your database fast

  1. Add indexes on every column you filter, sort, or join on
  2. Fix N+1 queries by using eager loading / joins instead of per-item lookups
  3. Add pagination to every endpoint that returns a list — never return unlimited rows
  4. Select only the columns you need instead of SELECT *
  5. Enable query logging so you can see what's actually slow
  6. Run EXPLAIN on your slowest queries to see if they're using indexes

Most apps that feel slow can be made fast by fixing just the first two items on this list. Indexes and N+1 queries are responsible for the vast majority of database performance problems in web apps.

Still slow after trying all this?

Sometimes the problem is deeper — bad schema design, missing connection pooling, queries that need to be completely rewritten. Press the MeatButton and a real database expert will look at your actual queries, your actual data, and tell you exactly what to fix. First one's free.

Get MeatButton