MeatButton

Your Database Won't Start. Here's Why and What to Do.

For anyone whose database just stopped working

Your app was working fine. Now it's throwing database errors, or your whole site is down. You SSH into the server and try to restart PostgreSQL or MySQL. It fails. You try again. Same thing. The service just won't come back up.

This is one of the most stressful things that can happen to a live app. Your data is in that database. Every minute it's down, your app is dead. And you're not sure what broke or whether you're about to make it worse.

Here's what's actually going on, and how to figure out which problem you have.

Start here: read the logs

Before you try anything, look at what the database is telling you. It almost always says exactly what's wrong. People skip this step and start guessing. Don't.

Check the service status

sudo systemctl status postgresql

or

sudo systemctl status mysql

This gives you a quick snapshot: whether the service is running or has failed, plus the last few lines of its output. If it says failed, the lines underneath usually explain why.

Get the full story from the journal

sudo journalctl -u postgresql -e

or

sudo journalctl -u mysql -e

The -e flag jumps to the end so you see the most recent entries. This is where the real error messages live. Read them carefully before doing anything else.

Check the database's own log files

The database also writes its own logs, separate from the system journal. On Debian and Ubuntu, PostgreSQL logs to /var/log/postgresql/ and MySQL writes its error log to /var/log/mysql/error.log; other distributions may log into the data directory instead.

These are often more detailed than the journal. Look at the most recent file. The error at the bottom is the one that matters.
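A quick way to pull up the newest log, assuming the Debian/Ubuntu default location (adjust the path for your distribution):

```shell
# Print the tail of the newest PostgreSQL log file, if one exists.
# /var/log/postgresql/ is the Debian/Ubuntu default; other distros may
# log into the data directory instead.
latest=$(ls -t /var/log/postgresql/*.log 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
  tail -n 50 "$latest"
else
  echo "no log files found under /var/log/postgresql/"
fi
```

For MySQL, point the same pattern at /var/log/mysql/error.log.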

Reason #1: The disk is full

This is the most common reason a database won't start. It's also the easiest to miss.

Databases need to write constantly. PostgreSQL writes WAL (write-ahead log) files to ensure data integrity. MySQL writes transaction logs and binary logs. When the disk fills up, these writes fail. The database can't guarantee your data is safe, so it refuses to start. It's not broken — it's protecting you.

Check with:

df -h

If the /var partition (or / if you don't have a separate one) shows 100% used, that's your problem. The database can't write, so it won't run.

The fix: Free space first, then start the database. Look at log files, old backups, and temporary files. Don't delete database files to make room — that's the opposite of helpful. Once you free a few hundred megabytes, the database will usually start right up.
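A sketch of the inspection step, assuming a standard Linux layout — find out which filesystem is full and what's eating it before deleting anything:

```shell
# Which filesystem is at (or near) 100%?
df -h
# Biggest items under /var/log -- old logs are the usual culprit.
# Errors from unreadable entries are suppressed; rerun with sudo for a full view.
du -sh /var/log/* 2>/dev/null | sort -h | tail -n 10
```

On systemd hosts, `sudo journalctl --vacuum-size=200M` is a safe way to reclaim journal space without touching anything the database owns.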

Reason #2: The OOM killer got it

Linux has a built-in mechanism called the OOM (Out of Memory) killer. When your server runs out of RAM, the kernel picks a process to terminate. Databases use a lot of memory, so they're a frequent target.

The database didn't crash because of a bug. The operating system killed it because there wasn't enough RAM to go around. When you try to restart it, it might start and then get killed again within seconds.

Check with:

sudo journalctl -k | grep -i "oom\|killed process"

If you see your database process in the output, that's what happened.

The fix: You either need more RAM, or you need to tune your database to use less. PostgreSQL's shared_buffers and work_mem settings, or MySQL's innodb_buffer_pool_size, control how much memory the database claims. On a small VPS, these need to be dialed down. A database trying to use 2GB of RAM on a 1GB server will get killed every time.
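As an illustration only — not a tuned recommendation, and the file paths are Debian-style defaults — memory settings for a 1 GB VPS might look like:

```
# /etc/postgresql/14/main/postgresql.conf -- illustrative values for a 1 GB VPS
shared_buffers = 128MB    # keep well under total RAM; this is a shared cache
work_mem = 4MB            # per-sort allocation; concurrent queries multiply it

# /etc/mysql/mysql.conf.d/mysqld.cnf -- illustrative value for a 1 GB VPS
[mysqld]
innodb_buffer_pool_size = 256M
```

Restart the service after changing these, then watch the kernel log for a while to confirm the OOM killer has stopped firing.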

Reason #3: Wrong permissions on the data directory

The database's data directory must be owned by the database user. PostgreSQL expects postgres to own /var/lib/postgresql/. MySQL expects mysql to own /var/lib/mysql/.

This breaks when someone runs database commands as root, copies files between servers, restores a backup with the wrong user, or runs chmod or chown on the wrong directory. The database sees that its data files are owned by someone else and refuses to start for security reasons.

Check with:

ls -la /var/lib/postgresql/

or

ls -la /var/lib/mysql/

Everything should be owned by postgres:postgres or mysql:mysql. If you see root in there, that's the problem.

The fix:

sudo chown -R postgres:postgres /var/lib/postgresql/

or

sudo chown -R mysql:mysql /var/lib/mysql/

Then try starting the service again.

Reason #4: The upgrade trap

This one catches a lot of people. You ran a system update — apt upgrade or similar — and it upgraded PostgreSQL from version 14 to 15 (or 13 to 14, or any major version jump). The new version installed, the old version stopped, and now the new version can't start.

Here's why: PostgreSQL stores data in a version-specific directory. Your data lives in /var/lib/postgresql/14/main/. The new PostgreSQL 15 is looking for data in /var/lib/postgresql/15/main/, which is empty. Two PostgreSQL versions, one has the software, the other has the data, and they're not talking to each other.

MySQL can have similar issues. A major version upgrade sometimes changes the format of internal system tables, and the new version refuses to start until you run mysql_upgrade.

The fix: For PostgreSQL, you need to either run pg_upgrade to migrate your data to the new version's directory, or change the configuration to point the new version at the old data directory. This is not something to wing. The upgrade tool has specific steps and they need to be done in order.
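On Debian and Ubuntu, the postgresql-common package ships cluster tools for exactly this situation. A sketch — take a backup and read the pg_upgradecluster man page before running the commented commands:

```shell
# List installed clusters: you should see the old version holding your data
# and the new version with an empty data directory.
if command -v pg_lsclusters >/dev/null 2>&1; then
  pg_lsclusters
else
  echo "pg_lsclusters not found -- not a Debian/Ubuntu postgresql-common setup"
fi
# Typical migration path (destructive if aimed at the wrong cluster --
# back up first):
#   sudo pg_dropcluster --stop 15 main    # remove the empty new cluster
#   sudo pg_upgradecluster 14 main        # upgrade the data from 14 to 15
```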

For MySQL 5.7 and early 8.0 releases, try:

sudo mysql_upgrade

(From MySQL 8.0.16 onward the server runs this upgrade itself at startup, so the separate command is no longer needed.) If the service won't start at all, you may need to start it with --skip-grant-tables first, run the upgrade, then restart normally.

Reason #5: Port conflict

Another instance of the database might already be running. Or another program grabbed the port. PostgreSQL uses port 5432 by default. MySQL uses 3306. If something else is already listening on that port, the database can't bind to it and won't start.

This happens after failed restarts (the old process didn't fully die), or when you have multiple PostgreSQL versions installed and both try to start, or when a Docker container is running a database on the same port as the host database.

Check with:

sudo ss -tlnp | grep 5432

or

sudo ss -tlnp | grep 3306

If something shows up, that process is holding the port. You need to stop it before your database can start.
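A sketch for identifying the port holder — run it under sudo so ss can show processes owned by other users, and swap in 3306 for MySQL:

```shell
# Find the listener on PostgreSQL's default port and inspect the process.
line=$(ss -tlnp 2>/dev/null | grep ':5432 ' || true)
if [ -n "$line" ]; then
  pid=$(printf '%s\n' "$line" | grep -oE 'pid=[0-9]+' | head -n 1 | cut -d= -f2)
  if [ -n "$pid" ]; then
    ps -fp "$pid"    # full command line of whatever holds the port
  else
    echo "listener found but PID hidden -- rerun under sudo"
  fi
else
  echo "nothing is listening on port 5432"
fi
```

If the holder turns out to be a stale database process from a failed restart, stop it through systemctl rather than killing it by hand, so systemd's state stays consistent.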

Reason #6: Corrupted data files

If your server had a hard crash — power loss, kernel panic, the hosting provider rebooted it without a clean shutdown — the database files might be corrupted. The database performs integrity checks on startup and won't come up if it finds something wrong.

The logs will usually mention something about corrupted pages, invalid checksums, or failed recovery. This is the most serious scenario because it means the data on disk isn't in a consistent state.

The fix: This depends entirely on what's corrupted and how badly. PostgreSQL has recovery modes and tools like pg_resetwal. MySQL has innodb_force_recovery settings. But these are last-resort tools that can cause data loss if used incorrectly. If you have a backup, restoring from it is usually safer than trying to repair corrupted files.

Why AI is dangerous for database recovery

Database recovery is one of the worst problems to hand to AI. Here's why:

AI can't see your logs. It doesn't know which error you're getting, which version you're running, how big your dataset is, or what state the files are in. It guesses based on keywords in your question.

AI doesn't know what's destructive. One of the most common AI suggestions for a database that won't start is to reinitialize the data directory — essentially wiping it clean and starting over. If your problem was just a full disk or wrong permissions, that advice just deleted all your data. The fix was two commands. The AI's suggestion was a nuclear option.

AI can't assess before acting. A human engineer looks at the logs first, identifies the specific problem, and then applies the specific fix. AI skips the assessment and jumps straight to a list of things to try. With a database, trying things in the wrong order can turn a recoverable situation into a permanent one.

The stakes are real. Your data is in that database. Customer records, orders, content, user accounts — whatever your app stores. A wrong command doesn't just fail to fix the problem. It can make the data unrecoverable. This is not the time for trial and error.

Database down? Don't guess.

Press the MeatButton and a real expert will look at your actual logs, identify the specific problem, and fix it without risking your data. No guesswork, no AI suggestions that might wipe your database. First one's free.

Get MeatButton