Your VPS Disk Space Is Full and Everything Just Broke
One minute everything is fine. The next, your app crashes. Your database won't save anything. You try to SSH in and it's slow or won't connect at all. Maybe you see an error that says "No space left on device." Maybe things just silently stop working with no error at all.
Your server's hard drive is full. And when a disk is full, everything breaks at once.
Why everything breaks at the same time
Every program on your server needs to write things to disk. Your database writes data. Your web server writes logs. Your app writes temporary files. Background processes write status files. Even logging in requires writing a session file.
When the disk hits 100%, none of that can happen. It's not that one thing breaks — it's that the shared resource they all depend on is gone. So everything fails simultaneously, in different ways, with different error messages. It looks like your whole server exploded, but the cause is one simple thing: no room left.
The silent killer
This is the frustrating part: a full disk doesn't warn you. There's no built-in alarm. No email. No dashboard notification. Your 25GB VPS slowly fills up over weeks or months while everything works perfectly. Then one day it hits 100% and everything dies at once.
It feels sudden, but it's been coming for a long time. You just couldn't see it.
What filled up your disk
It's almost always one of these:
1. Log files (the #1 culprit)
Every app, every web server, every service on your machine is writing log files. Nginx writes access logs and error logs. Your app framework writes its own logs. Your database writes logs. These files grow forever. Nobody cleans them up. A single Nginx access log can quietly grow to 10GB+ if your site gets decent traffic.
2. Docker images and build layers
If you use Docker, every time you rebuild your app it creates new image layers. Old images don't get deleted. Old containers that stopped running don't get cleaned up. Their volumes stick around. After a few months of deploys, Docker can easily eat 10-15GB without you realizing it.
3. Old deployments and backups
Some deploy tools keep old versions of your app around so you can roll back. That's nice in theory, but each copy takes up space. Same with database backups — if you set up automated backups but never set up automated cleanup, they pile up.
4. System journals
Linux keeps its own logs through a service called journald. By default it's allowed to grow to roughly 10% of the filesystem (capped at 4GB) before it starts trimming itself. On a small VPS, that's still several gigabytes of disk you probably never budgeted for.
5. Package manager caches
Every time you install or update software, your package manager (apt, yum) downloads files and caches them. Over time this cache grows. It's usually not the biggest offender, but on a 25GB disk, every gigabyte counts.
How to check and fix it
If you can still SSH into your server (it might be slow), here's what to do:
See how full the disk is
df -h
This shows every disk and how full it is. You're looking for the one mounted at / — that's your main drive. If it says 100%, that's your problem.
Find what's eating space
du -sh /var/log/*
This shows how big each file and folder under /var/log is. You'll probably find one or two massive files here. Nginx logs, syslog, or application logs are the usual suspects.
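That only looks inside the log directory. If the space is hiding somewhere else, two standard GNU commands can sweep the whole disk (the -x / -xdev flags keep them on the root filesystem instead of descending into other mounts):

```shell
# Biggest top-level directories, largest first
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -15

# Individual files over 500MB anywhere on this filesystem
find / -xdev -type f -size +500M -exec ls -lh {} \; 2>/dev/null
```

Work down from the biggest directory and rerun du one level deeper until you hit the actual files.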
Check system journal size
journalctl --disk-usage
This tells you how much space the system journal is using. If it's more than a few hundred MB on a small VPS, it's part of the problem.
Check Docker
docker system df
This shows how much space Docker is using for images, containers, and volumes. If it's significant, clean it up with:
docker system prune -a
Warning: This deletes all unused images, stopped containers, networks, and build cache. It does not remove volumes unless you add --volumes. If you need to keep old images for rollback, be selective about what you remove.
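If the blanket prune feels too risky, you can clean up piece by piece instead. These are standard Docker CLI subcommands; the volume step is listed separately on purpose so you can review it first:

```shell
# Remove only dangling images (untagged leftovers from old rebuilds)
docker image prune -f

# Remove stopped containers
docker container prune -f

# Volumes: list first, then prune. On recent Docker versions the prune
# only removes anonymous volumes not attached to any container.
docker volume ls
docker volume prune -f
```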
Truncate massive log files
Don't delete active log files — the program writing to them might crash. Instead, truncate them (empty the file without deleting it):
> /var/log/nginx/access.log
> /var/log/nginx/error.log
That > with nothing before it writes "nothing" to the file, emptying it to zero bytes. The program keeps writing to it like nothing happened.
Shrink the system journal
journalctl --vacuum-size=500M
This trims the journal down to 500MB, deleting the oldest entries.
Clean package caches
apt clean
or yum clean all depending on your system. This removes cached package files you don't need anymore.
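On Debian/Ubuntu you can check how much the cache is actually holding before clearing it:

```shell
# Downloaded .deb packages accumulate here
du -sh /var/cache/apt/archives

# Clear them (root required)
sudo apt clean
```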
How to stop this from happening again
Freeing space fixes the immediate crisis. But if you don't change anything, the disk will fill up again in a few weeks or months. Here's what to set up:
Log rotation
Linux has a built-in tool called logrotate that automatically compresses and deletes old log files. Most services set it up by default, but it's often misconfigured or missing for app-specific logs. Make sure your app's logs are included in /etc/logrotate.d/.
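A minimal entry for an app's logs might look like this. The file name and log path are placeholders; the directives are standard logrotate options:

```
# /etc/logrotate.d/myapp (hypothetical app name and path)
/var/www/myapp/logs/*.log {
    daily           # rotate once a day
    rotate 7        # keep 7 rotated copies, delete older ones
    compress        # gzip rotated files
    delaycompress   # leave the most recent rotation uncompressed
    missingok       # no error if the log doesn't exist yet
    notifempty      # skip rotation when the file is empty
    copytruncate    # truncate in place so the app keeps its file handle
}
```

The copytruncate line matters for apps that never reopen their log file; without it, the app keeps writing to the rotated copy.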
Limit journal size permanently
Edit /etc/systemd/journald.conf and set:
SystemMaxUse=500M
Then restart the journal service. It'll never grow beyond that again.
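On most systemd distributions that's two commands. SystemMaxUse is a real journald.conf setting; the sed line just uncomments it and sets the value:

```shell
# Cap the journal at 500MB, then restart journald to apply it
sudo sed -i 's/^#\?SystemMaxUse=.*/SystemMaxUse=500M/' /etc/systemd/journald.conf
sudo systemctl restart systemd-journald
```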
Schedule Docker cleanup
Add a cron job to periodically clean up Docker:
0 3 * * 0 docker system prune -f
That runs docker system prune every Sunday at 3am. Not aggressive enough to delete images you're actively using, but catches the accumulated junk.
Set up monitoring
The real fix is knowing before you hit 100%. Even a simple cron job that emails you when the disk hits 80% would have prevented this. If you're using any hosting dashboard or monitoring tool, set up a disk space alert. If you don't have one, a five-line bash script in cron can do it.
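Here's a sketch of that script. It assumes GNU df and a working mail command; the script path, schedule, and email address are placeholders to swap for your own:

```shell
#!/bin/sh
# Hypothetical path: /usr/local/bin/disk-alert.sh
# Cron entry to run it every 30 minutes:
#   */30 * * * * /usr/local/bin/disk-alert.sh
THRESHOLD=80
# df --output=pcent prints the use% column; tail skips the header, tr keeps only digits
USAGE=$(df --output=pcent / | tail -1 | tr -dc '0-9')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
    echo "Disk at ${USAGE}% on $(hostname)" | mail -s "Disk space warning" you@example.com
fi
```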
The cheap VPS trap
This problem hits hardest on $5-10/month VPS plans. Those typically come with 25GB of disk space. That sounds like enough until you realize your OS takes 3-4GB, Docker takes a few more, and then your logs and images fight over whatever's left.
If you're running a real app with a database, Docker, and logging, 25GB can fill up in weeks. Upgrading to 50GB buys you time. Setting up proper cleanup buys you forever.
Why AI doesn't help here
AI can't see your server. It doesn't know what's on your disk, how full it is, or which files are safe to delete. When you tell it "my server isn't working," it'll give you a generic troubleshooting checklist that starts with "restart the service" — which won't work because the service can't write its PID file because the disk is full.
Worse, AI might tell you to delete things that look safe but aren't. Or it might tell you to install a monitoring tool — which also can't install because there's no disk space.
This is a problem that requires someone who can actually look at your server, see what's using space, and make the right call about what to remove.
Server down and disk full?
Press the MeatButton and a real expert will SSH into the problem. They'll find what filled the disk, clear it safely, and set up log rotation so it doesn't happen again. First one's free.
Get MeatButton