
How to Fix Docker "No Space Left on Device": A Complete Guide

Master Docker troubleshooting to resolve the "no space left on device" error. Learn how to safely prune unused containers and reclaim lost storage space.
Jan 28, 2026 · 13 min read

The "no space left on device" Docker error typically strikes at the worst possible time: mid-deployment, during a critical build, or when pulling an essential image. I've encountered this error sometimes, and I can tell you that rushing to delete files without proper diagnosis is a recipe for data loss.

What makes this error particularly challenging is that it rarely has a single cause. You might be dealing with accumulated Docker images, runaway container logs, exhausted inodes from millions of small files, or even "phantom" space consumed by deleted files still held open by running processes. Each scenario requires a different solution.

That's why I always emphasize diagnosis before action. Understanding the root cause allows you to apply targeted solutions efficiently and avoid disrupting your production environment. In this guide, I'll walk you through a systematic approach to diagnosing, resolving, and preventing this frustrating issue.

If you are new to Docker, I recommend taking our hands-on Introduction to Docker course, which covers everything you need to get started with containerization.

What Is the Docker "No Space Left on Device" Error?

When Docker throws this error, it's signaling one of two fundamental problems with your system's storage:

  • Physical disk block exhaustion
  • Inode exhaustion

Understanding the distinction between these causes is crucial for applying the right solution.

Root causes behind the “no space left on device” error

The first and most straightforward cause is physical disk block exhaustion. Your filesystem has run out of actual storage space to write data. This is the more intuitive scenario: you've simply filled up your disk with Docker images, containers, logs, or other files.

The second, less obvious cause is inode exhaustion. Even with gigabytes of free space available, your filesystem can run out of inodes (metadata structures used to track files and directories). Each file and directory consumes one inode, so applications that create millions of small files (think PHP session files or npm's node_modules directories) can exhaust inodes while leaving disk space unused.

Understanding the overlay2 storage driver

Docker typically uses the overlay2 storage driver, which is based on Linux OverlayFS. OverlayFS layers multiple directories on a single host and presents them as a unified filesystem. Image layers are mounted as read-only lower directories, while each running container adds a thin writable upper layer. The merged view is exposed as a single directory to the container.

OverlayFS

The overlay2 driver natively supports up to 128 lower OverlayFS layers, although Docker's layer store enforces a practical limit of 125 layers per image. This layering enables efficient image composition and better performance for operations such as docker build and docker commit.

While overlay2 is designed to consume fewer inodes than earlier storage drivers, Docker environments that repeatedly build images, pull layers, or create containers can still place significant pressure on both disk blocks and inodes over time, especially when images or application layers contain many small files.
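If you want to see this layering on your own system, you can ask Docker where a container's layers live. This is just a quick peek (native Linux; <container-name> is a placeholder, and the overlay2 paths will be long hashes on your machine):

docker inspect --format '{{ json .GraphDriver }}' <container-name>
# "Name" should report overlay2; "Data" lists LowerDir (read-only image layers),
# UpperDir (the container's writable layer), and MergedDir (the unified view)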

How this error appears depends on the deployment environment:

  • On native Linux systems, it affects the filesystem backing /var/lib/docker. 

  • On Docker Desktop for Windows or macOS, the limitation exists inside Docker’s virtual machine disk image (such as a .raw or .vhdx file), which introduces additional considerations when reclaiming or resizing storage.

Step 1: Diagnose the Root Cause

Before removing anything, I always start with thorough diagnostics. This investment of a few minutes can save hours of troubleshooting and prevent accidental data loss.

Check system-level capacity

Start by examining your host system's disk usage with df -H (df stands for disk free):

df -H

Free disk check in WSL

This command displays disk usage in human-readable format. Look for filesystems at or near 100% capacity. 

On native Linux installations, pay particular attention to the partition where /var/lib/docker resides. This is typically your root partition (/) or a dedicated Docker partition. 

On Docker Desktop (Windows/Mac), look for the main filesystem mount (usually /dev/sdf or similar), which holds the Docker VM's data, rather than specific /var/lib/docker references, since Docker runs inside a virtual machine.

Next, check for inode exhaustion using df -i:

df -i

Inode exhaustion check

If you see 100% inode usage (IUse%) on any filesystem, you've found your problem. This scenario is surprisingly common in build servers and CI/CD environments, where Docker repeatedly creates and destroys containers with many small files.

Metric           | Command               | What to look for
-----------------|-----------------------|-------------------------
Disk space       | df -H                 | Partitions at 90%+ usage
Inodes           | df -i                 | IUse% at 100%
Docker directory | du -h /var/lib/docker | Total size consumption

Analyze Docker-specific usage

Now that we've established whether disk space or inodes are the issue, let's go deeper and examine Docker's specific resource consumption. The docker system df command provides a high-level summary that works universally across all Docker installations:

docker system df

This output breaks down space usage into four categories: images, containers, local volumes, and build cache.

The RECLAIMABLE column is particularly valuable. It shows how much space you can recover without affecting running containers: "active" space is currently in use by running containers, while "reclaimable" space can be freed up safely.

For more granular details, add the verbose flag:

docker system df -v

Docker usage analysis

This verbose output lists every image, container, and volume individually with their sizes, providing the granular breakdown you need to identify space hogs. Here's what to look for in each section:

  • Images: Shows each image's size and whether it's currently in use. Look for large images that are unused or duplicated with different tags. Images marked as "unused" can be safely removed without affecting running containers.

  • Containers: Displays the writable layer size for each container. If a stopped container shows a significant SIZE here, it's been writing data to its filesystem. The CREATED column helps identify old containers that might have been forgotten.

  • Volumes: Lists volume sizes and whether they're in use. Volumes marked as not in use are safe candidates for removal, though always verify they don't contain important data before pruning.

  • Build Cache: Often the largest consumer and frequently overlooked. These are intermediate layers from docker build operations that Docker keeps to speed up subsequent builds.

OS-specific commands

For native Linux users who want even deeper filesystem insight, you can optionally examine /var/lib/docker directly to see which subdirectories (overlay2, containers, volumes) consume the most space:

sudo du -h --max-depth=1 /var/lib/docker | sort -h

However, for Docker Desktop users, /var/lib/docker exists inside Docker's hidden virtual machine and isn't directly accessible. The good news is that docker system df -v provides all the actionable information you need, regardless of your platform, making it the most reliable diagnostic approach.

Step 2: Clean Up System With Docker Prune 

After identifying where space is consumed, I can safely reclaim it using Docker's built-in pruning commands. These operations are designed to remove only unused resources, minimizing the risk of disrupting running services.

Prune dangling resources

Docker distinguishes between "dangling" and "unused" resources. A dangling image is one with no tag and no container references, typically an intermediate layer from a build that failed or was superseded. These are always safe to remove.
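You can list dangling images before removing anything, which makes it easier to sanity-check what a prune will touch:

docker images --filter "dangling=true"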

Start with the basic prune command:

docker system prune

This removes all stopped containers, unused networks, and dangling images. It won't touch tagged images or volumes unless explicitly specified. Docker will prompt for confirmation before proceeding.

To be more aggressive and remove all unused images (not just dangling ones), add the --all flag:

docker system prune --all

Docker system prune

This command removes any image not currently associated with a container. Be cautious: subsequent deployments will need to pull the removed images again.
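If --all feels too aggressive, system prune also accepts the same time-based filter used for the build cache below, so anything used recently is kept (a hedged example; pick a window that matches your deployment cadence):

docker system prune --all --filter "until=72h"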

If you want to include volumes in the cleanup (which contain persistent data), add the --volumes flag:

docker system prune --all --volumes

Warning: This permanently deletes data in unused volumes. Always verify volumes don't contain important data before using this flag.

Manage the build cache

Here's where things get interesting. The build cache, which stores intermediate layers from docker build operations, is often the largest space consumer, yet a basic docker system prune only removes dangling build cache, so much of it can survive a standard cleanup.

To specifically target the build cache:

docker builder prune

If you want to be selective and preserve recent cache layers, use time-based filters:

docker builder prune --filter "until=24h"

This removes only cache layers older than 24 hours, keeping your recent work intact.

Clean up volumes safely

Volumes require extra caution because they contain persistent data, such as databases, uploaded files, and application state. Removing the wrong volume means permanent data loss.

First, identify dangling volumes (those not attached to any container):

docker volume ls -f dangling=true

Review this list carefully. If you're certain these volumes aren't needed, prune them:

docker volume prune

Docker will ask for confirmation before proceeding. When in doubt, back up volumes before pruning.
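If a volume might still matter, a simple way to back it up first is to mount it into a throwaway container and archive its contents to your current directory (a sketch; my_volume is a placeholder name):

docker run --rm -v my_volume:/data -v "$(pwd)":/backup alpine tar czf /backup/my_volume-backup.tar.gz -C /data .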

Step 3: Handle Docker Log File Exhaustion

Container logs can silently consume massive amounts of disk space, sometimes exceeding 50 GB per container.

Identify bloated log files

Docker's default json-file logging driver captures all container stdout and stderr to JSON files. Without a max-size setting (default is -1, unlimited), these files grow unbounded.

Check container sizes across all environments:

docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Size}}"

To check a specific container's log file size:

ls -lh $(docker inspect --format='{{.LogPath}}' <container-name>)

For native Linux only, find the largest log files directly:

sudo find /var/lib/docker/containers/ -name "*-json.log" -exec du -h {} + | sort -h | tail -10

If docker logs becomes slow or containers fail to start, oversized logs are likely the cause. To solve this, you can truncate the logs.

Truncate logs safely

Truncating the logs will provide more space. However, never delete log files with rm while containers are running. The Docker daemon holds open file handles to these logs, and deleting them creates "deleted but open" files that continue consuming space until the daemon restarts.

The safe approach is to truncate logs to zero:

sudo truncate -s 0 $(docker inspect --format='{{.LogPath}}' <container-name>)

This releases space immediately without breaking file handles. The container continues logging to the same file.
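On native Linux, if many containers are affected, you can truncate every container log in one pass. This is a blunt instrument that discards all existing log content, so use it deliberately:

sudo sh -c 'truncate -s 0 /var/lib/docker/containers/*/*-json.log'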

Step 4: Advanced Troubleshooting and Hidden Issues

When standard cleanup fails, you're likely dealing with one of these less common but equally impactful issues:

  • Inode exhaustion: Your filesystem has run out of file metadata structures despite having free disk space

  • Deleted but open files: Processes holding file handles to deleted files, creating "phantom" space usage

  • Docker Desktop virtual disk bloat: The host .raw or .vhdx file does not shrink after you delete data inside the VM

Let's tackle each of these scenarios systematically.

Solve inode exhaustion

This can be caused by applications creating millions of small files, such as PHP sessions, npm node_modules, or build artifacts that consume inodes at alarming rates.

First, verify inode exhaustion:

df -i

Using Docker commands (all environments):

docker ps -a | wc -l      # Check container count
docker images | wc -l     # Check image count
docker image prune -a     # Remove unused images to free inodes

For Linux installations, identify inode-heavy directories:

for dir in /var/lib/docker/*/; do
    echo "$dir: $(sudo find "$dir" -type f 2>/dev/null | wc -l) files"
done
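As an alternative to the find loop, GNU du can count inodes directly, which is usually faster on large trees (assuming coreutils 8.22 or newer):

sudo du --inodes --max-depth=1 /var/lib/docker 2>/dev/null | sort -n | tail -5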

Removing unused images with docker rmi also frees inodes. For application-level issues, implement cleanup within containers or use volume mounts for temporary files.
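For example, mounting an in-memory tmpfs over a churn-heavy directory keeps those small files (and their inodes) off the storage driver entirely. A sketch, where my-app:latest and the /tmp path are placeholders:

docker run -d --tmpfs /tmp:rw,size=256m my-app:latest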

Handle deleted but open files (Linux-only)

When a file is deleted while a process still has it open, the filesystem doesn't release the space until the process closes the file handle ("phantom" space usage).

For Linux users, you can verify if this is happening by searching for "deleted" entries in the list of open files.

First, identify these files:

sudo lsof | grep deleted 

Docker deleted files

Then, to release the space, restart the specific process (using the PID from lsof) or restart Docker. If you run the command again afterwards, the deleted entries should be gone, confirming the space has been released.
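lsof also has a built-in filter for unlinked files, which narrows the search, and restarting the daemon is usually enough to release Docker-held handles (check what the offending process actually is before restarting anything in production):

sudo lsof +L1                     # files with link count 0: deleted but still held open
sudo systemctl restart docker     # releases handles held by the Docker daemon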

Let’s now explore the last advanced troubleshooting strategy, which is applicable only to Docker Desktop: managing virtual disks.

Managing Docker Desktop virtual disks

Docker Desktop stores data in a virtual disk file (.raw on Mac, .vhdx on Windows) that grows dynamically but doesn't shrink automatically when you delete data. You prune images and containers, see free space inside the VM increase, yet your host disk remains full.

To manage this, the safest approach is to use fstrim to compact the disk without losing data:

docker run --privileged --pid=host alpine nsenter -t 1 -m -u -i -n fstrim /

This command trims the Docker Desktop VM's filesystem, releasing unused blocks back to your host system. Your images, containers, and volumes remain intact.
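On Windows with the WSL2 backend, the freed space sometimes still isn't returned to the host until the .vhdx itself is compacted. A rough sketch, assuming you first locate Docker's data .vhdx under %LOCALAPPDATA%\Docker\wsl (the exact filename varies by Docker Desktop version):

wsl --shutdown
diskpart
# inside diskpart:
#   select vdisk file="C:\Users\<you>\AppData\Local\Docker\wsl\<path-to>\<docker-data>.vhdx"
#   compact vdisk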

Alternatively, if you want to start completely fresh (this deletes all data), use Docker Desktop Settings > Troubleshoot > Reset to factory defaults.

Warning: This is destructive and removes all images, containers, volumes, and settings. It should only be the last option, in case everything else fails.

Reset to factory defaults

Step 5: Prevent Docker “No Space Left on Device” Error

Now that we've resolved the immediate crisis, let's implement structural changes to prevent recurrence. These configurations require planning but pay enormous dividends in system stability.

Move the Docker root directory (Linux-only)

Docker Desktop users cannot easily change the data root location. Instead, you can:

  • Increase disk space: Use the virtual disk compaction methods described in Step 4

  • For the WSL2 backend: Manage limits through a .wslconfig file, alongside the options in Docker Desktop Settings > Resources (see the sketch after this list)
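A minimal %UserProfile%\.wslconfig might look like this, assuming you want to cap the resources the WSL2 VM (and therefore Docker Desktop) can consume; adjust the values to your machine:

[wsl2]
memory=8GB       # cap RAM available to the WSL2 VM
processors=4     # cap CPU cores
swap=2GB         # size of the swap file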

However, if you are in Linux and your root partition is constrained, but you have a larger partition available (perhaps a dedicated data disk), relocating Docker's data directory is a permanent solution.

First, stop Docker:

sudo systemctl stop docker

Edit /etc/docker/daemon.json to specify the new location:

{
  "data-root": "/mnt/docker-data"
}

Migrate existing data to preserve your images and containers:

sudo rsync -aP /var/lib/docker/ /mnt/docker-data/

Finally, restart Docker:

sudo systemctl start docker

Docker now stores all data in the new location. Verify with docker info | grep "Docker Root Dir" to confirm the path to the new directory.

Configure global log rotation

Log rotation is the single most effective prevention strategy I've implemented. By default, Docker's json-file driver has max-size: -1 (unlimited), which is a disaster waiting to happen in production.

For native Linux installations, edit /etc/docker/daemon.json to enforce limits globally:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

After editing the file, restart Docker.

For Docker Desktop users, you don't edit /etc/docker/daemon.json directly. Instead:

  1. Open Docker Desktop
  2. Go to Settings > Docker Engine
  3. You'll see a JSON editor with the daemon configuration
  4. Add the log rotation settings to the existing JSON:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Finally, click Apply & Restart.

This example configuration limits each container to three log files of 10MB each (30MB total per container).

Critical note: These settings apply only to newly created containers. Existing containers retain their original log configuration. You must recreate containers to apply the new limits.
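With Docker Compose, recreating services so they pick up the new logging defaults is a one-liner (assuming your services are defined in a compose file):

docker compose up -d --force-recreate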

For even better compression and performance, consider the local logging driver:

{
  "log-driver": "local",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "compress": "true"
  }
}

You can also override log settings per service in docker-compose.yml:

services:
  web:
    image: nginx
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "5"

Automate maintenance

Another good practice is to automate certain processes. It is often said that prevention is better than crisis management. Therefore, I recommend automating cleanup with a cron job that runs during maintenance windows:

# Create a cleanup script
sudo tee /usr/local/bin/docker-cleanup.sh > /dev/null << 'EOF'
#!/bin/bash
docker system prune -f
docker builder prune -f --filter "until=168h"
EOF

sudo chmod +x /usr/local/bin/docker-cleanup.sh

# Add to root's crontab to run weekly on Sundays at 2 AM (preserving existing entries)
(sudo crontab -l 2>/dev/null; echo "0 2 * * 0 /usr/local/bin/docker-cleanup.sh") | sudo crontab -

This script creates a scheduled task that automatically removes stopped containers and unused data every Sunday at 2 AM. It also safely clears out build cache layers older than one week (168 hours) to prevent disk usage from growing indefinitely.

Additionally, optimize your Dockerfiles using multi-stage builds to minimize image sizes. Multi-stage builds allow you to use one stage for compiling and building (with all the heavy development tools), then copy only the final artifacts to a clean, minimal production stage. 

This eliminates build dependencies, source code, and intermediate files from your final image, dramatically reducing its size and attack surface.

# Build stage
FROM node:16 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Production stage
FROM node:16-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]

This pattern reduces production image size by excluding build tools and source code.

Conclusion

Fixing the "no space left on device" error follows a predictable pattern: diagnose first to understand whether you're dealing with disk blocks or inodes, prune safely to reclaim space without data loss, then optimize configuration to prevent recurrence.

Through my years managing Docker environments, I've learned that log rotation is the single most effective prevention strategy. A simple daemon.json configuration can prevent most space exhaustion incidents.

I encourage you to audit your daemon.json configuration now. If you don't have max-size and max-file set for your logging driver, add them immediately. Your future self will thank you when you avoid the next space crisis.

Ready to become a Docker pro? Then the next step is to enroll in our Containerization and Virtualization with Docker and Kubernetes skill track, which I highly recommend.

Docker "No Space Left on Device" FAQs

What causes the Docker "no space left on device" error?

The error occurs from either physical disk exhaustion (filled with images, containers, or logs) or inode exhaustion (too many small files consuming file metadata). Docker's overlay2 storage driver accumulates layers that consume both disk space and inodes rapidly, especially in build-heavy environments.

How can I identify which Docker resources are consuming the most disk space?

Use docker system df to see a breakdown of space usage across images, containers, volumes, and build cache. For detailed information, run docker system df -v to list every resource individually, along with its size. The RECLAIMABLE column indicates the amount of space that can be safely recovered.

What's the safest way to clean up Docker disk space without losing important data?

Start with docker system prune to remove stopped containers, unused networks, and dangling images. For more aggressive cleanup, use docker system prune -a to remove all unused images. Never use the --volumes flag unless you've verified volumes don't contain critical data, as this permanently deletes volume data.

How do I prevent Docker log files from consuming all my disk space?

Configure global log rotation in your daemon.json (or in Docker Desktop Settings Docker Engine) with "max-size": "10m" and "max-file": "3" settings. This limits each container to 30MB of logs total. Remember that these settings only apply to newly created containers. Existing containers must be recreated.

How can I prevent Docker from running out of inodes?

Inode exhaustion occurs when applications create millions of small files. Check inode usage with df -i and reduce it by removing unused Docker images with docker image prune -a. For applications generating many temporary files (like npm or PHP sessions), use volume mounts for temporary storage instead of writing to the container's writable layer.


Author
Benito Martin

As the Founder of Martin Data Solutions and a Freelance Data Scientist, ML and AI Engineer, I bring a diverse portfolio in Regression, Classification, NLP, LLM, RAG, Neural Networks, Ensemble Methods, and Computer Vision.

  • Successfully developed several end-to-end ML projects, including data cleaning, analytics, modeling, and deployment on AWS and GCP, delivering impactful and scalable solutions.
  • Built interactive and scalable web applications using Streamlit and Gradio for diverse industry use cases.
  • Taught and mentored students in data science and analytics, fostering their professional growth through personalized learning approaches.
  • Designed course content for retrieval-augmented generation (RAG) applications tailored to enterprise requirements.
  • Authored high-impact AI & ML technical blogs, covering topics like MLOps, vector databases, and LLMs, achieving significant engagement.

In each project I take on, I make sure to apply up-to-date practices in software engineering and DevOps, like CI/CD, code linting, formatting, model monitoring, experiment tracking, and robust error handling. I’m committed to delivering complete solutions, turning data insights into practical strategies that help businesses grow and make the most out of data science, machine learning, and AI.
