Do you need object storage for development but don't want to set up and pay for AWS S3?
MinIO gives you S3-compatible storage that runs anywhere - on your laptop, a VM, or a Kubernetes cluster. It's open-source, which means it's vendor-agnostic, and you won't get any surprise bills at the end of the month. And combined with Docker, running MinIO is even easier since you can spin up a storage server in seconds without installing anything directly on your machine.
Docker containers keep MinIO isolated from your system and at the same time give you full control over configuration and data persistence. You can test S3 APIs locally, mock production storage setups, or run lightweight object storage for small projects.
In this article, I'll show you how to run MinIO with Docker, verify it's working correctly, and configure the most common setup options.
If you’re completely new to Docker, devote a Saturday afternoon to master the fundamentals - DataCamp’s Introduction to Docker course has you covered.
Prerequisites
You need three things to run MinIO with Docker.
Docker installed and running on your machine. You can verify this by running docker --version in your terminal. If you get a version number back, you're good to go.
Basic familiarity with Docker commands. You should know how to start and stop containers, view logs, and work with Docker images. If you've run docker run or worked with Docker Compose before, you'll be fine.
A local directory for persistent storage. MinIO needs somewhere to store your data outside the container. Create an empty directory on your host machine - something like ~/minio/data works well.
That's it. The list is short and simple, but make sure you tick all the boxes before moving forward.
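If you want to knock out the Docker check and the data directory in one go, a short snippet like this works (the ~/minio/data path matches the examples used throughout this article):

```shell
#!/bin/sh
# Confirm Docker is installed, then create the data directory used in this article.
if docker --version >/dev/null 2>&1; then
  echo "Docker is installed"
else
  echo "Docker not found - install it before continuing"
fi
mkdir -p ~/minio/data
echo "Data directory ready at $HOME/minio/data"
```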
How to Run MinIO with Docker (Single-Node Setup)
A single-node MinIO setup runs one instance of MinIO in a Docker container and stores all your data in one location.
This setup works for development, testing, and small-scale production workloads where you don't need high availability or distributed storage. You get full S3 API compatibility without the complexity of running multiple nodes.
Running MinIO with Docker Run
The docker run command starts MinIO in a new container with everything configured in one line.
Here's the basic command:
docker run -p 9000:9000 -p 9001:9001 \
--name minio \
-v ~/minio/data:/data \
-e "MINIO_ROOT_USER=admin" \
-e "MINIO_ROOT_PASSWORD=password123" \
quay.io/minio/minio server /data --console-address ":9001"

Running MinIO with docker run
Let's break down what each part does.
- Port 9000 is the API endpoint where your applications connect to upload and download files. This is where S3-compatible clients send their requests.
- Port 9001 hosts the web console where you manage buckets, set permissions, and monitor storage. You'll use this to verify MinIO is running correctly.
- The -v ~/minio/data:/data flag maps your local directory to the container's storage location. Everything MinIO stores goes into ~/minio/data on your host machine. When you stop or remove the container, your data stays safe in this directory.
- Environment variables set your access credentials. MINIO_ROOT_USER is your admin username and MINIO_ROOT_PASSWORD is the password. These are the credentials you'll use to log into the web console and configure API access.
- The server /data argument tells MinIO to run in server mode and use /data as the storage directory. The --console-address ":9001" flag specifies which port the web console listens on.
There’s an alternative way to run MinIO which is great if you’re not a fan of long, multi-line terminal commands.
Note: MinIO no longer updates their Docker Hub and Quay images (as of October 2025). The code in this article still works for local development, but for production use, consider maintained alternatives like Chainguard's MinIO image (cgr.dev/chainguard/minio:latest).
Running MinIO with Docker Compose
Docker Compose lets you define your MinIO setup in a YAML file instead of typing long commands.
This makes your configuration repeatable and version-controlled. You can share the file with your team, commit it to git, and restart MinIO with the exact same settings every time.
Create a docker-compose.yml file:
services:
  minio:
    image: quay.io/minio/minio
    container_name: minio
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: password123
    volumes:
      - ./minio/data:/data
    command: server /data --console-address ":9001"
The structure mirrors the docker run command but organizes everything into named sections. Ports, environment variables, and volumes each have their own block.
Volume mapping works the same way - ./minio/data:/data creates a directory in your current folder and mounts it to the container. The ./ means relative to where this docker-compose.yml file lives.
You can now start MinIO with:
docker-compose up -d

Running MinIO with docker compose
The -d flag runs the container in the background. Your terminal won't be blocked by MinIO logs, and the container keeps running after you close the terminal.
Stop it with:
docker-compose down
Compose is better for development because you can add healthchecks, restart policies, and multiple services in the same file. If you need to add a database or other services alongside MinIO later, you just add more entries under services:.
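As a sketch of what that can look like, here's a healthcheck and restart policy added to the minio service (the probe assumes curl is available inside the image; swap in another check if yours lacks it):

```yaml
services:
  minio:
    # ...same image, ports, environment, volumes, and command as above...
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 5s
      retries: 3
```

With this in place, docker ps shows the container as healthy or unhealthy, and Docker restarts it automatically unless you stopped it yourself.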
How to Access and Verify MinIO Is Running
Now that MinIO is up and running, you'll want to confirm it started correctly before using it for storage.
There are two ways to verify your setup: the web console for visual confirmation and the MinIO client for command-line verification.
Using the MinIO web console
Open your browser and go to http://localhost:9001.
You'll see a login screen asking for credentials:

MinIO web UI
You can log in with the credentials you provided as environment variables - admin/password123 in my case.
After logging in, you'll land on the MinIO dashboard. The main page shows storage usage, number of buckets, and system health. The left sidebar has options for creating buckets, managing users, and viewing metrics.

Creating a bucket through MinIO web UI
Create a test bucket to verify everything works. Click "Buckets" in the sidebar, then "Create Bucket." Give it a name like test-bucket and click create. If the bucket appears in your list, MinIO is running correctly and storing data.

Creating a bucket through MinIO web UI
Using the MinIO client (mc)
The MinIO client is a command-line tool that lets you interact with MinIO like you would with the AWS CLI.
Run one of these commands to install it:
# macOS
brew install minio/stable/mc
# Linux
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
sudo mv mc /usr/local/bin/
And now run this command to connect the client to your local MinIO instance:
mc alias set local http://localhost:9000 admin password123
This creates an alias called local that points to your MinIO server. You can now run commands against it.
List your buckets:
mc ls local

Listing buckets
If you created the test bucket earlier, you'll see it in the output. As long as the command completes without errors - whether it lists buckets or returns empty - MinIO is working.
You can now upload a test file to the bucket:
echo "test" > test.txt
mc cp test.txt local/test-bucket/
And then verify it's there:
mc ls local/test-bucket
If you see test.txt in the output, you’ve set up everything correctly.

Listing files in a bucket
Here are some quick troubleshooting tips if things aren't working:
- Check if the container is running with docker ps. If you don't see a container named minio, it didn't start or crashed.
- View the logs with docker logs minio. Look for errors about ports already in use or permission issues on the data directory.
- If you can't access the web console, verify the ports aren't blocked by checking docker port minio. You should see both 9000 and 9001 mapped correctly.
- If you're running into permission errors on the data directory, run chmod -R 755 ~/minio/data to fix access issues.
Persistent Storage and Data Volumes
Containers are ephemeral by default - when you remove a container, everything inside it disappears.
MinIO stores objects, metadata, and configuration files. If you don't set up persistent storage correctly, you'll lose all your data the moment you restart or remove the container.
Using local volumes with Docker
Docker volumes and bind mounts keep your data safe outside the container.
When you use -v ~/minio/data:/data or map a volume in Docker Compose, MinIO writes everything to your host machine. The container reads and writes files to /data, but those files actually live in ~/minio/data on your host.
When you stop the container, remove it, or even delete the image - your data stays in ~/minio/data. Start a new MinIO container pointing to the same directory and all your buckets, objects, and settings come back exactly as you left them.
If you don't map a volume, MinIO uses the container's internal filesystem. Everything works fine until you stop the container. When you restart it, MinIO starts fresh with no buckets, no objects, and no configuration.
The Compose example shown earlier creates a minio/data folder right where your compose file is located:

MinIO data folder
Common storage mistakes
Running without a volume is the most common mistake.
You start MinIO, upload files, create buckets, and everything seems fine. Then you restart the container for an update or configuration change. All your data is gone because it was stored inside the container, not on your host.
Always check if your Docker command or Compose file has a volume mapping before you put any real data in MinIO.
Permission issues happen when Docker can't write to your mounted directory.
The MinIO process inside the container runs as a specific user. If that user doesn't have write permissions on the host directory, MinIO crashes on startup or fails silently when trying to store objects.
You can fix this by making sure your data directory is writable:
chmod -R 755 ~/minio/data
Or run the container with a user that matches your host user:
docker run --user $(id -u):$(id -g) ...
Long story short, set up your volumes correctly once and you won't have to worry about losing data.
Environment Variables and Basic Configuration
MinIO reads its configuration from environment variables when it starts.
This means you can change how MinIO behaves without editing config files or rebuilding containers. You set these variables in your docker run command or Docker Compose file.
Access keys and credentials
MinIO requires two environment variables for authentication: MINIO_ROOT_USER and MINIO_ROOT_PASSWORD.
These create the root admin account that has full control over your MinIO instance. The root user can create buckets, manage other users, set policies, and access all stored objects.
Set them like this with docker run:
-e "MINIO_ROOT_USER=admin" \
-e "MINIO_ROOT_PASSWORD=your-secure-password"
Or in Docker Compose:
environment:
  MINIO_ROOT_USER: admin
  MINIO_ROOT_PASSWORD: your-secure-password
Don't use default credentials in production. The examples in this article use admin and password123 for simplicity, but these are terrible choices for real deployments.
Pick a strong password with at least 8 characters. Better yet, use randomly generated credentials and store them in a password manager or secrets management system.
Don't hardcode credentials in Docker Compose files you commit to version control. Use environment files instead:
environment:
  MINIO_ROOT_USER: ${MINIO_ROOT_USER}
  MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
Then create a .env file with your actual credentials and add it to .gitignore.
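A small helper can generate that .env file with a random password so nothing secret ever lands in your shell history - a sketch using /dev/urandom as the randomness source:

```shell
#!/bin/sh
# Generate a .env file with a random 32-character password for Docker Compose.
MINIO_ROOT_USER="admin"
MINIO_ROOT_PASSWORD="$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)"
cat > .env <<EOF
MINIO_ROOT_USER=$MINIO_ROOT_USER
MINIO_ROOT_PASSWORD=$MINIO_ROOT_PASSWORD
EOF
echo "Wrote .env (password length: ${#MINIO_ROOT_PASSWORD})"
```

Remember to add .env to .gitignore before your first commit.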
Port configuration and networking
MinIO needs two ports to function correctly.
- Port 9000 handles the S3 API. This is where your applications send requests to store and retrieve objects. All S3-compatible clients connect to this port.
- Port 9001 serves the web console. This is the browser interface where you manage MinIO through a GUI.
Map these ports in your Docker command:
-p 9000:9000 -p 9001:9001
Port conflicts happen when another service already uses 9000 or 9001 on your host.
You'll see an error like "bind: address already in use" when starting the container. Fix this by mapping to different host ports:
-p 9090:9000 -p 9091:9001
Now MinIO's API is at http://localhost:9090 and the console is at http://localhost:9091. The container still uses 9000 and 9001 internally, but externally you access them through different ports.
These commands will help you check what's using a port before you start MinIO:
# Linux/macOS
lsof -i :9000
lsof -i :9001
# Windows
netstat -ano | findstr :9000
netstat -ano | findstr :9001

Checking what’s using a port
If you're running multiple MinIO instances on the same machine, give each one unique port mappings so they don't conflict with each other.
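If you'd rather script the remapping, a helper like this (a sketch that relies on lsof, so Linux/macOS only) walks upward from a base port until it finds a free one:

```shell
#!/bin/sh
# Find the first free TCP port at or above a given base port.
find_free_port() {
  port="$1"
  while lsof -i ":$port" >/dev/null 2>&1; do
    port=$((port + 1))
  done
  echo "$port"
}
API_PORT="$(find_free_port 9000)"
CONSOLE_PORT="$(find_free_port 9001)"
echo "Use -p $API_PORT:9000 -p $CONSOLE_PORT:9001"
```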
Running MinIO in Distributed Mode with Docker
Distributed mode runs MinIO across multiple servers with multiple drives for high availability and data redundancy.
You don't need this for development or testing. Single-node mode handles most use cases perfectly fine. Skip this section unless you're planning a production deployment that needs to stay online even when servers fail.
When distributed MinIO makes sense
Use distributed mode when you need fault tolerance.
If one server goes down in a distributed setup, MinIO keeps running and your data stays accessible. The system uses erasure coding to split objects across multiple drives and servers, so you can lose drives or entire nodes without losing data.
You also need distributed mode for large-scale storage. If you're storing terabytes or petabytes of data, spreading it across multiple machines gives you better performance and more capacity than a single server can provide.
Local development doesn't need any of this. Distributed mode adds complexity - you need multiple machines or VMs, coordinated networking, and careful drive configuration. For testing S3 APIs or running object storage on your laptop, single-node mode does everything you need.
Production environments use distributed mode when downtime isn't acceptable and data loss would be catastrophic. Think backup systems, data lakes, or applications where users depend on constant storage availability.
High-level setup overview
Distributed MinIO requires at least four drives across multiple nodes.
Each node runs a MinIO container, and all nodes must see the same drive configuration. You can't mix single-drive nodes with multi-drive nodes or change the number of drives after setup.
A basic distributed setup looks like this:
- Four servers (or VMs) with MinIO installed
- Multiple drives on each server dedicated to MinIO
- Network connectivity between all nodes
- Identical drive paths on every node
You can handle the coordination in a single Compose file, but I warn you - it’ll be a long one. You need to define all nodes, set their drive paths, and start them together.
Here's a complete example for four nodes with two drives each:
version: "3.9"
services:
  minio1:
    image: quay.io/minio/minio:latest
    hostname: minio1
    container_name: minio1
    command: server http://minio{1...4}/data{1...2} --console-address ":9001"
    ports:
      - "9001:9000"
      - "9091:9001"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: password123
    volumes:
      - ./data/minio1/data1:/data1
      - ./data/minio1/data2:/data2
    networks:
      - minio
  minio2:
    image: quay.io/minio/minio:latest
    hostname: minio2
    container_name: minio2
    command: server http://minio{1...4}/data{1...2} --console-address ":9001"
    ports:
      - "9002:9000"
      - "9092:9001"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: password123
    volumes:
      - ./data/minio2/data1:/data1
      - ./data/minio2/data2:/data2
    networks:
      - minio
  minio3:
    image: quay.io/minio/minio:latest
    hostname: minio3
    container_name: minio3
    command: server http://minio{1...4}/data{1...2} --console-address ":9001"
    ports:
      - "9003:9000"
      - "9093:9001"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: password123
    volumes:
      - ./data/minio3/data1:/data1
      - ./data/minio3/data2:/data2
    networks:
      - minio
  minio4:
    image: quay.io/minio/minio:latest
    hostname: minio4
    container_name: minio4
    command: server http://minio{1...4}/data{1...2} --console-address ":9001"
    ports:
      - "9004:9000"
      - "9094:9001"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: password123
    volumes:
      - ./data/minio4/data1:/data1
      - ./data/minio4/data2:/data2
    networks:
      - minio
networks:
  minio:
    driver: bridge
The {1...4} syntax tells MinIO to connect to four nodes (minio1 through minio4) with two drives each (data1 and data2). Each node gets unique port mappings so you can access them individually - node 1 uses 9001/9091, node 2 uses 9002/9092, and so on.
The minio network lets containers reach each other by hostname. All nodes must use the same credentials and see the same drive layout.
To start the cluster, first create the data directories and then bring up the Compose file:
mkdir -p data/minio{1..4}/data{1..2}
docker compose up -d

Running MinIO in distributed mode
That's it! You can now visit any of the following console URLs to access the web UI:
- http://localhost:9091 (minio1)
- http://localhost:9092 (minio2)
- http://localhost:9093 (minio3)
- http://localhost:9094 (minio4)
But again, don't build this unless you actually need it - distributed mode is for production workloads where uptime and redundancy justify the extra complexity.
In addition, you'll probably want things like TLS, a single console endpoint, and external volumes or disks for a full production environment.
Common Problems When Running MinIO in Docker
Most MinIO Docker issues come from misconfigured volumes, ports, or credentials.
Here are the problems you'll actually run into and how to fix them fast.
Container starts but the web UI is inaccessible
You run docker ps and see the container running, but http://localhost:9001 returns nothing or times out.
To solve this, first check if the console port is mapped correctly:
docker port minio
You should see 9001/tcp -> 0.0.0.0:9001. If you don't see port 9001 listed, you forgot to map it in your docker run command or Compose file.
Also check the MinIO logs:
docker logs minio
Look for the line that says "Console: http://..." - this tells you which address MinIO is actually listening on. If you see a different port than expected, your --console-address flag might be wrong.
Permission errors on mounted volumes
MinIO crashes on startup or you see "permission denied" errors in the logs when it tries to write to /data.
The MinIO process runs as a specific user inside the container, and that user needs write access to your mounted directory. Fix this with:
chmod -R 755 ~/minio/data
Or if you're on Linux, run the container with your user ID:
docker run --user $(id -u):$(id -g) ...
Port conflicts with other services
You get "bind: address already in use" when starting the container.
This means that another service is already using port 9000 or 9001. Find out what's using it:
# Linux/macOS
lsof -i :9000
lsof -i :9001
# Windows
netstat -ano | findstr :9000
And then map MinIO to different host ports instead:
docker run -p 9090:9000 -p 9091:9001 ...
Now access the API at localhost:9090 and the console at localhost:9091.
Credentials not working as expected
You set MINIO_ROOT_USER and MINIO_ROOT_PASSWORD but can't log in, or the container won't start.
To solve, check if you actually passed the environment variables:
docker inspect minio | grep -A 5 Env
Look for MINIO_ROOT_USER and MINIO_ROOT_PASSWORD in the output. If they're missing, you forgot the -e flags or the environment: section in your Compose file.
MinIO also requires passwords to be at least 8 characters. If your password is shorter, MinIO logs an error and refuses to start.
If you changed credentials on an existing setup, stop and remove the container completely, then start fresh:
docker stop minio
docker rm minio
docker run ... # with new credentials
Old credential files might persist in your data directory and cause conflicts.
Best Practices for Running MinIO with Docker
Follow these practices to avoid common mistakes and keep your MinIO setup maintainable.
- Use Docker Compose for repeatability. A Compose file documents your exact configuration - ports, volumes, environment variables, and commands. You can version control it, share it with your team, and recreate identical setups across different machines. Running long docker run commands from your shell history leads to configuration drift and mistakes.
- Always configure persistent storage. Map a volume or bind mount before you store any real data. Containers are ephemeral - if you skip this step, you'll lose everything when the container restarts. Check your setup with docker inspect minio and verify the volume mapping is correct.
- Keep credentials out of command history. Don't put passwords directly in docker run commands or commit them to git in your Compose files. Use environment files (.env) with Docker Compose, or pass credentials through environment variables at runtime. Add .env to your .gitignore immediately.
- Use single-node mode for development only. Distributed mode is complex and slow to set up. You don't need high availability or erasure coding on your laptop. Save distributed deployments for production where downtime actually matters.
- Monitor logs during startup. Run docker logs -f minio when starting MinIO for the first time. The logs show you the API endpoint, console URL, and any configuration errors. If something's wrong, you'll see it immediately instead of wondering why nothing works.
Follow these practices and your MinIO setup will be solid from day one.
Conclusion
Running MinIO with Docker means you can have S3-compatible object storage running on your machine in under a minute.
For development, testing, and small-scale projects, single-node MinIO in Docker gives you everything you need. You get full S3 API compatibility, a web console for management, and complete control over your data without depending on cloud services or paying for storage you don't use.
Start with the basics: a simple docker run command or a Docker Compose file with persistent volumes and secure credentials (prefer the latter). Test your setup, verify it works, and only add complexity when you actually need it.
Distributed mode, high availability, and production-grade configurations exist for a reason - but that reason isn't local development. Move to those setups when you're deploying to production, when downtime costs you money, or when you're storing data that can't be lost. Until then, keep it simple and focus on building your application instead of managing infrastructure.
When you're ready to dive into more complex Docker topics, check out our Containerization and Virtualization with Docker and Kubernetes course.
Master Docker and Kubernetes
FAQs
How can I optimize MinIO performance for high-concurrency uploads?
Increase the number of concurrent connections your MinIO server can handle by adjusting the MINIO_API_REQUESTS_MAX environment variable. For high-concurrency workloads, also consider using multiple drives in your setup since MinIO distributes load across available drives automatically. If you're running distributed mode, make sure your network bandwidth between nodes can handle the traffic and use dedicated network interfaces for MinIO communication.
What are the best practices for setting up MinIO with Docker Compose?
Always use Docker Compose instead of docker run commands for production-like setups since it makes your configuration repeatable and version-controlled. Store credentials in a separate .env file and add it to .gitignore to keep secrets out of your repository. Map persistent volumes to dedicated directories on your host, set restart policies to unless-stopped or always, and include healthcheck configurations to monitor container status.
How do I configure MinIO for secure access using TLS?
Generate TLS certificates and place them in a directory on your host, then mount that directory to /root/.minio/certs inside the MinIO container. MinIO automatically detects certificates in this location and enables HTTPS on startup. You'll need both a public.crt file for the certificate and a private.key file for the private key, and the certificate must match your MinIO server's hostname or IP address.
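In Compose terms, that cert mount is one extra volume line (the ./certs path is illustrative - point it wherever your public.crt and private.key live):

```yaml
    volumes:
      - ./minio/data:/data
      - ./certs:/root/.minio/certs
```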
What are the key differences between MinIO and other S3-compatible alternatives?
MinIO is fully open-source and designed specifically for high-performance object storage, while many S3-compatible alternatives are either cloud services or have limited feature sets. MinIO supports features like distributed erasure coding, versioning, encryption, and lifecycle policies that match AWS S3's capabilities. Unlike some alternatives, MinIO runs anywhere - on bare metal, VMs, containers, or Kubernetes - and doesn't lock you into a specific cloud provider.
How can I automate MinIO backups and restore processes?
Use the MinIO client (mc) to create scripts that mirror buckets to another MinIO instance or S3-compatible storage with mc mirror source/ destination/. Schedule these scripts with cron jobs or use MinIO's built-in bucket replication feature to automatically sync data between MinIO instances. For restores, use mc mirror in reverse to copy data back from your backup location, or use mc cp with the --recursive flag to restore specific buckets or objects.
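As a hedged example, a crontab entry like this mirrors the article's test bucket to a second alias every night at 2 a.m. (the backup alias and the schedule are illustrative - adjust both to your setup):

```shell
# crontab -e
# m h dom mon dow  command
0 2 * * * mc mirror --overwrite local/test-bucket backup/test-bucket
```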


