
How to Expose a Docker Port

Learn how to effectively expose and publish ports in Docker to enable communication between your containers and the outside world. This guide covers everything from Dockerfile configuration and runtime flags to Docker Compose orchestration and troubleshooting techniques.
Jun 2, 2025 · 11 min read

When working with Docker containers, one of the most crucial aspects I have encountered is enabling network communication between containers and the outside world. Without proper port configuration, containerized applications remain isolated and inaccessible. In this article, I will walk you through the process of exposing Docker ports, a fundamental skill for anyone working with containerized applications.

This guide is designed specifically for software developers, DevOps engineers, and IT professionals who need to make their containerized applications accessible to users, other services, or external systems. Whether you are deploying a web server, an API, or a database within a container, understanding port exposure is essential for building functional Docker-based systems.

By the end of this article, you will have a clear understanding of how Docker networking works, the different methods of exposing ports, and practical techniques to implement these concepts in your own projects.

If you are new to Docker, consider taking one of our courses, such as Introduction to Docker, Containerization and Virtualization with Docker and Kubernetes, Intermediate Docker, or Containerization and Virtualization Concepts.

What is Exposing a Port in Docker?

Before diving into the technical details, it is important to understand what port exposure means in the Docker ecosystem.

Container network isolation principles

Docker containers are designed with isolation as a core principle. This isolation is achieved through Linux kernel features such as namespaces and cgroups. Namespaces provide process isolation, ensuring that processes within a container cannot see or interact with processes in other containers or the host system. Cgroups, on the other hand, control resource allocation, limiting how much CPU, memory, and other resources a container can consume.

From a networking perspective, each container receives its own network namespace, complete with a virtual Ethernet interface (veth pair). This interface connects to a Docker bridge network by default, allowing inter-container communication while maintaining isolation from the host network. Think of this as each container having its own private network address, invisible to the outside world.
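
You can see this isolation in practice by inspecting the default bridge network. The command below is a quick sketch using Docker's built-in Go templating; it lists each connected container's name and private IP address:

docker network inspect bridge --format '{{range .Containers}}{{.Name}}: {{.IPv4Address}}{{println}}{{end}}'

The addresses shown (typically in the 172.17.0.0/16 range) exist only on the Docker bridge and are not reachable from outside the host.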

In its default configuration, a Docker container is sandboxed. Services running inside can communicate with other containers on the same Docker network, but they are generally unreachable from outside, including from the host machine. This is where port exposure becomes necessary.

Expose vs publish: semantic distinction

When working with Docker port configuration, you will encounter two related but distinct concepts: exposing and publishing ports.

Exposing ports is primarily a documentation feature. When you expose a port in Docker, you are essentially adding metadata to your container image, indicating that the containerized application listens on specific ports. 

However, exposing a port does not actually make it accessible from outside the container; it simply tells users of your image which ports the application expects to use.

Publishing ports is what actually makes your containerized services available to the outside world. Publishing creates a mapping between a port on the host machine and a port inside the container. When you publish a port, Docker configures the host's network to forward traffic from the specified host port to the corresponding container port.

Here is when to use each approach:

  • Use expose when you are building images meant to be used by other Docker users, to document which ports your application uses.
  • Use publish when you need external systems (including the host) to access services running inside your container.
  • Use both together for complete documentation and functionality, especially in production environments.

How to Expose Docker Ports

Now that we understand the concepts, let's look at the practical aspects of exposing Docker ports in different contexts.

Dockerfile declaration strategies

The most basic way to expose a port is by using the EXPOSE instruction in your Dockerfile. This declares the ports the container is designed to use.

FROM my-image:latest
EXPOSE 80
EXPOSE 443

In this example, I have specified that the container will use ports 80 and 443, which are standard for HTTP (Hypertext Transfer Protocol) and HTTPS (Hypertext Transfer Protocol Secure) traffic. You can also combine these into a single instruction:

EXPOSE 80 443
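
Because EXPOSE only records metadata, you can verify what an image declares without running it. As a quick check, assuming an image tagged my-image built from the Dockerfile above:

docker image inspect my-image --format '{{json .Config.ExposedPorts}}'

This prints something like {"443/tcp":{},"80/tcp":{}}, confirming that the declarations are attached to the image even though nothing has been published yet.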

When specifying ports, it is good practice to include the protocol, TCP (Transmission Control Protocol) or UDP (User Datagram Protocol), if your application uses a specific one:

EXPOSE 53/udp
EXPOSE 80/tcp

If no protocol is specified, TCP is assumed by default. This is appropriate for most web applications, but services like DNS (Domain Name System) or gaming servers often require UDP.

From a security perspective, I recommend only exposing the ports that your application actually needs. Each exposed port represents a potential attack vector, so minimizing the number of exposed ports follows the principle of least privilege.
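
A related hardening technique applies at publish time: you can bind a published port to the loopback interface, so the service is reachable from the host itself but not from other machines on the network. A minimal sketch, using the same hypothetical my-image web server:

docker run -p 127.0.0.1:8081:80 my-image

Without the 127.0.0.1 prefix, Docker binds the port on 0.0.0.0, which accepts connections from any network interface.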

Runtime port management

While the EXPOSE instruction documents which ports a container uses, to make these ports accessible from the host or other machines, you need to publish them when running the container.

Here is how to explicitly bind a container port to a specific host port:

docker run -p 8081:80 my-image

The -p (or --publish) flag lets you bind a single port or range of ports in the container to the host.

This command maps port 80 inside the container to port 8081 on the host. After running this command, you can access the web server by navigating to http://localhost:8081 in your browser.
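
To confirm the mapping from the command line rather than a browser, you can send a request from the host. This assumes the container actually serves HTTP on port 80:

curl -I http://localhost:8081

A response header block (for example, HTTP/1.1 200 OK) confirms that traffic is being forwarded from host port 8081 to container port 80.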

For convenience during development, Docker provides the -P (capital P, or --publish-all) flag, which automatically publishes all exposed ports to random high-numbered ports on the host:

docker run -P my-image

To find which host ports were assigned, I can use the docker ps command:

docker ps

This will show output like:

CONTAINER ID   IMAGE      COMMAND        CREATED          STATUS         PORTS                                     NAMES
a7c53d9413bf   my-image   "/docker-.…"   10 seconds ago   Up 9 seconds   0.0.0.0:49153->80/tcp, :::49153->80/tcp   wizardl

Here, port 80 inside the container is mapped to port 49153 on the host.

When working with services that use both TCP and UDP on the same port (like DNS servers), you need to specify the protocol when publishing:

docker run -p 53:53/tcp -p 53:53/udp dns-server
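
You can confirm that both protocol mappings took effect with docker port. For the hypothetical dns-server image above, the output would look along these lines:

docker port [container_id]
53/tcp -> 0.0.0.0:53
53/udp -> 0.0.0.0:53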

Docker Compose orchestration

For multi-container applications, Docker Compose provides a more manageable way to configure networking and ports.

In a Docker Compose YAML file, you can specify port mappings under the ports key for each service:

services:
  web:
    image: my-image
    ports:
      - "8081:80"
      - "443:443"
  api:
    build: ./api
    ports:
      - "3000:3000"

Docker Compose makes an important distinction between ports and expose. The ports section creates published port mappings accessible from outside, while expose only makes ports available to linked services within the same Compose network:

services:
  web:
    image: my-image
    ports:
      - "8081:80"
  database:
    image: postgres
    expose:
      - "5432"

In this example, the web service is accessible from the host at port 8081, but the PostgreSQL database is only accessible to other services within the Compose file, not directly from the host.

For flexible configurations, especially across different environments, I can use environment variables in port mappings:

services:
  web:
    image: my-image
    ports:
      - "${WEB_PORT:-8081}:80"

This syntax allows me to specify the host port through an environment variable (WEB_PORT), defaulting to 8081 if not set.
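
For example, to publish the service on host port 9000 in one environment while keeping the default elsewhere, set the variable when starting Compose (or define it in an .env file next to the Compose file):

WEB_PORT=9000 docker compose up -d

If WEB_PORT is unset, the :-8081 fallback in the mapping takes over and the service is published on port 8081 as before.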

Exposing Docker Ports - Worked Example

Let's walk through a complete example of exposing Docker ports for a web application with a database backend.

Imagine we are building a simple web application that consists of a Python Flask API service and a PostgreSQL database. The API service listens on port 5000, and PostgreSQL uses its default port 5432.

Here is what our simple Flask application (app.py) might look like. It connects to a database named myapp and fetches data from the items table, once that table has been created within the database.

from flask import Flask, jsonify
import os
import psycopg2

app = Flask(__name__)

# Database connection parameters from environment variables
DB_HOST = os.environ.get('DB_HOST', 'db')
DB_NAME = os.environ.get('DB_NAME', 'myapp')
DB_USER = os.environ.get('DB_USER', 'postgres')
DB_PASS = os.environ.get('DB_PASSWORD', 'postgres')

@app.route('/')
def index():
    return jsonify({'message': 'API is running'})

@app.route('/items')
def get_items():
    # Connect to the PostgreSQL database
    conn = psycopg2.connect(
        host=DB_HOST,
        database=DB_NAME,
        user=DB_USER,
        password=DB_PASS
    )
    
    # Create a cursor and execute a query
    cur = conn.cursor()
    cur.execute('SELECT id, name FROM items')
    
    # Fetch results and format as list of dictionaries
    items = [{'id': row[0], 'name': row[1]} for row in cur.fetchall()]
    
    # Close connections
    cur.close()
    conn.close()
    
    return jsonify(items)

if __name__ == '__main__':
    # Run the Flask app, binding to all interfaces (important for Docker)
    app.run(host='0.0.0.0', port=5000)

Next, I will create our requirements.txt file:

flask
psycopg2-binary

Now, I need to create a Dockerfile for our Python API service:

FROM python:3.9-slim
WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose the port Flask runs on
EXPOSE 5000

# Command to run the application
CMD ["python", "app.py"]

Finally, I will create a Docker Compose file to orchestrate both services:

services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      tags:
        - "my-custom-image:latest"
    container_name: my-custom-api
    ports:
      - "5000:5000"
    environment:
      - DB_HOST=db
      - DB_NAME=myapp
      - DB_USER=postgres
      - DB_PASSWORD=postgres
    depends_on:
      - db
  
  db:
    container_name: my-custom-db
    image: postgres:13
    expose:
      - "5432"
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:

In this configuration:

  1. The API service builds from our Dockerfile and publishes port 5000, making it accessible from the host machine.
  2. The PostgreSQL service exposes port 5432 but does not publish it, making it accessible only to other services within the Compose network.
  3. The API service can access the database using the hostname db (which is the service name) and port 5432.

To run this application, use:

docker compose up

Now, I can access the API at http://localhost:5000, but the PostgreSQL instance is not directly accessible from outside the Docker network, which is good security practice for database services. You can, however, reach PostgreSQL from inside the running container:

docker compose exec db psql -U postgres -d myapp

This command opens the PostgreSQL interactive shell, where I can inspect the database and create an items table. Once the table is created and populated, I can view the data at http://localhost:5000/items.
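
As an illustration, the table can also be created and populated non-interactively from the host. This is just a sketch; the column names match what app.py expects:

docker compose exec db psql -U postgres -d myapp -c "CREATE TABLE items (id SERIAL PRIMARY KEY, name TEXT);"
docker compose exec db psql -U postgres -d myapp -c "INSERT INTO items (name) VALUES ('first'), ('second');"
curl http://localhost:5000/items

The final curl call should return the inserted rows as JSON.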

How to expose multiple Docker ports

If my Python application needs to expose multiple ports, I can specify them all in the Dockerfile:

FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000 8081 9090
CMD ["python", "app.py"]

This Dockerfile exposes:

  • Port 5000 for the main Flask application (currently active)
  • Port 8081 for a potential monitoring dashboard 
  • Port 9090 for a future debugging interface

It is important to note that while I have exposed ports 8081 and 9090 in the Dockerfile as documentation, my current Flask application is only configured to listen on port 5000. The additional exposed ports indicate where future functionality might be implemented.

How to publish exposed ports

To publish ports that are exposed in a Dockerfile, I use the -p or -P flag when running the container:

docker run -p 8081:5000 my-custom-image

This maps port 5000 in the container (which is exposed in the Dockerfile and where Flask is listening) to port 8081 on the host. After running this command, I can access my Flask application by navigating to http://localhost:8081 in my browser.

I can also use the -P flag to automatically publish all exposed ports to random ports on the host:

docker run -P my-custom-image

To check which ports are published, I can use:

docker ps

For more detailed information, I can use:

docker port [container_id]
5000/tcp -> 0.0.0.0:32768
8081/tcp -> 0.0.0.0:32769
9090/tcp -> 0.0.0.0:32770

This command shows all the port mappings for a specific container. For example, if my Flask application is only listening on port 5000 but I have exposed multiple ports in the Dockerfile, docker port will show me exactly which host ports are mapped to which container ports. So, if port 5000 has been mapped to 32768, I can access my Flask application at http://localhost:32768.

Diagnostic and Troubleshooting Techniques

Even with careful configuration, issues can arise with Docker port mapping. Here are some techniques I use to diagnose and resolve common problems.

Port mapping inspection

The first step in diagnosing port-related issues is to verify that the mappings are set up correctly. I use these commands:

List all running containers with port mappings:

docker ps

Get detailed info about a specific container:

docker inspect [container_id]

Check port mappings for a specific container:

docker port [container_id]

The docker inspect command provides detailed information, including network settings. To focus on network-related details, use:

docker inspect --format='{{json .NetworkSettings.Ports}}' [container_id] | jq
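
For the Compose example from earlier, the output would look something like this (the exact shape depends on how the ports were published):

{
  "5000/tcp": [
    {
      "HostIp": "0.0.0.0",
      "HostPort": "5000"
    }
  ]
}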

If you need to perform deeper traffic analysis, tools like tcpdump and netstat can be very useful, but they must be present inside the container. If the container does not already include them, you can install them with a command like the following (shown here for Debian-based images):

docker exec -u root -it [container_id] bash -c "apt-get update && apt-get install -y net-tools tcpdump"

To view which ports the application is listening on inside the container:

docker exec -it [container_id] netstat -tuln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.11:42265        0.0.0.0:*               LISTEN
udp        0      0 127.0.0.11:34267        0.0.0.0:*

This shows, for example, that the Flask app is listening on all interfaces on port 5000.

To monitor incoming and outgoing traffic on the container's main network interface (usually eth0):

docker exec -it [container_id] tcpdump -i eth0
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes

This command allows you to trace traffic and identify whether requests are reaching the container or if packets are being dropped or misrouted.
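
To cut down on noise, the capture can be restricted to the application port. A minimal sketch, filtering for the Flask port used throughout this article:

docker exec -it [container_id] tcpdump -i eth0 -nn port 5000

The -nn flag disables hostname and port-name resolution, which keeps the output fast and literal.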

Iptables rule analysis

Docker leverages iptables to manage Network Address Translation (NAT) and control traffic flow between the host and containers. It automatically inserts rules to forward host ports to the appropriate container IPs and ports, enabling seamless access to containerized services.

On native Linux hosts, you can inspect these rules using commands like:

sudo iptables -t nat -L DOCKER -n -v

This shows how Docker maps incoming traffic on specific host ports to container endpoints. If you publish all three ports from the multi-port example (for instance, with docker run -p 5000:5000 -p 8081:8081 -p 9090:9090 my-custom-image), you should see the forwarded ports:

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
   50  3000 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0
  100  6000 RETURN     all  --  br-xxxxxxx *       0.0.0.0/0            0.0.0.0/0
  500 30000 DNAT       tcp  --  !br-xxxxxxx *       0.0.0.0/0            0.0.0.0/0            tcp dpt:5000 to:172.18.0.2:5000
  400 24000 DNAT       tcp  --  !br-xxxxxxx *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8081 to:172.18.0.2:8081
  300 18000 DNAT       tcp  --  !br-xxxxxxx *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9090 to:172.18.0.2:9090

Here:

  • DNAT rules redirect traffic from the host to the container.
  • RETURN rules ensure proper routing of internal Docker network traffic.

However, depending on your environment (for example, Docker Desktop on Windows or WSL2), Docker may handle port forwarding differently, and iptables rules may not be visible or modifiable from inside the system. In this case, you can check the port forwarding with the following command:

sudo ss -tulnp
Netid   State    Recv-Q   Send-Q   Local Address:Port   Peer Address:Port
tcp     LISTEN   0        4096     *:5000               *:*
tcp     LISTEN   0        4096     *:8081               *:*
tcp     LISTEN   0        4096     *:9090               *:*

Troubleshooting tips:

  • Confirm port mappings with docker ps and docker port [container].
  • Check if the host system firewall or security software is blocking ports.
  • Use container logs and network tools like netstat and tcpdump to verify your application’s network behavior.

Conclusion

Exposing Docker ports is a fundamental skill that bridges the gap between containerized isolation and practical usability. Throughout this article, I have explained how Docker's networking model works, the difference between exposing and publishing ports, and provided practical examples for various scenarios.

Keep these key points in mind:

  • Use EXPOSE in Dockerfiles to document which ports your application uses
  • Use -p or -P when running containers to make those ports accessible from outside
  • Leverage Docker Compose for managing complex multi-container applications
  • Use diagnostic tools to troubleshoot issues when they arise

Mastering these concepts and techniques will enable you to design more robust containerized applications that communicate effectively both internally and with the outside world. Whether you are developing a simple web application or a complex microservices architecture, proper port management is critical for success.

As containers continue to dominate modern application deployment strategies, the ability to confidently manage Docker networking and ports will remain an essential skill in every developer's toolkit.


Exposing a Docker Port FAQs

What does it mean to expose a port in Docker?

Exposing a port in Docker is a way to document which ports your containerized application listens on, but it does not make the port accessible outside the container.

How do I make a Docker container’s port accessible from my host machine?

You publish a port using the -p flag with docker run or the ports section in Docker Compose, which maps a host port to the container port.

What is the difference between `EXPOSE` and `-p` in Docker?

EXPOSE is used in the Dockerfile to declare ports for documentation, while -p publishes and maps those ports for external access.

Can I expose multiple ports for a single Docker container?

Yes, you can expose multiple ports using multiple EXPOSE instructions in the Dockerfile or by specifying multiple ports in the ports section of Docker Compose.

How do I troubleshoot Docker port mapping issues?

Use commands like docker ps, docker port, docker inspect, and network tools inside the container (e.g., netstat, tcpdump) to verify port mappings and traffic flow.


