
Docker Proxy: The Complete Guide to Making Containers Work Behind Corporate Firewalls

Learn how to set up proxy configuration at every layer so your Docker operations work reliably behind enterprise firewalls.
Jul 31, 2025  · 15 min read

Most enterprise networks use proxy servers to monitor and filter internet traffic. This creates barriers when Docker tries to pull images from registries. You'll hit connection timeouts, authentication errors, and failed builds that work on your home network. These proxy-related issues can derail development teams who can't access the base images they need.

Docker proxy configuration solves these connectivity problems while providing performance improvements through caching and meeting corporate security requirements. Once configured, your containers will pull images regardless of network restrictions.

In this tutorial, I'll walk you through setting up Docker daemon proxy settings, configuring container environment variables, and handling proxy configuration in Docker Compose.

New to Docker and feeling overwhelmed by all the networking concepts? Start with the Docker fundamentals and build your confidence step by step.

Understanding Docker Proxy Architecture

Different parts of the Docker ecosystem need proxy configuration at different layers.

Proxies act as intermediaries between your Docker components and external networks. When Docker needs to pull images, containers need internet access, or builds require external resources, proxy servers control and monitor these connections. They sit between your containerized applications and the outside world, filtering requests and responses.

Organizations use proxies with Docker for performance, security, and compliance.

Performance comes from caching frequently accessed resources like base images, reducing download times across your team. Security benefits include traffic filtering, malware scanning, and blocking access to unauthorized sites. Compliance requirements mandate that all network traffic passes through monitored channels with proper access controls and audit trails.

Docker proxy configuration happens at four layers:

  • Client proxy: affects Docker CLI commands like docker pull and docker push

  • Daemon proxy: controls how the Docker daemon accesses external registries  

  • Container runtime proxy: sets proxy variables for applications inside containers

  • Build-time proxy: handles proxy settings during docker build operations when Dockerfiles need internet access

Each layer serves a purpose and requires its own configuration approach.
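
Here's a quick cheat sheet of where each layer gets configured - the commands are illustrative and assume a proxy at proxy.company.com:8080:

# Client proxy: ~/.docker/config.json ("proxies" section)
# Daemon proxy: /etc/docker/daemon.json plus a systemd override

# Container runtime proxy: pass variables when starting a container
docker run --env HTTP_PROXY=http://proxy.company.com:8080 alpine wget -qO- http://example.com

# Build-time proxy: pass variables as build arguments
docker build --build-arg HTTP_PROXY=http://proxy.company.com:8080 -t myapp .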

Understanding Docker networking starts with grasping how containers communicate.

Configuring Docker to Use a Proxy

Getting Docker to work with your corporate proxy requires you to set up both the Docker daemon and client.

Why? Because they handle different types of network requests. 

The daemon manages image pulls, pushes, and registry authentication, while the client handles CLI operations and API calls. Missing either configuration leaves gaps where Docker operations can fail with connection errors.

How to configure the Docker daemon

The Docker daemon reads proxy settings from /etc/docker/daemon.json. Create this file if it doesn't exist:

{
  "proxies": {
    "http-proxy": "http://proxy.company.com:8080",
    "https-proxy": "http://proxy.company.com:8080",
    "no-proxy": "localhost,127.0.0.1,.company.com"
  }
}

The no-proxy field lists addresses that should bypass the proxy. Include your internal registry domains and localhost addresses here. Note that this daemon.json proxies block requires Docker Engine 23.0 or later; on older versions, rely on the systemd environment variables described next.

Setting up systemd service overrides

On systemd-based Linux systems, you also need to configure the Docker service itself to use the proxy. Create the override directory and configuration file:

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo nano /etc/systemd/system/docker.service.d/http-proxy.conf

Then, add these environment variables to the http-proxy.conf file:

[Service]
Environment="HTTP_PROXY=http://proxy.company.com:8080"
Environment="HTTPS_PROXY=http://proxy.company.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1,.company.com"

This ensures the Docker daemon process itself can reach external services through your proxy.

Docker won't pick up proxy changes until you restart the daemon. Run these commands to reload systemd and restart Docker:

sudo systemctl daemon-reload
sudo systemctl restart docker

The daemon reload tells systemd to read the new service configuration, while the Docker restart applies the daemon.json changes.
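
Before testing any pulls, you can confirm systemd actually picked up the override:

# Should print the HTTP_PROXY/HTTPS_PROXY/NO_PROXY values from http-proxy.conf
sudo systemctl show --property=Environment docker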

Platform-specific considerations

Docker Desktop handles proxy configuration differently than server installations. On Windows and macOS, open Settings > Resources > Proxies in the Docker Desktop GUI to set proxy values. These settings automatically configure both daemon and client.

Linux hosts using Docker Engine need the manual configuration described above. Some distributions package Docker differently, so check if your system uses dockerd directly or through a different service manager.

Verifying your configuration

Test that Docker can reach external registries with a simple pull command:

docker pull hello-world

You can also check the daemon's effective configuration:

docker system info | grep -i proxy

This shows whether Docker detected and applied your proxy settings.

If pulls still fail, check your proxy server logs to confirm Docker is routing requests correctly.

Networking issues got you stuck? Expose Docker ports correctly to avoid connectivity headaches.

Setting Proxy Environment Variables

The Docker CLI reads its proxy configuration separately from the daemon, and it pays to be precise about what each mechanism covers.

Registry traffic for docker pull and docker push actually flows through the daemon, so the daemon settings from the previous section govern those operations. Client-side configuration does two things instead: the proxies section of ~/.docker/config.json tells Docker which proxy variables to inject into the containers you run and the builds you start, while standard environment variables cover the CLI's own HTTP connections - for example, when it talks to a remote daemon over TCP.

JSON configuration method

The cleanest approach uses Docker's configuration file at ~/.docker/config.json:

{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.company.com:8080",
      "httpsProxy": "http://proxy.company.com:8080",
      "noProxy": "localhost,127.0.0.1,.company.com"
    }
  }
}

This method keeps proxy settings contained within Docker's configuration and doesn't affect other applications on your system.

Environment variable alternative

You can also set standard proxy environment variables that Docker will automatically detect by using these commands:

export HTTP_PROXY=http://proxy.company.com:8080
export HTTPS_PROXY=http://proxy.company.com:8080
export NO_PROXY=localhost,127.0.0.1,.company.com

These variables work for any application that respects proxy standards - not just Docker.
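
One caveat: some tools (curl's http_proxy among them) only read the lowercase spellings, so it's common practice to export both forms:

export http_proxy=$HTTP_PROXY
export https_proxy=$HTTPS_PROXY
export no_proxy=$NO_PROXY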

Global vs per-user configuration

For system-wide configuration, add the environment variables to /etc/environment or create a script in /etc/profile.d/:

# /etc/profile.d/proxy.sh
export HTTP_PROXY=http://proxy.company.com:8080
export HTTPS_PROXY=http://proxy.company.com:8080
export NO_PROXY=localhost,127.0.0.1,.company.com

For per-user configuration, add them to your shell profile (~/.bashrc, ~/.zshrc) or use the JSON config method. Per-user settings give developers flexibility to use different proxies or bypass them entirely for testing.

Security considerations

You should never hardcode usernames and passwords in configuration files or environment variables. 

Proxy credentials in plain text create security risks, especially in shared environments or version control systems.

Instead, use these approaches:

  • Configure your proxy server for IP-based authentication when possible
  • Use credential helpers or secure storage systems for authentication
  • Set up service accounts with minimal permissions for Docker operations

If you must include credentials, use the format http://username:password@proxy.company.com:8080, but store these in protected files with restricted permissions.
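
As a sketch of the protected-file approach, you can keep the authenticated URL in a root-only environment file and have the systemd override load it instead of inlining credentials (the file path here is just an example):

# Create a root-only file holding the authenticated proxy URL
sudo install -m 600 /dev/null /etc/docker-proxy.env
echo 'HTTP_PROXY=http://username:password@proxy.company.com:8080' | sudo tee /etc/docker-proxy.env > /dev/null

# Then reference it from the systemd override instead of hardcoding values:
# [Service]
# EnvironmentFile=/etc/docker-proxy.env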

Build-time proxy configuration

Docker builds need proxy settings passed explicitly through build arguments:

docker build \
  --build-arg HTTP_PROXY=http://proxy.company.com:8080 \
  --build-arg HTTPS_PROXY=http://proxy.company.com:8080 \
  --build-arg NO_PROXY=localhost,127.0.0.1 \
  -t myapp .

Build arguments become environment variables inside the build context.

This allows RUN instructions to download packages and dependencies through your proxy. Without these arguments, builds fail when Dockerfiles try to install software or fetch resources from the internet.

Build-time proxy settings don't persist in the final image unless you explicitly set them with ENV instructions in your Dockerfile.
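
A quick way to confirm nothing leaked into the image:

# Env should not list HTTP_PROXY/HTTPS_PROXY unless you set them with ENV
docker image inspect --format '{{.Config.Env}}' myapp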

Environment variables give you the flexibility to handle proxy settings at the right level for your workflow.

Do you find build arguments confusing? Master Docker build args with real examples that actually work.

Container and Build-Time Proxy Injection

Docker client configuration only gets you halfway there - your containers and builds need their own proxy settings. In this section, I'll show you how to inject proxy configuration at runtime and build time.

Runtime environment variables

Containers don't inherit your host's proxy settings.

You need to pass proxy variables explicitly when starting containers. Docker gives you multiple ways to do this, depending on whether you want to configure all containers or just specific ones.

JSON configuration is the cleanest approach for consistent proxy settings - as discussed earlier. To recap, add this to your ~/.docker/config.json:

{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.company.com:8080",
      "httpsProxy": "http://proxy.company.com:8080",
      "noProxy": "localhost,127.0.0.1,.company.com"
    }
  }
}

Docker automatically injects these variables into every container you start. No extra flags needed.
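
You can verify the injection with a throwaway container:

# The proxy variables appear even though no --env flags were passed
docker run --rm alpine env | grep -i proxy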

CLI flags give you more control per container:

docker run \
  --env HTTP_PROXY=http://proxy.company.com:8080 \
  --env HTTPS_PROXY=http://proxy.company.com:8080 \
  --env NO_PROXY=localhost,127.0.0.1 \
  nginx

You can also pass an entire environment file:

# proxy.env
HTTP_PROXY=http://proxy.company.com:8080
HTTPS_PROXY=http://proxy.company.com:8080
NO_PROXY=localhost,127.0.0.1

docker run --env-file proxy.env nginx

Existing containers can get proxy variables injected after startup using docker exec:

docker exec -e HTTP_PROXY=http://proxy.company.com:8080 container_name curl google.com

This approach works for one-off commands but doesn't persist across container restarts.

Build-time proxy configuration

Image builds fail in proxy environments unless you explicitly pass proxy settings.

Build arguments are the standard way to inject proxy variables during builds:

docker build \
  --build-arg HTTP_PROXY=http://proxy.company.com:8080 \
  --build-arg HTTPS_PROXY=http://proxy.company.com:8080 \
  --build-arg NO_PROXY=localhost,127.0.0.1 \
  --tag myapp .

HTTP_PROXY, HTTPS_PROXY, and NO_PROXY are predefined build arguments, so RUN instructions can read them without explicit ARG declarations - and Docker excludes them from docker history output so they don't leak credentials. Declaring them anyway documents the dependency:

ARG HTTP_PROXY
ARG HTTPS_PROXY
ARG NO_PROXY

# Use proxy for package installation
RUN apt-get update && apt-get install -y curl

# Don't persist proxy in final image

Dockerfile declarations let you set default proxy values:

ARG HTTP_PROXY=http://proxy.company.com:8080
ARG HTTPS_PROXY=http://proxy.company.com:8080

ENV HTTP_PROXY=$HTTP_PROXY
ENV HTTPS_PROXY=$HTTPS_PROXY

RUN pip install requests

The ENV instruction makes proxy variables available to all subsequent RUN commands in your build.

You should be aware that BuildKit handles proxy settings differently than the legacy builder. BuildKit caches layers aggressively, and previously cached RUN steps can mask proxy changes.

You can force BuildKit to recognize proxy updates:

DOCKER_BUILDKIT=1 docker build \
  --no-cache \
  --build-arg HTTP_PROXY=$HTTP_PROXY \
  --tag myapp .

Or, you can disable BuildKit entirely for proxy-sensitive builds:

DOCKER_BUILDKIT=0 docker build --build-arg HTTP_PROXY=$HTTP_PROXY --tag myapp .

When it comes to multi-stage builds, they need consistent proxy configuration across all stages:

ARG HTTP_PROXY
ARG HTTPS_PROXY
ARG NO_PROXY

# Build stage
FROM python:3.13 AS builder
ENV HTTP_PROXY=$HTTP_PROXY
ENV HTTPS_PROXY=$HTTPS_PROXY
ENV NO_PROXY=$NO_PROXY

COPY requirements.txt .
RUN pip install -r requirements.txt

# Runtime stage
FROM python:3.13-slim AS runtime
ENV HTTP_PROXY=$HTTP_PROXY
ENV HTTPS_PROXY=$HTTPS_PROXY
ENV NO_PROXY=$NO_PROXY

COPY --from=builder /usr/local/lib/python3.13/site-packages /usr/local/lib/python3.13/site-packages
RUN apt-get update && apt-get install -y curl

# Clear proxy variables for final image
ENV HTTP_PROXY=
ENV HTTPS_PROXY=
ENV NO_PROXY=

Build arguments declared before the first FROM normally have to be re-declared inside each stage before you can use them (the predefined proxy arguments are the exception), and environment variables never carry over between stages. Set the proxy ENV lines explicitly in every stage that needs them.

In short - configure once, inject everywhere.

Confused about ENTRYPOINT versus CMD in your proxy-enabled containers? Get the complete breakdown with practical examples.

Configuring Package Managers to Use a Proxy in Containers

Docker's proxy settings don't automatically flow down to package managers inside your containers. You need to configure each package manager separately to download packages through your corporate proxy.

Package managers like apt-get, yum, and apk make direct HTTP connections to package repositories. In restricted network environments, these connections fail unless the package manager knows about your proxy server.

Standard environment variables aren't enough. Most package managers have their own configuration files that override system-wide proxy settings.

Ubuntu/Debian: apt-get configuration

Create a proxy configuration file that apt-get will read during package installation.

Add this to your Dockerfile:

FROM python:3.13

# Configure apt proxy
RUN echo 'Acquire::http::Proxy "http://proxy.company.com:8080";' > /etc/apt/apt.conf.d/proxy.conf && \
    echo 'Acquire::https::Proxy "http://proxy.company.com:8080";' >> /etc/apt/apt.conf.d/proxy.conf

RUN apt-get update && apt-get install -y curl wget

The /etc/apt/apt.conf.d/proxy.conf file tells apt-get to route all package downloads through your proxy. The file persists across RUN commands in your build.

You can also set proxy exclusions for internal repositories:

RUN echo 'Acquire::http::Proxy::internal.company.com "DIRECT";' >> /etc/apt/apt.conf.d/proxy.conf

Alpine Linux: apk configuration

Alpine uses the apk package manager, which reads proxy settings from /etc/apk/repositories and environment variables.

FROM python:3.13-alpine

# Configure apk proxy
ENV HTTP_PROXY=http://proxy.company.com:8080
ENV HTTPS_PROXY=http://proxy.company.com:8080

RUN apk add --no-cache curl wget

Alpine's apk respects standard HTTP proxy environment variables, so you don't need separate configuration files. One thing to avoid: writing proxy settings to /etc/environment has no effect on RUN commands, since that file is only read by login shells - stick with ENV instructions or build arguments.

CentOS/RHEL: yum configuration

CentOS and RHEL systems use yum or dnf, which read proxy settings from /etc/yum.conf.

# centos:8 is EOL and its repositories have moved to the vault; Rocky Linux 8 is a drop-in replacement
FROM rockylinux:8

# Configure yum proxy
RUN echo 'proxy=http://proxy.company.com:8080' >> /etc/yum.conf

RUN yum update -y && yum install -y curl wget

For newer CentOS versions using dnf, use the following:

FROM quay.io/centos/centos:stream9

# Configure dnf proxy
RUN echo 'proxy=http://proxy.company.com:8080' >> /etc/dnf/dnf.conf

RUN dnf update -y && dnf install -y curl wget

You can also bypass the proxy for specific repositories - useful for internal mirrors - by setting proxy=_none_ in that repository's definition (internal.repo stands in for whichever repo file points at your internal mirror):

RUN echo 'proxy=_none_' >> /etc/yum.repos.d/internal.repo

Testing proxy functionality

First, you need to verify that your package manager can reach repositories through the proxy.

Add test commands to your Dockerfile:

FROM python:3.13

# Configure proxy
RUN echo 'Acquire::http::Proxy "http://proxy.company.com:8080";' > /etc/apt/apt.conf.d/proxy.conf

# Test package manager connectivity
RUN apt-get update && \
    apt-get install -y --dry-run curl && \
    echo "Package manager proxy test passed"

# Install actual packages
RUN apt-get install -y curl python3-pip

The --dry-run flag tests package resolution without actually installing anything. If it succeeds, your proxy configuration works.

You can also test with verbose output to debug connection issues:

RUN apt-get -o Debug::Acquire::http=true update

This shows exactly which URLs apt-get tries to access and whether proxy connections succeed.

For Alpine testing, use the following:

RUN apk update --verbose && \
    apk add --simulate curl && \
    echo "Alpine proxy test passed"

To recap, configure each package manager individually, and your builds will work behind any proxy.

Need to clean up proxy-related build artifacts? Docker prune commands will free up space and keep your system clean.

Advanced Proxy Implementations

Basic HTTP proxies get you started, but production environments need more control over registry access and image caching. Here's how to set up caching proxies and registry mirrors for better performance and reliability.

Caching proxy registries

Pulling the same base images over and over wastes bandwidth and slows down builds.

Caching proxies sit between your Docker clients and external registries like Docker Hub. They store frequently-accessed images locally, so subsequent pulls come from your internal network instead of the internet.

The most popular choice for enterprise environments is Harbor. Harbor is a multi-service stack (core, registry, database, Redis, portal), so you don't hand-write its Compose file - the official installer generates the full stack for you:

# Download and extract the offline installer from Harbor's GitHub releases page
cd harbor
cp harbor.yml.tmpl harbor.yml   # set the hostname and passwords before installing
sudo ./install.sh

Base settings like the hostname and database password live in harbor.yml, which the installer reads:

# harbor.yml
hostname: harbor.company.com
http:
  port: 80

database:
  password: harbor-db-password

data_volume: /data

The proxy cache itself isn't a harbor.yml setting - you configure it in the Harbor console after installation. Create a registry endpoint pointing at https://registry-1.docker.io, authenticated with your Docker Hub username and access token, then create a project with the Proxy Cache option enabled and that endpoint selected. Clients pull through the project path (dockerhub-proxy here is whatever project name you chose):

docker pull harbor.company.com/dockerhub-proxy/library/python:3.13-slim

If you need a simpler alternative for basic caching, run Docker's open-source registry image as a pull-through cache:

docker run -d \
  --name registry-proxy \
  -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -v registry-cache:/var/lib/registry \
  registry:2

Then, point your Docker clients at the pull-through cache in /etc/docker/daemon.json - registry-cache.company.com stands in for the host running the registry container, and plain-HTTP mirrors must also be listed under insecure-registries:

{
  "registry-mirrors": ["http://registry-cache.company.com:5000"],
  "insecure-registries": ["registry-cache.company.com:5000"]
}

Here are all the benefits you get with caching proxies:

  • Faster image pulls after the first download
  • Reduced internet bandwidth usage
  • Better build reliability when external registries have issues
  • Centralized access control and image scanning
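
A simple way to confirm the cache is earning its keep - pull the same image twice and compare timings:

time docker pull python:3.13-slim   # first pull populates the cache
docker rmi python:3.13-slim
time docker pull python:3.13-slim   # second pull should come from the cache and finish faster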

Registry mirrors and BuildKit

Registry mirrors let you redirect image pulls to internal or geographically closer registries.

Single mirror configuration routes all Docker Hub requests through one mirror:

{
  "registry-mirrors": ["https://mirror.company.com"],
  "insecure-registries": ["harbor.company.com:5000"]
}

Add this to /etc/docker/daemon.json and restart Docker. All docker pull commands will try the mirror first, falling back to Docker Hub if the mirror is unavailable.

Multiple mirrors provide redundancy:

{
  "registry-mirrors": [
    "https://mirror1.company.com",
    "https://mirror2.company.com"
  ]
}

Docker tries mirrors in order until one succeeds, and falls back to Docker Hub automatically - there's no need to list registry-1.docker.io as a mirror yourself.
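
To double-check which mirrors the daemon actually registered:

docker info | grep -A 3 'Registry Mirrors'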

BuildKit mirror setup requires separate configuration because BuildKit bypasses some daemon settings:

# buildkitd.toml
debug = true

[registry."docker.io"]
  mirrors = ["mirror.company.com"]

[registry."mirror.company.com"]
  http = true
  insecure = true

You can start BuildKit with the custom config:

buildkitd --config=/etc/buildkit/buildkitd.toml

Or, use BuildKit's inline mirror configuration:

docker buildx create \
  --name mybuilder \
  --config /etc/buildkit/buildkitd.toml \
  --use

Finally, test your mirror setup by building an image that pulls from Docker Hub:

# Dockerfile
FROM python:3.13-slim
RUN pip install requests

docker build --progress=plain .

The --progress=plain flag shows which registry BuildKit actually contacts. You should see your mirror URLs in the build output.

That's it!

Managing complex multi-container proxy setups? Docker Compose simplifies networking and service discovery.

Security Hardening

Proxy configurations open new attack possibilities that need careful attention. Here's how to lock down Docker socket access and protect proxy credentials from exposure.

Docker socket proxy

Exposing Docker's socket directly creates a massive security risk.

The Docker socket (/var/run/docker.sock) gives root-level access to your entire host system. Any process that can write to this socket can spawn containers with host filesystem access, escalate privileges, or break out of container isolation entirely.

Direct socket mounting is dangerous:

# DON'T DO THIS
docker run -v /var/run/docker.sock:/var/run/docker.sock myapp

This pattern appears in Docker-in-Docker setups and CI/CD pipelines, but it's a security nightmare. A compromised container can control the entire Docker daemon.

Socket proxy services create a safer intermediary layer. They filter Docker API calls and restrict what containers can actually do.

Use Tecnativa's docker-socket-proxy:

# docker-compose.yml
version: '3.8'
services:
  docker-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      - CONTAINERS=1
      - IMAGES=1
      - NETWORKS=1
      - VOLUMES=1
      - BUILD=0
      - COMMIT=0
      - CONFIGS=0
      - SECRETS=0
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - "2375:2375"

The proxy exposes only specific Docker API endpoints. Set environment variables to 1 for allowed operations and 0 for blocked ones.

Your applications connect to the proxy instead of the raw socket:

docker run --env DOCKER_HOST=tcp://docker-proxy:2375 myapp
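
With the environment shown above, permitted API calls go through while blocked ones are rejected - for example:

# Allowed: listing containers works because CONTAINERS=1
docker -H tcp://localhost:2375 ps

# Blocked: builds fail with a 403 because BUILD=0
docker -H tcp://localhost:2375 build -t test .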

Here are some additional socket hardening strategies to follow:

  • Run the socket proxy on a separate network segment
  • Use TLS authentication between clients and the proxy
  • Monitor socket proxy logs for suspicious API calls
  • Rotate proxy access credentials regularly

Credential management

Proxy credentials in plain text URLs show up everywhere - logs, process lists, environment dumps.

Never embed credentials in proxy URLs:

# SECURITY RISK - credentials visible in process list
export HTTP_PROXY=http://admin:password123@proxy.company.com:8080

Anyone with access to ps aux or environment variable dumps can see your proxy password. This includes application logs, container inspection, and system monitoring tools.

Instead, use authentication files:

# Create credentials file with restricted permissions
echo "admin:password123" > ~/.proxy-creds
chmod 600 ~/.proxy-creds

# Configure proxy without embedded credentials
export HTTP_PROXY=http://proxy.company.com:8080

Configure your proxy server to read credentials from the file or use external authentication systems.

TLS proxies encrypt all proxy traffic including authentication handshakes:

# HTTPS proxy encrypts credentials in transit
export HTTPS_PROXY=https://proxy.company.com:8443

Set up your proxy server with proper TLS certificates - either from a CA or self-signed certificates distributed to client machines.

Certificate-based authentication eliminates passwords entirely:

# Client certificate authentication
export HTTPS_PROXY=https://proxy.company.com:8443
curl --cert client.crt --key client.key --cacert proxy-ca.crt https://example.com

The proxy validates client certificates instead of username/password combinations. Certificates can be revoked individually if compromised.

Kubernetes secret management keeps proxy credentials out of container images:

apiVersion: v1
kind: Secret
metadata:
  name: proxy-credentials
type: Opaque
data:
  username: YWRtaW4=  # base64 encoded
  password: cGFzc3dvcmQxMjM=  # base64 encoded
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
      - name: myapp
        env:
        - name: PROXY_USER
          valueFrom:
            secretKeyRef:
              name: proxy-credentials
              key: username
        - name: PROXY_PASS
          valueFrom:
            secretKeyRef:
              name: proxy-credentials
              key: password

Secrets stay encrypted at rest and get injected at runtime without appearing in image layers.
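
Rather than hand-encoding base64 values, you can let kubectl build the Secret for you:

kubectl create secret generic proxy-credentials \
  --from-literal=username=admin \
  --from-literal=password='password123'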

Lock down socket access and encrypt your credentials - your future self will thank you.

Considering alternatives to Docker for your proxy setup? Compare Docker versus Podman to choose the right containerization tool.

Troubleshooting Common Issues

Proxy issues show up as timeouts, connection failures, and mysterious build errors that work fine outside your network. Here's a systematic approach to diagnose and fix the most common Docker proxy problems.

Structured diagnostic approach

Start with the basics and work your way up the stack.

Step 1: Verify proxy connectivity from the host

Test if your proxy server is reachable:

curl -x http://proxy.company.com:8080 https://registry-1.docker.io/v2/

If this fails, your proxy configuration or network routing has issues. Contact your network team before troubleshooting Docker.

Step 2: Check Docker daemon proxy settings

Verify the daemon reads your proxy configuration:

docker info | grep -i proxy

You should see your proxy URLs listed. If they're missing, check /etc/docker/daemon.json and restart the Docker service.

Step 3: Test Docker client operations

Try pulling a simple image:

docker pull hello-world

Success means your client proxy settings work. Failure points to daemon-level proxy issues.

Step 4: Test container-level connectivity

Run a container and test outbound connections:

docker run --rm \
  -e HTTP_PROXY=http://proxy.company.com:8080 \
  -e HTTPS_PROXY=http://proxy.company.com:8080 \
  python:3.13-slim \
  python -c "import urllib.request; print(urllib.request.urlopen('https://pypi.org').getcode())"

This isolates whether the problem is with Docker operations or container networking.

Common issues and solutions

Issue: docker pull times out

You'll see this error when Docker can't reach external registries:

Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io: no such host

Start by testing basic connectivity to isolate where the problem occurs:

# Check if DNS resolution works through proxy
nslookup registry-1.docker.io

# Test direct registry access
curl -I https://registry-1.docker.io/v2/

# Verify daemon proxy configuration
sudo journalctl -u docker.service | grep -i proxy

Fix this by addressing DNS and proxy configuration issues:

  • Add DNS servers to /etc/docker/daemon.json (see the sketch after this list)

  • Configure NO_PROXY for internal DNS servers

  • Check if proxy supports HTTPS CONNECT method
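
For the first fix, here's a sketch of the daemon.json - 10.0.0.2 stands in for your internal DNS server:

sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "dns": ["10.0.0.2", "8.8.8.8"],
  "proxies": {
    "http-proxy": "http://proxy.company.com:8080",
    "https-proxy": "http://proxy.company.com:8080",
    "no-proxy": "localhost,127.0.0.1,.company.com"
  }
}
EOF
sudo systemctl restart docker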

Issue: Package installation fails during builds

Package managers inside containers can't reach repositories, showing errors like this:

E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/jammy/Release

Test package manager connectivity directly in a temporary container:

# Test package manager connectivity in a running container
docker run -it --rm python:3.13 bash
apt-get update -o Debug::Acquire::http=true

The verbose output shows exactly where the connection fails.

Address this by configuring package managers to use your proxy:

  • Configure package manager proxy settings in Dockerfile
  • Pass proxy build arguments to Docker build
  • Check if proxy blocks package repository domains

Issue: BuildKit ignores proxy settings

BuildKit uses different proxy handling than the legacy builder, causing errors like this:

failed to solve: failed to fetch remote https://github.com/user/repo.git

Test whether the issue is BuildKit-specific by checking your builder configuration:

# Check BuildKit configuration
docker buildx inspect

# Build with legacy builder
DOCKER_BUILDKIT=0 docker build .

If the legacy builder works, you have a BuildKit proxy configuration problem.

Fix BuildKit proxy issues with these approaches:

  • Configure BuildKit-specific proxy settings in buildkitd.toml
  • Use --build-arg to pass proxy variables explicitly
  • Disable BuildKit for proxy-sensitive builds

Issue: Container can't reach external services

Your application inside the container fails to connect to external APIs:

requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.example.com', port=443)

Debug this by testing connectivity from inside the container:

# Test from inside the container
docker exec container_name curl -v https://api.example.com

# Check container environment variables
docker exec container_name env | grep -i proxy

This shows whether the proxy variables are present and if the connection attempt succeeds.

Resolve container connectivity issues by ensuring proper proxy configuration:

  • Pass proxy environment variables at runtime
  • Configure application-specific proxy settings
  • Add destination to NO_PROXY list

Log analysis and connectivity testing

Docker daemon logs reveal the complete story of proxy negotiations and failures.

On systemd-based systems, use journalctl to follow Docker service logs in real-time:

# SystemD systems
sudo journalctl -u docker.service -f

For systems using traditional log files, check the Docker log directly:

# Direct log file
sudo tail -f /var/log/docker.log

Look for specific proxy-related patterns in the logs. Connection refused errors indicate the proxy server isn't accessible. Authentication failures show up as "407 Proxy Authentication Required" messages. Timeout errors suggest network routing problems or proxy server overload.

Container-level testing with curl and wget gives you detailed insight into proxy behavior.

Test HTTP connections through your proxy to see exactly what happens during the handshake:

# Test HTTP proxy (use the full python:3.13 image - the slim variant doesn't include curl)
docker run --rm \
  -e HTTP_PROXY=http://proxy.company.com:8080 \
  python:3.13 \
  curl -v http://httpbin.org/ip

The verbose output shows DNS resolution, proxy connection establishment, and the actual HTTP request flow. You'll see lines like "Connected to proxy.company.com" followed by "CONNECT httpbin.org:80" if the proxy tunnel works correctly.

HTTPS connections require the proxy to support the CONNECT method:

# Test HTTPS proxy
docker run --rm \
  -e HTTPS_PROXY=http://proxy.company.com:8080 \
  python:3.13 \
  curl -v https://httpbin.org/ip

Watch for "CONNECT httpbin.org:443 HTTP/1.1" in the output. If you see "Method not allowed" or similar errors, your proxy doesn't support HTTPS tunneling.

Test authentication by embedding credentials in the proxy URL:

# Test with authentication
docker run --rm \
  -e HTTPS_PROXY=http://user:pass@proxy.company.com:8080 \
  python:3.13 \
  wget -O- https://httpbin.org/ip

Authentication failures show up as "407 Proxy Authentication Required" responses. Successful authentication proceeds directly to the target request.

Network-level debugging with tcpdump captures the actual packets between Docker and your proxy server.

Run tcpdump to see all traffic flowing to your proxy:

# Capture proxy traffic
sudo tcpdump -i any host proxy.company.com and port 8080

This raw packet capture shows connection attempts, data transfer, and connection termination. Look for TCP connection establishment (SYN/ACK packets) followed by HTTP traffic. Connection timeouts appear as repeated SYN packets with no response.

DNS resolution testing eliminates a common source of proxy confusion.

Many proxy issues are actually DNS problems in disguise. Test DNS resolution inside containers:

# Test DNS inside containers (busybox ships with nslookup)
docker run --rm busybox nslookup registry-1.docker.io

If this fails, the container can't resolve domain names at all. Try using a public DNS server to isolate the problem:

# Test with custom DNS
docker run --dns 8.8.8.8 --rm busybox nslookup registry-1.docker.io

Corporate environments often have DNS servers that only work from inside the network. Your proxy might need to handle DNS requests as well as HTTP traffic, or you might need to configure containers with specific DNS servers that work through your proxy.

Work through each layer systematically, and you'll find the root cause.

Looking for options that might handle proxies differently? We have a guide on Docker alternatives so you can explore the current complete landscape of containerization tools in 2025.

Conclusion

Docker proxy configuration isn't only about getting images to download - it's also about building reliable, secure containerized applications in enterprise environments.

I've shown you how to configure proxy settings at four different layers: client operations, daemon processes, container runtime, and build-time operations. Each layer serves a specific purpose and requires its own configuration approach, from JSON files and environment variables to package manager settings and BuildKit configurations. A layered proxy strategy protects you from single points of failure.

Want to dig deeper into Docker, containerization, and security? Check out DataCamp's Docker courses.


Author: Dario Radečić
Senior Data Scientist based in Croatia. Top Tech Writer with over 700 articles published, generating more than 10M views. Author of the book Machine Learning Automation with TPOT.

FAQs

Why do I need proxy configuration for Docker?

Corporate networks use proxy servers to monitor and filter internet traffic, which blocks Docker's direct connections to registries like Docker Hub. Without proper proxy configuration, you'll see connection timeouts when pulling images, failed builds when installing packages, and containers that can't reach external services. Docker proxy settings route all these connections through your corporate proxy server, making everything work smoothly behind firewalls.

What's the difference between Docker daemon proxy and client proxy settings?

The Docker daemon handles operations like pulling and pushing images to registries, while the Docker client manages CLI commands and API calls. Each needs separate proxy configuration because they make different types of network requests. Missing either configuration creates gaps where some Docker operations work while others fail with connection errors.

Do containers automatically inherit proxy settings from the host?

No, containers don't inherit your host's proxy settings by default. You need to explicitly pass proxy environment variables when starting containers using --env flags, environment files, or JSON configuration. This isolation means you can run containers with different proxy settings or bypass proxies entirely for specific applications.

Why does my Docker build fail even when image pulls work fine?

Docker builds need proxy settings passed as build arguments because the build process runs in isolation from your client configuration. Use --build-arg HTTP_PROXY=... when running docker build - the proxy variables are predefined build arguments, so RUN instructions pick them up without extra ARG declarations. Package managers inside containers also need their own proxy configuration files to download dependencies.

How do I troubleshoot BuildKit proxy issues that don't happen with the legacy builder?

BuildKit uses different proxy handling and caches build contexts aggressively, sometimes ignoring proxy changes. Try building with --no-cache to force fresh proxy detection, or use DOCKER_BUILDKIT=0 to disable BuildKit entirely. For persistent issues, configure BuildKit-specific proxy settings in /etc/buildkit/buildkitd.toml with registry mirror configurations.
