How to Fix Docker Node.js fetch ECONNREFUSED 127.0.0.1 Error

You're running a Node.js application inside a Docker container. Everything looks perfect until you try to fetch data from an API running on your host machine:

// index.js
import fetch from 'node-fetch';

const response = await fetch('http://localhost:5000/api');
console.log(await response.text());


And then you see this frustrating error:

FetchError: request to http://localhost:5000/api failed, reason: connect ECONNREFUSED 127.0.0.1:5000


The API is running perfectly on your host machine at port 5000. You can access it from your browser. So why can't your containerized Node.js app connect to it?




Step 1: Understanding the Error


The ECONNREFUSED error means your Node.js application tried to establish a TCP connection to localhost:5000, but the connection was actively refused. This happens when there's no service listening on that port from the container's perspective.
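The refusal itself has nothing to do with Docker: it's the kernel answering "nothing is listening here" with a TCP reset. You can provoke the exact same error with Node's built-in net module, no container required. This is a minimal sketch; it assumes port 59999 is unused on your machine:

```javascript
// econnrefused-demo.js — trigger ECONNREFUSED without Docker by connecting
// to a loopback port that (we assume) has no listener.
import net from 'node:net';

const socket = net.connect({ host: '127.0.0.1', port: 59999 });
socket.on('error', (err) => {
  // The kernel replied with RST because nothing listens on that port.
  console.log(err.code); // ECONNREFUSED
});
```

Inside the container, `localhost:5000` fails for exactly this reason: from the container's point of view, no process is bound to that port.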


Let me show you how to reproduce this error:

# Dockerfile
FROM node:18-alpine
WORKDIR /app

# Install dependencies
COPY package*.json ./
RUN npm install

# Copy application files
COPY . .

# Start the application
CMD ["node", "index.js"]


// package.json
{
  "name": "docker-fetch-test",
  "version": "1.0.0",
  "type": "module",
  "dependencies": {
    "node-fetch": "^3.3.2"
  }
}


Build and run the container:

$ docker build -t node-fetch-test .
$ docker run --rm node-fetch-test
# Error: connect ECONNREFUSED 127.0.0.1:5000


Step 2: Identifying the Cause


The root cause lies in Docker's network isolation model. Here's what's actually happening:


The localhost Trap


When your Node.js code inside the container references localhost or 127.0.0.1, it's pointing to the container's own loopback interface, not your host machine's localhost. Each Docker container has its own network namespace, completely isolated from the host.


Think of it this way:


  • Host machine localhost → Host's network interface
  • Container localhost → Container's own network interface (isolated)
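You can watch this happen from Node itself: `localhost` is resolved locally, so it always points at the loopback of whatever network namespace the process runs in. A small sketch using Node's built-in dns module:

```javascript
import { lookup } from 'node:dns/promises';

// 'localhost' resolves to the loopback of the *current* network namespace:
// run on the host it means the host; run inside a container it means the
// container itself. Forcing IPv4 here to get a deterministic answer.
const { address } = await lookup('localhost', { family: 4 });
console.log(address); // 127.0.0.1
```

The address printed is identical on host and container, which is precisely the trap: the same string names two different interfaces.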


Docker's Default Bridge Network


When you run a container without specifying a network, Docker uses the default bridge network. This creates a virtual network bridge that allows containers to communicate with each other and the outside world through NAT (Network Address Translation).


# Check your container's network configuration
$ docker inspect <container_id> | grep NetworkMode
# Output: "NetworkMode": "bridge"


The bridge network creates this hierarchy:

  • Host machine (your computer)
  • Docker bridge (docker0 interface)
  • Container with its own IP (typically 172.17.0.x)
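From inside a Linux container, the host's side of that bridge is the container's default gateway (172.17.0.1 on the default network). A hedged sketch that discovers it by parsing /proc/net/route; `defaultGateway` is our own helper, Linux-only, and the procfs column layout is an assumption about the kernel's format:

```javascript
import { readFileSync } from 'node:fs';

// Parse /proc/net/route: the default route (destination 00000000) lists the
// gateway as a little-endian hex IPv4 address.
export function defaultGateway() {
  let table;
  try {
    table = readFileSync('/proc/net/route', 'utf8');
  } catch {
    return null; // not Linux, or procfs unavailable
  }
  for (const line of table.trim().split('\n').slice(1)) {
    const [, dest, gw] = line.trim().split(/\s+/);
    if (dest === '00000000') {
      return gw.match(/../g).reverse().map((o) => parseInt(o, 16)).join('.');
    }
  }
  return null;
}

// Inside a default-bridge container this is 172.17.0.1.
console.log(defaultGateway());
```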

Operating System Differences


The networking behavior differs significantly across operating systems:


Linux:

  • Docker runs natively on the Linux kernel
  • Containers share the host kernel directly
  • --network host option available (containers can share host network)


macOS and Windows:

  • Docker runs inside a lightweight VM
  • Additional network translation layer between host and containers
  • Special DNS name host.docker.internal provided to access host
  • --network host doesn't work as expected (it applies inside the Docker Desktop VM, not to your actual host)

Step 3: Implementing the Solution


Here are four proven solutions, from simplest to most robust:


Solution 1: Use host.docker.internal (macOS/Windows)


For macOS and Windows users, Docker Desktop provides a special DNS name that resolves to the host machine:

// index.js - Updated for macOS/Windows
import fetch from 'node-fetch';

// Replace localhost with host.docker.internal
const response = await fetch('http://host.docker.internal:5000/api');
console.log(await response.text());


Run the container normally:

$ docker build -t node-fetch-test .
$ docker run --rm node-fetch-test
# Success! API response received


Note for Linux users: This doesn't work by default on Linux. You need to add it manually:

$ docker run --rm --add-host=host.docker.internal:host-gateway node-fetch-test
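If a single image has to run both on Docker Desktop and on plain Linux (where the name may not resolve without that flag), you can probe for it and fall back to the default bridge gateway. This is a sketch, not a Docker API: `hostAddress` and the 172.17.0.1 fallback are our own assumptions.

```javascript
import { lookup } from 'node:dns/promises';

// Try the Docker Desktop name first; if it doesn't resolve (plain Linux
// run without --add-host), fall back to the default docker0 gateway.
export async function hostAddress() {
  try {
    const { address } = await lookup('host.docker.internal', { family: 4 });
    return address;
  } catch {
    return '172.17.0.1'; // assumption: default bridge gateway
  }
}

console.log(await hostAddress());
```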


Solution 2: Use Docker Compose with Service Names


When running multiple services, Docker Compose creates an internal DNS that allows services to communicate using their service names:

# docker-compose.yml
version: '3.8'

services:
  api:
    image: your-api-image
    ports:
      - "5000:5000"
    networks:
      - app-network

  nodejs-app:
    build: .
    depends_on:
      - api
    networks:
      - app-network
    environment:
      - API_URL=http://api:5000

networks:
  app-network:
    driver: bridge


Update your Node.js code to use the service name:

// index.js - Using Docker Compose service name
import fetch from 'node-fetch';

// Use the service name 'api' instead of localhost; API_URL carries the
// base URL (no path), matching the compose file's API_URL=http://api:5000
const apiUrl = process.env.API_URL || 'http://api:5000';
const response = await fetch(`${apiUrl}/api`);
console.log(await response.text());


Start everything with Docker Compose:

$ docker-compose up
# Both services start and can communicate


Solution 3: Use Host Network Mode (Linux Only)


On Linux systems, you can make the container share the host's network stack:

$ docker run --rm --network host node-fetch-test
# Container can now access localhost:5000 directly


This approach has trade-offs:

  • ✅ Simple and fast (no network translation overhead)
  • ✅ Direct access to all host ports
  • ❌ No network isolation (security concern)
  • ❌ Port conflicts possible
  • ❌ Doesn't work on macOS/Windows
  • ❌ Not compatible with Docker Compose's default networking


Solution 4: Configure a Reverse Proxy


For production environments, use a reverse proxy like Nginx to route traffic:

# docker-compose.yml with Nginx proxy
version: '3.8'

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - app-network

  nodejs-app:
    build: .
    networks:
      - app-network

networks:
  app-network:
    driver: bridge


# nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream api {
        # Point to host machine
        server host.docker.internal:5000;
    }

    server {
        listen 80;
        
        location /api {
            proxy_pass http://api;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}


Step 4: Working Code Example


Here's a complete, production-ready example that works across all platforms:

// config.js - Environment-aware configuration
export const getApiUrl = () => {
    // Check if running in Docker (flag set in the Dockerfile below)
    const isDocker = process.env.DOCKER_ENV === 'true';

    if (!isDocker) {
        // Running directly on the host
        return 'http://localhost:5000';
    }

    if (process.env.API_SERVICE_NAME) {
        // Using Docker Compose with a service name
        return `http://${process.env.API_SERVICE_NAME}:5000`;
    }

    // Careful: process.platform reports the *container's* OS, which is
    // 'linux' in every Linux container -- even on Docker Desktop for
    // macOS/Windows -- so it cannot tell you where the host runs.
    // Prefer an explicit HOST_IP (e.g. 172.17.0.1, the default bridge
    // gateway, on plain Linux), then host.docker.internal, which is built
    // in on Docker Desktop and added on Linux with
    // --add-host=host.docker.internal:host-gateway.
    if (process.env.HOST_IP) {
        return `http://${process.env.HOST_IP}:5000`;
    }
    return 'http://host.docker.internal:5000';
};

// index.js - Main application
import fetch from 'node-fetch';
import { getApiUrl } from './config.js';

const makeApiCall = async () => {
    const apiUrl = getApiUrl();
    console.log(`Connecting to API at: ${apiUrl}`);
    
    try {
        const response = await fetch(`${apiUrl}/api`);
        
        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }
        
        const data = await response.text();
        console.log('API Response:', data);
        return data;
        
    } catch (error) {
        console.error('Connection failed:', error.message);
        
        // Provide helpful debugging information
        if (error.code === 'ECONNREFUSED') {
            console.log('\nTroubleshooting tips:');
            console.log('1. Ensure your API is running on the host machine');
            console.log('2. Check if port 5000 is accessible');
            console.log('3. Verify Docker network configuration');
            console.log('4. Try using host.docker.internal (macOS/Windows)');
        }
        
        throw error;
    }
};

// Run the application
makeApiCall()
    .then(() => console.log('Success!'))
    .catch(() => process.exit(1));


# Dockerfile - Production-ready configuration
FROM node:18-alpine

WORKDIR /app

# Install dependencies first (better caching)
COPY package*.json ./
RUN npm ci --omit=dev

# Copy application files
COPY . .

# Set Docker environment flag
ENV DOCKER_ENV=true

# Health check (assumes your app also serves an HTTP /health endpoint;
# adjust the port and path to whatever it actually listens on)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node -e "require('http').get('http://localhost:3000/health', (r) => {r.statusCode === 200 ? process.exit(0) : process.exit(1)})"

# Run as non-root user
USER node

CMD ["node", "index.js"]


Step 5: Additional Tips & Related Errors


Debugging Network Issues


When troubleshooting, use these commands inside your container:

# Enter the container
$ docker exec -it <container_name> sh

# Test network connectivity
$ ping -c 1 host.docker.internal  # BusyBox provides ping on Alpine, though ICMP may be restricted
$ wget -qO- http://host.docker.internal:5000/api  # Alternative to curl
$ nslookup host.docker.internal  # Check DNS resolution
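Alpine-based images often ship without curl, but any node image carries its own probe: Node 18's built-in global fetch. A small sketch (the default URL is just an example; with built-in fetch, the low-level system error lands on `err.cause` rather than on the error itself as with node-fetch):

```javascript
// probe.mjs — run with: node probe.mjs http://host.docker.internal:5000/api
const url = process.argv[2] ?? 'http://host.docker.internal:5000/api';
try {
  const res = await fetch(url);
  console.log(`reachable, HTTP ${res.status}`);
} catch (err) {
  // ECONNREFUSED: nothing listening; ENOTFOUND: the name didn't resolve
  const code = err.cause?.code ?? err.cause?.errors?.[0]?.code ?? err.message;
  console.log(`unreachable: ${code}`);
}
```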


Check Docker network configuration:

# List all networks
$ docker network ls

# Inspect network details
$ docker network inspect bridge

# See container's network settings
$ docker inspect <container_name> | grep -A 10 NetworkSettings


Common Related Errors


Error: getaddrinfo ENOTFOUND host.docker.internal

  • Occurs on Linux without the --add-host flag
  • Solution: Add --add-host=host.docker.internal:host-gateway


Error: connect ETIMEDOUT

  • Firewall blocking the connection
  • Solution: Check firewall rules, especially on Windows


Error: connect EHOSTUNREACH

  • Network is unreachable from container
  • Solution: Verify Docker daemon is running and network bridge exists


Security Considerations


When exposing host services to containers:

  • Never use --network host in production - It removes network isolation
  • Limit exposed ports - Only expose necessary ports
  • Use secrets management - Don't hardcode API endpoints
  • Implement proper authentication - Containers should authenticate to host services


Performance Optimization


Different solutions have different performance impacts:

  • host.docker.internal: Slight DNS lookup overhead (~1-2ms)
  • Docker Compose services: Minimal overhead, same network
  • --network host: Zero overhead but security risks
  • Reverse proxy: Additional hop but better for scaling


Platform-Specific Commands


For Linux users who need host.docker.internal:

# Get host IP dynamically
$ HOST_IP=$(ip route | grep docker0 | awk '{print $9}')
$ docker run --rm -e HOST_IP=$HOST_IP node-fetch-test


For Windows PowerShell users:

# Windows with Docker Desktop
docker run --rm node-fetch-test
# host.docker.internal works out of the box


The ECONNREFUSED error in Docker containers is fundamentally about network isolation. Once you understand that containers have their own network namespace, the solution becomes clear: you need to bridge the gap between container and host networks using one of the methods above. Choose based on your environment and security requirements, with Docker Compose service names being the most portable solution for multi-container applications.

