
Network Access

As mentioned in the Access Control section of the MongoDB page, one of the ways to provide coarse-grained access control to a MongoDB database is to limit network access to the port and server of the MongoDB host machine.

  • We start by covering binding to particular network interfaces
  • We move on to network architectures that will place MongoDB in a protected zone:
    • Classic firewall
    • Access Mongo through SSH tunnel
    • Mongo via VPN
    • Place Mongo host on a private subnet, accessible via node(s) on a public subnet


Binding to a Network Interface

To set the network interface that MongoDB binds to, set the bindIp option (under the net section) in the MongoDB configuration file.

Local requests only:

This is the special localhost IP address; it tells MongoDB to listen only for local requests:

bindIp: 127.0.0.1

All requests:

This configuration is the opposite and tells MongoDB to listen for requests from any network interface:

bindIp: 0.0.0.0

On a public web server, this will also bind to the public-facing interface, exposing MongoDB to the internet unless the port is blocked by a firewall.

Particular network:

If the MongoDB host is connected to two different networks, NetA and NetB, it will have two different IP addresses.

Suppose the host has the IP 10.0.0.3 on NetA and 192.168.1.6 on NetB.

To tell MongoDB to only listen for requests coming from NetA:

bindIp: 10.0.0.3

To tell MongoDB to only listen for requests coming from NetB:

bindIp: 192.168.1.6

To tell MongoDB to listen for requests from either network:

bindIp: 10.0.0.3,192.168.1.6

or,

bindIp: 0.0.0.0
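
In the YAML configuration file, bindIp lives under the net section. A minimal sketch of what that block might look like (the port shown is the typical default, not a requirement):

net:
  port: 27017
  bindIp: 127.0.0.1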

Network Access with Docker

If you're using the mongo pod at https://git.charlesreid1.com/docker/pod-mongo, the network considerations are slightly different, but the same basic idea applies.

The docker pod controls what network ports and addresses Mongo binds to in the docker-compose.yml file. More precisely, at this line of the docker compose file: https://git.charlesreid1.com/docker/pod-mongo/src/branch/master/docker-compose.fixme.yml#L12

The ports directive of the MongoDB container in the docker compose file dictates the bind address and port mapping. For example, A.B.C.D:XXXX:YYYY would indicate:

  • Bind to the host network interface that has the IP address A.B.C.D and listen on it for incoming requests
  • The host should have port XXXX open.
  • Any incoming traffic to host port XXXX is forwarded to port YYYY inside the MongoDB container

So the actual port configuration, shown below, would correspond to:

  • Binding to the local interface (local requests only)
  • Host port 27017 is open to requests, and forwards those requests to port 27017 in the container

A selection from the docker compose file: https://git.charlesreid1.com/docker/pod-mongo/src/branch/master/docker-compose.fixme.yml#L12

version: "3.1"
services:

  stormy_mongodb:
    build: d-mongodb
    restart: always
    ports:
      - "127.0.0.1:27017:27017"
    volumes:
      - "mongo-data:/data"

  ....

To have MongoDB bind to a different address, like 10.0.0.3, change the line to:

    ports:
      - "10.0.0.3:27017:27017"

Network Architectures to Protect Mongo

Classic Firewall

TODO

A basic example is the AWS EC2 security group model: by default, all inbound ports are blocked by the firewall except the ports you explicitly open.

You also specify the CIDR block from which you will accept traffic (i.e., a narrow range for something like SSH, or anywhere for a public web server).

This is a classic method for protecting the MongoDB port for a public-facing server, or for making the MongoDB service only available to a select set of clients based on their IPs.
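
As a concrete sketch outside of AWS, iptables rules on the MongoDB host can enforce the same policy. The trusted client range 10.0.0.0/24 below is just an example value:

# allow MongoDB connections from the trusted client subnet only
$ sudo iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 27017 -j ACCEPT
# drop MongoDB connections from everywhere else
$ sudo iptables -A INPUT -p tcp --dport 27017 -j DROP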

SSH Tunnel

TODO

If you have SSH access to both the client and the server, you can set up an SSH tunnel between them and forward traffic to a local port, so that the MongoDB client can communicate with the server as though it were running on localhost (with the traffic transparently forwarded through the SSH tunnel).

To do this, you set up an SSH connection from the client to the server, and use the -L flag to tell it to tunnel traffic through a local port. Use the syntax XXXX:localhost:YYYY, where XXXX is the local (MongoDB client) port and YYYY is the remote (MongoDB server) port.

For example, if you want traffic arriving on port XXXX on the client's localhost to be forwarded to the remote MongoDB server, appearing there on port YYYY, you can run the following from the MongoDB client node:

(to run the tunnel in the background without executing a remote command, add the -f and -N flags)

$ ssh -f -N -L XXXX:localhost:YYYY user@ip-of-server

Typically for MongoDB both will be 27017:

$ ssh -f -N -L 27017:localhost:27017 user@ip-of-server
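
Once the tunnel is up, the MongoDB client on the local machine connects to the forwarded local port as if the server were local (mongosh shown here; the legacy mongo shell accepts the same flags):

$ mongosh --host 127.0.0.1 --port 27017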

VPN

A VPN, like the SSH tunnel method, creates an encrypted tunnel between the client and server, but the VPN method uses the encrypted tunnel to create an entire network (potentially connecting many clients) rather than a single SSH connection.

This method works with any VPN software. I like Tinc because it is simple and creates a mesh-based VPN, but OpenVPN works well too. The following assumes the VPN uses an IP schema like 10.0.0.0/24 (meaning 10.0.0.1, 10.0.0.2, etc.).

Once the client and server are both on the virtual private network, they will each have a virtual network interface with an IP address - say 10.0.0.1 (for server) and 10.0.0.2 (for client).

Now you can use the bind-to-IP technique (above) to have MongoDB bind only to the virtual network interface. The client and server then share an encrypted communication channel (the network traffic is encrypted by the VPN software), along with all the security measures that come rolled into VPN software (certificates, trust mechanisms, ciphers, etc.).
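
Continuing the example addresses above, a sketch of the two sides might look like this (the 10.0.0.x addresses are the assumed VPN addresses, not anything MongoDB requires).

On the server, bind only to the VPN interface in the MongoDB config file:

net:
  bindIp: 10.0.0.1

On the client, connect across the VPN:

$ mongosh --host 10.0.0.1 --port 27017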

Private-Public Subnet

This solution is a more sophisticated implementation of the VPN solution above, and is relevant for large-scale deployments, distributed networks, high data ingestion rates, or deployments with sensitive/proprietary data that must be carefully protected.

The public-private subnet is a network architecture that consists of a single network with two subnets, one public and one private. The MongoDB server lives on the private subnet and is accessible only to nodes on the public subnet.

The overarching network uses the CIDR block 10.0.0.0/16. In plain English, that means the first two octets are fixed and the last two octets are free, so addresses look like 10.0.x.y. The network may hold more than two subnets: if it is divided into /24 subnets, x identifies the subnet, allowing up to 256 subnets (256 values of x), each with 254 usable host addresses.

Added to this network are two subnets (that is, networks with different values of x in 10.0.x.y). The two subnets can route traffic to each other, while the private subnet reaches the outside world only through a NAT gateway. For data to reach the private subnet (where the MongoDB server or servers live), it must come from a node on the public subnet.

In this way, assets on the private subnet (the data servers) are protected from breaches of assets on the public subnet (public-facing servers).
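
In AWS terms, a rough sketch of this layout might look like the following (all IDs are placeholders, and the commands omit the route tables, NAT gateway, and other wiring a real deployment needs):

# the overarching network
$ aws ec2 create-vpc --cidr-block 10.0.0.0/16
# public subnet (web servers) and private subnet (MongoDB)
$ aws ec2 create-subnet --vpc-id vpc-PLACEHOLDER --cidr-block 10.0.1.0/24
$ aws ec2 create-subnet --vpc-id vpc-PLACEHOLDER --cidr-block 10.0.2.0/24
# only allow MongoDB traffic that originates from the public subnet
$ aws ec2 authorize-security-group-ingress --group-id sg-PLACEHOLDER --protocol tcp --port 27017 --cidr 10.0.1.0/24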

Why is this safer?

The public-private subnet approach is safer than using a single server both to store data and to run public-facing services, because it isolates (at the network level) public-facing services from private data storage.

If we stored data on the same node that ran a public-facing server, then a security vulnerability in the web server software (e.g., nginx or Apache) could allow an attacker to gain remote access to the server, which in turn could allow them to compromise MongoDB and/or access private data.

