From charlesreid1

This page describes how to run the labs for this nmap short course. Notes for myself, but also for anyone who might be interested.

TODO: GitHub repo link

Overview

The aim is to provide students with a laboratory environment that supplements the nmap lectures and gives them the freedom to explore within structured, well-defined boundaries, so they can focus on learning how to use their tools.

The environment we define will be entirely virtual, so we can define 1 host, or 10, or 50; all of them will run as virtual machines on a single server. We will use infrastructure as code to define the virtual network and the virtual hosts living on it: their software, their configuration, etc. All of this virtual infra will run on a large EC2 instance. (As large as budget allows.)

Students will log into a (virtual) bastion host inside the (virtual) network they will be exploring; all of this is hosted on the main EC2 instance, which must be reachable by students so they can log into the bastion host.

The core idea is to use a single, reasonably powerful EC2 instance as the hypervisor and host for your entire virtual network. Inside this EC2 instance, we'll use a combination of Vagrant (with libvirt/KVM) for managing any full virtual machines (like an attacker machine or specific OS targets) and Docker/Docker Compose for deploying a variety of lightweight target services and "machines" (containers). Ansible will be the glue for configuring everything consistently.

Host EC2 "Big Boy" - Core Lab Infra Host

EC2 Instance Choice:

  • Operating System: Use a Linux distribution that supports KVM, such as Ubuntu Server LTS or Amazon Linux 2. Ubuntu Server often has more readily available documentation for tools like Vagrant with libvirt.
  • Instance Type: You'll need an instance with sufficient vCPUs and RAM to run multiple VMs/containers. Start with something like a t3.xlarge (4 vCPUs, 16 GiB RAM) or m5.large/m5.xlarge (2-4 vCPUs, 8-16 GiB RAM). Monitor resource usage and adjust as needed. Note: standard (non-metal) EC2 instances do not expose hardware virtualization to guests, so KVM acceleration is only available on bare-metal instance types (e.g., m5.metal); on other instance types, QEMU falls back to slower software emulation, which may still be acceptable for lightweight lab VMs.
  • Storage: Allocate sufficient EBS storage (SSD, e.g., gp3 for balanced performance and cost) for the OS, VM images, Docker images, and student data.
  • Security Group:
    • Allow SSH (port 22) from your IP and your students' IPs (or a bastion/VPN exit IP).
    • If using a VPN hosted on this EC2, allow the VPN port (e.g., UDP 1194 for OpenVPN).
    • Other ports should generally not be exposed directly to the internet; students will access target services from within the lab environment.

Virtualization Software Setup:

  • KVM/QEMU & Libvirt: Install these on your EC2 host instance to enable running full virtual machines.
sudo apt update
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager
sudo systemctl enable --now libvirtd
sudo usermod -aG libvirt $(whoami) # Add your user to the libvirt group
  • Vagrant: Install Vagrant. This will orchestrate your VMs.
    • Install the vagrant-libvirt plugin: vagrant plugin install vagrant-libvirt
  • Docker & Docker Compose: Install Docker Engine and Docker Compose. This will run your containerized targets.
sudo apt install docker.io
sudo systemctl enable --now docker
sudo usermod -aG docker $(whoami) # Add your user to the docker group
# Install Docker Compose. The standalone v1 binary (below) is end-of-life; on newer
# Ubuntu releases, prefer v2: sudo apt install docker-compose-v2 (invoked as `docker compose`)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

(Note: You'll need to log out and log back in for group changes to take effect, or use newgrp libvirt and newgrp docker in your current session.)
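After installing and re-logging-in, it's worth sanity-checking the virtualization stack before building anything on top of it. A sketch (the function name is mine; `kvm-ok` comes from the `cpu-checker` package):

```shell
# Sanity-check the virtualization stack after install (sketch).
check_lab_host() {
    # count CPU virtualization flags; on non-metal EC2 instances this may be 0
    grep -cE 'vmx|svm' /proc/cpuinfo
    # kvm-ok is in the cpu-checker package: sudo apt install cpu-checker
    sudo kvm-ok || echo "KVM acceleration unavailable; QEMU will use software emulation"
    # confirm services are running and group membership took effect
    systemctl is-active libvirtd docker
    id -nG | tr ' ' '\n' | grep -E '^(libvirt|docker)$'
}
```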

Big Boy Cloudformation Template

Infra requirements:

  • Instance - large/xlarge, latest generation
  • AMI - Ubuntu Server
  • Storage - EBS, 80-100 GB
  • Networking - VPC subnet, public IP for student access; consider an Elastic IP (static) if starting/stopping often, or a cheap domain name
  • Security Groups - inbound 22 must be allowed from instructor/student IP addresses, outbound all must be allowed
  • Internet - need a connection to an internet gateway for connectivity
  • IAM Role - probably not needed now, but if the setup gets fancier (artifacts stored in S3 as part of virtual lab setup), this makes life easier
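The infra requirements above could be sketched in CloudFormation roughly as follows. This is a hedged fragment, not a complete template: the parameter names, resource names, and sizes are placeholders.

```yaml
# Sketch only: lab host instance + security group (names/sizes are placeholders).
Parameters:
  UbuntuAmiId:
    Type: AWS::EC2::Image::Id   # supply a current Ubuntu Server LTS AMI
  AllowedSshCidr:
    Type: String                # instructor/student IP range

Resources:
  LabSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: nmap lab host - SSH in, all out
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: !Ref AllowedSshCidr

  LabHost:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: m5.xlarge
      ImageId: !Ref UbuntuAmiId
      SecurityGroupIds:
        - !GetAtt LabSecurityGroup.GroupId
      BlockDeviceMappings:
        - DeviceName: /dev/sda1
          Ebs:
            VolumeSize: 100     # GB
            VolumeType: gp3
```

Outbound-all is the security group default, so no egress rules are needed.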

Host requirements:

  • standard OS user
  • git, curl, net-tools, vim
  • virtualization core
  • docker engine, docker compose
  • vagrant, vagrant-libvirt, ansible
  • user groups - libvirt, docker groups
  • services - libvirt, docker services
  • ip forwarding - must be enabled if internal containers or VMs need to reach the internet; without it, outbound requests from the internal docker/libvirt networks will not be routed
  • additional firewalls - ufw unnecessary, rely on AWS Security Groups
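The IP-forwarding requirement above amounts to a single sysctl. A sketch (the drop-in file name is arbitrary):

```shell
# Enable and persist IP forwarding so internal lab networks can reach the internet (sketch).
enable_ip_forwarding() {
    echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-nmap-lab.conf
    sudo sysctl --system          # reload all sysctl config files
    sysctl net.ipv4.ip_forward    # verify: should print 'net.ipv4.ip_forward = 1'
}
```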

Infrastructure as Code (IaC) Setup

Vagrant (for VMs and Network Orchestration)

Use a Vagrantfile to define your base "attacker" machine (e.g., a lightweight Linux VM with nmap and other tools pre-installed) and any specific target VMs that require a full OS (e.g., a Windows Server trial, or an older Linux distribution).

Vagrant can define private networks that your VMs and Docker containers will share. For example:

# Vagrantfile (example snippet)
Vagrant.configure("2") do |config|
  config.vm.define "attacker" do |attacker|
    attacker.vm.box = "generic/ubuntu2004" # Or your preferred minimal box
    attacker.vm.network "private_network", ip: "192.168.50.10"
    attacker.vm.provision "ansible" do |ansible|
      ansible.playbook = "playbooks/attacker_setup.yml"
    end
  end

  config.vm.define "target-linux-vm" do |targetvm|
    targetvm.vm.box = "generic/ubuntu1804"
    targetvm.vm.network "private_network", ip: "192.168.50.20"
    targetvm.vm.provision "ansible" do |ansible|
      ansible.playbook = "playbooks/target_linux_vm_setup.yml"
    end
  end
end

Docker Compose (for Containerized Services/Targets)

Use docker-compose.yml files to define various target "machines" (containers). Each container can run a specific service (web server, FTP, database, custom vulnerable app) on a defined IP address within the Docker network.

You can create custom Docker networks that can be linked to or exist on the same subnet as the Vagrant private network, or be routed.

# docker-compose.yml (example snippet)
version: '3.8'
services:
  webserver_vulnerable:
    image: vulnerables/web-dvwa # Example vulnerable web app
    container_name: target_web_dvwa
    networks:
      lab_network:
        ipv4_address: 192.168.50.100 # Assign static IP within the lab network
    # ports: # Only expose internally or not at all for nmap scanning
    #   - "80:80" # Avoid exposing to EC2 host's public IP unless intended
    restart: unless-stopped

  ftp_server:
    image: fdelbia/vsftpd # Example FTP server
    container_name: target_ftp
    environment:
      FTP_USER: ftpuser
      FTP_PASS: ftppass
    networks:
      lab_network:
        ipv4_address: 192.168.50.101
    # By default, vsftpd in this image might expose 20, 21. Control with Ansible/container config.
    restart: unless-stopped

networks:
  lab_network:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.50.0/24
          gateway: 192.168.50.1

Ansible (for Configuration Management)

Use Ansible playbooks to:

  • Configure guest VMs - Install software (nmap on attacker, specific services on targets), manage users, set up firewall rules (ufw, firewalld, iptables) to open/close specific ports, deploy application code or vulnerable configurations.
  • Build custom docker images (optional) - While you can use public images, Ansible can help script the creation of custom Docker images with very specific configurations.
  • Configure running containers (less common but possible): Execute commands or copy files into running containers if needed, though baking this into the image is often better.

Example Ansible task to open a port using ufw (in playbooks/target_linux_vm_setup.yml):

# playbooks/target_linux_vm_setup.yml (example snippet)
---
- hosts: all
  become: yes
  tasks:
    - name: Install net-tools (for ifconfig, etc.)
      apt:
        name: net-tools
        state: present

    - name: Ensure ufw is enabled with a default deny policy
      community.general.ufw:
        state: enabled
        policy: deny # Deny by default

    - name: Allow SSH (port 22)
      community.general.ufw:
        rule: allow
        port: '22'
        proto: tcp

    - name: Allow HTTP (port 80) for a web server target
      community.general.ufw:
        rule: allow
        port: '80'
        proto: tcp
      when: "'web_target' in group_names" # Example conditional based on inventory group

    - name: Open custom port 1234/udp
      community.general.ufw:
        rule: allow
        port: '1234'
        proto: udp
      when: "'custom_target' in group_names"

Virtual Network Design

Primary Lab Network:

  • Create one or more private virtual networks (e.g., 192.168.50.0/24, 10.0.10.0/24) that your Vagrant VMs and Docker containers will reside on.
  • Vagrant's libvirt provider can create these networks.
  • Docker Compose can define its own bridge networks. You can configure routing on the EC2 host or within an "attacker" VM to allow communication between these if they are separate. For simplicity, try to have them on the same logical network or ensure the attacker VM/container has routes to all target networks.
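If the Vagrant private network and the Docker bridge do end up on separate subnets, a static route on the attacker VM via the EC2 host is usually enough to make the targets reachable. A sketch with made-up addresses (both the Docker subnet and the gateway IP here are hypothetical):

```shell
# On the attacker VM: route the Docker subnet via the hypervisor (sketch; addresses hypothetical).
add_docker_route() {
    # 10.0.10.0/24 = hypothetical Docker lab subnet
    # 192.168.50.1 = libvirt network gateway address on the EC2 host
    sudo ip route add 10.0.10.0/24 via 192.168.50.1
    ip route show 10.0.10.0/24    # verify the route was added
}
```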

Host Configuration:

  • IP Addresses: Assign static IPs to your targets (VMs via Vagrantfile, containers via Docker Compose ipv4_address) so students have predictable scan targets.
  • Open Ports: Use Ansible to meticulously control which ports are open on each target VM or container, simulating different service configurations and firewall rules. This is key for nmap exercises.
  • Services: Deploy a variety of services: web servers (Apache, Nginx with different versions/configs), FTP servers, SSH servers (with different versions or password/key auth), databases (MySQL, PostgreSQL), Telnet, custom TCP/UDP services (e.g., using netcat or simple Python scripts).
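A custom TCP service of the kind mentioned above can be a few lines of Python. This is a sketch: the port, banner text, and echo behavior are arbitrary choices, but sending a banner on connect gives nmap's version detection (-sV) something to grab:

```python
#!/usr/bin/env python3
"""Minimal custom TCP target service (sketch; port and banner are arbitrary)."""
import socketserver

BANNER = b"LABSVC 0.1 ready\n"  # made-up banner for nmap -sV to fingerprint

class LabHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # send the banner on connect, then echo one line back and close
        self.wfile.write(BANNER)
        line = self.rfile.readline(1024)
        if line:
            self.wfile.write(b"echo: " + line)

if __name__ == "__main__":
    # bind all interfaces so the service is visible on the container/VM lab IP
    with socketserver.ThreadingTCPServer(("0.0.0.0", 10000), LabHandler) as srv:
        srv.serve_forever()
```

Dropping a script like this into a target VM (via Ansible) or a minimal container gives students an "unknown service" to investigate.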

Student Access to the Cloud Lab

You need a way for students to access the lab environment running on the EC2 instance to perform their nmap scans.

There are a few options:

  • SSH to a Dedicated "Attacker" VM/Container (Recommended for Simplicity & Control):
    • How it works: You provision an "attacker" VM (using Vagrant) or a Docker container within the lab network. This attacker machine has nmap and other necessary tools pre-installed. Students SSH into this attacker machine from their own computers. From there, they can scan the target VMs and containers on the internal lab network.

  • VPN (Virtual Private Network)
    • How it works: Set up a VPN server (e.g., OpenVPN or WireGuard) on the EC2 instance. Students connect to this VPN from their local machines. Once connected, their machine effectively becomes part of the EC2 instance's private network, and they can run nmap directly from their own laptops against the lab targets.

  • Bastion Host (EC2 acts as Bastion):
    • How it works: Students SSH into the EC2 host itself (to a non-privileged user account). From this shell on the EC2 host, they can use nmap (installed on the EC2 host) to scan the IP addresses of the VMs/containers on the internal Docker/libvirt networks.

Recommendation: For a short summer course, SSH access to a pre-configured attacker VM or Docker container per student (or a shared one if class size is small and exercises are managed) offers a good balance of ease of use, control, and realism.
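For the recommended option, per-student access can be as simple as one account per student on the attacker machine, keyed to an SSH public key collected before class. A sketch (the function name and layout are mine):

```shell
# On the attacker VM: create a student account with an SSH key (sketch).
add_student() {
    local user="$1" pubkey="$2"
    sudo useradd -m -s /bin/bash "$user"
    sudo install -d -m 700 -o "$user" -g "$user" "/home/$user/.ssh"
    echo "$pubkey" | sudo tee "/home/$user/.ssh/authorized_keys" > /dev/null
    sudo chown "$user:$user" "/home/$user/.ssh/authorized_keys"
    sudo chmod 600 "/home/$user/.ssh/authorized_keys"
}
```

Run once per student (e.g., add_student alice "ssh-ed25519 AAAA..."); collecting keys up front avoids password management entirely.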

Lab Customization and Deployment Workflow

IaC is Key: Your Vagrantfiles, Docker Compose files, and Ansible playbooks are your lab definition.

Version Control:

  • Store all these IaC files in a Git repository. This allows you to track changes, revert to previous lab states, and create branches for different lab scenarios.
  • To change open ports: Modify Ansible playbooks (firewall rules) or Dockerfile/container startup commands.
  • To change services: Modify Ansible playbooks (install/configure different software) or use different Docker images in docker-compose.yml.
  • To change network topology: Modify Vagrantfile network settings or docker-compose.yml network definitions.

Deployment:

  • Launch and configure the base EC2 instance (can also be automated with Terraform or AWS CloudFormation).
  • Clone your Git repository onto the EC2 instance.
  • Run vagrant up to provision VMs defined in your Vagrantfile. This will also trigger Ansible provisioning.
  • Run docker-compose up -d to deploy your containerized targets.

Tear Down:

  • docker-compose down -v and vagrant destroy -f.
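The deployment and teardown steps can be bundled into a pair of helpers kept alongside the IaC files. A sketch (the repo path is a placeholder):

```shell
# Bring the whole lab up / tear it down (sketch; ~/nmap-lab is a placeholder path).
lab_up() {
    cd ~/nmap-lab || return 1
    git pull
    vagrant up --provider=libvirt   # VMs, plus Ansible provisioning they trigger
    docker-compose up -d            # containerized targets
}

lab_down() {
    cd ~/nmap-lab || return 1
    docker-compose down -v
    vagrant destroy -f
}
```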

Example of Defining a Simple Target

docker-compose.yml:

services:
  simple_web:
    image: httpd:2.4-alpine # Basic Apache server
    container_name: target_simple_web
    networks:
      lab_network:
        ipv4_address: 192.168.50.110
    # No 'ports' exposing to host, nmap will find it on its internal IP

Students, from their attacker VM (e.g., 192.168.50.10), would then run: nmap 192.168.50.110

Cost Considerations

The EC2 instance will be your main cost.

  • Stop when not in use: Stop the EC2 instance when the lab is not actively being used (e.g., evenings, weekends). You only pay for storage then, not compute. Automate this with scripts or AWS Lambda.
  • Choose the right size: Don't overprovision. Start smaller and monitor CPU/RAM.
  • Spot Instances (with caution): For non-critical development or very short lab sessions where interruption is acceptable, Spot Instances can save up to 90%. However, they can be terminated with little notice, so ensure your IaC setup allows for quick reprovisioning. Probably not ideal for scheduled class time.
  • Reserved Instances/Savings Plans: If the course runs frequently or for an extended period, these can offer significant discounts over On-Demand pricing.
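The stop-when-idle advice is easy to script from an instructor machine with the AWS CLI. A sketch (the instance ID is a placeholder; requires configured AWS credentials):

```shell
# Stop/start the lab host outside class hours (sketch).
LAB_INSTANCE_ID="i-0123456789abcdef0"   # placeholder

lab_host_stop() {
    aws ec2 stop-instances --instance-ids "$LAB_INSTANCE_ID"
}

lab_host_start() {
    aws ec2 start-instances --instance-ids "$LAB_INSTANCE_ID"
    # note: without an Elastic IP, the public IP changes on each start
}
```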

EBS Storage: Pay for what you provision. Keep VM disk images and Docker images lean if possible.

Data Transfer: Data transfer out of AWS can incur costs. For this lab model, most traffic is internal to the EC2 or between students and the EC2 for SSH/VPN, so outbound data transfer should be manageable.

Example Conceptual Lab Scenario

Imagine a lab network 192.168.50.0/24:

  • 192.168.50.10: Attacker VM (Ubuntu with nmap, students SSH here).
  • 192.168.50.20: Target Linux VM (Vagrant managed, running an outdated SSH on port 22, and a custom Python service on TCP 10000, firewall configured by Ansible).
  • 192.168.50.100: Docker Container - "Prod Web Server" (Nginx, port 80, 443 open).
  • 192.168.50.101: Docker Container - "Dev Web Server" (Apache, port 8080 open, directory listing enabled).
  • 192.168.50.102: Docker Container - "FTP Server" (vsftpd, port 21 open, anonymous login enabled).
  • 192.168.50.103: Docker Container - "Hidden Service" (netcat listener on UDP 55555).
  • 192.168.50.104: Docker Container - "Windows-like SMB" (Samba configured to mimic a Windows share, ports 139, 445).

Students would then use nmap from 192.168.50.10 to discover these hosts, identify open ports, enumerate services, and identify OS versions, practicing various scan techniques.
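The "Hidden Service" in the scenario above is just a UDP listener; the netcat version could equally be a short Python script. A sketch (the port and reply prefix are arbitrary):

```python
#!/usr/bin/env python3
"""Minimal UDP 'hidden service' target (sketch; port and reply prefix are arbitrary)."""
import socket

def serve(host="0.0.0.0", port=55555):
    # UDP has no handshake: nmap -sU reports a port as 'open' (rather than
    # 'open|filtered') only if something answers, so reply to every datagram
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, addr = sock.recvfrom(1024)
        sock.sendto(b"hidden-svc: " + data, addr)

if __name__ == "__main__":
    serve()
```

Replying to probes makes the exercise of distinguishing open from open|filtered UDP ports much more concrete for students.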

This approach provides a powerful and adaptable environment. You can create different branches in your Git repository for different lab modules, each with unique network configurations, target vulnerabilities, and learning objectives.

Mermaid Architecture Diagram

C4Container
    title Container Diagram for Nmap Virtual Lab (C4 Level 2)

    Person(student, "Student", "Cybersecurity Learner")

    System_Boundary(lab_system, "Nmap Virtual Lab System") {
        System_Boundary(ec2_host, "EC2 Instance (Lab Host)") {
            Container(ssh_gateway, "SSH Gateway", "OpenSSHd", "Student SSH Access")
            Container(attacker_workstation, "Attacker Workstation", "VM/Docker", "nmap Env & Lab Access")

            Container(vm_orchestrator, "VM Orchestrator", "Vagrant/KVM", "Manages Target VMs")
            Container(container_orchestrator, "Container Orchestrator", "Docker Compose", "Manages Target Containers")

            System_Boundary(virtual_lab_network, "Virtual Lab Network", "Isolated Network") {
                Container(target_vm_1, "Target VM 1", "Linux/Win VM", "Target Services (e.g., SSH)")
                Container(target_container_1, "Web Target 1", "Docker", "Web Server Target")
                Container(target_container_2, "FTP Target 1", "Docker", "FTP Server Target")
                Container(target_vm_n, "Target VM 'n'", "VM", "More VM Targets")
                Container(target_container_n, "Target Cont. 'n'", "Docker", "More Container Targets")
            }
        }
        Component(ansible_scripts, "Ansible Playbooks", "IaC Scripts", "System Config (services, firewalls)")
    }

    %% Relationships
    Rel(student, ssh_gateway, "Connects via", "SSH")
    Rel(ssh_gateway, attacker_workstation, "Provides Shell Access")

    Rel(attacker_workstation, target_vm_1, "Scans", "nmap")
    Rel(attacker_workstation, target_container_1, "Scans", "nmap")
    Rel(attacker_workstation, target_container_2, "Scans", "nmap")
    Rel(attacker_workstation, target_vm_n, "Scans", "nmap")
    Rel(attacker_workstation, target_container_n, "Scans", "nmap")

    Rel(vm_orchestrator, target_vm_1, "Deploys/Manages")
    Rel(vm_orchestrator, target_vm_n, "Deploys/Manages")
    Rel(container_orchestrator, target_container_1, "Deploys/Manages")
    Rel(container_orchestrator, target_container_2, "Deploys/Manages")
    Rel(container_orchestrator, target_container_n, "Deploys/Manages")

    Rel(ansible_scripts, vm_orchestrator, "Provides Config To")
    Rel(ansible_scripts, container_orchestrator, "Provides Config To")
    Rel(ansible_scripts, attacker_workstation, "Configures")
    Rel(ansible_scripts, target_vm_1, "Configures")
    Rel(ansible_scripts, target_vm_n, "Configures")
    Rel(ansible_scripts, target_container_1, "Configures")
    Rel(ansible_scripts, target_container_2, "Configures")
    Rel(ansible_scripts, target_container_n, "Configures")
