Proxmox VE Guide

Proxmox Virtual Environment (Proxmox VE) is a popular open-source virtualization platform, built on KVM/QEMU, for running CICADA IR in production. This guide covers importing the VM, networking, storage configuration, GPU passthrough for Ollama, backups, and troubleshooting.

Prerequisites

  • Proxmox VE 8.0 or later
  • A host with at least 8 GB free RAM and 40 GB free storage
  • The CICADA IR QCOW2 image (cicada-ir-<version>.qcow2)
  • SSH access to the Proxmox host

Import the CICADA IR VM

Method 1: Web UI + CLI (recommended)

First, upload the QCOW2 image to your Proxmox host:

# From your workstation
scp cicada-ir-latest.qcow2 root@proxmox-host:/var/lib/vz/template/

Create a new VM in the Proxmox web UI:

  1. Click Create VM in the top-right corner
  2. General tab:
    • Node: Select your Proxmox node
    • VM ID: Choose an ID (e.g., 200)
    • Name: cicada-ir
  3. OS tab:
    • Select Do not use any media
    • Type: Linux
    • Version: 6.x - 2.6 Kernel
  4. System tab:
    • Machine: q35
    • BIOS: OVMF (UEFI)
    • EFI Storage: Select your storage (e.g., local-lvm)
    • SCSI Controller: VirtIO SCSI single
  5. Disks tab:
    • Delete the default disk (we will import the QCOW2 separately)
  6. CPU tab:
    • Cores: 4 (minimum 2)
    • Type: host (best performance)
  7. Memory tab:
    • Memory: 8192 MB (minimum 4096 MB)
  8. Network tab:
    • Bridge: vmbr0 (or your management bridge)
    • Model: VirtIO (paravirtualized)
  9. Click Finish (do not start the VM yet)

Now import the disk via the Proxmox host shell:

# Import the QCOW2 as the VM's primary disk (replace 200 with your VM ID)
qm importdisk 200 /var/lib/vz/template/cicada-ir-latest.qcow2 local-lvm

# Attach the imported disk to the VM (use the disk name printed by importdisk;
# with the EFI disk created in the web UI this is typically vm-200-disk-1)
qm set 200 --scsi0 local-lvm:vm-200-disk-1

# Set the boot order
qm set 200 --boot order=scsi0

Start the VM from the web UI or CLI:

qm start 200
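Once the VM is started, automation often needs to know when it has fully booted. A minimal sketch, assuming VM ID 200 and that the appliance image ships qemu-guest-agent (the function name wait_for_agent is hypothetical):

```shell
# Poll the QEMU guest agent until it responds, or give up after N tries.
wait_for_agent() {
  local vmid="$1" tries="${2:-30}"
  while [ "$tries" -gt 0 ]; do
    if qm guest cmd "$vmid" ping >/dev/null 2>&1; then
      echo "guest agent is up"
      return 0
    fi
    tries=$((tries - 1))
    sleep 2
  done
  echo "guest agent did not respond" >&2
  return 1
}

# Example: wait, then ask the agent for the VM's IP addresses
# wait_for_agent 200 && qm guest cmd 200 network-get-interfaces
```

`qm guest cmd <vmid> ping` returns successfully only once the agent inside the guest is running, which also confirms the VM finished booting.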

Method 2: Full CLI import

# Create the VM
qm create 200 \
  --name cicada-ir \
  --memory 8192 \
  --cores 4 \
  --cpu cputype=host \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-single \
  --bios ovmf \
  --machine q35 \
  --efidisk0 local-lvm:1,format=raw,efitype=4m,pre-enrolled-keys=0

# Import the disk image
qm importdisk 200 /var/lib/vz/template/cicada-ir-latest.qcow2 local-lvm

# Attach and configure boot (check the importdisk output for the exact disk
# name; with efidisk0 as disk-0, the imported disk is typically vm-200-disk-1)
qm set 200 --scsi0 local-lvm:vm-200-disk-1
qm set 200 --boot order=scsi0

# Enable the QEMU guest agent
qm set 200 --agent enabled=1

# Start the VM
qm start 200
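The Method 2 commands above can be wrapped in a small script so repeat imports are consistent. This is a hypothetical helper, not part of CICADA IR: the VMID, IMAGE path, and STORAGE name are assumptions to adjust. It previews the commands by default; run it with DRY_RUN=0 on the Proxmox host to execute them.

```shell
#!/usr/bin/env bash
set -u

VMID="${VMID:-200}"
IMAGE="${IMAGE:-/var/lib/vz/template/cicada-ir-latest.qcow2}"
STORAGE="${STORAGE:-local-lvm}"

# Print the command in preview mode (default), otherwise execute it.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run qm create "$VMID" --name cicada-ir --memory 8192 --cores 4 \
  --cpu cputype=host --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-single --bios ovmf --machine q35 \
  --efidisk0 "$STORAGE:1,format=raw,efitype=4m,pre-enrolled-keys=0"
run qm importdisk "$VMID" "$IMAGE" "$STORAGE"
run qm set "$VMID" --scsi0 "$STORAGE:vm-$VMID-disk-1"
run qm set "$VMID" --boot order=scsi0
run qm set "$VMID" --agent enabled=1
run qm start "$VMID"
```

Review the printed plan first, then re-run with DRY_RUN=0 once the disk name in the --scsi0 line matches what importdisk reported.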

Networking

Default bridge (vmbr0)

By default, Proxmox VMs on vmbr0 are bridged directly to the physical network, so the VM obtains an address from your network's DHCP server.

Static IP (recommended for production)

After the VM boots, log in via the Proxmox console and configure a static IP:

# Inside the VM, edit the netplan configuration
sudo nano /etc/netplan/01-netcfg.yaml

Set a static address (adjust the values for your network):

network:
  version: 2
  ethernets:
    ens18:
      addresses:
        - 192.168.1.100/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses:
          - 8.8.8.8
          - 1.1.1.1

Apply the configuration:

sudo netplan apply
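If you deploy several appliances, generating the netplan stanza from parameters avoids copy-paste drift. A sketch, with a hypothetical helper name (netplan_static) and the same DNS servers as above; the flow-style YAML it emits is equivalent to the block-style example:

```shell
# Emit a netplan config for one interface with a static address.
netplan_static() {
  # $1 = interface name, $2 = CIDR address, $3 = default gateway
  printf 'network:\n  version: 2\n  ethernets:\n    %s:\n      addresses: [%s]\n      routes: [{to: default, via: %s}]\n      nameservers: {addresses: [8.8.8.8, 1.1.1.1]}\n' \
    "$1" "$2" "$3"
}

# Preview the generated YAML:
netplan_static ens18 192.168.1.100/24 192.168.1.1

# Usage inside the VM:
#   netplan_static ens18 192.168.1.100/24 192.168.1.1 | sudo tee /etc/netplan/01-netcfg.yaml
#   sudo netplan apply
```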

VLAN tagging

To place the CICADA IR VM on a specific VLAN:

# Set VLAN tag on the VM's network interface (e.g., VLAN 50)
qm set 200 --net0 virtio,bridge=vmbr0,tag=50

Or in the web UI: VM > Hardware > Network Device > Edit > set VLAN Tag.
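If the tag appears to be ignored, the bridge may not be VLAN-aware. A sketch of the relevant stanza in /etc/network/interfaces on the Proxmox host, assuming eno1 is the uplink (adjust the port and VLAN range for your network):

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Apply the change with ifreload -a (or reboot the host) before retesting the tagged interface.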

Dedicated management bridge

For isolated management networks, create a dedicated bridge in Proxmox:

# Edit /etc/network/interfaces on the Proxmox host
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0

# Then assign the VM to vmbr1
qm set 200 --net0 virtio,bridge=vmbr1

Storage configuration

Expand the disk

If you need more space for investigation data or Ollama models:

# Resize the disk (e.g., add 40 GB)
qm resize 200 scsi0 +40G

Inside the VM, extend the filesystem:

# Check current partition layout
lsblk

# Extend the partition (growpart ships in the cloud-guest-utils package;
# assuming /dev/sda2 is the root partition)
sudo growpart /dev/sda 2

# Resize the filesystem
sudo resize2fs /dev/sda2

# Verify
df -h

Add a dedicated data disk

For separating OS and investigation data:

# Add a second disk (50 GB) via CLI
qm set 200 --scsi1 local-lvm:50

# Inside the VM, format and mount
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /opt/cicada/data
sudo mount /dev/sdb /opt/cicada/data

# Add to fstab for persistence
echo '/dev/sdb /opt/cicada/data ext4 defaults 0 2' | sudo tee -a /etc/fstab
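Device names like /dev/sdb can change after adding or removing disks, which would break the fstab entry above at boot. A sketch of a more robust variant that mounts by filesystem UUID (fstab_line is a hypothetical helper; nofail keeps the boot from hanging if the disk is absent):

```shell
# Build an fstab entry that references the filesystem by UUID.
fstab_line() {
  # $1 = filesystem UUID, $2 = mount point
  echo "UUID=$1 $2 ext4 defaults,nofail 0 2"
}

# Preview with a placeholder UUID:
fstab_line "1b2c3d4e-0000-0000-0000-000000000000" /opt/cicada/data

# Usage inside the VM:
#   fstab_line "$(sudo blkid -s UUID -o value /dev/sdb)" /opt/cicada/data | sudo tee -a /etc/fstab
#   sudo mount -a   # fails now, rather than at next boot, if the entry is wrong
```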

GPU passthrough for Ollama

Passing an NVIDIA GPU through to the CICADA IR VM dramatically improves Ollama inference speed. This section covers PCIe passthrough on Proxmox.

Step 1: Enable IOMMU

Edit the GRUB configuration on the Proxmox host:

# For Intel CPUs
sudo nano /etc/default/grub
# Change the GRUB_CMDLINE_LINUX_DEFAULT line to:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# For AMD CPUs
# GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# Update GRUB
sudo update-grub
sudo reboot

Step 2: Load VFIO modules

# Add VFIO modules (on the 6.x kernels shipped with Proxmox VE 8,
# vfio_virqfd is built into vfio and no longer needs to be listed)
echo -e "vfio\nvfio_iommu_type1\nvfio_pci" | sudo tee /etc/modules-load.d/vfio.conf

# Blacklist the host GPU driver so Proxmox doesn't claim it
echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
echo "blacklist nvidia" | sudo tee -a /etc/modprobe.d/blacklist-nouveau.conf

sudo update-initramfs -u
sudo reboot

Step 3: Identify the GPU

# Find your GPU's PCI address
lspci -nn | grep -i nvidia

# Example output:
# 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation ... [10de:2684]
# 01:00.1 Audio device [0403]: NVIDIA Corporation ... [10de:22ba]
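For clean passthrough, the GPU functions (01:00.0 and 01:00.1 in the example output) should sit in an IOMMU group with nothing else in it. A sketch that prints each group and its devices (list_iommu_groups is a hypothetical helper name):

```shell
# Walk /sys/kernel/iommu_groups and describe each device with lspci.
list_iommu_groups() {
  local d found=0
  for d in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$d" ] || continue
    found=1
    echo "IOMMU group $(basename "$(dirname "$(dirname "$d")")"): $(lspci -nns "${d##*/}")"
  done
  [ "$found" -eq 1 ] || echo "no IOMMU groups found (is IOMMU enabled?)"
}

list_iommu_groups
```

If other devices share the GPU's group, you may need to move the card to a different PCIe slot or investigate ACS override options.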

Step 4: Assign GPU to the VM

# Add the GPU PCI device (replace with your PCI address)
qm set 200 --hostpci0 01:00,pcie=1

# Increase VM memory if running large models
qm set 200 --memory 16384

Step 5: Install NVIDIA drivers inside the VM

# Inside the CICADA IR VM
sudo apt update
sudo apt install -y nvidia-driver-550

# Reboot the VM
sudo reboot

# Verify GPU is detected
nvidia-smi

# Ollama will automatically use the GPU on next inference
ollama run llama3.1:8b "test"
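To confirm inference is actually hitting the GPU rather than falling back to CPU, a quick sketch (gpu_loaded is a hypothetical helper; ollama ps lists loaded models with a PROCESSOR column such as "100% GPU"):

```shell
# Check whether any loaded Ollama model reports a GPU processor.
gpu_loaded() { ollama ps 2>/dev/null | grep -q "GPU"; }

if gpu_loaded; then
  echo "model is running on the GPU"
else
  echo "model is on CPU (or no model is loaded); check nvidia-smi and the ollama logs"
fi
```

Watching nvidia-smi in a second terminal while a prompt runs gives the same confirmation via GPU memory and utilization.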

Backups

Manual backup

# Create a backup (stored in /var/lib/vz/dump/ by default)
vzdump 200 --compress zstd --mode snapshot

# List backups
ls -lh /var/lib/vz/dump/vzdump-qemu-200-*

Scheduled backups

In the Proxmox web UI:

  1. Navigate to Datacenter > Backup
  2. Click Add
  3. Configure the schedule:
    • Node: Your node
    • Storage: Where to store backups
    • Schedule: e.g., daily at 02:00
    • Selection mode: Include > select cicada-ir
    • Compression: ZSTD
    • Mode: Snapshot
    • Retention: Keep last 7
  4. Click Create

Restore from backup

# Restore to the same or new VM ID
qmrestore /var/lib/vz/dump/vzdump-qemu-200-2026_04_14-02_00_00.vma.zst 200

# Or restore as a new VM
qmrestore /var/lib/vz/dump/vzdump-qemu-200-2026_04_14-02_00_00.vma.zst 201

High availability

If you run a Proxmox cluster (3+ nodes), you can enable HA for the CICADA IR VM so it automatically restarts on another node if the host fails:

# Add the VM to the HA group
ha-manager add vm:200 --state started --group production-nodes

Or in the web UI: Datacenter > HA > Add > select cicada-ir.


Proxmox troubleshooting

VM won't boot after import

Symptom | Cause | Fix
UEFI shell instead of boot | Boot order not set or disk not attached | Run qm set 200 --boot order=scsi0 and verify the disk is attached under Hardware
"No bootable device" | Disk imported but not attached to a bus | Check VM > Hardware for an unused disk and attach it to SCSI
Kernel panic on boot | Wrong BIOS type (SeaBIOS vs OVMF) | Recreate the VM with OVMF (UEFI) BIOS and the q35 machine type

No network connectivity

  1. Verify the network device model is VirtIO (not E1000):
    qm config 200 | grep net
  2. Check the bridge exists and is up:
    ip link show vmbr0
  3. Inside the VM, check the interface name (Proxmox VirtIO typically shows as ens18):
    ip link show
    cat /etc/netplan/*.yaml
  4. If the interface name changed, update the netplan config to match the actual interface name

Disk import shows as "Unused Disk"

This means the disk was imported but not attached to a bus device:

# Find the unused disk
qm config 200 | grep unused

# Attach it (e.g., unused0 maps to the imported disk)
qm set 200 --scsi0 local-lvm:vm-200-disk-1
qm set 200 --boot order=scsi0

GPU passthrough not working

  • Verify IOMMU is enabled:
    dmesg | grep -e DMAR -e IOMMU
  • Verify the GPU is in its own IOMMU group:
    find /sys/kernel/iommu_groups/ -type l | sort -V
  • Ensure the GPU is not being used by the Proxmox host (check lspci -k for driver binding)
  • Some consumer NVIDIA GPUs fail with a "Code 43" style driver error in the guest; hide the hypervisor from the driver with qm set 200 --cpu host,hidden=1

Slow disk performance

  • Use VirtIO SCSI single as the SCSI controller (not the default LSI):
    qm set 200 --scsihw virtio-scsi-single
  • Enable IO thread for the disk:
    qm set 200 --scsi0 local-lvm:vm-200-disk-1,iothread=1
  • Use local-lvm (LVM-thin) rather than directory-based storage for better IOPS

Useful Proxmox commands

# VM status
qm status 200

# Start / stop / reboot
qm start 200
qm shutdown 200
qm reboot 200

# View VM configuration
qm config 200

# Open a console (VNC)
# Use the Proxmox web UI: VM > Console

# Open the QEMU monitor (type "help" for commands, "quit" to exit)
qm monitor 200

# Snapshot management
qm snapshot 200 pre-upgrade --description "Before CICADA IR upgrade"
qm listsnapshot 200
qm rollback 200 pre-upgrade
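The status command above can anchor a simple watchdog. A hypothetical sketch for cron on the Proxmox host (VM ID 200 and the restart action are assumptions):

```shell
# Return success only if the VM reports "status: running".
vm_running() { qm status "$1" 2>/dev/null | grep -q "status: running"; }

if vm_running 200; then
  echo "cicada-ir (VM 200) is running"
else
  echo "cicada-ir (VM 200) is NOT running" >&2
  # e.g. attempt a restart:  qm start 200
fi
```

Pair it with a cron entry such as */5 * * * * to get a basic liveness check between scheduled backups.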

Next steps