
Proxmox VE Cluster Ceph-Storage Configure

1. Ceph disk preparation

1.1 Set disks to non-RAID

Reboot the server and press F2 to enter System Setup

image-20250904171641579

Click Device Settings

image-20250904171728384

Click RAID card

image-20250904171825118

Click Configure

image-20250904171903555

Click Convert to Non-RAID Disk

image-20250904173228836

Click Check All -> OK to convert the data disks to Non-RAID Disks

image-20250904173349072

Click OK to confirm

image-20250904173450366

Click Back to return to the RAID main menu

1.2 Check disk status

Click Main Menu to check disk status

image-20250904173605522

Click Physical Disk Management

image-20250904173643162

Check disk type

image-20250904173742868

Click Back -> Finish to complete the disk setup, then reboot the server

image-20250904173814742
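After the reboot, you can optionally verify from the OS that the converted disks now appear as plain pass-through block devices before creating OSDs on them. A minimal check (device names and models will differ on your hardware):

```shell
# List physical disks only (no partitions); disks converted to
# Non-RAID should show up as individual devices, e.g. /dev/sdb, /dev/sdc.
lsblk -d -o NAME,SIZE,TYPE,MODEL
```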

2. Install Ceph

Please confirm that this node can access the PVE enterprise repository (https://enterprise.proxmox.com)

  • Install PVE Ceph and check ceph version (@all nodes)

Log in to the PVE web UI, select the node, and click >_Shell

pveceph install
apt policy ceph

image-20251110131928841

If you have not purchased a subscription, you can use a local APT repository or a third-party repository; please enable temporary internet access first

  • Switch the APT repository to the local mirror (optional: if using no-subscription)
sed -i 's|https://enterprise.proxmox.com|https://wzs-yum.wistron.com/proxmox|g' /etc/apt/sources.list.d/ceph.sources
sed -i 's|enterprise|no-subscription|g' /etc/apt/sources.list.d/ceph.sources
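After both substitutions, /etc/apt/sources.list.d/ceph.sources should look roughly like the fragment below. This is a sketch: the exact suite and Ceph release names (here trixie and ceph-squid) depend on your PVE version, and the mirror URL is the site-local one used above.

```
Types: deb
URIs: https://wzs-yum.wistron.com/proxmox/debian/ceph-squid
Suites: trixie
Components: no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
```

Run apt update afterwards so APT picks up the new source.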

3. Ceph config

3.1 Ceph initialization

  • Initialize cluster configuration (@first node)

Log in to the PVE cluster web UI

Select the first node ZSXXXXX, then click Ceph -> Configure Ceph

image-20251110133550926

Select Public Network IP/CIDR and Cluster Network IP/CIDR, then click Next

image-20250929103915928

Click Finish

image-20250929105029451
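The same initialization can also be done from the shell on the first node; a hedged equivalent of the wizard above, where the two CIDRs are placeholders for the networks you selected in the wizard, not values from this setup:

```shell
# Initialize the Ceph configuration (run once, on the first node).
# 10.0.10.0/24 and 10.0.20.0/24 are example networks -- use your own.
pveceph init --network 10.0.10.0/24 --cluster-network 10.0.20.0/24
```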

3.2 Create monitor

⚠️ Production environments require at least 3 MONs (distributed across different physical nodes) to ensure HA

  • Select the node, click Monitor, then click Create under Monitor

image-20251110133924951

  • Select the node Host, and click Create to add a new Monitor

image-20251110134141680

  • Check Monitor status

image-20251110134302431
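Monitors can also be created and checked from the shell; a sketch, assuming you run the create command on each node that should host a MON:

```shell
# Create a monitor on the local node (repeat on each MON node).
pveceph mon create

# Verify quorum: all monitors should be listed.
ceph mon stat
```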

3.3 Create manager

⚠️ Creating at least 2 MGRs is recommended to ensure HA

  • Click Create under Manager

image-20251110134713128

  • Select the node Host, and click Create to add a new Manager

image-20251110134827648

  • Check Manager status

image-20251110135048375
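As with monitors, managers can be created and checked from the shell; a sketch:

```shell
# Create a manager on the local node (repeat on a second node for HA).
pveceph mgr create

# Verify: one active MGR plus at least one standby.
ceph mgr stat
```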

3.4 Create OSD

⚠️ Add every node's HDDs as OSD disks (use the command or the WebGUI)

Command:

pveceph osd create /dev/sdb
pveceph osd create /dev/sdc
......
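If every data disk follows the same naming pattern, the per-disk commands above can be wrapped in a loop. This is only a sketch: the /dev/sd{b..e} range is an assumption, so adjust it to your actual devices and double-check that each disk is unused before creating an OSD on it.

```shell
# Create one OSD per data disk; the device names here are examples only.
for disk in /dev/sd{b..e}; do
    pveceph osd create "$disk"
done
```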

WebGUI:

  • Select node, and click OSD -> Create: OSD

image-20251110135258894

  • Select unused Disk (example: /dev/sdb...), and click Create

image-20251110135440075

  • Check all OSD status

image-20251110140740412
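The OSD layout can also be inspected from the shell:

```shell
# Show OSDs grouped by host, with status and weight.
ceph osd tree

# Per-OSD utilization.
ceph osd df
```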

3.5 Create pools

  • Select node, and click Pools -> Create

image-20251110140844285

  • Input the Ceph pool Name / Size, and click Create

Name: the name of the PVE Ceph storage

Size: number of replicas (minimum 2)

Add as Storage: checked (automatically adds the pool as cluster storage after creation)

image-20251110141024360

  • Check PVE cluster storage

image-20251110141448587
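A hedged CLI equivalent for creating the pool and confirming the resulting storage; the pool name ceph-pool is a placeholder, and --size/--min_size should match the replica counts chosen above:

```shell
# Create a replicated pool and register it as PVE storage in one step.
pveceph pool create ceph-pool --size 3 --min_size 2 --add_storages 1

# Confirm the new storage is active cluster-wide and the cluster is healthy.
pvesm status
ceph -s
```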