Proxmox install for use with a ceph cluster
-
This is all lab work I'm doing, and I'm having a hell of a time sorting out how I'm supposed to install Proxmox on my hosts.
Each host is an HP DL380 Gen9 with 2 drives in each host.
When I perform the Proxmox installation, the installer sees /dev/sda for the total space those drives would offer in a RAID 1. Now I'm sure the answer is "add more drives to each," but that is counterintuitive since I see no option for the individual drives at all.
Paging @scottalanmiller since I know you've been using this for a bit.
-
Proxmox would not offer any mdraid configuration in its installer. https://pve.proxmox.com/wiki/Software_RAID
-
@taurex said in Proxmox install for use with a ceph cluster:
Proxmox would not offer any mdraid configuration in its installer. https://pve.proxmox.com/wiki/Software_RAID
This makes sense, but doesn't add up, as the only disk listed during install is /dev/sda. I haven't set up anything on these; I just started the server and booted from a USB.
-
Are you positive no logical volume is configured on that host in its RAID controller? You should be able to check it via iLO. Or you can start the Proxmox VE installer in debug mode, which gives you a console you can use to list all recognised drives.
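For example, from that debug console you can run something like the following (device names and output are just examples of what you might see):
lsblk -o NAME,SIZE,MODEL     # block devices the kernel actually detects
ls -l /dev/disk/by-id/       # stable IDs; a single "LOGICAL VOLUME" entry would point to the RAID controller
If only one device shows up here, the installer is being handed a logical drive, not your two physical disks.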
-
@taurex I'm positive, but will check again tomorrow.
-
During the install, Target Harddisk should show you /dev/sda and /dev/sdb. The default filesystem is ext4. Now if you want to RAID those two drives during the install, you can click Options, click the drop-down next to Filesystem, and select zfs (RAID1), zfs (RAIDZ-1), or another RAID-Z level.
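If you go that route, you can sanity-check the pool after the first boot from the Proxmox shell with something like this (assuming the installer's default rpool name):
zpool status rpool     # for zfs (RAID1) you should see a mirror-0 vdev containing both disks
zpool list rpool       # usable size should be roughly one disk's worth for a mirror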
-
@black3dynamite said in Proxmox install for use with a ceph cluster:
During the install, Target Harddisk should show you /dev/sda and /dev/sdb. The default filesystem is ext4. Now if you want to RAID those two drives during the install, you can click Options, click the drop-down next to Filesystem, and select zfs (RAID1), zfs (RAIDZ-1), or another RAID-Z level.
Yeah, I'm only presented with /dev/sda, so there has to be something else getting in the way, be it hardware or a RAID controller.
I'm not at the office yet but will check when I am.
-
Turns out the P420i defaults to setting up an R1 for you as a "nicety" if you don't configure it. But if I don't want an R1, why would it do that?!
Maybe I want an R0.
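For anyone hitting the same thing, the auto-created array can be checked and wiped from a live Linux environment with HPE's ssacli tool (older packages call it hpssacli), with something like the following; the slot number is just an example:
ssacli ctrl all show config               # list arrays and logical drives on the P420i
ssacli ctrl slot=0 ld all delete forced   # remove the auto-created logical drive(s)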
-
Anyways I'm now installing to a 32GB USB drive just to test and see how it all works.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
Turns out the P420i defaults to setting up an R1 for you as a "nicety" if you don't configure it. But if I don't want an R1, why would it do that?!
Maybe I want an R0.
Because it HAS to default to SOMETHING. If you wanted anything else, you'd have selected it. So they default to what is safest and most common. Why pay for a hardware controller if you didn't have a use for it? The key features of a hardware controller are disabled with R0.
-
@scottalanmiller said in Proxmox install for use with a ceph cluster:
@DustinB3403 said in Proxmox install for use with a ceph cluster:
Turns out the P420i defaults to setting up an R1 for you as a "nicety" if you don't configure it. But if I don't want an R1, why would it do that?!
Maybe I want an R0.
Because it HAS to default to SOMETHING. If you wanted anything else, you'd have selected it. So they default to what is safest and most common. Why pay for a hardware controller if you didn't have a use for it? The key features of a hardware controller are disabled with R0.
Not when I've expressly wiped the configuration, on purpose. The system is actively attempting to bypass the settings I had configured.
The argument of "why spend money on expensive hardware" also doesn't hold up because this equipment is all lab equipment that has no cost to acquire. Pulled it out of a decom and used it for this purpose.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
Not when I've expressly wiped the configuration, on purpose. The system is actively attempting to bypass the settings I had configured.
It can't; it only does that if you forget to configure it. If you configure it for RAID 0, it will never go to RAID 1. But if you wipe it and force it to choose a default, that's the same as choosing RAID 1.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
The argument of "why spend money on expensive hardware" also doesn't hold up because this equipment is all lab equipment that has no cost to acquire. Pulled it out of a decom and used it for this purpose.
The system is designed around paying customers, not people receiving the hardware later.
-
@scottalanmiller said in Proxmox install for use with a ceph cluster:
@DustinB3403 said in Proxmox install for use with a ceph cluster:
The argument of "why spend money on expensive hardware" also doesn't hold up because this equipment is all lab equipment that has no cost to acquire. Pulled it out of a decom and used it for this purpose.
The system is designed around paying customers, not people receiving the hardware later.
Of course, but when any administrator specifically goes into the controller and wipes the configuration and tries to boot the system, it immediately attempts to go back and recreate the very same array.
If I wanted an HP server using JBOD, that should also be an option, regardless of whether the system has hardware RAID.
-
Which is an option, but you have to futz around with the controller at start-up.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
Of course, but when any administrator specifically goes into the controller and wipes the configuration and tries to boot the system, it immediately attempts to go back and recreate the very same array.
Only if you don't make something else. It has to do "something"; no matter what, there has to be some configuration. If it did anything else, we'd still be having this conversation. You wiped it but didn't configure it, so it was in a situation of having to make a "judgement call" to try to help you, and it made what is, far and away, the only reasonable choice other than stopping the boot completely and forcing you to manually decide - which, given that you had already opted out of that, isn't a great choice.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
If I wanted an HP server using JBOD, that should also be an option, regardless of whether the system has hardware RAID.
That's a different discussion. That controller explicitly doesn't offer JBOD at all. By keeping the hardware in place, you have told the hardware not to allow JBOD. If JBOD was your goal (which is totally different from wanting RAID 0), then wiping the controller isn't the right action; removing it is. It's a RAID controller; its one purpose is to avoid JBOD.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
Which is an option, but you have to futz around with the controller at start-up.
It's not; you can only mimic JBOD, and in a bad way. It's not safe; you should remove the controller for better safety. But why?
-
Can that controller do RAID 0 with one drive to do fake JBOD?
-
@jt1001001 said in Proxmox install for use with a ceph cluster:
Can that controller do RAID 0 with one drive to do fake JBOD?
No, and it's also not worth bothering with any more, as the installer can still only see the hardware controller when booted from USB.
So this lab experiment is over.