Hyper-V Guest and iSCSI
-
I'm looking for a sanity check as I look back on my creation from a couple of years ago and cringe.
Current environment (note: this is not the same as the data center gear, which I moved to local storage not long ago):
Synology 1513 with five 2 TB drives in RAID 5, which presents storage to my Hyper-V host via iSCSI. This disk is configured as a pass-through disk for a VM that acts as a domain controller, file server, and a few other things. The disk is only used as storage for the file server (about 1 TB of storage is used). A locally stored virtual disk is used as the boot disk, with the OS and other software installed on it.
Hyper-V host's (Dell T620 with Server 2012 R2) non-iSCSI storage (configured and purchased before my time): two 500 GB hard disks in RAID 1 used for the OS and the VMs, and three 2 TB hard disks in RAID 5 used as storage for the VMs' virtual disks (2.5 TB of this storage is used).
So now that you've recovered from the shock of the above mess, here's the first thing I'm considering for remediation.
Ideally, neither system would have a RAID 5 array. Additionally, I'd rather the Synology not be in the picture at all and local storage be used for the VMs. What I want to do is reconfigure the Synology as RAID 10 (with the extra drive as a hot spare).
This brings me to my sanity check. Once the Synology is reconfigured for RAID 10, I intend to simply create a virtual disk (for the size I need) on the iSCSI disk and attach the virtual disk to the VM. That seems 100% better than attaching the entire iSCSI disk to the VM as a pass-through disk. Agree or disagree?
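For concreteness, here's roughly what that would look like in PowerShell on the host, assuming the iSCSI LUN is already connected, initialized, and formatted as a local volume (the drive letter, paths, and VM name below are placeholders, not my actual config):

```powershell
# Create a dynamically expanding VHDX on the iSCSI-backed volume (S: here)
New-VHD -Path "S:\VHDs\FileServerData.vhdx" -SizeBytes 1TB -Dynamic

# Attach it to the VM's SCSI controller instead of passing through the raw disk
Add-VMHardDiskDrive -VMName "DC01" -ControllerType SCSI -Path "S:\VHDs\FileServerData.vhdx"
```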
Among the many things wrong, I want to focus on the best way to use an iSCSI disk for this particular thread. I'm looking forward to the learning and confirmation that's about to take place.
-
So... how much space do you have now, and how much do you need going forward? Would it make sense to drop the Synology altogether, get three more 2 TB drives, and put the T620 in OBR10 (one big RAID 10)? That would give you 6 TB of storage. It depends on how many drive bays the T620 has.
You can then use the Synology in a RAID 6 array for backups.
-
@EddieJennings said in Hyper-V Guest and iSCSI:
This brings me to my sanity check. Once the Synology is reconfigured for RAID 10, I intend to simply create a virtual disk (for the size I need) on the iSCSI disk and attach the virtual disk to the VM. That seems 100% better than attaching the entire iSCSI disk to the VM as a pass-through disk. Agree or disagree?
This is the best thing you can do in your situation.
-
@EddieJennings said in Hyper-V Guest and iSCSI:
Among the many things wrong, I want to focus on the best way to use an iSCSI disk for this particular thread. I'm looking forward to the learning and confirmation that's about to take place.
What will you use iSCSI for?
If it's disk storage, I typically set up iSCSI to the Host, and do what you said above if needed.
If it's for something like a tape drive (via iSCSI), you can set that up in a VM no problem if needed. It doesn't matter.
I don't really use iSCSI for anything anymore except backups. Everything else is local storage. This is where you want to try to go towards.
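To spell out the "iSCSI to the host" part: the host runs the initiator and the guest only ever sees a normal VHDX. Something like this (target address is a placeholder; run on the Hyper-V host):

```powershell
# Point the host's iSCSI initiator at the NAS and connect persistently,
# so the session survives reboots
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.50"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# Find the new disk; bring it online, initialize, and format it like any local disk
Get-Disk | Where-Object BusType -Eq "iSCSI"
```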
-
@Tim_G said in Hyper-V Guest and iSCSI:
@EddieJennings said in Hyper-V Guest and iSCSI:
This brings me to my sanity check. Once the Synology is reconfigured for RAID 10, I intend to simply create a virtual disk (for the size I need) on the iSCSI disk and attach the virtual disk to the VM. That seems 100% better than attaching the entire iSCSI disk to the VM as a pass-through disk. Agree or disagree?
This is the best thing you can do in your situation.
Yup, I agree. Or, buy yet another disk and just get a bigger, faster RAID 10. Having a hot spare seems over the top. The old system had multiple single points of failure, an inverted pyramid, double RAID 5s, iSCSI layer, etc. You are doing something like a 1,000 fold increase in reliability by making the change to local RAID 10 as it is. The hot spare seems silly at that point.
-
@Tim_G said in Hyper-V Guest and iSCSI:
I don't really use iSCSI for anything anymore except backups. Everything else is local storage. This is where you want to try to go towards.
Same, or for hyperconverged interconnects. And when doing that you want RDMA via iSER.
-
This won't be an overnight fix. My first step, though, is complete. The Synology is now in RAID 10 with one hot spare disk. One thing to note: the disks in this Synology are WD Reds :(. So even though the storage connects to the Hyper-V host via iSCSI, the VHDX stored on it, which contains the data for our file server, is at least no longer living on a RAID 5.
Next will be dealing with the local storage situation on the Hyper-V host (the two-drive RAID 1 and the three-drive RAID 5). The only "production" VM storage left on the RAID 5 is our Spiceworks VM, which I'll move to the RAID 1 storage. The other VMs that have their VHDs on the RAID 5 are non-production / testing VMs, so if they're lost, it's not a big deal to just rebuild them.
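For the Spiceworks move, I'm planning to use live storage migration so there's no downtime; roughly this (destination path is a placeholder for wherever the RAID 1 volume is mounted):

```powershell
# Migrate the VM's VHDs, config, and snapshots off the RAID 5 volume
# to the RAID 1 volume while the VM keeps running
Move-VMStorage -VMName "Spiceworks" -DestinationStoragePath "C:\VMs\Spiceworks"
```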