Stratis is just an automation and communication layer for XFS and LVM
I did not know that.
I didn't either; I was just researching it, lol. That makes it a lot more valuable: it's essentially taking the "one filesystem" advantages of ZFS or Btrfs and making them happen with traditional, mature technologies. Kind of a best-of-both-worlds approach. Actually quite nice, if it works well.
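As a rough sketch of what that automation layer looks like in practice (the pool, filesystem, and device names here are made up, and flags may vary a bit by stratis-cli version):

```shell
# Pool a spare block device (hypothetical /dev/sdb) into a Stratis pool,
# then carve a filesystem out of it and mount it. Under the hood this is
# Stratis driving device-mapper/XFS for you.
stratis pool create pool1 /dev/sdb
stratis filesystem create pool1 fs1
mkdir -p /mnt/fs1
mount /dev/stratis/pool1/fs1 /mnt/fs1
```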
keep your images and ISOs in the default location of /var/lib/libvirt/images/?
Yes I do, but I create 2 new folders there, iso and vm.
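Something like this, assuming the default libvirt pool path (the variable is just so it's easy to point elsewhere):

```shell
# Hypothetical layout: two subfolders under the default libvirt images pool,
# one for install ISOs and one for VM disk images. Run with sudo if your
# user can't write to /var/lib/libvirt.
IMAGES_DIR="${IMAGES_DIR:-/var/lib/libvirt/images}"
mkdir -p "$IMAGES_DIR/iso" "$IMAGES_DIR/vm"
```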
Fedora will be presented with a 4 TB block device?
Why don't you separate that a little and have more fun? "Block device" I assume means DAS; if not, why don't you make the storage reliable and robust and make it its own server, like another Fedora or CentOS install with RAID 10? The simplest option to share it is NFS, and this way you can have many KVM hosts and the migration feature will actually work. You can do RAID on just /var, and you can scale easily by adding KVM nodes. The KVM nodes can be stateless (think Salt Stack), so you can treat them as pure compute nodes.
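For the NFS route, the server side can be as simple as one line in /etc/exports (the path, hostname, and subnet here are examples, not from the thread):

```shell
# On the hypothetical storage server: share the VM image store with the
# KVM nodes' subnet, read-write; no_root_squash so libvirt (running as
# root on the nodes) can manage image ownership.
echo '/srv/vmstore 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra   # re-export everything listed in /etc/exports

# On each KVM node, mount the share where libvirt expects images:
mount -t nfs storage.example.lan:/srv/vmstore /var/lib/libvirt/images
```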
Because @EddieJennings is talking about his home lab, which will consist of a single 1U server. That hadn't been mentioned in this thread.
Bah! Folks should be able to read my mind ;). There were some good ideas in this thread though.
What I decided on was giving / enough space to live comfortably, and giving everything else to /var.
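With LVM, that kind of split might look like this (volume group name, sizes, and filesystem choice are placeholders, not what was actually used):

```shell
# Fixed, comfortable size for /, then hand all remaining space in the
# volume group to /var.
lvcreate -L 50G -n root vg0
lvcreate -l 100%FREE -n var vg0
mkfs.xfs /dev/vg0/root
mkfs.xfs /dev/vg0/var
```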
@travisdh1 So on second thought, I'm thinking it might be a better approach to redirect the call recordings to the block device directly, without extending the LVM volume to the block device. So it would be like this:
Attach the block device and create a partition and file system.
Mount the new device to a new directory (/callrecordings).
In FreePBX, point the call recordings to this new directory.
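The first two steps might look like this (the device name is a guess; double-check with lsblk before formatting anything):

```shell
# Partition and format the attached block device (assuming it shows up
# as /dev/vdb, which is common for Vultr block storage).
parted -s /dev/vdb mklabel gpt mkpart primary xfs 0% 100%
mkfs.xfs /dev/vdb1

# Mount it where FreePBX will be pointed for recordings.
mkdir -p /callrecordings
mount /dev/vdb1 /callrecordings
```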
This way, the VPS disk is still completely separate from the block device disk. In my head this just seems cleaner, and it has less potential for errors if the block device is ever unavailable.
Yes, that makes way more sense.
The only thing that made me think of that was that about 2 weeks ago Vultr NJ had some issues with block storage. If they have an issue again, at least I could still boot the VM (although I would have to remove the block device from the fstab; but then it should boot fine, I suppose). (crosses fingers)
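One way to avoid editing fstab by hand during an outage is the nofail mount option, so boot doesn't hang waiting on a missing device (device path and mount point here are examples):

```shell
# Sample /etc/fstab entry: "nofail" lets the system boot even if the block
# device is absent, and x-systemd.device-timeout shortens how long systemd
# waits for it to appear.
/dev/vdb1  /callrecordings  xfs  defaults,nofail,x-systemd.device-timeout=10s  0  2
```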
Of course, on more modern systems, using LVM instead of older-style partitions makes this a little more flexible, so more control over the process is possible. But all of the core problems still exist.
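For instance, growing a mounted filesystem live is a one-liner with LVM (the volume name and size are placeholders):

```shell
# Extend the logical volume by 10 GiB and resize the filesystem in one
# step: -r invokes the appropriate resize tool (e.g. xfs_growfs) while
# the filesystem stays mounted.
lvextend -r -L +10G /dev/vg0/var
```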
Some vendors try to market this mechanism as "RAID virtualization", which isn't a completely crazy name given the layers of abstraction, but it makes the feature sound valuable when, in reality, it is not. Used to enable hot or live RAID array growth, RAID virtualization is generally a good idea. Used as a kludge to enable bad ideas, it remains bad.