    Xenserver and Storage

• olivier

E.g., XS patching for critical security reasons: I don't have the resources to make our apps redundant at their level, so I rely on virtualization (and live migration) to avoid an outage.

• scottalanmiller @olivier

        @olivier said in Xenserver and Storage:

E.g., XS patching for critical security reasons: I don't have the resources to make our apps redundant at their level, so I rely on virtualization (and live migration) to avoid an outage.

Sure, but what service is so critical that you can't reboot? SMBs basically never have any service that needs to stay up. That's the thing. I get why services will go down without an HA solution in place, but what no one ever explains to me is why going down is a problem. How many users are impacted, in what way, and for how long?

• olivier

Think priorities. It will impact some users (because of the updater in XOA); it's not dramatic for the business, but it's better to avoid. So the cost of having it is negligible (we're already using virtualization), and I don't have the resources to make the service app itself HA (whereas live migration is free…).

          edit: in the end, if I follow your arguments, virtualization is also useless for SMBs.

• scottalanmiller @olivier

            @olivier said in Xenserver and Storage:

            edit: in the end, if I follow your arguments, virtualization is also useless for SMBs.

            Nope, not in the least. This would imply a misunderstanding of the purpose of virtualization. Virtualization is free and makes things safer. Everyone benefits from virtualization, every time.

HA is not free, adds its own risks (which are very high), and provides benefits that are rarely needed. Most shops are hurt by HA, not helped by it.

            Same logic, totally different results.

• olivier

              I'm not speaking about HA right now, I'm speaking about live migration 😉

HA is another beast; I agree it should be used only after weighing the benefits against the problems.

• scottalanmiller

It's simply good business: look at the cost of downtime and the cost of HA, then look at the risks without HA and the risks with HA. Put them together in a normal cost/risk analysis and the result is almost always that HA doesn't deliver enough value to overcome its costs. And since it adds a lot of risk of its own (though not as much as it mitigates), it is far, very far, from a clear win even on the risk side.
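To make that kind of analysis concrete, here is a minimal back-of-the-envelope sketch; every number in it is made up purely for illustration, not taken from this thread or any real shop:

```python
# Hypothetical figures, only to illustrate the cost/risk comparison described above.
hourly_downtime_cost = 500.0    # $/hour the business actually loses while the service is down
outage_hours_per_year = 4.0     # expected unplanned downtime per year without HA
ha_cost_per_year = 8000.0       # amortized cost of extra hosts, shared storage, licenses, admin time
ha_added_outage_hours = 1.0     # downtime introduced by the HA stack itself (false failovers, SAN issues)
ha_avoided_fraction = 0.8       # share of the original downtime that HA actually prevents

cost_without_ha = hourly_downtime_cost * outage_hours_per_year
cost_with_ha = (ha_cost_per_year
                + hourly_downtime_cost * outage_hours_per_year * (1 - ha_avoided_fraction)
                + hourly_downtime_cost * ha_added_outage_hours)

print(f"Expected annual downtime cost without HA: ${cost_without_ha:,.0f}")  # $2,000
print(f"Expected annual cost with HA:             ${cost_with_ha:,.0f}")     # $8,900
# With numbers like these, HA costs far more than the downtime it prevents;
# plug in your own figures to see how the trade-off looks for your shop.
```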

• scottalanmiller @olivier

                  @olivier said in Xenserver and Storage:

                  I'm not speaking about HA right now, I'm speaking about live migration 😉

                  They are essentially one and the same. The technology to do one does the other. If you have HA, you can live migrate. If you can live migrate, you can't necessarily do HA. I'm giving you the advantage by lumping them together since the cost of one gives you both.

• olivier

                    HA is automated and more "dangerous". Live migration is a manual process. That was the context I meant.

• scottalanmiller

If you're willing to do live migration without HA, you get even more options, technically, making live migration easier with no shared storage needed at all.

• scottalanmiller @olivier

                        @olivier said in Xenserver and Storage:

                        HA is automated and more "dangerous". Live migration is a manual process. That was the context I meant.

Makes sense. I've seen faith in live migration take down big banks: someone thought it was safe, did it without a green zone, and the hypervisors (ESXi) died trying to do it.

• scottalanmiller

                          If live migrations were free, carried no risks, and took no effort, of course they would be a great benefit. But free and riskless they are not. That's what causes problems.

• stacksofplates

                            I’ve said it before, but I let the VMs do this. It’s less complex (when automated). Gluster does the replication in the VMs (for the very few that need it). Everything else is either stateless with floating IPs and rp/lb or it’s at the application layer.

• scottalanmiller @stacksofplates

                              @stacksofplates said in Xenserver and Storage:

                              I’ve said it before, but I let the VMs do this. It’s less complex (when automated). Gluster does the replication in the VMs (for the very few that need it). Everything else is either stateless with floating IPs and rp/lb or it’s at the application layer.

You are using Gluster inside of the "HA VMs"? So a file server cluster, for example, would be VMs on top of local storage, but with Gluster inside of the VMs, so the VMs can be shut down and then fired up on top of any platform (including physical or a different hypervisor), or just left off during the work because there are other cluster members available, removing the need for a live migration?

• stacksofplates @scottalanmiller

                                @scottalanmiller said in Xenserver and Storage:

                                @stacksofplates said in Xenserver and Storage:

                                I’ve said it before, but I let the VMs do this. It’s less complex (when automated). Gluster does the replication in the VMs (for the very few that need it). Everything else is either stateless with floating IPs and rp/lb or it’s at the application layer.

You are using Gluster inside of the "HA VMs"? So a file server cluster, for example, would be VMs on top of local storage, but with Gluster inside of the VMs, so the VMs can be shut down and then fired up on top of any platform (including physical or a different hypervisor), or just left off during the work because there are other cluster members available, removing the need for a live migration?

Correct. Either scenario. So if we lose a host, it's re-kickstarted (which takes like 10 minutes), the template is added, and then Ansible recreates the guests and runs the provisioning against them. The provisioning joins the VM back into the cluster and data starts replicating to it.

If they aren't using Gluster, they just run whatever provisioning is needed.
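For readers who haven't used Gluster, here is a rough sketch (with hypothetical volume, host, and brick names; it is not the actual Ansible role described above) of the Gluster-side commands that kind of provisioning typically ends up running so a rebuilt node rejoins the cluster and data replicates back:

```python
# Hypothetical names throughout; a sketch of rejoining a rebuilt node to a replicated Gluster volume.
import subprocess

VOLUME = "vmdata"                                        # assumed volume name
REBUILT_HOST = "gluster2"                                # the re-kickstarted node (same hostname as before)
DEAD_BRICK = f"{REBUILT_HOST}:/bricks/vmdata/brick1"     # the brick that was lost with the host
NEW_BRICK = f"{REBUILT_HOST}:/bricks/vmdata/brick2"      # the freshly provisioned brick

def gluster(*args):
    """Run a gluster CLI command from a surviving cluster member."""
    print("+ gluster", " ".join(args))
    subprocess.run(["gluster", *args], check=True)

gluster("peer", "probe", REBUILT_HOST)                   # re-add the node to the trusted pool
gluster("volume", "replace-brick", VOLUME,
        DEAD_BRICK, NEW_BRICK, "commit", "force")        # swap the dead brick for the new one
gluster("volume", "heal", VOLUME, "full")                # trigger a full self-heal so data flows back
```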

• stacksofplates

                                  It’s mainly stuff like repos, misc apps, stupid stuff. Any super important data goes on the Isilon.

• StorageNinja (Vendor) @olivier

                                    @olivier said in Xenserver and Storage:

Real life usage
So we decided to take a look with some benchmarks, and despite prioritizing something safe/flexible, we got pretty nice performance, as you can see in our multiple benchmarks.

Your benchmarks leave a lot to be desired. I don't see a working set size, so you're testing the performance of local DRAM (which is what Gluster does). This isn't very real world...

• olivier @StorageNinja

                                      @storageninja It's all explained here:

FIO is used to run the benchmarks, on a Debian 9 VM, against a 10 GiB file (large enough to avoid caching). Throughput is fetched from the XenServer VDI RRD values, which is pretty accurate and close to reality; IOPS are fetched from FIO directly. You can find some FIO examples in Sam's blog post.

A 4k block size is used for IOPS and 4M for throughput.
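For anyone wanting to reproduce something similar, here is a minimal sketch of two fio runs matching that description (4k random for IOPS, 4M sequential for throughput, against a 10 GiB file); the file path, queue depths, and runtime are assumptions, not the exact job files used for those benchmarks:

```python
# A sketch of fio runs along the lines described above; paths and queue depths are assumptions.
import subprocess

TEST_FILE = "/mnt/sr-test/fio.dat"   # hypothetical location on the storage being tested

common = [
    "fio",
    f"--filename={TEST_FILE}",
    "--size=10G",           # big enough to blow past caches
    "--direct=1",           # bypass the guest page cache
    "--ioengine=libaio",
    "--runtime=60",
    "--time_based",
    "--group_reporting",
]

# 4k random reads: look at the IOPS number in the output.
subprocess.run(common + ["--name=iops-4k", "--rw=randread", "--bs=4k", "--iodepth=32"], check=True)

# 4M sequential reads: look at the bandwidth (throughput) number in the output.
subprocess.run(common + ["--name=bw-4M", "--rw=read", "--bs=4M", "--iodepth=8"], check=True)
```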

• jrc

                                        Ok, so let me check my understanding here.

VSAN would be a management VSA running on each host, with the local storage assigned to it in 2GB VHD chunks; presumably the VSA would aggregate these chunks. The VSAs would then keep both local SRs perfectly in sync via a dedicated direct link between the hosts, and would allow me to present the total space to the hosts as an iSCSI SR on which I can place the VM VHDs (so it would be VHDs inside VHDs on the hosts' storage).

                                        If one host goes down, then the HA feature will auto migrate the VM to the running host.

So what happens if it's just the dedicated link that dies? Will all my VMs be running on both hosts on my network (causing a ton of issues)? And how does this setup cope with the data getting out of sync if a host fails?

• olivier

                                          I don't know about VSAN, but for XOSAN, it will be:

• a virtual shared SR is exposed to XenServer (for XenServer, it's a bit like an NFS shared SR)
• data is chunked across the various nodes (how depends on the XOSAN mode, replicated or dispersed)
• if one host goes down with its VMs, XS will boot those VMs on other hosts
• each node decides whether to stop its write operations based on whether it can still meet quorum. E.g., with 3 hosts, as long as 2 nodes can still communicate, writes continue; the isolated host (running but cut off from the rest) goes read-only. Luckily, XenServer HA knows this, so that host will stop and its VMs will be started elsewhere (see the small sketch below).

You won't have a split-brain scenario (data written "independently" on different sides). If quorum is not met, the node goes back to read-only: data integrity is more important than being able to write.
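To make the quorum rule concrete, here is a toy sketch of the decision logic described above (it is not XOSAN or Gluster code): a node keeps accepting writes only while it can see a strict majority of the cluster.

```python
# Toy illustration of majority quorum -- not actual XOSAN/Gluster code.
def can_write(reachable_nodes: int, cluster_size: int) -> bool:
    """A node accepts writes only while it sees a strict majority of the cluster (itself included)."""
    return reachable_nodes > cluster_size // 2

# 3-node cluster where one host loses its dedicated link:
print(can_write(reachable_nodes=2, cluster_size=3))  # True  -> the two connected nodes keep writing
print(can_write(reachable_nodes=1, cluster_size=3))  # False -> the isolated node goes read-only
```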

• scottalanmiller @jrc

                                            @jrc said in Xenserver and Storage:

                                            VSAN would be a management VSA ...

So what happens if it's just the dedicated link that dies? Will all my VMs be running on both hosts on my network (causing a ton of issues)? And how does this setup cope with the data getting out of sync if a host fails?

I hate all these terms. LOL. VSAN is just a normal SAN, but virtualized. You can just say "SAN" and that, hopefully, answers all the questions by itself.

                                            VSA is a really weird acronym that is used to mean a virtualized NAS. So VSAN is not a VSA as one is SAN and one is NAS. Neither term needs to exist, because they are still SAN and NAS.

                                            How do SANs normally cope with losing connectivity to each other?
