    Xenserver and Storage

    IT Discussion
    • stacksofplates @scottalanmiller

      @scottalanmiller said in Xenserver and Storage:

      @stacksofplates said in Xenserver and Storage:

      I’ve said it before, but I let the VMs do this. It’s less complex (when automated). Gluster does the replication in the VMs (for the very few that need it). Everything else is either stateless with floating IPs and rp/lb or it’s at the application layer.

      You are using Gluster inside of the "HA VMs"? So a file server cluster, for example, are VMs on top of local storage, but with Gluster inside of the VMs, so the VMs can be shut down and then fired up on top of any platform including physical or a different hypervisor, or just left off during work because there are other cluster members available, to remove the need for a live migration?

      Correct. Either scenario. So if we lose a host it’s rekickstarted (which is like 10 mins), the template is added, and then Ansible will recreate the guests and run the provisioning against them. The provisioning joins the VM back into the cluster and data starts replicating to it.

      If they aren’t using Gluster they just run whatever provisioning is needed.
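
      Roughly, the rejoin step amounts to something like this (a minimal sketch in Python wrapping the gluster CLI, not our actual Ansible role; the peer hostname and volume name are hypothetical, and a rebuild that reuses a stale brick path may need `gluster volume replace-brick` or `reset-brick` first):

      ```python
      #!/usr/bin/env python3
      """Minimal sketch of rejoining a rebuilt node, run from a healthy
      cluster member. Assumes the gluster CLI is installed; the hostname
      and volume name below are hypothetical."""
      import subprocess

      REBUILT_PEER = "gluster2.example.com"  # hypothetical rebuilt guest
      VOLUME = "gv0"                         # hypothetical replica volume

      def run(cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      # Probe the rebuilt node back into the trusted storage pool.
      run(["gluster", "peer", "probe", REBUILT_PEER])

      # Kick off a full self-heal so data starts replicating to the new brick.
      run(["gluster", "volume", "heal", VOLUME, "full"])
      ```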

      • stacksofplates

        It’s mainly stuff like repos, misc apps, stupid stuff. Any super important data goes on the Isilon.

        • StorageNinja (Vendor) @olivier

          @olivier said in Xenserver and Storage:

          Real life usage
          So we decided to take a look with some benchmarks, and despite prioritizing something safe/flexible, we saw pretty nice performance, as you can see in our multiple benchmarks.

          Your benchmarks leave a lot to be desired. I don't see a working set size, so you're effectively testing the performance of local DRAM (which is what Gluster caches in). That isn't very real world....

          • olivier @StorageNinja

            @storageninja It's all explained here:

            FIO is used to make the benchmarks, on a Debian 9 VM. It's done on a 10GiB file (enough to avoid caching). Throughput is fetched from the XenServer VDI RRD values, which is pretty accurate and close to "reality". IOPS are fetched from FIO directly. You can find some FIO examples in Sam's blog post.

            4k blocks are used for IOPS and 4M blocks for throughput.
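
            The jobs look roughly like this (a minimal sketch in Python; it assumes fio with libaio is installed in the VM, and the file path, job names, and access patterns are my guesses, not the exact jobs used):

            ```python
            #!/usr/bin/env python3
            """Minimal sketch of the fio jobs described above: one 10GiB file,
            direct I/O to avoid caching, 4k blocks for IOPS and 4M blocks for
            throughput. Path and job names are hypothetical."""
            import subprocess

            TEST_FILE = "/mnt/sr-under-test/fio.dat"  # hypothetical file on the SR

            def fio(name, bs, rw):
                subprocess.run([
                    "fio",
                    f"--name={name}",
                    f"--filename={TEST_FILE}",
                    "--size=10G",        # 10GiB working file, big enough to defeat caching
                    f"--bs={bs}",
                    f"--rw={rw}",
                    "--ioengine=libaio",
                    "--direct=1",        # bypass the page cache
                    "--runtime=60",
                    "--time_based",
                ], check=True)

            fio("iops-4k", "4k", "randread")    # 4k blocks for the IOPS figure
            fio("throughput-4m", "4M", "read")  # 4M blocks for the throughput figure
            ```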

            • jrc

              Ok, so let me check my understanding here.

              VSAN would be a management VSA running on each host, with the local storage assigned to it in 2GB VHD chunks; presumably the VSA would aggregate these chunks. The VSAs will then keep both local SRs perfectly in sync via a dedicated direct link between the hosts, and then allow me to present the total space to the hosts as an iSCSI SR on which I can place the VM VHDs (so it'll be VHDs in VHDs on the host's storage).

              If one host goes down, then the HA feature will auto migrate the VM to the running host.

              So what happens if it's just the dedicated link that dies? Will all my VMs be running on both hosts on my network (causing a ton of issues)? And how does this setup cope with the data getting out of sync if a host fails?

              • olivier

                I don't know about VSAN, but for XOSAN, it will be:

                • a virtual shared SR is exposed to XenServer (for XenServer, it's a bit like an NFS shared SR)
                • data is chunked across the various nodes (how depends on the XOSAN mode, replicated or dispersed)
                • if one host goes down with its VMs, XS will boot those VMs on other hosts
                • each node decides whether to stop its write operations based on whether it can still meet quorum. E.g. on 3 hosts, if 2 nodes can still communicate, it's OK: they keep writing. The isolated host (running but cut off from the rest) will be read-only. Luckily, XenServer HA knows this: that host will stop and its VMs will be started elsewhere.

                You won't have a split-brain scenario (data written "independently" on various sides). If quorum is not met, the node goes back to read-only: data integrity is more important than being able to write. (See the sketch below.)
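
                A minimal sketch of that quorum rule (in Python; this is the standard strict-majority test, not XOSAN's actual code):

                ```python
                def writes_allowed(reachable_peers, cluster_size):
                    """A node keeps accepting writes only while it, plus the
                    peers it can still reach, form a strict majority."""
                    return (reachable_peers + 1) * 2 > cluster_size

                # 3 hosts: the two nodes that still see each other keep writing...
                assert writes_allowed(reachable_peers=1, cluster_size=3)
                # ...while the isolated host drops to read-only.
                assert not writes_allowed(reachable_peers=0, cluster_size=3)
                ```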

                • scottalanmiller @jrc

                  @jrc said in Xenserver and Storage:

                  VSAN would be a management VSA ...

                  So what happens if it's just the dedicated link that dies? Will all my VMs be running on both hosts on my network (causing a ton of issues)? And how does this setup cope with the data getting out of sync if a host fails?

                  I hate all these terms. LOL. VSAN is just a normal SAN, but virtualized. You can just use SAN and that, hopefully, answers all questions alone.

                  VSA is a really weird acronym that is used to mean a virtualized NAS. So VSAN is not a VSA as one is SAN and one is NAS. Neither term needs to exist, because they are still SAN and NAS.

                  How do SANs normally cope with losing connectivity to each other?

                  • jrc @scottalanmiller

                    @scottalanmiller said in Xenserver and Storage:

                    @jrc said in Xenserver and Storage:

                    VSAN would be a management VSA ...

                    So what happens if it's just the dedicated link that dies? Will all my VMs be running on both hosts on my network (causing a ton of issues)? And how does this setup cope with the data getting out of sync if a host fails?

                    I hate all these terms. LOL. VSAN is just a normal SAN, but virtualized. You can just use SAN and that, hopefully, answers all questions alone.

                    It does not, and yes I hate these acronyms as well.

                    How do SANs normally cope with losing connectivity to each other?

                    I've no clue, as I have only ever worked with the one I have, and it is a single unit multipathed to my 2 hosts. So are you saying that real SANs also sync data between themselves?? So the 2 VMs (what I called VSAs) are like 2 physical SANs?

                    Look, I get that VSAN = SAN in all functionality once set up. It's the setup, and the possible ramifications of said setup, that I am unclear on.

                    I am trying to work out the best path from my current, fragile setup, to one that is more reliable and fault tolerant.

                    • scottalanmiller @jrc

                      @jrc said in Xenserver and Storage:

                      I've no clue, as I have only ever worked with the one I have, and it is a single unit multipathed to my 2 hosts. So are you saying that real SANs also sync data between themselves??

                      Of course, if you want high availability. That's the only path to HA. Any storage device for which you want protection against system failure needs a synced unit that you can fail over to. That's why we say SANs generally only make sense when you have zero or two. A single unit is a single point of failure, and most SANs aren't as reliable as normal servers, so a lone SAN is generally a really fragile single point of failure.

                      So if you don't care about HA, you don't need two SANs or two VSANs. They are literally the same things. If you do want HA, you need at least two of either. VSAN can have the benefit of being RLS, which a hardware SAN cannot, so VSAN has the possibility of being way safer, even if you have only one. Fewer points of failure.

                      • scottalanmiller @jrc

                        @jrc said in Xenserver and Storage:

                        So the 2 VMs (what I called VSAs) are like 2 physical SANs?

                        Yes, a SAN or a NAS is just a storage server, nothing else. There is no magic. And a virtualized server is exactly like a physical server, but with better design and abstraction. Nothing changes in reality.

                        • scottalanmiller @jrc

                          @jrc said in Xenserver and Storage:

                          I am trying to work out the best path from my current, fragile setup, to one that is more reliable and fault tolerant.

                          For reliable storage in any small to moderately sized setup (that is, under ~20 physical servers in a single cluster), the only good answer is RLS. RLS is the big "magic" answer. How you get to RLS isn't critical. You can do VSAN, native RLS (like DRBD), VSA (virtualized NAS), or whatever. Systems like Scale HC3, RHEV, and HA-Lizard use native RLS via RAIN or Network RAID. Starwind does native on Hyper-V or VSAN on non-Hyper-V. VMware does VSAN. HPE does VSA. All of them work. Don't get caught up in "how" each does what it does; that's not very important. What matters is the RLS.

                          http://www.smbitjournal.com/2013/07/replicated-local-storage/

                          • jrc @scottalanmiller

                            @scottalanmiller said in Xenserver and Storage:

                            @jrc said in Xenserver and Storage:

                            I am trying to work out the best path from my current, fragile setup, to one that is more reliable and fault tolerant.

                            For reliable storage in any small to moderately sized setup (that is, under ~20 physical servers in a single cluster), the only good answer is RLS. RLS is the big "magic" answer. How you get to RLS isn't critical. You can do VSAN, native RLS (like DRBD), VSA (virtualized NAS), or whatever. Systems like Scale HC3, RHEV, and HA-Lizard use native RLS via RAIN or Network RAID. Starwind does native on Hyper-V or VSAN on non-Hyper-V. VMware does VSAN. HPE does VSA. All of them work. Don't get caught up in "how" each does what it does; that's not very important. What matters is the RLS.

                            Your response is like saying, "Don't worry about how to drive; what matters is that the car works and is safe to drive." Perfectly true, but completely useless if you have no idea how to drive and need to get from A to B.

                            So I get what you are saying, that RLS is what I need; I already knew this (maybe without the acronym), and I am on board with it. The whole point of this post was to try to work out how to get to an RLS setup, and which option would work best for our needs. I am solid on the concept of having replicated data on both hosts; it makes perfect sense.

                            To extend my car analogy: I am perfectly aware of why I need a working, safe car in order to get from A to B, so I don't need any more info on why it is needed. I now need info on how to drive the damn thing.

                            The technologies you list all have their pros and cons, so knowing what those are is what I really need to know. How do they handle a node failure? Out-of-sync data, etc.? How easy are they to implement? How much do they roughly cost?

                            • scottalanmiller @jrc

                              @jrc said in Xenserver and Storage:

                              Your response is like saying, "Don't worry about how to drive; what matters is that the car works and is safe to drive." Perfectly true, but completely useless if you have no idea how to drive and need to get from A to B.

                              Not really. It's more like asking when you should drive in the left-hand lane, and what I'm telling you is that you should just stay in the right-hand lane and not worry about what the left-hand lane is for.

                              RLS is the key to the architectural improvements. Picking the right product for your RLS is important, but how that product does it is of zero concern to you. That's under the hood. It might be interesting, but it doesn't matter. Kind of like worrying about how many cubic inches of displacement your engine has. That never matters, ever. It's interesting, but what actually matters is reliability, efficiency, power, etc. You are getting stuck looking at how the engineers at these vendors are doing their under-the-hood designs. Certainly interesting, but it doesn't affect you here.

                              • scottalanmiller @jrc

                                @jrc said in Xenserver and Storage:

                                The technologies you list all have their pros and cons, so knowing what those are is what I really need to know. How do they handle a node failure? Out-of-sync data, etc.? How easy are they to implement? How much do they roughly cost?

                                That's kind of the point. VSAN vs. native vs. VSA, and RAID vs. RAIN, do have pros and cons, but they are trivial and under the hood. They are background noise, a distraction. What matters to you (or to anyone) is actual, real-world implementations and what is available, not how it works. For example, Starwind uses VSAN and Network RAID, but if they used VSA and RAIN instead, you'd not care at all. All you care about is the resulting performance, scale, reliability, and cost. Does that make more sense?

                                So for you, it all comes down to actual products and how they meet your needs, not how they are doing that job.

                                • scottalanmiller

                                  I hate to say it, because I love Xen, but it might really be worth leaving Xen behind. The solutions for it are few and far between and often rather complicated. KVM or Hyper-V have what you want, for free, from Starwind done in a really good way for what you need. And if you need or want support, you have that option from them. And they are active here, as well. So loads of choices.

                                  • scottalanmiller

                                    Going with Xen, you are far more limited, down to just a few options. With raw Xen, you have more options; with XenServer, they specifically remove or disallow certain common solutions like DRBD.

                                    • olivier

                                      DRBD works with XS via HA-Lizard, XOSAN is coming in stable soon, and people are also working on Ceph (I don't know the progress on that one; it won't be hyperconverged, however).

                                      I'm not sure there are a ton of solutions for a 2-node setup anyway, even in VMware (you need a "witness appliance", which is basically the arbiter node of Gluster).

                                      Maybe I missed something?

                                      • scottalanmiller @olivier

                                        @olivier said in Xenserver and Storage:

                                        DRBD works with XS via HA-Lizard, XOSAN is coming in stable soon, and people are also working on Ceph (I don't know the progress on that one; it won't be hyperconverged, however).

                                        So one that has been known to not be very good and two that aren't out yet. Seems like that kind of answers that. Xen has a future, but not a present. Nothing wrong with that. But he needs to deploy today.

                                        • scottalanmiller @olivier

                                          @olivier said in Xenserver and Storage:

                                          I'm not sure there are a ton of solutions for a 2-node setup anyway, even in VMware (you need a "witness appliance", which is basically the arbiter node of Gluster).

                                          No one is considering VMware here, and no one needs three nodes. VMware, Hyper-V, and KVM all have Starwind and other options at two nodes.

                                          • olivier

                                            XOSAN is just weeks from release 😉

                                            How does Starwind deal with split brain in a 2-node scenario? Why would it be better than VSAN (which is, IIRC, more or less the leader in the VMware market itself)?

                                            Edit: these are genuine questions, not rhetorical ones. I'm curious how some of these cases are handled with 2 nodes.
