ZFS-Based Storage for Medium VMware Workload

    SAM-SD
    zfs storage virtualization filesystems raid
• scottalanmiller @donaldlandru

      @donaldlandru said:

      @scottalanmiller said:

      @donaldlandru said:

24-spindle 900GB (7.2k SAS) in 12 mirrored vdevs

      That's RAID 01, you never want that. You want 12 mirrors in a stripe for RAID 10.

      Understanding RAID 10 and RAID 01.

This was modeled after the way TrueNAS (the commercial version of FreeNAS) quoted us.

Those are exactly the people I warn everyone against.

      http://www.smbitjournal.com/2015/07/the-jurassic-park-effect/

The FreeNAS community should be avoided completely. It has the worst storage advice and the biggest misunderstandings of storage basics I've ever seen. FreeNAS, by its nature, collects those misunderstandings and concentrates them into a community.
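
To put rough numbers on the RAID 01 vs. RAID 10 point quoted above, here is a quick back-of-the-envelope sketch. The 24-disk count comes from the thread; the probability model is the standard second-failure one and is illustrative only:

```python
# Illustrative only: odds that a SECOND disk failure destroys the array,
# given one disk has already failed, for a 24-disk pool.

DISKS = 24

# RAID 10 (a stripe of 12 two-disk mirrors): the array only dies if the
# second failure hits the lone surviving partner of the failed disk.
raid10_fatal = 1 / (DISKS - 1)

# RAID 01 (a mirror of two 12-disk stripes): the first failure already
# degraded one whole stripe set, so ANY failure in the other set is fatal.
raid01_fatal = (DISKS // 2) / (DISKS - 1)

print(f"RAID 10: {raid10_fatal:.1%} chance the second failure is fatal")
print(f"RAID 01: {raid01_fatal:.1%} chance the second failure is fatal")
# RAID 10: ~4.3%, RAID 01: ~52.2% -- same 24 disks, very different risk.
```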

• scottalanmiller

        The FreeNAS community tends to do things like promote software RAID when it doesn't make sense and attempts to dupe people by using carefully crafted marketing phrases like "in order for FreeNAS to monitor the disks", leaving out critical advice like "that isn't something you want FreeNAS to be doing."

• scottalanmiller @donaldlandru

          @donaldlandru said:

1. We have the investment into this. As another recent thread here discussed, once an SMB gets heavily invested one way, it is hard to switch. To be honest, I am not sure how I could convince them to switch at this point. This actually seems like an opportunity for a great learning experience.

You have what investment into it now? Once you replace the storage that you have today, aren't you effectively starting over? Really, this is about stopping you from wasting a new investment rather than protecting a current one. Everything that you proposed is, I believe, a greater "reinvestment" than what I am proposing. So, if I'm understanding the concern here correctly, your HP and/or ZFS approach is actually the one that this concern would rule out, correct? Since it requires a much larger new investment.

• scottalanmiller

Also, in reference to point one: what you are sensing is the fear of people giving in to the sunk cost fallacy. Even if they don't end up doing this, take a moment to sit back and understand how the sunk cost fallacy can be destructive, and maybe even have a talk with the decision makers about this fiscal trap before looking at options, to make sure that people are thinking about it logically before they get the amygdala (fight or flight) emotional reaction to the idea of changing direction.

• scottalanmiller @donaldlandru

              @donaldlandru said:

1. Training of supporting resources -- I have a counterpart in our off-shore office who is just getting up to speed on how VMware works -- to me this will be even harder to change.

All the more reason to go to an easier architecture with fewer moving parts and fewer things to support. Moving from VMware to XenServer or Hyper-V should take maybe an hour, tops. These are all very similar products that all do very little. Hypervisors should not require any real training. Most people can move from VMware vSphere to XenServer in literally a few minutes. It's all super simple GUI management; they should be able to just look at the interface and know what to do.

• scottalanmiller @donaldlandru

                @donaldlandru said:

Edit -- Stepping back and thinking, the lack of drive bays is not a valid limiting factor, as I could easily add SAS and do DAS storage on these nodes.

                You can do a hybrid too. Local for some workloads and DAS or shared for others.

Figuring out whether you need just local storage, which is super simple, or replicated local storage, which is more complex, is the place to start. From the description, it sounds like straight local storage might be the way to go. Very cheap, very easy to tune for big-time performance. XenCenter will happily put many independent (non-clustered) nodes into a single interface to make it super simple for the support staff, wherever they are.
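
As a sketch of how that single-interface management can also be scripted, the XenAPI Python bindings expose the same management plane XenCenter uses. This is a minimal sketch, not a production script, and the host URLs and credentials are hypothetical placeholders:

```python
# Minimal sketch: one "pane of glass" over several independent
# (non-clustered) XenServer hosts via the XenAPI Python bindings.
import XenAPI

# Hypothetical host addresses and credentials -- placeholders only.
HOSTS = ["https://xs-dev1.example.local", "https://xs-ops1.example.local"]

for url in HOSTS:
    session = XenAPI.Session(url)
    session.xenapi.login_with_password("root", "password")
    try:
        for ref, vm in session.xenapi.VM.get_all_records().items():
            # Skip templates and the control domain; show real guests only.
            if vm["is_a_template"] or vm["is_control_domain"]:
                continue
            print(f'{url}: {vm["name_label"]} ({vm["power_state"]})')
    finally:
        session.xenapi.session.logout()
```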

• dafyre

It seems I remember @donaldlandru mentioning making one big 5-host cluster. If he were to use something such as XenServer, he would get the big cluster, still be able to separate the workloads out between the dev servers and the ops servers, and still have "local" storage, right?

Even if the answer to the "local" storage question (I say that because XenServer can do its own shared storage now, right?) is a resounding "no," he can still leverage replication to copy the Dev hosts into the Ops environment and vice versa for maintenance and emergencies, right?

• coliver @dafyre

@dafyre said:

It seems I remember @donaldlandru mentioning making one big 5-host cluster. If he were to use something such as XenServer, he would get the big cluster, still be able to separate the workloads out between the dev servers and the ops servers, and still have "local" storage, right?

Even if the answer to the "local" storage question (I say that because XenServer can do its own shared storage now, right?) is a resounding "no," he can still leverage replication to copy the Dev hosts into the Ops environment and vice versa for maintenance and emergencies, right?

                    The answer to all your questions is yes. XenServer can deploy VMs on the same "cluster" to different storage devices. It will also do live migrations between various storage devices.

• dafyre

If that's the case, then @donaldlandru could just build one big 5-host cluster (assuming he can get the politics taken care of and the CPUs are compatible -- if that is even an issue) on XenServer and be happy... Upgrade to 4TB or 6TB drives per host (RAID 10) and also be happy.

• scottalanmiller @dafyre

@dafyre said:

It seems I remember @donaldlandru mentioning making one big 5-host cluster. If he were to use something such as XenServer, he would get the big cluster, still be able to separate the workloads out between the dev servers and the ops servers, and still have "local" storage, right?

Even if the answer to the "local" storage question (I say that because XenServer can do its own shared storage now, right?) is a resounding "no," he can still leverage replication to copy the Dev hosts into the Ops environment and vice versa for maintenance and emergencies, right?

Correct. This would make you question the term "cluster," though, as the boxes would not be associated with each other except that they are all managed from the same interface. Is that a cluster? Not to most people. Does it look like a single entity to someone managing it? Yes.

                        He could replicate things into other environments, yes.

• dafyre @scottalanmiller

                          @scottalanmiller I was thinking in terms of XenServer doing its own shared storage amongst the 5 servers that make up the cluster.

• donaldlandru

So, to define "cluster" a little better in our environment:

For the ops servers, cluster is likely the proper term. We have two nodes, ensure that each node has the available resources to run the entire workload of both servers if needed, and use VMware's HA to manage this.

For the dev servers, it is simply a single pane of glass, which really is all the Essentials kit provides, plus access to the backup APIs.

The politics are likely to be harder to play, as we just renewed our SnS for both Essentials and Essentials Plus in January for three years.

Coupled with this, our offshore datacenter also has a 3-node development "cluster," also based on an Essentials kit, which pushes us even further from truly having a single pane of glass (three panes so far, if you are keeping count).

Another important piece of information on the local storage front: everything is based on 2.5" disks, and all but two servers have only two bays each, so getting any real amount of local storage without going to external direct-attached (non-shared) storage is going to be a challenge.

• coliver @dafyre

                              @dafyre said:

                              @scottalanmiller I was thinking in terms of XenServer doing its own shared storage amongst the 5 servers that make up the cluster.

I don't think XenServer has anything like VMware's vSAN. I think you could probably do something like this inside of dom0 and make a RAIN or something.

• Dashrender @scottalanmiller

                                @scottalanmiller said:

By dropping VMware vSphere Essentials you are looking at roughly $1,200 in savings right away. Both Hyper-V and XenServer will do what you need absolutely free.

Did the price of Essentials double? I thought Essentials was $600 for three nodes, and something like $5,000 for Essentials Plus?

• dafyre @coliver

@coliver That's what I was thinking. Somebody somewhere (seems like I remember Scott mentioning this) has said that XenServer uses DRBD under the hood for this.

• Dashrender

                                    What DAS chassis would someone recommend for this setup?

@donaldlandru mentioned that he needed at least 9TB for the dev trio of servers, but not how much was needed for the two operations servers.

• coliver @dafyre

@dafyre said:

@coliver That's what I was thinking. Somebody somewhere (seems like I remember Scott mentioning this) has said that XenServer uses DRBD under the hood for this.

You could use DRBD for this, but that would be network RAID-1; I'm not sure you can do other methods with DRBD.

• scottalanmiller @dafyre

                                        @dafyre said:

If that's the case, then @donaldlandru could just build one big 5-host cluster (assuming he can get the politics taken care of and the CPUs are compatible -- if that is even an issue) on XenServer and be happy... Upgrade to 4TB or 6TB drives per host (RAID 10) and also be happy.

Yes. Big drives and RAID 10 are critical to getting the necessary speed. Using WD RE SAS drives is probably the best way to get up to the kinds of IOPS that he needs. It's also best to get some kind of caching going to really make sure enough performance is there.

With RAID 10 WD RE SAS drives we are probably looking at around 500-700 read IOPS per machine, which is tons better than what was stated as needed, but without the shared IOPS you want extra overhead on a node-by-node basis to be "safe" in performance terms.

The additional capacity will be a huge win. With 3TB drives he would have 6TB usable PER NODE rather than 9TB usable for all five machines. That's huge: leaping from 9TB total to 30TB total. Go to 4TB, 5TB, or 6TB drives and those numbers skyrocket, to as high as 60TB of total available space!
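
For anyone following along, the arithmetic behind those numbers works out as below. The four-drives-per-node count and the per-spindle IOPS figure are my assumptions (roughly what a 7.2k SAS drive delivers), not specs from the thread:

```python
# Back-of-the-envelope RAID 10 math for the figures above. Assumes four
# drives per node in RAID 10 and ~150 raw IOPS per 7.2k SAS spindle;
# both are illustrative assumptions, not vendor specs.

NODES = 5
DRIVES_PER_NODE = 4
RAW_IOPS_PER_DRIVE = 150

def raid10_usable_tb(drive_tb: int) -> float:
    """RAID 10 keeps half the raw capacity (every drive is mirrored)."""
    return DRIVES_PER_NODE * drive_tb / 2

# Reads can be serviced by every spindle in the stripe of mirrors.
read_iops = DRIVES_PER_NODE * RAW_IOPS_PER_DRIVE

for drive_tb in (3, 4, 6):
    per_node = raid10_usable_tb(drive_tb)
    print(f"{drive_tb}TB drives: {per_node:.0f}TB per node, "
          f"{per_node * NODES:.0f}TB across all {NODES} nodes")

print(f"~{read_iops} read IOPS per node, before any caching on top")
# 3TB drives -> 6TB/node, 30TB total; 6TB drives -> 12TB/node, 60TB total.
```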

• scottalanmiller @dafyre

                                          @dafyre said:

@coliver That's what I was thinking. Somebody somewhere (seems like I remember Scott mentioning this) has said that XenServer uses DRBD under the hood for this.

                                          Sure does.

• scottalanmiller @Dashrender

                                            @Dashrender said:

                                            @scottalanmiller said:

By dropping VMware vSphere Essentials you are looking at roughly $1,200 in savings right away. Both Hyper-V and XenServer will do what you need absolutely free.

Did the price of Essentials double? I thought Essentials was $600 for three nodes, and something like $5,000 for Essentials Plus?

Those are the rough numbers. He has five nodes, so that means either buying all the licenses twice (so $1,200 and $10,000) or being disqualified from Essentials pricing altogether and needing to move to Standard licensing options.
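
The licensing math behind that, as a quick sketch (kit prices are the thread's rough figures, not current VMware list prices):

```python
import math

# Rough per-kit prices from the thread; each Essentials kit covers
# three hosts. These are the thread's figures, not current list prices.
ESSENTIALS_PER_KIT = 600
ESSENTIALS_PLUS_PER_KIT = 5000
HOSTS_PER_KIT = 3
NODES = 5

kits = math.ceil(NODES / HOSTS_PER_KIT)  # five hosts -> two kits
print(f"Essentials:      {kits} kits = ${kits * ESSENTIALS_PER_KIT:,}")
print(f"Essentials Plus: {kits} kits = ${kits * ESSENTIALS_PLUS_PER_KIT:,}")
# -> $1,200 and $10,000, matching the figures above.
```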
