    XenServer hyperconverged

    IT Discussion · Tags: xenserver, xenserver 7, xen orchestra, hyperconvergence, hyperconverged
      olivier @R3dPand4

      @r3dpand4 said in XenServer hyperconverged:

      @olivier So you only gain capacity when scaling in 2's? Again, as I mentioned earlier, I'm curious how the write suspension functions during a failure...

      Writes are suspended for a few seconds; it works fine. Tested on Linux and Windows VMs.

      @dashrender said in XenServer hyperconverged:

      @olivier said in XenServer hyperconverged:

      @dashrender As long as you don't lose a complete mirror (both hosts of the same pair), no problem. So on 8 nodes, that means up to 4 hosts can fail if it's one per mirror.

      And what happens if you do lose an entire mirrored pair?

      If you lose an entire mirror pair, it's like in RAID10: you will lose data.
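
      For illustration (XOSAN is built on GlusterFS, as mentioned later in this thread), a 2x2 layout corresponds to a distributed-replicated Gluster volume. A minimal sketch at the gluster CLI level; the volume name, hostnames and brick paths are made up:

          # 2x2 distributed-replicated volume: bricks are paired in the order given,
          # so (host1, host2) is one mirror and (host3, host4) the other
          gluster volume create xosan replica 2 \
              host1:/bricks/xosan host2:/bricks/xosan \
              host3:/bricks/xosan host4:/bricks/xosan
          gluster volume start xosan

      Losing both bricks of the same pair takes that half of the data with it, exactly like losing both disks of one RAID10 mirror.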

      @fateknollogee said in XenServer hyperconverged:

      Where will XOSAN exist in this pricing structure?
      https://xen-orchestra.com/#!/pricing

      Nope, it will have dedicated pricing, per pool, with no limits on the number of hosts or disk space.

        olivier @DustinB3403

        @dustinb3403 said in XenServer hyperconverged:

        @fateknollogee said in XenServer hyperconverged:

        Where will XOSAN exist in this pricing structure?
        https://xen-orchestra.com/#!/pricing

        I believe it was mentioned as being a separate price entirely. So you can get XOA with or without XOSAN.

        You need at least XOA Free to get XOSAN (you need XOA to deploy and manage it).

          DustinB3403 @olivier

          @olivier said in XenServer hyperconverged:

          @dustinb3403 said in XenServer hyperconverged:

          @fateknollogee said in XenServer hyperconverged:

          Where will XOSAN exist in this pricing structure?
          https://xen-orchestra.com/#!/pricing

          I believe it was mentioned as being a separate price entirely. So you can get XOA with or without XOSAN.

          You need at least XOA Free to get XOSAN (you need XOA to deploy and manage it).

          So with XOA Free you can get XOSAN?

            FATeknollogee @olivier

            @olivier said in XenServer hyperconverged:

            Nope, it will have dedicated pricing, per pool, with no limits on the number of hosts or disk space.

            When will pricing be available?

              olivier @DustinB3403

              @dustinb3403 said in XenServer hyperconverged:

              So with XOA Free you can get XOSAN?

              Yes, XOA Free + paid XOSAN will work.

              @fateknollogee said in XenServer hyperconverged:

              When will pricing be available?

              It's a matter of weeks.

                DustinB3403 @olivier

                @olivier said in XenServer hyperconverged:

                @dustinb3403 said in XenServer hyperconverged:

                So with XOA Free you can get XOSAN?

                Yes, XOA Free + paid XOSAN will work.

                @fateknollogee said in XenServer hyperconverged:

                When will pricing be available?

                It's a matter of weeks.

                So then when will the community edition get XOSAN?

                  R3dPand4 @olivier

                  @olivier We're talking about node failure where you're replacing hardware, correct?
                  "On a 2 node setup, there is an arbiter VM that acts like the witness. If you lose the host with the 2x VMs (one arbiter and one "normal"), you'll go in read only."
                  So are writes suspended until the 2nd node is brought back online and introduced to the XOSAN? Obviously that's going to take a lot longer than a few seconds, even if we're talking about scaling beyond a 2-node cluster. Or do you mean that writes are suspended or cached while the failed host goes down, and then resume as normal on the active node? If that's the case, when the new host is introduced to the cluster, replication resumes, correct?

                    olivier @DustinB3403

                    @dustinb3403 said in XenServer hyperconverged:

                    @olivier said in XenServer hyperconverged:

                    @dustinb3403 said in XenServer hyperconverged:

                    So with XOA Free you can get XOSAN?

                    Yes.

                    @fateknollogee said in XenServer hyperconverged:

                    When will pricing be available?

                    It's a matter of weeks.

                    So then when will the community edition get XOSAN?

                    Not like this. We'll open-source the Gluster driver to allow people to build their own solution (hyperconverged or not). But it won't be turnkey.

                      FATeknollogee

                      Are the hosts shown in your example using HW or software RAID?

                      What is preferred, HW or software RAID?

                        DustinB3403 @FATeknollogee

                        @fateknollogee said in XenServer hyperconverged:

                        Are the hosts shown in your example using HW or software RAID?

                        What is preferred, HW or software RAID?

                        Based on the blog post, I'm guessing HW RAID.

                          olivier @R3dPand4

                          @r3dpand4 said in XenServer hyperconverged:

                          @olivier We're talking about node failure where you're replacing hardware, correct?
                          "On a 2 node setup, there is an arbiter VM that acts like the witness. If you lose the host with the 2x VMs (one arbiter and one "normal"), you'll go in read only."
                          So are writes suspended until the 2nd node is brought back online and introduced to the XOSAN? Obviously that's going to take a lot longer than a few seconds, even if we're talking about scaling beyond a 2-node cluster. Or do you mean that writes are suspended or cached while the failed host goes down, and then resume as normal on the active node? If that's the case, when the new host is introduced to the cluster, replication resumes, correct?

                          Nope, writes are suspended when a node goes down (the time for the system to decide what to do). If there are enough nodes to continue, writes resume after a pause of a few seconds. If there aren't enough nodes to continue, the volume goes read-only.

                          Let's imagine you have 2x2 (distributed-replicated). You lose one XenServer host in the first mirror. After a few seconds, writes are back without any service failing. Then, when you replace the faulty node, the fresh node will "catch up" on the missing data in the mirror, and your VMs won't notice it (healing status).
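
                          A hedged sketch of how that healing can be watched from the Gluster side (the volume name is made up):

                              # list entries still pending heal after the replaced node rejoins
                              gluster volume heal xosan info
                              # or just the per-brick pending counters
                              gluster volume heal xosan statistics heal-count

                          When both report zero pending entries, the mirror has fully caught up.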

                            olivier @FATeknollogee

                            @fateknollogee said in XenServer hyperconverged:

                            Are the hosts shown in your example using HW or software RAID?

                            What is preferred, HW or software RAID?

                            @dustinb3403 said in XenServer hyperconverged:

                            @fateknollogee said in XenServer hyperconverged:

                            Are the hosts shown in your example using HW or software RAID?

                            What is preferred, HW or software RAID?

                            Based on the blog post, I'm guessing HW RAID.

                            It's not that easy to answer. Phase III will bring multi-disk capability on each host (and even tiering). That means you could use any number of disks on each host for an inception-like scenario (replication at the host level + at the cluster level). But obviously, hardware RAID is perfectly fine too 🙂

                              DustinB3403

                              During an event where a host goes down, for that brief period while writes are paused, are those writes cached and then written once the system determines what to do?

                              Or are those writes lost?

                                R3dPand4

                                @olivier Thank you for clarifying. I'm assuming this applies in principle the same way to a 2-node cluster? One node goes down, writes are briefly suspended, writes resume on the active node, the failed node is replaced, then the rebuild/healing process runs on the new node. How long do you expect rebuilds to take? I'm sure that's a loaded question, because it's data-dependent...

                                  olivier @DustinB3403

                                  @dustinb3403 No writes are lost; it's handled at the VM level (the VM's OS waits for the "ack" from the virtual HDD, which isn't answering, so it just waits). Basically, the cluster says: "write commands won't be acknowledged until we've figured this out."

                                  So it's safe 🙂
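
                                  That pause maps to a client-side Gluster timeout; a minimal sketch, with an illustrative value rather than XOSAN's actual setting:

                                      # how long clients wait on an unresponsive brick before giving up on it
                                      gluster volume set xosan network.ping-timeout 10

                                  Until the timeout fires, in-flight writes simply stay unacknowledged, which is why the VM blocks instead of losing data.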

                                    olivier @R3dPand4

                                    @r3dpand4 This is a good question. We chose to use "sharding", which means splitting your data into 512 MB blocks to be replicated or spread.

                                    So the heal time is the time to fetch all the new/missing 512 MB blocks written since the node went down. It's pretty fast in the tests I've done.
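
                                    In Gluster terms this is the shard feature; a minimal sketch, assuming the stock translator and a made-up volume name:

                                        # split large files (VM disks) into fixed-size 512 MB shards
                                        gluster volume set xosan features.shard on
                                        gluster volume set xosan features.shard-block-size 512MB

                                    With 512 MB shards, a heal only copies the shards touched while the node was down, not the whole multi-GB virtual disk file.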

                                      R3dPand4 @olivier

                                      @olivier So essentially just deduplication?

                                        olivier @R3dPand4

                                        @r3dpand4 That has nothing to do with deduplication. There are just chunks of files, replicated or distributed-replicated (or even dispersed, in disperse mode).

                                        By the way, nobody talks about this mode, but it's my favorite 😛 Especially for large HDDs it's perfect, thanks to the ability to lose any n disks in your cluster. E.g. with 6 nodes:

                                        This is disperse 6 with redundancy 2 (like RAID6, if you prefer). Any 2 XenServer hosts can be destroyed and it will continue to work as usual.

                                        And in this case (6 with redundancy 2), you'll be able to address 4/6 of your total disk space!
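
                                        A sketch of that layout at the Gluster level (hostnames and brick paths are made up):

                                            # disperse 6 with redundancy 2: any 2 of the 6 bricks can be lost
                                            gluster volume create xosan disperse 6 redundancy 2 \
                                                host1:/bricks/xosan host2:/bricks/xosan host3:/bricks/xosan \
                                                host4:/bricks/xosan host5:/bricks/xosan host6:/bricks/xosan

                                        Capacity check: usable space is (6 - 2) / 6 of raw, so six 100 GB bricks give 400 GB usable, versus 300 GB for a 3x2 distributed-replicated layout on the same disks.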

                                          olivier

                                          Here it is with improved pics of XOSAN; I suppose it's clearer now:

                                          [image: DISPERSE 6 (2)]

                                          [image: DISTRIB-REP 3x2]

                                          What do you think?

                                            DustinB3403 @olivier

                                            @olivier That picture makes it way clearer.

                                            Each server provides 100 GB, and the servers are either standalone (disperse) or paired (distributed-replicated).
