    Food for thought: Fixing an over-engineered environment

    IT Discussion
    design server consolidation virtualization hyper-v storage backup
• travisdh1 @EddieJennings

      @eddiejennings Getting there. Hyper-V should be installed on all the servers. Even when you just have a single VM running on a hypervisor, the time savings in restoring after a hardware failure is worthwhile.

Server 3 could be your testing/development area, and with Hyper-V installed on all of them, you could quickly restore images to it if one of the other servers goes down. Yeah, the performance would hurt, but at least you'd be running until actual hardware replacements arrive.

• EddieJennings @travisdh1

        @travisdh1 said in Food for thought: Fixing an over-engineered environment:

        @eddiejennings Getting there. Hyper-V should be installed on all the servers. Even when you just have a single VM running on a hypervisor, the time savings in restoring after a hardware failure is worthwhile.

Server 3 could be your testing/development area, and with Hyper-V installed on all of them, you could quickly restore images to it if one of the other servers goes down. Yeah, the performance would hurt, but at least you'd be running until actual hardware replacements arrive.

I see. So I'd also make Server 1 a hypervisor with one VM, and that VM would provide block storage to the VM that would be running backup software. That makes sense; on second thought, there would be no reason not to just make it a hypervisor.

• travisdh1 @EddieJennings

          @eddiejennings said in Food for thought: Fixing an over-engineered environment:

          @travisdh1 said in Food for thought: Fixing an over-engineered environment:

          @eddiejennings Getting there. Hyper-V should be installed on all the servers. Even when you just have a single VM running on a hypervisor, the time savings in restoring after a hardware failure is worthwhile.

Server 3 could be your testing/development area, and with Hyper-V installed on all of them, you could quickly restore images to it if one of the other servers goes down. Yeah, the performance would hurt, but at least you'd be running until actual hardware replacements arrive.

I see. So I'd also make Server 1 a hypervisor with one VM, and that VM would provide block storage to the VM that would be running backup software. That makes sense; on second thought, there would be no reason not to just make it a hypervisor.

          Yep.

          Where you'd see a major benefit in this case is if you ever needed to restore the backup storage from another backup. The hardware doesn't matter so long as it has the needed amount of storage.

• scottalanmiller @EddieJennings

            @eddiejennings said in Food for thought: Fixing an over-engineered environment:

            @travisdh1 said in Food for thought: Fixing an over-engineered environment:

            @eddiejennings Getting there. Hyper-V should be installed on all the servers. Even when you just have a single VM running on a hypervisor, the time savings in restoring after a hardware failure is worthwhile.

Server 3 could be your testing/development area, and with Hyper-V installed on all of them, you could quickly restore images to it if one of the other servers goes down. Yeah, the performance would hurt, but at least you'd be running until actual hardware replacements arrive.

I see. So I'd also make Server 1 a hypervisor with one VM, and that VM would provide block storage to the VM that would be running backup software. That makes sense; on second thought, there would be no reason not to just make it a hypervisor.

            Yup, basically no exception to installing the hypervisor first.

• scottalanmiller @travisdh1

              @travisdh1 said in Food for thought: Fixing an over-engineered environment:

              @eddiejennings Getting there. Hyper-V should be installed on all the servers. Even when you just have a single VM running on a hypervisor, the time savings in restoring after a hardware failure is worthwhile.

              Can add stability, too.

• JaredBusch

This is a disaster and needs to be updated based on what you learned today in your other thread.

But beyond that, you talk about plugging the connections into the router. This is wrong. A router routes traffic; it is not a switch.

                You still require a switch.

You put multiple NICs in a team and plug those into the switch. If the switch supports full LACP, you can get awesome performance; if not, switch-independent mode is the best solution.

                You install Hyper-V Server 2016 on all three boxes.

On all servers:
Create 1 partition for the Hyper-V Server drive C (I use 80 GB, but I think the technical minimum is 32 GB).
Create 1 partition from the rest of the space to mount as drive D inside Hyper-V.
This D drive is where all of the guest files will be stored: config files as well as replicas, checkpoints (snapshots), and the virtual hard disks.

On Server 2, restore all of your current servers as new VMs.
On Server 1, create a small VHDX to install Windows and run Veeam.
On Server 1, create a large VHDX to house the backups. This will be the D drive inside the Veeam guest.

On Server 3, set up a test environment or sell the hardware. You could use Hyper-V replication, but you need SA on the original guest VMs or full licenses for the replicas.
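As a rough illustration of those steps, here is a minimal PowerShell sketch for one host; the adapter names, team name, sizes, and paths below are placeholders, not anything taken from this environment:

```powershell
# Team the NICs and expose the team to guests through a virtual switch.
# Use LACP if the switch supports it; otherwise use SwitchIndependent.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode LACP -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "vSwitch1" -NetAdapterName "Team1" -AllowManagementOS $true

# Carve the remaining space on the array into the D: volume that holds all guest files.
New-Partition -DiskNumber 0 -UseMaximumSize -DriveLetter D |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMs"

# Point Hyper-V's default paths at D: so configs, checkpoints, and VHDX files land there.
Set-VMHost -VirtualMachinePath "D:\Hyper-V" -VirtualHardDiskPath "D:\Hyper-V\Virtual Hard Disks"

# Server 1 only: a large, dynamically expanding VHDX to become the Veeam guest's D drive.
New-VHD -Path "D:\Hyper-V\Virtual Hard Disks\veeam-backups.vhdx" -SizeBytes 4TB -Dynamic
```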

• EddieJennings @JaredBusch

                  @jaredbusch Not as disastrous as what we have, but not good either. This thread is my brainstorm thread :D. I'll put an updated diagram up tomorrow, when I return to work.

On the switch, you're right. Even though the router can probably handle the traffic, it is better to let it do its job and let the switch do its own. Also, since I have plenty of NICs on each physical server, I can use teaming as you suggested -- which is what we have now, but with the needless VLANs. The Dell switch I have supports LACP, which is what we're using for the current teams.

On the storage configuration of the servers (the C and D drives), that's what I was considering. I'm glad you mentioned putting the config files, etc., on the same partition as the VHDs; as I think about it, I don't see any advantage to keeping the VHDs separate from everything else.

From my OP, our main dev / one of my bosses (yes, that's as screwy as it sounds) is envisioning eventually having staging VMs, which could perhaps create a use for Server 3. My main focus now is fixing the terrible environment from the OP.

• EddieJennings

                    Diagram update following lessons in backup design.

[updated environment diagram]

One other thing for me to consider is that each of these physical servers has 8 NICs (excluding the IPMI NIC). That's of course way overkill, but since I have them, would there be a reason not to team more than 4 NICs together (notwithstanding the fact that a decision hasn't been made yet about what to do with Server 3)?

• scottalanmiller @EddieJennings

                      @eddiejennings said in Food for thought: Fixing an over-engineered environment:

That's of course way overkill, but since I have them, would there be a reason not to team more than 4 NICs together (notwithstanding the fact that a decision hasn't been made yet about what to do with Server 3)?

Four is the max you can consider in a load-balancing team. If you move to pure failover, you can do unlimited. Beyond four, the algorithms become so inefficient that you don't get faster, and by six you actually start getting slower. Most people only go to two; four is the absolute max to consider. Since you have eight (how did that happen?), you might as well do four. But the rest are wasted, or could be used for a different network connection entirely.
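As a hypothetical example of that split on an eight-NIC host (all adapter and team names here are made up), you would cap the load-balancing team at four members and use leftover ports for a completely separate connection:

```powershell
# Four adapters in a load-balancing team for VM traffic; past four the algorithm stops scaling.
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Two of the remaining adapters as an active/standby pair for a separate network
# (management or backup, say), rather than growing the first team past four.
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "NIC5","NIC6" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
Set-NetLbfoTeamMember -Name "NIC6" -Team "MgmtTeam" -AdministrativeMode Standby
```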

• Dashrender @scottalanmiller

                        @scottalanmiller said in Food for thought: Fixing an over-engineered environment:

                        @eddiejennings said in Food for thought: Fixing an over-engineered environment:

That's of course way overkill, but since I have them, would there be a reason not to team more than 4 NICs together (notwithstanding the fact that a decision hasn't been made yet about what to do with Server 3)?

Four is the max you can consider in a load-balancing team. If you move to pure failover, you can do unlimited. Beyond four, the algorithms become so inefficient that you don't get faster, and by six you actually start getting slower. Most people only go to two; four is the absolute max to consider. Since you have eight (how did that happen?), you might as well do four. But the rest are wasted, or could be used for a different network connection entirely.

                        Wouldn't this be 4 max per vNetwork in the VM host?

• Dashrender

If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.

• scottalanmiller @Dashrender

                            @dashrender said in Food for thought: Fixing an over-engineered environment:

If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.

                            Where "might be" = "long past due."

• scottalanmiller @Dashrender

                              @dashrender said in Food for thought: Fixing an over-engineered environment:

                              @scottalanmiller said in Food for thought: Fixing an over-engineered environment:

                              @eddiejennings said in Food for thought: Fixing an over-engineered environment:

That's of course way overkill, but since I have them, would there be a reason not to team more than 4 NICs together (notwithstanding the fact that a decision hasn't been made yet about what to do with Server 3)?

Four is the max you can consider in a load-balancing team. If you move to pure failover, you can do unlimited. Beyond four, the algorithms become so inefficient that you don't get faster, and by six you actually start getting slower. Most people only go to two; four is the absolute max to consider. Since you have eight (how did that happen?), you might as well do four. But the rest are wasted, or could be used for a different network connection entirely.

                              Wouldn't this be 4 max per vNetwork in the VM host?

Correct. If the connections are independent, you get to do another four.

• EddieJennings @Dashrender

                                @dashrender said in Food for thought: Fixing an over-engineered environment:

If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.

I don't need more than 1 Gb judging from what New Relic has shown me; however, since I have the hardware (and 4 of the 8 NICs are integrated on the motherboard), I might as well configure it to give the most performance it can.

• scottalanmiller @EddieJennings

                                  @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                  @dashrender said in Food for thought: Fixing an over-engineered environment:

If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.

I don't need more than 1 Gb judging from what New Relic has shown me; however, since I have the hardware (and 4 of the 8 NICs are integrated on the motherboard), I might as well configure it to give the most performance it can.

That's not how things work. Teaming is for bandwidth; not teaming is for latency. Working in banks, we specifically avoided teaming because it increases latency, slowing down network traffic on a per-packet basis. Everything is a trade-off, or there wouldn't be options.

It's like adding more memory to your server. It's more stuff that can go in memory, but also more memory that the CPU has to manage, and that adds load to the server, which turns into latency for processes.

• EddieJennings @scottalanmiller

                                    @scottalanmiller said in Food for thought: Fixing an over-engineered environment:

                                    @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                    @dashrender said in Food for thought: Fixing an over-engineered environment:

If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.

I don't need more than 1 Gb judging from what New Relic has shown me; however, since I have the hardware (and 4 of the 8 NICs are integrated on the motherboard), I might as well configure it to give the most performance it can.

That's not how things work. Teaming is for bandwidth; not teaming is for latency. Working in banks, we specifically avoided teaming because it increases latency, slowing down network traffic on a per-packet basis. Everything is a trade-off, or there wouldn't be options.

It's like adding more memory to your server. It's more stuff that can go in memory, but also more memory that the CPU has to manage, and that adds load to the server, which turns into latency for processes.

                                    That makes sense. Performance was a poor choice of words.

• JaredBusch

                                      I never use IPMI.

• Dashrender @EddieJennings

                                        @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                        @dashrender said in Food for thought: Fixing an over-engineered environment:

If you need more bandwidth than 4 Gb, it might be time to look at 10 Gb connections.

I don't need more than 1 Gb judging from what New Relic has shown me; however, since I have the hardware (and 4 of the 8 NICs are integrated on the motherboard), I might as well configure it to give the most performance it can.

This is not only bad for the reasons Scott said, but it's also a waste of switch ports and resources.

                                        If you only need 1 Gb, then I'd remove the card (less power use) and only use two onboard NICs.

• EddieJennings @Dashrender

                                          @dashrender

This is not only bad for the reasons Scott said, but it's also a waste of switch ports and resources.

                                          If you only need 1 Gb, then I'd remove the card (less power use) and only use two onboard NICs.

                                          The whole situation is a waste of resources. I'm looking to see how to best utilize them.

• Dashrender @JaredBusch

                                            @jaredbusch said in Food for thought: Fixing an over-engineered environment:

                                            I never use IPMI.

@JaredBusch thought IPMI was something special for Hyper-V, not that you were talking about the iDRAC-like interface; he stands corrected and uses the iDRAC-like interface as much as he can.
