
    Food for thought: Fixing an over-engineered environment

    IT Discussion
    design server consolidation virtualization hyper-v storage backup
    • EddieJenningsE
      EddieJennings @Dashrender

      @dashrender said in Food for thought: Fixing an over-engineered environment:

      Ok so you do plan to multihome the VM. What are you hoping to gain by having this internal virtual switch?

The original idea behind the network (which is likely flawed) was to separate server-to-server traffic from server-to-Internet traffic. I believe the purpose was to keep the server-to-server pipe free of Internet traffic and prevent bottlenecks. However, I don't think this is really an issue (see images below). Creating the private virtual switch preserves this architecture, and I would assume data transfer would be faster over the private virtual switch than through the physical switch.

We also teamed two 1 Gb NICs on each server to connect to that internal VLAN for more bandwidth. Each server's hosts file is configured so that the IPs of the other servers resolve to the internal subnet, ensuring the traffic uses the correct NIC.
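A minimal sketch of what such hosts-file entries look like (the server names and the 10.0.2.x internal subnet here are hypothetical, not taken from the thread):

```
# C:\Windows\System32\drivers\etc\hosts on one of the servers (hypothetical names/addresses)
# Resolve peer servers to their internal-VLAN addresses so that
# server-to-server traffic goes out the teamed internal NICs.
10.0.2.11    sql-server
10.0.2.12    iis-server
10.0.2.13    redis-server
```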

      SQL Server Network Traffic
Internal = teamed 1 Gb NICs on VLAN2; External = single 1 Gb NIC on VLAN1
[attached images: internal team and external NIC traffic graphs]

IIS server traffic is about the same. The three-month average for the internal team is 7.4 Mb/s TX and 1.69 Mb/s RX; the external NIC averages 2.34 Mb/s TX and 863 Kb/s RX.

The REDIS server (which hosts the Postfix VM) has almost no traffic on the external NIC, and a three-month average of 726 Kb/s TX and 10.5 Mb/s RX on the internal team.

      Clearly, none of this is approaching saturation even for a 100 Mbps NIC, so perhaps the only thing gained from separating the traffic is extra complexity.
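For reference, a quick calculation confirms how far below saturation those averages are (the 7.4 Mb/s figure is from the IIS numbers above; the 1 Gbps and 100 Mbps link speeds are assumptions, and the teamed pair would have even more headroom):

```python
# Rough link-utilization check for the traffic averages quoted above.

def utilization_pct(rate_mbps: float, link_mbps: float = 1000.0) -> float:
    """Return a traffic rate as a percentage of link capacity (both in Mb/s)."""
    return rate_mbps / link_mbps * 100.0

# IIS internal-team TX average of 7.4 Mb/s on a 1 Gbps link:
print(f"{utilization_pct(7.4):.2f}%")        # prints 0.74%

# Even against an old 100 Mbps NIC it would only be:
print(f"{utilization_pct(7.4, 100.0):.1f}%")  # prints 7.4%
```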

      • EddieJenningsE
        EddieJennings @coliver

        @coliver said in Food for thought: Fixing an over-engineered environment:

        @eddiejennings said in Food for thought: Fixing an over-engineered environment:

I'd configure two virtual switches on the Hyper-V host: one external and one private. Each VM would have two vNICs, one connected to each virtual switch. The private switch would be for traffic between the VMs, and the external switch for Internet access.

        This seems unnecessarily complex for your environment. Any reason for doing this and not just a single virtual switch?

Ha! You beat me to the conclusion. 😛

        • coliverC
          coliver @EddieJennings

          @eddiejennings said in Food for thought: Fixing an over-engineered environment:

          Creating the private virtual switch preserves this architecture, and I would assume data transfer would be faster over the private virtual switch rather than through the physical switch.

This assumes that VM-to-VM traffic leaves the virtual switch to begin with. IIRC, none of the VM-to-VM traffic would be going to your physical switch; the virtual switch would be handling all of that traffic.

          • EddieJenningsE
            EddieJennings @coliver

            @coliver said in Food for thought: Fixing an over-engineered environment:

            @eddiejennings said in Food for thought: Fixing an over-engineered environment:

            Creating the private virtual switch preserves this architecture, and I would assume data transfer would be faster over the private virtual switch rather than through the physical switch.

This assumes that VM-to-VM traffic leaves the virtual switch to begin with. IIRC, none of the VM-to-VM traffic would be going to your physical switch; the virtual switch would be handling all of that traffic.

            That's correct. Right now, nothing is virtualized, so each physical server has two teamed NICs sending traffic to our physical switch on VLAN2 as the Internal network, with another NIC sending traffic on the default VLAN as the External network.

Since my thought is to turn everything into a VM, it would perform better to create a private virtual switch just for that VM-to-VM traffic rather than configure something that still uses the physical switch for such traffic. However, from what I'm seeing, it doesn't look like separating that traffic onto its own private switch is necessary.

            • scottalanmillerS
              scottalanmiller @coliver

              @coliver said in Food for thought: Fixing an over-engineered environment:

              @eddiejennings said in Food for thought: Fixing an over-engineered environment:

I'd configure two virtual switches on the Hyper-V host: one external and one private. Each VM would have two vNICs, one connected to each virtual switch. The private switch would be for traffic between the VMs, and the external switch for Internet access.

              This seems unnecessarily complex for your environment. Any reason for doing this and not just a single virtual switch?

I agree. What's the benefit here?

              • EddieJenningsE
                EddieJennings @scottalanmiller

                @scottalanmiller said in Food for thought: Fixing an over-engineered environment:

                @coliver said in Food for thought: Fixing an over-engineered environment:

                @eddiejennings said in Food for thought: Fixing an over-engineered environment:

I'd configure two virtual switches on the Hyper-V host: one external and one private. Each VM would have two vNICs, one connected to each virtual switch. The private switch would be for traffic between the VMs, and the external switch for Internet access.

                This seems unnecessarily complex for your environment. Any reason for doing this and not just a single virtual switch?

I agree. What's the benefit here?

From the data that New Relic shows me, it looks like there isn't any. I guess I could make an argument that the SQL Server VM and the REDIS VM shouldn't have Internet access. The problem with that is twofold.

1. I'd add the complexity of having WSUS or something to feed those VMs Windows updates.
2. It seems like having a way to RDP into those machines would be overly complex.
                • coliverC
                  coliver @EddieJennings

                  @eddiejennings said in Food for thought: Fixing an over-engineered environment:

Since my thought is to turn everything into a VM, it would perform better to create a private virtual switch just for that VM-to-VM traffic rather than configure something that still uses the physical switch for such traffic. However, from what I'm seeing, it doesn't look like separating that traffic onto its own private switch is necessary.

I'm confused as to how you would see better performance. You're going to have more than one host, correct? Unless you're planning on setting up an independent physical switch for host-to-host/VM-to-VM communication, everything would be going over the physical switch regardless. VLANs aren't for performance purposes; they're for security purposes.

                  • DashrenderD
                    Dashrender @EddieJennings

                    @eddiejennings said in Food for thought: Fixing an over-engineered environment:

Since my thought is to turn everything into a VM, it would perform better to create a private virtual switch just for that VM-to-VM traffic rather than configure something that still uses the physical switch for such traffic. However, from what I'm seeing, it doesn't look like separating that traffic onto its own private switch is necessary.

The idea might have some credibility in the real world, but on a single host, where the traffic is all on vswitches, this won't really make any difference.
With each VM on a single vswitch connection, all inter-VM traffic will stay inside the hypervisor, never touching the physical switches. Team several 1 Gb NICs or upgrade to a 10 Gb NIC in the server (and a 10 Gb port on the switch) and you shouldn't see that become a bottleneck at all.
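If you do go the teaming route on the host, a rough sketch with the built-in Windows cmdlets looks like this (the adapter, team, and switch names are hypothetical; verify adapter names with `Get-NetAdapter` first):

```powershell
# Team two 1 Gb adapters on the Hyper-V host (hypothetical adapter names),
# then bind an external virtual switch to the resulting team interface.
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
New-VMSwitch -Name "External" -NetAdapterName "HostTeam" -AllowManagementOS $true
```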

                    • EddieJenningsE
                      EddieJennings

                      @coliver
                      Right now, I'm planning on one host with multiple VMs. So if I had this separate, internal network, methinks performance would be better on a virtual private switch, rather than using virtual external switches bound to a physical NIC that is a part of a separate VLAN on the physical switch.

On performance, you're right about VLANs; they're designed for security. I guess you could argue you'd be reducing potential broadcast traffic, but in this situation that wouldn't matter, as the number of devices is the same. It looks more and more like the separate network for server-to-server communication is unnecessary.

                      @Dashrender
You're right. The only time VM traffic would go over a 1 Gb link is when that traffic has to travel over the physical NIC to the physical switch. Even if the virtual switch were an external switch, the VM-to-VM traffic would go over the 10 Gb virtual switch link.

                      • coliverC
                        coliver @EddieJennings

                        @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                        Right now, I'm planning on one host with multiple VMs. So if I had this separate, internal network, methinks performance would be better on a virtual private switch, rather than using virtual external switches bound to a physical NIC that is a part of a separate VLAN on the physical switch.

                        Probably not. But you're talking yourself out of it now so I don't need to say anything else.

                        • EddieJenningsE
                          EddieJennings @coliver

                          @coliver said in Food for thought: Fixing an over-engineered environment:

                          @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                          Right now, I'm planning on one host with multiple VMs. So if I had this separate, internal network, methinks performance would be better on a virtual private switch, rather than using virtual external switches bound to a physical NIC that is a part of a separate VLAN on the physical switch.

                          Probably not. But you're talking yourself out of it now so I don't need to say anything else.

😄 Yeah, during this thought process, I'll likely be talking myself out of most things that would be just a virtualized version of current architecture.

                          • DashrenderD
                            Dashrender @EddieJennings

                            @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                            @coliver said in Food for thought: Fixing an over-engineered environment:

                            @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                            Right now, I'm planning on one host with multiple VMs. So if I had this separate, internal network, methinks performance would be better on a virtual private switch, rather than using virtual external switches bound to a physical NIC that is a part of a separate VLAN on the physical switch.

                            Probably not. But you're talking yourself out of it now so I don't need to say anything else.

😄 Yeah, during this thought process, I'll likely be talking myself out of most things that would be just a virtualized version of current architecture.

                            Definitely a hard thing to get over at times.

                            • coliverC
                              coliver @EddieJennings

                              @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                              @coliver said in Food for thought: Fixing an over-engineered environment:

                              @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                              Right now, I'm planning on one host with multiple VMs. So if I had this separate, internal network, methinks performance would be better on a virtual private switch, rather than using virtual external switches bound to a physical NIC that is a part of a separate VLAN on the physical switch.

                              Probably not. But you're talking yourself out of it now so I don't need to say anything else.

😄 Yeah, during this thought process, I'll likely be talking myself out of most things that would be just a virtualized version of current architecture.

So greenfield it. Ignore the current infrastructure for a bit. How would you make this work in an ideal environment? Then look at where what you have now differs from that ideal. Are those differences necessary? Would moving them toward the ideal adversely affect users?

                              • J
                                Jimmy9008 @EddieJennings

                                @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                @coliver
                                Right now, I'm planning on one host with multiple VMs. So if I had this separate, internal network, methinks performance would be better on a virtual private switch, rather than using virtual external switches bound to a physical NIC that is a part of a separate VLAN on the physical switch.

If the VMs are on the same host, there's no need to give them internal and external virtual NICs. They will communicate over the external virtual switch, but the traffic won't go to the physical NIC or out to the LAN.

You only want an internal switch between VMs when they are only supposed to talk with each other and not be on the LAN.
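In Hyper-V terms, the two switch types described above are created like this (a sketch; the switch and VM names are hypothetical):

```powershell
# External switch: VMs reach the LAN/Internet through a physical NIC.
New-VMSwitch -Name "External" -NetAdapterName "Ethernet 1"

# Private switch: traffic never leaves the host; VM-to-VM only.
New-VMSwitch -Name "IsolatedOnly" -SwitchType Private

# Attach a VM's vNIC to the external switch.
Connect-VMNetworkAdapter -VMName "SQL01" -SwitchName "External"
```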

                                • J
                                  Jimmy9008 @EddieJennings

                                  @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                  @coliver

On performance, you're right about VLANs; they're designed for security. I guess you could argue you'd be reducing potential broadcast traffic, but in this situation that wouldn't matter, as the number of devices is the same. It looks more and more like the separate network for server-to-server communication is unnecessary.

I didn't think they were for security...

I thought VLANs were purely for segregation of traffic to make quality of service/planning better. Yeah, sure, something on VLAN1 won't interact with VLAN2... but it's the same switch/hardware/cables. So I presume if I can get access to that kit with Wireshark or something, I'd be able to capture the traffic regardless of VLANs, and the fact that they are VLANs wouldn't matter... Could be wrong here though (probably am)...

                                  • scottalanmillerS
                                    scottalanmiller @Jimmy9008

                                    @jimmy9008 said in Food for thought: Fixing an over-engineered environment:

                                    @eddiejennings said in Food for thought: Fixing an over-engineered environment:

                                    @coliver

On performance, you're right about VLANs; they're designed for security. I guess you could argue you'd be reducing potential broadcast traffic, but in this situation that wouldn't matter, as the number of devices is the same. It looks more and more like the separate network for server-to-server communication is unnecessary.

                                    I didn't think they were for security...

                                    I thought VLANs were purely for segregation of traffic to make quality of service/planning better.

No, that's the myth. They actually make those things worse. They make planning harder and confuse people about QoS. They add overhead and bottlenecks, so you have to plan more and do more QoS just to overcome the VLAN problems. VLANs are for security in some limited cases, and for management at massive scale.

                                    • scottalanmillerS
                                      scottalanmiller @Jimmy9008

                                      @jimmy9008 said in Food for thought: Fixing an over-engineered environment:

but it's the same switch/hardware/cables. So I presume if I can get access to that kit with Wireshark or something, I'd be able to capture the traffic regardless of VLANs, and the fact that they are VLANs wouldn't matter... Could be wrong here though (probably am)...

That's subnets you're thinking of. If you can do that with a VLAN, it's not a VLAN 😉 The definition of a VLAN means that can't be done.

                                      • scottalanmillerS
                                        scottalanmiller

                                        Okay, I've not read everything but starting from the top...

Networking - VLANs are gone. You described very clearly in the OP that they serve no purpose, so don't talk about them again. Gone. Done. Over. One Big Flat Network, OBFN.

                                        Servers - Definitely no need for more than one. Going down to just one will significantly improve your performance and your reliability. Right now your apps depend on the separate database server which depends on your SAN. That's an inverted pyramid with another tier. So instead of the normal three tiers of risk, you have five! Collapsing that down to one will make you so much more reliable. Hyper-V is fine. So is KVM.

                                        Storage - This is easy, local disks. Either all SSD or one SSD pool and one spinner pool. That's all.

                                        • scottalanmillerS
                                          scottalanmiller

REDIS should be on Linux; REDIS on Windows is crazy. It's expensive and slow.

                                          • EddieJenningsE
                                            EddieJennings @scottalanmiller

                                            @scottalanmiller said in Food for thought: Fixing an over-engineered environment:

REDIS should be on Linux; REDIS on Windows is crazy. It's expensive and slow.

I just finished reading a little on REDIS yesterday, and when I asked myself why we're running it on Windows, the answer came to me. The previous regime, and most of the current one (I'm the exception): if there's a way to do X with Microsoft, use Microsoft.
