    RAID Controllers - Stupidly Expensive for what they are

    IT Discussion
    raid storage
    • dafyre @scottalanmiller

      @scottalanmiller Again... budget constraints... User Files were already hosted on the SAN and a single physical server. We hijacked the Physical Server's name for the Cluster Role (and retired that server), so we didn't have to change any folder redirection GPOs, etc...

      That setup actually worked fine for about a year before that server died (it only acted up for about a week before it went kaput, lol). Now, AFAIK, the guys that still run that cluster have e-wasted the physical server that died. That just leaves one Windows 2012 Physical server (that has been rock solid) and a Windows 2012 VM running two File Server roles (one running on each server).

      I'd have to go look, but the cluster is set up so that even if the other physical server fails, the single remaining VM can run both file server roles (Node and Disk Majority + Windows File Share Witness, I think).

      The net take away from that setup for us has been increased uptime and fewer headaches when servers start dropping like flies.
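
      As a rough illustration of the setup dafyre describes, here is a minimal, hypothetical sketch that drives the real FailoverClusters PowerShell cmdlets from Python. The cluster, witness share, role, and disk names are made up, and node majority plus a file share witness is only one plausible reading of the quorum mentioned above.

          import subprocess

          def run_ps(command: str) -> None:
              # Run one PowerShell command on a cluster node; raise if it fails.
              subprocess.run(
                  ["powershell.exe", "-NoProfile", "-Command", command],
                  check=True,
              )

          # Quorum: node majority plus a file share witness (share path is hypothetical).
          run_ps(r"Set-ClusterQuorum -NodeAndFileShareMajority '\\WITNESS01\ClusterWitness'")

          # Clustered file server role that reuses the retired physical server's name,
          # so folder redirection GPOs keep pointing at the same UNC path
          # (role and storage names are hypothetical).
          run_ps("Add-ClusterFileServerRole -Name 'OLDFILESRV' -Storage 'Cluster Disk 1'")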

    • scottalanmiller @dafyre

        @dafyre said:

        @scottalanmiller Again... budget constraints... User Files were already hosted on the SAN and a single physical server.

        Ah, I see. Maybe the budgets wouldn't have been so constrained without there being two devices to do the work of one 😉

        There is always an excuse as to why these things happen. But generally, if you work back, there is a foundational decision that was bad or weird and led to a cascade of problems.

    • scottalanmiller @dafyre

          @dafyre said:

          The net take away from that setup for us has been increased uptime and fewer headaches when servers start dropping like flies.

          The net take away should have been "design sensibly from day one and reserve overspending for later improvements." Lower cost, easier management, higher reliability.

    • dafyre @scottalanmiller

            @scottalanmiller said:

            The net take away should have been "design sensibly from day one and reserve overspending for later improvements." Lower cost, easier management, higher reliability.

            While I agree, the design was sensible to us from day one. 8-) And as I have stated before, even knowing what I know now, I would have still done it that way because our experience, by and large, was pretty good. I didn't lose any sleep at night when things were working correctly.

            They have now reached the Lower Cost (no need to buy another SAN, thanks to Scale), Easier Management (most everything is virtualized), and Higher Reliability phase... When that last Physical Machine dies? All they gotta do is spin up a new VM, make sure it is on a different Host than the existing one, join it to the cluster, and be happy... (Arguably, they should have already spun up a new VM and made it part of the cluster...).
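
            When that day comes, the "join it to the cluster" step is essentially a one-liner; below is a hypothetical sketch using the real Add-ClusterNode cmdlet, again driven from Python, with made-up cluster and VM names.

                import subprocess

                # Join the freshly built VM to the existing failover cluster
                # (names are hypothetical); run from any current cluster member.
                subprocess.run(
                    ["powershell.exe", "-NoProfile", "-Command",
                     "Add-ClusterNode -Cluster 'FILECLUSTER' -Name 'NEWFILEVM'"],
                    check=True,
                )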

    • scottalanmiller @dafyre

              @dafyre said:

              While I agree, the design was sensible to us from day one. 8-) And as I have stated before, even knowing what I know now, I would have still done it that way because our experience, by and large, was pretty good. I didn't lose any sleep at night when things were working correctly.

              Even though the cost was more than double that of a more reliable design? What makes the design sensible or "good" in hindsight? Doesn't hindsight suggest that a lot of money was lost and unnecessary risk was taken on? It might have been "reliable enough", but if you could spend half the money and be "even more reliable", why avoid that?

    • scottalanmiller @dafyre

                @dafyre said:

                They have now reached the Lower Cost (no need to buy another SAN, thanks to Scale), Easier Management (most everything is virtualized), and Higher Reliability phase... When that last Physical Machine dies? All they gotta do is spin up a new VM, make sure it is on a different Host than the existing one, join it to the cluster, and be happy... (Arguably, they should have already spun up a new VM and made it part of the cluster...).

                Scale is pretty awesome. We have a cluster on its way, actually.

    • dafyre

                  Sweet! They are sponsoring a SpiceClub meeting in Atlanta tomorrow. I'm actually going to go since I get off early.

    • scottalanmiller

                    They sponsored the last SpiceCorps that I was at as well (Rochester).

    • dafyre

                      Cool. I actually spoke with (well, somebody relayed for me on the phone) one of the Scale guys who's going to be there tomorrow night, lol.

                      How big of a cluster did you guys get?

    • scottalanmiller

                        Just three nodes for now. It is heading to the lab. I'll be writing about it as soon as we have time to have it up and running.

    • dafyre @scottalanmiller

                          @scottalanmiller That shouldn't take long, lol. We did it with a guy on the phone (super helpful, by the way) in like 30 minutes.

                          Each server we got came with screwdrivers in it, lol. I still have a couple of them running around the house, lol.

    • scottalanmiller

                            Like the afternoon equivalent of a mimosa? Nice.
