
    Windows Failover Cluster

    IT Discussion
    dafyre

      Hi All,

      Got an oddball problem that kicked my tail for most of last week.

      I have a Windows Failover Cluster set up for SQL Server 2012. I configured SQL Server, added both nodes, and everything was happy. However, when I try to manually fail over from Node1 to Node2, the system gives me an error when trying to mount the shared storage:

      [screenshot of the error]

      The computer account for the cluster (the cluster management point) has been granted FULL CONTROL on all files and folders on the disk. I have not yet added the individual cluster nodes with full control on all files and folders. The disk's owner is marked as "SYSTEM".

      When I move everything back to NODE1 again, the disk comes up with no problem. Anybody have any ideas what is going on here?
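      (Edit, for anyone searching later: the error popup in Failover Cluster Manager is vague; the real failure reason ends up in the cluster log. A minimal sketch using the FailoverClusters PowerShell module; the output folder here is just an example path:)

      ```powershell
      # List the physical disk resources and their current state
      Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" }

      # Dump the cluster log from every node to a folder for inspection;
      # search it for the Physical Disk resource's Online failure
      Get-ClusterLog -Destination C:\Temp\ClusterLogs
      ```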

      Lakshmana

        Have you mounted the partition as a raw disk?

        dafyre @Lakshmana

          @Lakshmana I did. I can mount it on one machine, but not another. The disk was formatted for NTFS.

          PSX_Defector

            iSCSI or FC or what?

            Might need to ensure that the host can see the stores; it sounds as though it has a problem seeing the disk. I'm assuming it can see the quorum and such.

            dafyre @PSX_Defector

              @PSX_Defector Thanks for the reminder, it's an iSCSI connection.

              I've narrowed the problem down to SCSI-3 Persistent Reservations. Apparently that option is turned off on my LUN. I don't have access to the storage systems to fix it yet, so I've got the next guy up the totem pole looking at it for me.
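              For what it's worth, the cluster validation wizard has a storage test that checks exactly this. A sketch using the FailoverClusters module (note: the storage tests take the disks offline, so don't run them against a disk that's in use):

              ```powershell
              # Run only the storage validation tests, which include
              # "Validate SCSI-3 Persistent Reservation" for each shared disk
              Test-Cluster -Node NODE1, NODE2 -Include "Storage"
              ```

              If the LUN isn't honoring persistent reservations, that test should fail and say so in the report.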

              dafyre

                If I remove the iSCSI disk from the cluster, I can bring it online and offline on either node with no trouble. The problem only happens when I have it as part of the failover cluster.

                MattSpeller

                  All I can picture is some tiny dude waving flags around in your server

                  dafyre @MattSpeller

                    @MattSpeller said:

                    All I can picture is some tiny dude waving flags around in your server

                    /me hides flags

                    What gives you that impression?

                    /me waits for you to look away and kicks server

                    PSX_Defector @dafyre

                      @dafyre said:

                      If I remove the iSCSI disk from the cluster, I can bring it on and offline on either node with no trouble. The problem only happens when I have it as part of the failover cluster.

                      To both servers?

                      dafyre

                        No, lol. It will actually work fine on NODE1... But when I fail it over to NODE2, the disk resource won't come online.

                        If I remove the disk from the Role, and then remove it from the Available Disks pool, then I can mount it on either server with no problem.

                        When I add it back to the cluster again, it only works on NODE1...
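                        Roughly this dance in PowerShell, for reference (a sketch; the resource, group, and node names are placeholders for whatever yours are called):

                        ```powershell
                        # Take the disk resource out of the cluster entirely
                        Remove-ClusterResource -Name "Cluster Disk 2"

                        # ...test it directly on each node, then add it back
                        Get-ClusterAvailableDisk | Add-ClusterDisk

                        # and try the failover again
                        Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node NODE2
                        ```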

                        PSX_Defector

                          That's what I was looking for. So the pathing isn't a problem, since you can mount on both servers one at a time.

                          Sounds to me like some kind of presentation issue. Fibre Channel does this very easily, iSCSI not as much. I would have my SAN guy spin me up a LUN, make sure it is presented to both servers, set up a new cluster resource, make sure it can fail over, then migrate your data over to it. Yeah, more of a pain, but at least you will know it will fail over properly.

                          Same DC, right? You're not doing something stupid like WAN clustering over iSCSI.

                          dafyre @PSX_Defector

                            @PSX_Defector I have narrowed it down to the Persistent Reservations. This is a Nimble Storage system, and I believe that can be enabled/disabled per LUN. AFAIK, this will be the only Windows Failover Cluster accessing the Nimble systems, so they probably don't have it enabled.

                            And yes, everything is in the same DC. Most likely in the same rack, lol... and no, not doing anything fun with iSCSI over the WAN.... although that does give me an idea, for new things to try on my own servers, lol.
