
    Expanding LVM disk

    IT Discussion
    Tags: hyper-v, fedora, fedora 25, fedora 26, lvm, linux, hyper-v 2012 r2, storage
    • gjacobse (last edited by scottalanmiller)

      So I have run into a minor but addressable issue with the NextCloud instance: the 400 GB disk has filled up.

      Over the weekend I shut the VM down and, using Hyper-V, expanded the disk to 800 GB:
      [screenshot: Selection_048.png]

      However, the system still reports only 400 GB:

      df
      Filesystem                    1K-blocks      Used Available Use% Mounted on
      devtmpfs                        2005988         0   2005988   0% /dev
      tmpfs                           2018208         0   2018208   0% /dev/shm
      tmpfs                           2018208       812   2017396   1% /run
      tmpfs                           2018208         0   2018208   0% /sys/fs/cgroup
      /dev/mapper/fedora-root        15718400  11923072   3795328  76% /
      tmpfs                           2018208       284   2017924   1% /tmp
      /dev/sda1                        999320    165984    764524  18% /boot
      /dev/mapper/vol_data1-lv_data 419221508 379061068  40160440  91% /data
      tmpfs                            403640         0    403640   0% /run/user/0
      
      

      It is running LVM, but the volume has not automatically extended, and I am unable to finish the sync that is needed.

      I've been looking for the proper syntax to force the system to update, but I have not found it yet.

      • stacksofplates

        So there are two things here: 1) Did you resize the volume with LVM, or just resize the disk with Hyper-V? 2) If you resized with LVM, you might have to run partprobe to get the system to see the changes, but the reboot should have handled that.

        Here are the commands you need for LVM:

        pvresize /dev/<whatever the data volume is> 
        
        lvextend  -r -l 85%FREE /dev/mapper/vol_data1-lv_data
        

        The -r will automatically resize the file system.

        I usually leave space at the end for things like snapshots if needed which is why I put 85%.
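As a rough illustration of that headroom (a sketch; the 400 GiB figure is the amount of new space in this thread, assumed here for the arithmetic), an 85/15 split works out to:

```shell
free_gib=400                      # newly added space in the VG (assumed for illustration)
used=$(( free_gib * 85 / 100 ))   # GiB handed to the logical volume
kept=$(( free_gib - used ))       # GiB held back for snapshots and the like
echo "extend by ${used} GiB, keep ${kept} GiB free"   # extend by 340 GiB, keep 60 GiB free
```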

        • gjacobse @stacksofplates

          @stacksofplates said in Expanding LVM disk:

          Did you resize the volume with LVM or just resize the disk with Hyper-V?

          Just resized with Hyper-V.

           pvresize /dev/sdb
            Physical volume "/dev/sdb" changed
            1 physical volume(s) resized / 0 physical volume(s) not resized
          
           lvextend -r -l 90%FREE /dev/mapper/vol_data1-lv_data
            New size given (92160 extents) not larger than existing size (102399 extents)
          
           df
          Filesystem                    1K-blocks      Used Available Use% Mounted on
          devtmpfs                        2005988         0   2005988   0% /dev
          tmpfs                           2018208         0   2018208   0% /dev/shm
          tmpfs                           2018208       820   2017388   1% /run
          tmpfs                           2018208         0   2018208   0% /sys/fs/cgroup
          /dev/mapper/fedora-root        15718400  11923364   3795036  76% /
          tmpfs                           2018208       284   2017924   1% /tmp
          /dev/sda1                        999320    165984    764524  18% /boot
          /dev/mapper/vol_data1-lv_data 419221508 379061192  40160316  91% /data
          tmpfs                            403640         0    403640   0% /run/user/0
          
          • stacksofplates @gjacobse

            @gjacobse said in Expanding LVM disk:

            Just resized with Hyper-V.

             lvextend -r -l 90%FREE /dev/mapper/vol_data1-lv_data
              New size given (92160 extents) not larger than existing size (102399 extents)

            You will probably have to either reboot, unmount and remount that disk, or try partprobe.

            • gjacobse

              Reboot works for me... nice and simple...

              • stacksofplates @gjacobse

                @gjacobse said in Expanding LVM disk:

                Reboot works for me... nice and simple...

                After it reboots, run the lvextend command again.

                • gjacobse @stacksofplates (last edited by gjacobse)

                  @stacksofplates said in Expanding LVM disk:

                  After it reboots, run the lvextend command again.

                   lvextend -r -l 90%FREE /dev/mapper/vol_data1-lv_data
                    New size given (92160 extents) not larger than existing size (102399 extents)
                  
                  df
                  Filesystem                    1K-blocks      Used Available Use% Mounted on
                  devtmpfs                        2005988         0   2005988   0% /dev
                  tmpfs                           2018208         0   2018208   0% /dev/shm
                  tmpfs                           2018208       820   2017388   1% /run
                  tmpfs                           2018208         0   2018208   0% /sys/fs/cgroup
                  /dev/mapper/fedora-root        15718400  11943068   3775332  76% /
                  tmpfs                           2018208       128   2018080   1% /tmp
                  /dev/sda1                        999320    165984    764524  18% /boot
                  /dev/mapper/vol_data1-lv_data 419221508 379061772  40159736  91% /data
                  tmpfs                            403640         0    403640   0% /run/user/0
                  
                  • stacksofplates

                    What do pvdisplay, vgdisplay, and lvdisplay show?

                    • gjacobse

                      pvdisplay

                      pvdisplay
                        --- Physical volume ---
                        PV Name               /dev/sdb
                        VG Name               vol_data1
                        PV Size               800.00 GiB / not usable 3.00 MiB
                        Allocatable           yes
                        PE Size               4.00 MiB
                        Total PE              204799
                        Free PE               102400
                        Allocated PE          102399
                        PV UUID               GsfGRU-BNft-XL8c-4YHH-0fyp-y7No-oJXt0c
                      
                        --- Physical volume ---
                        PV Name               /dev/sda2
                        VG Name               fedora
                        PV Size               19.00 GiB / not usable 3.00 MiB
                        Allocatable           yes
                        PE Size               4.00 MiB
                        Total PE              4863
                        Free PE               511
                        Allocated PE          4352
                        PV UUID               aM5Bp0-CBN2-IDiy-M02P-20gb-0STR-Fucb0Z
                      
                      
                      • gjacobse

                        vgdisplay

                        vgdisplay
                          --- Volume group ---
                          VG Name               vol_data1
                          System ID
                          Format                lvm2
                          Metadata Areas        1
                          Metadata Sequence No  3
                          VG Access             read/write
                          VG Status             resizable
                          MAX LV                0
                          Cur LV                1
                          Open LV               1
                          Max PV                0
                          Cur PV                1
                          Act PV                1
                          VG Size               800.00 GiB
                          PE Size               4.00 MiB
                          Total PE              204799
                          Alloc PE / Size       102399 / 400.00 GiB
                          Free  PE / Size       102400 / 400.00 GiB
                          VG UUID               lr2fGv-9KgH-nQam-y3rZ-4sq4-hsO1-OACkXC
                        
                          --- Volume group ---
                          VG Name               fedora
                          System ID
                          Format                lvm2
                          Metadata Areas        1
                          Metadata Sequence No  3
                          VG Access             read/write
                          VG Status             resizable
                          MAX LV                0
                          Cur LV                2
                          Open LV               2
                          Max PV                0
                          Cur PV                1
                          Act PV                1
                          VG Size               19.00 GiB
                          PE Size               4.00 MiB
                          Total PE              4863
                          Alloc PE / Size       4352 / 17.00 GiB
                          Free  PE / Size       511 / 2.00 GiB
                          VG UUID               6ZJSML-ep20-ukkp-W1Vu-soRl-uufZ-kKDsMz
                        
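A quick sanity check on those numbers (a sketch; the values are copied from the vol_data1 output above) shows the VG did grow to 800 GiB, with half of it still unallocated - it is only the LV that was never extended:

```shell
pe_size_mib=4      # PE Size from vgdisplay
total_pe=204799    # Total PE
alloc_pe=102399    # Alloc PE (the 400 GiB LV)
free_pe=$(( total_pe - alloc_pe ))                           # 102400 extents still free
echo "allocated: $(( alloc_pe * pe_size_mib / 1024 )) GiB"   # ~400 GiB (integer division rounds down to 399)
echo "free:      $(( free_pe * pe_size_mib / 1024 )) GiB"    # 400 GiB waiting to be claimed
```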
                        • gjacobse

                          lvdisplay

                          lvdisplay
                            --- Logical volume ---
                            LV Path                /dev/vol_data1/lv_data
                            LV Name                lv_data
                            VG Name                vol_data1
                            LV UUID                GV6CFt-rFDT-VFuY-YSzB-8qHl-94pB-5Q0YVQ
                            LV Write Access        read/write
                            LV Creation host, time server-NextCloud.localdomain, 2017-06-14 16:56:18 -0400
                            LV Status              available
                            # open                 1
                            LV Size                400.00 GiB
                            Current LE             102399
                            Segments               1
                            Allocation             inherit
                            Read ahead sectors     auto
                            - currently set to     256
                            Block device           253:2
                          
                            --- Logical volume ---
                            LV Path                /dev/fedora/swap
                            LV Name                swap
                            VG Name                fedora
                            LV UUID                dARm8u-s32S-V1bN-VreO-Iret-G74L-Q1J5zd
                            LV Write Access        read/write
                            LV Creation host, time server-NextCloud.localdomain, 2017-06-14 10:51:10 -0400
                            LV Status              available
                            # open                 2
                            LV Size                2.00 GiB
                            Current LE             512
                            Segments               1
                            Allocation             inherit
                            Read ahead sectors     auto
                            - currently set to     256
                            Block device           253:1
                          
                            --- Logical volume ---
                            LV Path                /dev/fedora/root
                            LV Name                root
                            VG Name                fedora
                            LV UUID                d915Nx-AVEn-Ue96-6bux-WcMj-jPH5-CyJpor
                            LV Write Access        read/write
                            LV Creation host, time server-NextCloud.localdomain, 2017-06-14 10:51:10 -0400
                            LV Status              available
                            # open                 1
                            LV Size                15.00 GiB
                            Current LE             3840
                            Segments               1
                            Allocation             inherit
                            Read ahead sectors     auto
                            - currently set to     256
                            Block device           253:0
                          
                          • stacksofplates

                            You can try it two ways:

                            lvextend -r -l +90%FREE /dev/mapper/vol_data1-lv_data
                            
                            lvextend -r -l +102300 /dev/mapper/vol_data1-lv_data
                            

                            I hardly ever use the %FREE form and almost always use extents instead. I had figured the percentage accounted for growth, not just a percentage of the actual free space.

                            • gjacobse @stacksofplates

                              @stacksofplates said in Expanding LVM disk:

                              You can try it two ways:

                              lvextend -r -l +90%FREE /dev/mapper/vol_data1-lv_data

                              Le sigh... And this is where I have to laugh - and I know you'll laugh, and @scottalanmiller will laugh, and @JaredBusch will do something different...

                              lvextend -r -l 90%Free /dev/mapper/vol_data1-lv_data
                              

                              is totally different from the correct one

                              lvextend -r -l +90%Free /dev/mapper/vol_data1-lv_data
                              

                              Now we see it:

                               df
                              Filesystem                    1K-blocks      Used Available Use% Mounted on
                              devtmpfs                        2005988         0   2005988   0% /dev
                              tmpfs                           2018208         0   2018208   0% /dev/shm
                              tmpfs                           2018208       820   2017388   1% /run
                              tmpfs                           2018208         0   2018208   0% /sys/fs/cgroup
                              /dev/mapper/fedora-root        15718400  11941636   3776764  76% /
                              tmpfs                           2018208       128   2018080   1% /tmp
                              /dev/sda1                        999320    165984    764524  18% /boot
                              /dev/mapper/vol_data1-lv_data 796708868 379480528 417228340  48% /data
                              tmpfs                            403640         0    403640   0% /run/user/0
                              
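The extent arithmetic (a sketch, using the Free PE and Current LE values from the vgdisplay/lvdisplay output earlier in the thread) reproduces both behaviours: without the `+`, `-l 90%FREE` asks for a *total* size of 90% of the free space, which is smaller than the LV already is; with the `+`, that amount is *added* on top of the current size:

```shell
free_pe=102400   # Free PE in vol_data1
cur_pe=102399    # Current LE of lv_data
no_plus=$(( free_pe * 90 / 100 ))             # 92160 -> the "not larger than existing size" error
with_plus=$(( cur_pe + free_pe * 90 / 100 ))  # 194559 extents, ~760 GiB at 4 MiB per PE
echo "${no_plus} vs ${with_plus}"             # 92160 vs 194559
```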
                                • JaredBusch

                                Did you actually fill up 400GB?

                                Are you running maintenance?

                                If not, then versions and such will chew up the free space.

                                 By default, this is set to AJAX. This is perfectly workable as long as your users are actually using the web interface. If you are only using the sync client, then you will run into issues with maintenance not being run.

                                 The site this image was taken from is 100% sync client; no one uses the web interface. Because of that, I had to set up a cron job, because version history was eating all the storage space.

                                 [screenshot: 33dc25a7-55f4-42f4-99d4-2b7dc0faed23-image.png]

                                  • JaredBusch

                                    Open a crontab for the apache user and add an entry that runs Nextcloud's cron.php every 15 minutes:

                                  crontab -u apache -e
                                  
                                  */15 * * * * php -f /var/www/html/nextcloud/cron.php
                                  
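If you go this route, Nextcloud's background-jobs mode also needs to be switched from AJAX to Cron - either on the admin page shown above, or with occ (a sketch; the install path is assumed from the crontab entry above):

```shell
# Switch Nextcloud's background job mode to Cron.
# /var/www/html/nextcloud is assumed from the crontab entry above.
sudo -u apache php /var/www/html/nextcloud/occ background:cron
```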