Friday, May 23, 2014

linux lvm: shrinking the filesystem and exporting the VG.


Since the filesystem was bolted onto the LV with duct tape and baling wire, success is not guaranteed: the odds of ending up with a scrambled filesystem after these transformations are quite high.


!) This is strictly an offline operation; downtime is required. Given the above, a backup is also mandatory.
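A minimal backup sketch, taken while the filesystem is still mounted; the /backup destination is hypothetical, any location with enough free space will do:

# archive the mounted data before touching anything (destination path is hypothetical)
tar -czf /backup/oradb-$(date +%F).tar.gz -C /oradb .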

[root@thor1 /]# umount /oradb/
[root@thor1 /]# e2fsck -f /dev/mapper/oravg-fslv_oradb
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/oravg-fslv_oradb: 126150/384466944 files (24.7% non-contiguous), 113949041/1537857536 blocks


The check is mandatory; in any case resize2fs won't let you forget it and will honestly tell you "Please run 'e2fsck -f /dev/mapper/oravg-fslv_oradb' first."

[root@thor1 /]# resize2fs -M /dev/mapper/oravg-fslv_oradb
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/mapper/oravg-fslv_oradb to 91252358 (4k) blocks.
The filesystem on /dev/mapper/oravg-fslv_oradb is now 91252358 blocks long.


Shrink down to the used size (it is guaranteed to fit on a single PV, so no need to calculate the exact required size or bother with any of that).
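If you prefer an explicit target over -M, resize2fs can estimate the minimum size first; a sketch, where the amount of headroom is my own arbitrary choice:

# print the estimated minimum size in filesystem blocks, without changing anything
resize2fs -P /dev/mapper/oravg-fslv_oradb
# then shrink to that figure plus some headroom, e.g.
# resize2fs /dev/mapper/oravg-fslv_oradb <min_blocks_plus_headroom>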

[root@thor1 /]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/oravg/fslv_oradb
  LV Name                fslv_oradb
  VG Name                oravg
  LV Size                5.73 TiB
  Current LE             1501814

  Segments               4


[root@thor1 /]# lvreduce -L -5T /dev/oravg/fslv_oradb
  WARNING: Reducing active logical volume to 746.46 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce fslv_oradb? [y/n]: y
  Size of logical volume oravg/fslv_oradb changed from 5.73 TiB (1501814 extents) to 746.46 GiB (191094 extents).
  Logical volume fslv_oradb successfully resized
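A quick sanity check on those numbers, using only what the tools printed above:

# minimized filesystem: 91252358 blocks of 4 KiB each
echo $(( 91252358 * 4 / 1024 / 1024 ))   # prints 348, i.e. ~348 GiB
# the reduced LV is 746.46 GiB, so the shrunken filesystem fits with a wide margin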


Check:
[root@thor1 /]# mount /dev/mapper/oravg-fslv_oradb /oradb/
look around, verify the data, then close it back up.
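What exactly counts as "verified" is up to you; a minimal sketch (the sample file path is hypothetical):

df -h /oradb        # size and used space look sane after the shrink
ls -l /oradb        # the expected top-level directories are still there
# md5sum /oradb/some_known_file   # hypothetical spot-check against the backup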

[root@thor1 /]# umount /oradb/

A look at the PVs:

[root@thor1 /]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/mpatha1
  VG Name               oravg
  Allocated PE          0
  
  --- Physical volume ---
  PV Name               /dev/mapper/mpathd1
  VG Name               oravg
  Allocated PE          0
  
  --- Physical volume ---
  PV Name               /dev/mapper/mpathb1
  VG Name               oravg
  PV Size               1.64 TiB / not usable 1.75 MiB
  Allocated PE          165469
 
  --- Physical volume ---
  PV Name               /dev/mapper/mpathc1
  VG Name               oravg
  PV Size               1.09 TiB / not usable 2.70 MiB
  Allocated PE          0

All the data sits on a single PV, but it suits me better to use the smaller partition, so I move the data to the smallest PV and free up the rest.
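Before the move it is worth double-checking free extents per PV; note also that pvmove accepts an explicit destination PV, which is the direct way to pin data onto a particular disk (a sketch only, not what was actually run here):

# per-PV size and free space at a glance
pvs -o pv_name,vg_name,pv_size,pv_free
# explicit source -> destination form of pvmove:
# pvmove /dev/mapper/mpathb1 /dev/mapper/mpathc1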


[root@thor1 /]# vgreduce oravg /dev/mapper/mpath[a,d]1
  Removed "/dev/mapper/mpatha1" from volume group "oravg"
  Removed "/dev/mapper/mpathd1" from volume group "oravg"

[root@thor1 /]# pvmove -v /dev/mapper/mpathb1
    Cluster mirror log daemon is not running.
    Finding volume group "oravg"
    Archiving volume group "oravg" metadata (seqno 9).
    Creating logical volume pvmove0
    Moving 165469 extents of logical volume oravg/fslv_oradb
    activation/volume_list configuration setting not defined: Checking only host tags for oravg/fslv_oradb
    Updating volume group metadata
    Creating oravg-pvmove0
    Loading oravg-pvmove0 table (252:9)
    Loading oravg-fslv_oradb table (252:8)
    Suspending oravg-fslv_oradb (252:8) with device flush
    activation/volume_list configuration setting not defined: Checking only host tags for oravg/pvmove0
    Resuming oravg-pvmove0 (252:9)
    Loading oravg-pvmove0 table (252:9)
    Suppressed oravg-pvmove0 (252:9) identical table reload.
    Resuming oravg-fslv_oradb (252:8)
    Creating volume group backup "/etc/lvm/backup/oravg" (seqno 10).
    Checking progress before waiting every 15 seconds
  /dev/mapper/mpathb1: Moved: 0.0%
....
  /dev/mapper/mpathb1: Moved: 100.0%
    Loading oravg-fslv_oradb table (252:8)
    Loading oravg-pvmove0 table (252:9)
    Suspending oravg-fslv_oradb (252:8) with device flush
    Suspending oravg-pvmove0 (252:9) with device flush
    Resuming oravg-pvmove0 (252:9)
    Resuming oravg-fslv_oradb (252:8)
    Removing oravg-pvmove0 (252:9)
    Removing temporary pvmove LV
    Writing out final volume group after pvmove
    Creating volume group backup "/etc/lvm/backup/oravg" (seqno 12).

[root@thor1 /]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/mpathb1
  VG Name               oravg
  Allocated PE          0
  
  --- Physical volume ---
  PV Name               /dev/mapper/mpathc1
  VG Name               oravg
  Allocated PE          165469
  
  "/dev/mapper/mpathd1" is a new physical volume of "1.36 TiB"
  --- NEW Physical volume ---
  PV Name               /dev/mapper/mpathd1
  VG Name              
  Allocatable           NO
  Allocated PE          0

 
  "/dev/mapper/mpatha1" is a new physical volume of "1.64 TiB"
  --- NEW Physical volume ---
  PV Name               /dev/mapper/mpatha1
  VG Name              
  Allocatable           NO
  Allocated PE          0


[root@thor1 /]# vgreduce oravg /dev/mapper/mpathb1
  Removed "/dev/mapper/mpathb1" from volume group "oravg"
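The detached disks still carry LVM labels; if they are to be reused outside LVM, the label can be wiped (a sketch, only for devices that really are no longer needed as PVs):

# strip the LVM label from a device that no longer belongs to any VG
pvremove /dev/mapper/mpatha1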


The net result: three clean PVs and one still holding the filesystem, now squeezed onto an LV of roughly 750 GB.
Time to take it off this system:
 [root@thor1 /]# vgdisplay
  --- Volume group ---
  VG Name               oravg
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  13
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.09 TiB
  PE Size               4.00 MiB
  Total PE              286063
  Alloc PE / Size       165469 / 646.36 GiB
  Free  PE / Size       120594 / 471.07 GiB
  VG UUID               xCRb3J-3gZ9-mie8-lRDI-3mdf-DpTS-u4BIIf

  
[root@thor1 /]# vgchange -an oravg
  0 logical volume(s) in volume group "oravg" now active
[root@thor1 /]# vgexport oravg
  Volume group "oravg" successfully exported
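On whatever system the disks are presented to next, the mirror-image steps would look roughly like this (a sketch; the device names and the mount point are assumptions):

pvscan                                        # let LVM discover the exported PV
vgimport oravg                                # clear the exported flag
vgchange -ay oravg                            # activate the logical volume
mount /dev/mapper/oravg-fslv_oradb /oradb     # mount point is an assumption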
