Let's start by looking at the filesystems currently mounted with df -h, and then list the existing pools on our machine with the zpool list command.
root@solaris:~# df -h
Filesystem                       Size   Used  Available  Capacity  Mounted on
rpool/ROOT/solaris                19G   2.6G        13G       17%  /
rpool/ROOT/solaris/var            19G   315M        13G        3%  /var
/devices                           0K     0K         0K        0%  /devices
/dev                               0K     0K         0K        0%  /dev
ctfs                               0K     0K         0K        0%  /system/contract
proc                               0K     0K         0K        0%  /proc
mnttab                             0K     0K         0K        0%  /etc/mnttab
swap                             3.4G   6.5M       3.4G        1%  /system/volatile
swap                             3.4G     4K       3.4G        1%  /tmp
objfs                              0K     0K         0K        0%  /system/object
sharefs                            0K     0K         0K        0%  /etc/dfs/sharetab
fd                                 0K     0K         0K        0%  /dev/fd
/usr/lib/libc/libc_hwcap1.so.1    16G   2.6G        13G       17%  /lib/libc.so.1
rpool/VARSHARE                    19G   2.7M        13G        1%  /var/share
rpool/VARSHARE/tmp                19G    31K        13G        1%  /var/tmp
rpool/VARSHARE/kvol               19G    31K        13G        1%  /var/share/kvol
rpool/VARSHARE/zones              19G    31K        13G        1%  /system/zones
rpool/export                      19G    32K        13G        1%  /export
rpool/export/home                 19G    32K        13G        1%  /export/home
rpool/export/home/pad             19G    34K        13G        1%  /export/home/pad
rpool                             19G   4.3M        13G        1%  /rpool
rpool/VARSHARE/pkg                19G    32K        13G        1%  /var/share/pkg
rpool/VARSHARE/pkg/repositories   19G    31K        13G        1%  /var/share/pkg/repositories
rpool/VARSHARE/sstore             19G   440K        13G        1%  /var/share/sstore/repo
root@solaris:~#
root@solaris:/oracle/mcfee# zpool list
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool   19.6G  6.06G  13.6G  30%  1.00x  ONLINE  -
root@solaris:/oracle/mcfee#
As we can see, we have only the main pool, which is created by default when the operating system is installed. I am going to add a new disk to the system in order to create a pool.

With the new disk already attached, we run the format command to identify it and find its device path.
root@solaris:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1d0 <QEMU HAR-QM0000-0001-20.00GB>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2d0 <QEMU HAR-QM0000-0001 cyl 1303 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@1/cmdk@1,0
Specify disk (enter its number): ^C
root@solaris:~#
The newly added disk is number 1. With this information we can create the new pool, using the create subcommand of zpool.
root@solaris:~# zpool create -f oracle c2d0
root@solaris:~#
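A single-disk pool like this has no redundancy. As a sketch of some useful variations (the device names are the ones from this example; adapt them to your own system):

```shell
# -n performs a dry run: it prints the layout zpool *would* create
# without actually touching the disks
zpool create -n oracle c2d0

# A mirrored pool (two disks of similar size) survives a single-disk failure
zpool create -f oracle mirror c2d0 c2d1

# -m sets the mountpoint at creation time instead of the default /<poolname>
# (the /u01/oracle path here is just an illustrative example)
zpool create -f -m /u01/oracle oracle c2d0
```

The -f flag used in this article forces creation even when the disk holds an existing label or filesystem, so double-check the device name before using it.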
Let's check the list again:
root@solaris:~# zpool list
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oracle  9.94G  99.5K  9.94G   0%  1.00x  ONLINE  -
rpool   19.6G  6.06G  13.6G  30%  1.00x  ONLINE  -
root@solaris:~#
Pool added!

Let's run df -h to see the filesystems on our machine:
root@solaris:/oracle/mcfee# df -h
Filesystem                       Size   Used  Available  Capacity  Mounted on
rpool/ROOT/solaris                19G   2.7G        13G       17%  /
rpool/ROOT/solaris/var            19G   315M        13G        3%  /var
/devices                           0K     0K         0K        0%  /devices
/dev                               0K     0K         0K        0%  /dev
ctfs                               0K     0K         0K        0%  /system/contract
proc                               0K     0K         0K        0%  /proc
mnttab                             0K     0K         0K        0%  /etc/mnttab
swap                             3.3G   6.5M       3.3G        1%  /system/volatile
swap                             3.3G     4K       3.3G        1%  /tmp
objfs                              0K     0K         0K        0%  /system/object
sharefs                            0K     0K         0K        0%  /etc/dfs/sharetab
fd                                 0K     0K         0K        0%  /dev/fd
/usr/lib/libc/libc_hwcap1.so.1    16G   2.7G        13G       17%  /lib/libc.so.1
rpool/VARSHARE                    19G   2.7M        13G        1%  /var/share
rpool/VARSHARE/tmp                19G    31K        13G        1%  /var/tmp
rpool/VARSHARE/kvol               19G    31K        13G        1%  /var/share/kvol
rpool/VARSHARE/zones              19G    31K        13G        1%  /system/zones
rpool/export                      19G    32K        13G        1%  /export
rpool/export/home                 19G    32K        13G        1%  /export/home
rpool/export/home/pad             19G    34K        13G        1%  /export/home/pad
oracle                            20G    32K        20G        1%  /oracle
rpool                             19G   4.3M        13G        1%  /rpool
rpool/VARSHARE/pkg                19G    32K        13G        1%  /var/share/pkg
rpool/VARSHARE/pkg/repositories   19G    31K        13G        1%  /var/share/pkg/repositories
rpool/VARSHARE/sstore             19G   1.5M        13G        1%  /var/share/sstore/repo
root@solaris:/oracle/mcfee#
Both pools can now be seen.

Let's add one more pool. The idea is to walk through the process again, then destroy this newly created pool so we can reuse the freed disk in the oracle pool.

First we attach another disk:
root@solaris:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1d0 <QEMU HAR-QM0000-0001-20.00GB>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2d0 <QEMU HAR-QM0000-0001-10.00GB>
          /pci@0,0/pci-ide@1,1/ide@1/cmdk@0,0
       2. c2d1 <QEMU HAR-QM0000-0001 cyl 1303 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@1/cmdk@1,0
Specify disk (enter its number): ^C
root@solaris:~#
Then we create the pool:
root@solaris:~# zpool create -f grid c2d1
root@solaris:~#
We list the pools:
root@solaris:~# zpool list
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
grid    9.94G  99.5K  9.94G   0%  1.00x  ONLINE  -
oracle  9.94G  99.5K  9.94G   0%  1.00x  ONLINE  -
rpool   19.6G  6.06G  13.6G  30%  1.00x  ONLINE  -
root@solaris:~#
We check the status of each pool:
root@solaris:~# zpool status
  pool: grid
 state: ONLINE
  scan: none requested
config:

        NAME      STATE   READ WRITE CKSUM
        grid      ONLINE     0     0     0
          c2d1    ONLINE     0     0     0

errors: No known data errors

  pool: oracle
 state: ONLINE
  scan: none requested
config:

        NAME      STATE   READ WRITE CKSUM
        oracle    ONLINE     0     0     0
          c2d0    ONLINE     0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME      STATE   READ WRITE CKSUM
        rpool     ONLINE     0     0     0
          c1d0    ONLINE     0     0     0

errors: No known data errors
root@solaris:~#
And now we proceed to destroy the grid pool so we can use its disk in the oracle pool:
root@solaris:~# zpool destroy grid
root@solaris:~#
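Note that zpool destroy does not ask for confirmation. If the underlying disks have not been reused yet, a destroyed pool can usually still be brought back with zpool import, as in this sketch:

```shell
# -D lists pools that were destroyed but whose on-disk data is still intact
zpool import -D

# Re-import the destroyed pool by name; -f forces the import.
# This only works while its disks have not been overwritten by another pool.
zpool import -D -f grid
```

In our case we destroy grid on purpose, so no recovery is needed.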
We check with df -h:
root@solaris:~# df -h
Filesystem                       Size   Used  Available  Capacity  Mounted on
rpool/ROOT/solaris                19G   2.7G        13G       17%  /
rpool/ROOT/solaris/var            19G   315M        13G        3%  /var
/devices                           0K     0K         0K        0%  /devices
/dev                               0K     0K         0K        0%  /dev
ctfs                               0K     0K         0K        0%  /system/contract
proc                               0K     0K         0K        0%  /proc
mnttab                             0K     0K         0K        0%  /etc/mnttab
swap                             3.3G   6.5M       3.3G        1%  /system/volatile
swap                             3.3G     4K       3.3G        1%  /tmp
objfs                              0K     0K         0K        0%  /system/object
sharefs                            0K     0K         0K        0%  /etc/dfs/sharetab
fd                                 0K     0K         0K        0%  /dev/fd
/usr/lib/libc/libc_hwcap1.so.1    16G   2.7G        13G       17%  /lib/libc.so.1
rpool/VARSHARE                    19G   2.7M        13G        1%  /var/share
rpool/VARSHARE/tmp                19G    31K        13G        1%  /var/tmp
rpool/VARSHARE/kvol               19G    31K        13G        1%  /var/share/kvol
rpool/VARSHARE/zones              19G    31K        13G        1%  /system/zones
rpool/export                      19G    32K        13G        1%  /export
rpool/export/home                 19G    32K        13G        1%  /export/home
rpool/export/home/pad             19G    34K        13G        1%  /export/home/pad
oracle                           9.8G    32K       9.8G        1%  /oracle
rpool                             19G   4.3M        13G        1%  /rpool
rpool/VARSHARE/pkg                19G    32K        13G        1%  /var/share/pkg
rpool/VARSHARE/pkg/repositories   19G    31K        13G        1%  /var/share/pkg/repositories
rpool/VARSHARE/sstore             19G   1.1M        13G        1%  /var/share/sstore/repo
root@solaris:~#
We add the freed disk to the oracle pool:
root@solaris:~# zpool add oracle c2d1
root@solaris:~#
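A word of caution on this step: as far as I know, in this Solaris release a top-level device added with zpool add cannot be removed from the pool afterwards (zpool remove only applies to hot spares, cache, and log devices), so the addition is effectively permanent. A dry run lets you preview the resulting layout first:

```shell
# -n prints the configuration the pool would have, without modifying it
zpool add -n oracle c2d1

# When the preview looks right, run it for real
zpool add oracle c2d1
```

This adds c2d1 as a second top-level device, striping data across both disks; it does not add redundancy.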
All that remains is to verify that the size of the oracle pool has increased:
root@solaris:/oracle/mcfee# zpool list
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oracle  19.9G   106K  19.9G   0%  1.00x  ONLINE  -
rpool   19.6G  6.06G  13.6G  30%  1.00x  ONLINE  -
root@solaris:/oracle/mcfee#
We check the status of the pools:
root@solaris:/oracle/mcfee# zpool status
  pool: oracle
 state: ONLINE
  scan: none requested
config:

        NAME      STATE   READ WRITE CKSUM
        oracle    ONLINE     0     0     0
          c2d0    ONLINE     0     0     0
          c2d1    ONLINE     0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME      STATE   READ WRITE CKSUM
        rpool     ONLINE     0     0     0
          c1d0    ONLINE     0     0     0

errors: No known data errors
root@solaris:/oracle/mcfee#
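With the pool in place, the usual next step is to carve it into ZFS datasets rather than writing straight to the pool root. A minimal sketch (the dataset name oracle/data and the property values are just examples):

```shell
# Create a dataset inside the pool; it mounts automatically at /oracle/data
zfs create oracle/data

# Optionally cap its size and enable compression
zfs set quota=5g oracle/data
zfs set compression=on oracle/data

# List the pool and all of its datasets recursively
zfs list -r oracle
```

Datasets inherit properties from their parent and can be snapshotted independently, which is why they are generally preferred over a single flat pool filesystem.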
That's all…