• Solaris disk formatting and partitioning


    The process of creating an EFI partition and mounting a file system:

    # format
    
    
    
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <LSI-MR9261-8i-2.12-557.86GB>
    /pci@0,0/pci8086,3c0a@3,2/pci1000,9263@0/sd@0,0
    1. c1t1d0 <LSI-MR9261-8i-2.12-2.72TB>
    /pci@0,0/pci8086,3c0a@3,2/pci1000,9263@0/sd@1,0
    
    Specify disk (enter its number): 1    ————     this assumes the system has already detected the disk
    
    format> fdisk
    
    No fdisk table exists. The default partition for the disk is:
    
    a 100% "SOLARIS System" partition
    
    Type "y" to accept the default partition,  otherwise type "n" to edit the partition table.
    
    WARNING: Disk is larger than 2TB. Solaris partition will be limited to 2 TB.
    
    n  —————— enter n to decline the default partition scheme
    
    SELECT ONE OF THE FOLLOWING:
    1. Create a partition
    2. Specify the active partition
    3. Delete a partition
    4. Change between Solaris and Solaris2 Partition IDs
    5. Edit/View extended partitions
    6. Exit (update disk configuration and exit)
    7. Cancel (exit without updating disk configuration)
    Enter Selection: 1  ————  enter 1 to create a partition
    Select the partition type to create:
    1=SOLARIS2 2=UNIX 3=PCIXOS 4=Other 5=DOS12
    6=DOS16 7=DOSEXT 8=DOSBIG 9=DOS16LBA A=x86 Boot
    B=Diagnostic C=FAT32 D=FAT32LBA E=DOSEXTLBA F=EFI (Protective)
    G=EFI_SYS 0=Exit? 
    F    —————————— enter F to create an EFI (Protective) partition
    SELECT ONE OF THE FOLLOWING:
    1. Create a partition
    2. Specify the active partition
    3. Delete a partition
    4. Change between Solaris and Solaris2 Partition IDs
    5. Edit/View extended partitions
    6. Exit (update disk configuration and exit)
    7. Cancel (exit without updating disk configuration)
    Enter Selection: 6    —————— enter 6 to save the configuration and exit
    
    
    
    format> quit
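
    Before creating the pool, the result can be double-checked by re-entering format on the same disk; when an fdisk table already exists, the fdisk sub-command prints the current partition table together with the menu, and selection 7 leaves it unchanged (a sketch only, output elided):

    # format c1t1d0
    format> fdisk
    (the existing partition table and the selection menu are displayed)
    Enter Selection: 7
    format> quit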
    

      

    Then a single command, zpool create -m, creates the ZFS file system and mounts it at the desired path /export/home/mrftp:

    # zpool create -m /export/home/mrftp pool_3TB c1t1d0
    

    Here pool_3TB is the pool name and c1t1d0 is the logical disk.
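
    The pool's health and device layout can also be checked with zpool status, using the pool name created above:

    # zpool status pool_3TB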

    The mount status can be checked with zpool list, zfs list, and df -h:

    root@lnltedmr-tds:/# zpool list
    NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
    pool_3TB  2.72T   110K  2.72T   0%  1.00x  ONLINE  -
    ramrpool  45.8G   156K  45.7G   0%  1.00x  ONLINE  -
    rpool      556G  20.9G   535G   3%  1.00x  ONLINE  -
    root@lnltedmr-tds:/# zfs list
    NAME                               USED  AVAIL  REFER  MOUNTPOINT
    pool_3TB                           110K  2.68T    31K  /export/home/mrftp
    ramrpool                           156K  45.0G    31K  /ramrpool
    ramrpool/ftp                        33K  45.0G    33K  /export/home/omcrftp
    rpool                             20.9G   526G  4.32M  /rpool
    rpool/ROOT                        3.35G   526G    31K  none
    rpool/ROOT/solaris                3.35G   526G  2.67G  /
    rpool/ROOT/solaris/var             630M   526G   629M  /var
    rpool/VARSHARE                    62.3M   526G  23.2M  /var/share
    rpool/VARSHARE/kvol               27.7M   526G    31K  /var/share/kvol
    rpool/VARSHARE/kvol/dump_summary  1.22M   526G  1.02M  -
    rpool/VARSHARE/kvol/ereports      10.2M   526G  10.0M  -
    rpool/VARSHARE/kvol/kernel_log    16.2M   526G  16.0M  -
    rpool/VARSHARE/pkg                  63K   526G    32K  /var/share/pkg
    rpool/VARSHARE/pkg/repositories     31K   526G    31K  /var/share/pkg/repositories
    rpool/VARSHARE/sstore             11.1M   526G  11.1M  /var/share/sstore/repo
    rpool/VARSHARE/tmp                  31K   526G    31K  /var/tmp
    rpool/VARSHARE/zones                31K   526G    31K  /system/zones
    rpool/dump                        12.0G   526G  12.0G  -
    rpool/export                      1.53G   526G    32K  /export
    rpool/export/home                 1.53G   526G  1.53G  /export/home
    rpool/export/home/admin4a         99.5K   526G  99.5K  /export/home/admin4a
    rpool/export/home/omccheck          38K   526G    38K  /export/home/omccheck
    rpool/export/home/user4a            36K   526G    36K  /export/home/user4a
    rpool/swap                        4.00G   526G  4.00G  -
    root@lnltedmr-tds:/# df -h
    Filesystem             Size   Used  Available Capacity  Mounted on
    rpool/ROOT/solaris     547G   2.7G       526G     1%    /
    rpool/ROOT/solaris/var
                           547G   629M       526G     1%    /var
    /devices                 0K     0K         0K     0%    /devices
    /dev                     0K     0K         0K     0%    /dev
    ctfs                     0K     0K         0K     0%    /system/contract
    proc                     0K     0K         0K     0%    /proc
    mnttab                   0K     0K         0K     0%    /etc/mnttab
    swap                    11G   6.6M        11G     1%    /system/volatile
    swap                    11G     4K        11G     1%    /tmp
    objfs                    0K     0K         0K     0%    /system/object
    sharefs                  0K     0K         0K     0%    /etc/dfs/sharetab
    fd                       0K     0K         0K     0%    /dev/fd
    /usr/lib/libc/libc_hwcap1.so.1
                           529G   2.7G       526G     1%    /lib/libc.so.1
    rpool/VARSHARE         547G    23M       526G     1%    /var/share
    rpool/VARSHARE/tmp     547G    31K       526G     1%    /var/tmp
    rpool/VARSHARE/kvol    547G    31K       526G     1%    /var/share/kvol
    rpool/VARSHARE/zones   547G    31K       526G     1%    /system/zones
    rpool/export           547G    32K       526G     1%    /export
    rpool/export/home      547G   1.5G       526G     1%    /export/home
    rpool/export/home/admin4a
                           547G    99K       526G     1%    /export/home/admin4a
    rpool/export/home/omccheck
                           547G    38K       526G     1%    /export/home/omccheck
    rpool/export/home/user4a
                           547G    36K       526G     1%    /export/home/user4a
    rpool                  547G   4.3M       526G     1%    /rpool
    rpool/VARSHARE/pkg     547G    32K       526G     1%    /var/share/pkg
    rpool/VARSHARE/pkg/repositories
                           547G    31K       526G     1%    /var/share/pkg/repositories
    rpool/VARSHARE/sstore
                           547G    11M       526G     1%    /var/share/sstore/repo
    /dev/dsk/c2t0d0p0:1    3.5G   3.1G       357M    91%    /media/ORACLE_SSM
    ramrpool                45G    31K        45G     1%    /ramrpool
    ramrpool/ftp            45G    33K        45G     1%    /export/home/omcrftp
    pool_3TB               2.7T    31K       2.7T     1%    /export/home/mrftp
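
    The same one-step approach extends to redundant layouts; for example, a mirrored pool could be created as follows (a sketch only: c1t2d0 is a hypothetical second disk, not one of the disks shown above):

    # zpool create -m /export/home/mrftp pool_3TB mirror c1t1d0 c1t2d0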
    

      

    Default mount point of a storage pool

    When a pool is created, the default mount point for the top-level file system is /pool-name. This directory must either not exist or be empty. If the directory does not exist, it is created automatically. If the directory exists but is empty, the root file system is mounted on top of it. To create a pool with a different default mount point, use the -m option of the zpool create command. For example:

    # zpool create home c1t0d0
    default mountpoint '/home' exists and is not empty
    use '-m' option to provide a different default
    # zpool create -m /export/zfs home c1t0d0

    This command creates the new pool home with its home file system mounted at /export/zfs.

    For more information about mount points, see Managing ZFS Mount Points.
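
    A mount point can also be changed after the pool exists with the zfs set command; the target path below is purely illustrative:

    # zfs set mountpoint=/newhome home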

  • Original article: https://www.cnblogs.com/wangziyi0513/p/10841431.html