Solaris ZFS is a genuinely good thing, but this awesome ZFS can turn into a beginner's nightmare when the system breaks down completely.
And indeed, I ran into exactly that recently: after the system was damaged, booting into recovery (failsafe) mode showed that most of the ZFS content was still there, yet the /var directory alone was empty.
We then booted the machine from an OpenSolaris LiveCD, hoping to mount this ZFS pool from there. The steps are described below directly in the form of annotated commands:
First, we need to use the zpool command to force-import the original ZFS pool.

// List the pools that are available for import:
lab@opensolaris:~# zpool import
  pool: rpool
    id: 68040892*****************
 state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
config:
        rpool       ONLINE
          c7t0d0s0  ONLINE

// So the pool is visible; now we need to force-import it into the current system.
// In our case, though, a direct forced import would make the pool's original
// directories collide with key directories of the LiveCD's own file system,
// so we re-root the pool under /mnt to avoid those conflicts.
lab@opensolaris:~# zpool import -f -R /mnt rpool
lab@opensolaris:~# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
rpool                           84.3G  49.6G  35.5K  /mnt/rpool
rpool/ROOT                      24.3G  49.6G    18K  legacy
rpool/ROOT/s10x_u6wos_07b       24.3G  49.6G  24.0G  {forgot this one}
rpool/ROOT/s10x_u6wos_07b/var    272M  49.6G   272M  {seems to be inherited from the parent}
rpool/dump                      2.00G  49.6G  2.00G  -
rpool/export                    56.0G  49.6G    19K  /mnt/export
rpool/export/home               56.0G  49.6G  56.0G  /mnt/export/home
rpool/swap                         2G  50.2G  1.40G  -

// The MOUNTPOINT column is easy to misread as where these datasets are
// currently mounted. In fact it shows the mount configuration saved in the
// original pool; running "zfs mount" at this point shows that nothing is
// actually mounted. The first puzzle is the "legacy" value on rpool/ROOT:
// a legacy mountpoint means the dataset is meant to be mounted like an
// ordinary file system, so it cannot be handled with the zfs mount command,
// only with "mount -F zfs", and mounting it that way shows it is basically
// empty anyway. To mount the partition we actually want, we must change its
// mountpoint information:
lab@opensolaris:~# zfs set mountpoint=/root/s10 rpool/ROOT/s10x_u6wos_07b

// This sets the mountpoint of rpool/ROOT/s10x_u6wos_07b to /root/s10. The
// leading / stands for the root path recorded for rpool, which we replaced
// with /mnt above, so the effective path becomes /mnt/root/s10. With that
// changed, happily mount everything:
lab@opensolaris:~# zfs mount
rpool/export                     /mnt/export
rpool/export/home                /mnt/export/home
rpool/ROOT                       /mnt/root
rpool                            /mnt/rpool

// It is still not mounted. Why? Because the root file system of the old pool
// has to be mounted manually. So unmount everything and start again:
lab@opensolaris:~# zfs umount -a
lab@opensolaris:~# zfs mount rpool/ROOT/s10x_u6wos_07b
lab@opensolaris:~# zfs mount -a
lab@opensolaris:~# zfs mount
rpool/ROOT/s10x_u6wos_07b        /mnt/root/s10
rpool/export                     /mnt/export
rpool/export/home                /mnt/export/home
rpool/ROOT                       /mnt/root
rpool                            /mnt/rpool

// That finally gets the old root path mounted. There are in fact almost no
// files under rpool/ROOT itself; the original operating system's root lives
// in rpool/ROOT/s10x_u6wos_07b, and the reason the contents of
// rpool/ROOT/s10x_u6wos_07b/var did not show up earlier is exactly the same.
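To summarize the recipe in one place, here is a minimal sketch of the whole sequence, assuming the same names as above (pool rpool, boot environment rpool/ROOT/s10x_u6wos_07b, alternate root /mnt); substitute whatever zpool import and zfs list actually report on your machine.

# Recovery sketch -- assumes pool "rpool", boot environment
# "rpool/ROOT/s10x_u6wos_07b" and alternate root /mnt; adjust as needed.

zpool import                      # list pools that are available for import
zpool import -f -R /mnt rpool     # force-import, re-rooted under /mnt

zfs list -r rpool                 # inspect the saved mountpoint configuration

# Point the old root file system at a directory we can reach from the LiveCD.
zfs set mountpoint=/root/s10 rpool/ROOT/s10x_u6wos_07b

# The old root file system has to be mounted by hand, and before its
# children, so that datasets such as .../var end up inside it.
zfs umount -a
zfs mount rpool/ROOT/s10x_u6wos_07b
zfs mount -a

zfs mount                         # verify: the root should now be at /mnt/root/s10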
In addition, if you run into trouble anywhere along the way, you can unmount everything and run zpool export to reset the pool's state and start over.
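As a hedged sketch of that reset, assuming the pool was imported under /mnt as above:

zfs umount -a                     # unmount everything belonging to the pool
zpool export rpool                # export the pool; its import state is reset
zpool import -f -R /mnt rpool     # re-import and start over cleanly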
Resources:
http://blogs.sun.com/robinguo/entry/use_failsafe_or_zpool_import
http://defect.opensolaris.org/bz/show_bug.cgi?id=5362
http://opensolaris.org/jive/thread.jspa?messageID=228499
http://docs.sun.com/app/docs/doc/817-2271/gbaln?l=zh&a=view