Understanding the Space Used by ZFS
by Brian Leonard on Sep 28, 2010
Until recently, I've been confused and frustrated by the zfs list output as I try to clear up space on my hard drive.
Take this example using a 1 GB zpool:
# mkfile 1G /dev/dsk/disk1
# zpool create tank disk1
# zpool list tank
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank   1016M  73K   1016M  0%   ONLINE  -
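(If you're following along on a platform without mkfile, which is Solaris-specific, the same file-backed pool can be built with truncate; the path here is just an example:)

# truncate -s 1G /var/tmp/disk1
# zpool create tank /var/tmp/disk1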
Now let's create some files and snapshots:
# mkfile 100M /tank/file1
# zfs snapshot tank@snap1
# mkfile 100M /tank/file2
# zfs snapshot tank@snap2
# zfs list -t all -r tank
NAME        USED  AVAIL  REFER  MOUNTPOINT
tank        200M  784M   200M   /tank
tank@snap1  17K   -      100M   -
tank@snap2  0     -      200M   -
The output here looks as I'd expect. I have used 200 MB of disk space, none of which is used by the snapshots. snap1 refers to 100 MB of data (file1) and snap2 refers to 200 MB of data (file1 and file2).
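You can also read those two numbers off as properties of an individual snapshot; a quick sketch, with the values taken from the listing above:

# zfs get used,referenced tank@snap2
NAME        PROPERTY    VALUE  SOURCE
tank@snap2  used        0      -
tank@snap2  referenced  200M   -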
Now let's delete file1 and look at our zfs list output again:
# rm /tank/file1
# zpool scrub tank
# zfs list -t all -r tank
NAME        USED  AVAIL  REFER  MOUNTPOINT
tank        200M  784M   100M   /tank
tank@snap1  17K   -      100M   -
tank@snap2  17K   -      200M   -
Only one thing has changed: tank now refers to just 100 MB of data. file1 has been deleted and is only referenced by the snapshots. So why don't the snapshots reflect this in their USED column? You might think we should show 100 MB used by snap1; however, this would be misleading, as deleting snap1 would have no effect on the space used by the tank file system. Deleting snap1 would only free up 17K of disk space. We'll come back to this test case in a moment.
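If you want ZFS to confirm that arithmetic before destroying anything, newer ZFS releases support a verbose dry run of zfs destroy (the -n and -v flags postdate the 2010 build shown here, so treat this as a sketch):

# zfs destroy -nv tank@snap1
would destroy tank@snap1
would reclaim 17K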
There is an option to get more detail on the space consumed by the snapshots. Although you can pretty easily deduce from the example above that the snapshots are using 100 MB, the zfs list -o space option saves you from doing the math:
# zfs list -t all -o space -r tank
NAME        AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank        784M   200M  100M      100M    0              154K
tank@snap1  -      17K   -         -       -              -
tank@snap2  -      17K   -         -       -              -
Here we can clearly see that of the 200 MB used by our file system, 100 MB is used by snapshots (file1) and 100 MB is used by the dataset itself (file2). Of course, there are other factors that can affect the total amount used; see the zfs man page for details.
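The same breakdown is also exposed as individual dataset properties, so a sketch along these lines should show it without the full listing (values from the output above):

# zfs get usedbysnapshots,usedbydataset,usedbychildren tank
NAME  PROPERTY         VALUE  SOURCE
tank  usedbysnapshots  100M   -
tank  usedbydataset    100M   -
tank  usedbychildren   154K   -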
Now, if we were to delete snap1 (we know this is safe, because it's not using any space):
# zfs destroy tank@snap1
# zfs list -t all -r tank
NAME        USED  AVAIL  REFER  MOUNTPOINT
tank        200M  784M   100M   /tank
tank@snap2  100M  -      200M   -
We can see that snap2 now shows 100 MB used. If I were to delete snap2, I would be deleting 100 MB of data (or reclaiming 100 MB of space).
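That last step, sketched with the numbers the accounting above predicts (this output wasn't captured, so treat the values as expected rather than measured):

# zfs destroy tank@snap2
# zfs list -r tank
NAME  USED  AVAIL  REFER  MOUNTPOINT
tank  100M  884M   100M   /tank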
Now let's look at a more realistic example: my home directory, where I have Time Slider running:
$ zfs list -t all -r -o space rpool/export/home
NAME                                                       AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool/export/home                                          25.4G  35.2G  17.9G     17.3G   0              0
rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30   -      166M   -         -       -              -
rpool/export/home@...:30                                   -      5.06M  -         -       -              -
rpool/export/home@...:56                                   -      5.15M  -         -       -              -
rpool/export/home@...:12                                   -      54.6M  -         -       -              -
rpool/export/home@zfs-auto-snap:monthly-2010-09-01-00:00   -      53.8M  -         -       -              -
rpool/export/home@zfs-auto-snap:weekly-2010-09-08-00:00    -      95.8M  -         -       -              -
rpool/export/home@...:04                                   -      53.9M  -         -       -              -
rpool/export/home@zfs-auto-snap:weekly-2010-09-15-00:00    -      2.06G  -         -       -              -
rpool/export/home@zfs-auto-snap:weekly-2010-09-22-00:00    -      89.7M  -         -       -              -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:00  -      18.3M  -         -       -              -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:15  -      293K   -         -       -              -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:30  -      293K   -         -       -              -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:45  -      1.18M  -         -       -              -
My snapshots are consuming almost 18 GB of space. However, it would appear that I could only reclaim about 2.5 GB of space by deleting all of my snapshots. In reality, 15.5 GB of space is referenced by two or more snapshots.
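To get that snapshot total without adding up the USED column by hand, zfs list can print exact byte counts in script-friendly form; a sketch (the -p flag may not exist on older builds):

$ zfs list -Hp -t snapshot -r -o used rpool/export/home | \
    awk '{sum += $1} END {printf "%.1f GB\n", sum / 2^30}'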
I can get a better idea of which snapshots might reclaim the most space by dropping the -o space option so that the REFER column shows up in the output:
$ zfs list -t all -r rpool/export/home
NAME                                                       USED   AVAIL  REFER  MOUNTPOINT
rpool/export/home                                          35.2G  25.4G  17.3G  /export/home
rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30   166M   -      15.5G  -
rpool/export/home@...:30                                   5.06M  -      28.5G  -
rpool/export/home@...:56                                   5.15M  -      28.5G  -
rpool/export/home@...:12                                   54.6M  -      15.5G  -
rpool/export/home@zfs-auto-snap:monthly-2010-09-01-00:00   53.8M  -      15.5G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-08-00:00    95.8M  -      15.5G  -
rpool/export/home@...:04                                   53.9M  -      17.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-15-00:00    2.06G  -      19.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-22-00:00    89.7M  -      15.5G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:15  293K   -      17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:30  293K   -      17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:45  1.18M  -      17.3G  -
rpool/export/home@zfs-auto-snap:hourly-2010-09-28-12:00    0      -      17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-12:00  0      -      17.3G  -
In the above output, I can see that two snapshots, taken 26 seconds apart, are referring to 28.5 GB of disk space. Let's delete one of those snapshots and check the zfs list output again:
$ pfexec zfs destroy rpool/export/home@...:30
$ zfs list -t all -r rpool/export/home
NAME                                                       USED   AVAIL  REFER  MOUNTPOINT
rpool/export/home                                          35.2G  25.4G  17.3G  /export/home
rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30   166M   -      15.5G  -
rpool/export/home@...:56                                   12.5G  -      28.5G  -
rpool/export/home@...:12                                   54.6M  -      15.5G  -
rpool/export/home@zfs-auto-snap:monthly-2010-09-01-00:00   53.8M  -      15.5G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-08-00:00    95.8M  -      15.5G  -
rpool/export/home@...:04                                   53.9M  -      17.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-15-00:00    2.06G  -      19.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-22-00:00    89.7M  -      15.5G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:15  293K   -      17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:30  293K   -      17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:45  1.18M  -      17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-12:00  537K   -      17.3G  -
I can now clearly see that the remaining snapshot is using 12.5 GB of space, and deleting this snapshot would reclaim much-needed space on my laptop:
$ zpool list rpool
NAME   SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  149G  120G  28.5G  80%  ONLINE  -
$ pfexec zfs destroy rpool/export/home@...:56
$ zpool list rpool
NAME   SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  149G  108G  41.0G  72%  ONLINE  -
And that should be enough to keep Time Slider humming along smoothly and prevent the warning dialog from appearing (lucky you if you haven't seen that yet).