Updated March 10, 2023
Introduction to Dataset ZFS
ZFS, created by Sun Microsystems, is a combined file system and volume manager in which data placement and storage are controlled and managed as datasets. The Zettabyte File System protects data integrity and scales easily, and replication and deduplication of data can be done with little effort. It is a 128-bit file system that can theoretically scale to 256 quadrillion zettabytes. All disks and storage are managed as a single entity, and if additional capacity is needed, more drives can be added easily. Very large file sizes are supported, and two copies of metadata are stored on disk when data is written.
What is Dataset ZFS?
- A ZFS dataset is a filesystem within the ZFS namespace: it is mounted and behaves like any other file system for storage, and it acts as the repository for all of its own metadata. Most Linux distributions use ZFS through the OpenZFS kernel module (historically also via ZFS-FUSE), and ZFS doubles as the system's logical volume manager.
- Devices are managed as a storage pool in which files are placed, and this pool becomes the datastore for the file systems created inside it. File systems share the pool's storage rather than receiving a fixed, pre-allocated slice, and the pool describes the characteristics of that storage, such as data redundancy, device layout, and data deduplication.
- ZFS is one of the best file systems available today, thanks to its data security and large-scale storage capacity. It is more complex than many file systems, but the protection it offers for data is hard to match, and it can be used in combination with RAID. It is also free and open source, which encourages users to store huge amounts of data on it.
How can we Use it?
- ZFS runs on a single server and can manage any amount of data. If needed, we can add more drives to the pool to grow the data storage. When data is written, the metadata records which disk sections hold the data, the size of the data being stored, and a checksum of its contents. The checksum is used to verify data when a user requests access to it, by comparing the bits present in storage against the recorded value.
- If data is damaged and the storage pool is mirrored, ZFS can retrieve a good copy from another drive and repair the damaged data. ZFS is a copy-on-write system: it does not overwrite data in place once it has been written. Instead, a new version is stored elsewhere and the metadata is updated to point to it, retaining details of the older version.
- The previous data value is checked before copying, and a read, modify, write sequence is followed for all data being committed to the storage drives. Virtual server environments and network file systems are common deployment scenarios for ZFS.
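The checksum-based self-healing described above can be exercised manually with a scrub, which walks every block in the pool and verifies it against the stored checksum. This is a minimal sketch; the pool name `new-pool` matches the example used later in this guide:

```shell
# Verify every block's checksum in the pool; where a mirror copy
# exists, damaged blocks are repaired from it automatically.
sudo zpool scrub new-pool

# Check scrub progress and any repaired or unrecoverable errors.
sudo zpool status -v new-pool
```

Scrubs are routinely scheduled (e.g. monthly) so silent corruption is found before all redundant copies degrade.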
ZFS Dataset Best Practices
- While taking ZFS snapshots, make sure to send them to external storage for future reference; zfs send and zfs receive can be used for this purpose. Snapshots are an easy way to keep file versions in check, so it is worth running a zfs-auto-snapshot script on the device. It is also better to enable compression: the data is stored in compressed form with little impact on CPU or memory. Deduplication should only be used if plenty of RAM is available, because deduplication is very expensive without enough RAM to hold its tables. It is better to create separate datasets for /home/, /var/cache/ or /var/log/ than to keep them in the root filesystem of GNU/Linux.
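The practices above can be sketched as follows. The pool name `tank`, the dataset `home`, and the backup pool `backup` are illustrative assumptions, not names from this guide:

```shell
# Enable lightweight compression on the dataset.
sudo zfs set compression=lz4 tank/home

# Take a dated snapshot of the dataset.
sudo zfs snapshot tank/home@backup-$(date +%F)

# Copy the snapshot to an external pool for safekeeping.
sudo zfs send tank/home@backup-$(date +%F) | sudo zfs receive backup/home
```

The `dataset@snapshot` naming convention is how ZFS distinguishes a snapshot from the live dataset it was taken from.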
- ZFS's built-in NFS sharing works better than the native NFS system: it helps ensure that datasets are mounted properly and in place, so data is served promptly. Do not use NFS kernel exports instead of ZFS NFS, as the former are complex and difficult to maintain. When creating datasets, it is also better to set quotas so that nested datasets stay within the intended storage capacity.
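As a brief sketch of both recommendations, using the hypothetical dataset `tank/home` from the previous example:

```shell
# Share the dataset over NFS through ZFS itself,
# instead of maintaining kernel NFS exports by hand.
sudo zfs set sharenfs=on tank/home

# Cap the dataset, including its nested datasets, at 50 GiB.
sudo zfs set quota=50G tank/home

# Confirm both properties took effect.
zfs get sharenfs,quota tank/home
```

Because `sharenfs` is a dataset property, the share follows the dataset automatically whenever the pool is imported and mounted.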
- While sending snapshots to external storage, it is better to use incremental streams: zfs send -i sends only the changes since a previous snapshot, which saves time. zfs send also preserves dataset properties, unlike rsync, and snapshots that are no longer needed can be removed with zfs destroy.
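A minimal incremental-send sketch, again assuming the hypothetical `tank/home` dataset with two existing snapshots and a `backup` pool that already holds the first one:

```shell
# Send only the blocks that changed between the two snapshots.
sudo zfs send -i tank/home@monday tank/home@tuesday | sudo zfs receive backup/home

# Remove the older snapshot once it is no longer needed.
sudo zfs destroy tank/home@monday
```

The receiving side must already contain the base snapshot (`@monday` here) for the incremental stream to apply.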
Creating the ZFS Datasets
- Ubuntu Server is needed to install ZFS. All the components are managed in a single Ubuntu package, so run the command:
sudo apt install zfsutils-linux
- Once the command has run, check whether it installed properly by running whereis zfs, which shows the location of the ZFS package. ZFS is now installed on the system, and the next step is to create a storage pool.
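The verification step looks like this; the exact paths printed depend on the distribution, so no particular output is assumed:

```shell
# Confirm the ZFS tools are on the PATH and report their version.
whereis zfs
zfs version
```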
- Initially, we have to check which drives we plan to use for the storage pool; this can be done with sudo fdisk -l. The drive names should be noted down for future reference. We can create striped pools or mirrored pools. In a striped pool, data is striped across all drives, whereas in a mirrored pool the same data is duplicated on each drive. Striped pools perform better and can be created with sudo zpool create new-pool /dev/sdb /dev/sdc, where /dev/sdb and /dev/sdc are the names of the two drives.
- Mirrored pools are created using sudo zpool create new-pool mirror /dev/sdb /dev/sdc.
- Now the pool will appear in Ubuntu, and we can use either layout based on our convenience. The status of a pool can be checked with sudo zpool status. In a striped pool, all data is lost if a single drive fails, so users mostly prefer mirrored pools.
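With the pool in place, datasets can be created inside it. The dataset names below (`data` and its nested `logs`) and the mountpoint are illustrative assumptions:

```shell
# Create a dataset, and a nested dataset inside it,
# within the new-pool created above.
sudo zfs create new-pool/data
sudo zfs create new-pool/data/logs

# Give the dataset a custom mountpoint, then list
# every dataset in the pool recursively.
sudo zfs set mountpoint=/mnt/data new-pool/data
zfs list -r new-pool
```

Nested datasets inherit properties such as compression and quota from their parent unless they are overridden, which is what makes per-directory datasets (as recommended above for /home/ or /var/log/) convenient to manage.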
There are several features in ZFS that make it complicated for new users. Additional processing power is sometimes required, which makes it harder for users to manage. Also, running on a single server limits its capacity for parallel processing, unlike parallel file systems spread across multiple servers.
This is a guide to Dataset ZFS. Here we discussed the introduction, how we can use it, best practices, and creating ZFS datasets. You may also have a look at the following articles to learn more –