Data of double insurance: Azure storage + ZFS

Time: 09-21

What is ZFS?

The name ZFS originally came from "Zettabyte File System", although the abbreviation itself no longer carries any particular meaning. It is best described as a high-capacity file system with a rich set of extended features. In fact, calling ZFS a "file system" is something of a misnomer: it goes beyond a file system in the traditional sense, combining the concepts and functionality of a logical volume manager with a feature-rich, massively scalable file system.
ZFS was designed and developed by a team at Sun led by Jeff Bonwick and Matt Ahrens. Development began in 2001; the project was formally announced in 2004, and in 2005 it was integrated into the main trunk of Solaris and shipped as part of OpenSolaris. ZFS has many excellent features that make it well suited to enterprise servers. In particular, it is designed to protect data integrity, and it provides built-in support for snapshots, replication, compression, and deduplication.
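As a quick taste of those built-in features, the sketch below shows how snapshots, compression, and deduplication are enabled from the command line. The pool name `tank` and dataset `tank/data` are assumptions for illustration; they must already exist, and the commands require root on a host with ZFS installed.

```shell
# Take a point-in-time snapshot of a dataset (assumed: tank/data exists)
zfs snapshot tank/data@before-upgrade

# Roll the dataset back to that snapshot if something goes wrong
zfs rollback tank/data@before-upgrade

# Turn on transparent compression and deduplication per dataset
zfs set compression=lz4 tank/data
zfs set dedup=on tank/data

# Replication: serialize a snapshot and receive it into another pool
zfs send tank/data@before-upgrade | zfs receive backup/data
```

Note that deduplication is memory-hungry and is usually left off unless the workload is known to contain many duplicate blocks.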

Azure storage + ZFS

The Azure storage service automatically replicates your data to guard against unexpected hardware failures and keep it available at all times. By default, three copies are kept within a single region; the geo-redundant option creates three additional copies in a region hundreds of kilometers away, further improving reliability and disaster recovery. The read-access geo-redundant option goes one step further, raising availability to 99.99%.
ZFS itself also offers many data-protection mechanisms, but with Azure storage underneath you do not need to worry about most of them. However, while replication makes it possible to recover data after a failure, it says nothing about whether the data was valid in the first place. ZFS solves this problem by storing a 256-bit checksum (or cryptographic hash) in the metadata for every block it writes. In addition, ZFS's variable block size can help your data compress better.
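The end-to-end integrity checking described above can be inspected and exercised with a few commands. The pool name `tank` is an assumption; `scrub` re-reads every block and verifies it against the stored checksum, repairing from redundancy where possible.

```shell
# Inspect the checksum algorithm (the default is on, i.e. fletcher4)
zfs get checksum tank

# Optionally switch to a cryptographic hash for stronger guarantees
zfs set checksum=sha256 tank

# recordsize caps the variable block size used for large files
zfs set recordsize=128K tank

# Walk the whole pool, verifying every block against its checksum
zpool scrub tank
zpool status -v tank    # reports any checksum errors found
```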
Below, the private network-disk backup solution I introduced last month serves as a simple example.

Simple case

In the spirit of sharing good things, our team packaged the Seafile-based private network disk into a convenient one-click Azure deployment: the Resource Manager template ubuntu-netdisk-setup. The template deploys an Ubuntu 16.04 virtual machine and uses ZFS for the data set to guarantee data integrity. Unlike traditional file systems, which must reside on a single device or rely on a separate volume manager to span multiple devices, ZFS is built on virtual storage pools called "zpools"; each pool is composed of one or more virtual devices (vdevs).
 # Create seafile-data with the help of ZFS
 # ----------------------------------------
 zpool create -f ${ZPOOL_NAME} /dev/sdc
 zpool set cachefile=/etc/zfs/zpool.cache ${ZPOOL_NAME}
 zfs create ${ZPOOL_NAME}/${ZFS_DATASET}
 zfs set compression=gzip ${ZPOOL_NAME}/${ZFS_DATASET}
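After the pool and dataset are created, their state can be confirmed as follows. The concrete names `tank` and `seafile-data` stand in for `${ZPOOL_NAME}` and `${ZFS_DATASET}` here and are assumptions for illustration.

```shell
# Pool health and device layout
zpool status tank

# Dataset, space usage, and mountpoint
zfs list tank/seafile-data

# Confirm the compression property took effect (should report gzip)
zfs get compression tank/seafile-data
```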


For details, see the corresponding Azure Resource Manager deployment script in the template's repository.

Advanced features
To improve file system read/write performance, ZFS offers two efficient mechanisms: the L2ARC cache (Level 2 Adjustable Replacement Cache) and the ZIL (ZFS Intent Log). On a physical machine, a small number of high-speed disks (SSDs) can be added to the storage pool while ordinary disks serve as the main storage medium, approaching the performance of an all-SSD pool at a much lower price. In the Azure environment we can take a similar approach: use the VM's temporary disk as L2ARC (it is only a cache, so being emptied after a restart does no harm), create a small SSD disk as the ZIL device, and use ordinary disks as the main storage. A configuration like this performs quite well.
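That layout can be sketched with the commands below. The pool name `tank` and all device paths are assumptions; on an Azure VM the temporary disk is commonly `/dev/sdb`, but you should verify the actual device names before running anything.

```shell
# Main storage: ordinary (standard) data disks
zpool create -f tank /dev/sdc /dev/sdd

# L2ARC read cache on the VM's temporary disk
# (it is only a cache, so losing it on a restart is harmless)
zpool add tank cache /dev/sdb

# ZIL (separate log device) on a small premium SSD
# to accelerate synchronous writes
zpool add tank log /dev/sde
```

Cache and log devices can also be removed later with `zpool remove`, so this tuning is reversible.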
