====== Btrfs documentation and tricks ======
The B-Tree File System, usually called btrfs (or "butterfs", "betterfs", etc.), "is a new copy on write (CoW) filesystem for Linux aimed at implementing advanced features while focusing on fault tolerance, repair and easy administration. Jointly developed at Oracle, Red Hat, Fujitsu, Intel, SUSE, STRATO and many others, Btrfs is licensed under the GPL and open for contribution from anyone."\\
Source: https://

===== How to Setup RAID Using btrfs =====
Regardless of what type of RAID setup you would like to use, you first need to find the ''/dev/sdX'' names of the disks you intend to use, with ''fdisk'' or ''lsblk''. The last step before you make your disks into a btrfs pool is to partition each drive with a single, preferably empty, partition using whatever utility gets the job done; ''cfdisk'' comes highly recommended. Once you have done this you can continue on to either the RAID 1 or RAID 10 section.

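The preparation steps above can be sketched as follows; the device name ''/dev/sdb'' is only an example, substitute your own disks:
<code>
# list all block devices with their sizes to identify the disks
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# partition a drive interactively with a single empty partition
# (repeat for every disk that will join the pool)
cfdisk /dev/sdb
</code>
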
==== RAID 1 ====
The switches ''-d raid1'' and ''-m raid1'' select the RAID 1 profile for data and metadata respectively, so both are mirrored across the drives:
<code>mkfs.btrfs -d raid1 -m raid1 /dev/sdX1 /dev/sdY1</code>

==== RAID 10 ====
RAID 10 requires that you use 4 or more drives, increasing only in even numbers. The below command makes a RAID 10 with both the data and metadata striped and mirrored across all disks:
<code>mkfs.btrfs -d raid10 -m raid10 /dev/sdX1 /dev/sdY1 /dev/sdZ1 /dev/sdW1</code>

===== Mounting =====
Mounting a btrfs RAID is very simple: all you have to do is mount any one of the RAID members and the whole pool becomes available. One difference between RAID 1 and RAID 10 is that ''lsblk'' shows the partitions on the drives for members in RAID 1 and does not for RAID 10.

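A minimal sketch of mounting by hand, assuming a RAID 1 member ''/dev/sdb1'' and the example mount point ''/mnt/raid1'':
<code>
mkdir -p /mnt/raid1
mount /dev/sdb1 /mnt/raid1        # any member of the pool works
btrfs filesystem show /mnt/raid1  # confirm all members are present
</code>
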
Example:
<code>
NAME   MAJ:MIN RM   SIZE RO TYPE
sda      8:0    0 111.8G  0 disk
└─sda1   8:1    0 111.8G  0 part
sdb      8:16   0 931.5G  0 disk
└─sdb1   8:17   0 931.5G  0 part
sdc      8:32   0 931.5G  0 disk
└─sdc1   8:33   0 931.5G  0 part
sdd      8:48   0 931.5G  0 disk
sde      8:64   0 931.5G  0 disk
sdf      8:80   1 931.5G  0 disk
sdg      8:96   1 931.5G  0 disk
sdh      8:112  1 931.5G  0 disk
sdi      8:128  1 931.5G  0 disk
</code>

The fstab for the above configuration:
<code>
# <file system>  <mount point>  <type>  <options>              <dump> <pass>
/dev/sdb1        /mnt/raid1     btrfs   defaults,compress=lzo  0      0
/dev/sdd         /mnt/raid10    btrfs   defaults,compress=lzo  0      0
</code>
As you can see it does not matter which of the disks in the RAID is mounted. You can also see that this is where the mount option for data compression is chosen, which in this case is LZO. It is best to set the compression method before you begin moving data to the drives so that you do not have to compress it later. There is little reason not to enable it: transparent compression saves space and is cheap on modern CPUs.

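If data was already written before compression was enabled, it can still be compressed in place with a recursive defragment; a sketch, assuming the example mount point ''/mnt/raid1'' from above:
<code>
# rewrite existing files with LZO compression (can take a long time)
btrfs filesystem defragment -r -v -clzo /mnt/raid1
</code>
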
===== Monitoring =====
To display the current space usage and drives in a pool:
<code>btrfs filesystem usage /mnt/raid1</code>

It is highly recommended to set up scheduled scrubbing as a means of error detection and correction on your btrfs pools; look at the [[btrfs#scrubbing|Scrubbing]] section below for how to do so.

===== Replacing a Hard Drive =====
Comment out the disk pool that needs the drive replaced in ''/etc/fstab'', shut the machine down, swap the failed drive for the new one, and boot back up.

Find out what the disk id is in /dev. In this example the drive that has just been inserted is ''/dev/sdh'':
<code>lsblk</code>
Example:
<code>
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdf                         8:80   1 931.5G  0 disk
sdd                         8:48   0 931.5G  0 disk
sdb                         8:16   0 931.5G  0 disk
└─sdb1                      8:17   0 931.5G  0 part
sdi                         8:128  1 931.5G  0 disk
sdg                         8:96   1 931.5G  0 disk
sde                         8:64   0 931.5G  0 disk
sdc                         8:32   0 931.5G  0 disk
└─sdc1                      8:33   0 931.5G  0 part
sda                         8:0    0 111.8G  0 disk
├─sda2                      8:2    0     1K  0 part
├─sda5                      8:5    0 111.3G  0 part
│ ├─waruwaru--vg-swap_1   253:1    0   7.9G  0 lvm  [SWAP]
│ └─waruwaru--vg-root     253:0    0 103.4G  0 lvm  /
└─sda1                      8:1    0   487M  0 part /boot
sdh                         8:112  1 931.5G  0 disk
</code>

List the btrfs filesystems to find out the devid number of the missing drive:
<code>btrfs filesystem show</code>
Example:
<code>
warning devid 5 not found already
Label: none  uuid: 63e6a264-9951-4271-87f3-197a0c745036
        Total devices 6 FS bytes used 991.08GiB
        devid    1 size 931.51GiB used ... path /dev/sdd
        devid    2 size 931.51GiB used ... path /dev/sde
        devid    3 size 931.51GiB used ... path /dev/sdf
        devid    4 size 931.51GiB used ... path /dev/sdg
        devid    6 size 931.51GiB used ... path /dev/sdi
        *** Some devices missing
</code>
Force the array to mount in degraded mode:
<code>mount -o degraded /dev/sdd /mnt/raid10</code>
Start the drive rebuild process, replacing the missing devid 5 with the new drive:
<code>btrfs replace start 5 /dev/sdh /mnt/raid10</code>
Show the rebuild progress:
<code>btrfs replace status /mnt/raid10</code>
Should look something like this:
<code>37.8% done, 0 write errs, 0 uncorr. read errs</code>

===== Scrubbing =====
Scrubbing is a type of checksum validation between the members of a RAID array, commonly known as a consistency check in hardware RAID setups. With FreeNAS, using ZFS, scrubs are part of the web GUI, which has the function of scheduling them on a regular basis. It is highly recommended to set up scheduled scrubs on your RAID arrays no matter what type of RAID is being used.

To start a scrub manually with btrfs run:
<code>btrfs scrub start /mnt/raid10</code>

To see the status of a scrub (useful to monitor with the ''watch'' command):
<code>btrfs scrub status /mnt/raid10</code>

Create a cron job to run a scrub on the first of every month, for example in root's crontab:
<code>0 0 1 * * /usr/bin/btrfs scrub start -Bq /mnt/raid10</code>

===== Fixing Errors =====
The dread of any data hoarder: checksum errors. If you come across a checksum error during a scrub, you should be greeted with a message like this:
<code>
scrub started at Sat Feb  1 12:00:00 2020 and finished after 03:21:29
total bytes scrubbed: 991.08GiB with 1 errors
error details: csum=1
corrected: 0, uncorrectable: 1, unverified: 0
</code>

If you missed it while the scrub was running you can check ''dmesg'' to see if anything happened:
<code>dmesg | grep -i "checksum error"</code>

The corrupted file can be found using the ''find'' command with the inode number reported in the kernel log:
<code>find /mnt/raid10 -inum <inode></code>

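Putting the two steps together, the inode number can be pulled out of a kernel log line with ''sed''. The log line below is a sample for illustration, and ''/mnt/raid10'' is an example mount point:

```shell
# sample BTRFS checksum-failure line as it appears in dmesg
line='BTRFS warning (device sdd): csum failed root 5 ino 257 off 0 csum 0x98f94189 expected csum 0x0ab1fd64 mirror 1'

# extract the inode number (the field after "ino")
ino=$(printf '%s\n' "$line" | sed -n 's/.*ino \([0-9]*\).*/\1/p')
echo "$ino"    # prints 257

# then resolve it to a path on the mounted pool:
# find /mnt/raid10 -inum "$ino"
```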
There are probably ways of fixing it, but more than likely you will be out of luck and have to restore the affected files from a backup. Possible recovery approaches were discussed at [[https://...]] and [[http://...]].