
btrfs interview questions

Top btrfs frequently asked interview questions

Is there any btrfs library interface in C for creating, deleting, or listing btrfs subvolumes?

I want a convenient C API to get the list of subvolumes in a given btrfs partition, as produced when we run the command below.

btrfs subvolume list btrfs/subvol/path


Source: (StackOverflow)

How to discover the btrfs subvolume id of a snapshot?

Given a btrfs snapshot, how can I determine its subvolume ID? It seems btrfs qgroup only outputs bare IDs:

btrfs qgroup show .
0/753    350494720  0
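
Two shell-level approaches that may help (a sketch, assuming a btrfs-progs version with the inspect-internal subcommand; the snapshot path is hypothetical):

# Print the ID of the subvolume containing a given path:
btrfs inspect-internal rootid /path/to/snapshot

# Alternative: pick the ID out of the subvolume list by path
# (run from the filesystem's top-level mount):
btrfs subvolume list . | awk -v p="path/to/snapshot" '$NF == p { print $2 }'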

Source: (StackOverflow)


btrfs ioctl: get file checksums from userspace

I would like to obtain the btrfs checksums for a specific file, but unfortunately I have not found an appropriate ioctl to perform this action. Is it possible? If so, how? I need the stored checksums to try to reduce CPU load in rsync-like scenarios.


Source: (StackOverflow)

How to test if a location is a btrfs subvolume?

In bash scripting, how could I elegantly check whether a specific location is a btrfs subvolume?

I do NOT want to know if the given location is in a btrfs file system (or subvolume). I want to know if the given location is the head of a subvolume.

Ideally, the solution could be written in a bash function so I could write:

if is_btrfs_subvolume $LOCATION; then
    # ... stuff ...
fi 

An 'elegant' solution would be readable, small in code, and small in resource consumption.
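
For what it's worth, one small sketch that fits these constraints (assuming GNU stat; it relies on the fact that the root directory of a btrfs subvolume always has inode number 256):

is_btrfs_subvolume() {
    # The root of a btrfs subvolume always has inode 256; check the
    # filesystem type first so an inode-256 file on ext4 can't match.
    [ "$(stat -f --format=%T "$1" 2>/dev/null)" = "btrfs" ] &&
    [ "$(stat --format=%i "$1" 2>/dev/null)" = "256" ]
}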


Source: (StackOverflow)

Docker with btrfs on Ubuntu

I need help starting the Docker daemon with the btrfs storage driver.

The daemon won't start when I try to start it using -s btrfs; there is an error in the logs (wrong filesystem?).

I am using Ubuntu:

root@ionutmos-VirtualBox:/etc/default# uname -a
Linux ionutmos-VirtualBox 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8    09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux 

I mounted a new btrfs partition at /var/lib/docker2.

    /dev/sda       btrfs     52428800     512  50302720   1% /var/lib/docker2

I have Docker 1.6.2 installed:

/etc/default# docker version
Client version: 1.6.2
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 7c8fca2
OS/Arch (client): linux/amd64
Server version: 1.6.2
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 7c8fca2
OS/Arch (server): linux/amd64

I edited the /lib/systemd/system/docker.service file, and it now looks like this:

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target docker.socket
Requires=docker.socket
[Service]
EnvironmentFile=-/etc/default/docker
ExecStart=/usr/bin/docker -d -H fd:// $OPTIONS
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
Also=docker.socket

I added two options to the /etc/default/docker file:

OPTIONS="--storage-driver btrfs"
DOCKER_OPTS="-s btrfs"

When I try to start the Docker daemon manually, this error appears in the logs:

FATA[0000] Shutting down daemon due to errors: error intializing graphdriver: prerequisites for driver not satisfied (wrong filesystem?)

Here is the entire log:

root@ionutmos-VirtualBox:/usr/lib/system-service# docker -d -D -s btrfs
DEBU[0000] waiting for daemon to initialize
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
DEBU[0000] Registering GET, /images/{name:.*}/get
DEBU[0000] Registering GET, /images/{name:.*}/history
DEBU[0000] Registering GET, /images/{name:.*}/json
DEBU[0000] Registering GET, /_ping
DEBU[0000] Registering GET, /events
DEBU[0000] Registering GET, /images/json
DEBU[0000] Registering GET, /images/get
DEBU[0000] Registering GET, /containers/{name:.*}/changes
DEBU[0000] Registering GET, /containers/{name:.*}/logs
DEBU[0000] Registering GET, /exec/{id:.*}/json
DEBU[0000] Registering GET, /containers/{name:.*}/attach/ws
DEBU[0000] Registering GET, /images/search
DEBU[0000] Registering GET, /containers/json
DEBU[0000] Registering GET, /containers/{name:.*}/export
DEBU[0000] Registering GET, /containers/{name:.*}/json
DEBU[0000] Registering GET, /containers/{name:.*}/top
DEBU[0000] Registering GET, /containers/{name:.*}/stats
DEBU[0000] Registering GET, /info
DEBU[0000] Registering GET, /version
DEBU[0000] Registering GET, /images/viz
DEBU[0000] Registering GET, /containers/ps
DEBU[0000] Registering POST, /auth
DEBU[0000] Registering POST, /exec/{name:.*}/start
DEBU[0000] Registering POST, /exec/{name:.*}/resize
DEBU[0000] Registering POST, /images/create
DEBU[0000] Registering POST, /images/load
DEBU[0000] Registering POST, /images/{name:.*}/push
DEBU[0000] Registering POST, /containers/{name:.*}/start
DEBU[0000] Registering POST, /containers/{name:.*}/rename
DEBU[0000] Registering POST, /containers/{name:.*}/exec
DEBU[0000] Registering POST, /build
DEBU[0000] Registering POST, /containers/{name:.*}/unpause
DEBU[0000] Registering POST, /containers/{name:.*}/restart
DEBU[0000] Registering POST, /containers/{name:.*}/wait
DEBU[0000] Registering POST, /containers/{name:.*}/attach
DEBU[0000] Registering POST, /containers/{name:.*}/copy
DEBU[0000] Registering POST, /containers/{name:.*}/resize
DEBU[0000] Registering POST, /commit
DEBU[0000] Registering POST, /images/{name:.*}/tag
DEBU[0000] Registering POST, /containers/create
DEBU[0000] Registering POST, /containers/{name:.*}/kill
DEBU[0000] Registering POST, /containers/{name:.*}/pause
DEBU[0000] Registering POST, /containers/{name:.*}/stop
DEBU[0000] Registering DELETE, /containers/{name:.*}
DEBU[0000] Registering DELETE, /images/{name:.*}
DEBU[0000] Registering OPTIONS,
DEBU[0000] docker group found. gid: 125
FATA[0000] Shutting down daemon due to errors: error intializing graphdriver: prerequisites for driver not satisfied (wrong filesystem?)
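
The usual cause of this message is that the daemon's graph root (by default /var/lib/docker) is not itself on btrfs; only /var/lib/docker2 is. A sketch of one possible fix, using the -g/--graph option that Docker 1.6 understands, is to point the daemon at the btrfs mount:

# /etc/default/docker (sketch) — keep the btrfs driver but move the
# graph root onto the btrfs mount point; mirror the setting in
# whichever variable your init system actually reads:
OPTIONS="--storage-driver btrfs --graph /var/lib/docker2"
DOCKER_OPTS="-s btrfs -g /var/lib/docker2"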

Source: (StackOverflow)

How do BTRFS and ZFS snapshots work?

More specifically, how do they manage to look at an entire subvolume and remember everything about it (files, file sizes, folder structure) while fitting it into such a small amount of data?
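
The short version is that they don't copy anything: a snapshot is a new tree root pointing at the same copy-on-write B-tree nodes as the original, and blocks are duplicated only when one side is later modified. A quick btrfs demo (paths hypothetical) makes this visible:

# Copy-on-write in action (sketch; /mnt/pool is hypothetical):
btrfs subvolume create /mnt/pool/data
dd if=/dev/urandom of=/mnt/pool/data/big bs=1M count=1024
sync
btrfs subvolume snapshot /mnt/pool/data /mnt/pool/data-snap
sync
btrfs filesystem df /mnt/pool   # usage grows by KiBs of metadata, not 1 GiB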


Source: (StackOverflow)

How do I recover a btrfs filesystem that will not mount (but mount returns without error), checks OK, and errors out on restore?

SYNOPSIS

mount -o degraded,ro /dev/disk/by-uuid/ec3 /mnt/ec3/ && echo noerror

noerror

DESCRIPTION
mount -t btrfs fails, yet returns success (noerror above), and only since the last reboot.
btrfs check looks clean to me (I am a simple user).
btrfs restore errors out with "We have looped trying to restore files in"...
I have a lingering artifact: btrfs filesystem show reports "*** Some devices missing" on the volume. Because of this it would not automount at boot, and I have been mounting manually (while searching for a resolution to that).
I have previously used rdfind to deduplicate with hard links (as many as 10 per file).
I had just backed up using btrfs send/receive, but I have to check whether I have everything; this was the main RAID-1 server.

DETAILS

btrfs-find-root /dev/disk/by-uuid/ec3

Superblock thinks the generation is 103093
Superblock thinks the level is 1
Found tree root at 8049335181312 gen 103093 level 1

btrfs restore -Ds /dev/disk/by-uuid/ec3 restore_ec3

We have looped trying to restore files in

df -h /mnt/ec3/

Filesystem Size Used Avail Use% Mounted on
/dev/dm-0 16G 16G 483M 97% /

mount -o degraded,ro /dev/disk/by-uuid/ec3 /mnt/ec3/ && echo noerror

noerror

df /mnt/ec3/

Filesystem 1K-blocks Used Available Use% Mounted on
/dev/dm-0 16775168 15858996 493956 97% /

btrfs filesystem show /dev/disk/by-uuid/ec3

Label: none uuid: ec3
Total devices 3 FS bytes used 1.94TiB
devid 6 size 2.46TiB used 1.98TiB path /dev/mapper/26d2e367-65ea-47ad-b298-d5c495a33efe
devid 7 size 2.46TiB used 1.98TiB path /dev/mapper/3c193018-5956-4637-9ec2-dd5e49a4a412
*** Some devices missing #### comment, this is an old artifact unchanged since before unable to mount

btrfs check /dev/disk/by-uuid/ec3

Checking filesystem on /dev/disk/by-uuid/ec3
UUID: ec3
checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
found 2132966506496 bytes used err is 0
total csum bytes: 2077127248
total tree bytes: 5988204544
total fs tree bytes: 3492638720
total extent tree bytes: 242151424
btree space waste bytes: 984865976
file data blocks allocated: 3685012271104
referenced 3658835013632
btrfs-progs v4.1.2

Update: after a reboot (I had to wait for a slot to take the system down), it now mounts manually, but not completely cleanly.

Now asking the question on IRC (#btrfs):

http://pastebin.com/359EtZQX

Hi, I'm scratching my head and have searched in vain for a way to remove "*** Some devices missing". Can anyone give me a clue how to clean this up?
- Is there a good way to 'fix' the artifacts I am seeing? Trying: scrub, balance. To try: resize, defragment.
- Would I be advised to move to a new clean volume set?
- Would a fix via btrfs send/receive be safe from propagating errors?
- Or (more painfully) rsync to a clean volume? (My first ever day using IRC.)
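
One commonly suggested cleanup for a lingering "*** Some devices missing" (a sketch; attempt it only with a verified backup, and note that btrfs device delete needs a writable mount):

# Mount writable (degraded if a normal mount fails), then drop the
# phantom device record; btrfs migrates any chunks it still references:
mount -o degraded /dev/disk/by-uuid/ec3 /mnt/ec3/
btrfs device delete missing /mnt/ec3/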


Source: (StackOverflow)

How can I get 3 uncorrectable errors on a BTRFS with 3 disks?

I did this:

/sbin/btrfs scrub start -B /mnt/ospool

ospool is a pool with 3 sata drives.

$ sudo btrfs filesystem show /mnt/ospool
Label: ospool  uuid: ef62a9ec-887f-4a70-9c89-cf4ce29dfeb1
    Total devices 3 FS bytes used 125.16GiB
    devid    1 size 93.13GiB used 82.03GiB path /dev/sdc3
    devid    2 size 97.66GiB used 86.03GiB path /dev/sdd3
    devid    3 size 97.66GiB used 86.00GiB path /dev/sde3

I got this response:

scrub done for ef62a9ec-887f-4a70-9c89-cf4ce29dfeb1
        scrub started at Wed Dec 23 18:05:01 2015 and finished after 1074 seconds
        total bytes scrubbed: 231.87GiB with 19 errors
        error details: read=19
        corrected errors: 16, uncorrectable errors: 3, unverified errors: 0

How can I get 3 uncorrectable errors on a BTRFS with 3 disks?
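Whether a scrub can correct an error depends on the chunk profile, not on the number of disks: given multiple devices, mkfs.btrfs has historically defaulted to raid0 for data and raid1 only for metadata. A quick check of where redundancy actually exists:

btrfs filesystem df /mnt/ospool
# Rough shape of the output; if Data shows RAID0 or single, data
# blocks have no second copy for scrub to repair from:
#   Data, RAID0: total=..., used=...
#   Metadata, RAID1: total=..., used=...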


Source: (StackOverflow)

graphdriver - prior storage driver \"btrfs\" failed: prerequisites for driver not satisfied (wrong filesystem?)

I tried to use the following command:

> docker run -d -p 8080:80 -p 8800:8800 -p 9002:9002 --privileged=true -e "GALAXY_LOGGING=full" bgruening/galaxy-stable 

and received the following Docker error messages (http://0.0.0.0:9002):

time="2016-01-04T10:31:04.992632146Z" level=error msg="[graphdriver] prior storage driver \"btrfs\" failed: prerequisites for driver not satisfied (wrong filesystem?)" 
time="2016-01-04T10:31:04.992777695Z" level=fatal msg="Error starting daemon: error initializing graphdriver: prerequisites for driver not satisfied (wrong filesystem?)" 
time="2016-01-04T10:31:06.329986754Z" level=error msg="[graphdriver] prior storage driver \"btrfs\" failed: prerequisites for driver not satisfied (wrong filesystem?)" 
time="2016-01-04T10:31:06.330140404Z" level=fatal msg="Error starting daemon: error initializing graphdriver: prerequisites for driver not satisfied (wrong filesystem?)" 
time="2016-01-04T10:31:09.000757480Z" level=error msg="[graphdriver] prior storage driver \"btrfs\" failed: prerequisites for driver not satisfied (wrong filesystem?)" 
time="2016-01-04T10:31:09.000911410Z" level=fatal msg="Error starting daemon: error initializing graphdriver: prerequisites for driver not satisfied (wrong filesystem?)" 

I am using Linux Mint 17.2 and Docker 1.9.1 (build a34a1d5) in a VirtualBox VM. My host OS is CentOS 6.6.

$ uname -a
Linux galaxy-VirtualBox 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

> df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  6.1G  4.0K  6.1G   1% /dev
tmpfs          tmpfs     1.3G  1.2M  1.3G   1% /run
/dev/sda1      ext4      1.9T  122G  1.7T   7% /
none           tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
none           tmpfs     5.0M     0  5.0M   0% /run/lock
none           tmpfs     6.2G  1.5M  6.2G   1% /run/shm
none           tmpfs     100M   32K  100M   1% /run/user

What did I do wrong?
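
"prior storage driver" means the daemon found state left over from an earlier run that used the btrfs driver under /var/lib/docker, and it refuses to fall back silently. Since / is ext4 here, a sketch of the usual recovery is to move that leftover state aside and restart with a driver the filesystem supports:

# Stop the daemon, move the stale btrfs graph state aside, restart
# (aufs/overlay/devicemapper all work on ext4):
service docker stop
mv /var/lib/docker/btrfs /var/lib/docker/btrfs.old
service docker start   # with an ext4-suitable driver, e.g. -s aufs in DOCKER_OPTS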


Source: (StackOverflow)

Auto decrypt multiple LUKS Devices with Mandos

I have been experimenting with Mandos to automatically open an encrypted root device. I wanted to set up an encrypted btrfs RAID-1 (sda1 and sdb1: LUKS). The first device is decrypted correctly, but the second is not opened. Is there a way to do this?
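
I have not verified this with Mandos specifically, but for any initramfs-based unlocker both LUKS members have to be listed in /etc/crypttab and included in the initramfs, or only the root device gets opened. A hypothetical sketch (names and UUIDs invented):

# /etc/crypttab — one line per RAID member; the Debian-specific
# "initramfs" option forces both into the initramfs:
raid1_a UUID=<uuid-of-sda1> none luks,initramfs
raid1_b UUID=<uuid-of-sdb1> none luks,initramfs

# then rebuild the initramfs:
update-initramfs -u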


Source: (StackOverflow)

Cannot create snapshot - taking over in Fedora an existing btrfs /var/lib/docker filesystem created under openSUSE

I was able to mount /dev/sdb3 as the /var/lib/docker btrfs filesystem; it was originally created under openSUSE 42.1 and is now mounted under Fedora 23.

Still, I cannot create a config with

snapper -c docker create-config /var/lib/docker

returns:

Creating config failed (creating btrfs snapshot failed).

And /var/log/snapper.log states

ERR libsnapper(19252) Btrfs.cc(createConfig):112 - create subvolume failed, ioctl(BTRFS_IOC_SUBVOL_CREATE) failed, errno:17 (File exists)

As a matter of fact, a lot of snapshots/subvolumes already exist, as listed by

btrfs subvolume list /var/lib/docker

ID 257 gen 6407 top level 5 path @
ID 258 gen 6272 top level 257 path .snapshots
ID 259 gen 14 top level 258 path .snapshots/1/snapshot
ID 261 gen 6286 top level 257 path btrfs/subvolumes/bd2640ff850fed342d2405ad80e23d5643b957f37f6e9809bd63f8a07db0c45f
ID 262 gen 24 top level 257 path btrfs/subvolumes/b2638aec4b7d26050fea626cffc64d5e8bada6f695b94fd6472f8521538b8398

etc.

How can I get back to a clean-slate situation where I do not corrupt the filesystem and can create a new snapper config? Let's say I don't need the btrfs snapshot recovery features.
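
errno 17 (File exists) suggests snapper is failing because a .snapshots subvolume is already present from the openSUSE installation. A sketch of a reset (this discards the old snapshots and nothing else; paths assume .snapshots is visible under the mount):

# Delete the nested snapshot subvolume first, then .snapshots itself,
# then let snapper recreate its config from scratch:
btrfs subvolume delete /var/lib/docker/.snapshots/1/snapshot
btrfs subvolume delete /var/lib/docker/.snapshots
snapper -c docker create-config /var/lib/docker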


Source: (StackOverflow)

Compare two directory trees

I have a btrfs filesystem consisting of several hard drives, holding about 11 TB of data. My backup is a NAS that exports one path via NFS. The path is mounted on the machine with the btrfs filesystem, and rsync is called to keep the NFS export synced with the main filesystem. I call rsync with one -v and send the results of each run to my email account, to be sure everything is synchronized correctly.

Now, by pure chance, I found out that some directories were not synchronized correctly: the directories existed on the NAS, but they were empty. It is most likely not a permissions issue, since rsync runs as root. So it seems that in my situation rsync is not entirely trustworthy, and I would like to compare the two directory trees to see whether any files are missing on the NAS, and/or whether there are files that no longer exist on the btrfs side and should have been deleted by rsync (I use the --delete option).

I am therefore looking for a program or script that can help me check whether rsync is working correctly. I don't need anything complicated like checksums; all I want to know is whether the NAS contains all the files in the btrfs filesystem.

Any suggestions where to start looking?

Yours, Stefan
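
A sketch of two low-tech starting points, assuming the NAS export is mounted at /mnt/nas and the btrfs filesystem at /data (both paths hypothetical):

# 1. Compare the bare file lists of the two trees:
diff <(cd /data && find . | sort) <(cd /mnt/nas && find . | sort)

# 2. Or let rsync itself report differences without copying anything
#    (-n = dry run; sizes and mtimes are compared, not checksums):
rsync -avn --delete /data/ /mnt/nas/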


Source: (StackOverflow)

btrfs incremental backup with no changes increases backup volume?

I have written a little bash script, executed via systemd, that creates read-only btrfs snapshots and syncs them to my second btrfs drive for backups.

Creating the snapshots works. First I sent a full snapshot as a base.

Then, every time my script runs, a symlink to the last snapshot is created (lastKeep).

Now, sending incremental snapshots without changing anything on the source increases my backup size significantly, and it is not just metadata: my backup is about 388 GB, and about 20 GB is added on top every time.

btrfs send -p $SOURCE_MOUNTPOINT/.snapshots/$TARGET/lastKeep $SNAPSHOT | btrfs receive $BACKUP_MOUNTPOINT/$TARGET/

I thought it would be just a few KB of metadata!
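
One way to narrow this down (a sketch reusing the script's own variables): measure the raw stream on its own. If the stream itself is ~20 GB, the parent chosen via the lastKeep symlink is probably not the snapshot you think it is, so send is diffing against the wrong base:

# Size the incremental stream without touching the backup drive:
btrfs send -p "$SOURCE_MOUNTPOINT/.snapshots/$TARGET/lastKeep" "$SNAPSHOT" | wc -c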


Source: (StackOverflow)

btrfs raid1 with multiple devices [closed]

I have 6 devices: 4TB, 3TB, 2TB, 2TB, 1.5TB, 1TB (/dev/sda to /dev/sdf).

First question:

With RAID-1 I'd have:

  • 2TB mirrored in 2TB
  • 1TB mirrored in 0.5@4TB + 0.5@3TB
  • 1.5TB mirrored in 1.25@4TB + 0.25@3TB
  • the rest 2.25 of 3TB mirrored in the rest 2.25TB of 4TB.

My total size would in that case be (4 + 3 + 2 + 2 + 1.5 + 1)/2 = 13.5/2 = 6.75TB.

Will $ mkfs.btrfs --data raid1 --metadata raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf provide me with approximately 6.75TB? If yes, how many disks (and which ones) can I afford to lose?

Second question:

With that RAID-1 layout I could afford, for example, to lose three disks:

  • one 2TB disk,
  • the 1TB disk and
  • the 1.5TB disk,

without losing data.

How can I have the same freedom to lose those same disks with btrfs?

Thanks!
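
For context: btrfs raid1 stores exactly two copies of every chunk regardless of device count, so the 6.75 TB arithmetic is roughly right, but it also means an unlucky pair of missing devices can lose data. Rather than trusting the arithmetic, a sketch to see what btrfs itself predicts (needs a reasonably recent btrfs-progs):

# Create, mount, and ask btrfs for the effective raid1 capacity:
mkfs.btrfs --data raid1 --metadata raid1 /dev/sd[a-f]
mount /dev/sda /mnt
btrfs filesystem usage /mnt   # "Free (estimated)" reflects the raid1 capacity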


Source: (StackOverflow)

Programmatically create a btrfs file system whose root directory has a specific owner

Background

I have a test script that creates and destroys file systems on the fly, used in a suite of performance tests.

To avoid running the script as root, I have a disk device /dev/testdisk that is owned by a specific user testuser, along with a suitable entry in /etc/fstab:

$ ls -l /dev/testdisk
crw-rw---- 1 testuser testuser 21, 1 Jun 25 12:34 /dev/testdisk
$ grep testdisk /etc/fstab
/dev/testdisk /mnt/testdisk auto noauto,user,rw 0 0

This allows the disk to be mounted and unmounted by a normal user.

Question

I'd like my script (which runs as testuser) to programmatically create a btrfs file system on /dev/testdisk such that the root directory is owned by testuser:

$ mount /dev/testdisk /mnt/testdisk
$ ls -la /mnt/testdisk
total 24
drwxr-xr-x 3 testuser  testuser   4096 Jun 25 15:15 .
drwxr-xr-x 3 root      root       4096 Jun 23 17:41 ..
drwx------ 2 root      root      16384 Jun 25 15:15 lost+found

Can this be done without running the script as root, and without resorting to privilege escalation (use of sudo) within the script?

Comparison to other file systems

With ext{2,3,4} it's possible to create a filesystem whose root directory is owned by the current user, with the following command:

mkfs.ext{2,3,4} -F -E root_owner /dev/testdisk

Workarounds I'd like to avoid (if possible)

I'm aware that I can use the btrfs-convert tool to convert an existing (possibly empty) ext{2,3,4} file system to btrfs format. I could use this workaround in my script (by first creating an ext4 filesystem and then immediately converting it to btrfs), but I'd rather avoid it if there's a way to create the btrfs file system directly.
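
For completeness, a sketch of that workaround (assuming an mke2fs new enough to accept -E root_owner, which defaults to the calling user's uid:gid when given without a value):

# Create an ext4 filesystem whose root is owned by the current user,
# then convert it in place to btrfs, preserving ownership:
mkfs.ext4 -F -E root_owner /dev/testdisk
btrfs-convert /dev/testdisk
mount /dev/testdisk   # user-mountable per the /etc/fstab entry above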


Source: (StackOverflow)