
LVM interview questions

Top LVM frequently asked interview questions

How to check whether a file is open anywhere in a cluster using GFS and LVM?

I wonder if it is possible to check if a file has already been opened by another node in the same GFS cluster. For example, the fuser command runs cluster-wide in TruCluster. Is it possible to query the lock manager's data via a command or API?
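
As far as I know there is no cluster-wide fuser equivalent shipped with GFS/GFS2, so the usual fallback is to ask every node; a crude sketch, assuming passwordless ssh and hypothetical node names:

# poll each cluster node for processes holding the file open
FILE=/mnt/gfs/shared.dat
for node in node1 node2 node3; do
    echo "== $node =="
    ssh "$node" fuser -v "$FILE"
done
# on GFS2 the lock state itself can be inspected via debugfs:
# cat /sys/kernel/debug/gfs2/*/glocks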


Source: (StackOverflow)

resize2fs: Bad magic number in super-block while trying to open

I am trying to resize a logical volume on CentOS 7 but am running into the following error:

resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/centos-root
Couldn't find valid filesystem superblock.

I have tried adding a new partition (using fdisk) and using vgextend to extend the volume group, then resizing. Resize worked fine for the logical volume using lvextend, but it failed at resize2fs.

I have also tried deleting an existing partition (using fdisk) and recreating it with a larger end block, then resizing the physical volume using lvm pvresize, followed by a resize of the logical volume using lvm lvresize. Again everything worked fine up to this point.

Once I tried to use resize2fs, using both methods as above, I received the exact same error.

Hopefully some of the following will shed some light.

fdisk -l

[root@server~]# fdisk -l

Disk /dev/xvda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009323a

Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *        2048     1026047      512000   83  Linux
/dev/xvda2         1026048    41943039    20458496   8e  Linux LVM
/dev/xvda3        41943040    62914559    10485760   8e  Linux LVM

Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-root: 29.5 GB, 29532094464 bytes, 57679872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

pvdisplay

[root@server ~]# pvdisplay
--- Physical volume ---
PV Name               /dev/xvda2
VG Name               centos
PV Size               19.51 GiB / not usable 2.00 MiB
Allocatable           yes (but full)
PE Size               4.00 MiB
Total PE              4994
Free PE               0
Allocated PE          4994
PV UUID               7bJOPh-OUK0-dGAs-2yqL-CAsV-TZeL-HfYzCt

--- Physical volume ---
PV Name               /dev/xvda3
VG Name               centos
PV Size               10.00 GiB / not usable 4.00 MiB
Allocatable           yes (but full)
PE Size               4.00 MiB
Total PE              2559
Free PE               0
Allocated PE          2559
PV UUID               p0IClg-5mrh-5WlL-eJ1v-t6Tm-flVJ-gsJOK6

vgdisplay

[root@server ~]# vgdisplay
--- Volume group ---
VG Name               centos
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  6
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                2
Act PV                2
VG Size               29.50 GiB
PE Size               4.00 MiB
Total PE              7553
Alloc PE / Size       7553 / 29.50 GiB
Free  PE / Size       0 / 0
VG UUID               FD7k1M-koJt-2veW-sizL-Srsq-Y6zt-GcCfz6

lvdisplay

[root@server ~]# lvdisplay
--- Logical volume ---
LV Path                /dev/centos/swap
LV Name                swap
VG Name                centos
LV UUID                KyokrR-NGsp-6jVA-P92S-QE3X-hvdp-WAeACd
LV Write Access        read/write
LV Creation host, time localhost, 2014-10-09 08:28:42 +0100
LV Status              available
# open                 2
LV Size                2.00 GiB
Current LE             512
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     8192
Block device           253:0

--- Logical volume ---
LV Path                /dev/centos/root
LV Name                root
VG Name                centos
LV UUID                ugCOcT-sTDK-M8EV-3InM-hjIg-2nwS-KeAOnq
LV Write Access        read/write
LV Creation host, time localhost, 2014-10-09 08:28:42 +0100
LV Status              available
# open                 1
LV Size                27.50 GiB
Current LE             7041
Segments               2
Allocation             inherit
Read ahead sectors     auto
- currently set to     8192
Block device           253:1

I've probably done something stupid, so any help would be greatly appreciated!
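
One thing worth ruling out first: the CentOS 7 installer formats the root filesystem as XFS by default, and resize2fs only understands ext2/3/4, which yields exactly this "Bad magic number" error. A quick sketch of the check and, if it turns out to be XFS, the matching grow step:

# identify the filesystem type on the logical volume
lsblk -f /dev/mapper/centos-root      # or: blkid /dev/mapper/centos-root

# if it reports xfs, grow it (while mounted) with xfs_growfs instead of resize2fs
xfs_growfs /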


Source: (StackOverflow)


How to deactivate a LVM2 physical volume to remove the drive?

How can I shut down/"unmount" a Linux LVM2 physical volume?

I plugged an external hard drive into my computer. On the drive is an LVM2 PV with one volume group, which has some logical volumes. I now want to remove this drive properly.

I unmounted the filesystems, deactivated all logical volumes and the volume group.

How can I deactivate the physical volume? Or make the PV and the VG unknown to Linux again? Just the opposite of lvmdiskscan and vgchange -a y?

I want to leave the PV/VG and LVs on the disk intact.
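
For reference, a sketch of the usual teardown sequence (the VG name here is hypothetical); vgexport is the closest LVM gets to making a VG "unknown" to the host while leaving everything on disk intact:

vgchange -a n external_vg     # deactivate every LV in the group (already done above)
vgexport external_vg          # mark the VG as foreign/unknown to this host
# the drive can now be detached; to bring it back later:
# vgimport external_vg && vgchange -a y external_vg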


Source: (StackOverflow)

kpartx: read error when removing mapping

I have a backup procedure that uses kpartx to read from a partitioned LVM volume. Occasionally the device cannot be unmapped.

Right now when I try to remove the mapping I get the following:

# kpartx -d /dev/loop7
read error, sector 0
read error, sector 1
read error, sector 29

I tried dmsetup clean loop7p1, but nothing changed. How can I free the partition without rebooting the server? Thanks.
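
In case it helps, a sketch of the usual escalation path when kpartx -d refuses; the mapping names follow the loop7pN pattern from the question:

dmsetup ls | grep loop7          # list whatever mappings are still present
fuser -vm /dev/mapper/loop7p1    # find any process still holding the partition
dmsetup remove loop7p1           # drop the partition mapping directly
losetup -d /dev/loop7            # then detach the loop device itself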


Source: (StackOverflow)

Can't run Docker container due to device mapper error

I just can't create and run new containers in Docker anymore, but at the same time I can run previously created containers.

When I try to do something like this:

[user@host ~ ] docker run --name=fpm-5.3 debian:jessie
2014/07/12 07:34:08 Error: Error running DeviceCreate (createSnapDevice) dm_task_run failed

From docker.log:

2014/07/12 05:57:11 POST /v1.12/containers/create?name=fpm-5.3
[f56fcb6f] +job create(fpm-5.3)
Error running DeviceCreate (createSnapDevice) dm_task_run failed
[f56fcb6f] -job create(fpm-5.3) = ERR (1)
[error] server.go:1025 Error: Error running DeviceCreate (createSnapDevice) dm_task_run failed
[error] server.go:90 HTTP Error: statusCode=500 Error running DeviceCreate (createSnapDevice) dm_task_run failed

dmsetup status

docker-8:1-1210426-pool: 0 209715200 thin-pool 352 2510/524288 205173/1638400 - ro discard_passdown queue_if_no_space 

But there is a lot of free space on the disk.

dmsetup info

Name:              docker-8:1-1210426-pool
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      1
Major, minor:      252, 0
Number of targets: 1

docker info

Containers: 4
Images: 65
Storage Driver: devicemapper
 Pool Name: docker-8:1-1210426-pool
 Data file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 12823.3 Mb
 Data Space Total: 102400.0 Mb
 Metadata Space Used: 9.9 Mb
 Metadata Space Total: 2048.0 Mb
Execution Driver: native-0.2
Kernel Version: 3.14.4

docker version

Client version: 1.0.0
Client API version: 1.12
Go version (client): go1.2.2
Git commit (client): 63fe64c
Server version: 1.0.0
Server API version: 1.12
Go version (server): go1.2.2
Git commit (server): 63fe64c
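
A note on reading that status line: for a thin-pool target the fields after the transaction id are used/total metadata blocks followed by used/total data blocks, and the ro flag near the end means the pool has been switched to read-only, which would make createSnapDevice fail no matter how much disk space is free. A sketch that decodes the usage (pool name taken from the output above):

dmsetup status docker-8:1-1210426-pool | awk '{
    split($5, m, "/"); split($6, d, "/")
    printf "metadata: %.1f%%  data: %.1f%%\n", 100*m[1]/m[2], 100*d[1]/d[2]
}'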

Source: (StackOverflow)

Mount LVM overlays/snapshots? [closed]

I'm trying to programmatically mount a disk image created with the Fedora LiveUSB creator, and I'm encountering some issues.

From what I've been told, it's very difficult to mount LVM snapshots outside of the host system. I have both the "pristine" image and the persistent snapshot, so I thought that it should be technically possible.

Any ideas?
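
If the image pair really is a device-mapper snapshot (a read-only origin plus a persistent copy-on-write overlay, which is how Fedora live media implement persistence), the two can be recombined by hand; a sketch with hypothetical file names:

losetup /dev/loop0 pristine.img    # read-only origin image
losetup /dev/loop1 overlay.img     # persistent copy-on-write overlay
SECTORS=$(blockdev --getsz /dev/loop0)
# table format: <start> <len> snapshot <origin> <cow-dev> P(ersistent) <chunk>
echo "0 $SECTORS snapshot /dev/loop0 /dev/loop1 P 8" | dmsetup create live-snap
mount -o ro /dev/mapper/live-snap /mnt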


Source: (StackOverflow)

How to mount an LVM partition within an LVM volume? [closed]

I have built a VG named cinder-volumes. Within this VG, I created an LV named leader-volume. Then I used this LV as the root disk of a KVM Ubuntu installation. During the installation process, I selected LVM partitioning. Finally, I created a snapshot of the LV leader-volume. Now I want to read some files within my Ubuntu installation... What should I do?
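
A sketch of how the nested layout can usually be reached from the host; the snapshot LV name is hypothetical, and the inner VG name is whatever vgscan reports (beware of a name clash with the host's own VG):

kpartx -av /dev/cinder-volumes/leader-volume-snap   # map the guest's partition table
vgscan                                              # discover the VG living inside the guest
vgchange -a y                                       # activate its LVs
lvs                                                 # find the guest root LV
mount -o ro /dev/<inner-vg>/root /mnt               # mount it read-only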


Source: (StackOverflow)

LVM MySQL backup

I have been reading about MySQL backup through the use of LVM. I understand that you create an LVM partition and allocate a specific size to MySQL, leaving enough space for snapshots.

I read that the advantage is that backups are very quick.

Are there any pitfalls to watch out for or disadvantages?

Thanks
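
For context, the procedure usually looks like the sketch below (volume and path names are hypothetical). The main pitfalls: the snapshot's copy-on-write area can fill up mid-backup, which invalidates the snapshot; writes to the origin are slower while a snapshot exists; and FLUSH TABLES WITH READ LOCK only holds while the issuing client stays connected, which is why tools like mylvmbackup keep a single session open across the lvcreate:

# the mysql client's "system" command runs a shell command without
# dropping the connection, so the read lock stays held during lvcreate
mysql <<'SQL'
FLUSH TABLES WITH READ LOCK;
system lvcreate -s -L 2G -n mysql-snap vg0/mysql
UNLOCK TABLES;
SQL

mount -o ro /dev/vg0/mysql-snap /mnt/snap
rsync -a /mnt/snap/ /backup/mysql/
umount /mnt/snap && lvremove -f vg0/mysql-snap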


Source: (StackOverflow)

Create a volume group on Ubuntu to support non-loopback devicemapper driver for docker?

There is a lot of material pointing out the dangers of using a loopback device with the devicemapper driver. This question seems to contain most of the information necessary to move away from a loopback device.

Warning of "Usage of loopback devices is strongly discouraged for production use."

My question is how to create the volume group /dev/my-vg in Ubuntu? Or are there other paths around the loopback device that don't involve creating a volume group?
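
Creating the volume group itself is plain LVM; the device name below is hypothetical, and the daemon flags are the devicemapper options of that Docker era (a sketch, not a drop-in config):

pvcreate /dev/sdb                        # dedicate a spare disk or partition to LVM
vgcreate my-vg /dev/sdb                  # the VG the daemon will draw from
lvcreate -L 90G -n docker-data my-vg     # data LV for the pool
lvcreate -L 4G  -n docker-metadata my-vg # metadata LV
# then point the daemon at the LVs instead of the loopback files, e.g.:
# docker -d --storage-opt dm.datadev=/dev/my-vg/docker-data \
#           --storage-opt dm.metadatadev=/dev/my-vg/docker-metadata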


Source: (StackOverflow)

Ubuntu 14.04 preseed LVM disk config

I'm having some issues getting my partitions to be of type primary, and not logical/extended.

Here is the relevant code in my preseed:

d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string lvm

d-i partman-lvm/device_remove_lvm boolean true
d-i partman-md/device_remove_md boolean true
d-i partman-lvm/confirm boolean true

d-i partman-auto-lvm/guided_size string max
d-i partman-auto/choose_recipe select boot-root
d-i partman-auto-lvm/new_vg_name string vg00
d-i partman-auto/expert_recipe string                         \
      boot-root ::                                            \
              512 512 512 ext3                             \
                      $primary{ } $bootable{ }                \
                      method{ format } format{ }              \
                      use_filesystem{ } filesystem{ ext3 }    \
                      mountpoint{ /boot }                     \
              .                                               \
              2048 2048 2048 swap                             \
                      $primary{ } $lvmok{ } lv_name{ lv_swap } $defaultignore{} \
                      method{ swap } format{ }                \
              .                                               \
              1024 10000 -1 ext4                              \
                      $primary{ } $lvmok{ } lv_name{ lv_root } $defaultignore{}\
                      method{ format } format{ }              \
                      use_filesystem{ } filesystem{ ext4 }    \
                      mountpoint{ / }                         \
              .

d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman-lvm/confirm_nooverwrite boolean true

The problem is, this then creates the following partition scheme:

root@ubuntu-server-1404-devit:~# fdisk /dev/sda

Command (m for help): p

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009ac4d

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2000895      999424   83  Linux
/dev/sda2         2002942    20969471     9483265    5  Extended
/dev/sda5         2002944    20969471     9483264   8e  Linux LVM

I'd like to remove this unnecessary Extended / logical partition, and just have the Linux LVM partition be on sda2 (primary). Like so:

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2000895      999424   83  Linux
/dev/sda2         2002942    20969471     9483265   8e  Linux LVM

Source: (StackOverflow)

LXD with LVM backingstore to achieve disk quotas

I see from the LXD storage specs that LVM can be used as a backingstore. I've previously managed to get LVM working with LXC. This was very pleasing, since it allows quota-style control of disk consumption.

How do I achieve this with LXD?

From what I understand, storage.lvm_vg_name must point to my volume group. I've set this for a container by creating a profile, and applying that profile to the container. The entire profile config looks like this:

name: my-profile-name
config:
  raw.lxc: |
    storage.lvm_vg_name = lxc-volume-group
    lxc.start.auto = 1
    lxc.arch = amd64
    lxc.network.type = veth
    lxc.network.link = lxcbr0
    lxc.network.flags = up
    lxc.network.hwaddr = 00:16:3e:xx:xx:xx
    lxc.cgroup.cpu.shares = 1
    lxc.cgroup.memory.limit_in_bytes = 76895572
  security.privileged: "false"
devices: {}

The volume group should be available and working, according to pvdisplay on the host box:

  --- Physical volume ---
  PV Name               /dev/sdc5
  VG Name               lxc-volume-group
  PV Size               21.87 GiB / not usable 3.97 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5599
  Free PE               901
  Allocated PE          4698
  PV UUID               what-ever

However, after applying the profile and starting the container, it appears to be using a file-based backing store:

me@my-box:~# ls /var/lib/lxd/containers/container-name/rootfs/
bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt
proc  root  run  sbin  srv  sys  tmp  usr  var

What am I doing wrong?
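
One likely culprit: storage.lvm_vg_name is a LXD daemon configuration key, not an lxc.* container key, so placing it inside raw.lxc has no effect. Setting it on the daemon would look something like this sketch (for the LXD generation that storage spec describes):

lxc config set storage.lvm_vg_name lxc-volume-group  # daemon-wide setting
lxc launch ubuntu container-name                     # new containers get LV-backed roots
lvs lxc-volume-group                                 # verify an LV appeared for the container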


Source: (StackOverflow)

Python logging fails when an open file descriptor message is encountered

I have the following Python code to create an LVM snapshot on a Linux machine.

#!/usr/bin/env python3.1

import subprocess
import logging
logging.basicConfig(filename='/var/log/lvsnap.log', filemode='w', level=logging.DEBUG)

lvm_vg = 'vg00-crunchbang'
lvm_name = 'root'
lvm_snapshot_size = '100'

def lvmCreateSnapshot(lvm_vg, lvm_name, lvm_snapshot_size):
    return subprocess.check_call(['lvcreate', '-s', '-l', '+' + lvm_snapshot_size + '%FREE', '-n', lvm_name + '-snapshot', lvm_vg + '/' + lvm_name])

logging.debug('logging is working before lvm snapshot')

''' create lvm snapshot '''
lvm_create_snapshot = lvmCreateSnapshot(lvm_vg, lvm_name, lvm_snapshot_size)
if lvm_create_snapshot:
    logging.debug('create lvm snapshot of %s/%s exited with status %s', lvm_vg, lvm_name, lvm_create_snapshot)

logging.debug('logging is working after lvm snapshot')

lvmCreateSnapshot runs fine and exits with 0, which should then trigger the logging.debug line in the if statement. However, this does not happen; instead I received the following output from the script:

> /tmp/lvmsnap.py 
File descriptor 3 (/var/log/lvsnap.log) leaked on lvcreate invocation. Parent PID 7860: python3.1
Logical volume "root-snapshot" created
>

The output of the log is:

> cat /var/log/lvsnap.log 
DEBUG:root:logging is working before lvm snapshot
DEBUG:root:logging is working after lvm snapshot
>

As you can see, the lvm logging.debug message is missing (it should appear between the two test logging messages I created).

Why is this happening and how can I fix it?


Source: (StackOverflow)

Need to redirect output to /dev/null: works fine on the command line but not in a shell script

I need to write and execute some commands in a bash script and discard the error output.

Example

pvs --noheadings -o pv_name,vg_name,vg_size 2> /dev/null

The above command works great on the command line, but when I run the same thing from a shell script, it gives me an error like:

Failed to read physical volume "2>"
Failed to read physical volume "/dev/null"

I guess it treats the redirection as part of the whole command. Can you please give me some suggestions on how to fix this?

Thanks in advance.

FULLCODE

#------------------------------

main() {
    pv_cmd='pvs'
    nh='--noheadings'
    sp=' '
    op='-o'
    vgn='vg_name'
    pvn='pv_name'
    pvz='pv_size'
    cm=','
    tonull=' 2 > /dev/null '
    pipe='|'

    #cmd=$pv_cmd$sp$nh$sp$op$sp$vgn$cm$pvn$cm$pvz$sp$pipe$tonull  #line A
    cmd='pvs --noheadings -o vg_name,pv_name,pv_size 2> /dev/null' #line B
    echo -n "Cmd="
    echo $cmd
    $cmd

}

main

#-----------------------------------------------------

If you look at lines A and B, both versions are there, although one is commented out. A sketch of a fix follows below.
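
The underlying problem is that redirections stored inside a string are never re-parsed: when $cmd is expanded, bash only performs word splitting, so 2> and /dev/null reach pvs as ordinary arguments. A sketch of the usual fix, keeping the command in an array and writing the redirection at the point of execution:

#!/bin/bash
main() {
    local cmd=(pvs --noheadings -o vg_name,pv_name,pv_size)
    echo "Cmd=${cmd[*]}"
    "${cmd[@]}" 2>/dev/null   # redirection applied here, not stored in the string
}
main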


Source: (StackOverflow)

Linux LVM lvs command fails from cron perl script but works from cron directly

I am trying to run "lvs" in a perl script to parse its output.

my $output = `lvs --noheadings --separator : --units m --nosuffix 2>&1`;
my $result = $?;
if ($result != 0 || length($output) == 0) {
    printf STDERR "Get list of LVs failed (exit result: %d): %s\n",
    $result, $output;
    exit(1);
}
printf "SUCCESS:\n%s\n", $output;

When I run the above script from a terminal window, it runs fine. If I run it via cron, it fails:

Get list of LVs failed (exit result: -1): 

Note the lack of any output (stdout + stderr)

If I run the same "lvs --noheadings --separator : --units m --nosuffix" command directly in cron, it runs and outputs just fine.

If I modify the perl script to use open3() I also get the same failure with no output.

If I add "-d -d -d -d -d -v -v -v" to the lvs command to enable verbose/debug output I see that when I run the perl script from terminal, but there is no output when run via cron/perl.

I'm running this on RHEL 7.2 with /usr/bin/perl (5.16.3).

Any suggestions?
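
A diagnostic worth trying: cron jobs run with a minimal environment, and an exit status of -1 from backticks means perl could not execute or reap the command at all, so comparing the two environments is a reasonable first step (sketch; the temporary crontab entry and file paths are arbitrary):

# temporary crontab entry: capture what jobs actually see
* * * * * env > /tmp/env.cron 2>&1

# from the interactive shell, capture yours and compare
env > /tmp/env.shell
diff /tmp/env.shell /tmp/env.cron

# while testing, also call lvs by absolute path inside the perl script:
# my $output = `/usr/sbin/lvs --noheadings --separator : --units m --nosuffix 2>&1`;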


Source: (StackOverflow)

Can I take incremental LVM snapshots in Linux?

I have just made an LVM snapshot of the /opt partition and mounted it at /data. Is there any way to take incremental LVM snapshots?
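
Classic LVM snapshots are each an independent copy-on-write delta against the origin, so there is no built-in incremental chain between them; the closest native mechanism is thin provisioning, where snapshots of thin volumes share blocks in a pool and can themselves be snapshotted. A sketch with hypothetical names:

lvcreate -L 20G --thinpool pool0 vg0       # carve a thin pool out of the VG
lvcreate -V 10G --thin -n opt vg0/pool0    # thin volume to hold /opt
lvcreate -s -n opt-snap1 vg0/opt           # snapshot: no size needed, blocks are shared
lvcreate -s -n opt-snap2 vg0/opt           # later snapshot; only deltas consume pool space
# thin snapshots are skipped on activation by default; activate with:
# lvchange -a y -K vg0/opt-snap1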


Source: (StackOverflow)