
running ifl from cli

Posted: Mon Feb 27, 2012 7:19 pm
by hps
Hi,

I am more familiar with the Windows version (IFW), so please forgive me - this is a newbie question:

I'm used to running IFW from within a virtual Windows machine and storing the image on a NAS.
I would like to do the same with a CentOS web server VM (Citrix XenServer), e.g. using cron.
I'm not using a boot disc, since this should be a regularly scheduled backup.
The guest OS does not include any graphical interface (it is a pure app server).
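
Something like the following crontab entry is what I have in mind (just a sketch; the switch names are guesses based on the IFW ones, so I would check ./imagel -? for the exact spelling):

# nightly at 01:30; --b = backup, --d:l0 = first Linux drive, --f = image file (assumed switches)
# note that % must be escaped as \% inside a crontab line
30 1 * * * root /opt/ifl/imagel --b --d:l0 --f:/mnt/nas/backups/web-$(date +\%F).tbi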

So I unzipped the CLI version zip to /opt/ifl on the CentOS filesystem, ran ./setup, entered the product key, etc.
When I run ./imagel -? I get the version and option info as expected.

I then tried to list the partition info with ./imagel -l (and tried ./imagel --l --all as well),
but I get no response: no error message, nothing. The same happens if I try to run a backup.

What am I missing?

Thanks in advance for your help
Best regards
HP

Re: running ifl from cli

Posted: Mon Feb 27, 2012 8:17 pm
by TeraByte Support(PP)
Is the user a member of the 'disk' group?
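
You can check with standard commands, for example:

id -nG                        # lists the current user's groups; 'disk' should appear
usermod -a -G disk youruser   # run as root; 'youruser' is a placeholder for the account
# log out and back in for the new group membership to take effect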

These links may help, if you haven't already read them:
http://www.terabyteunlimited.com/howto/ ... _linux.htm
http://www.terabyteunlimited.com/kb/article.php?id=305

Re: running ifl from cli

Posted: Tue Feb 28, 2012 7:02 am
by hps
Thank you,

I have seen the first and can make use of the second.
But before constructing the crontab, shouldn't I be able to run the command ./imagel -l?
Since all the mounted discs reside as LVM volumes on a NAS, I am not sure which disc/partition number I need to specify.
Using the command as given in the example produces no output either.
And since I have no GUI, I cannot make use of the F6 command.

I am logged in as root via PuTTY, and root is a member of the disk group.

Re: running ifl from cli

Posted: Tue Feb 28, 2012 1:29 pm
by hps
I found out something more:

I tried the whole thing in a local VMware Ubuntu guest, and ./imagel --l does not output anything there either.
But I managed to run imagel without parameters and got a text-mode GUI. At first it did not display any Linux or virtual drives, even though my user is a member of the disk group.
When running with sudo, I can see the drive under "Linux drives", and the parameter string tells me to use --d:l0.

I tried the same on the CentOS server, but imagel does not display any Linux or virtual drives, regardless of whether I run it with or without sudo.

Do I need any additional packages installed to make the LVM partitions visible to imagel?

Re: running ifl from cli

Posted: Tue Feb 28, 2012 2:29 pm
by TeraByte Support(TP)

No, that issue is probably because the "device-mapper" device type is
assigned a different device number in CentOS than is supported by IFL.
You can see what that is by running the command 'cat /proc/devices'. If
it shows anything other than 252 for device-mapper, then IFL won't see
LVM volumes. The IFL bootdisk uses 252, so it will always see them once
they are activated with 'start-lvm'.
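
For example, this one-liner shows just the relevant entry:

grep device-mapper /proc/devices
# IFL will only see the LVM volumes if this prints "252 device-mapper";
# any other major number means they won't be listed.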

As far as not seeing the output of 'imagel -l', that is most likely
because you are using PuTTY, meaning you are using an xterm (or similar)
terminal type. You can run the command 'echo $TERM' to see the terminal
type. In that situation, the 'imagel -l' command will not display the
output on the screen. If you were to ssh into the VM from a Linux
console (terminal type = linux), you would be able to see the listing.
Alternatively, you could redirect the output to a file ('./imagel -l > part.txt')
and then view part.txt in a text editor such as nano. The data will be there,
but there will be extra characters in the file making it difficult to read.
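
In other words, something along these lines:

echo $TERM               # PuTTY typically reports "xterm"
./imagel -l > part.txt   # capture the listing to a file
nano part.txt            # view it; expect control characters mixed in
# If the clutter is ANSI escape sequences (an assumption), GNU sed can strip them:
sed 's/\x1b\[[0-9;]*[A-Za-z]//g' part.txt > part-clean.txt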

Most importantly though, it sounds like it may not be appropriate to
create this image from the "live" running system, and that it should be
done from the IFL boot disk instead. If you could run the following
commands and post the results here, it should help clear things up.

'df'
'fdisk -l'
'cat /etc/fstab'
'ls -l /dev/mapper'
'cat /proc/devices'


--
Tom Pfeifer
TeraByte Support

Re: running ifl from cli

Posted: Tue Feb 28, 2012 9:46 pm
by hps
Hi Tom,

Thank you for your explanation. It looks like device-mapper is using device number 253.
Please find below the output as requested.

Creating the image from a bootdisk is OK for a one-time backup or a disc copy, but not for an automated image backup like the ones we run in the Windows guests. The idea is to create the image from within the guest, because then we can mount the image and restore selected files or folders (e.g. a vhost folder or a pgsql DB) instead of having to get an earlier snapshot of the VM running and then extract the needed data.

debug output:
========== cat /proc/devices ==========
Character devices:
1 mem
4 /dev/vc/0
4 tty
5 /dev/tty
5 /dev/console
5 /dev/ptmx
6 lp
7 vcs
10 misc
13 input
29 fb
89 i2c
128 ptm
136 pts
162 raw
180 usb
189 usb_device
202 cpu/msr
203 cpu/cpuid
204 xvc
216 rfcomm
254 pcmcia

Block devices:
1 ramdisk
7 loop
9 md
202 xvd
253 device-mapper
254 mdp

========== df ==========

Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/sysvg-rootvol
3047184 1633368 1256544 57% /
/dev/mapper/sysvg-varvol
5078656 176832 4639732 4% /var
/dev/xvda1 101086 14184 81683 15% /boot
tmpfs 2097152 0 2097152 0% /dev/shm
/dev/mapper/sysvg-companyvol
459165856 350893484 84953164 81% /srv/export/company
/dev/mapper/sysvg-uservol
296219520 208847116 72328372 75% /srv/export/users
/dev/mapper/sysvg-profilevol
56766780 34006388 19876872 64% /srv/export/profile
/dev/mapper/sysvg-softwarevol
103212320 59222988 38747092 61% /srv/export/software
/dev/mapper/sysvg-starmoneyvol
2064208 952064 1007300 49% /srv/export/starmoney
/dev/mapper/sysvg-installvol
10321208 1031232 8765688 11% /srv/export/install
/dev/mapper/sysvg-gfiarchvol
88762616 65942956 18310860 79% /srv/export/gfiarch
//nas01/backup 961617664 201260804 760356860 21% /mnt/nas

========== fdisk -l ==========

Disk /dev/xvda: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/xvda1 * 1 13 104391 83 Linux
/dev/xvda2 14 133674 1073631982+ 8e Linux LVM

========== cat /etc/fstab ==========


# HEADER: This file was autogenerated at Fri Sep 09 02:00:44 +0200 2011
# HEADER: by puppet. While it can still be managed manually, it
# HEADER: is definitely not recommended.
/dev/sysvg/rootvol / ext3 defaults 1 1
/dev/sysvg/varvol /var ext3 defaults 1 2
LABEL=/boot /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/sysvg/swapvol swap swap defaults 0 0
/dev/mapper/sysvg-companyvol /srv/export/company ext3 defaults,usrquota,grpquota,acl,user_xattr 0 0
/dev/mapper/sysvg-uservol /srv/export/users ext3 defaults,usrquota,acl,user_xattr 0 0
/dev/mapper/sysvg-profilevol /srv/export/profile ext3 defaults,usrquota,user_xattr 0 0
/dev/mapper/sysvg-softwarevol /srv/export/software ext3 defaults 0 0
/dev/mapper/sysvg-starmoneyvol /srv/export/starmoney ext3 defaults 0 0
/dev/mapper/sysvg-installvol /srv/export/install ext3 defaults 0 0
/dev/mapper/sysvg-gfiarchvol /srv/export/gfiarch ext3 defaults 0 0


========== ls -l /dev/mapper ==========


total 0
crw------- 1 root root 10, 62 Feb 23 14:59 control
brw-rw---- 1 root disk 253, 3 Feb 23 14:59 sysvg-companyvol
brw-rw---- 1 root disk 253, 10 Feb 23 14:59 sysvg-gfiarchvol
brw-rw---- 1 root disk 253, 9 Feb 23 14:59 sysvg-installvol
brw-rw---- 1 root disk 253, 5 Feb 23 14:59 sysvg-profilevol
brw-rw---- 1 root disk 253, 7 Feb 23 14:59 sysvg-restore
brw-rw---- 1 root disk 253, 0 Feb 23 15:00 sysvg-rootvol
brw-rw---- 1 root disk 253, 6 Feb 23 14:59 sysvg-softwarevol
brw-rw---- 1 root disk 253, 8 Feb 23 14:59 sysvg-starmoneyvol
brw-rw---- 1 root disk 253, 2 Feb 23 14:59 sysvg-swapvol
brw-rw---- 1 root disk 253, 4 Feb 23 14:59 sysvg-uservol
brw-rw---- 1 root disk 253, 1 Feb 23 15:00 sysvg-varvol

==============================================================

Re: running ifl from cli

Posted: Wed Feb 29, 2012 1:10 am
by TeraByte Support(TP)
Thanks for sending the data. I wanted to see it to make sure I
understood what you have there, but to do what you want, I think you
really need a file-based backup plan rather than a partition-based one
like IFL.

The device-mapper issue could be worked around, but the main issue is
that files will be changing while you are creating the images, and
IFL will not be aware of it, so that can result in backups that aren't
valid.

IFL is designed to image partitions/volumes that are either unmounted
or mounted read-only, so that files can't be written during the
imaging. Obviously, you can't do that on a live system like yours, with
servers running, etc. IFW can do it because of its PHYLock component.
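
As one possible illustration (just a sketch, not a TeraByte tool; it assumes rsync is installed and that the /mnt/nas share stays mounted), a file-based plan could be as simple as a nightly cron entry that mirrors the export trees to the NAS:

# nightly at 02:30; -a preserves permissions/ownership/times, --delete keeps an exact mirror
30 2 * * * root rsync -a --delete /srv/export/ /mnt/nas/srv-export/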


--
Tom Pfeifer
TeraByte Support


