
Tuesday, August 7, 2018

Accessing VMFS Datastores from CentOS Live Linux

I've often run into disk and other errors that prevent me from getting VMs or other files off a VMFS volume.  I have used this process a couple of times to retrieve VMs and other files when VMware's own tools just are not enough.

This article assumes the user knows where to get things and how to work with their own servers.  For example, I don't describe how to boot your server from an ISO image.  I use remote management tools on my server, but you can burn a DVD, write the image to USB and boot from that, use an external drive, etc.

You will need a couple of things to start with.  First, download CentOS 7.  I use the DVD ISO.

Next, locate the EPEL repository RPM.  You will download it later, but it is good to verify the path now rather than debug it later.  Currently it is at http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm though the path and name may change.  If it does, I just work my way back up the file path in the URL until I find the change and navigate down from there.

Then, locate the vmfs-tools package.  Again, you will download it later, but verify the path and adjust the procedure below as necessary.  The package is currently at https://glandium.org/projects/vmfs-tools/vmfs-tools-0.2.5.tar.gz
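Since both download paths tend to move over time, a quick pre-flight check can confirm a URL still resolves before you boot the live environment.  This is only a sketch: check_url is a made-up helper name, and it assumes curl is available on whatever machine you run it from.

```shell
# Sketch: confirm a download URL is still reachable before relying on it.
# check_url is a hypothetical helper; assumes curl is installed.
check_url() {
    # -f: fail on HTTP errors, -s: silent, -L: follow redirects
    curl -fsL --max-time 10 -o /dev/null "$1"
}

check_url "http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm" \
    && echo "epel-release URL OK" \
    || echo "epel-release URL moved - walk back up the path"
```

The same check works for the vmfs-tools URL.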

Finally, make sure you have an SSH client.  I use Tera Term, but you can use PuTTY or whatever you like.

Start by booting the server from the CentOS ISO image.

Select Install CentOS 7.  Eventually you will be rewarded with a desktop.  From there, open a terminal window; I right-click on the desktop and select Konsole.

Elevate to root
su -

To make the system easier to work with, and to make cutting and pasting commands easier, start the SSH server
service sshd start

Now create a new user account and set its password.  I don't care what user name or password you use; just remember them.
useradd userx --groups root
passwd userx

SSH to the server.
You can get the IP address using ifconfig.

Log in using the account you created above.
Elevate to root
su -

From here on, work from the SSH client rather than the console.

Create a mount path for the datastore.  Create as many as you need to mount your VMFS volumes, and use any path and name you want.  Here I had 2 volumes to mount, so I created 2 mount points.
mkdir -p /mnt/dsk1
mkdir -p /mnt/dsk2

Download and install the EPEL repository package.  This makes it easy to install the next couple of packages.
wget http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm
rpm -Uhv epel-release-7-11.noarch.rpm

Install libuuid, libuuid-devel and gcc.  Say yes to all prompts.
yum install libuuid libuuid-devel
yum install gcc

Download the vmfs-tools package, then extract it
wget https://glandium.org/projects/vmfs-tools/vmfs-tools-0.2.5.tar.gz
tar zxf vmfs-tools-0.2.5.tar.gz

Compile vmfs-tools
cd vmfs-tools-0.2.5
./configure
make
make install
("make install" didn't actually install it in /usr/bin for me, so I execute the tools from the build path)
cd ~/vmfs-tools-0.2.5/vmfs-fuse   (or just cd vmfs-fuse if you are still in the build directory)
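Alternatively, rather than cd-ing into each tool's directory, you can put the build directories on PATH for the session.  A minimal sketch, assuming the source unpacked into ~/vmfs-tools-0.2.5 (adjust if yours landed elsewhere):

```shell
# Sketch: add the vmfs-tools build directories to PATH for this shell
# session, so vmfs-fuse and fsck.vmfs can be run from anywhere.
export PATH="$HOME/vmfs-tools-0.2.5/vmfs-fuse:$HOME/vmfs-tools-0.2.5/fsck.vmfs:$PATH"
```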

When mounting a VMware boot disk, there are several partitions; the 3rd partition is almost always the datastore.  In this example, I have 2 VMware disks (sdc and sdd).  Notice there are only 2 partitions on sdd: it is not a bootable VMware drive, just a datastore drive.


Now, let's mount the volume.  I am mounting partition 3 (the datastore partition) to the mount path I created previously.
./vmfs-fuse /dev/sdc3 /mnt/dsk1

If you want to see the partitions on a disk, use fdisk -l /dev/sdc (or whatever the sd? device is).
fdisk -l /dev/sdc
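If you have several disks, you can filter fdisk-style output for the VMFS partitions, which are tagged "VMware VMFS" in the type column.  The sample lines below are made up for illustration, not captured from a real host; your real listing will differ.

```shell
# Sketch: pick the VMFS partition(s) out of fdisk-style output.
# The sample text is illustrative only; on the live system you would
# pipe real output instead:  fdisk -l /dev/sdc | awk '/VMFS/ {print $1}'
sample='/dev/sdc1   *      2048     8191   ...  VMware boot
/dev/sdc2          8192    16383   ...  Linux swap
/dev/sdc3         16384 976773167  ...  VMware VMFS'
printf '%s\n' "$sample" | awk '/VMFS/ {print $1}'   # prints /dev/sdc3
```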


NOTE:  The file system mounts read-only, so you can't write to the datastore.  I am not sure why, but for recovery purposes I don't really need to write anyway.

You can also run an fsck scan of the volume to check the file system for issues
cd ~/vmfs-tools-0.2.5/fsck.vmfs/
./fsck.vmfs /dev/sdc3

Now you can copy and view files on the VMFS datastore by navigating around /mnt/dsk1.

To copy files somewhere, you will need a USB drive or a network file system mount.  I used an NFS mount, e.g.

mkdir /nfs
mount 192.168.1.1:/vol/nfs /nfs

I wanted to copy off some VMs in hopes of saving them, so I used rsync, which shows progress and will continue on errors.
rsync -r --info=progress2 /mnt/dsk1/myvm /nfs/myvm
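After a copy like that, it's worth confirming the files arrived intact, especially when reading from a failing disk.  Here's a hedged sketch that compares per-file checksums on both sides; the temp directories and sample file are stand-ins so the example is safe to run as-is, but on the live system you would compare /mnt/dsk1/myvm against /nfs/myvm.

```shell
# Sketch: verify a copied directory by comparing per-file md5 checksums.
# src/dst are throwaway stand-ins for /mnt/dsk1/myvm and /nfs/myvm.
src=$(mktemp -d); dst=$(mktemp -d)
echo "vmdk contents" > "$src/disk-flat.vmdk"
cp "$src/disk-flat.vmdk" "$dst/"        # stand-in for the rsync copy

# Checksum every file under a directory, sorted for stable comparison.
sums() { (cd "$1" && find . -type f -exec md5sum {} + | sort); }

if [ "$(sums "$src")" = "$(sums "$dst")" ]; then
    echo "copy verified"
else
    echo "MISMATCH - recopy the differing files"
fi
```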

Now that I am done, I cleanly unmount the datastores and the NFS share, then reboot.
umount /mnt/dsk1
umount /mnt/dsk2
umount /nfs
