Wednesday, August 22, 2018

Fixing Windows 10 Missing systemprofile Desktop folder

I ran into an issue that seems to come from Windows 10 updates.  Moments after logging in, I get a warning window advising C:\WINDOWS\system32\config\systemprofile\Desktop is unavailable.  There is no desktop, just a taskbar and the Recycle Bin.

The problem stems from the relocation of the default desktop from c:\users\default to C:\WINDOWS\system32\config\systemprofile\, but the Desktop folder doesn't get migrated.  Thus the error message.

I could not launch any program except Task Manager, so I worked out a convoluted procedure to launch a Windows Explorer instance, show the now-hidden Default user folder in c:\users, copy its contents, and paste them into C:\WINDOWS\system32\config\systemprofile.

After following the procedure and rebooting, I was able to log in and was back to my normal desktop.

Here is the procedure I followed.  It might be a bit of overkill (copying all files in the Default user folder), but I only wanted to do it once.  Sorry, no images this time.

Right click the taskbar (if you see one) and select Task Manager, or press CTRL+ALT+DEL and select Task Manager.
In Task Manager, click File / Run new task
Click Browse
Locate Desktop in the left panel
Right Click and select Properties
Click Location
Click Find Target  (wait a little bit for an explorer window to open)
On the top Menu click View
On the far right, click Options
Folder Options will open.  Click View
Click the radio button Show hidden files, folders, and drives
Uncheck Hide empty drives
Uncheck Hide extensions for known file types
Click OK
Click Users in the address bar (This PC > Local Disk (C) > Users )

You should now see the Default user folder along with the other user folders, as well as a Default.migrated folder.
Double click on the Default folder to open the folder
On the menu ribbon (the menu below the top bar with File/Home/Share/View), far right, click Select all (or click white space in the details panel and press CTRL+A).  All files should be selected/highlighted
Right click on any highlighted area and select Copy (or click Copy on the menu ribbon)

Navigate to C:\Windows\System32\Config   - You may get a security box to allow you to access the folder.  Click Continue
Click systemprofile   - You may get a security box to allow you to access the folder.  Click Continue
In the details white space, right click and select Paste (or click Paste from the menu ribbon)

Reboot - Press CTRL+ALT+DEL, then click the power icon in the lower right and select Restart

Tuesday, August 7, 2018

Accessing VMFS Datastores from CentOS Live Linux

I've often run into disk and other errors preventing me from getting VMs or other files off a VMFS volume.  I have used this process a couple of times to retrieve VMs and other files when VMware's own tools just are not enough.

This article assumes the user knows where to get things and how to work with their own servers.  For example, I don't describe how to boot your server from an ISO image.  I use remote tools for my server, but you can burn a DVD, write an image to USB and boot from that, use an external drive, etc.

You will need a couple of things to start.  First, download CentOS 7.  I use the DVD ISO.

Next, locate the epel repository rpm.  You will download it later, but it is good to verify the path first rather than debug it later.  The path and name may change over time; if they do, I just work my way back up the file path in the URL until I find the change and navigate down from there.

Then, locate the vmfs-tools package.  Again, you will download it later, but verify the path and change the procedure below as necessary.
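Since both of those URLs drift over time, a quick scripted check that a URL is still fetchable can save debugging later.  A minimal sketch using curl's exit status (the commented-out usage is a hypothetical placeholder, not a verified path):

```shell
# check_url URL: succeed quietly if the URL is fetchable, fail otherwise.
# -f fails on HTTP errors, -s is silent, -L follows redirects.
check_url() {
  curl -fsLo /dev/null "$1"
}

# Hypothetical usage -- substitute the current epel or vmfs-tools URL:
# check_url "https://dl.fedoraproject.org/pub/epel/..." && echo "path still good"
```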

Finally, make sure you have an SSH client.  I use Tera Term, but you can use PuTTY or whatever you like.

Start by booting to the CentOS ISO image.

Select Install CentOS 7.   Eventually you will be rewarded with a desktop.  From there, open a terminal window.  I right click on the desktop and select Konsole.

Elevate to root
su -

To make cut and paste of commands easier, start the SSH server
service sshd start

Now create a new user account and set the password.  Use any user name and password you like; just remember them
useradd userx --groups root
passwd userx

SSH to the server
You can get the IP address using ifconfig

Log in using the account you created above.
Elevate to root
su -

From here on, work from the SSH client rather than the console.

Create a mount path for the datastore.  Create as many as you need to mount your VMFS volumes, and use any path and name you want.  Here I had 2 volumes to mount, so I created 2 mount points.
mkdir -p /mnt/dsk1
mkdir -p /mnt/dsk2

Download and install the epel repository package. This makes it easy to install the next couple packages.
rpm -Uhv epel-release-7-11.noarch.rpm

Install libuuid, libuuid-devel and gcc.  Say yes to all prompts
yum install libuuid libuuid-devel
yum install gcc

Download the vmfs-tools package, then extract it
tar zxf vmfs-tools-0.2.5.tar.gz

Compile vmfs-tools
cd vmfs-tools-0.2.5
make install
("make install" didn't actually install it in /usr/bin, so I execute it from the build path)
cd ~/vmfs-tools-0.2.5/vmfs-fuse   (or just cd vmfs-fuse)

When mounting a VMware boot disk, there are several partitions.  The 3rd partition is almost always the datastore.  In this example, I have 2 VMware disks (sdc and sdd).  Notice only 2 partitions on sdd; it is not a bootable VMware drive, just a datastore drive.

Now, let's mount the volume.  I am mounting partition 3 (the datastore partition) on the mount path I created previously.
./vmfs-fuse /dev/sdc3 /mnt/dsk1

If you want to see the partitions on the disk, use fdisk -l /dev/sdc or whatever the sd? device is.
fdisk -l /dev/sdc

NOTE:  The file system is mounted read-only, so you can't write to the datastore.  vmfs-tools doesn't support writing to these volumes, and I didn't need to write anyway.

You can also run an fsck scan of the volume to see if there are issues with the file system
cd ~/vmfs-tools-0.2.5/fsck.vmfs/
./fsck.vmfs /dev/sdc3

Now you can copy and view files on the VMFS datastore by navigating around /mnt/dsk1.

To copy files somewhere, you will need a USB drive or a network file system mount.  I used an NFS mount.  e.g.

mkdir /nfs
mount mynfsserver:/export /nfs   (substitute your NFS server and export path)

I wanted to copy off some VMs in hopes of saving them, so I used rsync, which shows progress and continues on errors.
rsync -r --info=progress2 /mnt/dsk1/myvm /nfs/myvm

Now that I am done, I cleanly unmount my datastores and the NFS mount, then reboot.
umount /mnt/dsk1
umount /mnt/dsk2
umount /nfs

Wednesday, February 7, 2018

MAC OSX Zero Free Space for VMWare Deduplication

When working with virtual machines (VMs), enough files are periodically created and deleted that the thin provisioned virtual disk (vdisk) expands to its maximum capacity even though the operating system (OS) file system shows free space.  This is typical, normal behavior.  Unfortunately, it consumes space on the underlying storage that is no longer actively used by the VM OS.

For years I have been using Microsoft's SDelete tool (sdelete.exe), which securely erases deleted files, with the option to just write zeros to all remaining free space.  In Linux, I use the dd tool to read from /dev/zero and write to a temporary file, filling all free space with zeros, then delete the temporary file.
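In shell form, that dd approach might look like this sketch.  The directory argument is wherever the target filesystem is mounted; the optional megabyte cap is only there so the function can be tried safely, and is omitted when you really want to fill all free space.

```shell
# zero_free_space DIR [MB]: write zeros into a temp file on DIR's
# filesystem, then delete the file so the zeroed blocks can be
# reclaimed by the thin-provisioned storage underneath.
zero_free_space() {
  # dd exits nonzero once the filesystem is full; that is the
  # expected stop condition when no MB cap is given.
  dd if=/dev/zero of="$1/zerofill.tmp" bs=1M ${2:+count=$2} 2>/dev/null || true
  sync
  rm -f "$1/zerofill.tmp"
}
```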

These procedures simply write zeros to free space.  VMware and network storage systems will "see" these zeros and free the allocated storage behind the virtual disk.  In essence, this shrinks the virtual disk file on storage, freeing unused space on the storage system.  Some advanced storage systems automatically detect these zeros and free the space.  On others, you need to run a command on ESXi to free the zeroed space.

macOS has a couple of ways to write zeros to free space: the graphical Disk Utility and a command line tool.  I haven't had much luck with the GUI tool, so this procedure uses the command line.

First, open a terminal as an admin account.

Type diskutil list to locate the drive you want to write zeros to.  In my case it is partition 2 on /dev/disk0; look for Apple_HFS Macintosh HD.  The identifier for mine is disk0s2.

Now run the tool and write some zeros

diskutil secureErase freespace 0 disk0s2


secureErase = secure erase.  There are 5 levels; we want level 0 for single-pass zeros
freespace = only write zeros to free, unused space.  Does not affect files or the OS
0 = secureErase level 0 single-pass zeros
disk0s2 = my partition that has my data

I have an SSD, so it is pretty quick: 3 or 4 minutes.  The progress bar indicates how long the zero write will take, and the estimate updates every few seconds.

When it is all done, use the VMware command line tool vmkfstools to free the space that now contains all zeros.  First, SSH to the ESXi host and log in as root (there are other methods, but I use SSH to ESXi to run the command line tools).

For example on my VM, I used the command
vmkfstools -K /vmfs/volumes/datastore1/Mac_OS_Master/Mac_OS_Master.vmdk

The starting size of the VM was 105GB, with used and non-shared at 95GB.  After the zero write procedure, the provisioned size was still 105GB, but used and non-shared were down to 57GB.

If the VM has frequent file writes and deletes, the used space will slowly increase again and eventually warrant another shrinking.  If it is very frequent and expands the virtual disk to capacity in a short time, it may not be worth the effort to shrink the virtual disk.  It is up to you.

Friday, May 19, 2017

Automated Applying of Microsoft WannaCry Security Patch With PSEXEC

I needed to ensure we had this patch on hundreds of servers that are pretty much unmanaged.  I've used various tools in the past, but had to figure out how to do it on my Windows 8.1 and 2012 R2 hosts.  This really applies to any manual patch.

First, I used psexec from the Sysinternals tools at Microsoft.

I needed a way to get the update onto the machines without needing credentials to access a network share.  I figured out how to use PowerShell to copy the file from a web server I had already set up.

I copied the update msu file to the web server with a simpler name and renamed it as a .zip file to eliminate issues with file transfer through the web server.  IIS will block the file unless you have a content type configured for the msu extension.

I created a simple text file with the IP addresses of all the hosts I wanted to patch, one per line.
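For example, a hosts file might look like this (the addresses are hypothetical):

```
192.0.2.10
192.0.2.11
192.0.2.12
```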

Using the PowerShell command Invoke-WebRequest is like using wget.  Just define the output file and specify the web URL of the file.

e.g. powershell Invoke-WebRequest -OutFile c:\temp\update.msu http://mywebserver/

With psexec, you should specify the full path to powershell.exe.  Finding the PowerShell path is simple: just type where powershell at a command prompt.

The psexec command example for a list of hosts
psexec @buildshosts.txt -s -u username -p password C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe Invoke-WebRequest -OutFile c:\temp\update.msu http://mywebserver/

Alternatively, you can issue it directly to a host
psexec \\<hostname> -s -u username -p password C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe Invoke-WebRequest -OutFile c:\temp\update.msu http://mywebserver/

Now that the patch was on the host, I used psexec to apply it with the standalone wusa tool.  Since I know it is in the default path c:\windows\system32, I didn't bother to specify the path.

I included the /quiet and /forcerestart options to silently install and then reboot.
psexec @hosts.txt -s -u username -p password wusa c:\temp\update.msu /quiet /forcerestart

The patch update tool exits with code 1641 if the application and reboot were successful.

The process runs serially, so it takes a while to iterate through a large number of hosts.

Using PowerShell to do a web download is really slow, so it takes several minutes to download the 200K rollup.

There is a chance PowerShell is too old and doesn't support Invoke-WebRequest (it was introduced in PowerShell 3.0).

Saturday, November 19, 2016

Freaking Out About To Run Out Of Disk Space On My Nimble AFA

So I am moving workloads over to a Nimble All Flash Array and I notice I am out of free space.  Now I start freaking out, afraid my critical VMs are about to start crashing.  I checked the Nimble GUI, and I am only using 20% of disk space after compression and deduplication.  I know I am not running out, but VMware doesn't.  There is Nimble free space vs. VMware free space.

As I start to move workloads off the Nimble, I already know the problem, just not what to do about it yet.  When the Nimble volume was created, we chose to allocate the entire capacity to a single volume mounted on a cluster of VMware hosts.  The total free space is 7TB.

As I run through the issue and ping my account SE and a friend who knows this stuff better than I do, I consider how Nimble is supposed to represent actual free space.  Or better yet, how is it going to dynamically show the volume size?  Starting out, there are no compression and deduplication savings, so the volume size is the max free space.  While I want it to change the volume size dynamically based on actual dedupe and compression ratios, Nimble doesn't.

I proceed to the Nimble GUI and navigate to my single large volume.  As I iterate through the volume configuration, I decide to change the volume size, or at least see if I can.  Right there above the volume size is a blurb of text advising you can create a volume size greater than the free space because of deduplication and compression.  It would be nice if the GUI gave me some guidance on how big, based on the current ratios, but it doesn't.  With nearly 5X space reclamation, I could probably choose 35TB.  I chose 15TB for now.

Back in VMware vCenter, I rescan the volume on each host and try to resize the volume from each mounted host, to no avail.  I make an educated guess and connect directly to a host: the first host where I mounted and formatted the volume, and attempt the resize from there.  Sure enough, it allows me to resize.  Back in vCenter, I rescan on each host again, which now shows 15TB of total space.  I cancel the Storage vMotions that were abandoning the storage and go back to moving the final set of workloads onto the Nimble.

Crisis averted, and I never needed help from the SE or my expert friend.

The Nimble AFA has been performing incredibly well, with sub-millisecond latency.  My jobs are performing quite predictably, which is critical for the workload.  Further, the Nimble AFA is saving me around 30-45 minutes over the fastest time from my next-gen hybrid array.  The Nimble AFA is a better match to the workload than the next-gen hybrid array, which experiences unpredictable latency causing my jobs to vary between zero and 6 hours of additional time.  Of course time will tell, but so far Nimble is stellar.

Tuesday, October 18, 2016

Free VMware ESXi Active Directory Problems

I have several free license VMware ESXi servers where I use Active Directory (AD) authentication to log in via the vCenter client.  I frequently find AD credentials don't work (usually invalid password or authentication failed).

When I look at the Host Configuration / Authentication Services Setting tab, I see:
Directory Service Type                    Active Directory
Domain                                            Mydomain.Com
Trusted Domain Controllers                        --

The "--" for domain controllers means it isn't talking to my DC.  I've spent hours digging through DC and ESXi logs trying to figure out why, with no clear reason found.  We have had to take drastic actions such as leaving the domain, which causes all the defined permissions to be lost and have to be recreated.  Rebooting usually works, but that is very disruptive.

My star helper finally found a way to reconnect to AD that is fast, easy and non-disruptive.  Basically he kept searching through all the files on ESXi looking for anything to do with AD, domain and a dozen other keywords.  He found this gem!


This is the script that joins the domain or rejoins when disconnected for some reason.

The usage is: 

/usr/lib/vmware/likewise/bin/domainjoin-cli join <domain> <username> <password>

For example, with the domain from above:

/usr/lib/vmware/likewise/bin/domainjoin-cli join mydomain.com jomebrew password

I have automated this using plink and a simple Perl-based web page where we can enter a number of IP addresses and iterate through the list, issuing the command via plink.  It usually works, but when it doesn't, we SSH to the host and issue the command manually.
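That iteration can be sketched in shell.  The helper below just prints the plink command for one host so you can eyeball it first; mydomain.com, jomebrew and both passwords are the hypothetical placeholders from the example above.

```shell
# Sketch only: print the plink command that re-joins one ESXi host to AD.
# mydomain.com, jomebrew, "password" and "rootpassword" are placeholders.
build_join_cmd() {
  printf '%s\n' "plink -batch -ssh root@$1 -pw rootpassword /usr/lib/vmware/likewise/bin/domainjoin-cli join mydomain.com jomebrew password"
}

# To run it against every host in hosts.txt (one IP per line):
# while IFS= read -r ip; do eval "$(build_join_cmd "$ip")"; done < hosts.txt
```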

Note:  We have experienced AD issues on VMware 5.5 and 6.0.  These are not issues with vCenter or licensed hosts.  I imagine standalone licensed hosts would experience the same issue.

Sunday, October 16, 2016

Installing RaspberryPints on Raspian Jesse

I've set up a couple of Raspberry Pints (RP) installs on new Raspberry Pi 3 Model B boards.  While most of the original configuration guide is still relevant, there are a couple of differences.

I have images below to show the code lines in the files that are edited.  This blog tool does not let me easily list code snippets, which would make them a lot easier to copy and paste.

Step 5: Package Configuration Wo/Flow Meters

The path to autostart has changed.  From the pi home directory, edit the following file (on Jessie it is typically .config/lxsession/LXDE-pi/autostart)


Use the following entry to load Chromium in kiosk mode and instruct Chromium not to display the unsafe shutdown message on restart.

@chromium-browser --incognito --kiosk localhost

Apache2 Default Document Root Directory

Apache2's default document root is /var/www/html.  You can choose to install Raspberry Pints in that directory or change the Apache2 configuration.  I chose to change the Apache2 configuration.

Edit /etc/apache2/sites-enabled/000-default.conf and change 

DocumentRoot /var/www/html to DocumentRoot /var/www

My Changes to Raspberry Pints

I make a few changes to Raspberry Pints to suit my needs.

Add Automatic Refresh 

RP does not refresh the browser on my systems.  Once taps are updated, the page needs a manual refresh.  I add a refresh every 60 seconds to automatically pick up tap changes.  There is a tool that I expect should refresh, but it doesn't work on my system, maybe because xdotool isn't installed.

Edit /var/www/index.php

Add the meta refresh tag in the head section as shown below
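A minimal version of that tag, assuming the 60-second interval mentioned above, placed inside the head section of index.php:

```html
<meta http-equiv="refresh" content="60">
```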

Remove CR and LF from Beer Info

If you use CR and LF in the beer info, it breaks the program when you tap a new keg and probably elsewhere.  This change strips CR and LF on all database updates.

Edit /var/www/admin/includes/functions.php and add the following to the bottom.

Automatically Mark Kegs as Clean When A Keg is Kicked

I don't manage kegs and don't care about the keg feature.  I just want to show my taps.  The process to kick a keg and tap a new beer is a bit tedious, especially at a festival.  So, I automatically mark a keg as clean when I kick it.

Edit /var/www/admin/includes/managers/

I prefer to keep the original instruction as a backup.  Copy and paste the following line, add a # to comment out the original, and change NEEDS_CLEANING to CLEAN in the copy.

#$sql="UPDATE kegs k, taps t SET k.kegStatusCode = 'NEEDS_CLEANING' WHERE k.id = t.kegId AND t.id = $id";
$sql="UPDATE kegs k, taps t SET k.kegStatusCode = 'CLEAN' WHERE k.id = t.kegId AND t.id = $id";

Remove Column Headers

Most people already know what the columns mean.  I can get some more screen real estate by removing the column headers.  The following simply adds comment tags so the header is not displayed.

Edit /var/www/index.php 
Around line 111, locate the thead tag and add <!-- before it as shown below.  Then locate the closing /thead tag and add --> after it as shown below.

Rearrange Columns to List Beer Name after the Tap Number

This is a bit more complex.  I recommend backing up index.php before making changes.  

cd /var/www
cp index.php index.php.orig

Locate the following text for the Name column and copy it to the clipboard **.  It is around line 174.

Now scroll up and locate the ConfigName line.  It is around line 160.

Paste the Name column data you copied earlier above the ConfigName line.  Save, then reload the taps page and verify the changes are OK.

** There are various ways to cut and paste or copy and delete depending on the editor you use.

If you screw up the tables, just copy index.php.orig to index.php.

I also make changes to style.css to update fonts, sizes, colors, and table width (so the name column is wider).


Sunday, May 15, 2016

Double IPA and a Barleywine in a Single Brew

I am almost out of beer. I know, right?  Down to the Pineapple IPA hopped with Citra and Mosaic.  The club needs a barleywine for NCHF, the Northern California Homebrew Festival.  The missus wants a DIPA.  Can I make a barleywine and a Double IPA in a single batch?  Technically, the difference between a barleywine and a Double/Triple IPA is a game of semantics.

So I modified my Phoebe Pry IPA recipe and a club barleywine recipe to do a no-sparge barleywine and mash on the same grain bed as the IPA recipe.  I have done this before with an Imperial Stout and a normal Stout.  They came out great.  I wasn't sure if dark beers fare better with this method or not.

Recipe and Brew Day Notes
I separated the recipe ingredients into two separate buckets, milled separately, and added a note to each to ensure I was using the right ingredients at the right time.  I started with the no-sparge barleywine.  Using BeerSmith software, I chose a Brew In A Bag equipment profile, which is an easy way to calculate strike water volume for a no-sparge batch.  I nailed the strike temp of 154F, adding 7.2 gallons of 170F water to the 12 pounds of grain for the 3 gallon batch.

Now I had an hour for the mash before I would vorlauf, 2 pints at a time, 10 times, until the wort flowed clear.  So it was time for some brew day breakfast: a triple-decker fried ham and egg sandwich on a plain bagel cut into three layers.  Some beer mustard and spicy pepper jack cheese went well with my Chocolate Coconut Porter.

I transferred around 5 gallons of 13.2 BRIX wort, which is 1.054 specific gravity, a long way from my target 1.101 SG.  After 2 hours of boiling and several hop additions (Magnum, Cascade, Columbus), the boil was done and the gravity was 24 BRIX, or 1.102 SG.  I transferred about 2.5 gallons into the 3 gallon fermentor at 68F and pitched a packet of Safale S-04 dry yeast.
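The BRIX-to-gravity numbers follow a common homebrew conversion formula; a quick sketch for checking readings (it is an approximation, so it lands within a point or so of my refractometer readings):

```shell
# Approximate conversion from degrees Brix to specific gravity
# using a widely used homebrew curve fit.
brix_to_sg() {
  awk -v b="$1" 'BEGIN { printf "%.3f\n", 1 + b / (258.6 - (b / 258.2) * 227.1) }'
}

brix_to_sg 13.2   # ~1.053 (I read 1.054)
brix_to_sg 24     # ~1.101
```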

After transferring the barleywine to the boil kettle, I added the additional 13 lbs of grain to the grain bed and added 4 gallons of strike water at 165F, hitting my mash temp of 146F.  A little over an hour later, I sparged with another 2.5 gallons of 168F water and ended up with 5 gallons of clear wort in the kettle.  I was a little worried with the 14 BRIX pre-boil gravity (1.058 SG), but after an hour or so of boiling and adding Magnum, Cascade and Simcoe hops, I ended up with 3 gallons of 1.083 SG wort in the fermenter.

I pitched 2 vials of White Labs WLP001 yeast and rigged a blowoff for the high-krausen yeast to drop into the barleywine.  The implementation was poor and almost a disaster, but it worked well enough to get the happy yeast from the IPA into the barleywine, though there was a lot of leakage and cleaning still to do.

With a lot to clean and gnarly back pain, I took a break and had an intermission beer.

The beers stayed in the fermenter for two weeks.  I took both out and cleaned the mess in the fermenter from the blowoff experiment, took gravity samples, and dry hopped both.

The barleywine finished at 1.021 SG for a final ABV of 10.8%.  I dry hopped with an ounce of Cascade, an ounce of Amarillo and two ounces of Liberty hops.

The IPA finished at 1.011 SG for a final ABV of 9.5%.  I dry hopped with an ounce of Simcoe, two ounces of Cascade and an ounce of Amarillo.
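Those ABV figures come from the usual (OG - FG) x 131.25 approximation; multipliers from about 131 to 133 are all in common use, which accounts for small differences from the numbers I quoted:

```shell
# Common homebrew approximation: ABV = (OG - FG) * 131.25
abv() {
  awk -v og="$1" -v fg="$2" 'BEGIN { printf "%.2f\n", (og - fg) * 131.25 }'
}

abv 1.102 1.021   # barleywine: ~10.6
abv 1.083 1.011   # IPA: ~9.45
```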

I will let both sit another 5 days before kegging.  If I can score some oak spirals, I will oak age the barleywine another week or two before kegging.

Next up I plan to split a 6 gallon batch of Imperial Porter and finish with 3 gallons of Bourbon Vanilla and 3 gallons of Chocolate and something else.

Tuesday, May 10, 2016

Taxonomy Of A VMWare Outage After A Power Failure

A few VMs hosted on the two blade servers indicated they were inaccessible.  Some were simply orphaned, which is sometimes a result of a power loss event.  Others listed the GUID of the path rather than the label for the storage.  Orphaned VMs must be removed from inventory and then added back from the datastore browser.  VMs that show the GUID path can often be recovered simply by refreshing the storage after all services are restored.  Neither of these was successful.

The hosts have been configured with local storage and NetApp storage for some time.  VMs have been running from both datastore sources without issue.

After the power failure, we experienced problems with VMs on NetApp storage.

-  VMs running from local storage experienced no issues and could be started normally.
-  VMs running on the NetApp could not be started.  Orphaned VMs could not be added to inventory.  We were not able to modify or copy the files on the NetApp, and we could not unmount it.  Neither SSH nor desktop client actions were successful.  Later in the morning, we identified another blade experiencing the same condition, indicating the problem was not isolated to these two blades.

We verified the NetApp was in a normal state by accessing the files  as a CIFS share. 

We identified the changes since the last power failure (a week ago): 1. We added a vSwitch and network interface on the Storage network.  2. We attached a Nimble volume to the host.  We decided to back out these two changes.

We set the Nimble volume to offline in the Nimble GUI.  Surprisingly, we were then able to unmount the NetApp, which indicated a strange link between the datastores; VMware had reported the datastore was in use.  Several iterations of removing and adding did not solve the VM issues on the NetApp.

We unmounted the NetApp, then removed the new Storage vSwitch, followed by mounting the NetApp again.  VM operations on the NetApp were normal again.

We unmounted the NetApp, added the Storage vSwitch, then mounted the NetApp again.  VM operations failed on the NetApp once again.

A short time later, we realized the NetApp export access permissions permitted the (public) IP address of the original vSwitch on the blades.  Adding the vSwitch on the Storage network established a connection from the new IP on the local storage subnet.

The solution was to change the NetApp export permissions to allow root access from the new Storage vSwitch IP address.  Resolution time was 7 hours.

Last week we added another blade to the system.  In the course of adding the additional server, we added Nimble storage volumes to the existing two servers.  We added a new Storage vSwitch connected to the IO Aggregator, which has a 10Gb connection directly attached to the storage network.  After the power failure, the NetApp was remounted over the new Storage vSwitch, since the destination was on the locally attached network.  This put the datastore in a read-only state: technically, the NFS export was mounted as non-root, which does not have write privileges.  Additional blades have this same configuration and experienced the same issue, though the users hadn't realized it yet.