Thursday, May 9, 2019

VMware PowerCLI Get All VMs In A Cluster On A Specific Datastore

I'm a novice with VMware PowerCLI, so I use a lot of Google searches to figure out the more complex and challenging PowerCLI commands. 

What I was trying to do was migrate all VMs in a cluster from one datastore to a different datastore.  I had 200+ VMs spread across 16 servers to move. Doing this via the console has its own challenges, mostly because both web clients are cumbersome to use.

My biggest challenge was getting a list of VMs by cluster AND datastore. I thought this would be easy: just use "get-vm -location XXX -datastore YYY".  But no, that returns VMs in the cluster or on the datastore. That list was 500 VMs.

I was never able to figure out how to get the list I wanted, so I just brute-force tried to move every one of the 500 VMs. If a VM was already on the target datastore, the move just finished and went on to the next VM.  If the VM was NOT on the source datastore, it failed and moved on to the next VM.  Those on the source datastore were moved.  Not very deterministic, but it worked.
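That brute-force pass looked roughly like this (a sketch rather than my exact commands; the cluster and datastore names are placeholders):

Get-Cluster "MyCluster" | Get-VM | Sort-Object Name | ForEach-Object {
    # Try to move every VM; -ErrorAction Continue keeps the loop going when a move fails
    Move-VM -VM $_ -Datastore "DestinationDatastore" -DiskStorageFormat Thin -ErrorAction Continue
}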

I continued to investigate how to do this, trying to figure out the API doc, which is really a pain to use as well. Luckily Google has all the answers.

With the help of a blog post from 2012 by psvmware, I was able to get the list of VMs I sought. I am pretty sure this can be used for different objects, but I didn't mess around with that yet.

The command looks like this:
Get-Cluster "MyCluster"|Get-vm |?{($_.extensiondata.config.datastoreurl|%{$_.name}) -contains "SourceDatastore"}| select -expandproperty name 
This returns only the VM names, which can be used to move the VMs.

Here is my simple script that gets a list of VMs in a cluster on a specific datastore, sorts the VMs by name in ascending order and moves them to a different datastore. I actually used two scripts: one that sorts ascending and one that sorts descending, so I could run them both and move two VMs at a time, eventually running into each other. (When they both tried to move the same VM, one just waited for the other to finish.)



$vchost="my.vcenter.server"
$vcuser="myUser"
$vcpassword="myPassword"
$srcdatastore="mySrcDatastoreName"
$vccluster="myClusterName"
$dstdatastore="myDestinationDatastoreName"
# Connect to vCenter
Connect-VIServer -server $vchost -user "$vcuser" -password $vcpassword
# Get the names of all VMs in the cluster that live on the source datastore, sorted by name
# (ascending is the Sort-Object default; there is no -Ascending switch)
$vm = Get-Cluster "$vccluster"|Get-vm |?{($_.extensiondata.config.datastoreurl|%{$_.name}) -contains "$srcdatastore"}| select -expandproperty name |Sort-Object
# Move each VM to the destination datastore, thin provisioned
foreach ($i in $vm) {
move-vm $i -datastore $dstdatastore -DiskStorageFormat thin
}
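The descending copy of the script was identical except for the sort, something like this:

$vm = Get-Cluster "$vccluster"|Get-vm |?{($_.extensiondata.config.datastoreurl|%{$_.name}) -contains "$srcdatastore"}| select -expandproperty name |Sort-Object -Descending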

Friday, March 15, 2019

Craft Beer and My Life - Hoppy Brewing

Several years ago, I think it was 1994, at the Northern California Small Brewers Festival in Mountain View, California, I was a volunteer beer pourer, as I had been at other festivals. This festival was known to be one of the more rigid ones in the Bay Area (whereas the Pleasanton one was pretty loose and a lot of fun).

I was pouring for the brewery E.J. Phair.  The owner and brewer was JJ Phair. We got along quite well, as I had been visiting dozens of micro breweries for a couple of years and had quite a bit of experience with (what would later be called) craft beers, especially hoppy beers.  Back then, hoppy beers were unusual, and balancing hop flavor and aroma with bitterness was not yet mature in the micro brew scene.  JJ and I had quite a bit of fun tasting the various beers being poured at the festival.

While pouring for JJ, I kept my water glass in the bucket of ice with the kegs. I've never been a fan of drinking water from plastic, as I always felt it was wasteful, unnecessary and left a plastic taste. So, I drink from glass (and stainless these days). Well, someone from the festival came by and reminded us that I couldn't drink beer while pouring. Of course we said we weren't, and that I was only drinking water, which I showed them.

A bit later, JJ excitedly came back to his taps and told me I had to go try Hoppy Face Amber from a new brewery, Hoppy Brewing. So, I went over, used a token and got a sample. I don't remember the beer itself, but I do remember a wow factor and that the hop aroma and flavor were huge.

Moments after I returned to JJ's taps, 8 or 10 festival "security" descended on us and demanded I stop pouring and hand over my volunteer shirt. It was quite comical to me. I once again demonstrated I was drinking water, but they went on about drinking beer while pouring and then pointed out I had just gone over to Hoppy Brewing (I removed my shirt first) while I was "on duty".  Rightfully so, JJ went ballistic and argued heavily with them.

JJ explained he sent me over to try the beer, but they were unrelenting. JJ decided he would leave the festival and take his beer with him. They tried to stop him, saying he had donated the beer. I recall him saying maybe so, but not his kegs. I'd like to remember that he then poured all his beer out, but I don't think that happened.

I let JJ know I would be fine getting booted, but he wouldn't let them treat me like that.  I was escorted out by 7 or 8 staff as if I was some big threat. The people and circumstances are just as laughable now as they were back then. I don't know what happened after that.

I continue to support E.J. Phair every time I see it. I did get to see JJ again several years later and he kind of remembered me, or at least he was considerate enough to act like he remembered. We sampled his Helles right off the bright tank and it was fantastic!

I do remember that Hoppy Face Amber. They still make it today. Funny enough, I haven't had it since. I think it is time!


Friday, March 8, 2019

Craft Beer and My Life - 20 Tank

20 Tank Brewery Nearly Changed My Life



I was first introduced to craft beer around 1988 by a newfound friend, Brian. Until then, my exposure to beer was typical American homogenized light lager with the occasional import such as Heineken. I usually just followed my brother's lead. At first, we were Michelob fans. To us, this was kind of a premium beer. I was not much of a beer fan. My father didn't drink much and I don't recall anyone in my family drinking much beer. My older brother, who is a couple of years older than me, was pretty much my guide. There was not much to choose from, so his leadership role was pretty easy.

Sometime in 1983, Stroh's beer came to our area. For some reason, this became our favorite beer, probably because it was simply new. I remember thinking it was so plain. It was not any different than the other American beers, but certainly not stale like the German beers. Not long thereafter was my 21st birthday. The love of my life at the time was Lori. Her family had moved about two hours north a year or so earlier. I was not entirely sure what was happening with our relationship. The distance was a strain, but breaking up was foreboding. She arrived to help celebrate my birthday bearing my favorite cake. Upon cutting into the cake, I realized it was not cake but a 12 pack of Stroh's. It made me realize I must have talked about Stroh's a lot, I guess, though I don't remember. I also realized it was time to follow a different path than Lori. I don't know if or how we ever broke up. I am pretty sure we did. I don't recall ever seeing her again. I don't know if I ever drank the 12 pack either.

Not much happened in my beer scene until I started a job in San Francisco three or four years later. Some time around 1988, I worked with Brian. Through sketchy memories, I recall conversations with Brian about the new craft beer revolution. Brian was also the first to talk about homebrew. Though I don't recall much of a conversation from 25 years ago, I do remember Brian talking about exploding bottles in the cupboard. I don't think I ever tried any of his beer nor enjoyed the experience of exploding bottles. I do think fondly of him when I reminisce about my first craft beer experiences.

My most vivid memory is our first visit to a new brewery in San Francisco: 20 Tank. There were three of us: myself, Brian and John. I don't recall how familiar John was with craft beer, but I don't think he was much of a drinker. I clearly remember a Red/Amber ale that was a bit coarse and kind of sweet. I also remember a powerfully bitter assault on my palate from an IPA called War Boner or something like that. This particular night, we sat upstairs and did an all-out blitz on these delightful beers. The laughter was pain inducing and I still snicker at the thought of chili, but I don't know why. There is a permanent hilarious imprint burned into my psyche. I can't help but smile now.

The three of us staggered out to the street and fell into a taxicab. I looked out the window and can still see the bright lights from inside the brewpub illuminating the building with 20 TANK BREWERY in big bold letters. I was dropped off at the train station and somehow made it onto a train and ended up home, though there is no recollection of how or when.

I visited 20 Tank a few more times. I met the brewer and talked beer. He went on to start 21st Amendment Brewing some time later. I mostly recall the last time I was there. I was with a spectacular woman, Judy. We met at work. She shared an office with another strikingly beautiful woman. I don't recall her name, but I will call her Kim. I became friends with Kim, often chatting on our train ride to or from work. Kim was just honest and sweet and engaging. At times, it was comical how many guys would ask me how I knew Kim. There was real shock that a woman this beautiful would be friends with me, I guess. There was a bit of fun sport in all of it and I did enjoy being in such company. I would, at times, visit Kim's office for no particular reason, just to say hi. It was there I met Judy. She was tall and slender and, though different, equally as striking as Kim. Two great reasons to find a reason to visit them. It doesn't seem real, but somehow I asked Judy to go out, or maybe she asked me. I do know she agreed and even picked me up, though I don't know why. We had a great time at 20 Tank and at the end of the night we walked to her car hand in hand.

I was in inner turmoil. I was on the cusp of maybe starting to date another woman. This other woman had a grip on my heart. I really enjoyed this evening with Judy but could not stop thinking of the other woman either. As Judy and I sat in her small car, just inches from one another, I explained this terrible predicament. Any other time, I would not have hesitated to take things as far as they would go with Judy. Instead, I talked about this other woman and how things were starting to blossom and I had to follow that path. It was awkward for sure. I did tell her I didn't expect we would get on so well and I was not really prepared to have such a great date. She dropped me at the train. I spent the next hour and a half wondering if I was a fool. I married that other woman a few years later and we are still happily married today. I sometimes wonder what became of Judy. I do know 20 Tank closed in 2000.

That visit to 20 Tank almost changed my life.

Monday, February 25, 2019

Migrating to VMware 6.7 vCenter - xVmotion

I've had two 6.0 Windows vCenter instances with external MSSQL databases for quite some time. I've been reluctant to move to 6.5 due to the end of life of the desktop client. The workflow of the desktop client, as well as its sleek interface, is unmatched and, apparently, not reproducible in the web browser interface. Further, we have had many issues with browsers and seem to have to switch every so often because one just stops working.

As time progresses, so we must adapt to changes, even though they are backwards in ease of use and likability. The interface in 6.7 brings no improvements over the desktop client, only stunted workflows and heaps of frustration.

I opted to deploy a new 6.7 vCenter appliance with embedded PSC and planned to migrate between vCenters.

  • Partly because I am moving to a different Microsoft AD. 
  • Partly because, rather than assigning AD users access to VM folders directly, I created security groups in AD and assigned the groups to vCenter resources; users are placed in the AD security groups that correspond to the vCenter resources they need (a PowerCLI sketch of this is just below the list). 
  • Lastly, I planned to implement Distributed Virtual Switches and Distributed Virtual Port Groups, which are supposed to make it a lot easier to deploy new hosts.  
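As an illustration of the group-based permissions (the folder, group and role names below are made up), assigning an AD security group to a vCenter folder with PowerCLI looks something like this:

New-VIPermission -Entity (Get-Folder -Name "Web Servers") -Principal "MYDOMAIN\vc-webservers-users" -Role (Get-VIRole -Name "Virtual machine user (sample)") -Propagate $true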


First, distributed switches and port groups are a huge pain in the ass. I am shocked how complicated they are to deploy and how poorly either web interface (Flex or HTML5) visualizes the configuration. I wonder if the UI designers went on vacation and the engineers put together the interface. Unscrewing my hosts from distributed switches caused so much grief, it was easier to reinstall ESXi and start again.

Then, I was burned by licensing. I made a big mistake: I used the trial license when configuring distributed switches. When I went to license my host just before the trial expired, I was shocked to be rejected, as I have Standard licenses and not Enterprise. So, I slowly unwound the configuration, and though I am not sure exactly what I did and can't do it again, I was able to detach the host from the distributed switch and port groups, create new standard switches and port groups and eventually delete all traces of the distributed switches and port groups. NOTE: If you do this, you will need console access to the host to reassign the management network to the NIC. Maybe it is easier if you have two NICs in the vSwitch, but I have just one.
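For reference, the console-side recovery involves roughly this kind of esxcli work (a sketch only, not the exact steps I took; vSwitch0, vmnic0, vmk0 and the IP address are assumptions about your host, and removing the management VMkernel interface drops network access, which is why console access is required):

# Recreate a standard switch, attach an uplink and add a management port group
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name="Management Network" --vswitch-name=vSwitch0
# Remove the management VMkernel interface from the distributed switch and recreate it on the standard port group
esxcli network ip interface remove --interface-name=vmk0
esxcli network ip interface add --interface-name=vmk0 --portgroup-name="Management Network"
esxcli network ip interface ipv4 set --interface-name=vmk0 --ipv4=192.168.1.50 --netmask=255.255.255.0 --type=static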

Before unwinding the distributed switch, I tested the xVMotion fling to migrate VMs from the old vCenter to the new vCenter. Again, I ran into issues.

  • First, there was an MTU issue that took way too long to figure out; it was a big problem on 6.7 but not on 6.0. 
  • Next was duplicate folder names in a cluster. If you have, say, four folders A, B, C and D, and under B you have a folder named A, xVMotion pukes: you don't see any target folders and you have no idea why. (A quick PowerCLI check for duplicates is just below.)
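A PowerCLI one-liner to spot the duplicates (a sketch; point it at whichever vCenter holds the suspect folders):

Get-Folder -Type VM | Group-Object -Property Name | Where-Object { $_.Count -gt 1 } | Select-Object Name, Count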
I successfully moved one cluster and about 100 VMs. I'd like to have been able to filter source VMs by folder rather than just a search, which would have made moving a whole folder a lot easier.

When I tried to migrate my next cluster, I ran into more problems. 
  • A general system error occurred: Host not found
    • I think this may have been because DRS on the new vCenter cluster was not enabled
  • License not available to perform the operation
    • After enabling DRS, I started getting this error. I made sure the new host had a license. The problem persisted. It turns out the source host had a Standard license, which doesn't support vMotion. How stupid is that? Once I applied an Enterprise license to the source host, I was able to migrate VMs.
Recap:
  • Be careful using the evaluation key; it can bite you later
  • You need Enterprise licenses to migrate VMs across vCenters
  • Triple check all MTU settings on hosts and switches
  • Make sure you don't have duplicate folder names
  • Keep an eye on both the source and destination vCenter event logs to identify where an issue may be arising; I tended to assume it was the destination
  • Make sure you have DRS enabled on both the source and destination 
  • Make sure the host VMkernel NICs have vMotion enabled (a quick PowerCLI check is just below)
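For the last item, something like this PowerCLI one-liner (a sketch) lists which VMkernel adapters have vMotion enabled on each host:

Get-VMHost | Get-VMHostNetworkAdapter -VMKernel | Select-Object VMHost, Name, IP, VMotionEnabled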

Friday, December 14, 2018

Windows 7 to 10 upgrade Failure 0x800f0955 - 0x20003

Tech.. another techie article.  Maybe I'll go back to weird, personal, hiking, adventuring or beer brewing (and drinking) again one day.  For now, another pain in my ass with computers.  I do work with technology all day long.  Lots of different tech with servers and data centers, so I have a lot to bitch about.

This time, I've been trying to (free) upgrade windows 7 to windows 10. It seems to fail a lot.  Like fail, fail.

In a nutshell, running MediaCreationTool1809.exe (versions will vary over time) from https://www.microsoft.com/en-us/software-download/windows10 allows you to create Windows 10 media: an ISO file that can be burned to a DVD or written to a USB drive.  You can also just upgrade right from the tool, which will cache the install files on your hard drive (you need 8GB free, but really 12 or so).

So, I take the easy route.  Run the media creation tool and select the upgrade option.  The process is pretty simple.  It downloads files, stages an upgrade, reboots, starts the upgrade, does some updates, fails, and rolls back (very cleanly/safely every time).  The code I get is 0x800f0955 - 0x20003 along with some Safe_OS and Updates message.  I use Google university to find what the error means, do everything everyone suggests, and it still fails.

Having done this several times now and spent hours troubleshooting, I've come up empty.  If only the upgrade would log exactly what it is doing when it fails, maybe we could fix it.  But it doesn't, and I can't get around that problem.  However, I've still been able to upgrade several systems.  All you do is...

First, do some cleanup of your Windows PC.  Necessary?  Probably not. Works every time I do it?  Yup.

Ok, first, you will need to install from a USB drive.  Sorry. You need an 8GB drive.  These are cheap and easy to use.  So, get one, insert it into a USB slot and run the media creation tool to create the installer USB drive.  When you are all done, you can delete everything from the USB and have some portable storage. Note: Booting from USB can be slowwww on older systems or when not using a USB 3.0 flash drive. Like a couple hours slow. It was only a few minutes with a USB 3.0 drive and interface.

You can skip ahead to the EASY part if you want, but you should do the cleanup first.

Cleanup.   Open Windows Explorer.  Navigate to C:\Users.  Double click the account name you log in as, e.g. c:\users\jomebrew\
Click in the address bar and add \appdata.  It looks like this: c:\users\jomebrew\appdata.  Press Enter

Open Local, then open Temp.  Now your address bar shows C:\Users\jomebrew\AppData\Local\Temp with your account name of course.

Select everything in this folder: click in the right panel and click Edit / Select All.
  Press the Delete key and click Yes.
  Skip any files that can't be deleted.

In the address bar, click Local

Open Microsoft, then open Windows then open WER.  Delete everything in ReportArchive and ReportQueue.  If they are empty, move on.

Open ReportArchive then select all and press delete key and yes

Click the left (back) arrow or WER from the address bar then open ReportQueue.  Select all and press the delete and then yes

On the address bar, click Local Disk (C:)

Click in the address bar and add ProgramData.  It will look like this: C:\ProgramData
Again,
Open Microsoft, then open Windows then open WER.  Delete everything in ReportArchive and ReportQueue.  If they are empty, move on.
Open ReportArchive then select all and press delete key and yes

Click the left (back) arrow or WER from the address bar then open ReportQueue.  Select all and press the delete and then yes

On the address bar, click Local Disk (C:)

Open Windows then open Temp

Select All then press the Delete key and click Yes.
Skip any files that can't be deleted.

On the Address bar, click Computer
Locate Local Disk C:.  Right click on this and select Properties (or highlight it and select File / Properties) Same thing.

Click Disk Cleanup.  Let it scan and when done, select all the boxes

Click Delete then Delete files.  This can take a while but will finish.

Make sure recycle bin is empty.
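If you would rather script the cleanup above, a rough PowerShell equivalent is below (run it from an elevated PowerShell prompt; Clear-RecycleBin needs PowerShell 5, and locked files are simply skipped):

# Per-user and system temp files
Remove-Item "$env:LOCALAPPDATA\Temp\*" -Recurse -Force -ErrorAction SilentlyContinue
Remove-Item "$env:SystemRoot\Temp\*" -Recurse -Force -ErrorAction SilentlyContinue
# Windows Error Reporting archives and queues (per-user and machine-wide)
Remove-Item "$env:LOCALAPPDATA\Microsoft\Windows\WER\ReportArchive\*" -Recurse -Force -ErrorAction SilentlyContinue
Remove-Item "$env:LOCALAPPDATA\Microsoft\Windows\WER\ReportQueue\*" -Recurse -Force -ErrorAction SilentlyContinue
Remove-Item "$env:ProgramData\Microsoft\Windows\WER\ReportArchive\*" -Recurse -Force -ErrorAction SilentlyContinue
Remove-Item "$env:ProgramData\Microsoft\Windows\WER\ReportQueue\*" -Recurse -Force -ErrorAction SilentlyContinue
# Empty the recycle bin (PowerShell 5+) and launch Disk Cleanup for the rest
Clear-RecycleBin -Force -ErrorAction SilentlyContinue
cleanmgr /d C: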

The EASY Part
Now that things are clean, navigate to the USB drive. Double click Setup.exe.
This seems to be the important piece.  On the first screen, it asks about getting updates and optional features.  Select Not right now, then click Next.

Now click the appropriate options to finish the install. That's it!

Once Windows 10 installs, I select No to all the sharing options presented during the configuration steps.  Be sure to apply the latest updates once Windows 10 is done installing.

Note:  the installer spends a lot of time at 66% and 67%.  Be patient.





Wednesday, August 22, 2018

Fixing Windows 10 Missing systemprofile Desktop folder

I ran into an issue that seems to come from Windows 10 Updates.  Moments after logging in, I get a warning window advising C:\WINDOWS\system32\config\systemprofile\Desktop is unavailable.  There is no desktop, just a task bar and recycle bin.  

The problem stems from the relocation of the default desktop from c:\users\default to C:\WINDOWS\system32\config\systemprofile\, but the Desktop folder doesn't get migrated.  Thus, the error message.

I could not launch any program except Task Manager, so I worked out a convoluted procedure to launch a Windows Explorer instance. With it, I was able to show the now-hidden Default user folder in c:\users, copy the contents and paste them into C:\WINDOWS\system32\config\systemprofile. 

After following the procedure and rebooting, I was able to log in and was back to my normal desktop.

Here is the procedure I followed.  It might be a bit overkill (copying all the files in the Default user's folder), but I only wanted to do it once.  Sorry, no images this time.

Right click the task bar (if you see one) and select Task Manager, or press CTRL+ALT+DEL and select Task Manager.
In Task Manger, click File / Run New Task
Click Browse
Locate Desktop in the left panel
Right Click and select Properties
Click Location
Click Find Target  (wait a little bit for an explorer window to open)
On the top Menu click View
On the far right, click Options
Folder Options will open.  Click View
Click the radio button Show hidden files, folders, and drives
Uncheck Hide empty drives
Uncheck Hide extensions for known file types
Click OK
Click Users in the address bar (This PC > Local Disk (C) > Users )


You should now see the Default user folder along with any other user folders, as well as a Default.migrated folder
Double click on the Default folder to open the folder
On the menu ribbon (the menu below the top menu with File/Home/Share/View), at the far right, click Select All (or click white space in the details panel and press CTRL+A).  All files should be selected/highlighted
Right click on any highlighted area and select Copy (or click Copy from the menu ribbon)

Navigate to C:\Windows\System32\Config   - You may get a security box to allow you to access the folder.  Click Continue
Click systemprofile   - You may get a security box to allow you to access the folder.  Click Continue
In the details white space, right click and select Paste (or click Paste from the menu ribbon)

Reboot - Press CTRL+ALT+DEL, then click the power icon in the lower right and select Restart
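If you can get an elevated command prompt instead (Task Manager's File / Run new task with "Create this task with administrative privileges" checked), the same copy can probably be done in one robocopy line; a sketch, assuming the default locations:

robocopy "C:\Users\Default" "C:\Windows\System32\config\systemprofile" /E /XJ /R:0 /W:0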



Tuesday, August 7, 2018

Accessing VMFS Datastores from CentOS Live Linux

I've often run into issues with disk and other errors preventing me from getting VMs or other files off a VMFS volume.  I have used this process a couple times to retrieve VMs and other files when options for using VMware tools just are not enough.

This article assumes the user has a decent bit of knowledge about where to get things and how to work with their own servers.  For example, I don't describe how to boot your server from an ISO image.  I use remote tools to my server, but you can burn a DVD, write an image and boot from USB, use an external drive, etc.

You will need a couple things to start with.  First, download CentOS 7. I use the DVD ISO.

Next, locate the epel repository rpm. You will download it later, but it is good to verify the path first rather than debug it later.  Currently it is at http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm though the path and name may change.  If it does, I just work my way back up the file path in the URL until I find the change and navigate down from there.

Then, locate the vmfs-tools package.  Again, you will download it later, but verify the path and change the procedure below as necessary.  The package is currently at https://glandium.org/projects/vmfs-tools/vmfs-tools-0.2.5.tar.gz

Finally, make sure you have an SSH client.  I use Tera Term, but you can use PuTTY or whatever you like.

Start by booting to the CentOS ISO image.

Select Install CentOS 7.   Eventually you will be rewarded with a desktop.  From there, open a terminal window.  I right click on the desktop and select Konsole.

Elevate to root
su -

To make it easier to work with and make cut and paste commands easier, start ssh server
service sshd start

Now create a new user account and set the password.  Use whatever user name and password you like; just remember them.
useradd userx --groups root
passwd userx

SSH to the server
You can get the IP address using ifconfig
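If ifconfig isn't included in the live environment, the same information is available with:
ip addr show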

Log in using the account you created above.
Elevate to root
su -

Now work from the ssh client and not the console anymore.

Create a mount path for the datastore.  Create as many as you want to mount VMFS volumes, and use any path and name you want.  Here I had 2 volumes to mount, so I created 2 mount points.
mkdir -p /mnt/dsk1
mkdir -p /mnt/dsk2

Download and install the epel repository package. This makes it easy to install the next couple packages.
wget http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm
rpm -Uhv epel-release-7-11.noarch.rpm

Install libuuid, libuuid-devel and gcc. Say yes to all prompts.
yum install libuuid libuuid-devel
yum install gcc

Download the vmfs-tools package, then extract it
wget https://glandium.org/projects/vmfs-tools/vmfs-tools-0.2.5.tar.gz
tar zxf vmfs-tools-0.2.5.tar.gz

Compile vmfs-tools
cd vmfs-tools-0.2.5
./configure
make
make install
("make install" didn't actually install it in /usr/bin, so I execute it from the build path)
cd ~/vmfs-tools-0.2.5/vmfs-fuse (or just cd vmfs-fuse)

When mounting a VMware boot disk, there are several partitions.  The 3rd partition is almost always the datastore. In this example, I have 2 VMware disks (sdc and sdd).  Notice only 2 partitions on sdd.  This is not a bootable VMware drive, just a datastore drive.

Now, let's mount the volume.  I am mounting partition 3 (the datastore partition) to the mount path I created previously.
./vmfs-fuse /dev/sdc3 /mnt/dsk1

If you want to see the partitions on the disk, use fdisk -l /dev/sdc or whatever the sd? device is.
fdisk -l /dev/sdc

NOTE:  The file system is mounted read only, so you can't write.  I am not sure why; I don't really need to write anyway.

You can also run an fsck scan of the volume to see if there are issues with the file system
cd ~/vmfs-tools-0.2.5/fsck.vmfs/
./fsck.vmfs /dev/sdc3

Now you can copy and view files on the VMFS datastore by navigating around /mnt/dsk1.

To copy files somewhere, you will need a USB drive or a network file system mount.  I used an NFS mount, e.g.

mkdir /nfs
mount 192.168.1.1:/vol/nfs /nfs

I wanted to copy off some VMs in hopes of saving them, so I used rsync, which shows progress and will continue on errors.
rsync -r --info=progress2 /mnt/dsk1/myvm /nfs/myvm

Now that I am done, I cleanly unmount my datastores and the NFS share, then reboot.
umount /mnt/dsk1
umount /mnt/dsk2
umount /nfs



Wednesday, February 7, 2018

MAC OSX Zero Free Space for VMWare Deduplication

Periodically, when working with virtual machines (VMs), enough files are created and deleted that the thin provisioned virtual disk (vdisk) expands to its maximum capacity even though the operating system (OS) file system shows free space.   This is typical and normal behavior.  Unfortunately, this consumes space on the underlying storage that is no longer actively used by the VM OS. 

For years I have been using the Microsoft SDelete tool (sdelete.exe), which securely erases deleted files, with the option to just write zeros to all remaining free space.  In Linux, I use the dd tool to read from /dev/zero and write to a temporary file, filling all free space with zeros, then deleting the temporary file. 
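For reference, those two zero-fill steps look roughly like this (the drive letter and temporary file path are just examples; sdelete's -z switch writes zeros to free space, and dd stops on its own when the disk is full):

sdelete.exe -z c:

dd if=/dev/zero of=/zerofile bs=1M
rm /zerofile
sync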

These procedures simply write zeros to free space.  VMware and network storage systems will "see" these zeros and free allocated storage space from the virtual disk.  In essence, this shrinks the virtual disk file size on storage, freeing unused space on the storage system.  Some advanced storage systems will automatically detect these zeros and free the space.  For others, you need to run a command line on ESXi to free the zeroed space. 

The Mac OS has a couple of ways to write zeros to free space: the graphical Disk Utility in Utilities and a command line tool.  I haven't had much luck with the GUI tool, so this procedure uses the command line tool. 

First, open a terminal as an admin account.

Type diskutil list to locate the drive you want to write zeros to.  In my case it is partition 2 on /dev/disk0.  Look for Apple_HFS Macintosh HD.  The identifier for mine is disk0s2.

Now run the tool and write some zeros

diskutil secureErase freespace 0 disk0s2

where...

secureErase = Secure Erase.  There are 5 levels; we want level 0 for single-pass zeros
freespace = only write zeros to free, unused space.  Does not affect files or OS
0 = secureErase level 0 single-pass zeros
disk0s2 = my partition that has my data

I have an SSD drive so it is pretty quick: 3 or 4 minutes.  The progress bar will indicate how long the zero write will take.  The estimate updates every few seconds.

When it is all done, use the VMware command line tool vmkfstools to free the space that contains all zeros.  First SSH to the ESXi host and log in as root (there are other methods, but I use SSH to ESXi to run the command line tools).

For example on my VM, I used the command
vmkfstools -K /vmfs/volumes/datastore1/Mac_OS_Master/Mac_OS_Master.vmdk

The starting size of the VM was 105GB with used and non-shared at 95GB.  After the zero-write procedure, the provisioned size was still 105GB but used and non-shared were now 57GB.

If the VM has frequent file writes and deletes, the used space will slowly increase again and eventually warrant another shrinking.  If it is very frequent and expands the virtual disk to capacity in a short time, it may not be worth the effort to shrink the virtual disk.  It is up to you.



Friday, May 19, 2017

Automated Applying of Microsoft WannaCry Security Patch With PSEXEC

I needed to ensure we had this patch on hundreds of servers that are pretty much unmanaged.  I've used various tools in the past, but had to figure out how to do it on my Windows 8.1 and 2012 R2 hosts. This applies to any manual patch, really.

First, I used psexec from the Sysinternals tools at Microsoft https://technet.microsoft.com/en-us/sysinternals/bb897553.aspx.

I needed a way to get the update onto the machines without needing credentials to access a network share.  I figured out how to use powershell to copy the file from a web server I've already set up.

I copied the update msu file to the webserver with a simpler name and renamed it as a .zip file to eliminate issues with file transfer through the web server.  IIS will block the file unless you have a content type configured for the extension msu.

I created a simple text file with the IP addresses of all the hosts I wanted to patch.  one per line.  e.g.
10.10.10.1
10.10.10.2
10.0.0.3

Using the powershell command Invoke-WebRequest is like using wget.  Just define the output file and specify the web url of the file. 

e.g. powershell Invoke-WebRequest -OutFile c:\temp\update.msu http://mywebserver/update.zip

With psexec, you should specify the full path to powershell.exe.  Finding the powershell path is simple: just type where powershell from a command line.

The psexec command example for a list of hosts
psexec @buildshosts.txt -s -u username -p password C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe Invoke-WebRequest -OutFile c:\temp\update.msu http://mywebserver/update.zip

Alternatively you can just issue it directly to a host
psexec \\10.10.10.1 -s -u username -p password C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe Invoke-WebRequest -OutFile c:\temp\update.msu http://mywebserver/update.zip

Now that the patch was on the host, I used psexec to apply it using the wusa standalone installer. Since I know it is in the default path c:\windows\system32, I didn't bother to specify the path.

I included the /quiet and /forcerestart options to silently install and then reboot.
psexec @hosts.txt -s -u username -p password wusa c:\temp\update.msu /quiet /forcerestart

The patch update tool exits with code 1641 if the application and reboot was successful.

The process is done serially, so it takes a while to iterate through a large number of hosts.

Using powershell to do a web download is really slow, so it takes several minutes to download the 200K rollup.

There is a chance powershell is old and doesn't support the Invoke-WebRequest cmdlet.
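Invoke-WebRequest came with PowerShell 3.0. On older hosts, the .NET WebClient class can stand in; a sketch using the same made-up server, path and credentials as above:

psexec @hosts.txt -s -u username -p password C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command "(New-Object System.Net.WebClient).DownloadFile('http://mywebserver/update.zip','c:\temp\update.msu')"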