Simple(r) Hyper-V Networking on Windows 8


A while ago I wrote a post on setting up HyperV networking where I basically bridged all networks to the guests and let them worry about which one they needed to use.

Albeit a little complicated, it worked pretty well. However, the guests sometimes messed it up and couldn’t figure out which network to use for AD and DNS, and the default gateway setting could get stuck on the “wrong” network.

Therefore here is a new and vastly better approach :-) – using only one internal network and Windows Internet Connection Sharing (ICS).

(This is essentially a “NATing” approach, where the other was primarily a “bridging” one.)

Caution: The two network configurations are inherently incompatible; you should choose only one to go for.

The Goal (still…)

Is quite simply that I want my VMs to just work seamlessly with my host machine and whatever network connection I’m using and – in my case – a guest AD.

This used to be really simple with VMWare Workstation but after converting my machines to Hyper-V there is a catch or two.

This post covers:

Part 1: Simple Network setup, if you need a simple fix this is it

Part 2: Automatic network sharing, this is what makes it shine

Part 3: Advanced stuff for the last 10% of needs

Note that this applies to a mobile working scenario where I run everything on a laptop and it frequently changes whether I’m using a wired, wifi or mobile net.

The Environment

I do SharePoint development and this post is written in that context, however it should all be generally applicable to Hyper-V.

My local Hyper-V (SharePoint Dev) environment (simplified) consists of two guests:

  • An AD server that should not communicate outside my box
  • A SharePoint server that needs to communicate with my host, the guest AD and the network in general

In addition I need to connect to the guest machines with Remote Desktop from the host (the Hyper-V console is useless for all but boot and network configuration).

Part 1: Basic Network

We’ll set the network up as:

The Hyper-V Network Settings

The steps to configure the network on the host are:

  1. Create an internal switch for the connection between host and virtual machines (and in my environment between the AD and SharePoint server)
    1. Name it appropriately (I just like that)
  2. Go to your host, select the currently active network connection, choose Properties, go to the Sharing tab, enable sharing and choose the Hyper-V switch
    1. When your active network connection changes, you’ll have to share the new connection (there can be only one)

That’s it for the host.
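
If you prefer to script step 1, the Hyper-V PowerShell module on Windows 8 can create the internal switch as well; a minimal sketch (the switch name is just an example, pick your own):

    # Create an internal switch for host <-> guest traffic (same as step 1 above)
    New-VMSwitch -Name "Internal-ICS" -SwitchType Internal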

The Guest Configuration

We need to configure the guests themselves. Connect to each one using the Hyper-V console:

  1. For the AD guest server (internal communication only) assign the IP address 192.168.137.10 (pick the last number as you like) – there should only be one network adapter
    1. Set the default gateway to 192.168.137.1 (which is your host) and DNS to 127.0.0.1
    2. When you change the IP address for an AD server you need to restart it so that the DNS is properly updated
    3. Reboot the AD
  2. For the guests that need network access:
    1. Assign the chosen IP to the internal network, in my case, 192.168.137.20 (a unique one for each guest), use 192.168.137.1 as default gateway, use 192.168.137.10 (your AD) as your single DNS
  3. Test it, on each guest:
    1. Lookup an AD user or two – do you have a proper connection between the guests?
    2. Ping 192.168.137.1 – does your gateway respond?
      If not, have a close look at ICS and google various troubleshooting tips
    3. Ping 8.8.8.8 – do you have basic external network?
    4. Ping google.com – do you have DNS resolution working?
      In case of DNS lookup problems: Have a look at your DNS and possibly remove any DNS forwarders that you may have (google it!)
    5. Browse a site – is the firewall open?

That’s it for the guests!
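
If you would rather script the guest IP settings than click through the dialogs, the old WMI methods work even on older guest OSes; a hedged sketch to run inside a guest (addresses from the example above, adjust to yours):

    # Static IP, gateway and DNS on the first IP-enabled adapter in the guest
    $nic = Get-WmiObject win32_networkadapterconfiguration -Filter 'ipenabled = "true"' | Select-Object -First 1
    $nic.EnableStatic("192.168.137.20", "255.255.255.0")
    $nic.SetGateways("192.168.137.1")               # the ICS host
    $nic.SetDNSServerSearchOrder("192.168.137.10")  # the guest AD/DNS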

RDP Access

Finally you’ll want to access the guests through RDP so stuff like copy/paste and fullscreen works. You can of course use whatever RDP manager you prefer, here I’ve just used the standard one.

Make a connection for each guest and save it somewhere (the Desktop?).

Run the command “mstsc” and setup the options you like and save the Connection. In particular enable “Clipboard” and “Drives” in the “Local Resources” tab. That will enable you to copy/paste both text and files between the guest and host.

In my case:

Note that I have added “sp2010” to my hosts file as 192.168.137.10. You can also use just the IP.

Part 2: Making it Shine – Automate it

Now for the fun part. I want it all to just work.

No fiddling around with sharing this or that network adapter, just make the one I’m using for the host the one that is shared.

Using PowerShell and the system task scheduler it’s doable.

To install:

  1. Download and unzip this file, e.g. to C:\AutoNetworkSharing
  2. Right click the dll and ps1 file, choose properties and “unblock” them
  3. Test 1:
    1. Start an administrative shell
    2. Execute "powershell.exe -File c:\AutoNetworkSharing\AutoSwitch.ps1"
    3. Watch the printed output. It should share your active network connection with your HyperV network adapter and write what it does in the process
    4. Run it again and make sure that this time it detects that it need not change the sharing properties
  4. Go to your Task Scheduler
  5. I like to create a new folder, e.g. ”MyCustomTasks”, to keep track of what I’m doing:

  6. Click “Import Task”
  7. Choose the “SwitchNetworkSharingScheduledTask.xml” file (in the zip)
  8. On the “Create Task” dialog, go to the Actions tab and correct the file name parameter to match your location:
  9. Have a look at the “Triggers” tab, here is the magic that ensures that this task is fired whenever your network connection changes
  10. Test 2:
    1. Enable and disable your network adapters to see the “Sharing” label move around as you do so. It should take no more than 10 seconds to see the sharing property change
    2. If it doesn’t work, have a look in the History tab on the scheduled task to see whether the task starts or if it’s the actual script that fails

One caveat: The HyperV network will jump briefly when the network changes, so your RDP session will briefly freeze.

The actual script is mostly about deep diving into WMI and looks like this (Note: do not copy/paste this from the blog; quotes and dashes easily get mangled here, take it from the zip file):

Import-Module ( join-path (Split-Path -Parent $MyInvocation.MyCommand.Path) "IcsManagerLibrary.dll" )

#Fetch the active adapters and the Hyper-V NIC from WMI
$activeAdapters = Get-WmiObject win32_networkadapterconfiguration -Filter 'ipenabled = "true"'

#CHANGE THIS LINE IF you need to handle multiple HyperV nics to your own rules
$hypervNics = $activeAdapters |? { $_.ServiceName -eq 'VMSMP' }

if( @($hypervNics).Count -ne 1 ){
    Write-Error "Cannot auto switch, found $(@($hypervNics).Count) hyper-V NICs"
    return
}

#Get the adapters with network connectivity (i.e. a default route)
$activeNetworks = Get-WmiObject -Class Win32_IP4RouteTable -Filter "Destination='0.0.0.0'" |% { Get-WmiObject win32_networkadapterconfiguration -Filter "ipenabled='true' and InterfaceIndex=$($_.InterfaceIndex)" }
if( @($activeNetworks).Count -ne 1){
    Write-Warning "Multiple active NICs found, picking one at random"
    $activeNetworks = $activeNetworks[0]
}

#Get the "real" NIC name, required by the cmdlet
$hyperVTypedNIC = Get-WmiObject -Class Win32_NetworkAdapter -Filter "DeviceID = $($hypervNics.Index)"
$activeTypedNIC = Get-WmiObject -Class Win32_NetworkAdapter -Filter "DeviceID = $($activeNetworks.Index)"

#Retrieve what is shared by ICS now (if anything)
$sharedPublicNetwork = Get-WmiObject -Class HNet_ConnectionProperties -Namespace "ROOT\microsoft\homenet" -Filter 'IsIcsPublic = "true"'
$sharedPrivateNetwork = Get-WmiObject -Class HNet_ConnectionProperties -Namespace "ROOT\microsoft\homenet" -Filter 'IsIcsPrivate = "true"'

#Do a test to see if the selected NIC to be shared is already shared (then do nothing)
if( $sharedPublicNetwork -and $sharedPublicNetwork.Connection -match $activeTypedNIC.GUID -and $sharedPrivateNetwork -and $sharedPrivateNetwork.Connection -match $hyperVTypedNIC.GUID){

    Write-Host "Already shared '$($activeTypedNIC.NetConnectionID)'. Skipping."
    return
}
else{
    Write-Host "Sharing '$($activeTypedNIC.NetConnectionID)' to '$($hyperVTypedNIC.NetConnectionID)'"
    Enable-ics -Shared_connection $activeTypedNIC.NetConnectionID -Home_connection $hyperVTypedNIC.NetConnectionID -force $true
    Write-Host "Done"
}

It assumes that there is only one enabled Hyper-V network adapter (if not, change the marked line) and it will pick an active internet connection at random to share if there is more than one.

Note: I’m dependent on the “ICSManager” module developed by Utapyngo for handling the ICS part.

Part 3: Going advanced

Well sometimes you need a bit more :-)

Here are the few things that I found myself needing.

Multiple subnets

If you have more than one Hyper-V team, e.g. multiple AD controllers that should be kept separate, you need to split the network.

As the ICS service will reset your host’s Hyper-V adapter to an IP of 192.168.137.1 and reset all subnet mask and DNS settings, you shouldn’t mess with them.

On the other hand it is quite possible (and easy) to separate your teams by specifying smaller subnet masks in the guests, i.e. assign the subnet mask 255.255.255.128 to all guests and assign

  • AD in team 1 an IP of 192.168.137.10
  • AD in team 2 an IP of 192.168.137.130

and give subsequent guests IPs within their respective ranges. Keep the default gateway as 192.168.137.1 for all guests.

Multiple HyperV networks and bridges

In the case of multiple Hyper-V adapters you will need to modify the PowerShell script to select the proper Hyper-V adapter to share with. Simply give your desired Hyper-V adapter an easily recognizable name on the host.

Perhaps just hardcode the Hyper-V adapter name in the call to the “Enable-ICS” cmdlet; see the sketch below.
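
For example, if you give the Hyper-V adapter you want to share to a recognizable connection name (say “HyperV-Internal” – an assumption, use your own), the marked line in the script could be replaced with something along these lines:

    # Pick the Hyper-V NIC by its connection name instead of by service name
    $namedAdapter = Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID = 'HyperV-Internal'"
    $hypervNics = $activeAdapters |? { $_.Index -eq $namedAdapter.Index }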

Happy networking :-)

Use CSOM from PowerShell!!!


The SharePoint 2010/2013 Client Object Model (CSOM) offers an alternative way to do basic read/write (CRUD) operations in SharePoint. I find that it is superior to the “normal” server object model/Cmdlets for

  1. Speed
  2. Memory
  3. Execution requirements – it does not need to run on your SharePoint production server and does not need Shell access privileges. Essentially you can execute these kinds of scripts with normal SharePoint privileges instead of sysadmin privileges

And, to be fair, inferior in the types of operations you can perform. It is essentially limited to CRUD operations, especially in the SharePoint 2010 CSOM.

This post is about how to use it in PowerShell and a comparison of the performance.

How to use CSOM

First, there is a slight problem in PowerShell (v2 and v3); it cannot easily call generics such as the ClientContext.Load method. It simply cannot figure out which overloaded method to call – therefore we have to help it a bit.

The following is the function I use to include the CSOM dependencies in my scripts. It simply loads the two Client dlls and creates a new version of the ClientContext class that doesn’t use the offending “Load<T>(T clientObject)” method.

I nicked most of this from here, but added the ability to load the client assemblies from local dir (and fall back to GAC) – very useful if you are not running on a SharePoint server.

$myScriptPath = (Split-Path -Parent $MyInvocation.MyCommand.Path)

function AddCSOM(){

     #Load SharePoint client dlls
     $a = [System.Reflection.Assembly]::LoadFile("$myScriptPath\Microsoft.SharePoint.Client.dll")
     $ar = [System.Reflection.Assembly]::LoadFile("$myScriptPath\Microsoft.SharePoint.Client.Runtime.dll")

     if( !$a ){
         $a = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Client")
     }
     if( !$ar ){
         $ar = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Client.Runtime")
     }

     if( !$a -or !$ar ){
         throw "Could not load Microsoft.SharePoint.Client.dll or Microsoft.SharePoint.Client.Runtime.dll"
     }

     #Add overload to the client context.
     #Define new load method without type argument
     $csharp = "
      using Microsoft.SharePoint.Client;
      namespace SharepointClient
      {
          public class PSClientContext: ClientContext
          {
              public PSClientContext(string siteUrl)
                  : base(siteUrl)
              {
              }
              // need a plain Load method here, the base method is a generic method
              // which isn't supported in PowerShell.
              public void Load(ClientObject objectToLoad)
              {
                  base.Load(objectToLoad);
              }
          }
      }"

     $assemblies = @( $a.FullName, $ar.FullName, "System.Core")
     #Add dynamic type to the PowerShell runspace
     Add-Type -TypeDefinition $csharp -ReferencedAssemblies $assemblies
}

And in order to fetch data from a list you would do:

AddCSOM

$context = New-Object SharepointClient.PSClientContext($siteUrl)

#Hardcoded list name
$list = $context.Web.Lists.GetByTitle("Documents")

#ask for plenty of documents, and the fields needed
$query = [Microsoft.SharePoint.Client.CamlQuery]::CreateAllItemsQuery(10000, 'UniqueId','ID','Created','Modified','FileLeafRef','Title') 
$items = $list.GetItems( $query )

$context.Load($list)
$context.Load($items)
#execute query
$context.ExecuteQuery()


$items |% {
          Write-host "Url: $($_["FileRef"]), title: $($_["FileLeafRef"]) "
}

It doesn’t get much easier than that (when you have the AddCSOM function that is). It is a few more lines of code than you would need with the server OM (load and execute query) but not by much.

The above code works with both 2010 and 2013 CSOM.
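
Writes work the same way; a small sketch (not from the original post) that updates the title of the first item returned above and pushes the change back:

    #Update a single item and commit the change to SharePoint
    $item = $items[0]
    $item["Title"] = "Updated from PowerShell CSOM"
    $item.Update()
    $context.ExecuteQuery()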

Performance Measurement

To check the efficiency of the Client object model compared to the traditional server model I created two scripts and measured the runtime and memory consumption:

Client OM:

param 
(
[string]$listName = $(throw "Provide list name"),
[string] $siteUrl = $(throw "Provide site url")
)

AddCSOM

[System.GC]::Collect()
$membefore = (get-process -id $pid).ws

$duration = Measure-Command {

          $context = New-Object SharepointClient.PSClientContext($siteUrl)
         
          #Hardcoded list name
          $list = $context.Web.Lists.GetByTitle($listName)
         
          #ask for plenty of documents, and the fields needed
          $query = [Microsoft.SharePoint.Client.CamlQuery]::CreateAllItemsQuery(10000, 'UniqueId','ID','Created','Modified','FileLeafRef','Title') 
          $items = $list.GetItems( $query )
         
          $context.Load($list)
          $context.Load($items)
          #execute query
          $context.ExecuteQuery()
         
         
          $items |% {
                  #retrieve some properties (but do not spend the time to print them)
                  $t = "Url: $($_["FileRef"]), title: $($_["FileLeafRef"]) "
          }
         
}

[System.GC]::Collect()
$memafter =  (get-process -id $pid).ws

Write-Host "Items iterated: $($items.count)"
Write-Host "Total duration: $($duration.TotalSeconds), total memory consumption: $(($memafter-$membefore)/(1024*1024)) MB"

Server OM:

param 
(
[string]$listName = $(throw "Provide list name"),
[string] $siteUrl = $(throw "Provide site url")
)

Add-PsSnapin Microsoft.SharePoint.PowerShell -ea SilentlyContinue

[System.GC]::Collect()
$membefore =  (get-process -id $pid).ws

$duration = Measure-Command {
          $w = Get-SPWeb $siteUrl 
          $list = $w.Lists[$listName]

          $items = $list.GetItems()
          $items |% {
                  #retrieve some properties (but do not spend the time to print them)
                  $t = "url: $($_.Url), title: $($_.Title)"
          }
}

[System.GC]::Collect()
$memafter =  (get-process -id $pid).ws

Write-Host "Items iterated: $($items.count)"
Write-Host "Total duration: $($duration.TotalSeconds), total memory consumption: $(($memafter-$membefore)/(1024*1024)) MB"

And executed them against a document library of 500 and 1500 elements (4 measurements at each data point).

The Results

Are very clear:

[OMChart: runtime and memory consumption, CSOM vs. server OM]

As you can see it is MUCH more efficient to rely on CSOM and it scales a lot better. The server OM retrieves a huge number of additional properties, but it has the benefit of going directly to the database instead of through the web server. Curiously the CSOM version offers much more consistent performance, whereas the server OM varies quite a bit.

In addition you get around the requirement of Shell access for the PowerShell account and the need for server-side execution. That might be convenient for many.

Conclusions

The only downside I can see with the CSOM approach is that it is unfamiliar to most and it does require a few more lines of code – if your specific need is covered by the API, of course.

It’s faster, more portable, less memory intensive and simple to use.

Granted, there are lots of missing APIs (especially in the 2010 edition) but basic data manipulation needs are likely covered. That is quite a bit after all.

So go for it :-)

Quick Guide to PowerShell Remoting (for SharePoint stuff…)


Lately I have been doing quite a bit of PowerShell remoting (mostly in a SharePoint context) and while it is surprisingly easy and useful there are a few hoops I’ll detail here.

I have been a fan of PowerShell ever since TechEd ’06 in Barcelona where some young chap eloquently introduced PowerShell. Everyone in the audience understood the potential.

It is now so pervasive that I don’t have to waste time arguing its role and prevalence among scripting languages – and the remoting part is once again head and shoulders above the alternatives (even SSH on the unixes).

For remoting in a SharePoint context there are 3, perhaps 4, steps; for other purposes it wouldn’t hurt to go through the same.

Note: Everything in this post must be executed within an administrative PowerShell console.

Note: There are about a gazillion settings and variations that I’ve skipped; this is how I normally do it.

Step 1: Enable PS Remoting

This one is dead simple.

On the clients (the servers that you are remoting to) execute

Enable-PSRemoting

In a PowerShell shell (in admin mode) – just press return for every confirmation prompt; the defaults are sensible (or add “-force”).

Step 2: Set Sensible Memory Limits

By default each remoting session will have a cap of 150 MB of memory assigned to it. After that you’ll get a more or less random exception from the remoting session; it may be really hard to figure out what went wrong.

When you work with SharePoint you can easily spend 150 MB if you iterate over a number of SPWebs or SPSites. It may not even be possible to limit your consumption by disposing explicitly (use Start-SPAssignment for that) if you actually need to iterate it all. (Side note: when the session ends, so do all allocated objects – whether you Dispose them or not doesn’t matter.)

Let’s face it: these are often expensive scripts with a lot of work to do.

The fix is simple (and in my opinion safe). On the clients execute:

Set-item wsman:\localhost\shell\MaxMemoryPerShellMB 1024

Which will set the limit to 1 GB.

Long explanation here.
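
You can verify the current value afterwards with:

    Get-Item wsman:\localhost\shell\MaxMemoryPerShellMB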

Step 3: Disable UAC

If you need to do any SharePoint or IIS administration (or a hundred other tasks) you need your script to run with Administrative rights.

The PowerShell remote process does not request administrative rights from windows – it’ll just use whatever is assigned to it.

Before you accuse me of disabling all security, take a good long read of this article from Microsoft; they actually do recommend disabling UAC on servers, provided that you do not use your servers for non-admin stuff (that is, NO Internet browsing!).

To test whether or not UAC is enabled start a PowerShell console (simply left click the powershell icon). Does it say “Administrator:…” in the title? If yes then UAC is disabled and you are good to go.

There are at least two places to fiddle with the UAC:

  1. You can go to Control Panel / User Account Control Settings to change when you see the notification boxes. This will NOT help you here – it only controls the notification settings, not the feature itself
  2. You need to go to the MMC / Local Policy Editor to disable it:
    1. Run “MMC”
    2. Choose “Add/remove snap-in”
    3. Choose “Group Policy Object”
    4. Choose “Local Computer”
    5. Follow the picture below and set the “Run all administrators in Admin Approval mode” to disabled (Note: “Admin Approval Mode” is UAC)

      Disable UAC for Admins

      Disable UAC

    6. Reboot the server
    7. TEST that UAC is disabled by starting a PowerShell and check that the title reads “Administrator:…”

You should take a step back here and ask your local sysadmin whether this is a policy already enforced centrally, or whether it should be. He/she can create a new policy that targets specific servers to disable UAC.
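
If you prefer checking from a script rather than eyeballing the console title, Admin Approval Mode is reflected in the EnableLUA policy value; a quick sketch:

    # 0 = UAC (Admin Approval Mode) disabled, 1 = enabled
    (Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System").EnableLUA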

Usage and Test

Finally try it!

To enter an interactive shell, execute (from the controller):

    Enter-PSSession -ComputerName $clientComputerName

If all goes well you’ll see that the prompt changes a bit and every command is now executed remotely. Do a “dir” and check that you see the remote server’s files and not the local ones.

Type “exit” to return to the controller.

If you are going across domains you’ll receive an error; execute step 4 (below) in that case.

To execute a script with arguments (and store the pipeline output in $output):

$output = Invoke-Command -ComputerName $clientComputerName -FilePath ".\ScriptToExecute.ps1" -ArgumentList ($arg1, $arg2, $arg3)

Note that the script file (“ScriptToExecute.ps1”) is local to the controller and WinRM will handle the mundane task of transferring it to the client. The script is executed remotely, and therefore you cannot reference other scripts from it as they are not transferred along with it.

To execute a script block:

$output = Invoke-Command -ComputerName $clientComputerName -scriptblock { get-childitem "C:\" }

And you can of course pass arguments to your scriptblock and combine it in a hundred different ways.
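
For instance, arguments are passed to the scriptblock with -ArgumentList and picked up in a param() block; a small sketch (the path is just a placeholder):

    # Pass a local variable into the remote scriptblock
    $path = "C:\inetpub"
    $output = Invoke-Command -ComputerName $clientComputerName -ScriptBlock { param($p) Get-ChildItem $p } -ArgumentList $path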

Warning: Remember the context

The remote sessions are new PowerShell sessions; whatever information/variables you need you must either pass as arguments or embed within a scriptblock.

You can pass simple serializable objects back to the controller on the pipeline, but it will not work to pass COM/WMI/SharePoint objects back.

Step 4: (Optional) Going Cross Domain?

By default PowerShell remoting will connect to the client computer and automatically pass the credentials of the current user to the client computer. That is normally what you want and ensures that you need no credentials prompt.

However, that only works for servers within the same domain as the user. If you are crossing domain boundaries – AD trust or not – you need to do something more (before jumping through hoops, do test it first to make sure that this is required for you).

Again, there are many options but the one with the least configuration hassles is:

  1. Add the client servers to the controller server’s list of trusted hosts:

    set-item wsman:localhost\client\trustedhosts -value $serverList -force

    where $serverList is a comma separated string of computernames (I use FQDN names).
  2. Pass explicit credentials to the remoting commands

    $c = Get-Credential

    Enter-PSSession -ComputerName $clientComputerName -Credential $c

    … and it should work. There are a million details regarding forests, firewalls, etc. that I’ll not go into here.

(Other options are Kerberos, SSL channels, …)

Spring Cleaning Your Dev Box


After some time your dev box ends up looking like a well-used tool shed complete with unused tools and cobwebs in the corners.

While it’s quick and easy to get rid of half the icons on your desktop and delete droves of temporary working files, I always end up returning to a few well-used scripts to get my much-needed disk space back as well as a cleaner and faster box.

Often the trigger is exhausted disk space.

These are the scripts I always end up using.

Delete old site collections

If you are making site definitions (or web templates) you’ll likely end up with tons of small test site collections over time. I often use scripts to create them, named after the current time.

To clean that all up I use the PS script (run in the SharePoint Administrative Console):

    get-spsite http://* -limit all |? { $_.Url -Match "http://.*/" -and $_.LastContentModifiedDate -lt [DateTime]::Today.AddMonths(-2)} | remove-spsite

In human terms: “Delete every site collection that is not a root site collection and that have not been modified within the last two months”. Phew.

Do NOT run this in production.
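
If you want a dry run first, the same filter can just list the candidates instead of deleting them; a hedged sketch:

    get-spsite http://* -limit all |? { $_.Url -Match "http://.*/" -and $_.LastContentModifiedDate -lt [DateTime]::Today.AddMonths(-2)} | select Url, LastContentModifiedDate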

Set Simple SQL Recovery Mode

Next I ensure that all my local databases run in simple recovery mode, i.e. to avoid huge transaction logs that need to be truncated once in a while.

I nicked this script somewhere in google (likely here) (updated Jun 27: Fixed – my angle brackets had been eaten):

USE MASTER
declare
	@isql varchar(2000),
	@dbname varchar(64)
	
	declare c1 cursor for select name from master..sysdatabases where name not in ('master','model','msdb','tempdb')
	open c1
	fetch next from c1 into @dbname
	While @@fetch_status <> -1
		begin
		select @isql = 'ALTER DATABASE [@dbname] SET AUTO_CLOSE OFF'
		select @isql = replace(@isql,'@dbname',@dbname)
		print @isql
		exec(@isql)
		select @isql = 'ALTER DATABASE [@dbname] SET RECOVERY SIMPLE'
		select @isql = replace(@isql,'@dbname',@dbname)
		print @isql
		exec(@isql)
		select @isql='USE [@dbname] checkpoint'
		select @isql = replace(@isql,'@dbname',@dbname)
		print @isql
		exec(@isql)
		
		fetch next from c1 into @dbname
		end
	close c1
	deallocate c1

Run it within the SQL Server Management Studio.

Note that it is likely that the script will report a few minor errors if some DBs are detached/offline. Never mind.

Shrink DBs

Finally, save some much-needed space by shrinking the DB files. After site collection deletions and recovery mode changes there is likely a lot of space to be freed within the DB files.

This script will try to shrink all the DB files (I think I got it from here):

IF OBJECT_ID('tempdb..#CommandQueue') IS NOT NULL DROP TABLE #CommandQueue

CREATE TABLE #CommandQueue
(
    ID INT IDENTITY ( 1, 1 )
    , SqlStatement VARCHAR(1000)
)

INSERT INTO    #CommandQueue
(
    SqlStatement
)
SELECT
    'USE [' + A.name + '] DBCC SHRINKFILE (N''' + B.name + ''' , 1)'
FROM
    sys.databases A
    INNER JOIN sys.master_files B
    ON A.database_id = B.database_id
WHERE
    A.name NOT IN ( 'master', 'model', 'msdb', 'tempdb' )

DECLARE @id INT

select * from #CommandQueue

SELECT @id = MIN(ID)
FROM #CommandQueue

WHILE @id IS NOT NULL
BEGIN
    DECLARE @sqlStatement VARCHAR(1000)
    
    SELECT
        @sqlStatement = SqlStatement
    FROM
        #CommandQueue
    WHERE
        ID = @id

    PRINT 'Executing ''' + @sqlStatement + '''...'

    EXEC (@sqlStatement)

    DELETE FROM #CommandQueue
    WHERE ID = @id

    SELECT @id = MIN(ID)
    FROM #CommandQueue
END

Again expect some errors, inspect and accept them ;-)

Expand the Disk?

If the three steps above didn’t free enough space for you, the solution often is to just expand the VHDs on your virtual machines.

It’s a fairly easy process in both VMware and Hyper-V: it only requires you to turn off the VM, remove any snapshots, and expand the disk using the wizard. This only expands the VHD; your partitions will not grow, so you need to do that next.
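
On Hyper-V (Windows 8) the wizard part can also be scripted; a minimal sketch, assuming the Hyper-V module is available, the VM is off and snapshots are removed (the path and size are made up):

    # Grow the virtual disk to 127 GB; the guest partition still needs to be expanded afterwards
    Resize-VHD -Path "D:\VMs\SP2013\SP2013.vhdx" -SizeBytes 127GB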

You can use the Disk Management tool for it – however I find it cumbersome, especially if it is the system disk you’re expanding. I prefer GParted, which is a very nice Linux partition editor. It is a downloadable ISO that is a breeze to boot into and expand the partition, whether it is the system disk or not.

One note: In a typical Linux way you are asked all sorts of questions at boot; just hit return at every one of them. Who cares about the keyboard layout for a GUI program with big buttons anyway?

It looks something like this:

(Note: Before using this program make sure that you close down the VM nicely from the guest)

Easy.

Quick tip: Handy Scripts for Local Hyper-V Management


Here is a quick post with a small script that I find incredibly handy for managing my local Hyper-V machines (on Windows 8).

It simply ensures that a given set of VMs is running at any given time – attach it to a shortcut on the desktop and it saves me 3 minutes every day; I find it useful.

Why? Quite often I need to switch from one set of VMs to another for different tasks (the VMware term is “teams”), e.g. switching between SP2010 and SP2013 development. Sometimes I just turn them all off if I need the resources for something else.

Normally you just go into Hyper-V manager and start/stop the relevant VMs. Automating that was quite simple and painless – the download is here.

  1. I have one PowerShell script (SetRunningVMs.ps1) that handles the VM management and then a couple of batch files for executing that script with proper parameters. The script is:
    <#
    .SYNOPSIS
    This is a simple Powershell script that adjust what VMs are running on the local Hyper-V host.
    
    .DESCRIPTION
    This script will start/resume the requested VMs and suspend all other VMs to give you maximum power and make it easy to switch from one
    development task to another, i.e. switch teams.
    
    .EXAMPLE
    Make sure that only the dev1 and ad01 machines are running:
    ./SetRunningVMs.ps1 'dev1' 'ad01'
    
    Stop all VMs (no arguments)
    ./SetRunningVMs.ps1 
    
    .NOTES
    Requires admin rights to run. Start using admin shell or use one of the provided batch files.
    Pauses on error.
    
    .LINK
    
    http://soerennielsen.wordpress.com
    
    #>
    
    param( [Parameter(ValueFromRemainingArguments=$true)][string[]] $allowedVMs = @() )
    
    try{
        get-vm |? { $_.State -eq "Running" -and $allowedVMs -notcontains $_.Name } |% { Write "Saving $($_.Name)"; Save-VM $_.Name }
    
        get-vm |? { $_.State -ne "Running" -and $allowedVMs -contains $_.Name } |% { Write "Starting $($_.Name)"; Start-VM $_.Name }
    
        write "SetRunningVMs succesfully done"
    }
    catch{
        Write-Error "Oops, error:" $_
        pause
    }

    I assume that the PowerShell Hyper-V module is loaded; it has always been the case in my tests.

  2. I have a number of batch files for easily executing the script, one for turning off (suspend) all VMs, one for a SP2010 team and one for the SP2013 team. It’s a one-liner bat file that will work with both PowerShell 2 and 3, with or without UAC.

    To Start my SP2010 team (StartSP2010Team.bat):

    powershell -noprofile -command "&{ start-process powershell -ArgumentList '-noprofile -file %~dp0SetRunningVMs.ps1 \"AD01\" \"Dev1\"' -verb RunAs}"

    (where you replace the VM names above with your own)

    To start my SP2013 team (StartSP2013Team.bat)

    powershell -noprofile -command "&{ start-process powershell -ArgumentList '-noprofile -file %~dp0SetRunningVMs.ps1 \"AD02\" \"SP2013\"' -verb RunAs}"

    To stop all VMs

    powershell -noprofile -command "&{ start-process powershell -ArgumentList '-noprofile -file %~dp0\SetRunningVMs.ps1' -verb RunAs}"

If you have UAC enabled (as I do) you will be prompted, otherwise it will just suspend/resume the relevant VMs.

It took about two hours to write the scripts, where the hardest part was getting the batch files properly in shape with escape characters and UAC.

You gotta love PowerShell ;-)

Find/Remove Obsolete Resource Labels


This is part 3 of 4 in a series on how to improve the way we usually work with resource (resx) files. At least the way my team and I work with them.

I generally like to – and do – use resource files for all string constants that are shown to end-users, however I do feel that it is needlessly cumbersome, therefore these posts:

  1. A Good Way to Handle Multi Language Resource Files
  2. Verify that your Resource Labels Exists
  3. Find Obsolete Resource Labels (this one)
  4. Supercharge your (Resource) Efficiency with Macros

Generally these issues are generic to .NET and not specific to SharePoint, though that is where I’m spending my time writing this.

So – Near the End of the Project – are those Resource Entries Still in Use?

The issue at hand is that you have hundreds of source files of various flavors and they are sprinkled with references to a number of resource files. When code is refactored or just deleted what happens to old resource labels? Likely nothing at all.

Are you happy with a ton of useless resource entries no longer in active use? What if you had to translate it to a couple of languages?

Quite obviously this is no biggie code/quality wise, but still…

The answer is that you run the PowerShell script below, that’ll check – and optionally fix – it ;-)

The Script

I made a small script to check and remove the excess resource entries to slim down those resource files a bit.

The script will, given a starting location (i.e. the root folder for your solution)

  1. Go through every code file (to be safe every file, except a list of binary extensions) and look for resource labels of the form “$Resources:filename,label_key”
  2. Search recursively for resx files
  3. For every one of those resx files it will look through the resource labels in use and flag those that it cannot find
  4. (Optionally) Do a “safemode” check where every file is searched for the resource label, i.e. necessary if you are using multiple/other resource lookup methods than the $Resources moniker
  5. (Optionally) If you choose, you may remove them automatically, but do make a dry run first to sanity check that you got the paths right and that you have all the source files to be searched

Usage:

    PS> & VerifyResxLabels.ps1 "path to solution dir" [-remove] [-safemode]

(Tip: Download the script to somewhere, write an ampersand (&) and then drag the ps1 file into the PowerShell window and then drag the solution folder to the window.)

You’ll definitely want to pipe the output to a file.

Limitations

There are obviously some limitations

  • It will not: Check out the resx files from source control (but it will show the file error in the output)
  • It will not: Respect commented out code – it’s simple pattern matching so commented out code will be treated as actual code (hardly an issue)
  • Safemode is very slow of necessity and will likely find false positives, i.e. it will play it safe and keep entries that exist in some files, even though the same label may be used in a completely different context
    • It’s an O(n*m) algorithm (number of files times number of labels) – my test with 1000 unique labels, 28 resx files and 2400 files takes a night

Download the script here

Verify that your Resource Labels Exists


This is part 2 of 4 in a series on how to improve the way we usually work with resource (resx) files. At least the way my team and I work with them.

I generally like to – and do – use resource files for all string constants that are shown to end-users, however I do feel that it is needlessly cumbersome, therefore these posts:

  1. A Good Way to Handle Multi Language Resource Files
  2. Verify that your Resource Labels Exists (this one)
  3. Find Obsolete Resource Labels
  4. Supercharge your Resource Efficiency with Macros

Generally these issues are generic to .NET and not specific to SharePoint, though that is where I’m spending my time writing this.

So, are you sure that your Resource Labels Exists?

The issue at hand is that you have hundreds of source files of various flavors and they are sprinkled with references to a number of resource files. Usually only one for each project/wsp though.

So how can you be sure that you don’t use one in your code that isn’t defined in your resource file?

As missing resource labels are just output verbatim to the end user/application, this can be a major issue.

The answer is that you run the PowerShell script below, that’ll check it ;-)

The Script

I made a small script to check for resource labels after a major code rewrite that had me change literally hundreds of labels.

The script will, given a starting location (i.e. the root folder for your solution)

  1. Search recursively for resx files and store their locations
  2. Go through every code file (.xml, .cs, .vb, .as*x, .webpart, .dwp, .feature) and look for resource labels of the form “$Resources:filename,label_key”
  3. For every one of those resource keys it’ll open the corresponding resx file and check that the key is present and write a warning to the console if not
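
The core of it is plain pattern matching; a rough sketch of the idea (not the downloadable script – $solutionDir is a placeholder, and the real script scopes each key to the right resx file):

    # Collect all $Resources:file,key references and test the keys against the resx <data> names
    $usedKeys = Get-ChildItem $solutionDir -Recurse -Include *.xml,*.cs,*.vb,*.ascx,*.aspx,*.webpart |
        Select-String -Pattern '\$Resources:[\w\.]+,(\w+)' -AllMatches |
        % { $_.Matches } |% { $_.Groups[1].Value } | Sort-Object -Unique
    $resxKeys = Get-ChildItem $solutionDir -Recurse -Filter *.resx |
        % { ([xml](Get-Content $_.FullName)).root.data } |% { $_.name } | Sort-Object -Unique
    $usedKeys |? { $resxKeys -notcontains $_ } |% { Write-Warning "Could not find label $_" }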

Obviously this takes a minute or two – in my current project with 8500 files (including everything) it takes about 1 minute to complete (a VM on a powerful laptop), which is completely acceptable as you don’t really need to run it that often.

Enterprising people may want to add it to the buildserver’s list of tests. I haven’t found the need or the time yet ;-)

Usage:

    PS> & VerifyResourceLabelsExists.ps1 "path to solution dir"

(Tip: Download the script to somewhere, write an ampersand (&) and then drag the ps1 file into the PowerShell window and then drag the solution folder to the window.)

Sample output:

Searching for resx files
Searching for labels
Parsing resx files
Checking labels
Warning: Could not find $Resources:Delegate.XXX.XXX,ContentType_XXX_Description in file C:\Dev\TFS\xxxx\Elements.xml
Warning: Could not find $Resources:Delegate.XXX.XXX,ListDefinition_XXX_Heading in file C:\Dev\TFS\xxxx\Schema.xml
...
 

You’ll likely want to pipe the results to a file and then run through your source files to add the missing labels (see part 4, when I finish it).

What it does not do

There are obviously some limitations. It will not

  • Add the missing labels for you
  • It simply looks at files. If you have files that are excluded from your project files, they will still be included
  • It simply does pattern matching – code commented out is still included by the script
  • It always works off the default non-language-specific resx file – the task of ensuring that all labels are defined in all languages is a fairly simple XML comparison for which tools exist
  • Handle default resource files, i.e. the case where you define a default resource file in your feature.xml files and then use the “$Resources:label_key” shorthand with no file name

Download the script here

Script to Import/Export Metadata Termstore


Recently I’ve been using the Managed Metadata Store in SharePoint 2010 and been amazed by the lack of proper import/export functionality.

It feels like a blast from the past to only be able to import a CSV file… CSV?!? What happened to proper XML? What happened to Export? What happened to being able to transfer (meta)data between farms (like test and production), since the builtin Import insists on creating new TermSets instead of updating existing ones (and yes, your managed-metadata-linked site columns do in fact store a strong reference, not just a name, so you’ll lose the link).

I couldn’t find any existing Powershell commandlets to the rescue either.

I couldn’t readily bing ;-) any usable scripts for this.

What I did

So I built a powershell script to take a CSV file and import it into the Term Store and merge it with any existing term store already there.

CSV?!?? Yeah…

The point is that you can still use the CSV file you likely already hold. You can use the TermSetImporter to export CSV files from your existing environment.

If you are starting Greenfield, then I recommend to use excel with some macros to create the Term Sets (then your users can create them instead of you) or you might just let your users loose in the term store manager.

How to use

First download my small script and the sample excel and CSV files.

Second, fire up powershell (on a SharePoint server), write:

. ./MergeTermSets.ps1 csvfile groupname urlForASharePointSite

Do remember the “dot space” at the start of the line. The second “./” is just the path for the ps1 file in this case.

The urlForASharePointSite is optional and will default to http://localhost:2010 which likely corresponds to a valid SharePoint Central Admin site on 50% of all SharePoint installations. Watch the output log. If something goes wrong it’s likely that you should have a look in your CSV file for errors and/or whether or not the managed metadata store is connected properly.

Notes:

  • I’ve tried to do some tricks to handle encoding properly, and I also trim spaces, which otherwise really make the term store stumble (it trims spaces itself, and every subsequent comparison the script does will then fail).
  • Note that the LCID and the parent terms need to be set on every line in the Excel sheet. Don’t blame me, I didn’t make that part ;-)
  • Terms are only added not updated, i.e. I don’t try to keep stuff like descriptions in sync
  • No fancy stuff, no merging, no deletions, deprecation etc.

Hope it’s useful for you too.

SharePoint Advanced Large Scale Deployment Scripting – “Dev and QA gold-plating” (part 3 of 3)


I have been battling deployments for a while and finally decided to do something about it.

;-)

The challenge is to handle a dozen WSP packages on farms that host 20-50 web applications (or host header named site collections): complete the deployments in a timely manner and ensure that the required features are uniformly activated every time, while minimizing the human error element for the poor guy deploying through the night.

The deployment scripts require PowerShell v2 and are equally applicable to both SharePoint 2007 and 2010 – the need is roughly the same and the APIs are also largely unchanged. Some minor differences in service names are covered by the code.

To keep this post reasonable in length I’ve split the subject into three parts:

Part 1 is the main (powershell) deployment scripts primarily targeted at your test/QA/production deployments

Part 2 is the scripts and configuration for automatic large scale feature activations

Part 3 (this part) is about gold-plating the dev/test/QA deployment scenario where convenience to developers is prioritized while maintaining strict change control and records

Note: This took a long time to write and it will likely take you, dear reader, at least a few hours to implement in your environment.

If you have not done so already read the two other parts first.

The Challenge

When you have a number of developers working on the same service line you need to be able to keep strict control of your QA and test environment.
So either you have one guy as the gatekeeper or you automate as much as possible. I’m the latter type of guy, so I’ve developed a “one-click” deployment methodology where the focus is:
1. Simple and consistent deployment
2. Easy collection of solutions for the next production deployment – you should deploy the exact code that you tested, not a “fresh” build(!)

The Solution

Our build server is set up to produce date-labelled zip files with the wsp and a manual installer (exe) within a folder of the same name.
I’m sure yours is different; therefore the scripts work with a solution drop directory that accepts both zip files (that include a wsp file) and plain wsp files.
The folder structure is up to you – I’ll just search recursively for zip and wsp files.

The script files are:

03 DeployAndStoreDropDir.bat: The starting batch file that will execute the deployment process as a specific install user. That way the developers can log in with their personalized service accounts and use a specialized one for deployment purposes. You need to change the user name in this batch file. The first time you run it you need to enter the password for your install account; subsequent runs will remember it (“runas /savecred”).

03b DeployAndStoreDropDirAux.bat: The batch file doing half the work. It’ll archive old log files in “.\ArchivedDeploymentLogFiles” (created if not present) and execute “QADeploymentProcess.ps1”, which does the hard work. At the end it’ll look for an error log and write an alert that you should go look at it. Saves a lot of time – I don’t go looking in log files if I don’t get that message.

SharePointLib\QADeploymentProcess.ps1: The script that handles the unzipping and storage of the zip/wsp files and executes both the deployment and feature activation scripts. The goal is to have a simple drop dir for WSPs/zips and a single storage location, “SolutionsDeployed”, for the currently deployed code. The “SolutionsDeployed” folder is updated at every deployment, so that only the latest version of each wsp is kept. You do not need to drop all wsps in the drop dir every time; it does not simply delete the SolutionsDeployed folder at every deployment.

This is how the files are moved around for a couple of WSPs and a zip file:

In short:

  1. If you want to start it with a specific install user, just execute “03 DeployAndStoreDropDir.bat”
  2. If you want to start it in your own name, execute “03b DeployAndStoreDropDirAux.bat” and perhaps delete the other bat file.
  3. If you want to modify how it works, go to the “QADeploymentProcess.ps1” file.
  4. If you want to work with zip files they can have the name and internal folder structure that you like. The one important thing is that they should only contain one WSP file. It is pretty smart; at the next deployment your zip file may be named differently (e.g. with a date/time in the name) and it will still remove the old version from “SolutionsDeployed” and store your new version with the new folder/zip name.
  5. At the end of your build iteration SolutionsDeployed will contain the latest version of your deployed code (provided that you emptied it at the last production deployment)

Closing Notes

It is not a big challenge to connect the remaining dots and have your build server do the deployment for certain build types, giving you a “zero-click” deployment; however, I opted not to. It should be a conscious decision to deploy to your test and QA environments, not something equivalent to a daily checkin. Your mileage may vary.

I would generally provide these scripts to everyone in the team and encourage them to use them on their own dev box once in a while – it always pays to have your dev environment resemble the actual production environment, and this makes it feasible to keep the code (almost) in sync on all dev environments.

The challenge may be the complexity of the scripts and the effort to understand them – you shouldn’t rely too much on stuff you don’t understand ;-)

Feel free to adjust the script a bit (likely QADeploymentProcess.ps1) to suit your environment.

If you make something brilliant then let me know so I can include it in my “official” version. Please also leave credits in there so it’ll be possible for your successors to find the source and documentation.

Download

Grab the files here – part 1, 2 and 3 are all in there. I’ll update them a bit if I find any errors or make improvements.

Note: Updated Aug 29 2011, minor stuff.

Don’t forget to change the user name in “03 DeployAndStoreDropDir.bat” to your “install” account.

SharePoint Advanced Large Scale Deployment Scripting – “Features” (part 2 of 3)


I have been battling deployments for a while and finally decided to do something about it.

;-)

The challenge is to handle a dozen WSP packages on farms that host 20-50 web applications (or host header named site collections): complete the deployments in a timely manner and ensure that the required features are uniformly activated every time, while minimizing the human error element for the poor guy deploying through the night.

The deployment scripts require PowerShell v2 and are equally applicable to both SharePoint 2007 and 2010 – the need is roughly the same and the APIs are also largely unchanged. Some minor differences in service names are covered by the code.

To keep this post reasonable in length I’ve split the subject into three parts:

Part 1 is the main (powershell) deployment scripts primarily targeted at your test/QA/production deployments

Part 2 is the scripts and configuration for automatic large scale feature activations

Part 3 is about gold-plating the dev/test/QA deployment scenario where convenience to developers is prioritized while maintaining strict change control and records

Note: This took a long time to write and it will likely take you, dear reader, at least a few hours to implement in your environment.

If you have not done so already, read “Part 1” first – and excuse me for a bit of duplicated text.

It’s all about the Features

This part is about features and managing their (de-)activations.

What you can do with this is:

  • Maintain a set of templates (“FeatureSets”) that defines what features should be activated on a given type of site – the scripts then ensures that they are properly activated
  • Set actions for features to be either activate, deactivate or reactivate through the configuration file
    • It safely handles web.config updates so that the scripts do not enter a race condition with the SharePoint timer service
  • Selectively override the default behavior and reactivate certain features when you need to

In other words you can ensure that your features are properly activated across sites and farms.

A note on reactivation is in order here. Reactivation is (forced) deactivation followed by activation, which is particularly useful for features that update files in a document library, e.g. masterpages. Some features require it when they have been updated, but it is very rare that a feature always requires reactivation.

Therefore I usually do not specify any features to always be reactivated in the configuration file; however, if I know that we updated the masterpages I’ll update the batch file, or supply an additional one, where that feature is force-reactivated. The safe option of always reactivating is simply too slow if many files are deployed or lengthy feature receivers are involved.

The Configuration

The scripts work based on a configuration file that is shared between your dev, test, QA and production environments.

The procedure is:

  1. Identify all valid Sites/URLs in the local farm (i.e. always executed on one of the servers in the farm)
  2. For each site/URL go through all Feature sets and ensure activations, deactivations and reactivations required
    1. Keep track of reactivations so the same feature is only reactivated once at each scope, i.e. overlap between FeatureSets is allowed as long as they specify the same action
    2. Report inconsistencies as errors, but do continue with the next features so that you can have the complete overview of all errors at the end of the run

A sample configuration file is (note: I just picked some more or less random WSPs from codeplex):

<?xml version="1.0" ?>
  <Config>
    <Sites>
      <!-- note: urls for all environments can be added, the ones found in local farm will be utilized -->
      <!-- DEV -->
      <Site url="http://localhost/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="Intranet" />
      </Site>
      <Site url="http://localhost:8080/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="MySite" />
      </Site>
      <Site url="http://localhost:2010">
        <FeatureSet name="CA" />
      </Site>
      <!-- TEST -->
      <Site url="http://test.intranet/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="Intranet" />
      </Site>
      ...
      <!-- PROD -->
      <Site url="http://intranet/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="Intranet" />
      </Site>
      ...
    </Sites>

    <FeatureSets>
      <FeatureSet name="BrandingAndCommon">
        <Solution name="AccessCheckerWebPart.wsp" />
        <Feature nameOrId="AccessCheckerSiteSettings" action="activate" ignoreerror="false" />
        <Feature nameOrId="AccessCheckerWebPart" action="activate" ignoreerror="false" />
        <Solution name="Dicks.Reminders.wsp" />
        <Feature nameOrId="DicksRemindersSettings" action="activate" ignoreerror="false" />
        <Feature nameOrId="DicksRemindersTimerJob" action="activate" ignoreerror="false" />
        <!-- Todo - all sorts of masterpages, css, js, etc. -->
      </FeatureSet>
      <FeatureSet name="Intranet">
        <Solution name="ContentTypeHierarchy.wsp" />
        <Feature nameOrId="ContentTypeHierarchy" action="activate" ignoreerror="false" />
      </FeatureSet>
      <FeatureSet name="MySite">
      </FeatureSet>
      <FeatureSet name="CA">
      </FeatureSet>
    </FeatureSets>

    <Solutions>
      <!-- list of all solutions and whatever params are required for their deployment -->
      <Solution name="AccessCheckerWebPart.wsp" allcontenturls="false" centraladmin="false" upgrade="false" resettimerservice="false" />
      <Solution name="ContentTypeHierarchy.wsp" allcontenturls="false" centraladmin="false" upgrade="false" resettimerservice="false" />
      <Solution name="Dicks.Reminders.wsp" allcontenturls="true" centraladmin="false" upgrade="false" resettimerservice="true" />
    </Solutions>
  </Config>

Where the interesting bits are that Sites contain a number of named FeatureSets. The FeatureSets are gathered in their own section (for reusability) and define a number of solutions to be deployed and the features to be handled. It is good practice to list the solutions for the features that you are working on so you can be sure that the solution deployment matches your features. Do not spend time on noting all the SharePoint system features to be activated, as they are normally handled just fine by the site definitions.

Each feature activation line looks like:

<Feature nameOrId="AccessCheckerWebPart" action="activate" ignoreerror="false" />

Where nameOrId is either the guid (with or without braces, dashes etc.) or the name of the feature, i.e. the internal name in the WSP packages, not the display name. When possible I always opt for flexibility ;-)

The action attribute is either “activate”, “deactivate” or “reactivate”.

The ignoreerror attribute is simply a boolean true/false switch that determines how the error should be logged if the feature fails to be activated. It is quite useful for the occasional inconsistencies between environments, e.g. if you are removing a feature then it might be present in production but no longer in the test environment. The script continues with the next feature in case of errors regardless of this switch.

It is important to note that web-scoped features are not supported here; “only” farm, web app or site scope is allowed.

The features are activated in the order that they are present in the config file.

The Scripts

There are two sections here:

  1. A number of SharePoint PowerShell scripts that include each other as needed
  2. A few batch files that start it all. I generally prefer to have some (parameter less) batch files that executes the PowerShell scripts with appropriate parameters

I have gold-plated the scripts quite a bit and created a small script library for my SharePoint related PowerShell.

The PowerShell scripts are too long to write in the post, they can be downloaded here. The list is (\SharePointLib):

*(Part 1) DeploySharePointSoluitions.ps1: Start the WSP deployments. Parse command line arguments of config file name and directory of WSP solutions

*EnsureFeatureActivation.ps1: Start the feature activation scripts (need command line arguments)

*(Part 3) QADeploymentProcess.ps1: Deployment process for dev, test and QA environments (need command line arguments)

SharePointDeploymentScripts.ps1: Main script that defines a large number of methods for deployment and activations.

SharePointHelper.ps1: Fairly simple SharePoint helper methods to get farm information

*RestartSharePointServices.ps1: Restart all SharePoint services on all servers in the local farm

Services.ps1: Non-SharePoint specific methods for handling services, e.g. restarts

Logging.ps1: Generic logging method. Note that there are options for setting different log levels, output files, etc.

The ones with an asterisk are the ones that you are supposed to execute – the others just define methods for the “asterisk scripts”.

The batch files (same download) (root of zip download):

(Part 1) 01 Deploy Solutions.bat: Start the solution deployment. It will deploy all the WSP files dropped in “\SolutionsDropDir\” that are also specified in the “\DeploymentConfig.xml”:

02 Activate Features.bat: Works with the “\DeploymentConfig.xml” file and ensures that all features are activated.

In other words you’ll execute “02 Activate Features.bat” to ensure that all features are activated in your farm.

The important line in 02 Activate Features.bat file is

powershell.exe -File "SharePointLib\EnsureFeatureActivation.ps1" "%config%" "" >>ActivateFeatures.log

And if you need to reactivate your branding features (masterpages) you can just copy the batch file and change the line to something like:

powershell.exe -File "SharePointLib\EnsureFeatureActivation.ps1" "%config%" "BrandingMasterPages,BrandingCSS,BrandingPageLayouts" >>ReactivateFeatures.log

to force the three (made up) features “BrandingMasterPages”, “BrandingCSS” and “BrandingPageLayouts” to be reactivated.

By default you will get one log file from each batch file named after the operation, one log file with a timestamp and in case of any errors an additional error log with the same timestamp. In other words if the error log file is not created then no errors occurred.

Runtimes

The scripts are quite fast and the time to execute is determined by the time it takes to activate the features within SharePoint. Therefore if the script has nothing to do, i.e. all features are activated as they should, then it’s normally less than 1 minute to check everything and return.

What usually takes time is features with many files to be added to document libraries (branding) and features that modify the web.config as we wait for the timer job to complete (assuming that the API is used to do this).

Closing Notes

The scripts write quite a bit of logging that can be daunting (especially if you set it to verbose) but don’t worry – I rarely look at them anymore. Instead I look for the EnsureFeatureActivation_date_error_log. If it’s not there then no error occurred and no reason to look into the log files.

These scripts have been a huge benefit for our deployments in order to reduce the post-deployment errors and fixes. They are, however, also the first to be blamed for errors whenever some feature does not behave as intended (“the unghosted file is not changed, that must be the fault of the deployment scripts”) – and they are rarely the source of the error.

They can be blamed for being a bit complicated though, and that is a fair point. If you don’t understand what they do and how, you shouldn’t use them.

If you make some brilliant changes then let me know so I can include it in my “official” version.

Please also leave credits in there so it’ll be possible for your successors to find the source and documentation.

Download

Grab the scripts here.

Note: Updated Aug 29 2011.
