Use CSOM from PowerShell!


The SharePoint 2010/2013 Client Object Model (CSOM) offers an alternative way to do basic read/write (CRUD) operations in SharePoint. I find that it is superior to the “normal” server object model/cmdlets in three respects:

  1. Speed
  2. Memory
  3. Execution requirements – it does not need to run on your SharePoint production server and does not need Shell access privileges. Essentially you can execute these kinds of scripts with normal SharePoint privileges instead of sysadmin privileges

And, to be fair, inferior in the types of operation you can perform. It is essentially limited to CRUD operations, especially in the SharePoint 2010 CSOM.

This post is about how to use it in PowerShell and a comparison of the performance.

How to use CSOM

First, there is a slight problem in PowerShell (v2 and v3): it cannot easily call generic methods such as ClientContext.Load. It simply cannot figure out which overloaded method to call – therefore we have to help it a bit.
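For reference, this is the kind of call that trips PowerShell up (assuming the client dlls are already loaded; the exact error text varies by version):

```powershell
$context = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
# Fails in PowerShell v2/v3: it cannot resolve the generic Load<T> overload
$context.Load($context.Web)
```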

The following is the function I use to include the CSOM dependencies in my scripts. It simply loads the two Client dlls and creates a new version of the ClientContext class that doesn’t use the offending “Load<T>(T clientObject)” method.

I nicked most of this from here, but added the ability to load the client assemblies from a local dir (with fallback to the GAC) – very useful if you are not running on a SharePoint server.

$myScriptPath = (Split-Path -Parent $MyInvocation.MyCommand.Path)

function AddCSOM(){

    #Load SharePoint client dlls from the script dir, fall back to the GAC below
    #(LoadFile throws if the file is missing, so swallow that and let the GAC load take over)
    try {
        $a = [System.Reflection.Assembly]::LoadFile("$myScriptPath\Microsoft.SharePoint.Client.dll")
    } catch { }
    try {
        $ar = [System.Reflection.Assembly]::LoadFile("$myScriptPath\Microsoft.SharePoint.Client.Runtime.dll")
    } catch { }

    if( !$a ){
        $a = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Client")
    }
    if( !$ar ){
        $ar = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Client.Runtime")
    }

    if( !$a -or !$ar ){
        throw "Could not load Microsoft.SharePoint.Client.dll or Microsoft.SharePoint.Client.Runtime.dll"
    }

    #Add overload to the client context.
    #Define new Load method without a type argument
    $csharp = "
     using Microsoft.SharePoint.Client;
     namespace SharepointClient
     {
         public class PSClientContext: ClientContext
         {
             public PSClientContext(string siteUrl)
                 : base(siteUrl)
             {
             }
             // need a plain Load method here, the base method is a generic method
             // which isn't supported in PowerShell.
             public void Load(ClientObject objectToLoad)
             {
                 base.Load(objectToLoad);
             }
         }
     }"

    $assemblies = @( $a.FullName, $ar.FullName, "System.Core")
    #Add dynamic type to the PowerShell runspace
    Add-Type -TypeDefinition $csharp -ReferencedAssemblies $assemblies
}

And in order to fetch data from a list you would do:

AddCSOM

$context = New-Object SharepointClient.PSClientContext($siteUrl)

#Hardcoded list name
$list = $context.Web.Lists.GetByTitle("Documents")

#ask for plenty of documents, and the fields needed
$query = [Microsoft.SharePoint.Client.CamlQuery]::CreateAllItemsQuery(10000, 'UniqueId','ID','Created','Modified','FileLeafRef','Title') 
$items = $list.GetItems( $query )

$context.Load($list)
$context.Load($items)
#execute query
$context.ExecuteQuery()


$items |% {
          Write-host "Url: $($_["FileRef"]), title: $($_["FileLeafRef"]) "
}

It doesn’t get much easier than that (when you have the AddCSOM function that is). It is a few more lines of code than you would need with the server OM (load and execute query) but not by much.

The above code works with both 2010 and 2013 CSOM.
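Writing works much the same way. A minimal sketch, assuming the AddCSOM function above, the same “Documents” library, and an item with ID 1 (both placeholders):

```powershell
AddCSOM

$context = New-Object SharepointClient.PSClientContext($siteUrl)
$list = $context.Web.Lists.GetByTitle("Documents")

#fetch a single item by ID
$item = $list.GetItemById(1)
$context.Load($item)
$context.ExecuteQuery()

#update a field and push the change back to the server
$item["Title"] = "Updated from CSOM"
$item.Update()
$context.ExecuteQuery()
```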

Performance Measurement

To check the efficiency of the Client object model compared to the traditional server model I created two scripts and measured the runtime and memory consumption:

Client OM:

param 
(
[string]$listName = $(throw "Provide list name"),
[string] $siteUrl = $(throw "Provide site url")
)

AddCSOM

[System.GC]::Collect()
$membefore = (get-process -id $pid).ws

$duration = Measure-Command {

          $context = New-Object SharepointClient.PSClientContext($siteUrl)
         
          #Hardcoded list name
          $list = $context.Web.Lists.GetByTitle($listName)
         
          #ask for plenty of documents, and the fields needed
          $query = [Microsoft.SharePoint.Client.CamlQuery]::CreateAllItemsQuery(10000, 'UniqueId','ID','Created','Modified','FileLeafRef','Title') 
          $items = $list.GetItems( $query )
         
          $context.Load($list)
          $context.Load($items)
          #execute query
          $context.ExecuteQuery()
         
         
          $items |% {
                #retrieve some properties (but do not spend the time to print them)
                  $t = "Url: $($_["FileRef"]), title: $($_["FileLeafRef"]) "
          }
         
}

[System.GC]::Collect()
$memafter =  (get-process -id $pid).ws

Write-Host "Items iterated: $($items.count)"
Write-Host "Total duration: $($duration.TotalSeconds), total memory consumption: $(($memafter-$membefore)/(1024*1024)) MB"

Server OM:

param 
(
[string]$listName = $(throw "Provide list name"),
[string] $siteUrl = $(throw "Provide site url")
)

Add-PsSnapin Microsoft.SharePoint.PowerShell -ea SilentlyContinue

[System.GC]::Collect()
$membefore =  (get-process -id $pid).ws

$duration = Measure-Command {
          $w = Get-SPWeb $siteUrl 
          $list = $w.Lists[$listName]

          $items = $list.GetItems()
          $items |% {
                #retrieve some properties (but do not spend the time to print them)
                  $t = "url: $($_.Url), title: $($_.Title)"
          }
}

[System.GC]::Collect()
$memafter =  (get-process -id $pid).ws

Write-Host "Items iterated: $($items.count)"
Write-Host "Total duration: $($duration.TotalSeconds), total memory consumption: $(($memafter-$membefore)/(1024*1024)) MB"

And executed them against document libraries of 500 and 1500 elements (4 measurements at each data point).

The Results

Are very clear:

(Chart: total duration and memory consumption, CSOM vs. server OM, at 500 and 1500 items)

As you can see it is MUCH more efficient to rely on CSOM and it scales a lot better. The server OM retrieves a huge number of additional properties, but it also has the benefit of going directly to the database instead of through the web server. Curiously the CSOM version offers much more reliable performance, where the server OM varies quite a bit.

In addition you get around the limitation of Shell access for the PowerShell account and the need for server-side execution. Might be convenient for many.

Conclusions

The only downside I can see with the CSOM approach is that it is unfamiliar to most and it does require a few more lines of code. If your specific need is covered by the API, of course.

It’s faster, more portable, less memory intensive and simple to use.

Granted, there are lots of missing APIs (especially in the 2010 edition), but just about every data manipulation need is covered. That is quite a bit after all.

So go for it :-)

Quick Guide to PowerShell Remoting (for SharePoint stuff…)


Lately I have been doing quite a bit of PowerShell remoting (mostly in a SharePoint context) and while it is surprisingly easy and useful there are a few hoops I’ll detail here.

I have been a fan of PowerShell ever since TechEd ’06 in Barcelona where some young chap eloquently introduced PowerShell. Everyone in the audience understood the potential.

Since then it has become so pervasive that I don’t have to waste time arguing its role and prevalence among scripting languages – and the remoting part once again is head and shoulders above the alternatives (even SSH on the unixes).

For remoting in a SharePoint context there are three, perhaps four, steps; for other purposes it wouldn’t hurt to go through the same.

Note: Everything in this post must be executed within an administrative PowerShell console.

Note: There are about a gazillion settings and variations that I’ve skipped; this is how I normally do it.

Step 1: Enable PS Remoting

This one is dead simple.

On the clients (the servers that you are remoting to) execute

Enable-PSRemoting

In a PowerShell shell (in admin mode) – just press return for every confirmation prompt; the defaults are sensible (or add “-force”).
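To sanity-check from the controller that the client is listening, Test-WSMan comes in handy – it doesn’t create a session, it just probes the WinRM endpoint ($clientComputerName is your target server):

```powershell
# Returns WSMan identity information on success;
# throws an error if remoting is not enabled/reachable
Test-WSMan -ComputerName $clientComputerName
```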

Step 2: Set Sensible Memory Limits

By default each remoting session has a cap of 150 MB of memory. Once you exceed it you’ll get a more or less random exception from the remoting session; it may be really hard to figure out what went wrong.

When you work with SharePoint you can easily spend 150 MB if you iterate over a number of SPWebs or SPSites. It may not even be possible to limit your consumption by disposing explicitly (use Start-SPAssignment for that) if you actually need to iterate it all. (Side-note: when the session ends so do all allocated objects – whether you Dispose them or not doesn’t matter.)

Let’s face it: these are often expensive scripts with a lot of work to do.

The fix is simple (and in my opinion safe). On the clients execute:

Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 1024

Which will set the limit to 1 GB.
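If you want to see the current cap before changing it:

```powershell
# Show the current per-shell memory cap (the value is in MB)
Get-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB
```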

Long explanation here.

Step 3: Disable UAC

If you need to do any SharePoint or IIS administration (or a hundred other tasks) you need your script to run with Administrative rights.

The PowerShell remote process does not request administrative rights from Windows – it’ll just use whatever is assigned to it.

Before you accuse me of disabling all security, take a good long read of this article from Microsoft; they actually do recommend disabling UAC on servers, provided that you do not use your servers for non-admin stuff (that is, NO Internet browsing!).

To test whether or not UAC is enabled, start a PowerShell console (simply left-click the PowerShell icon). Does it say “Administrator:…” in the title? If yes, then UAC is disabled and you are good to go.

There are at least two places to fiddle with the UAC:

  1. You can go to Control Panel / User Account Control Settings to change when you see the notifications boxes. This will NOT help you here – it is only the notification settings not the feature itself
  2. You need to go to the MMC / Local Policy Editor to disable it:
    1. Run “MMC”
    2. Choose “Add/remove snap-in”
    3. Choose “Group Policy Object”
    4. Choose “Local Computer”
    5. Follow the picture below and set the “Run all administrators in Admin Approval mode” to disabled (Note: “Admin Approval Mode” is UAC)

      (Screenshot: the “Run all administrators in Admin Approval Mode” policy set to Disabled)

    6. Reboot the server
    7. TEST that UAC is disabled by starting a PowerShell and check that the title reads “Administrator:…”

You should take a step back here and ask your local sysadmin whether this is a policy they enforce – or whether it should be. They can create a new policy that targets specific servers to disable UAC.

Usage and Test

Finally try it!

To enter an interactive shell execute (from the controller):

    Enter-PSSession -ComputerName $clientComputerName

If all goes well you’ll see that the prompt changes a bit and every command is now executed remotely. Do a “dir” and check that you see the remote server’s files and not the local ones.

Type “exit” to return to the controller.

If you are going cross domains you’ll receive an error, execute step 4 (below) in that case.

To execute a script with arguments (and store the pipeline output in $output):

$output = Invoke-Command -ComputerName $clientComputerName -FilePath .\ScriptToExecute.ps1 -ArgumentList ($arg1, $arg2, $arg3)

Note that the script file (“ScriptToExecute.ps1”) is local to the controller and WinRM handles the mundane task of transferring it to the client. The script is executed remotely and therefore you cannot reference other scripts, as they are not transferred as well.

To execute a script block:

$output = Invoke-Command -ComputerName $clientComputerName -ScriptBlock { Get-ChildItem "C:\" }

And you can of course pass arguments to your scriptblock and combine it in a hundred different ways.

Warning: Remember the context

The remote sessions are new PowerShell sessions; whatever information/variables you need you must either pass as arguments or embed within a scriptblock.

You can pass simple serializable objects back to the controller on the pipeline, but it will not work to pass COM/WMI/SharePoint objects back.
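For example, values from the controller can be fed into the remote session via -ArgumentList and a param block ($listName and $clientComputerName are placeholders):

```powershell
$listName = "Documents"
$output = Invoke-Command -ComputerName $clientComputerName -ScriptBlock {
    param($name)
    # $name now holds the controller's $listName inside the remote session
    "Working on list: $name"
} -ArgumentList $listName
```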

Step 4: (Optional) Going Cross Domain?

By default PowerShell remoting will connect to the client computer and automatically pass the credentials of the current user to the client computer. That is normally what you want and ensures that you need no credentials prompt.

However, that only works for servers within the same domain as the user. If you are crossing domain boundaries – AD trust or not – you need to do something more (before jumping through hoops, do test it first to make sure that this is required for you).

Again, there are many options but the one with the least configuration hassles is:

  1. Add the client servers to the controller server’s list of trusted hosts: Set-Item WSMan:\localhost\Client\TrustedHosts -Value $serverList -Force, where $serverList is a comma-separated string of computer names (I use FQDNs).
  2. Pass explicit credentials to the remoting commands

    $c = Get-Credential

    Enter-PSSession -ComputerName $clientComputerName -Credential $c

    … and it should work. There are a million details regarding forests, firewalls, etc. I’ll not go into here.

(Other options are Kerberos, SSL channels, …)

Quick tip: Handy Scripts for Local Hyper-V Management


Here is a quick post with a small script that I find incredibly handy for managing my local Hyper-V machines (on Windows 8).

It simply ensures that a given set of VMs is running at a given time – attach that to a shortcut on the desktop and it saves me 3 minutes every day; I find it useful.

Why? Quite often I need to switch from one set of VMs to another for different tasks (the VMware term is “teams”), e.g. switching between SP2010 and SP2013 development. Sometimes I just turn them off if I need the resources for something else.

Normally you just go into Hyper-V manager and start/stop the relevant VMs. Automating that was quite simple and painless – the download is here.

  1. I have one PowerShell script (SetRunningVMs.ps1) that handles the VM management and then a couple of batch files for executing that script with proper parameters. The script is:
    <#
    .SYNOPSIS
    This is a simple Powershell script that adjust what VMs are running on the local Hyper-V host.
    
    .DESCRIPTION
    This script will start/resume the requested VMs and suspend all other VMs to give you maximum power and make it easy to switch from one
    development task to another, i.e. switch teams.
    
    .EXAMPLE
    Make sure that only the dev1 and ad01 machines are running:
    ./SetRunningVMs.ps1 'dev1' 'ad01'
    
    Stop all VMs (no arguments)
    ./SetRunningVMs.ps1 
    
    .NOTES
    Requires admin rights to run. Start using admin shell or use one of the provided batch files.
    Pauses on error.
    
    .LINK
    
    http://soerennielsen.wordpress.com
    
    #>
    
    param( [Parameter(ValueFromRemainingArguments=$true)][string[]] $allowedVMs = @() )
    
    try{
        get-vm |? { $_.State -eq "Running" -and $allowedVMs -notcontains $_.Name } |% { Write "Saving $($_.Name)"; Save-VM $_.Name }
    
        get-vm |? { $_.State -ne "Running" -and $allowedVMs -contains $_.Name } |% { Write "Starting $($_.Name)"; Start-VM $_.Name }
    
    write "SetRunningVMs successfully done"
    }
    catch{
    Write-Error "Oops, error: $_"
        pause
    }

    I assume that the PowerShell Hyper-V module is loaded; it has always been the case in my tests.

  2. I have a number of batch files for easily executing the script, one for turning off (suspend) all VMs, one for a SP2010 team and one for the SP2013 team. It’s a one-liner bat file that will work with both PowerShell 2 and 3, with or without UAC.

    To Start my SP2010 team (StartSP2010Team.bat):

    powershell -noprofile -command "&{ start-process powershell -ArgumentList '-noprofile -file %~dp0SetRunningVMs.ps1 \"AD01\" \"Dev1\"' -verb RunAs}"

    (where you replace the VM names above with your own)

    To start my SP2013 team (StartSP2013Team.bat)

    powershell -noprofile -command "&{ start-process powershell -ArgumentList '-noprofile -file %~dp0SetRunningVMs.ps1 \"AD02\" \"SP2013\"' -verb RunAs}"

    To stop all VMs

    powershell -noprofile -command "&{ start-process powershell -ArgumentList '-noprofile -file %~dp0SetRunningVMs.ps1' -verb RunAs}"

If you have UAC enabled (as I do) you will be prompted, otherwise it will just suspend/resume the relevant VMs.

It took about two hours to write the scripts, where the hardest part was getting the batch files properly in shape with escape characters and UAC.

You gotta love PowerShell ;-)

Supercharge your (Resource) Efficiency with Macros


This is part 4 of 4 in a series on how to improve the way we usually work with resource (resx) files.

I generally like to – and do – use resource files for all string constants that are shown to end-users, however I do feel that it is needlessly cumbersome, therefore these posts:

  1. A Good Way to Handle Multi Language Resource Files
  2. Verify that your Resource Labels Exists
  3. Find/Remove Obsolete Resource Labels
  4. Supercharge your (Resource) Efficiency with Macros (this one)

Generally these issues are generic to .NET and not specific to SharePoint, though that is where I’m spending my time writing this.

What is it again?

This is a macro to increase developer productivity. It is a topic that lies very close to my heart and a healthy fraction of my posts are about automation and productivity. Every developer should really know the macro features, especially the record/play shortcuts.

Working with resources (in SharePoint) is actually quite tedious – we write “$Resources:” a thousand times and occasionally can’t be bothered and skip the chore.

This is simply a Visual Studio macro to improve that workflow; usage is:

1. You highlight a string

2. Press a shortcut (of your choice)

3. Write a name for the resource key to create (optionally reuse an existing key if one with the same value exists)

4. And you’re done.

It will then add the resource key to your existing resx file and type in “$Resources:…” as needed. If it’s an aspx/ascx file it will also throw in some “<%=” and “%>” tags.

(It works very well for xml files too.)

The Gory Detail

This is a VB macro script that took an inordinate amount of time to write mainly because the Visual Studio Macro object model (DTE) is one of the most hideous, awkward, APIs I’ve ever worked with. If there is one place the VS team could improve it would be here – the macro IDE is also tedious to work with.

I’m very certain that there are ways to make the code perform better (speed is fine) and look better (is it important?) – let me know your nuggets in the comments; it’s always good to learn something new.

What you need to know is:

  1. It looks for a resx file with the same name as the target name of your project, i.e. the dll/wsp name
    • It only works on the culture independent resx file
    • It even adds a comment in the resource file as to where a key was first used :-)
  2. It works only for text editor windows
    • I have not found a way to make the selection work in the designer windows, specifically the feature designer would have been nice. This is annoying.

      The workaround is to simply choose to open the .feature file in the “XML (text) editor” (choose “Open with” in the file open dialog)

  3. You may need to customize the replacement pattern for cs and as?x files as it calls a simple utility method “ResourceLookup.SPGetLocalizedString” to translate the “$Resources:…” key (see code below)
  4. It handles quotes in/around the selection

At the moment it is only tested with VS2010 – I’m certain that changes need to be made for VS2013. In due time…

The resource lookup method I’m using is:

        public static string SPGetLocalizedString(string key)
        {
            if (!key.StartsWith("$Resources:"))
            {
                return key;
            }
            var split = key.TrimEnd(';').Remove(0, "$Resources:".Length).Split(',');

            if (split.Length != 2 || string.IsNullOrEmpty(split[1]))
            {
                return key;
            }

            return SPUtility.GetLocalizedString(key, split[0], (uint)System.Globalization.CultureInfo.CurrentUICulture.LCID);
        }

Download and Installation

It is simple:

  1. Download the Macro project here
  2. Dump it somewhere on your disk – the usual location is “Documents\Visual Studio 2010\Projects\VSMacros80”
  3. Choose “Tools / Macros / Load Macro Project” and pick the DGMacros.vsmacros file
  4. Bind a keyboard shortcut for the text editor to the macro. Just write “dgm” in the search box and pick the “ReplaceStringWithResource” macro

  5. Have a try and perhaps a look at point 3 of the gory details above ;-)

How to Make List Items Visible to Anonymous Users (in Search)


I had a funny little issue with showing list items from a custom list (with a custom view form) to anonymous users on a publishing site.

My good colleague Bernd Rickenberg insisted that I blog the resolution since he had found quite a few posts detailing the problem but no viable solutions ;-)

The Issue

You want to show a custom list to anonymous users. In our setting it was through search, your use case is likely different, but the issue remains the same with or without search.

Quite simply forms (List forms) are generally not accessible to anonymous users when you have the lockdown feature enabled (ViewFormPagesLockdown). It is a critical feature to have enabled otherwise you expose way too much information to SharePoint savvy users. Many of the solutions to this issue suggest turning it off.

It is fairly simple to test if you have this problem. Start Firefox (assuming that you have not enabled the NTLM auto-login setting) and hit the …/DispForm.aspx page for a specific item. If you receive the login prompt as anonymous in Firefox and sail through when logged in, this post is for you.

If the ordinary publishing pages are also not viewable then this post is not for you.

Note: Never use IE to test for anonymous access; I can’t count the number of times consultants have been fooled into thinking it works by the auto-login feature while on the corporate network.

The Resolution

Is fairly simple.

The basics for search are that the list must be made searchable (it is by default) in the list settings, anonymous users must have access rights to the site, and the lockdown feature should(!) be enabled.

What the lockdown feature does is change the anonymous permission mask at the root site (which is inherited by all subsites by default). The mask is basically the permission level assigned to all anonymous users and is similar to the normal permission sets – but it is not editable in the UI.

The “View Application Pages” permission level is removed, see the image below on where to find it (for non-anonymous users):

Permission level settings page

Permission level settings page – not available for anonymous users

The best option, security wise, is to break the permission inheritance for your particular list and then add the “View Application Pages” permission to the anonymous users. Do not do that at the web level, as you do not want to expose e.g. the All Site Content page.

The Script

You need to run the following PowerShell commands on one of your servers (replace the url and list name):

$web = get-spweb "http://yoursiteurl/subweb/subsubweb" 
$list = $web.Lists["ListName"]
$list.BreakRoleInheritance($true)
$list.AnonymousPermMask = $list.AnonymousPermMask -bor ([int][Microsoft.SharePoint.SPBasePermissions]::ViewFormPages) #binary or adding the permissions
$list.Update()
 

(Note: Do be careful if copying directly from this page to PowerShell – you need to re-type the quotes as WordPress mangles them)
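To verify that the permission made it in, a quick check along the same lines (same url/list placeholders as above; a non-zero result means anonymous users now hold the ViewFormPages right):

```powershell
$web = Get-SPWeb "http://yoursiteurl/subweb/subsubweb"
$list = $web.Lists["ListName"]
# Binary-and the mask with the flag we added; non-zero means it is set
$list.AnonymousPermMask -band ([int][Microsoft.SharePoint.SPBasePermissions]::ViewFormPages)
```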

SharePoint Advanced Large Scale Deployment Scripting – “Dev and QA gold-plating” (part 3 of 3)


I have been battling deployments for a while and finally decided to do something about it.

;-)

The challenge is to handle a dozen WSP packages on farms that host 20-50 web applications (or host header named site collections) to complete the deployments in a timely manner, ensure that the required features are uniformly activated every time while minimizing the human error element of the poor guy deploying through the night.

The deployment scripts require PowerShell v2 and are equally applicable to both SharePoint 2007 and 2010 – the need is roughly the same and the APIs are also largely unchanged. Some minor differences in service names are covered by the code.

To keep this post reasonable in length I’ve split the subject into three parts:

Part 1 is the main (powershell) deployment scripts primarily targeted at your test/QA/production deployments

Part 2 is the scripts and configuration for automatic large scale feature activations

Part 3 (this part) is about gold-plating the dev/test/QA deployment scenario where convenience to developers is prioritized while maintaining strict change control and records

Note: This took a long time to write and it will likely take you, dear reader, at least a few hours to implement in your environment.

If you have not done so already read the two other parts first.

The Challenge

When you have a number of developers working on the same service line you need to keep strict control of your QA and test environments. So either you have one guy as the gatekeeper or you automate as much as possible. I’m the latter type of guy, so I’ve developed a “one-click” deployment methodology where the focus is:

  1. Simple and consistent deployment
  2. Easy collection of solutions for the next production deployment – you should deploy the exact code that you tested, not a “fresh” build(!)

The Solution

Our build server is set up to produce date-labelled zip files with the wsp and a manual installer (exe) within a folder of the same name. I’m sure yours is different; therefore the scripts work with a solution drop directory that accepts both zip files (that include a wsp file) and plain wsp files. The folder structure is up to you – I’ll just search recursively for zip and wsp files.

The script files are:

03 DeployAndStoreDropDir.bat: The starting batch file that will execute the deployment process as a specific install user. That way the developers can log in with their personalized service accounts and use a specialized one for deployment purposes. You need to change the user name in this batch file. The first time you run this you need to enter the password for your install account; subsequent runs will remember it (“runas /savecred”).

03b DeployAndStoreDropDirAux.bat: The batch file doing half the work. It’ll archive old log files in “.\ArchivedDeploymentLogFiles” (created if not present) and execute “QADeploymentProcess.ps1”, which does the hard work. At the end it’ll look for an error log and write an alert that you should look for it. Saves a lot of time – I don’t go looking in log files if I don’t get that message.

SharePointLib\QADeploymentProcess.ps1: The script that handles the unzipping and storage of the zip/wsp files and executes both the deployment and feature activation scripts. The goal is to have a simple drop dir for WSPs/zips and a single storage location, “SolutionsDeployed”, for the currently deployed code. The “SolutionsDeployed” folder is updated at every deployment, so that there is only the latest version of each wsp deployed. You do not need to drop all wsps in the drop dir every time; it does not just delete the SolutionsDeployed folder at every deployment.

This is how the files are moved around for a couple of WSPs and a zip file:

In short:

  1. If you want to start it with a specific install user, just execute “03 DeployAndStoreDropDir.bat”
  2. If you want to start it in your own name, execute “03b DeployAndStoreDropDirAux.bat” and perhaps delete the other bat file.
  3. If you want to modify how it works go to the “QADeploymentProcess.ps1″ file.
  4. If you want to work with zip files they can have the name and internal folder structure that you like. The one important thing is that they should only contain one WSP file. It is pretty smart; at the next deployment your zip file may be named differently (e.g. with a date/time in the name) and it will still remove the old version from “SolutionsDeployed” and store your new version with the new folder/zip name.
  5. At the end of your build iteration SolutionsDeployed will contain the latest version of your deployed code (provided that you emptied it at the last production deployment)

Closing Notes

It is not a big challenge to connect the remaining dots and have your build server do the deployment for certain build types, giving you “zero-click” deployment; however I opted not to. It should be a conscious decision to deploy to your test and QA environments, not something equivalent to a daily check-in. Your mileage may vary.

I would generally provide these scripts to everyone in the team and encourage them to use them on their own dev box once in a while – it always pays to have your dev environment resemble the actual production environment, and this makes it feasible to keep the code (almost) in sync on all dev environments.

The challenge may be the complexity of the scripts and the effort to understand them – you shouldn’t rely too much on stuff you don’t understand ;-)

Feel free to adjust the script a bit (likely QADeploymentProcess.ps1) to suit your environment.

If you make something brilliant then let me know so I can include it in my “official” version. Please also leave credits in there so it’ll be possible for your successors to find the source and documentation.

Download

Grab the files here – part 1, 2 and 3 are all in there. I’ll update them a bit if I find any errors or make improvements.

Note: Updated Aug 29 2011, minor stuff.

Don’t forget to change the user name in “03 DeployAndStoreDropDir.bat” to your “install” account.

SharePoint Advanced Large Scale Deployment Scripting – “Features” (part 2 of 3)


I have been battling deployments for a while and finally decided to do something about it.

;-)

The challenge is to handle a dozen WSP packages on farms that host 20-50 web applications (or host header named site collections) to complete the deployments in a timely manner, ensure that the required features are uniformly activated every time while minimizing the human error element of the poor guy deploying through the night.

The deployment scripts require PowerShell v2 and are equally applicable to both SharePoint 2007 and 2010 – the need is roughly the same and the APIs are also largely unchanged. Some minor differences in service names are covered by the code.

To keep this post reasonable in length I’ve split the subject into three parts:

Part 1 is the main (powershell) deployment scripts primarily targeted at your test/QA/production deployments

Part 2 is the scripts and configuration for automatic large scale feature activations

Part 3 is about gold-plating the dev/test/QA deployment scenario where convenience to developers is prioritized while maintaining strict change control and records

Note: This took a long time to write and it will likely take you, dear reader, at least a few hours to implement in your environment.

If you have not done so already read “Part 1” first, and excuse me for a bit of duplicated text.

It’s all about the Features

This part is about features and managing their (de-)activations.

What you can do with this is:

  • Maintain a set of templates (“FeatureSets”) that defines what features should be activated on a given type of site – the scripts then ensures that they are properly activated
  • Set actions for features to be either activate, deactivate or reactivate through the configuration file
    • It safely handles web.config updates so that the scripts do not enter a race condition with the SharePoint timer service
  • Selectively override the default behavior and reactivate certain features when you need to

In other words you can ensure that your features are properly activated across sites and farms.

A note on reactivation is in order here. Reactivation is a (forced) deactivation followed by an activation, which is particularly useful for features that update files in a document library, e.g. masterpages. Some features require it after they have been updated, but it is very rare that a feature always requires reactivation.

Therefore I usually do not specify any features to always be reactivated in the configuration file; however, if I know that we updated the masterpages I’ll update the batch file, or supply an additional one, where that feature is force-reactivated. The safe option of always reactivating is simply too slow if many deployed files (or lengthy feature receivers) are involved.

The Configuration

The scripts work based on a configuration file that is shared between your dev, test, QA and production environments.

The procedure is:

  1. Identify all valid Sites/URLs in the local farm (i.e. the scripts are always executed on one of the servers in the farm)
  2. For each site/URL go through all FeatureSets and ensure the required activations, deactivations and reactivations
    1. Keep track of reactivations so the same feature is only reactivated once at each scope, i.e. overlaps between FeatureSets are allowed as long as they specify the same action
    2. Report inconsistencies as errors, but continue with the next feature so that you have a complete overview of all errors at the end of the run
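The procedure above can be sketched in PowerShell. This is my own hedged illustration, not the actual script: the config file name, the `$reactivated` bookkeeping and the `EnsureFeature` helper are hypothetical, and the real scripts add logging, scope handling and error reporting.

```powershell
# Sketch of the activation loop (assumes the SharePoint server OM is loaded)
$config = [xml](Get-Content "DeploymentConfig.xml")
$reactivated = @{}   # track feature+scope so overlapping FeatureSets only reactivate once

foreach ($siteNode in $config.Config.Sites.Site) {
    $site = $null
    try   { $site = New-Object Microsoft.SharePoint.SPSite($siteNode.url) }
    catch { continue }   # URL not present in the local farm - ignore it

    foreach ($fsRef in $siteNode.FeatureSet) {
        $fs = $config.Config.FeatureSets.FeatureSet | Where-Object { $_.name -eq $fsRef.name }
        foreach ($feature in @($fs.Feature)) {
            $key = "$($feature.nameOrId)|$($site.Url)"
            if ($feature.action -eq "reactivate" -and $reactivated.ContainsKey($key)) { continue }
            if ($feature.action -eq "reactivate") { $reactivated[$key] = $true }
            # EnsureFeature (hypothetical helper) resolves nameOrId to a guid and
            # (de)activates with force; on error it logs and continues with the next feature
            # EnsureFeature $site $feature
        }
    }
    $site.Dispose()
}
```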

A sample configuration file is (note: I just picked some more or less random WSPs from codeplex):

<?xml version="1.0" ?>
  <Config>
    <Sites>
      <!-- note: urls for all environments can be added, the ones found in local farm will be utilized -->
      <!-- DEV -->
      <Site url="http://localhost/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="Intranet" />
      </Site>
      <Site url="http://localhost:8080/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="MySite" />
      </Site>
      <Site url="http://localhost:2010">
        <FeatureSet name="CA" />
      </Site>
      <!-- TEST -->
      <Site url="http://test.intranet/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="Intranet" />
      </Site>
      ...
      <!-- PROD -->
      <Site url="http://intranet/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="Intranet" />
      </Site>
      ...
    </Sites>

    <FeatureSets>
      <FeatureSet name="BrandingAndCommon">
        <Solution name="AccessCheckerWebPart.wsp" />
        <Feature nameOrId="AccessCheckerSiteSettings" action="activate" ignoreerror="false" />
        <Feature nameOrId="AccessCheckerWebPart" action="activate" ignoreerror="false" />
        <Solution name="Dicks.Reminders.wsp" />
        <Feature nameOrId="DicksRemindersSettings" action="activate" ignoreerror="false" />
        <Feature nameOrId="DicksRemindersTimerJob" action="activate" ignoreerror="false" />
        <!-- Todo - all sorts of masterpages, css, js, etc. -->
      </FeatureSet>
      <FeatureSet name="Intranet">
        <Solution name="ContentTypeHierarchy.wsp" />
        <Feature nameOrId="ContentTypeHierarchy" action="activate" ignoreerror="false" />
      </FeatureSet>
      <FeatureSet name="MySite">
      </FeatureSet>
      <FeatureSet name="CA">
      </FeatureSet>
    </FeatureSets>

    <Solutions>
      <!-- list of all solutions and whatever params are required for their deployment -->
      <Solution name="AccessCheckerWebPart.wsp" allcontenturls="false" centraladmin="false" upgrade="false" resettimerservice="false" />
      <Solution name="ContentTypeHierarchy.wsp" allcontenturls="false" centraladmin="false" upgrade="false" resettimerservice="false" />
      <Solution name="Dicks.Reminders.wsp" allcontenturls="true" centraladmin="false" upgrade="false" resettimerservice="true" />
    </Solutions>
  </Config>

The interesting bits are that Sites contain a number of named FeatureSets. The FeatureSets are gathered in their own section (for reusability) and define a number of solutions to be deployed and the features to be handled. It is good practice to list the solutions for the features that you are working on so you can be sure that the solution deployment matches your features. Do not spend time listing all the SharePoint system features to be activated, as they are normally handled just fine by the Site Definitions.

Each feature activation line looks like:

<Feature nameOrId="AccessCheckerWebPart" action="activate" ignoreerror="false" />

Where nameOrId is either the guid (with or without braces, dashes etc.) or the name of the feature, i.e. the internal name in the WSP packages, not the display name. When possible I always opt for flexibility ;-)

The action attribute is either “activate”, “deactivate” or “reactivate”.

The ignoreerror attribute is simply a boolean true/false switch that determines how the error should be logged if the feature fails to be activated. It is quite useful for the occasional inconsistencies between environments, e.g. if you are removing a feature it might still be present in production but no longer in the test environment. The script continues with the next feature in case of errors regardless of this switch.

It is important to note that web scoped features are not supported here; “only” farm, web application or site scope is allowed.

The features are activated in the order that they are present in the config file.
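As a hedged illustration of what activation at the three supported scopes looks like against the server object model (the guid and URLs below are placeholders; the real scripts resolve nameOrId first and pick the scope from the feature definition):

```powershell
# Force-activating one feature at each supported scope.
# SPFeatureCollection.Add(Guid, $true) activates with the force flag.
$featureId = [Guid]"00000000-0000-0000-0000-000000000000"   # placeholder guid

# Site collection scope
$site = New-Object Microsoft.SharePoint.SPSite("http://localhost/")
$site.Features.Add($featureId, $true)
$site.Dispose()

# Web application scope
$uri    = New-Object System.Uri("http://localhost/")
$webApp = [Microsoft.SharePoint.Administration.SPWebApplication]::Lookup($uri)
$webApp.Features.Add($featureId, $true)

# Farm scope - farm features hang off the content web service
$contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$contentService.Features.Add($featureId, $true)
```

Deactivation is the mirror image via `Features.Remove($featureId, $true)`.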

The Scripts

There are two sections here:

  1. A number of SharePoint PowerShell scripts that include each other as needed
  2. A few batch files that start it all. I generally prefer to have some (parameterless) batch files that execute the PowerShell scripts with appropriate parameters

I have gold-plated the scripts quite a bit and created a small script library for my SharePoint related PowerShell.

The PowerShell scripts are too long to write in the post, they can be downloaded here. The list is (\SharePointLib):

*(Part 1) DeploySharePointSoluitions.ps1: Starts the WSP deployments. Parses command line arguments for the config file name and the directory of WSP solutions

*EnsureFeatureActivation.ps1: Starts the feature activation scripts (needs command line arguments)

*(Part 3) QADeploymentProcess.ps1: Deployment process for dev, test and QA environments (needs command line arguments)

SharePointDeploymentScripts.ps1: Main script that defines a large number of methods for deployment and activations.

SharePointHelper.ps1: Fairly simple SharePoint helper methods to get farm information

*RestartSharePointServices.ps1: Restart all SharePoint services on all servers in the local farm

Services.ps1: Non-SharePoint specific methods for handling services, e.g. restarts

Logging.ps1: Generic logging method. Note that there are options for setting different log levels, output files, etc.

The ones with an asterisk are the ones that you are supposed to execute – the others just define methods for the “asterisk scripts”.

The batch files (same download) (root of zip download):

(Part 1) 01 Deploy Solutions.bat: Start the solution deployment. It will deploy all the WSP files dropped in “\SolutionsDropDir\” that are also specified in the “\DeploymentConfig.xml”:

02 Activate Features.bat: Works with the “\DeploymentConfig.xml” file and ensures that all features are activated.

In other words you’ll execute “02 Activate Features.bat” to ensure that all features are activated in your farm.

The important line in 02 Activate Features.bat file is

powershell.exe -File "SharePointLib\EnsureFeatureActivation.ps1" "%config%" "" >>ActivateFeatures.log

And if you need to reactivate your branding features (masterpages) you can just copy the batch file and change the line to something like:

powershell.exe -File "SharePointLib\EnsureFeatureActivation.ps1" "%config%" "BrandingMasterPages,BrandingCSS,BrandingPageLayouts" >>ReactivateFeatures.log

to force the three (made up) features “BrandingMasterPages”, “BrandingCSS” and “BrandingPageLayouts” to be reactivated.

By default each batch file produces one log file named after the operation, one log file with a timestamp, and, in case of any errors, an additional error log with the same timestamp. In other words, if the error log file is not created then no errors occurred.

Runtimes

The scripts are quite fast and the time to execute is determined by the time it takes to activate the features within SharePoint. Therefore if the script has nothing to do, i.e. all features are activated as they should be, it normally takes less than a minute to check everything and return.

What usually takes time are features with many files to be added to document libraries (branding) and features that modify the web.config, as we wait for the timer job to complete (assuming that the API is used to do this).

Closing Notes

The scripts write quite a bit of logging that can be daunting (especially if you set it to verbose) but don’t worry – I rarely look at it anymore. Instead I look for the EnsureFeatureActivation_date_error_log. If it’s not there then no error occurred and there is no reason to look into the log files.

These scripts have been a huge benefit for our deployments in reducing the post-deployment errors and fixes. They are, however, also the first to be blamed whenever some feature does not behave as intended (“the unghosted file is not changed – that must be the fault of the deployment scripts”), and they are rarely the source of the error.

They can be blamed for being a bit complicated though, and that is a fair point. If you don’t understand what they do and how, you shouldn’t use them.

If you make some brilliant changes then let me know so I can include them in my “official” version.

Please also leave credits in there so it’ll be possible for your successors to find the source and documentation.

Download

Grab the scripts here.

Note: Updated Aug 29 2011.

SharePoint Advanced Large Scale Deployment Scripting – “Deploy” (part 1 of 3)


I have been battling deployments for a while and finally decided to do something about it.

:-)

The challenge is to handle a dozen WSP packages on farms that host 20-50 web applications (or host header named site collections): complete the deployments in a timely manner, ensure that the required features are uniformly activated every time, and minimize the human error element for the poor guy deploying through the night.

The deployment scripts are equally applicable to both SharePoint 2007 and 2010 – the need is roughly the same and the APIs are largely unchanged. Some minor differences in service names are covered by the code.

To keep this post reasonable in length I’ve split the subject into three parts:

Part 1 is the main (powershell) deployment scripts primarily targeted at your test/QA/production deployments

Part 2 is the scripts and configuration for automatic large scale feature activations

Part 3 is about gold-plating the dev/test deployment scenario where convenience to developers is prioritized while maintaining strict change control and records

Note: This took a long time and countless iterations to develop and it will likely take you, dear reader, at least a few hours to implement in your environment.

The Requirements

My design goals were to solve

  • Automatic solution deployments (including timer jobs resets)
  • Consistent activation of features across the farm while allowing for different types of sites
  • Easy, fast and reliable execution
  • Not too awkward configuration of solutions and features for farms with many sites – it will never be a no-brainer
  • Exact same configuration (file) to be re-used across Dev, Test, QA and Prod tiers

The cost is some fairly complex scripts that took quite a while to write and test.

The same script works equally well for both 2007 and 2010.

There is almost no difference between SharePoint 2007 and 2010 in this area – 2010 comes with some new PowerShell Cmdlets, but they are often just convenient access to the API. I’m taking the extra effort to hit the API directly, which is essentially unchanged for deployment purposes. One of the really cool new features in 2010 is the ability to control solution upgrade in detail, which is of course supported by these scripts – simply choose the upgrade option for a given solution.

The Solution

The idea is that you have the same configuration file for all your app tiers, and

  1. When you need to deploy a new version of some of your WSPs you just provide them and execute the scripts
  2. The script will scan the local farm, map the URLs to the provided WSPs and deploy them
  3. It will then run through every identified Site, match feature activations with the configuration file and fix any discrepancies
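The deploy step above amounts to a retract/remove/add/deploy cycle per WSP, with retries. This is a rough sketch of mine against the server OM, not the actual script (the path is illustrative, and the real scripts add timer job waits, logging and the service-restart fallback):

```powershell
# Retract/remove/add/deploy one WSP with a simple retry loop
$farm    = [Microsoft.SharePoint.Administration.SPFarm]::Local
$wspPath = "C:\SolutionsDropDir\AccessCheckerWebPart.wsp"   # illustrative drop dir
$name    = [System.IO.Path]::GetFileName($wspPath)

for ($attempt = 1; $attempt -le 3; $attempt++) {
    try {
        $solution = $farm.Solutions[$name]
        if ($solution -ne $null) {
            if ($solution.Deployed) { $solution.Retract([DateTime]::Now) }
            # ...wait for the retraction timer job to finish before removing...
            $farm.Solutions.Remove($solution.Id)
        }
        $solution = $farm.Solutions.Add($wspPath)
        $solution.Deploy([DateTime]::Now, $true, $true)   # deploy globally, with force
        break
    } catch {
        # on failure: restart the SharePoint services on all servers in the farm, then retry
    }
}
```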

The features are

  • Deploy WSP files regardless of whether or not they were deployed beforehand (uninstall followed by install is the default installation mode for me)
    • If solution deployment goes wrong the behavior is to restart all SharePoint related services on all servers in farm and retry (a few times). It surprisingly often solves problems like locked files etc.
    • If a solution contains timer jobs the timer service is restarted on all servers in the farm
  • Upgrade existing WSP files
  • (Part 2) Scan the farm and ensure that required features are activated (or perhaps deactivated) on specified sites
    • Farm and WebApp scoped features are by default deactivated after an uninstall/install – this script reactivates them
    • Features are always (de)activated with the “force” parameter just for good measure
    • Yes, it’ll wait for web config modifications jobs when required
  • (Part 2) Re-activate selected features i.e. deactivate first then activate
  • (Part 3) Automatic deployment and activations for dev, test and QA environments

The Configuration File

The heart of the scripts is the configuration file.

The one configuration file can be re-used for both handling solution deployment and feature activations across all tiers of your farms. You could also conceivably use the same config file for different types of farms, e.g. the intranet and the extranet; however, in those scenarios I feel it is better to keep things separate.

The configuration file contains:

  • A number of “Sites”, which are the URLs of either (host header named) site collections or web applications
    • The local farm will be scanned and the sites found in the config file that are present in the local farm are the ones that we work on – the rest are ignored (e.g. running in test it’ll skip all the production URLs)
    • Each site node contains a number of FeatureSets but no list of features or solutions
  • A number of “FeatureSets” that group WSPs and features. Features are either “activated”, “deactivated” or “reactivated”
    • (The feature thingie is in part 2)
    • For a given site, the union of all the solutions specified in all the FeatureSets associated with the site defines what solutions should be deployed to that site.
  • A number of “Solutions” that list the WSP names and their individual options, i.e. deploy to central admin, all content URLs, upgrade or (re)install, and whether it contains timer jobs (which requires the timer service to be restarted on all servers)
    • Note: The scripts deploy all the solutions in a given “drop” directory and ignore all the configuration nodes without a matching WSP file
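One way the “only URLs present in the local farm” filtering could work is to collect every incoming URL the farm knows about and keep only the matching config Sites. A hedged sketch of mine (host header named site collections would need an extra existence check, which I omit here):

```powershell
# Collect all incoming web application URLs known to the local farm
$farmUrls = @()
$contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
foreach ($webApp in $contentService.WebApplications) {
    foreach ($alt in $webApp.AlternateUrls) {
        $farmUrls += $alt.IncomingUrl.TrimEnd('/')
    }
}

# Keep only the config Sites whose URL exists in this farm
$config = [xml](Get-Content "DeploymentConfig.xml")
$localSites = $config.Config.Sites.Site | Where-Object {
    $farmUrls -contains $_.url.TrimEnd('/')
}
```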

To demo it I’ve downloaded a couple of WSPs from CodePlex and created the following configuration file for my dev box:

<?xml version="1.0" ?>
  <Config>
    <Sites>
      <!-- note: urls for all environments can be added, the ones found in local farm will be utilized -->
      <!-- DEV -->
      <Site url="http://localhost/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="Intranet" />
      </Site>
      <Site url="http://localhost:8080/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="MySite" />
      </Site>
      <Site url="http://localhost:2010">
        <FeatureSet name="CA" />
      </Site>
      <!-- TEST -->
      <Site url="http://test.intranet/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="Intranet" />
      </Site>
      ...
      <!-- PROD -->
      <Site url="http://intranet/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="Intranet" />
      </Site>
      ...
    </Sites>

    <FeatureSets>
      <FeatureSet name="BrandingAndCommon">
        <Solution name="AccessCheckerWebPart.wsp" />
        <Feature nameOrId="AccessCheckerSiteSettings" action="activate" ignoreerror="false" />
        <Feature nameOrId="AccessCheckerWebPart" action="activate" ignoreerror="false" />
        <Solution name="Dicks.Reminders.wsp" />
        <Feature nameOrId="DicksRemindersSettings" action="activate" ignoreerror="false" />
        <Feature nameOrId="DicksRemindersTimerJob" action="activate" ignoreerror="false" />
        <!-- Todo - all sorts of masterpages, css, js, etc. -->
      </FeatureSet>
      <FeatureSet name="Intranet">
        <Solution name="ContentTypeHierarchy.wsp" />
        <Feature nameOrId="ContentTypeHierarchy" action="activate" ignoreerror="false" />
      </FeatureSet>
      <FeatureSet name="MySite">
      </FeatureSet>
      <FeatureSet name="CA">
      </FeatureSet>
    </FeatureSets>

    <Solutions>
      <!-- list of all solutions and whatever params are required for their deployment -->
      <Solution name="AccessCheckerWebPart.wsp" allcontenturls="false" centraladmin="false" upgrade="false" resettimerservice="false" />
      <Solution name="ContentTypeHierarchy.wsp" allcontenturls="false" centraladmin="false" upgrade="false" resettimerservice="false" />
      <Solution name="Dicks.Reminders.wsp" allcontenturls="true" centraladmin="false" upgrade="false" resettimerservice="true" />
    </Solutions>
  </Config>

Note: I more or less picked some random WSPs from CodePlex for demo purposes.

Note 2: The scripts respect the order given in the config file. The Sites are processed in order, the FeatureSets within a “Site” node are handled in top-down order, and the solutions are deployed to the farm in the order they are listed in the “Solutions” node.

Note 3: XML is case sensitive; therefore, all the named sections are case sensitive. The one exception is that I accept different casing between the WSP file names and the name they are given within the config file.

The Scripts

There are two sections here:

  1. A number of SharePoint PowerShell scripts that include each other as needed
  2. A few batch files that start it all. I generally prefer to have some (parameterless) batch files that execute the PowerShell scripts with appropriate parameters

I have gold-plated the scripts quite a bit and created a small script library for my SharePoint related PowerShell.

The PowerShell scripts are too long to write in the post, they can be downloaded here. The list is (\SharePointLib):

*DeploySharePointSoluitions.ps1: Starts the WSP deployments. Parses command line arguments for the config file name and the directory of WSP solutions

*(Part 2) EnsureFeatureActivation.ps1: Starts the feature activation scripts (needs command line arguments)

*(Part 3) QADeploymentProcess.ps1: Deployment process for dev, test and QA environments (needs command line arguments)

SharePointDeploymentScripts.ps1: Main script that defines a large number of methods for deployment and activations.

SharePointHelper.ps1: Fairly simple SharePoint helper methods to get farm information

*RestartSharePointServices.ps1: Restart all SharePoint services on all servers in the local farm (see runtime notes below)

Services.ps1: Non-SharePoint specific methods for handling services, e.g. restarts

Logging.ps1: Generic logging method. Note that there are options for setting different log levels, output files, etc.

The ones with an asterisk are the ones that you are supposed to execute – the others just define methods for the “asterisk scripts”.

The batch files (same download) (root of zip download):

01 Deploy Solutions.bat: Start the solution deployment. It will deploy all the WSP files dropped in “\SolutionsDropDir\” that are also specified in the “\DeploymentConfig.xml”:

02 Activate Features.bat (Part 2): Works with the “\DeploymentConfig.xml” file and ensures that all features are activated.

In other words you’ll execute “01 Deploy Solutions.bat” to deploy all your solutions.


By default each batch file produces one log file named after the operation, one log file with a timestamp, and, in case of any errors, an additional error log with the same timestamp. In other words, if the error log file is not created then no errors occurred.

Note: Scripts must be executed on one of your SharePoint servers with sufficient rights. That is more or less local admin and owner on the individual databases (or local admin on the SQL box as well).

Note 2: I write quite a bit of logging; it is invaluable in times of trouble. You can change the amount in the Logging.ps1 file by adjusting the loglevel variable.

Note 3: In case of deployment errors the default behavior is to restart every SharePoint service on every server and then retry. These service restarts are performed async, so it should take no more than a minute or two, no matter how many servers you have (almost).

Note 4: Any WSP found in the drop dir that has not been specified in the config is considered an error – therefore you know that all the WSPs provided are deployed if no errors are reported.

Runtimes

The time it takes to search the local farm for relevant site collections / web applications is negligible, so you shouldn’t worry unduly about the runtime impact of adding 300 URLs to cover all your possible environments, including every developer’s individual preferred setup.

But there is no magic and it simply runs in the time it takes to deploy solutions to your environment. If it takes about 3 minutes for a full retract, remove, add, deploy then it takes 30 minutes for 10 WSPs.

The default way of handling deployment errors is to restart all SharePoint services in the farm. The clever way of doing that is to send the stop/start commands to all services first and then wait for the operations to complete afterwards. That way the total restart time is about the time of restarting the slowest service in the farm once – usually 1-2 minutes regardless of the number of servers. In my opinion this is quite cool :-)
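The “send all commands first, wait afterwards” pattern can be sketched with System.ServiceProcess.ServiceController, which accepts a remote machine name. Server and service names below are illustrative (2010 names; 2007 uses e.g. SPTimerV3), and this is my sketch, not the downloadable script:

```powershell
$servers  = "SP-WFE1", "SP-WFE2", "SP-APP1"   # illustrative server names
$services = "SPTimerV4", "SPAdminV4"          # illustrative service names

$controllers = foreach ($server in $servers) {
    foreach ($svc in $services) {
        New-Object System.ServiceProcess.ServiceController($svc, $server)
    }
}

$timeout = [TimeSpan]::FromMinutes(2)

# Phase 1: fire all stop commands without waiting
foreach ($c in $controllers) { if ($c.Status -eq "Running") { $c.Stop() } }
# Phase 2: wait for all of them - total wall time ~ the slowest single service
foreach ($c in $controllers) { $c.WaitForStatus("Stopped", $timeout) }

# Same two phases for starting them again
foreach ($c in $controllers) { $c.Start() }
foreach ($c in $controllers) { $c.WaitForStatus("Running", $timeout) }
```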

Closing notes

If you are wondering about my exception handling, note that the scripts were originally made for PowerShell 1.0 and I later decided to target only 2.0. Some statements could be written more elegantly, but I have not taken the time to do so.

These scripts save us a ton of time and prevent a lot of conversations of the sort “oh, I forgot to activate that feature, 2 secs, there you go” – but of course quite some time has been spent creating the deployment scripts and fixing errors within them.

Some of the scripts are quite usable for other purposes. For instance, I have not seen the remote service restart script anywhere else – it is simpler than installing PowerShell remoting (WinRM) on every server.

If you make some brilliant changes then let me know so I can include them in my “official” version.

Please also leave credits in there so it’ll be possible for your successors to find the source and documentation.

Download

Grab the scripts here (updated with part 2 and 3).

Note: Updated Aug 29 2011.

VMWare or Hyper-V for Virtualization?


On every project I work on we use virtualized development machines. It is almost a requirement for SharePoint development, and I would recommend it over physical servers to everyone, simply to keep matters (projects) separate and to avoid the inevitable pollution of any dev environment.

So the question always becomes which technology should I choose? VMWare or Hyper-V?

Traditionally I’ve always opted for VMWare as it sports more features and has fewer operating system requirements. I’ve used VMWare Server GSX, 1 and 2, and Workstation 4 through 7 (I try to avoid the Player). VMWare basically runs on everything (but requires some CPU support to virtualize a 64 bit guest OS) while (the latest) Hyper-V requires Windows Server 2008 R2.

So I finally found some time to sit down and compare the two. Now there are a lot of variables here so I’ve cut it down to me being interested in getting the most out of the Virtual Machine in terms of performance of the Guest OS. I do not test how well it works for multiple servers, how it works for databases etc.

I compare VMWare Workstation to Hyper-V; one is a program, the other a server-like thingy. It might be an apples and oranges comparison, however it is the choice I would usually have when working with SharePoint. Price does not matter to me – my time does.

Setup

I’ve recently reinstalled my desktop computer with Windows Server 2008 R2 x64 (SP1) and configured it to be usable as a desktop operating system. I’ve basically enabled all the features to make it look and behave almost like Windows 7, which gives me the additional option of having fun with Hyper-V.

I highly recommend looking at Shinvas’ posts (part 1 and 2) for the how-to – it is by far the better option, virtualization or not.

Hardware

This is a modest PC a few years old:

CPU: E8400, 1 CPU, 2 Cores, 3 GHz

Mem: 1.111 GHz DDR3, 6 GB

Disk: three 2 x RAID 0 arrays (bad planning in HD purchases resulted in three RAID drives instead of one)

Software

Host is running Windows Server 2008 R2 x64 sp1.

I’ve enabled the Hyper-V role and configured it appropriately. After that I found that you cannot have Hyper-V enabled and install/run VMWare Workstation at the same time, as they both come with their own version of a hypervisor – VMWare’s runs on top of the OS while Hyper-V’s runs beneath it. That can be solved with a dual boot configuration of the same system: one boot entry with Hyper-V’s hypervisor (phew) enabled and one without (here).

VMWare is VMWare workstation 7.1.4.

For performance testing I used SiSoftware Sandra 2011 sp2c, which should be well known and reliable. I’ve used HD Tune Pro for disk measurements though, because Sandra was very unreliable for that (I don’t believe in a 10x increase in speed just because the disk is now virtual).

Virtual Machines

I’ve unpacked an old favorite VM of mine originally made with VMWare Workstation that is a full SharePoint 2010 single server development box with SharePoint, SQL, Office and Visual Studio (though no AD). It includes VMWare tools.

I removed all snapshots and converted it to a Hyper-V VHD with VMDK to VHD (an ancient tool that still works perfectly) and created a new VM for Hyper-V from that. I removed the remnants of VMWare and installed the integration service components that come with Hyper-V, and also ensured that no differencing disk was in play.

The two VMs are therefore picture perfect identical and I’ve seen no issues due to the conversion at all.

I’ve executed a number of tests and on both VMWare and Hyper-V you’ll receive (by far) the best performance if you match the number (and type) of virtual CPUs with the physical numbers on the host. Therefore the VMs are configured to use 2 CPUs (Hyper-V: 2 CPUs, VMWare: 1 CPU two cores).

Results

We need some graphs! In a minute… :-)

To understand the following you need to understand that enabling Hyper-V installs the Hypervisor which will then slow down the host machine even when Hyper-V is not used. Your host will act as a kind of pseudo virtual machine.

Therefore I compare the following configurations:

  1. Base: Host with no hypervisor (i.e. Hyper-V disabled)
  2. Host with hypervisor/Hyper-V
  3. Hyper-V Virtual Machine
  4. VMWare Virtual Machine, normal priority
  5. VMWare Virtual Machine, high priority

What is interesting is the performance numbers compared to the bare host and the goal is to get as close to 100% as possible.

What is measured? I’ve taken a cue from a benchmarking post at Capitalhead (excellent post!) and I’ve measured

  • “Processor Cryptography”, basically core CPU performance
  • “Processor Arithmetic” CPU performance for floating point calculations
  • “Memory Bandwidth”
  • “Disk” / “File System Performance” – to keep it simple just the average amount of data transferable is measured. Note: This measurement has been very hard to get a reliable figure for, so I’ve used both Sandra and HD Tune Pro.

Every measurement was repeated 3-5 times and averaged – it took quite a while :-(

The following graph is the sum of it all measured relatively to the base performance numbers, i.e. I do not consider the absolute numbers to be all that interesting:

There does not seem to be much of a penalty on the host from having the Hyper-V hypervisor enabled – only 6% in arithmetic. It’s somewhat better than Capitalhead’s result; the cause could be SP1, different hardware or measurement technique.

The Hyper-V VM performs consistently well in all categories, while VMWare lags a bit in the memory and arithmetic categories. Interestingly, the Hyper-V VM performs better than the host in arithmetic, which is likely due to measurement inaccuracy.

VMWare priority seems to be of minor importance.

Conclusion

So the big question is what to choose.

From the performance test it is clear that the difference is really not that big. That means that no matter your choice it’ll work ;-) It also makes it a bit harder to choose.

Note that there are many variables and different hardware will likely have slightly different characteristics however I do believe the data is valid and representative.

Feature wise VMWare wins:

  • VMWare Workstation generally sports a lot more features than Hyper-V; the one I consider most important is the snapshot trees, which are nowhere near as developed in Hyper-V. Better networking support, shared folders, console drag and drop, ACE, Unity, etc.
  • Hyper-V wins (in my mind) a bit on administration though I know that is because we’re comparing a server product with a workstation here, though the same conclusion is valid for the VMWare server as well (it definitely does not apply to any of their datacenter products). I’m a bit annoyed with the network support as it apparently does not support wireless network adaptors (ridiculous limitation that you need to use Internet Connection Sharing to get around). Once networking is running and you access the server through Remote Desktop the other stuff with drag and drop and shared folders does not matter.

Performance wise Hyper-V wins:

  • It is very impressive to only lose about 4% in performance (excluding the disk measurement) relative to the bare metal host – at the expense of the host
  • VMWare is naturally disadvantaged here as it does not have the option of penalizing the host and has to work on top of it. It simply is slower and the only way to remedy that is to install the full ESX server, which should have at least similar performance characteristics to Hyper-V (at least if you only install Server 2008 Core)

For me I’ll choose Hyper-V from now on.

I like the performance and I can get around the feature limitations and I don’t care about Unity etc. That said I’ll likely continue to have a dual boot setup, just in case.

Fixing Event Error 7888: “User not found”


We’ve been having an annoying event log error recurring every minute for a while on some of our farms. Recently we dug deep into the issue and finally fixed it :-)

This applies to SharePoint 2007 and SQL Server 2005; it may well be fixed in other versions.

The error message is “Windows NT user or group ‘domain\user‘ not found”, returned from the SQL server when the SSP synchronization timer job tries to sync service account access to the databases. It executes “sp_grantlogin ‘domain\user’” even though the account has already been granted login on the databases (content and config).

After a lot of digging and code disassembling, it turns out that the stored procedure is case sensitive on the user name, and SharePoint provides the user name in whatever casing you originally supplied to SharePoint (when creating the application pools). I’m quite surprised by this and not sure if it’s always the case. We are using SQL Server 2005 with a proper case-insensitive collation, but still…

Interestingly the domain name is not case sensitive.
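To see the exact casing SQL Server has stored for your Windows logins (and compare it to what SharePoint passes to sp_grantlogin), a quick PowerShell check along these lines can help. This is a sketch, not part of the original fix: it assumes you have integrated-security access to the master database, and it reuses the server name from the ULS log above as a placeholder for your own instance.

$conn = New-Object System.Data.SqlClient.SqlConnection(
    "Data Source=cbdks173\mosstest,1433;Initial Catalog=master;Integrated Security=True")
$conn.Open()
$cmd = $conn.CreateCommand()
# type 'U' = Windows logins; the 'name' column holds the stored casing
$cmd.CommandText = "SELECT name FROM sys.server_principals WHERE type = 'U'"
$reader = $cmd.ExecuteReader()
while ($reader.Read()) { $reader.GetString(0) }
$conn.Close()

If the casing listed here differs from what you see in the ULS log, you have found the mismatch.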

The Implications

This error seems to be only an annoyance in the event log and nothing more, as the user the job tries to add to the database is actually already there. Nothing is really broken by this.

The Fix

Fortunately it can easily be fixed by an administrator.

Procedure:

  1. Go to Central Administration / Operations / Service Accounts (/_admin/FarmCredentialManagement.aspx)
  2. Choose “Web Application Pool” and “Windows SharePoint Services Web Application”

  3. Go through every application pool and verify that the user name on the page is using the correct casing
    1. What is the correct casing? Lookup the user in your AD and use the value for sAMAccountName
  4. Click Ok

That’s it!

For good measure I restarted the timer services on every SharePoint server in the farm.
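To find the correct casing for step 3.1, you can query Active Directory directly. The following is a minimal sketch (not from the original post) using the built-in DirectorySearcher; ‘user’ is a placeholder for your service account. Note that LDAP matching on sAMAccountName is case insensitive, so the search succeeds regardless of the casing you type, and the returned property gives you the canonical casing to enter on the Service Accounts page.

$account = "user"   # the account name only, without the domain part
$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.Filter = "(&(objectClass=user)(sAMAccountName=$account))"
$result = $searcher.FindOne()
if ($result) {
    # The stored casing of sAMAccountName - use this on the page
    $result.Properties["samaccountname"][0]
}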

The Symptoms

You get the following lines in the ULS log:

12/08/2010 07:15:31.75     OWSTIMER.EXE (0x01D0)                       0x1FB8    Office Server                     Office Server General             6pqn    High        Granting user 'domain\user' login access to server 'cbdks173\mosstest,1433'.     
12/08/2010 07:15:31.77     OWSTIMER.EXE (0x01D0)                       0x1FB8                                      484                               880i    High        System.Data.SqlClient.SqlException: Windows NT user or group 'domain\user' not found. Check the name again.     at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)     at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)     at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)     at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)     at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)     at Syste...     
12/08/2010 07:15:31.77*    OWSTIMER.EXE (0x01D0)                       0x1FB8                                      484                               880i    High        ...m.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)     at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)     at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()     at Microsoft.Office.Server.Data.SqlSession.ExecuteNonQuery(SqlCommand command)     
12/08/2010 07:15:31.77     OWSTIMER.EXE (0x01D0)                       0x1FB8                                      484                               880j    High        SqlError: 'Windows NT user or group 'domain\user' not found. Check the name again.'    Source: '.Net SqlClient Data Provider' Number: 15401 State: 1 Class: 11 Procedure: 'sp_grantlogin' LineNumber: 49 Server: 'cbdks173\mosstest,1433'     
12/08/2010 07:15:31.77     OWSTIMER.EXE (0x01D0)                       0x1FB8                                      484                               880k    High           at Microsoft.Office.Server.Data.SqlServerManager.GrantLogin(String user)     at Microsoft.Office.Server.Administration.SharedResourceProvider.SynchronizeConfigurationDatabaseAccess(SharedComponentSecurity security)     at Microsoft.Office.Server.Administration.SharedResourceProvider.SynchronizeAccessControl(SharedComponentSecurity sharedApplicationSecurity)     at Microsoft.Office.Server.Administration.SharedResourceProvider.Microsoft.Office.Server.Administration.ISharedComponent.Synchronize()     at Microsoft.Office.Server.Administration.SharedResourceProviderJob.Execute(Guid targetInstanceId)     at Microsoft.SharePoint.Administration.SPTimerJobInvoke.Invoke(TimerJobExecuteData& data, Int32& result)       
12/08/2010 07:15:31.77     OWSTIMER.EXE (0x01D0)                       0x1FB8                                      484                               880l    High        ConnectionString: 'Data Source=cbdks173\mosstest,1433;Initial Catalog=master;Integrated Security=True;Enlist=False;Pooling=False'    ConnectionState: Open ConnectionTimeout: 15     
12/08/2010 07:15:31.77     OWSTIMER.EXE (0x01D0)                       0x1FB8                                      484                               880m    High        SqlCommand: 'sp_grantlogin'     CommandType: StoredProcedure CommandTimeout: 0     Parameter: '@loginame' Type: NVarChar Size: 128 Direction: Input Value: 'domain\user'
   

Combined with this application event log error every minute:

Log Name:      Application
Source:        Office SharePoint Server
Date:          08-12-2010 08:32:31
Event ID:      7888
Task Category: (1516)
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      <computer>
Description:
The description for Event ID 7888 from source Office SharePoint Server cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.

The following information was included with the event: 

Windows NT user or group 'domain\user' not found. Check the name again.
System.Data.SqlClient.SqlException: Windows NT user or group 'domain\user' not found. Check the name again.
 at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
 at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
 at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
 at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
 at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
 at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
 at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
 at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
 at Microsoft.Office.Server.Data.SqlSession.ExecuteNonQuery(SqlCommand command)
 at Microsoft.Office.Server.Data.SqlServerManager.GrantLogin(String user)
 at Microsoft.Office.Server.Administration.SharedResourceProvider.SynchronizeConfigurationDatabaseAccess(SharedComponentSecurity security)
 at Microsoft.Office.Server.Administration.SharedResourceProvider.SynchronizeAccessControl(SharedComponentSecurity sharedApplicationSecurity)
 at Microsoft.Office.Server.Administration.SharedResourceProvider.Microsoft.Office.Server.Administration.ISharedComponent.Synchronize()