Use CSOM from PowerShell!


The SharePoint 2010/2013 Client Object Model (CSOM) offers an alternative way to do basic read/write (CRUD) operations in SharePoint. I find that it is superior to the “normal” server object model/Cmdlets for

  1. Speed
  2. Memory
  3. Execution requirements – it does not need to run on your SharePoint production server and does not need Shell access privileges. Essentially you can execute these kinds of scripts with normal SharePoint privileges instead of sysadmin privileges

And, to be fair, inferior in the types of operations you can perform – it is essentially limited to CRUD operations, especially in the SharePoint 2010 CSOM.

This post is about how to use it in PowerShell and a comparison of the performance.

How to use CSOM

First, there is a slight problem in PowerShell (v2 and v3); it cannot easily call generic methods such as ClientContext.Load<T>. It simply cannot figure out which overloaded method to call – therefore we have to help it a bit.

The following is the function I use to include the CSOM dependencies in my scripts. It simply loads the two Client dlls and creates a subclass of the ClientContext class that wraps the offending "Load<T>(T clientObject)" method in a plain, non-generic Load method.

I nicked most of this from here, but added the ability to load the client assemblies from local dir (and fall back to GAC) – very useful if you are not running on a SharePoint server.

$myScriptPath = (Split-Path -Parent $MyInvocation.MyCommand.Path)

function AddCSOM(){

     #Load SharePoint client dlls from the script directory, if present
     if( Test-Path "$myScriptPath\Microsoft.SharePoint.Client.dll" ){
         $a = [System.Reflection.Assembly]::LoadFile("$myScriptPath\Microsoft.SharePoint.Client.dll")
         $ar = [System.Reflection.Assembly]::LoadFile("$myScriptPath\Microsoft.SharePoint.Client.Runtime.dll")
     }

     #Fall back to the GAC
     if( !$a ){
         $a = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Client")
     }
     if( !$ar ){
         $ar = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Client.Runtime")
     }

     if( !$a -or !$ar ){
         throw "Could not load Microsoft.SharePoint.Client.dll or Microsoft.SharePoint.Client.Runtime.dll"
     }


     #Add overload to the client context.
     #Define new Load method without a type argument
     $csharp = "
      using Microsoft.SharePoint.Client;
      namespace SharepointClient
      {
          public class PSClientContext: ClientContext
          {
              public PSClientContext(string siteUrl)
                  : base(siteUrl)
              {
              }
              // need a plain Load method here, the base method is a generic method
              // which isn't supported in PowerShell.
              public void Load(ClientObject objectToLoad)
              {
                  base.Load(objectToLoad);
              }
          }
      }"


     $assemblies = @( $a.FullName, $ar.FullName, "System.Core")
     #Add dynamic type to the PowerShell runspace
     Add-Type -TypeDefinition $csharp -ReferencedAssemblies $assemblies
}

And in order to fetch data from a list you would do:

#note: no parentheses when calling a PowerShell function
AddCSOM

$context = New-Object SharepointClient.PSClientContext($siteUrl)

#Hardcoded list name
$list = $context.Web.Lists.GetByTitle("Documents")

#ask for plenty of documents, and the fields needed
$query = [Microsoft.SharePoint.Client.CamlQuery]::CreateAllItemsQuery(10000, 'UniqueId','ID','Created','Modified','FileRef','FileLeafRef','Title')
$items = $list.GetItems( $query )

$context.Load($list)
$context.Load($items)
#execute query
$context.ExecuteQuery()


$items |% {
          Write-Host "Url: $($_["FileRef"]), title: $($_["FileLeafRef"])"
}

It doesn’t get much easier than that (when you have the AddCSOM function that is). It is a few more lines of code than you would need with the server OM (load and execute query) but not by much.

The above code works with both 2010 and 2013 CSOM.
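For completeness, writes are just as terse. Here is a minimal update sketch, assuming the $context and $items from the example above (the new title value is made up):

$item = $items[0]
$item["Title"] = "Updated from PowerShell"
$item.Update()
#nothing is sent to the server until ExecuteQuery
$context.ExecuteQuery()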

Performance Measurement

To check the efficiency of the Client object model compared to the traditional server model I created two scripts and measured the runtime and memory consumption:

Client OM:

param 
(
[string]$listName = $(throw "Provide list name"),
[string] $siteUrl = $(throw "Provide site url")
)

#the AddCSOM function from earlier must be defined (e.g. dot-source the script containing it)
AddCSOM

[System.GC]::Collect()
$membefore = (get-process -id $pid).ws

$duration = Measure-Command {

          $context = New-Object SharepointClient.PSClientContext($siteUrl)
         
          #Hardcoded list name
          $list = $context.Web.Lists.GetByTitle($listName)
         
          #ask for plenty of documents, and the fields needed
          $query = [Microsoft.SharePoint.Client.CamlQuery]::CreateAllItemsQuery(10000, 'UniqueId','ID','Created','Modified','FileRef','FileLeafRef','Title')
          $items = $list.GetItems( $query )
         
          $context.Load($list)
          $context.Load($items)
          #execute query
          $context.ExecuteQuery()
         
         
          $items |% {
                  #retrieve some properties (but do not spend the time to print them)
                  $t = "Url: $($_["FileRef"]), title: $($_["FileLeafRef"]) "
          }
         
}

[System.GC]::Collect()
$memafter =  (get-process -id $pid).ws

Write-Host "Items iterated: $($items.count)"
Write-Host "Total duration: $($duration.TotalSeconds), total memory consumption: $(($memafter-$membefore)/(1024*1024)) MB"

Server OM:

param 
(
[string]$listName = $(throw "Provide list name"),
[string] $siteUrl = $(throw "Provide site url")
)

Add-PsSnapin Microsoft.SharePoint.PowerShell -ea SilentlyContinue

[System.GC]::Collect()
$membefore =  (get-process -id $pid).ws

$duration = Measure-Command {
          $w = Get-SPWeb $siteUrl 
          $list = $w.Lists[$listName]

          $items = $list.GetItems()
          $items |% {
                  #retrieve some properties (but do not spend the time to print them)
                  $t = "url: $($_.Url), title: $($_.Title)"
          }
}

[System.GC]::Collect()
$memafter =  (get-process -id $pid).ws

Write-Host "Items iterated: $($items.count)"
Write-Host "Total duration: $($duration.TotalSeconds), total memory consumption: $(($memafter-$membefore)/(1024*1024)) MB"

I then executed them against document libraries of 500 and 1500 elements (four measurements at each data point).

The Results

The results are very clear:

[Chart: runtime and memory consumption, Client OM vs. Server OM]

As you can see it is MUCH more efficient to rely on CSOM and it scales a lot better. The server OM retrieves a huge number of additional properties, but it has the benefit of going directly to the database instead of through the web server. Curiously the CSOM version offers much more consistent performance, where the server OM varies quite a bit.

In addition you get around the Shell access requirement for the PowerShell account and the need for server-side execution. That might be convenient for many.

Conclusions

The only downside I can see with the CSOM approach is that it is unfamiliar to most and it does require a few more lines of code – if your specific need is covered by the API, of course.

It’s faster, more portable, less memory intensive and simple to use.

Granted, there are lots of missing APIs (especially in the 2010 edition) but every data manipulation need is likely covered. That is quite a bit after all.

So go for it :-)

Quick Guide to PowerShell Remoting (for SharePoint stuff…)


Lately I have been doing quite a bit of PowerShell remoting (mostly in a SharePoint context) and while it is surprisingly easy and useful there are a few hoops I’ll detail here.

I have been a fan of PowerShell ever since TechEd ’06 in Barcelona where some young chap eloquently introduced PowerShell. Everyone in the audience understood the potential.

Since then it has become so pervasive that I don't have to waste time arguing its role and prevalence among scripting languages – and the remoting part once again is head and shoulders above the alternatives (even SSH on the unixes).

For remoting in a SharePoint context there are three, perhaps four, steps; for other purposes it wouldn't hurt to go through the same.

Note: Everything in this post must be executed within an administrative PowerShell console.

Note: There are about a gazillion settings and variations that I’ve skipped; this is how I normally do it.

Step 1: Enable PS Remoting

This one is dead simple.

On the clients (the servers that you are remoting to) execute

Enable-PSRemoting

In a PowerShell shell (in admin mode) – just press return for every confirmation prompt; the defaults are sensible (or add “-force”).

Step 2: Set Sensible Memory Limits

By default each remoting session will have a cap of 150 MB of memory assigned to it. After that you’ll get a more or less random exception from the remoting session; it may be really hard to figure out what went wrong.

When you work with SharePoint you can easily spend 150 MB if you iterate over a number of SPWebs or SPSites. It may not even be possible to limit your consumption by disposing explicitly (use Start-SPAssignment for that) if you actually need to iterate it all. (Side note: when the session ends, so do all allocated objects – whether you Dispose them or not doesn't matter.)

Let's face it: these are often expensive scripts with a lot of work to do.

The fix is simple (and in my opinion safe). On the clients execute:

Set-Item wsman:\localhost\shell\MaxMemoryPerShellMB 1024

Which will set the limit to 1 GB.
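To verify the change (or to inspect the current value before touching anything):

Get-Item wsman:\localhost\shell\MaxMemoryPerShellMB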

Long explanation here.

Step 3: Disable UAC

If you need to do any SharePoint or IIS administration (or a hundred other tasks) you need your script to run with Administrative rights.

The PowerShell remote process does not request administrative rights from Windows – it'll just use whatever is assigned to it.

Before you accuse me of disabling all security take a good long read at this article from Microsoft; they actually do recommend to disable UAC on servers provided that you do not use your servers for non-admin stuff (that is NO Internet browsing!).

To test whether or not UAC is enabled, start a PowerShell console (simply left-click the PowerShell icon). Does it say "Administrator:…" in the title? If yes, then UAC is disabled and you are good to go.
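If you prefer a scripted check over squinting at the title bar, this small sketch uses the standard .NET principal classes and prints True when the console is elevated:

$identity = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)
#True = running with administrative rights
$principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)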

There are at least two places to fiddle with the UAC:

  1. You can go to Control Panel / User Account Control Settings to change when you see the notification boxes. This will NOT help you here – it only changes the notification settings, not the feature itself
  2. You need to go to the MMC / Local Policy Editor to disable it:
    1. Run “MMC”
    2. Choose “Add/remove snap-in”
    3. Choose “Group Policy Object”
    4. Choose “Local Computer”
    5. Follow the picture below and set the “Run all administrators in Admin Approval mode” to disabled (Note: “Admin Approval Mode” is UAC)

      [Screenshot: Group Policy editor – "Run all administrators in Admin Approval Mode" set to Disabled]

    6. Reboot the server
    7. TEST that UAC is disabled by starting a PowerShell and check that the title reads “Administrator:…”

You should take a step back here and ask your local sysadmin whether this is a centrally enforced policy or whether it should be. He/she can create a new policy that targets specific servers to disable UAC.

Usage and Test

Finally try it!

To enter an interactive shell, execute (from the controller):

    Enter-PSSession -ComputerName $clientComputerName

If all goes well you’ll see that the prompt changes a bit and every command is now executed remotely. Do a “dir” and check that you see the remote server’s files and not the local ones.

Type “exit” to return to the controller.

If you are going cross domains you’ll receive an error, execute step 4 (below) in that case.

To execute a script with arguments (and store the pipeline output in $output):

$output = Invoke-Command -ComputerName $clientComputerName -FilePath ".\ScriptToExecute.ps1" -ArgumentList ($arg1, $arg2, $arg3)

Note that the script file ("ScriptToExecute.ps1") is local to the controller and WinRM will handle the mundane task of transferring it to the client. The script is executed remotely and therefore you cannot reference other scripts, as they are not transferred as well.

To execute a script block:

$output = Invoke-Command -ComputerName $clientComputerName -ScriptBlock { Get-ChildItem "C:\" }

And you can of course pass arguments to your scriptblock and combine it in a hundred different ways.

Warning: Remember the context

The remote sessions are new PowerShell sessions; whatever information/variables you need you must either pass as arguments or embed within a scriptblock.

You can pass simple serializable objects back to the controller on the pipeline, but it will not work to pass COM/WMI/SharePoint objects back.
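A minimal sketch of getting a controller-side variable into the remote session (the $listName value is only an illustration):

$listName = "Documents"
$output = Invoke-Command -ComputerName $clientComputerName -ScriptBlock {
    param($name)
    #this block executes remotely; $name carries the controller-side value
    "Remote server $env:COMPUTERNAME got: $name"
} -ArgumentList $listName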

Step 4: (Optional) Going Cross Domain?

By default PowerShell remoting will connect to the client computer and automatically pass the credentials of the current user to the client computer. That is normally what you want and ensures that you need no credentials prompt.

However, that only works for servers within the same domain as the user. If you are crossing domain boundaries – AD trust or not – you need to do something more (before jumping through hoops, do test it first to make sure that this is required for you).

Again, there are many options but the one with the least configuration hassles is:

  1. Add the client servers to the controller server's list of trusted hosts:

    set-item wsman:localhost\client\trustedhosts -value $serverList -force

    where $serverList is a comma-separated string of computer names (I use FQDNs).
  2. Pass explicit credentials to the remoting commands

    $c = Get-Credential

    Enter-PSSession -ComputerName $clientComputerName -Credential $c

    … and it should work. There are a million details regarding forests, firewalls, etc. that I'll not go into here.

(Other options are Kerberos, SSL channels, …)

Quick tip: Handy Scripts for Local Hyper-V Management


Here is a quick post with a small script that I find incredibly handy for managing my local Hyper-V machines (on Windows 8).

It simply ensures that a given set of VMs – and only those – are running at a given time. Attach that to a shortcut on the desktop and it saves me 3 minutes every day; I find it useful.

Why? Quite often I need to switch from one set of VMs to another for different tasks (the VMware term is "teams"), e.g. switching between SP2010 and SP2013 development. Sometimes I just turn them off if I need the resources for something else.

Normally you just go into Hyper-V manager and start/stop the relevant VMs. Automating that was quite simple and painless – the download is here.

  1. I have one PowerShell script (SetRunningVMs.ps1) that handles the VM management and then a couple of batch files for executing that script with proper parameters. The script is:
    <#
    .SYNOPSIS
    This is a simple Powershell script that adjust what VMs are running on the local Hyper-V host.
    
    .DESCRIPTION
    This script will start/resume the requested VMs and suspend all other VMs to give you maximum power and make it easy to switch from one
    development task to another, i.e. switch teams.
    
    .EXAMPLE
    Make sure that only the dev1 and ad01 machines are running:
    ./SetRunningVMs.ps1 'dev1' 'ad01'
    
    Stop all VMs (no arguments)
    ./SetRunningVMs.ps1 
    
    .NOTES
    Requires admin rights to run. Start using admin shell or use one of the provided batch files.
    Pauses on error.
    
    .LINK
    
    http://soerennielsen.wordpress.com
    
    #>
    
    param( [Parameter(ValueFromRemainingArguments=$true)][string[]] $allowedVMs = @() )
    
    try{
        get-vm |? { $_.State -eq "Running" -and $allowedVMs -notcontains $_.Name } |% { Write "Saving $($_.Name)"; Save-VM $_.Name }
    
        get-vm |? { $_.State -ne "Running" -and $allowedVMs -contains $_.Name } |% { Write "Starting $($_.Name)"; Start-VM $_.Name }
    
        write "SetRunningVMs successfully done"
    }
    catch{
        Write-Error "Oops, error: $_"
        pause
    }

    I assume that the PowerShell Hyper-V module is loaded; it has always been the case in my tests.
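    If your setup differs, a small guard at the top of the script would do – a sketch, assuming the standard Hyper-V module name:

    if( -not (Get-Command Get-VM -ErrorAction SilentlyContinue) ){
        #Hyper-V cmdlets not present in this session - load the module
        Import-Module Hyper-V
    }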

  2. I have a number of batch files for easily executing the script, one for turning off (suspend) all VMs, one for a SP2010 team and one for the SP2013 team. It’s a one-liner bat file that will work with both PowerShell 2 and 3, with or without UAC.

    To Start my SP2010 team (StartSP2010Team.bat):

    powershell -noprofile -command "&{ start-process powershell -ArgumentList '-noprofile -file %~dp0SetRunningVMs.ps1 \"AD01\" \"Dev1\"' -verb RunAs}"

    (where you replace the VM names above with your own)

    To start my SP2013 team (StartSP2013Team.bat)

    powershell -noprofile -command "&{ start-process powershell -ArgumentList '-noprofile -file %~dp0SetRunningVMs.ps1 \"AD02\" \"SP2013\"' -verb RunAs}"

    To stop all VMs

    powershell -noprofile -command "&{ start-process powershell -ArgumentList '-noprofile -file %~dp0SetRunningVMs.ps1' -verb RunAs}"

If you have UAC enabled (as I do) you will be prompted, otherwise it will just suspend/resume the relevant VMs.

It took about two hours to write the scripts, where the hardest part was getting the batch files properly in shape with escape characters and UAC.

You gotta love PowerShell ;-)

Supercharge your (Resource) Efficiency with Macros


This is part 4 of 4 in a series on how to improve the way we usually work with resource (resx) files.

I generally like to – and do – use resource files for all string constants that are shown to end users; however, I do feel that the workflow is needlessly cumbersome, hence these posts:

  1. A Good Way to Handle Multi Language Resource Files
  2. Verify that your Resource Labels Exists
  3. Find/Remove Obsolete Resource Labels
  4. Supercharge your (Resource) Efficiency with Macros (this one)

Generally these issues are generic to .NET and not specific to SharePoint, though that is where I’m spending my time writing this.

What is it again?

This is a macro to increase developer productivity. It is a topic that lies very close to my heart, and a healthy fraction of my posts are about automation and productivity. Every developer should really know the macro features, especially the record/play shortcuts.

Working with resources (in SharePoint) is actually quite tedious – we write "$Resources:" a thousand times and occasionally cannot be bothered and skip the chore.

This is simply a Visual Studio macro to improve that workflow; usage is:

1. You highlight a string

2. Press a shortcut (of your choice)

3. Write a name for the resource key to create (or optionally reuse an existing key if one exists with the same value)

4. And you’re done.

It will then add the resource key to your existing resx file and type in the "$Resources:…" reference as needed. If it's an aspx/ascx file it will also throw in some "<%=" and "%>" tags.

(It works very well for xml files too.)
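As an illustration, in an elements.xml a highlighted attribute value such as Title="My List" would end up as something like the following (the resource file and key names are made up):

Title="$Resources:MyProject,MyListTitle"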

The Gory Detail

This is a VB macro script that took an inordinate amount of time to write, mainly because the Visual Studio macro object model (DTE) is one of the most hideous, awkward APIs I've ever worked with. If there is one place the VS team could improve, it would be here – the macro IDE is also tedious to work with.

I'm very certain that there are ways to make the code perform better (speed is fine) and look better (is it important?) – let me know your nuggets in the comments; it's always good to learn something new.

What you need to know is:

  1. It looks for a resx file with the same name as the target name of your project, i.e. the dll/wsp name
    • It only works on the culture independent resx file
    • It even adds a comment in the resource file as to where a key was first used :-)
  2. It works only for text editor windows
    • I have not found a way to make the selection work in the designer windows, specifically the feature designer would have been nice. This is annoying.

      The workaround is to simply choose to open the .feature file in the “XML (text) editor” (choose “Open with” in the file open dialog)

  3. You may need to customize the replacement pattern for cs and as?x files as it calls a simple utility method “ResourceLookup.SPGetLocalizedString” to translate the “$Resources:…” key (see code below)
  4. It handles quotes in/around the selection

At the moment it is only tested with VS2010 – I’m certain that changes need to be made for VS2013. In due time…

The resource lookup method I’m using is:

        public static string SPGetLocalizedString(string key)
        {
            if (!key.StartsWith("$Resources:"))
            {
                return key;
            }
            var split = key.TrimEnd(';').Remove(0, "$Resources:".Length).Split(',');

            if (split.Length != 2 || string.IsNullOrEmpty(split[1]))
            {
                return key;
            }

            return SPUtility.GetLocalizedString(key, split[0], (uint)System.Globalization.CultureInfo.CurrentUICulture.LCID);
        }

Download and Installation

It is simple:

  1. Download the Macro project here
  2. Dump it somewhere on your disk – the usual location is in "Documents\Visual Studio 2010\Projects\VSMacros80"
  3. Choose “Tools / Macros / Load Macro Project” and pick the DGMacros.vsmacros file
  4. Bind a keyboard shortcut for the text editor to the macro. Just write “dgm” in the search box and pick the “ReplaceStringWithResource” macro

  5. Have a try and perhaps a look at point 3 of the gory details above ;-)

How to Make List Items Visible to Anonymous Users (in Search)


I had a funny little issue with showing list items from a custom list (with a custom view form) to anonymous users on a publishing site.

My good colleague Bernd Rickenberg insisted that I blog the resolution since he had found quite a few posts detailing the problem but no viable solutions ;-)

The Issue

You want to show a custom list to anonymous users. In our setting it was through search, your use case is likely different, but the issue remains the same with or without search.

Quite simply, list forms are generally not accessible to anonymous users when you have the lockdown feature enabled (ViewFormPagesLockdown). It is a critical feature to have enabled; otherwise you expose way too much information to SharePoint-savvy users. Many of the solutions to this issue suggest turning it off.
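A quick way to check whether the lockdown feature is active on a site collection – a sketch using the SharePoint 2010 cmdlets, with a placeholder URL (the -eq comparison is case-insensitive):

$lockdown = Get-SPFeature -Site "http://yoursiteurl" | ? { $_.DisplayName -eq "ViewFormPagesLockdown" }
if( $lockdown ){ "Lockdown is enabled" } else { "Lockdown is disabled" }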

It is fairly simple to test whether you have this problem. Start Firefox (assuming that you have not enabled the NTLM auto-login setting) and hit the …/DispForm.aspx page for a specific item. If you receive the login prompt as anonymous in Firefox but sail through when logged in, this blog is for you.

If the ordinary publishing pages are also not viewable then this blog is not for you.

Note: Never use IE to test for anonymous access; I can't count the number of times consultants have been tricked into thinking it works by the auto-login feature while on the corporate network.

The Resolution

The resolution is fairly simple.

The basics for search are that the list must be made searchable (it is by default) in the list settings, anonymous users must have access rights to the site, and the lockdown feature should(!) remain enabled.

What the lockdown feature does is change the anonymous permission mask at the root site (which is inherited by all by default). The mask is basically the permission level assigned to all anonymous users and is similar to the normal permission sets – but is not editable in the UI.

The “View Application Pages” permission level is removed, see the image below on where to find it (for non-anonymous users):

[Screenshot: Permission level settings page – not available for anonymous users]

The best option, security-wise, is to break the permission inheritance for your particular list and then add the "View Application Pages" permission for the anonymous users. Do not do that at the web level, as you do not want to expose e.g. "All Site Content".

The Script

You need to run the following the PowerShell commands on one of your servers (replace url and list name):

$web = get-spweb "http://yoursiteurl/subweb/subsubweb" 
$list = $web.Lists["ListName"]
$list.BreakRoleInheritance($true)
$list.AnonymousPermMask = $list.AnonymousPermMask -bor ([int][Microsoft.SharePoint.SPBasePermissions]::ViewFormPages) #binary OR to add the permission
$list.Update()
 

(Note: Do be careful if copying directly from this page to PowerShell – you need to re-type the quotes as WordPress mangles them)
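To verify that the mask change took, a quick read-back sketch (same placeholder URL and list name as above):

$list = (Get-SPWeb "http://yoursiteurl/subweb/subsubweb").Lists["ListName"]
#the flag names are printed; ViewFormPages should now be among them
[Microsoft.SharePoint.SPBasePermissions]$list.AnonymousPermMask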

SharePoint Advanced Large Scale Deployment Scripting – “Dev and QA gold-plating” (part 3 of 3)


I have been battling deployments for a while and finally decided to do something about it.

;-)

The challenge is to handle a dozen WSP packages on farms that host 20-50 web applications (or host header named site collections) to complete the deployments in a timely manner, ensure that the required features are uniformly activated every time while minimizing the human error element of the poor guy deploying through the night.

The deployment scripts require PowerShell v2 and are equally applicable to both SharePoint 2007 and 2010 – the need is roughly the same and the APIs are also largely unchanged. Some minor differences in service names are covered by the code.

To keep this post reasonable in length I’ve split the subject into three parts:

Part 1 is the main (powershell) deployment scripts primarily targeted at your test/QA/production deployments

Part 2 is the scripts and configuration for automatic large scale feature activations

Part 3 (this part) is about gold-plating the dev/test/QA deployment scenario where convenience to developers is prioritized while maintaining strict change control and records

Note: This took a long time to write and it will likely take you, dear reader, at least a few hours to implement in your environment.

If you have not done so already read the two other parts first.

The Challenge

When you have a number of developers working on the same service line you need to be able to keep strict control of your QA and test environment.
So either you have one guy as the gatekeeper or you automate as much as possible. I'm the latter type of guy, so I've developed a "one-click" deployment methodology where the focus is:

  1. Simple and consistent deployment
  2. Easy collection of solutions for the next production deployment – you should deploy the exact code that you tested, not a "fresh" build(!)

The Solution

Our build server is set up to produce date-labelled zip files with the WSP and a manual installer (exe) within a folder of the same name.
I'm sure yours is different; therefore the scripts work with a solution drop directory that accepts both zip files (that include a WSP file) and plain WSP files.
The folder structure is up to you; I'll just search recursively for zip and wsp.

The script files are:

03 DeployAndStoreDropDir.bat: The starting batch file that will execute the deployment process as a specific install user. That way the developers can log in with their personalized service accounts and use a specialized one for deployment purposes. You need to change the user name in this batch file. The first time you run this you need to enter the password for your install account; subsequent runs will remember it ("runas /savecred").

03b DeployAndStoreDropDirAux.bat: The batch file doing half the work. It'll archive old log files in ".\ArchivedDeploymentLogFiles" (created if not present) and execute "QADeploymentProcess.ps1", which does the hard work. At the end it'll look for an error log and write an alert that you should go look for it. Saves a lot of time – I don't go looking in log files if I don't get that message.

SharePointLib\QADeploymentProcess.ps1: The script that handles the unzipping and storage of the zip/wsp files and executes both the deployment and feature activation scripts. The goal is to have a simple drop dir for WSPs/zips and a single storage location, "SolutionsDeployed", for the currently deployed code. The "SolutionsDeployed" folder is updated at every deployment, so that it only holds the latest version of each WSP deployed. You do not need to drop all WSPs in the drop dir every time; it does not just delete the SolutionsDeployed folder at every deployment.

This is how the files are moved around for a couple of WSPs and a zip file:
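Roughly like this (the file and folder names are purely illustrative):

SolutionsDropDir\Branding_2011-08-29.zip  ->  SolutionsDeployed\Branding_2011-08-29\Branding.wsp
SolutionsDropDir\Intranet.wsp             ->  SolutionsDeployed\Intranet.wsp
SolutionsDropDir\Workflows.wsp            ->  SolutionsDeployed\Workflows.wsp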

In short:

  1. If you want to start it with a specific install user, just execute “03 DeployAndStoreDropDir.bat”
  2. If you want to start it in your own name, execute “03b DeployAndStoreDropDirAux.bat” and perhaps delete the other bat file.
  3. If you want to modify how it works go to the "QADeploymentProcess.ps1" file.
  4. If you want to work with zip files they can have the name and internal folder structure that you like. The one important thing is that they should only contain one WSP file. It is pretty smart; at the next deployment your zip file may be named differently (e.g. with a date/time in the name) and it will still remove the old version from “SolutionsDeployed” and store your new version with the new folder/zip name.
  5. At the end of your build iteration SolutionsDeployed will contain the latest version of your deployed code (provided that you emptied it at the last production deployment)

Closing Notes

It is not a big challenge to connect the remaining dots and have your build server do the deployment for certain build types for a "zero-click" deployment; however, I opted not to. It should be a conscious decision to deploy to your test and QA environments, not something equivalent to a daily check-in. Your mileage may vary.

I would generally provide these scripts to everyone in the team and encourage them to use them on their own dev box once in a while – it always pays to have your dev environment resemble the actual production environment, and this makes it feasible to keep the code (almost) in sync on all dev environments.

The challenge may be the complexity of the scripts and the effort to understand them – you shouldn’t rely too much on stuff you don’t understand ;-)

Feel free to adjust the script a bit (likely QADeploymentProcess.ps1) to suit your environment.

If you make something brilliant then let me know so I can include it in my “official” version. Please also leave credits in there so it’ll be possible for your successors to find the source and documentation.

Download

Grab the files here – parts 1, 2 and 3 are all in there. I'll update them a bit if I find any errors or make improvements.

Note: Updated Aug 29 2011, minor stuff.

Don’t forget to change the user name in “03 DeployAndStoreDropDir.bat” to your “install” account.

SharePoint Advanced Large Scale Deployment Scripting – “Features” (part 2 of 3)


I have been battling deployments for a while and finally decided to do something about it.

;-)

The challenge is to handle a dozen WSP packages on farms that host 20-50 web applications (or host header named site collections) to complete the deployments in a timely manner, ensure that the required features are uniformly activated every time while minimizing the human error element of the poor guy deploying through the night.

The deployment scripts require PowerShell v2 and are equally applicable to both SharePoint 2007 and 2010 – the need is roughly the same and the APIs are also largely unchanged. Some minor differences in service names are covered by the code.

To keep this post reasonable in length I’ve split the subject into three parts:

Part 1 is the main (powershell) deployment scripts primarily targeted at your test/QA/production deployments

Part 2 is the scripts and configuration for automatic large scale feature activations

Part 3 is about gold-plating the dev/test/QA deployment scenario where convenience to developers is prioritized while maintaining strict change control and records

Note: This took a long time to write and it will likely take you, dear reader, at least a few hours to implement in your environment.

If you have not done so already read "Part 1" first, and excuse me for a bit of duplicated text.

It’s all about the Features

This part is about features and managing their (de-)activations.

What you can do with this is:

  • Maintain a set of templates ("FeatureSets") that define which features should be activated on a given type of site – the scripts then ensure that they are properly activated
  • Set actions for features to be either activate, deactivate or reactivate through the configuration file
    • It safely handles web.config updates so that the scripts do not enter a race condition with the SharePoint timer service
  • Selectively override the default behavior and reactivate certain features when you need to

In other words you can ensure that your features are properly activated across sites and farms.

A note on reactivation is in order here. Reactivation is (forced) deactivation followed by activation, which is particularly useful for features that update files in a document library, e.g. master pages. Some features require it when they have been updated, but it is very rare that a feature always requires reactivation.

Therefore I usually do not specify any features to always be reactivated in the configuration file; however, if I know that we updated the master pages I'll update the batch file, or supply an additional one, where that feature is forcibly reactivated. The safe option of always reactivating is simply too slow if many files are deployed (or lengthy feature receivers are involved).

The Configuration

The scripts work based on a configuration file that is shared between your dev, test, QA and production environments.

The procedure is:

  1. Identify all valid Sites/URLs in the local farm (i.e. always executed on one of the servers in the farm)
  2. For each site/URL go through all FeatureSets and ensure the required activations, deactivations and reactivations
    1. Keep track of reactivations so the same feature is only reactivated once at each scope, i.e. overlaps between FeatureSets are allowed as long as they specify the same action
    2. Report inconsistencies as errors, but do continue with the next feature so that you get the complete overview of all errors at the end of the run

A sample configuration file is (note: I just picked some more or less random WSPs from codeplex):

<?xml version="1.0" ?>
  <Config>
    <Sites>
      <!-- note: urls for all environments can be added, the ones found in local farm will be utilized -->
      <!-- DEV -->
      <Site url="http://localhost/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="Intranet" />
      </Site>
      <Site url="http://localhost:8080/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="MySite" />
      </Site>
      <Site url="http://localhost:2010">
        <FeatureSet name="CA" />
      </Site>
      <!-- TEST -->
      <Site url="http://test.intranet/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="Intranet" />
      </Site>
      ...
      <!-- PROD -->
      <Site url="http://intranet/">
        <FeatureSet name="BrandingAndCommon" />
        <FeatureSet name="Intranet" />
      </Site>
      ...
    </Sites>

    <FeatureSets>
      <FeatureSet name="BrandingAndCommon">
        <Solution name="AccessCheckerWebPart.wsp" />
        <Feature nameOrId="AccessCheckerSiteSettings" action="activate" ignoreerror="false" />
        <Feature nameOrId="AccessCheckerWebPart" action="activate" ignoreerror="false" />
        <Solution name="Dicks.Reminders.wsp" />
        <Feature nameOrId="DicksRemindersSettings" action="activate" ignoreerror="false" />
        <Feature nameOrId="DicksRemindersTimerJob" action="activate" ignoreerror="false" />
        <!-- Todo - all sorts of masterpages, css, js, etc. -->
      </FeatureSet>
      <FeatureSet name="Intranet">
        <Solution name="ContentTypeHierarchy.wsp" />
        <Feature nameOrId="ContentTypeHierarchy" action="activate" ignoreerror="false" />
      </FeatureSet>
      <FeatureSet name="MySite">
      </FeatureSet>
      <FeatureSet name="CA">
      </FeatureSet>
    </FeatureSets>

    <Solutions>
      <!-- list of all solutions and whatever params are required for their deployment -->
      <Solution name="AccessCheckerWebPart.wsp" allcontenturls="false" centraladmin="false" upgrade="false" resettimerservice="false" />
      <Solution name="ContentTypeHierarchy.wsp" allcontenturls="false" centraladmin="false" upgrade="false" resettimerservice="false" />
      <Solution name="Dicks.Reminders.wsp" allcontenturls="true" centraladmin="false" upgrade="false" resettimerservice="true" />
    </Solutions>
  </Config>

The interesting bits are that Sites contain a number of named FeatureSets. The FeatureSets are gathered in their own section (for reusability) and define a number of solutions to be deployed and the features to be handled. It is good practice to list the solutions for the features that you are working on so you can be sure that the solution deployment matches your features. Do not spend time on noting all the SharePoint system features to be activated, as they are normally handled just fine by the site definitions.

Each feature activation line looks like:

<Feature nameOrId="AccessCheckerWebPart" action="activate" ignoreerror="false" />

Where nameOrId is either the guid (with or without braces, dashes etc.) or the name of the feature, i.e. the internal name in the WSP packages, not the display name. When possible I always opt for flexibility ;-)

The action attribute is either “activate”, “deactivate” or “reactivate”.

The ignoreerror attribute is simply a boolean true/false switch that determines how the error should be logged if the feature fails to activate. It is quite useful for the occasional inconsistencies between environments, e.g. if you are removing a feature it might still be present in production but no longer in the test environment. The script continues with the next feature in case of errors regardless of this switch.

It is important to note that web-scoped features are not supported here; "only" farm, web app or site scope is allowed.

The features are activated in the order that they are present in the config file.
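Under the hood the per-feature handling boils down to something like this simplified sketch for site collection scope (the real scripts also handle farm and web application scope, logging and the ignoreerror switch):

function EnsureFeature([Microsoft.SharePoint.SPSite]$site, [Guid]$featureId, [string]$action){
    $active = ($site.Features[$featureId] -ne $null)
    switch($action){
        "activate"   { if( -not $active ){ [void]$site.Features.Add($featureId) } }
        "deactivate" { if( $active ){ $site.Features.Remove($featureId, $true) } }
        "reactivate" { if( $active ){ $site.Features.Remove($featureId, $true) }
                       [void]$site.Features.Add($featureId) }
    }
}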

The Scripts

There are two sections here:

  1. A number of SharePoint PowerShell scripts that include each other as needed
  2. A few batch files that start it all. I generally prefer to have some (parameterless) batch files that execute the PowerShell scripts with appropriate parameters

I have gold-plated the scripts quite a bit and created a small script library for my SharePoint related PowerShell.

The PowerShell scripts are too long to include in the post; they can be downloaded here. The list is (\SharePointLib):

*(Part 1) DeploySharePointSoluitions.ps1: Starts the WSP deployments. Parses command line arguments: config file name and directory of WSP solutions

*EnsureFeatureActivation.ps1: Starts the feature activation scripts (needs command line arguments)

*(Part 3) QADeploymentProcess.ps1: Deployment process for dev, test and QA environments (needs command line arguments)

SharePointDeploymentScripts.ps1: Main script that defines a large number of methods for deployment and activations.

SharePointHelper.ps1: Fairly simple SharePoint helper methods to get farm information

*RestartSharePointServices.ps1: Restarts all SharePoint services on all servers in the local farm

Services.ps1: Non-SharePoint specific methods for handling services, e.g. restarts

Logging.ps1: Generic logging methods. Note that there are options for setting different log levels, output files, etc.

The ones with an asterisk are the ones that you are supposed to execute – the others just define methods for the "asterisk scripts".

The batch files (in the root of the same zip download):

(Part 1) 01 Deploy Solutions.bat: Starts the solution deployment. It will deploy all the WSP files dropped in "\SolutionsDropDir\" that are also specified in "\DeploymentConfig.xml".

02 Activate Features.bat: Works with the “\DeploymentConfig.xml” file and ensures that all features are activated.

In other words you’ll execute “02 Activate Features.bat” to ensure that all features are activated in your farm.

The important line in the 02 Activate Features.bat file is

powershell.exe -File "SharePointLib\EnsureFeatureActivation.ps1" "%config%" "" >>ActivateFeatures.log

And if you need to reactivate your branding features (masterpages) you can just copy the batch file and change the line to something like:

powershell.exe -File "SharePointLib\EnsureFeatureActivation.ps1" "%config%" "BrandingMasterPages,BrandingCSS,BrandingPageLayouts" >>ReactivateFeatures.log

to force the three (made up) features “BrandingMasterPages”, “BrandingCSS” and “BrandingPageLayouts” to be reactivated.

By default each batch file produces one log file named after the operation, one log file with a timestamp, and in case of any errors an additional error log with the same timestamp. In other words, if the error log file is not created then no errors occurred.

Runtimes

The scripts are quite fast, and the time to execute is determined by the time it takes to activate the features within SharePoint. Therefore if the script has nothing to do, i.e. all features are activated as they should be, it normally takes less than a minute to check everything and return.

What usually takes time is features with many files to be added to document libraries (branding) and features that modify the web.config as we wait for the timer job to complete (assuming that the API is used to do this).

Closing Notes

The scripts write quite a bit of logging that can be daunting (especially if you set it to verbose), but don't worry – I rarely look at the logs anymore. Instead I look for the EnsureFeatureActivation_date_error_log. If it's not there then no error occurred and there is no reason to look into the log files.

These scripts have been a huge benefit for our deployments in reducing the post-deployment errors and fixes. They are however also the first to be blamed whenever some feature does not behave as intended ("the unghosted file is not changed – that must be the fault of the deployment scripts") and they are rarely the source of the error.

They can be blamed for being a bit complicated, though, and that is a fair point. If you don't understand what they do and how, you shouldn't use them.

If you make some brilliant changes then let me know so I can include them in my "official" version.

Please also leave credits in there so it’ll be possible for your successors to find the source and documentation.

Download

Grab the scripts here.

Note: Updated Aug 29 2011.
