Sunday, August 28, 2011

Custom Alerts in SharePoint - Templates or Code?

My client needed a customized alert created each time a new issue was added to a standard Issues list. The only difference from the out-of-the-box alert functionality was that the customized alert email message needed to include the Issue ID in its subject. I hadn't customized alerts before, so I did some research, which showed that:

  • you can customize the HTML of an alert through alert templates and use placeholders to insert values into the email message.
  • there is an API letting you tap into the alert message processing pipeline and customize the email message before it is sent out.

There is a lot of information online on the subject, yet I still had difficulties with this seemingly easy problem. The following resources were most useful to me: MSDN, Albert Meerscheidt's post about alerts in WSS 3.0 and Yaroslav Pentsarskyy's post about customizing alerts in SharePoint 2010.

Empowered with this information, I figured the task came down to writing a correct alert template, or so I thought. Take a look at this fragment of an out-of-the-box immediate alert template defining the email message subject text (immediate alerts are sent right away, while digest alerts are sent later as a summary):

<GetVar Name="AlertTitle" />
<HTML><![CDATA[ - ]]></HTML>
<GetVar Name="ItemName" />

The interesting parts here are the placeholders "AlertTitle" and "ItemName" and the way they are used. I have a field named "ID" (it is part of a standard Issues list), and the naive approach of writing <GetVar Name="ID" /> didn't get me anywhere. Same result with adding a custom text column named "ATextColumn" and then doing <GetVar Name="ATextColumn" />. Well, the <GetVar> element is part of the CAML View Schema and yields the value of a local or global variable set in the current page context, but what are these placeholders and how are they set? At this point I realized that my effort estimates were a little too optimistic. Then I bumped into a help article about alerts in WSS 2.0. Among other things it had a list of "tags" that could be included in alert templates. Here it is:

  • SiteUrl - The full URL to the site.
  • SiteName - The name of the site.
  • SiteLanguage - The locale ID (LCID) for the language used in the site. For example, 1033 for U.S. English.
  • AlertFrequency - Immediate (0), Daily (1), or Weekly (2).
  • ListUrl - The full URL to the list.
  • ListName - The name of the list.
  • ItemUrl - The full URL to the item.
  • ItemName - The name of the item.
  • EventType - ItemAdded (1), ItemModified (2), ItemDeleted (4), DiscussionAdded (16), DiscussionModified (32), DiscussionDeleted (64), DiscussionClosed (128), DiscussionActivated (256).
  • ModifiedBy - The name of the user who modified an item.
  • TimeLastModified - The time the item was last modified.
  • MySubsUrl - The full URL to the My Alerts on this Site page in Site Settings.

Some of these tags are used in the out-of-the-box templates. I tried the rest of them and they all worked. So it appears that we are limited to using these 12 tags only, and that this is an old piece of functionality which survived the WSS 2.0, WSS 3.0 and SharePoint Foundation 2010 releases. If someone knows more about it, please post a comment to validate, disprove or complete this statement.

Another thing that comes out of reflecting over the template XML is the use of <GetVar Name="OldValue#{Field}" />, <GetVar Name="NewValue#{Field}" /> and <GetVar Name="DisplayName#{Field}" />. These elements are descendants of the <Fields> element for immediate alerts, and of the <RowFields> element for digest alerts. If you inspect the generated alert body HTML, you will notice that fields (or columns) are iterated over and their values are inserted into the body, except when a field is listed inside the <ImmediateNotificationExcludedFields> or <DigestNotificationExcludedFields> elements. So the <Fields> element establishes a loop, and {Field} must be a contextual variable inside this loop. With the above syntax you can get the display name and the old or new value of each field into the email body, and exclude the fields you don't want to be listed.
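To make this concrete, here is a hedged sketch of what such a field loop and exclusion list might look like in a template body. The element names come from the discussion above, but the exact out-of-the-box markup, and in particular the semicolon-delimited CDATA format for excluded fields, are assumptions to verify against your own alerttemplates.xml:

```xml
<Fields>
  <!-- This fragment is emitted once per field that is not excluded. -->
  <GetVar Name="DisplayName#{Field}" />
  <HTML><![CDATA[: ]]></HTML>
  <GetVar Name="NewValue#{Field}" />
  <HTML><![CDATA[<br/>]]></HTML>
</Fields>
<!-- Field names to skip; the semicolon-delimited list is an assumption. -->
<ImmediateNotificationExcludedFields><![CDATA[ContentType;Attachments]]></ImmediateNotificationExcludedFields>
```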

Great, but how do I get the ID into my email's subject? I don't want to list a bunch of fields in the subject, just the ID, so the <Fields> element is of no use there. The API did the trick: I created a class implementing the IAlertNotifyHandler interface and used a regular expression to replace a placeholder with the actual value:

public class MessageCustomizer : IAlertNotifyHandler
{
    public bool OnNotification(SPAlertHandlerParams parameters)
    {
        string webUrl = parameters.siteUrl + parameters.webUrl;

        using (SPSite site = new SPSite(webUrl))
        using (SPWeb web = site.OpenWeb())
        {
            string to = parameters.headers["To"];
            string subjectTemplate = parameters.headers["Subject"];
            string body = parameters.body;
            string itemId = parameters.eventData[0].itemId.ToString();

            // Below we are replacing a placeholder we have
            // created in our alert template with the actual value.
            string subject = Regex.Replace(
                subjectTemplate, "#ID#", itemId);

            bool result = SPUtility.SendEmail(
                web, true, true, to, subject, body);
            return result;
        }
    }
}

We still need a customized alert template, firstly to insert our own custom placeholder (in my example #ID#) and secondly to register the MessageCustomizer class so its OnNotification() method gets called. Here is the updated fragment defining the email's subject:

<HTML><![CDATA[Issue ID #ID#: ]]></HTML>
<GetVar Name="AlertTitle" />
<HTML><![CDATA[ - ]]></HTML>
<GetVar Name="ItemName" />

The MessageCustomizer class and its assembly are registered in the <NotificationHandlerClassName> and <NotificationHandlerAssembly> elements of the template:

<NotificationHandlerAssembly>AlertCustomization, Version=, Culture=neutral, PublicKeyToken=bda7bcef852778f0</NotificationHandlerAssembly>
<!-- Fully qualified class name; the AlertCustomization namespace is assumed here. -->
<NotificationHandlerClassName>AlertCustomization.MessageCustomizer</NotificationHandlerClassName>

We can now wire up the template and the handler. The handler's assembly needs to go to the GAC. Then you copy and rename the alerttemplates.xml file sitting in the 14\TEMPLATE\XML folder, add your template (or, again, copy an existing one and change it), and register this file with SharePoint by running the stsadm -o updatealerttemplates command. I didn't find PowerShell cmdlets equivalent to this command. Lastly, you need to assign your template to a list using the SPList.AlertTemplate property. You can write a PowerShell script or use a feature receiver. The latter approach is demonstrated in Yaroslav Pentsarskyy's post mentioned earlier.
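For reference, the deployment steps above might be scripted roughly as follows. All paths, URLs and template names are illustrative, and the SPAlertTemplateCollection constructor taking the content web service is an assumption worth verifying before relying on it:

```powershell
# Copy and customize the template file (paths relative to the SharePoint root).
Copy-Item "14\TEMPLATE\XML\alerttemplates.xml" "14\TEMPLATE\XML\customalerttemplates.xml"

# Register the customized templates (no PowerShell cmdlet equivalent exists;
# check the exact stsadm parameter names in your environment).
stsadm -o updatealerttemplates -url http://yourwebapp -filename "14\TEMPLATE\XML\customalerttemplates.xml"

# Assign the template to the list via SPList.AlertTemplate.
$contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$templates = New-Object Microsoft.SharePoint.SPAlertTemplateCollection($contentService)
$list = (Get-SPWeb http://yourwebapp/sites/issues).Lists["Issues"]
$list.AlertTemplate = $templates | Where-Object { $_.Name -eq "SPAlertTemplateType.CustomIssues" }
$list.Update()
```

Restarting the SharePoint timer service afterwards is often recommended so the alert engine picks up the new template.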

So we arrive at the standard "it depends..." answer to the question of whether to customize alerts via templates or via code. Regardless of which approach works for you, for any such customization that is not an ad hoc fix or a proof of concept you are looking at creating a package including a deployment script, a SharePoint solution and possibly a feature with a feature receiver. The functionality is almost identical between WSS 3.0 and SharePoint Foundation 2010, with the latter adding SMS support. And although Visual Studio 2010 makes it much easier to package things, you are probably still looking at a few hours of work to get it done right.

Tuesday, July 12, 2011

Calling WCF Web Services from a SharePoint Timer Job

Imagine that you are building an enterprise application on top of SharePoint 2010, which is installed on a multi-server farm. The application consumes a WCF web service – custom libraries use generated proxy classes and endpoint configuration is stored inside of a web application’s web.config file. Configuration settings are applied to all servers in the farm by a script when application is provisioned. And your application needs to be deployed to development, testing and production farms, all of which have differences in how WCF endpoints and bindings are configured, including variance in WCF binding types.

Now imagine that you need to call this web service from two places: from your custom web application code and from a timer job, perhaps because you need to cache results for better performance but also want to fall back to a synchronous call when the cached results get outdated. You will face a complication: how do you configure your WCF client when you invoke the service from a timer job?

You have a few options:

1. Create an OWSTIMER.exe.config configuration file.

2. Construct binding and endpoint objects inside the timer job, set their properties through code, then create a channel or an extended ClientBase<T> object and execute a service method call on it.

3. Load the service model configuration from the application's web.config file, create and populate the appropriate binding and endpoint objects, then create and use a channel or a ClientBase<T> object to invoke the service method.

Option 1 has issues: you are provisioning files to locations not intended for custom application files, and you are keeping the same configuration information in two places on every farm server.

Option 2 hard-codes the binding information, which makes it very difficult to maintain the code and troubleshoot WCF issues across multiple environments.

Option 3 is clearly the best choice, since the configuration is stored in one place, it can easily be changed in web.config, and the changes will affect the WCF service client objects in both locations. So let us look at what's involved in implementing option 3.
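For context, the configuration we want to reuse is an ordinary service model section in the web application's web.config. All names, addresses and the binding type below are illustrative:

```xml
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <!-- The binding type and its settings vary between farms. -->
      <binding name="MyServiceBinding" maxReceivedMessageSize="1048576" />
    </basicHttpBinding>
  </bindings>
  <client>
    <endpoint name="MyServiceEndpoint"
              address="http://services.example.com/MyService.svc"
              binding="basicHttpBinding"
              bindingConfiguration="MyServiceBinding"
              contract="MyNamespace.IMyService" />
  </client>
</system.serviceModel>
```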

Before you can load a configuration object from the web.config file, you need to locate the file. Here we can leverage the SPIisSettings object, which is available for each security zone of the web application. Then you load the web.config content using the WebConfigurationManager.OpenMappedWebConfiguration() method:

private System.Configuration.Configuration GetWebConfig(SPUrlZone zone)
{
    SPIisSettings iisSettings = this.WebApplication.IisSettings[zone];
    string rootPath = iisSettings.Path.ToString();

    WebConfigurationFileMap map = new WebConfigurationFileMap();
    map.VirtualDirectories.Add("/",
        new VirtualDirectoryMapping(rootPath, true));

    System.Configuration.Configuration webConfig =
        WebConfigurationManager.OpenMappedWebConfiguration(map, "/");

    return webConfig;
}

Once you have obtained the configuration object, you need to infer the type of the binding from the service model configuration and apply all configured attributes to it, as well as create an endpoint address. Given the names of a binding and an endpoint, you find the corresponding BindingCollectionElement and ChannelEndpointElement. The actual binding object is created using .NET reflection and the value of the BindingCollectionElement.BindingType property:

private MyServiceClient MakeMyServiceClient(
    System.Configuration.Configuration config)
{
    // Get endpoint and binding names from application settings
    // (GetValue() is this application's own settings helper).
    string bindingName = GetValue("Key_MyBindingName");
    string endpointName = GetValue("Key_MyEndpointName");

    // Determine the endpoint and binding elements used.
    ServiceModelSectionGroup sectionGroup =
        ServiceModelSectionGroup.GetSectionGroup(config);
    ChannelEndpointElement endpointElement = null;

    for (int i = 0; i < sectionGroup.Client.Endpoints.Count; ++i)
    {
        if (sectionGroup.Client.Endpoints[i].Name == endpointName)
        {
            endpointElement = sectionGroup.Client.Endpoints[i];
            break;
        }
    }

    BindingCollectionElement collectionElement =
        sectionGroup.Bindings.BindingCollections.Find(
            item => item.BindingName == endpointElement.Binding);
    IBindingConfigurationElement bindingConfig = new
        List<IBindingConfigurationElement>(
            collectionElement.ConfiguredBindings).Find(item =>
                item.Name == endpointElement.BindingConfiguration);

    // Create an address and a binding of the proper type and populate them.
    Binding binding = (Binding)collectionElement.BindingType.
        GetConstructor(new Type[0]).Invoke(new object[0]);
    bindingConfig.ApplyConfiguration(binding);
    EndpointAddress address = new EndpointAddress(
        endpointElement.Address);

    MyServiceClient client = new MyServiceClient(binding, address);
    return client;
}

That's it. The credit for the MakeMyServiceClient() method goes to Microsoft. When I was searching for examples of inferring the binding type and properties from configuration, I came across the implementation of a read-only property named Binding on the internal type Microsoft.SharePoint.Administration.Claims.SPSecurityTokenServiceApplication inside the Microsoft.SharePoint assembly. I reused that code with minor deviations in the example above. Open up Reflector and take a look at that property.

Wednesday, April 13, 2011

Property Bag in Application Setting Manager Does Not Find a Key… Easily

I am working on a custom SharePoint 2010 application, which relies on the Microsoft SharePoint 2010 Guidance for developers. Today I came across an interesting issue while using the Application Setting Manager component of the guidance library. I thought I had set up my application according to the instructions, but I could not retrieve the settings. It turned out that there is a nuance that was not obvious from the documentation, which I'd like to point out.

So the setting manager stores configuration settings in a property bag of a web site, site collection, web application or farm. Lookups can be hierarchical, or you can specify the level at which you want to store a setting. Quite powerful. In my case I used a site collection scoped feature to deploy my settings and store them in the site collection property bag. Here is a simplified fragment of my feature receiver class provisioning application settings:

var site = properties.Feature.Parent as SPSite;
var locator = SharePointServiceLocator.GetCurrent();

// Acquire instances of config manager and property bag
var configManager = locator.GetInstance<IConfigManager>();
var bag = configManager.GetPropertyBag(ConfigLevel.CurrentSPSite);

// Provision application settings
configManager.SetInPropertyBag("Key1", "Value1", bag);
configManager.SetInPropertyBag("Key2", "Value2", bag);

To retrieve the settings I used the following logic:

var locator = SharePointServiceLocator.GetCurrent();
var config = locator.GetInstance<IConfigManager>();
var bag = config.GetPropertyBag(ConfigLevel.CurrentSPSite);

if (!bag.Contains(key))
throw new MyKeyNotFoundException();

I kept not finding the key and getting an exception. My problem was in just relying on IntelliSense, and why not? Well, the Microsoft guidance had it right: in the examples the check is made as follows:

if (!config.ContainsKeyInPropertyBag(key, bag))
throw new MyKeyNotFoundException();

And this one works fine. You can check quite easily what is in your site's property bag; this, and the availability of the source code of the Microsoft.Practices.SharePoint.Common library, helps to understand what is going on. Here is how you can see your site collection's property bag contents using PowerShell (it is stored in the root web of the site collection):

$web = Get-SPWeb http://yoursitecollectionaddress/
$web.AllProperties | fl

You will find that your keys are stored in the form PnP.Config.Key.Key1._Site_, where the PnP.Config.Key. prefix and the ._Site_ suffix are attached to your actual key (Key1 in this example). The prefix is a constant; the suffix depends on the property bag type you use. Now if you try calling the original method bag.Contains(key) passing it PnP.Config.Key.Key1, it will find the key!
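If you only want to see the keys the setting manager owns, you can narrow the listing down with a filter like this (assuming the PnP.Config.Key. prefix described above):

```powershell
$web = Get-SPWeb http://yoursitecollectionaddress/
$web.AllProperties.Keys | Where-Object { $_ -like "PnP.Config.Key.*" }
```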

One other observation: you are probably using the SharePoint Service Locator pattern with the setting manager and, like me, have unit tests. I needed to replace the implementations of the IConfigManager and IPropertyBag interfaces with mock classes. Without knowing about this "feature", my mock implementation would find a value for the key (bag.Contains("Key1") == true), which added confusion when the same key would not work in a real web site. So try to always use the config.ContainsKeyInPropertyBag(key, bag) method instead.

Saturday, January 29, 2011

Extending CoreResultsWebPart to Handle Search Queries Written in FAST Query Language

This was one of the unanswered items from my recent two presentations on Search at TSPUG and MSPUG, so I was driven to figure it out and eventually did get it to work, although not without some controversial steps. In this post I chose to also describe other approaches I tried and things I learned along the way, which didn't necessarily get me to the end goal but may still be useful in your specific scenario. If you are looking for just extending CoreResultsWebPart so that it can "understand" FQL, then you may want to scroll down a bit. You can download the complete source code here. I was testing my code on SharePoint Server 2010 with the December 2010 CU installed.

As you may know, search web parts in SharePoint 2010 are no longer sealed, which gives you lots of flexibility to extend them. CoreResultsWebPart is probably the most important of them all and therefore a great candidate for being extended. I wanted to take a search phrase passed as a query string argument to a search results page and write my own FQL query using this phrase as a parameter. My FQL query would do something interesting with it, for example use XRANK to boost documents created by a particular author. I certainly wanted to leverage all the goodness of CoreResultsWebPart, just use my FQL query instead of a plain search phrase. Contrary to my expectations, it turned out to be not trivial to accomplish. So let's dive into the details.
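For a sense of what such a query might look like, here is a hedged FQL sketch boosting one author's documents. The xrank and string operator syntax below is from memory, and the phrase and author name are made up; verify the exact syntax against the FAST Query Language reference before use:

```
xrank(string("quarterly report", mode="and"), author:string("Jane Doe"), boost=1000)
```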

The story started with the web part completely not working with FQL, so I was forced to write a custom web part that used the query object model and in particular the KeywordQuery class to deal with queries written in FAST Query Language (FQL). This is the one I demonstrated at MSPUG and (with little success) at TSPUG. Below is a fragment showing how the query is submitted and the results are rendered (BTW, here is a related MSDN reference with an example).

using (KeywordQuery q = new KeywordQuery(ssaProxy))
{
    q.QueryText = fql;
    q.EnableFQL = true;
    q.ResultTypes = ResultType.RelevantResults;
    ResultTableCollection tables = q.Execute();

    if (tables.Count == 0)
        return;

    using (ResultTable table = tables[ResultType.RelevantResults])
    {
        while (table.Read())
        {
            int ordinal = table.GetOrdinal("Title");
            string title = table.GetString(ordinal);
            Controls.Add(new LiteralControl(
                String.Format("{0}<br/>\r\n", title)));
        }
    }
}


As you can see, rendering of results is very basic. Of course it can be made whatever it needs to be but why bother if there is CoreResultsWebPart around, which does it great already and is also highly customizable? So at this point I rolled up my sleeves. I have a book titled “Professional SharePoint 2010 Development” with a red Boeing 767 on the cover, and in Chapter 6 there is a great discussion of how to extend CoreResultsWebPart. Also Todd Carter speaks about it and shows a demo in his excellent video on Search. If you haven’t seen it I highly recommend spending an hour and 16 minutes watching it (Module 7).
Empowered with all this, I wrote a web part that extended CoreResultsWebPart and used the FixedQuery property to set a custom query, an almost one-to-one copy of Todd's demo example. Here is the listing of this web part class (note how the ConfigureDataSourceProperties() method override is used to set the FixedQuery property):

public class ExtendedCoreResultsWebPartWithKeywordSyntax : CoreResultsWebPart
{
    protected override void ConfigureDataSourceProperties()
    {
        const string LongQueryFormat =
            "(FileExtension=\"doc\" OR FileExtension=\"docx\") AND (Author:\"{0}\")";
        const string ShortQuery =
            "(FileExtension=\"doc\" OR FileExtension=\"docx\")";
        string query = null;

        if (String.IsNullOrEmpty(AuthorToFilterBy))
            query = ShortQuery;
        else
            query = String.Format(LongQueryFormat, AuthorToFilterBy);

        this.FixedQuery = query;

        // Let the base class push the properties down to the data source.
        base.ConfigureDataSourceProperties();
    }

    [WebDisplayName("Author to filter by"),
    Description("First and last name of the author to filter results by.")]
    public string AuthorToFilterBy { get; set; }
}

The web part actually filters results by a given author. It uses keyword query syntax and not FQL, so we are far from being done yet. Remember how the previous code fragment had the line q.EnableFQL = true;? If we could just set it somewhere, we would be essentially done! Right, but the KeywordQuery object is not directly accessible from the CoreResultsWebPart, because the web part uses the federation object model on top of the query object model (as do other search web parts). The purpose of the federation object model is to send the same query to multiple search results providers and aggregate the results later, either in different spots on the results page or in the same list. This is done by abstracting each search results provider by means of a Location class. Important classes in the federation object model are shown on the diagram below.


As you can see, CoreResultsWebPart is connected to the federation OM through the CoreResultsDatasource and CoreResultsDatasourceView types, with the latter actually doing all the hard work of interacting with the model. Our objective, the EnableFQL property, exists in the FASTSearchRuntime class, which in turn sets this property on the KeywordQuery class, and as I showed at the beginning, this is what's required to get FQL query syntax accepted.
As Todd Carter and the authors of Professional SharePoint 2010 Development point out, we need to extend the CoreResultsDatasourceView class in order to be able to control how our query is handled, and wire up our own class by also extending CoreResultsDatasource class. CoreResultsDatasourceView class creates a LocationList with a single Location object and correctly determines which concrete implementation type of ILocationRuntime to wire up based on search service application configuration. In other words, federation by default is not happening for the CoreResultsWebPart. There is another web part, FederatedResultsWebPart, and another view class, FederatedResultsDatasourceView, whose purpose is to do exactly that. With that, let us get back to our objective.
If we were using SharePoint Enterprise Search, we would be almost done, because the public virtual method AddSortOrder(SharePointSearchRuntime runtime) defined in the SearchResultsBaseDatasourceView class would let us get our hands on the instance of ILocationRuntime. But since we deal with FASTSearchRuntime, we are out of luck: there exists a method overload AddSortOrder(FASTSearchRuntime runtime), but it is internal! This is where the controversy I mentioned at the beginning comes into play: I was not able to find a better way than to invoke an internal member via Reflection. My way works for me, but keep in mind that usually methods are made private or internal for a reason. I used Reflection to access the internal LocationRuntime property of the Location object. I don't know why this property is internal. If someone knows, or has a better way to get at the FASTSearchRuntime instance or the KeywordQuery instance, please leave a comment! Here is a code fragment showing the extension of CoreResultsDatasourceView and getting an instance of FASTSearchRuntime from there.

class CoreFqlResultsDataSourceView : CoreResultsDatasourceView
{
    public CoreFqlResultsDataSourceView(SearchResultsBaseDatasource dataSourceOwner, string viewName)
        : base(dataSourceOwner, viewName)
    {
        CoreFqlResultsDataSource fqlDataSourceOwner =
            base.DataSourceOwner as CoreFqlResultsDataSource;

        if (fqlDataSourceOwner == null)
            throw new ArgumentOutOfRangeException("dataSourceOwner");
    }

    public override void SetPropertiesOnQdra()
    {
        base.SetPropertiesOnQdra();

        // At this point the query has not yet been dispatched to a search
        // location and we can set properties on that location, which will
        // let it understand the FQL syntax.
        UpdateFastSearchLocation();
    }

    private void UpdateFastSearchLocation()
    {
        if (base.LocationList == null || 0 == base.LocationList.Count)
            return;

        foreach (Location location in base.LocationList)
        {
            // We examine the contents of the internal
            // location.LocationRuntime property using Reflection. This is
            // the key step, which is also controversial since there is
            // probably a reason for not exposing the runtime publicly.
            Type locationType = location.GetType();
            PropertyInfo info = locationType.GetProperty(
                "LocationRuntime",
                BindingFlags.NonPublic | BindingFlags.Instance);
            object value = info.GetValue(location, null);
            FASTSearchRuntime runtime = value as FASTSearchRuntime;

            if (null != runtime)
            {
                // This is a FAST Search runtime. We can now enable FQL.
                runtime.EnableFQL = true;
            }
        }
    }
}

By the way, another limitation of my approach is that using Reflection requires the full trust CAS policy level. That said, we have finally arrived at our objective: we can set the flag on the FASTSearchRuntime, and it will understand our FQL queries. Our extended search results web part will show results as directed by the query (in the attached source code it uses XRANK) and leverage the presentation richness of CoreResultsWebPart.
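For completeness, here is a hedged sketch of the remaining wiring mentioned above: a data source that extends CoreResultsDatasource and plugs in our view, and a web part that creates it. The constructor signature, the view name and the CreateDataSource() override follow the pattern from Todd Carter's demo; verify them against your SDK version:

```csharp
public class CoreFqlResultsDataSource : CoreResultsDatasource
{
    public CoreFqlResultsDataSource(CoreResultsWebPart parentWebPart)
        : base(parentWebPart)
    {
        // Substitute our view, which enables FQL on the FAST runtime.
        this.View = new CoreFqlResultsDataSourceView(this, "CoreFqlResults");
    }
}

public class CoreFqlResultsWebPart : CoreResultsWebPart
{
    protected override void CreateDataSource()
    {
        this.DataSource = new CoreFqlResultsDataSource(this);
    }
}
```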

Friday, January 21, 2011

Decks and Source Code from January 19th TSPUG Meeting

Thanks to all who came to Wednesday's TSPUG meetup. I've uploaded a package with the presentation and source code files. I was able to resolve the first of the questions left unanswered during my previous talk about SharePoint search technologies. Although I meant to show it, I didn't get to talk much about promoting user profile properties for selection in a FAST user context for visual best bets. Basically, as Steve Peschka has pointed out, the first thing that needs to be done is to grant permissions to the profile store. This will let you see the list of properties in his Property Explorer tool. Secondly, to promote profile properties so they are available for selection in the FAST Search user context administration section, you need to edit the value of the FASTSearchContextProperties property of the FAST Query search service application (SSA). I wrote a command-line utility for this purpose, which can be invoked as follows:

FASTProfilePropertyUpdater.exe -ssaId <FAST query SSA Guid> -action Add|Remove -property <User Profile property name>

The code fragment below demonstrates how the SSA reference is acquired and how the "Add" operation is performed. The full source code is part of the package. Ideally you would manage this with a PowerShell script; I wrote an executable simply because it was quicker for me to get working.

SPFarm farm = SPFarm.Local;
var settingsService =
    farm.Services.GetValue<SearchQueryAndSiteSettingsService>();
var serviceApp = settingsService.Applications[ssaId];
var searchApp = serviceApp as SearchServiceApplication;

if (null == searchApp)
{
    Console.WriteLine(
        "Cannot find search service application with the ID '{0}'.",
        ssaId);
    return 2;
}

string properties = (String)searchApp.Properties[FASTSearchContextProperties];
Console.WriteLine(
    "Updating service application properties for property key '{0}'... Value before update was: '{1}'",
    FASTSearchContextProperties, properties);
List<string> propertiesList = properties.Split(',').
    Select(p => p.Trim()).ToList();

if (ContextPropertyActions.Add == action &&
    properties.IndexOf(propertyName, StringComparison.Ordinal) < 0)
{
    propertiesList.Add(propertyName);
    properties = string.Join(",", propertiesList.ToArray());
    searchApp.Properties[FASTSearchContextProperties] = properties;
    searchApp.Update();
    Console.WriteLine("Property added. Updated value is: '{0}'.", properties);
}

return 0;

Also in the package is the source code for a web part demonstrating how to dynamically boost search relevance ranking from within an FQL query by using the XRANK keyword. The demo of this web part didn't go well, but the code is actually working (I dragged the wrong web part onto the page during the presentation!). Also, someone asked me about proximity-based filtering of results in FAST Query Language. Yes, this is possible: there is a number of operators that support it.

Saturday, January 8, 2011

Automate Internet Proxy Management with PowerShell

The solution is very simple, yet the issue was so annoying to me that I thought I would share it. I have a laptop and work on it on site at my client's and at home. When I am in the office connected to my client's network, I need a proxy to access the Internet, while usually I do not need one when I am elsewhere. You normally set up a proxy using browser settings. With Internet Explorer 8 you open the browser, then go to Tools >> Internet Options >> Connections >> LAN Settings. There you configure the proxy parameters. As part of the configuration you can turn the proxy on or off by setting or clearing the Use a proxy server for your LAN check box, and as you toggle it, the rest of your proxy configuration settings, such as exceptions, is preserved for you. Great!

My problem was that on an almost daily basis I had to open the browser, navigate to the settings, then check or uncheck that box. I would come to work and forget to go through the steps, so my browser would get stuck, and then I would go "oh yeah, I turned the proxy off last night". Of course the reverse would happen at home... I tolerated this for too long because I didn't expect that a simple solution existed, but it does, and here it is:

The HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings key stores a DWORD value named ProxyEnable, either 1 or 0, indicating whether the proxy is enabled.

So I wrote two one-line PowerShell scripts and added shortcuts to them to my Windows taskbar: Enable Proxy and Disable Proxy! Here is the script to enable the proxy:

Set-ItemProperty "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings" -Name ProxyEnable -Value 1
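The matching Disable Proxy script just writes a zero to the same value:

```powershell
Set-ItemProperty "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings" -Name ProxyEnable -Value 0
```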

Then I got excited and wrote yet another rock star PowerShell script. Here it is:

# $matches is an automatic variable in PowerShell, so a different name is safer.
$found = @(ipconfig /all | findstr "myclientdomain")
if ($found.Count -gt 0) {
    Set-ItemProperty "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings" -Name ProxyEnable -Value 1
}
That's right, I am using the output of the ipconfig command to determine whether I am on the client's network, and if so, I turn the proxy on. After I added a scheduled task running at user logon to invoke this script, the quality of my professional life improved.
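Registering the logon task can itself be scripted. A sketch using schtasks, where the task name and script path are illustrative:

```
schtasks /Create /TN "Set proxy at logon" /SC ONLOGON /TR "powershell.exe -File C:\Scripts\Set-ProxyForNetwork.ps1"
```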