Wednesday, 30 November 2011

Avoiding bugs from cached JavaScript and CSS files in SharePoint

This article got quite detailed so here’s an executive summary – if you store custom JavaScript/CSS files etc. somewhere under SharePoint’s ‘layouts’ folder (or upload them manually to a library within the site), make sure the URL changes on each update. This is usually done with a querystring parameter in the URL e.g. /_layouts/MyProject/JS/MyScript.js?Rev=2011.11.20.1

Update 5th Dec 2011: Correction - if you exclusively use Microsoft's ScriptLink control to add JavaScript references to a page, the issue will NOT affect you if you store the .js files in “layouts”. It WILL, however, affect you if you store them in a library and do not recycle IIS every time the file changes - an IIS recycle is something that happens automatically for a WSP deployment to “layouts”, of course. You may find though that it's not always possible to use ScriptLink exclusively - on my current project for example, we had many dependencies between JavaScript files, and ScriptLink does not allow you to specify the order (sequence) in which custom .js files should be added to the page (AFAIK). So, whilst your mileage may vary, I think the main thrust of the article still applies for many. In any case, many thanks to Mahmoud Hamed's comment below for spotting the flaw in my test results - I've updated the table below.

For most projects I work on, I prefer to store JavaScript/CSS etc. on the filesystem somewhere under the SharePoint root folder (e.g. “14”), usually under the “layouts” folder - for me, these files are often “site infrastructure” and are therefore critical for branding/functionality of the whole platform. That said, there are a couple of situations where I’d go the other route and store them in the site collection (and thus content database) – these would be:

  • For CSS/JS files only applicable to one site (e.g. a team site)
  • Sandboxed solutions

When I outlined these reasons in this answer to a forum question, the voting showed many agreed, and in general I see many folks using this approach. However, I’ve noticed recently that many are unaware of a crucial gotcha – and it’s so important that I’m constantly amazed it doesn’t get discussed more. The issue is end users not seeing updated JavaScript and CSS files following a recent code deployment. And OK, OK, I cheated a little bit with the article title – in fact the problem is not unique to SharePoint’s “layouts” folder (or even IIS), and when I say “JavaScript and CSS files”, I really mean JavaScript/CSS/images/audio/video and in fact most static files. But hey, sometimes framing a topic in a SharePointy way is the best way to remind folks that general behaviour of the web also applies to us ‘special’ SharePoint-types. It seems many folks may miss this stuff because they are focused on, say, some super-cool SharePoint ribbon development, and in terms of technical reading I think it’s fair to say that many focus almost exclusively on SharePoint content.

Problem

As with many of life’s simplest pitfalls, this one is so easy to fall into. Remember that it applies only to SharePoint projects which do some degree of customization (e.g. branding/custom code) and store files on the filesystem – if those two things don’t apply to you, the only files affected are Microsoft’s own, which they take care of themselves, so there’s nothing to worry about. Otherwise, imagine the following:

  1. Developer makes an update to some custom code – previously deployed files are updated. Let’s say we’re deploying some updated JavaScript and CSS.
  2. The associated WSPs are deployed to SharePoint, and the files get updated across all web servers in the farm.
  3. Users do not see the JavaScript/CSS changes until they do a hard refresh (CTRL + F5). Worse still, they could experience script bugs or broken formatting which could make the application completely unusable.

This scenario typically results in the oft-heard question “Hmm, what happens if you do a CTRL + F5?”. Which is all fine, of course, for the test users who are in touch with the developers. In production, it’s certainly not fine for the rest of the organisation – strangely enough, asking 30,000 information workers to do a CTRL + F5 does NOT go down well in the enterprise! Things get even better in an internet/WCM scenario – would you really hear about JavaScript errors or CSS bugs from site users? Would that message actually get to the implementation team (who, let’s remember, are probably not seeing errors in event logs from this stuff)? Or would the users just go elsewhere?

Cause

This happens because the first time a file is requested, the web server (IIS here of course, but the same applies to most) serves it with an HTTP header which specifies that it can be cached for a long time. For the “layouts” folder, by default this is 31536000 seconds (1 year):

CacheControlHeader
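For reference, the header in question looks something like this (the exact value is configurable in IIS, so treat this as illustrative of a default install):

Cache-Control: max-age=31536000, public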
Browsers and proxies typically obey this, so on subsequent page requests the file is pulled from ‘Temporary internet files’ on the user’s machine, or from a proxy between the user and the server. Without this, of course, web pages would load much more slowly. My testing showed that, generally, files served from content databases do not get served with the same headers, and therefore updates show immediately. However, it turns out that SharePoint 2010 (and maybe earlier) treats files deployed using a Feature differently to files uploaded manually to the same library. The table below summarises my testing and results:

Location | BLOB cache enabled? | Result | Notes
Style library (publishing site) | No | OK |
Style library (publishing site) | Yes | OK |
Style library (publishing site) | Yes (now with max-age) | OK | Added by Feature
Style library (publishing site) | Yes (now with max-age) | Issue | Added manually to library
Layouts | Yes | OK | ScriptLink DID add querystring ?rev=q%2Fb304kubfwrNJVD%2BdYdxg%3D%3D – but not clear when this gets updated; NOT when the file changes!
Layouts | No | Issue | BLOB cache only relevant for files in content DB anyway
Custom library | Yes | Issue | Added manually to library
Custom library | Yes | OK | Added by Feature

Conclusions and notes:

  • The issue applies to files in ‘layouts’, and to files in a library which were not deployed using a Module tag in a Feature (i.e. files which were uploaded through the browser)
    • N.B. I did actually go digging in Reflector to find the source of this behavior, but didn’t find it.
  • For JavaScript files, it doesn’t matter how the file is added to the page (e.g. ScriptLink control or plain <script> tag)
  • BLOB caching makes no difference
  • Whether the solution is sandboxed or farm makes no difference
  • Note – tested on SharePoint 2010 RTM (old) and SharePoint 2010 SP1 + August CU (fairly new at time of writing)

Solution

For the affected cases, to avoid browsers/proxies caching a file despite it being updated, wherever the file is linked simply add/change a querystring value on each deployment. So instead of something like:

  • /_layouts/MyProject/JS/MyScript.js

..we need something like one of these:

  • /_layouts/MyProject/JS/MyScript.js?Rev=1.0.0.0
  • /_layouts/MyProject/JS/MyScript.js?Rev=2011.11.20.1

The parameter can have any name (though ‘rev’ or ‘revision’ is common), but the important thing is to change/increment it in some way on every modification. If you’re incrementing assembly versions (using AssemblyFileVersion rather than the main version number, remember), then it may be appropriate to use the same value – on my current project we do this in some places* (more on this later) so that we don’t have to remember to increment links like those above manually. Effectively we use code to read the current assembly file version, and append that as the ‘Rev’ querystring value (and this value will be the TFS automatically-generated build number for the current build). The trade-off, of course, is that you may be forcing users to re-download files which haven’t actually changed the first time after an update – but it’s safer than having bugs. As an aside, you may have noticed Microsoft do this with their links to CSS/JS files – their values will get updated in service packs/cumulative updates etc:

SharePoint2010_JavaScriptLinks

Attempts to solve the problem in a better way

So we can solve the problem by adding the querystring onto any URL which links to a CSS/JS file, great! In many cases though, this means remembering to do a find/replace in Visual Studio on all the locations which have links to these files before every deployment. This isn’t ideal – in our current project, this can be master pages, user controls, code-behind which uses ClientScriptManager and many other locations. What would be ideal is if all this could take care of itself on every release!

I haven’t spent too much time on this, but in our project we do use a solution for CSS files at least. All of our CSS files are emitted to the page using SharePoint’s CssRegistration and CssLink controls – this means all of the links are generated by one control, giving us a convenient ‘funnel’ which we can amend. So, we simply derive from the CssLink control, and in our custom class we rewrite the links to have the ‘Rev’ parameter. The value used is tied to the AssemblyFileVersion of our ‘core’ assembly (which also hosts this control), the idea being that the CI process updates this for every release. Of course, you could use whatever scheme you like:

using System.IO;
using System.Reflection;
using System.Web.UI;
using Microsoft.SharePoint.WebControls;

public class CacheSafeCssLinkControl : CssLink
{
    protected override void Render(HtmlTextWriter output)
    {
        // Render the standard CssLink output into a temporary writer so we
        // can post-process the generated <link> elements
        using (HtmlTextWriter tempWriter = new HtmlTextWriter(new StringWriter()))
        {
            base.Render(tempWriter);
            string html = tempWriter.InnerWriter.ToString();
            if (!string.IsNullOrEmpty(html))
            {
                // Append the 'Rev' cache-busting parameter to each .css link,
                // using the file version of our 'core' assembly (stamped by CI)
                html = html.ToLower().Replace(".css\"", string.Format(".css?Rev={0}\"",
                    Core.Common.Utilities.GetAssemblyFileVersion(Assembly.GetExecutingAssembly())));
            }
            output.Write(html);
        }
    }
}
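A quick note on the GetAssemblyFileVersion() call above – that’s a helper of ours rather than a framework method. A minimal sketch of such a helper (my illustration of the idea, not our exact code) might be:

using System.Diagnostics;
using System.Reflection;

namespace Core.Common
{
    public static class Utilities
    {
        // Returns the Win32 file version (i.e. the AssemblyFileVersion
        // attribute) of the given assembly, e.g. "2011.11.20.1"
        public static string GetAssemblyFileVersion(Assembly assembly)
        {
            return FileVersionInfo.GetVersionInfo(assembly.Location).FileVersion;
        }
    }
}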

So this works great for CSS files. But what about other files, especially JavaScript? Well, here’s where we hit the snag – unfortunately Microsoft.SharePoint.WebControls.ScriptLink (the equivalent control for JavaScript links) is sealed, so we can’t use the same approach. Interestingly, if you were happy to use reflection (perhaps with some caching), you could add the ‘Rev’ parameter using the same technique I use in Eliminating large JS files to optimize SharePoint 2010 internet sites – since the collection of links is stored in HttpContext.Current.Items, there’s an opportunity to modify the links before they are written to the page with no need to swap out Microsoft’s control.

I’d probably be comfortable with that, but the real issue, in our project at least, is that JavaScript tends to be added to pages through other routes too, such as ClientScriptManager and <script> tags. We’d need to update the (large) codebase to exclusively use ScriptLink so that we have a more effective funnel. Still, it gives me an idea of how I might look to do things in the future (see the sketch below) - in the long term though, it would be nice if Microsoft’s own controls offered a means of adding a cache-busting querystring.
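As an illustration of that idea, a hypothetical ‘funnel’ control of our own (not from the project – the class and property names here are mine) could look like this:

using System.Diagnostics;
using System.Reflection;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical alternative to ScriptLink: route all custom script references
// through one control, so the 'Rev' parameter is always appended
public class CacheSafeScriptLink : WebControl
{
    // Server-relative URL of the script, e.g. "/_layouts/MyProject/JS/MyScript.js"
    public string ScriptUrl { get; set; }

    protected override void Render(HtmlTextWriter output)
    {
        // Use the file version of the assembly hosting this control as the Rev value
        string rev = FileVersionInfo.GetVersionInfo(
            Assembly.GetExecutingAssembly().Location).FileVersion;
        string separator = ScriptUrl.Contains("?") ? "&" : "?";

        output.Write("<script type=\"text/javascript\" src=\"{0}{1}Rev={2}\"></script>",
            ScriptUrl, separator, rev);
    }
}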

Summary

Whilst developers happily use CTRL + F5 in their development processes, it’s often overlooked that end users will face the same issue of not seeing updated CSS and JavaScript files by default. Clearly this can cause all sorts of issues, including JavaScript bugs where the page HTML structure has changed but the updated JavaScript which corresponds to it is not being used. The issue is mainly seen where such files are stored in the ‘layouts’ folder, but interestingly it appears where files are manually uploaded to a library too (in SharePoint 2010 at least). The key is ensuring that the URL changes every time the file is updated, and since a querystring will suffice there’s no need to actually rename the file. Ultimately it doesn’t matter how you change the querystring value, whether it’s a simple manual update or a more automated process tied into CI.

Sunday, 20 November 2011

SP2010 Continuous Integration–pt 5: Deploy WSPs as part of build

The 5th article in our CI with SharePoint 2010 and TFS series is now live on the official SharePoint Developer blog (as mentioned on Twitter last week) – see Deploy WSPs as part of an automated build. This post is probably the most important of the series. It shows how to join up the automated build with PowerShell, so that you can automatically deploy WSPs on every build e.g. every night or (if appropriate) every time a developer checks in. The article covers a few things.

Most SharePoint developers would recognize immediately that one set of PowerShell scripts could never perform all of the deployment steps for every project – each application (and indeed each iteration) will introduce new Features, Solutions and other artifacts, for which specific PS would be needed to deploy. However, what the SharePoint/TFS Continuous Integration Starter Pack aims to do is provide key mechanisms to enable CI in SharePoint – so that all you have to worry about is the PowerShell specific to your application.

Links:

 

CI_mechanisms

Wednesday, 16 November 2011

Slide deck from ‘Customizing the SharePoint ribbon’ talk (SPC11 and SPSUK)

I’d previously published the code samples, but so far hadn’t published the slide deck for the ‘Deep Dive on SharePoint Ribbon Development & Extensibility’ talk I gave recently. This is a session I initially gave with Andrew Connell at the SharePoint Conference 2011 (Anaheim), and also as an amended version at SharePoint Saturday UK a few days ago (12th November 2011). Certainly part of the credit here needs to go to my co-speaker, this is not just my work.

What I’m publishing here is a slightly amended version of the deck I used at SharePoint Saturday UK. It has all of the goodness with a couple of simplifications, but also many screenshots which try to convey what was in the demos.

I should mention we aimed for this deck and set of code samples to be THE definitive starter pack for SharePoint ribbon development right now - hopefully it helps some folks. You can view this version of the slide deck at:

http://bit.ly/tql2Bx

..and here are some screenshots which illustrate the contents:

AdvancedRibbonControls

FlyoutAnchor_Review1

FlyoutAnchor_Review2

Tuesday, 11 October 2011

My ribbon samples from SPC11 (session with AC) + revisions to my ribbon articles

I had an awesome time at the SharePoint Conference 2011 – all the clichés about catching up with friends and meeting new ones were true, and it was a privilege to be on stage speaking to people. Both my sessions were huge fun! Here I want to focus on the level 400 session Andrew Connell and I co-presented, “Deep Dive on SharePoint Ribbon Development & Extensibility” – between us we had some great samples and information, and were both pleased with how the session turned out. I should mention here that as part of this work, I revisited my original ribbon articles and fixed up some samples and added additional info. These articles were written way back in January 2010 during SharePoint 2010 beta, and Microsoft changed a couple of things after these were published. So if you ever used these articles and hit an issue, it is now resolved – apologies. Anyway, here are some things AC and I covered:

  • Ribbon architecture/how it works
  • Fundamental ribbon tasks – adding a tab/group/button
  • Contextual ribbon tabs (e.g. tied to a web part)
  • How to respond to ribbon commands (including analysis of CommandHandler vs. page components)
  • Working with dialogs
  • Sandbox considerations
  • Cool ribbon controls – SplitButton, ToggleButton, Spinner, FlyoutAnchor etc.
  • AC’s tips and tricks
  • COB’s tips and tricks

AC has already published his samples, and for convenience here are both links:

  • AC’s ribbon samples (contextual tabs, commands explained, async processing, dialogs)
  • COB’s ribbon samples (adding a tab/group/button, cool controls [SplitButton, ToggleButton, Spinner], static/dynamic FlyoutAnchor samples)

I’m biased no doubt, but I personally think that together they make an awesome starter pack for SharePoint ribbon development. In case you’re wondering what the hell a FlyoutAnchor is by the way, it’s basically a dropdown on steroids, with grouping etc. – here’s the example I used:

StandardDropdown

FlyoutAnchorTeaser

The talk has given me a revived enthusiasm for the ribbon, and I’ll post some more articles about things I’ve learnt since my original articles soon. To be quite honest, I also learnt a number of things from AC during our prep – our Lync calls before the conference had quite a lot of “No way, I didn’t know that!” from my end, and so he may well write more in the future too.

Oh by the way - if you were at the session, you will have seen Andrew’s “Help” ribbon button which (unexpectedly) popped a dialog with a somewhat comical photo of me with a beer in hand. What a nice surprise, and how kind to choose such a flattering photo! Ha! Anyway if you see him, be sure to remind him I’ve got a long memory, OK? :)

That download link again was:

Friday, 23 September 2011

SP2010 Continuous Integration–pt 4: Assembly versioning

The 4th article in our CI with SharePoint 2010 and TFS series is now live on the official SharePoint Developer blog. This post is a quick overview, but the actual article can be found here - Configuring Versioning of Assemblies in SharePoint Automated Build. In this episode we cover how to introduce assembly versioning to your automated build process. The benefits of versioning code are widely documented, but in my simple mind I think of it like this – if you don’t have versioned assemblies, how can you easily tell which version of code is running in a given environment/customer/product? The answer is you can’t – at least, not without resorting to Reflector or some other decompilation tool and literally scanning code. And that’s really neither efficient nor effective – far better to put something in place once, and forever be able to determine the version by looking at the file in Windows Explorer. I’d argue that most pure .Net development shops adopted assembly versioning a long time ago, but again it’s one of those things which is somewhat behind in the SharePoint world due to the other complexities which come with it.

Additionally, versioning in SharePoint can be a very bad idea if not done appropriately (no doubt a few people tried and got bitten by this). Version numbers are stored in many places inside SharePoint, and typical versioning approaches (incrementing the main AssemblyVersion) will result in a broken application. Instead, I always recommend that a secondary version number such as AssemblyFileVersion is incremented when working with SharePoint.
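To make the distinction concrete, the pattern in AssemblyInfo.cs looks something like this (version numbers purely illustrative):

using System.Reflection;

// AssemblyVersion is baked into SharePoint artifacts (SafeControl entries,
// web part definitions, workflow associations etc.) - keep it fixed
[assembly: AssemblyVersion("1.0.0.0")]

// AssemblyFileVersion is purely informational, so the build can safely
// stamp a new value here on every build
[assembly: AssemblyFileVersion("2011.11.20.1")]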

So if we’re agreed that it’s worthwhile, how do we adopt it? It would be nice if TFS had some simple switch, but that’s not the case up to and including TFS 2010 (personally I’ve no idea about future versions). Implementing versioning requires some customization of the build process (which we introduced in the last article, Creating your first TFS Build Process for SharePoint projects), and essentially we need to drop in a custom workflow activity which takes care of the versioning step. I’ve written such an activity and published it on Codeplex at http://tfssimpleversioning.codeplex.com (others exist too). The article explains how to implement the sample to perform versioning in the right way.

That link again is Configuring Versioning of Assemblies in SharePoint Automated Build.

The next article will be “Deploying WSPs as part of an automated build”.

Thursday, 15 September 2011

My SharePoint Conference 2011 talks–ribbon development and CI

Unless you’ve been under a rock you’ll know that this year’s big event in the SharePoint world is the official SharePoint Conference 2011, held in Anaheim near Los Angeles. We’re now just over 2 weeks to go, and since many are starting to build their conference schedule I thought I’d mention my talks. I’m presenting 2 sessions and, somewhat weirdly, both of them are co-presents with other speakers. I was looking forward to speaking already, but this aspect makes me even more excited – both my co-speakers rock and I think these are gonna be fun sessions!

Both talks are on the Wednesday – one in the morning, one in the afternoon:

Application Lifecycle Management : Automated builds and testing for SharePoint projects – SPC319, Weds 10:30am
With Mike Morton, Visual Studio Senior Program Manager (Microsoft)

Whether small or large, experienced or novice, any SharePoint development team can improve their process! This demo-heavy session will show you how Visual Studio Ultimate and Team Foundation Server can be used to automatically build your SharePoint solutions, deploy the resulting WSP’s to a remote SharePoint environment, and leverage automated Visual Studio UI testing (which can have a lower barrier to entry than unit testing) to help you find issues faster. Come hear some great, real world information that will help you create higher quality code; as a bonus you will get to admire Chris’ good looks and enjoy Mike’s crazy humour!

Deep Dive on SharePoint Ribbon Development & Extensibility – SPC402, Weds 5pm
With Andrew Connell (MVP, Critical Path Training) 

Take advantage of the Ribbon in your SharePoint applications for a tightly integrated and great user experience! Developers can customize and extend the ribbon for custom solutions. In this session we'll examine the different components of the ribbon as well as how to create page components, asynchronous callbacks and prompt the user with intuitive dialogs. Best of all you can do all this from the sandbox and avoid getting admins involved in deploying farm solutions!

Hopefully see you there!

Friday, 26 August 2011

SP2010 Continuous Integration - pt 3: Creating your first TFS Build Process for SharePoint projects

The 3rd article on Continuous Integration for SharePoint 2010 is now live on the official SharePoint Developer blog – Creating your first TFS Build Process for SharePoint projects. As before I will post an overview here, but the content itself is on the MSDN blog. This one is another lengthy article (by me this time), and should be good information for those looking to get automated builds running or those just curious.

The article picks up where the last one (Configuring a TFS Environment with Test Controller, Test Agent, and Build Server) left off - you have a TFS build server running (and can build a plain .Net app), but so far it is not able to build SharePoint projects since it does not have the assemblies installed. This article answers questions such as:

  • What setup do I need to do to build SharePoint projects?
  • How do I get my WSPs to build automatically? (e.g. on each check-in/as part of a nightly build etc.)
  • How do I create my first build definition?
  • The build process is now a .NET 4 workflow instead of MSBuild – how do I make a change to the process?
  • How should I deal with dependencies between projects?

That link again is Creating your first TFS Build Process for SharePoint projects.

The next article will be “Implementing assembly versioning for SharePoint projects”. This should be published on September 8th.

Thursday, 18 August 2011

SP2010 Continuous Integration – Installing TFS2010 (inc. Build and Test agents)

It’s been a while since I posted here, but my collaborators and I are actually well into writing our article series on Continuous Integration for SharePoint 2010. As Mike Morton posted a few weeks back on the official SharePoint Developer blog, the rest of the series will now be published there. When each article is published I’ll post a quick summary here and update links in my kick-off post.

Anyway, the main news is that the 2nd article in the series has been live there for a few days – see Configuring a TFS Environment with Test Controller, Test Agent, and Build Server (Kirk Evans).

Kirk (a Microsoft Principal PFE) did a great job. MSDN is referenced throughout (in case you need more detail), but the article is easy to digest and makes the ideal starting point for initial installation and configuration (we think). The article shows you how to:

  • Create appropriate service accounts for TFS
  • Create a distributed TFS 2010 environment, where the build server(s) are separate from other services such as source control (recommended topology)
    • N.B. If you already have TFS 2010 for source control and just want to get started with automated builds, you can skip to the ‘Install the Build Service on the TFSBuild Machine’ section and pick up there
  • Install and configure the TFS build service (build controller/build agent)
  • Install and configure Visual Studio 2010 test infrastructure (test controller/test agent)
  • Configure optional TFS functionality such as SQL Reporting Services and SharePoint site/dashboard for tracking velocity and issues
  • Verify configuration by creating a test build

That link again is Configuring a TFS Environment with Test Controller, Test Agent, and Build Server (Kirk Evans).

The next article will be “Creating your first TFS Build Process for SharePoint projects”. This should be published on August 25th, and discusses creating a build definition to build WSP files (and more).

Wednesday, 15 June 2011

SP2010 Continuous Integration–pt 1: Benefits

On the back of my talk at the SharePoint Best Practices Conference, this is the first article in a series on implementing automated builds (aka Continuous Integration) in SharePoint development projects – specifically using Team Foundation Server 2010. In actual fact, the rest of the series may appear on a Microsoft blog (or a whitepaper) rather than this one, but some Microsoft folks and I are still working through the detail on that and I wanted to at least discuss the contents and get the thing kicked off here. With this particular topic, I could imagine that some folks will dismiss it as being irrelevant to them (and not read anything in this series), when in fact there may be pieces which would work for them and can be implemented fairly easily. Let’s be clear though – Continuous Integration (CI) is probably best suited to projects with the following characteristics:

  • Development-oriented – perhaps with more than, say, 3 Visual Studio projects
  • Multiple developers
  • Fairly long-running (e.g. > 2-3 months; to give the team a chance to implement CI alongside the actual deliverables)

The original plan for this blog series/whitepaper (which may change as Kirk Evans, Mike Morton and I evolve it) looks something like this:

  1. CI benefits - why do it? (this post)
  2. TFS 2010 Build installation/config
  3. Creating your first TFS build process for SharePoint projects
  4. Implementing assembly versioning
  5. Using PowerShell to deploy WSPs from build output
  6. Running coded UI tests as part of a build
  7. Integrating tools such as SPDisposeCheck and Code Analysis into the build

Benefits of Continuous Integration

Although CI has become a fairly standard approach in the plain .Net world, it’s fair to say that SharePoint brings some additional considerations (as usual) and things are slightly more involved. Consider that in .Net the output is often either an executable or a website, which can typically be deployed with XCOPY. Yet SharePoint has Solutions and Features, all the logical constructs such as web applications and site collections, and is often heavily “state-based” (e.g. activate a Feature, then create a site, then create a list and so on). So CI is more involved in many cases, but all the feedback I hear tells me more and more people want to adopt it, and rightly so. Here are some of the benefits as I see them:

  • Consistent builds
    • E.g. no human error leading to debug (not release) build assemblies
  • Automatically versioned assemblies
  • Ability to track code versions in production back to release labels in source control
  • Less time spent by developers keeping their environments up-to-date (this alone can be huge)
  • Team cohesion through build notifications
    • Upon each build (e.g. after every check-in), everyone on the team sees a pop-up in the system tray letting them know if the build is currently passing/failing
  • Automated testing
    • Once WSPs have been deployed, run a series of tests (e.g. unit tests/UI tests/load tests) to check what would be deployed to the customer
    • Also run checks on the code (e.g. code analysis, SPDisposeCheck) to keep on top of any code smells from that day

What it looks like

Different ‘styles’ of CI

Once you have the capability to use TFS Build (remember we’ll talk about topologies/install next post), one of the first steps is to establish what you’re trying to achieve with CI. In my mind, there are two broad styles of Continuous Integration:

Style | Best suited for
“Rebuild everything” | Pre-live
“Build current sprint against production state” | Post-live (or frequent delivery)

The former is typically simpler than the latter, so conveniently that’s the style I’m focusing on in this series (though I’m really discussing the core mechanics which apply to both styles). That said, let’s discuss the more complex “building against the current state” type for a moment. This model is probably the best choice if your solution is in production but still evolving – after all, would you really want to focus all your effort on testing the “build everything from scratch” process when you might only do that in a disaster recovery situation? Probably not. In this model the idea of state is clearly very important and, unsurprisingly, the best way of dealing with this is to use virtual machine snapshots. The idea is that step 1 of the build process rolls the target machine back to a snapshot which is a reflection of the current state of production (same code versions, solutions/features etc.) - from there, your build process deploys the resulting WSPs to this target and you get to see whether your current work will deploy to/run in production successfully.

Microsoft recognize this as an important scenario and the ‘Lab Management’ capability in TFS Build is designed to help. It provides special ‘activities’ for the build workflow to rollback/take snapshots – these can be dropped into the workflow (N.B. we’ll discuss the build workflow in article 3) and there’s also a wizard UI to help you select available snapshots etc. Now, there is a catch – obviously there’s a wide range of virtualization platforms out there these days, and Microsoft only go the extra mile to make it easy with their own; specifically this requires Hyper-V with System Center Virtual Machine Manager. The good news is that it shouldn’t be too difficult to achieve the same result with another platform – less slick perhaps, but use of an InvokeProcess activity which calls out to the command-line (e.g. something like vmrun revertToSnapshot C:\VMs\MyVM.vmx MySnapshotName for VMware) should do the trick just fine, as sketched below.
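To illustrate, a custom workflow activity wrapping that command-line call might look something like this (a sketch under my own naming – the activity and its arguments are hypothetical, not part of the starter pack):

using System;
using System.Activities;
using System.Diagnostics;

// Hypothetical build workflow activity: reverts a VMware VM to a named
// snapshot by shelling out to vmrun.exe, for use where Lab Management
// (Hyper-V/SCVMM) isn't an option
public sealed class RevertVmwareSnapshot : CodeActivity
{
    [RequiredArgument]
    public InArgument<string> VmxPath { get; set; }

    [RequiredArgument]
    public InArgument<string> SnapshotName { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        string args = string.Format("revertToSnapshot \"{0}\" \"{1}\"",
            this.VmxPath.Get(context), this.SnapshotName.Get(context));

        using (Process process = Process.Start("vmrun.exe", args))
        {
            process.WaitForExit();
            if (process.ExitCode != 0)
            {
                throw new InvalidOperationException(
                    "vmrun failed with exit code " + process.ExitCode);
            }
        }
    }
}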

Configuring the build (quick overview)

To get started, you create a new build definition in Visual Studio Team Explorer, and configure which Visual Studio projects/solutions should be built. Many other settings live here too, such as whether code analysis should be performed during the build:

BuildDef_Process

From there, we need to select when builds happen – manually, on every check-in, every night and so on:

BuildDef_Trigger 

In TFS 2010 Build, the actual steps which happen during a build (e.g. retrieving files from source control, compiling, deploying, testing etc.) are configured as a .Net 4.0 workflow. For SharePoint builds, we need to make some modifications to the sample builds which come with TFS – this can take some figuring out but I’ll post mine as a potential starting point in article 3 ‘Customizing the build workflow for SharePoint builds’:

BuildDef_WorkflowZoomedIn 

By this point, each build will compile assemblies from the latest code, generate WSP packages and copy them to a remote SharePoint box (assuming you’re not using an ‘all-in-one’ topology). We now need something to actually deploy/upgrade the WSPs in SharePoint, and perhaps do some other steps such as creating a test site – the build workflow will hand off to a PowerShell script to do this. I’ll supply my PowerShell script later in the series and discuss the mechanisms of passing data from the workflow to the script, and collecting the success/failure return value – for now it’s just important to realize that a big part of automated builds is just standard PowerShell which you might already be using, and that every implementation will vary somewhat here depending on what you’re building for your client.

So by now, we would have a working build – WSPs are being built and deployed automatically, and any test site is being recreated to check the validity of the build. Now we can think about some automated tests to help us verify this. After all, we might be able to get a WSP deployed but is the actual functionality working or would a user see an error somewhere? Is there any way we can establish this without a tester manually checking each time a build completes?

Running automated tests (e.g. UI tests) as part of build

Now, it’s not that I’m against unit testing or anything….heck we even have some in our current project! But it definitely feels like this area is still challenging enough with SharePoint that most projects don’t do it, e.g. due to the reliance on mocking frameworks. I’m thinking more and more that the UI testing capabilities in Visual Studio 2010 offer a great alternative – coded UI tests can simulate a user on the website by using the controls on the page (e.g. navigation, buttons, a form, whatever) in exactly the same way. Sure, it’s not a unit test, and ideally you’d have both, but it does seem to have a much lower barrier to entry than unit testing – and with a few tests you could get good real-life test coverage pretty quickly. Here’s what it looks like – first the test gets recorded in the browser (a special widget appears):

CodedUITest_AddAssertion2

After pressing some buttons to imitate a specific user action (something I want to test), I then add an assertion to check that something is present on the page. In this case, I’ve pressed a ribbon button which has done something, and the assertion is testing that the green status bar is visible with a specific message, meaning the action was successful. If the action wasn’t successful, the green status message would not be present and the test would fail. What’s interesting about this is what’s behind the ribbon button – it’s some functionality developed for our current client around the social features of SharePoint, and a lot is happening there. The button calls into some jQuery which calls an HTTP handler, which invokes a custom service application which talks to the data access layer, which then writes a record to a custom SQL database. You see what I mean about integration tests rather than unit tests?

But it does mean I can test a lot of code with minimal effort. There are some things I need to guard against – like the fact that the UI could return a positive when something underneath failed, but one thing which can mitigate this (in addition to simply recording many tests) is the fact that the tests are generated as .Net code, meaning I can supplement them with other checks if required. The cool part, of course, is that it’s very easy to integrate these tests into the automated build – if tests like these are automatically running every night/every few hours, you get to find regression bugs very quickly.
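To give a feel for it, the generated test class is broadly shaped like this (the UIMap methods below are hypothetical stand-ins for whatever actions/assertions you record):

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class RibbonButtonTests
{
    [TestMethod]
    public void ClickingRibbonButton_ShowsGreenStatusBar()
    {
        // Recorded action: press the custom ribbon button
        this.UIMap.ClickSubscribeRibbonButton();

        // Recorded assertion: the green status bar appears with the expected message
        this.UIMap.AssertSuccessStatusBarIsVisible();

        // Since this is plain .Net code, hand-written checks can be added
        // alongside the recorded ones here
    }

    // UIMap is the designer-generated class holding the recorded steps;
    // instantiated lazily as per the generated template
    private UIMap map;
    public UIMap UIMap
    {
        get { return this.map ?? (this.map = new UIMap()); }
    }
}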

Getting the payback - finding the cause of a failed build

So if we had automated builds in place, what happens when a bug is checked in? How do we find out? Well, that might depend upon the specifics of the bug and the tests you have in place, but let’s work through an example. Let’s say there’s an issue in the DAL in the example used above – we could imagine that there’s a mismatch of stored procedure parameters for example, but for simplicity I’m just going to add a dummy exception to the method which is behind the ribbon button:

throw new ApplicationException("Dev did something silly");

The developer checks in because everything builds/packages fine, and y’know, because he/she has been testing throughout and that little last change couldn’t possibly have broken anything! So the build runs - if you’re in the office at this time (you might not be if it’s configured to run nightly), then team members have the option of seeing a system tray notification pop-up whenever a build succeeds or fails:

BuildNotificationFailure

The main entry point is the build report – this gives us a view onto any build warnings/errors and a summary of test results:

BuildResultsSummary

There is also a more detailed log, and if UI tests are included then you’ll see the result of these – note the ‘Error Message’ section below shows that one of our asserts failed in a test:

CodedUITest_Result1

That’s useful, but we kinda need more detail to understand why we have a bug. Depending on how the build is configured, we should see links to some files at the bottom of the build report.

CodedUITest_Result6

Let’s walk through these files in turn:

  1. A .png file – this is a screenshot of what the UI looked like at the time of the failing test. This is incredibly useful, and we can see that the UI did not show the green success bar – so we get an understanding of how the bug manifested itself in the UI:

    AutomatedBuild_FailedUITest2 
  2. An XML file – this is data collected from the Windows event log at the time of the failing test. In many cases, this is how you’ll find the actual bug – we can clearly see the class/method and stack trace of where the problem occurred:

    CodedUITest_Result3
    Note that this does require that in your code you’re reporting exceptions to the event log – we’re using the SharePoint Patterns & Practices libraries to do this. You might extrapolate that what the SharePoint world probably needs is a ULS data collector – I’ve mentioned this to Microsoft, and folks seemed to agree that could be a good idea. In any case, we now have a screenshot of the failed test and event log data which should locate our bug, woohoo! But it doesn’t stop there – other files captured include…
  3. An iTrace file – this is Visual Studio 2010’s incredible IntelliTrace feature. IntelliTrace is a ‘historical debugger’, which allows a developer to step into a debugging session even if he/she wasn’t there at the start of it. In this context, it’s used from a test result – the image below doesn’t quite show it, but I can select an exception which occurred during a UI test (or any other kind of test for that matter) and then press a ‘Start debugging’ button. This will take me into a ‘live’ debugging session where all variables will have the values they did during the test, and I can step through line-by-line. And this is despite the fact that the test ran at 3am from an overnight build – so no worries that a dev will not be able to reproduce an issue the build encountered:    

    AutomatedBuild_IntelliTrace
  4. A .vsp file – this contains the results of a code profiling session, where the code which executed during the tests was evaluated for performance. The first image below shows me ‘hot paths’ in the codebase i.e. where we spent a lot of time, and therefore could consider optimizing:

    CodedUITest_Result7

    This tells me that our WeatherWebPart class and some code in our ActivityFeedSubscription namespace might benefit from some attention. Drilling down, we can see some graphical representations of individual methods:

    CodedUITest_Result9

Going further still – possible things to include in the build

The last area, code profiling, wasn’t so much about identifying the bug as perhaps a daily check on any new alarm bells in our codebase. Expanding that theme further, consider the following as things we could fairly easily include in an automated build:

  • Unit tests (of course)
  • SPDisposeCheck
  • Code analysis (FxCop)
  • Documentation builds
  • Creating TFS work items from build/test failures
  • Etc.

Really, the sky’s the limit given that you can shell out to PowerShell or the command-line during the build.

Summary

Although fairly common in the .Net world, SharePoint projects which do automated builds are fairly rare - arguably due to other complexities that come with developing with SharePoint. Team Foundation Server 2010 makes build automation easier to achieve than previously with the Microsoft stack, and has superb integration with related Visual Studio capabilities such as testing. Implementing these techniques can make a big difference on SharePoint development projects. Although this post presents an overview of the end result, a couple of Microsoft folks and I are working on detailed content (roughly following the article series listed at the beginning) – when I know the final home for this content (blog/whitepaper), I’ll keep you updated and will amend this post with appropriate links.

Thursday, 26 May 2011

My jQuery, AJAX and SharePoint 2010 slide deck (SUGUK)

I attended a great SharePoint User Group UK meeting last night – compared to the London event it was notable how many people had travelled significant distances to be there, great dedication! The first session was me (doing the first dev-focused session at this event) and the second was a panel Q & A discussion with some excellent conversations. The topic for my talk was ‘SharePoint, jQuery and AJAX - a beginner's survival guide’, and although I’ve given this talk before I want to publish the deck and code samples again as I made some updates since the last time. In fact, it’s worth calling out one of these in particular here I think:

  • If using jQuery with SharePoint 2010, ALWAYS put jQuery into ‘no conflict mode’ via jQuery.noConflict(). This is necessary because SharePoint’s internal JavaScript uses the $ symbol as a variable name in a couple of places, and this causes clashes since it’s the alias used by jQuery

The deck has information on my ‘3 core techniques’ for jQuery/AJAX apps with SharePoint, and also tips and tricks like how to get Intellisense for jQuery and the SP2010 Client Object Model, tools for debugging AJAX apps etc. The code samples cover a fairly wide range of things:

  • jQuery
    • Showing/hiding elements
    • Setting the HTML of an element
    • Cascading dropdowns
    • AJAX requests
  • Client Object Model
    • Fetching simple data
    • Implementing a “type-ahead filtering” sample against the documents in a document library
    • Creating data e.g. a new list item
    • Techniques for reducing the data going over the wire (by 95% in my example!)
  • jQuery + HTTP handlers
    • Why/how
    • Returning simple data
    • Returning complex data as JSON

You can download my slide deck and code samples from:

http://dl.dropbox.com/u/11342240/ChrisOBrien_jQueryAJAXSP2010_SamplesAndDeck.zip

Big thanks to Mark Stokes for hosting the event and inviting me to speak.

Tuesday, 10 May 2011

Speaking at SUGUK (Manchester) on jQuery, AJAX and SP2010–24th May

In a couple of weeks (24th May) I head north to speak at the North West branch of the UK SharePoint user group – I’m looking forward to it for several reasons, not least because I come from Manchester originally. As usual I’m talking about SharePoint development matters, but hopefully the evening will have fairly broad SharePoint appeal as the 2nd session will be an open Q&A session with myself, Brett Lonsdale, Mark Stokes, Sam Dolan and Alex Pearce on the panel. This means we’ll be strong on topics such as development, architecture, BCS, tools, branding/design, governance, end-user adoption, education and Office 365 etc., but there should also be enough combined expertise to answer an even wider range of questions. It should be an interesting discussion. 

My talk is aimed at devs, but I’m also hoping it’s interesting in user experience terms to a wider audience. Here’s the summary:

SharePoint, jQuery and AJAX - a beginner's survival guide

It takes a different approach to build SharePoint apps with a slick AJAX experience - getting rid of those nasty postbacks involves declining some help from .Net and learning new techniques. Whilst the answers often lie with the Client Object Model and jQuery, both are hefty topics so the barrier can be intimidating. This talk aims to boil things down to "3 core techniques to survive by", with step by step demos to show the development process. Tips and tricks such as enabling jQuery intellisense, debugging JavaScript and using JSON will be covered. To pull the concepts together, the session will round off with a demo migration of an existing mini-application from postbacks to AJAX. Building custom SharePoint experiences that users love is simpler than you think!

I’ve given this talk before (at SharePoint Saturday UK), but it’s taken on another dimension for me since then as I’ve spent the last 6 months using these techniques day in, day out for my current project. I’m extremely glad I spent time getting to grips with jQuery and AJAX, my career would probably be in the toilet otherwise. Hopefully my talk will be useful information to novices and experienced devs alike.

The event is free and on Tuesday 24th May, at the Palace Hotel Manchester. Sign up here - http://suguk.org/forums/26548/ShowThread.aspx

Wednesday, 20 April 2011

Automated SharePoint builds/UI testing–slide deck

It took a few days as I wanted to add screenshots from the demos, but I’ve now published my deck from my talk at the SharePoint Best Practices Conference. I had a great time presenting, and although the prep for this talk was harder than others I’ve done I was happy with how it turned out. I think this subject is starting to get a lot more attention, and you know there’s interest in the topic when you end up giving a repeat demo in the speaker room to fellow speakers who couldn’t attend the talk. The talk is oriented around Team Foundation Server 2010 for the build platform. Here’s a summary of one of the demos:

  • Developer checks in (a breaking change to the data access layer in my demo) and an automated build is triggered
  • Assemblies and WSPs are built (in release mode) and deployed to remote SharePoint server
  • Once deployment is complete, the app pool is warmed up and we start hitting the site with automated UI tests (a feature of Visual Studio 2010 Premium and above)
  • If a test fails, the build is a failure and all developers are notified (via the TFS build notification tool)
  • Build manager/developer checks build report and sees:
    • Screenshot of failing UI test
    • Critical entries from the event log at the time
  • >> Build manager/developer now understands why latest code change broke the build
  • “Added value” bonus demo
    • Same process but with the following data also captured:
      • Code profiling (see which bits of code should be optimized)
      • IntelliTrace (start a debugging session at the exception hit during the test)
      • Video recording of UI tests (not just screenshot)

The idea of course, is that if this information is being surfaced every 24 hours (or perhaps even more frequently from builds throughout the day), then it’s easy to quickly identify problems as code is written. This can lead to a greater chance of success since bugs and other issues are driven out sooner, reducing the cost to fix.

ChrisOBrien_BPC_Small2

Download/view the slide deck

http://slidesha.re/gSQnyD

The future

I got a lot of positive feedback on the talk and it convinced me to do a blog series on this subject. Whilst I know some folks will instantly dismiss this as not being relevant to them, my take is that any SharePoint 2010 project which has some custom code (say above 3 Visual Studio projects) and has Team Foundation Server should at least do the “first level” of automating the build (building assemblies and WSPs) – the build itself takes minutes to configure (though a build server is a prerequisite), and can bring significant benefits.

Also, it looks like I’ll be working with Microsoft (Kirk Evans in particular) on an MSDN whitepaper on the topic. This will be a more formal treatment of many of the principles/techniques I plan to outline in my blog series, and will hopefully become a valuable resource to those looking to implement automated builds.

Wednesday, 23 March 2011

Speaking on nightly SharePoint builds and UI testing - European SharePoint Best Practices Conference 2011

I’m privileged to again be part of this event – it kinda seems strange because it’s on my doorstep here in London, but for SharePoint material this conference is probably only bettered by the official SharePoint Conference held by Microsoft in the U.S. (my opinion of course). The speaker list again consists of the biggest names in SharePoint (and me!), and probably the biggest issue an attendee could face is wanting to go to two sessions simultaneously – trust me, there are worse problems to have with a conference.

After doing 3 sessions last year, I get to focus on just 1 this time so I wanted to make it special. I ended up somewhere in the ALM (Application Lifecycle Management) space again, but I’m hoping it’s something different from some of the other dev sessions (and other talks I’ve given recently). Here’s the abstract:

From good development to great – nightly builds and UI testing with SharePoint 2010 and Team Foundation Server 2010

So you (or your dev team) know your way around the SharePoint API, but deployments are still painful and there are quality issues. Maybe you looked at unit testing SharePoint, but didn't yet manage to fully adopt it. This session looks at how Visual Studio Team Foundation Server can help SharePoint projects, specifically with automated WSP builds and VS2010 UI testing (which can have a much lower barrier to entry than unit testing). When a few of these capabilities are strung together, the results are incredible for dev teams. Over several demos, we'll cover how to get started with automating the build, deploying the resulting WSPs to a remote SharePoint machine, then automatically running UI tests against the site. Part case study, the session will use an innovative SP2010 social/collab implementation (in production at Tesco) as the test bed – with ribbon customizations, a custom service application, and activity feed extensions thrown into the mix.

Hopefully the session will be interesting to anyone who at some point has said “we should get into automated builds”, or indeed, anyone interested in building the kind of social SP2010 intranet we’re building. Note my colleague Wes Hackett also has a community session at the conference on this project.

If you haven’t yet signed up for the Best Practices Conference but are considering it, I highly encourage you to go ahead. Read more on the conference site - European SharePoint Best Practices Conference 2011

Tuesday, 15 March 2011

Optimizing SharePoint 2010 internet sites – slide deck

Last week I gave a talk at Microsoft on this topic, and I thought the slide deck was worth posting. You may remember I recently wrote about Eliminating large JS files to optimize SharePoint 2010 internet sites – my talk mentioned this approach but also had much wider coverage on other techniques to use. Some of the material came from recent experiences of optimizing my current employer’s site (www.contentandcode.com) and also a fairly large client’s social/collab platform – more and more I find that if code and infrastructure aren’t in a really bad place it’s “page-level” optimization which yields the most benefit, and this was the focus of the talk.

I also talked about Aptimize – you might know this product from its use on http://sharepoint.microsoft.com. Aptimize automates many of the optimizations I discussed, and I find it pretty interesting as a product. I’ve done some initial testing with it on a site I’ve been involved with and got good results – I’m hoping to look into it further on behalf of a client, and will most likely post further info here.

My slide deck also has some information on:

  • How infrastructure bottlenecks may change in the internet scenario (vs. intranet/collab)
  • How to determine infrastructure bottlenecks
  • Measuring page load times
  • Load testing

You can see the deck on SlideShare at http://slidesha.re/gpAaRc

UPDATE – if you’re interested in performance, I should mention I recorded a SharePoint Podshow episode on the subject with Rob Foster whilst in Redmond recently. It’s not yet published (I’ll update this post when it is) – I talked in some detail about the kind of attention we paid on a client project recently, including around completely “non-code” aspects like disk performance, optimization for SQL’s Temp DB and content database distribution. I’ve no idea how it worked out as an interview, but Rob certainly knows his stuff on performance too so it was a great chat! Look out for it soon.

Thursday, 17 February 2011

CAML.Net Intellisense – now with added Feature upgrade schema

Although many SharePoint developers working with SP2010 are now aware of how useful (indeed, vital) CKS:Dev is, I fear that far fewer are also using John Holliday’s excellent CAML.Net Intellisense. I love this project so much I decided to contribute. If you’re not aware of the tool, the idea is to make it easier to work with the many XML files involved with SharePoint development. It does this by providing detailed documentation “as you type” on the many XML attributes and elements, which so often you’d have open in a separate browser window onto the MSDN docs. There are two huge benefits here:

  • The documentation is generally much more detailed than MSDN. It’s clear that John has spent hours on this.
  • The documentation is right there in Visual Studio, right where you’re working.
    • Additionally, many nodes have direct links into the corresponding detail page on MSDN – so if you do want more detail, you don’t even have to Google/Bing it.

CAML.Net Intellisense existed for SharePoint 2007 development, but John has done a great job rolling things forward for 2010. The tool is now WPF-based and provides a rich (and striking) presentation of the documentation – here’s me working in a file declaring a list instance:

Caml.Net.Intellisense_ListInstance

The tool works across most aspects of SharePoint XML schema, so you get support for declaring Features, fields, content types, modules and so on. Although I’ve typed the first few characters in the screenshot above, the dialog actually appears as soon as you hit space so it’s great for discovering which attributes/child elements hang off the current node.

I did some beta testing on the tool before it was released, and noticed some areas such as the Feature upgrade schema which didn’t have documentation. John was open to me adding these, so after a couple of nights of XSD editing the Intellisense is now useful here too - here’s me editing a Feature.xml file:

Caml.Net.Intellisense_ApplyElementManifests

..and showing some other elements:

Caml.Net.Intellisense_AddContentTypeField

Caml.Net.Intellisense_CustomUpgradeAction

Don’t stop me now!

If I was reading this post, I’d be saying “Yeah, you know this type of thing is great and everything, but they always slow Visual Studio down. And that makes me mad!”. John mentioned in an e-mail that a design goal for CAML.Net Intellisense was to have zero performance impact on the developer experience – it’s astonishingly fast, and just doesn’t get in the way at all. On that basis, I see no reason why every SharePoint developer shouldn’t install it as a core tool. John’s done a great job and I’m happy to contribute in some minor way. In the future I might look at the giant undertaking of documenting the ribbon schema, but secretly I’m hoping Wictor (or someone else) gets there before me ;)

CAML.Net Intellisense download

Wednesday, 9 February 2011

Repost - SP2010 AJAX part 8: Migrating existing apps to jQuery/AJAX

Special post – if you read my blog through an RSS reader, part 8 of this series may not show up due to a Feedburner screw-up. This is just to let you know that the article is on my site if you want it.

Thursday, 20 January 2011

Eliminating large JS files to optimize SharePoint 2010 internet sites

Back in the SharePoint 2007 timeframe, I wrote my checklist for optimizing SharePoint sites – this was an aggregation of knowledge from various sources (referenced in the article) and from diagnosing performance issues for my clients, and it’s still one of my more popular posts. Nearly all of the recommendations there are still valid for SP2010, and the core tips like output caching, BLOB caching, IIS compression etc. can have a huge impact on the speed of your site. Those who developed SharePoint internet sites may remember that suppressing large JavaScript files such as core.js was another key step, since SharePoint 2007 added these to every page, even for anonymous users. This meant that the ‘page weight’ for SharePoint pages was pretty bad, with a lot of data going over the wire for each page load. This made SharePoint internet sites slower than they needed to be – anonymous users didn’t actually need core.js (it facilitates editing functionality typically only needed for authenticated users), and indeed Microsoft published a workaround using custom code here.
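From memory, the gist of that SP2007-era workaround was along these lines – registering a client script block under the key “core.js” convinces SharePoint the file is already on the page, so it skips emitting it for anonymous users (an illustrative sketch only; see Microsoft’s article for the supported version):

using System;
using System.Web.UI;

public class CoreJsSuppressor : Control
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        // Only suppress core.js for anonymous users - authenticated users
        // (e.g. content editors) still need it
        if (!this.Page.Request.IsAuthenticated)
        {
            this.Page.ClientScript.RegisterClientScriptBlock(
                typeof(CoreJsSuppressor), "core.js", string.Empty, true);
        }
    }
}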

The SP2010 problem

To alleviate some of this problem, SharePoint 2010 introduces the Script On Demand framework (SOD) – this is designed to only send JavaScript files which are actually needed, and in many cases can load them in the background after the page has finished loading. Additionally, the JavaScript files themselves are minified so they are much smaller. Sounds great. However, in my experience it doesn’t completely solve the issue, and there are many variables such as how the developers reference JavaScript files. I’m guessing this is an area where Your Mileage May Vary, but certainly on my current employer’s site (www.contentandcode.com) we were concerned that SP2010 was still adding some heavy JS files for anonymous users, albeit some of them apparently after page load thanks to SOD. Some of the bigger files were for ribbon functionality, and this seemed crazy since our site doesn’t even use the ribbon for anonymous users. I’ve been asked about the issue several times now, so clearly other people have the same concern. Waldek also has an awesome solution to this problem involving creating two sets of master pages/page layouts for authenticated/anonymous users, but that wasn’t an option in our case.
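As a brief aside for context, this is roughly what a server-side SOD registration looks like. This is a minimal sketch rather than anything from my actual solution – the namespace, control name and script filename below are made up for illustration, and I believe the ScriptLink.RegisterOnDemand overload shown is available in SP2010, but do check it against your build:

using System;
using System.Web.UI;
using Microsoft.SharePoint.WebControls;

namespace COB.Samples // hypothetical namespace, for illustration only
{
    /// <summary>
    /// Sketch: registers a custom JS file with the Script On Demand (SOD)
    /// framework instead of emitting a plain script tag, so the file is only
    /// fetched when client-side code actually asks for it.
    /// </summary>
    public class SodRegistrationExample : Control
    {
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);

            // "MyScript.js" is a placeholder - once registered this way, SOD
            // serves the file on demand (e.g. when JavaScript calls
            // SP.SOD.executeFunc against it) rather than on every page load.
            ScriptLink.RegisterOnDemand(this.Page, "MyScript.js", false);
        }
    }
}

The point is that the end result depends on every control doing this consistently – one eager registration anywhere on the page and the file goes over the wire regardless.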

N.B. Remember that we are primarily discussing the “first-time” user experience here – on subsequent page loads, files will be cached by the browser. However, on internet sites it’s the first-time experience that we tend to care a lot about!

When I use Firebug, I can see that no less than 480KB of JavaScript is being loaded, with an overall page weight of 888KB (and consider that, although this is an image-heavy site, it is fairly optimized with sprite maps for images etc.):

[Screenshot: Firebug timeline without the large JS files suppressed]

If we had a way to suppress some of those bigger files for anonymous users entirely, we’d have 123KB of JavaScript with an overall page weight of 478.5KB (70% of it now being the images):

[Screenshot: Firebug timeline with the large JS files suppressed]

But what about page load times?

Right now, if you’ve been paying attention you should be saying “But Chris, those files should be loading after the UI anyway due to Script On Demand, so who cares? Users won’t notice!”. That’s what I thought too. However, this doesn’t seem to add up when you take measurements. I thought long and hard about which tool to measure this with – I decided to use Hammerhead, a tool developed by highly-regarded web performance specialist Steve Souders of Google. Hammerhead makes it easy to hit a website, say, 10 times, then average the results. As a sidenote, Hammerhead and Firebug do reassuringly record the same page load time – if you’ve ever wondered about this in Firebug, it’s the red line which we care about. Mozilla documentation defines the blue and red lines (shown in the screenshots above) as:

  • Blue = DOMContentLoaded. Fired when the page's DOM is ready, but the referenced stylesheets, images, and subframes may not be done loading.
  • Red = load. Use the “load” event to detect a fully-loaded page.

Additionally, Hammerhead conveniently simulates first-time site visitors (“Empty cache”) and returning visitors (“Primed cache”) - I’m focusing primarily on the first category. Here are the page load times I recorded:

Without large JS files suppressed:

[Screenshot: Hammerhead page load statistics without the large JS files suppressed]

With large JS files suppressed:

[Screenshot: Hammerhead page load statistics with the large JS files suppressed]

Reading into the page load times

Brief statistics diversion - I suggest we consider both the median and the average (arithmetic mean) when comparing, in case you disagree with my logic on this (there’s a small illustration of the difference below). Personally I think we can use the average: we might have outliers, but that’s fairly representative of any server and its workload. Anyway, by my maths the differences (using both measures) for a new visitor are:

  • Median – 16% faster with JS suppressed
  • Average – 24% faster with JS suppressed

Either way, I’ll definitely take that for one optimization. We’ve also shaved something off the subsequent page loads, which is nice.
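To make the median/average point concrete, here’s a tiny console sketch showing how the two measures respond differently to an outlier. The timing values below are hypothetical and purely for illustration – the real figures are in the screenshots above:

using System;
using System.Linq;

class LoadTimeStats
{
    static void Main()
    {
        // Hypothetical page load times in milliseconds - illustration only
        double[] samples = { 1100, 1180, 1250, 1320, 2400 };

        // The single slow outlier (2400ms) drags the average up noticeably...
        double average = samples.Average();

        // ...but barely moves the median, which is why quoting both is safer
        double[] sorted = samples.OrderBy(s => s).ToArray();
        double median = sorted.Length % 2 == 1
            ? sorted[sorted.Length / 2]
            : (sorted[(sorted.Length / 2) - 1] + sorted[sorted.Length / 2]) / 2.0;

        Console.WriteLine("Average: {0}ms, Median: {1}ms", average, median);
        // Output: Average: 1450ms, Median: 1250ms
    }
}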

The next thing to consider here is network latency. The tests were performed locally on my dev VM – this means that in terms of geographic distance between user and server, it’s approximately 0.0 metres, or 0.000 if you prefer that to 3 decimal places. Unless your global website audience happens to be camped out in your server room, real-life conditions would clearly be ‘worse’, meaning the benefit could be greater than my stats suggest. This would especially be the case if your site has visitors on a different continent to the servers, or if users otherwise have slow connections – in these cases, page weight is accepted to be an even bigger factor in site performance than usual.

How it’s done

The approach I took was to prevent SharePoint from adding the unnecessary JS files to the page in the first place. This is actually tricky because script references can originate from anywhere (user controls, web parts, delegate controls etc.) – however, SharePoint typically adds the large JS files using a ClientScriptManager or ScriptLink control, and both work the same way. Controls on the page register which JS files they need during the page init cycle (early), and then the respective links get added to the page during the prerender phase (late). Since I know that some files aren’t actually needed, we can simply remove registrations from the collection (it’s in HttpContext.Current.Items) before the rendering happens – this is done via a control in the master page. The bad news is that some reflection is required in the code (to read, not write), but frankly we’re fine with that if it means a faster website. If you’re interested in why, it’s because what’s stored in HttpContext.Current.Items is not a collection of strings, but of Microsoft.SharePoint.WebControls.ScriptLinkInfo objects (an internal type).

Control reference (note that the files to suppress are configurable):

<!-- the SuppressScriptsForAnonymous control MUST go before the ScriptLink control in the master page -->
<COB:SuppressScriptsForAnonymous runat="server" FilesToSuppress="cui.js;core.js;SP.Ribbon.js" />
<SharePoint:ScriptLink language="javascript" Defer="true" OnDemand="true" runat="server"/> 
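As with any custom control, the markup above assumes the usual plumbing is in place: an @Register directive at the top of the master page mapping the COB tag prefix to the COB.SharePoint.WebControls namespace and its assembly, and a SafeControl entry in web.config for that assembly (normally added for you by the WSP package).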

The code:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Reflection;
using System.Web;
using System.Web.UI;
 
namespace COB.SharePoint.WebControls
{
    /// <summary>
    /// Ensures anonymous users of a SharePoint 2010 site do not receive unnecessary large JavaScript files (slows down first page load). Files to suppress are specified 
    /// in the FilesToSuppress property (a semi-colon separated list). This control *must* be placed before the main OOTB ScriptLink control (Microsoft.SharePoint.WebControls.ScriptLink) in the 
    /// markup for the master page.
    /// </summary>
    /// <remarks>
    /// This control works by manipulating the HttpContext.Current.Items key which contains the script links added by various server-side registrations. Since SharePoint uses sealed/internal 
    /// code to manage this list, some minor reflection is required to read values. However, this is preferable to end-users downloading huge JS files which they do not need.
    /// </remarks>
    [ToolboxData("<{0}:SuppressScriptsForAnonymous runat=\"server\" />")]
    public class SuppressScriptsForAnonymous : Control
    {
        private const string HTTPCONTEXT_SCRIPTLINKS = "sp-scriptlinks";
        private List<string> files = new List<string>();
        private List<int> indicesOfFilesToBeRemoved = new List<int>();
 
        public string FilesToSuppress
        {
            get;
            set;
        }
        
        protected override void OnInit(EventArgs e)
        {
            // guard against a missing FilesToSuppress attribute, and ignore empty
            // entries caused by stray semi-colons..
            if (!string.IsNullOrEmpty(FilesToSuppress))
            {
                files.AddRange(FilesToSuppress.Split(new char[] { ';' }, StringSplitOptions.RemoveEmptyEntries));
            }

            base.OnInit(e);
        }
 
        protected override void OnPreRender(EventArgs e)
        {
            // only process if user is anonymous..
            if (!HttpContext.Current.User.Identity.IsAuthenticated)
            {
                // get list of registered script files which will be loaded..
                object oFiles = HttpContext.Current.Items[HTTPCONTEXT_SCRIPTLINKS];
                IList registeredFiles = oFiles as IList;

                // defensive: if nothing has registered a script yet, there is nothing to suppress..
                if (registeredFiles == null)
                {
                    base.OnPreRender(e);
                    return;
                }

                int i = 0;
 
                foreach (var file in registeredFiles)
                {
                    // use reflection to get the ScriptLinkInfo.Filename property, then check if in FilesToSuppress list and remove from collection if so..
                    Type t = file.GetType();
                    PropertyInfo prop = t.GetProperty("Filename");
                    if (prop != null)
                    {
                        string filename = prop.GetValue(file, null).ToString();
 
                        // check the filename against the (case-insensitive) suppression list..
                        if (files.Exists(delegate(string sFound)
                        {
                            return filename.ToLower().Contains(sFound.ToLower());
                        }))
                        {
                            indicesOfFilesToBeRemoved.Add(i);
                        }
                    }
 
                    i++;
                }
 
                int iRemoved = 0;
                foreach (int j in indicesOfFilesToBeRemoved)
                {
                    registeredFiles.RemoveAt(j - iRemoved);
                    iRemoved++;
                }
 
                // overwrite cached value with amended collection.. 
                HttpContext.Current.Items[HTTPCONTEXT_SCRIPTLINKS] = registeredFiles;
            }
            
            base.OnPreRender(e);
        }
    }
}

Usage considerations

For us, this was an entirely acceptable solution. It’s hard to say whether an approach like this would be officially supported, but it would be simple to add a “disable” switch (sketched below) to potentially assuage those concerns for support calls. Ultimately, it doesn’t feel too different to the approach used in the 2007 timeframe to me, but in any case it would be an implementation decision for each deployment and it may not be suitable for all. I’ve shared this code previously with some folks, and last I heard it was probably going to be used on a high-traffic *.microsoft.com site running SP2010 – so it was interesting for me to hear those guys were fine with it too.
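To illustrate the “disable” switch idea, here’s one way the control above could be extended – a minimal sketch only, and the Disabled property name is my own invention rather than anything from the deployed code:

        // Sketch: an extra property on the SuppressScriptsForAnonymous control,
        // settable from the master page markup (e.g. Disabled="true")..
        public bool Disabled
        {
            get;
            set;
        }

        protected override void OnPreRender(EventArgs e)
        {
            // bail out early if support staff have flipped the switch, so the
            // OOTB script links are emitted completely untouched..
            if (Disabled)
            {
                base.OnPreRender(e);
                return;
            }

            // ..existing suppression logic from the control above runs here..
        }

Flipping one attribute in the master page then restores out-of-the-box behaviour, which should make “reproduce without customizations” conversations with support much easier.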

Additionally, you need to consider whether your site actually uses any of the JavaScript we’re trying to suppress – examples could include SharePoint 2010’s modal dialogs, status/notification bars, or the Client OM.

Finally, even better results could probably be achieved by tweaking the files to suppress (some sites may not need init.js, for example) and by extending the control to deal with CSS files too. And whether or not you go that far – test, test, test, of course.

Summary

Although there are many ways to optimize SharePoint internet sites, dealing with page weight is a key step and in SharePoint much of it is caused by JavaScript files which are usually unnecessary for anonymous users. Compression can certainly help here, but comes with a trade-off of additional server load, and it’s not easy to calculate load/benefit to arrive at the right compression level. It seems to me that it would be better to just not send those unnecessary files down the pipe in the first place if we care about performance, and that’s where I went with my approach. I’d love to hear from you if you think my testing or analysis is flawed in any way, since ultimately a good outcome for me would be to discover it’s a problem which doesn’t really need solving so that the whole issue goes away!