Thursday, 20 January 2011

Eliminating large JS files to optimize SharePoint 2010 internet sites

Back in the SharePoint 2007 timeframe, I wrote my checklist for optimizing SharePoint sites – this was an aggregation of knowledge from various sources (referenced in the article) and from diagnosing performance issues for my clients, and it’s still one of my more popular posts. Nearly all of the recommendations there are still valid for SP 2010, and the core tips like output caching, BLOB caching, IIS compression etc. can have a huge impact on the speed of your site. Those who developed SharePoint 2007 internet sites may remember that suppressing large JavaScript files such as core.js was another key step, since SharePoint 2007 added these to every page, even for anonymous users. This meant that the ‘page weight’ of SharePoint pages was pretty bad, with a lot of data going over the wire for each page load. This made SharePoint internet sites slower than they needed to be, since anonymous users didn’t actually need core.js (it facilitates editing functionality typically only needed for authenticated users), and indeed Microsoft published a workaround using custom code here.

The SP2010 problem

To alleviate some of this problem, SharePoint 2010 introduces the Script On Demand framework (SOD) – this is designed to only send JavaScript files which are actually needed, and in many cases can load them in the background after the page has finished loading. Additionally, the JavaScript files themselves are minified so they are much smaller. Sounds great. However, in my experience it doesn’t completely solve the issue, and there are many variables such as how the developers reference JavaScript files. I’m guessing this is an area where Your Mileage May Vary, but certainly on my current employer’s site (www.contentandcode.com) we were concerned that SP2010 was still adding some heavy JS files for anonymous users, albeit some of them apparently after page load thanks to SOD. Some of the bigger files were for ribbon functionality, which seemed crazy since our site doesn’t even use the ribbon for anonymous users. I’ve been asked about the issue several times now, so clearly other people have the same concern. Waldek also has an awesome solution to this problem involving the creation of two sets of master pages/page layouts for authenticated/anonymous users, but that wasn’t an option in our case.

N.B. Remember that we are primarily discussing the “first-time” user experience here – on subsequent page loads, files will be cached by the browser. However, on internet sites it’s the first-time experience that we tend to care most about!

When I use Firebug, I can see that no less than 480KB of JavaScript is being loaded, with an overall page weight of 888KB (and consider that, although this is an image-heavy site, it is fairly optimized with sprite maps for images etc.):

[Screenshot: Firebug timeline without the large JS files suppressed]

If we had a way to suppress some of those bigger files for anonymous users entirely, we’d have 123KB of JavaScript with an overall page weight of 478.5KB (70% of it now being the images):

[Screenshot: Firebug timeline with the large JS files suppressed]

But what about page load times?

Right now, if you’ve been paying attention you should be saying “But Chris, those files should be loading after the UI anyway due to Script On Demand, so who cares? Users won’t notice!”. That’s what I thought too. However, this doesn’t seem to add up when you take measurements. I thought long and hard about which tool to measure this with – I decided to use Hammerhead, a tool developed by highly-regarded web performance specialist Steve Souders of Google. Hammerhead makes it easy to hit a website say 10 times, then average the results. As a sidenote, Hammerhead and Firebug do reassuringly record the same page load time – if you’ve ever wondered about this in Firebug, it’s the red line which we care about. Mozilla documentation defines the blue and red lines (shown in the screenshots above) as:

  • Blue = DOMContentLoaded. Fired when the page's DOM is ready, but the referenced stylesheets, images, and subframes may not be done loading.
  • Red = load. Use the “load” event to detect a fully-loaded page.

Additionally, Hammerhead conveniently simulates first-time site visitors (“Empty cache”) and returning visitors (“Primed cache”) - I’m focusing primarily on the first category. Here are the page load times I recorded:

Without large JS files suppressed:

[Screenshot: Hammerhead page load statistics, large JS files not suppressed]

With large JS files suppressed:

[Screenshot: Hammerhead page load statistics, large JS files suppressed]

Reading into the page load times

A brief statistics diversion - I suggest we consider both the median and the average (arithmetic mean) when comparing, in case you disagree with my logic on this. Personally I think we can use the average: we might have some outliers, but that’s fairly representative of any server and its workload. Anyway, by my maths the differences (using both measures) for a new visitor are:

  • Median – 16% faster with JS suppressed
  • Average – 24% faster with JS suppressed

Either way, I’ll definitely take that for one optimization. We’ve also shaved something off the subsequent page loads, which is nice.
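
As an aside, if you want to run the same comparison on your own readings, here’s a minimal sketch of the calculation – the class and method names are purely illustrative and not part of the solution:

using System;
using System.Collections.Generic;
using System.Linq;

namespace COB.Samples
{
    // Illustrative helper for crunching a set of page load readings (e.g. the 10 timings
    // Hammerhead records per scenario) into the median/average figures used above.
    public static class LoadTimeStats
    {
        public static double Mean(IEnumerable<double> readingsMs)
        {
            return readingsMs.Average();
        }

        public static double Median(IEnumerable<double> readingsMs)
        {
            List<double> sorted = readingsMs.OrderBy(r => r).ToList();
            int mid = sorted.Count / 2;

            // with an even number of readings, take the mean of the two middle values..
            return (sorted.Count % 2 == 0)
                ? (sorted[mid - 1] + sorted[mid]) / 2.0
                : sorted[mid];
        }

        // percentage improvement of the 'suppressed' result over the baseline, for either measure..
        public static double PercentFaster(double baselineMs, double suppressedMs)
        {
            return (baselineMs - suppressedMs) / baselineMs * 100.0;
        }
    }
}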

The next thing to consider here is network latency. The tests were performed locally on my dev VM – this means that in terms of geographic distance between user and server, it’s approximately 0.0 metres, or 0.000 if you prefer that to 3 decimal places. Unless your global website audience happens to be camped out in your server room, real-life conditions would clearly be ‘worse’, meaning the benefit could be greater than my stats suggest. This would especially be the case if your site has visitors on a different continent to the servers, or if users otherwise have slow connections – in these cases, page weight is accepted to be an even bigger factor in site performance than usual.

How it’s done

The approach I took was to prevent SharePoint from adding the unnecessary JS files to the page in the first place. This is actually tricky because script references can originate from anywhere (user controls, web parts, delegate controls etc.) – however, SharePoint typically adds the large JS files using a ClientScriptManager or ScriptLink control, and both work the same way. Controls on the page register which JS files they need during the page init cycle (early), and then the respective links get added to the page during the prerender phase (late). Since I know that some files aren’t actually needed, we can simply remove those registrations from the collection (it’s in HttpContext.Current.Items) before the rendering happens – this is done via a control in the master page. The bad news is that some reflection is required in the code (to read, not write), but frankly we’re fine with that if it means a faster website. If you’re interested in why: what’s stored in HttpContext.Current.Items isn’t a collection of strings, but of Microsoft.SharePoint.WebControls.ScriptLinkInfo objects (an internal type).
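
For context, this is roughly how entries end up in that collection in the first place – a hypothetical web part (not part of the solution, just a sketch) registering a script dependency via the ScriptLink API:

using System;
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint.WebControls;

namespace COB.Samples
{
    // Hypothetical example only - shows the 'registration' side of the process described above.
    public class ExampleWebPart : WebPart
    {
        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);

            // registers a dependency on core.js during init - the actual <script> reference
            // is only emitted later (at prerender) by the ScriptLink control in the master page..
            ScriptLink.Register(this.Page, "core.js", false);
        }
    }
}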

Control reference (note that the files to suppress are configurable):

<!-- the SuppressScriptsForAnonymous control MUST go before the ScriptLink control in the master page -->
<COB:SuppressScriptsForAnonymous runat="server" FilesToSuppress="cui.js;core.js;SP.Ribbon.js" />
<SharePoint:ScriptLink language="javascript" Defer="true" OnDemand="true" runat="server"/> 
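
One small note: like any custom control, it needs to be registered in the master page with a TagPrefix. The directive below is illustrative only – adjust the Assembly details to match however you’ve built and deployed the DLL:

<%@ Register TagPrefix="COB" Namespace="COB.SharePoint.WebControls"
    Assembly="COB.SharePoint.WebControls, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxxxxxx" %>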

The code:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Reflection;
using System.Web;
using System.Web.UI;
 
namespace COB.SharePoint.WebControls
{
    /// <summary>
    /// Ensures anonymous users of a SharePoint 2010 site do not receive unnecessary large JavaScript files (slows down first page load). Files to suppress are specified 
    /// in the FilesToSuppress property (a semi-colon separated list). This control *must* be placed before the main OOTB ScriptLink control (Microsoft.SharePoint.WebControls.ScriptLink) in the 
    /// markup for the master page.
    /// </summary>
    /// <remarks>
    /// This control works by manipulating the HttpContext.Current.Items key which contains the script links added by various server-side registrations. Since SharePoint uses sealed/internal 
    /// code to manage this list, some minor reflection is required to read values. However, this is preferable to end-users downloading huge JS files which they do not need.
    /// </remarks>
    [ToolboxData("<{0}:SuppressScriptsForAnonymous runat=\"server\" />")]
    public class SuppressScriptsForAnonymous : Control
    {
        private const string HTTPCONTEXT_SCRIPTLINKS = "sp-scriptlinks";
        private List<string> files = new List<string>();
        private List<int> indiciesOfFilesToBeRemoved = new List<int>();
 
        public string FilesToSuppress
        {
            get;
            set;
        }
        
        protected override void OnInit(EventArgs e)
        {
            files.AddRange(FilesToSuppress.Split(';'));
 
            base.OnInit(e);
        }
 
        protected override void OnPreRender(EventArgs e)
        {
            // only process if user is anonymous..
            if (!HttpContext.Current.User.Identity.IsAuthenticated)
            {
                // get list of registered script files which will be loaded..
                object oFiles = HttpContext.Current.Items[HTTPCONTEXT_SCRIPTLINKS];
                IList registeredFiles = (IList)oFiles;
                int i = 0;
 
                foreach (var file in registeredFiles)
                {
                    // use reflection to get the ScriptLinkInfo.Filename property, then check if in FilesToSuppress list and remove from collection if so..
                    Type t = file.GetType();
                    PropertyInfo prop = t.GetProperty("Filename");
                    if (prop != null)
                    {
                        string filename = prop.GetValue(file, null).ToString();
 
                        if (!string.IsNullOrEmpty(files.Find(delegate(string sFound)
                        {
                            return filename.ToLower().Contains(sFound.ToLower());
                        })))
                        {
                            indiciesOfFilesToBeRemoved.Add(i);
                        }
                    }
 
                    i++;
                }
 
                int iRemoved = 0;
                foreach (int j in indiciesOfFilesToBeRemoved)
                {
                    registeredFiles.RemoveAt(j - iRemoved);
                    iRemoved++;
                }
 
                // overwrite cached value with amended collection.. 
                HttpContext.Current.Items[HTTPCONTEXT_SCRIPTLINKS] = registeredFiles;
            }
            
            base.OnPreRender(e);
        }
    }
}

Usage considerations

For us, this was an entirely acceptable solution. It’s hard to say whether an approach like this would be officially supported, but it would be simple to add a “disable” switch, which could help assuage that concern if you ever needed to raise a support call. Ultimately, it doesn’t feel too different to the approach used in the 2007 timeframe to me, but in any case it would be an implementation decision for each deployment and it may not be suitable for all. Interestingly, I’ve shared this code previously with some folks and last I heard it was probably going to be used on a high-traffic *.microsoft.com site running SP2010, so it was interesting for me to hear those guys were fine with it too.
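
To illustrate the kind of “disable” switch I mean (this isn’t in the control above, just a sketch, and the appSettings key name is made up), the control could check a web.config value at the start of OnPreRender and bail out, so the behaviour can be switched off without redeploying code:

// sketch only - requires a reference to System.Configuration..
protected override void OnPreRender(EventArgs e)
{
    bool disabled;
    bool.TryParse(
        System.Configuration.ConfigurationManager.AppSettings["COB.SuppressScripts.Disabled"],
        out disabled);

    // only process if the switch is off and the user is anonymous..
    if (!disabled && !HttpContext.Current.User.Identity.IsAuthenticated)
    {
        // ..existing suppression logic from the control above..
    }

    base.OnPreRender(e);
}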

Additionally, you need to consider whether your site uses any of the JavaScript we’re trying to suppress – examples could include SharePoint 2010’s modal dialogs, status/notification bars, or the Client OM.

Finally, even better results could probably be achieved by tweaking the list of files to suppress (some sites may not need init.js, for example), and by extending the control to deal with CSS files too. Either way, test, test, test of course.

Summary

Although there are many ways to optimize SharePoint internet sites, dealing with page weight is a key step, and in SharePoint much of that weight comes from JavaScript files which are usually unnecessary for anonymous users. Compression can certainly help here, but it comes with a trade-off of additional server load, and it’s not easy to calculate the load/benefit trade-off to arrive at the right compression level. It seems to me that it would be better to just not send those unnecessary files down the pipe in the first place if we care about performance, and that’s where I went with my approach. I’d love to hear from you if you think my testing or analysis is flawed in any way – ultimately a good outcome for me would be to discover it’s a problem which doesn’t really need solving, so that the whole issue goes away!

16 comments:

Mario Cortés Flores said...

good job!!!

Jeff said...

Nice write up Chris! Thanks for the detail on speed savings and how the solution works. Very interesting.

Benedict Alphonse said...

Nice Post. Chris. Adds value to development activities.

kg said...

Excellent job, great post.

Unknown said...

Excellent Post!!
If we add back init.js, it adds all the JS files that we might have excluded, as it has a reference to all the other JS files.

After removing the JS files, form submits stopped working and we also had JS errors on pages where we used the OOTB Content Editor web part to display static content. To resolve this we created another master page where we have not excluded any of the JS files, and it works. But we have to test, test and test.

Can you suggest what the dependencies are between these JS files?

Chris O'Brien said...

@Anjul,

Interesting finding - that's not the behaviour we have with our site. As you can see in the 2nd screenshot init.js is present but the other large JS files are not. My guess is you're finding a difference due to the way you are using ScriptLink controls in your site.

In terms of the dependencies between the Microsoft JS files - I don't have an established list, and in any case you could have some variance depending on which JS code runs on your pages. Test, test, test is the only way I'm afraid.

Thanks,

Chris.

Sal Carl said...

Hi Chris,
Thank you for the post. I'm trying to implement it on my master page. I have built the class and am now trying to reference it in my master page. Do I need to register a tag prefix?

Thank you
Sal

Chris O'Brien said...

@Sal,

Yes you will need a TagPrefix, just as you would when adding any control to an .aspx page.

Cheers,

Chris.

Christophe said...

hi Chris,

I'd like to follow your advice, but - as expected - I get error messages when removing core.js. Any advice on a clean process to remove js files when starting from a standard master page (masterV4)?
For the record, the current forum thread:
http://sharepoint.stackexchange.com/questions/15669/errors-after-removing-core-js

Thanks!

Chris O'Brien said...

@Christophe,

I'm not sure this technique is best used with the standard master pages - those have way too many controls in them which depend on functions that could be in core.js.

The main purpose of the approach (in my head) is for minimal master pages, most likely used for anonymous sites. Otherwise I think you could be chasing down and removing individual controls for some time.

Does that make sense?

C.

Anonymous said...

Hi Chris.

I've followed your example and it appears to be working. However, how can I remove some of the other .js files on the page? For example, sp.runtime.js, sp.js, and cui.js. These files are not in our HttpContext.Current.Items collection.

Thanks

Chris O'Brien said...

@Anonymous,

I don't think you'll be able to remove the files you mention - SharePoint loads them in a different way, and you'll most likely get errors even in presentation mode.

It's a shame, but I think that's the way it is.

HTH,

Chris.

François said...

Great article!

You mention that it could be improved to remove unneeded CSS as well.

I tried some reflection but didn't find where the CSS files are loaded.

Do you have any clue how to remove the following CSS files?
controls.css
page-layouts-21.css
rca.css
corev4.css

Thanks,

François

Anonymous said...

I have used your code, placed the compiled DLL in the bin folder of my web application (both default and extended) and updated web.config accordingly.

When I try to access the site anonymously I get an error. I tried to check the logs but was not able to find any particular reason.

Am I missing something?

Chris O'Brien said...

@Anonymous,

Afraid I'm not sure why this would happen. We used the code in this article on an anonymous public-facing website, so I'm not sure what you could be doing differently..

Chris.

Edson Catugy said...

Excellent Post!!