Monday, 18 November 2024

Speaking at ESPC 2024 – more AI and Copilot!

ESPC, the European SharePoint, Microsoft 365 and Azure Conference - which always seems to have amazing venues that make you look tiny as a speaker, see left - always bookends the year for many of us in Microsoft tech (at least on this side of the pond). The winter timing allows us to reflect on tech developments through the year as well as discuss announcements from Microsoft’s big Ignite conference, which typically runs just beforehand (this week in fact - November 19-22). To speak at ESPC is always a privilege, and I think this is my 10th year now. The event is in Stockholm this year from December 2-5 and it’s not too late to get tickets – as one of the year’s best personal development events in our space, it’s a great way to gain knowledge and take real benefits back to your team and company.

This post is to mention the talks I'll be giving - but before then, some quick facts about the conference:
  • Usually around 2000-2500 attendees, a mix of different roles but generally focused on Microsoft tech
  • Always a big Microsoft representation – from senior execs like Jeff Teper to key product and strategy folks including Karuana Gatimu, Vesa Juvonen, and others
  • December 2-5, starting with a day of (optional) full-day workshops on the 2nd
  • Lots of coverage of AI, Copilot, Azure, Power Platform, SharePoint, Teams and more on the tech side – accompanied by lots of strategy, governance, and end-user sessions
  • Typically a forum for Microsoft to make key product strategy announcements
Here’s what I’ll be talking about:
 

“AI on your data” deep-dive - comparing Microsoft 365 Copilot, Copilot Studio, and Azure OpenAI on your data

Abstract: Every business can gain from combining generative AI and LLMs with company data, and so far Retrieval Augmented Generation (RAG) has been the most common technical approach to this. Providing an LLM with full knowledge of your company - your products and services, clients, employees and expertise, past projects, and other valuable information - has huge potential for simplifying work as we've seen with Copilot and other "gen AI on your data" technologies.

The options are complex however, and many CIOs are wrestling with AI strategy and tool decisions. Is the answer simply Microsoft 365 Copilot? What about no-code Copilot GPTs, low-code Copilot Studio, mid-range Azure OpenAI “on your data”, or building something with a "ChatGPT accelerator" using Azure OpenAI and Azure AI Search? Choosing the right approach can seem like a minefield – do you want to bring in data from Microsoft 365, Azure, SQL, a SaaS app, or simply a public website? Are you trying to provide a Copilot against a small knowledge base or a more expansive ecosystem of sites? Do you want to pay by user or by AI consumption? Should the experience surface in Teams, a Copilot plugin, or be embedded in an intranet or internet site?

This session aims to be a navigator through the Copilot and "AI on your data" maze, informed by battle scars from implementing all these forms of AI.


Implementing gen AI on the UK’s biggest infrastructure project – a ChatGPT and Azure OpenAI story

Abstract: The HS2 railway construction project stands as a cornerstone of the United Kingdom's infrastructure evolution, showcasing an ambitious leap in engineering and construction innovation. The scale of HS2 introduces a unique data challenge. Every mile of construction feeds into a massive repository of information, encompassing everything from ground surveys and compaction reports to intricate local authority covenants and extensive service contracts. But with more than 20 disparate platforms in use, ranging from in-house databases and SaaS applications to custom apps developed by suppliers, piecing together data related to aspects of the construction was becoming increasingly challenging.

With the explosion of generative AI, the question came from programme leadership - “Can’t we just ask something like ChatGPT the questions, and it find the answers from across our systems?” This is a case study of a unique project, where generative AI is streamlining critical processes like incident management in a hugely complex environment. The combination of cutting-edge technology (Azure OpenAI, Azure AI Search, Semantic Kernel and more), talented developers and engineers, and a visionary construction organisation provides a shining example of how gen AI and Large Language Models are making a real difference to the world.

This session aims to convey both the business challenge and the technical solutions, and it’s a story with a few twists and turns along the way. Not unlike the railway line!


Closing thoughts and conference details

We’re now 2 years past the introduction of ChatGPT, and in my experience 2024 has been a year where lots of AI projects have made it into production. Technology has never been more impactful on the world. At the same time, many organisations are still debating whether to go big on Copilot and how to make the right choices in their AI strategy and technology decisions. So, it’s a perfect time for ESPC and the conversations that happen there – I’m *extremely* excited for my sessions and to share knowledge and perspectives from the Advania team and me.

Hopefully see you there!

https://www.sharepointeurope.com/pricing/

Tuesday, 10 September 2024

Comparing productivity Copilot options - Copilot for M365, Copilot Studio, build your own Copilot etc.

I spend a chunk of my time talking to CIOs and other leaders deciding the AI strategy for their organisation, and a common conversation at the moment is how to frame Copilot for Microsoft 365 against other options. It's the new "what to use when" conversation in Microsoft-land. If the challenge being addressed is 'providing the right set of AI tools to a business', there are multiple options from Microsoft and beyond, and choosing the right approach comes down to properly considering what you're trying to solve for and understanding not just the orientation, but also the capabilities and limitations of the various options. 

Common questions I'm hearing include:

  • We're not sure about licensing Copilot for M365 across the entire business, what other options do I have?
  • I see ChatGPT has an Enterprise version now - it seems to solve the data privacy risks of the consumer version, and all of our employees are familiar with ChatGPT. Should I use that?  
  • Why could I not just use the free version of Microsoft Copilot (i.e. what was "Bing Chat Enterprise")?
  • I understand Copilot for Microsoft 365, but where does Copilot Studio fit in?

Understanding the characteristics and pricing of the major options becomes vital to make the right decisions. Sometimes it will be an economic decision, sometimes it will be capability led - we've seen it all. I'm a strong advocate of Copilot for Microsoft 365 (as is my employer, Advania UK - we licensed 100% of employees), but it's not necessarily the answer to every AI question. Indeed, my team has solved AI challenges with "custom Copilot" approaches using Azure OpenAI that simply couldn't be done with Copilot for M365, including integrating large volumes of data from SaaS apps and custom databases, and steering the AI past the one-size-fits-all behaviour of Copilot. Despite its strengths, Copilot doesn't quite offer the flexibility or cost effectiveness needed for certain scenarios.

Major AI options on a slide

To help with AI strategy conversations, I produced the slide below which I'm sharing here in case it's useful for others (you can download it in PowerPoint form). It's an attempt to summarise some of the key factors and differences, though I'd be the first to say it doesn't cover every consideration and has some subjectivity to it. When I'm asked a "where does X fit?" or "how should I think of Y?" question, I often put the slide up to call out some of the major factors and differences as we walk through some options.

When considering AI options, I see some of the major considerations as:

  • Overall positioning and value prop ("Headline" in my slide)
  • The cost model
  • Costs
  • Data sources - which company sources of data can the AI talk to?
  • Automation - whether you can fully automate a process with the tool, or whether it's purely end-user driven
  • The surface - where the AI shows up
  • Key limitations

I often feel this kind of thinking and comparison is what's missing from Microsoft and other vendors. The strengths of a technology are extolled in the documentation and content, but rarely the limitations and "but bear in mind...." considerations. As an example, the Copilot Studio page is unlikely to ever say that it's a great technology for some use cases, "but good luck forecasting your run costs, because the pricing model makes it really hard!" It's factors like this which are hugely relevant if you're deciding (or paying for) the AI strategy for your organisation however. 

There's quite a lot of info - and some opinion - in the slide, so I unfold some of my thinking in the notes below. Here's the slide itself (download link at the end):


Obviously the condensed text and bullets on the slide can only tell half the story in a complex landscape like this, so let's expand the thinking - at least for the major elements rather than every angle.

Headline/value proposition


Most will be familiar with the value proposition of Copilot for Microsoft 365 - your data in M365 (documents, mail, Teams chats and more) integrated with LLM capability and surfaced in the flow of work via Teams and Office apps. If you're an existing user, you'll probably cite the very strong capabilities around Teams calls and meetings as a highlight - especially intelligent recap, with auto-generated meeting summaries, action items, speaker attribution, being able to ask questions of the transcript and more. Alongside, there are lots of other capabilities which boost productivity across many organisational use cases and roles/functions, particularly when working with documents. Sometimes accuracy and results can be mixed, and sometimes that's due to certain Copilot limitations today - more on this later.

Sometimes I'm asked how Copilot Studio relates to Copilot for Microsoft 365 - but this has a different value prop entirely. Copilot Studio is suited to 'focused' Copilots for specific use cases - perhaps to provide answers on HR policies, employee onboarding, product documents, or an FAQ, and they can be surfaced internally or externally on a .com website. Copilot Studio solutions are often modern day chatbots, much more intelligent than those of the past because they combine LLM power, a focused knowledge base or set of data, and particular instructions (grounding) to the AI on how to provide the best possible answers. However, Copilot Studio isn't suited to solving broad AI needs (e.g. an internal Private ChatGPT) because of the limitations on data integration and difficulties predicting costs. For smaller needs it's perfect though - our team has built some amazing solutions already, and I predict many apps, and in particular Power Apps, will shift to be Copilot Studio solutions in the next few years. We're all comfortable using chat for far more things than we were 5 years ago, now that it actually works - CIOs and app makers should be cognisant of this.

Microsoft also offer Copilot (free) and Copilot Pro (£19 per month) as the evolution of what was Bing Chat Enterprise, but these should be seen as competitors to public/consumer ChatGPT rather than a true organisational solution. The main callout is that you have no roadmap or possibility of integrating company data with these tools - they are LLM only. In Copilot Pro you can use the AI with a document you have open in Word (e.g. for summarisation, generation, rewording, analysis etc.) and there is enterprise data protection so any sensitive data can't leak out, but Microsoft Copilot/Copilot Pro don't provide any way of answering questions from company data at scale. A CIO might find Microsoft Copilot appealing as a free option with no barrier to entry, and while it's arguably safer in the workplace than public ChatGPT, my view is you're likely to confuse employees if it's made available as a first step in AI, but then subsequently replaced by other AI tools (e.g. Copilot for M365 or an internal Private ChatGPT) as you unfold your AI strategy.

A Private ChatGPT solution built on Azure OpenAI can be attractive because the costs aren't per-user. Because the costs scale in a different way (i.e. they are "platform + AI consumption" rather than per-user), we've seen a lot of interest in this from mid-sized and larger organisations who aren't sure about a large scale Copilot investment but do want to provide generative AI tools integrated with organisational data. In quite a few cases, the organisation is choosing to deploy a platform like this in addition to Copilot rather than instead of, and at Advania we've had quite a lot of success with our clients in this space. We commonly integrate Microsoft 365/SharePoint/Teams data so the AI is able to answer questions related to the company's clients, projects, people, policies, sales and product information etc. It becomes a powerful tool that changes the employee experience significantly because it provides a new way to find organisational knowledge and get straight to the answer. As alluded to earlier, it can also be the foundation for a tailored AI platform designed to support specific use cases and integrated with data from different platforms. We've integrated Azure OpenAI with incident management systems, HR data, employee skills/certification data, access card systems and much more. As such, a Private ChatGPT solution built on standard gen AI approaches can go beyond simply being a digital assistant for common productivity and creativity tasks, and be the AI platform where high value processes and use cases are enabled. We're seeing many organisations think in terms of an AI platform for the business that can scale and be extended over the next few years as AI opportunities and use cases emerge. 

Further thoughts

As you can probably tell, there are many considerations across multiple dimensions trying to come out in the slide above - and even then it's only a biased, partial view. I won't unfold every point referenced on the slide, but here are a couple of others to call out:

  • Is the tool automatable? 
    • Whether the AI tool supports automation is a key factor that isn't always considered. For some use cases you want to throw 100 or 1000 items at the AI in bulk, but ready-to-go solutions like Copilot for Microsoft 365 don't lend themselves to this because they have no API (see the sketch after this list for the shape of a custom approach). We're doing a lot of work in this space with Private ChatGPT/Azure OpenAI solutions for Due Diligence Questionnaires, Cyber Security/InfoSec questionnaires, RFPs etc. - interestingly, in many of these cases organisations are seeing the potential for better outcomes than with specialist tools they already have, but which don't use AI with their data effectively and/or haven't really landed with the business. I posted about this observation here.
  • Limitations 
    • Every AI tool has boundaries and someone supporting your decision-making should understand them. Some examples are called out on the slide, but of note is that Private ChatGPT solutions typically can't index Teams chats and e-mails in the way that Copilot can (nor do they provide Teams meeting support) - they are limited to understanding knowledge found in documents primarily. Conversely, Copilot for M365 has limitations in understanding long documents, and this can be very relevant in some scenarios. Microsoft recently announced the limit is now around 80k words when using Copilot in Word, but it's worth also understanding that - as far as I know - it's still only around 20k words in terms of what Copilot Chat will understand (i.e. when asking Copilot questions of your data generally), because that's how much gets indexed into the Semantic Index.

      At the same time, these limits and the general value of the Copilot proposition should only improve from here.
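
Coming back to the automation point above, here's a rough sketch of what "throwing items at the AI in bulk" can look like with a custom Azure OpenAI approach, using the openai Python library. The endpoint, deployment name and retrieval helper are placeholders - in a real solution the retrieval step would ground each answer in your own content:

```python
# Rough sketch: batch-answering questionnaire items with Azure OpenAI.
# Endpoint/deployment names are placeholders, and a real solution would first retrieve
# relevant company content (RAG) to ground each answer - that step is stubbed out here.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-02-01",
)

questions = [
    "Describe your information security incident response process.",
    "Do you hold ISO 27001 certification?",
    # ...hundreds more loaded from the questionnaire
]

def retrieve_context(question: str) -> str:
    # Placeholder for a search against your own content (e.g. Azure AI Search)
    return "Relevant policy extracts would be retrieved here."

answers = []
for q in questions:
    response = client.chat.completions.create(
        model="gpt-4",   # your deployment name
        messages=[
            {"role": "system", "content": "Answer the security questionnaire item using only the provided context."},
            {"role": "user", "content": f"Context:\n{retrieve_context(q)}\n\nQuestion: {q}"},
        ],
        temperature=0.2,
    )
    answers.append(response.choices[0].message.content)
```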

Conclusions

There's a lot to know, and as I mentioned at the start this is the new "what to use when" in Microsoft technology - but options and flexibility are rarely a bad thing if you can find your way to the right choices. 

If you're interested in these perspectives and can make it to this year's ESPC24 Conference in Stockholm this December, I'll be speaking about this in my session AI on your data Deep-Dive – Comparing Copilot Studio, Copilot GPTs, and Azure OpenAI on your Data. Hopefully the slide is useful to someone either way.



Saturday, 30 March 2024

Join me at ECS 2024 for a full day workshop on Microsoft Copilot solutions with Jussi Roine

2024 will be a pivotal year for Microsoft AI and their range of Copilots in particular - and it was great to hear plans and insights during a week on campus with Microsoft in Seattle for the MVP Summit recently. This is the year when the AI strategy is in full swing for most organisations (or at least, should be), and the need to understand and decide what to provide to employees really takes effect. Between Copilot for Microsoft 365, Microsoft Copilot, role-specific Copilots (such as GitHub Copilot, Copilot for Security, Copilot for Sales, Copilot for Service etc.), or custom Copilots built with Copilot Studio or Azure OpenAI, there’s a lot to consider. For anyone tasked with charting the path forward, the big Microsoft-oriented conferences this year are the place to jump start your learning, hear how others are approaching it, and take back ideas, plans and recommendations – if you haven’t already spoken to your boss about this (or allocated budget if you're the lucky purse holder), now is the time.

With over 2500 attendees, 150+ sessions, talks from senior Microsoft leaders, and a great expo hall with over 75 exhibitors to talk to, the European Collaboration Summit 2024 is actually the biggest Microsoft 365 and Power Platform conference in the world. The conference is held 14-16 May 2024, in Wiesbaden, Germany. I’m thrilled to be running a full day workshop on Microsoft Copilot solutions with my friend and fellow MVP Jussi Roine. Say what you like about that man but you can't deny his vast experience and love for broccoli. We're badging this a "Copilot Powerclass", giving some coverage to a broad range of Copilots but also diving deep on Copilot for Microsoft 365, Windows Copilot, Copilot Studio, and various things to know about data governance, licensing and more.

Despite speaking at big conferences for a decade and a half now, I'm excited about this and, I have to confess, a bit nervous :) This will be my first full-day workshop and it feels like a lot of hours to fill. On the other hand, there's a lot to talk about and every client conversation I have at the moment about Copilot seems to need more time. In any case, Jussi tells me my role is simply to bring vegetables and he'll take care of everything else.

Still tickets left, but going fast (for both the overall event and our workshop)
At the time of writing (end of March 2024), you're not too late to make it to the European Collaboration Summit - I hear from the organisers that the vast majority of tickets are allocated, but there's definitely time to join us in Germany in May if you're quick.

And if you like the sound of the full day Copilot session Jussi and I are running, there's still time for that too. We were expecting 30-50 attendees, but at the time of writing we have 87 sign-ups already which is amazing. We still have room for a few more though as the room fits 120 apparently, and we'd love to see you there if you have an interest in learning more about Copilot.

There are also several other workshops with amazing speakers which look great, by the way - see ECS 2024 tutorials for the full list. The page to go to is Tickets - European Collaboration Summit (collabsummit.eu)
 

More details on our Copilot workshop

Microsoft Copilot(s) Powerclass

In today's digital age, organizations require powerful tools to improve their productivity, speed up decision making, and secure their environment. The Microsoft Copilot capabilities offer this and more. Join this Full Day Tutorial, designed for business decision-makers and technical decision-makers, to learn more about Microsoft 365 Copilot, Windows Copilot, Security Copilot, and Power Platform Copilot. Over the 8 hours, you will also learn about critical technical topics such as Generative AI, Large Language Models (LLMs), licensing, use cases, productivity, and possibilities. 

Explore how these Copilots can help your organization improve productivity by handling your various operations automatically. Additionally, you will learn about their security capabilities to ensure your environment is kept safe at all times. You will also learn about the licensing requirements for using these Copilots and how they apply to your organization. Finally, there will be discussions on the possibilities and use cases for these Copilots, with hands-on experience that will enable you to harness their full potential. 

The workshop is specially designed to empower you with the knowledge, insights and inspiration you need to make informed decisions around utilizing Microsoft Copilot capabilities to improve your organization's performance. Join us and elevate your business to the next level.


Hopefully see you there! 


Tuesday, 20 February 2024

Getting started with plugin development for Copilot for Microsoft 365

In my last post we looked at the return on investment for Copilot for Microsoft 365, specifically in terms of time savings required for the $30/£24.70 per user per month licensing investment to make sense. In this post I want to turn attention to extending Copilot and getting started in the world of Copilot plugins. Copilot for Microsoft 365 can become even more powerful when integrated with other company systems - I created the slide below recently for a deck I was working on, and the four areas provide ideas on where the value might be and potential scenarios:

However, at the current time (February 2024) getting started with plugin development is a bit gnarly - there's lots of documentation to read and there are some interesting practicalities to consider. I’ve spent some time on this, and the sections below give a quick summary of initial findings which may be helpful for anyone else going down this path.

Tenants and licensing

Much of the initial complexity falls into this bucket. Things to know include:

  • Copilot plugin development needs production Copilot licenses, there’s no way around this. This may mean developing in your production tenant (if you are a licensed user) or buying extra Copilot licenses for other tenants
  • Microsoft 365 Developer tenants cannot be used for plugin development 
  • If you’re on the Microsoft 365 Developer TAP (we are at Advania), these tenants can be used but you still need to buy Copilot licenses in that tenant
  • In addition to the core Copilot license, the user also needs to be assigned a "Microsoft Copilot for Microsoft 365 developer license"
  • Copilot for Microsoft 365 licenses effectively grant “Copilot Studio use rights” – importantly, this allows you to create and run M365 Copilot plugins only, not to run standalone Copilots (see note in the blue box below) 
  • Since plugin development is still in preview, production tenants need to be enabled for it via a special helpdesk ticket - there's special wording to use, indicating this is somewhat hand-cranked in the backend for now (see the Copilot extensibility prerequisites article linked at the end for the exact words to use) 

Developing standalone Copilots (not plugins)
Coming the other way round from the plugin focus of this article, the other primary use of Copilot Studio is to develop standalone Copilots using a low-code approach. Examples could be an HR chatbot providing answers on policies and internal benefits and hosted in Teams or a SharePoint intranet page, or a customer service chatbot on an external website providing answers from a knowledge base of uploaded documents. In these cases, you don't need a Copilot for Microsoft 365 license but you will instead be paying the $200/£165 per month run cost, which gets you 25k messages across all such Copilots (with additional capacity charged extra). There's a bit of nuance in what constitutes a message - a typical interaction counts as one message but invoking gen AI counts as two - but in short you are calculating based on expected usage.
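
To get a feel for how that consumption model adds up, here's a rough back-of-the-envelope sketch. The usage figures are invented, and the assumption that additional capacity comes in the same 25k/$200 increments should be checked against the current licensing guide:

```python
# Back-of-the-envelope Copilot Studio message estimate - usage figures are invented,
# and the assumption that extra capacity comes in the same 25k/$200 increments should
# be verified against the current licensing guide.
import math

USERS = 2000                  # employees expected to use the chatbot - assumption
SESSIONS_PER_USER_MONTH = 4   # assumption
TURNS_PER_SESSION = 5         # assumption
GEN_AI_WEIGHT = 2             # a generative AI answer counts as two messages (per the note above)

messages = USERS * SESSIONS_PER_USER_MONTH * TURNS_PER_SESSION * GEN_AI_WEIGHT
packs = math.ceil(messages / 25_000)

print(f"~{messages:,} messages/month -> {packs} x 25k capacity pack(s), roughly ${packs * 200:,}/month")
```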

The image at the bottom of this article more directly compares the two flavours.

Building Copilot plugins using the Power Platform (low-code)

The prospect of using low-code for plugin development is very appealing. In this space, note the following:   

  • Power Platform connectors can be turned into Copilot plugins by converting them to be a “Connector AI plugin” – whether custom or pre-built connectors (e.g. ServiceNow, Zendesk etc.) 
  • However, today connectors need to be certified – meaning custom connectors cannot easily be used for plugins, since certification is a complex process aimed at major ISVs rather than at internal teams or partners simply trying to create solutions for a single organisation 
  • Additionally, only read-only actions are supported for now 
  • Other Power Platform approaches are possible – using Power Automate Flows, the new AI prompts capability etc. in Copilot plugins. However, again this does not seem to be possible at the time of writing unfortunately 

Building Copilot plugins using Teams message extensions (pro-code) 

The alternative architecture for Copilot for Microsoft 365 plugins is based on Teams development, specifically Teams message extensions. This makes sense, and before the dawn of GPT we used this approach at Advania UK to build other conversational bots in Teams. Some details to be aware of here (a short code sketch follows the list):

  • The advantage of Teams message extensions is that plugins with more advanced UI can be used (e.g. adaptive cards in AI responses)
  • Conceivably, the solution you build as Copilot plugin could also be a regular Teams message extension in Teams chat and be surfaced in Outlook - enabling you to provide your experience in different ways within Microsoft 365
  • Teams message extensions are the right approach if you're working at scale (with large volumes of data or user load) 
  • Permissions - if you are developing your plugin using Teams message extensions, you’ll need the ability to side-load apps into the tenant
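
If the pro-code route is of interest, the shape of the code is familiar Bot Framework territory - a search command handler on a Teams bot. Here's a minimal sketch assuming the botbuilder Python SDK; the app manifest, authentication and hosting are omitted, and the catalogue search helper is hypothetical:

```python
# Minimal search-based Teams message extension handler - the same shape of bot that a
# Copilot plugin invokes. Sketch only: assumes botbuilder-core/botbuilder-schema; the
# manifest, auth and hosting (e.g. aiohttp + adapter) are omitted, and
# search_product_catalogue is a hypothetical stand-in for a line-of-business API call.
from botbuilder.core import TurnContext, CardFactory
from botbuilder.core.teams import TeamsActivityHandler
from botbuilder.schema import HeroCard
from botbuilder.schema.teams import (
    MessagingExtensionQuery,
    MessagingExtensionResponse,
    MessagingExtensionResult,
    MessagingExtensionAttachment,
)


class CatalogueMessageExtension(TeamsActivityHandler):
    async def on_teams_messaging_extension_query(
        self, turn_context: TurnContext, query: MessagingExtensionQuery
    ) -> MessagingExtensionResponse:
        # Copilot (or a user in Teams/Outlook) passes the search text as a query parameter
        search_text = query.parameters[0].value if query.parameters else ""
        results = search_product_catalogue(search_text)

        attachments = []
        for item in results:
            card = CardFactory.hero_card(
                HeroCard(title=item["name"], text=item["description"])
            )
            attachments.append(
                MessagingExtensionAttachment(
                    content_type=CardFactory.content_types.hero_card,
                    content=card.content,
                    preview=card,
                )
            )

        return MessagingExtensionResponse(
            compose_extension=MessagingExtensionResult(
                type="result", attachment_layout="list", attachments=attachments
            )
        )


def search_product_catalogue(text: str):
    # Hypothetical placeholder for a call to your own system
    return [{"name": f"Result for '{text}'", "description": "Sample item"}]
```
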
So that summarises some of the initial considerations in getting started with plugin development. It's also worth noting that Microsoft are pushing to create an entire ecosystem around plugins with marketplace approaches, meaning plugins can be sourced from internal developers, partners, specialist vendors and so on.

Also remember, Copilot extensibility is not just about plugins
All of the options and considerations above relate to plugin development, but this sits alongside the alternate path of bringing data into Copilot for Microsoft 365 using Graph Connectors. In that approach, the data you integrate is indexed (stored in the semantic index which sits behind Copilot) rather than simply being available via a read/write call-out of some kind. Graph Connectors bring other advantages such as making the data available in Microsoft 365 search, Viva Topics, Context IQ and even being used for content recommendations in Microsoft 365, but if you're working with data at scale you'll need to purchase additional Graph Connectors index quota (you get 500 items for free per E5 or Copilot license). Microsoft's article Choose your extensibility path expands on these considerations.
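
For a sense of what that looks like in practice, here's a rough sketch of pushing an item into a Graph connector via the Microsoft Graph REST API. It assumes the external connection and its schema have already been created, and the token helper, connection id and properties are placeholders:

```python
# Rough sketch of pushing an item into a Microsoft Graph connector ("external connection")
# so it lands in the semantic index behind Copilot. Assumes the connection and schema
# already exist; the token helper, ids and properties are placeholders.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
connection_id = "contosohr"   # hypothetical connection id
item_id = "policy-42"         # your own stable id for the item


def get_graph_token() -> str:
    # Placeholder - in practice use an MSAL client-credentials flow with
    # ExternalItem.ReadWrite.OwnAll application permissions
    return os.environ["GRAPH_TOKEN"]


external_item = {
    "acl": [
        {"type": "everyone", "value": "everyone", "accessType": "grant"}
    ],
    "properties": {
        # must match the schema registered on the connection - hypothetical fields
        "title": "Hybrid working policy",
        "url": "https://example.com/policies/42",
    },
    "content": {
        "type": "text",
        "value": "Full text of the policy goes here so it can be indexed...",
    },
}

resp = requests.put(
    f"{GRAPH}/external/connections/{connection_id}/items/{item_id}",
    headers={"Authorization": f"Bearer {get_graph_token()}"},
    json=external_item,
)
resp.raise_for_status()
```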

Zooming out from plugin development in a different way, it's worth considering Copilot Studio as a whole since it's not just about Copilot for M365 plugins.

Copilot Studio - varying audiences, varying outputs

Copilot Studio can be slightly confusing in the Microsoft AI space because it's used to create different solutions and experiences - essentially including Copilot for Microsoft 365 plugins and standalone Copilots built with low-code. Many people recognise it as "what used to be Power Virtual Agents", but between licensing variations and what can be created there's a bit more to it than that. I posted the following on LinkedIn which summarises the two major usages of Copilot Studio:

That hopefully gives a sense of things from a Copilot Studio lens. Some of this is made clear from this table in the Power Platform licensing guide - see the image below, and note specifically that the output formats and available channels are different between the two paths, and that Copilot plugins using Power Platform approaches can use Standard, Premium and Custom connectors:

Summary

Extending Copilot for Microsoft 365 with plugins can be a great way to derive additional value by integrating systems, and opens the door to the possibility of Copilot becoming a universal interface for all of the apps and platforms an employee works with. The next few years could see significant changes to the employee experience in this regard, at least for organisations making the investment in Copilot for Microsoft 365. Other forms of Copilot can also be created - standalone Copilots (as Microsoft refer to them) are often focused on a particular business domain or use case, and can reach a broader audience because they surface in different places and don't rely on Copilot for Microsoft 365 licenses. Both experiences are created by makers/developers in Copilot Studio, but there are licensing and reach differences - and today, some things aren't quite in place (early 2024) because we're still in a preview phase for plugin development. No doubt the path will get smoothed out as we go through 2024.

References


Thursday, 18 January 2024

Copilot for Microsoft 365 - the surprising truth about time savings and ROI

Now that Copilot for Microsoft 365 can be purchased by anyone (with no minimum license count), organisations are starting to think about it more seriously as they form AI strategies and budgets. Looking across Microsoft's family of Copilots, some are free, some are licensed, some are general, and some are targeted at specific personas - but most would agree it's Copilot for Microsoft 365 which stands as the principal Copilot for workplace use. We could be moving from an era where countless hours go into creating and consuming information in very manual ways to a new era where generative AI is doing more of the work - to write that report, create that presentation, or write the words to respond to that e-mail. Those are creation examples, but when AI can summarise, identify key points, generate follow-up actions, and even identify areas of accord and discord, so many of the consumption-based subtasks we do become more optimised and accelerated too. With the most expensive price tag, Copilot for Microsoft 365 is also the one where the decision-making for the investment is most complex.

Something I spoke about in a recent talk (at the European SharePoint, Microsoft 365 and Azure Conference) that some people latched onto is how surprising the numbers are in terms of time-savings needed for Copilot to pay for itself. The license is a significant investment of course at $30 or £24.70 per user per month with a 12 month commitment - whichever way you slice it, that's an expensive proposition given that the list price of E3, the entire productivity suite for enterprise users, is $36 or £33.10 per user per month. When seen as an "AI bolt-on" the cost of Copilot is indisputably high, and perhaps unsurprisingly when Microsoft announced pricing the common response seemed to be "far too expensive, it doesn't make sense if it costs nearly as much as the entire suite". There are lots of ways to look at this, but despite the similarities (both Microsoft offerings, both related to productivity, both additive to each other) a direct comparison of one vs. the other actually doesn't make too much sense to me personally - the propositions and value provided are so different.

Principles for a Copilot value case

Before we look at the numbers, let's agree on three things:

  • The value case for Copilot for Microsoft 365 shouldn't hinge on time savings and an easily-modelled financial equation alone - there are other benefits which are less easily quantified which constitute value. Every organisation will need to form their view on this, but I list examples later in the article.
  • Time savings have to result in genuine gains - whether it's five minutes or five hours, most organisations will take the view that any time saved needs to translate into real value. In other words, there's no ROI if the organisation doesn't benefit from the additional time, perhaps because the time benefit goes exclusively to the employee instead or it simply doesn't go on productive work
  • Basing the model on known or anticipated use cases doesn't work - you can try to predict use cases ahead of time, but the reality for both Copilot and generative AI on the whole is that the business will find benefit in unpredicted ways. Better to pilot the technology in some way and see what happens

A simple ROI model

The calculator below models the break-even point for Copilot for Microsoft 365, based on time-savings alone, for three different salaries. I'll go into details below but it uses UK parameters for employer tax and so on to arrive at the "true" total cost of an employee per day, but the currency used is Euros since most of my readers are based in Europe.

The real point of course, is that it doesn't take five hours or even two hours per month saved for Copilot to pay for itself. So long as we can stand behind them, the time savings required are quite minimal:

As the salary increases even less time needs to be saved of course - but even on the lowest salary we're talking only 36 minutes per month. This is the surprise to many people.

Some detail on the calculation:
  • The calculator uses UK parameters, in fact the true cost per day of an employee in our company (Advania UK). This means the current employer tax rate (National Insurance in the UK) and a 4% pension contribution, but excluding other aspects sometimes modelled such as an allocation for office space etc. 
    • I haven't looked, but I suspect these percentages won't vary wildly across countries enough to skew the overall model dramatically
  • Although the tax parameters used etc. are UK, the currency used is Euros to reflect my readership
  • The Excel is linked below if you want to download and amend

If using or pointing people to a web page works better for you, Dan Toft has taken this and created a great online calculator which also allows you to edit the parameters - ROI Calculator (dan-toft.dk).  You'll need to be able to calculate the total cost from a base salary yourself (i.e. calculations for employer taxes and pensions aren't built in), but it depends what you need. Excellent work Dan.
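
For anyone who'd rather script it than open Excel, the arithmetic itself is tiny. The sketch below uses placeholder GBP figures rather than the parameters in my calculator (which uses Advania's actual fully-loaded cost per day and displays Euros), so expect the output to differ:

```python
# A rough sketch of the break-even arithmetic, in GBP with placeholder figures -
# the salary and uplift are assumptions to adjust for your own organisation.
LICENCE_PER_MONTH = 24.70            # Copilot for Microsoft 365 UK list price, ex VAT
ANNUAL_SALARY = 45_000               # example base salary - placeholder
EMPLOYER_UPLIFT = 1.18               # rough allowance for employer NI + 4% pension - assumption
HOURS_PER_MONTH = 37.5 * 52 / 12     # ~162.5 working hours per month

hourly_cost = (ANNUAL_SALARY * EMPLOYER_UPLIFT) / (HOURS_PER_MONTH * 12)
break_even_minutes = LICENCE_PER_MONTH / hourly_cost * 60

print(f"Fully-loaded hourly cost: £{hourly_cost:.2f}")
print(f"Time saved per month to break even: {break_even_minutes:.0f} minutes")
```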
 

The wider value case for Copilot for Microsoft 365

I mentioned earlier that measuring the ROI for Copilot based on time savings alone is only part of the picture, that it's not just about productivity gains and time efficiencies. I used this slide in my talk to expand the discussion to some of the wider benefits:

There's obviously quite a lot wrapped up into those four bullet points and they're not quantitative benefits you would model into a licensing business case, but the positive impact on the employee experience and output needs to be considered somewhere. I feel there's a reduction in cognitive load, and switching between different contexts and tasks becomes easier and less painful. When creating content, regardless of how good you are as a writer your output can be improved - Copilot will often bring in a point you hadn't considered or simply articulate the message better than you would. That's not to say that Copilot outputs are always ready to go or it's able to do the work for you - that would be lovely, but more often than not I iterate with further prompts and/or add/edit/delete to what Copilot has created. Nevertheless, I'm faster on quite a few tasks and Copilot's ability to research and bring in facts or approaches others have used helps on the quality front.

As referenced in the last bullet, one area Copilot is particularly helpful in is comms and e-mail. There's drafting e-mails of course, and being able to ask for a summary of my inbox in the last week and pull out any actions I need to take is extremely powerful - even if I scan through the mails myself too, I'll spend less time doing so. Using the "Summary by Copilot" feature to summarise the main points of a long individual e-mail or thread works very well too - picking up context and making a judgement at speed on any action/response makes moving through your inbox simpler and quicker. Finally, the "Coaching by Copilot" capability in Outlook is surprisingly useful too - there's an initial novelty to having your communications and tone judged, but we all write mails at speed, often without considering quite how they might read on the other side, and it has enabled me to catch a couple of sub-optimal communications. I'm a senior leader in our company, and perhaps Copilot is right to catch me on phrases such as (real example here) "it seems bonkers to me that...." and suggest some wording that might be more appropriate! You can take it or leave it depending on the context (and there's an argument that some authenticity often comes from our from-the-hip communications), but having things pointed out at least prompts the reflection and supports the choice.

All in all, there are lots of benefits beyond the hard numbers. It's also an interesting thought that as employees increasingly expect a high-fidelity employee experience and toolset that allows them to give their best, Copilot for Microsoft 365 does make a clear statement on that and perhaps becomes a differentiating factor between organisations and teams battling for talent.

License strategies for Copilot for Microsoft 365

So, do you license everyone or just a subset? Start big or with a small, focused pilot? No single answers on this of course, and it's going to be fascinating to watch how the Copilot era unfolds and how organisations approach the investment and benefits realisation. I won't share all the guidance we're giving to clients, but starting with a pilot covering different user types and personas makes sense for any significant investment in new technology like this - and that's how most 1000+ seat organisations I've spoken to are starting. With regards to coverage, clearly the investment required for Copilot is dramatically different if licensing 10-20% of the business compared to 100% - and this differential obviously scales to big numbers for the largest organisations. To license all users at list price for 30,000 employees would cost over £10m, and no less than £33.6m for 100,000 employees. No doubt those are hypothetical scenarios since such a deal would be heavily negotiated as part of an Enterprise Agreement (and/or perhaps in exchange for a high-profile case study - look out for those soon), but it is illuminating to consider the difference between licensing some and licensing all.

My feeling is that most orgs will structure the case based on licensing a subset of users according to persona/role and the value conferred. This makes sense because no-one would argue that all roles will benefit from Copilot and gen AI to the same extent, and licensing users who simply aren't adopting the tool heavily doesn't make sense either. 

Along with the information governance/data security piece (something we've spent a lot of time on developing services for), the different aspects of licensing mean that there can be a few steps on the Copilot for Microsoft 365 journey for most companies who are buying in. Hopefully this post on some of the economics has been useful though.

Thursday, 4 January 2024

Speaking about Copilot for Microsoft 365 – IntraTeam webinar, January 11 2024

Copilot for Microsoft 365 presents a huge opportunity to transform work and unlock productivity. If you're interested in the world of AI, Copilots, and Copilot for Microsoft 365 specifically, I'll be delivering a webinar on the topic as part of an event run by IntraTeam on January 11 - the event is free and hosted over Microsoft Teams, but note that it's for practitioners working with Microsoft 365 in global companies only I'm afraid - if you're a consultant, vendor, student etc. then apologies, but IntraTeam need to keep a more focused audience for this event. Hopefully anyone interested will be able to find me at other events through the year though - it's great to finally be talking about Copilot for M365, and I had great feedback from a similar talk at the recent European SharePoint, Microsoft 365 and Azure Conference - so, I'm looking forward to sharing what I know, hearing the perspectives of others, and no doubt learning a lot from the conversations myself.

2024 is going to be a pivotal year for Microsoft's Copilot offerings, and in the entire landscape I'd argue it's Copilot for Microsoft 365 which is the most prominent and relevant for many organisations. At Advania, we've been on the Copilot Early Access Program and have learnt a lot from early hands-on use, and in this period we’ve also spent a lot of time developing our views, approaches, and tools on the best way to prepare for Copilot. Data security and information governance comes into focus for sure, and while there’s a lot of generic/high-level guidance out there (both from Microsoft and partners), we’re feeling good about the ground we’ve covered and guidance we’re able to give. Additionally, we’ve learnt a lot from being immersed in "custom Copilot" build projects which implement generative AI for our clients. The headline is that there's certainly a space for both, and the AI strategy for many organisations will combine multiple tools.

This webinar will focus on Copilot for Microsoft 365 however. Here’s the agenda for my 1 hour session:

Copilot for Microsoft 365 – What to know

  • Copilots everywhere - Microsoft AI for every role
  • Digging into Copilot for Microsoft 365:
    • Real-world scenarios and demos with our data
    • Advanced usages and prompts
    • Is it worth it? Copilot economics, ROI, and value case
    • Copilot readiness - licensing, security, and information governance
    • Extending Copilot - integrations and plugins
  • Where does Copilot for SharePoint fit? Simplifying content authoring and intranet management
  • Summary and the path forward

I look forward to sharing some information, demos, thoughts, and lessons learnt from our experience so far. If you’re interested in the topic, the session runs at 10:20 CET on January 11 and here’s the registration link:

Copilot for Microsoft 365 – What to know - IntraTeam.com

 

Content snippet

There's lots to talk about! Here are a couple of snippets of things I'll show and discuss:




Join us if you can!

Thursday, 9 November 2023

My AI talks at ESPC 2023 - Microsoft 365 Copilot experiences, Syntex, Azure OpenAI and more

As the new era of AI is in full swing, I have the privilege of covering some of the hottest topics at the upcoming European SharePoint, Microsoft 365 and Azure Conference 2023 in Amsterdam, Europe’s biggest Microsoft-focused conference. The conference starts on Monday 27 November and I’ll be sharing experiences from the field with Microsoft 365 Copilot, Microsoft Syntex, and “AI with your organisational data” projects using Azure OpenAI. I’ll be delivering three sessions plus an open mic talk on all things Copilot and AI with my good friend Jussi Roine.

In 27 years of working, I don’t remember a time when technology was having a bigger impact on how we work and how economies and societies operate. AI is obviously a huge part of that today, and 12 months on from ChatGPT becoming available it’s a wonderful moment to have 2500+ people together for an event like this. My sessions are just a small part of what will be an amazing conference. As usual, Microsoft are sending senior leadership and product managers to deliver keynotes and product announcements, and the entire session catalog has some amazing speakers delivering content to suit many personas. The conference still has tickets available – see https://www.sharepointeurope.com

The conference site has more, but here's an overview of my sessions with an explainer for each.

    Microsoft 365 Copilot - Experiences from the Field

    Experiences from the Early Access Programme and using Copilot, including a deep-dive on the specific approaches we (Advania/Content+Cloud) believe organisations need to adopt to get Copilot-ready


    Microsoft Syntex Deep-Dive - from AI Document Understanding to Content Governance

    In the Copilot era, Syntex is increasing in relevance rather than decreasing. New capabilities help you get ready for Copilot and extend the reach of AI compared to what Copilot can achieve alone


    Integrate ChatGPT into SPFx and Power Platform solutions with OpenAI and Azure OpenAI

    A technical session with my colleague and fellow MVP Anoop Tatti, where we explore different approaches to using GPT in your applications


    The Captain and the Copilots – Insights Uncovered on Generative AI, Productivity and the Speed of Innovation

    An "Inspire stage" session with Jussi Roine on all that we’ve learnt on Copilots, GPT, and Microsoft’s approach to generative AI

    Conference details

    ESPC is always an amazing event if you're based in Europe - it's not too late to attend and I highly recommend it if you work with Microsoft technologies. Here's the link to the conference pricing page

    https://www.sharepointeurope.com/pricing/

    Hopefully see you there!

    Wednesday, 18 October 2023

    Building generative AI/ChatGPT on your data solutions - considerations, pitfalls and lessons learnt

    The last article focused on combining organisational data with ChatGPT and Large Language Models, specifically using Microsoft’s 'Azure OpenAI on your data' accelerator which is designed to simplify this. I’ve been focused on the general area of 'AI with your data' (though not the AOI accelerator specifically) for a while now with colleagues, and I don’t think it’s any exaggeration to say that combining generative AI and organisational data will be a big thing for the next few years. The results can be astonishing – we all know what ChatGPT is capable of, but seeing it answer questions and generate content related to an organisation’s clients, products, services, people, and projects rather than its original internet training data immediately shows huge value – providing a “second brain” for employees and supporting many use cases at work. Platform solutions like Microsoft 365 Copilot offer amazing capabilities for core collaboration and productivity, but building your own AI and data solution (often to supplement Copilot) using available building blocks is often the way to go for better results with your data.

    The overall message from my last article (Integrating your data with ChatGPT - exploring Microsoft's "Azure OpenAI on your data" accelerator) was that the tool is a useful accelerator in some respects, but in reality only gets you so far in terms of what you probably need. For AI that gives relevant, accurate, and transparent responses to prompts and queries for real world use cases, the implementors need to understand concepts such as retrieval augmentation (RAG), chunking, vector generation, and more. There are various ways to slice this but here’s one way of thinking of the top-level considerations:

    All of these concepts are inter-related.

    This article tries to help you understand each in more detail, sharing info on our approach and technology selection (for Microsoft-centric solutions) as well as some lessons learnt. I'll finish with some predictions on where the space is going and what I believe will remain important.

    Data platform for Retrieval Augmented Generation

    Retrieval augmentation has emerged as a key concept for combining generative AI with data, representing arguably the first thing to learn about the space. I summarised it in the “RAG and other concepts” section of the last article, so if the concept is new to you, the three bullet points outlined there may help.

    In order to be able to do RAG, you need a platform for your data and it may not be the one where it is currently held. A likely scenario is that data you wish to integrate with AI is spread across multiple platforms rather than conveniently batched up in one place anyway – in our clients, organisational knowledge is often spread across documents in Microsoft 365 (Teams and SharePoint), various data sources in Azure (e.g. structured data in Azure SQL or Cosmos DB, files in Azure BLOBs, perhaps some other flavours too), and some single-purpose SaaS applications. While you *may* have some success going directly to a myriad of platforms like this, there are two fundamental reasons why it’s likely to be difficult:

    • The data in its native form will not be suited to AI – it will not be chunked or represented as vector embeddings, meaning that poor answers are likely to be returned due to issues with relevance and similarity search (both needed by generative AI)
    • Establishing which data source to go to when (across all of the prompts and queries your users might enter) is likely to be difficult, especially when results should be returned in seconds – similarly, responses which combine data from multiple sources will be a challenge if you’re hopping across them

    So, what’s often needed is a vector database which also acts as a data aggregation point. This allows you to run one retrieval operation across the right kind of data for AI, where data from your various sources has already been brought together and converted to embeddings. We favour Azure Cognitive Search in our solutions today since it has lots of connectors, a ready-made indexing platform, and support for vector storage, but as discussed last time many vector database options have sprung up in the AI era - from dedicated vector DBs such as Pinecone, Qdrant and Weaviate, to additions to existing technologies like Azure Cosmos DB (MongoDB flavour), Databricks, and Redis. Microsoft promote Azure Cognitive Search for generative AI applications and it does have some fairly unique capabilities, but we regularly review options in this fast-changing space.

    See the “Generating vector embeddings” section for more on what vectors are and why they’re needed in AI solutions.

    Azure Cognitive Search – a competitive advantage for RAG?
    While just about every data platform under the sun now has a vector database offering (I count three options from Microsoft alone - Azure Cognitive Search and two Cosmos DB options), an interesting consideration comes up in terms of choosing between search index and database architectures for storing vectors (i.e. what I described earlier as your RAG platform). Microsoft quite heavily promote Azure Cognitive Search as being especially suited to “AI on your data” solutions, by virtue of things possible with a search engine but not (easily) with a database. In particular, Cognitive Search offers a hybrid search option which combines both vector and full-text searching in the same query. The benefit of this can be improved accuracy of answers from the AI, stemming from increased relevance of initial results retrieved. The theory (quite logical) is that whereas embeddings are great for finding related concepts, keyword matching/full-text search works better with specialised terms, product codes, dates, names and so on because of the nature of exact matching.

    We use this option in our solutions today and get good results, though without some fairly academic research it’s hard to pin down whether it’s definitively related to what happens in a hybrid search. In Cognitive Search, hybrid entails combining vector search and keyword search but also a re-ranking step based on “deep learning models adapted from Microsoft Bing”, all of which is detailed in a Microsoft article Azure Cognitive Search: Outperforming vector search with hybrid retrieval and ranking capabilities. The article goes some way to explaining Microsoft’s testing, methodology, and results - and therefore their rationale for positioning Azure Cognitive Search as the answer to retrieval augmented generation – all I can say here is that it works for us at the moment, but we’re open minded as to whether this is the only game in town.
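
    For a feel of what a hybrid query looks like in code, here's a sketch using the azure-search-documents Python SDK. The index and field names are placeholders, the embedding helper is stubbed, and the vector query classes have been renamed across SDK versions - so treat this as the shape of the call rather than a drop-in snippet:

```python
# Sketch of a hybrid (keyword + vector) query with the azure-search-documents SDK.
# Index/field names are placeholders and class names have shifted across SDK versions.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery


def embed(text: str) -> list:
    # Placeholder - generate this with the same embedding model used at indexing time
    raise NotImplementedError


search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="org-knowledge",                 # hypothetical index
    credential=AzureKeyCredential("<api-key>"),
)

question = "What were the key risks identified in the ground surveys for section C2?"

results = search_client.search(
    search_text=question,                       # the keyword/full-text half of the hybrid query
    vector_queries=[
        VectorizedQuery(vector=embed(question), k_nearest_neighbors=5, fields="contentVector")
    ],
    select=["title", "chunk", "sourceUrl"],     # hypothetical fields
    top=5,
)

for doc in results:
    print(doc["title"], doc["@search.score"])
```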

    Expertise in your RAG platform of choice is key – and you may need to bring in support or consultancy if Azure Cognitive Search (or your chosen vector database) isn’t a common skill today.

    Chunking

    Chunking refers to the practice of splitting long documents which go beyond the limitations of prompt size, e.g. 4000 or 32000 tokens for GPT-4 (a token is around 4 characters of text). Remember that RAG is all about retrieving some data/information to give to the AI in a big long prompt, but the limitations we have today mean that a long document will never fit into the prompt in its entirety. What we need is for the most relevant part(s) of the document to be passed to the AI – and that means the documents need to be split into chunks in the RAG data platform. Additionally, models used to generate vectors have similar limits on the maximum input, so chunking is needed both for storing your data in the right format as well as retrieving it. The cut-off point for a chunk is equivalent to around 6000 words if you’re using the Azure OpenAI embeddings models for vector generation, so chunks need to be smaller than that. You can split your documents into:

    • Fixed-size chunks
    • Variable chunks
    • A hybrid, with some special chunking strategies added (e.g. to deal with specific formats in your documents like tables in PDFs or smart art)

    In our experience, that last point needs some special thought - I expand on it in the later section on “Content tuning”.

    Getting chunking right is vital. I speculated in the last article that some of my poor results with the “Azure OpenAI on your data” accelerator were due to inadequate chunking - there is a chunking mechanism in there, but it’s not used under all circumstances and the parameters used in the chunking script may not have suited my data.

    In terms of existing tools to help you implement chunking, there are various scripts and options out there. The LangChain splitter is a common one and Semantic Kernel, Microsoft’s AI orchestrator library, also has one. Whatever script or approach you use, in most cases you’ll need to integrate it into your indexing/ingestion pipeline so that as documents and data change and need to be re-indexed, the chunking and other steps happen automatically. More on this in the “Content ingestion/indexing” section.
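
    As a concrete example, here's a minimal sketch using the LangChain splitter mentioned above - the chunk sizes are illustrative and should be tuned to your data and to the limits of your embedding model, and the file name is a placeholder:

```python
# Minimal chunking sketch using LangChain's recursive splitter. Chunk sizes are
# illustrative only - tune them to your data and embedding model limits.
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,      # characters per chunk (roughly 250 tokens) - adjust to taste
    chunk_overlap=100,    # overlap helps preserve context across chunk boundaries
    separators=["\n\n", "\n", ". ", " "],   # prefer paragraph/sentence breaks
)

with open("ground-survey-report.txt", encoding="utf-8") as f:   # hypothetical document
    document_text = f.read()

chunks = splitter.split_text(document_text)
print(f"{len(chunks)} chunks ready for embedding and indexing")
```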

    The following Microsoft documentation is a good read on chunking and related considerations:

    Chunk documents in vector search - Azure Cognitive Search | Microsoft Learn

    Generating vector embeddings

    As touched on earlier, in most cases your AI solution will be much more powerful if your data is converted from its original form (e.g. text in a document) to embeddings. This allows concepts like similarity search, which is where much of the power of ChatGPT and generative AI comes from, in particular the feeling that the AI understands what you’re asking for regardless of the exact words you used. Most classic search solutions rely on keyword matching - a search for the word "dog" will only get results containing “dog". However, cats are somewhat related to dogs - and both are related to household pets. When your information is represented as embeddings, these semantic links and relationships can be understood – enabling AI solutions which use search, classification, recommendations, data visualization and more. The approach can work not just across text, but across other content types like images, audio, and video – different content types can all be converted to embeddings, enabling interesting scenarios like finding images and video related to concepts discussed in a conversation or document.

    Embeddings are created by passing your data (e.g. text inside a document) into an AI model which returns the information as embeddings, i.e. arrays of numbers. OpenAI have this image which nicely represents the process:


     

    Most solutions using Azure OpenAI will generate their embeddings using a model behind the Embeddings API, e.g. text-embedding-ada-002. New versions appear as models evolve, and because each model represents information differently internally, embeddings produced by different models are not interchangeable – so the model used to embed your content at indexing time needs to match the one used to embed queries at retrieval time.
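    To make that tangible, here’s a rough sketch of generating an embedding for a chunk of text with the Azure OpenAI Python SDK – the endpoint, key, API version and deployment name are placeholders for your own instance:

        from openai import AzureOpenAI

        # Placeholder values - substitute your own Azure OpenAI instance details
        client = AzureOpenAI(
            azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
            api_key="YOUR-KEY",
            api_version="2023-05-15",
        )

        chunk = "Total revenue for the quarter was driven by growth in cloud services..."

        response = client.embeddings.create(
            model="text-embedding-ada-002",  # the name of your embeddings deployment
            input=chunk,
        )

        vector = response.data[0].embedding  # a list of ~1,536 floats for ada-002
        print(len(vector))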

    AI orchestration

    When developing AI applications, it quickly becomes apparent that some middleware is needed to do some of the heavy lifting of storing data and calling plugins. LangChain emerged as a popular open source library for this, followed by Semantic Kernel as a Microsoft equivalent. Semantic Kernel provides quite a few valuable functions:

    • Connectors to vector databases – including Azure Cognitive Search, but also Azure PostgreSQL, Chroma, Pinecone, Qdrant, Redis, SQLite, Weaviate and a couple of others
    • A plugins model – allowing you to call out to other apps and systems from the conversation the user has with the AI. If you heard about ChatGPT plugins (e.g. those for Expedia, Zapier, Slack etc.) then this is the SK equivalent – and since the model provides an abstraction over different plugin architectures, both OpenAI and Azure OpenAI plugins can be used. Importantly, SK also provides some ready-to-go plugins, allowing you to do some common operations easily – calling out to an HTTP API, doing file IO, summarising conversations, getting the current time etc., and also doing some things LLMs aren’t suited to such as math operations
    • Memories – context is crucial in generative AI. The AI needs to understand things previously said in the conversation, so the user can ask contextual questions like “Can you expand on that?” Additionally, SK provides the concept of document memories, enabling the AI to have context of a particular document the user is working with closely. In this case, SK does the work of generating vector embeddings for documents (e.g. those uploaded by the user in a front-end app), thus joining up several of the concepts discussed here

    The real power of orchestration comes with chaining plugins and functions together in both predetermined and non-predetermined ways. In the latter case, we allow the LLM to decide how best to use a set of additional capabilities to meet a goal expressed in the user’s prompt – which is extremely interesting. For this to be effective, functions need to be described well so that the AI can decide whether they will be useful. The concept of giving the AI agency to decide which tools from an extended toolkit may be useful for a given task (i.e. beyond what it was initially trained on) has huge potential for organisational use. Consider an insurance company offering home/car/pet insurance policies to a large client base – with the right set of plugins, it would be possible to make complex requests in prompts such as:

    “Find all clients with a total annual contract value in the bottom 50%, and for each generate a personalised e-mail recommending policy extras not currently taken. Upload the draft e-mails to SharePoint and post a summary of client numbers and key themes to the ‘Client Retention’ team in Microsoft Teams to allow review”.

    Such a request could simplify a complex data analysis, content generation, and approval exercise massively, not only reducing effort and cost but potentially bringing in new revenue through the campaign results. The capability is ground-breaking because we are able to approximate human work – taking a fairly open-ended input and establishing the process and tools to get to an outcome, perhaps via certain milestones. This is generative AI supporting automation within the workplace, leveraging GPT’s ability to process data, identify anomalies, establish trends, generate content, and take action via plugins.
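    To illustrate the underlying mechanics, the sketch below describes two hypothetical tools to the model via the chat completions API so it can decide which to call for the insurance scenario above – the tool names, parameters and deployment details are all assumptions, and orchestrators like Semantic Kernel wrap this pattern for you:

        from openai import AzureOpenAI

        client = AzureOpenAI(
            azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholders
            api_key="YOUR-KEY",
            api_version="2024-02-01",
        )

        # Hypothetical tool definitions - the quality of these descriptions is what
        # lets the model decide which capabilities to use for a given request
        tools = [
            {
                "type": "function",
                "function": {
                    "name": "get_clients_by_contract_value",
                    "description": "Returns clients whose total annual contract value falls in a given bottom percentile band.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "bottom_percentile": {"type": "integer", "description": "e.g. 50 for the bottom half"}
                        },
                        "required": ["bottom_percentile"],
                    },
                },
            },
            {
                "type": "function",
                "function": {
                    "name": "post_teams_summary",
                    "description": "Posts a summary message to a named Microsoft Teams channel.",
                    "parameters": {
                        "type": "object",
                        "properties": {"team": {"type": "string"}, "message": {"type": "string"}},
                        "required": ["team", "message"],
                    },
                },
            },
        ]

        response = client.chat.completions.create(
            model="gpt-4",  # your chat model deployment name
            messages=[{"role": "user", "content": "Find clients in the bottom 50% by contract value and post a summary to the 'Client Retention' team."}],
            tools=tools,
        )

        # The model replies with the tool call(s) it wants made; your code executes
        # them and feeds the results back for the next step of the 'plan'
        print(response.choices[0].message.tool_calls)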

    Semantic Kernel is particularly strong in this, with several planner types offered to suit different “thinking approaches”. Simple cases will use the ActionPlanner, with more complex multi-step processes using one of the others:


    See the planner capability in the SK docs for more info.

    Content tuning

    In the Chunking section earlier I touched on some of the complexities of chunking for specific content, such as tables in PDF files. Attention needs to be paid to what’s IN the files you are working with – not all content is created equal, and text in paragraphs is more easily understood by AI than tables, graphs, and other visualisations. Some specific examples we’ve run into where the AI did not initially give great answers include:

    • Tables (in both PDF and Office docs), especially:
      • Long tables spanning multiple pages
      • Tables where some rows are effectively a “sub-header row”
    • Scanned/OCR’d documents where the content is effectively an image
    • HTML content
    • Images
    • Smart art
    • Document structure elements (headings, subheadings etc.) which convey semantics

    We needed to take specific steps to deal with such content, and as mentioned in the last article I think it’s where accelerators like Azure OpenAI on your data can run out of steam. For a production-grade AI platform, you’ll need to establish what you need to solve for in this area and prioritise accordingly - there’s almost no upper limit to how much tuning and content optimisation you could implement. Note also that while I label this “content tuning”, the tuning actually takes place in your platform mechanisms – most specifically your content ingestion pipeline and the chunking script/code. You’re not changing content to suit the AI, because the business will create content as the business needs to. That said, one tactic for special content may be to index a modified version of a file rather than the original – so long as you have a mechanism for ongoing ingestion of content created by the business.

    So what are the specific steps you might need to take? A possible toolbox here includes:

    • Modifying chunking to recognise long tables and adopt tactics such as:
      • Create a larger chunk than normal so the entire table fits into one
      • Ensure the table header is repeated every time the table is split
    • Implementing ‘document cracking’ (aka document understanding) using something like:
      • Microsoft Syntex – perhaps to leverage its extraction capabilities with important values inside documents (e.g. contract value, start date, end date, special clauses etc.); this can ensure vital details are indexed properly
      • Azure AI Document Intelligence – similar to Syntex, using the Layout model allows you to crack PDFs or images to text, even if it’s a scanned document where the content is actually an image

    Both of those document cracking approaches (Syntex and Azure Document Intelligence) allow tables to be processed since they understand headers, rows and columns.
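    As a rough illustration of the Azure AI Document Intelligence route, the sketch below runs the prebuilt layout model over a PDF and flattens each table into “header: value” lines so the rows survive chunking – the endpoint, key and file name are placeholders:

        from azure.ai.formrecognizer import DocumentAnalysisClient
        from azure.core.credentials import AzureKeyCredential

        # Placeholder endpoint/key for your Document Intelligence (Form Recognizer) resource
        client = DocumentAnalysisClient(
            endpoint="https://YOUR-RESOURCE.cognitiveservices.azure.com",
            credential=AzureKeyCredential("YOUR-KEY"),
        )

        with open("compaction-report.pdf", "rb") as f:  # hypothetical scanned PDF
            poller = client.begin_analyze_document("prebuilt-layout", document=f)
        result = poller.result()

        # Flatten each table into "header: value" lines - one simple way to keep
        # tabular facts intact when the text is later chunked and embedded
        for table in result.tables:
            cells = {(c.row_index, c.column_index): c.content for c in table.cells}
            headers = [cells.get((0, col), "") for col in range(table.column_count)]
            for row in range(1, table.row_count):
                print("; ".join(
                    f"{headers[col]}: {cells.get((row, col), '')}"
                    for col in range(table.column_count)
                ))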

    In cases where high value information is expressed in such constructs, be ready to spend time in this area gradually tuning and improving the AI’s understanding of the content. To close, the image below perhaps helps convey why gen AI needs help in this arena:

    Content ingestion/indexing

    All the previous aspects need to be worked into an indexing pipeline of some sort which can continually ingest data from source platforms - the only exception would be if you’re creating a simple solution based on a one-time upload of some static content, which is certainly more straightforward. Most scenarios, however, require generative AI to work against continually changing data (e.g. new and changing documents in Microsoft 365), and this means ensuring all of your steps to support RAG - in terms of content processing, chunking, embeddings generation and so on - are called as part of an automated pipeline.

    But what triggers the process? You could run on a scheduled basis, but in many cases you can piggyback onto existing content indexing mechanisms which may be scheduled or based on detection of content changes. Another benefit of Azure Cognitive Search as the RAG platform is the support for indexers (see the list of connectors above). In our solutions, to bring GPT capabilities to documents stored in Microsoft 365 we use the SharePoint indexer in Cognitive Search to do the initial gathering, but extend using skillsets to integrate document cracking, chunking, embeddings generation and other steps into the ingestion process (a rough sketch of this follows later in this section). A few considerations come with this, including:

    • The SharePoint indexer is still in preview at the time of writing
    • ACS has certain thresholds of how many indexes and indexers you can have – this varies based on pricing tier, but needs consideration when indexing at scale
    • The SharePoint indexer doesn’t currently deal well with some content scenarios such as deletions and folder renaming – this can lead to content staying in your gen AI platform when it shouldn’t, and missed content and/or broken links in citations

    On the last point, our team have needed to augment the indexer to deal with these shortcomings. On the first, we have some views on challenges Microsoft might be running into with the SharePoint indexer (consider ACS ingesting a Microsoft 365 tenant with 30+ TB of data for example) – and we hope this isn’t one of those cases where Microsoft tech gets pulled without even making it out of preview. Having Cognitive Search index documents in SharePoint is a common scenario for many reasons, not just generative AI – leaving the world to create their own indexing mechanisms would take away a big value-add for Microsoft’s premier search technology.
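    For reference, the skillset is the extension point mentioned above – the sketch below registers a hypothetical custom WebApiSkill (e.g. an Azure Function performing chunking and embedding generation) via the Cognitive Search REST API; the service name, api-version, function URL and input/output mappings are assumptions to validate against the current documentation:

        import requests

        SEARCH_SERVICE = "https://YOUR-SERVICE.search.windows.net"  # placeholder
        API_VERSION = "2023-11-01"                                  # check current docs
        HEADERS = {"Content-Type": "application/json", "api-key": "YOUR-ADMIN-KEY"}

        # A custom skill pointing at your own Azure Function which chunks document
        # text and generates embeddings - inputs/outputs here are illustrative
        skillset = {
            "name": "chunk-and-embed-skillset",
            "description": "Chunk content and generate embeddings during ingestion",
            "skills": [
                {
                    "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
                    "name": "chunk-and-embed",
                    "uri": "https://YOUR-FUNCTION.azurewebsites.net/api/chunk-and-embed",
                    "context": "/document",
                    "inputs": [{"name": "text", "source": "/document/content"}],
                    "outputs": [{"name": "chunks", "targetName": "chunks"}],
                }
            ],
        }

        resp = requests.put(
            f"{SEARCH_SERVICE}/skillsets/{skillset['name']}?api-version={API_VERSION}",
            headers=HEADERS,
            json=skillset,
        )
        resp.raise_for_status()
        # The skillset is then referenced from the indexer definition ("skillsetName")
        # so these steps run automatically on every indexing pass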

    Summary

    Today, there’s no such thing as a genuine turn-key platform for “generative AI on large amounts of your organisational data across different platforms”. On a related note, Microsoft 365 Copilot is amazing for many scenarios (and we had early exposure through the limited Early Access Program), but it’s not the answer to every generative AI use case. Sure, data from other platforms can be integrated via Copilot plugins, but in my view the pattern is better suited to small-scale ‘callouts’ to the other systems (e.g. read or write a record) – this isn’t quite the model for “ingest TB of data from different company platforms to work with gen AI”.

    However, with a talented team (or partner), such platforms can be built in a few weeks or months depending on your scope, and many parts of the stack will come from assembly of building blocks which exist already. Without a doubt, lots of the challenges above will be abstracted further in the next year or two – but at the same time, I’ll be surprised if Microsoft or anyone else cracks all parts of the puzzle in a way that works for everyone. Some elements will always be organisation-specific, and priorities will vary. Cost will always be a factor too – budgets will be found for AI projects demonstrating a path to return, but no-one wants to license a hugely expensive product only to find it can’t be easily configured or extended to work with company apps, data, and processes.

    Similarly, no-one wants to spend a year building a platform because the team didn’t know what they were doing or weren’t following developments closely. Being plugged in to the firehose of generative AI changes is vital to avoid missteps and wasted effort. For implementors, I feel this is the new web development or the new databases - solutions of immense value can be built, so relevant expertise will be in demand. Following a series of “AI hacks” and client projects this year, I’m feeling good about how we’re shaping up at Content+Cloud/Advania to respond to this new era.

    More fundamentally, the results we’re seeing from combining GPT (via Azure OpenAI) with our clients’ organisational data are hugely encouraging and show the power of generative AI in the workplace. Seeing the AI perform reasoning and answer deep questions over organisational data which came from different platforms and in different formats provides a vision of how AI will power organisations and how work will get done over the next few years. As I keep saying, it’s a magical time.

    Tuesday, 5 September 2023

    Integrating your data with ChatGPT - exploring Microsoft's "Azure OpenAI on your data" accelerator

    The idea of combining the power of ChatGPT and LLMs with organisational data has caught the attention of many. It seems to form the basis for many of the conversations I'm having with CIOs and tech leaders at the moment, and with good reason I think. After all, if you could "train" ChatGPT/generative AI on everything about your company, your products and services, clients, employees and expertise, past projects and other valuable information, the potential would be huge. If you could further add a sprinkling of the most relevant content on the internet such as the latest industry regulations, analyst reports, or information from accredited suppliers, the potential could be increased further. "Instead of searching and creating, can't I just ask generative AI to give me what I need?" is a common theme of questioning. In my view we're only starting to understand the possibilities and accuracy rates, but in our client projects so far where we've integrated organisational data with ChatGPT, the results are pretty incredible. As one example, being able to ask natural language questions about past projects and get high-quality, easy-to-understand answers seems to bring out organisational knowledge in a powerful way that helps with decision-making and winning business.

    There are many approaches to integrating custom data with AI. For most Microsoft-centric organisations, when we talk about ChatGPT it's actually Azure OpenAI which is the starting point for generative AI. This is because it allows safe and controlled use of OpenAI models such as GPT-4, but delivered with all the benefits of trusted Azure such as improved privacy controls, data sovereignty, governance policies, and integration into existing cloud billing. The approach described here revolves around Azure OpenAI and you'll need to have an instance of the service created. 

    Focus of this article
    With this context, this article covers:
    • Core concepts when integrating data with ChatGPT/Azure OpenAI
    • Overview of Azure OpenAI on your data, with a focus on integrating Microsoft 365/SharePoint data in particular
    • The setup process for Azure OpenAI on your data
    • What the solution looks like and findings from testing
    • My thoughts on where the solution fits in combining AI with your data
     

    RAG and other concepts in integrating data with ChatGPT and gen AI

    Stitching together custom data with LLMs requires work. There are several overarching approaches, ranging from training your own model (expensive and complex) and fine-tuning an existing model (limited to small pieces of data), to techniques like Retrieval Augmented Generation (or RAG) which essentially combine searching across your dataset - that's the retrieval part - with the answer and content generation we commonly associate with LLMs. RAG is a multi-step process, consisting of at least these steps:

    • Take the user's prompt and search across a dataset (i.e. your organisational data) for relevant information
    • Construct a long, detailed prompt for the LLM which includes the fetched data - this is known as grounding
    • Generate a natural language response based on the retrieved information

    The response will therefore feel like ChatGPT has not only been trained on internet data, but your custom company data too. The user does not know or care that a few things have happened under the surface. RAG is essentially the approach used by Microsoft 365 Copilot, where the data being returned in the initial step is from the Microsoft Graph - documents, relationships, meetings, activities, and other data in Microsoft 365.

    In RAG, information is often converted to vectors or embeddings to better support natural language processing.
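    Pulling those steps together, a deliberately simplified RAG round-trip might look like the sketch below, using Azure Cognitive Search for retrieval and Azure OpenAI for generation - the endpoints, keys, index name, field name and deployment name are all placeholders:

        from azure.core.credentials import AzureKeyCredential
        from azure.search.documents import SearchClient
        from openai import AzureOpenAI

        # 1. Retrieval - search the indexed organisational data (placeholder details)
        search_client = SearchClient(
            endpoint="https://YOUR-SEARCH.search.windows.net",
            index_name="company-docs",
            credential=AzureKeyCredential("YOUR-SEARCH-KEY"),
        )
        question = "What was agreed with the local authority about night-time working?"
        results = search_client.search(search_text=question, top=3)
        context = "\n\n".join(doc["content"] for doc in results)  # assumes a 'content' field

        # 2. Grounding - build a prompt which includes the retrieved data
        messages = [
            {"role": "system", "content": "Answer using ONLY the sources below. If the answer isn't there, say so.\n\nSources:\n" + context},
            {"role": "user", "content": question},
        ]

        # 3. Generation - a natural language response from the LLM
        aoai = AzureOpenAI(
            azure_endpoint="https://YOUR-AOAI.openai.azure.com",
            api_key="YOUR-AOAI-KEY",
            api_version="2024-02-01",
        )
        answer = aoai.chat.completions.create(model="gpt-4", messages=messages)
        print(answer.choices[0].message.content)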

    Overview - Azure OpenAI on your data  

    To help with the data integration question, Microsoft provide the Azure OpenAI on your data capability (shortened to "AOI on your data" in this article). This is effectively a PaaS accelerator where much of the back-end complexity of integrating LLMs with your data is taken care of. It handles creating a back-end data store, allows your custom data to be ingested, creates embeddings/vectors from your data (at least in some circumstances - more on that later), and lets you quickly deploy a sample app to provide a basic user interface with some of the useful features you might want (e.g. chat history and citations). It does use resources in your chosen Azure subscription though - you'll either create these at the time of initial config or point to resources you've already provisioned.

    Azure Cognitive Search is a key ingredient

    In Azure OpenAI on your data, the key technology which allows your documents and data to be combined with AI is Azure Cognitive Search. Cognitive Search provides the information store from which the initial information is retrieved, before feeding this into the prompt to ChatGPT/the LLM. Conceptually you can use any queryable data platform in Retrieval Augmented Generation, but it helps a lot if the platform can store vector data. Azure Cognitive Search has been extended with this capability, but know that many vector database options have sprung up in the AI era - from dedicated vector DBs such as Pinecone, Qdrant and Weaviate, to additions to existing technologies like Azure Cosmos DB (MongoDB flavour), Databricks, and Redis. Microsoft promote Azure Cognitive Search for generative AI applications, and it does have some fairly unique capabilities. Azure OpenAI on your data supports the following data sources:

    • Azure BLOBs
    • Files you upload
    • An existing Azure Cognitive Search instance you have (which could hold information you've indexed from lots of sources)

    Needless to say, the last option is the most powerful and flexible, so it's the one we'll look at here. One reason is that Azure Cognitive Search has an array of connectors which will allow you to bring in content quite easily from lots of platforms. These essentially break down as:

    • Native Microsoft connectors:
      • SharePoint Online, Azure SQL, Azure Cosmos DB, Azure MySQL, Azure BLOBs, Files, Tables, Data Lake Gen 2 etc.
    • Third party connectors - there are many, including:
      •  Adobe AEM, Amazon S3, Atlassian, Bentley Connectwise, Box.com, Elasticsearch and lots more - see the ACS connectors gallery
    • Your custom connector:
      • Essentially you can index anything by generating some JSON conforming to a particular structure (a minimal push-model sketch follows this list)
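    On that custom connector point, a minimal push-model sketch might look like this - the index name, fields and documents are hypothetical, and the index schema must already define matching fields:

        from azure.core.credentials import AzureKeyCredential
        from azure.search.documents import SearchClient

        # Placeholder service, index and key - the index must already exist with these fields
        client = SearchClient(
            endpoint="https://YOUR-SERVICE.search.windows.net",
            index_name="custom-content",
            credential=AzureKeyCredential("YOUR-ADMIN-KEY"),
        )

        documents = [
            {"id": "1", "title": "Supplier contract - groundworks", "content": "Full text of the document..."},
            {"id": "2", "title": "Local authority covenant", "content": "Full text of the document..."},
        ]

        result = client.upload_documents(documents=documents)
        print([r.succeeded for r in result])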

    Using the 'existing Cognitive Search' option in Azure OpenAI on your data

    As you might expect, you need an Azure Cognitive Search instance already and to have some data indexed, so if you're experimenting with this you'll need to get one created. If you're interested in "AI on your data" I recommend spending the time on this - it will help you understand how to combine ChatGPT with all sorts of data and platforms.

    Unfortunately the free tier of ACS is not supported for AOI on your data, so you'll need an instance created on at least the 'Basic' tier (£61.05 per month in UK pricing at this time). A good resource for getting started is Create an Azure Cognitive Search service in the portal - the process described there will get you the base service provisioned in Azure. The next step is to connect to some content.

    Indexing content in Microsoft 365/SharePoint Online with the SharePoint indexer

    One popular scenario will be to combine ChatGPT/Azure OpenAI powers with the knowledge contained within documents in Microsoft 365. Sure, it's exactly what Microsoft 365 Copilot will do when it arrives, but for me there are still many reasons to explore going this way - perhaps in addition to adoption of Copilot. For one thing, licensing of all users in an organisation may be a difficult investment case at $30 per user per month - it's unlikely to be something rolled out to the entire organisation for most. In contrast, a tool you stitch together yourself could be - and it could be quite cost-effective since there are building blocks like Azure Cognitive Search to support the journey. An AI strategy which combines Microsoft 365 Copilot usage (for those who derive the most value), with a supplementary AI tool which understands organisational data but has no per-user costs, could be a powerful approach to leveraging AI over the next few years. Regarding the latter, Azure Cognitive Search can bring together data from many sources quite easily - meaning it's a good foundation for AI that understands a LOT about how your organisation works. A key benefit is that it can go beyond just data in Microsoft 365.

    To get set up with Azure Cognitive Search indexing some of your M365/SharePoint content, I recommend following these instructions:

    SharePoint indexer (preview) - Azure Cognitive Search | Microsoft Learn

    Note that there are some technical steps in there since the config is done via Postman and the ACS REST API, but the process doesn't take too long. Once you've done this, it's now time for the fun part - configuring Azure OpenAI on your data and pointing to your Cognitive Search instance. 

    Configuring Azure OpenAI on your data with ACS

    The config steps for this part are done in the Azure AI Studio for your Azure OpenAI instance. As a reminder, you can get to this from the main Azure portal - your OpenAI instance will provide a link. 

    Once there, head into the chat playground and find the "Add your data" tab. Click the "Add a data source" button as shown below:

    In the dropdown which appears, select the Azure Cognitive Search option:


    In the next dialog you're going to point to your Azure Cognitive Search instance by selecting the parent Azure subscription then choosing the ACS service. Note that you also select a specific index within Cognitive Search here - which is why you need all the Cognitive Search config to be in place already using a process like that described above in the "Indexing content in Microsoft 365/SharePoint Online with the SharePoint indexer" section:

    The next step involves telling ACS how to establish the various bits of data to display in search results. Since the '10 blue links' we associate with search results are always made up of a title, a URL, a filename and a snippet of content, we need to tell ACS what they should be for the content being indexed. If you were indexing SQL data this might need more thought, but since SharePoint content is a set of files which naturally have these elements the mappings are quite logical. Just use the dropdowns to map each field to the relevant item specified when you created the indexer:

    The final option relates to semantic search in Azure Cognitive Search, which is the ability of ACS to semantically understand relationships between concepts in your data. I'd recommend treating this as an advanced capability that you might not start off with - it's chargeable for one thing, and we've been finding good quality results without it, most likely because vector search is already doing some of this. So, I suggest skipping past this one for now:



    The final step is to confirm your settings:

    Once confirmed, you'll be back in the main area of the chat playground with your configuration displayed. Note the "limit responses to your data content" checkbox - this constrains the LLM to only your added data and ignores the core internet data it knows already. Whether you check this or not will depend on the solution you're building (i.e. whether you want both sources involved), but I suggest that you definitely want this during testing at least:

    Config is now complete and we can think about the front-end interface and starting to test. 
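    For completeness, the same configuration can also be exercised programmatically rather than through the playground - the checkbox above maps to an "in scope" flag on the data source when calling the chat completions extensions endpoint. A rough sketch follows; the api-version and field names reflect the preview API as I understand it at the time of writing, so treat them as assumptions and check the current documentation:

        import requests

        AOAI_ENDPOINT = "https://YOUR-AOAI.openai.azure.com"  # placeholders throughout
        DEPLOYMENT = "gpt-4"
        API_VERSION = "2023-08-01-preview"

        body = {
            "messages": [{"role": "user", "content": "What were the key contract milestones?"}],
            "dataSources": [
                {
                    "type": "AzureCognitiveSearch",
                    "parameters": {
                        "endpoint": "https://YOUR-SEARCH.search.windows.net",
                        "key": "YOUR-SEARCH-KEY",
                        "indexName": "company-docs",
                        "inScope": True,  # the 'limit responses to your data content' setting
                    },
                }
            ],
        }

        resp = requests.post(
            f"{AOAI_ENDPOINT}/openai/deployments/{DEPLOYMENT}/extensions/chat/completions?api-version={API_VERSION}",
            headers={"api-key": "YOUR-AOAI-KEY", "Content-Type": "application/json"},
            json=body,
        )
        print(resp.json()["choices"][0]["message"]["content"])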

    Deploying the sample app front-end

    Azure OpenAI on your data provides a deployable web application which can serve as the front-end. In reality, this isn't something you could deploy to an organisation without further work, but it can be useful for testing and/or to accelerate the creation of a real front-end app. To provision it into your Azure subscription, start by finding the "Deploy to" button in the top right-hand corner of the Azure OpenAI Studio:

    Choose your preferred option, but in my case I'm choosing the web app:

    Specify the details for your web app - here's what mine looked like:




    Once the web app has finished deploying, the AOI Studio will display this in the top-right corner:


    Alternatively you can navigate straight in with the URL you specified. When you get there you'll see a basic web app which is talking to your data:


    Looks good! But is it the promised land of ChatGPT and generative AI that truly understands your data? I'll start to answer that here, but there's a lot to consider so I expand on things in the next article - for those working in this space it's worth discussing findings and recommendations in more detail. 

    Testing generative AI on your data

    In short, the results from my testing were.....mixed. I put this down to the Azure OpenAI accelerator taking care of some things for you, but for a production-grade solution my view is that you need more control and there's more work to do. Take this how you wish, but for now we are not using the "AOI on your data" accelerator in our client projects at Advania/Content+Cloud which combine generative AI and custom data. We're using similar principles and the very same technologies, but more 'grown-up' approaches based on the Microsoft documentation and other info. More on this below.

    Background - my scenario and data for testing

    As a set of documents to interrogate, I'm using some of Microsoft's earnings reports from recent quarters. I spend quite a bit of time analysing these each quarter to understand Microsoft's performance and strategy - they are full of dense information and it would be highly beneficial to be able to ask the AI simple questions and get simple answers, rather than the lengthy digestion and interpretation of the contents which I do today. The documents take the format of both PowerPoint documents and Word transcripts from the quarterly earnings calls. I only have a few documents but as I say, they are full of complex information - here they are in the SharePoint document library which ACS is indexing:

    The Word call transcripts look like this:

    The PowerPoint files look like this:

    Results overview

    So let's ask some questions of the data. Initial results seem quite promising, like the answers in this conversation thread:

    Looks good! Any solid "generative AI on your data" solution should help you understand how it's finding the answers, and expanding the citation helps me see the source content:

    However, the solution runs into challenges with some requests. Here's an example which I feel should have been answered:

    That's a bit surprising because the answer isn't hard to find in the document set. In a different case, I see a bit of hallucination happening. The data is actually being misunderstood, and an answer is given but it's incorrect. The answer to the question I'm asking should again be quite easy to obtain from the documents - total revenue for a specific quarter:

    The reason I know it's incorrect is because the answer is quite easy to find in both the PowerPoint and Word documents. Here it is in the deck for example:

    Expanding the citation starts to explain what's happening here:

    The AI has found something referring to revenue for the quarter, but in fact these numbers relate purely to Intelligent Cloud, one of Microsoft's segments, rather than total revenue. The fact that this part of the discussion in the call transcript relates only to that segment has been misunderstood. This is obviously somewhat concerning. As we combine AI with our data, the need for accuracy and precision tends to increase compared to consumer uses of ChatGPT for example. So why is this happening? Let's consider this and expand out into overall conclusions.

    My high-level conclusions on Azure OpenAI on your data

    My speculation on why AOI on your data doesn't always give great results in these cases comes down to what it does and does not do. Specifically, I put the AI misses above down to the fact that the data is not chunked properly. Sidebar - in the context of AI on your data, "chunking" is a key concept and refers to the practice of splitting long documents which go beyond the limitations of prompt size, e.g. 8K or 32K tokens for GPT-4 (a token is around 4 characters of text). Clearly, a long document in its entirety will not fit into the maximum prompt size allowed by LLMs today, so the typical approach involves splitting documents into smaller chunks. Indeed, Microsoft's documentation for AI on your data is explicit in calling out that you might need to do this - the "Ingesting your data into Azure Cognitive Search" section of the AOI on your data documentation (also linked below) discusses this and links to a commonly used 'data preparation' script - however it's something critical you'll need to take care of if you're building any kind of production solution. In some ways, this illustrates the issue with Microsoft's AOI on your data solution today - while it helps in provisioning a starting point for some elements, it doesn't necessarily do the hard bits which you'll need.

    By its nature, Azure OpenAI on your data is an accelerator which tries to simplify the complex aspects of combining AI with your data, but realistically it cannot take care of everything. Colleagues and I are currently viewing it as a low-code route to AI on your data, and like many low-code solutions there are some trade-offs and you hope you don't run into brick walls. In Power Apps for example, it's possible to break past constraints by calling out to an Azure Function to run custom code or bringing in PCF components to go past out-of-the-box UX controls. In the same way, it's necessary to understand where the boundaries lie with AOI on your data. Let me try to be more specific.

    Where Azure OpenAI on your data helps and where it doesn't

    AOI on your data is helpful in the following ways: 

    • Provisioning a sample web app front-end to Azure App Service - this uses a GitHub sample which isn't a bad starting point, and the solution provisions an App Service and App Service Plan for you. The sample code surfaces capabilities such as SSO auth, chat history and citations, various config options in app settings, and while the UX is very basic it certainly could be extended (the exact sample used is linked below)
    • Provisioning a back-end data store - a Cosmos DB instance used to store chat history, which is configured with 'provisioned throughput' capacity mode, and the Azure Cognitive Search instance if you're not pointing to an existing one
    • Hooking up the front-end to the back-end - integrating the sample app to various infrastructure pieces via app config settings - your Cosmos DB, Azure Cognitive Search instance, and your Azure OpenAI instance etc.
    • Helping you connect Azure OpenAI in a basic way to simple custom data sources - as described above, this provides the basics to connect to Azure Blob Storage, the file upload option, and an existing Azure Cognitive Search instance (the approach used in this article)
    However it's less helpful with other things you need:
    • Chunking of your data/content - when you bring an existing Azure Cognitive Search instance, which you'll do for anything other than Azure Blobs or the upload option (e.g. when you want to connect to a wider set of documents in Microsoft 365/SharePoint), the solution will use the data in its non-chunked form - resulting in potential accuracy challenges
    • Generating vectors/embeddings from your data - this is required to provide similarity search, the capability that allows ChatGPT and generative AI to be so powerful in truly 'understanding' the training data
    • Support for a wide variety of data - the solution supports Word, PowerPoint, PDF and some simple file types (.html, .text, .md) but for anything else you're on your own. Additionally, the processing of these formats is somewhat 'black box' and if it doesn't do the right things for you (e.g. deal with images, graphs, or tables in your PDFs in the right way), it seems there's no control to improve things
    • Aligning with enterprise-grade Azure architecture practices - support here is patchy, and I could imagine some organisations may feel the solution doesn't quite align with their Azure standards and governance. For example, if your Azure OpenAI instance is protected by a vNet and private endpoint, Azure OpenAI on your data can connect to this if you complete an application form but not otherwise. Storage accounts with private endpoints are currently not supported
    • Providing a production-grade front-end which you can roll out to the business - in the end, the solution is deploying a sample app, and sample apps aren't meant for production - they are meant as a starting point for development. We've found there's fairly significant coding work to do on this front, and for our client projects (and internal deployment) we choose to use a different GitHub sample as our starting point to this one (there are several around and we've looked at all the major ones)

    In the end, if your goal is to get ChatGPT (by which we really mean Azure OpenAI) talking to your data in Microsoft 365 or Azure, then you'll need to understand some of the deeper mechanics and building blocks involved in creating these solutions. My view is that while AOI on your data takes care of some useful pieces on the journey, those pieces aren't necessarily where the most complexity is. Of course, the capabilities of Azure OpenAI on your data will expand from where they are at the time of writing - there's absolutely no doubt about that. However, my recommendation is to consider the accelerator as the starting point for a technical team to use in a project - either simply as a reference architecture off to the side, or as the basis of a solution they will expand quite significantly. It's a great entry point to the space, but perhaps not the complete answer for providing the business with a solution which combines generative AI and organisational data.

    Beyond sample apps - delving deeper into building "AI on your data" solutions

    In the next article, I'll go into more detail on some of the concepts you're likely to need to deal with in building a production "AI on your data" solution, and also some of the Microsoft-centric building blocks which are useful, like Semantic Kernel. By the way, I certainly wouldn't want to claim I personally have all the answers - some of the wider thinking described above comes from my talented Advania/Content+Cloud colleagues, and even as a collective we're finding that this is definitely an emerging space where things are moving quickly and there's a lot to learn. Consider this info more as an attempt to share key findings and conclusions perhaps - but if Azure OpenAI on your data on its own doesn't answer all the questions, in the next article I'll share more thoughts on what might work.

    It truly is an exciting time, and the possibilities of AI with organisational data are huge from our perspective.


    References