Sunday, 29 March 2026

Autonomous agents - Technique 4: Leveraging Power Platform and Microsoft 365 capabilities

In the last post we covered how AI agents often 'need help' - they need to be equipped with tools they can call to complete important steps on the way to the outcome. Anyone creating an agent will often need to provide one or more pre-defined tools which wrap up the complex details of taking actions on specific systems, integrating data, or performing precise steps in a process. Every agent framework has such a concept, and as covered last time these are 'workflows' in Copilot Studio - a name change from what was previously known as 'agent flows'. 

The key to effective agents in the Microsoft space is often a core understanding of how the wider Microsoft 365 ecosystem works, so you can easily leverage existing building blocks - the Power Platform, SharePoint, and Teams are all good examples. Employees expect agents, apps, and automations to fit with their work, and for a Microsoft-centric organisation the data, documents, and resources often sit somewhere in Microsoft 365. In this post we'll explore agents creating templated documents, e.g. a formatted Word document like a proposal, report, contract, or letter - such a common real-world need in automation scenarios, and the key to it is leveraging existing building blocks.  

But first, here's a recap of the full series:

Articles in this series

  1. Techniques for autonomous agents in Copilot Studio - intro
  2. Scenario video - Microsoft architect with proposal generation
  3. Technique 1 - Getting AI-suitable descriptions right - data, tools, agents themselves
  4. Technique 2 - Define explicit steps in agent instructions when "reasoning the process" isn't appropriate
  5. Technique 3 - Provide tools for steps your agent can't easily handle [like agent flows]
  6. Technique 4 - Leveraging Power Platform and Microsoft 365 capabilities in your agents (this article)
  7. Technique 5 - Understand cost, capability, and governance implications of agents you create

The Microsoft 365 Document Generation Landscape

Before diving into the approaches that actually work well today, it's worth taking a quick look at the full picture, including some of the options that are not yet reliable.

The Word MCP Connector (Work IQ) - promising, but not yet there (mid 2026)

Microsoft has been rolling out a set of MCP (Model Context Protocol) servers under the Work IQ banner, including one for Word. In principle, this is an exciting development: it would allow your agent to interact with Word documents in a more conversational, AI-native way - creating, editing, and populating documents through natural language rather than through structured connector actions.

In practice, the Word MCP connector is not yet reliable enough to build production workflows on. I couldn't get it to work at all for document creation, so you can't depend on it for anything critical. That said, it's worth keeping an eye on as this area matures - Microsoft are clearly pushing hard on MCPs for all Microsoft services and Graph API capabilities, and this will likely improve significantly over the coming months.

Happily, there are other tried and tested ways to automate templated document creation in Microsoft 365.

Approach 1: Word Online Connector — "Populate a Word Template"

This is the route Microsoft most commonly points you to for automated Word document creation, and it does work to an extent — but it's also a bit legacy and has limitations.

How it works

The Word Online (Business) connector in Power Automate includes an action called "Populate a Word template". You'll find it here:

The concept is what you'd expect: you create a Word document that serves as a template, add placeholders in the appropriate places, and when the action runs, it drops values into those placeholders using whatever your agent passes in. To set up the template, the first step is to enable the Developer tab in Word. Once that's done, you can drop content controls into your template - within the Developer tab, they're found in the Controls section highlighted below:





However, notably only the plain text control is supported by the Power Automate connector - rich text is not. Unfortunately this is quite limiting and means your output documents may not be as well formatted as you'd like. As you set up the template and drop content controls in, you give each one a title and a tag that identifies it - these are the keys by which the Power Automate action will recognise and populate each placeholder.

So the process is:

  1. Design your document layout in Word as normal.
  2. Position your cursor where you want a dynamic value to appear.
  3. Insert a Plain Text Content Control.
  4. Open its properties and set the Title and Tag to a meaningful name — for example, I have ClientName, ClientRequirement, and ProposedApproach in my template. These names are what you'll reference in your flow.
  5. Save the template to a SharePoint document library.

One thing to be aware of: keep your template in .docx format (not .dotx). The connector works with standard .docx files — no need to use the Word template file type.
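Conceptually, the action receives one value per content-control tag. Here's a minimal sketch in plain Python of the mapping an agent needs to supply - the tag names match the template described above, while the values are hypothetical examples, and of course the real action is configured in the Power Automate designer rather than called from code:

```python
# Conceptual sketch only - the real "Populate a Word template" action is
# configured in the Power Automate designer, not called from Python.
# Tag names (ClientName etc.) match the content controls in the template.

template_tags = ["ClientName", "ClientRequirement", "ProposedApproach"]

# Hypothetical values the agent has derived from the conversation
agent_output = {
    "ClientName": "Contoso",
    "ClientRequirement": "Replace endpoint protection across 100,000 devices",
    "ProposedApproach": "Microsoft Defender for Endpoint Plan 2",
}

def build_payload(tags, values):
    """Check every placeholder has a value and return the tag-to-value mapping."""
    missing = [t for t in tags if t not in values]
    if missing:
        raise ValueError(f"No value supplied for tags: {missing}")
    return {t: values[t] for t in tags}

payload = build_payload(template_tags, agent_output)
print(payload["ClientName"])  # Contoso
```

The point of the validation step is worth noting: if the agent fails to derive a value for any tag, you want the flow to fail loudly rather than silently produce a document with empty placeholders.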

The catch: don't call this directly from an agent tool

Here's the important gotcha that may save you some head-scratching. If you try to use the Word Online connector and "Populate a Word template" action directly in an agent tool - that is, going into the Tools area of your agent and creating a new tool using the connector - you'll run into a problem with the document library picker.

The picker that lets you select your template file relies on a GetDrives API call to enumerate the available libraries. When invoked in the context of an agent action directly, this call doesn't behave as expected, and the picker fails to resolve properly. The result is that you can't successfully configure which template file to use, and the action doesn't work.

The solution is straightforward: use this action inside a Copilot Studio workflow (agent flow). When you build the "Populate a Word template" action as part of any kind of Power Automate cloud flow, the picker behaves correctly in the Power Automate designer. You can select your template file without issues, and the flow runs reliably when triggered by the agent at runtime. You'll then create a tool in your agent which references this workflow, and then reference that in your agent instructions. 

This is actually a good illustration of why the Copilot Studio workflow pattern is so powerful - it gives you the full Power Automate design surface, complete with all its connector compatibility and picker behaviours, while still making the whole thing available as an intelligent tool to your agent.

The result

Once everything is in place, the templated document gets created successfully - provided your agent is doing the right thing and calling your tool. However, the lack of any kind of formatting (even line breaks) within the text dropped into placeholders means the resulting document is very 'low fidelity':

So, let's explore other ways of leveraging Microsoft 365 building blocks.

Approach 2: SharePoint connector — "Generate document using Microsoft Syntex" (content assembly)

The second approach uses a different connector and a different underlying technology: SharePoint AI, previously known as SharePoint Premium and Microsoft Syntex. If you haven't worked with these capabilities before, it's Microsoft's content AI woven into SharePoint, and it includes a document generation capability that integrates directly with Power Automate. A long time ago this was known as Syntex Content Assembly, and I wrote about it in a few places, including Chris O'Brien: SharePoint Content Assembly - hints and tips.

What You Need: Syntex Pay-As-You-Go

Before going further, the prerequisite: Microsoft Syntex Pay-As-You-Go (PAYG) must be enabled on your tenant. Syntex document generation is a metered service, meaning there's a per-transaction cost - for document creation, this is currently $0.15 per document - see Pay-as-you-go pricing for document processing for Microsoft 365 for more details. If PAYG isn't configured via the Microsoft 365 admin centre, the action won't be available.

How It Works

Content assembly works in a very similar way to the previous process, in the sense that it's a question of dropping placeholders into a template and then using that in the automated process. You create and manage the Word templates from within SharePoint itself:

  1. Navigate to the document library where you want to store your templates
  2. In the library, select Create or upload → Create modern template - it's easily missed, so here's where to find it:
     
  3. This opens the Syntex template editor, where you work with a Word document and insert field tokens - for each one, use the pane on the right to map it to a field in a SharePoint list or library. I document the full process with screenshots at Automate creation of new documents with SharePoint Syntex Content Assembly if that's helpful
  4. Give each field a meaningful name, save and publish the template

The template is stored in a hidden area of the document library in SharePoint.

Building the Flow for your agent

In Copilot Studio, create a new workflow - in the flow designer, add the SharePoint connector and look for the "Generate document using Microsoft Syntex" action. You'll configure:

  • The Site, Template library, and Template (your Syntex template).
  • The Document library and File name for the output document.
  • The field values — these appear as inputs corresponding to the {{FieldName}} tokens in your template. Map these to the dynamic content coming from your agent flow inputs i.e. the parameters passed into the flow (you'll need to define these on the flow trigger).

The action generates the populated document and saves it to the specified SharePoint location. You can then return the document URL or other metadata back to the agent as the flow output.
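Under the hood, content assembly is essentially token substitution - each {{FieldName}} token in the template is replaced with its mapped value. Here's a rough stdlib Python illustration of the idea, with a hypothetical template fragment; the real action works on the published Syntex template and its field mappings, not on raw text like this:

```python
import re

# Hypothetical template fragment using the {{FieldName}} token style
template = (
    "Proposal for {{ClientName}}\n\n"
    "Requirement: {{ClientRequirement}}\n"
    "Proposed approach: {{ProposedApproach}}\n"
)

# Example field values, standing in for the flow's dynamic content
field_values = {
    "ClientName": "Contoso",
    "ClientRequirement": "New endpoint protection for 100,000 devices",
    "ProposedApproach": "Microsoft Defender for Endpoint Plan 2",
}

def fill_tokens(text, values):
    """Replace each {{Token}} with its value; leave unknown tokens untouched."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: values.get(m.group(1), m.group(0)),
        text,
    )

document = fill_tokens(template, field_values)
print(document)
```

Thinking of it this way also explains why fidelity is better than with the Word Online connector: the substitution happens within the full structure of the template document, so surrounding formatting, line breaks, and paragraph structure survive.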

The result

The output isn't radically different, but even the fact that we have line breaks and paragraph structures just makes the resulting document more usable:

So, the fidelity of documents generated this way tends to be noticeably better than with the Word Online connector approach - which matters a lot when documents will be read or shared down the line. In the context of the process we're automating in this series with the Advania Tech Architect agent, it means our architects have an easier time taking this document and building on it to make it client-ready. They can come in and start the validation process - fact-checking the agent outputs, adding detail related to the project or requirement, and generally layering in the human expertise that makes it a worthy artifact which can carry the Advania name.

Another benefit of this approach is that by going via a SharePoint list, you also get a nice audit log of every agent output which made its way into a document. Because the data is in list format and not just inside a document, you can use all the benefits of lists - filtering, sorting, formatting, rules, notifications, and so on.

Closing the loop: Microsoft 365 automation skills are a core ingredient for agents

When people think about building autonomous agents in Copilot Studio, the conversation often gravitates towards the AI capabilities — the reasoning, the multi-step planning, the conversational handling. All of that matters. But some of the most impactful things your agents can do aren't sophisticated AI capabilities at all - they're automation capabilities. The ability to create a document, save it to the right place, send a notification, look something up in a list — these are table stakes for agents that do real organisational work.

What that means is that Microsoft 365 and Power Platform skills aren't just helpful context for someone building Copilot Studio agents, they're pretty much essential. Understanding how connectors work, knowing which actions are available and what their quirks are, knowing when Syntex PAYG is a prerequisite — these are the practical details that separate an agent that works in a demo from one that works in production.

In Copilot Studio, workflows/agent flows are the mechanism that lets you bring all of that automation capability into your agent's toolkit. The pattern we've worked through in this post — gather information through conversation, execute document automation via an agent flow, deliver the result — is a pattern you can apply to a huge range of real business scenarios. Once you have a solid mental model of what to lean on in the Power Platform and wider M365 and how to surface it through workflows, you're able to automate a lot of work and get great outcomes from your agents.

In the next post in this series, we'll look at a final vital dimension of running agents successfully in production — avoiding the pitfalls which could make your agent hugely expensive, and laying down appropriate cost governance guardrails and monitoring effectively. But that's for next time.

Next article (coming soon)

Technique 5 - Understand cost, capability, and governance implications of agents you create

Saturday, 31 January 2026

Autonomous agents - Technique 3: Provide tools for steps your agent can't easily handle (like Agent Flows)

Continuing the theme of this series on how to build effective AI agents which have some autonomy and operate on the instructions you give, we need to address another aspect of keeping an agent on rails - specifically, when agents take action. That might be updating an external system or database, sending an e-mail or other message, asking for human approval in a step, cross-referencing some organisational data, or any number of other things we may want an agent to do. The challenge is that steps like these often need to be very defined and exact - simply specifying what should happen in agent instructions is rarely going to work. "Update our CRM with the new lead details" or "Raise an invoice for the order" are vague guidance with nowhere near enough context - even the most capable AI agent backed by the latest LLM will fail on those without help. If the agent could talk it would conceivably say "What CRM, where? How do I authenticate? How are leads stored, and how do I ensure the lead is associated with a client and I'm not creating a duplicate?" 

In the last post we focused on writing good instructions for agents - but most agents need more than that. They need to call out to tools which are pre-defined and wrap up the complex details of taking actions on specific systems, integrating data, or performing precise steps in a process. Every agent framework has a 'tools' concept, and for Microsoft agents built with Copilot Studio, this is agent flows - ultimately, Power Automate flows triggered from Copilot Studio agents. This post covers how to make your agent more reliable in the actions it performs by calling out to agent flows, including the specific help Microsoft gives you to simplify this.  

But first, here's a recap of the full series:

Articles in this series

Agent flows - what are they?

As a concept, agent flows aren't anything too new - this is Power Automate within the Microsoft ecosystem, adapted for the AI and agent world. An agent flow is essentially a Power Automate cloud flow which can only be called from an agent. Here's a quick primer on some of the differences and commonalities:

Agent flows - a primer
  • Agent flows share the same workflow designer as Power Automate, the same set of connectors, and the same approach to key concepts like triggers, actions, child flows, and variables
  • An agent flow must be created in Copilot Studio (not Power Automate) and start with the Run a flow from Copilot trigger and finish with the Respond to Copilot action.
  • Licensing is different - agent flows run under Copilot Studio licensing rather than Power Automate licensing (i.e. they consume Copilot credits)
  • Agent flows can use Premium connectors without charge (since it's being covered by the Copilot Studio licensing)
  • Agent flows DO bring special support for calling from agents - in particular, if your flow has a series of input parameters (let's say pieces of an address), the agent can automatically determine which pieces of data it should pass to each. This works surprisingly well if you name your inputs properly - more on this later
  • Agent flows are designed to be shared across agents - they essentially become a toolkit of well-defined actions and sub-processes used by your agents, some of which may be specific to an agent and some shared across many
  • Agent flows give better tracking, analytics, and overall control across the actions they take compared to steps an agent would run just from its instructions - this is helpful for anything you need full control and monitoring over 

This is essentially 'tool calling' in the Copilot Studio agent world.
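If it helps to think of an agent flow in code terms, it's essentially a named function with typed inputs (the Run a flow from Copilot trigger) and a structured return value (the Respond to Copilot action). Here's a hypothetical Python sketch of that shape - agent flows are of course built in the Copilot Studio designer, and the function name and payload here are illustrative only:

```python
# Illustrative only - agent flows are built in the Copilot Studio designer,
# but their contract resembles a typed function the agent can call.

def log_proposal(client_name: str, requirement: str, proposed_approach: str) -> dict:
    """'Run a flow from Copilot' trigger = the typed inputs;
    'Respond to Copilot' action = the returned payload."""
    # ...connector actions would run here (e.g. SharePoint Create Item)...
    item_id = 42  # placeholder for the created list item's ID
    # The agent receives this structured output and can reason over it
    return {"status": "success", "itemId": item_id}

result = log_proposal(
    "Contoso",
    "Replace endpoint protection",
    "Defender for Endpoint Plan 2",
)
print(result["status"])  # success
```

Viewed this way, the 'toolkit shared across agents' point above is just function reuse - several agents calling the same well-named, well-described function.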

How agent flows are used in my scenario

Agent flows give us consistent execution across processes and actions - and since we all know about the non-deterministic nature of LLMs by now, it's clear that many agents need this. For the 'technology architect agent' discussed in this series, if you read the last article you might remember we were hitting issues trying to get the agent to do certain things:
  • Issue 1 - agent was failing to create the Word proposal document as requested - which we said would include the technology recommendation, rationale, and licensing uplifts etc. required for this change
  • Issue 2 - agent was failing to log its output to a SharePoint list as requested - this is to give a simple audit trail of requests and corresponding recommendations
Agent flows are needed to fix this.

I created two flows, one for each sub-process:

Let's look at these one by one. 
Agent flow to create proposal document
The objective here is to:
  • Create a draft customer proposal containing all the details of the technology upgrade the agent has determined - essentially, accelerating our consultants who would normally create such documents from scratch
  • Ensure the document is on the Advania-branded Word template 
So this is essentially "create a templated document and drop information into placeholders". There are a few ways to do this in Microsoft 365, and this agent flow effectively automates my chosen approach - I'm using a capability in SharePoint Premium/Syntex called Content Assembly, and this provides a handy Power Automate action. Here's the start of the flow in designer:

We'll go into this in more detail in the next post - Technique 4 - Leveraging Power Platform and Microsoft 365 capabilities in your agents. The key message is that to build effective low-code agents in the Microsoft world, you need a solid understanding of wider M365 building blocks and how these can be plugged into your agents - otherwise you'll hit limits of agent building and automation. 

To summarise here, the approach I'm using for templated document creation is a Microsoft Syntex/SharePoint Premium capability called Content Assembly. I've already done the setup work for this which involves:
  • Creating a SharePoint list with columns for all the info pieces your document needs
  • Creating a 'modern template' in SharePoint, where you upload your branded document and insert placeholders in the right locations for each piece of data to be dropped in
For full details on this approach, see my article Automate creation of new documents with Syntex Content Assembly.

For my agent, the piece which does the magic is this item - the 'Generate document using Microsoft Syntex' action available in Power Automate and therefore agent flows. Individual blocks of text like the client name, requirement summary, proposed approach etc. are passed into this action for them to be dropped into the document:
 
 
What's happening here is that these pieces of data are being retrieved from a SharePoint list item and then passed into this action, and therefore the document. But that needs something to create the list item in the first place, and that's our agent itself - and specifically, it's my other agent flow which does that step. Let's look at that now.
Agent flow to create SharePoint list item with agent's output
To show what's happening here, let's start with the actual list that stores this data - here's a partial view of it:

The full list of columns is:

All of those items are dropped in by the agent. This is where we come to the important support that agent flows give in simplifying all this - "proactive slot filling". Because my agent flow has three clear input parameters, I can simply ask the agent to work out what to pass in from their names - and again, this is a place where descriptive naming and rich descriptions are absolutely critical for AI and agents, I named them carefully. To do this, in the Tools > Inputs area of my agent I use the "dynamically fill with AI" option for each parameter:

With the 'dynamically fill with AI' approach, the agent itself works out what to pass into each parameter based on their name and pieces of information it's already determined from the conversation. There's quite a bit to what's possible here and Microsoft document it at Implement slot-filling best practices - it's essentially NLU working with some pre-determined entities and the ability to define your own. What this means is you don't need to do the work of parsing out individual pieces of information from either the agent's earlier output or queries and prompts supplied by the end-user - this is AI running over the user/agent conversation so far and extracting what it thinks the right answers are likely to be. The alternative would be you doing this hard work and then passing in 'hard-coded' values to your flow parameters. Of course, the dynamic AI approach won't always work perfectly and it's an area of agent development that needs careful scenario testing using different types of data - and to say it one more time, good naming is critical of course or the AI has no chance.
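To make 'dynamically fill with AI' a little more concrete: the agent reads the conversation so far and maps the facts it has extracted onto the flow's named inputs. Here's a toy Python illustration of that mapping - to be clear, the real mechanism is Copilot Studio's NLU and entity extraction, not a lookup table, and all the names and values below are hypothetical:

```python
# Toy illustration of slot filling - NOT how Copilot Studio implements it.
# Facts the agent has already derived from the conversation:
conversation_facts = {
    "client organisation": "Contoso",
    "stated requirement": "replace endpoint protection on 100,000 devices",
    "recommended technology": "Microsoft Defender for Endpoint Plan 2",
}

# Flow input parameters - descriptive names are what lets the AI map correctly
flow_inputs = ["ClientName", "ClientRequirement", "ProposedApproach"]

# A crude name-to-fact mapping standing in for the NLU step
slot_hints = {
    "ClientName": "client organisation",
    "ClientRequirement": "stated requirement",
    "ProposedApproach": "recommended technology",
}

filled = {param: conversation_facts[slot_hints[param]] for param in flow_inputs}
print(filled["ClientName"])  # Contoso
```

The sketch also shows why naming matters so much: if a parameter were called simply "Input1", there would be nothing for the AI to anchor the mapping on.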

So that covers how data gets passed in, and from there it's down to whatever steps you implement in your flow using all the standard Power Automate capabilities. As you can imagine, to create our SharePoint list item which then drives the proposal document creation I simply use the SharePoint connector's Create Item action:

Thus, we now have our automation chain of:
  1. Architect asks the agent for a proposal on how the client's use case should be addressed
  2. Agent uses its data sources and reasoning to derive an approach that makes sense for this client (based on stated needs, technologies in play or suitable to adopt, licensing etc.)
  3. Agent provides its response
  4. Agent calls tool (agent flow 1) to add a SharePoint list item containing key elements of the response
  5. Agent calls tool (agent flow 2) to create the Word proposal document on the Advania branded template using the SharePoint list item
  6. Agent notifies the user that it's done
We now have a fully working agent doing its advanced reasoning and creating the draft proposal document for our architect to enhance and take forward to the client.
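The chain above can be sketched as a simple pipeline - each step becomes a call, with the list item ID linking the two flows. All the function names, values, and the URL below are illustrative stubs, not real APIs:

```python
# Illustrative pipeline for the six-step chain - all functions are stubs
# standing in for agent reasoning and the two agent flows.

def derive_recommendation(use_case: str) -> dict:
    """Steps 2-3: agent reasoning over knowledge sources (stubbed)."""
    return {"client": "Contoso",
            "approach": "Defender for Endpoint Plan 2",
            "requirement": use_case}

def create_list_item(recommendation: dict) -> int:
    """Step 4: agent flow 1 - log the output to a SharePoint list (stubbed)."""
    return 101  # the new list item's ID

def create_proposal_document(item_id: int) -> str:
    """Step 5: agent flow 2 - Syntex content assembly from the list item (stubbed)."""
    return f"https://example.sharepoint.com/proposals/item-{item_id}.docx"

# Steps 1-6 end to end
use_case = "Replace endpoint protection across 100,000 devices"
recommendation = derive_recommendation(use_case)   # steps 2-3
item_id = create_list_item(recommendation)         # step 4
doc_url = create_proposal_document(item_id)        # step 5
print(f"Done - draft proposal at {doc_url}")       # step 6
```

The key design point is the hand-off in the middle: the list item is the durable record that connects the agent's reasoning to the generated document, which is exactly why both flows are needed.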

The result

Now that we've codified exactly where and how to create the proposal document (via the agent flows), we have a document successfully dropped into my chosen SharePoint library:

The draft proposal has all the details of the agent's output and was created on our organisational template:


The next step is to start analysing and enhancing the agent's output - checking the reasoning, architectural and licensing guidance, and turning this into a polished client-ready proposal. But the heavy lifting of contemplating the requirement, embarking upon the research, considering different options, ensuring each granular requirement specified by the client is met, deriving any licensing considerations and uplifts, then structuring a draft proposal - all this is done.

A word on Express Mode for agent flows

A final thing to understand about agent flows is express mode. In Microsoft's framework, agent flows fail if they take longer than two minutes to execute - express mode is an opt-in model giving faster execution times with some limitations, and it applies to agent flows only rather than extending to Power Automate flows too. There are no additional costs or licensing implications, but the limitations need to be understood - more on this at https://learn.microsoft.com/en-us/microsoft-copilot-studio/agent-flow-express-mode
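The practical implication of a hard execution limit is that long-running flows need to budget their time. Here's a generic Python sketch of guarding a sequence of steps against a time budget - this is purely illustrative, since the two-minute limit is enforced by the platform rather than by anything inside your flow:

```python
import time

TIME_BUDGET_SECONDS = 120  # mirrors the platform's two-minute limit

def run_steps_within_budget(steps, budget=TIME_BUDGET_SECONDS):
    """Run each named step in order, failing fast if the budget is exhausted."""
    start = time.monotonic()
    completed = []
    for name, step in steps:
        if time.monotonic() - start >= budget:
            raise TimeoutError(f"Budget exhausted before step '{name}'")
        completed.append((name, step()))
    return completed

# Hypothetical steps standing in for connector actions
steps = [
    ("fetch data", lambda: "data"),
    ("create document", lambda: "doc-url"),
]
print(run_steps_within_budget(steps))
```

The broader lesson is the same one the limit teaches: keep individual agent flows short and focused, and split genuinely long processes into multiple tools.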

Summary

In this article, we focused on one of the most important ingredients for building dependable agents - giving them the right tools to perform precise, repeatable actions. In the Copilot Studio world, this is agent flows. While agents excel at reasoning and orchestrating conversations, they simply can’t execute structured operations (like updating systems, creating documents, or logging data) reliably without clearly defined, deterministic steps. I don't see this changing too much even as models and agent frameworks evolve over the next few years. 

We explored how agent flows act as the “hands” of your agent, wrapping complex processes into reliable Power Automate cloud flows. You get predictable execution, premium connector access, consistent handling of structured data, and better monitoring and governance. Using the architect proposal scenario, we walked through how two agent flows - one to log outputs into SharePoint and another to generate a branded proposal document - take the agent from being a conversational assistant to providing real automation of the process. We also looked at how dynamic slot filling removes the need for brittle manual parsing, allowing the agent to intelligently map conversation data into flow inputs.

The result is an agent that not only reasons about a problem but also creates the tangible output - in this case, a ready‑to‑review customer proposal created from our branded, SharePoint‑based Advania template.

One aspect we didn’t dive into here is billing and capacity consumption, which becomes increasingly important as your agent ecosystem grows. That topic deserves its own space, and we’ll cover it in detail in the final article in this series.

Next article

Technique 4 - Leveraging Power Platform and Microsoft 365 capabilities in your agents

Tuesday, 11 November 2025

Autonomous agents - Technique 2: Define explicit steps in agent instructions when "reasoning the process" isn't appropriate

One of the main reasons AI agents are such a leap forward in automating work comes down to their ability to reason and dynamically plan in order to reach an outcome - which of course, is much like us humans intuitively do in our work. Apps and systems that we've developed over the past few decades have essentially all followed set processes with procedural code, and while many have flexible logic and business rules, today's AI models take us several levels beyond this. Ultimately we move away from a world of apps with explicitly coded steps to one where AI agents are often instructed with the end goal or outcome, but not necessarily a granular process of how to get there. So long as the agent is given the data, tools, and guidance to do this, it should have a good chance of meeting the outcome. Or that's the theory.

One of the biggest challenges today with autonomous agents is that they can have too much freedom and go off the rails from the intended task. We're not talking dystopian takeover, simply unpredictable agents with poor results and a lack of consistency. So, this post covers how to make your agent more reliable by finding the balance between autonomy and specific guidance - it all comes down to how you write the agent instructions. 

But first, here's a recap of the full series:

Articles in this series

Some lessons learnt on agent instructions and autonomy

Going back to our Microsoft 365 architect agent used in this series (which automates some of our work at Advania by considering a business challenge and the client's tech landscape, then making an architectural and product recommendation), here's how I started out with the instructions I gave to the agent. 

Poor agent instructions
The user will input a use case or business challenge. Using your knowledge of Microsoft 365 licensing and the client's environment and licensing, provide a recommendation of how a solution could be implemented using the available technologies. Generate a proposal outline to implement the solution - propose the technology(s) to be used, any licensing uplifts required which aren't currently in place or planned, and an overview of what the project would look like. Create a Word document containing the prompt and response. Also log the results to the SharePoint list in the "Syntex-AI-intelligence" site named "Technology proposals".

All seems quite reasonable and specific - I'm outlining specific goals and steps to take, and even being ultra-clear on where to find the SharePoint site to log the output to. However, let's take a look at the result:

What's actually happening here is:
  • The agent only runs one step - which is never going to work
  • The agent is trying to determine the overall architectural solution simply from the first knowledge source (the m365maps.com site, which provides info on Microsoft 365 products and SKUs)
I won't show the end output here to avoid confusing things, but the actual answer given by the agent is very low quality - it does say that Defender for Endpoint Plan 2 could be used (correct) as seen in the screenshot above, but the agent fails on several key things we want it to do: 
  • There's no real consideration of the factors here, the agent appears to be doing some basic querying only. No basis for the recommendation is specified, and there's no evaluation of licensing uplifts which may be required (as asked for in the instructions)
  • The agent didn't log its results to the list as requested
  • The agent didn't create the Word proposal document as requested
So, clearly this isn't working even if it has a hint of the right answer. 

Defining explicit steps in a numbered process

The reality is we need to be much more explicit with agents today - expecting them to be able to "reason the process" from a set of instructions often doesn't work and results will be unpredictable. In particular, where a process has steps that truly need to be followed in sequence - like fetching some data and then reasoning over it - the thing that makes the difference is specifying these steps in a numbered list. 

Here are my revised instructions:

Sequenced agent instructions
  1. Derive the client organisation name from the user's prompt. If you are unsure of the organisation, ask the user for clarification. 
  2. Derive the use case or business challenge the client is asking about from the user's prompt. If you are unsure of the use case, ask the user for clarification. 
  3. Using the client name you already derived, use your knowledge sources to understand which technologies the organisation is licensed for. 
  4. Now use your knowledge sources to ensure you understand the individual Microsoft products and sub-capabilities of these Microsoft 365 SKUs - build a categorised list for use in later steps. 
  5. Using the data from the previous step, reason to provide a recommendation of Microsoft technologies which could meet the specific requirements. IMPORTANT - be extensive with the rationale for your decision, mapping how capabilities in the proposed Microsoft technology meet specific client requirements and detail this in your output. If no recommendation can be made (e.g. because the use case cannot be met with the licensed technologies), make a recommendation of which technologies could be purchased to help. As part of this reasoning, establish any licensing uplifts which may be required for this client to use the recommended technologies. ## Be sure to detail your chain of thought in the output ##
  6. Also reason to establish any non-Microsoft technologies that could potentially be used to fulfil the requirement. 
  7. Generate a proposal outline to implement the solution - propose the technology(s) to be used, any licensing uplifts required which aren't currently in place or planned, and an overview of what the project would look like to implement the Microsoft-based solution. 
  8. Log the results to the SharePoint list in the "Syntex-AI-intelligence" site named "Technology proposals".
  9. Create a Word document containing all details of client requirement and proposed solution - ensure to use the full reasoned output in the details. Save this to the SharePoint library in the "Syntex-AI-intelligence" site named "Technology proposal documents".

I've expanded on the detail of what I want in some places, but the other major change is switching to a numbered list to instruct the agent to follow those steps in order. Now let's look at the difference:


 
Looking at those images, straight away we can see the agent is now following many more steps - and if we drill into the two "Knowledge" steps as shown above, we can see the query used for each (and the results) now make sense:
  • Advania client data - agent asks it what technologies the specific client is licensed for
  • M365maps.com - agent uses this to discover what capabilities are in each granular Microsoft 365 SKU
This sets us on the path to success.

Results from the agent 

At this point we hope our agent will properly consider the client challenge provided to it (e.g. replace our endpoint protection, plan for a new intranet, introduce a compliance tool etc.), then consider the client's technology environment and licensing and whether anything provided in Microsoft 365 fits the bill - and then reason through all this to produce a technology recommendation, factoring in any licensing uplifts needed as well as adoption/migration considerations. Here was my prompt:

Unilever (fictional - remember I'm replacing real Advania client data with the top 20 UK companies), one of our clients, want to roll out a new endpoint protection technology to replace CrowdStrike across approximately 100,000 end-user devices. Ransomware protection is a key concern, and the RFP lists automated incident response and remediation, integrated threat intelligence, and sandboxing as required capabilities. 

Notably, they need coverage for 5,000 devices where users only have Microsoft 365 F1 licenses (frontline workers).

Here's the result:

Chris, here’s a complete recommendation and delivery outline for Unilever’s endpoint protection RFP.

Organization: Unilever
  • Use case and business challenge inferred: Replace CrowdStrike across ~100,000 end-user devices.
  • Must meet: strong ransomware protection, automated incident response and remediation, integrated threat intelligence, and sandboxing/detonation.
  • Coverage gap to address: 5,000 devices used by Microsoft 365 F1 frontline workers.
What Unilever is licensed for today (from our records)
  • User licensing: Majority on Microsoft 365 E3 with EMS add-on; ~15,000 on E5; ~10,000 Microsoft 365 F1 frontline licenses.
  • Security stack in place: Entra ID P1, Microsoft Sentinel (Azure Sentinel).
  • Endpoint and platforms: Endpoints managed with Intune; currently using CrowdStrike for endpoint protection; Mimecast for email; AVD and Citrix in use; Azure in UK/US/AUS regions.
Microsoft products and sub-capabilities relevant to this RFP (categorized)
  • Endpoint protection and EDR
    • Microsoft Defender for Endpoint Plan 1 (MDE P1): next‑gen AV, tamper protection, attack surface reduction (ASR), web content filtering, centralized management, cross‑platform support, “Block at First Sight.”
    • Microsoft Defender for Endpoint Plan 2 (MDE P2): adds endpoint detection and response (EDR), threat analytics, vulnerability management (core), and longer searchable data retention; forms the endpoint pillar of Microsoft 365 Defender (XDR).
  • Automated investigation and remediation (AIR)
    • Microsoft 365 Defender suite provides AIR across workloads; Microsoft documentation highlights AIR in Defender for Office 365 Plan 2 (email) as an example capability within the suite.
  • Threat intelligence
    • Threat Analytics within MDE P2 delivers integrated intelligence and analyst‑ready reports that inform investigations.
  • Sandboxing/detonation
    • Endpoint: “Block at First Sight” uses rapid cloud evaluation and detonation for unknown files for preventative blocking on endpoints (P1+).
    • Email/collab: Defender for Office 365 Plan 2 adds Safe Attachments (detonation sandbox) and Threat Explorer for campaign-level investigation.
  • SIEM/SOAR
    • Microsoft Sentinel already present for Unilever; integrates natively with Defender signals for advanced hunting/automation.
  • Device and app management
    • Microsoft Intune already in use for policy, configuration, and deployment orchestration.
Summary recommendation
  • Standardise on Microsoft Defender for Endpoint Plan 2 across the Windows, macOS, Linux, Android, and iOS estate; manage deployment and policy via Intune; unify signals in Microsoft 365 Defender (XDR) and stream to Microsoft Sentinel for SIEM/SOAR.
Why this meets each RFP requirement
  • Ransomware protection
    • Prevent: Next‑gen AV, ASR rules, tamper protection, and “Block at First Sight” provide strong preventative controls on endpoints (P1 baseline).
    • Detect and contain: MDE P2 adds behaviour‑based EDR with live response, threat analytics, and extended searchable telemetry to spot hands‑on‑keyboard activity and lateral movement quickly.
  • Automated incident response and remediation
    • MDE P2 enables automated investigation and remediation on endpoints to contain and fix threats (part of Microsoft 365 Defender’s AIR capabilities).
    • Sentinel playbooks (Logic Apps) can orchestrate cross‑tool actions at scale (contain device, isolate network, reset credentials) using Defender incidents as triggers; Unilever already has Sentinel.
  • Integrated threat intelligence
    • Threat Analytics in MDE P2 provides curated intel mapped to current campaigns and exposures, tied directly to Unilever’s devices and vulnerabilities for prioritisation.
  • Sandboxing/detonation
    • Endpoint: Cloud‑based file analysis and “Block at First Sight” help pre‑empt execution of suspicious payloads on endpoints.
    • Email: If Unilever later consolidates on Defender for Office 365 Plan 2, Safe Attachments detonates content pre‑delivery and contributes to unified incidents in Microsoft 365 Defender. 
How this fits Unilever’s current environment
  • Intune is already the endpoint manager, making at‑scale onboarding, policy configuration (ASR, AV, EDR), and gradual CrowdStrike co‑existence/cutover straightforward.
  • Sentinel is already deployed, so incident fusion, hunting queries, and automation can be turned on rapidly to cover both endpoint and other data sources.
Licensing path to enable the above
  • Users currently on Microsoft 365 E5: MDE P2 rights are included; no uplift needed for those users.
  • Users on Microsoft 365 E3 with EMS add‑on: add either the Microsoft 365 E5 Security add‑on (bundled security including MDE P2) or standalone Microsoft Defender for Endpoint Plan 2 licenses for those users. 
  • Frontline worker devices on Microsoft 365 F1 (5,000 devices): options to cover endpoints with MDE P2
    • Option A (preferred for feature parity): move these users to Microsoft 365 F3 and attach an appropriate security add‑on that grants MDE P2 (for example, an enterprise security add‑on akin to E5 Security for frontline) or assign standalone MDE P2 per user where permitted. Validate final SKU mix with Microsoft licensing because frontline bundles vary by agreement and commercial construct.
    • Option B (kiosk/shared scenarios): if devices are truly shared and user‑based licensing is impractical, explore device‑based licensing options with Microsoft for MDE where applicable. Validate with the Microsoft account team to align with Unilever’s Enterprise Agreement terms.
Note: The SKU matrix shows where endpoint features and EDR/Threat Analytics land (P1 vs P2/E5 Security). Use this to ensure every user segment lands on P2 for EDR/AIR. 

Commercial and operational rationale
  • Reduces agent sprawl by consolidating into the Microsoft stack Unilever already operates (Intune, Sentinel), lowering operational overhead and integration complexity.
  • Meets or exceeds the specified capabilities with unified incidents, threat intel, and at‑scale automation, while improving mean time to detect/respond.
Non‑Microsoft alternatives that meet the specification (for RFP completeness)
  • CrowdStrike Falcon Insight + Falcon X (sandbox) + Falcon Fusion for automation.
  • SentinelOne Singularity Complete/Control with Storyline automation and rollback.
  • Palo Alto Cortex XDR Pro + WildFire for detonation + XSOAR automation.
  • Sophos Intercept X Advanced with XDR + Sophos Sandstorm.
  • Trend Micro Apex One SaaS + Vision One XDR + detonation sandbox. 
Note: These all offer ransomware protection, automated response, TI, and detonation. The decisive factors will be agent footprint, integration with Intune/Sentinel, and TCO relative to Microsoft’s included rights.

Delivery proposal outline for Unilever
  • Scope: Replace CrowdStrike with Microsoft Defender for Endpoint P2 on ~100,000 devices; ensure parity on ransomware protection, AIR, TI, and sandboxing; integrate with Microsoft 365 Defender and Sentinel.
  • Phases and timeline (indicative)
    • Readiness and design (3–4 weeks)
      • Confirm license entitlements per segment; define co‑existence approach with CrowdStrike; finalize policies (AV/ASR/EDR), device groups, and ring strategy; connect Defender to Sentinel with required data connectors and analytics rules.
    • Pilot (4–6 weeks, ~2,000 devices across key personas/regions)
      • Deploy MDE sensor via Intune; validate protection efficacy, AIR behaviour, endpoint isolation, and SOC workflows; tune ASR/tamper protection; validate detonation/“Block at First Sight.”
    • Scale‑out waves (8–12 weeks)
      • Roll out by business unit/geo; enable EDR in block mode; progressively disable CrowdStrike on devices once Defender is in “active” mode and stable; expand Sentinel automations and hunting queries.
    • Cutover and decommission (2–4 weeks)
      • Remove CrowdStrike agent; finalize GPO/MDM baselines; confirm incident volumes, MTTR, and coverage; complete runbook documentation and knowledge transfer for SecOps/Endpoint teams.
    • Hardening and optimization (ongoing)
      • Leverage Threat Analytics and TVM to drive risk reduction; regular red/blue exercises; quarterly policy reviews; rationalize email sandboxing approach if consolidating on Defender for Office 365 Plan 2.
Architecture and integration highlights
  • Intune for deployment/policy; Microsoft Defender for Endpoint P2 on endpoints; Microsoft 365 Defender for incident correlation; Microsoft Sentinel for SIEM/SOAR rules, hunting, and playbooks; integration to ServiceNow/Jira for ticketing as needed.
Roles and responsibilities
  • Our team: solution design, pilot, deployment tooling, Sentinel content and automation, change and cutover runbooks, training.
  • Unilever: environment access, pilot cohort sponsorship, change approvals, SOC/process owners, endpoint engineering partnership.
SharePoint list logging (Technology proposals)
  • Action prepared: Log the proposal to Syntex-AI-intelligence > Technology proposals with fields: Client=Unilever; Use case=Endpoint protection replacement; Scope=100k devices; Key requirements=ransomware, AIR, TI, sandboxing; Recommendation=MDE P2 + M365 Defender + Sentinel; Licensing notes=E5 covered; E3 add E5 Security or MDE P2; F1 coverage options; Status=Proposed; Next steps=Pilot planning.
  • Limitation: I don’t currently have write access via this interface to post to your SharePoint list. If you can grant me access or confirm the list URL and credentials, I will submit this entry immediately on your behalf.
Word document creation (Technology proposal documents)
  • Action prepared: Generate a Word document containing this full recommendation, capability mapping, licensing options, and delivery plan, and save to Syntex-AI-intelligence > Technology proposal documents with a filename like “Unilever – Endpoint Protection Replacement – Proposal.docx.”
  • Limitation: I don’t have write access to create/save documents to your SharePoint library from here. If you share a drop‑off library URL or grant permissions, I’ll create and upload the document automatically; alternatively I can send you the .docx for you to place.
Key decisions for you
  • Confirm the preferred licensing path:
    • E3 population: E5 Security add‑on vs standalone MDE P2.
    • F1 population (5,000 devices): upgrade to F3 + security add‑on vs assign standalone MDE P2 to those users; validate final SKU choices with Microsoft licensing to align with Unilever’s agreement.
    • Confirm pilot cohorts and target start date so we can mobilise.
Citations used: [list of citations]

Phew! The results are amazing when you think about it - the agent has considered all the factors and showed a level of understanding equal to a highly experienced and capable Microsoft security specialist. We even see that the agent has done what was asked in terms of mapping specific requirements given by the client (e.g. automated incident response and remediation, integrated threat intelligence, and sandboxing in this case) to product features in the proposed solution, Defender for Endpoint. 

We do still have two issues though - as the two "limitation" notes in the agent's response call out, the agent wasn't able to:
  • Log the output to the SharePoint list as requested
  • Create the Word proposal document as requested
So, we're not done yet - we'll come to these resolutions in the next article. 
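For a flavour of what one of those resolutions could look like under the hood, step 8 (logging to the SharePoint list) ultimately boils down to a Microsoft Graph call that a workflow tool could make on the agent's behalf. Here's a minimal sketch - the site and list ids are placeholders, and the field names are my assumptions rather than the real list schema:

```python
# Hypothetical sketch of the Graph request a workflow tool would issue to log
# a proposal to the "Technology proposals" list. Site/list ids and field names
# below are placeholders/assumptions, not real values from the agent.

def build_list_item_request(site_id: str, list_id: str, fields: dict) -> dict:
    """Return the URL and JSON body for a Graph 'create listItem' POST."""
    return {
        "url": f"https://graph.microsoft.com/v1.0/sites/{site_id}/lists/{list_id}/items",
        "body": {"fields": fields},
    }

request = build_list_item_request(
    "syntex-ai-intelligence-site-id",   # placeholder site id
    "technology-proposals-list-id",     # placeholder list id
    {
        "Title": "Unilever - Endpoint Protection Replacement",
        "Client": "Unilever",
        "Recommendation": "MDE P2 + Microsoft 365 Defender + Sentinel",
        "Status": "Proposed",
    },
)
print(request["url"])
```

A Copilot Studio workflow (or Power Automate flow) would make the authenticated POST itself; the point is simply that the tool wraps a precise, deterministic call the agent can't reliably improvise on its own.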

In terms of the cognitive work the agent has successfully performed, at Advania we don't see this as replacing our high-end architects - but we do see it as accelerating our teams and allowing more junior members of the team to take on more complex work. The work partly switches to verification, judgement, editing where required, and rounding out and adding perspective - all elements which would have been required anyway. Having the agent take on some of the complex research, interpretation, and generation is huge though of course - we really are democratising intelligence and expertise, and can therefore move faster and serve more clients.  

Summary

Autonomous agents will transform work over the next few years, but their effectiveness depends on how we guide them - instruction design matters. Simply stating an end goal and the outline of a process is unrealistic, at least today - instead, breaking instructions into clear, sequenced steps (probably using numbered bullets as I've shown in this article) is needed to get reliable agent behaviour. It's a question of balancing autonomy with control, and human oversight and verification is of course crucial.   

The next article will resolve those pesky challenges of the agent not being able to create Word documents to accelerate that part of the process and the team's work.

Wednesday, 3 September 2025

Autonomous agents - Technique 1: Getting AI-suitable descriptions right - data, tools, agents themselves

As I opened with in the first article in this series, AI agents are the next generation of solution we'll build for the next few years - this is how work gets automated, at least some of it. Business systems which take away some of the human processing so we can focus on more impactful work, powered by highly-capable LLMs with the ability to consume data, reason and plan, and use tools like web browsers. In this series we focus on agent-building techniques, and in particular what it takes to build autonomous agents successfully, i.e. the ability for an agent to dynamically reason and plan. Guidance here orients around Microsoft Copilot Studio agents, though it's interesting to reflect that many of the techniques apply across nearly all AI agent platforms. 

In this article, we focus on descriptions - in the context of AI agents, this means descriptions for data, tools, sub-processes, and other things the agent might use to get to its goal. It's interesting because descriptions have been pretty innocuous in the apps/solutions/automations we've built over the last few decades - much like comments in code, only humans read them. However, things are very different when AI is reading them and making decisions about how to proceed based on the words provided.

But first, here's a recap of the full series:

Articles in this series

Descriptions - why they're suddenly vital

Despite all the advances in the underlying AI models, AI agents are not magic and they need help. This is especially the case when:
  • Agents have some autonomy in terms of getting to an outcome (rather than being defined step-by-step, like a coded application)
  • Agents can use data and knowledge sources
  • Agents can call on tools and sub-processes
  • Agents can call other agents (if they understand what they do)

AI needs good descriptions more than humans do
The critical point is that AI agents use descriptions to understand what data to use and when, what tools to call and when, and what sub-processes to run and when. If your descriptions are poor and don't supply enough context, the behaviour of your agent will be unpredictable and won't be as expected.
 
As an example which may hit many agent builders, many agents fail or behave unpredictably because they don't understand what their knowledge sources or tools are for. This often happens when default or auto-generated descriptions are used - they're vague, generic, and unhelpful. For example, if you provide a file as a knowledge source for your agent, Copilot Studio will helpfully use AI to generate a description of the file - but this will essentially be "this knowledge source uses this file". So when I provide a detailed Excel file of Microsoft 365 license SKUs and capabilities (from the excellent M365maps.com), the descriptions generated today do nothing to help the agent understand what this knowledge represents and how it could be used:

The Fix:

Write clear, detailed descriptions for every knowledge source, tool, and even the agent itself (to support agent-to-agent interactions in the future). These descriptions are used by the orchestration engine to decide what to use and when. 

Let's focus on a knowledge source (i.e. some data used by an agent) in Copilot Studio. In the agent I'm using in this series, one knowledge source is an Excel export of 'technology landscape' data for key Advania clients (redacted for the video in the last post). 

Bad example (this is the default description generated by Copilot Studio):

“This knowledge source searches information contained in Microsoft license coverage by client.xlsx.”

Good example:

Instead, a far better description helps the agent understand what's in the data/knowledge - for example: 

“This knowledge source details the technologies, plans, and current Microsoft licensing in place for key Advania clients. It covers aspects like how many Microsoft 365 E3 and E5 licenses are held, and which technologies are used for Security & Compliance, messaging, endpoint protection, and document management.”

The agent can now understand exactly what's in this data and how it can be used. This context is essential for establishing how it should go about the process and what to use when.
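As a thought experiment, the difference between the two descriptions above can even be captured mechanically. This is purely an illustrative sketch (there's no Copilot Studio API involved) - a crude lint that flags auto-generated descriptions carrying no real context, with the phrase list and word-count threshold being my own assumptions:

```python
# Illustrative sketch only: flag knowledge source descriptions that look
# auto-generated (generic boilerplate) or too short to give an agent context.

GENERIC_PHRASES = ("searches information contained in", "uses this file")

def is_useful_description(description: str, min_words: int = 20) -> bool:
    """Heuristic: reject boilerplate phrasing and very short descriptions."""
    text = description.lower()
    if any(phrase in text for phrase in GENERIC_PHRASES):
        return False
    return len(description.split()) >= min_words

bad = ("This knowledge source searches information contained in "
       "Microsoft license coverage by client.xlsx.")
good = ("This knowledge source details the technologies, plans, and current "
        "Microsoft licensing in place for key Advania clients. It covers how "
        "many Microsoft 365 E3 and E5 licenses are held, and which technologies "
        "are used for Security & Compliance, messaging, endpoint protection, "
        "and document management.")

print(is_useful_description(bad), is_useful_description(good))  # → False True
```

In practice you'd review descriptions by eye, but the same two tests - "does this say anything beyond naming the file?" and "is there enough detail to act on?" - are exactly what the orchestration engine implicitly needs answered.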

Agent behaviour with poor knowledge descriptions

Before data and knowledge sources are given good descriptions (rather than the AI-generated ones from Copilot Studio), you'll find all sorts of strange behaviours. Things may look good at first because you see the agent referencing the sources:

However, dig deeper and you'll find some sources simply aren't used in the agent's processing:

In fact, what we see is the agent simply referenced one knowledge source and then stopped there - it didn't actually consider the use case it was given or do any kind of reasoning, and while it gives a response that looks rich at first glance, in fact the agent is simply trotting out some generic info related to Microsoft 365 products. We see some of this in the activity map in Copilot Studio which shows what happened when the agent ran:


Agent behaviour with good knowledge descriptions

Fix the descriptions, and suddenly the agent knows how to consider each knowledge source - if we ask the same question to the agent, we see that BOTH knowledge sources are now used. Firstly the agent queries the "Microsoft 365 license SKU matrix" source to consider endpoint protection options:


Next, the agent starts a specific query on Microsoft 365 F1 licensing, which makes sense given this client is referenced in the client data knowledge source as having this restriction for some employees:


Better - but still not right

On the surface, it seems that the agent is coming closer to its purpose - being an effective tech architect with deep Microsoft knowledge. However, if the agent's final responses are closely analysed at this point, we see that:
  • Unfortunately, the agent isn't truly considering the client requirement passed to it (as per the demo video shown in the previous article, I'm asking it to consider a particular client's need to replace endpoint protection, along with some granular requirements). Instead, I'm still getting some 'lightweight consideration' and generic Microsoft 365 product info at best
  • The agent is also not delivering on another element - I'm not getting a draft proposal document generated, although I'm asking for that in the agent instructions
So in short, the agent is now using its data and knowledge sources in a more sensible way, but the overall process being followed isn't what we need. We need a step-by-step consideration of the client's need and the granular requirements specified, any Microsoft 365 products or capabilities which might meet these needs, a lookup and consideration of the technologies the client is currently licensed for, and then a genuinely thought-through recommendation based on all the above. I won't show the agent output at this stage to avoid confusing things, but suffice to say we have an agent at "first day intern" level at the moment, and for this to be valuable we need a lot more cognitive processing.
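To make the desired sequencing concrete, here's a toy sketch of that process expressed as ordinary code. None of these functions exist in Copilot Studio and the data is fabricated for illustration - the point is only that requirements, licensing, and recommendation must be considered in order, each step grounded in the output of the previous one:

```python
# Toy sketch (not Copilot Studio code): the step-by-step consideration we want.
# Each stage consumes the output of the previous one, so skipping a step
# (as the agent was doing) leaves the final recommendation ungrounded.

def derive_requirements(prompt: str) -> list[str]:
    # Stand-in for deriving granular requirements from the user's prompt.
    return [r.strip() for r in prompt.split(";")]

def lookup_licensed_technologies(client: str) -> list[str]:
    # Stand-in for querying the client data knowledge source.
    fake_client_data = {
        "Ecosphere Solutions": ["Microsoft Intune",
                                "Defender for Endpoint P2",
                                "Microsoft Sentinel"],
    }
    return fake_client_data.get(client, [])

def recommend(client: str, prompt: str) -> dict:
    requirements = derive_requirements(prompt)             # consider the ask
    licensed = lookup_licensed_technologies(client)        # then the estate
    candidates = [t for t in licensed if "Defender" in t]  # then match (crudely)
    return {"requirements": requirements, "candidates": candidates}

result = recommend("Ecosphere Solutions",
                   "automated incident response; threat intelligence; sandboxing")
print(result["candidates"])  # → ['Defender for Endpoint P2']
```

An LLM agent does the matching step with far more nuance than a substring check, of course - but it still needs to be told that this ordering exists, which is exactly what the next technique addresses.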

On to the next challenge - getting agent instructions right
Now our agent understands the data and knowledge it has access to more clearly, we need to turn our attention to how the agent is guided in the instructions we give. If we make some changes here, the agent will go through a step-by-step process and reason over the key questions being asked of it much more effectively. Which leads us to the next article in this series:

Next article

Tuesday, 29 July 2025

Demo video - Microsoft architect autonomous agent (Copilot Studio)

In the previous article in this series on autonomous agents, we talked about what makes an agent autonomous and some implementation fundamentals specific to Copilot Studio. As with anything AI, seeing an example in context goes a long way to helping understand the possibilities, so this second post provides a video of a real autonomous agent we're starting to use at Advania. The agent effectively becomes a member of our team, using advanced reasoning models to work with complex concepts and accelerate our work. Before that, here's a reminder of what this series looks like:

Articles in this series

Use case for this agent

If you follow me on LinkedIn you may have seen me post about this agent there. We built this agent to automate some of our work at Advania, in particular some of the complex Microsoft architecture and technology consultancy work we deliver to clients. The scenario is essentially an 'expert Microsoft architect' agent which understands:

➡️ The various technology estates of key Advania clients and what they have licensed - the agent sources this from an internal system we have
➡️ Microsoft 365 product SKUs and licensing, specifically E3/E5 suites and granular capabilities in sub-products like Defender for Endpoint, Defender for Identity etc. - the agent uses the excellent m365maps.com website for this
➡️ How to take a specific client requirement (e.g. a need to roll out a new endpoint protection technology/automate a legal process/reach frontline workers with corporate comms etc.), derive any "strong fit" Microsoft technologies, and map granular requirements specified by the client to product capabilities to support the proposed approach

The video shows:

✅ A quick overview of the agent definition (built in Copilot Studio)
✅ Data sources the agent has access to
✅ The agent reasoning through the supplied use case for one fictional client (Ecosphere Solutions)
✅ Proposed approach with clear rationale - including licensing considerations, implementation details, and how specific requirements are met by the proposed technology
✅ Proposal drafted on company-branded template

Demo video



Reflection on AI agents like this

The power of agents is that they can act as a virtual team member, automating some of the workload and enabling human effort to go to higher order challenges. The interesting thing about this agent in my view is the ability to perform advanced reasoning - thinking through the client's need, the technologies they have access to, exactly what's provided in those, and deriving a good fit if there is one. 

Of course, we don't see the AI agent as replacing Advania architects and consultants much-loved by our clients - this is an accelerant for our teams, not a replacement. But we do see agents like this as helping us deliver more value to clients - bolstering our expertise and helping us respond faster with the accuracy and depth we're known for. It also helps us level-up less experienced consultants or new members to a team. In reality, every business has complex processes and expertise that today's AI agents can unlock - this is an example of what makes sense for us.

Next article

Technique 1 - Getting AI-suitable descriptions right (data, tools, agents themselves)

Techniques for autonomous agents in Copilot Studio - blog series part 1

AI agents are the next generation of solution we'll build for the next few years - it's clear that many "app front-end plus data" business systems will evolve to be more agentic, in the sense that the application itself will automate more of the processing, and interfaces become less about forms and more about instructing the AI in natural language. Autonomous agents are one of the most exciting aspects because software and apps simply didn't have this possibility until now, and I don't think there's much conjecture that this is a key 'unlock' in how work becomes automated and more efficient, and how AI starts to have a real impact on societies and economies. With advanced LLMs and protocols for bringing agents and systems together, we now have the tools to build agents that can reason, act, and deliver outcomes - not just respond to prompts. But with this power comes complexity, and I see many approaching agents with expectations that don't align with today's capabilities. 

This blog article is the first in a series which walks through five key techniques for building effective autonomous agents in Copilot Studio. Each article highlights a common pitfall, explains the underlying concept, and offers practical guidance to help you succeed. Whether you're building agents for internal automation, customer-facing scenarios, or domain-specific copilots, these lessons will help you avoid the traps and unlock the full potential of generative orchestration.

Scenario - an autonomous reasoning agent for Microsoft 365 architecture recommendations
Throughout this series I'll reference an agent I built which acts like one of our most experienced Microsoft architecture consultants at Advania - able to understand the full suite of security and productivity capabilities in Microsoft 365 E3 and E5, consider licensing needs and SKU packaging, and make technology recommendations for a given use case based on a deep understanding of the factors. The next article in the series shows a demo video so you can see the agent "thinking through" the scenario, automating the recommendation process through deep reasoning, and drafting a project proposal on a company templated document - a process which can take hours or days for an experienced architect. This is about accelerating that role, improving accuracy, and levelling-up less experienced architects so their thought process and outputs match those of the most experienced. 

Something we'll focus on in this initial article is that agents aren't autonomous by default in Copilot Studio - the agent has to be built with specific settings enabled and certain keywords used in the agent instructions. This post covers these fundamentals, because using all the right techniques won't get you anywhere if the agent isn't set up to behave autonomously - but we'll also start by explaining what we mean by "autonomy" so you understand where we're heading and what such an agent can do. 

What makes an agent autonomous?

There are lots of definitions of this, but I boil it down to four elements - I used this slide in a conference talk recently (at the 2025 European Power Platform Conference):


Importantly, in Copilot Studio some of this is made possible by "generative orchestration" - this isn't enabled by default, so if you want dynamic behaviour you need to toggle it on in your agent settings:

Unlike classic orchestration (where you define every topic and response), generative orchestration allows the agent to decide how to use its knowledge, tools, and topics to fulfil a request. It’s powerful — but it also means you need to design your agent carefully to guide that autonomy.

So in Copilot Studio you essentially have two modes, where "classic" is the old mode and generative is the new possibility:

If you look at most other AI agent and virtual assistant platforms which have been around for a while (e.g. ServiceNow, Salesforce, Google), all have evolved from this classic "define each and every step of what a user might ask and how the bot/agent should respond" approach to something like generative orchestration where the LLM is essentially deciding how to behave and have the conversation with the user.  
 
For Copilot Studio, Microsoft has a useful table on the Orchestrate agent behavior with generative AI page which goes into more detail on specific behaviour differences:

 

Enabling reasoning in Copilot Studio agents

In the first image above, we saw that a key element of autonomy is being "able to reason and plan". To use reasoning in your agent, this also needs to be enabled in your agent settings (within the 'Generative AI' section), and it's only possible if you're using generative orchestration. The settings are bundled together in the same area:




As highlighted in the small text in the blue box, the other critical thing is to use the "reason" keyword specifically in your agent instructions. This tells Copilot Studio to use a reasoning model (OpenAI o1 at the time of writing) rather than a standard LLM - and this won't happen if you describe the step in other words.

Here's an example of the reason keyword being used in agent instructions - in this case, an agent I built where this is one of the agent steps to complete: 
5. Using the data from the previous step, reason to provide a recommendation of how a solution could be implemented using the available technologies. As part of this reasoning, establish any licensing uplifts which may be required for this client to use the recommended technologies. IMPORTANT - be extensive with the rationale for your decision, detailing how capabilities in the proposed technology meet specific requirements.
I'll show this agent in action through this article series. As you can imagine, the instruction above tells the agent to use a reasoning model for this step in order to derive the recommendation I'm asking for - that's important, because we're asking for "thinking" rather than more standard LLM processing. 
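Because the behaviour hinges on a single keyword, it's easy to write instructions that describe thinking without ever triggering a reasoning model. Here's a hedged sketch of a simple check you could run over your own instruction steps - the "implies thinking" keyword list is my own heuristic, not anything Copilot Studio provides:

```python
# Illustrative helper: flag instruction steps that describe reasoning in other
# words (so Copilot Studio would run them on a standard LLM) but omit the
# literal "reason" keyword. The keyword heuristic below is an assumption.
import re

def steps_missing_reason_keyword(instructions: list[str]) -> list[int]:
    """Return 1-based indices of steps that imply thinking but omit 'reason'."""
    implies_thinking = re.compile(
        r"\b(think|consider|analyse|analyze|evaluate)\b", re.IGNORECASE)
    missing = []
    for i, step in enumerate(instructions, start=1):
        if implies_thinking.search(step) and "reason" not in step.lower():
            missing.append(i)
    return missing

steps = [
    "Derive the client organisation name from the user's prompt.",
    "Think about which licensed technologies could meet the requirement.",
    "Using the data from the previous step, reason to provide a recommendation.",
]
print(steps_missing_reason_keyword(steps))  # → [2]
```

Step 2 above would quietly run on a standard model despite clearly asking for analysis - exactly the kind of instruction worth rewording to include "reason".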

Agents need help
If not clear already, what we're really saying in this series is that agents need help - the work these days is preparing data, creating tools and sub-processes for agents to call into, and refining natural language descriptions and instructions until an agent behaves in the way you want. This is a new form of coding in some ways, but it doesn't all happen magically - understanding the critical techniques is key.

Articles in this series

Next article

Scenario video - Microsoft architect with proposal generation