M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Stop Waiting: Automate Multi-Stage Approvals with Copilot Studio



You spend half your day waiting for approvals. Someone’s on vacation, someone else “didn’t see the email,” and by the time a decision finally arrives, the context that justified the request has expired. Corporate purgatory: progress paused by people who swear they’re busy.

Now, picture a system that simply doesn’t wait. A workflow that moves forward the instant conditions are met. Enter Microsoft Copilot Studio’s Agent Flows—the bureaucracy killer disguised as automation.

Here, AI becomes your first approver. It reads the data, evaluates it against policy, and gives an informed “approve or reject” before any human blinks. Only borderline cases ever reach a manager’s inbox, which means speed without sacrificing oversight. And unlike legacy approval flows that collapse under conditional complexity, these AI-driven ones scale—branching, validating, and auditing themselves along the way.

In this walkthrough, I’ll show you how to build a multi-stage, conditional approval system that decides faster than your colleagues can find the “Reply All” button. You’ll learn how to: set up an AI stage with custom approval logic, add targeted human reviews, design dynamic conditions that reroute intelligently, and integrate real document validation for compliance.

By the end, you’ll have an automated process that knows when to think like a machine and when to defer to human judgment.

Stop following a queue. Start letting logic lead.

Section 1: The Problem with Traditional Approvals

Traditional approval chains are a tragic remix of the same inefficiency: someone submits a form, emails fly, spreadsheets drift out of sync, and between forwarding loops, nobody remembers which version was final. Each participant adds delay, not value. The process doesn’t manage the work—it manufactures latency.

Typical Power Automate approval flows try to solve this, but they stall once you introduce nuance. A single approval path works fine if you only need one “Yes” or “No.” The moment you add management layers, spending thresholds, or specialized rules, the design begins to splinter. You end up nesting conditions like Russian dolls—inelegant, fragile, and impossible to debug six months later. One broken connector, and the entire system silently fails.

Humans become the bottlenecks—or to be brutally accurate, latency nodes. Every message they receive becomes another asynchronous round trip. Email as an approval mechanism is like using carrier pigeons in a fiber-optic world. It technically works, but it shouldn’t.

Enter Microsoft Copilot Studio. This is not just an incremental version of Power Automate. It introduces Agent Flows—approval systems powered by AI, yet fully integrated into your organization’s data sources, roles, and logic structures. It bridges deterministic policy enforcement with adaptive decision-making. The brilliance lies in how it separates stages: automated where you want speed, human where you still require validation.

Think of it as hierarchy re-engineered. The AI stage evaluates fixed rules—amount limits, category types, date ranges—with clinical efficiency. Then, if a decision teeters on ambiguity, the process escalates to human oversight without forcing every trivial case to queue up.

This alone eliminates exponential delay. Instead of ten people performing serial reviews, AI handles eighty percent instantly, routing only outliers. And yes, Copilot Studio tracks everything through its Dataverse backbone, producing verifiable logs without your team needing to dig through mailbox archives.

Previous workflows were built for humans. Agent Flows are built around them—keeping people in the loop only when interpretation, not repetition, is required. Once you see how this architecture functions, traditional approvals will feel primitive, like balancing checkbooks by candlelight.

The stakes are simple: compliance, consistency, and scale. Modern operations drown without automated validation, and AI-assisted logic is now the baseline for reliability. When you migrate from static flows to conditional, auditable Agent Flows, you stop managing approvals reactively and start treating them as living systems. The difference is not just speed—it’s structural sanity.

Section 2: Building the AI Stage — Teaching the First Approver

Now comes the interesting part—training your first digital bureaucrat. The AI stage is the logical gatekeeper of your approval process. Its job is not to “think” like a human but to perform structured reasoning at superhuman consistency. It reads instructions, checks them against inputs, and outputs one of two verdicts: Approved or Rejected. No politics. No coffee breaks.

You begin by defining a new Agent Flow. At creation, the AI stage sits front and center like an empty exam paper waiting for its question key. The trigger usually comes from Dataverse—a record added or modified in your claims or expense table. Once a claim is created, this stage activates, evaluates the data, and decides accordingly.

Inside the stage, the most important field is the Instruction Prompt—the brain of the operation. This is where you describe the approval logic in plain but rigorous language. Write it as if you’re instructing a lawyer who never improvises. For example: “Approve the claim if the amount is less than 500, the description supports physical or mental health, and the purchase date is within 30 days of submission. Reject if any rule fails.” That’s it—binary clarity.
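To see why "binary clarity" matters, it helps to state the prompt as code. Here is a minimal Python sketch of the same policy, assuming illustrative field names (amount, details, purchase_date) and a hypothetical keyword list for the health check—the real evaluation happens inside the AI stage, not in code, but every rule the prompt states should be this unambiguous:

```python
from datetime import date, timedelta

# Hypothetical keyword list standing in for "supports physical or
# mental health"; the AI stage interprets this semantically instead.
HEALTH_KEYWORDS = {"fitness", "gym", "yoga", "therapy", "wellness"}

def evaluate_claim(amount: float, details: str,
                   purchase_date: date, submitted_on: date) -> str:
    """Return 'Approved' only if every policy rule passes; else 'Rejected'."""
    within_limit = amount < 500
    supports_health = any(k in details.lower() for k in HEALTH_KEYWORDS)
    recent = (purchase_date <= submitted_on
              and (submitted_on - purchase_date) <= timedelta(days=30))
    return "Approved" if (within_limit and supports_health and recent) else "Rejected"
```

If you cannot express a rule as cleanly as this, the AI cannot enforce it deterministically either—that is the litmus test for your instruction prompt.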

Next, define your dynamic inputs. Think of them as variables feeding real data into your logic. You’ll most likely include an amount, a description, and a purchase date. Each is added as “content”—text or number inputs that the AI will parse when making a decision. Copilot Studio classes these inputs by type: Text, Number, or Image/Document. Choose correctly, because mixing types—say, a number stored as text—confuses both humans and machines.

Beneath each input, provide sample data for testing. This acts as scaffolding while you calibrate logic. For instance, enter an example claim like “Fitness yoga mat,” amount “300,” purchase date “August 22.” When you hit “Test,” Copilot Studio runs the reasoning model—GPT-based under the hood—and outputs a decision plus an explanation. If the outcome matches your expectation, good; if not, your instructions lack determinism.

And that’s where most beginners mess up. Broad or ambiguous phrasing—“reasonable expense,” “recent purchase”—is AI poison. The system can’t read corporate mood; it needs mathematical definitions. So, refine your instructions relentlessly until the test results are consistent. Every time you reword, you’re tuning the judgment engine for reproducibility. The pattern you want is identical input producing identical output every time—machine logic, not office gossip.

Once your prompt yields reliable decisions across multiple test cases, you’ve effectively trained your AI approver. Now, inject the real dynamic data using tokens pulled from Dataverse: claim amount, details, purchase or submission date. These replace your test values during live runs. From that point on, the AI evaluates each claim in context, returning an approval verdict the moment new data appears.

Before declaring victory, run iterative tests. Change the amount to 600—does it reject? Change the date to three months ago—does it flag as invalid? Add a nonsensical claim like “iPad purchase”—does it detect the mismatch? Each pass should demonstrate consistent cause and effect. If responses fluctuate, the issue is your prompt clarity, not the AI model.

When consistent patterns emerge, establish a naming convention for your content variables—prefixes like num for numeric values (numClaimAmount) and str for text (strClaimDetails). This tiny discipline prevents chaos when you expand later into multi-branch logic.

At this stage, your automated approver can now make independent decisions based on policy without supervision. But automation isn’t dictatorship; it’s delegation. The machine handles speed and precision, not judgment. For borderline scenarios, you’ll need escalation—a human in the loop to review what the AI declines or deems uncertain. That’s where the next stage comes into play, and that’s how authority gets shared between silicon and staff.

Section 3: Adding Human Oversight — Multi-Stage and Conditional Logic

Once your AI stage is confidently judging claims, you can add what bureaucracies ironically call “the human touch.” In Copilot Studio, that means a manual stage—a checkpoint where a real person gets to exercise discretion. Think of it as your safety valve: everything predictable flows past automatically; anything nuanced lands on a manager’s desk.

You start in the approval designer by adding a Manual Stage right after the AI one. This creates a second gate in your Agent Flow. The handoff is conditional—only if the AI’s verdict equals “Approved” do we proceed to the human stage. If the AI says “Rejected,” the process ends right there. Why waste time asking managers to confirm what policy already disqualified? Efficiency is selective attention dressed as automation.

Let’s set up an example scenario. Suppose an employee submits an expense claim for a gym membership. The AI approves it because all criteria fit the policy. Now, that decision triggers a manager-level manual stage called “Manager Approval.” Under the hood, you configure this stage with a title like Claim Approval Request from [Claim Submitter]—those variables pull data dynamically from Dataverse. The Assign To field is where you define who receives the task, typically the claimant’s manager.

To automate this routing, Copilot Studio lets you integrate the Office 365 Users connector. First, grab the user record of whoever created the claim—using the Get Row by ID action on the user table in Dataverse. That retrieves the submitter’s information. Next, the Get Manager action fetches that user’s manager via their primary email. Pass that result back into your manual stage as a variable like strManagerEmail. Result: no hardcoding, no stale hierarchies. Managers change, hierarchy refreshes itself.
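The lookup chain above can be sketched as a small function. This is a hedged illustration, not real SDK code: get_row_by_id and get_manager stand in for the Dataverse “Get Row by ID” and Office 365 Users “Get Manager” actions, and the dictionary keys are assumed field names:

```python
def resolve_approver(claim: dict, get_row_by_id, get_manager) -> str:
    """Resolve the submitter's manager; result feeds strManagerEmail."""
    # Step 1: fetch the submitter's user record from the Dataverse user table.
    submitter = get_row_by_id("users", claim["created_by_id"])
    # Step 2: look up that user's manager via their primary email.
    manager = get_manager(submitter["primary_email"])
    return manager["email"]
```

Because the manager is resolved at run time, reorganizations never require editing the flow—the same property the connector chain gives you.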

Now comes conditional branching—the real reason multi-stage approvals exist. You can introduce logic directly inside the Agent Flow: “If claim amount is greater than or equal to 150, then add an additional stage for admin approval. Otherwise, end the process.” This turns a simple two-step routine into a multi-tiered decision tree that adapts to context. Low-value claims get one human. Higher-value or risky claims escalate for extra eyes.

In the designer, insert a Condition action before closing the flow. Reference your numeric variable—perhaps numClaimAmount. Choose the operator “greater than or equal to” and set the threshold, such as 150. In the “If true” branch, add another Manual Stage named “Admin Approval.” Assign it to a fixed approver like the compliance administrator. In the “If false” branch, end as approved. The effect? The workflow behaves like a rational organization instead of a universal bottleneck.
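The branching above reduces to one routing decision. A minimal Python sketch, assuming the stage names and the 150 threshold from the walkthrough (the function itself is hypothetical, not a Copilot Studio API):

```python
ADMIN_THRESHOLD = 150  # claims at or above this amount escalate

def route_after_ai(ai_verdict: str, claim_amount: float) -> list[str]:
    """Return the ordered list of human stages a claim still must pass."""
    if ai_verdict != "Approved":
        return []                        # AI rejection ends the flow immediately
    stages = ["Manager Approval"]        # every AI-approved claim gets one human
    if claim_amount >= ADMIN_THRESHOLD:
        stages.append("Admin Approval")  # high-value claims get extra eyes
    return stages
```

Note the asymmetry: rejection short-circuits with no human involvement at all, which is exactly why the design saves manager time.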

A bit of practical humor applies here: yes, delegation still requires trust—not wishful routing. Don’t assign broad security roles that can approve everything. Restrict permissions to relevant users or groups through Dataverse role-based access control. Otherwise, you’ve automated anarchy.

Testing this structure reveals how smoothly each piece fits. When a claim under $150 passes AI review, the flow ends after manager approval. When over $150, the system politely queues it for admin review before finalizing. Each path leaves an audit trail—the AI reasoning log, timestamped approvals, and recorded outcomes stored in Dataverse. No one can later claim, “I never saw it.” The database says otherwise.

As you refine, keep naming conventions disciplined. Prefix human stages clearly—manApproved, adminApproved—so you can trace branches visually. The flowchart should read like logic, not mystery art.

Once satisfied, publish and test with real scenarios. Watch the chain unfold: AI evaluates instantly, manager gets a targeted approval task, admin joins only when conditions warrant. The result feels less like “automation doing everything” and more like a transparent hierarchy that handles itself.

This layered orchestration also exposes a new frontier—document validation. Because sometimes a form field lies, but the receipt doesn’t. The next step is to make the AI examine the physical evidence before it ever reaches your inbox. That’s where receipt validation enters, turning bureaucracy into verifiable science.

Section 4: Dynamic Inputs and Document Validation

Let’s talk about receipts—the tiny, crumpled pieces of paper that mysteriously become critical evidence once someone wants reimbursement. In a manual approval process, humans verify them by eyeballing: “Yes, that looks like a yoga mat, not a yacht.” In an automated one, we make the AI perform the same check—without judgment fatigue.

In Copilot Studio’s Agent Flows, document validation comes to life through dynamic content inputs—specifically the ability to pass an image or file into your approval logic. This adds a third sense to your AI approver: sight. Instead of relying solely on text fields, the model reads the document and compares what’s written against what was uploaded.

Here’s the structure. Inside your AI stage, you add a new input type called image or document. Label it clearly—something like docReceiptFile. Upload a sample receipt for testing. This doesn’t go live; it just trains the logic layout. Now adjust your instruction prompt to include document cross-validation. You extend your previous logic: “Approve the claim if all the following are true: amount < 500, claim details support physical or mental health, purchase date within 30 days, and the receipt document matches the declared item, amount, and date.”

This fourth condition transforms your AI approver from policy enforcer into forensic accountant. When the test runs, the model extracts readable text from the document and verifies consistency with your structured inputs. If the receipt lists “Fitness Mat $300 August 22,” and your claim says anything else, the verdict flips to Rejected.
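The consistency check the model performs can be approximated in code. This sketch assumes the receipt text has already been extracted (in Copilot Studio the model reads the document itself) and uses an illustrative matching rule—the real model reasons semantically, not with regexes:

```python
import re

def receipt_matches(receipt_text: str, amount: float, details: str) -> bool:
    """True if the declared amount and item both appear on the receipt."""
    # Collect every dollar figure on the receipt, e.g. "$300" or "300.00".
    totals = [float(m) for m in re.findall(r"\$?(\d+(?:\.\d{2})?)", receipt_text)]
    amount_ok = any(abs(t - amount) < 0.01 for t in totals)
    # Require at least one significant word of the claim description to appear.
    words = [w for w in details.lower().split() if len(w) > 3]
    item_ok = any(w in receipt_text.lower() for w in words)
    return amount_ok and item_ok
```

Even this crude version catches the $305-versus-$300 discrepancy from the reasoning-log example; the AI stage does the same cross-check with far more tolerance for messy receipt layouts.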

Now, for implementation. Before the approval runs, you’ll need to fetch that document. Use the Download a File or Image action from Dataverse. Target the same table the claim record resides in—likely “Claims.” Under file column, choose the attachment field, often named something like “Receipt.” Then, specify the row ID dynamically from the trigger—the unique claim identifier.

This action returns file metadata and content. But here’s the subtle trap: the “content” output needs slight surgery for the AI to interpret correctly. When mapping the file content into your docReceiptFile input, open the expression editor and append ?['$content'] to the body reference. That last tweak delivers the actual file bytes rather than the metadata wrapper. Skip it, and your AI will enthusiastically approve blank receipts.

Let’s address synchronization—the Achilles’ heel of document workflows. Most Dataverse tables don’t store attachments until after the record first saves. Meaning, if your flow triggers “on record creation,” it may fire before the file exists. Symptom: flow fails or the receipt content is null. The remedy is elegant: change the trigger condition to on item modification, but restrict it to fire only when the receipt column changes. That way, the moment someone uploads or replaces a file, validation kicks in.

To configure this, open trigger settings, choose “Modified Columns,” and paste the logical name of your receipt column—retrievable from Dataverse column detail under “Advanced Tools.” Once set, publish. Now your automation behaves politely—it waits until there’s actually something to validate.

When the claim updates with a new receipt, the flow restarts. It downloads the file, feeds it through the AI stage, and cross-checks all inputs. If discrepancies exist—amount mismatch, outdated date, inconsistent descriptions—the AI rejects with explanation included. The reasoning log might read: “Claimed amount $305 differs from document total $300.” That’s compliance without complaint.

The practical impact? Every submission becomes self-auditing. Employees can no longer slip through questionable receipts or vague descriptions. AI provides the judgment, Dataverse provides the evidence, and neither takes coffee breaks.

So no, document validation isn’t overkill—it’s immune memory for your workflows. Outliers get caught at the molecular level: the characters printed on actual receipts. And the human reviewer gets promoted from manual checker to exception handler. Congratulations, you’ve now automated honesty itself.

Section 5: Building Reliability — Testing, Versioning, and Publishing

Any automation can look perfect—until it runs. That’s why reliability is not a phase; it’s a discipline. Copilot Studio makes that discipline surprisingly civilized through versioning, testing, and publishing controls embedded directly into Agent Flows.

Start with iterative testing. Every approval flow is a miniature ecosystem; even one stray variable name can create chaos. So, treat testing as a scientific process: isolate changes, apply data, observe output. In the flow editor, Copilot Studio lets you simulate sample data without deploying production triggers. Feed variant scenarios—too-large amount, outdated date, irrelevant item names—and ensure each yields the correct verdict.

Then move to naming conventions. Boring? Absolutely. Essential? More than air. Prefix every variable systematically: str for strings, num for numerics, doc for files. Likewise, label actions explicitly—downloadReceiptFile, checkManagerApproval, updateClaimApproved. Because in three months, when someone asks why “Action 7” broke, you’ll appreciate that breadcrumbs are friendlier than archaeology.

Once individual tests succeed, expand to edge scenario validation. Pretend to be the incompetent user—upload no receipt, change a date to next year, type “five hundred” instead of “500.” Confirm the AI explains and rejects without crashing. These aren’t hypothetical screw-ups; they’re reality rehearsals. The more failure cases you neutralize upfront, the fewer 2 A.M. Teams messages you’ll get later saying “the flow stopped again.”

Publishing introduces governance. Each time you save a major change, Copilot Studio quietly creates a new version. You can revert anytime—no “overwrite anxiety.” Use this to keep experimental logic sandboxes apart from production. One flow named ClaimApproval_DEV, another ClaimApproval_PROD. Test in dev, publish only when stable. Remember, reliability isn’t just about correctness—it’s about control under change.

After publishing, leverage the Activity Logs. They’re not decorative; they’re legal-grade records of AI and human interactions—timestamps, approver identities, reasoning summaries. Review these logs periodically. Patterns reveal themselves: which rules trigger most rejections, which managers overrule the AI, how often compliance steps engage. Those insights refine both policy and design.

Another subtle best practice: automate status updates back into Dataverse records at the end of each branch. The “update a row” action should set a clear field like claimStatus = Approved or Rejected. This visible endpoint reinforces trust. Users see decisions inside their own table views—no separate tracking spreadsheet, no phantom approvals.

Once your system passes all tests, hit Publish with quiet confidence. Unlike earlier generations of Power Automate flows, Agent Flows retain contextual metadata—the AI decision schema, dynamic content bindings, and version tags. The result is not just a running process but a reproducible artifact: auditable automation.

The final reliability check is mental: resist over-engineering. Sometimes a failed flow reveals overcomplex rules, not technical bugs. Streamline logic where possible—fewer stages, clearer instructions, minimal ambiguity. Automation’s enemy isn’t randomness; it’s unnecessary cleverness.

When you finish this stage, your approval system achieves operational maturity: tested, versioned, rollback-friendly, and verifiable. Every outcome, from AI judgment to admin sign‑off, leaves a digital footprint that stands up to scrutiny.

That’s reliability quantified—not a vibe, not luck. Just structured reasoning, recorded faithfully by the tools you built to think while you sleep.

Section 6: Results and Real-World Payoff

What happens when you hand bureaucracy to mathematics? The approvals stop waiting. An AI‑driven, multi‑stage Agent Flow cuts turnaround from days to minutes—because decisions no longer depend on calendar availability. Every claim that meets objective policy is approved instantly. Every edge case lands neatly in a manager’s queue with the context already attached. Humans make value judgments, not clerical confirmations.

The impact appears first as time compression. A reimbursement that once required four emails and two reminders now resolves before lunch. Departments experience shorter processing cycles, less idle status, and perfectly logged reasoning trails. The approval mechanism ceases to be an interruption; it becomes infrastructure.

Then comes error reduction. By enforcing deterministic rules through AI, you remove subjective drift—the bane of manual approvals. No more “I thought this counted as health and wellness.” The AI interprets criteria the same way every time. Consistency scales better than compassion in administrative logic. Managers deal only with anomalies, and even those come with a full audit trail: data inputs, decision history, model explanation. Audits transform from panic exercises into polite verifications.

On an organizational level, this isn’t just faster—it’s safer. Compliance doesn’t rely on memory or mood; it’s embedded. The Dataverse logs show who approved, when, and why. That transparency discourages shortcut culture because every decision now lives in searchable permanence.

And philosophically, the workflow flips from human‑first with AI assistance to AI‑first with human correction. Practical governance, not science fiction. AI handles rule enforcement, humans handle judgment. The hybrid produces fairer, faster, and ultimately more trustworthy systems.

Imagine explaining that to an auditor—the kind of smile that follows is worth the automation alone.

Conclusion — Key Takeaway + CTA

So there it is: Microsoft Copilot Studio isn’t just another automation tool. It’s the place where your organization’s indecision finally retires. You built an approval engine that can think in conditions, escalate responsibly, validate receipts against reality, and document every move it makes. That’s not a flow—it’s a governance model.

If you remember nothing else, remember this: AI should make decisions predictable and humans exceptional. Stop routing everything through people out of habit. Let logic handle the routine so managers can handle exceptions.

Now, if this walkthrough saved you from another week of “pending manager review,” consider subscribing. The next tutorial dives deeper into advanced Copilot Studio automations—where these Agent Flows connect with Power Platform analytics and adaptive policies.

Lock in your upgrade path: subscribe, turn on alerts, and let new episodes deploy automatically. No manual checks, no missed releases—just continuous delivery of useful knowledge. Proceed.



This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit m365.show/subscribe