Building a deterministic AI agent with Copilot Studio and KBAI

Deterministic AI agents are gaining traction for their ability to provide consistent, reliable, and explainable decision-making. Unlike probabilistic models that rely on statistical methods and can produce varying outputs, deterministic agents follow predefined rules to ensure that the same input always leads to the same output. This makes them ideal for applications where accuracy, predictability, and transparency are critical—such as compliance checks, legal analysis, and customer support automation.

In this article, we’ll explore how to build a deterministic AI agent using Microsoft Copilot Studio as the conversational platform and KBAI (Knowledge-Based AI) as the reasoning engine. We’ll also provide a detailed example of how this agent can perform structured document analysis—specifically, analyzing an employment contract to determine its validity—by integrating KBAI’s API with Copilot Studio and a Large Language Model (LLM) for fact extraction.

By the end, you’ll understand how to create, test, and deploy a hybrid AI system that combines the strengths of deterministic reasoning with the flexibility of LLMs for natural language understanding.

What is a Deterministic AI Agent?

A deterministic AI agent uses a knowledge base of rules to reason and make decisions. These rules are explicitly defined, meaning the agent’s behavior is fully predictable and traceable. For example, in a customer support scenario, a deterministic agent might follow a set of rules to determine whether a user’s issue can be resolved automatically or needs escalation to a human agent. Because the rules are fixed, the agent will always make the same decision given the same set of inputs.
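To make the contrast concrete, the escalation decision above could be captured in a few fixed rules. The Python sketch below is purely illustrative (the ticket fields are invented); it shows how a deterministic rule set always yields the same decision for the same input:

```python
# Hypothetical escalation rules for the customer-support example.
# The ticket fields (severity, reopened_count) are invented for illustration.
def needs_escalation(ticket: dict) -> bool:
    """Deterministic: fixed rules, no statistics or sampling involved."""
    if ticket["severity"] == "critical":
        return True
    if ticket["reopened_count"] >= 2:
        return True
    return False

ticket = {"severity": "low", "reopened_count": 3}
# The same ticket always produces the same decision.
assert needs_escalation(ticket) == needs_escalation(ticket)
```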

This contrasts with probabilistic AI models, such as LLMs, which generate responses based on patterns in data and can produce different outputs for the same input. While LLMs are powerful for generating natural language, their variability can be a drawback in scenarios requiring strict accuracy and consistency.

Introducing Copilot Studio and KBAI

To build our deterministic AI agent, we’ll use two key tools:

  • Microsoft Copilot Studio (https://copilotstudio.microsoft.com): A platform for designing conversational AI agents. It provides tools for creating dialogues, managing user interactions, and integrating with external services. In this setup, Copilot Studio will serve as the interface through which users interact with the AI agent.

Copilot Studio user interface

  • KBAI (Knowledge-Based AI) (https://app.usekbai.com): A deterministic reasoning engine that uses a knowledge base of rules to make fact-based decisions. KBAI is designed to work alongside LLMs and other AI systems, providing accurate and explainable reasoning. It allows non-technical users to create and manage rules, ensuring that the AI’s decision-making process is transparent and controllable.

KBAI user interface

KBAI offers several advantages:

  • Accurate AI outputs: Provides precise, fact-checked responses in real-time.
  • Seamless integration: Works with JavaScript, Python, VS Code, and cloud platforms without disrupting existing workflows.
  • No vendor lock-in: Users can download the knowledge base and code for offline use or further customization.
  • Reduced debugging time: Cuts iteration time by up to 30%, enabling faster delivery of reliable AI features.

Step-by-Step Guide to Building a Deterministic AI Agent

Building a deterministic AI agent involves three main steps:

  1. Creating the knowledge base in KBAI.
  2. Testing and modifying the knowledge base.
  3. Deploying and using the knowledge base via Copilot Studio, with an example of structured document analysis.

You’ll also see how KBAI separates the work across these steps: a domain expert can create and test the knowledge base independently of building the agent in Copilot Studio.

Let’s detail each step.

Step 1: Create the Knowledge Base in KBAI

The first step is to create a knowledge base—a collection of rules that define how the AI agent should reason and make decisions.

How to Create the Knowledge Base:

  • Access the Knowledge Base Editor: From the KBAI dashboard, click "Create New."
  • Name your knowledge base: Choose a descriptive name, such as "Contract Validity Checker."
  • Add initial facts and rules: Click "Modify" and use the editor to type or paste your rules. KBAI learns best from formalized rules and workflows, such as company policies or legal criteria.
  • Save your changes: Click "Save changes" to store the initial version of your knowledge base. Processing may take a few minutes—avoid refreshing the page until it’s complete.

Example: For analyzing an employment contract, your rules might include:

  • "If the contract has a non-compete clause and is signed by both parties, then it is valid."

Step 2: Test and Modify the Knowledge Base

Once the initial knowledge base is created, it’s time to test and refine the rules to ensure they work as expected.

How to Test and Modify:

  • Evaluate rules: Next to each rule, click the "Evaluate" button to see how it executes and what parameters it requires. This helps you understand the rule’s behavior and identify any missing facts or ambiguities.

Testing reasoning rule evaluation in KBAI

  • Modify rules: If adjustments are needed, click "Modify" and add or remove rules as necessary. KBAI’s interface allows you to iteratively refine the rules without needing extensive test cases.

Why this matters: KBAI’s deterministic nature ensures that once the rules are correctly defined, the agent will always produce the same output for the same input. This reduces the need for complex testing frameworks, as you can directly validate the rules in the interface.

Step 3: Deploy and Use the Knowledge Base with Structured Document Analysis

After testing, it’s time to deploy the knowledge base and integrate it with Copilot Studio. Rather than just calling the KBAI API generically, we’ll demonstrate how to use it in a real-world scenario: analyzing an employment contract to determine its validity.

Deployment Options:

KBAI offers two ways to deploy your knowledge base:

  • API Mode: Click "Deploy" to generate an inference API for real-time use. Save the API endpoint URL (e.g., https://kbai-api.example.com/inference) for integration with Copilot Studio.
  • Code Export (Enterprise Only): Click "Export Code" to download JavaScript function definitions, which can be used directly or translated into other languages like Python or Java.

Structured Document Analysis Example: Employment Contract Validity

Let’s walk through how the AI agent can analyze an employment contract using the KBAI API, Copilot Studio, and an LLM.

Scenario

A user provides an employment contract via Copilot Studio, and the agent needs to determine if it is valid based on these rules:

  • "If the contract has a non-compete clause and is signed by both parties, then it is valid."

KBAI requires specific facts to apply this rule:

  • hasNonCompeteClause: Whether the contract includes a non-compete clause.
  • isSignedByBothParties: Whether the contract is signed by both parties.

Since these facts aren’t initially known, the agent uses an LLM to extract them from the document and iteratively calls the KBAI API until all facts are gathered.
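In code terms, the rule itself reduces to a pure function of these two facts. The sketch below is illustrative only; in practice the rule lives in the KBAI knowledge base, not in application code:

```python
# The contract-validity rule as a pure function of the two facts KBAI
# asks for (illustrative; the real rule is defined in the knowledge base).
def contract_is_valid(has_non_compete_clause: bool,
                      is_signed_by_both_parties: bool) -> bool:
    return has_non_compete_clause and is_signed_by_both_parties
```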

Implementation in Copilot Studio

In Copilot Studio, this process is managed with a Topic (for user interaction) and a Flow (for the reasoning loop).

Step-by-Step Process:
  1. User Input: The user pastes the contract text into Copilot Studio. This is handled by a Copilot Topic that then passes the text to a Flow, which implements the reasoning loop and KBAI API connection.
  2. Initial API Call to KBAI: The agent sends an initial request to the KBAI API with the contract text:
    curl -X POST https://kbai-api.example.com/inference \
         -H "Content-Type: application/json" \
         -H "Authorization: Bearer <your-token>" \
         -d '{"facts": {"contractText": "Employee agrees to a non-compete clause... Signed by John Doe and Jane Smith"}}'
    
    KBAI responds:
    {
      "stopReason": "FACT_NEEDED",
      "facts": {"contractText": "..."},
      "missingFact": "hasNonCompeteClause",
      "log": "Rule evaluation stopped: Missing fact 'hasNonCompeteClause'."
    }
    
  3. Query the LLM: The agent sends a query to the LLM:
    • "Does the contract include a non-compete clause?"
      The LLM analyzes the text and responds: "Yes."
  4. Update Facts and Call KBAI Again: The agent updates the facts and sends a new request:
    curl -X POST https://kbai-api.example.com/inference \
         -H "Content-Type: application/json" \
         -H "Authorization: Bearer <your-token>" \
         -d '{"facts": {"contractText": "...", "hasNonCompeteClause": true}}'
    
    KBAI responds:
    {
      "stopReason": "FACT_NEEDED",
      "facts": {"contractText": "...", "hasNonCompeteClause": true},
      "missingFact": "isSignedByBothParties",
      "log": "Rule evaluation stopped: Missing fact 'isSignedByBothParties'."
    }
    
  5. Query the LLM Again: The agent asks:
    • "Is the contract signed by both parties?"
      The LLM responds: "Yes."
  6. Final API Call to KBAI: The agent sends the updated facts:
    curl -X POST https://kbai-api.example.com/inference \
         -H "Content-Type: application/json" \
         -H "Authorization: Bearer <your-token>" \
         -d '{"facts": {"contractText": "...", "hasNonCompeteClause": true, "isSignedByBothParties": true}}'
    
    KBAI responds:
    {
      "stopReason": "COMPLETED",
      "facts": {"contractText": "...", "hasNonCompeteClause": true, "isSignedByBothParties": true, "isValid": true},
      "log": "Rule applied: Contract is valid."
    }
    
  7. Present the Result: The agent tells the user: "The contract is valid because it includes a non-compete clause and is signed by both parties."

Configuring the Topic in Copilot Studio

The following Topic collects the contract text and triggers the Flow.

kind: AdaptiveDialog
beginDialog:
  actions:
    - kind: Question
      prompt: "Paste the contract text"
      variable: "Topic.ContractText"
    - kind: InvokeFlowAction
      input:
        binding:
          text: "=Topic.ContractText"
      output:
        binding:
          result: "Topic.Result"
      flowId: "contract-analysis-flow"
    - kind: SendActivity
      activity: "The contract is {Topic.Result}."

Topic configuration in Copilot Studio

Configuring the Reasoning Loop Flow in Copilot Studio

The Flow implements the reasoning loop:

  1. Initialize Facts = {"contractText": Topic.ContractText}.
  2. Loop until KBAI returns COMPLETED:
    • Send Facts to KBAI API.
    • If FACT_NEEDED, query the LLM for the missing fact (e.g., "Does the contract include a non-compete clause?").
    • Update Facts with the LLM's response.
  3. Return the final Facts, along with the intermediate reasoning log, to the calling Topic.
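The loop above can be sketched in Python. Everything here is a stand-in: `fake_kbai` mimics the KBAI inference API (its response shape follows the Parse JSON schema used in the Flow), and a trivial lambda plays the role of the LLM prompt action:

```python
# Illustrative sketch of the Flow's reasoning loop; call_kbai and ask_llm
# are injected so the loop can run against stubs instead of live services.
def analyze(contract_text, call_kbai, ask_llm):
    facts = {"contractText": contract_text}
    while True:
        result = call_kbai(facts)
        facts = result["facts"]
        if result["stopReason"] == "COMPLETED":
            return facts
        # FACT_NEEDED: ask the LLM for the fact inference stopped on.
        missing = result["log"][-1]["fact"]
        facts[missing] = ask_llm(missing, contract_text)

# A stub KBAI that needs two facts before the rule can fire.
def fake_kbai(facts):
    for fact in ("hasNonCompeteClause", "isSignedByBothParties"):
        if fact not in facts:
            return {"stopReason": "FACT_NEEDED", "facts": facts,
                    "log": [{"code": "FACT_NEEDED", "message": "", "fact": fact}]}
    return {"stopReason": "COMPLETED",
            "facts": {**facts, "isValid": True}, "log": []}

result = analyze("Employee agrees to a non-compete clause...",
                 fake_kbai, lambda fact, doc: True)  # LLM stub answers True
```

With the stubs above, the loop runs three iterations (two `FACT_NEEDED`, one `COMPLETED`) and returns the full fact set including `isValid`.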

To implement such a flow in Copilot Studio:

  1. Open Flows - New Agent Flow in Copilot Studio

  2. Add trigger "When an agent calls the flow"

  3. Initialize variables:

    • Facts - Object type. Holds the currently inferred facts; initialize it with the contract text received from the Topic.
    • State - String type, empty value. Keeps the last inference state returned by KBAI, used to direct the inference loop.
    • MissingFact - String type, empty value. Keeps the name of a missing fact that needs to be retrieved by the LLM.
    • NewFacts - Object type, empty value. A temporary variable that stores facts returned by KBAI before they are assigned to the Facts variable.
    • JSONStartPosition and JSONEndPosition - Integer type, default value 0. Markers for the location of the JSON in an LLM response.

  4. Add a Do until loop that repeats until the State variable is equal to COMPLETED.

  5. Inside the loop, call KBAI using an HTTP action. Use the published URL of the knowledge base as the URI, select the POST method, and set Content-Type: application/json. Set the body to an expression that forms the request to KBAI, specifying the target fact to infer and any known facts:

    json(concat('{"fact":"contract.isValid","facts":', string(variables('Facts')), '}'))
    
  6. Use a Parse JSON action to parse the HTTP response body with the following schema:

    {
      "type": "object",
      "properties": {
          "stopReason": {
              "type": "string"
          },
          "facts": {
              "type": "object",
              "properties": {}
          },
          "log": {
              "type": "array",
              "items": {
                  "type": "object",
                  "properties": {
                      "code": {
                          "type": "string"
                      },
                      "message": {
                          "type": "string"
                      },
                      "fact": {
                          "type": "string"
                      },
                      "dependencies": {
                          "type": "object",
                          "properties": {
                              "type": {
                                  "type": "string"
                              },
                              "properties": {
                                  "type": "object"
                              }
                          }
                      }
                  },
                  "required": [
                      "code",
                      "message",
                      "fact"
                  ]
              }
          }
      }
    }
    

  7. Set the State variable to the stopReason from the parsed response body (body('Parse_JSON')?['stopReason']) and Facts to the facts from it (body('Parse_JSON')?['facts']).

  8. Add a Condition to check whether there is a missing fact (State is equal to FACT_NEEDED) and, if so, run an LLM prompt to determine the fact from the document text.

  9. Inside the condition branch, add a Run a prompt action.

    The action should use a prompt such as:

    Your job is to process documents and extract facts from them, based on the text and the best of your understanding. It's ok to derive a fact when straightforward (e.g. to calculate something) as long as it doesn't involve guessing.
    
    Extract the fact called "Fact Name" from the document:
    
    Document Text
    

    Here, Fact Name and Document Text are prompt variables. Set Fact Name to the fact on which inference stopped (last(body('Parse_JSON')?['log'])['fact']) and Document Text to the text of the document that the Topic passed to the Flow when it was triggered (triggerBody()?['text']).

  10. Use a Set variable action to produce a Copilot Studio-friendly fact name by replacing dots with underscores (replace(last(body('Parse_JSON')?['log'])['fact'], '.', '_')), and store it in the MissingFact variable.

  11. KBAI also specifies the data type it expects for a missing fact. Create another Run a prompt action to convert the LLM response to the target data type. This two-step approach yields much more reliable document querying and conversion than asking a model for a JSON-formatted answer in a single prompt.

    The prompt for this step may need to be fairly detailed about the expected result, especially if there are ambiguities in handling empty/null/unknown values when querying the document. For example:

    Convert the fact called 'Fact Name' from a string value 'Text input' to a JSON object containing a single key `value` with the fact converted according to the following JSON schema:
    
    JSON schema
    
    If the fact defines whether something is present in the document, has a boolean type, and the requested information wasn't found or extracted from the document, the returned `value` should be set to `false`. Otherwise, set `value` to `true` if the document contains the information specified by the fact.
    
    The fact doesn't have to be stated explicitly. It's ok to derive a fact when straightforward (e.g. to calculate something) as long as it doesn't involve guessing.
    

    It's usually possible to run this second conversion prompt with a smaller model, such as GPT-4o Mini.

    The Text Input variable in the example above is the original prompt response (outputs('Run_a_prompt')?['body/responsev2/predictionOutput/text']), Fact Name is the missing fact name (variables('MissingFact')), and JSON Schema is the schema KBAI requested for the fact (last(body('Parse_JSON')?['log'])['schema']).

  12. Add a Set variable action to set JSONStartPosition to the beginning of the JSON in the second LLM response (add(indexOf(outputs('Convert_to_the_target_data_type')?['body/responsev2/predictionOutput/text'], '```json'), 8)).

  13. Similarly, set JSONEndPosition to its end (lastIndexOf(outputs('Convert_to_the_target_data_type')?['body/responsev2/predictionOutput/text'], '```')).

  14. Add a Parse JSON action to convert the extracted JSON fragment (substring(outputs('Convert_to_the_target_data_type')?['body/responsev2/predictionOutput/text'], variables('JSONStartPosition'), sub(variables('JSONEndPosition'), variables('JSONStartPosition')))) to an object.

  15. Set two more variables. First, set NewFacts to the existing Facts with the newly extracted fact added (setProperty(variables('Facts'), variables('MissingFact'), body('Parse_LLM_Extracted_JSON')?['value'])).

  16. Then copy NewFacts to Facts (at the time of writing, this has to be done as a separate step).

  17. As the last step, executed after the loop, respond to the agent with the inferred facts (string(variables('Facts'))).
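The JSON-extraction and fact-merge steps above can be sketched in Python. The sample LLM reply is invented for illustration; the real values come from the Run a prompt action and the KBAI response:

```python
# Python equivalent of the Flow's JSON-extraction and fact-merge steps.
import json

# A hypothetical LLM reply wrapping its answer in ```json fences.
reply = 'Here is the converted fact:\n```json\n{"value": true}\n```'

# Locate the JSON between the fences (mirrors the indexOf/lastIndexOf/
# substring expressions in the Flow).
start = reply.index("```json") + 8   # skip "```json" plus the newline
end = reply.rindex("```")            # position of the closing fence
extracted = json.loads(reply[start:end])

# Merge the extracted value into the known facts (mirrors setProperty).
facts = {"contractText": "...", "hasNonCompeteClause": True}
facts["isSignedByBothParties"] = extracted["value"]
```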

Benefits of This Approach

  • Accuracy: KBAI ensures consistent reasoning based on explicit rules.
  • Flexibility: The LLM extracts facts from unstructured text, enabling the agent to handle diverse documents.
  • Transparency: The API responses include a reasoning log, making the decision process explainable.

Integrating KBAI with Copilot Studio and LLMs

In this example, KBAI’s API is called iteratively within a reasoning loop, with the LLM providing the facts KBAI needs to complete its analysis. This hybrid approach leverages:

  • KBAI for deterministic, rule-based reasoning.
  • LLM for natural language understanding and fact extraction.
  • Copilot Studio for user interaction and orchestration.

This makes the agent suitable for complex tasks like legal document analysis, compliance verification, and more.

Conclusion

By combining Copilot Studio’s conversational capabilities with KBAI’s deterministic reasoning and an LLM’s natural language processing, you can build powerful AI agents that perform structured document analysis with precision and transparency. The detailed example of analyzing an employment contract demonstrates how the KBAI API can be practically applied, ensuring that your AI system is reliable, explainable, and effective for critical applications.

Ready to get started?

Experience the power of knowledge-driven deterministic AI with KBAI