Published June 30, 2025, last updated July 5, 2025
Deterministic AI agents are gaining traction for their ability to provide consistent, reliable, and explainable decision-making. Unlike probabilistic models that rely on statistical methods and can produce varying outputs, deterministic agents follow predefined rules to ensure that the same input always leads to the same output. This makes them ideal for applications where accuracy, predictability, and transparency are critical—such as compliance checks, legal analysis, and customer support automation.
In this article, we'll explore how to build a deterministic AI agent using Microsoft Copilot Studio as the conversational platform and KBAI (Knowledge-Based AI) as the reasoning engine. We'll also provide a detailed example of how this agent can perform structured document analysis: analyzing an employment contract to determine its validity by integrating KBAI's API with Copilot Studio and a Large Language Model (LLM) for fact extraction.
By the end, you’ll understand how to create, test, and deploy a hybrid AI system that combines the strengths of deterministic reasoning with the flexibility of LLMs for natural language understanding.
A deterministic AI agent uses a knowledge base of rules to reason and make decisions. These rules are explicitly defined, meaning the agent’s behavior is fully predictable and traceable. For example, in a customer support scenario, a deterministic agent might follow a set of rules to determine whether a user’s issue can be resolved automatically or needs escalation to a human agent. Because the rules are fixed, the agent will always make the same decision given the same set of inputs.
This contrasts with probabilistic AI models, such as LLMs, which generate responses based on patterns in data and can produce different outputs for the same input. While LLMs are powerful for generating natural language, their variability can be a drawback in scenarios requiring strict accuracy and consistency.
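To make the contrast concrete, here is a minimal Python sketch of a deterministic escalation rule for the customer support scenario above; the field names and threshold are illustrative, not taken from any specific product:

```python
def needs_escalation(issue: dict) -> bool:
    """Deterministic rule: the same ticket always yields the same decision."""
    # Escalate high-severity issues, or issues that automation failed to
    # resolve after three attempts (illustrative threshold).
    return issue["severity"] == "high" or issue["auto_resolve_attempts"] >= 3

# Identical input, identical output, every time:
print(needs_escalation({"severity": "low", "auto_resolve_attempts": 3}))  # True
```

An LLM asked the same question twice may answer differently; this function cannot.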
To build our deterministic AI agent, we'll use two key tools: Microsoft Copilot Studio as the conversational platform and KBAI as the deterministic reasoning engine.
KBAI offers several advantages: its reasoning is fully deterministic and traceable, rules can be created and validated directly by domain experts, and the same input always yields the same output.
Building a deterministic AI agent involves three main steps: creating the knowledge base, testing and refining the rules, and deploying the knowledge base and integrating it with Copilot Studio.
You'll also see how KBAI allows these steps to be separated: a domain expert can create and test the knowledge base independently of building the agent in Copilot Studio.
Let’s detail each step.
The first step is to create a knowledge base—a collection of rules that define how the AI agent should reason and make decisions.
Example: For analyzing an employment contract, your rules might include: "If the contract includes a non-compete clause and is signed by both parties, then the contract is valid."
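KBAI's own rule syntax isn't shown here, but conceptually each rule is a pure function of its input facts. A sketch in Python, using the fact names that appear in the contract-analysis example later in this article:

```python
def is_valid(facts: dict) -> bool:
    # Rule: the contract is valid if it includes a non-compete clause
    # and is signed by both parties. Missing facts default to False here;
    # KBAI instead reports them as FACT_NEEDED.
    return bool(facts.get("hasNonCompeteClause")) and bool(facts.get("isSignedByBothParties"))
```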
Once the initial knowledge base is created, it’s time to test and refine the rules to ensure they work as expected.
Why this matters: KBAI’s deterministic nature ensures that once the rules are correctly defined, the agent will always produce the same output for the same input. This reduces the need for complex testing frameworks, as you can directly validate the rules in the interface.
After testing, it’s time to deploy the knowledge base and integrate it with Copilot Studio. Rather than just calling the KBAI API generically, we’ll demonstrate how to use it in a real-world scenario: analyzing an employment contract to determine its validity.
KBAI offers two ways to deploy your knowledge base. For this tutorial, we'll use the published API endpoint (e.g., https://kbai-api.example.com/inference) for integration with Copilot Studio.

Let's walk through how the AI agent can analyze an employment contract using the KBAI API, Copilot Studio, and an LLM.
A user provides an employment contract via Copilot Studio, and the agent needs to determine if it is valid based on this rule: the contract is valid if it includes a non-compete clause and is signed by both parties.
KBAI requires specific facts to apply this rule:
- `hasNonCompeteClause`: Whether the contract includes a non-compete clause.
- `isSignedByBothParties`: Whether the contract is signed by both parties.

Since these facts aren't initially known, the agent uses an LLM to extract them from the document and iteratively calls the KBAI API until all facts are gathered.
In Copilot Studio, this process is managed with a Topic (for user interaction) and a Flow (for the reasoning loop).
The first call sends only the contract text:
curl -X POST https://kbai-api.example.com/inference \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <your-token>" \
-d '{"facts": {"contractText": "Employee agrees to a non-compete clause... Signed by John Doe and Jane Smith"}}'
KBAI responds:
{
"stopReason": "FACT_NEEDED",
"facts": {"contractText": "..."},
"missingFact": "hasNonCompeteClause",
"log": "Rule evaluation stopped: Missing fact 'hasNonCompeteClause'."
}
The Flow adds the extracted fact (here, hasNonCompeteClause, determined by the LLM) and calls again:
curl -X POST https://kbai-api.example.com/inference \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <your-token>" \
-d '{"facts": {"contractText": "...", "hasNonCompeteClause": true}}'
KBAI responds:
{
"stopReason": "FACT_NEEDED",
"facts": {"contractText": "...", "hasNonCompeteClause": true},
"missingFact": "isSignedByBothParties",
"log": "Rule evaluation stopped: Missing fact 'isSignedByBothParties'."
}
With both facts now supplied, KBAI completes the reasoning:
curl -X POST https://kbai-api.example.com/inference \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <your-token>" \
-d '{"facts": {"contractText": "...", "hasNonCompeteClause": true, "isSignedByBothParties": true}}'
KBAI responds:
{
"stopReason": "COMPLETED",
"facts": {"contractText": "...", "hasNonCompeteClause": true, "isSignedByBothParties": true, "isValid": true},
"log": "Rule applied: Contract is valid."
}
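The three calls above can be generalized into a small driver loop. Here is a Python sketch; `call_kbai` and `ask_llm` are placeholder callables standing in for the HTTP action and the LLM prompt action, and the response fields match the example responses above:

```python
def run_inference_loop(contract_text, call_kbai, ask_llm):
    """Drive KBAI inference, filling in missing facts via an LLM.

    call_kbai(facts) -> a response dict shaped like the examples above.
    ask_llm(fact_name, document) -> the value of the requested fact.
    Both are placeholders for actions implemented in the Copilot Studio Flow.
    """
    facts = {"contractText": contract_text}
    while True:
        response = call_kbai(facts)
        if response["stopReason"] == "COMPLETED":
            return response["facts"]
        if response["stopReason"] == "FACT_NEEDED":
            # Ask the LLM for the fact KBAI could not infer, then retry.
            facts[response["missingFact"]] = ask_llm(response["missingFact"], contract_text)
        else:
            raise RuntimeError("Unexpected stop reason: " + response["stopReason"])
```

In Copilot Studio, this loop is what the Flow implements with a Do until loop, an HTTP action, and prompt actions.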
The following Topic collects the contract text and triggers the Flow.
kind: AdaptiveDialog
beginDialog:
  actions:
    - kind: Question
      prompt: "Paste the contract text"
      variable: "Topic.ContractText"

    - kind: InvokeFlowAction
      input:
        binding:
          text: "=Topic.ContractText"
      output:
        binding:
          result: "Topic.Result"
      flowId: "contract-analysis-flow"

    - kind: SendActivity
      activity: "The contract is {Topic.Result}."
Flow: Implements the reasoning loop:

1. Initialize Facts = {"contractText": Topic.ContractText}.
2. Until the KBAI response's stopReason is COMPLETED:
   - Send Facts to the KBAI API.
   - If the response is FACT_NEEDED, query the LLM for the missing fact (e.g., "Does the contract include a non-compete clause?").
   - Update Facts with the LLM's response.
3. Return Facts, both final and intermediate reasoning, to the calling Topic when complete.

To implement such a flow in Copilot Studio:
Open Flows - New Agent Flow in Copilot Studio
Add the trigger "When an agent calls the flow".
Initialize variables: Facts (an object, starting with the contract text received from the Topic), State (a string holding the last stopReason), MissingFact (a string), NewFacts (an object), and JSONStartPosition and JSONEndPosition (integers).
Add a Do until loop that runs until the State variable is equal to COMPLETED.
Inside the loop, call KBAI using an HTTP action. Use the published URL of the knowledge base as the URI, select the POST method, and set Content-Type: application/json. Set the body to an expression forming a request to KBAI, specifying the target fact to infer and any known facts:

json(concat('{"fact":"contract.isValid","facts":', string(variables('Facts')), '}'))
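For reference, the request body produced by that expression is equivalent to this Python snippet (the facts value is illustrative):

```python
import json

# Facts accumulated so far (illustrative value).
facts = {"contractText": "Employee agrees to a non-compete clause..."}

# Target fact to infer plus all currently known facts.
body = json.dumps({"fact": "contract.isValid", "facts": facts})
print(body)
```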
Use a Parse JSON action to parse the HTTP response body with the following schema:
{
  "type": "object",
  "properties": {
    "stopReason": {
      "type": "string"
    },
    "facts": {
      "type": "object",
      "properties": {}
    },
    "log": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "code": {
            "type": "string"
          },
          "message": {
            "type": "string"
          },
          "fact": {
            "type": "string"
          },
          "dependencies": {
            "type": "object",
            "properties": {
              "type": {
                "type": "string"
              },
              "properties": {
                "type": "object"
              }
            }
          }
        },
        "required": [
          "code",
          "message",
          "fact"
        ]
      }
    }
  }
}
Set the State variable to the stopReason from the parsed response body (body('Parse_JSON')?['stopReason']) and Facts to the facts from it (body('Parse_JSON')?['facts']).
Add a Condition to check whether there's a missing fact (State equals FACT_NEEDED) and, if so, run an LLM prompt to determine the fact from the document text.
In the condition, add a Run a Prompt action.
The action should ask a prompt such as:
Your job is to process documents and extract facts from them, based on the text and the best of your understanding. It's ok to derive a fact when straightforward (i.e. to calculate something) as long as it doesn't involve guessing.
Extract the fact called "Fact Name" from the document:
Document Text
Where Fact Name and Document Text are prompt variables. Set Fact Name to the fact on which inference stopped (last(body('Parse_JSON')?['log'])['fact']) and Document Text to the text of the document that the Topic passed to the Flow when it was triggered (triggerBody()?['text']).
Use a Set action to make a Copilot Studio-friendly fact name by replacing dots with underscores in the fact name (replace(last(body('Parse_JSON')?['log'])['fact'], '.', '_')) and storing it in the MissingFact variable.
KBAI also specifies the data type it expects for a missing fact. Create another Run a prompt action to convert the LLM response to the target data type. This two-step approach provides much more reliable document querying and conversion than asking a model to return an answer in JSON format in a single prompt.
The prompt for this step may need to be fairly detailed to explain what the expected result is, especially if there are any ambiguities for handling empty/null/unknown values when querying the document. For example:
Convert the fact called 'Fact Name' from a string value 'Text input' to a JSON object containing a single key `value` with the fact converted according to the following JSON schema:
JSON schema
If the fact defines whether something is present in the document, has a boolean type, and the requested information wasn't found or extracted from the document, the returned `value` should be set to `false`. Otherwise, the boolean `value` should be `true` if the document contains the information specified by the fact.
The fact doesn't have to be presented explicitly. It's ok to derive a fact when straightforward (i.e. to calculate something) as long as it doesn't involve guessing.
It's usually possible to run this second conversion prompt with a smaller model, such as GPT-4o Mini.
The Text Input variable in the example above would be the original prompt response (outputs('Run_a_prompt')?['body/responsev2/predictionOutput/text']), Fact Name is the missing fact name (variables('MissingFact')), and JSON Schema is the schema requested by KBAI for the fact (last(body('Parse_JSON')?['log'])['schema']).
Add a Set action to set JSONStartPosition to the beginning of the JSON in the second LLM response (add(indexOf(outputs('Convert_to_the_target_data_type')?['body/responsev2/predictionOutput/text'], '```json'), 8)).

Similarly, set JSONEndPosition to its end (lastIndexOf(outputs('Convert_to_the_target_data_type')?['body/responsev2/predictionOutput/text'], '```')).

Now, add a Parse JSON action to convert the extracted JSON fragment (substring(outputs('Convert_to_the_target_data_type')?['body/responsev2/predictionOutput/text'], variables('JSONStartPosition'), sub(variables('JSONEndPosition'), variables('JSONStartPosition')))) to an object.
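Those expressions amount to stripping the Markdown code fence from the LLM response and parsing the JSON between the markers. The same logic as a Python sketch:

```python
import json

def extract_fenced_json(llm_text: str) -> dict:
    """Parse the JSON object an LLM returns inside a Markdown code fence.

    Mirrors the JSONStartPosition/JSONEndPosition substring logic above.
    """
    start = llm_text.index("```json") + len("```json")
    end = llm_text.rindex("```")
    return json.loads(llm_text[start:end])

# Example: an LLM response wrapping the converted fact in a fence.
response = "Here is the result:\n```json\n{\"value\": true}\n```"
print(extract_fenced_json(response))  # {'value': True}
```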
Set two more variables. Set NewFacts to the existing Facts with the newly extracted fact added (setProperty(variables('Facts'), variables('MissingFact'), body('Parse_LLM_Extracted_JSON')?['value'])). Then copy NewFacts back to Facts (at the time of writing this tutorial, this has to be done in a separate step).
As the last step, executed after the loop, respond to the agent with the inferred facts (string(variables('Facts'))).
In this example, KBAI's API is called iteratively within a reasoning loop, with the LLM providing the facts KBAI needs to complete its analysis. This hybrid approach leverages KBAI's deterministic rules for consistent, explainable decisions and the LLM's natural language understanding for extracting facts from unstructured text.
This makes the agent suitable for complex tasks like legal document analysis, compliance verification, and more.
By combining Copilot Studio’s conversational capabilities with KBAI’s deterministic reasoning and an LLM’s natural language processing, you can build powerful AI agents that perform structured document analysis with precision and transparency. The detailed example of analyzing an employment contract demonstrates how the KBAI API can be practically applied, ensuring that your AI system is reliable, explainable, and effective for critical applications.
Experience the power of knowledge-driven deterministic AI with KBAI