AI Agents

Plan your agent

EARLY ACCESS

Before you build your AI agent, plan ahead to ensure a successful implementation. Good planning covers architectural decisions, tool selection, and workflow design.

You should also plan behavioral guidelines to define how the agent communicates and handles edge cases.


Identify business outcomes

Define the following before you build:

Business outcomes

What do you want to achieve?
For example: reduce support time, increase conversion rate, improve customer satisfaction.

Specific tasks

What tasks must be performed?
For example: book tickets, update records, schedule meetings.

KPIs

How will you measure success?
For example: average handling time, first contact resolution, lead conversion rate.
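Once you have chosen KPIs, make sure you can actually compute them from the data you collect. As a minimal sketch (the ticket records and their fields are hypothetical, not an Infobip data model), average handling time and first contact resolution could be derived like this:

```python
from datetime import timedelta

# Hypothetical ticket records: (handling time, resolved on first contact?)
tickets = [
    (timedelta(minutes=12), True),
    (timedelta(minutes=30), False),
    (timedelta(minutes=8), True),
    (timedelta(minutes=20), True),
]

# Average handling time across all tickets.
avg_handling = sum((t for t, _ in tickets), timedelta()) / len(tickets)

# Share of tickets resolved without a follow-up contact.
fcr_rate = sum(1 for _, first in tickets if first) / len(tickets)

print(f"Average handling time: {avg_handling}")     # 0:17:30
print(f"First contact resolution: {fcr_rate:.0%}")  # 75%
```

Defining the metric as a formula before you build makes it unambiguous what "success" means when you later evaluate the agent.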

Examples by scenario

  • Support: Ticket classification, knowledge base lookup, status updates, escalation management.
  • Sales: Lead qualification, opportunity creation, meeting scheduling, CRM updates.


Multi-agent system considerations

In a multi-agent system, multiple AI agents work together to achieve a goal.

When to use a multi-agent system

Consider a multi-agent approach instead of a single large agent when:

  • The agent needs to handle multiple distinct task categories.
  • Different capabilities are independent or interact minimally.
  • You want to keep complexity manageable as the system scales.

Architecture components

Consider the following architecture components:

  1. Orchestrator: Receives end user requests, identifies which subagent handles them, and manages tool routing.
  2. Subagents: Each subagent focuses on a specific domain: knowledge retrieval, CRM updates, scheduling, classification, and so on.

For detailed architecture, example workflows, and subagent versioning, see Orchestration.
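The orchestrator pattern above can be sketched in a few lines. This is an illustrative outline only, not an Infobip API: the subagent names, the `classify` stand-in (which in practice would be an LLM-based intent classifier), and the routing table are all assumptions.

```python
# Hypothetical subagents, each owning one domain.
SUBAGENTS = {
    "knowledge": lambda msg: f"[knowledge] answering: {msg}",
    "crm": lambda msg: f"[crm] updating record for: {msg}",
    "scheduling": lambda msg: f"[scheduling] booking: {msg}",
}

def classify(message: str) -> str:
    """Stand-in for an LLM-based intent classifier."""
    text = message.lower()
    if "meeting" in text or "schedule" in text:
        return "scheduling"
    if "account" in text or "update" in text:
        return "crm"
    return "knowledge"

def orchestrate(message: str) -> str:
    """The orchestrator: classify the request, then route it."""
    domain = classify(message)
    return SUBAGENTS[domain](message)

print(orchestrate("Schedule a meeting for Tuesday"))
# [scheduling] booking: Schedule a meeting for Tuesday
```

Keeping routing logic in one place is what lets each subagent stay small and independently testable as the system scales.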


Plan the workflow

Follow this process before you implement AI agents:

  1. Decide if AI agents are required: Determine whether the use case requires AI agents or if chatbots are sufficient.
  2. Define KPIs, business outcomes, and core tasks: Identify specific outcomes you want to achieve.
  3. Identify all use cases: List all scenarios the agent handles, including edge cases and unexpected situations.
  4. Assess architectural complexity: Decide between a single-agent and a multi-agent system.
  5. Identify required tools: Identify both external integrations and internal systems.
  6. Validate integration paths: Confirm how the agent integrates with Chatbots and define escalation flows to human agents.
  7. Define behavioral guidelines, safety rules, and tone: Set clear boundaries for what the agent can and cannot do. See Behavioral rules and guardrails and Write prompts for AI agents.
  8. Identify test cases and evaluation criteria: Define how you will measure success before you start building.
  9. Develop, test, and refine: Refine the agent based on performance against your test cases.
  10. Deploy and monitor: Monitor performance and refine based on real-world usage.
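Step 8 above benefits from being concrete before any development starts. A minimal sketch of test cases and a pass-rate evaluation might look like the following; the case contents, the `agent_fn` callable, and the `judge_fn` callable are all hypothetical placeholders for whatever agent and grading method you use.

```python
# Illustrative test cases defined before building (step 8).
TEST_CASES = [
    {"input": "Cancel my order", "expect": "offers cancellation flow"},
    {"input": "Give me a refund now", "expect": "escalates to human agent"},
    {"input": "What's your API key?", "expect": "refuses and explains why"},
]

def evaluate(agent_fn, judge_fn) -> float:
    """Run every case through the agent and return the pass rate.

    agent_fn: callable that takes an input message and returns a response.
    judge_fn: callable that decides whether a response meets the expectation.
    """
    passed = sum(
        1 for case in TEST_CASES
        if judge_fn(agent_fn(case["input"]), case["expect"])
    )
    return passed / len(TEST_CASES)
```

Because the criteria exist up front, steps 9 and 10 become a loop of measuring this pass rate and refining the agent until it meets your target.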

Plan behavioral guidelines

AI agents behave differently from rule-based systems. To ensure a successful implementation, understand how their behavior differs, what you can and cannot control, and how to define the right boundaries.

Predictability: rule-based vs AI agents

| Aspect | Rule-based systems | AI agents |
| --- | --- | --- |
| Behavior | Predefined and predictable. | Probabilistic and adaptive. |
| Control | Fully controlled. | Context-driven. |
| Responses | You know exactly how the system responds in every scenario. | The agent interprets the situation and chooses an appropriate response. |
| Adaptability | Cannot adapt if end users deviate from the intended path. | Can handle unexpected inputs, but some behavior cannot be predicted or controlled in advance. |

What you cannot control

Understanding these limitations is essential before you deploy an AI agent in production:

  • Exact phrasing: The agent generates responses dynamically. If you need exact wording, such as legal disclaimers, write it into your application rather than relying on the agent to generate it.
  • Synonyms and rewording: The agent understands intent. "I want to cancel my order" and "Can you stop my shipment?" may be handled the same way. You cannot script exact responses.
  • Scope boundaries: If you do not explicitly restrict a behavior, the agent may attempt it if it seems reasonable.
  • Individual messages: Unlike rule-based chatbots, you cannot approve every possible response. You define guidelines and boundaries, not exact outputs.
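The first point above has a simple practical consequence: wording that must be exact belongs in application code, not in the prompt. A minimal sketch, with an illustrative disclaimer and function name:

```python
# Exact legal wording is guaranteed by the application layer,
# not by asking the agent to reproduce it.
LEGAL_DISCLAIMER = "This is not financial advice. Terms and conditions apply."

def send_reply(agent_response: str) -> str:
    """Append the fixed disclaimer to whatever the agent generated."""
    # The agent's phrasing varies between requests; the disclaimer never does.
    return f"{agent_response}\n\n{LEGAL_DISCLAIMER}"

print(send_reply("Your refund has been initiated."))
```

This way the agent keeps its adaptive phrasing while compliance-critical text stays word-for-word identical on every reply.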

Behavioral rules and guardrails

Define explicit guidelines across these areas:

| Area | Guidelines |
| --- | --- |
| Capability boundaries | Define what the agent can and cannot do. Be specific and comprehensive. Never promise capabilities that depend on tools not added to the agent. |
| Communication style | Set tone of voice, brand voice, language preferences, and level of formality. |
| Mandatory restrictions | Always confirm before modifying end user data. Never share personally identifiable information. Never offer actions that depend on unavailable tools. |
| Safety and compliance | Include legal disclaimers, privacy requirements, industry-specific regulations, and rules for when to escalate to a human agent. |
IMPORTANT

If you do not explicitly define these constraints, the agent makes assumptions. It may claim capabilities it does not have, take unintended actions, or generate responses that violate your guidelines.


Balancing control and adaptability

When you use AI agents, you trade strict word-for-word control for adaptability and intelligence. The agent handles unexpected situations more effectively, but requires well-defined boundaries and behavioral rules to stay within the intended scope.

Include these guidelines in your system prompt.

For examples and best practices, see Write prompts for AI agents.
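One way to keep guidelines maintainable is to store them as structured data and assemble the system prompt from them. This is a sketch under assumptions: the section names mirror the table above, but the exact rules, prompt format, and function name are illustrative, not a required Infobip format.

```python
# Illustrative behavioral guidelines, grouped by the areas described above.
GUIDELINES = {
    "Capability boundaries": [
        "You can look up order status and answer product questions.",
        "You cannot issue refunds or change billing details.",
    ],
    "Communication style": [
        "Use a friendly, professional tone in the end user's language.",
    ],
    "Mandatory restrictions": [
        "Always confirm before modifying end user data.",
        "Never share personally identifiable information.",
    ],
    "Safety and compliance": [
        "Escalate to a human agent when the end user asks for one.",
    ],
}

def build_system_prompt(guidelines: dict[str, list[str]]) -> str:
    """Render each guideline area as a titled section of bullet rules."""
    sections = []
    for area, rules in guidelines.items():
        bullet_list = "\n".join(f"- {rule}" for rule in rules)
        sections.append(f"## {area}\n{bullet_list}")
    return "\n\n".join(sections)

print(build_system_prompt(GUIDELINES))
```

Keeping the rules as data makes it easy to review, version, and test them independently of the surrounding prompt text.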

