Before you publish your AI Agent, it's important to make sure it behaves as expected. With the Test AI Agent feature, you can simulate full conversations within the AI Agent setup flow, with no live Contacts or workarounds required.
How to Test Your AI Agent
Go to AI Agents > Create (or Edit)
On the right panel, select the Chat tab
Type a message to simulate a customer query
Watch how the AI Agent responds in real time
You'll also see logs of any actions the AI Agent takes. Currently, these are:
Assigning to agent or team
Updating Contact fields
Updating Lifecycle stages
Closing a conversation
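If it helps to picture what these log entries capture, here is a minimal TypeScript sketch of how they might be modeled. All type and field names are illustrative assumptions, not the product's actual data model.

```typescript
// Hypothetical model of the action log entries listed above.
// None of these names come from the product; they are illustrative only.
type AgentAction =
  | { kind: "assign"; target: { type: "agent" | "team"; name: string } }
  | { kind: "updateContactField"; field: string; value: string }
  | { kind: "updateLifecycleStage"; stage: string }
  | { kind: "closeConversation"; reason?: string };

// Example: the kind of entry you might see after a handover to an agent.
const example: AgentAction = {
  kind: "assign",
  target: { type: "agent", name: "Rachel" },
};
```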
Reset & Update the Test Contact
You can reset the conversation at any time by selecting Reset Chat. This clears the chat history so you can start fresh.
You can update the test Contact's name, email, or other fields in the Contact fields tab to see how the AI Agent responds to different inputs.
These changes won't affect your real Contacts; they're only used for testing purposes.
Test AI Agent with Files
You can also test how your AI Agent reads and processes both text and files, just as it would in real customer conversations.
To do this, select the file icon in the Test AI Agent panel and upload a file from your device or file library.
What You Can Upload
Supported file types: .pdf, .jpg, .png, and other common formats
Maximum file size: 20 MB per file
Files can be added from your device or the in-app file library
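As a rough illustration of the limits above, the sketch below shows the kind of client-side check they imply. The accepted-extension list is an assumption ("other common formats" is not enumerated), and the helper name is hypothetical; the real product may enforce these limits differently.

```typescript
// Illustrative pre-upload check based on the limits described above.
const MAX_FILE_SIZE = 20 * 1024 * 1024; // 20 MB per file
const ACCEPTED_EXTENSIONS = [".pdf", ".jpg", ".png"]; // assumed subset

function canUpload(file: File): boolean {
  const name = file.name.toLowerCase();
  const extensionOk = ACCEPTED_EXTENSIONS.some((ext) => name.endsWith(ext));
  const sizeOk = file.size <= MAX_FILE_SIZE;
  return extensionOk && sizeOk;
}
```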
How It Works
You can send text, files, or both in a single message.
Hover over a file to remove it if needed.
Once sent, files appear in the chat just like in the Inbox composer.
The AI Agent analyzes up to the 5 most recent files per request; additional files can still be sent, but they won't be processed.
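To make the five-file cap concrete, here is a hedged sketch of how attachments might be selected for analysis. The interface, field names, and function are hypothetical, not the product's implementation.

```typescript
// Hypothetical selection of attachments for analysis: keep only the
// 5 most recent files; older ones remain in the chat but are skipped.
interface Attachment {
  name: string;
  sentAt: Date;
}

const MAX_ANALYZED_FILES = 5;

function filesToAnalyze(attachments: Attachment[]): Attachment[] {
  return [...attachments]
    .sort((a, b) => b.sentAt.getTime() - a.sentAt.getTime()) // newest first
    .slice(0, MAX_ANALYZED_FILES);
}
```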
Best Practices
Use natural, realistic inputs: Simulate messages the customer would actually say to test how well the AI Agent understands them.
Reset between runs: Clear the chat before each new test to avoid unintended context from earlier responses.
Edit Contact fields to explore logic branches: Try different languages, names, phone numbers and more to ensure your conditions and actions respond correctly.
Tweak prompts in small steps: After each test, revise only one prompt block at a time. Save and test again to isolate what works.
Monitor action logs: Check for expected behavior (e.g., "assigned to @Rachel") to verify prompt logic and setup.
Test tone and personality: Review whether the responses match your configured tone (e.g., friendly, professional).
FAQ and Troubleshooting
Are test conversations visible in the Inbox module?
No, simulated chats are separate from real conversations and do not appear in the Inbox module.
Can I test Workflow triggers?
The test panel doesn't execute Workflows, but it will show if your AI Agent is set to trigger one. You'll see the Workflow name in the action log beneath the AI Agent's response.
Will this affect my real Contact data?
No. The test panel uses a simulated Contact that doesn't impact your live data.
Can I test unpublished AI Agents?
Yes. You can safely test and iterate before publishing your AI Agent.
How are team assignments shown in the test panel?
Currently, the test panel assigns team conversations to the first person listed in that team, even if the team uses assignment rules like round robin or least open conversation. This is a limitation of the test experience; it still lets you confirm that your assignment logic is triggered correctly.
When the AI Agent is published, it will follow your workspace's real team assignment rules (like round robin or least open conversation). We're working on updating the test experience to better reflect live team assignments and will roll this out in a future update.
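One way to picture the difference is the sketch below, which contrasts the test panel's current behavior (always the first listed member) with a live round-robin rule. Everything here, including the names, is an illustrative model rather than the product's implementation.

```typescript
// Illustrative contrast between test-panel and live team assignment.
// Team members and logic are assumptions for explanation only.
const team = ["Rachel", "Marco", "Priya"];

// Test panel today: always the first person listed in the team.
function assignInTestPanel(members: string[]): string {
  return members[0];
}

// Live workspace with a round-robin rule: rotate through members.
let nextIndex = 0;
function assignRoundRobin(members: string[]): string {
  const assignee = members[nextIndex];
  nextIndex = (nextIndex + 1) % members.length;
  return assignee;
}

assignInTestPanel(team); // "Rachel" every time
assignRoundRobin(team);  // "Rachel", then "Marco", then "Priya", ...
```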