
How to Test AI Agents

Shing-Yi Tan

Before you publish your AI Agent, it’s important to make sure it behaves as expected. With the Test AI Agent feature, you can simulate full conversations within the AI Agent setup flow—no live Contacts or workarounds required.

How to Test Your AI Agent

  1. Go to AI Agents > Create (or Edit)

  2. On the right panel, select the Chat tab

  3. Type a message to simulate a customer query

  4. Watch how the AI Agent responds in real time

You’ll also see a log of any actions the AI Agent takes. Currently, these actions are:

  • Assigning to an agent or team

  • Updating Contact fields

  • Updating Lifecycle stages

  • Closing a conversation

Reset & Update the Test Contact

You can reset the conversation at any time by selecting Reset Chat. This clears the chat history so you can start fresh.

You can update the test Contact’s name, email or other fields in the Contact fields tab to see how the AI Agent responds to different inputs.

These changes won’t affect your real Contacts—they’re only used for testing purposes.

Test AI Agent with Files

You can also test how your AI Agent reads and processes both text and files, just like in real customer conversations.

To do this, select the file icon in the Test AI Agent panel and upload a file from your device or file library.

What You Can Upload

  • Supported file types: .pdf, .jpg, .png, and other common formats

  • Maximum file size: 20 MB per file

  • Files can be added from your device or the in-app file library

How It Works

  • You can send text, files, or both in a single message.

  • Hover over a file to remove it if needed.

  • Once sent, files appear in the chat just like in the Inbox composer.

  • The AI Agent will analyze up to the 5 most recent files per request. Additional files can still be sent, but they won’t be processed.

Best Practices

  • Use natural, realistic inputs: Simulate messages the customer would actually say to test how well the AI Agent understands them.

  • Reset between runs: Clear the chat before each new test to avoid unintended context from earlier responses.

  • Edit Contact fields to explore logic branches: Try different languages, names, phone numbers and more to ensure your conditions and actions respond correctly.

  • Tweak prompts in small steps: After each test, revise only one prompt block at a time. Save and test again to isolate what works.

  • Monitor action logs: Check for expected behavior (e.g., “assigned to @Rachel”) to verify prompt logic and setup.

  • Test tone and personality: Review whether the responses match your configured tone (e.g., friendly, professional).

FAQ and Troubleshooting

Are test conversations visible in the Inbox module?

No, simulated chats are separate from real conversations and do not appear in the Inbox module.

Can I test Workflow triggers?

The test panel doesn’t execute Workflows, but it will show if your AI Agent is set to trigger one. You’ll see the Workflow name in the action log beneath the AI Agent’s response.

Will this affect my real Contact data?

No. The test panel uses a simulated Contact that doesn’t impact your live data.

Can I test unpublished AI Agents?

Yes. You can safely test and iterate before publishing your AI Agent.

How are team assignments shown in the test panel?

Currently, the test panel assigns team conversations to the first person listed in that team, even if assignment rules like round robin or least open conversations are configured. This is a limitation of the test environment, but it still lets you confirm that your assignment logic is triggered correctly.

When the AI Agent is published, it will follow your workspace’s real team assignment rules (like round robin or least open conversation). We’re working on updating the test experience to better reflect live team assignments and will roll this out in a future update.

