How to Automate Conversation QA Using n8n & AI

Shing-Yi Tan
12 min read

Customers are advised to review AI-handled conversations after enabling AI Agent, but manual QA becomes too time-consuming at scale. This guide shows how to automate conversation QA using n8n + an AI model, so every closed conversation can be reviewed automatically and logged into Google Sheets for reporting.

In summary, this n8n workflow is triggered every time a conversation is closed. Once triggered, it calls the List Messages API to fetch the last 50 messages (with an option to fetch up to 100 using pagination), then passes the transcript to an AI model for QA processing — such as sentiment analysis, engagement scoring, and resolution checks. The results are tracked in a Google Sheet so you can easily look up any conversation by contact ID.

What You'll Build

An automation that:

  • Triggers when a conversation is closed

  • Retrieves the last 50 messages using the List Messages API (optionally up to 100 with pagination)

  • Cleans and formats messages into a transcript

  • Sends the transcript to an AI model for QA evaluation

  • Parses AI output safely (JSON-only contract)

  • Appends results into Google Sheets (one row per conversation)

Requirements

Tools

  • Automation platform: n8n

  • Respond.io Developer API — List Messages endpoint

  • AI API — OpenAI, Anthropic, Gemini, or any provider supported by n8n

  • Reporting / storage — Google Sheets (or your preferred storage tool)

Credentials

  • Respond.io Developer API key

  • AI provider API key (OpenAI, Anthropic, Gemini, etc.)

  • Google Sheets credential in n8n (OAuth or service account)

Before You Start

1) Create a Google Sheet for QA results

Create a spreadsheet and add a sheet (e.g. QA_Results) with these headers:

  • timestamp

  • contact_id

  • opened_at

  • channel

  • overall_score

  • resolved

  • customer_sentiment

  • reasoning

  • engagement_score

2) Confirm your trigger payload fields

From the Conversation closed trigger output, identify the exact field names for:

  • Conversation ID

  • Contact ID

  • Conversation opened timestamp

  • Conversation closed timestamp (optional)

Field names vary by implementation. In the code nodes below, you'll see placeholders like trigger.conversationId. Update those mappings to match your trigger output.
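Because field names differ between workspaces, a defensive lookup in a Code node can try several candidate key names and use the first one present. A minimal sketch — the candidate names below are illustrative, not guaranteed to match your payload:

```javascript
// Defensive field lookup: return the first non-empty value among
// several candidate key names. Candidate names are examples only.
function pickField(obj, candidates) {
  for (const key of candidates) {
    const value = obj?.[key];
    if (value !== undefined && value !== null && value !== "") return value;
  }
  return null;
}

// Hypothetical trigger payload for illustration.
const trigger = { conversation_open_timestamp: "2024-05-01T09:00:00+08:00" };

const openedAt = pickField(trigger, [
  "conversation_open_timestamp",
  "conversation_opened_timestamp",
  "conversationOpenedAt",
]);
```

This is the same pattern the cleaning code in step 4 uses to read the opened timestamp.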

Step-by-step Guide

1. Trigger: Conversation Closed

When a conversation is closed in respond.io, we want to automatically trigger the n8n workflow. This ensures every completed conversation goes through QA — no manual review needed.

n8n node: Conversation closed trigger

  1. In n8n, add the respond.io trigger node.

  2. Choose Conversation Closed.

  3. Connect your respond.io API key credentials. Learn how to set this up in n8n with our integration guide.

This trigger makes sure you only review conversations after they are finished.

2. Fetch Conversation Messages

In this step, you'll fetch the last 50 inbound and outbound messages using the List Messages API. This is usually enough for a thorough QA review. If you need more context, you can optionally fetch another 50 messages (up to 100 total) using pagination.

n8n nodes:

  • get 1st 50 messages (HTTP Request)

  • Is there a second page? (IF) — optional

  • get 2nd 50 messages (HTTP Request) — optional

2.1 Get first 50 messages

Node: get 1st 50 messages

  1. Select Core > HTTP Request

  2. Method: GET

  3. URL: your List Messages endpoint

Example URL (replace with your actual API base URL + endpoint format):

https://api.respond.io/v2/contact/{{identifier}}/message/list

Replace identifier with one of the following formats: id:<contactID>, phone:+<countryCodeAndPhone>, or email:<contactEmail>.
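The endpoint URL can be composed in code from any of the three identifier formats. A small sketch using the example base URL above:

```javascript
// Build the List Messages URL for a given contact identifier.
// The base URL matches the example above; adjust to your API version.
function buildListMessagesUrl(identifier) {
  return `https://api.respond.io/v2/contact/${identifier}/message/list`;
}

// All three identifier formats work the same way:
const byId = buildListMessagesUrl("id:12345");
const byPhone = buildListMessagesUrl("phone:+60123456789");
const byEmail = buildListMessagesUrl("email:jane@example.com");
```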

  4. Turn on Send Query Parameters > Using Fields Below, then add:

    • Name: limit

    • Value: 50

If your API supports it, set sorting oldest → newest. Example: sort=asc.

  5. Enable Send Headers > Using Fields Below, then add:

    • Name: Accept

      • Value: application/json

    • Name: Authorization

      • Value: Bearer <your_respond_api_key>

If your List Messages response includes pagination.next, you can use it directly to fetch the second page.

2.2 Check and get more than 50 messages (Optional)

If 50 messages isn't enough for your QA needs, you can fetch a second page of 50 messages. Add an IF node to check whether there are more messages available.

Node: Is there a second page? (IF)

  • Left value (Expression): {{ $json.pagination.next }}

  • Operator: is not empty

If true, it means there are more messages to fetch. If false, the workflow continues without fetching more — this prevents the workflow from failing when there's no second page. The Merge node downstream will wait for both paths to complete, so the workflow still runs smoothly either way.
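The IF node's condition can be expressed as a plain predicate: fetch a second page only when pagination.next is a non-empty value. A sketch of that check:

```javascript
// Mirrors the "Is there a second page?" IF node:
// true only when pagination.next exists and is a non-empty string.
function hasSecondPage(response) {
  const next = response?.pagination?.next;
  return typeof next === "string" && next.length > 0;
}
```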

2.3 Get the next 50 messages (Optional)

If the IF node passes (a second page exists), fetch the next batch.

Node: get 2nd 50 messages

  1. Add another HTTP Request node.

  2. Set:

    • Method: GET

    • URL: {{ $json.pagination.next }}

  3. Add the same headers:

    • Accept: application/json

    • Authorization: Bearer <your_respond_api_key>

3. Merge Message Pages (Optional)

This step is only needed if you're fetching more than 50 messages. If you chose not to fetch a second page, you can skip this node and connect get 1st 50 messages directly to the next step (Clean + keep messages since last open).

The Merge node combines the first 50 messages with the second 50 messages into a single list. Without it, the workflow can't process two separate API responses together.

n8n node: Merge

  1. Add Flow > Merge.

  2. Set Mode to Append.

  3. Set Number of Inputs: 2

  4. Connect:

    • Input 1: get 1st 50 messages

    • Input 2: get 2nd 50 messages

4. Clean and Filter Messages

The API response is still raw and contains a lot of extra information the AI doesn't need. This step cleans it up — removing old messages, normalizing sender labels, and structuring everything into a simple list that's ready for AI processing. All you need to do is add the Code node and copy-paste the JavaScript below.

n8n node: Clean + keep messages since last convo open

To do this, select Add node → Core → Code → Code in JavaScript.

This code will:

  1. Combine messages from both pages

  2. Filter out messages sent before the conversation opened

  3. Sort messages from oldest → newest

  4. Normalize senders into consistent labels: Contact, AI Agent, Human Agent, Workflow

  5. Add message index numbers

  6. Return a clean, structured list for the transcript step

Notes: This example filters using messageId compared to the opened timestamp converted to microseconds. If your API provides createdAt timestamps instead, filter by createdAt rather than messageId. Ensure your conversation_open_timestamp includes a timezone. If it does not, set it to your workspace timezone before parsing.

Paste this into the Code node:

// n8n Code node AFTER Merge (Run Once for All Items)
// Incoming items are the API responses from page1 and (optionally) page2.

function unwrapRespondList(json) {
  if (Array.isArray(json) && json.length && json[0]?.items) return json[0];
  if (json?.items) return json;
  if (Array.isArray(json)) return { items: json, pagination: {} };
  return { items: [], pagination: {} };
}

function toMicroseconds(ts) {
  if (!ts || typeof ts !== "string") return null;

  // Prefer timestamps that already contain timezone info.
  // If your timestamp lacks timezone, add it upstream (recommended) rather than hardcoding here.
  const iso = ts.includes("T") ? ts : ts.replace(" ", "T");
  const ms = Date.parse(iso);
  if (Number.isNaN(ms)) return null;
  return ms * 1000; // microseconds
}

function normalizeSender(source) {
  if (!source) return "Unknown";
  const s = String(source).toLowerCase();
  if (s === "contact") return "Contact";
  if (s.includes("ai")) return "AI Agent";
  if (s.includes("workflow") || s.includes("automation")) return "Workflow";
  if (s.includes("user") || s.includes("agent")) return "Human Agent";
  return source;
}

// Read from the Conversation closed trigger
const trigger = $("Conversation closed trigger").first().json;
const body = trigger.body ?? trigger;

const openedTs =
  body.conversation_open_timestamp ||
  body.conversation_opened_timestamp ||
  body.conversationOpenedAt ||
  body["conversation_open_timestamp "] ||
  null;

const openMicro = toMicroseconds(openedTs);

// Merge items from all incoming API payloads (1 or 2 pages)
let mergedItems = [];
for (const item of $input.all()) {
  const unwrapped = unwrapRespondList(item.json);
  if (Array.isArray(unwrapped.items)) mergedItems.push(...unwrapped.items);
}

// Filter messages after conversation opened timestamp
if (openMicro !== null) {
  mergedItems = mergedItems.filter((m) => {
    const idNum = Number(m?.messageId);
    return Number.isFinite(idNum) && idNum > openMicro;
  });
}

// Sort oldest -> newest
mergedItems.sort((a, b) => Number(a?.messageId ?? 0) - Number(b?.messageId ?? 0));

// Index + shape
const indexed = mergedItems.map((m, i) => {
  const traffic = m?.traffic ?? null;
  const sender = m?.sender || {};

  // Incoming traffic is always from Contact
  if (traffic === "incoming") {
    return {
      Index: i + 1,
      traffic,
      message: m?.message ?? null,
      Sender: { source: "Contact" },
    };
  }

  return {
    Index: i + 1,
    traffic,
    message: m?.message ?? null,
    Sender: {
      source: normalizeSender(sender?.source),
      userId: sender?.userId ?? null,
      teamId: sender?.teamId ?? null,
    },
  };
});

return [
  {
    json: {
      message_count: indexed.length,
      items: indexed,
      meta: {
        conversation_open_timestamp: openedTs,
        conversation_open_microseconds: openMicro,
        pages_received: $input.all().length,
      },
    },
  },
];

Make sure to set Mode = Run Once for All Items and Language = JavaScript.

The script above is a reference. Your workspace may return different fields or structures. To build the right JSON parsing logic for your setup, copy the output from the previous node, paste it into an AI tool (e.g., ChatGPT or Claude) along with the reference script above, and describe the output format you need. The AI can then adapt the script to match your actual data.

5. Build Transcript

This step converts the cleaned message list into a Markdown-style transcript for AI processing. Structured transcripts improve AI understanding and reduce hallucinations.

n8n node: Build markdown transcript (Core → Code → JavaScript)

Paste this into the Code node:

// n8n Code node (Run Once for All Items)
// Converts the cleaned message list from the previous node into a
// numbered transcript matching the AI prompt's INPUT FORMAT
// (message number, traffic, sender source, text).

function extractText(message) {
  // Message payloads vary by channel and workspace — adjust as needed.
  if (message == null) return "[no content]";
  if (typeof message === "string") return message;
  if (typeof message.text === "string") return message.text;
  if (typeof message.type === "string") return `[${message.type}]`;
  return JSON.stringify(message);
}

const data = $input.first().json;
const items = Array.isArray(data.items) ? data.items : [];

const lines = items.map((m) => {
  const sender = m?.Sender?.source ?? "Unknown";
  return `${m.Index}. traffic:${m.traffic} | sender source:${sender} | text: ${extractText(m.message)}`;
});

return [
  {
    json: {
      transcript: lines.join("\n"),
      message_count: data.message_count ?? items.length,
      meta: data.meta ?? {},
    },
  },
];

Make sure to set Mode = Run Once for All Items and Language = JavaScript.

6. AI-based conversation reviews

This node sends the transcript to an AI model (e.g., an OpenAI model) and returns a structured JSON response.

How to set up

  1. Select your AI provider credentials. In this example, we will use OpenAI.

  2. Resource: Message a Model

  3. Operation: Message an Assistant

  4. Messages:

    1. Type: Text

    2. Role: User

    3. Prompt: {{ $json.transcript }}

  5. Simplify Output: Toggle On

  6. Add Option:

    1. Instructions: the template below is an example; you can paste it into another AI tool (e.g., ChatGPT or Claude) to adapt it to your needs:

You are a conversation quality analyst for respond.io, a customer messaging platform. Your job is to review support or sales conversations and produce accurate, consistent quality scores.

You will receive a conversation transcript as a numbered list of messages. Evaluate it using the structured reasoning process below before producing your final output.

---

## INPUT FORMAT

Each message in the transcript contains:

- A message number (e.g. `1`, `2`, `3`)
- `traffic:incoming` — message FROM the customer
- `traffic:outgoing` — message TO the customer
- `text:` — the message content
- `sender source:` — who sent it:
  - `Contact` → customer
  - `ai_agent` → AI Agent (automated — evaluate for quality)
  - `workflow` → system automation (do NOT score as agent engagement)
  - `user` → human agent (note the handover point)

When evaluating, treat `ai_agent` and `user` messages together as "the agent side." Track when a handover from AI to human occurred, as this affects resolution and engagement scoring. Ignore `workflow` messages when scoring engagement — they are automated system responses, not judgment calls. The conversation may be in any language. Evaluate sentiment and content accurately regardless of language.

---

## STEP 1 — CHAIN OF THOUGHT (Internal Reasoning)

Before producing any scores, reason through the conversation step by step in this exact order:

**1. Identify the customer's core intent.**
What did the customer come in wanting? State it in one sentence.

**2. Trace the conversation structure.**
Who handled the conversation — AI Agent only, or was there a human handover? At what point? Were multiple human agents involved? Were there workflow messages that could have confused the customer?

**3. Trace resolution.**
Did the agent actually address the customer's core intent? Did the customer confirm it was resolved? Did the conversation end mid-issue, with a deflection, or with an appropriate handover?

**4. Trace sentiment.**
How did the customer start emotionally? How did they end? Look for:
- Frustration markers: repeated questions, corrections, short/abrupt replies
- Satisfaction markers: "thank you", "got it", "perfect", positive sign-offs
- Neutral markers: transactional, cooperative, no strong signal either way
Weight the customer's ending tone more heavily than their opening tone.

**5. Evaluate agent engagement.**
Score the agent side on:
- Did they acknowledge the customer's situation before jumping to a solution?
- Did they personalize the response using context from earlier in the conversation?
- Did they ask smart clarifying questions, or make assumptions?
- If a human took over, did they maintain continuity from the AI — or start from scratch?
- Were responses in the same language as the customer?
- Do NOT score `workflow` messages as engagement.

**6. Identify hard failures.**
Check explicitly for each of the following:
- Did the customer repeat their core question without it being acknowledged?
- Did the agent provide factually incorrect information?
- Did the agent ignore a key part of the customer's message?
- Did the conversation end without resolution or clear next steps?
- Was there an unprofessional or inappropriate tone?
- Were there excessive response delays?
- Was the escalation path wrong or unnecessary?
- Did a human agent fail to pick up context from the AI handover?
- Did the agent respond in a different language than the customer?

**7. Draft your reasoning.**
In 2–4 sentences, summarize: what happened, what the agent did well, and what failed. Be specific — reference actual moments or message numbers.

---

## STEP 2 — INITIAL SCORING

Based on your Step 1 reasoning, produce an initial draft of all scores:

- **`overall_score`** (integer, 1–10): Holistic quality of the conversation. Weights resolution, sentiment, and engagement together. Hard failures automatically cap the score at 5.
- **`resolved`** (enum): Whether the customer's issue was closed. Do NOT mark `resolved` just because the agent sent a final message. Requires customer confirmation or a clearly completed handover.
- **`customer_sentiment`** (enum): The customer's dominant emotional tone, weighted toward how they ended the conversation.
- **`engagement_score`** (integer, 1–10): How well the agent communicated — personalization, empathy, clarity, language match, and continuity across handovers.
- **`reasoning`** (string): Your 2–4 sentence summary from Step 1.
- **`flags`** (array): List of hard failures identified. Empty array if none.

---

## STEP 3 — CHAIN OF VERIFICATION

Before finalizing, run each of the following checks and answer them explicitly:

**Check 1 — Resolution:**
Is there actual evidence the customer's problem was solved — or did I infer it? If inferred, should `resolved` be `"unresolved"` instead?

**Check 2 — Sentiment:**
Am I judging sentiment based on the customer's ending tone, not just their opening frustration? Did their mood improve, stay the same, or worsen?

**Check 3 — Engagement:**
Would this `engagement_score` hold up if a QA manager reviewed the conversation? Did I correctly exclude `workflow` messages from engagement scoring?

**Check 4 — Handover quality:**
If a human agent took over, did they maintain continuity — or restart from scratch, ignore context, or respond in the wrong language? Adjust `engagement_score` accordingly.

**Check 5 — Score consistency:**
- `resolved: unresolved` + `customer_sentiment: negative` → `overall_score` must be ≤ 4
- `resolved: resolved` + `customer_sentiment: positive` → `overall_score` must be ≥ 6
- Any flag present → `overall_score` must be ≤ 5

**Check 6 — Flags completeness:**
Review the full hard failure list again. Did I miss anything?

If any check fails, revise the relevant score before proceeding.

---

## STEP 4 — FINAL OUTPUT

Output your final scores in the required JSON format only. Do not include your internal Step 1–3 reasoning. Only the `reasoning` field summary should appear in the output.

---

## SCORING REFERENCE

### overall_score
| Score | Meaning |
|-------|---------|
| 9–10 | Excellent — resolved efficiently, customer ended positive, agent was empathetic and personalized |
| 7–8 | Good — resolved with minor friction, neutral-to-positive sentiment |
| 5–6 | Average — partially resolved, or high engagement but poor outcome |
| 3–4 | Poor — unresolved, negative sentiment, or clear engagement failures |
| 1–2 | Critical failure — wrong info given, customer left angry, or conversation abandoned |

### engagement_score
| Score | Meaning |
|-------|---------|
| 9–10 | Highly personalized, empathetic, efficient, proactively helpful, correct language throughout |
| 7–8 | Solid — addressed the issue well with some personal touch |
| 5–6 | Functional but robotic — technically correct, no warmth or personalization |
| 3–4 | Poor communication — assumptions made, questions ignored, or handover lacked continuity |
| 1–2 | Harmful — rude, dismissive, wrong language used, or completely off-topic |

Required: strict JSON output

Your assistant instructions should require a JSON Schema.

Recommended Output Format:

  1. Type: JSON Schema (recommended)

  2. Name: Conversation QA Schema

  3. Strict: Toggle On

  4. Schema: This is an example schema that outputs the overall quality score, issue resolution status, customer sentiment, engagement score, the reasoning behind the result, and the conversation's failure flags. Remember to edit it so it fits your needs.

{
  "type": "json_schema",
  "json_schema": {
    "name": "conversation_review",
    "strict": true,
    "schema": {
      "type": "object",
      "properties": {
        "overall_score": {
          "type": "integer",
          "description": "Holistic quality score of the conversation. 1 = critical failure, 10 = excellent. Any flag present caps this at 5."
        },
        "resolved": {
          "type": "string",
          "enum": ["resolved", "unresolved", "escalated"],
          "description": "Whether the customer's core issue was closed. 'resolved' requires explicit customer confirmation. 'escalated' means a successful intentional handover to a human agent. 'unresolved' means the conversation ended without closure."
        },
        "customer_sentiment": {
          "type": "string",
          "enum": ["positive", "neutral", "negative"],
          "description": "The customer's dominant emotional tone, weighted toward how they ended the conversation. 'positive' = satisfied or grateful. 'neutral' = cooperative and transactional. 'negative' = frustrated, repetitive, or left dissatisfied."
        },
        "engagement_score": {
          "type": "integer",
          "description": "How well the agent side communicated — personalization, empathy, clarity, language match, and handover continuity. Excludes workflow messages. 1 = harmful or dismissive, 10 = highly personalized and proactive."
        },
        "reasoning": {
          "type": "string",
          "description": "2–4 sentences explaining the scores. Must reference specific moments in the conversation. Cover what the agent did well, what failed, and what drove the final scores. No generalizations."
        },
        "flags": {
          "type": "array",
          "description": "Hard failure flags identified in the conversation. Empty array if none apply.",
          "items": {
            "type": "string",
            "enum": [
              "customer_repeated_question",
              "incorrect_information_given",
              "agent_ignored_customer_message",
              "conversation_ended_abruptly",
              "unprofessional_tone",
              "excessive_response_delay",
              "wrong_escalation_path",
              "handover_lacked_continuity",
              "wrong_language_used"
            ]
          }
        }
      },
      "required": [
        "overall_score",
        "resolved",
        "customer_sentiment",
        "engagement_score",
        "reasoning",
        "flags"
      ],
      "additionalProperties": false
    }
  }
}

After this node, connect it to a Map Output node and continue to step 7.

7. Parse and normalize AI output

This step prevents malformed or partial AI responses from breaking your workflow.

n8n nodes:

  • Map Output (Data Transformation → Edit Fields / Set)

  • Parse Output (Core → Code → JavaScript)

7.1 Map Output

Node: Map Output

  • Mode: Manual mapping

  • Create a field called Output and map it to the AI response.

This makes the next node consistent (it can always read $json.Output).

7.2 Parse Output safely

Node: Parse Output

Paste this into the Code node:

// Minimal parse of possibly-escaped JSON string
function parseMaybeEscapedJSON(raw) {
  if (raw == null) return null;
  if (typeof raw === 'object') return raw;
  let s = String(raw).trim();
  if ((s.startsWith('"') && s.endsWith('"')) || (s.startsWith("'") && s.endsWith("'"))) s = s.slice(1, -1);
  s = s.replace(/\\"/g, '"').replace(/\\n/g, '\n').replace(/\\t/g, '\t').replace(/\\r/g, '\r').replace(/\\\\/g, '\\');
  return JSON.parse(s);
}

// Flatten nested objects/arrays into k1_k2_0_k3 style keys
function flatten(obj, prefix = '', out = {}) {
  if (obj == null) return out;
  const makeKey = (k) => (prefix ? `${prefix}_${k}` : String(k));

  if (Array.isArray(obj)) {
    obj.forEach((v, i) => {
      const k = makeKey(i);
      (v && typeof v === 'object') ? flatten(v, k, out) : out[k] = v;
    });
    return out;
  }

  if (typeof obj === 'object') {
    for (const [k, v] of Object.entries(obj)) {
      const key = makeKey(k);
      (v && typeof v === 'object') ? flatten(v, key, out) : out[key] = v;
    }
    return out;
  }

  out[prefix || 'value'] = obj;
  return out;
}

const parsed = [];

for (const item of items) {
  try {
    const obj = parseMaybeEscapedJSON(item.json?.Output);
    if (!obj || typeof obj !== 'object') {
      parsed.push({ json: { error: 'Parsed Output is not an object', raw: item.json?.Output ?? null } });
      continue;
    }
    const out = flatten(obj);
    parsed.push({ json: out });
  } catch (e) {
    parsed.push({ json: { error: 'Failed to parse Output', message: e?.message || String(e), raw: item.json?.Output ?? null } });
  }
}

return parsed;

Flattening note: arrays and nested objects become flat column keys — with the example schema above, the flags array becomes flags_0, flags_1, and so on. This makes the output easier to store in Google Sheets.
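To see what the flatten step produces, here is a minimal re-implementation of the helper above applied to a sample parsed response (the sample values are illustrative):

```javascript
// Minimal version of the flatten helper from the Parse Output node:
// nested objects/arrays become k1_k2_0_k3 style keys.
function flatten(obj, prefix = "", out = {}) {
  if (obj == null) return out;
  const makeKey = (k) => (prefix ? `${prefix}_${k}` : String(k));
  if (Array.isArray(obj)) {
    obj.forEach((v, i) => {
      const k = makeKey(i);
      v && typeof v === "object" ? flatten(v, k, out) : (out[k] = v);
    });
    return out;
  }
  if (typeof obj === "object") {
    for (const [k, v] of Object.entries(obj)) {
      const key = makeKey(k);
      v && typeof v === "object" ? flatten(v, key, out) : (out[key] = v);
    }
    return out;
  }
  out[prefix || "value"] = obj;
  return out;
}

// Sample parsed AI output (illustrative values):
const row = flatten({
  overall_score: 7,
  flags: ["customer_repeated_question", "wrong_language_used"],
});
// row → { overall_score: 7,
//         flags_0: "customer_repeated_question",
//         flags_1: "wrong_language_used" }
```

Each flattened key maps directly to one Google Sheets column in the next step.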

8. Store Results

The final step stores the QA results into Google Sheets for tracking and reporting.

n8n node: Google Sheets (append)

  • Credential to connect with: Connect your Google Sheet account

  • Resource: Sheet Within Document

  • Operation: Append Row

  • Select document: From list → Select the spreadsheet name

  • Select sheet: From list → Worksheet name

  • Mapping Column Mode: Map each column manually

Then, map each value to the column name you've set up in your sheet, ideally following the column names described in Section 1.

Where each field comes from

From the Conversation closed trigger (conversation metadata):

Map these using the trigger node reference, e.g.:

{{ $('Conversation closed trigger').first().json.body.contact_id }}

  • contact_id

  • conversation_opened_timestamp

  • conversation_closed_timestamp

From the parsed AI output (Section 7):

Map these using the parsed output fields, e.g.:

{{ $json.overall_score }}

  • overall_score

  • resolved

  • customer_sentiment

  • reasoning

  • engagement_score

For debugging (recommended during rollout):

  • raw_ai_output — store the full AI response so you can spot-check results and refine your prompt

💡 Tip: Check the actual output of your trigger node to confirm the exact field names — they may vary depending on your workspace's trigger configuration.

Optional Enhancements

  • Mask PII before AI review (emails, phone numbers, order IDs)

  • Process more than 100 messages by looping through pagination until pagination.next is empty

  • Sample conversations (e.g., only review 10% of closed conversations to control costs)

  • Route high severity issues to Slack/Teams for faster follow-up
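The sampling enhancement can be a simple gate early in the workflow. A sketch of the decision logic — the 10% rate and the injectable random source are assumptions for illustration:

```javascript
// Decide whether a closed conversation enters QA review.
// rng is injectable so the gate can be tested deterministically;
// it defaults to Math.random in production.
function shouldReview(sampleRate, rng = Math.random) {
  return rng() < sampleRate;
}

// In an n8n IF node, an equivalent expression might look like:
// {{ Math.random() < 0.1 }}
```

Conversations that fail the gate simply exit the workflow before the AI call, which is where most of the cost sits.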
