
Autonomous agents vs. collection scripts: what actually changes

Scripts follow the script. Agents respond to the debtor. The difference in recovery rate is not marginal.

AI Team · Dyvit
3 Feb 2026

Most "automated collection solutions" on the market are scripts disguised as AI. They follow a predefined decision tree: if the debtor says X, reply Y. If they ask for more time, offer option A or B. If they don't respond, wait 3 days and try again.
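The decision-tree logic described above can be sketched in a few lines. This is an illustrative toy, not any vendor's actual implementation; the intent labels and replies are hypothetical:

```python
# A scripted collection bot is essentially a lookup table: each classified
# debtor reply maps to one canned response, and anything unrecognized
# falls out of the tree entirely.
SCRIPT = {
    "confirms_debt": "Great. Here is your payment link: {link}",
    "asks_for_time": "We can offer option A (7 days) or option B (14 days).",
    "no_response":   "retry_in_3_days",
}

def scripted_reply(classified_intent: str) -> str:
    # Off-script input has no branch, so the only move left is escalation.
    return SCRIPT.get(classified_intent, "escalate_to_human")
```

Any message the classifier cannot map to a known branch (`"changed_bank"`, `"disputes_amount"`, and so on) returns `"escalate_to_human"`, which is exactly the freeze-or-escalate behavior described above.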

Scripts are predictable. Debtors learn to ignore them. And when the conversation goes off-script (the debtor mentions unemployment, disputes the debt, or asks for a different number of installments), the system freezes or escalates to a human.

An autonomous agent works differently. It doesn't follow a flow. It reads context, interprets intent, and adapts its response. The difference sounds subtle in theory. In practice, it's the difference between a 14% and 34% recovery rate.

What scripts do well (and where they stop)

Scripts are efficient for simple, predictable cases: the debtor confirms the debt, accepts the first offer, and pays. This profile accounts for roughly 20-25% of any portfolio. For these debtors, a script works just as well as an agent.

The problem is the rest of the portfolio. Debtors who question the amount, who want installment plans in formats the script didn't anticipate, who mention specific contexts (lost their job last week, have another debt with the same company, dispute an incorrect charge). For these cases, the script has no adequate response. The conversation abandonment rate is high.

The debtor who abandons the conversation hasn't disappeared. They remain delinquent with a deteriorated relationship.

Direct comparison: same portfolio, different approaches

Metric                                     | Automated script | Dyvit Agent
Response rate to 1st message               | 31%              | 52%
Conversion rate (deal closed)              | 14%              | 34%
Escalation rate to human                   | 38%              | 8%
Average time to settlement                 | 4.2 days         | 1.8 days
Debtor satisfaction (post-settlement NPS)  | -12              | +31

Data based on comparative pilot portfolios, Jan-Feb 2026. Segment: personal credit, 30-90 day delinquency.

A concrete example: the debtor who goes off-script

Consider this real scenario (anonymized): a debtor owes R$1,800, 45 days overdue. The script sends a standard message. The debtor replies: "I know I owe it, but I switched banks and no longer have that account. My new Pix key is my CPF."

Script: doesn't recognize the information, sends a Pix link to the previous key (which no longer works), waits 48 hours with no response, escalates to a human. The human calls 3 days later. The debtor has already forgotten the conversation.

Agent: recognizes the information ("understood, I'll generate a new link using your CPF key"), validates the key in DICT in real time, generates a new link with a 24-hour expiration, confirms delivery. The debtor pays the same day.
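The agent's side of that exchange can be sketched as a single conversation turn. The function names below (`dict_lookup`, `create_pix_charge`) are hypothetical stand-ins for the creditor's real DICT and payment integrations, and the validation rule is a simplification:

```python
from datetime import datetime, timedelta

def dict_lookup(key: str) -> bool:
    """Stand-in for a real DICT query: here we only check that the
    key is CPF-shaped (11 digits). A real lookup hits the registry."""
    return key.isdigit() and len(key) == 11

def create_pix_charge(key: str, amount: float, ttl_hours: int) -> dict:
    # Stand-in for the payment provider's charge-creation call.
    return {
        "key": key,
        "amount": amount,
        "expires_at": datetime.now() + timedelta(hours=ttl_hours),
    }

def handle_new_key(debtor_key: str, amount: float):
    # Validate the key the debtor just provided, then issue a fresh
    # link with a 24-hour expiration, all in the same conversation turn.
    if not dict_lookup(debtor_key):
        return "ask_debtor_to_confirm_key"
    return create_pix_charge(debtor_key, amount, ttl_hours=24)
```

The point of the sketch: validation and link generation happen inside the reply, so the debtor never waits for a human to re-enter the loop.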

The difference isn't in the algorithm. It's in the ability to interpret context and act on it without breaking the conversational flow.

What defines a real agent

It's not the size of the LLM. It's the ability to execute real-world actions within the conversation: verify data in real time, generate documents, confirm transactions, record agreements in the creditor's system. A chatbot that only talks is not an agent. It's an FAQ with personality.

The components that distinguish a functional collection agent from a sophisticated script:

  • 01
    Conversation memory
    The agent remembers what was said 10 messages ago. If the debtor mentioned they get paid on the 5th, the agent proposes a due date of the 6th, without the debtor needing to repeat themselves.
  • 02
    Intent interpretation, not keyword matching
    "I'm broke right now" could mean refusal, a request for more time, or an opening for installments depending on context. The agent interprets the correct signal and responds accordingly.
  • 03
    Real-time action execution
    Generate a Pix link, validate a key in DICT, record an agreement in the ERP, create a support ticket: all within the same conversation, without interruption.
  • 04
    Awareness of its own limits
    The agent knows when a case is beyond scope, such as debt disputes, requests for formal documentation, or threats of legal action. In these cases, it escalates to a human with full conversation context.
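The four capabilities above can be compressed into one conversation-turn loop. This is a schematic sketch under stated assumptions: `interpret_intent` stands in for a real intent classifier, and `tools` for a registry of executable actions; both names are hypothetical:

```python
# Capabilities, keyed to the list above:
# 01 memory, 02 intent interpretation, 03 action execution, 04 scope limits.
OUT_OF_SCOPE = {"debt_dispute", "formal_docs_request", "legal_threat"}

def agent_turn(message: str, memory: list, interpret_intent, tools: dict) -> dict:
    memory.append({"role": "debtor", "text": message})   # 01: remember everything
    intent = interpret_intent(message, memory)           # 02: context, not keywords
    if intent in OUT_OF_SCOPE:                           # 04: know your limits
        return {"action": "escalate", "context": memory}
    if intent in tools:                                  # 03: act, don't just talk
        return {"action": "execute", "result": tools[intent](memory)}
    return {"action": "reply", "intent": intent}
```

Note that escalation carries the full `memory`, so the human picks up with complete conversation context rather than a cold ticket.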

The question isn't "AI or human." It's "which type of AI." A script with an LLM wrapper is still a script. An agent that interprets context, executes actions, and learns from every conversation is infrastructure. The difference shows up in the recovery rate at the end of the month.


See the difference live

In the demo, we show the agent handling off-script cases: the kind a regular chatbot gives up on.

Book a demo