How limiting AI agents' capabilities could hit your bottom line

AI agents no longer just advise travellers: they can research, compare and complete bookings end‑to‑end

That evolution forces travel brands to rethink a long-standing assumption baked into many online journeys: that "real customers" look human, and automated traffic should be blocked.

Conversational agents can handle end-to-end bookings when given the opportunity. Until now, most online services have taken care to verify that a user is human, asking them to identify buses or traffic lights in photos before letting them proceed. With AI agents, that boundary blurs: some automated traffic now corresponds to real customers, and blocking it would be counterproductive. A Security Boulevard article describes this shift and its corollary: the need to distinguish legitimate automation from malicious automation.

Conversational AI can be seen as the missing link between inspiration and booking: it can reduce the friction that still surrounds the purchase of complex products such as travel packages. In a travel journey fraught with constraints such as fluctuating availability, dynamic pricing and multiple bundled services, delegating to an agent brings speed and clarity. According to a DataDome report cited in the article, 38% of consumers have already used AI in their purchases, a figure that illustrates the growing role of agents in purchasing decisions and a shift away from sole reliance on search engines. The ultimate automation, however, is booking itself, and it requires real-time assessment of the identity and intent behind a request, even when it originates from a bot, in order to mitigate the risks of this newfound power.

Good and bad automation

The article highlights three concrete risks for the travel industry. First, agent takeover and account fraud (unauthorised bookings, exfiltration of personal data, siphoning of loyalty points), which are all the harder to detect because human behavioural cues fade when the buyer automates. Second, automated purchasing that enables stock hoarding and price manipulation of certain products. Third, abuse of loyalty programmes through credential stuffing and the resale of points.

The proposed solution shifts from a binary 'I'm not a robot' filter to intent-based trust management: verifying identity and behaviour in real time in order to distinguish legitimate AI agents acting on behalf of a traveller, malicious automation (scraping, hoarding, fraud) and compromised or spoofed agents. Simple user-agent allowlisting is already showing its limits; DataDome research cited in the article finds that 80% of agents do not properly identify themselves, which makes behavioural analysis necessary.
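To make the contrast with allowlisting concrete, the classification described above can be sketched in a few lines. This is a minimal illustration, not DataDome's actual method: the request fields, the signature check and the behavioural thresholds are all hypothetical assumptions chosen to show the three-way split between legitimate, spoofed and malicious automation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BookingRequest:
    user_agent: str
    agent_id: Optional[str]       # declared agent identity, if the caller presents one (hypothetical field)
    signature_valid: bool         # did the declared identity verify cryptographically? (hypothetical check)
    requests_per_minute: int      # observed request rate for this session
    holds_unpaid_inventory: bool  # behavioural signal for hoarding

def classify(req: BookingRequest) -> str:
    """Toy intent-based classification; thresholds are illustrative only."""
    # A declared identity that verifies is trusted automation acting for a traveller.
    if req.agent_id and req.signature_valid:
        return "legitimate-agent"
    # A declared identity that fails verification suggests a spoofed or compromised agent.
    if req.agent_id and not req.signature_valid:
        return "spoofed-agent"
    # Undeclared automation is judged on behaviour: scraping-rate or hoarding patterns.
    if req.requests_per_minute > 120 or req.holds_unpaid_inventory:
        return "malicious-automation"
    # Everything else falls through to further behavioural analysis.
    return "needs-review"
```

The point of the sketch is the decision order: identity first, behaviour second. A pure user-agent allowlist would stop at the first field and, per the 80% figure above, mislabel most agent traffic.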

The Security Boulevard article explains that success in agent-based commerce will not come from systematic blocking or indiscriminate access. The key will be the ability to create trusted access, allowing legitimate automation while blocking harmful automation in order to preserve customer experience, margins and partner relationships. This is a strategic challenge that players in the travel industry must address now.