Build an Agentic Chatbot that Books Travel and Orders Food: A Step-by-Step Tutorial


2026-02-24

Hands-on guide to build an agentic chatbot that books travel and orders food—LLM + action-executor + APIs, with OAuth, webhooks, retries, and safety checks.


If you’re a developer tired of juggling disparate APIs, OAuth pain, brittle webhooks, and unclear LLM outputs, this guide shows how to build a reliable agentic assistant in 2026 that books travel and orders food, with safety checks, retries, and real-world integrations.

What you’ll get

This hands-on tutorial walks through a production-ready architecture combining an LLM, a structured action-execution layer, and connectors to travel and food delivery APIs. You’ll learn practical patterns for:

  • Prompting an LLM to emit structured actions (tool/function calls).
  • Implementing an action-execution layer to validate, authorize, and execute API calls.
  • Handling OAuth, webhooks, retries, idempotency, and safety checks.
  • Testing in sandboxes and deploying for scale.

Why build agentic assistants now (2026)

By late 2025 and into 2026 the industry shifted from chat-only LLMs to agentic AI—models that call external tools and perform stateful workflows. Major vendors rolled out reliable function-calling, sandboxed tool interfaces, and better policies for safe automation. Alibaba’s Qwen and other players highlighted a new norm: assistants that take actions, not just suggest them. This guide reflects those trends and teaches you robust engineering patterns for production agents.

Prerequisites

  • Node.js or Python experience (examples use Node.js/TypeScript and plain JS).
  • Accounts for target APIs (use sandbox keys for Amadeus/Skyscanner for travel and DoorDash/Uber Eats/Deliveroo sandboxes for food delivery).
  • An LLM provider that supports structured outputs / function calling (2026 providers include major cloud LLMs and specialized inference APIs).
  • Familiarity with OAuth 2.0 (PKCE), webhooks, and basic infra (Kubernetes or serverless).

High-level architecture

Our system has three layers:

  1. LLM layer: Receives user messages, reasons with context, and emits structured actions (JSON function calls) or asks clarifying questions.
  2. Action-execution layer: Validates actions, authorizes the user, dispatches API calls, enforces safety checks, and manages retries/idempotency.
  3. Connector layer: Implements adapters to travel and food ordering APIs, plus webhook handlers for asynchronous confirmations.

Flow example

User: "Book me a flight SFO→JFK next Tuesday and order dinner at 7pm—sushi, budget $40."

System:

  1. The LLM emits actions: find_flight, book_flight, find_restaurant, place_order.
  2. The action-execution layer validates user scopes, runs the booking APIs, and returns confirmations.
  3. Webhooks deliver final receipts and the assistant updates the user.

Step 1 — Design a structured action protocol

Think of the action schema as a small RPC contract between the LLM and your executor. Use strict JSON schemas for each tool to avoid ambiguous natural language parsing.

// Example action schema (simplified)
{
  "action": "book_flight",
  "params": {
    "from": "SFO",
    "to": "JFK",
    "depart_date": "2026-02-10",
    "return_date": null,
    "passengers": 1,
    "class": "economy",
    "max_price": 45000 // cents
  }
}

Guidelines:

  • Keep schemas narrow and typed (dates in ISO, currencies in cents, IATA codes for airports).
  • Use explicit intent names: find_flight, book_flight, get_price, find_restaurant, place_order, cancel_order.
  • Return either a terminal result or a follow-up action request (e.g., ask_user_confirmation).
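The guidelines above can be enforced in code before anything reaches a connector. Below is a minimal hand-rolled sketch of schema validation for book_flight; in production you would use a real JSON Schema validator such as Ajv, and the specific field checks here are illustrative assumptions.

```javascript
// Minimal, illustrative schema check for the book_flight action.
const BOOK_FLIGHT_SCHEMA = {
  required: ['from', 'to', 'depart_date', 'passengers', 'max_price'],
  checks: {
    from: (v) => /^[A-Z]{3}$/.test(v),                 // IATA airport code
    to: (v) => /^[A-Z]{3}$/.test(v),
    depart_date: (v) => /^\d{4}-\d{2}-\d{2}$/.test(v), // ISO date
    passengers: (v) => Number.isInteger(v) && v > 0,
    max_price: (v) => Number.isInteger(v) && v > 0,    // cents
  },
};

function validateBookFlight(params) {
  for (const field of BOOK_FLIGHT_SCHEMA.required) {
    if (!(field in params)) return { ok: false, error: `missing ${field}` };
  }
  for (const [field, check] of Object.entries(BOOK_FLIGHT_SCHEMA.checks)) {
    if (field in params && !check(params[field])) {
      return { ok: false, error: `invalid ${field}` };
    }
  }
  return { ok: true };
}
```

A rejected action should never reach the connector layer; return the error to the LLM so it can ask the user a clarifying question instead.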

Step 2 — Prompt the LLM for safe function-calling

Instead of free-form responses, instruct the LLM to return only validated action JSON or a clarification request. Example prompt components:

  • System instruction with allowed actions (names and JSON schemas).
  • User message and relevant context (profiles, payment methods, travel preferences).
  • Policy guardrails: ask for explicit user confirmation for bookings over X, for cross-border transfers, etc.

// Simplified LLM prompt example (pseudocode)
SYSTEM: You are an assistant allowed to call the following actions: [book_flight, find_flight, find_restaurant, place_order]. For each call, return a single JSON object matching the schema. If uncertain, return {"action":"ask_clarification","params":{...}}.
USER: I want a flight from SFO to JFK next Tue and sushi for dinner at 7pm.
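Whatever the model returns should be treated as untrusted input. A hedged sketch of the parsing step, using the allowlist from the prompt above; parseLlmAction and its return shape are illustrative names, not a provider API:

```javascript
// Reject anything that is not a single, well-formed, allowed action.
const ALLOWED_ACTIONS = new Set([
  'book_flight', 'find_flight', 'find_restaurant', 'place_order',
  'ask_clarification',
]);

function parseLlmAction(rawText) {
  let parsed;
  try {
    parsed = JSON.parse(rawText);
  } catch {
    // Malformed JSON becomes a clarification request, never an execution.
    return { action: 'ask_clarification', params: { reason: 'malformed JSON' } };
  }
  if (typeof parsed !== 'object' || parsed === null ||
      !ALLOWED_ACTIONS.has(parsed.action)) {
    return { action: 'ask_clarification', params: { reason: 'unknown action' } };
  }
  return parsed;
}
```

The key design choice: parsing failures degrade to a safe no-op (ask_clarification) rather than throwing inside the chat loop.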

Step 3 — Build the action-execution layer

This layer verifies the LLM output, checks user consent, and executes the connector code. It handles transient failures and provides retries with exponential backoff and idempotency keys for safety.

Executor responsibilities

  • Validate action JSON against schema.
  • Check authorization & scopes (OAuth tokens, payment permissions).
  • Enforce guardrails (price caps, destination restrictions, GDPR checks).
  • Execute connector calls with retries and idempotency.
  • Emit events and persist state for async webhooks.

Node.js example: executeAction()

async function executeAction(action, user) {
  // 1. Validate
  validateSchema(action);

  // 2. Authorization
  if (!user.hasScope(action.action)) throw new Error('Forbidden');

  // 3. Idempotency key
  const idempotencyKey = makeIdempotencyKey(user.id, action);

  // 4. Dispatch with retries
  return await dispatchWithRetry(() => dispatchToConnector(action, user, idempotencyKey), 3);
}

async function dispatchWithRetry(fn, retries) {
  let attempt = 0;
  const base = 300; // ms
  while (true) {
    try {
      return await fn();
    } catch (err) {
      attempt++;
      if (attempt > retries || !isTransient(err)) throw err;
      // Exponential backoff plus jitter to avoid thundering herds
      await sleep(Math.pow(2, attempt) * base + jitter());
    }
  }
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const jitter = () => Math.floor(Math.random() * 100); // 0–99 ms of random spread

Notes: isTransient() should inspect HTTP 5xx, rate-limit headers, and network errors. Use a durable store (Redis or DB) to save idempotency keys and in-flight states.

Step 4 — Connect to travel booking APIs

Major travel data providers: Amadeus, Sabre, Travelport, and various airline APIs. In 2026, the ecosystem includes more standard GraphQL endpoints and richer sandbox modes—use these for development.

Key patterns for travel connectors

  • Use provider sandboxes for quoting and booking tests.
  • Normalize responses (prices in cents, leg objects, cancellation policies).
  • Keep booking state until the airline confirms—do not trust LLM or user-side assumptions.
  • For payments, prefer tokenized payment methods and Payment Initiation APIs (where supported) to avoid PCI scope.

// Pseudocode: bookFlight connector
async function bookFlight(params, user, idempotencyKey) {
  // 1. Quote check
  const quote = await amadeus.getQuote(params);
  if (quote.price > params.max_price) throw new Error('PriceExceeded');

  // 2. Confirm payment method exists
  const token = await getPaymentTokenForUser(user);

  // 3. Create booking in provider sandbox
  const booking = await amadeus.book({ ...params, token, idempotencyKey });

  // 4. Persist booking and return local confirmation id
  await saveBooking({ userId: user.id, booking, status: 'PENDING' });

  return { localId: bookingRef(booking), providerStatus: booking.status };
}

Step 5 — Connect to food delivery APIs

Food delivery SDKs (DoorDash, Uber Eats, Deliveroo) provide endpoints for searching restaurants, estimating fees, and placing orders. In 2026 many vendors support two-legged ordering flows and webhook confirmations.

Practical tips

  • Always run orders in the vendor sandbox while iterating.
  • Use idempotency to avoid duplicate orders if the user retries or the LLM issues duplicate actions.
  • Confirm user preferences and allergies before placing orders—LLMs may infer too much.

// Pseudocode: placeOrder connector
async function placeOrder(params, user, idempotencyKey) {
  // Validate address and payment
  if (!user.address) throw new Error('MissingAddress');

  const estimate = await foodApi.getEstimate({ restaurantId: params.restaurantId, items: params.items, address: user.address });
  if (estimate.total > params.max_price) throw new Error('BudgetExceeded');

  const order = await foodApi.createOrder({ items: params.items, address: user.address, paymentToken: user.paymentToken }, { idempotencyKey });

  await saveOrder({ userId: user.id, order, status: 'PENDING' });
  return { orderId: order.id, eta: order.eta };
}

Step 6 — OAuth and user consent

Agentic assistants act on behalf of users, so OAuth and user consent are essential.

Best practices (2026)

  • Prefer OAuth 2.1 + PKCE for client flows on mobile/web.
  • Request narrow scopes and show explicit consent screens for actions that spend money or access PII.
  • Store refresh tokens encrypted and rotate them regularly. Use hardware-backed keys (KMS/HSM) for encryption.
  • Implement a “re-authorize” flow for high-risk actions (international bookings, high-value orders) requiring a fresh authentication step.
// OAuth flow outline
1. User clicks "Connect DoorDash" → redirect to the provider's authorization page with PKCE.
2. Provider returns an authorization code → exchange it for an access_token + refresh_token.
3. Save the tokens encrypted, along with their granted scopes.
4. Use the refresh_token to obtain new access tokens before expiry.

Step 7 — Webhooks and asynchronous confirmations

Many bookings and orders are asynchronous. Use webhooks to update booking/order state and inform the user. Key considerations:

  • Validate webhook signatures using provider public keys.
  • Persist webhook events for audit and replayability.
  • Design idempotent webhook handlers (events may be delivered multiple times).
  • Notify users proactively via chat updates when status changes.

// Webhook handler sketch
app.post('/webhooks/food', verifySignature, async (req, res) => {
  const ev = req.body;
  await saveWebhookEvent(ev);
  await handleWebhookEvent(ev);
  res.status(200).send('OK');
});

async function handleWebhookEvent(ev) {
  const existing = await findOrderByProviderId(ev.orderId);
  if (!existing) return; // ignore unknown
  await updateOrderStatus(existing.id, ev.status);
  await notifyUser(existing.userId, `Your order is now ${ev.status}`);
}

Step 8 — Safety checks, guardrails, and human-in-the-loop

Agentic assistants carry risk. In 2026, robust production systems combine automated checks with human approvals for edge cases.

Safety checklist

  • Consent verification: Require explicit consent for spending money.
  • Rate limits & budget caps: Enforce per-user, per-action caps and require re-auth for exceptions.
  • Fraud detection: Check for anomalous destinations, rapid repeated bookings, or mismatched addresses.
  • PII minimization: Do not send more user data to the LLM than required. Use retrieval-augmented methods for contextual facts and keep secrets out of prompts.
  • Human escalation: For high-risk actions, send a verification message to a human operator or require a one-time code.
“Always plan for human-in-the-loop—automate the common cases, but escalate the risky ones.”
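Several of the checklist items can be combined into a single guardrail gate that the executor runs before dispatch. A sketch under assumed thresholds and user fields (budgetCapCents and userConfirmed are illustrative names):

```javascript
// Illustrative guardrail gate: budget cap first, then a consent check
// for anything above the confirmation threshold.
const CONFIRMATION_THRESHOLD_CENTS = 10000; // require explicit consent above $100

function checkGuardrails(action, user) {
  const price = action.params.max_price ?? 0;
  if (price > (user.budgetCapCents ?? Infinity)) {
    return { allowed: false, reason: 'budget_cap_exceeded' };
  }
  if (price > CONFIRMATION_THRESHOLD_CENTS && !action.userConfirmed) {
    return { allowed: false, reason: 'needs_user_confirmation' };
  }
  return { allowed: true };
}
```

Returning a reason code, rather than throwing, lets the assistant turn each refusal into the right follow-up: a consent card, a re-auth prompt, or a human escalation.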

Step 9 — Testing strategies

Test in four layers:

  1. Unit tests for schema validation and connectors (mock external APIs).
  2. Integration tests against provider sandboxes for booking flows.
  3. End-to-end tests with LLM mocks returning expected actions and misbehaviors (malformed JSON, unauthorized action).
  4. Chaos testing: simulate webhook delays, broken refresh tokens, and provider rate limits.

Sample test cases

  • LLM returns book_flight but with price higher than user's max — executor should abort and ask user.
  • Webhook duplicate deliveries — handlers remain idempotent.
  • Network outage during place_order — retry and, if failed, surface to user and human ops.

Step 10 — Observability, metrics, and deployment

Track these metrics:

  • Action success/fail rates by type.
  • Retry counts and transient failure rates.
  • Webhook latencies and re-delivery counts.
  • User-facing completion time (request → final confirmation).

Use structured logs, tracing (OpenTelemetry), and a dashboard for rapid incident response. Deploy connectors in containers or serverless functions, and consider autoscaling for peak booking windows.
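A minimal way to start tracking action success/fail rates is an in-process counter emitted as structured logs; in production you would report these through OpenTelemetry metrics instead. The names and log shape here are illustrative:

```javascript
// In-memory counters keyed by action name and outcome, emitted as JSON logs
// that a log pipeline can aggregate into the metrics listed above.
const counters = new Map();

function recordActionResult(actionName, ok) {
  const key = `${actionName}:${ok ? 'success' : 'failure'}`;
  counters.set(key, (counters.get(key) ?? 0) + 1);
  console.log(JSON.stringify({
    metric: 'action_result',
    action: actionName,
    ok,
    count: counters.get(key),
  }));
}
```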

Looking ahead

In 2026, expect:

  • Standardized tool APIs—a move toward common function schemas so multiple LLMs can use the same tools.
  • Wallet-based payments and payment token standards that reduce PCI scope for agents.
  • Multi-agent orchestration: specialized sub-agents for pricing, fraud, and UX, coordinated by a conductor agent.
  • On-device verification: stronger user biometrics for high-value approvals using FIDO/WebAuthn integrations.

Example end-to-end interaction (simplified)

  1. User: "Book my flight SFO->JFK, leave Tues, return Fri. Also order sushi for 7pm—$40 max."
  2. LLM emits: [{action: "find_flight", params:...}, {action:"find_restaurant", params:...}]
  3. Executor: runs find_flight → quotes found; prompts LLM if multiple options. User confirms via chat card.
  4. LLM: {action: "book_flight", params: chosen_option} → Executor: checks payment token, issues booking call with idempotencyKey, persists DB record.
  5. Executor: placeOrder for food, persists order. Both providers push webhooks—executor updates state and notifies user.
  6. Human ops alerted if fraud model flags the combo or price exceeds threshold.

Checklist before going to production

  • Use provider sandboxes and set up webhook verification.
  • Implement schema validation and idempotency for every action.
  • Encrypt tokens & use PKCE for OAuth flows.
  • Set budget limits and explicit confirmation UI for spending.
  • Instrument monitoring & tracing; run chaos tests.

Wrap-up: Key takeaways

  • Design explicit contracts between your LLM and action executor—JSON schemas reduce ambiguity.
  • Make the executor robust: retries, idempotency, and authorization checks save you from costly mistakes.
  • Leverage sandboxes and webhooks to handle asynchronous confirmations safely.
  • Enforce guardrails (consent, budget caps, fraud checks) and plan human escalation for risky flows.
  • Prepare to iterate: 2026 means quickly evolving provider APIs and better tooling—build modular connectors and swap providers easily.

Further reading and tools

  • Amadeus & Sabre developer sandboxes (travel booking APIs)
  • DoorDash / Uber Eats / Deliveroo developer docs (food delivery APIs)
  • OAuth 2.1 + PKCE best practices
  • OpenTelemetry for tracing distributed action flows
  • Security: PCI-DSS guidance for payment integrations

Final thoughts & call-to-action

Agentic AI is the new normal in 2026. Building a trustworthy agent that books travel and orders food requires careful engineering: strict action schemas, a hardened executor, OAuth and webhook handling, and layered safety checks. Start small—automate lookup/quote flows first, then add booking/ordering once you have robust idempotency, logging, and human escalation.

Ready to build? Clone our starter repo (search "thecoding.club agentic-assistant-starter"), try the connectors in sandbox mode, and join our developer community for code reviews and deployment templates. Share your project to get feedback—agentic assistants are evolving fast, and real-world testing is the best teacher.
