A vision by Joan Sanz

The Hotels
Have No Brain.
Yet.

Every PMS, booking engine and channel manager running today was built for humans to operate. None of them have an agent inside. None of them can think, decide, or act on their own. That gap is the greatest infrastructure opportunity in the history of hospitality.

0: platforms with native agentic AI today (technically possible now)
2025: the year the spec layer starts
4 yrs: to full industry adoption
The gap

What hospitality tech
can't do today

HiJiffy, Asksuite, Mews, Opera Cloud, SiteMinder — all excellent at storing data and executing commands. None can reason, prioritize, or act autonomously. Powerful cars with no driver.

Today — The Reality

Platforms Without Agency

Current hospitality tech is reactive. It responds to commands, stores data, triggers pre-programmed rules. But it cannot read context, make judgment calls, or initiate actions without a human pressing a button first.

PMS: stores reservations, won't act on them
Booking engine: accepts bookings, won't negotiate
Channel manager: distributes rates, won't optimize
Chatbots: answer FAQs, won't resolve problems
CRM: tracks guests, won't recognize them proactively
Tomorrow — The Possibility

Platforms With a Brain

An agentic layer changes everything. The same platforms, with an AI agent reading their data and acting on rules defined in a .md spec file, become autonomous operators. The PMS doesn't just store arrivals — it dispatches check-in messages.

PMS + agent: detects arrival, triggers welcome sequence
Booking engine + agent: suggests upgrades at right moment
Channel manager + agent: updates rates at 80% occupancy
CRM + agent: recognizes loyal guests, activates benefits
RMS + agent: autonomously manages overbooking scenarios
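What does such a spec actually contain? A minimal illustrative sketch follows. The 80% occupancy trigger, the Platinum upgrade, and the overbooking escalation mirror examples used throughout this document; everything else is an invented placeholder, not a real file from the repository.

```markdown
# pricing-agent.md (illustrative example)

## Role
You manage room rates for a 134-room independent hotel.

## Rules
- If occupancy for a date exceeds 80%, propose a rate increase of 25-40%.
- Never change a published rate by more than 20% without human approval.
- Platinum loyalty members always receive a guaranteed suite upgrade.

## Escalation
- Rate proposals above the approval threshold go to the Revenue Manager.
- Overbooking scenarios are always escalated; never resolve them autonomously.
```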
The architecture

How a .md file
becomes action

The Markdown file is not the agent. It is the agent's brain. The LLM is the reasoning engine. The API connections are the hands. Put them together and you have something that can read a situation, decide what to do, and do it — without a human in the loop.

01
The Brain
checkin-agent.md
complaints-agent.md
pricing-agent.md
loyalty-agent.md
Defines rules, tone, escalation logic, compensation limits, brand values
02
The Reasoning Engine
Claude (Anthropic)
GPT-4o (OpenAI)
Gemini (Google)
Llama 3 (Meta)
Reads the .md, understands context, decides what to do next. Model-agnostic — you choose.
03
The Tools / Hands
Function calling / Tool use
MCP (Model Context Protocol)
REST API connectors
Webhook triggers
Execution layer. How the agent reaches into real systems and acts.
04
Hotel Tech Stack
Opera Cloud / Mews / Cloudbeds
SiteMinder / BookingSuite
HiJiffy / Asksuite
WhatsApp Business API
Existing platforms. They don't change — the agent layer sits on top of them.
// The Key Insight

The .md file only needs to be written once. When the hotel changes PMS, the .md doesn't change — only the API connector does. When a better LLM is released, the hotel switches models — the .md doesn't change. The specification is the stable, portable, version-controlled core. Everything else is infrastructure.
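That portability argument can be made concrete in a few lines. This is a sketch with invented class names, not a real SDK: the spec is the stable core, while the model and the PMS connector are swappable adapters.

```python
# Sketch of the portability claim. All class names are illustrative.

class MewsConnector:
    """Stand-in for a Mews API adapter; only this changes on a PMS switch."""
    def get_arrivals(self): ...

class OperaConnector:
    """Stand-in for an Opera Cloud API adapter."""
    def get_arrivals(self): ...

class Agent:
    def __init__(self, spec: str, model: str, pms):
        self.spec = spec      # stable: survives vendor and model changes
        self.model = model    # swappable: pick the best LLM of the day
        self.pms = pms        # swappable: only the connector tracks the PMS

# Stand-in for the contents of pricing-agent.md:
spec = "## Rules\n- At 80%+ occupancy, propose a 25-40% rate increase."

# Switching PMS or model never touches the spec:
a = Agent(spec, model="claude-opus-4", pms=MewsConnector())
b = Agent(spec, model="gpt-4o", pms=OperaConnector())
assert a.spec == b.spec  # same brain, different infrastructure
```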

Adoption curve

How the industry
will get there

Technology adoption in hospitality follows a predictable path. By the time Marriott has an agentic PMS, independent hotels have had it for two years. Here is the sequence.

Now — 2025
The Spec Layer
Hotels copy .md files into Claude or GPT-4 as system prompts. Manual, but the intelligence is real. Works today.
Independent hotels
Boutiques
Hospitality schools
2025–2026
Platform Bridge
HiJiffy, Asksuite, Quicktext add LLM APIs. These specs become their source of truth for hospitality intelligence.
HiJiffy
Asksuite
Quicktext
2026–2027
PMS Goes Agentic
Mews, Opera, Cloudbeds build AI layers. The .md spec becomes the hotel config file defining what their AI can and cannot do.
Mews
Cloudbeds
Opera Cloud
2027+
Full Autonomy
The agent has credentials for every system. Reads arrivals, adjusts rates, dispatches housekeeping — all from a .md config.
All categories
Global chains
// Recommendation for hotels TODAY

Don't wait for your PMS vendor to build an AI layer. The spec file is ready now. Start with one agent — check-in or complaints — run it as a Claude system prompt via WhatsApp. Measure KPIs for 4 weeks. You will be ahead of every competitor using the same PMS, because your intelligence is in the spec file, not the platform.
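A sketch of what "run it as a system prompt" looks like in practice: the spec file becomes the `system` field of a Messages-API-style request. The payload shape follows Anthropic's Messages API, but treat the model id and field details as assumptions to verify against current documentation; the actual HTTP call and the WhatsApp bridge are omitted.

```python
from pathlib import Path

def build_request(spec_path: str, guest_message: str) -> dict:
    """Assemble a Messages-API-style payload in which the .md spec
    is the system prompt. Field names follow Anthropic's Messages API
    shape; verify them against current docs before sending."""
    path = Path(spec_path)
    spec = path.read_text(encoding="utf-8") if path.exists() else (
        "# checkin-agent.md\n- Greet the guest by name; confirm arrival time."
    )  # fallback stub so the sketch runs without the repository present
    return {
        "model": "claude-sonnet-4",   # illustrative model id; use what you run
        "max_tokens": 1024,
        "system": spec,               # the spec IS the intelligence
        "messages": [{"role": "user", "content": guest_message}],
    }

payload = build_request("specs/checkin-agent.md",
                        "Hi, we land at 23:40. Is late check-in possible?")
```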

// HiJiffy, Asksuite or .md directly?

If you need a managed solution, HiJiffy or Asksuite are good entry points. Load them with specs from this repository. If you have technical capacity, run .md files directly with Claude API — lower cost, higher control. The spec is the intelligence either way.

Universal compatibility

Will these .md files work
with any AI?

Yes. And this is by design. Markdown is the universal language of LLM context. Every major model reads it the same way. The spec file outlasts any platform, any model, any vendor.

Claude (Anthropic)
Exceptional spec adherence. Best for guest-facing agents where tone consistency and knowing when NOT to act are critical. Claude Projects enable cross-session memory of hotel context.
Best: guest agents, training simulators, complex escalation
GPT-4o (OpenAI)
Superior tool-use and function calling. Better when agents need multiple sequential API calls in one turn. Broader existing ecosystem of third-party integrations.
Best: ops agents, booking flows, multi-system orchestration
Gemini (Google)
Strong multimodal capabilities — useful for processing housekeeping QC photos. Native Google Workspace integration for hotels using Gmail and Sheets operationally.
Best: image QC, Google Workspace-integrated hotels
Llama 3 (Self-hosted)
The privacy argument. Some luxury hotels need complete data sovereignty. Llama 3 on-premises means no guest data leaves the hotel's infrastructure. Same .md spec, zero data sharing.
Best: data-sensitive, on-premises deployments
HiJiffy / Asksuite
These platforms are LLM wrappers with hospitality UIs. When they upgrade their underlying model, your .md spec travels with them. Load this repository's specs into their system prompt field.
Strategy: load these specs into their system prompt field
Future Models (2026+)
Whatever model is best in 2027 will also read Markdown. The .md spec you write today is directly compatible with models that don't exist yet. Markdown is the lingua franca of LLMs.
Guaranteed compatibility: Markdown outlasts any model provider
The agentic loop

From reading to doing:
the full agentic cycle

An agent that only reads and responds is a chatbot. A true agent perceives the environment, decides, and acts. Here is how that loop works in a hotel using real tools that exist today.

01
Perceive — Read the data
The agent has read access to the PMS via API. It sees tomorrow's 47 arrivals, 3 Platinum loyalty members, 84% occupancy. Both the pricing and loyalty agents receive this context before any interaction begins.
Tool: PMS API → get_arrivals(), get_occupancy(), get_loyalty_tier()
02
Reason — Apply the spec
The LLM reads the .md spec: "At 80%+ occupancy, rate logic applies. Platinum members get guaranteed suite upgrade. VIP detail must be pre-arranged with housekeeping 24h in advance." Agent knows what to initiate.
Input: [occupancy=84%, platinum=3] + pricing-agent.md + loyalty-agent.md
03
Decide — Choose the action
The agent categorizes actions by impact. Routine actions execute automatically. High-impact decisions — rate changes above threshold, compensation — are proposed for human approval.
Decision tree: impact_level() → auto_execute() or request_approval()
04
Act — Execute in real systems
With credentials, the agent sends WhatsApp messages via API, creates PMS housekeeping tasks, sends rate proposals to the revenue manager. All logged, auditable, reversible.
Tools: whatsapp_send(), pms_create_task(), email_send()
05
Learn — Update the spec
Outcomes measured against KPIs. Rate proposal approved, RevPAR +12%. 3 VIP guests gave 5-star reviews. The spec earns a minor update. Gets smarter with every cycle.
Output: kpi_log() → spec_improvement_suggestion() → human_review()
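The five steps above can be condensed into a runnable sketch with stubbed tools. Everything here (the 20% approval threshold, tool names, action types) is illustrative, mirroring the step descriptions rather than any real PMS API.

```python
# Stubbed perception tools standing in for real PMS API calls.
def get_occupancy(): return 0.84
def get_arrivals(): return {"total": 47, "platinum": 3}

def impact_level(action: dict) -> str:
    """Routine actions auto-execute; rate changes above 20% need approval."""
    if action["type"] == "rate_change" and action["pct"] > 0.20:
        return "approval"
    return "auto"

def run_cycle(spec_rules: dict) -> dict:
    # 1. PERCEIVE: read the environment through the PMS API
    occ, arrivals = get_occupancy(), get_arrivals()
    # 2. REASON: apply the .md spec rules to what was perceived
    actions = []
    if occ > spec_rules["rate_trigger_occupancy"]:
        actions.append({"type": "rate_change", "pct": 0.3125})
    if arrivals["platinum"]:
        actions.append({"type": "hk_vip_detail", "guests": arrivals["platinum"]})
    # 3. DECIDE / 4. ACT: route by impact, execute or queue for approval
    log = {"executed": [], "pending_approval": []}
    for a in actions:
        bucket = "executed" if impact_level(a) == "auto" else "pending_approval"
        log[bucket].append(a)
    # 5. LEARN: the log feeds KPI review and spec improvement (not shown)
    return log

log = run_cycle({"rate_trigger_occupancy": 0.80})
# The 31.25% rate change waits for a human; the VIP detail runs automatically.
```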
// Daily agent run — 06:00
# Agent: pricing-agent + loyalty-agent
# Hotel: Hotel Example | 2026-03-15 06:00
# Model: claude-opus-4

PERCEIVE
get_occupancy(date='2026-03-16')    → 84% occupied (112/134 rooms)
get_arrivals(date='2026-03-16')     → 47 arrivals | 3 Platinum | 6 Gold
get_current_rate(room_type='suite') → €320/night (base rate)

REASON
spec rule: "occupancy >80% → rate +25-40%"
spec rule: "Platinum → suite upgrade guaranteed"
spec rule: "VIP detail: housekeeping 24h prior"

DECIDE
[AUTO]     trigger_hk_detail(guests=3, room='suite')
[AUTO]     schedule_prearrival_wa(guests=47, t='-48h')
[APPROVAL] rate_proposal(suite: €320→€420, +31.25%)

ACT
✓ HK tasks created (3)
✓ WhatsApp queue scheduled (47)
⏳ Rate proposal → Revenue Manager

LOG
actions: 3 | approvals_pending: 1
next_run: 14:00 (post check-in)
The big question

Can an AI agent actually
log in and do things?

Technically: Yes. Today.
The .md spec + credentials + LLM = a fully autonomous hotel operator.
This is not science fiction. With Claude's current tool-use capabilities, with MCP, and with API access to any PMS that exposes a REST API (and most do), an agent can today: read reservation data, create tasks, send WhatsApp messages, adjust rates, respond to guests, and update channel managers — all triggered by logic defined in a .md specification file. The agent has credentials not to the UI, but to the API. Same data, accessed programmatically.
// Important: Human oversight required for high-impact actions
Rate changes above 20%, overbooking resolution, compensation above defined thresholds — these always wait for human approval. The agent proposes. The human decides. The system executes. The .md specs in this repository include escalation rules and approval thresholds precisely because autonomy without governance is dangerous.
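The propose/approve/execute separation described above can be expressed as a simple guard. The 20% rate threshold comes from the text; the compensation limit and all names are invented placeholders.

```python
# Illustrative approval thresholds; only the 20% rate figure is from the text.
APPROVAL_RULES = {
    "rate_change_pct": 0.20,     # above this, a human must sign off
    "compensation_eur": 100.0,   # invented placeholder threshold
}

def requires_approval(action: str, value: float) -> bool:
    limit = APPROVAL_RULES.get(action)
    return limit is not None and value > limit

def submit(action: str, value: float, approved_by=None) -> dict:
    """The agent proposes; the human decides; the system executes."""
    if requires_approval(action, value) and approved_by is None:
        return {"status": "pending", "action": action}    # agent proposes
    return {"status": "executed", "action": action}       # safe, or approved

assert submit("rate_change_pct", 0.31)["status"] == "pending"
assert submit("rate_change_pct", 0.31, approved_by="revenue_manager")["status"] == "executed"
assert submit("rate_change_pct", 0.10)["status"] == "executed"
```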
API
API Keys (Read/Write)
Every major PMS (Mews, Opera, Cloudbeds, Apaleo) has a REST API. The agent stores OAuth tokens — never passwords. Scope-limited: the pricing agent only gets rate management access. Principle of least privilege.
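A least-privilege sketch of that scoping, with invented scope names (no real PMS API uses exactly these): each agent's token grants only the endpoints that agent needs.

```python
# Illustrative scope grants per agent; names are invented, not a real PMS API.
SCOPES = {
    "pricing-agent": {"rates:read", "rates:write"},
    "checkin-agent": {"reservations:read", "messaging:send"},
}

def call_api(agent: str, required_scope: str) -> str:
    """Refuse any call outside the agent's granted scopes."""
    if required_scope not in SCOPES.get(agent, set()):
        raise PermissionError(f"{agent} lacks scope {required_scope}")
    return "ok"  # a real connector would make the scoped HTTP call here

assert call_api("pricing-agent", "rates:write") == "ok"
try:
    call_api("pricing-agent", "messaging:send")
except PermissionError:
    pass  # the pricing agent cannot message guests, by design
```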
MCP
Model Context Protocol
Anthropic's MCP gives Claude access to external systems as structured tools. A hotel builds one MCP server exposing its PMS, CRM and channel manager. The agent calls those tools — structured, auditable API calls. Architecture of the 2026 generation of hotel AI.
The Training Loop
The agent generates execution logs. Every action, outcome and escalation is recorded. A Claude Code workflow analyzes those logs and suggests .md spec improvements. The spec gets smarter with every deployment cycle.
// The meta vision
"When every PMS, booking engine and channel manager reads from the same spec format — when any LLM can consume a .md file and act as a trained hospitality professional — the repository that defines those specs becomes the intelligence layer that the entire industry runs on."
Joan Sanz, Hospitality AI Agents Repository
The meta vision

The .md repository as
the intelligence API
of the industry

Hotel Operations: Guest experience · Revenue · Ops · Training
AI Agents Layer: Claude · GPT-4o · Gemini · Llama · Future models
Hospitality AI Agents Repository: The .md spec standard · by Joan Sanz · Open Source
PMS · Booking Engine · Channel Manager · CRM · WhatsApp API · Payment Gateway: Existing hospitality infrastructure, unchanged
For tech companies

When Mews, HiJiffy or any platform wants to add AI, they don't start from scratch. They integrate with this spec standard. The repository is the open-source foundation they build commercial products on top of.

For hotels

Your AI investment is portable. The intelligence lives in your .md files, not in any vendor's platform. If you switch PMS, your agent intelligence comes with you. You own your AI, because you own the spec.

For hospitality schools

The curriculum of every hospitality school in 2027 will include designing and deploying AI agents. The schools that teach this now — using this repository — will graduate the most employable class in the industry.

// Open source · free forever
The spec is ready.
The models exist.
What are you waiting for?

Download the repository. Run your first agent this week. The hotels that start now will be two years ahead when the whole industry catches up.