
How I Built a Live Chat System with Directus and OpenAI


Mike Krell


I wanted visitors to my portfolio site to be able to ask questions without filling out a form or sending an email. But I also didn't want to be glued to my computer waiting for messages. So I built a live chat system that uses AI to handle most conversations, with me jumping in when needed.

The result? A chat widget that feels instant, remembers conversations, and lets me manage everything from Directus. Visitors get quick answers, and I get to focus on actual work.

What It Does

The chat widget sits in the bottom-right corner of the site. When someone clicks it:

  • They get an instant welcome message
  • They can ask questions about my services, projects, or anything else
  • The AI responds using context from my site
  • If they hit the message limit (3 by default), they get my email
  • If I jump in to reply, the limit disappears — unlimited messages
  • Everything syncs in real-time, even if they close and reopen the chat

On my end, I have an admin dashboard where I can see all conversations, reply to visitors, and tweak the AI's behavior — all without touching code.

The Fun Parts

Directus as the backend — I already use Directus for my CMS, so why not use it for chat too? Conversations, messages, and settings all live in Directus collections. I can manage prompts, adjust AI settings, and see all conversations right in the admin panel. No separate database needed.

OpenAI integration that actually works — The system pulls prompts from Directus, builds context dynamically, and handles errors gracefully. If OpenAI goes down, visitors get a helpful message instead of a broken chat.

Real-time without WebSockets — I went with polling (checking for new messages every 2 seconds) instead of WebSockets. It's simpler, works everywhere, and honestly feels just as fast. The front-end tracks message IDs to prevent duplicates, so the same message never renders twice.

Smart message limits — Visitors get 3 messages by default. But the moment I reply as admin, the limit disappears. It's a nice way to say "I'm here, let's talk" without being explicit about it.

Link previews are magic — When the AI mentions a blog post, the front-end automatically fetches the post image and title, then shows a nice preview card. It makes the responses feel more polished.

Conversations expire — After 30 minutes of inactivity, conversations auto-close. Keeps things tidy and prevents old chats from cluttering the admin view.

The Stack

  • SvelteKit — Front-end framework and API routes
  • Directus — Backend CMS and database
  • OpenAI API — GPT-4o-mini for responses (configurable)
  • Svelte Stores — State management for chat UI
  • Polling — Simple real-time updates (no WebSockets needed)

How It Works

The Collections

I set up four Directus collections:

chat_settings — A singleton that holds all the configuration. Enable/disable chat, set message limits, choose the OpenAI model, write custom prompts, and configure the welcome message. Everything's editable in Directus without deploying code.

conversations — Each chat session gets a conversation record. It tracks the visitor ID (stored in their browser's localStorage), what page they started on, their browser info, and when it expires.

messages — Every message lives here. User messages, AI responses, system messages, and admin replies. Each one knows if it's from AI, which model generated it, and links back to its conversation.

ai_prompts — Reusable prompt templates. I can create different prompts for different scenarios, then select which one to use in the chat settings. The prompts support markdown and can include example conversations.
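To make the shapes concrete, here's roughly what records in those collections might look like. The field names are my guesses at a plausible schema, not the exact one:

```javascript
// Illustrative record shapes for the four collections.
// Field names are assumptions, not the real Directus schema.
const chatSettings = {
  enabled: true,
  message_limit: 3,
  openai_model: "gpt-4o-mini",
  welcome_message: "Hi! Ask me anything about my work.",
  active_prompt: 1, // relation to ai_prompts
};

const conversation = {
  id: 42,
  visitor_id: "v_k3f9a2", // persisted in the visitor's localStorage
  started_on: "/blog/live-chat",
  user_agent: "Mozilla/5.0",
  expires_at: "2025-12-01T15:00:00Z",
};

const message = {
  id: 310,
  conversation: 42, // relation back to conversations
  role: "assistant", // "user" | "assistant" | "system" | "admin"
  is_ai: true,
  model: "gpt-4o-mini",
  content: "Sure, here is what I offer.",
};

const aiPrompt = {
  id: 1,
  name: "Default site prompt",
  body: "You are the assistant for this portfolio site. Answer briefly.",
};
```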

The Front-End Widget

(Screenshot: the chat widget open on the visitor side)

The chat widget is a Svelte component that handles everything on the visitor side:

  • Generates and stores a visitor ID in localStorage
  • Checks if chat is online (polls every 30 seconds)
  • Shows a notification dot when new messages arrive while closed
  • Displays message countdown ("2 messages remaining")
  • Subscribes to new messages via polling
  • Renders markdown in responses (links, bold, italic)
  • Fetches previews for blog post links automatically

The state lives in Svelte stores, so everything updates reactively. When a new message comes in, it just appears. No page refresh needed.
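The visitor ID logic is simple enough to sketch. Here's a minimal version with the storage injected so it runs outside the browser too; in the widget it would be `window.localStorage`, and the key name is an assumption:

```javascript
// Sketch: persistent visitor ID. `storage` is anything with
// getItem/setItem (in the browser, window.localStorage).
function getVisitorId(storage) {
  let id = storage.getItem("chat_visitor_id");
  if (!id) {
    // Real code might prefer crypto.randomUUID(); this keeps it minimal.
    id = "v_" + Math.random().toString(36).slice(2) + Date.now().toString(36);
    storage.setItem("chat_visitor_id", id);
  }
  return id;
}
```

Because the ID lives in localStorage rather than a cookie or session, it survives page reloads and return visits for free.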

The Backend API

The main chat API has three actions:

start — When someone opens the chat, it checks for an existing conversation. If they have one, it loads it. If not, it creates a new one and sends the welcome message. Simple.

send — When a visitor sends a message, it saves it to Directus, checks the message limit, updates the conversation expiration, and then asks OpenAI for a response. The AI gets the full conversation history plus any dynamic context from Directus Flows.

settings — Just returns whether chat is enabled and the message limit. The front-end uses this to show/hide the widget.
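A rough sketch of how those three actions might dispatch, with `db` standing in for the Directus calls. Every name here is an assumption about shape, not the actual code:

```javascript
// Sketch of the three chat actions. `db` stands in for Directus
// calls; all names and shapes are assumptions.
async function handleChat(action, payload, db) {
  switch (action) {
    case "settings": {
      const s = await db.getSettings();
      return { enabled: s.enabled, messageLimit: s.message_limit };
    }
    case "start": {
      const existing = await db.findConversation(payload.visitorId);
      if (existing) return { conversation: existing };
      const conv = await db.createConversation(payload.visitorId);
      const s = await db.getSettings();
      await db.addMessage(conv.id, "system", s.welcome_message);
      return { conversation: conv };
    }
    case "send": {
      await db.addMessage(payload.conversationId, "user", payload.text);
      // Limit check, expiry refresh, and the OpenAI call would go here.
      return { ok: true };
    }
    default:
      throw new Error("unknown action: " + action);
  }
}
```

In SvelteKit this would live in a `+server.js` endpoint, with the action pulled from the request body.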

AI Response Generation

Here's where it gets interesting. The system builds the AI prompt in layers:

  1. Base prompt — Fetched from the ai_prompts collection if one is selected
  2. Dynamic context — Can trigger a Directus Flow to get fresh site data (current projects, recent posts, etc.)
  3. Additional instructions — Custom text from chat settings gets appended

Then it sends everything to OpenAI with the conversation history. The response gets saved to Directus and sent back to the front-end.

If OpenAI fails for any reason, it saves an error message and tells the visitor to email me. The chat keeps working, just without AI responses.
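The layering and the failure path can both be sketched as small functions. The fallback wording and the parameter names are assumptions:

```javascript
// Sketch of the layered system prompt. Each layer is optional;
// the names mirror the description above, not the real code.
function buildSystemPrompt({ basePrompt, flowContext, extraInstructions }) {
  return [basePrompt, flowContext, extraInstructions]
    .filter(Boolean)
    .join("\n\n");
}

// Fallback shown when the OpenAI call fails (assumed wording).
const AI_ERROR_REPLY =
  "Sorry, I can't answer right now. Email me and I'll get back to you.";

// `openaiCall` is whatever function actually hits the API.
async function generateReply(openaiCall, prompt, history) {
  try {
    return await openaiCall(prompt, history);
  } catch {
    return AI_ERROR_REPLY; // chat keeps working without AI
  }
}
```

Because the layers are just joined strings, any of them can be blank without special-casing: an empty Flow result simply drops out of the prompt.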

The Admin Dashboard

(Screenshot: the admin chat dashboard)

I built a simple admin interface at /admin/chat that shows:

  • A sidebar with all conversations (sorted by newest)
  • The full message thread when you select one
  • Visitor context (what page they're on, their browser)
  • A reply box that removes message limits when I respond

Everything updates in real-time via polling. New conversations just appear. New messages show up automatically. It's like having a live feed of everyone chatting on the site.

Real-Time Updates

I went with polling instead of WebSockets because:

  • It's simpler — no WebSocket server needed
  • It works everywhere — no connection issues
  • It's reliable — if a request fails, the next one works
  • It feels fast — 2 seconds is plenty for chat

The system tracks message IDs to prevent duplicates. When you send a message, it syncs the IDs immediately so the polling doesn't add it twice. It's a simple solution that has held up well.
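The dedup logic boils down to a tiny merge function. A minimal sketch, with message shapes assumed:

```javascript
// Sketch: merge polled messages into the current list, skipping
// IDs we've already rendered.
function mergeMessages(current, polled) {
  const seen = new Set(current.map((m) => m.id));
  const fresh = polled.filter((m) => !seen.has(m.id));
  // Return the same array when nothing changed so Svelte's
  // reactivity doesn't re-render needlessly.
  return fresh.length ? [...current, ...fresh] : current;
}
```

The "sync immediately on send" trick is just adding the new message's ID to `current` as soon as the server confirms it, before the next poll runs.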

The Details That Matter

Visitor tracking — Each visitor gets a unique ID stored in localStorage. It persists across sessions, so if they come back later, their conversation history is still there.

Conversation expiration — Every time someone sends a message (or AI responds, or I reply), the expiration timer resets to 30 minutes. Keeps old conversations from piling up.

Message limits — Default is 3 messages. But the moment I reply as admin, the limit disappears. The UI shows "Unlimited messages - Admin is responding" so visitors know I'm there.
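That rule is easy to express as a pure function. A sketch, assuming each message record carries a `role` field:

```javascript
// Sketch: remaining-message count. Any admin reply lifts the limit.
function remainingMessages(messages, limit = 3) {
  if (messages.some((m) => m.role === "admin")) return Infinity;
  const used = messages.filter((m) => m.role === "user").length;
  return Math.max(0, limit - used);
}
```

The UI can then branch on `Infinity` to show the "Unlimited messages - Admin is responding" label instead of a countdown.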

Link previews — When the AI mentions a blog post, the front-end extracts the link, fetches the post metadata, and shows a nice preview card with the image and title. Makes responses feel more polished.

Markdown support — The AI can use markdown in responses. Links, bold text, italics — it all renders properly. Visitors can click links, and internal blog post links get those fancy preview cards.
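For the preview cards, the widget first has to spot internal links in the markdown reply. A minimal sketch, assuming blog posts live under a `/blog/` path:

```javascript
// Sketch: pull internal blog links out of a markdown reply so the
// widget can fetch preview metadata for each one.
function extractBlogLinks(markdown) {
  const links = [];
  const re = /\[[^\]]*\]\((\/blog\/[^)\s]+)\)/g;
  let m;
  while ((m = re.exec(markdown)) !== null) links.push(m[1]);
  return [...new Set(links)]; // each post previewed once
}
```

Each extracted path can then be resolved against the Directus posts collection to get the image and title for the card.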

Directus Flow Integration

This is the cool part. I can set up a Directus Flow that gets triggered when generating AI responses. The flow can query any data — current projects, recent blog posts, service descriptions — and return it as context.

The context gets appended to the system prompt, so the AI always has fresh information about the site. No hardcoding, no redeploying. Just update the flow in Directus.

What I'd Do Differently

If I were to rebuild this, I'd consider:

  • WebSockets for true real-time (though polling works fine)
  • Email notifications when new conversations start
  • Analytics on conversation topics
  • File uploads in chat
  • Typing indicators

But honestly? The current setup works great. It's simple, reliable, and easy to maintain. Sometimes the simple solution is the right one.

Try It

If you're on my site, you'll see the chat widget in the bottom-right corner. Give it a click and ask me anything. The AI will try to help, and if it can't, I'll jump in.

Interested in adding something like this to your site? I can build a custom chat system that fits your needs. Directus handles the heavy lifting, OpenAI provides the intelligence, and we can make it feel instant on any stack.

-Mike

Built with: Directus, OpenAI API, SvelteKit, JavaScript
