
Implementing Fine-Tuned AI with JSON Context

  Shuja ur Rahman     2024-10-26


I recently set up a chatbot on my portfolio website (shujaurrahman.com) to make it more interactive and useful for visitors. This chatbot allows users to ask questions about my projects, work, and background. I implemented this with a customized API hosted on Vercel, leveraging the Mixtral AI model from Hugging Face to generate relevant responses based on my website’s content. In this post, I’ll go through each part of the implementation, from setting up the backend to handling frontend chat interactions.

Project Overview

The chatbot has two main parts:

  1. AI API (Hosted on Vercel) - This is the backend that talks to Hugging Face's Mixtral model, supplying it at request time with context drawn from my portfolio.
  2. Frontend Chat Interface - A simple, interactive frontend that sends user questions to the AI API and displays responses.

Part 1: Building the AI API with Mixtral and Vercel

Step 1: Setting Up the API Server

The API server is built using Express.js. Here’s a high-level overview:

  1. It’s hosted on Vercel, which allows the server to be globally accessible.
  2. It connects to the Mixtral model on Hugging Face via an API key.
  3. It loads contextual data from a JSON file (combined.json) containing key information about me, my projects, and background to ensure responses are relevant to questions visitors might ask.

The system prompt steers the Mixtral model toward concise responses, tailored to be friendly yet informative about the topics on my site. The API also manages greetings to make the chatbot feel interactive and user-friendly.
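The exact schema of combined.json isn't shown here, but conceptually it is just a plain JSON document holding the facts the model should know. A hypothetical shape (field names and values are illustrative assumptions, not the actual file) might be:

```json
{
  "about": "Shuja ur Rahman is a web developer who builds interactive sites.",
  "projects": [
    { "name": "Portfolio Chatbot", "stack": ["Express.js", "Mixtral", "Vercel"] }
  ],
  "contact": "shujaurrahman.com"
}
```

Because the file is read as a raw string and appended to the prompt, any structure that serializes compactly works.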

Code Walkthrough

Here’s the code that powers the AI API:

// Import required modules
import express from 'express';
import cors from 'cors';
import bodyParser from 'body-parser';
import { readFileSync, existsSync } from 'fs';
import path, { dirname } from 'path';
import fetch from 'node-fetch';
import { fileURLToPath } from 'url';

// Get the directory of the current module
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const app = express();
const PORT = process.env.PORT || 3000;

// Enable CORS and JSON parsing
app.use(cors());
app.use(bodyParser.json());

// Define the path to combined.json
const jsonPath = path.join(__dirname, 'combined.json');

// Load JSON data from the file for contextual AI responses
let json_data = 'No context available.'; // Fallback if the file is missing or unreadable
if (existsSync(jsonPath)) {
    try {
        json_data = readFileSync(jsonPath, 'utf-8');
    } catch (error) {
        console.error(`Error reading combined.json: ${error.message}`);
    }
} else {
    console.error(`combined.json file not found at ${jsonPath}`);
}

// Hugging Face API configuration (read the key from the environment; never commit it to source)
const api_key = process.env.HF_API_KEY;
const api_url = 'https://api-inference.huggingface.co/v1/chat/completions';

// Test endpoint
app.get('/', (req, res) => {
    res.send('AI API is running successfully.');
});

// Chat endpoint that sends user message to Hugging Face API
app.get('/api/chat', async (req, res) => {
    const user_message = req.query.message?.toLowerCase().trim();

    if (!user_message) {
        return res.status(400).json({ message: "Error: 'message' query parameter is required." });
    }

    const data = {
        model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
        messages: [
            { role: "system", content: "Provide responses about Shujaur Rahman’s portfolio. Use context effectively and stay concise." },
            { role: "user", content: `${user_message}\n\nContext: ${json_data}` }
        ]
    };

    try {
        const response = await fetch(api_url, {
            method: 'POST',
            headers: {
                'Authorization': `Bearer ${api_key}`,
                'Content-Type': 'application/json'
            },
            body: JSON.stringify(data)
        });

        const result = await response.json();

        if (response.ok && result.choices && result.choices.length > 0) {
            const aiResponse = result.choices[0].message.content.trim();
            return res.json({ message: aiResponse });
        } else {
            return res.status(500).json({ message: "Error generating response." });
        }
    } catch (error) {
        console.error('Error:', error);
        return res.status(500).json({ message: `Error: ${error.message}` });
    }
});

// Start the server
app.listen(PORT, () => {
    console.log(`Server is running on port ${PORT}`);
});

This setup does the following:

  • Loads contextual data from combined.json, so each response generated by the AI can refer to details about me or my work.
  • Sets up an endpoint, /api/chat, that takes a user message and sends it to Hugging Face’s API.
  • Returns the AI’s response back to the frontend, allowing for a seamless chat experience.

Part 2: Building the Frontend Chat Interface

The frontend is a simple chat interface that users can interact with directly from the website. Here’s the code for the HTML and JavaScript portions that handle sending and displaying messages.

Frontend Code

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Ask AI</title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <div class="wrapper">
    <section class="chat-area">
      <header>
        <span>Ask AI about Shujaurrahman</span>
      </header>
      <div class="chat-box">
        <div class="text">No messages are available. Start a conversation below.</div>
      </div>
      <form action="#" class="typing-area">
        <input type="text" name="message" class="input-field" placeholder="Type a message here...">
        <button><i class="fab fa-telegram-plane"></i></button>
      </form>
    </section>
  </div>
  <script src="script.js"></script>
</body>
</html>

JavaScript for Frontend Functionality

The JavaScript code manages chat interactions. Here’s how it works:

  1. It takes user input and sends it to the API endpoint.
  2. When the API responds, it displays the response in the chat window.
  3. It saves chat history in localStorage so that users don’t lose their conversation history when they reload the page.

const chatBox = document.querySelector(".chat-box");
const form = document.querySelector(".typing-area");
const inputField = form.querySelector(".input-field");

window.onload = () => {
    const savedChats = localStorage.getItem('chatHistory');
    if (savedChats) {
        chatBox.innerHTML = savedChats;
    }
};

form.onsubmit = (e) => {
    e.preventDefault();
    let userMessage = inputField.value.trim();

    if (userMessage) {
        appendMessage('User', userMessage, 'user');

        let xhr = new XMLHttpRequest();
        xhr.open("GET", `https://ai-api-vert.vercel.app/api/chat?message=${encodeURIComponent(userMessage)}`, true);
        xhr.onload = () => {
            if (xhr.status === 200) {
                const aiResponse = JSON.parse(xhr.responseText).message;
                appendMessage('AI', aiResponse, 'ai');
            } else {
                appendMessage('AI', 'Error: Unable to fetch response', 'ai');
            }
        };
        xhr.send();
        inputField.value = "";
    }
};

function appendMessage(sender, message, className) {
    let messageHTML = `<div class="message ${className}"><strong>${sender}:</strong> ${message}</div>`;
    chatBox.innerHTML += messageHTML;
    localStorage.setItem('chatHistory', chatBox.innerHTML);
    chatBox.scrollTop = chatBox.scrollHeight;
}
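One caveat with `appendMessage`: interpolating raw strings into `innerHTML` means any message containing HTML would be rendered as markup. A small escaping helper (an addition of mine, not part of the original script) can neutralize that before insertion:

```javascript
// Replace HTML-significant characters so user and AI text renders as plain text.
function escapeHTML(str) {
    return str.replace(/[&<>"']/g, (ch) => ({
        '&': '&amp;',
        '<': '&lt;',
        '>': '&gt;',
        '"': '&quot;',
        "'": '&#39;'
    }[ch]));
}
```

Calling `escapeHTML(message)` (and `escapeHTML(sender)`) inside `appendMessage` before building `messageHTML` keeps the chat box safe from injected markup.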

How It Works Together

When a user sends a message:

  1. Frontend captures the message, displays it in the chat interface, and sends a GET request to the Vercel-hosted API.
  2. Backend (API) receives this message, forwards it to Hugging Face’s Mixtral model with the contextual data, and then returns the AI-generated response.
  3. Frontend receives the AI’s response, displays it in the chat, and saves the conversation in localStorage.
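The GET request in step 1 boils down to building a URL with the message safely encoded. As a small illustration (`buildChatURL` is a helper name I'm introducing, not part of the original code):

```javascript
// Construct the chat endpoint URL, percent-encoding the user's message
// so spaces and special characters survive the query string.
function buildChatURL(baseURL, message) {
    return `${baseURL}/api/chat?message=${encodeURIComponent(message)}`;
}
```

The frontend's `xhr.open("GET", ...)` call is doing exactly this inline with the deployed Vercel base URL.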

Conclusion

With this project, I built a fully functional, AI-driven chatbot customized for my portfolio using Mixtral and Hugging Face. This setup allows users to interact with my site dynamically, asking questions and exploring my work, all through a conversational interface. Hosting on Vercel ensures quick API responses, while localStorage gives a smooth, continuous chat experience for users across sessions.


