Building a Product Manual Chatbot with Amazon OpenSearch and Open-Source LLMs

This article guides you through building an intelligent chatbot that can answer questions based on your product manuals, leveraging the power of Amazon OpenSearch for semantic search and open-source Large Language Models (LLMs) for generating informative responses. This approach provides a cost-effective and customizable solution without relying on Amazon Bedrock.

The Challenge:

Navigating through lengthy product manuals can be time-consuming and frustrating for users. A chatbot that understands natural language queries and retrieves relevant information directly from these manuals can significantly improve user experience and support efficiency.

Our Solution: OpenSearch and Open-Source LLMs

This article demonstrates how to build such a chatbot using the following key components:

  1. Amazon OpenSearch Service: A scalable search and analytics service that we’ll use as a vector database to store document embeddings and perform semantic search.
  2. Hugging Face Transformers: A powerful Python library providing access to thousands of pre-trained language models, including those for generating text embeddings.
  3. Open-Source Large Language Model (LLM): We’ll outline how to integrate with an open-source LLM (running locally or via an API) to generate answers based on the retrieved information.
  4. FastAPI: A modern, high-performance web framework for building the chatbot API.
  5. AWS SDK for Python (Boto3): Used for interacting with Amazon S3 (where product manuals are stored) and for supplying the credentials that sign requests to OpenSearch.

Architecture:

The architecture consists of two main parts:

  1. Ingestion Pipeline:
  • Product manuals (in PDF format) are stored in an Amazon S3 bucket.
  • A Python script (ingestion_opensearch.py) extracts text content from these PDFs.
  • It uses a Hugging Face Transformer model to generate vector embeddings for the extracted text.
  • The text content, associated product name, and the generated embeddings are indexed into an Amazon OpenSearch cluster.
  2. Chatbot API:
  • A FastAPI application (chatbot_opensearch_api.py) exposes a /chat/ endpoint.
  • When a user sends a question (along with the product name), the API:
  • Uses the same Hugging Face Transformer model to generate an embedding for the user’s query.
  • Queries the Amazon OpenSearch index to find the most semantically similar document snippets for the given product.
  • Constructs a prompt containing the retrieved context and the user’s question.
  • Sends this prompt to an open-source LLM (you’ll need to integrate your chosen LLM here).
  • Returns the LLM’s generated answer to the user.

Step-by-Step Implementation:

1. Prerequisites:

  • AWS Account: You need an active AWS account.
  • Amazon OpenSearch Cluster: Set up an Amazon OpenSearch domain.
  • Amazon S3 Bucket: Create an S3 bucket and upload your product manuals (in PDF format) into it.
  • Python Environment: Ensure you have Python 3.8 or later installed (recent FastAPI releases require it), along with pip.
  • Install Necessary Libraries:
    Bash
    pip install fastapi uvicorn boto3 opensearch-py requests-aws4auth transformers torch PyPDF2 # Or your preferred PDF library

2. Ingestion Script (ingestion_opensearch.py):

The full script is not reproduced here; a minimal sketch follows the key points below.

Key points in the ingestion script:

  • OpenSearch Client Initialization: Configured to connect to your OpenSearch domain. Remember to replace the placeholder endpoint.
  • Hugging Face Model Loading: Loads a pre-trained sentence transformer model for generating embeddings.
  • OpenSearch Index Creation: Creates an index with a knn_vector field to store embeddings. The dimension of the vector field is determined by the chosen embedding model.
  • PDF Text Extraction: You need to implement the actual PDF parsing logic using a library like PyPDF2 or pdfminer.six within the ingest_pdfs_from_s3 function; the sketch below uses PyPDF2.
  • Embedding Generation: Uses the Hugging Face model to create embeddings for the extracted text.
  • Indexing into OpenSearch: Stores the product name, content, and embedding in the OpenSearch index.
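
A minimal sketch of ingestion_opensearch.py is shown below. It assumes a placeholder OpenSearch endpoint, region, and S3 bucket, and the sentence-transformers/all-MiniLM-L6-v2 embedding model (384-dimensional vectors); adapt these names, and the chunking strategy, to your own setup.

Python

import io

import boto3
import torch
from opensearchpy import OpenSearch, RequestsHttpConnection
from PyPDF2 import PdfReader
from requests_aws4auth import AWS4Auth
from transformers import AutoModel, AutoTokenizer

# Placeholder values -- replace with your own endpoint, region, and bucket.
OPENSEARCH_ENDPOINT = "your-domain.us-east-1.es.amazonaws.com"
REGION = "us-east-1"
BUCKET = "your-manuals-bucket"
INDEX_NAME = "product-manuals"
EMBEDDING_MODEL = "sentence-transformers/all-MiniLM-L6-v2"  # 384 dimensions

# Sign OpenSearch requests with the current AWS session's credentials.
credentials = boto3.Session().get_credentials()
auth = AWS4Auth(credentials.access_key, credentials.secret_key, REGION, "es",
                session_token=credentials.token)
client = OpenSearch(hosts=[{"host": OPENSEARCH_ENDPOINT, "port": 443}],
                    http_auth=auth, use_ssl=True, verify_certs=True,
                    connection_class=RequestsHttpConnection)

tokenizer = AutoTokenizer.from_pretrained(EMBEDDING_MODEL)
model = AutoModel.from_pretrained(EMBEDDING_MODEL)

def embed(text: str) -> list:
    """Simple mean-pooling of the last hidden state into one embedding vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state.mean(dim=1).squeeze().tolist()

def create_index():
    """Create an index with a knn_vector field sized to the embedding model."""
    if not client.indices.exists(index=INDEX_NAME):
        client.indices.create(index=INDEX_NAME, body={
            "settings": {"index": {"knn": True}},
            "mappings": {"properties": {
                "product_name": {"type": "keyword"},
                "content": {"type": "text"},
                "embedding": {"type": "knn_vector", "dimension": 384},
            }},
        })

def ingest_pdfs_from_s3():
    """Extract text from each PDF in the bucket and index it with its embedding.

    A production version should split each manual into smaller chunks;
    embedding a whole manual truncates everything past 512 tokens.
    """
    s3 = boto3.client("s3")
    for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        reader = PdfReader(io.BytesIO(body))
        text = "\n".join(page.extract_text() or "" for page in reader.pages)
        client.index(index=INDEX_NAME, body={
            "product_name": obj["Key"].rsplit(".", 1)[0],
            "content": text,
            "embedding": embed(text),
        })

if __name__ == "__main__":
    create_index()
    ingest_pdfs_from_s3()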

3. Chatbot API (chatbot_opensearch_api.py):

Key points in the API script (a minimal sketch follows this list):

  • OpenSearch Client Initialization: Configured to connect to your OpenSearch domain. Remember to replace the placeholder endpoint.
  • Hugging Face Model Loading: Loads the same embedding model as the ingestion script for generating query embeddings.
  • search_opensearch Function:
  • Generates an embedding for the user’s question.
  • Constructs an OpenSearch query that combines keyword matching (on product name and content) with a k-nearest neighbors (KNN) search on the embeddings to find semantically similar documents.
  • generate_answer Function: This is a placeholder. You need to integrate your chosen open-source LLM here. This could involve:
  • Running an LLM locally using Hugging Face Transformers (requires significant computational resources).
  • Using an API for an open-source LLM hosted elsewhere.
  • API Endpoint (/chat/): Retrieves relevant context from OpenSearch and then uses the generate_answer function to respond to the user’s query.
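
The sketch below illustrates the search and endpoint logic. It reuses the client, INDEX_NAME, and embed() defined in the ingestion sketch; the exact KNN query shape can vary across OpenSearch versions, so treat this as a starting point rather than the definitive query.

Python

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    product_name: str
    user_question: str

def search_opensearch(product_name: str, question: str, k: int = 3) -> list:
    """Filter by product name, then rank by KNN similarity to the query embedding."""
    query_embedding = embed(question)  # same embed() as in the ingestion script
    response = client.search(index=INDEX_NAME, body={
        "size": k,
        "query": {"bool": {
            "filter": [{"term": {"product_name": product_name}}],
            "must": [{"knn": {"embedding": {"vector": query_embedding, "k": k}}}],
        }},
    })
    return [hit["_source"]["content"] for hit in response["hits"]["hits"]]

@app.post("/chat/")
def chat(request: ChatRequest):
    snippets = search_opensearch(request.product_name, request.user_question)
    context = "\n\n".join(snippets)
    prompt = (f"Answer the question using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {request.user_question}")
    return {"answer": generate_answer(prompt)}  # generate_answer is your LLM hook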

4. Running the Application:

  1. Run the Ingestion Script: Execute python ingestion_opensearch.py to process your product manuals and index them into OpenSearch.
  2. Run the Chatbot API: Start the API server with Uvicorn:
    Bash
    uvicorn chatbot_opensearch_api:app --reload
    The API will be accessible at http://localhost:8000.

5. Interacting with the Chatbot API:

You can send POST requests to the /chat/ endpoint with the product_name and user_question in the JSON body. For example, using curl:
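Bash

curl -X POST "http://localhost:8000/chat/" \
  -H "Content-Type: application/json" \
  -d '{"product_name": "example-product", "user_question": "How do I reset the device?"}'

(The product name and question here are illustrative; use a product_name that matches a value you indexed.)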


Integrating an Open-Source LLM (Placeholder):

The most crucial part to customize is the generate_answer function in chatbot_opensearch_api.py. Here are some potential approaches:

  • Hugging Face Transformers for Local LLM:
    Python
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    llm_model_name = "google/flan-t5-large" # Example open-source LLM (a seq2seq model)
    llm_tokenizer = AutoTokenizer.from_pretrained(llm_model_name)
    llm_model = AutoModelForSeq2SeqLM.from_pretrained(llm_model_name)

    def generate_answer(prompt):
        inputs = llm_tokenizer(prompt, return_tensors="pt")
        outputs = llm_model.generate(**inputs, max_length=500)
        return llm_tokenizer.decode(outputs[0], skip_special_tokens=True)

    Note: Running large LLMs locally can be very demanding on your hardware (CPU/GPU, RAM).
  • API for Hosted Open-Source LLMs: Explore services that provide APIs for open-source LLMs. You would make HTTP requests to their endpoints within the generate_answer function, as sketched below.
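
As a sketch of this second approach, the function below posts the prompt to a hypothetical hosted endpoint using the requests library; the URL, authentication header, payload fields, and response schema are all placeholders to adapt to your provider's actual API.

Python

import requests

# Hypothetical endpoint and key -- replace with your provider's real values.
LLM_API_URL = "https://your-llm-host.example.com/v1/generate"
LLM_API_KEY = "your-api-key"

def generate_answer(prompt: str) -> str:
    response = requests.post(
        LLM_API_URL,
        headers={"Authorization": f"Bearer {LLM_API_KEY}"},
        json={"prompt": prompt, "max_tokens": 500},  # payload shape varies by provider
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["text"]  # adjust to your provider's response schema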

Conclusion:

Building a product manual chatbot with Amazon OpenSearch and open-source LLMs offers a powerful and flexible alternative to managed AI services. By leveraging OpenSearch for efficient semantic search and integrating with the growing ecosystem of open-source LLMs, you can create an intelligent and cost-effective solution to enhance user support and accessibility to your product documentation. Remember to carefully choose and integrate an LLM that meets your performance and resource constraints.
