From Text to Query: Build an AI Agent That Talks to Databases

From Plain Text to Database Queries
Building Text-to-SQL AI Agents
The Challenge: Unlocking Database Potential
Databases are powerful, but accessing them often requires knowing complex SQL. What if you could just ask questions in plain English? This infographic explores the creation of a Text-to-SQL AI agent, a system that translates natural language into database queries, making data accessible to everyone.
The Building Blocks of a Text-to-SQL Agent
The Brain: LLM
Large Language Models (like Mistral Large) with inherent SQL knowledge form the agent's core intelligence.
The Framework: LangGraph
A ReAct (Reason + Act) agent architecture is built using LangGraph, allowing for complex, multi-step reasoning.
The Interface: Next.js
A full-stack application provides the user-facing chat interface, built with Next.js, TypeScript, and Tailwind CSS.
The Database: SQLite
An in-memory SQLite database holds the data. The agent is given the database schema to understand its structure.
How the Agent Thinks: The ReAct Workflow
The agent follows a "Reason and Act" (ReAct) cycle. It thinks about the problem, chooses a tool, acts, observes the result, and repeats until the goal is achieved. This flow chart illustrates a typical database query process.
"Which customer placed the most orders?"
"I need to query the database. I'll use the GetFromDB tool."
Generates SQL query: `SELECT...`
Runs SQL query on the database.
"The customer who placed the most orders is..."
Tech Stack Composition
A breakdown of the key technologies and their role in the application, from the user-facing frontend to the AI-powered backend.
From Simple Jokes to Complex Joins
No Tool Needed
"Tell me a joke about SQL."
The LLM answers directly without needing to query the database.
Simple Database Query
"How many customers do I have?"
The agent uses the `GetFromDB` tool to run a `SELECT COUNT(*)...` query.
Complex Database Query
"Which customer placed the most orders?"
The agent formulates a complex SQL query with a `JOIN` to link the customer and order tables.
A Note on Security
A critical consideration for real-world deployment is implementing security "guardrails." It's essential to prevent the AI from having unlimited control over the database, ensuring data integrity and preventing malicious queries.
Text-to-SQL Agent: FAQ
1. What is the core functionality of the Text-to-SQL agent described in the source?
The Text-to-SQL agent allows users to interact with a database using natural language queries instead of writing SQL code directly. It leverages large language models (LLMs) trained on code, including SQL, to translate natural language questions into executable SQL queries, execute those queries against a database, and then return the results in a human-readable format.
2. What key technologies and frameworks are used to build this Text-to-SQL agent?
The Text-to-SQL agent is built using several key technologies:
- LangGraph: Used to build the core ReAct agent, which handles the reasoning and action execution.
- Next.js: Provides the frontend application framework for the user interface.
- watsonx.ai: Supplies the large language models (LLMs), such as Mistral Large, for natural language processing and SQL generation.
- SQLite 3: An in-memory database used for storing and querying data in this demonstration.
- LangChain: A framework on which LangGraph is based, providing tools and components for working with LLMs, including message handling and tool definitions.
- TypeScript: Used for type-safe development.
- Tailwind CSS: Used for styling the frontend application.
3. How does the Text-to-SQL agent interact with the database?
The agent interacts with the database through a custom tool called `GetFromDB`. This tool is defined within the LangChain framework and acts as an interface between the LLM and the database. When the LLM determines that a database query is needed to answer a user's question, it calls the `GetFromDB` tool, providing a generated SQL query as input. The tool then executes this SQL query against the SQLite database and returns the results to the LLM, which processes them into a natural language response for the user. The database schema (the customer and order tables) is also provided to the LLM as part of the tool definition, helping it generate accurate SQL queries.
4. What is a ReAct agent and how is it used in this context?
A ReAct agent is an AI agent that combines "Reasoning" and "Acting" to solve problems. In the context of this Text-to-SQL agent, the ReAct agent, built with LangGraph, enables the LLM to:
- Reason: Understand the user's natural language query, identify the need for database interaction, and formulate a plan (e.g., generate a SQL query).
- Act: Utilize available tools, specifically the `GetFromDB` tool, to execute the formulated SQL query against the database.
- Observe: Receive the results from the database query.
- Reflect: Use the observed results to form a coherent natural language answer for the user.
This iterative process of reasoning and acting allows the agent to handle complex queries that require external information from the database.
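A hedged sketch of wiring this together with LangGraph's prebuilt `createReactAgent`; the watsonx.ai chat-model class, its constructor options, and the module paths are assumptions based on the stack described above:

```typescript
// Wiring the tool and the model into a prebuilt ReAct agent.
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatWatsonx } from "@langchain/community/chat_models/ibm"; // assumed watsonx.ai integration
import { getFromDB } from "./tools"; // hypothetical module exporting the GetFromDB tool

// Constructor options are assumptions; configure them for your watsonx.ai instance.
const llm = new ChatWatsonx({
  model: "mistralai/mistral-large", // the Mistral Large model mentioned above
  projectId: process.env.WATSONX_PROJECT_ID!,
  serviceUrl: process.env.WATSONX_URL!,
  version: "2024-05-31",
});

// createReactAgent builds the Reason -> Act -> Observe loop: the model keeps calling
// tools until it produces a final answer that contains no tool calls.
export const agent = createReactAgent({ llm, tools: [getFromDB] });
```

Invoking the agent with the running message history (e.g., `await agent.invoke({ messages })`) returns the full trace, including any tool calls and the final `AIMessage`.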
5. How are LLM responses and user inputs managed in the application?
The application manages LLM responses and user inputs through a "message history" state variable in the Next.js frontend. This history stores different types of messages:
- `SystemMessage`: A hidden prompt that guides the LLM's behavior and informs it of its role (e.g., "generate SQLite queries, use the GetFromDB tool").
- `HumanMessage`: Represents messages or queries submitted by the user.
- `AIMessage`: Represents the responses generated by the large language model.

These messages are serialized to JSON before being sent from the frontend (client-side) to the backend (server-side) `actions.ts` file, and then deserialized for processing by LangChain and the LLM. This ensures proper communication and context retention.
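A sketch of what the server-action side of this round trip could look like, assuming LangChain's stored-message helpers for the JSON (de)serialization; file and function names are illustrative:

```typescript
"use server";

// actions.ts — sketch of the server-action side of the round trip. The agent import
// refers to the ReAct agent sketched above; the helpers come from @langchain/core.
import {
  mapStoredMessagesToChatMessages,
  mapChatMessagesToStoredMessages,
  type StoredMessage,
} from "@langchain/core/messages";
import { agent } from "./agent"; // hypothetical module exporting the ReAct agent

export async function sendMessages(serialized: StoredMessage[]): Promise<StoredMessage[]> {
  // Deserialize the JSON-safe history from the client back into
  // SystemMessage / HumanMessage / AIMessage instances.
  const messages = mapStoredMessagesToChatMessages(serialized);

  // Run the ReAct agent with the full history so it keeps conversational context.
  const result = await agent.invoke({ messages });

  // Serialize the updated history (including the new AIMessage) for the client.
  return mapChatMessagesToStoredMessages(result.messages);
}
```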
6. What are the database schema and sample data used in this demonstration?
The demonstration uses an in-memory SQLite database with two tables:
- `customer` table: contains `ID`, `email`, and `name` columns.
- `order` table: contains `ID`, `customer_id` (a foreign key linking to the customer table), `product`, and `amount` columns.
The database is seeded with mock data: 10 sample customer entries and several order entries linked to these customers, allowing the agent to perform queries involving joins and aggregations (e.g., "how many customers do I have?", "which customer placed most orders?").
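A sketch of the schema and seeding step, again assuming `better-sqlite3`; the mock rows shown are illustrative stand-ins for the article's sample data:

```typescript
// Creating and seeding the in-memory SQLite database (illustrative rows only).
import Database from "better-sqlite3";

const db = new Database(":memory:");

db.exec(`
  CREATE TABLE customer (
    ID INTEGER PRIMARY KEY,
    email TEXT,
    name TEXT
  );
  CREATE TABLE "order" (
    ID INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(ID),
    product TEXT,
    amount INTEGER
  );
`);

// Seed a few rows so that joins and aggregations have data to work on.
db.exec(`
  INSERT INTO customer (ID, email, name) VALUES
    (1, 'alice@example.com', 'Alice'),
    (2, 'bob@example.com', 'Bob');
  INSERT INTO "order" (ID, customer_id, product, amount) VALUES
    (1, 1, 'Keyboard', 1),
    (2, 1, 'Monitor', 2),
    (3, 2, 'Mouse', 1);
`);
```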
7. What considerations are mentioned for the security and robustness of a Text-to-SQL agent?
The source briefly mentions the importance of "guardrails" for a Text-to-SQL agent. This implies that for production environments, it's crucial to implement security measures to prevent the LLM from having "unlimited control of your database." While not detailed in the source, this would typically involve:
- SQL injection prevention: Ensuring that generated SQL queries are safe and cannot be manipulated by malicious user input.
- Access control: Limiting the LLM's permissions to only what is necessary (e.g., read-only access for certain tables).
- Validation and sanitization: Thoroughly checking and cleaning user input before it's used to generate SQL.
- Monitoring and logging: Tracking the queries executed by the agent for auditing and error detection.
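As one illustration of the first two points, a guardrail could reject anything other than a single read-only `SELECT` before the query ever reaches SQLite. This is a heuristic sketch, not a complete defense, and would sit alongside restricted database permissions, monitoring, and input validation:

```typescript
// Illustrative read-only guardrail for LLM-generated SQL.
const FORBIDDEN = /\b(insert|update|delete|drop|alter|create|attach|pragma)\b/i;

export function assertReadOnly(sql: string): string {
  // Strip a trailing semicolon, then require a single SELECT with no write/DDL keywords.
  const trimmed = sql.trim().replace(/;+\s*$/, "");
  if (!/^select\b/i.test(trimmed)) {
    throw new Error("Only SELECT statements are allowed");
  }
  if (FORBIDDEN.test(trimmed) || trimmed.includes(";")) {
    throw new Error("Query rejected by guardrail");
  }
  return trimmed;
}
```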
8. How does the application handle the user interface and user experience during interactions?
The Next.js frontend provides a simple yet effective user interface:
- Input Box and Send Button: Allows users to type their natural language queries and submit them.
- Message Display: Shows a chat-like history of user queries and AI responses.
- Loading State: A visual indicator (button text changing to "Loading," input box disabled) informs the user when the LLM is processing a request, preventing multiple submissions and confusion.
- Dynamic Rendering: Messages are dynamically rendered from the application's state, distinguishing `HumanMessage`s (user) from `AIMessage`s (LLM) and replacing the initial placeholder messages.
- Clear History: The application implies the ability to clear the message history ("refresh so we have a clean message history"), which matters for managing the LLM's context window.
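A sketch of how the loading state could be handled in the chat form; component and prop names are illustrative, and `onSend` would wrap the server action sketched earlier:

```tsx
"use client";

// Illustrative chat input with a loading state that disables the form while the LLM works.
import { useState, type FormEvent } from "react";

export function ChatInput({ onSend }: { onSend: (text: string) => Promise<void> }) {
  const [text, setText] = useState("");
  const [loading, setLoading] = useState(false);

  async function handleSubmit(e: FormEvent<HTMLFormElement>) {
    e.preventDefault();
    if (!text.trim() || loading) return;
    setLoading(true); // disable the input and swap the button label while the request runs
    try {
      await onSend(text);
      setText("");
    } finally {
      setLoading(false);
    }
  }

  return (
    <form onSubmit={handleSubmit}>
      <input
        value={text}
        onChange={(e) => setText(e.target.value)}
        disabled={loading}
        placeholder="Ask a question about your data"
      />
      <button type="submit" disabled={loading}>
        {loading ? "Loading" : "Send"}
      </button>
    </form>
  );
}
```

Disabling both the input and the button while a request is in flight is what prevents the duplicate submissions and confusion mentioned above.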