Easily integrate AG Grid with your own LLM, enabling end users to query and manipulate grid state via natural language.
The example below demonstrates a chat application, built with the AI Toolkit APIs and integrated with OpenAI's gpt-5-mini
model, that allows grid state to be manipulated via natural language.
Suggested prompts:
- "Show me all the gold medals won by the USA"
- "Sort the competitors with the youngest first"
- "Group by country and show the total number of medals won"
How It Works
Structured Outputs is an LLM feature that ensures model responses adhere to a supplied JSON Schema. Structured Outputs are supported by many LLMs, including ChatGPT and Gemini.
The AI Toolkit provides a getStructuredSchema API that generates a structured schema based on grid state. This schema can be passed to an LLM, which can then generate valid responses that can be passed directly to the setState API method. This ensures reliable, schema-aligned instructions for updating or manipulating the grid based on natural language input.
The schema is made up of a series of "features", each representing a different aspect of the grid that can be manipulated. The following features are currently supported: aggregation, filter, sort, pivot, columnVisibility, columnSizing, and rowGroup.
Architecture
At a high level, the AI Toolkit works as follows:
1. User Input Capture: The end user enters a natural language query.
2. Prompt Construction (Client/Server): The application gathers three elements:
   - The user query
   - The current grid state, via gridApi.getState()
   - The structured schema of the grid state, via gridApi.getStructuredSchema()
3. LLM Service Request: The complete prompt (including the structured schema) is sent to the LLM endpoint (e.g., OpenAI, Gemini).
4. Response Processing & Validation: The LLM returns a JSON object conforming to the schema.
5. State Application: The validated JSON is passed to gridApi.setState().
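The steps above can be sketched as a single function. Note that applyNaturalLanguageQuery and callLLM are hypothetical names introduced here for illustration, and gridApi stands in for your grid's API instance:

```javascript
// Hypothetical end-to-end flow; `gridApi` is your grid's API instance
// and `callLLM` is a placeholder for your LLM request function.
async function applyNaturalLanguageQuery(gridApi, callLLM, userQuery) {
    // Steps 1-2: capture input and gather the prompt elements
    const gridState = gridApi.getState();
    const schema = gridApi.getStructuredSchema();

    // Step 3: send the complete prompt to the LLM endpoint
    const response = await callLLM(userQuery, gridState, schema);

    // Step 4: the LLM returns a JSON object conforming to the schema
    const { gridState: newGridState, propertiesToIgnore } = response;

    // Step 5: apply the returned state to the grid
    gridApi.setState(newGridState, propertiesToIgnore);
    return response.explanation;
}
```

In practice you would validate the response between steps 4 and 5, as described below.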
If you wish to integrate with an LLM that does not support structured outputs natively, you can still use the schema to validate and parse the LLM's response before passing it to setState. There are multiple libraries that can do this for you, such as ajv.
Getting Started
To get started you'll need:
- API Key for your chosen LLM
- Existing AG Grid implementation
- Input for user queries
- Knowledge of making requests to an LLM
Creating a Schema
The getStructuredSchema API returns a structured JSON Schema representation of the Grid State, which can then be used to create an LLM-compatible schema:
// Generated Structured Schema Representation of Grid State
const gridStateStructuredSchema = gridApi.getStructuredSchema();

// Create LLM-compatible JSON Schema, using Grid State Structured Schema
const schema = {
    type: 'object',
    properties: {
        gridState: gridStateStructuredSchema,
        propertiesToIgnore: {
            type: 'array',
            items: {
                type: 'string',
                enum: ['aggregation', 'filter', 'sort', 'pivot', 'columnVisibility', 'columnSizing', 'rowGroup'],
            },
            description: 'List of grid state properties to ignore when applying the new state',
        },
        explanation: {
            type: 'string',
            description: 'Human-readable explanation of the changes made to the grid state',
        },
    },
    required: ['gridState', 'propertiesToIgnore', 'explanation'],
    additionalProperties: false,
};
The getStructuredSchema() API returns a narrow representation of what can be achieved using the grid's API. For example, if a column is not sortable, the schema will not include that column in the list of sortable columns. This ensures that the LLM is only able to generate valid state changes for the grid.
The schema contains several properties for the LLM to populate:
- gridState: a structured output representation of the grid state.
- propertiesToIgnore: a list of grid state properties that are unchanged, ensuring they are not overridden when updating grid state (optional, but recommended).
- explanation: a string the LLM can use to provide human-readable context to the user about the changes that have been applied (optional, but recommended).
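For illustration, a response conforming to this schema might look like the following. The values are hypothetical, and the exact shape of gridState depends on the schema generated for your grid:

```javascript
// Hypothetical LLM response conforming to the top-level schema above
const exampleResponse = {
    // New grid state: sort competitors by age, ascending
    gridState: {
        sort: { sortModel: [{ colId: 'age', sort: 'asc' }] },
    },
    // Features left unchanged, so they are not overridden on update
    propertiesToIgnore: ['filter', 'columnVisibility'],
    // Human-readable summary for the end user
    explanation: 'Sorted competitors with the youngest first.',
};
```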
Modifying State with Any LLM
To provide the LLM with sufficient context, we recommend sending the user's request, the current grid state, and the schema to the LLM:
// Get User's Request, e.g. from Input Element
const userRequest = inputElement.value.trim();

// Get Current Grid State
const gridState = gridApi.getState();

// Send User Request, Grid State & Schema to LLM
const response = await callLLM(userRequest, gridState, schema);
The callLLM function needs to be implemented in accordance with your chosen LLM. Refer to the example above for a reference implementation using OpenAI's Chat Completions API, and to the Prompting section for more information on creating system prompts.
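One possible shape for callLLM is sketched below using OpenAI's Chat Completions endpoint with a json_schema response format (OpenAI's Structured Outputs mechanism). The OPENAI_API_KEY variable, the minimal system prompt, and the buildRequestBody helper are assumptions for illustration; adapt them to your provider and prompting strategy:

```javascript
// Build the Chat Completions request body. The system prompt here is a
// minimal placeholder; see the Prompting section for a fuller example.
function buildRequestBody(userRequest, gridState, schema) {
    return {
        model: 'gpt-5-mini',
        messages: [
            { role: 'system', content: `Current Grid State:\n${JSON.stringify(gridState)}` },
            { role: 'user', content: userRequest },
        ],
        // Ask the model to return JSON conforming to the top-level schema
        response_format: {
            type: 'json_schema',
            json_schema: { name: 'grid_state_update', strict: true, schema },
        },
    };
}

// Sketch of callLLM; OPENAI_API_KEY is assumed to be defined elsewhere.
async function callLLM(userRequest, gridState, schema) {
    const res = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            Authorization: `Bearer ${OPENAI_API_KEY}`,
        },
        body: JSON.stringify(buildRequestBody(userRequest, gridState, schema)),
    });
    const data = await res.json();
    // The structured response arrives as a JSON string in the message content
    return JSON.parse(data.choices[0].message.content);
}
```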
Updating Grid State
Once the LLM has provided a response, it should be validated against your top-level schema. This is particularly important when using an LLM that does not support structured outputs.
In this example, we're using ajv to validate the LLM's response before calling setState to update the grid:
// Init ajv Validator with Top-Level Schema
const ajv = new ajv7();
const ajvValidator = ajv.compile(schema);

// Validate LLM Response w/ ajv
if (!ajvValidator(response)) {
    console.error('LLM response does not match schema');
    return;
}

// Extract Grid State & Properties to Ignore from Validated Response
const { gridState: newGridState, propertiesToIgnore } = response;

// Update Grid State with LLM Response
gridApi.setState(newGridState, propertiesToIgnore);
Passing propertiesToIgnore (the properties left unchanged by the LLM) to setState ensures that these properties are not overridden when the grid state is updated.
Excluding Features
By default, all features are enabled and included in the schema. If you wish to limit the features returned in the schema, you can do so by providing a list of feature names to the getStructuredSchema method.
For example, if you do not want to allow users to manipulate column visibility and sorting, you can call getStructuredSchema like this:
const gridStateStructuredSchema = gridApi.getStructuredSchema({
    exclude: ['columnVisibility', 'sorting'],
});
Providing Additional Context
Occasionally the LLM will need more information than can be provided by the grid alone. As such, you can pass in options for each column defining extra context.
- description: Provides the LLM with an inline description of what the column contains. This might include details such as the type or format of the data, to aid the LLM when filtering or aggregating.
- includeSetValues: When using the Set Filter, the LLM must be provided with the allowed values in order to correctly set those it wishes to filter. However, some Set Filters contain many values, which leads to a schema that is too large for your LLM to process. By default, Set Filter values are not included; set this property to true to include them. Refer to the Handling Schema Size Limits section for more information.
const gridStateStructuredSchema = gridApi.getStructuredSchema({
    columns: {
        sport: {
            description: "The sport the athlete won their medal in",
            includeSetValues: true,
        },
        gold: {
            description: "The number of gold medals won by this athlete at this games",
        },
    },
});
Before including data such as column descriptions or Set Filter values, be aware of the data security policy of your LLM provider.
Prompting
The AI Toolkit does not include any prompting logic, as this will vary depending on the LLM you are using and your specific use case. This gives you the flexibility to craft prompts that are tailored to your users and the data in your grid.
In our testing we have found a few things that help get the best results from LLMs when prompting them to generate grid state changes:
- Include the current state of the grid in the prompt. This helps the LLM understand what the grid currently looks like and what changes are being requested.
- Request that the LLM only return the grid state changes, and nothing else. This helps ensure that the response can be passed directly to setState().
- You may wish to provide the LLM with a few rows of data from the grid, to help it understand the data it is working with. This is especially useful if your grid contains domain-specific data that the LLM may not be familiar with. If you have a small dataset, you can even include the entire dataset in the prompt by using exportToCsv().
- If you have any domain-specific knowledge or terminology that you want the LLM to be aware of, include that in the prompt as well, e.g. "In this dataset, a 'medal' refers to any of gold, silver or bronze medals won at the Olympic games".
- Including a list of available features also helps the LLM understand what it can and cannot do.
Below is an example prompt you can use as a starting point:
const prompt = `
You are an expert data analyst working with a data grid.
The grid contains data about Olympic athletes and their achievements.
You should respond to user requests by generating a JSON object that
represents their requested changes to the grid state. The response should
include all their requested changes, along with any features that are
already applied to the grid that they have not requested to change.
The following is the current state of a data grid, represented as a
JSON object. The grid contains data about Olympic athletes and their achievements.
Current Grid State:
${JSON.stringify(gridApi.getState(), null, 2)}
The grid has the following features available to manipulate:
- Column Visibility
- Column Sizing
- Row Grouping
- Sorting
- Aggregation
- Pivoting
- Filtering
`
Modifying the Schema
You may modify the structured schema to change specific options or add extra features. For example, you may wish to add a schema for a custom filter, or limit which columns the LLM can hide:
// Generate base schema
const baseSchema = gridApi.getStructuredSchema();

// Augment with custom constraints
function applyCustomRules(schema) {
    return {
        ...schema,
        // custom schema rules...
    };
}

const customSchema = applyCustomRules(baseSchema);
If you choose to do this, make sure that the result still describes a valid GridState object before passing it to setState. Be aware that the JSON Schema supported by LLMs is a subset of the full JSON Schema spec, and build your schemas accordingly.
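As one illustrative pattern for restricting what the LLM can do, you could remove protected column IDs from every enum in the generated schema. The stripEnumValues helper below is hypothetical, and the traversal is deliberately generic rather than tied to the schema's internal shape:

```javascript
// Hypothetical helper: recursively remove protected values from every
// `enum` array in a JSON Schema, e.g. to stop the LLM hiding certain
// columns. Returns a new schema; the input is not mutated.
function stripEnumValues(schema, protectedValues) {
    if (Array.isArray(schema)) {
        return schema.map((item) => stripEnumValues(item, protectedValues));
    }
    if (schema !== null && typeof schema === 'object') {
        const result = {};
        for (const [key, value] of Object.entries(schema)) {
            if (key === 'enum' && Array.isArray(value)) {
                // Filter protected values out of the allowed set
                result[key] = value.filter((v) => !protectedValues.includes(v));
            } else {
                result[key] = stripEnumValues(value, protectedValues);
            }
        }
        return result;
    }
    return schema;
}
```

For example, applyCustomRules above could call stripEnumValues(schema, ['employeeId']) (a hypothetical column ID) to keep that column out of any enum the LLM can choose from.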
Handling Schema Size Limits
Setting includeSetValues: true is useful when the LLM must pick from explicit allowed values (e.g., for the Set Filter), enabling precise filter construction. However, high-cardinality columns can inflate the prompt and exceed context limits.
Recommendations:
- Enable includeSetValues selectively, on low-cardinality columns only.
- Consider truncating to the top N most frequent values, plus an "OTHER" hint.
- Monitor total prompt size; keep it within your model's context window, with a buffer for the model's response.
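As a rough guard on prompt size, you can estimate the token count before sending. The four-characters-per-token ratio used here is a common heuristic for English text, not an exact tokenizer count, and both function names are hypothetical:

```javascript
// Rough token estimate for one or more prompt parts. Assumes roughly
// 4 characters per token, a common heuristic, not an exact count.
function estimateTokens(...parts) {
    const totalChars = parts
        .map((part) => (typeof part === 'string' ? part : JSON.stringify(part)))
        .reduce((sum, text) => sum + text.length, 0);
    return Math.ceil(totalChars / 4);
}

// Check the prompt fits the context window, leaving a buffer of tokens
// for the model's response.
function fitsContextWindow(promptParts, contextWindow, responseBuffer = 2048) {
    return estimateTokens(...promptParts) + responseBuffer <= contextWindow;
}
```

For example, before calling your LLM you might check fitsContextWindow([userRequest, gridState, schema], 128000) and fall back to a schema without Set Filter values when it fails.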
API
Returns the structured schema of the grid, which includes information about columns, data types, and relationships.
This schema can be passed to AI services to ensure the response is in the correct format.