Charcoal is a drop-in search subagent for your AI agent. Point it at a document corpus, run a natural language query, and Charcoal runs the full retrieval loop with a state-of-the-art agent that’s tailored to your corpus. Charcoal internally handles everything needed for high-quality agentic retrieval: indexing, query planning, context compaction, grounded synthesis with citations, metadata filtering, and scaling.
Charcoal is currently invite-only. Contact us for access.

Features

Agentic search harness

A retrieval agent that iteratively queries your corpus, reformulates on partial results, and stops when it has enough to answer. No RAG pipeline to assemble.
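Conceptually, the harness runs a query-reformulate-stop loop. The following is a minimal Python sketch of that idea over a toy in-memory corpus; every name in it is illustrative, and none of it is the Charcoal API:

```python
# Toy sketch of an agentic retrieval loop: query, inspect partial
# results, reformulate, and stop once enough evidence is gathered.
# All names here are illustrative assumptions, not the Charcoal API.

CORPUS = {
    "doc1": "Charcoal stores documents in object storage.",
    "doc2": "The retrieval agent reformulates queries on partial results.",
    "doc3": "Indexing and search cost a fraction of traditional engines.",
}

def keyword_search(query: str) -> list[str]:
    """Stand-in retriever: ids of docs sharing any word with the query."""
    words = set(query.lower().split())
    return [doc_id for doc_id, text in CORPUS.items()
            if words & set(text.lower().split())]

def retrieve(question: str, max_steps: int = 3, enough: int = 2) -> list[str]:
    """Iteratively query, reformulating when results are thin."""
    query, seen = question, set()
    for _ in range(max_steps):
        seen.update(keyword_search(query))
        if len(seen) >= enough:          # stop when we have enough to answer
            break
        query = question + " storage indexing"  # naive reformulation
    return sorted(seen)

print(retrieve("how does charcoal handle storage?"))  # → ['doc1', 'doc3']
```

A real harness replaces the keyword retriever and the hard-coded reformulation with a model deciding both; the control flow is the part this sketch illustrates.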

RL-trained model

We use reinforcement learning to train small, specialized models tailored to your corpus. The result is agentic search that is roughly 10x faster and cheaper than frontier models, and more accurate.

Context management

The agent manages its own context window across multi-step retrieval — compacting, pruning, and keeping only signal as it scans candidate documents.
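As an illustration of the idea only (not Charcoal's internals), a pruning pass over candidate snippets might drop off-topic text and truncate to a budget:

```python
# Illustrative context compaction: keep only sentences that share a
# term with the query, then cut off at a character budget.
# This is a hypothetical sketch, not Charcoal's actual logic.

def compact(candidates: list[str], query: str, budget: int = 120) -> str:
    terms = set(query.lower().split())
    kept = [s for s in candidates
            if terms & set(s.lower().split())]   # prune off-topic sentences
    context, used = [], 0
    for sentence in kept:                        # stay within the budget
        if used + len(sentence) > budget:
            break
        context.append(sentence)
        used += len(sentence)
    return " ".join(context)

snippets = [
    "Charcoal stores documents in object storage.",
    "Unrelated boilerplate about login widgets.",
    "Object storage keeps costs low.",
]
print(compact(snippets, "object storage"))
```

In practice the agent makes these keep/drop decisions itself as it scans documents; the point of the sketch is that only signal survives into the working context.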

Accuracy and recall without embedding engineering

No embedding model to pick, no chunk size to tune, no reranker to train.

Scalable storage

Documents are stored in object storage, so ingestion, indexing, and search cost a fraction of what they would with a traditional search engine.

Metadata filters

Schematized typed attributes with comparison, set-membership, and logical operators. Combine natural-language search with precise structured predicates on any filterable field.
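To make the shape of such predicates concrete, here is a tiny evaluator for filters combining comparison, set membership, and logical operators. The field names and operator spelling are assumptions for illustration; see the Filters page for Charcoal's actual syntax:

```python
# Hypothetical filter predicates with comparison, set-membership, and
# logical operators. The syntax below is an illustrative sketch, not
# Charcoal's actual filter language.
import operator

OPS = {
    "eq": operator.eq, "ne": operator.ne,
    "lt": operator.lt, "lte": operator.le,
    "gt": operator.gt, "gte": operator.ge,
    "in": lambda value, options: value in options,  # set membership
}

def matches(doc: dict, flt) -> bool:
    """Evaluate ("and"/"or", [subfilters]) or (field, op, value)."""
    if flt[0] == "and":
        return all(matches(doc, f) for f in flt[1])
    if flt[0] == "or":
        return any(matches(doc, f) for f in flt[1])
    field, op, value = flt
    return OPS[op](doc.get(field), value)

doc = {"lang": "en", "year": 2024, "status": "published"}
flt = ("and", [
    ("year", "gte", 2023),            # comparison
    ("lang", "in", ["en", "de"]),     # set membership
    ("status", "eq", "published"),    # equality
])
print(matches(doc, flt))  # → True
```

Combining a structured predicate like this with a natural-language query narrows the corpus before the agent ever reads a document.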

Quickstart

Getting started

Install the SDK, upload documents, and run your first search in a few minutes.

CLI

Use Charcoal from your terminal — manage namespaces, upload documents, and search.

Learn more

Namespaces & Documents

How documents are organized and schematized.

Search

Streaming, multi-turn sessions, and the search lifecycle.

Filters

The full filter syntax for narrowing results.

API Reference

Full endpoint documentation.