Charcoal is currently invite-only. Contact us for access.
What Charcoal offers
Agentic search harness
A retrieval agent that iteratively queries your corpus, reformulates on partial results, and stops when it has enough to answer. No RAG pipeline to assemble, no orchestration loop to babysit.
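The loop described above can be sketched in miniature. Everything here is a toy stand-in — the corpus, the `search` and `reformulate` helpers, and the stop condition are invented for illustration, not Charcoal's actual implementation:

```python
# Toy sketch of an agentic retrieval loop: query, reformulate on empty
# results, stop once enough has been gathered to answer.

CORPUS = {
    "refund policy": "Refunds are issued within 14 days of purchase.",
    "shipping policy": "Orders ship within 2 business days.",
}

def search(query: str) -> list[str]:
    """Toy keyword search: return texts whose key contains the query."""
    return [text for key, text in CORPUS.items() if query in key]

def reformulate(query: str) -> str:
    """Toy reformulation: broaden the query by dropping its last word."""
    return " ".join(query.split()[:-1])

def agentic_search(query: str, max_steps: int = 3) -> list[str]:
    """Iterate: search, collect hits, reformulate when nothing came back."""
    gathered: list[str] = []
    for _ in range(max_steps):
        for hit in search(query):
            if hit not in gathered:
                gathered.append(hit)
        if gathered:                 # stop condition: enough to answer
            break
        query = reformulate(query)   # partial/empty results -> reformulate
    return gathered
```

With Charcoal this loop runs server-side; the client issues a single search call.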
Context management
The agent manages its own context window across multi-step retrieval — compacting, pruning, and keeping only signal as it scans thousands of candidates. You don’t hand-roll summarization or fight token limits.
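One way to picture compaction: keep only the highest-relevance snippets that fit a fixed token budget. This is a minimal sketch under that assumption, not Charcoal's actual context-management algorithm:

```python
# Minimal compaction sketch: given (relevance_score, text) candidate
# snippets, keep the best ones that fit a whitespace-token budget.

def compact(snippets: list[tuple[float, str]], budget: int) -> list[str]:
    """Greedily keep highest-scoring snippets within the token budget."""
    kept: list[str] = []
    used = 0
    for score, text in sorted(snippets, key=lambda s: -s[0]):
        cost = len(text.split())     # crude token count
        if used + cost <= budget:
            kept.append(text)
            used += cost
    return kept
```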
Accuracy without embedding engineering
No embedding model to pick, no chunk size to tune, no reranker to train. Charcoal owns the retrieval stack end-to-end and delivers high-accuracy results against the raw documents you upload.
Scalable storage
Ingest millions of documents across any number of namespaces. Upsert in 10k-document batches; storage and index scale horizontally without sharding, tuning, or capacity planning.
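Given the 10k-document batch limit, bulk ingestion reduces to chunking on the client. A small helper (the `BATCH_SIZE` constant and document shape are assumptions; the actual upsert call comes from the SDK):

```python
# Client-side batching sketch: split a large document list into
# 10k-document batches for successive upsert calls.

BATCH_SIZE = 10_000

def batches(docs: list[dict], size: int = BATCH_SIZE):
    """Yield successive fixed-size batches of documents."""
    for i in range(0, len(docs), size):
        yield docs[i : i + size]
```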
Metadata filters
Typed, schema-defined attributes queryable with comparison, set-membership, and logical operators. Combine natural-language search with precise structured predicates on any filterable field.
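To make the operator classes concrete, here is a tiny evaluator for nested filter expressions. The expression shape (`("and", …)`, `("eq", field, value)`, etc.) is invented for illustration — see the Filters page for Charcoal's real syntax:

```python
# Illustrative filter evaluator: comparison ("eq", "gt"), set-membership
# ("in"), and logical ("and", "or") operators over document metadata.

def matches(doc: dict, expr: tuple) -> bool:
    """Recursively evaluate a filter expression against a metadata dict."""
    op, *args = expr
    if op == "and":
        return all(matches(doc, a) for a in args)
    if op == "or":
        return any(matches(doc, a) for a in args)
    field, value = args
    if op == "eq":
        return doc.get(field) == value
    if op == "gt":
        return doc.get(field) is not None and doc[field] > value
    if op == "in":
        return doc.get(field) in value
    raise ValueError(f"unknown operator: {op}")
```

For example, `("and", ("eq", "status", "published"), ("gt", "year", 2020))` matches only published documents dated after 2020.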
Streaming + multi-turn
Search emits server-sent events for real-time progress, and sessions support follow-up messages so the agent can clarify or refine without losing state.
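Consuming the stream means parsing standard SSE framing: `event:` and `data:` fields, with a blank line dispatching each event. A self-contained parser sketch (the event names in the test are invented, not Charcoal's actual event schema):

```python
# SSE parsing sketch: turn a raw server-sent-events string into
# (event, data) pairs. Field names follow the SSE specification.

def parse_sse(raw: str):
    """Yield (event, data) tuples from SSE-formatted text."""
    event, data = "message", []      # "message" is the SSE default event
    for line in raw.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "":             # blank line dispatches the event
            if data:
                yield event, "\n".join(data)
            event, data = "message", []
```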
Quickstart
Getting started
Install the SDK, upload documents, and run your first search in a few minutes.
CLI
Use Charcoal from your terminal — manage namespaces, upload documents, and search.
Learn more
Namespaces & Documents
How documents are organized and schematized.
Search
Streaming, multi-turn sessions, and the search lifecycle.
Filters
The full filter syntax for narrowing results.
API Reference
Full endpoint documentation.