AI Enrich
A unified gateway for LLM-powered text generation and enrichment. Route prompts to OpenAI, Anthropic, Gemini, or Perplexity through a single endpoint — with prompt templating, structured JSON output, and optional web search.
POST /ai_enrich is a single endpoint that abstracts over four LLM
providers. Use it for classification, extraction, summarization,
personalization copy, and any task where a deterministic endpoint doesn't
already exist.
Endpoint: POST /ai_enrich
Credits:
- OpenAI, Anthropic, Gemini: 2 credits / call
- Perplexity: 5 credits / call (includes live web search)
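For budgeting a batch job, the per-call rates above can be totalled with a small helper. This is a sketch using the rates listed in this doc, not an official SDK function:

```python
# Per-call credit cost by provider, as documented above.
CREDITS = {"openai": 2, "anthropic": 2, "gemini": 2, "perplexity": 5}

def credits_for(responses: list) -> int:
    """Total credit cost for a batch of /ai_enrich calls."""
    return sum(CREDITS[r["provider"]] for r in responses)

# One OpenAI call (2) + one Perplexity call (5) = 7 credits.
batch = [{"provider": "openai"}, {"provider": "perplexity"}]
total = credits_for(batch)  # 7
```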
Supported Providers
| Provider | Default Model | Web search |
|---|---|---|
| openai | gpt-4o-mini | Optional |
| anthropic | claude-haiku-4-5-20251001 | Optional |
| gemini | gemini-2.0-flash | Optional |
| perplexity | sonar | Always on |
Override the default model with the model field when you need a stronger
or more specialized model (e.g. gpt-4o, claude-opus-4-6, gemini-2.0-pro).
Request
curl -X POST https://v3-api.texau.com/api/v1/ai_enrich \
-H "x-api-key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"provider": "openai",
"prompt": "Summarize the company {{company_name}} in 2 sentences.",
"context": { "company_name": "Acme Corp" },
"model": "gpt-4o-mini",
"temperature": 0.3
}'
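The same request in Python, using only the standard library. The helper below is a sketch, not an official client; `ai_enrich` and `payload` are illustrative names:

```python
import json
import urllib.request

API_URL = "https://v3-api.texau.com/api/v1/ai_enrich"

def ai_enrich(api_key: str, payload: dict) -> dict:
    """POST a payload to /ai_enrich and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Payload mirroring the curl example above.
payload = {
    "provider": "openai",
    "prompt": "Summarize the company {{company_name}} in 2 sentences.",
    "context": {"company_name": "Acme Corp"},
    "model": "gpt-4o-mini",
    "temperature": 0.3,
}
```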
Parameters
| Field | Type | Required | Description |
|---|---|---|---|
| provider | string | Yes | openai, anthropic, gemini, or perplexity |
| prompt | string | Yes | Prompt text (supports {{variable}} templating) |
| context | object | No | Key-value pairs substituted into {{variable}} placeholders |
| model | string | No | Override the provider default |
| temperature | number | No | 0–2 (default 0.3) |
| system_prompt | string | No | System-level instructions passed to the provider |
| output_type | string | No | text (default) or json |
| output_schema | object | No | JSON Schema guiding structured output when output_type="json" |
| max_tokens | integer | No | Default 4096 |
| use_web_search | boolean | No | Enable web search on providers that support it |
| search_recency_filter | string | No | Perplexity only: day, week, month, or year |
| search_domain_filter | array | No | Perplexity only: limit search to specific domains |
Templated Prompts
Use {{variable}} placeholders in the prompt and pass a context object —
ideal for running the same prompt across every row of a dataset.
{
"provider": "openai",
"prompt": "Is {{company}} a good fit for a ContentOps tool? Answer yes or no and give one reason.",
"context": { "company": "Acme Media Group" }
}
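Substitution happens server-side, but its semantics are simple enough to sketch locally. The `render` helper below is illustrative only, showing how each `{{variable}}` is replaced by the matching key in `context` (unknown placeholders are assumed to pass through unchanged):

```python
import re

def render(prompt: str, context: dict) -> str:
    """Replace each {{variable}} with its value from context.
    Placeholders with no matching key are left as-is (assumed behavior)."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(context.get(m.group(1), m.group(0))),
        prompt,
    )

prompt = "Is {{company}} a good fit for a ContentOps tool? Answer yes or no and give one reason."
rows = [{"company": "Acme Media Group"}, {"company": "Globex"}]
rendered = [render(prompt, row) for row in rows]
```

This is what makes the endpoint suitable for per-row dataset runs: the prompt string never changes, only the `context` object does.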
Structured JSON Output
Set output_type="json" and pass a JSON Schema in output_schema. The
response's content field is returned as parsed JSON matching your schema.
{
"provider": "openai",
"output_type": "json",
"output_schema": {
"type": "object",
"properties": {
"score": { "type": "integer" },
"category": { "type": "string", "enum": ["hot", "warm", "cold"] },
"reason": { "type": "string" }
},
"required": ["score", "category", "reason"]
},
"system_prompt": "You are a lead-scoring analyst. Score prospects 1-100 based on ICP fit.",
"prompt": "Score this prospect: {{description}}",
"context": { "description": "VP of Sales at Series B logistics SaaS, 200 employees." }
}
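Even with schema-guided output, a defensive client-side check of required keys costs nothing. The `check_required` helper below is a minimal sketch (not part of the API, and no substitute for a full JSON Schema validator):

```python
def check_required(content: dict, schema: dict) -> list:
    """Return the required keys missing from a structured response's content."""
    return [k for k in schema.get("required", []) if k not in content]

schema = {
    "type": "object",
    "properties": {
        "score": {"type": "integer"},
        "category": {"type": "string", "enum": ["hot", "warm", "cold"]},
        "reason": {"type": "string"},
    },
    "required": ["score", "category", "reason"],
}

# A well-formed structured response passes the check with no missing keys.
sample = {"score": 82, "category": "hot", "reason": "Strong ICP match."}
missing = check_required(sample, schema)  # []
```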
Web Search (Perplexity)
Perplexity's sonar model performs a live web search before every
completion. Use it for research queries that need current information:
{
"provider": "perplexity",
"prompt": "What were the last three product launches by {{company}}?",
"context": { "company": "Anthropic" },
"search_recency_filter": "month",
"search_domain_filter": ["anthropic.com", "techcrunch.com"]
}
Response
{
"content": "Acme Corp is a logistics ops SaaS serving mid-market shippers. Founded 2018, headquartered in San Francisco.",
"tokens_used": 256,
"input_tokens": 45,
"output_tokens": 211,
"finish_reason": "stop",
"provider": "openai",
"model": "gpt-4o-mini-2024-07-18"
}
When output_type="json", content is already-parsed JSON matching your
output_schema.
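Two cheap sanity checks are worth running on every response: that token accounting adds up, and that the model finished naturally. Note the assumption here that finish_reason follows the common provider convention where "stop" means a natural stop and "length" means truncation at max_tokens:

```python
def check_complete(response: dict) -> bool:
    """True when the model stopped naturally. A finish_reason of "length"
    (assumed convention) would mean output was cut off at max_tokens and the
    call is worth retrying with a higher limit."""
    return response.get("finish_reason") == "stop"

# Sample response from the docs; tokens_used = input_tokens + output_tokens.
response = {
    "content": "Acme Corp is a logistics ops SaaS serving mid-market shippers.",
    "tokens_used": 256,
    "input_tokens": 45,
    "output_tokens": 211,
    "finish_reason": "stop",
}
```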
When to Use (and When Not to)
Use ai_enrich for:
- Classifying prospects by intent, persona, or ICP fit
- Extracting attributes from unstructured bios or descriptions
- Summarizing long-form content into one-liners
- Generating personalized outreach copy per-row
- Perplexity-powered research that needs fresh web data
Don't use ai_enrich when:
- A deterministic endpoint already exists — enrich_profile, company_enricher, web_scrape, etc. are cheaper, faster, and more reliable than asking an LLM for the same data.
- You need a guaranteed schema for every row — always pair with output_type="json" and an explicit output_schema.
Cost-Efficient Patterns
- Template once, run thousands. Build your prompt with {{variable}} placeholders and loop over your data with context objects. No need to rebuild the prompt per-row.
- Use the cheapest suitable model. gpt-4o-mini, claude-haiku, and gemini-flash are tuned for fast classification/extraction and are the default for a reason.
- Cache upstream data. Call enrich_profile once and feed the result into multiple ai_enrich calls (classification + scoring + copy generation) — much cheaper than re-enriching.
- Pair with Custom Functions. Run clean_domain / normalize_company / identify_email_type before sending context into ai_enrich so the model isn't wasting tokens on data cleaning.
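The "template once, run thousands" pattern reduces to building one payload per row while the prompt stays constant. A minimal sketch (the list comprehension stands in for however you iterate your dataset):

```python
# One shared prompt string; only the context object varies per row.
prompt = "Summarize the company {{company_name}} in 2 sentences."
rows = [
    {"company_name": "Acme Corp"},
    {"company_name": "Globex"},
]

payloads = [
    {"provider": "openai", "prompt": prompt, "context": row, "temperature": 0.3}
    for row in rows
]
```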