AI Enrich

A unified gateway for LLM-powered text generation and enrichment. Route prompts to OpenAI, Anthropic, Gemini, or Perplexity through a single endpoint — with prompt templating, structured JSON output, and optional web search.

POST /ai_enrich is a single endpoint that abstracts over four LLM providers. Use it for classification, extraction, summarization, personalization copy, and any task where a deterministic endpoint doesn't already exist.

Endpoint: POST /ai_enrich

Credits:

  • OpenAI, Anthropic, Gemini: 2 credits / call
  • Perplexity: 5 credits / call (includes live web search)
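
Before a batch run, it's worth multiplying per-call cost by row count. A minimal sketch using the rates above (the helper name and lookup table are my own, not part of the API):

```python
# Per-call credit cost by provider, from the pricing above.
CREDITS_PER_CALL = {"openai": 2, "anthropic": 2, "gemini": 2, "perplexity": 5}

def estimate_credits(provider: str, num_calls: int) -> int:
    """Return the total credits a batch of calls will consume."""
    return CREDITS_PER_CALL[provider] * num_calls

# 1,000 rows through OpenAI costs 2,000 credits; through Perplexity, 5,000.
```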

Supported Providers

| Provider | Default Model | Web search |
| --- | --- | --- |
| openai | gpt-4o-mini | Optional |
| anthropic | claude-haiku-4-5-20251001 | Optional |
| gemini | gemini-2.0-flash | Optional |
| perplexity | sonar | Always on |

Override the default model with the model field when you need a stronger or more specialized model (e.g. gpt-4o, claude-opus-4-6, gemini-2.0-pro).

Request

curl -X POST https://v3-api.texau.com/api/v1/ai_enrich \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "openai",
    "prompt": "Summarize the company {{company_name}} in 2 sentences.",
    "context": { "company_name": "Acme Corp" },
    "model": "gpt-4o-mini",
    "temperature": 0.3
  }'
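
The same call from Python's standard library might look like the sketch below. There is no official SDK shown here, so the `ai_enrich` helper is an assumption; the URL, headers, and payload fields come from the curl example above. The network call itself is left commented out since it needs a valid key:

```python
import json
import urllib.request

API_URL = "https://v3-api.texau.com/api/v1/ai_enrich"

def ai_enrich(api_key: str, payload: dict) -> dict:
    """POST a request body to /ai_enrich and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = {
    "provider": "openai",
    "prompt": "Summarize the company {{company_name}} in 2 sentences.",
    "context": {"company_name": "Acme Corp"},
    "model": "gpt-4o-mini",
    "temperature": 0.3,
}
# result = ai_enrich("YOUR_API_KEY", payload)  # requires a valid API key
```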

Parameters

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| provider | string | Yes | openai, anthropic, gemini, or perplexity |
| prompt | string | Yes | Prompt text (supports {{variable}} templating) |
| context | object | No | Key-value pairs substituted into {{variable}} placeholders |
| model | string | No | Override the provider default |
| temperature | number | No | 0–2 (default 0.3) |
| system_prompt | string | No | Provider system-level instructions |
| output_type | string | No | text (default) or json |
| output_schema | object | No | JSON schema guiding structured output when output_type="json" |
| max_tokens | integer | No | Default 4096 |
| use_web_search | boolean | No | Enable web search on providers that support it |
| search_recency_filter | string | No | Perplexity only: day, week, month, year |
| search_domain_filter | array | No | Perplexity only: limit search to specific domains |

Templated Prompts

Use {{variable}} placeholders in the prompt and pass a context object — ideal for running the same prompt across every row of a dataset.

{
  "provider": "openai",
  "prompt": "Is {{company}} a good fit for a ContentOps tool? Answer yes or no and give one reason.",
  "context": { "company": "Acme Media Group" }
}
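
Substitution happens server-side, but the semantics are plain string replacement. A minimal local sketch, assuming each `{{key}}` is replaced verbatim with its value from `context`:

```python
import re

def render(prompt: str, context: dict) -> str:
    """Replace each {{key}} placeholder with its value from context."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(context[m.group(1)]), prompt)

prompt = "Is {{company}} a good fit for a ContentOps tool?"
print(render(prompt, {"company": "Acme Media Group"}))
# → Is Acme Media Group a good fit for a ContentOps tool?
```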

Structured JSON Output

Set output_type="json" and pass a JSON schema in output_schema. The response is guaranteed to parse as JSON matching your shape.

{
  "provider": "openai",
  "output_type": "json",
  "output_schema": {
    "type": "object",
    "properties": {
      "score": { "type": "integer" },
      "category": { "type": "string", "enum": ["hot", "warm", "cold"] },
      "reason": { "type": "string" }
    },
    "required": ["score", "category", "reason"]
  },
  "system_prompt": "You are a lead-scoring analyst. Score prospects 1-100 based on ICP fit.",
  "prompt": "Score this prospect: {{description}}",
  "context": { "description": "VP of Sales at Series B logistics SaaS, 200 employees." }
}
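
Even with schema-guided output, a cheap defensive check downstream costs nothing. A sketch (the function name is mine) that verifies the required keys and the enum from the schema above:

```python
def check_lead_score(content: dict) -> bool:
    """Validate a parsed response against the lead-scoring schema's constraints."""
    required = {"score", "category", "reason"}
    return (
        required <= content.keys()
        and isinstance(content["score"], int)
        and content["category"] in {"hot", "warm", "cold"}
    )
```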

Web Search (Perplexity)

Perplexity's sonar model performs a live web search before every completion. Use it for research queries that need current information:

{
  "provider": "perplexity",
  "prompt": "What were the last three product launches by {{company}}?",
  "context": { "company": "Anthropic" },
  "search_recency_filter": "month",
  "search_domain_filter": ["anthropic.com", "techcrunch.com"]
}

Response

{
  "content": "Acme Corp is a logistics ops SaaS serving mid-market shippers. Founded 2018, headquartered in San Francisco.",
  "tokens_used": 256,
  "input_tokens": 45,
  "output_tokens": 211,
  "finish_reason": "stop",
  "provider": "openai",
  "model": "gpt-4o-mini-2024-07-18"
}

When output_type="json", content is already-parsed JSON matching your output_schema.
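Note that tokens_used is the sum of input_tokens and output_tokens (45 + 211 = 256 above). A tiny helper (the function name is mine) for logging spend per call:

```python
def log_usage(response: dict) -> str:
    """One-line token-usage summary from an /ai_enrich response."""
    return (
        f"{response['provider']}/{response['model']}: "
        f"{response['input_tokens']} in + {response['output_tokens']} out = "
        f"{response['tokens_used']} tokens ({response['finish_reason']})"
    )

sample = {
    "provider": "openai", "model": "gpt-4o-mini-2024-07-18",
    "input_tokens": 45, "output_tokens": 211, "tokens_used": 256,
    "finish_reason": "stop",
}
print(log_usage(sample))
# → openai/gpt-4o-mini-2024-07-18: 45 in + 211 out = 256 tokens (stop)
```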

When to Use (and When Not to)

Use ai_enrich for:

  • Classifying prospects by intent, persona, or ICP fit
  • Extracting attributes from unstructured bios or descriptions
  • Summarizing long-form content into one-liners
  • Generating personalized outreach copy per-row
  • Perplexity-powered research that needs fresh web data

Don't use ai_enrich when:

  • A deterministic endpoint already exists — enrich_profile, company_enricher, web_scrape, etc. are cheaper, faster, and more reliable than asking an LLM for the same data.
  • You need a guaranteed schema for every row but are requesting free-form text — in that case, always pair the call with output_type="json" and an explicit output_schema.

Cost-Efficient Patterns

  1. Template once, run thousands. Build your prompt with {{variable}} placeholders and loop over your data with context objects. No need to rebuild the prompt per-row.
  2. Use the cheapest suitable model. gpt-4o-mini, claude-haiku, and gemini-flash are tuned for fast classification/extraction and are the default for a reason.
  3. Cache upstream data. Call enrich_profile once and feed the result into multiple ai_enrich calls (classification + scoring + copy generation) — much cheaper than re-enriching.
  4. Pair with Custom Functions. Run clean_domain / normalize_company / identify_email_type before sending context into ai_enrich so the model isn't wasting tokens on data cleaning.
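
Pattern 1 in code: build one request body and vary only context per row. A sketch assuming the dataset is a list of dicts (the helper name is mine):

```python
def batch_payloads(base: dict, rows: list[dict]) -> list[dict]:
    """Produce one request body per row, reusing the shared prompt template."""
    return [{**base, "context": row} for row in rows]

base = {
    "provider": "openai",
    "prompt": "Summarize the company {{company_name}} in 2 sentences.",
}
rows = [{"company_name": "Acme Corp"}, {"company_name": "Globex"}]
payloads = batch_payloads(base, rows)
```

Each payload shares the same prompt and provider; only the context differs, so the template is built exactly once.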