Predictable in a hot path
Fixed labels, calibrated probabilities, and stable latency for routing, dashboards, and product workflows.
Tabularis Sentiment API
Specialized multilingual classifiers for customer feedback, support tickets, reviews, and social streams. Sub-second responses, transparent per-call pricing. No prompting, no hallucinations, no large LLMs in your hot path.
Priced like a utility. $19/mo for 100K calls.
One bearer token, one JSON payload, optional batching up to 100 texts, and no prompt or parser maintenance.
Request text is used for inference and discarded after the response; operational logs store metadata only.
Built for one job and priced for production. Send up to 100 texts per request, receive structured labels with calibrated probabilities — no prompt engineering, no parsing.
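As a sketch of that request shape: a small helper that assembles one payload and enforces the 100-text cap client-side. The field names match the examples below; the helper itself (`build_payload`) is illustrative, not part of any SDK.

```python
# Illustrative helper (not an official SDK function): assemble one
# request body and enforce the documented 100-text-per-request cap.
def build_payload(texts, tasks=("sentiment",), return_all_scores=False):
    if not 0 < len(texts) <= 100:
        raise ValueError("the API accepts 1-100 texts per request")
    return {
        "texts": list(texts),
        "tasks": list(tasks),
        "return_all_scores": return_all_scores,
    }

payload = build_payload(
    ["I love the product, but support was slow."],
    tasks=["sentiment", "emotion"],
)
```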
5 classes · 23 languages
Very negative → very positive, fine-tuned on multilingual social, support, and review corpora. Natively multilingual — no translate-then-classify drift.
11 labels · multi-label
Joy, sadness, anger, fear, surprise, disgust, neutral, gratitude, frustration, love, anticipation. Returns several labels per text when warranted.
Both tasks · one round-trip
Send text once, receive sentiment + emotion in a single response. 4.5 credits instead of 5 — bundled discount for the most common path.
LLMs are amazing at general reasoning. They are not the right tool for a deterministic classifier in a hot path. The math at scale is not subtle.
Free 10K credits, no card. Tabs below show the same call in three languages —
switch endpoints by changing one path: /v1/sentiment, /v1/emotions, or /v1/analyze.
curl https://api.tabularis.ai/v1/analyze \
  -H "Authorization: Bearer tab_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "texts": ["I love the product, but support was slow."],
    "tasks": ["sentiment", "emotion"],
    "return_all_scores": true
  }'

# Python
import requests
resp = requests.post(
    "https://api.tabularis.ai/v1/analyze",
    headers={"Authorization": "Bearer tab_live_xxx"},
    json={
        "texts": ["I love the product, but support was slow."],
        "tasks": ["sentiment", "emotion"],
    },
    timeout=15,
)
print(resp.json())

// JavaScript
const res = await fetch("https://api.tabularis.ai/v1/analyze", {
  method: "POST",
  headers: {
    Authorization: "Bearer tab_live_xxx",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    texts: ["I love the product, but support was slow."],
    tasks: ["sentiment", "emotion"],
  }),
});
const data = await res.json();

Response:

{
  "results": [
    {
      "text_index": 0,
      "sentiment": {
        "label": "positive",
        "score": 0.71,
        "scores": { "positive": 0.71, "neutral": 0.18, "negative": 0.08, "very_positive": 0.02, "very_negative": 0.01 }
      },
      "emotion": {
        "labels": ["joy", "frustration"],
        "scores": { "joy": 0.81, "frustration": 0.68, "anticipation": 0.22, "neutral": 0.05 }
      }
    }
  ],
  "usage": { "credits": 4.5, "billable_chunks": 1 },
  "request_id": "req_d11dd78cd390d1487155d9ef"
}

From CX routing to ML pipelines — anywhere you currently parse free-text by hand or pay an LLM to do a five-class job.
Auto-route by tone — angry tickets to senior reps, gratitude to retention.
Track sentiment trends per release. Detect regressions before they hit ratings.
Bucket thousands of free-text comments in seconds. Drill into the angry slice.
Brand sentiment in real time across X, Reddit, TikTok — across 23 languages, one model.
Detect frustration before users churn. Trigger human handoff on negative + frustration.
Combine sentiment + emotion as features in your ML or risk pipeline.
One API for global feedback streams. No per-language vendor stack.
Pre-filter community submissions by anger or toxicity before human review.
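The routing use cases above can be sketched against the response shape shown earlier. The queue names and the 0.6 frustration threshold below are illustrative choices, not API behavior:

```python
# Sketch: pick a queue from one entry of the `results` array.
# Thresholds and queue names are illustrative, not part of the API.
def route_ticket(result, frustration_cutoff=0.6):
    sentiment = result["sentiment"]["label"]
    emotions = result["emotion"]["scores"]
    # Negative tone plus strong frustration -> escalate to a human.
    if sentiment in ("negative", "very_negative") and \
            emotions.get("frustration", 0) >= frustration_cutoff:
        return "senior_reps"
    # Gratitude signals an advocate -> retention workflow.
    if "gratitude" in result["emotion"]["labels"]:
        return "retention"
    return "default_queue"

example = {
    "sentiment": {"label": "negative", "score": 0.77},
    "emotion": {
        "labels": ["anger", "frustration"],
        "scores": {"anger": 0.70, "frustration": 0.82},
    },
}
```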
One model per task, natively trained across all of these languages. No per-language code paths, no per-language vendors.
Credits are unified across endpoints. 1 credit = 1 text up to 1,000 characters. Free tier needs no card. Cancel anytime via Stripe portal.
Free — 10K sentiment OR 2.5K emotion
Starter — 100K sent. / 25K emo.
800K sent. / 200K emo.
2.5M sent. / 625K emo.
10M sent. / 2.5M emo.
Enterprise — volume, SLA, on-prem, DPA · talk to sales

/v1/sentiment 1 cr · /v1/emotions 4 cr · /v1/analyze 4.5 cr (bundled)
Chunk size: 1,000 Unicode characters · longer texts split proportionally · max 2,000 chars / text

Inference runs in-memory on dedicated infrastructure. The text is discarded the moment the response is returned. No log, no warehouse, no backup contains your customers' content.
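Under the chunk rules above, a client-side credit estimate is straightforward. One assumption to flag: "split proportionally" could mean fractional billing, but this sketch rounds each text up to whole 1,000-character chunks, so treat it as an upper bound.

```python
import math

# Credits per 1,000-character chunk, per the pricing above.
RATES = {"sentiment": 1.0, "emotions": 4.0, "analyze": 4.5}

def estimate_credits(texts, endpoint="sentiment"):
    """Upper-bound credit estimate; rounds partial chunks up (assumption)."""
    total_chunks = 0
    for text in texts:
        if len(text) > 2000:
            raise ValueError("texts are capped at 2,000 characters each")
        total_chunks += max(1, math.ceil(len(text) / 1000))
    return total_chunks * RATES[endpoint]
```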
Our classifiers are built on state-of-the-art multilingual transformer research and continuously fine-tuned on real customer feedback, support, and review corpora at scale. You get production-grade inference — always-warm, metered, with API key management and audit logs — without standing up your own GPU fleet.
Specialized models are 10× faster, 10–100× cheaper, return structured probabilities (no hallucinated labels), and do not drift between API calls. LLMs are amazing at general reasoning; a purpose-built classifier is the right tool for a deterministic job in a hot path.
What if my language isn't one of the 23?
The model will still return a result, but accuracy degrades on out-of-distribution languages. For low-resource languages we recommend the Enterprise tier, where we can fine-tune on your data.
How do I process more than 100 texts?
Send up to 100 texts per request via the texts array. For very large jobs (>1M docs), the Batch API is on the roadmap — contact us for early access.
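Until the Batch API lands, a larger job can be split client-side into requests of at most 100 texts each. The `batched` helper below is illustrative:

```python
# Illustrative helper: yield consecutive slices of at most `size` texts,
# matching the 100-texts-per-request cap.
def batched(texts, size=100):
    for start in range(0, len(texts), size):
        yield texts[start:start + size]

# 250 documents -> three requests of 100, 100, and 50 texts.
batches = list(batched([f"doc {i}" for i in range(250)]))
```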
Do unused credits roll over?
No — credits reset each billing period. Pricing is calibrated so most users do not hit their cap.
How hard is it to migrate from another provider?
Change one URL and one auth header. Our response shape mirrors standard text-classification pipelines, so most integrations swap in by replacing the endpoint and key — no schema rewrites.
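To illustrate the "no schema rewrites" point, here is one way to reshape a result entry into the `[{"label": ..., "score": ...}]` list-of-dicts common to text-classification pipelines. The adapter function is hypothetical:

```python
# Hypothetical adapter: reshape one entry of the `results` array into the
# [{"label": ..., "score": ...}] list used by common text-classification
# pipelines, sorted highest score first.
def to_pipeline_shape(result):
    scores = result["sentiment"]["scores"]
    return [
        {"label": label, "score": score}
        for label, score in sorted(scores.items(), key=lambda kv: -kv[1])
    ]

shaped = to_pipeline_shape({
    "sentiment": {
        "label": "positive",
        "scores": {"positive": 0.71, "neutral": 0.18, "negative": 0.08,
                   "very_positive": 0.02, "very_negative": 0.01},
    }
})
```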
Can I run it on my own infrastructure?
Yes — Enterprise tier includes a containerized deployment that runs entirely in your VPC. Same API, your servers.
What uptime do you guarantee?
99.9% on Business, 99.95% on Scale, custom SLAs on Enterprise. Free and Starter are best-effort.
Next step
Sign up, copy the curl snippet, replace one token. If you need volume pricing, EU residency, on-prem, or a custom fine-tune — we are one email away.