
Tabularis Sentiment API

Sentiment & emotion in 23 languages.

Specialized multilingual classifiers for customer feedback, support tickets, reviews, and social streams. Sub-second responses, transparent per-call pricing. No prompting, no hallucinations, no large LLMs in your hot path.

Priced like a utility. $19/mo for 100K calls.

5 sentiment classes · 11 emotions · 23 languages · Multi-label · Calibrated scores · GDPR-safe
~150ms warm p50 latency for a single text
23 native languages without translation drift
10–100× lower cost than prompt-based classification
10K free credits every month, no card required

Predictable in a hot path

Fixed labels, calibrated probabilities, and stable latency for routing, dashboards, and product workflows.

Small integration surface

One bearer token, one JSON payload, optional batching up to 100 texts, and no prompt or parser maintenance.

Privacy-first by default

Request text is used for inference and discarded after the response; operational logs store metadata only.

What it does

Two specialized models. One unified endpoint.

Built for one job and priced for production. Send up to 100 texts per request, receive structured labels with calibrated probabilities — no prompt engineering, no parsing.

01

Multilingual sentiment

5 classes · 23 languages

Very negative → very positive, fine-tuned on multilingual social, support, and review corpora. Natively multilingual — no translate-then-classify drift.

POST /v1/sentiment · 1 credit / chunk · Calibrated scores
02

Multi-label emotion

11 labels · multi-label

Joy, sadness, anger, fear, surprise, disgust, neutral, gratitude, frustration, love, anticipation. Returns several labels per text when warranted.

POST /v1/emotions · 4 credits / chunk · Probabilities
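The multi-label behavior above amounts to thresholding the per-label probabilities: every emotion whose score clears a cutoff is returned. A minimal sketch of the same idea on the client side (the 0.5 cutoff is an illustrative assumption, not the API's internal one):

```python
def active_labels(scores: dict, threshold: float = 0.5) -> list:
    # Keep every emotion whose probability clears the cutoff,
    # highest-scoring first. The 0.5 threshold is an assumption
    # for illustration; the API applies its own internal cutoff.
    return sorted(
        (label for label, s in scores.items() if s >= threshold),
        key=scores.get,
        reverse=True,
    )
```

On the sample response shown in the quickstart, this yields joy and frustration and drops anticipation and neutral.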
03

Unified analyze

Both tasks · one round-trip

Send text once, receive sentiment + emotion in a single response. 4.5 credits instead of 5 — bundled discount for the most common path.

POST /v1/analyze · 4.5 credits / chunk · Single round-trip
Built for one job

Stop paying $0.50/M-token LLM rates for a 5-class classifier.

LLMs are amazing at general reasoning. They are not the right tool for a deterministic classifier in a hot path. The math at scale is not subtle.

Dimension · Tabularis API · LLM with prompt
Latency (warm) · ~150 ms · 800 ms – 3 s
Cost per 1K calls · $0.19 (Starter) · $5+ (model + tokens)
Output schema · Fixed labels + calibrated probabilities · JSON-mode; hallucinated labels possible
Throughput · Predictable, scales linearly · Token-bucket caps surprise you
Multilingual · Native, 23 languages · Yes, with English-bias drift
Determinism · Same input → same output · Varies between calls
Effort · POST /v1/sentiment · Prompt design + JSON-mode + retries
Quickstart

One POST. Sentiment and emotion in the response.

Free 10K credits, no card. Tabs below show the same call in three languages — switch endpoints by changing one path: /v1/sentiment, /v1/emotions, or /v1/analyze.

POST · api.tabularis.ai

curl
curl https://api.tabularis.ai/v1/analyze \
  -H "Authorization: Bearer tab_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "texts": ["I love the product, but support was slow."],
    "tasks": ["sentiment", "emotion"],
    "return_all_scores": true
  }'

Python
import requests

resp = requests.post(
    "https://api.tabularis.ai/v1/analyze",
    headers={"Authorization": "Bearer tab_live_xxx"},
    json={
        "texts": ["I love the product, but support was slow."],
        "tasks": ["sentiment", "emotion"],
    },
    timeout=15,
)
print(resp.json())

JavaScript
const res = await fetch("https://api.tabularis.ai/v1/analyze", {
  method: "POST",
  headers: {
    Authorization: "Bearer tab_live_xxx",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    texts: ["I love the product, but support was slow."],
    tasks: ["sentiment", "emotion"],
  }),
});
const data = await res.json();
200 OK · 148 ms · 1 chunk · 4.5 credits
{
  "results": [
    {
      "text_index": 0,
      "sentiment": {
        "label": "positive", "score": 0.71,
        "scores": { "positive": 0.71, "neutral": 0.18, "negative": 0.08, "very_positive": 0.02, "very_negative": 0.01 }
      },
      "emotion": {
        "labels": ["joy", "frustration"],
        "scores": { "joy": 0.81, "frustration": 0.68, "anticipation": 0.22, "neutral": 0.05 }
      }
    }
  ],
  "usage": { "credits": 4.5, "billable_chunks": 1 },
  "request_id": "req_d11dd78cd390d1487155d9ef"
}
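A response like the one above drops straight into routing logic. A hedged sketch using the field names from the sample response (the queue names and the 0.6 frustration threshold are illustrative assumptions, not API behavior):

```python
def route(result: dict) -> str:
    # Route one /v1/analyze result. Label names ("negative",
    # "very_negative", "gratitude") come from the sample response;
    # the 0.6 threshold and queue names are illustrative assumptions.
    sentiment = result["sentiment"]["label"]
    frustration = result["emotion"]["scores"].get("frustration", 0.0)
    if sentiment in ("negative", "very_negative") or frustration > 0.6:
        return "senior_queue"
    if "gratitude" in result["emotion"]["labels"]:
        return "retention_queue"
    return "standard_queue"
```

The sample ticket above ("I love the product, but support was slow.") would land in the senior queue: overall sentiment is positive, but frustration at 0.68 clears the escalation threshold.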
Where teams plug it in

Eight high-leverage places to drop in a classifier.

From CX routing to ML pipelines — anywhere you currently parse free-text by hand or pay an LLM to do a five-class job.

Customer support tickets

Auto-route by tone — angry tickets to senior reps, gratitude to retention.

App store & product reviews

Track sentiment trends per release. Detect regressions before they hit ratings.

Survey & NPS analysis

Bucket thousands of free-text comments in seconds. Drill into the angry slice.

Social media monitoring

Brand sentiment in real time across X, Reddit, TikTok — across 23 languages, one model.

Chatbot analytics

Detect frustration before users churn. Trigger human handoff on negative + frustration.

Churn prediction features

Combine sentiment + emotion as features in your ML or risk pipeline.

Multilingual CX

One API for global feedback streams. No per-language vendor stack.

Moderation triage

Pre-filter community submissions by anger or toxicity before human review.

Coverage

~80% of the world's online conversational text.

One model per task, natively trained across all of these languages. No per-language code paths, no per-language vendors.

sentiment · 23 languages
English · Chinese (中文) · Spanish · Hindi · Arabic · Bengali · Portuguese · Russian · Japanese · German · Malay · Telugu · Vietnamese · Korean · French · Turkish · Italian · Polish · Ukrainian · Tagalog · Dutch · Swiss German · Swahili
emotion · 23 languages
English · Chinese (中文) · Spanish · Hindi · Arabic · Bengali · Portuguese · Russian · Japanese · German · Indonesian · Tamil · Vietnamese · Korean · French · Turkish · Italian · Polish · Ukrainian · Urdu · Dutch · Punjabi · Swahili
Pricing

Transparent credits. Hard caps. No surprise bills.

Credits are unified across endpoints. 1 credit = 1 text up to 1,000 characters. Free tier needs no card. Cancel anytime via Stripe portal.

Free Developer

$0/mo
10,000 credits / month

10K sentiment OR 2.5K emotion

Get free key

Starter

$19/mo
100,000 credits / month

100K sentiment / 25K emotion

Start
most popular

Growth

$99/mo
800,000 credits / month

800K sentiment / 200K emotion

Start

Business

$249/mo
2,500,000 credits / month

2.5M sentiment / 625K emotion

Start

Scale

$799/mo
10,000,000 credits / month

10M sentiment / 2.5M emotion

Start

Enterprise

Custom
Custom credits / month

Volume, SLA, on-prem, DPA

Talk to sales
Endpoint costs: /v1/sentiment 1 cr · /v1/emotions 4 cr · /v1/analyze 4.5 cr (bundled)
Chunk size: 1,000 Unicode characters · longer texts split proportionally · max 2,000 chars / text
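As a sanity check on the chunk rules, a small cost estimator. One assumption to flag: the notes say longer texts "split proportionally", and this sketch rounds partial chunks up to whole chunks, which may slightly overestimate the actual charge:

```python
import math

# Per-endpoint credit costs from the pricing notes above.
COST = {"sentiment": 1.0, "emotions": 4.0, "analyze": 4.5}

def credits(text: str, endpoint: str = "sentiment") -> float:
    # Texts are capped at 2,000 characters per the chunk-size note.
    if len(text) > 2000:
        raise ValueError("texts are capped at 2,000 characters")
    # 1 chunk = up to 1,000 Unicode characters. Rounding up here is a
    # conservative assumption; actual billing may be proportional.
    chunks = max(1, math.ceil(len(text) / 1000))
    return chunks * COST[endpoint]
```

So a 1,500-character review costs at most 2 credits on /v1/sentiment and 9 on /v1/analyze under this rounding.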
Privacy by construction

Your customers' text never leaves the request lifecycle.

Inference runs in-memory on dedicated infrastructure. The text is discarded the moment the response is returned. No log, no warehouse, no backup contains your customers' content.

stored as metadata
  • Timestamp
  • Endpoint hit
  • HTTP status
  • API key ID
  • Character count
  • Credits charged
  • Latency
never stored
  • The text you sent
  • End-user PII
  • Full model output beyond returned labels
  • End-user IP addresses
for compliance
  • Not a sub-processor of your end-users' content
  • GDPR-safe by construction — no personal data persists
  • DPA available on Business+ plans
  • EU residency, on-prem, no-metadata mode → Enterprise
Common questions

Practical answers before you write a line of code.

What is under the hood?

Our classifiers are built on state-of-the-art multilingual transformer research and continuously fine-tuned on real customer feedback, support, and review corpora at scale. You get production-grade inference — always-warm, metered, with API key management and audit logs — without standing up your own GPU fleet.

How is it different from OpenAI / Claude with a sentiment prompt?

Specialized models are 10× faster, 10–100× cheaper, return structured probabilities (no hallucinated labels), and do not drift between API calls. LLMs are amazing at general reasoning; a specialized classifier is the right tool for a deterministic job in a hot path.

Can I use it for languages outside the 23 listed?

The model will still return a result, but accuracy degrades on out-of-distribution languages. For low-resource languages we recommend the Enterprise tier where we can fine-tune on your data.

What about batching for offline jobs?

Send up to 100 texts per request via the texts array. For very large jobs (>1M docs), the Batch API is on the roadmap — contact us for early access.
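Until the Batch API ships, client-side chunking covers large offline jobs. A minimal sketch of splitting a corpus into API-sized requests:

```python
def batched(texts: list, size: int = 100):
    # The texts array accepts up to 100 items per request,
    # so larger offline jobs need client-side chunking.
    for i in range(0, len(texts), size):
        yield texts[i:i + size]
```

Each yielded list goes straight into the `"texts"` field of one POST; 250 documents become three requests of 100, 100, and 50.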

Do credits roll over?

No — credits reset each billing period. Pricing is calibrated so most users do not hit their cap.

How do I migrate from an existing text-classification stack?

Change one URL and one auth header. Our response shape mirrors standard text-classification pipelines, so most integrations swap in by replacing the endpoint and key — no schema rewrites.
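If downstream code expects the list-of-dicts shape that most text-classification pipelines emit, a small adapter over one result (field names taken from the sample response above) is enough:

```python
def to_pipeline_format(result: dict) -> list:
    # Reshape one sentiment result into the common
    # [{"label": ..., "score": ...}] list that typical
    # text-classification pipelines return, sorted by score.
    return sorted(
        ({"label": l, "score": s}
         for l, s in result["sentiment"]["scores"].items()),
        key=lambda d: d["score"],
        reverse=True,
    )
```

Run on the sample response, the first entry is the top class, positive at 0.71, and the rest follow in descending order.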

Can I run this on-premise?

Yes — Enterprise tier includes a containerized deployment that runs entirely in your VPC. Same API, your servers.

What is the SLA?

99.9% on Business, 99.95% on Scale, custom SLAs on Enterprise. Free and Starter are best-effort.

Next step

Free 10K credits. No card. Five minutes to first call.

Sign up, copy the curl snippet, replace one token. If you need volume pricing, EU residency, on-prem, or a custom fine-tune — we are one email away.