◆ Section IV AI  ·  Applied research

Intelligence,
where it earns its keep.

We don't ship AI for its own sake. We embed it where it removes real drudgery — reading 200-page case files, pre-filling claim forms, catching an anomaly in a ledger at 3 am. Small models doing quiet work, inside software you already trust.

The approach

Build where it
matters most.

Most AI in enterprise software is theatre — a chat widget bolted onto a dashboard nobody asked for. Ours is different: we begin with the twenty minutes your staff spends every morning on a task a model could do in ten seconds, and we replace that. Quietly, measurably, with human review at every step that matters.

We use the right size of model for the job — a small classifier for a single field, a tuned LLM for summarisation, a vision model for forms. And we deploy on-premise or in Indian cloud regions when your data demands it.

Capabilities

Six applied
disciplines.

AI — 01

Document intelligence

Prescriptions, lab reports, discharge summaries, invoices, purchase orders — extracted into structured data, validated, and posted to the right module automatically.
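The "validated" step can be made concrete. A minimal sketch in plain Python, assuming hypothetical field names and validation rules rather than a real schema:

```python
import re
from datetime import datetime

def _parsable_date(v: str) -> bool:
    try:
        datetime.strptime(v, "%Y-%m-%d")
        return True
    except ValueError:
        return False

# Illustrative per-field rules; failures route to human review
# instead of being posted automatically.
RULES = {
    "invoice_no": lambda v: bool(re.fullmatch(r"[A-Z]{2,4}-\d{4,8}", v)),
    "date": lambda v: _parsable_date(v),
    "amount": lambda v: v.replace(".", "", 1).isdigit() and float(v) > 0,
}

def validate(extracted: dict) -> dict:
    """Return per-field pass/fail for an extracted record."""
    return {field: rule(extracted.get(field, "")) for field, rule in RULES.items()}

result = validate({"invoice_no": "INV-20240117",
                   "date": "2024-01-17",
                   "amount": "14500.00"})
print(result)  # → {'invoice_no': True, 'date': True, 'amount': True}
```

Only records that pass every rule move on to posting; the rest queue for a person.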

AI — 02

Clinical prediction

Readmission risk, sepsis early warning, no-show probability — narrow, explainable models trained on your hospital's own history, with human-in-the-loop review.

AI — 03

Conversational agents

Multilingual bots for admission enquiries, patient FAQs, leave applications and expense claims — grounded in your policies, not a generic internet corpus.

AI — 04

Computer vision

Biometric verification, ID card OCR, signature matching, X-ray triage support and crowd-density estimation — deployed at the edge where latency matters.

AI — 05

Forecasting & anomaly detection

Inventory demand, cash flow, attendance trends, fraud detection — time-series models that flag the strange before it becomes expensive.
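The simplest version of "flag the strange" is a rolling z-score. A toy sketch of the idea, not a production detector; the ledger figures are invented:

```python
from statistics import mean, stdev

def flag_anomalies(series, window=7, threshold=3.0):
    """Flag indices that sit more than `threshold` standard
    deviations from the mean of the preceding `window` points."""
    flags = []
    for i in range(window, len(series)):
        past = series[i - window : i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Steady daily figures, then one wildly out-of-range posting.
ledger = [100, 102, 98, 101, 99, 103, 100, 500]
print(flag_anomalies(ledger))  # → [7]
```

Production models add seasonality and trend terms, but the shape is the same: score each point against recent history, surface only the outliers.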

AI — 06

Retrieval & search

Semantic search across patient histories, policy documents, academic papers and support tickets — answering questions, not just returning links.

Case in practice

Claim submission,
cut by 68%.

A multi-specialty hospital we work with was losing six hours a day to manual insurance claim preparation — transcribing doctor notes, matching diagnosis codes, attaching bills, uploading to TPA portals. We built a pipeline that does most of it.

STAGE ⋅ 01

Ingest

Discharge summary, lab reports and bills pulled from HIMS as the patient checks out. No re-upload.

STAGE ⋅ 02

Extract

A fine-tuned model pulls ICD-10 diagnoses, procedure codes and amounts, matching them to TPA templates.

STAGE ⋅ 03

Review

A claims officer reviews flagged fields only — never the whole form. The model's confidence score decides what to flag.
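The routing rule is deliberately simple. A hedged sketch; the field names and the 0.92 threshold are invented for illustration, not the deployed values:

```python
# Each extracted field carries a model confidence; only fields
# below the threshold reach the claims officer.
REVIEW_THRESHOLD = 0.92

def fields_to_review(extraction: dict) -> list:
    """extraction maps field -> (value, confidence)."""
    return [f for f, (_, conf) in extraction.items()
            if conf < REVIEW_THRESHOLD]

claim = {
    "icd10_code":  ("J18.9", 0.99),
    "procedure":   ("93000", 0.97),
    "bill_amount": ("18,450", 0.81),   # low confidence → flagged
}
print(fields_to_review(claim))  # → ['bill_amount']
```

The officer sees one flagged field, not a forty-field form.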

STAGE ⋅ 04

Submit

Automated portal submission with audit trail. Six hours of work now takes about twenty minutes.

68%
Reduction in claims prep time
97.4%
Field-level extraction accuracy
₹14L
Estimated annual recovery
0
Staff reassigned elsewhere

Tooling

Opinionated
stack.

A short list of tools we use well. We avoid the framework of the week — we pick mature libraries, deploy them carefully, and document the architecture for your team.

MODELS

Claude, Llama, Mistral

Closed and open-weight models, selected per task. On-prem deployment supported.

VISION

YOLO, Tesseract, PaddleOCR

Fine-tuned detectors for Indian document layouts and form vocabularies.

CLASSICAL ML

XGBoost, scikit-learn

For tabular tasks where a small explainable model outperforms a large one.

INFRA

Ray, vLLM, Triton

Serving, batching and scaling — tuned to your throughput and latency budget.

VECTOR

Qdrant, pgvector

Retrieval layers for semantic search and retrieval-augmented generation.
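Under the hood, retrieval is a similarity ranking. A self-contained sketch of the idea in plain Python, with made-up three-dimensional embeddings standing in for real ones; in production this ranking runs inside Qdrant or pgvector:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, corpus, k=2):
    """corpus: list of (doc_id, embedding). Return best-matching ids."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

corpus = [
    ("discharge_note_17", [0.9, 0.1, 0.0]),
    ("leave_policy_v3",   [0.1, 0.9, 0.1]),
    ("lab_report_202",    [0.8, 0.2, 0.1]),
]
print(top_k([1.0, 0.0, 0.0], corpus))
# → ['discharge_note_17', 'lab_report_202']
```

The retrieved passages then ground the model's answer, which is what turns search results into answers.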

ORCHESTRATION

LangGraph, Temporal

Durable, inspectable workflow engines — never opaque chains.

EVAL

Ragas, Langfuse

Pre-production and in-production evaluation — we measure what ships.
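The measuring itself can be as plain as a gold-set comparison. A toy sketch, with invented records, of how a field-level accuracy number is computed:

```python
def field_accuracy(predictions, gold):
    """Fraction of gold-labelled fields the model got exactly right."""
    correct = total = 0
    for pred, truth in zip(predictions, gold):
        for field, expected in truth.items():
            total += 1
            correct += pred.get(field) == expected
    return correct / total

pred = [{"diagnosis": "J18.9", "amount": "18450"}]
gold = [{"diagnosis": "J18.9", "amount": "18540"}]
print(field_accuracy(pred, gold))  # → 0.5
```

Run on a held-out gold set before launch, then continuously on sampled production traffic to catch drift.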

DEPLOYMENT

Docker, K8s, on-prem

From a single GPU box in your server room to multi-region Kubernetes.

Principles

How we think
about AI.

⊕ 01

Smaller is better

Use the smallest model that solves the problem. A 300-million-parameter classifier beats a frontier LLM on a narrow task — cheaper, faster, more auditable.
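Taken to its extreme, the smallest "model" is sometimes a keyword rule. An illustrative sketch; the labels and keywords are made up:

```python
# A narrow routing task that needs no neural network at all:
# decide whether a document line is billing or clinical text.
KEYWORDS = {
    "billing":  {"invoice", "amount", "gst", "total"},
    "clinical": {"diagnosis", "icd", "prescribed", "symptoms"},
}

def route(line: str) -> str:
    words = set(line.lower().split())
    scores = {label: len(words & kw) for label, kw in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(route("Total invoice amount incl. GST"))  # → 'billing'
```

When a rule like this stops being enough, the next step up is a small trained classifier, not a frontier model.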

⊕ 02

Human in the loop

For anything consequential — a clinical decision, a financial posting, a hiring signal — the model suggests and a person confirms. We build for that shape by default.

⊕ 03

Evaluation first

We don't ship a model until we can say what "good" looks like, numerically. And we keep measuring after deployment — because model behaviour drifts.

⊕ 04

Your data stays yours

On-premise deployment available for every capability. For cloud, we use Indian regions and sign DPAs that keep your data out of training pipelines.

Explore

Have a repetitive workflow that could be genuinely automated?

Talk to us