JonyGPT
Featured Project: Agentic AI

Building AI systems that regulated industries can actually trust.

ClariTrial is a clinical trial intelligence platform. It pulls live data from ChEMBL, UniProt, ClinicalTrials.gov, and PubMed, then lets users query it through both deterministic panels and an agentic AI chat. The architecture is the interesting part: a three-layer system designed so the AI can never hallucinate data.

The same pattern applies anywhere accuracy is non-negotiable: legal research, financial compliance, scientific publishing, regulatory submissions. The core insight is simple. Separate the data layer from the reasoning layer. Let AI orchestrate verified queries. Never let it generate facts.

Live Informatics Demo · Try the Agentic Chat · ClariTrial.com

The problem with AI in high-stakes industries

LLMs generate plausible text, not verified facts. For consumer apps, that is fine. For industries where a wrong answer has real consequences, it is dangerous.

Clinical Trials & Science

A hallucinated trial result could mislead researchers, waste years of work, or put patients at risk. Scientists need exact data from ChEMBL, UniProt, and ClinicalTrials.gov, not plausible-sounding approximations.

Risk: Patient safety. Wasted R&D spend. Retracted publications.

Legal

Lawyers have been sanctioned for citing AI-generated case law that never existed. Legal research demands exact citations, verified precedents, and auditable reasoning chains.

Risk: Court sanctions. Malpractice liability. Client harm.

Finance & Compliance

Fabricated financial figures, invented regulatory citations, or incorrect compliance guidance can trigger SEC investigations, failed audits, and fiduciary breaches.

Risk: Regulatory fines. Failed audits. Investor losses.

Research & Intelligence

Analysts need to synthesize across dozens of sources without the AI filling gaps with confident fiction. Every claim needs a traceable source, not a probability distribution.

Risk: Bad decisions. Lost credibility. Missed signals.

Three layers. Zero hallucination surface.

The architecture separates concerns so the AI can reason about data without ever generating it. Each layer has a clear boundary and a single job.

Layer 1

Ground Truth (ETL)

Batch pipelines build curated, validated datasets from authoritative sources. ChEMBL bioactivity data, UniProt protein records, AACT clinical trial registries, PubMed literature. Every record is validated, flagged for quality, and stored with full provenance.

Python ETL with validation and outlier detection
Paginated ingestion with duplicate flagging
Parquet output for downstream analysis
Airflow DAGs for scheduled refresh
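A minimal sketch of the Layer 1 validation step. This is illustrative only: the `Record` shape, flag names, and z-score threshold are assumptions, not ClariTrial's actual pipeline, which also handles pagination, Parquet output, and Airflow scheduling.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    record_id: str
    value: float
    source: str                       # provenance: which API or dataset it came from
    flags: list = field(default_factory=list)

def validate(records, z_thresh=3.0):
    """Flag duplicates and simple z-score outliers; keep every record with provenance."""
    seen = set()
    values = [r.value for r in records]
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
    for r in records:
        if r.record_id in seen:
            r.flags.append("duplicate")
        seen.add(r.record_id)
        if abs(r.value - mean) / std > z_thresh:
            r.flags.append("outlier")
    return records
```

The point of flagging rather than dropping is provenance: downstream layers can see why a record was excluded.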
Layer 2

Deterministic Queries (SQL/REST)

Fixed, allowlisted queries return exact, repeatable answers. No ad-hoc SQL. No prompt injection surface. The same query today returns the same structure tomorrow. These presets power both the web UI panels and the agentic chat tools.

Allowlisted SQL presets (degrader, glue, kinase, combined)
Live PostgreSQL queries against AACT replica
REST calls to ChEMBL and UniProt with caching
Identical code paths for UI and agent tools
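The allowlist pattern above can be sketched in a few lines. The preset keys come from the article; the SQL bodies and the `run_preset` helper are placeholders, not ClariTrial's actual queries.

```python
# Allowlisted preset registry: callers pick a key, never supply SQL.
# The SQL strings below are illustrative placeholders.
PRESETS = {
    "degrader": "SELECT nct_id, phase FROM trials WHERE mechanism = 'degrader'",
    "glue":     "SELECT nct_id, phase FROM trials WHERE mechanism = 'molecular glue'",
    "kinase":   "SELECT nct_id, phase FROM trials WHERE target_class = 'kinase'",
}

def run_preset(name: str, execute):
    """Reject anything not in the allowlist; no ad-hoc SQL ever reaches the database."""
    if name not in PRESETS:
        raise ValueError(f"unknown preset: {name!r}")
    return execute(PRESETS[name])
```

Because the UI and the agent tools call the same `run_preset` path, there is no second, weaker route to the data.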
Layer 3

Agentic Orchestration

When questions are messy or multi-step, a lead agent (Claude Sonnet) orchestrates specialist subagents. The lead reasons about what to ask and how to compose answers, but never generates data. It can only access what layers 1 and 2 provide.

Lead agent: Claude Sonnet 4.6 or GPT-4o
Specialist subagents: Claude Haiku 4.5 or GPT-4o-mini
Scoped tools per specialist (trial, literature, comparison)
Answer trace and versioned prompts in audit logs

The orchestration model

A lead agent receives the user question and decides which specialist subagents to consult. Each specialist has scoped tools and a single domain. The lead composes the final answer from structured tool output, never from its own knowledge.

Lead Agent

Receives the user question. Decides which specialists to call and in what order. Composes the final answer from structured tool output. Never runs ad-hoc SQL. Never fabricates data.

Model: Claude Sonnet 4.6. Tools: queryAactPreset, consultSpecialist, consultSpecialists (run in parallel).
Trial Discovery

Searches ClinicalTrials.gov for trials matching criteria. Returns structured results with NCT IDs, phases, sponsors, and enrollment.

Model: Claude Haiku 4.5. Tool: searchTrials.
Trial Deep Dive

Retrieves full detail for a specific trial. Eligibility criteria, endpoints, arms, locations, and status history.

Model: Claude Haiku 4.5. Tool: getTrialDetail.
Evidence & Literature

Searches PubMed for relevant publications. Returns structured citation data with abstracts, authors, and journal information.

Model: Claude Haiku 4.5. Tool: searchPubMed.
Trial Comparison

Compares two or more trials side by side. Phase, sponsor, enrollment, endpoints, and status differences.

Model: Claude Haiku 4.5. Tool: compareTrials.
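The scoping described above can be sketched as a registry plus a dispatch guard. The specialist and tool names come from the article; the registry structure and `call_tool` helper are assumptions for illustration.

```python
# Each specialist exposes only its own tools; the lead composes answers from
# structured tool output and has no data-generating path of its own.
SPECIALISTS = {
    "trial_discovery": {"searchTrials"},
    "trial_deep_dive": {"getTrialDetail"},
    "literature":      {"searchPubMed"},
    "comparison":      {"compareTrials"},
}

def call_tool(specialist: str, tool: str, run, **kwargs):
    """Enforce tool scoping: a specialist cannot reach another specialist's tools."""
    if tool not in SPECIALISTS.get(specialist, set()):
        raise PermissionError(f"{specialist} may not call {tool}")
    return {"tool": tool, "args": kwargs, "result": run(**kwargs)}
```

Returning the tool name and arguments alongside the result is what makes the answer trace possible later.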

Why this structure creates trust

Each design decision removes a failure mode. The result is a system where every answer is traceable, every tool call is scoped, and every data point comes from a verified source.

No fabricated data

The AI reasons about data. It never generates it. Every number comes from a verified source through a deterministic query.

Full answer trace

Users can expand "how this answer was built" to see which tools were called, which presets ran, and which prompt version produced the response.

Scoped tool access

Each specialist agent can only use its own tools. The trial discovery agent cannot access literature tools. The lead cannot run ad-hoc SQL.

Versioned prompts

Every prompt has a version in metadata and audit logs. When behavior changes, you can trace exactly which prompt version caused it.
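One way to sketch that audit record, assuming a simple dict-based log (the field names and hash-prefix length here are illustrative, not ClariTrial's schema):

```python
import datetime
import hashlib

def log_answer(prompt_text: str, prompt_version: str, tool_calls: list):
    """Audit entry: which prompt version and which tool calls built an answer."""
    return {
        "prompt_version": prompt_version,
        # Content hash catches silent edits that forgot to bump the version.
        "prompt_sha256": hashlib.sha256(prompt_text.encode()).hexdigest()[:12],
        "tool_calls": tool_calls,
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```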

Deterministic first

The system tries deterministic queries before agentic reasoning. If a fixed SQL preset can answer the question, the agent layer is never invoked.
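The deterministic-first routing can be sketched as a simple dispatcher. The keyword-matching heuristic here is an assumption for illustration; the real routing logic is not described in this article.

```python
def answer(question: str, presets: dict, agent):
    """Deterministic first: if a fixed preset can answer, the agent never runs."""
    for keyword, run_query in presets.items():
        if keyword in question.lower():
            return {"layer": "deterministic", "data": run_query()}
    # Only messy, multi-step questions fall through to agentic orchestration.
    return {"layer": "agentic", "data": agent(question)}
```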

Validated ground truth

ETL pipelines validate every record before it enters the system. Outlier detection, duplicate flagging, and curation quality gates run before any agent sees the data.

The same pattern, different domains

ClariTrial applies this to clinical trials, but the architecture is domain-agnostic. Replace the data sources and the specialist agents, and the same three-layer model works anywhere accuracy matters.

Legal: Ground truth from case law databases (Westlaw, LexisNexis APIs). Deterministic queries return exact citations by jurisdiction and topic. Specialist agents for case comparison, statutory analysis, and precedent chains. Every citation is verifiable.

Finance: Ground truth from market data feeds, SEC filings, and compliance databases. Deterministic queries return exact figures by ticker, period, and filing type. Specialist agents for financial analysis, regulatory compliance, and risk assessment.

Healthcare: Ground truth from EHR systems, drug databases, and clinical guidelines. Deterministic queries return formulary data, interaction checks, and protocol specifications. Specialist agents for differential diagnosis support, treatment comparison, and guideline adherence.

The shared principle: the AI orchestrates verified data retrieval and composes answers from structured results. It never fills gaps with generated content. If the data does not exist in the verified layer, the system says so instead of guessing.
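That refuse-rather-than-guess behavior can be sketched as a guard on the composition step. The result shape with `fact` and `source` fields is a hypothetical example, not ClariTrial's actual data model.

```python
def compose_answer(tool_results: list):
    """Compose only from verified tool output; say 'no data' rather than guess."""
    verified = [r for r in tool_results if r.get("source")]
    if not verified:
        return "No verified data found for this question."
    return "; ".join(f"{r['fact']} [{r['source']}]" for r in verified)
```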

Stack and live data

31,876 ChEMBL bioactivity measurements across 3 targets
3 UniProt proteins (Swiss-Prot reviewed entries)
10+ active TPD trials (live AACT PostgreSQL)
5 code samples (Python, R, SQL, Docker, AWS)

Next.js · TypeScript · Vercel AI SDK · Claude Sonnet 4.6 · Claude Haiku 4.5 · PostgreSQL · Python · R / Shiny · Airflow · Docker · AWS CDK · ChEMBL API · UniProt API · AACT · PubMed

Need trustworthy AI for your industry?

If your team works in a field where accuracy matters and wrong answers have consequences, this architecture can help. I build these systems.

Let's talk about your use case · View all projects

© 2026 JonyGPT. All rights reserved.