Enterprise AI platform. Sovereign, on-premise, anti-hallucination by design.

Selvo Lens is a unified AI platform — generative AI capabilities and an intelligent document agent — running entirely on your infrastructure. Full AI power, no cloud subscription, no data leaving your network.

Air-gappable · Anti-hallucination · GDPR built-in · Full audit trail

Three steps to enterprise AI

No migration. No cloud accounts. Connect your data, deploy your models, and start using AI.

yourcompany.selvolens.com

What does the liability clause say in the Acme contract?

According to the Acme Corp Master Services Agreement (Section 8.2), liability is capped at 12 months of fees paid. Neither party shall be liable for indirect, incidental, or consequential damages.

Sources: Acme_MSA_2024.pdf · p.14 | Acme_Amendment_Q3.pdf · p.3
Confidence: 0.94 · Model: Qwen-2.5-32B · Route: content
01

Connect your data and deploy your models

Ingest documents — PDF, Excel, Word, CSV, scanned files. Deploy your LLM locally via Docker Compose. Organized by department, access-controlled by role.

02

Use AI your way

Run generative tasks — drafting, summarization, multi-turn conversation — or query your documents in plain language. No special syntax, no cloud API keys.

03

Get governed, audited outputs

Every output is traced, cited, and confidence-gated. If confidence is too low, the system refuses to answer rather than guess. Full audit trail on every interaction.

Enterprise AI platform built for environments where cloud is not an option

Security, governance, and compliance are foundational - not add-ons.

Air-gapped deployment

Your data never crosses your firewall

The entire AI stack - models, database, and all user data - runs on your servers. No internet connection required after deployment. Approved for classified environments and defense networks.

Zero cloud dependency · Single GPU host · Docker Compose
terminal
# Your data never crosses your firewall
$ docker compose up -d
✓ frontend running :3000
✓ backend running :8001
✓ chromadb running :8000
✓ vllm running :8080
# Network: internal only. No egress.
Generative AI Engine

Full LLM capabilities on your infrastructure

Text generation, summarization, multi-turn conversation, and agentic workflows — powered by a locally hosted LLM. No subscriptions, no API keys, no data leaving your network.

On-premise LLM · Model agnostic · No API keys
Deterministic analytics

Numbers you can defend in an audit

Financial figures and operational metrics are computed directly from your source files - not generated from model memory. Every number is traceable to the exact row it came from.

Pandas · Deterministic · Auditable
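The idea can be sketched in a few lines of Pandas. This is illustrative only, with made-up data, not the actual analytics engine: the point is that the figure comes from concrete rows, and the row indices themselves are the trace.

```python
import pandas as pd

# Illustrative sketch (not Selvo Lens internals): a figure is computed
# directly from source rows, and the exact rows used are recorded.
df = pd.DataFrame({
    "quarter": ["Q3", "Q4", "Q4", "Q4"],
    "region":  ["EU", "EU", "US", "US"],
    "amount":  [100.0, 250.0, 300.0, 150.0],
})

q4 = df[df["quarter"] == "Q4"]
result = {
    "value": float(q4["amount"].sum()),  # 700.0, identical on every run
    "source_rows": q4.index.tolist(),    # [1, 2, 3] - the rows it came from
}
```

Run the same query twice and you get the same number and the same row trace, which is what "defensible in an audit" means in practice.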
Full governance

Compliance built in, not bolted on

Department-scoped access control, consent management, data retention enforcement, and an immutable audit ledger. The evidence trail regulators require - automatically.

RBAC · GDPR · Audit ledger · Confidence gating · Fail-closed

Zero maintenance overhead

The system detects and repairs its own index automatically. No manual intervention, no re-uploads, no downtime from infrastructure changes.

Confidence gating

Below-threshold responses abstain - fail-closed by design

Model agnostic

Swap LLM, embedding model, or cross-encoder via .env

Response transparency

Every answer shows model tier, confidence, and source citations

Cross-language queries

LLM-based reranking fallback for any query language

On-premise OCR

Tesseract 5 for scanned PDFs - even blurry documents

Enterprise AI platform for industries where cloud AI is banned

Your sector cannot use ChatGPT or Claude. Selvo Lens gives your teams the full power of generative AI and intelligent document agents — on infrastructure your compliance team will approve.

Legal & Law Firms

Query discovery documents and case files without violating attorney-client privilege.

Financial Services

Analyze internal audits, KYC documents, and market reports under strict data sovereignty rules.

Defense & GovTech

Air-gapped deployment for classified and sensitive mission data. No internet required.

Manufacturing & R&D

Protect intellectual property and blueprints from being used to train public models.

Cloud AI vs. Selvo Lens

Why regulated industries choose on-premise over cloud AI.

Data Privacy
Cloud AI: Shared with provider
Selvo Lens: Zero-leak / On-premise

Math & Analytics
Cloud AI: Generates numbers from model memory
Selvo Lens: Deterministic Pandas routing

Compliance
Cloud AI: Hard to audit
Selvo Lens: Immutable ledger & GDPR built-in

Deployment
Cloud AI: Subscription / OpEx
Selvo Lens: Your infrastructure / Predictable cost

Network Requirement
Cloud AI: Always online
Selvo Lens: Air-gappable

Confidence Handling
Cloud AI: Always answers - even when wrong
Selvo Lens: Fail-closed with confidence gating

Audit Trail
Cloud AI: Limited or none
Selvo Lens: 50+ invariants + hash-chain integrity

Intelligent query routing

Every request — whether a generative task or a document query — is classified and routed to the right engine. Content questions get semantic search. Analytical questions get deterministic code execution.

Content

"What does the contract say about liability?"

Hybrid vector + BM25 search with cross-encoder reranking, then LLM synthesis with cited sources.

Analytical

"Average revenue by region for Q4"

LLM generates Pandas code against your data schema. Sandboxed execution returns deterministic results.

Filter & Lookup

"Show all rows where status is Active"

Direct DataFrame filtering and targeted record search. No LLM hallucination on structured data.

Executive Summary

"Give me an executive summary"

Multi-sheet LLM synthesis across entire documents. Produces structured overviews with key findings.

Metadata

"How many documents are uploaded?"

Collection-level metadata queries answered directly from the document ledger.

Cross-language

Queries in any language

LLM-based cross-language reranking fallback when embeddings cannot handle the query language.

Generative Tasks

"Draft a report, summarize findings, rewrite a clause"

Open-ended generative AI tasks are routed to the LLM directly. No retrieval overhead when the task is purely generative.
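The routing step above can be sketched with simple keyword rules. The real classifier is more sophisticated; the rules, patterns, and route names below are illustrative assumptions, not the shipped logic:

```python
import re

# Hedged sketch of query routing: classify a request, then dispatch it
# to the matching engine. Rules here are illustrative, not the product's.
def classify(query: str) -> str:
    q = query.lower()
    if re.search(r"\bhow many (documents|files)\b", q):
        return "metadata"      # answered from the document ledger
    if re.search(r"\b(average|total|sum|count|by region|per quarter)\b", q):
        return "analytical"    # deterministic Pandas execution
    if q.startswith("show all") or " where " in q:
        return "filter"        # direct DataFrame filtering
    return "content"           # hybrid search + LLM synthesis
```

For example, "Average revenue by region for Q4" routes to the analytical engine, while "What does the contract say about liability?" falls through to content retrieval.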

Hybrid search that adapts to each query

Every retrieval runs through dual engines with adaptive weight fusion - because metadata lookups need different retrieval weights than open-ended content questions.

Vector search - Semantic similarity via sentence-transformer embeddings in ChromaDB
BM25 keyword search - Lexical matching for exact names, IDs, and codes
Adaptive fusion - Weights shift per query type - 80% vector for content, 95% BM25 for metadata
Cross-encoder reranking - Fine-grained relevance scoring on fused results
Confidence floor - Results below threshold are rejected - never guesses
Adaptive weight fusion by query type
Content: BM25 20% · Vector 80%
Analytical: BM25 60% · Vector 40%
Filter: BM25 70% · Vector 30%
Lookup: BM25 80% · Vector 20%
Metadata: BM25 95% · Vector 5%
BM25 (keyword) · Vector (semantic)
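Weighted fusion with a fail-closed floor can be sketched as below. The weights mirror the table above; the floor value and the assumption that both engines return normalized [0, 1] scores are illustrative:

```python
# Sketch of adaptive weight fusion with a fail-closed confidence floor.
# Weights follow the per-query-type table; floor=0.5 is an example value.
WEIGHTS = {
    "content":    {"bm25": 0.20, "vector": 0.80},
    "analytical": {"bm25": 0.60, "vector": 0.40},
    "filter":     {"bm25": 0.70, "vector": 0.30},
    "lookup":     {"bm25": 0.80, "vector": 0.20},
    "metadata":   {"bm25": 0.95, "vector": 0.05},
}

def fuse(query_type, bm25_scores, vector_scores, floor=0.5):
    w = WEIGHTS[query_type]
    fused = {
        doc: w["bm25"] * bm25_scores.get(doc, 0.0)
           + w["vector"] * vector_scores.get(doc, 0.0)
        for doc in set(bm25_scores) | set(vector_scores)
    }
    # Fail closed: anything below the floor is dropped, never guessed at.
    return {d: s for d, s in fused.items() if s >= floor}
```

Note how the same raw scores survive the floor for a metadata query but not for a content query, because the weights shift per query type.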

Four containers. One GPU host. Complete sovereignty.

The entire stack runs on a single machine via Docker Compose. No cluster, no Kubernetes, no cloud.

Docker Compose · Internal Network
Frontend
Next.js · :3000

Browser-based UI for document upload, querying, and admin dashboard.

Backend
FastAPI · :8001

Query routing, classification, hybrid search, analytics engine, GDPR, governance.

ChromaDB
Vectors · :8000

Vector embeddings store. Reconstructible from upload ledger if corrupted.

vLLM Inference
GPU · OpenAI-compatible

Local LLM inference. Model-agnostic - swap via .env config per deployment.

Air-gapped · Self-healing · GPU-accelerated · 4 uvicorn workers
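Because the vLLM container exposes an OpenAI-compatible API, any OpenAI-style client can point at it over the internal network. The endpoint, port, and model name below are example values (in practice they come from `.env`):

```python
import json

# Example only: endpoint and model are deployment-specific .env values.
BASE_URL = "http://localhost:8080/v1"  # internal network, no egress

payload = {
    "model": "Qwen/Qwen2.5-32B-Instruct",  # swappable per deployment
    "messages": [
        {"role": "user", "content": "Summarize the Q3 amendment."}
    ],
    "temperature": 0.2,
}
body = json.dumps(payload)
# A client would POST body to f"{BASE_URL}/chat/completions".
```

Swapping the model is a config change, not a code change: point `model` (and the vLLM container) at a different weight set and redeploy.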

Multi-format document ingestion

Upload anything. Scanned documents are OCR-processed automatically. Excel files get agentic sheet selection.

PDF
Text + scanned OCR
Excel
Multi-sheet, agentic
Word
Paragraphs + tables
CSV
Auto-encoding
Images
OCR via Tesseract 5
Text/MD
UTF-8
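The ingestion fan-out can be pictured as dispatch on file extension. The handler names below are hypothetical labels for the behaviors listed above, not real function names from the product:

```python
from pathlib import Path

# Illustrative dispatch by extension; handler names are hypothetical.
HANDLERS = {
    ".pdf":  "pdf_text_or_ocr",        # text layer, OCR fallback
    ".xlsx": "excel_multi_sheet",      # agentic sheet selection
    ".docx": "word_paragraphs_tables",
    ".csv":  "csv_auto_encoding",
    ".png":  "tesseract_ocr",          # Tesseract 5 for images
    ".md":   "plain_text",             # UTF-8
}

def route_file(path: str) -> str:
    return HANDLERS.get(Path(path).suffix.lower(), "unsupported")
```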

What teams are saying

From teams in regulated industries running Selvo Lens in production.

Didn't expect it to work on our old scanned PDFs but it actually pulled the data correctly. The OCR handled documents our previous tools couldn't touch.

M.K.

Legal

IT approved it fast because nothing leaves our network. That never happens. The air-gap capability was the deciding factor for our compliance team.

R.S.

Compliance

We finally have an AI tool that can answer 'What was the total spend last quarter?' without making up numbers. The Pandas routing is a game changer.

J.P.

Financial Operations

Runs on commodity hardware

The entire stack - frontend, backend, LLM inference, and vector database - runs on a single GPU host via Docker Compose. No cluster required.

View pricing details
Minimum Requirements
CPU: 8 cores / 16 threads
RAM: 32 GB
GPU: 24 GB VRAM (NVIDIA)
Storage: 256 GB NVMe SSD

Recommended
CPU: 8 cores / 16 threads
RAM: 32 GB DDR4/DDR5
GPU: RTX 5090 / RTX 6000
Storage: 512 GB NVMe SSD

Before you talk to us, read this.

The questions your security, legal, and IT teams will ask - answered upfront.

Your teams want AI. Your compliance team wants control. Now you don’t have to choose.

Deploy a full AI platform — generative AI and intelligent document agents — on infrastructure you control, with governance your compliance team will approve.