NOSQL + VECTORS + TOON • ONE DATABASE • ZERO CONFIG

Database for
AI Developers

A lightweight NoSQL database with vector search, TOON format, and enterprise security built-in

25K+ ops/sec @ 1M scale
1.13M ops tested (0% errors)
0.039ms write latency
40-50% token savings
$ brew tap krishcdbry/nexadb
$ brew install nexadb
$ nexadb start
✓ Binary Protocol on port 6970 (10x faster!)
✓ JSON API on port 6969 (REST fallback)
✓ Admin UI on port 9999 (Web interface)
$ nexa -u root -p   # Interactive CLI
✓ Connected to NexaDB (Binary Protocol)
$ npm install nexaclient   # JavaScript
$ pip install nexaclient   # Python

Need detailed instructions? View full macOS guide →

WORLD'S FIRST DATABASE WITH NATIVE TOON SUPPORT

Save 40-50% on LLM Costs

TOON (Token-Oriented Object Notation) reduces token usage for GPT-4, Claude, and other LLM APIs

Traditional JSON (2,213 bytes)
[
  {
    "_id": "abc123",
    "name": "Alice Johnson",
    "email": "alice@example.com",
    "age": 28,
    "city": "San Francisco",
    "role": "engineer"
  },
  {
    "_id": "def456",
    "name": "Bob Smith",
    "email": "bob@example.com",
    "age": 34,
    "city": "New York",
    "role": "manager"
  }
]
TOON Format (1,396 bytes, -36.9%)
collection: users
documents[2]{_id,name,email,age,city,role}:
  abc123,Alice Johnson,alice@example.com,28,San Francisco,engineer
  def456,Bob Smith,bob@example.com,34,New York,manager
count: 2
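The comparison above can be reproduced with a short sketch. This is not the official TOON serializer, just a minimal approximation of the tabular idea: field names appear once in a header row instead of being repeated in every document.

```python
# Minimal sketch of the TOON idea (not the official serializer):
# field names are written once in a header, rows carry only values.
import json

def to_toon(collection, docs):
    fields = list(docs[0].keys())
    lines = [f"collection: {collection}",
             f"documents[{len(docs)}]{{{','.join(fields)}}}:"]
    for d in docs:
        lines.append("  " + ",".join(str(d[f]) for f in fields))
    lines.append(f"count: {len(docs)}")
    return "\n".join(lines)

docs = [
    {"_id": "abc123", "name": "Alice Johnson", "email": "alice@example.com",
     "age": 28, "city": "San Francisco", "role": "engineer"},
    {"_id": "def456", "name": "Bob Smith", "email": "bob@example.com",
     "age": 34, "city": "New York", "role": "manager"},
]
json_bytes = len(json.dumps(docs, indent=2))
toon_bytes = len(to_toon("users", docs))
print(f"JSON: {json_bytes} bytes, TOON: {toon_bytes} bytes")
```

The savings grow with the number of documents, since the per-row cost of repeated field names and JSON punctuation is eliminated.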

40-50% Cost Savings

Reduce LLM API costs by 40-50%. For 1M API calls, save $400-500 on GPT-4 or Claude.

Faster Processing

Less data means faster LLM responses. Get your results quicker with smaller payloads.

More Context

Fit more data in token limits. Perfect for RAG systems and long-context applications.

RUST-POWERED INTERACTIVE CLI

Your Data,
Your Terminal

MySQL-like interactive CLI built in Rust. Zero dependencies, lightning fast, works everywhere.

nexa - Interactive Terminal
$ nexa -u root -p
Password: ********

Connected to NexaDB v3.0.4
Binary Protocol: localhost:6970
Multi-Database Architecture ✓

nexa(default)> databases
✓ Found 3 database(s):
  [1] default
  [2] analytics
  [3] production

nexa(default)> use_db analytics
✓ Switched to database 'analytics'

nexa(analytics)> collections
✓ Found 3 collection(s):
  [1] events (1,000,000 docs)
  [2] users (50,000 docs)
  [3] metrics (250,000 docs)

nexa(analytics)> use events
✓ Switched to collection 'events'

nexa(analytics:events)> query {"type": "purchase"}
✓ Found 125,000 documents

nexa(analytics:events)> create_db staging
✓ Database 'staging' created

nexa(analytics:events)> help
Database: databases, use_db, create_db, drop_db
Collection: collections, use, create, query, update,
            delete, count, vector_search, help, exit

Built for Developers

Lightning Fast
Built in Rust, uses MessagePack binary protocol. 10x faster than JSON REST.
Zero Dependencies
Single standalone binary. No Python, no Node.js, no runtime required.
Cross-Platform
macOS (Intel & Apple Silicon), Linux (x86_64 & ARM64), Windows. One CLI everywhere.
Multi-Database Support
Switch databases, create/drop DBs, manage collections across databases. Prompt shows db:collection context.
Install with NexaDB (auto-included):
$ brew install nexadb
# nexa command ready to use!
LIVE DEMO

Try Vector Search

Search naturally. No exact keywords needed. This is what vector search does.


How Much Can You Save?

Calculate your LLM cost savings with TOON format

Current Monthly Cost: $150.00 (50.00M tokens × $3/1M)
With TOON Format (50% reduction): $75.00 (25.00M tokens)
Monthly Savings: $75.00
Yearly Savings: $900.00
How TOON Works

TOON (Token-Oriented Object Notation) removes redundant JSON formatting, reduces field name repetition, and uses compact syntax. Your data becomes 40-50% smaller, which means 40-50% fewer tokens sent to LLM APIs. You can use TOON with any database via jsontooncraft, or use NexaDB's built-in export.

Pricing data from vellum.ai/best-llm-for-coding (Jan 2025)
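The calculator's arithmetic is straightforward to check. The token volume and the $3/1M rate match the example shown; the 50% reduction is the calculator's assumed TOON savings, not a guaranteed figure.

```python
# Reproduces the calculator above: 50M tokens/month at $3 per 1M tokens,
# with an assumed 50% token reduction from TOON.
tokens_per_month = 50_000_000
price_per_million = 3.0
reduction = 0.50

current = tokens_per_month / 1_000_000 * price_per_million  # $150.00/month
with_toon = current * (1 - reduction)                       # $75.00/month
monthly_savings = current - with_toon
yearly_savings = monthly_savings * 12
print(f"Monthly savings: ${monthly_savings:.2f}, yearly: ${yearly_savings:.2f}")
```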

Everything You Need

LLM optimization, vector search, admin panel - all included. No extra tools needed.

Vector Search Built-in

HNSW algorithm for semantic search. 200x faster than linear scan. No need for separate Pinecone/Weaviate. Perfect for RAG and AI apps.

Binary Protocol (10x Faster)

Custom binary protocol on port 6970 is 10x faster than JSON REST APIs. Most databases only have slow HTTP/JSON. We have both.

Zero Config Setup

brew install nexadb → nexadb start → Done! No configuration files, no setup wizards, no Docker required. Pure Python, works everywhere.

Lightning Fast Queries

Advanced indexing (B-Tree, Hash, Full-text) delivers 100-200x speedup. <1ms lookups, 20K reads/sec. Fast enough for real apps.

TOON Export (Convenience)

Built-in TOON export for 40-50% LLM cost savings. This is a convenience feature: you can also use jsontooncraft or any TOON library with your JSON data.

Beautiful Admin Panel

Gorgeous UI out of the box. Query editor, TOON export, real-time monitoring. Dark/light themes. No extra tools needed.

Secure by Default

Built-in encryption, RBAC, API keys, and audit logging. Secure enough for production without complex setup. MongoDB-inspired security model.

Fast Enough for Real Apps

Not trying to beat PostgreSQL. Just fast enough for MVPs and production apps with thousands of users.

Speed Improvements

HNSW Vector Search200x
B-Tree Range Query150x
Hash Index Lookup180x
Full-Text Search120x

Real Numbers

25.5K/s
@ 1M scale (binary protocol)
124K/s
Direct API (no network)
1.13M
Operations tested (0% errors)
0.039ms
Write latency @ 1M scale
PRODUCTION-GRADE ENGINEERING

Built on Solid Foundations

Enterprise-grade architecture designed for performance, reliability, and scale

Performance Engine

Binary Protocol + MessagePack
10x faster than JSON REST APIs - 25K+ ops/sec @ 1M scale
Dual MemTable Architecture
Non-blocking writes during flush - 2x write throughput
LRU Cache (10K items)
80%+ cache hit rate for hot reads - avoids disk I/O
SortedDict MemTable
O(log n) inserts vs O(n log n) - 5x faster writes

Storage & Indexing

LSM-Tree Storage Engine
Write-optimized with efficient reads - production-proven architecture
Bloom Filters
95% reduction in useless disk reads - 1% false positive rate
B-Tree Secondary Indexes
O(log n) queries instead of O(n) - 150x faster range scans
Hash Indexes
O(1) exact-match lookups - 180x faster than full scan
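The Bloom filter idea above can be sketched in a few lines. This is an illustrative toy (sizes and hash scheme are assumptions, not NexaDB internals): when the filter says a key is absent, it is definitely absent, so the SSTable on disk never needs to be read; false positives merely cause an occasional wasted read.

```python
# Toy Bloom filter: a membership test with no false negatives.
# Parameters are illustrative, not NexaDB's actual configuration.
import hashlib

class BloomFilter:
    def __init__(self, size=8192, hashes=4):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size // 8)

    def _positions(self, key):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        # False means "definitely not on disk" -- skip the read entirely.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))   # True
print(bf.might_contain("user:999"))
```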

AI & Vector Search

HNSW Vector Index
200x faster than brute-force - <1ms searches on 100K vectors
Cosine Similarity
Built-in semantic search for AI/ML workloads
Auto-Indexing
Automatically creates vector indexes on inserts
Persistent Vector Storage
Vectors saved to disk with automatic recovery
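Cosine similarity, the metric named above, is simple to compute directly: 1.0 means the vectors point the same way, 0.0 means they are orthogonal. A pure-Python version shows the math; an HNSW index accelerates finding the nearest neighbours under this metric rather than changing it.

```python
# Cosine similarity between two embedding vectors.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = [0.1, 0.8, 0.3]
doc = [0.2, 0.7, 0.4]
print(round(cosine_similarity(query, doc), 4))
```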

Reliability & Durability

Write-Ahead Log (WAL)
Every write logged before applying - max 10ms data loss window
Automatic Crash Recovery
WAL replay on startup - zero data loss on clean shutdown
Background Fsync
Batched disk writes every 10ms - durability + performance
Immutable SSTables
No partial writes - atomic file creation for safety
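The WAL pattern described above can be sketched minimally: append and sync each record to a log before applying it in memory, then replay the log on startup. This is an illustrative sketch of the general technique, not NexaDB's actual log format (and NexaDB batches its fsyncs on a 10ms timer rather than syncing per write).

```python
# Minimal write-ahead-log sketch: append + fsync before applying,
# replay on startup to recover. Format is illustrative only.
import json, os, tempfile

class WriteAheadLog:
    def __init__(self, path):
        self.path = path
        self.f = open(path, "a", encoding="utf-8")

    def append(self, record):
        self.f.write(json.dumps(record) + "\n")
        self.f.flush()
        os.fsync(self.f.fileno())  # per-write sync; real engines batch this

    def replay(self):
        with open(self.path, encoding="utf-8") as f:
            return [json.loads(line) for line in f]

path = os.path.join(tempfile.mkdtemp(), "wal.log")
wal = WriteAheadLog(path)
wal.append({"op": "put", "key": "user:1", "value": "Alice"})
wal.append({"op": "put", "key": "user:2", "value": "Bob"})
print(wal.replay())
```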

Query Optimization

Cost-Based Query Optimizer
Automatically chooses index vs full scan - intelligent execution plans
Composite Indexes
Multi-field indexes for complex queries
Query Explain Plans
See exactly how queries execute - optimize your workloads
Predicate Reordering
Most selective filters first - minimizes data scanned
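Predicate reordering boils down to sorting filters by estimated selectivity. The sketch below uses made-up selectivity estimates; a real optimizer derives them from index statistics.

```python
# Predicate reordering: run the most selective filter first so later
# predicates see fewer rows. Selectivities here are assumed estimates.
predicates = [
    ("role == 'engineer'", 0.30),  # matches ~30% of rows
    ("city == 'SF'",       0.05),  # matches ~5% of rows
    ("age > 18",           0.95),  # matches ~95% of rows
]
ordered = sorted(predicates, key=lambda p: p[1])  # most selective first
print([name for name, _ in ordered])
```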

Security & Access Control

AES-256-GCM Encryption
Data encrypted at rest - enterprise-grade security
RBAC (Role-Based Access)
Fine-grained permissions - admin/user/readonly roles
Binary Protocol Authentication
Secure handshake on both HTTP and binary protocols
API Key Management
Generate and revoke keys - audit trail included

Why NexaDB is Production-Ready

O(log n)
Memory Lookups
SortedDict + LRU cache for instant reads
10ms
Max Data Loss
WAL with fsync every 10ms - configurable to 1ms
95%
Bloom Filter Hit
Eliminates 95% of useless disk reads
2x
Dual MemTable
Non-blocking writes during background flush
BUILT-IN ADMIN PANEL

Manage Your Data
Visually

Beautiful, modern admin interface with TOON export included. Access at http://localhost:9999

TOON Export with Statistics
One-click export with token reduction stats
Query Editor (JSON/TOON Toggle)
Switch between JSON and TOON format results
Real-time Dashboard
Monitor operations, storage, and performance
Dark/Light Themes
Beautiful modern UI with theme switching
Explore Admin Panel
Screenshots: Dashboard, Collections, Monitoring
INTERACTIVE TUTORIAL

Build Your First AI App

From zero to semantic search in 5 minutes. No ML expertise required.

Total Time
~5 min
STEP 1 OF 7

Install NexaDB

One command. No Docker. No config files. Done.

30 seconds
$ brew install nexadb

Why Developers Love NexaDB

Built for speed, simplicity, and AI apps. Not trying to replace MongoDB - just better for rapid development.

NexaDB
Vector Search Built-in
HNSW algorithm. 200x faster. No need for separate Pinecone/Weaviate.
Binary Protocol (10x Faster)
Custom binary protocol on port 6970. Most databases only have slow JSON APIs.
brew install nexadb
One command install. No Docker, no config files, no setup wizard. Just works.
Beautiful Admin Panel
Gorgeous UI included. No need to install MongoDB Compass separately.
TOON Export
Built-in convenience. Or use jsontooncraft with any database.
Secure Enough
Encryption, RBAC, API keys out of the box. Not bank-level, but solid.
MongoDB
Industry Standard
Battle-tested for 15+ years. Used by Fortune 500 companies.
HTTP/JSON API Only
Slow JSON-based wire protocol. No efficient binary protocol option.
Complex Setup
Config files, sharding, replica sets. Takes hours to set up properly.
No LLM Optimization
Standard JSON only. Costs 40-50% more for AI applications.
Requires External Tools
Need Compass, Atlas, or other tools for admin UI.
No Vector Search
Need to integrate with Pinecone, Weaviate, or build custom solution.

Perfect For

MVPs & Prototypes

Ship in hours, not days. Zero config, admin panel included, fast enough for production. Perfect for hackathons and proving concepts quickly.

AI & RAG Apps

Vector search + TOON format = perfect for ChatGPT wrappers, semantic search, and AI chatbots. Save 40-50% on LLM costs instantly.

Fast-Moving Startups

Build fast, iterate faster. When you need to ship features daily and MongoDB feels like overkill. Production-ready but not over-engineered.

Start Building

Simple API with TOON format support - Python and JavaScript clients available, Java coming soon

JavaScript (nexaclient)
const { NexaClient } = require('nexaclient');

const client = new NexaClient({
  host: 'localhost',
  port: 6970,
  username: 'root',
  password: 'nexadb123'
});

await client.connect();

// Export in TOON format (40-50% fewer tokens)
const { toonData, stats } =
  await client.exportToon('users');

console.log('Token Reduction:',
  stats.reduction_percent + '%');
Python (nexaclient)
from nexaclient import NexaClient

client = NexaClient(
    host='localhost',
    port=6970,
    username='root',
    password='nexadb123'
)
client.connect()

# Export in TOON format
toon_data, stats = client.export_toon('users')

print(f"Token Reduction: {stats['reduction_percent']}%")
Java + Spring Boot (COMING SOON)
// Spring Boot auto-configuration
@Service
public class UserService {
    private final NexaClient client;

    public UserService(NexaClient client) {
        this.client = client;
    }

    public String createUser(String name) {
        Map<String, Object> user =
            Map.of("name", name);
        Map<String, Object> result =
            client.create("users", user);
        return (String) result.get("document_id");
    }
}

Ready to save
40-50% on LLM costs?

Join developers building AI applications with NexaDB and TOON format