The intelligence & security layer
for AI agents

Trace, observe and connect intelligence on demand.

LIVE OBSERVABILITY MAP
[Diagram: AI Agent → Auth Layer → Policy Engine → Memory, connected to Facebook Ads, Google Ads, and PostgreSQL with per-call latencies of 12ms, 180ms, 95ms, and 42ms.]

As AI grows more powerful, you need a way to observe your agents and supply them with intelligence on demand.

Five pillars of
agent infrastructure

One-click OAuth to all major platforms. Automatic token refresh, secure credential storage, and a unified API for every data source your agent needs.

Facebook
Google Ads
LinkedIn
TikTok
HubSpot
Salesforce
Gmail
Google Drive
Calendar
Postgres
Firecrawl
Instagram

Tokens are never stored in plaintext. Each credential is encrypted with a unique initialization vector and authenticated with GCM tags to prevent tampering.

AES-256-GCM Encryption
Military-grade authenticated encryption for every token
Automatic Token Refresh
Tokens are rotated before expiry—zero downtime
Zero Plaintext Exposure
Tokens are decrypted only at the moment of use, in-memory
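The rotate-before-expiry behavior can be sketched in a few lines. This is an illustrative sketch, not Datagran's implementation: the `needs_refresh` helper and the five-minute safety buffer are assumptions.

```python
from datetime import datetime, timedelta, timezone

# Assumed safety margin: rotate well before the provider's expiry time.
REFRESH_BUFFER = timedelta(minutes=5)

def needs_refresh(expires_at, now=None):
    """Return True if the token should be rotated proactively."""
    now = now or datetime.now(timezone.utc)
    return now >= expires_at - REFRESH_BUFFER

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(needs_refresh(now + timedelta(minutes=2), now))  # True: rotate now
print(needs_refresh(now + timedelta(hours=1), now))    # False: still fresh
```

Rotating inside a buffer window rather than at the expiry instant is what makes "zero downtime" possible: the old token remains valid while the new one is fetched.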

Every API call your agent makes through Datagran is traced end-to-end. See latency, token usage, which data sources were hit, and the full decision chain.

Trace trace_8f3a…c2d1 (342ms)
[Diagram: AI Agent → Datagran → Facebook Ads, Google Ads, PostgreSQL, Brain Memory, with span latencies of 12ms, 180ms, 95ms, and 42ms.]
4 spans · 1,240 tokens · success

Set granular policies per action type. Every agent request passes through the policy engine before execution—blocked actions never reach the data source.

Read campaign data: ALLOW
Query memory context: ALLOW
Update campaign budget: REVIEW
Delete user data: BLOCK

Risk scoring, human-in-the-loop approvals, and full audit trails for every policy decision.
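At its core, the allow/review/block table amounts to a lookup that fails closed. A minimal sketch, assuming a static policy table keyed by action type — the real engine, its action taxonomy, and its risk scoring are not public:

```python
# Hypothetical policy table mirroring the examples above.
POLICIES = {
    "read": "ALLOW",     # e.g. read campaign data, query memory context
    "update": "REVIEW",  # e.g. update campaign budget -> human approval
    "delete": "BLOCK",   # e.g. delete user data -> never reaches the source
}

def evaluate(action_type):
    """Return the decision for an action; unknown actions fail closed."""
    return POLICIES.get(action_type, "BLOCK")

print(evaluate("read"))     # ALLOW
print(evaluate("delete"))   # BLOCK
print(evaluate("unknown"))  # BLOCK: unrecognized actions are never executed
```

The key property is that the decision happens before execution, so a BLOCK never produces a side effect at the data source.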

Personas are AI agents that evaluate other AI agents. They simulate adversarial scenarios, test for prompt injection, data exfiltration, and policy circumvention—before your agent goes live.

Red Team

Attempts prompt injection and policy bypass

Compliance

Validates outputs against regulatory rules

Bias Checker

Flags biased or unfair targeting decisions

PII Guard

Prevents sensitive data from leaking out

Product

Datagran
Universal Memory

Give every AI agent persistent, queryable memory that scales from a single conversation to millions of interactions. Three tiers. One API call. An LLM planner decides where to store and where to search.

Short-term Memory

Always in context. A rolling summary plus recent raw entries. Every data fetch is auto-ingested as a structured DG entry.

Rolling summary: ~5k tokens
Raw entries (always intact): ~10k tokens
Auto-rollup threshold: 50k tokens

Compiled Wiki

NEW

An LLM planner evaluates every ingestion and decides what becomes durable knowledge. Structured markdown pages, interlinked, source-aware, and syncable to Obsidian.

Page kinds: entity / concept / topic / analysis
LLM planner decides: create / update / skip
Obsidian sync: pull-only plugin

Long-term Memory (RAG)

When the brain exceeds 50k tokens, overflow is embedded into vector chunks and archived for semantic search. Unlimited history, always retrievable.

Vector embeddings: ~500 tokens/chunk
Semantic search (cosine): top-K retrieval
Capacity: unlimited
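The rollup step can be sketched as simple chunking over the overflow. This is a simplified sketch assuming a pre-tokenized brain; the real pipeline uses proper tokenization and embeds each chunk into a vector store.

```python
# Thresholds match the documented defaults: rollup past 50k tokens,
# ~500 tokens per archived chunk.
ROLLUP_THRESHOLD = 50_000
CHUNK_TOKENS = 500

def chunk_overflow(tokens):
    """Split everything past the threshold into ~500-token chunks."""
    overflow = tokens[ROLLUP_THRESHOLD:]
    return [overflow[i:i + CHUNK_TOKENS]
            for i in range(0, len(overflow), CHUNK_TOKENS)]

tokens = ["tok"] * 51_200        # 1,200 tokens over the threshold
chunks = chunk_overflow(tokens)
print(len(chunks))               # 3 chunks: 500 + 500 + 200 tokens
```

Each chunk would then be embedded and archived, which is why history is unlimited but always retrievable by semantic search.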
MEMORY ARCHITECTURE
[Diagram: Raw Data → ingest → Short-term (always in context) → LLM Planner (decides what) → Wiki Compiler (create / update) → Wiki Pages (markdown + links); rollup at >50k tokens → Long-term RAG (vector search); Query API (multi-layer); Obsidian Sync.]

How it works

01

Ingest

Any data (ads, CRM, web scrapes, or raw text) is auto-ingested into short-term memory as a DG entry.

02

Planner

An LLM planner evaluates new data and decides whether to create, update, or skip wiki pages.

03

Compile Wiki

The wiki compiler turns source material into structured, interlinked markdown pages with source refs.

04

Query

Call POST /api/context/brain with a question. The planner searches across all three tiers.

05

Sync

Wiki pages sync to Obsidian via a pull-only plugin. Managed folder, incremental diffs, zero Git.

POST /api/context/brain
Request
{
  "question": "What is my Facebook account ID?",
  "endUserExternalId": "user-123",
  "mindState": "auto",
  "maxTokens": 512,
  "temperature": 0.7,
  "providers": ["facebook_ads"],
  "include": {
    "citations": true,
    "reconcile": true,
    "trace": "full"
  }
}
Response
{
  "success": true,
  "answer": "Your Facebook account ID is 12345.",
  "mode": "long_term",
  "short_term": { "raw_text": "...", "tokens": 15000, "entry_count": 25 },
  "wiki": [
    { "slug": "facebook-account", "title": "Facebook Account", "kind": "entity", "relevance": 0.87 }
  ],
  "long_term": [
    { "snippet": "Connected Facebook account 12345...", "relevance": 0.82, "provider": "facebook_ads" }
  ],
  "citations": [
    { "kind": "wiki", "ref": "WK:facebook-account", "score": 0.87 },
    { "kind": "short_term", "ref": "ST:22", "score": 0.91 },
    { "kind": "long_term", "ref": "LT:3", "score": 0.82 }
  ],
  "planner_usage": { "model": "claude-haiku-4-5", "estimated_cost_usd": 0.0002 },
  "search_trace": {
    "search_order": ["short_term", "wiki", "long_term_sources"],
    "layers": {
      "short_term": { "searched": true, "used": true },
      "wiki": { "searched": true, "used": true, "hit_count": 1 },
      "long_term_sources": { "searched": true, "used": true, "hit_count": 3 }
    }
  }
}
Mind States
auto

LLM planner searches short-term + wiki + RAG. Stops early when evidence is strong.

short_term

Returns raw brain text only. No LLM call, lowest latency.

long_term

Short-term + wiki + RAG + inference. Full answer with citations and search trace.

Semantic Evidence Weighting

When you enable include.citations or include.reconcile, every memory entry is scored using a multiplicative formula that combines semantic similarity with temporal freshness.

Scoring formula
score = semantic × (0.6 + 0.4 × freshness)
where:
semantic = cosine similarity between query embedding and entry embedding, normalized to [0, 1]
freshness = e^(−ln(2) × age_days / half_life) — exponential decay, 14-day half-life
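The formula transcribes directly into code. The constants below match the documented defaults (0.6/0.4 weighting, 14-day half-life); everything else is a plain transcription, not Datagran's internal code.

```python
import math

HALF_LIFE_DAYS = 14

def freshness(age_days):
    """Exponential decay: 1.0 when new, 0.5 at one half-life."""
    return math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)

def score(semantic, age_days):
    """score = semantic * (0.6 + 0.4 * freshness)"""
    return semantic * (0.6 + 0.4 * freshness(age_days))

# A brand-new entry keeps its full semantic score; at one half-life
# (14 days) freshness is 0.5, so the score is semantic * 0.8.
print(round(score(0.9, 0), 3))   # 0.9
print(round(score(0.9, 14), 3))  # 0.72
```

The multiplicative form means freshness can discount a match by at most 40%: a stale entry with a strong semantic match still outranks a fresh entry with a weak one.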
Short-term scoring

The last 40 DG entries are embedded alongside your query. Each gets a cosine similarity score and a freshness score based on its timestamp. Top 8 are selected.

Candidate pool: 40 entries
Top-K returned: 8
Ref format: ST:12
Long-term scoring

RAG chunks retrieved by vector similarity are re-scored with the same formula. If a rerank service is configured, results pass through a cross-encoder for higher precision.

Initial retrieval: top-K × 3
After rerank: top-K
Ref format: LT:3
Weights in the response
"memory_weights": {
  "params": {
    "short_term_candidate_limit": 40,
    "short_term_top_k": 8,
    "long_term_top_k": 8,
    "freshness_half_life_days": 14
  },
  "short_term": {
    "candidates": [
      { "ref": "ST:22", "provider": "facebook_ads",
        "semantic": 0.87, "freshness": 0.92, "score": 0.74 }
    ]
  },
  "long_term": {
    "candidates": [
      { "ref": "LT:3", "provider": "facebook_ads",
        "semantic": 0.82, "freshness": 0.65, "score": 0.59,
        "relevance": 0.82 }
    ]
  }
}

When reconcile=true, conflicting evidence across tiers is flagged and the model is instructed to prefer the most recent high-confidence source for mutable facts.
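The preference rule can be sketched as "newest entry above a confidence floor wins". The field names and the 0.7 floor below are illustrative assumptions, not documented Datagran behavior.

```python
CONFIDENCE_FLOOR = 0.7  # assumed threshold for "high-confidence"

def reconcile(candidates):
    """Pick the most recent high-confidence candidate for a mutable fact."""
    confident = [c for c in candidates if c["score"] >= CONFIDENCE_FLOOR]
    pool = confident or candidates  # fall back if nothing clears the floor
    return min(pool, key=lambda c: c["age_days"])

evidence = [
    {"ref": "LT:3",  "score": 0.82, "age_days": 30},  # archived, older
    {"ref": "ST:22", "score": 0.91, "age_days": 1},   # recent, confident
]
print(reconcile(evidence)["ref"])  # ST:22
```

The recency preference only applies to mutable facts (a budget, an account setting); immutable facts can be answered from any tier.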

Wiki API

Search compiled wiki pages directly, list pages, get a single page by slug, or trigger a recompile.

POST /api/context/wiki/search
GET /api/context/wiki/pages
GET /api/context/wiki/pages/:slug
POST /api/context/sources/search

Obsidian Sync

Pull-only sync into a managed Obsidian vault folder. No Git required. The plugin fetches manifests, diffs, and individual files through dedicated APIs.

Install once, syncs on demand or on interval
Only writes inside a managed folder (e.g. Datagran/)
Incremental: only changed files transfer
Edits in Obsidian stay local (Datagran is source of truth)

Sync

Obsidian
Wiki Sync

Your AI agent's compiled wiki, mirrored as interlinked markdown files inside your Obsidian vault. Pull-only, no Git, no config files. Install once, sync on demand.

What the Obsidian plugin does

Not a data export

The download is the plugin itself—a small app you install once into Obsidian. It's not a zip of wiki content.

Keeps your vault updated

Every time your agent ingests data, the wiki may update. Next sync pulls only changed files into your vault's managed folder.

On-demand or automatic

Click “Sync now” whenever you want, or set a background interval (e.g. every 5 minutes) in plugin settings.

Read-only mirror

Datagran is the source of truth. Edits you make to synced files in Obsidian stay local and will be overwritten on next sync.

Setup in 5 minutes

1

Download the plugin

Download the Datagran Obsidian plugin zip. This contains main.js, manifest.json, and styles.css.

Download plugin zip

The “Copy instructions for AI” button copies a markdown guide to your clipboard. Paste it into ChatGPT, Claude, or any AI assistant and it will walk you through finding the zip, locating your vault, and installing the plugin.

2

Install into Obsidian

Find your vault's location. On macOS you can check ~/Library/Application Support/obsidian/obsidian.json — vaults are often in iCloud (~/Library/Mobile Documents/iCloud~md~obsidian/Documents/), not ~/Documents.

Unzip the download into your vault's plugin folder. Create the plugins/ folder if it doesn't exist yet:

mkdir -p "/path/to/your-vault/.obsidian/plugins/datagran-obsidian"
unzip -o ~/Downloads/datagran-obsidian-v0.1.0.zip \
  -d "/path/to/your-vault/.obsidian/plugins/datagran-obsidian/"

# Verify files are directly inside (not nested in a subfolder):
ls /path/to/your-vault/.obsidian/plugins/datagran-obsidian/
# → main.js  manifest.json  styles.css

Then open Obsidian → Settings → Community plugins → turn off Restricted mode if prompted → enable Datagran Wiki Sync. You may need to restart Obsidian if the plugin doesn't appear.

3

Create a sync target + token

Use your Datagran API key to create a sync target and mint a plugin token. You'll need the target ID and token for the plugin settings.

# Create a sync target
curl -X POST 'https://YOUR_DATAGRAN_URL/api/context/obsidian/targets' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "end_user_external_id": "user_123",
    "name": "My Vault",
    "root_folder": "Datagran"
  }'
# Returns: { "target": { "id": "TARGET_UUID", ... } }

# Mint a plugin token
curl -X POST 'https://YOUR_DATAGRAN_URL/api/context/obsidian/plugin-sessions' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "end_user_external_id": "user_123",
    "target_id": "TARGET_UUID"
  }'
# Returns: { "token": "dgo_abc123...", "expires_at": "..." }
4

Configure the plugin

In Obsidian, go to Settings → Datagran Wiki Sync and fill in:

Base URL: your Datagran instance URL (e.g. https://www.datagran.io)
Plugin token: the dgo_... token from step 3
Target ID: the UUID returned when you created the sync target
Managed folder: Datagran by default—the plugin only writes inside this folder
Auto-sync: optional interval in minutes (0 = manual only)
5

Sync

Press Ctrl+P (or Cmd+P on Mac) and run Datagran: Sync now. Your vault will get:

Datagran/
  index.md              # Auto-generated index with [[wikilinks]]
  log.md                # Sync event log
  topics/
    acme-pricing.md     # Compiled wiki pages
    customer-success.md
  entities/
    zenith-competitor.md
  analysis/
    churn-2025.md

Every subsequent sync only transfers files that changed since the last sync. The plugin tracks a cursor internally so you never re-download unchanged pages.

How sync works under the hood

[Diagram: Datagran (wiki pages updated) → manifest → Sync API (diff + files) → pull .md → Plugin (writes to vault) → Obsidian Vault (Datagran/ folder) → ack cursor.]
First sync

Fetches the full manifest, downloads all files (index, log, pages), writes them into the managed folder.

Incremental sync

Sends cursor from last sync, gets only changed/deleted files, downloads and writes them, then acknowledges the new cursor.

Safety

Plugin never writes outside the managed folder. If sync fails mid-way, the cursor stays where it was—retry is safe.
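The incremental step reduces to a diff between two manifest snapshots. A minimal sketch, assuming the manifest maps file paths to content hashes — the real Sync API payloads and cursor format are internal to Datagran.

```python
def diff(previous, current):
    """Return (changed, deleted) file paths between manifest snapshots."""
    changed = [p for p, h in current.items() if previous.get(p) != h]
    deleted = [p for p in previous if p not in current]
    return changed, deleted

prev = {"Datagran/index.md": "a1",
        "Datagran/topics/acme-pricing.md": "b2"}
curr = {"Datagran/index.md": "a1",                    # unchanged: skipped
        "Datagran/topics/acme-pricing.md": "b3",      # hash changed
        "Datagran/log.md": "c4"}                      # new file

changed, deleted = diff(prev, curr)
print(changed)  # only the updated page and the new file transfer
print(deleted)  # nothing was removed
```

Because the cursor is acknowledged only after files are written, a failed sync simply replays the same diff on retry.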

Security

Encryption at
every layer

Your data, your tokens, your agent's memory—all protected with bank-grade encryption. Nothing is ever stored in plaintext.

AES-256-GCM

Every OAuth token is encrypted using AES-256 with Galois/Counter Mode. Each encryption uses a unique 96-bit initialization vector and produces an authentication tag.

// Encryption at rest
cipher = AES-256-GCM
iv = random(96 bits)
tag = authenticated
key = env.ENCRYPTION_KEY

Zero Token Exposure

Tokens are decrypted only at the instant they're needed—in memory, for the duration of the API call. They're never logged, never cached, never written to disk unencrypted.

Decrypt in memory only
Scoped to single request
No disk writes, no logs

Infrastructure Security

TLS everywhere, encrypted storage at rest, isolated compute per partner, and full audit trails for every data access.

TLS 1.3: all data in transit
RLS: row-level security per partner
AUDIT: full trace logs for compliance

Ready to build?

Sign up for the Datagran Intelligence Layer and start connecting your agents to the data sources they need.