The intelligence & security layer
for AI agents
Trace, observe and connect intelligence on demand.
In a world where AI is getting more powerful, you need a way to observe and provide intelligence on demand.
Five pillars of
agent infrastructure
One-click OAuth to all major platforms. Automatic token refresh, secure credential storage, and a unified API for every data source your agent needs.
Google Ads
LinkedIn
TikTok
HubSpot
Salesforce
Gmail
Google Drive
Calendar
Postgres
InstagramTokens are never stored in plaintext. Each credential is encrypted with a unique initialization vector and authenticated with GCM tags to prevent tampering.
Every API call your agent makes through Datagran is traced end-to-end. See latency, token usage, which data sources were hit, and the full decision chain.
Set granular policies per action type. Every agent request passes through the policy engine before execution—blocked actions never reach the data source.
Risk scoring, human-in-the-loop approvals, and full audit trails for every policy decision.
Personas are AI agents that evaluate other AI agents. They simulate adversarial scenarios, test for prompt injection, data exfiltration, and policy circumvention—before your agent goes live.
Attempts prompt injection and policy bypass
Validates outputs against regulatory rules
Flags biased or unfair targeting decisions
Prevents sensitive data from leaking out
Product
Datagran
Universal Memory
Give every AI agent persistent, queryable memory that scales from a single conversation to millions of interactions. Three tiers. One API call. An LLM planner decides where to store and where to search.
Short-term Memory
Always in context. A rolling summary plus recent raw entries. Every data fetch is auto-ingested as a structured DG entry.
Compiled Wiki
NEWAn LLM planner evaluates every ingestion and decides what becomes durable knowledge. Structured markdown pages, interlinked, source-aware, and syncable to Obsidian.
Long-term Memory (RAG)
When the brain exceeds 50k tokens, overflow is embedded into vector chunks and archived for semantic search. Unlimited history, always retrievable.
How it works
Ingest
Any data (ads, CRM, web scrapes, or raw text) is auto-ingested into short-term memory as a DG entry.
Planner
An LLM planner evaluates new data and decides whether to create, update, or skip wiki pages.
Compile Wiki
The wiki compiler turns source material into structured, interlinked markdown pages with source refs.
Query
Call POST /api/context/brain with a question. The planner searches across all three tiers.
Sync
Wiki pages sync to Obsidian via a pull-only plugin. Managed folder, incremental diffs, zero Git.
/api/context/brain{
"question": "What is my Facebook account ID?",
"endUserExternalId": "user-123",
"mindState": "auto",
"maxTokens": 512,
"temperature": 0.7,
"providers": ["facebook_ads"],
"include": {
"citations": true,
"reconcile": true,
"trace": "full"
}
}{
"success": true,
"answer": "Your Facebook account ID is 12345.",
"mode": "long_term",
"short_term": { "raw_text": "...", "tokens": 15000, "entry_count": 25 },
"wiki": [
{ "slug": "facebook-account", "title": "Facebook Account", "kind": "entity", "relevance": 0.87 }
],
"long_term": [
{ "snippet": "Connected Facebook account 12345...", "relevance": 0.82, "provider": "facebook_ads" }
],
"citations": [
{ "kind": "wiki", "ref": "WK:facebook-account", "score": 0.87 },
{ "kind": "short_term", "ref": "ST:22", "score": 0.91 },
{ "kind": "long_term", "ref": "LT:3", "score": 0.82 }
],
"planner_usage": { "model": "claude-haiku-4-5", "estimated_cost_usd": 0.0002 },
"search_trace": {
"search_order": ["short_term", "wiki", "long_term_sources"],
"layers": {
"short_term": { "searched": true, "used": true },
"wiki": { "searched": true, "used": true, "hit_count": 1 },
"long_term_sources": { "searched": true, "used": true, "hit_count": 3 }
}
}
}autoLLM planner searches short-term + wiki + RAG. Stops early when evidence is strong.
short_termReturns raw brain text only. No LLM call, lowest latency.
long_termShort-term + wiki + RAG + inference. Full answer with citations and search trace.
When you enable include.citations or include.reconcile, every memory entry is scored using a multiplicative formula that combines semantic similarity with temporal freshness.
The last 40 DG entries are embedded alongside your query. Each gets a cosine similarity score and a freshness score based on its timestamp. Top 8 are selected.
ST:12RAG chunks retrieved by vector similarity are re-scored with the same formula. If a rerank service is configured, results pass through a cross-encoder for higher precision.
LT:3"memory_weights": {
"params": {
"short_term_candidate_limit": 40,
"short_term_top_k": 8,
"long_term_top_k": 8,
"freshness_half_life_days": 14
},
"short_term": {
"candidates": [
{ "ref": "ST:22", "provider": "facebook_ads",
"semantic": 0.87, "freshness": 0.92, "score": 0.74 }
]
},
"long_term": {
"candidates": [
{ "ref": "LT:3", "provider": "facebook_ads",
"semantic": 0.82, "freshness": 0.65, "score": 0.59,
"relevance": 0.82 }
]
}
}When reconcile=true, conflicting evidence across tiers is flagged and the model is instructed to prefer the most recent high-confidence source for mutable facts.
Wiki API
Search compiled wiki pages directly, list pages, get a single page by slug, or trigger a recompile.
Obsidian Sync
Pull-only sync into a managed Obsidian vault folder. No Git required. The plugin fetches manifests, diffs, and individual files through dedicated APIs.
Datagran/)Sync
Obsidian
Wiki Sync
Your AI agent's compiled wiki, mirrored as interlinked markdown files inside your Obsidian vault. Pull-only, no Git, no config files. Install once, sync on demand.
What the Obsidian plugin does
The download is the plugin itself—a small app you install once into Obsidian. It's not a zip of wiki content.
Every time your agent ingests data, the wiki may update. Next sync pulls only changed files into your vault's managed folder.
Click “Sync now” whenever you want, or set a background interval (e.g. every 5 minutes) in plugin settings.
Datagran is the source of truth. Edits you make to synced files in Obsidian stay local and will be overwritten on next sync.
Setup in 5 minutes
Download the plugin
Download the Datagran Obsidian plugin zip. This contains main.js, manifest.json, and styles.css.
The “Copy instructions for AI” button copies a markdown guide to your clipboard. Paste it into ChatGPT, Claude, or any AI assistant and it will walk you through finding the zip, locating your vault, and installing the plugin.
Install into Obsidian
Find your vault's location. On macOS you can check ~/Library/Application Support/obsidian/obsidian.json — vaults are often in iCloud (~/Library/Mobile Documents/iCloud~md~obsidian/Documents/), not ~/Documents.
Unzip the download into your vault's plugin folder. Create the plugins/ folder if it doesn't exist yet:
mkdir -p "/path/to/your-vault/.obsidian/plugins/datagran-obsidian" unzip -o ~/Downloads/datagran-obsidian-v0.1.0.zip \ -d "/path/to/your-vault/.obsidian/plugins/datagran-obsidian/" # Verify files are directly inside (not nested in a subfolder): ls /path/to/your-vault/.obsidian/plugins/datagran-obsidian/ # → main.js manifest.json styles.css
Then open Obsidian → Settings → Community plugins → turn off Restricted mode if prompted → enable Datagran Wiki Sync. You may need to restart Obsidian if the plugin doesn't appear.
Create a sync target + token
Use your Datagran API key to create a sync target and mint a plugin token. You'll need the target ID and token for the plugin settings.
# Create a sync target
curl -X POST 'https://YOUR_DATAGRAN_URL/api/context/obsidian/targets' \
-H 'x-api-key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
"end_user_external_id": "user_123",
"name": "My Vault",
"root_folder": "Datagran"
}'
# Returns: { "target": { "id": "TARGET_UUID", ... } }
# Mint a plugin token
curl -X POST 'https://YOUR_DATAGRAN_URL/api/context/obsidian/plugin-sessions' \
-H 'x-api-key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
"end_user_external_id": "user_123",
"target_id": "TARGET_UUID"
}'
# Returns: { "token": "dgo_abc123...", "expires_at": "..." }Configure the plugin
In Obsidian, go to Settings → Datagran Wiki Sync and fill in:
dgo_... token from step 3Datagran—the plugin only writes inside this folderSync
Press Ctrl+P (or Cmd+P on Mac) and run Datagran: Sync now. Your vault will get:
Datagran/
index.md # Auto-generated index with [[wikilinks]]
log.md # Sync event log
topics/
acme-pricing.md # Compiled wiki pages
customer-success.md
entities/
zenith-competitor.md
analysis/
churn-2025.mdEvery subsequent sync only transfers files that changed since the last sync. The plugin tracks a cursor internally so you never re-download unchanged pages.
How sync works under the hood
Fetches the full manifest, downloads all files (index, log, pages), writes them into the managed folder.
Sends cursor from last sync, gets only changed/deleted files, downloads and writes them, then acknowledges the new cursor.
Plugin never writes outside the managed folder. If sync fails mid-way, the cursor stays where it was—retry is safe.
Security
Encryption at
every layer
Your data, your tokens, your agent's memory—all protected with bank-grade encryption. Nothing is ever stored in plaintext.
AES-256-GCM
Every OAuth token is encrypted using AES-256 with Galois/Counter Mode. Each encryption uses a unique 96-bit initialization vector and produces an authentication tag.
Zero Token Exposure
Tokens are decrypted only at the instant they're needed—in memory, for the duration of the API call. They're never logged, never cached, never written to disk unencrypted.
Infrastructure Security
TLS everywhere, encrypted storage at rest, isolated compute per partner, and full audit trails for every data access.
Ready to build?
Sign up for the Datagran Intelligence Layer and start connecting your agents to the data sources they need.