The intelligence & security layer for AI agents
Trace, observe, and connect intelligence on demand. As AI agents grow more powerful, you need a way to see what they are doing and to supply them with the right intelligence at the right moment.
Five pillars of agent infrastructure
One-click OAuth to all major platforms. Automatic token refresh, secure credential storage, and a unified API for every data source your agent needs.
Google Ads
LinkedIn
TikTok
HubSpot
Salesforce
Gmail
Google Drive
Calendar
Postgres
Instagram

Tokens are never stored in plaintext. Each credential is encrypted with a unique initialization vector and authenticated with GCM tags to prevent tampering.
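A minimal sketch of that scheme in Node.js, assuming AES-256-GCM with a random 96-bit IV per credential; the key handling and the { iv, ciphertext, tag } record layout are illustrative assumptions, not Datagran's actual implementation:

import { randomBytes, createCipheriv } from "node:crypto";

// Encrypt one credential: a unique 96-bit IV per token, plus a GCM auth tag
// so any tampering is detected at decrypt time.
function encryptToken(plaintext: string, key: Buffer) {
  const iv = randomBytes(12); // 96-bit IV, never reused across credentials
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // decryption fails if ciphertext or tag is altered
  return { iv, ciphertext, tag };  // assumed storage shape
}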
Every API call your agent makes through Datagran is traced end-to-end. See latency, token usage, which data sources were hit, and the full decision chain.
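As a rough illustration, a single trace record might carry fields like these; the shape below is an assumption based on the capabilities listed above, not a documented schema:

// Hypothetical trace record; all field names here are illustrative assumptions.
interface AgentTrace {
  traceId: string;
  latencyMs: number;                              // end-to-end latency of the call
  tokens: { prompt: number; completion: number }; // token usage per call
  dataSources: string[];                          // e.g. ["facebook_ads", "hubspot"]
  decisionChain: { step: string; at: string }[];  // ordered decisions with timestamps
}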
Set granular policies per action type. Every agent request passes through the policy engine before execution—blocked actions never reach the data source.
Risk scoring, human-in-the-loop approvals, and full audit trails for every policy decision.
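To make that flow concrete, here is a minimal sketch of per-action policies with risk scoring and human-in-the-loop approvals; the rule shape, action names, and thresholds are assumptions, not Datagran's actual policy engine:

// Illustrative per-action policy table. Every request is evaluated before
// execution; anything that resolves to "block" never reaches the data source.
type Verdict = "allow" | "require_approval" | "block";

const policies: Record<string, (riskScore: number) => Verdict> = {
  "crm.read": () => "allow",
  "email.send": (risk) => (risk > 0.7 ? "block" : "require_approval"), // human-in-the-loop
  "ads.update_budget": (risk) => (risk > 0.4 ? "require_approval" : "allow"),
};

function evaluate(actionType: string, riskScore: number): Verdict {
  // Unknown action types default to blocked.
  return policies[actionType]?.(riskScore) ?? "block";
}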
Personas are AI agents that evaluate other AI agents. They simulate adversarial scenarios, testing for prompt injection, data exfiltration, and policy circumvention before your agent goes live (see the sketch after this list).
Attempts prompt injection and policy bypass
Validates outputs against regulatory rules
Flags biased or unfair targeting decisions
Prevents sensitive data from leaking out
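The four personas above might be declared along these lines; the run shape, role names, and pass criteria are purely illustrative, not a documented Datagran API:

// Hypothetical persona test run against a target agent.
const redTeamRun = {
  targetAgent: "campaign-optimizer-v2", // hypothetical agent name
  personas: [
    { role: "attacker",   goal: "attempt prompt injection and policy bypass" },
    { role: "compliance", goal: "validate outputs against regulatory rules" },
    { role: "auditor",    goal: "flag biased or unfair targeting decisions" },
    { role: "dlp",        goal: "prevent sensitive data from leaking out" },
  ],
  passCriteria: { maxPolicyViolations: 0 }, // assumed gating rule
};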
Datagran Universal Memory
Give every AI agent persistent, queryable memory that scales from a single conversation to millions of interactions. Two tiers. One API call.
Short-term Memory
Always in context. A rolling summary (~5k tokens) plus the last ~10k tokens of raw entries. Every data fetch is automatically ingested as a structured DG entry.
Long-term Memory (RAG)
When the brain exceeds 50k tokens, overflow is embedded into vector chunks (~500 tokens each) and archived for semantic search. Unlimited history, always retrievable.
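A minimal sketch of the two-tier bookkeeping, using the budgets stated above (~10k-token raw window, 50k overflow threshold, ~500-token chunks); the data structures and the exact trimming rule are assumptions:

// Entries are assumed to arrive oldest-first. Once the brain exceeds the
// overflow threshold, everything older than the raw window is rolled up.
interface DGEntry { text: string; tokens: number; timestamp: string }

const RAW_WINDOW = 10_000;   // last ~10k tokens of raw entries stay in context
const OVERFLOW_AT = 50_000;  // beyond this, overflow is archived to long-term
const CHUNK_TOKENS = 500;    // vector chunk size for RAG

function rollup(entries: DGEntry[]) {
  const total = entries.reduce((n, e) => n + e.tokens, 0);
  if (total <= OVERFLOW_AT) return { shortTerm: entries, overflow: [] };
  let kept = 0;
  const shortTerm: DGEntry[] = [];
  for (const e of [...entries].reverse()) { // walk newest-first
    if (kept + e.tokens > RAW_WINDOW) break;
    kept += e.tokens;
    shortTerm.unshift(e); // keep chronological order
  }
  const overflow = entries.slice(0, entries.length - shortTerm.length);
  return { shortTerm, overflow }; // overflow is summarized, then embedded in ~CHUNK_TOKENS pieces
}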
How it works
Ingest
Every data fetch (ads, CRM, web scrapes) is auto-ingested as a timestamped DG entry into the brain.
Query
Call POST /api/context/brain with a question. We search both tiers and return grounded answers.
Rollup
When short-term exceeds 50k tokens, overflow is summarized, embedded, and archived into long-term RAG.
Reconcile
Enable include.reconcile to cross-reference both tiers, detect conflicts, and produce cited answers.
POST /api/context/brain

Request:
{
"question": "What is my Facebook account ID?",
"endUserExternalId": "user-123",
"mindState": "auto",
"maxTokens": 512,
"temperature": 0.7,
"providers": ["facebook_ads"],
"include": {
"citations": true,
"reconcile": true
}
}

Response:
{
"success": true,
"answer": "Your Facebook account ID is 12345.",
"mode": "long_term",
"short_term": {
"raw_text": "...",
"tokens": 15000,
"entry_count": 25
},
"long_term": [
{
"snippet": "Connected Facebook account 12345...",
"relevance": 0.82,
"provider": "facebook_ads"
}
],
"citations": [
{ "kind": "short_term", "ref": "ST:22", "score": 0.91 },
{ "kind": "long_term", "ref": "LT:3", "score": 0.82 }
]
}

mindState modes:
auto: Short-term + RAG search. Uses inference if configured.
short_term: Returns raw brain text only. No LLM call, lowest latency.
long_term: Short-term + RAG + inference. Full answer with citations.
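Calling the endpoint from a client might look like this sketch; the request fields match the example above, while the base URL and Authorization header are assumptions:

// Query the brain with the documented request shape. BASE_URL and the bearer
// auth scheme are assumptions; the body fields come from the example above.
const BASE_URL = process.env.DATAGRAN_BASE_URL ?? "https://api.example.com";

async function askBrain(question: string) {
  const res = await fetch(`${BASE_URL}/api/context/brain`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.DATAGRAN_API_KEY}`, // assumed auth
    },
    body: JSON.stringify({
      question,
      endUserExternalId: "user-123",
      mindState: "auto", // or "short_term" / "long_term", per the modes above
      include: { citations: true, reconcile: true },
    }),
  });
  return res.json(); // { success, answer, mode, short_term, long_term, citations }
}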
When you enable include.citations or include.reconcile, every memory entry is scored using a multiplicative formula that combines semantic similarity with temporal freshness.
The last 40 DG entries are embedded alongside your query. Each gets a cosine similarity score and a freshness score based on its timestamp. Top 8 are selected.
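In code, the scoring step might look like the sketch below. The 14-day half-life comes from freshness_half_life_days; treating the combined score as a plain product of similarity and freshness is an assumption, since the exact weighting isn't specified above.

// Score candidates by cosine similarity x exponential freshness decay,
// then keep the top k.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function freshness(ageDays: number, halfLifeDays = 14): number {
  return Math.pow(0.5, ageDays / halfLifeDays); // 1.0 now, 0.5 after 14 days
}

function topK(entries: { embedding: number[]; ageDays: number }[], query: number[], k = 8) {
  return entries
    .map((e) => ({ ...e, score: cosine(e.embedding, query) * freshness(e.ageDays) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}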
RAG chunks retrieved by vector similarity are re-scored with the same formula. If a rerank service is configured, results pass through a cross-encoder for higher precision.
LT:3"memory_weights": {
"params": {
"short_term_candidate_limit": 40,
"short_term_top_k": 8,
"long_term_top_k": 8,
"freshness_half_life_days": 14
},
"short_term": {
"candidates": [
{ "ref": "ST:22", "provider": "facebook_ads",
"semantic": 0.87, "freshness": 0.92, "score": 0.74 }
]
},
"long_term": {
"candidates": [
{ "ref": "LT:3", "provider": "facebook_ads",
"semantic": 0.82, "freshness": 0.65, "score": 0.59,
"relevance": 0.82 }
]
}
}

When reconcile=true, conflicting evidence across tiers is flagged and the model is instructed to prefer the most recent high-confidence source for mutable facts.
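One plausible reading of that preference rule in code; conflict detection itself (deciding that two entries assert different values for the same fact) is elided, and the confidence threshold is an assumption:

// Among conflicting candidates, pick the most recent one above a confidence
// threshold. The 0.6 threshold and the candidate shape are assumptions.
interface Candidate { value: string; score: number; timestamp: string }

function resolveConflict(candidates: Candidate[], minScore = 0.6): Candidate | undefined {
  return candidates
    .filter((c) => c.score >= minScore)                     // high-confidence only
    .sort((a, b) => b.timestamp.localeCompare(a.timestamp)) // newest first (ISO-8601)
    [0];
}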
Security
Encryption at every layer
Your data, your tokens, your agent's memory: all protected with AES-256 encryption. Nothing is ever stored in plaintext.
AES-256-GCM
Every OAuth token is encrypted using AES-256 with Galois/Counter Mode. Each encryption uses a unique 96-bit initialization vector and produces an authentication tag.
Zero Token Exposure
Tokens are decrypted only at the instant they're needed—in memory, for the duration of the API call. They're never logged, never cached, never written to disk unencrypted.
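A sketch of that decrypt-at-use pattern, reusing the { iv, ciphertext, tag } layout from the earlier encryption sketch; as before, this is illustrative rather than Datagran's actual code:

import { createDecipheriv } from "node:crypto";

// Decrypt a token only for the duration of a single call; the plaintext never
// leaves this function's scope.
function withDecryptedToken<T>(
  enc: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
  key: Buffer,
  use: (token: string) => Promise<T>
): Promise<T> {
  const decipher = createDecipheriv("aes-256-gcm", key, enc.iv);
  decipher.setAuthTag(enc.tag); // throws on decrypt if the tag doesn't verify
  const token = Buffer.concat([decipher.update(enc.ciphertext), decipher.final()]).toString("utf8");
  return use(token); // token lives only in memory, only for this call
}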
Infrastructure Security
TLS everywhere, encrypted storage at rest, isolated compute per partner, and full audit trails for every data access.
Ready to build?
Sign up for the Datagran Intelligence Layer and start connecting your agents to the data sources they need.