The intelligence & security layer
for AI agents

Trace, observe and connect intelligence on demand.

LIVE OBSERVABILITY MAP
AI Agent → Auth Layer → Policy Engine → Memory → Facebook Ads / Google Ads / PostgreSQL (12ms · 180ms · 95ms · 42ms)

As AI grows more powerful, you need a way to observe your agents and provide them with intelligence on demand.

Five pillars of agent infrastructure

One-click OAuth to all major platforms. Automatic token refresh, secure credential storage, and a unified API for every data source your agent needs.

Facebook
Google Ads
LinkedIn
TikTok
HubSpot
Salesforce
Gmail
Google Drive
Calendar
Postgres
Firecrawl
Instagram

Tokens are never stored in plaintext. Each credential is encrypted with a unique initialization vector and authenticated with GCM tags to prevent tampering.

AES-256-GCM Encryption
Military-grade authenticated encryption for every token
Automatic Token Refresh
Tokens are rotated before expiry—zero downtime
Zero Plaintext Exposure
Tokens are decrypted only at the moment of use, in-memory
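
As an illustration, refresh-before-expiry can be scheduled per token. The sketch below is a minimal TypeScript version of that idea; the five-minute margin and function names are assumptions, not Datagran's implementation.

// Illustrative sketch only: schedule a refresh shortly before each token expires,
// so callers never see an expired credential. The margin and names are assumptions.
const REFRESH_MARGIN_MS = 5 * 60 * 1000;

function scheduleRefresh(expiresAt: number, refresh: () => Promise<void>): void {
  const delay = Math.max(0, expiresAt - Date.now() - REFRESH_MARGIN_MS);
  setTimeout(() => {
    // Swap in the new token, then the caller schedules the next rotation.
    refresh().catch((err) => console.error("token refresh failed", err));
  }, delay);
}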

Every API call your agent makes through Datagran is traced end-to-end. See latency, token usage, which data sources were hit, and the full decision chain.

trace_8f3a…c2d1 · 342ms
AI Agent → Datagran → Facebook Ads / Google Ads / PostgreSQL / Brain Memory (12ms · 180ms · 95ms · 42ms)
4 spans · 1,240 tokens · success
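
The data behind a trace like this one can be modeled with a handful of fields. The TypeScript shape below is inferred from the widget above; the field names are assumptions, not the actual API schema.

// Hypothetical shape for a trace, inferred from the widget above.
// Field names are assumptions, not the documented API schema.
interface Span {
  target: string;      // e.g. "Facebook Ads", "PostgreSQL", "Brain Memory"
  latencyMs: number;   // per-hop latency, e.g. 12, 180, 95, 42
}

interface Trace {
  id: string;          // e.g. "trace_8f3a…c2d1"
  durationMs: number;  // end-to-end, e.g. 342
  spans: Span[];       // 4 spans in the example above
  tokens: number;      // e.g. 1,240 tokens used
  status: "success" | "error";
}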

Set granular policies per action type. Every agent request passes through the policy engine before execution—blocked actions never reach the data source.

Read campaign data: ALLOW
Query memory context: ALLOW
Update campaign budget: REVIEW
Delete user data: BLOCK

Risk scoring, human-in-the-loop approvals, and full audit trails for every policy decision.
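
To make the idea concrete, here is a minimal TypeScript sketch of a policy table using the verdicts above. The action names and default-deny evaluation are illustrative assumptions, not Datagran's implementation.

// Illustrative policy table: action types map to verdicts.
// The real policy engine supports richer matching; this only mirrors the examples above.
type Verdict = "ALLOW" | "REVIEW" | "BLOCK";

const policies: Record<string, Verdict> = {
  "campaign.read": "ALLOW",
  "memory.query": "ALLOW",
  "campaign.budget.update": "REVIEW",
  "user.data.delete": "BLOCK",
};

// Every request is evaluated before execution;
// anything not explicitly allowed is blocked by default.
function evaluate(action: string): Verdict {
  return policies[action] ?? "BLOCK";
}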

Personas are AI agents that evaluate other AI agents. They simulate adversarial scenarios, test for prompt injection, data exfiltration, and policy circumvention—before your agent goes live.

Red Team

Attempts prompt injection and policy bypass

Compliance

Validates outputs against regulatory rules

Bias Checker

Flags biased or unfair targeting decisions

PII Guard

Prevents sensitive data from leaking out

Product

Datagran Universal Memory

Give every AI agent persistent, queryable memory that scales from a single conversation to millions of interactions. Two tiers. One API call.

Short-term Memory

Always in context. A rolling summary (~5k tokens) plus the last ~10k tokens of raw entries. Every data fetch is automatically ingested as a structured DG entry.

Rolling summary: ~5k tokens
Raw entries (always intact): ~10k tokens
Auto-rollup threshold: 50k tokens

Long-term Memory (RAG)

When the brain exceeds 50k tokens, overflow is embedded into vector chunks (~500 tokens each) and archived for semantic search. Unlimited history, always retrievable.

Vector embeddings: ~500 tokens/chunk
Semantic search (cosine): top-K retrieval
Capacity: unlimited

How it works

01

Ingest

Every data fetch (ads, CRM, web scrapes) is auto-ingested as a timestamped DG entry into the brain.

02

Query

Call POST /api/context/brain with a question. We search both tiers and return grounded answers.

03

Rollup

When short-term exceeds 50k tokens, overflow is summarized, embedded, and archived into long-term RAG.

04

Reconcile

Enable include.reconcile to cross-reference both tiers, detect conflicts, and produce cited answers.
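
Steps 03 and 04 boil down to a threshold check. The sketch below uses the 50k-token threshold and ~500-token chunks from the tiers above; the helper signatures are hypothetical placeholders, not Datagran internals.

// Sketch of the rollup rule from step 03. Thresholds come from the tiers above;
// the RollupDeps helpers are hypothetical placeholders.
const ROLLUP_THRESHOLD = 50_000; // tokens
const CHUNK_TOKENS = 500;        // approximate tokens per vector chunk

interface RollupDeps {
  summarize(entries: string[]): Promise<string>;
  toChunks(text: string, tokens: number): string[];
  embedAndArchive(chunk: string): Promise<void>;
}

async function maybeRollup(
  brain: { tokens: number; overflowEntries: string[] },
  deps: RollupDeps,
): Promise<void> {
  if (brain.tokens <= ROLLUP_THRESHOLD) return;           // short-term still fits
  const summary = await deps.summarize(brain.overflowEntries);
  for (const chunk of deps.toChunks(summary, CHUNK_TOKENS)) {
    await deps.embedAndArchive(chunk);                    // long-term RAG tier
  }
}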

POST
/api/context/brain
Request
{
  "question": "What is my Facebook account ID?",
  "endUserExternalId": "user-123",
  "mindState": "auto",
  "maxTokens": 512,
  "temperature": 0.7,
  "providers": ["facebook_ads"],
  "include": {
    "citations": true,
    "reconcile": true
  }
}
Response
{
  "success": true,
  "answer": "Your Facebook account ID is 12345.",
  "mode": "long_term",
  "short_term": {
    "raw_text": "...",
    "tokens": 15000,
    "entry_count": 25
  },
  "long_term": [
    {
      "snippet": "Connected Facebook account 12345...",
      "relevance": 0.82,
      "provider": "facebook_ads"
    }
  ],
  "citations": [
    { "kind": "short_term", "ref": "ST:22", "score": 0.91 },
    { "kind": "long_term", "ref": "LT:3", "score": 0.82 }
  ]
}
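
Calling the endpoint from TypeScript is a single request. In the sketch below, the base URL and bearer-token header are assumptions about your deployment, not documented values.

// Minimal sketch of calling POST /api/context/brain with a request like the one above.
// The base URL and bearer-token auth are assumptions about your deployment.
const res = await fetch("https://api.example.com/api/context/brain", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.DATAGRAN_API_KEY}`, // assumed auth scheme
  },
  body: JSON.stringify({
    question: "What is my Facebook account ID?",
    endUserExternalId: "user-123",
    mindState: "auto",
    providers: ["facebook_ads"],
    include: { citations: true, reconcile: true },
  }),
});

const data = await res.json();
console.log(data.answer); // "Your Facebook account ID is 12345."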
Mind States
auto

Short-term + RAG search. Uses inference if configured.

short_term

Returns raw brain text only. No LLM call, lowest latency.

long_term

Short-term + RAG + inference. Full answer with citations.

Semantic Evidence Weighting

When you enable include.citations or include.reconcile, every memory entry is scored using a multiplicative formula that combines semantic similarity with temporal freshness.

Scoring formula
score = semantic × (0.6 + 0.4 × freshness)
where:
semantic = cosine similarity between query embedding and entry embedding, normalized to [0, 1]
freshness = e^(-ln(2) × age_days / half_life), an exponential decay with a 14-day half-life
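
The formula translates directly to code. The TypeScript below is a transcription of the definitions above; nothing is assumed beyond the stated constants.

// Direct transcription of the scoring formula above.
const HALF_LIFE_DAYS = 14;

// Exponential decay: freshness is 1.0 for a brand-new entry and halves every 14 days.
const freshness = (ageDays: number): number =>
  Math.exp((-Math.LN2 * ageDays) / HALF_LIFE_DAYS);

// semantic is cosine similarity between query and entry embeddings, in [0, 1].
const score = (semantic: number, ageDays: number): number =>
  semantic * (0.6 + 0.4 * freshness(ageDays));

// e.g. a 14-day-old entry has freshness 0.5, so score(0.8, 14) = 0.8 × 0.8 = 0.64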
Short-term scoring

The last 40 DG entries are embedded alongside your query. Each gets a cosine similarity score and a freshness score based on its timestamp. Top 8 are selected.

Candidate pool: 40 entries
Top-K returned: 8
Ref format: ST:12
Long-term scoring

RAG chunks retrieved by vector similarity are re-scored with the same formula. If a rerank service is configured, results pass through a cross-encoder for higher precision.

Initial retrieval: top-K × 3
After rerank: top-K
Ref format: LT:3
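
Putting the retrieval numbers together, the long-term pass can be sketched as over-retrieve, re-score, truncate. The Candidate shape and the rerank hook below are assumptions; score() comes from the previous sketch.

// Sketch of the long-term pass: over-retrieve top-K × 3 from the vector store,
// re-score with score() from the sketch above, optionally rerank, keep top-K.
interface Candidate {
  ref: string;      // e.g. "LT:3"
  semantic: number; // cosine similarity in [0, 1]
  ageDays: number;  // feeds the freshness term
}

function selectLongTerm(
  candidates: Candidate[],                   // top-K × 3 initial retrieval
  topK: number,
  rerank?: (cs: Candidate[]) => Candidate[], // optional cross-encoder pass
): Candidate[] {
  const scored = [...candidates].sort(
    (a, b) => score(b.semantic, b.ageDays) - score(a.semantic, a.ageDays),
  );
  return (rerank ? rerank(scored) : scored).slice(0, topK);
}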
Weights in the response
"memory_weights": {
  "params": {
    "short_term_candidate_limit": 40,
    "short_term_top_k": 8,
    "long_term_top_k": 8,
    "freshness_half_life_days": 14
  },
  "short_term": {
    "candidates": [
      { "ref": "ST:22", "provider": "facebook_ads",
        "semantic": 0.87, "freshness": 0.92, "score": 0.74 }
    ]
  },
  "long_term": {
    "candidates": [
      { "ref": "LT:3", "provider": "facebook_ads",
        "semantic": 0.82, "freshness": 0.65, "score": 0.59,
        "relevance": 0.82 }
    ]
  }
}

When reconcile=true, conflicting evidence across tiers is flagged and the model is instructed to prefer the most recent high-confidence source for mutable facts.

Security

Encryption at every layer

Your data, your tokens, your agent's memory—all protected with bank-grade encryption. Nothing is ever stored in plaintext.

AES-256-GCM

Every OAuth token is encrypted using AES-256 with Galois/Counter Mode. Each encryption uses a unique 96-bit initialization vector and produces an authentication tag.

// Encryption at rest
cipher = AES-256-GCM
iv = random(96 bits)
tag = authenticated
key = env.ENCRYPTION_KEY
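
For concreteness, here is the same scheme with Node's built-in crypto module: a minimal sketch assuming a 32-byte key, hex-encoded in ENCRYPTION_KEY, mirroring the card above rather than Datagran's actual code.

// Minimal AES-256-GCM sketch using Node's crypto module.
// Assumes a 32-byte key, hex-encoded in ENCRYPTION_KEY.
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

const key = Buffer.from(process.env.ENCRYPTION_KEY!, "hex"); // 32 bytes = AES-256

function encryptToken(plaintext: string) {
  const iv = randomBytes(12); // unique 96-bit IV per encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() }; // tag authenticates the ciphertext
}

function decryptToken({ iv, ciphertext, tag }: ReturnType<typeof encryptToken>): string {
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // decryption throws if the ciphertext was tampered with
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}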

Zero Token Exposure

Tokens are decrypted only at the instant they're needed—in memory, for the duration of the API call. They're never logged, never cached, never written to disk unencrypted.

Decrypt in memory only
Scoped to single request
No disk writes, no logs

Infrastructure Security

TLS everywhere, encrypted storage at rest, isolated compute per partner, and full audit trails for every data access.

TLS 1.3: all data in transit
RLS: row-level security per partner
Audit: full trace logs for compliance

Ready to build?

Sign up for the Datagran Intelligence Layer and start connecting your agents to the data sources they need.