A curated knowledge base your agent queries at inference time — not training data. Podcasts, blogs, and research papers — scraped, summarised by Claude, and structured for agent consumption. Updated Mon/Wed/Fri across AI, startups, alternative markets, and emerging economies.
# Fetch the latest knowledge for your agent
curl \
  "https://agentdb-production-9ba0.up.railway.app/v1/knowledge/latest?tags=ai&limit=3" \
  -H "X-API-Key: adb_xxxxxxxxxxxxxxxxxxxx"

# Response — structured, agent-ready
{
  "items": [{
    "title": "DeepSeek v4 and the limits of scaling",
    "summary": "DeepSeek's mixture-of-experts approach...",
    "key_points": ["MoE cuts inference cost 4×", ...],
    "tags": ["ai", "llm", "scaling"],
    "confidence": 0.93,
    "source": "Ars Technica"
  }]
}
Call the /v1/auth/register endpoint with your agent's identifier. Get an API key back instantly — no dashboard, no forms.
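From Python, the register call might be built like this — a minimal sketch. The request body field name (`agent_id`) and the response shape are assumptions; the docs above only say you send your agent's identifier and get a key back.

```python
import json
import urllib.request

BASE = "https://agentdb-production-9ba0.up.railway.app"

def build_register_request(agent_id: str) -> urllib.request.Request:
    """Build the POST to /v1/auth/register.

    NOTE: the body field name 'agent_id' is an assumption — check
    the live API for the exact request schema.
    """
    payload = json.dumps({"agent_id": agent_id}).encode()
    return urllib.request.Request(
        f"{BASE}/v1/auth/register",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_register_request("my-research-agent")
# urllib.request.urlopen(req) would then return the API key
# (response shape is also an assumption, not confirmed by the docs)
```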
Fetch the latest items or search semantically. Content comes from podcasts, blogs, and research papers — each one scraped and summarised by Claude into structured JSON.
Drop the structured summaries into your agent's RAG context. Each item is ~150 tokens vs thousands in the raw source — same signal, fraction of the cost.
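Dropping items into a RAG prompt can be as simple as rendering each one as a titled block — a sketch assuming the item shape shown in the example responses (`title`, `summary`, `key_points`, `source`):

```python
def pack_for_context(items: list[dict]) -> str:
    """Render AgentDB items as a compact context block for a prompt.

    Assumes each item carries title/summary/key_points/source fields,
    as in the example responses on this page.
    """
    lines = []
    for item in items:
        lines.append(f"## {item['title']} ({item.get('source', 'unknown')})")
        lines.append(item["summary"])
        for point in item.get("key_points", []):
            lines.append(f"- {point}")
    return "\n".join(lines)

items = [{
    "title": "DeepSeek v4 and the limits of scaling",
    "summary": "DeepSeek's mixture-of-experts approach...",
    "key_points": ["MoE cuts inference cost 4×"],
    "source": "Ars Technica",
}]
context = pack_for_context(items)
```

At ~150 tokens per item, a handful of packed items stays well under a typical context budget.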
# Latest items, optionally filtered by tag
curl \
  "https://agentdb-production-9ba0.up.railway.app/v1/knowledge/latest" \
  -H "X-API-Key: $AGENTDB_API_KEY"

# Filter by tag and page size
curl \
  "…/v1/knowledge/latest?tags=ai,startups&limit=10" \
  -H "X-API-Key: $AGENTDB_API_KEY"

# Semantic search (Pro key required)
curl \
  "…/v1/knowledge/search?q=LLM+reasoning+breakthroughs&limit=5" \
  -H "X-API-Key: $AGENTDB_API_KEY"
{
  "id": "3e7c224c-...",
  "title": "DeepSeek v4 and the limits of scaling",
  "content_type": "article",
  "summary": "DeepSeek's MoE architecture cuts inference...",
  "key_points": [
    "MoE reduces active params by 4× at inference",
    "Outperforms GPT-4o on coding benchmarks",
    "Open weights released under MIT licence"
  ],
  "tags": ["ai", "llm", "open-source"],
  "confidence": 0.93,
  "source_name": "Ars Technica",
  "published_at": "2026-04-25T07:00:00Z"
}
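A typed wrapper for the item shape above keeps downstream agent code honest — a sketch mirroring only the fields shown; any extra fields the API returns are dropped rather than raising:

```python
from dataclasses import dataclass, field, fields

@dataclass
class KnowledgeItem:
    """Mirrors the item fields shown in the example response."""
    id: str
    title: str
    summary: str
    key_points: list[str] = field(default_factory=list)
    tags: list[str] = field(default_factory=list)
    confidence: float = 0.0
    source_name: str = ""

    @classmethod
    def from_json(cls, raw: dict) -> "KnowledgeItem":
        # Keep only declared fields so unknown keys don't raise TypeError.
        known = {f.name for f in fields(cls)}
        return cls(**{k: v for k, v in raw.items() if k in known})

item = KnowledgeItem.from_json({
    "id": "3e7c224c-...",
    "title": "DeepSeek v4 and the limits of scaling",
    "content_type": "article",  # not declared above, silently dropped
    "summary": "DeepSeek's MoE architecture cuts inference...",
    "confidence": 0.93,
})
```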
Every source item is processed by Claude before it hits the API. You get a structured summary and key points — not a raw transcript your agent has to parse itself.
Lex Fridman: So let me ask you about scaling.
Some people say we're hitting a wall—
[Guest]: I think that framing fundamentally
misunderstands what we're building. Let me
unpack that. When we talk about scaling, we
need to distinguish between compute scaling,
data scaling, and algorithmic efficiency.
The three interact in non-linear ways that…
[46 more minutes of transcript]
…so the short answer is: we're not hitting
a wall, we're approaching a threshold where
the nature of progress changes qualitatively.
That's a very different thing.
{
  "title": "Lex Fridman #412 — Scaling & AGI",
  "summary": "Guest argues scaling limits are misframed; qualitative capability shifts emerge at compute thresholds.",
  "key_points": [
    "Compute, data, and algo scaling interact non-linearly",
    "Progress shifts qualitatively near thresholds",
    "'Hitting a wall' framing is misleading"
  ],
  "confidence": 0.91,
  "tags": ["ai", "scaling", "agi"]
}
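Because every item carries a `confidence` score from the summarisation pass, an agent can gate what enters its context — a sketch of one possible policy (the 0.85 threshold is an illustrative choice, not an API recommendation):

```python
def filter_confident(items: list[dict], threshold: float = 0.85) -> list[dict]:
    """Keep only items whose summarisation confidence clears the threshold.

    The 0.85 cut-off is arbitrary — tune it to how much you trust
    lower-confidence summaries in your agent's context.
    """
    return [i for i in items if i.get("confidence", 0.0) >= threshold]

items = [
    {"title": "A", "confidence": 0.91},
    {"title": "B", "confidence": 0.62},
]
kept = filter_confident(items)
```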
No wire services. No legacy media. Every source is selected because it produces original thinking — not aggregated takes. Updated Mon/Wed/Fri at 07:00 UTC.
Full machine-readable list:
GET /v1/knowledge/sources
(public, no key required)