Catalogian supports the Model Context Protocol (MCP), letting AI agents query your product catalog changes directly — no custom integration work required. Connect once and your agents can list sources, query delta events, and retrieve full row-level field data.
Claude.ai supports OAuth, so no API key or config file is needed. It works on web and desktop: configure once and it syncs everywhere.
Enter Catalogian for the name and https://api.catalogian.com/v1/mcp for the URL. After connecting, enable it per conversation: click + in the chat input → Connectors → toggle Catalogian on.
Note: Configure once on Claude.ai web — settings sync automatically to Claude Desktop and mobile.
The MCP endpoint accepts standard Streamable HTTP transport:
POST https://api.catalogian.com/v1/mcp
Authorization: Bearer <your-api-key>
Use a Catalogian API key (cat_live_...) from the Keys page in your dashboard. Create it with Read only scope — MCP only reads data, and a read-scoped key prevents any accidental writes via the REST API.
The API key method also works for Cursor, VS Code, and other MCP clients. For Claude Desktop, add the following to claude_desktop_config.json (on macOS: ~/Library/Application Support/Claude/claude_desktop_config.json), then restart Claude Desktop. Your sources will appear as available tools.
{
"mcpServers": {
"catalogian": {
"command": "npx",
"args": [
"-y",
"mcp-remote",
"https://api.catalogian.com/v1/mcp",
"--header",
"Authorization: Bearer cat_live_your_key_here"
]
}
}
}

Note: Claude Desktop requires the mcp-remote proxy for HTTP MCP servers — the type: "http" syntax is not yet supported in stable builds. For local or staging testing over HTTP, add "--allow-http" as an additional arg after the URL. Create your API key with Read only scope from the Keys page — MCP only needs read access.
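For debugging, it can help to see what actually goes over the wire. Claude and mcp-remote handle this for you; the sketch below only takes the endpoint, the Authorization header, and the list_sources tool name from this page — the envelope shape is standard MCP JSON-RPC 2.0 over Streamable HTTP.

```python
import json

API_KEY = "cat_live_your_key_here"  # placeholder key

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Streamable HTTP sends JSON and accepts either JSON or an SSE stream back.
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
    "Authorization": f"Bearer {API_KEY}",
}

body = build_tool_call("list_sources", {})
print(json.dumps(body))
```

POSTing this body with those headers to https://api.catalogian.com/v1/mcp is what any MCP client does on your behalf.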
list_sources: List all feed sources for the current user. No parameters required.
Example response:
[
{
"id": "cmmfyryl30005ygld7m5596gm",
"name": "My Product Feed",
"slug": "my-product-feed",
"status": "active",
"format": "csv",
"lastCheckedAt": "2026-03-08T20:00:00.000Z",
"vanityUrl": "https://catalogian.com/feeds/acme/my-product-feed"
}
]

get_source_by_slug: Look up a source by its human-readable slug. Returns source details, including the internal ID needed for other tools.
Input:
{ "sourceSlug": "product-catalog" }

Example response:
{
"id": "cmmfyryl30005ygld7m5596gm",
"name": "Product Catalog",
"slug": "product-catalog",
"url": "https://example.com/feed.csv",
"type": "http",
"status": "active",
"format": "csv",
"keyField": "sku",
"lastCheckedAt": "2026-03-08T20:00:00.000Z",
"createdAt": "2026-01-15T10:30:00.000Z"
}

get_delta: Get delta events for a feed source. Returns change summaries with counts and affected keys.
Input parameters:
| Param | Description |
|---|---|
| sourceId | The source ID (use this or sourceSlug) |
| sourceSlug | The source slug — alternative to sourceId |
| since | ISO 8601 timestamp — only return events after this time |
| limit | Max results (default 50, max 200) |
Example response:
[
{
"id": "evt_01j...",
"snapshotId": "snap_01j...",
"sourceId": "cmmfyryl30005ygld7m5596gm",
"detectedAt": "2026-03-08T20:00:00.000Z",
"newCount": 14,
"changedCount": 3,
"deletedCount": 0,
"unchangedCount": 49983,
"totalCount": 50000,
"newKeys": ["sku-9001", "sku-9002"],
"changedKeys": ["sku-1234"],
"deletedKeys": []
}
]

get_delta_rows: Get the actual row-level field data for a specific delta event. This is the most data-rich tool — it returns the complete field values for new, changed, and deleted rows.
Input parameters:
| Param | Description |
|---|---|
| sourceId | The source ID (use this or sourceSlug) |
| sourceSlug | The source slug — alternative to sourceId |
| deltaEventId | The delta event ID (from get_delta results) |
| changeType | One of new, changed, deleted, or all (default: all) |
| limit | Max rows per page (default 100, max 500) |
| offset | Pagination offset (default 0) |
Example response:
{
"deltaEventId": "evt_01j...",
"rows": [
{
"key": "sku-1234",
"changeType": "changed",
"fields": {
"sku": "sku-1234",
"title": "Widget Pro (Updated)",
"price": "29.99",
"availability": "in_stock"
}
},
{
"key": "sku-9001",
"changeType": "new",
"fields": {
"sku": "sku-9001",
"title": "New Gadget",
"price": "49.99",
"availability": "in_stock"
}
}
],
"total": 17,
"hasMore": false
}

get_snapshot_rows: Read rows from the current snapshot of a source. Use this to inspect live feed data — paginate with cursor for large feeds.
Input parameters:
| Param | Description |
|---|---|
| sourceId | Source ID (use get_source_by_slug to resolve from slug) |
| sourceSlug | Source slug — alternative to sourceId |
| limit | Rows per page (default 20, max 100) |
| cursor | Pagination cursor from previous response (omit for first page) |
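The cursor contract (hasMore plus nextCursor) can be drained with a simple loop. In this sketch, fetch_page is a hypothetical stand-in for calling get_snapshot_rows through your MCP client, backed by a small in-memory feed so the loop runs as-is:

```python
# Fake feed of 250 rows so the pagination loop is runnable without a server.
FEED = [{"rowKey": f"sku-{i}", "data": {"sku": f"sku-{i}"}} for i in range(250)]

def fetch_page(source_slug, limit=100, cursor=None):
    """Hypothetical stand-in for get_snapshot_rows: serve one page of rows."""
    start = 0 if cursor is None else next(
        i + 1 for i, r in enumerate(FEED) if r["rowKey"] == cursor)
    page = FEED[start:start + limit]
    return {
        "rows": page,
        "hasMore": start + limit < len(FEED),
        "nextCursor": page[-1]["rowKey"] if page else None,
    }

def all_rows(source_slug):
    """Drain every page: feed nextCursor back until hasMore is false."""
    cursor, rows = None, []
    while True:
        resp = fetch_page(source_slug, limit=100, cursor=cursor)
        rows.extend(resp["rows"])
        if not resp["hasMore"]:
            return rows
        cursor = resp["nextCursor"]

rows = all_rows("product-catalog")  # 250 rows across three pages
```

The same loop shape works for every cursor-paginated tool on this page (filter_snapshot_rows, search_snapshot, query_snapshot's filter operation).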
Example response:
{
"rows": [
{
"rowKey": "sku-1234",
"data": {
"sku": "sku-1234",
"title": "Widget Pro",
"price": "29.99",
"availability": "in_stock"
}
},
{
"rowKey": "sku-5678",
"data": {
"sku": "sku-5678",
"title": "Gadget Plus",
"price": "49.99",
"availability": "in_stock"
}
}
],
"total": 50000,
"hasMore": true,
"nextCursor": "sku-5678",
"snapshotId": "abc123...",
"snapshotCreatedAt": "2026-03-08T20:00:00.000Z"
}

sample_snapshot: Random or stratified sample from a feed snapshot. Use stratifyBy to get representative rows across brand, category, or any field. Ideal starting point for AI analysis of large feeds.
profile_snapshot: Feed-level quality report covering field cardinality, null rates, top value distributions, data type hints, and stratification recommendations. Run this first to understand a large feed before sampling or querying.
download_snapshot: Export the full current snapshot of a source as a CSV or JSON file. Returns a pre-signed download URL valid for 1 hour. Use this when you want to download, save, or analyze the complete feed data.
Input parameters:
| Param | Description |
|---|---|
| sourceId | The source ID (use this or sourceSlug) |
| sourceSlug | The source slug — alternative to sourceId |
| format | csv or json (default: csv) |
Example response:
Your file is ready: https://r2.example.com/exports/usr_01j.../src_01j.../20260310T120000Z.csv?X-Amz-...
Rows: 50000 | Format: csv | Expires: 2026-03-10T13:00:00.000Z
snapshot_schema: Get the field structure of a source's current snapshot. Returns all field names with null rates and example values. Use this first to understand what fields are available before running query_snapshot.
Input: sourceId or sourceSlug
Example response:
Snapshot schema for source cmmfyryl30005ygld7m5596gm
Snapshot: snap_01j... (created 2026-03-08T20:00:00.000Z)
Total rows: 50000 | Sampled: 100
Fields (5):
availability (null 0%) — e.g. "in_stock", "out_of_stock"
brand (null 2%) — e.g. "Acme", "BrandX", "WidgetCo"
price (null 0%) — e.g. "29.99", "49.99", "9.99"
sku (null 0%) — e.g. "sku-1234", "sku-5678"
title (null 0%) — e.g. "Widget Pro", "Gadget Plus"
query_snapshot: Run server-side aggregations on a source's full snapshot data. Handles 100K+ row feeds efficiently. Use it for unique values (distinct), counts, group-by breakdowns, min/max/avg/sum, or filtered row subsets.
Input parameters:
| Param | Description |
|---|---|
| sourceId | The source ID (use this or sourceSlug) |
| sourceSlug | The source slug — alternative to sourceId |
| operation | One of distinct, count, group_by, min, max, avg, sum, filter |
| field | Field name from snapshot_schema (required for all except count) |
| filterOperator | One of eq, neq, contains, gt, lt, gte, lte (for filter) |
| filterValue | Value to compare against (for filter) |
| limit | Max results (default 100, max 500) |
| cursor | Pagination cursor for filter operation |
Example — group by availability:
{
"sourceSlug": "product-catalog",
"operation": "group_by",
"field": "availability"
}

Response:
{
"type": "group_by",
"field": "availability",
"groups": [
{ "value": "in_stock", "count": 42150 },
{ "value": "out_of_stock", "count": 7340 },
{ "value": "preorder", "count": 510 }
]
}

filter_snapshot_rows: Filter rows from a source's current snapshot using one or more conditions (AND-joined). Handles 100K+ row feeds server-side — only matching rows are returned. Supports text, numeric, and null checks. Paginate with cursor for large result sets.
Input parameters:
| Param | Description |
|---|---|
| sourceId | The source ID (use this or sourceSlug) |
| sourceSlug | The source slug — alternative to sourceId |
| conditions | Array of filter conditions (AND-joined). Each has field, operator, and optional value |
| conditions[].operator | One of eq, neq, contains, starts_with, gt, lt, gte, lte, is_null, is_not_null |
| fields | Fields to include in output (default: all) — use to reduce response size on wide feeds |
| limit | Rows per page (default 20, max 100) |
| cursor | Pagination cursor from previous response |
Example — find SKUs where price > 50 and stock = 0:
{
"sourceSlug": "product-catalog",
"conditions": [
{ "field": "price", "operator": "gt", "value": "50" },
{ "field": "stock", "operator": "eq", "value": "0" }
],
"fields": ["sku", "title", "price", "stock"]
}

Response:
Found ~142 matching rows (showing first 20)
Row 1 (key: sku-1234):
  sku: sku-1234
  title: Widget Pro
  price: 89.99
  stock: 0
Row 2 (key: sku-5678):
  sku: sku-5678
  title: Gadget Plus
  price: 59.99
  stock: 0
...
More results available — use cursor: "clxyz..."
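To make the operator semantics concrete, here is a client-side sketch of how AND-joined conditions evaluate. It is illustrative only: the server's implementation is not specified, numeric coercion for gt/lt/gte/lte is an assumption, and starts_with is omitted for brevity.

```python
def to_number(v):
    """Best-effort numeric coercion; returns None for non-numeric values."""
    try:
        return float(v)
    except (TypeError, ValueError):
        return None

def matches(row, conditions):
    """Return True only if the row satisfies every condition (AND logic)."""
    for cond in conditions:
        value = row.get(cond["field"])
        op, target = cond["operator"], cond.get("value")
        if op == "is_null":
            ok = value is None
        elif op == "is_not_null":
            ok = value is not None
        elif op == "eq":
            ok = value == target
        elif op == "neq":
            ok = value != target
        elif op == "contains":
            ok = value is not None and str(target).lower() in str(value).lower()
        elif op in ("gt", "lt", "gte", "lte"):
            a, b = to_number(value), to_number(target)
            if a is None or b is None:
                ok = False
            else:
                ok = {"gt": a > b, "lt": a < b, "gte": a >= b, "lte": a <= b}[op]
        else:
            ok = False
        if not ok:
            return False  # AND-joined: one failed condition rejects the row
    return True

row = {"sku": "sku-1234", "price": "89.99", "stock": "0"}
conds = [{"field": "price", "operator": "gt", "value": "50"},
         {"field": "stock", "operator": "eq", "value": "0"}]
# matches(row, conds) → True: 89.99 > 50 and stock equals "0"
```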
download_filtered_snapshot: Export only the rows matching filter conditions as a CSV or JSON file. Returns a pre-signed download URL valid for 1 hour. Use this after filter_snapshot_rows to export just the matching subset — not the full feed.
Input parameters:
| Param | Description |
|---|---|
| sourceId | The source ID (use this or sourceSlug) |
| sourceSlug | The source slug — alternative to sourceId |
| conditions | Array of filter conditions (AND-joined), same as filter_snapshot_rows |
| format | csv or json (default: csv) |
Example — export out-of-stock items as CSV:
{
"sourceSlug": "product-catalog",
"conditions": [
{ "field": "availability", "operator": "eq", "value": "out_of_stock" }
],
"format": "csv"
}

Response:
Your filtered export is ready: https://r2.example.com/exports/usr_.../src_.../filtered-20260310T120000Z.csv?X-Amz-...
Matched rows: 7340 | Format: csv | Conditions: 1
Snapshot date: 2026-03-08T20:00:00.000Z
Link expires: 2026-03-10T13:00:00.000Z
compare_snapshots: Compare two snapshots of a source to see what changed — rows added, removed, or modified. By default it compares the two most recent snapshots; specify sinceDate to compare against a historical snapshot.
Input parameters:
| Param | Description |
|---|---|
| sourceId | The source ID (use this or sourceSlug) |
| sourceSlug | The source slug — alternative to sourceId |
| sinceDate | ISO date (e.g. "2026-03-07") — compare latest snapshot against nearest snapshot before this date |
| sinceSnapshotId | Compare against this specific snapshot ID |
| limit | Max changed rows to show (default 50, max 200) |
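Conceptually, the added/removed/modified classification is a keyed diff of two snapshots. The sketch below illustrates those semantics; it is not the server's actual algorithm:

```python
def diff_snapshots(before: dict, after: dict):
    """Classify rows by key: present only in after (added), only in
    before (removed), or in both with different values (modified)."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    modified = {
        k: {"before": before[k], "after": after[k]}
        for k in before.keys() & after.keys()
        if before[k] != after[k]
    }
    return added, removed, modified

before = {"SKU-5678": {"title": "Widget", "price": "29.99"},
          "SKU-1234": {"title": "Discontinued Gadget", "price": "19.99"}}
after = {"SKU-5678": {"title": "Widget", "price": "24.99"},
         "SKU-9901": {"title": "New Widget Pro", "price": "49.99"}}

added, removed, modified = diff_snapshots(before, after)
# added: SKU-9901; removed: SKU-1234; modified: SKU-5678 (price changed)
```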
Example — what changed since last Tuesday:
{
"sourceSlug": "product-catalog",
"sinceDate": "2026-03-03"
}

Response:
Comparing snapshots: 2026-03-02T20:00:00.000Z → 2026-03-10T20:00:00.000Z
Added: 12 rows
Removed: 3 rows
Modified: 47 rows
ADDED (12):
SKU-9901: {"title":"New Widget Pro","price":"49.99",...}
...
REMOVED (3):
SKU-1234: {"title":"Discontinued Gadget","price":"19.99",...}
...
MODIFIED (47):
SKU-5678:
before: {"title":"Widget","price":"29.99"}
after: {"title":"Widget","price":"24.99"}

search_snapshot: Full-text keyword search across all fields in a source snapshot. Returns rows containing the search terms anywhere in their data, ranked by relevance. Good for finding products by name, description, or any text content.
Input parameters:
| Param | Description |
|---|---|
| sourceId | The source ID (use this or sourceSlug) |
| sourceSlug | The source slug — alternative to sourceId |
| query | Search terms (e.g. "wireless headphones" or "nike running shoes") |
| limit | Results per page (default 20, max 100) |
| cursor | Pagination cursor from previous response |
| fields | Fields to include in output (default: all) |
Example — find wireless headphones:
{
"sourceSlug": "product-catalog",
"query": "wireless headphones",
"limit": 5
}

Response:
Found rows matching "wireless headphones" (ranked by relevance):
Result 1 (key: SKU-4401, relevance: 0.075):
  title: Wireless Headphones Pro
  brand: Sony
  price: 149.99
  category: Electronics
Result 2 (key: SKU-4402, relevance: 0.061):
  title: Bluetooth Wireless Over-Ear Headphones
  brand: JBL
  price: 89.99
More results available — use cursor: "SKU-4402"
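The server's scoring function is not documented; as a rough intuition for why shorter, more term-dense rows rank higher, here is a toy term-frequency sketch (an assumption for illustration only, not Catalogian's algorithm):

```python
def score(row: dict, query: str) -> float:
    """Toy relevance: query-term hits normalized by row length."""
    terms = query.lower().split()
    words = " ".join(str(v).lower() for v in row.values()).split()
    if not words:
        return 0.0
    hits = sum(words.count(t) for t in terms)
    return hits / len(words)

rows = [
    {"key": "SKU-4401", "title": "Wireless Headphones Pro", "brand": "Sony"},
    {"key": "SKU-4402", "title": "Bluetooth Wireless Over-Ear Headphones", "brand": "JBL"},
    {"key": "SKU-9999", "title": "USB Cable", "brand": "Acme"},
]
ranked = sorted(rows, key=lambda r: score(r, "wireless headphones"), reverse=True)
# SKU-4401 ranks first: both terms hit in a shorter row
```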
sample_snapshot: Random or stratified sample from a snapshot. Use stratifyBy for representative rows across brand, category, and similar fields. A good starting point for AI analysis of large feeds.
Input parameters:
| Param | Description |
|---|---|
| sourceId | The source ID (use this or sourceSlug) |
| sourceSlug | The source slug — alternative to sourceId |
| n | Total rows to return (1–500, default 50). Ignored when stratifyBy is set. |
| stratifyBy | Fields to stratify by (max 3). Returns rowsPerGroup rows per unique combination. |
| rowsPerGroup | Rows per unique combination when stratifyBy is set (1–20, default 2). Total capped at 500. |
| fields | Fields to include in output (default: all) |
Example — stratified sample by brand:
{
"sourceSlug": "product-catalog",
"stratifyBy": ["brand", "condition"],
"rowsPerGroup": 2,
"fields": ["title", "brand", "price", "condition"]
}

Response:
Sampled 94 rows from 50,234 total, stratified by brand × condition, 47 groups × 2 rows/group
Row 1 (key: SKU-00001):
  title: Nike Air Max 270
  brand: Nike
  price: 150.00
  condition: new
Row 2 (key: SKU-00045):
  title: Nike Pegasus 40
  brand: Nike
  price: 120.00
  condition: refurbished
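Stratification means "rowsPerGroup rows from each unique combination of the stratify fields." The sketch below illustrates that grouping semantics; the real tool samples within each group, while this deterministic version just keeps the first rows it sees:

```python
from collections import defaultdict

def stratified_sample(rows, stratify_by, rows_per_group=2):
    """Keep up to rows_per_group rows per unique field-value combination."""
    groups = defaultdict(list)
    for row in rows:
        key = tuple(row.get(f) for f in stratify_by)
        if len(groups[key]) < rows_per_group:
            groups[key].append(row)
    return [r for bucket in groups.values() for r in bucket]

rows = [
    {"sku": "1", "brand": "Nike", "condition": "new"},
    {"sku": "2", "brand": "Nike", "condition": "new"},
    {"sku": "3", "brand": "Nike", "condition": "new"},
    {"sku": "4", "brand": "Nike", "condition": "refurbished"},
    {"sku": "5", "brand": "Adidas", "condition": "new"},
]
sample = stratified_sample(rows, ["brand", "condition"], rows_per_group=2)
# 2 Nike/new + 1 Nike/refurbished + 1 Adidas/new = 4 rows
```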
profile_snapshot: Feed-level quality report covering field cardinality, null rates, value distributions, type hints, and stratification recommendations. Use it before sampling or querying a large feed.
Input parameters:
| Param | Description |
|---|---|
| sourceId | The source ID (use this or sourceSlug) |
| sourceSlug | The source slug — alternative to sourceId |
| topN | Top N values to show for low/medium cardinality fields (1–50, default 10) |
Example:
{
"sourceSlug": "product-catalog",
"topN": 5
}

Response:
Feed Profile: product-catalog (snapshot: 2026-03-18, 50,000 rows, 34 fields)
── LOW CARDINALITY ──────────────────────
condition (3 values, 0% null, text)
new: 48,000 refurbished: 1,500 used: 500
brand (47 values, 0% null, text)
Nike: 3,200 Adidas: 2,800 Puma: 1,100 ...
── MEDIUM CARDINALITY ───────────────────
color (120 values, 5% null, text)
size (89 values, 12% null, text)
── HIGH CARDINALITY ─────────────────────
price (4,823 values, 0% null, numeric)
title (49,847 values, 0% null, text)
── UNIQUE / IDENTIFIER ──────────────────
id sku row_key
── CONSTANT ─────────────────────────────
currency = "USD"
Stratification recommendation:
Best fields: condition, brand, google_product_category
Try: sample_snapshot(sourceSlug: "product-catalog", stratifyBy: ["condition","brand"], rowsPerGroup: 3)

Get health information for a feed source. Returns a health score and diagnostic indicators.
Input: sourceId or sourceSlug
Example response:
{
"score": 100,
"indicators": {
"isActive": true,
"etagSupported": true,
"hasBeenChecked": true,
"checkIsFresh": true
}
}

When a user asks "What changed in my product catalog today?", an agent can:
1. list_sources to find available sources
2. get_delta with since set to the start of today to find recent events
3. get_delta_rows on the latest event with changeType: "changed" to get actual field data

Example get_delta_rows call:
{
"sourceSlug": "product-catalog",
"deltaEventId": "evt_01j...",
"changeType": "changed",
"limit": 50
}

When working with snapshot data, follow this natural sequence:
1. snapshot_schema — discover available fields and their types
2. filter_snapshot_rows — preview matching rows (e.g. price > 50 AND stock = 0)
3. download_filtered_snapshot — export just the matches as CSV/JSON
4. query_snapshot — run aggregations (count, group_by, avg, etc.)
5. download_snapshot — export the full snapshot as CSV/JSON

Add this to your Claude Desktop system prompt for optimal tool usage:
When working with Catalogian sources: always call snapshot_schema first to understand the data structure before querying or filtering. Use filter_snapshot_rows for finding specific records, query_snapshot for aggregations, download_filtered_snapshot to export just the matching rows, and download_snapshot to export the full feed.
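The "what changed today" flow described earlier can be sketched end to end. Here call_tool is a hypothetical stand-in for your MCP client, stubbed with responses shaped like the examples on this page:

```python
from datetime import datetime, timezone

def call_tool(name, arguments):
    """Stubbed MCP dispatcher; a real client would POST to the MCP endpoint."""
    stubs = {
        "list_sources": [{"slug": "product-catalog", "status": "active"}],
        "get_delta": [{"id": "evt_01j...", "changedCount": 3}],
        "get_delta_rows": {"rows": [{"key": "sku-1234", "changeType": "changed"}],
                           "total": 3, "hasMore": False},
    }
    return stubs[name]

def what_changed_today():
    # 1. Find a source, 2. fetch today's delta events, 3. pull changed rows.
    sources = call_tool("list_sources", {})
    slug = sources[0]["slug"]
    start_of_day = datetime.now(timezone.utc).replace(
        hour=0, minute=0, second=0, microsecond=0).isoformat()
    events = call_tool("get_delta", {"sourceSlug": slug, "since": start_of_day})
    return call_tool("get_delta_rows", {
        "sourceSlug": slug,
        "deltaEventId": events[0]["id"],
        "changeType": "changed",
        "limit": 50,
    })

result = what_changed_today()
```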
- get_source_by_slug when you know the slug but not the internal ID
- since parameter on get_delta for incremental polling — only fetch what's new
- changeType filter on get_delta_rows to focus on specific change categories
- snapshot_schema first to discover available fields before running query_snapshot or filter_snapshot_rows
- filter_snapshot_rows for multi-condition filtering — supports text, numeric, and null checks with AND logic
- query_snapshot for server-side aggregations on large feeds — much faster than paginating through all rows
- get_snapshot_rows to read current feed data — paginate with cursor for large feeds
- download_filtered_snapshot to export only matching rows as CSV/JSON — same conditions as filter_snapshot_rows
- download_snapshot to export the entire snapshot as a CSV or JSON file — returns a pre-signed URL valid for 1 hour
- compare_snapshots to diff two snapshots — see what rows were added, removed, or modified between two points in time
- search_snapshot for full-text keyword search — finds products by name, description, or any text content, ranked by relevance
- profile_snapshot before sampling — it reveals field cardinality, null rates, and recommends fields for stratified sampling
- sample_snapshot with stratifyBy for representative samples across categories — much better than random sampling for analysis
- limit + offset (deltas) or cursor (snapshots) — use hasMore to detect when to fetch more
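The incremental-polling best practice (the since parameter on get_delta) boils down to checkpointing the newest detectedAt you have seen. In this sketch, call_tool is a hypothetical stand-in for your MCP client, stubbed with a response shaped like the get_delta example above so it runs as-is:

```python
def call_tool(name, arguments):
    """Stubbed get_delta response; a real client would call the MCP server."""
    return [{"id": "evt_01j...", "detectedAt": "2026-03-08T20:00:00.000Z",
             "newCount": 14, "changedCount": 3, "deletedCount": 0}]

def poll_deltas(source_slug, last_seen_iso):
    """Fetch only events newer than the last one processed."""
    events = call_tool("get_delta", {
        "sourceSlug": source_slug,
        "since": last_seen_iso,  # ISO 8601: only events after this time
        "limit": 200,
    })
    if events:
        # Persist the newest detectedAt as the next `since` checkpoint.
        last_seen_iso = max(e["detectedAt"] for e in events)
    return events, last_seen_iso

events, checkpoint = poll_deltas("product-catalog", "2026-03-08T00:00:00.000Z")
```

Store the checkpoint between runs and each poll returns only what changed since the last one.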