# Managing Managed Tables
This guide walks through the full Managed Tables API flows: database provisioning, manual creation, file upload with schema inference, and row CRUD.
**When to use it:** when your application needs to store the company's own structured data (spreadsheets, computed datasets, operational lists) without depending on an external database.

**When not to use it:** if you want to expose a saved visualization as an API (use the Data API).
## Prerequisites

- Console JWT (regular login; API keys are not accepted).
- Assigned IAM permissions: `managed-tables:list`, `managed-tables:create`, `managed-tables:update`, `managed-tables:delete`, `managed-tables:upload`, `managed-tables:data-read`, `managed-tables:data-write`.
- Managed roles that already bundle everything: `ConnectionManager`, `DataEngineer`.
## Flow overview

### Workflow 1: Provisioning

Every company must provision its storage area once. The call is idempotent: subsequent calls return the existing state.
```bash
export TOKEN="eyJ..."
export API_BASE_URL="http://localhost:3100"

# 1. Check status
curl "$API_BASE_URL/api/v1/managed-tables/status" \
  -H "Authorization: Bearer $TOKEN"
# → { "provisioned": false, "connection_id": null }

# 2. Provision (idempotent)
curl -X POST "$API_BASE_URL/api/v1/managed-tables/provision" \
  -H "Authorization: Bearer $TOKEN"
# → { "provisioned": true, "connection_id": "507f..." }
```
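The check-then-provision pattern above can be wrapped in a small guard. A minimal sketch, assuming the response shapes shown (`ensure_provisioned` and `needs_provision` are illustrative helper names; parsing uses `grep` to avoid a `jq` dependency):

```shell
# Decide whether POST /provision is needed, based on the GET /status body.
# $1: JSON body returned by GET /api/v1/managed-tables/status
needs_provision() {
  if printf '%s' "$1" | grep -q '"provisioned": *true'; then
    echo "no"
  else
    echo "yes"
  fi
}

# Illustrative wrapper: provisions only when needed, then prints the result.
ensure_provisioned() {
  status=$(curl -s "$API_BASE_URL/api/v1/managed-tables/status" \
    -H "Authorization: Bearer $TOKEN")
  if [ "$(needs_provision "$status")" = "yes" ]; then
    curl -s -X POST "$API_BASE_URL/api/v1/managed-tables/provision" \
      -H "Authorization: Bearer $TOKEN"
  else
    printf '%s\n' "$status"
  fi
}
```

This keeps the best-practice advice below (check `GET /status` first, don't loop `POST /provision`) in one place.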
### Workflow 2: Manual table creation

Use when you already know the schema. No file is required.
```bash
# 1. Create table
curl -X POST "$API_BASE_URL/api/v1/managed-tables" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "display_name": "monthly_sales",
    "columns": [
      { "name": "product", "display_name": "Product", "type": "text", "nullable": false },
      { "name": "amount", "display_name": "Amount", "type": "decimal", "nullable": true },
      { "name": "sale_date", "display_name": "Date", "type": "date", "nullable": false }
    ]
  }'
# Response:
# { "_id": "507f...", "display_name": "monthly_sales", "columns": [...], ... }

# 2. Insert rows
curl -X POST "$API_BASE_URL/api/v1/managed-tables/507f.../data" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "rows": [
      { "product": "Shirt", "amount": 89.90, "sale_date": "2026-01-15" },
      { "product": "Pants", "amount": 159.90, "sale_date": "2026-01-15" }
    ]
  }'
# → 201 { "inserted": 2 }
```
- `display_name` and column names must match `^[a-z_][a-z0-9_]*$`.
- `display_name` is unique per company among active tables. A conflict returns `400` with `error_code: MANAGED_TABLE_DUPLICATE_NAME`.
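Validating names on the client before the POST avoids a round trip. A minimal sketch of the regex check (`valid_table_name` is an illustrative helper, not part of the API):

```shell
# Returns "valid" when $1 matches the required pattern ^[a-z_][a-z0-9_]*$,
# i.e. lowercase letters, digits, and underscores, not starting with a digit.
valid_table_name() {
  if printf '%s' "$1" | grep -Eq '^[a-z_][a-z0-9_]*$'; then
    echo "valid"
  else
    echo "invalid"
  fi
}
```

Run the same check on every column `name` before submitting the schema.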
### Workflow 3: File upload (CSV/Excel/ZIP)

A three-step flow with schema inference: the API processes the file asynchronously, and the client confirms the inferred schema before the final insert.

#### Step 1: upload
```bash
curl -X POST "$API_BASE_URL/api/v1/managed-tables/upload" \
  -H "Authorization: Bearer $TOKEN" \
  -F "files=@sales_2026.csv"
# → 201 { "job_id": "abc123", "status": "queued" }
```
Limits: up to 10 files per call, 100 MB total. Accepted formats: `.csv`, `.xlsx`, `.xls`, `.zip` (ZIP archives are unpacked automatically).
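These limits can be pre-checked on the client before issuing the request. A minimal sketch (`check_upload_limits` is an illustrative helper; it takes the byte size of each file as arguments):

```shell
# Pre-flight check against the documented limits: max 10 files, 100 MB total.
# Args: the byte size of each file to upload (e.g. from `wc -c < file`).
check_upload_limits() {
  if [ "$#" -gt 10 ]; then
    echo "too_many_files"
    return 1
  fi
  total=0
  for size in "$@"; do
    total=$((total + size))
  done
  if [ "$total" -gt $((100 * 1024 * 1024)) ]; then
    echo "too_large"
    return 1
  fi
  echo "ok"
}
```

Rejecting oversized batches locally gives the user a faster, clearer error than the backend's rejection.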
#### Step 2: status polling

```bash
curl "$API_BASE_URL/api/v1/managed-tables/upload-jobs/abc123" \
  -H "Authorization: Bearer $TOKEN"
```
States you will observe (in order):

| Status | Meaning | Client action |
|---|---|---|
| `queued` | Waiting for processing | Keep polling |
| `analisando` | API is processing the file | Show a spinner |
| `waiting_confirm` | Inferred schema is ready | Render confirmation UI from `inferred_schema` |
| `inserting` | Insert in progress | Show progress (`progress.processed_rows`) |
| `done` | Table created/updated | Redirect to `user_table_id` |
| `failed` | Error | Show the error |
When `status === "waiting_confirm"`, the body contains:

```json
{
  "status": "waiting_confirm",
  "inferred_schema": {
    "suggested_table_name": "sales_2026",
    "columns": [
      { "name": "product", "display_name": "Product", "type": "text", "nullable": false },
      { "name": "amount", "display_name": "Amount", "type": "decimal", "nullable": true }
    ],
    "sample_rows": [{ "product": "Shirt", "amount": "89.90" }],
    "total_row_count_estimate": 1500
  }
}
```
#### Step 3: confirm schema

The user can edit column names, types, and the table `display_name` before confirming:
```bash
curl -X POST "$API_BASE_URL/api/v1/managed-tables/upload-jobs/abc123/confirm" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "display_name": "sales_2026",
    "columns": [
      { "name": "product", "display_name": "Product", "type": "text", "nullable": false },
      { "name": "amount", "display_name": "Amount", "type": "decimal", "nullable": true }
    ]
  }'
# → 200 { "status": "inserting" }
```
Keep polling until `status === "done"`. The job's `user_table_id` points to the created table.
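The polling loop can use exponential backoff, as recommended under Best practices. A sketch under those assumptions (`backoff_delay` and `poll_upload_job` are illustrative helpers, not part of the API; the loop matches on the status strings shown in the table above):

```shell
# Delay (in seconds) before attempt $1: 1, 2, 4, 8, then capped at 10.
backoff_delay() {
  delay=1
  attempt=1
  while [ "$attempt" -lt "$1" ]; do
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
  if [ "$delay" -gt 10 ]; then
    delay=10
  fi
  echo "$delay"
}

# Illustrative polling loop: stops when the job reaches a state that
# requires client action (waiting_confirm) or a terminal state (done/failed).
poll_upload_job() {
  job_id=$1
  attempt=1
  while :; do
    body=$(curl -s "$API_BASE_URL/api/v1/managed-tables/upload-jobs/$job_id" \
      -H "Authorization: Bearer $TOKEN")
    case "$body" in
      *'"done"'* | *'"failed"'* | *'"waiting_confirm"'*)
        printf '%s\n' "$body"
        return
        ;;
    esac
    sleep "$(backoff_delay "$attempt")"
    attempt=$((attempt + 1))
  done
}
```

String-matching the body is a shortcut for a shell sketch; a real client should parse the JSON and branch on the `status` field.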
### Workflow 4: Append/Replace into an existing table

Adds new rows to an existing table, or replaces its entire content.
```bash
# Append into existing table
curl -X POST "$API_BASE_URL/api/v1/managed-tables/507f.../upload" \
  -H "Authorization: Bearer $TOKEN" \
  -F "files=@sales_february.csv" \
  -F "mode=append"

# Replace (overwrites everything)
curl -X POST "$API_BASE_URL/api/v1/managed-tables/507f.../upload" \
  -H "Authorization: Bearer $TOKEN" \
  -F "files=@sales_full.csv" \
  -F "mode=replace"
```
- The existing table's `display_name` is reused; the regex `^[a-z_][a-z0-9_]*$` is not enforced (supports legacy tables).
- The confirmed schema must match the existing table schema; extra columns cause the insert to fail.
- `replace` is atomic: the entire content is swapped in a single operation (no partial state is visible).
### Workflow 5: Row CRUD

Direct operations on individual rows of a table.
```bash
# List (paginated)
curl "$API_BASE_URL/api/v1/managed-tables/507f.../data?page=1&per_page=50&sort_by=-sale_date" \
  -H "Authorization: Bearer $TOKEN"
# → { "total": 1500, "quantity": 50, "records": [{ "id": "uuid", "product": "X", ... }] }

# Insert
curl -X POST "$API_BASE_URL/api/v1/managed-tables/507f.../data" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "rows": [{ "product": "Z", "amount": 50 }] }'

# Update row
curl -X PUT "$API_BASE_URL/api/v1/managed-tables/507f.../data/<rowId>" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "amount": 99.90 }'

# Delete
curl -X DELETE "$API_BASE_URL/api/v1/managed-tables/507f.../data/<rowId>" \
  -H "Authorization: Bearer $TOKEN"
# → 204 No Content
```
`rowId` is the row's internal identifier (typically a UUID generated on insert). It appears as `id` in each item under `records`.
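To iterate over all rows, the number of pages follows from `total` and `per_page` by ceiling division. A small helper (`page_count` is an illustrative name):

```shell
# Ceiling division: number of pages needed for $1 rows at $2 rows per page.
page_count() {
  echo $(( ($1 + $2 - 1) / $2 ))
}
```

For example, the listing above reports `total: 1500` at `per_page=50`, so a client would request pages 1 through 30.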
### Workflow 6: Soft-delete a table

Deleting a table archives it rather than destroying its data. Listings (`GET /v1/managed-tables`) exclude archived tables, so the table disappears immediately. Restoration is not exposed by the API; open a support ticket if you need to revert.
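The delete call itself is not shown in this guide. A hypothetical sketch, assuming the route is `DELETE /api/v1/managed-tables/:tableId` (the other row and upload endpoints follow this resource pattern, but confirm the exact route in the API Reference); `soft_delete_table` is an illustrative helper, and the `DRY_RUN` switch exists only to make the request inspectable without sending it:

```shell
# Hypothetical helper: assumes DELETE /api/v1/managed-tables/:tableId
# archives the table (see the API Reference for the real contract).
soft_delete_table() {
  url="$API_BASE_URL/api/v1/managed-tables/$1"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    # Print the request instead of sending it.
    echo "DELETE $url"
  else
    curl -X DELETE "$url" -H "Authorization: Bearer $TOKEN"
  fi
}
```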
## Supported column types

| Type | PostgreSQL DDL | Example |
|---|---|---|
| `text` | `TEXT` | `"abc"` |
| `integer` | `BIGINT` | `42` |
| `decimal` | `NUMERIC` | `89.90` |
| `boolean` | `BOOLEAN` | `true` |
| `date` | `DATE` | `"2026-01-15"` |
| `datetime` | `TIMESTAMPTZ` | `"2026-01-15T10:30:00Z"` |
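Validating values client-side against these wire formats catches type errors before a `422`. A sketch for the `date` format shown above (`is_valid_date_format` is an illustrative helper; it checks the shape only, not calendar validity):

```shell
# Check that a value matches the YYYY-MM-DD wire format used for "date" columns.
is_valid_date_format() {
  if printf '%s' "$1" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2}$'; then
    echo "yes"
  else
    echo "no"
  fi
}
```

The same pattern extends to the other types, e.g. an ISO 8601 check for `datetime` values.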
## Error handling

| HTTP | Code | Common cause | Action |
|---|---|---|---|
| 400 | `MANAGED_TABLE_DUPLICATE_NAME` | `display_name` already active | Ask the user for another name |
| 400 | - | Database not provisioned | Call `POST /provision` first |
| 400 | - | Job not in `waiting_confirm` | Keep polling before confirming |
| 401 | - | Missing or expired JWT | Refresh the token |
| 403 | - | Missing IAM permission | Check the user's role |
| 404 | - | Table/row/job does not exist | Re-fetch resource state |
| 422 | - | Invalid schema (regex, type, missing field) | Show the specific error to the user |
| 500 | - | Internal error | Retry with backoff; log `request_id` |
## Best practices

- Don't loop `POST /provision`: it is idempotent, but each call is more expensive. Check `GET /status` first.
- Use exponential-backoff polling: start at 1 s, double up to 10 s. Upload jobs may take seconds to minutes depending on file size.
- Show `progress.processed_rows / progress.total_rows` during `inserting` to give the user feedback on large files.
- Validate file sizes on the frontend before upload: the backend rejects anything over 100 MB total.
- Reuse `mode=append` for incremental uploads; do not create a new table every time.
- For custom queries on top of managed tables, expose them as a Visualization and publish them via the Data API. Managed Tables does not expose free-form SQL.
## Full reference

For each endpoint's detailed contract (all fields, status codes, and error examples), see the Managed Tables API Reference.