API & SDK Reference
This section provides a comprehensive reference for integrating with Teckel AI through our HTTP API or JavaScript/TypeScript SDK.
HTTP API
The Teckel AI HTTP API allows you to send trace data from any programming language or platform.
Endpoint
```
POST https://app.teckelai.com/api/traces
```
Authentication
All API requests require authentication using a Bearer token in the Authorization header:
```
Authorization: Bearer YOUR_API_KEY
```
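For illustration, a minimal TypeScript sketch of an authenticated request (the endpoint and header format come from this reference; the helper names are ours, and a runtime with global `fetch` such as Node 18+ is assumed):

```typescript
// Build the headers every Teckel API request needs.
function buildTeckelHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  };
}

// Illustrative helper: POST a trace payload with those headers.
async function postTrace(apiKey: string, payload: unknown): Promise<Response> {
  return fetch('https://app.teckelai.com/api/traces', {
    method: 'POST',
    headers: buildTeckelHeaders(apiKey),
    body: JSON.stringify(payload)
  });
}
```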
Request Structure
Required Fields
| Field | Type | Description |
|---|---|---|
| `trace_ref` | string | Your existing trace/request/correlation ID - use whatever ID system you already have |
| `query` | string | The user's question or prompt |
| `response` | string | The LLM's generated answer |
| `sources` | array | Array of source documents used (see Source Object Structure below) |
Optional Fields
| Field | Type | Description |
|---|---|---|
| `model` | string | The LLM model used (e.g., "gpt-5", "claude-4") |
| `session_id` | string | Session identifier for grouping sequential queries in the same chat session |
| `user_id` | string | Identifier for tracking specific users |
| `processing_time_ms` | number | Time taken to generate the response, in milliseconds |
| `metadata` | object | Custom key-value pairs; rarely needed |
Source Object Structure
Each source in the `sources` array must include:
Required Source Fields
| Field | Type | Description |
|---|---|---|
| `document_ref` | string | Your unique identifier for this document |
| `document_name` | string | Human-readable name of the document |
| `document_text` | string | The actual text/chunk content used |
Optional Source Fields
| Field | Type | Description |
|---|---|---|
| `document_last_updated` | string | ISO 8601 timestamp of the document's last update; required for document freshness analytics |
| `document_type` | string | Type of document (e.g., "pdf", "docx", "txt") |
| `source_type` | string | Where the document is stored (e.g., "slack", "google_drive", "confluence") |
| `source_uri` | string | Path or URL to the source |
| `similarity` | number | Vector similarity score (0-1) |
| `rank` | number | Position of the chunk within the document |
| `owner_email` | string | Document owner's email address |
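As a sketch, mapping a chunk from your own retrieval layer into this shape might look like the following (the `RetrievedChunk` fields are assumptions about your store, not part of the Teckel API):

```typescript
// Hypothetical shape of a chunk coming back from your vector store.
interface RetrievedChunk {
  id: string;
  title: string;
  text: string;
  updatedAt?: Date;
  score?: number;
}

// Map a retrieved chunk into the source object shape described above.
function toTeckelSource(chunk: RetrievedChunk): Record<string, string | number> {
  const source: Record<string, string | number> = {
    document_ref: chunk.id,
    document_name: chunk.title,
    document_text: chunk.text
  };
  // ISO 8601 timestamp, needed for freshness analytics
  if (chunk.updatedAt) source.document_last_updated = chunk.updatedAt.toISOString();
  if (chunk.score !== undefined) source.similarity = chunk.score;
  return source;
}
```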
HTTP Examples
cURL Example
```bash
curl -X POST https://app.teckelai.com/api/traces \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "trace_ref": "your-existing-request-id-or-trace-id",
    "query": "What is your refund policy?",
    "response": "Our refund policy allows returns within 30 days...",
    "sources": [
      {
        "document_ref": "doc-456",
        "document_name": "Refund Policy v2.0",
        "document_text": "Customers may request a full refund within 30 days...",
        "document_last_updated": "2024-01-15T10:00:00Z",
        "document_type": "pdf",
        "source_type": "confluence",
        "similarity": 0.95,
        "source_uri": "https://docs.example.com/policies/refund"
      }
    ],
    "model": "gpt-5",
    "session_id": "session-abc",
    "user_id": "user@example.com",
    "processing_time_ms": 245
  }'
```
Response Format
Success Response (200)
```json
{
  "success": true,
  "trace_id": "550e8400-e29b-41d4-a716-446655440000",
  "trace_ref": "my-correlation-id-123",
  "message": "Trace created successfully"
}
```
Validation Error Response (400)
```json
{
  "error": "Validation failed with 2 error(s)",
  "validation_errors": [
    {
      "field": "trace_ref",
      "message": "trace_ref is required"
    },
    {
      "field": "sources[0].document_text",
      "message": "document_text is required and must be a non-empty string",
      "details": { "index": 0 }
    }
  ]
}
```
Rate Limit Response (429)
```json
{
  "error": "Rate limit exceeded",
  "retry_after": 60
}
```
Rate Limiting
- Default: 1000 requests per hour per organization
- Rate limit headers are included in responses:
  - `X-RateLimit-Limit`: Maximum requests per hour
  - `X-RateLimit-Remaining`: Remaining requests in the current window
  - `X-RateLimit-Reset`: Unix timestamp when the limit resets
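One way to honor a 429 is to wait for the `retry_after` value from the response body before retrying. The following is a sketch, not part of the SDK; the helper names and the backoff fallback are ours:

```typescript
// Compute how long to wait: prefer the server's retry_after (seconds),
// otherwise fall back to exponential backoff capped at one minute.
function retryDelayMs(retryAfterSeconds: number | undefined, attempt: number): number {
  if (retryAfterSeconds !== undefined) return retryAfterSeconds * 1000;
  return Math.min(60_000, 1000 * 2 ** attempt);
}

// Illustrative retry loop around fetch (Node 18+ / browsers).
async function postWithRetry(url: string, init: RequestInit, maxAttempts = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429 || attempt + 1 >= maxAttempts) return res;
    const body = await res.json().catch(() => ({}));
    await new Promise(resolve => setTimeout(resolve, retryDelayMs(body.retry_after, attempt)));
  }
}
```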
Request Size Limits
- Maximum request size: 5MB
- Maximum 50 sources per trace
- Query limited to 10,000 characters
- Response limited to 50,000 characters
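A pre-flight check against these limits can save a round trip. A minimal sketch (the function name and error strings are ours; the 5MB total-size limit would additionally need a check on the serialized JSON length):

```typescript
interface TracePayload {
  query: string;
  response: string;
  sources: unknown[];
}

// Return a list of limit violations; an empty list means the
// payload fits the documented per-field limits.
function checkLimits(t: TracePayload): string[] {
  const problems: string[] = [];
  if (t.sources.length > 50) problems.push('more than 50 sources');
  if (t.query.length > 10_000) problems.push('query exceeds 10,000 characters');
  if (t.response.length > 50_000) problems.push('response exceeds 50,000 characters');
  return problems;
}
```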
JavaScript/TypeScript SDK
The Teckel Tracer SDK provides a convenient wrapper around the HTTP API for JavaScript and TypeScript applications.
Installation
```bash
npm install @teckel-ai/tracer
```
Basic Usage
```typescript
import { TeckelTracer } from '@teckel-ai/tracer';

// Initialize the tracer with configuration
const tracer = new TeckelTracer({
  api_key: process.env.TECKEL_API_KEY,
  endpoint: 'https://app.teckelai.com/api/traces' // Optional, this is the default
});

// Trace an LLM interaction
const traceRef = await tracer.trace(
  query,
  response,
  sources, // Required - array of source documents
  {
    trace_ref: existingRequestId, // Use your existing trace/request ID
    model: 'gpt-5',
    session_id: 'session-abc',
    user_id: 'user@example.com',
    processing_time_ms: 245,
    metadata: {
      department: 'support',
      priority: 'high'
    }
  }
);

console.log(`Trace sent with reference: ${traceRef}`);
```
Source Document Interface
```typescript
interface SourceDocument {
  // Required fields
  document_ref: string;            // Your unique document identifier
  document_name: string;           // Human-readable document name
  document_text: string;           // The actual text/chunk content

  // Optional metadata
  document_last_updated?: string;  // ISO 8601 timestamp - CRITICAL for freshness scoring
  document_type?: string;          // File type: "pdf", "docx", "txt", "md", "html"
  source_type?: string;            // Storage location: "slack", "google_drive", "confluence"
  source_uri?: string;             // Path or URL to source
  similarity?: number;             // Vector similarity score (0-1)
  rank?: number;                   // Position/rank within document
  owner_email?: string;            // Document owner's email address
}
```
Complete Example with Error Handling
```typescript
import { TeckelTracer } from '@teckel-ai/tracer';

const tracer = new TeckelTracer({
  api_key: process.env.TECKEL_API_KEY
});

async function processUserQuery(userQuery: string) {
  try {
    // 1. Retrieve relevant documents from your vector database
    const retrievedDocs = await vectorDB.search(userQuery);

    // 2. Format sources for Teckel
    const sources = retrievedDocs.map(doc => ({
      document_ref: doc.id,
      document_name: doc.title,
      document_text: doc.content,
      document_last_updated: doc.updatedAt,
      document_type: doc.fileType,   // e.g., "pdf", "docx"
      source_type: doc.source,       // e.g., "confluence", "google_drive"
      source_uri: doc.url,
      similarity: doc.score,
      rank: doc.chunkIndex,
      owner_email: doc.ownerEmail
    }));

    // 3. Generate response with your LLM
    const llmResponse = await generateResponse(userQuery, retrievedDocs);

    // 4. Send trace to Teckel
    const traceRef = await tracer.trace(
      userQuery,
      llmResponse,
      sources,
      {
        trace_ref: `trace-${Date.now()}`, // Your correlation ID
        model: 'gpt-5',
        session_id: getCurrentSession(),
        user_id: getCurrentUser(),
        processing_time_ms: 150,
        metadata: {
          environment: 'production',
          version: '1.0.0'
        }
      }
    );

    console.log(`Trace sent successfully: ${traceRef}`);
    return llmResponse;
  } catch (error) {
    console.error('Error processing query:', error);
    // Tracing failures should not interrupt your service -
    // return a fallback answer to the user instead
    return 'Sorry, something went wrong. Please try again.';
  }
}
```
Processing Modes
Batch Processing (Default)
- Processing typically completes within 1 hour
- Maximum turnaround time: 24 hours
- Cost-effective for standard use cases
- Ideal for production environments
Realtime Processing (Premium)
- Processing completes in 20-30 seconds
- Premium pricing applies
- Contact sales for access
Best Practices
- Always provide sources - Required for accurate evaluation
- Include `document_last_updated` timestamps - Critical for freshness scoring
- Use meaningful `trace_ref` values - Helps with debugging
- Track session and user IDs - Enables better analytics
- Handle errors gracefully - The SDK won't interrupt your service
- Validate before sending - Check that required fields are present
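The last practice can be implemented as a small pre-send check that mirrors the 400 validation error shape shown earlier (a sketch; the field names come from this reference, the helper itself is ours):

```typescript
interface ValidationError {
  field: string;
  message: string;
}

// Check required top-level and per-source fields before sending a trace.
function validateTrace(t: Record<string, unknown>): ValidationError[] {
  const errors: ValidationError[] = [];
  for (const field of ['trace_ref', 'query', 'response']) {
    if (typeof t[field] !== 'string' || t[field] === '') {
      errors.push({ field, message: `${field} is required` });
    }
  }
  if (!Array.isArray(t.sources)) {
    errors.push({ field: 'sources', message: 'sources must be an array' });
  } else {
    t.sources.forEach((s: any, i: number) => {
      for (const field of ['document_ref', 'document_name', 'document_text']) {
        if (typeof s?.[field] !== 'string' || s[field] === '') {
          errors.push({
            field: `sources[${i}].${field}`,
            message: `${field} is required and must be a non-empty string`
          });
        }
      }
    });
  }
  return errors;
}
```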
Support
- Documentation: https://docs.teckelai.com
- Email: support@teckelai.com
- API Status: https://status.teckelai.com
Language-Specific SDKs
| Language | Status | Installation |
|---|---|---|
| JavaScript/TypeScript | ✅ Available | `npm install @teckel-ai/tracer` |
| Python | 🚧 Coming Soon | Use HTTP API for now |
| Java | 📋 Planned | Use HTTP API |
For languages without native SDKs, use the HTTP API examples above.