Getting Started with Teckel AI
Set up Teckel AI in 15 minutes. This guide walks you through integrating tracing into your RAG application.
Prerequisites
Before starting, you need:
- A Teckel AI organization account (sign up here)
- Admin access to generate API keys
- Access to your RAG application's codebase
1. Get Your API Key
- Log in to your Teckel AI dashboard
- Navigate to Admin Panel > API Keys
- Click "Generate Key"
- Name your key (e.g., "Production Server")
- Click "Generate Key" and copy it immediately
- Store it securely as the TECKEL_API_KEY environment variable
Security Note: Never commit API keys to version control or expose them in client-side code.
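One way to honor this in practice is to read the key from the environment at startup and fail fast if it's missing, so a misconfigured deploy surfaces immediately rather than as silently dropped traces. A minimal sketch:

// Read the key from the environment at startup; never hard-code it.
const apiKey = process.env.TECKEL_API_KEY;
if (!apiKey) {
  // Failing fast here beats discovering missing traces hours later.
  throw new Error('TECKEL_API_KEY is not set');
}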
2. Install the SDK
Node.js / TypeScript
npm install @teckel-ai/tracer
Python
Python SDK coming soon. Use our HTTP API directly in the meantime (see API & SDK Reference).
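Until then, here's a rough sketch, written in TypeScript for consistency with the rest of this guide, of what a direct HTTP call might look like. The /v1/traces path and the payload field names are our assumptions for illustration only; the authoritative endpoint and schema are in the API & SDK Reference.

// Illustrative only: '/v1/traces' and the payload field names below are
// assumptions; confirm the real endpoint and schema in the API & SDK Reference.
async function sendTraceOverHttp(
  question: string,
  answer: string,
  sources: object[],
  traceRef: string
): Promise<void> {
  const res = await fetch('https://api.teckelai.com/v1/traces', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.TECKEL_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      question,
      answer,
      sources,              // same shape as the sources array in step 3
      trace_ref: traceRef,
      model: 'gpt-4o'
    })
  });
  if (!res.ok) {
    console.error(`Teckel trace failed: HTTP ${res.status}`);
  }
}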
3. Add Tracing to Your Code
Initialize the Tracer
import { TeckelTracer } from '@teckel-ai/tracer';
// Initialize once, reuse everywhere
const tracer = new TeckelTracer({
  api_key: process.env.TECKEL_API_KEY
});
Format Your Source Documents
Convert your retrieved chunks into our format:
const sources = retrievedChunks.map(chunk => ({
  document_ref: chunk.id,                        // Required: your unique chunk ID
  document_name: chunk.docId || chunk.title,     // Required: document name
  document_text: chunk.text,                     // Required: the chunk content
  document_last_updated: chunk.lastModifiedDate, // Recommended for freshness
  source_uri: chunk.url || chunk.path,           // Optional: helps debugging
  similarity: chunk.retrievalScore,              // Optional: vector similarity
  owner_email: chunk.authorEmail                 // Optional: for notifications
}));
Send the Trace
After your LLM generates a response:
const traceRef = await tracer.trace(
  userQuestion,
  aiAnswer,
  sources, // REQUIRED: formatted chunks
  {
    trace_ref: yourExistingTraceId,  // REQUIRED: use your existing trace/request ID
    model: 'gpt-4o',                 // Your AI model
    session_id: 'user-session-123',  // Optional: session tracking
    processing_time_ms: 1500         // Optional: performance data
  }
);
The trace method is async and non-blocking. It won't add noticeable latency to your responses. If our system is down, it won't affect your user experience.
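If you'd rather not await the call at all, one pattern (our suggestion, not an SDK requirement) is to fire and forget, attaching a catch handler so a tracing failure can never propagate to the request path:

// Fire and forget: don't block the response on tracing.
tracer
  .trace(userQuestion, aiAnswer, sources, {
    trace_ref: yourExistingTraceId,
    model: 'gpt-4o'
  })
  .catch(err => console.error('Teckel trace failed:', err));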
4. Verify It's Working
- Run a test query through your application
- Open your Teckel AI dashboard
- Navigate to Audit History
- Your trace should appear at the top
- Quality scores and claims analysis will be available after processing (typically within 1 hour, maximum 24 hours)
If you don't see your trace, check the following; a standalone smoke test follows this list:
- Your API key is correct
- The SDK isn't throwing errors in your console
- Your network allows outbound HTTPS to api.teckelai.com
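To rule out your application code entirely, you can run a one-off smoke test that sends a single hand-built trace. It uses only the SDK surface shown in step 3; the document values are dummies.

import { TeckelTracer } from '@teckel-ai/tracer';

// Standalone smoke test: if this resolves, your key and network path are fine.
const tracer = new TeckelTracer({ api_key: process.env.TECKEL_API_KEY });

const sources = [{
  document_ref: 'smoke-test-chunk-1',
  document_name: 'Smoke Test Doc',
  document_text: 'Teckel AI traces RAG responses for quality analysis.'
}];

tracer
  .trace('Does tracing work?', 'Yes, this is a test answer.', sources, {
    trace_ref: `smoke-${Date.now()}`, // dummy ref; use your real request ID in production
    model: 'gpt-4o'
  })
  .then(() => console.log('Trace sent; check Audit History.'))
  .catch(err => console.error('Trace failed:', err));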
5. What Happens Next?
Once integrated, Teckel AI automatically:
- Analyzes every AI response using our unified Teckel Judge evaluation
- Groups similar queries into topics for pattern recognition
- Identifies documentation gaps and distinguishes retrieval issues from content issues
- Provides actionable feedback with specific improvement recommendations
- Can optionally test your vector database with ground truth validation
Check the Dashboard Guide to explore your insights and the Core Concepts to understand our analysis methodology.
Common Integration Patterns
Express.js API Endpoint
app.post('/api/chat', async (req, res) => {
  const { question } = req.body;

  // Your existing RAG logic
  const chunks = await vectorDB.search(question);
  const answer = await llm.generate(question, chunks);

  // Add Teckel tracing (sources are required)
  const sources = chunks.map(formatChunk);
  await tracer.trace(
    question,
    answer,
    sources, // REQUIRED parameter
    {
      trace_ref: req.id || req.requestId, // Use your existing request ID
      model: 'gpt-4o'
    }
  );

  res.json({ answer });
});
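The formatChunk helper above is just the mapping from step 3 factored out for reuse; a sketch, assuming your chunks carry the same fields as in step 3:

// Reusable version of the step 3 mapping.
function formatChunk(chunk) {
  return {
    document_ref: chunk.id,
    document_name: chunk.docId || chunk.title,
    document_text: chunk.text,
    document_last_updated: chunk.lastModifiedDate,
    source_uri: chunk.url || chunk.path,
    similarity: chunk.retrievalScore,
    owner_email: chunk.authorEmail
  };
}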
Next.js API Route
export async function POST(request) {
  const { question } = await request.json();

  // Your RAG implementation
  const chunks = await searchDocuments(question);
  const answer = await generateAnswer(question, chunks);

  // Teckel tracing (sources are required)
  const sources = formatChunksForTeckel(chunks);
  await tracer.trace(
    question,
    answer,
    sources, // REQUIRED parameter
    {
      trace_ref: request.headers.get('x-request-id'), // Use your existing trace ID
      model: 'gpt-4o'
    }
  );

  return Response.json({ answer });
}
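Because route handlers can be evaluated in multiple contexts, it's worth constructing the tracer once in a shared module and importing it wherever you need it. The lib/teckel.ts path below is just a convention, not something the SDK requires:

// lib/teckel.ts — construct the tracer once and share it across routes.
import { TeckelTracer } from '@teckel-ai/tracer';

export const tracer = new TeckelTracer({
  api_key: process.env.TECKEL_API_KEY
});

Each route handler can then import { tracer } from that module instead of creating its own instance.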
Need Help?
- Check our API & SDK Reference for detailed API documentation
- Review Security Best Practices for production deployments
- Contact support@teckelai.com for integration assistance