Getting Started with Teckel AI
This guide walks you through integrating Teckel AI into your enterprise AI application. The process is designed to be quick and straightforward, so you can start monitoring your AI's output quality in minutes.
Prerequisites
Before you begin, ensure you have the following:
- An active Teckel AI organization account. If you don't have one, you can sign up on our website.
- Administrative access to your Teckel AI organization to generate API keys.
- Access to the codebase of your RAG application.
1. Obtain Your API Key
Your API key is your secret credential for sending data to Teckel.
- Log in to your Teckel AI dashboard.
- Navigate to the Admin Panel > API Keys section.
- Click on "Generate Key".
- Provide a descriptive name for your key (e.g., "Production Server").
- Click "Generate Key". Your new API key will be displayed.
- Copy the key immediately and store it in a secure location, such as an environment variable (TECKEL_API_KEY). For security reasons, you will not be able to see the full key again.
Important: Treat your API key like a password. Do not expose it in client-side code or commit it to your version control system.
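If you keep the key in a local .env file during development, the dotenv package can load it into process.env at startup. This is a minimal sketch, and using dotenv is our assumption here; any secrets manager works just as well:

// Load variables from a local .env file (npm install dotenv)
import 'dotenv/config';

// Fail fast at startup if the key is missing
if (!process.env.TECKEL_API_KEY) {
  throw new Error('TECKEL_API_KEY environment variable is not set');
}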
2. Install the Teckel Tracer SDK
Our SDK is the easiest way to integrate with Teckel AI. We currently offer a package for Node.js environments.
Node.js / TypeScript
npm install @teckel/tracer
This package provides the TeckelTracer class, which you will use to send data to our platform.
Python
A Python SDK is coming soon. In the meantime, you can use our HTTP API directly. See the SDK Reference for an example.
3. Integrate the Tracer into Your Application
The core of the integration is a single function call. You'll add this call to your code right after your LLM generates a response.
Step 3.1: Initialize the Tracer
It's best to initialize the TeckelTracer once and reuse the instance throughout your application.
import { TeckelTracer } from '@teckel/tracer';
// Initialize with your API key from the environment variable
const tracer = new TeckelTracer(process.env.TECKEL_API_KEY);
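One way to guarantee a single instance is to create the tracer in a small module and import it everywhere else. This is a sketch; the tracer.ts filename is just a convention:

// tracer.ts — one shared tracer instance for the whole application
import { TeckelTracer } from '@teckel/tracer';

export const tracer = new TeckelTracer(process.env.TECKEL_API_KEY);

Then, wherever you need to record a trace:

import { tracer } from './tracer';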
Step 3.2: Prepare Your Source Documents
Before you can trace an interaction, you need to format any document chunks that your RAG system retrieves. The tracer expects an array of source objects.
const sources = retrievedChunks.map(chunk => ({
  document_name: chunk.docId || chunk.title, // Required
  document_text: chunk.text, // Required
  last_updated: chunk.lastModifiedDate, // Optional, highly recommended for freshness scoring
  source_uri: chunk.url || chunk.path, // Optional, helps with debugging
  confidence: chunk.retrievalScore, // Optional
  owner: chunk.authorEmail, // Optional, used for document owner notifications
}));
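For reference, the shape the tracer expects can be written as a TypeScript interface. This is inferred purely from the fields above; the SDK may export its own types, so treat the name TeckelSource as hypothetical:

// Source object shape as described above; optionality mirrors the comments
interface TeckelSource {
  document_name: string; // required
  document_text: string; // required
  last_updated?: string; // assumed to be an ISO 8601 date string
  source_uri?: string;
  confidence?: number;
  owner?: string; // e.g. an email address
}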
Step 3.3: Call tracer.trace()
After you get the answer from your LLM, call the trace function with the relevant data.
const traceId = await tracer.trace(userQuestion, aiAnswer, {
  model: 'gpt-4o', // Your chatbot’s AI model
  sources: sources,
  sessionId: 'user-session-123', // Optional, for grouping user queries
  processingTimeMs: 1500, // Optional, for performance tracking
});
The trace method is asynchronous and designed to be non-blocking: it sends the data to Teckel's servers and resolves quickly, so it won't add noticeable latency for your end users.
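If you would rather keep the call entirely out of your request path, a fire-and-forget pattern also works. In this sketch, failures are logged rather than thrown, so tracing can never break a user-facing response:

// Don't await: log the outcome in the background instead
tracer
  .trace(userQuestion, aiAnswer, { model: 'gpt-4o', sources })
  .then(traceId => console.debug('Teckel trace sent:', traceId))
  .catch(err => console.warn('Teckel trace failed:', err));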
4. Verify Your Integration
Once you've integrated the SDK, you can verify that everything is working correctly:
- Run a test query through your application (or use the standalone script after this list).
- Log in to your Teckel AI dashboard and navigate to the Audit History page.
- You should see the trace you just sent appear at the top of the list.
- Within a few seconds, the quality scores (Faithfulness, Context Precision, Response Relevancy, and Document Freshness) will populate, along with claims analysis data showing which factual statements are supported by your sources.
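If you want to exercise the integration without going through your full application, a standalone smoke test like the sketch below can send a single hand-written trace. The question, answer, and source document here are placeholder values, not part of the Teckel API:

// test-trace.ts — send one trace to verify connectivity (all values are placeholders)
import { TeckelTracer } from '@teckel/tracer';

async function main() {
  const tracer = new TeckelTracer(process.env.TECKEL_API_KEY);

  const traceId = await tracer.trace(
    'What is our refund policy?', // placeholder question
    'Refunds are available within 30 days.', // placeholder answer
    {
      model: 'gpt-4o',
      sources: [
        {
          document_name: 'refund-policy.md',
          document_text: 'Customers may request a full refund within 30 days of purchase.',
        },
      ],
    },
  );

  console.log('Sent test trace:', traceId);
}

main().catch(console.error);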
If you see your data in the dashboard, your integration is complete! If you encounter any issues, check the console for warnings from the SDK, and ensure your API key is correct.