Glossary of Terms
Here is a comprehensive glossary of the key terms and concepts you'll encounter while using Teckel AI.
Accuracy
A measure (0-1.0) of how well the AI's response is grounded in the provided source documents. Calculated as the ratio of supported claims to total claims extracted from the response. A score of 1.0 means all factual claims are fully supported by the sources.
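As a sketch (the claim-extraction step itself is Teckel's own pipeline, and the `Claim` shape below is illustrative), the score reduces to a simple ratio:

```typescript
// Hypothetical claim shape -- Teckel's internal representation may differ.
interface Claim {
  text: string;
  supported: boolean; // true if the claim maps to a supporting source chunk
}

// Accuracy = supported claims / total claims.
// A response with no factual claims is treated here as fully accurate.
function accuracy(claims: Claim[]): number {
  if (claims.length === 0) return 1.0;
  const supported = claims.filter((c) => c.supported).length;
  return supported / claims.length;
}
```

For example, a response with four extracted claims, three of them supported, scores 0.75.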
AI Audit (LLM-as-a-Judge)
The process of automatically reviewing an AI's answer using another AI model (hence "LLM-as-a-Judge"). The audit produces a result with scores and identifies quality issues in the original answer.
Chunking
Breaking documents into smaller pieces for AI to process. This makes it easier for AI systems to find and use the most relevant information when answering questions.
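A minimal fixed-size chunker with overlap, as a sketch; production pipelines usually split on sentence or section boundaries instead, and the size/overlap values here are arbitrary:

```typescript
// Split text into chunks of `size` characters, each overlapping the previous
// chunk by `overlap` characters so context isn't cut off mid-thought.
function chunk(text: string, size = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  const step = size - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```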
Claims Analysis
The process of breaking down AI responses into individual factual statements and mapping them to supporting source chunks. This provides transparency into which claims are supported by evidence and which are potentially hallucinated.
Completeness
A measure (0-1.0) of how well the AI's response addresses the user's specific question. This evaluates whether the AI stayed on topic and provided information that directly answers what was asked. A score of 1.0 means the response directly and comprehensively addresses the question.
Context Precision
A measure (0-1.0) of how relevant the retrieved document chunks are to answering the user's question. Calculated as the ratio of relevant chunks to total chunks retrieved. A score of 1.0 means all retrieved chunks were directly useful for answering the question.
Document Freshness
A measure (0-1.0) of how recent the source documents are, calculated using a 2-year decay curve from the last_updated timestamp. Fresh documents (0-219 days) score 0.7-1.0, aging documents (219-511 days) score 0.3-0.7, and stale documents (511+ days) score below 0.3.
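The documented thresholds are consistent with a simple linear decay to zero over 730 days (0.7 at 219 days, 0.3 at 511 days), so a sketch can use that, though Teckel's actual curve may differ in shape:

```typescript
// Freshness assuming a linear 2-year (730-day) decay from last_updated.
// This reproduces the documented thresholds: 0.7 at 219 days, 0.3 at 511 days.
function freshness(lastUpdated: Date, now: Date = new Date()): number {
  const days = (now.getTime() - lastUpdated.getTime()) / 86_400_000;
  return Math.max(0, 1 - days / 730);
}
```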
Embeddings/Vectorization
The process of converting text into numerical representations (vectors) that capture semantic meaning. Teckel AI uses embeddings to analyze query patterns and identify similar questions in the Query Insights feature.
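Embeddings are typically compared with cosine similarity; a minimal sketch (the actual embedding model and comparison used by Teckel are not specified here):

```typescript
// Cosine similarity between two embedding vectors: 1 means same direction
// (similar meaning), 0 means orthogonal (unrelated), -1 means opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```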
Grounding
Making sure AI answers are based on real sources, not made up. Grounding ensures that AI responses stick to the facts provided in your documentation rather than inventing information.
Hallucination
When AI makes up information that wasn't in its sources. This is one of the main problems Teckel AI helps you detect and prevent by analyzing whether claims in your AI responses are actually supported by your documents.
Human-in-the-Loop
Having people review and improve AI responses to ensure quality. Teckel AI helps identify which documents need human review by flagging low-scoring topics and surfacing common issues.
Hybrid Search
Using both keyword matching and meaning-based search together. This combines the precision of exact word matches with the flexibility of semantic understanding to find the most relevant information.
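One common way to combine the two signals is a weighted blend of normalized scores; the weight below is purely illustrative (other systems use techniques like reciprocal-rank fusion instead):

```typescript
// Blend a keyword-match score (e.g. from BM25) with a vector-similarity
// score, both normalized to 0-1. `alpha` controls the balance; 0.5 weighs
// them equally and would be tuned in a real system.
function hybridScore(keywordScore: number, vectorScore: number, alpha = 0.5): number {
  return alpha * keywordScore + (1 - alpha) * vectorScore;
}
```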
Issue
A specific problem identified in an AI response through claims analysis, such as unsupported claims, missing details, unclear terminology, or reliance on outdated information. Issues are tagged and categorized to help improve documentation.
LLM (Large Language Model)
The AI model (like GPT-4, Claude, etc.) that generates answers to user questions. Teckel AI is model-agnostic and observes the input and output of any LLM.
Overall Score
A weighted average of the four core metrics (Accuracy, Precision, Completeness, and Document Freshness), providing a single indicator of response quality.
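As a sketch of the arithmetic only: the actual weights Teckel applies are not documented here, so the equal weights below are placeholders, not the real configuration:

```typescript
// The four core metrics, each 0-1.
interface Metrics {
  accuracy: number;
  precision: number;
  completeness: number;
  freshness: number;
}

// Weighted average of the four metrics. Equal weights are illustrative only;
// the production weighting is Teckel's own.
function overallScore(
  m: Metrics,
  w = { accuracy: 0.25, precision: 0.25, completeness: 0.25, freshness: 0.25 }
): number {
  return m.accuracy * w.accuracy + m.precision * w.precision +
         m.completeness * w.completeness + m.freshness * w.freshness;
}
```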
Prompt
The question or instructions given to an AI model to act on. In Teckel AI's context, this is typically the user's query that your RAG system is trying to answer.
Query Insights
Teckel AI's semantic analysis feature that uses embeddings to identify patterns in user queries, revealing trending topics, common questions, and areas where your AI struggles to provide good answers.
RAG (Retrieval-Augmented Generation)
A technique where an LLM is augmented with retrieved context from a knowledge base. Teckel AI is specifically designed for RAG systems, as it monitors the answer that was produced using this retrieved information.
RAGAS
An open-source framework for evaluating RAG systems. Teckel AI takes inspiration from RAGAS metrics but implements its own optimized evaluation pipeline with metrics like Accuracy, Precision, and Completeness for better performance.
Retrieval
Finding and fetching relevant information from your documents. This is the first step in RAG systems where the most relevant chunks are selected to help answer a user's question.
SDK (Software Development Kit)
The @teckel-ai/tracer npm package that provides a simple interface for integrating Teckel AI into your application. The SDK handles trace submission, error handling, and authentication.
Semantic Clustering
The process of grouping similar queries together based on their meaning rather than exact keywords. This helps identify patterns in what users are asking and where documentation might be lacking.
Semantic Search / Vector Search
Finding related content based on meaning and context rather than exact keyword matches. Most AI chat applications use semantic search for retrieval. Teckel AI uses this to analyze query patterns and identify similar questions across your trace history.
Session
A grouping key for traces that can correspond to a user's chat session or any other logical grouping of queries. This allows data to be aggregated by session (filtering by session is coming soon).
Source / Document Chunk
In Teckel AI's context, a "source" is a piece of content used to derive an answer, such as a paragraph or chunk from a document.
Teckel Judge
Our proprietary evaluation engine that performs automated analysis of AI responses. It calculates four core metrics (Accuracy, Precision, Completeness, and Document Freshness) and provides detailed claims analysis to identify improvement opportunities.
Token
A piece of text (like a word or part of a word) that AI processes. Understanding tokens helps explain AI pricing and context limits.
Topic Clustering
The automatic categorization of user queries into meaningful topics based on semantic similarity. This helps you understand what subjects your users ask about most frequently.
Topic Gaps
Areas identified through query analysis where your documentation is missing, incomplete, or consistently produces low-quality responses. Topic gaps highlight the most impactful documentation improvements you can make.
Topic Relationships
Connections and dependencies between different query topics, showing how different subjects relate to each other in your users' questions. This helps identify documentation that should be linked or cross-referenced.
Trace
A single interaction record in Teckel AI, corresponding to one user question and the AI's answer, along with associated metadata. Each trace has a unique identifier for auditing.
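A trace might look something like the shape below; the field names are illustrative, not the SDK's actual schema:

```typescript
// Hypothetical trace record -- illustrative fields, not the real SDK schema.
// A trace ties one question, its retrieved sources, and the AI's answer
// together under a unique id.
interface Trace {
  id: string;          // unique identifier used for auditing
  sessionId?: string;  // optional grouping key (see "Session")
  query: string;       // the user's question
  sources: string[];   // retrieved chunks the answer was based on
  response: string;    // the AI's answer
  timestamp: string;   // ISO-8601 time of the interaction
}

const example: Trace = {
  id: "trace_123",
  query: "How do I reset my password?",
  sources: ["Passwords can be reset from the account settings page."],
  response: "Open your account settings and choose Reset Password.",
  timestamp: "2024-05-01T12:00:00Z",
};
```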
Vector Database / Embeddings
Many RAG systems use vector embeddings (numerical representations of text) and vector databases to retrieve relevant document chunks. Teckel AI works with any vector database or retrieval system. We additionally apply embeddings to user queries in the Query Insights feature to surface themes across what your users ask.