
Glossary of Terms

Here is a comprehensive glossary of the key terms and concepts you'll encounter while using Teckel AI.

Accuracy Score

A quantitative measure (0–100) of how factually correct the AI's answer is, given the provided sources. A score of 100 means the answer is perfectly accurate and supported by the sources, with no hallucinations.

AI Audit (LLM-as-a-Judge)

The process of automatically reviewing an AI's answer using another AI model, also referred to as LLM-as-a-Judge. The audit produces a result containing quality scores and a list of issues found in the original answer.

API Key

A secret token used to authenticate API requests from your systems to Teckel's API, identifying your organization's data.
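As an illustration, here is a minimal sketch of building an authenticated request. The endpoint URL, the `Bearer` header scheme, and the payload fields are assumptions for the example, not Teckel's documented API.

```python
import json
import urllib.request

# Hypothetical endpoint, for illustration only.
TECKEL_API_URL = "https://api.teckel.example/v1/traces"

def build_trace_request(api_key: str, question: str, answer: str) -> urllib.request.Request:
    """Build an authenticated POST request carrying one question/answer pair.

    The header name and payload shape are assumptions; consult the API
    reference for the real schema.
    """
    payload = json.dumps({"question": question, "answer": answer}).encode("utf-8")
    return urllib.request.Request(
        TECKEL_API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Keep the key out of source control; load it from an environment variable or a secrets manager in real code.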

Completeness Score

A measure (0–100) of how fully the answer addresses all parts of the user's question. A score of 100 means the answer is fully complete.

Freshness Score

A measure (0–100) of the recency of the information sources. More recent sources result in a higher freshness score.

Issue

A textual description of a specific problem found in an answer, categorized by type (Accuracy, Completeness, or Freshness).

LLM (Large Language Model)

The AI model (like GPT-4, Claude, etc.) that generates answers to user questions. Teckel is model-agnostic and observes the input and output of any LLM.

Organization

In Teckel AI, your company's account is an Organization. It owns all of your traces, audit results, and other data, and has a unique identifier.

Overall Score

A weighted composite of the accuracy, completeness, and freshness scores, providing a single metric to judge the overall quality of an answer.
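A weighted composite of this kind can be sketched as below. The weights used here are illustrative assumptions, not Teckel's actual weighting.

```python
def overall_score(accuracy: float, completeness: float, freshness: float,
                  weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Weighted average of the three 0-100 scores.

    The default weights are made up for illustration; the product's real
    weighting may differ.
    """
    wa, wc, wf = weights
    total = wa + wc + wf
    return (wa * accuracy + wc * completeness + wf * freshness) / total
```

With these example weights, an answer scoring 80 / 90 / 70 yields an overall score of 81.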

RAG (Retrieval-Augmented Generation)

A technique where an LLM is augmented with retrieved context from a knowledge base. Teckel AI is specifically designed for RAG systems, as it monitors the answer that was produced using this retrieved information.
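The retrieve-then-generate flow can be sketched as follows. This toy retriever ranks chunks by word overlap (real systems use vector similarity), and the "generation" step is just prompt assembly; nothing here reflects Teckel's internals.

```python
def retrieve(query: str, knowledge_base: dict[str, str], k: int = 2) -> list[str]:
    """Toy retriever: rank chunks by word overlap with the query.

    Real RAG systems typically use vector similarity instead.
    """
    q_words = set(query.lower().split())
    ranked = sorted(
        knowledge_base.values(),
        key=lambda chunk: len(q_words & set(chunk.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer_with_rag(query: str, knowledge_base: dict[str, str]) -> str:
    """Assemble the prompt a RAG system would send to its LLM."""
    context = "\n".join(retrieve(query, knowledge_base))
    # In a real pipeline this prompt goes to the LLM; here we just return it.
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The question, the assembled context, and the LLM's answer are exactly the pieces a trace captures for auditing.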

RLS (Row-Level Security)

A database security mechanism that restricts which rows of data a certain user or role can access. We use RLS in our Postgres database to ensure each organization can only access its own rows.

Session

A grouping key for traces that can correspond to a user's chat session or any other logical grouping of queries. This allows for filtering and aggregation of data by session.
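Aggregating traces by such a key is straightforward; the `session_id` field name below is an assumption for the example.

```python
from collections import defaultdict

def group_by_session(traces: list[dict]) -> dict[str, list[dict]]:
    """Group trace records by their session key.

    The "session_id" field name is illustrative, not a documented schema.
    """
    sessions: dict[str, list[dict]] = defaultdict(list)
    for trace in traces:
        sessions[trace.get("session_id", "no-session")].append(trace)
    return dict(sessions)
```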

Source / Document Chunk

In Teckel's context, a "source" is a piece of content used to derive an answer, such as a paragraph from a document or a snippet from an article.

Supabase

The backend-as-a-service platform that we use for our database and authentication. It is built on top of PostgreSQL and provides a secure and scalable foundation for our platform.

Team

A subgroup within an organization used for organizing content and user access by department or project. This feature is optional.

Teckel Judge

Our proprietary LLM-as-a-judge model that performs the automated analysis of your AI's responses.

Trace

A single interaction record in Teckel, corresponding to one user question and the AI's answer, along with associated metadata. Each trace has a unique identifier for auditing.

Vector Database / Embeddings

Many RAG systems use vector embeddings (numerical representations of text) and vector databases for retrieving relevant document chunks. Teckel works with any vector database or retrieval system. We additionally apply embeddings to user queries in our Query Insights section to surface recurring themes.
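The retrieval step boils down to finding the stored vector most similar to the query vector, usually by cosine similarity. A minimal sketch over toy two-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def nearest_chunk(query_vec: list[float], chunk_vecs: dict[str, list[float]]) -> str:
    """Return the id of the chunk whose embedding is closest to the query."""
    return max(chunk_vecs, key=lambda cid: cosine_similarity(query_vec, chunk_vecs[cid]))
```

Production vector databases index these vectors with approximate-nearest-neighbor structures rather than scanning every chunk, but the similarity computation is the same idea.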