Unlocking the AI Landscape: A Comprehensive Glossary

Understanding core terminology is crucial for UK businesses seeking to leverage cutting-edge technological innovations in the rapidly evolving landscape of artificial intelligence.
This comprehensive glossary breaks down nearly 100 essential AI terms in clear, accessible language, helping business leaders and technology professionals navigate the complex world of artificial intelligence.
> Actor-Critic Method
A reinforcement learning technique where one part of the AI system performs actions while another evaluates how good those actions are – like having a performer who acts and a critic who provides feedback.
> Adaptive Learning
An AI approach where systems automatically modify their algorithms and learning processes based on new data and changing environments, enabling more dynamic and responsive intelligence.
> Adversarial Machine Learning
A technique that develops AI systems that detect and resist malicious attempts to manipulate or deceive machine learning models.
> AI Data Extraction
The process of pulling relevant information from unstructured data like text or images using machine learning and natural language processing.
> AI Ethics
The study of moral principles and standards governing the development and use of artificial intelligence, focusing on multiple areas, including fairness, accountability, transparency, explainability, sustainability and societal impact.
> Algorithm
A step-by-step computational procedure or formula for solving problems and performing tasks, serving as the fundamental instruction set for AI systems.
> Algorithm Bias
Systematic errors in AI systems that can lead to unfair or discriminatory outcomes, often resulting from skewed or unrepresentative training data.
> Artificial Intelligence (AI)
A technology that enables computers to simulate human-like cognitive functions, including learning, problem-solving and decision-making.
> AI Governance
A framework of rules and practices for developing and using AI systems responsibly, safely and ethically.
> Artificial General Intelligence (AGI)
A hypothetical form of AI system that would match or exceed human-level intelligence across any task.
> Artificial Narrow Intelligence (ANI)
AI systems, also known as weak AI, designed to perform specific tasks – like facial recognition or language translation – without generalised cognitive abilities.
> Artificial Superintelligence (ASI)
A hypothetical AI system that would far surpass human intelligence and cognitive function in virtually every field.
> Artificial Neural Networks (ANN)
A machine learning model, inspired by the structure of the human brain, designed to recognise patterns in data.
> Attention Mechanism
A technique that helps AI systems focus on the most relevant parts of input data, similar to how humans pay attention to specific details in a conversation.
> Augmented Intelligence
AI systems designed to enhance rather than replace human intelligence and capabilities, focusing on collaborative intelligence.
> Automation
The use of AI technology to perform tasks and processes with minimal human intervention – applications include customer interactions (chatbots), data analysis and document processing.
> Automatic Speech Recognition
AI technology that converts spoken words into written text, enabling voice-based interactions and transcription.
> Backpropagation
A machine learning technique – the primary method used to train neural networks by adjusting their parameters based on errors in their output.
> Bias-Variance Tradeoff
The balance between an AI model’s ability to accurately fit training data and generalise to new, unseen data. High bias (underfitting) in an AI model can lead to data oversimplification, while high variance (overfitting) can lead to overcomplexity – a balance of both is needed for optimal performance.
> Capsule Networks
A type of neural network architecture designed to capture hierarchical relationships in data more effectively than standard convolutional neural networks (CNNs). They remain an emerging research area with limited practical uses.
> Chatbot
An AI program that employs a number of methods, including machine learning and natural language processing, to simulate conversation with human users.
> Classification
A process that categorises data into predefined groups or categories – applications include email spam filtering, patient diagnosis and sentiment analysis.
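For readers who like to see the idea in code, here is a minimal Python sketch of classification using a one-nearest-neighbour rule – a new data point simply takes the label of the closest labelled example. The labels, feature values and function name are invented for illustration, not taken from any particular library.

```python
import math

def classify_1nn(point, training_data):
    """Assign `point` the label of its nearest labelled example (1-nearest-neighbour)."""
    nearest = min(training_data, key=lambda example: math.dist(point, example[0]))
    return nearest[1]

# Hypothetical training examples: (feature vector, label)
examples = [((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"),
            ((5.0, 5.0), "not spam"), ((4.8, 5.2), "not spam")]

print(classify_1nn((1.1, 0.9), examples))  # prints "spam"
```

Production systems use far richer models, but the principle – learning a mapping from features to predefined categories – is the same.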
> Clustering
A process of grouping similar data points together without predefined categories – applications include customer segmentation and fraud detection.
> Collaborative Filtering
A process used in recommendation systems that predicts user preferences based on similar user behaviour.
> Computer Vision
The field of AI that enables computers to ‘see’ via processing and interpretation of visual data such as digital images.
> Convolutional Neural Network (CNN)
A type of artificial neural network particularly effective at processing image and visual data, used in computer vision.
> Data Augmentation
Creating new training data by modifying existing data in realistic ways to increase data volume, diversity and quality.
> Data Imputation
The process of replacing missing values in a dataset with estimated values so that the dataset can be retained and analysed.
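As a simple illustration, mean imputation – filling each gap with the average of the observed values – can be sketched in a few lines of Python. The figures here are invented; real pipelines often use more sophisticated estimates.

```python
def impute_mean(values):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

print(impute_mean([10, None, 14, None, 12]))  # missing entries become 12.0
```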
> Deep Learning
A subset of machine learning that uses multi-layered neural networks to process complex patterns in data.
> Deterministic Model
An approach in which the same input conditions always produce the same output. Unlike probabilistic models, there is no inherent randomness or uncertainty.
> Dimensionality Reduction
Techniques for reducing the number of features in a dataset while preserving its important characteristics.
> Distribution Shift
When the patterns in new data differ from those in the training data, potentially causing AI systems to perform poorly – continuous monitoring helps to mitigate this.
> Dropout
A technique in deep learning where nodes (processing units) in neural networks are intentionally dropped during training to reduce overfitting and improve AI model performance.
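A toy version of the idea can be written in plain Python – this sketch randomly zeroes half the activations and rescales the survivors (so-called inverted dropout). The numbers and function name are our own; deep learning frameworks provide this as a built-in layer.

```python
import random

def apply_dropout(activations, rate=0.5, seed=0):
    """Randomly zero a fraction of activations during training, scaling
    the survivors so the expected total stays roughly the same."""
    rng = random.Random(seed)
    return [0.0 if rng.random() < rate else a / (1 - rate) for a in activations]

print(apply_dropout([0.2, 0.9, 0.4, 0.7]))
```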
> Early Stopping
A technique that prevents AI models from becoming too specialised to their training data by stopping the training process at an optimal point, preserving the ability to generalise to new data.
> Ensemble Learning
A technique that combines multiple AI models to improve prediction accuracy and system performance.
> Edge AI
AI systems that operate on local devices rather than in the cloud, enabling faster and more private processing – applications include wearable health-monitoring accessories and home security systems.
> Enterprise AI
The implementation of AI solutions across an entire organisation to improve business operations and decision-making.
> Explainable AI (XAI)
AI systems designed to provide transparent and interpretable explanations for their decisions, helping users understand the reasoning behind AI-generated outputs – a key element of responsible AI.
> Feature Engineering
The process of selecting, transforming, and creating relevant features from raw data to improve machine learning model performance.
> Frontier AI
The most advanced AI systems that push the boundaries of what’s currently possible with artificial intelligence.
> Generative AI
AI systems capable of creating new content like text, images, code, and music by learning from existing data.
> Generative Adversarial Networks (GANs)
AI systems where two deep neural networks, a generator and a discriminator, compete to create increasingly accurate and realistic data.
> Gradient Descent
An optimisation algorithm that trains AI models by gradually adjusting their parameters to minimise errors.
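The idea can be sketched in a few lines of Python – here gradient descent finds the minimum of the simple function f(x) = (x - 3)^2, assuming we already know its gradient. Real training works the same way, just across millions of parameters at once.

```python
def gradient_descent(grad, start, learning_rate=0.1, steps=100):
    """Repeatedly step against the gradient to minimise a function."""
    x = start
    for _ in range(steps):
        x -= learning_rate * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2 * (x - 3)
minimum = gradient_descent(lambda x: 2 * (x - 3), start=0.0)
print(round(minimum, 4))  # prints 3.0 - the true minimum
```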
> Graph Neural Networks (GNNs)
A specialised type of neural network, designed to process graph-structured data (a non-linear data type).
> Grounding
A process connecting AI language models to real-world context and facts to improve relevance and accuracy.
> Hyperparameters
Configurable settings data scientists use to control how an AI model learns from data to optimise performance.
> Inference
The process a trained AI model uses to make predictions or decisions based on new data – this stage happens after training, when the model is put to work.
> Instance Segmentation
A computer vision task that involves identifying and separating individual objects within images.
> Knowledge Distillation
A process of transferring knowledge from a large, complex AI model to a smaller, more efficient one.
> Language Model Fine-Tuning
Adapting a pre-trained language model with additional training to perform specific tasks or handle specialised content.
> Latency
The time delay between when an AI system receives input data and when it produces output – low-latency systems are considered more user-friendly.
> Large Language Models (LLMs)
Advanced AI systems trained on vast amounts of text data to understand and generate human-like language.
> Latent Variable Models
AI models that work with unobservable variables that influence observable data, used to improve the accuracy of predictions.
> Machine Learning (ML)
A subset of AI that focuses on enabling algorithms to learn patterns from data in order to make predictions or decisions, without being explicitly programmed to handle every scenario. This approach allows systems to improve performance as they encounter new data, rather than simply imitating human cognition.
> Manifold Learning
Techniques for understanding the structure of high-dimensional data by reducing it to a lower-dimensional form where it is easier to analyse and understand.
> Misinformation
False or misleading information that AI systems might generate or need to detect, ranging from inaccurate predictions and unrealistic imagery to deliberately deceitful content.
> Model Training
The process of teaching an AI algorithm how to perform its intended task using curated data sets to refine performance.
> Monte Carlo Methods
Mathematical techniques used in AI to solve statistical problems through random sampling.
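The classic textbook example is estimating pi by random sampling: scatter points in a square and count how many land inside the quarter circle it contains. This Python sketch is illustrative only – the sample size and seed are arbitrary choices.

```python
import random

def estimate_pi(samples=100_000, seed=42):
    """Estimate pi by sampling random points in the unit square and counting
    the fraction that fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * inside / samples

print(estimate_pi())  # prints a value close to 3.14
```

More samples give a better estimate – the same trade-off appears in every Monte Carlo application.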
> Multi-Modal AI
AI systems that can process and produce different types of data like text, images, and sound – for example, they can produce a text caption for a user image and vice versa.
> Natural Language Processing (NLP)
Machine learning technology that enables computers to understand and generate human language.
> Neuro-Symbolic AI
A hybrid approach combining neural networks with symbolic reasoning (where symbols and rules are used to process data) to improve AI system capabilities.
> Open-source
Software, including AI models, whose code is freely available for anyone to use, modify, and distribute.
> One-Shot Learning
The ability of AI systems to learn from a single example or minimal training data.
> Overfitting
When an AI model learns the training data too precisely, performing poorly on new, unseen data, with limited generalisation capability.
> Parameter-efficient Fine-Tuning (PEFT)
Techniques for adapting large AI models to new tasks while updating only a small portion of their parameters.
> Predictive Analytics
Using AI and statistical methods to predict future outcomes based on historical data.
> Pre-training
The initial training of an AI model on a large dataset to learn general functions before fine-tuning it for specific tasks.
> Principal Component Analysis (PCA)
A technique for reducing data complexity while preserving its most important patterns.
> Probabilistic Model
An AI model that incorporates probability and uncertainty into its calculations, enabling it to handle complex, real-world challenges.
> Prompt Engineering
The practice of designing and refining input instructions to elicit more accurate, relevant, and nuanced responses from large language models.
> Quantisation
A process that reduces the numerical precision of model parameters. This decreases model size and speeds up inference by lowering computational overhead.
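A toy version of the idea, in plain Python: map floating-point weights onto a small set of signed integer levels, keeping a scale factor so approximate values can be recovered. The weight values and function name are invented; production quantisation is handled by specialised tooling.

```python
def quantise(weights, bits=8):
    """Map floating-point weights onto signed integer levels, returning the
    integers plus the scale needed to recover approximate values."""
    levels = 2 ** (bits - 1) - 1              # 127 levels for 8-bit
    scale = max(abs(w) for w in weights) / levels
    return [round(w / scale) for w in weights], scale

ints, scale = quantise([0.12, -0.5, 0.33, 1.0])
restored = [i * scale for i in ints]          # close to the originals
```

Each weight now needs only 8 bits rather than 32, at the cost of a small rounding error.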
> Recommender Systems
AI systems that suggest items or content based on user preferences and behaviour.
> Recurrent Neural Networks (RNNs)
A deep learning model designed to process sequential data such as sentences, weather data and customer purchase history.
> Regression
A machine learning technique that studies relationships between data variables and builds predictive models that organisations can use for forecasting and decision-making.
> Reinforcement Learning
An AI training approach where systems learn through trial and error, receiving either negative or positive feedback to optimise learning.
> Responsible AI
The practice of developing and using AI systems in ways that are ethical, transparent, and beneficial to society.
> Robotics
The field combining AI with physical machines to perform tasks in the real world with human-like capability.
> Self-Attention
A key mechanism used in machine learning, notably in natural language processing and computer vision, that allows AI models to weigh the importance of different parts of their input data.
> Semi-Supervised Learning
A machine learning approach using labelled and unlabelled data to train AI models and improve learning accuracy.
> Sentiment Analysis
Using natural language processing and machine learning to determine text data’s emotional tone or opinion.
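At its very simplest, sentiment can be scored by counting positive and negative words from a lexicon – the tiny word lists below are invented for illustration, and real systems use machine learning models rather than fixed lists.

```python
POSITIVE = {"great", "excellent", "love", "happy"}
NEGATIVE = {"poor", "terrible", "hate", "slow"}

def sentiment(text):
    """Score text by counting positive and negative words from a small lexicon."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The service was excellent and the staff were great"))  # prints "positive"
```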
> Sequence Modelling
AI techniques for processing and predicting sequential data like text or time series based on historical data.
> Sparse Representations
Data representations that use relatively few non-zero values to capture important information.
> Spectral Clustering
A technique for grouping data points based on their connections rather than absolute positions.
> Stochastic Gradient Descent (SGD)
An algorithm that optimises AI model parameters using random training data samples to minimise errors.
> Supervised Learning
A machine learning technique where algorithms are trained using labelled data, with clear input-output pairs, to help the model learn predictive patterns.
> Support Vector Machines (SVM)
A machine learning algorithm that finds the best boundary between different categories of data.
> Synthetic Data
Artificially generated data that mimics real-world data, used to train AI systems when real data is scarce or sensitive.
> Tokenisation
The process of breaking text into smaller units, or ‘tokens’, that AI models can process – it is the foundation of natural language processing.
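A simplified word-and-punctuation tokeniser can be written in one line of Python – real language models use learned subword tokenisers, so this is only a stand-in to show the shape of the output.

```python
import re

def tokenise(text):
    """Split text into lowercase word and punctuation tokens - a simplified
    stand-in for the subword tokenisers real language models use."""
    return re.findall(r"[a-z']+|[.,!?]", text.lower())

print(tokenise("AI models read tokens, not sentences."))
# prints ['ai', 'models', 'read', 'tokens', ',', 'not', 'sentences', '.']
```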
> Training Datasets
Collections of example data, or inputs, used to teach AI models how to perform their tasks and process information.
> Transfer Learning
A machine learning technique using knowledge learned in one task to improve performance on a different but related task.
> Transformer Models
A powerful type of neural network architecture introduced by Google researchers in 2017. Particularly good at processing sequential data such as text and speech in near real-time, it is now widely used in natural language processing.
> Underfitting
When an AI model is too simple to capture important patterns in the training data.
> Universal Approximation Theorem
A mathematical principle stating that neural networks can approximate any continuous function.
> Unsupervised Learning
A machine learning technique where algorithms are trained to identify patterns in unlabelled data without predefined outcomes.
> Vanishing Gradient Problem
A technical challenge in training deep neural networks where gradients shrink as they are propagated back through many layers, making learning ineffective in the early layers.
> Variational Autoencoder (VAE)
A type of neural network that learns to generate new data similar to its training dataset examples.
> Vision Transformer (ViT)
A type of transformer model adapted for processing images, used in computer vision tasks.
> X-risk
Hypothetical dangers that extremely advanced AI systems might pose to humanity – this concept is particularly relevant to the development of artificial general intelligence (AGI) and artificial superintelligence (ASI).
> Zero-Shot Learning
The ability of AI systems to handle tasks they weren’t explicitly trained for, drawing on their general knowledge rather than task-specific examples.
Artificial Intelligence (AI) has emerged as a game-changing technology transforming how companies operate, serve customers, and grow their bottom line.
Don’t let your business fall behind in the AI revolution – contact Dr Logic today to explore how AI can help your business. Our friendly, expert team is ready to provide personalised advice and discuss AI solutions tailored to your unique needs.
We are looking to partner with ambitious, like-minded brands.
Like what you’ve read and would like to know what else we know? Then get in touch.