
A
- Artificial General Intelligence (AGI)
AI matching or exceeding human performance across a broad range of tasks, unlike today’s specialized systems.
- AI Architecture
Overall design of an AI system: models, data pipelines, APIs, training/inference infrastructure.
- AI Explainability
Ability to interpret and understand AI decisions—key for trust and transparency.
- Autonomous Agents
Software components that perform sequences of tasks in an environment with minimal human intervention, making decisions to achieve goals.
- AI Stability
Consistency of outputs across repeated runs or evolving tasks. Crucial for reliable creative workflows (e.g., consistent character appearance across multiple generations); Promptus Cosyflows preserve parameters/seeds for stable outputs.
B
- Bias (LLM Bias)
Systematic preferences or unfair tendencies in model outputs reflecting biases in training data.
- Batch Size
Number of training examples processed together before updating model weights; an important training hyperparameter (see the sketch at the end of this section).
- Benchmark
Standardized test suite to measure/compare AI model performance (e.g., AIME 2024 for reasoning models).
- Black-Box Model
System whose internal workings are opaque to users, making explainability and debugging harder.
- Blockchain for AI
Using blockchain to track data provenance, model ownership, or compensation in generative workflows.
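To make the Batch Size entry above concrete, here is a minimal, framework-agnostic Python sketch; the toy dataset and batch size of 4 are illustrative, not a recommendation:
```python
def iterate_batches(examples, batch_size):
    """Yield consecutive slices of `examples`, `batch_size` items at a time."""
    for start in range(0, len(examples), batch_size):
        yield examples[start:start + batch_size]

# Toy usage: 10 examples, batch size 4 -> batches of 4, 4, and 2.
dataset = list(range(10))
for batch in iterate_batches(dataset, batch_size=4):
    print(batch)  # in real training: compute loss on the batch, then update weights
```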
C
- Chunking
Dividing data (e.g., documents) into manageable pieces for processing, especially in retrieval-augmented generation; RAG pipelines use chunking to feed relevant context to the model (see the sketch at the end of this section).
- Content Analytics
Algorithms analyzing text, images, or video to extract insights, often used in AI pipelines. Underpins recommendation and personalization features.
- Content Quality Filters
Mechanisms to ensure generated content meets standards (e.g., filtering out inappropriate or low-quality outputs).
- Context Length
Maximum tokens an LLM can process at once; longer context enables understanding extended inputs.
- Conversational AI
Systems (chatbots, voice assistants) designed for human-like dialogue; personality and empathy matter for engagement.
- Cosyflows / Visual Workflows
(Promptus concept) Node-based interfaces where creators link operations visually rather than writing code or deep prompt engineering.
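As referenced under Chunking above, a minimal sketch of fixed-size chunking with overlap. The character-based window sizes are illustrative; production RAG pipelines often split on sentence or token boundaries instead:
```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split `text` into overlapping character windows for retrieval."""
    chunks = []
    step = chunk_size - overlap  # overlap keeps context intact across boundaries
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

doc = "lorem ipsum " * 100  # stand-in for a long document
print(len(chunk_text(doc)))  # number of chunks fed to the retriever
```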
D
- Data Labeling
Tagging data (images, text) to train supervised models.
- Data Pipeline
Sequence of stages for collecting, processing, and storing data for AI training or inference.
- Data Sanitization
Removing or masking sensitive information (PII) in datasets for privacy compliance.
- Data Validation
Ensuring data accuracy and consistency before using it in AI systems.
- Dataset
Structured collection of data used to train or evaluate models.
- Deep Learning
Subset of machine learning using neural networks with many layers to learn hierarchical representations.
- Diffusion Models
Generative models that iteratively denoise random noise to produce images or other data; widely used in image/video generation (e.g., Stable Diffusion). A code sketch follows at the end of this section.
- Domain Adaptation
Adjusting a pre-trained model to perform well in a new but related domain.
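As a concrete illustration of the Diffusion Models entry above, a minimal sketch using the Hugging Face diffusers library. The model ID, step count, and GPU availability are assumptions; adjust for your setup:
```python
# pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# Assumed checkpoint; any compatible Stable Diffusion model ID works.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA GPU is available

# Each inference step denoises the latent a little further.
image = pipe("a watercolor fox in a forest", num_inference_steps=30).images[0]
image.save("fox.png")
```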
E
- Embeddings
Dense vector representations encoding semantic relationships of text, images, or other modalities (see the sketch at the end of this section).
- Ethics in AI
Principles ensuring AI is developed and used responsibly—addressing fairness, transparency, accountability, and privacy.
- Evaluation Metrics
Measures (accuracy, BLEU, FID, etc.) to quantify model performance for specific tasks.
- Explainability / Interpretability
Techniques (e.g., LIME, SHAP) that help users understand why a model made a certain decision.
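To ground the Embeddings entry above, a minimal numpy sketch of cosine similarity between dense vectors. The three-dimensional vectors are toy stand-ins; real embeddings have hundreds or thousands of dimensions and come from a trained model:
```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 = same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cat   = np.array([0.9, 0.1, 0.0])   # toy embedding for "cat"
kitty = np.array([0.8, 0.2, 0.05])  # toy embedding for "kitten"
car   = np.array([0.0, 0.1, 0.95])  # toy embedding for "car"

print(cosine_similarity(cat, kitty))  # high: semantically close
print(cosine_similarity(cat, car))    # low: unrelated concepts
```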
F
- Feature Engineering
Selecting and transforming data attributes to improve model performance.
- Fine-Tuning (LLM Fine-Tuning)
Further training of a pre-trained model on task-specific data to improve performance on that task.
- Foundation Model
Pre-trained model (often large) serving as a base for downstream tasks, e.g., Stable Diffusion for images.
- Frameworks & Libraries
Software (PyTorch, TensorFlow, etc.) for building and training AI models.
- Few-Shot / Zero-Shot Learning
Ability of models to perform tasks with few or no labeled examples, often via prompt engineering or meta-learning.
- Federated Learning
Training models across decentralized devices while keeping data local for privacy.
G
- Generative Pre-trained Transformers (GPT)
Family of transformer-based language models (e.g., ChatGPT) pre-trained on large text corpora.
- Grounding LLMs
Providing relevant context (external knowledge) at inference time to improve factual accuracy.
- Gradient Descent
Optimization algorithm adjusting model weights by following gradients to minimize loss (see the sketch at the end of this section).
- GANs (Generative Adversarial Networks)
Two-network framework (generator vs. discriminator) for image/audio generation.
- Guardrails
Constraints ensuring AI outputs remain ethical, accurate, and non-harmful. E.g., content filters or rule-based checks in Promptus workflows.
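As noted under Gradient Descent above, a minimal sketch minimizing a one-dimensional quadratic loss; the learning rate and step count are illustrative:
```python
# Minimize loss(w) = (w - 3)^2; the gradient is 2 * (w - 3).
w = 0.0             # initial weight
learning_rate = 0.1

for step in range(50):
    grad = 2 * (w - 3)         # derivative of the loss at w
    w -= learning_rate * grad  # step against the gradient direction

print(w)  # converges toward the minimum at w = 3
```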
H
- Hallucinations
When LLMs produce plausible-sounding but false or fabricated information.
- Human-in-the-Loop (HITL)
Involving humans in model training or inference pipelines for oversight, correction, and quality control. See conversational/avatar workflows (claudiaperez.co.uk).
- Hybrid Search
Combining semantic (vector-based) and keyword (sparse) search for better retrieval (see the sketch at the end of this section).
- Hyperparameters
Settings (learning rate, batch size, etc.) chosen before training that influence model behavior and performance.
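As a sketch for the Hybrid Search entry above, a toy blend of a sparse keyword score with a dense vector score. The scores and the 0.5 weighting are illustrative; real systems combine BM25 with learned embeddings:
```python
def hybrid_score(sparse_score, dense_score, alpha=0.5):
    """Blend keyword relevance with semantic similarity."""
    return alpha * dense_score + (1 - alpha) * sparse_score

# Toy candidate documents with precomputed scores.
candidates = {
    "doc_a": {"sparse": 0.9, "dense": 0.1},  # exact keyword match, weak semantics
    "doc_b": {"sparse": 0.3, "dense": 0.8},  # paraphrase: strong semantics
}
ranked = sorted(
    candidates,
    key=lambda d: hybrid_score(candidates[d]["sparse"], candidates[d]["dense"]),
    reverse=True,
)
print(ranked)  # doc_b first: semantic similarity outweighs the keyword miss
```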
I
- Inference
Running a trained model on new data to obtain predictions or generated content.
- Image-to-Image / Text-to-Image
Generation tasks converting input images or text prompts into new images (e.g., Stable Diffusion).
- Indexing
Creating data structures (e.g., vector indexes) to speed up retrieval in large datasets.
- Interactive AI Interfaces
Tools (visual workflows, chatbots) allowing users to steer AI processes in real-time. Promptus Cosyflows and conversational avatars illustrate this (claudiaperez.co.uk).
- IoT & Edge AI
Running AI models on edge devices (phones, sensors) for low-latency, privacy-preserving applications. See “Wan 2.1 Locally on PC” for on-device AI and model compression discussion (promptus.ai).
J
- Jailbreaking (LLM Jailbreaking)
Techniques to circumvent model guardrails to produce unintended outputs.
- JavaScript/Node Integration
(Supplemental) Using AI models within web applications for interactive experiences.
K
- Knowledge Engineering
Designing systems that leverage structured knowledge (ontologies, rules) alongside AI.
- Knowledge Graph
Representation of entities and relationships enabling richer semantic understanding.
- Kubernetes for AI
Deploying and scaling AI services in containerized environments.
L
- Large Language Model (LLM)
Transformer-based language models trained on vast text corpora, capable of language understanding and generation.
- LLMOps
MLOps practices tailored to LLM deployment, monitoring, and maintenance.
- Latency
Time delay between input and model output; lower latency is crucial for interactive applications.
- Linear Regression / Classical Models
Traditional ML baseline techniques, often contrasted with deep learning.
- Locking / Seed Control
Fixing randomness (seeds) in generative models to ensure reproducibility. Promptus offers “Stability Mode” to preserve seeds across runs (see the sketch at the end of this section).
- Lifecycle Management
Managing the entire model lifecycle: data collection, training, deployment, monitoring, and retirement.
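As referenced under Locking / Seed Control above, a minimal sketch of fixing the common randomness sources so runs are repeatable; the torch calls assume PyTorch is installed:
```python
import random
import numpy as np
import torch  # assumes PyTorch is installed

def set_seed(seed: int = 42):
    """Fix common randomness sources so repeated runs produce identical output."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

set_seed(42)
print(torch.rand(2))  # identical tensor every run with the same seed
```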
M
- MLOps
Practices for deploying, monitoring, and maintaining ML systems in production.
- Multimodal Models
Models handling multiple data types (text, image, audio, video) for richer interactions. Promptus MoMM workflows demonstrate combining text, image, and video models.
- Model Multi-Modality (MoMM)
(Promptus concept) Combining different specialized AI models in one workflow for best results. See “Democratizing AI: The Promptus No-Code Revolution” for examples of combining Stable Diffusion, SDXL, and Veo 3 in one workflow (claudiaperez.co.uk).
- Model Compression / Distillation
Techniques to shrink large models into smaller ones for efficient on-device inference. “Wan 2.1 Locally on PC” discusses lightweight vs. powerful local models (promptus.ai).
- Metadata
Data describing other data (e.g., timestamps, labels) used for search and context.
- Metric Learning
Training embeddings so semantically similar items are close in vector space.
N
- Neural Network
Computational model inspired by the brain, consisting of interconnected layers of nodes (neurons).
- Natural Language Processing (NLP)
Field focused on AI understanding and generating human language.
- Notation / Prompt Syntax
(Supplemental) Formalism for writing prompts or specifying nodes in visual workflows.
- Normalization
Preprocessing step scaling data to a standard range, improving training stability.
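To illustrate the Normalization entry, a minimal numpy sketch of z-score standardization; the feature values are toy data:
```python
import numpy as np

features = np.array([10.0, 12.0, 8.0, 30.0])  # toy raw feature values

# Z-score: zero mean, unit variance, so no single feature dominates training.
standardized = (features - features.mean()) / features.std()
print(standardized.mean(), standardized.std())  # ~0.0 and 1.0
```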
O
- Ontology
Structured representation of concepts and relationships in a domain, often feeding knowledge graphs.
- Open Source AI
Models and code released publicly (e.g., Stable Diffusion, DeepSeek R1), enabling community-driven innovation. See the DeepSeek R1 article on the Promptus blog for the impact of open-source reasoning models.
- Optimization
Process of adjusting model parameters to minimize loss during training.
- Overfitting / Underfitting
Overfitting: model memorizes training data but fails on new data. Underfitting: model too simple to capture patterns.
- Ontology Alignment
Mapping concepts across different ontologies to enable interoperability.
P
- Prompt Engineering
Crafting inputs (text or structured) to steer LLMs or generative models toward desired outputs.
- Pre-training
Initial training of a model on large generic datasets before fine-tuning for specific tasks.
- Privacy-Preserving AI
Techniques (federated learning, differential privacy) ensuring user data remains confidential.
- Parameter Preservation
(Promptus concept) Keeping consistent settings across generations for stable outputs. Cosyflows preserves parameters to avoid “building on quicksand.”
- Performance Tuning
Adjusting hyperparameters or infrastructure to meet latency and throughput requirements.
- Personalization
Tailoring model behavior or content to user preferences (e.g., creative memory in conversational AI).
Q
- Quality Assurance (QA)
Processes for verifying AI outputs meet desired standards, especially for generative content.
- Quantization
Reducing numeric precision of model weights (e.g., 16-bit to 8-bit) to run efficiently on limited hardware.
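To make the Quantization entry concrete, a minimal numpy sketch of affine 8-bit quantization of a weight array. Real frameworks handle per-channel scales, zero points, and calibration; this shows only the core idea:
```python
import numpy as np

weights = np.random.randn(5).astype(np.float32)  # toy float32 weights

# Map the float range onto 256 integer levels (8-bit).
scale = (weights.max() - weights.min()) / 255
zero_point = weights.min()
q = np.round((weights - zero_point) / scale).astype(np.uint8)

# Dequantize to inspect the (lossy) reconstruction.
deq = q.astype(np.float32) * scale + zero_point
print(np.abs(weights - deq).max())  # small rounding error, 4x less memory
```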
R
- Reinforcement Learning (RL)
Training models via rewards/punishments in interactive environments (e.g., RLHF for LLM alignment).
- Retrieval-Augmented Generation (RAG)
Integrating external knowledge retrieval (e.g., from a vector database) into generation for accuracy (see the sketch at the end of this section).
- Responsible AI
Ensuring AI systems are fair, transparent, and aligned with ethical guidelines.
- Reproducibility
Ability to obtain consistent results across runs, aided by seed control and stable workflows.
- Robustness
Model’s resilience to input variations or adversarial attacks.
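As referenced under Retrieval-Augmented Generation above, a toy sketch of the retrieve-then-generate pattern. Keyword overlap stands in for a real vector search, and the assembled prompt would normally be sent to an LLM; everything here is illustrative:
```python
chunks = [
    "Promptus Cosyflows are node-based visual workflows.",
    "Diffusion models denoise random noise into images.",
    "Tokenization splits text into subword units.",
]

def retrieve(query, docs, k=1):
    """Toy retriever: rank chunks by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

query = "How do diffusion models make images?"
context = "\n".join(retrieve(query, chunks))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # in a real pipeline, this prompt goes to an LLM
```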
S
- Semantic Search
Searching by meaning rather than keywords, using embeddings to match concepts.
- Self-Supervised Learning
Learning representations from unlabeled data by creating proxy tasks.
- Synthetic Data
Artificially generated data for training when real data is scarce or sensitive.
- Scalability
Ability of AI systems to handle growing data volumes or user loads efficiently.
- Safety & Guardrails
Measures (filters, monitoring) preventing harmful or biased outputs. Important in any generative workflow to maintain ethical standards.
- Structured vs. Unstructured Data
Structured: organized in schemas (tables). Unstructured: free-form (text, images), needing different processing.
T
- Transformer
Neural architecture excelling in sequence tasks (text, image, video) via self-attention.
- Tokenization
Breaking text into tokens (words/subwords) or representing other modalities for model input.
- Temperature
LLM hyperparameter controlling randomness/creativity in generated text (see the sketch at the end of this section).
- Transfer Learning
Reusing a pre-trained model for a new but related task, often via fine-tuning.
- Transferability
Model’s ability to generalize knowledge from one domain/task to another.
- Test-Time Training
Adjusting model behavior at inference to improve performance (e.g., for longer video coherence).
- Trustworthy AI
Building systems users can trust via transparency, explainability, and stability.
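To ground the Temperature entry above, a minimal numpy sketch of temperature-scaled softmax over toy next-token logits; the logits and vocabulary size are illustrative:
```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Lower temperature sharpens the distribution; higher flattens it."""
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.1]  # toy scores for three candidate tokens
print(softmax_with_temperature(logits, temperature=0.5))  # near-greedy
print(softmax_with_temperature(logits, temperature=1.5))  # more random
```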
Notable AI Companies & Platforms
Below are some prominent companies, platforms, and open-source initiatives shaping the AI landscape. Knowing these helps understand model origins, capabilities, and integration options.
- OpenAI: Pioneers of GPT models (ChatGPT, GPT-4), DALL·E image generation, and research on alignment and safety.
- Google DeepMind / Google AI: Developers of models like Gemini and Imagen, with research in reasoning and multimodal AI.
- Anthropic: Known for Claude models, focused on safety and interpretability.
- Meta AI / Meta Platforms: Released LLaMA family, research on open-source LLMs.
- Stability AI: Creator of Stable Diffusion, driving open-source image generation.
- Hugging Face: Platform hosting countless open-source models, datasets, and tooling (Transformers library).
- Runway ML: User-friendly creative tools, notably in video generation with Gen models.
- Luma Labs: Innovators in video generative models (e.g., Dream Machine).
- DeepSeek: Open-source reasoning models (R1) advancing AI inference and distillation.
- Suno AI, Udio AI: AI music generation platforms driving audio creativity.
- Flux Kontext (Black Forest Labs): Emerging framework for image editing and consistency.
- Promptus: No-code visual workflows combining multiple models (Stable Diffusion, SDXL, Veo 3, etc.) in a unified interface.
- Sand AI: Released MAGI-1, an open-source video generator.
- LTX Studios: Open-source video generation (e.g., LTXV13B).
- HeyGen: Avatar/video synthesis tools.
- Shelf.io: Knowledge platform with AI glossary and enterprise-focused AI optimization resources.
- Docker / Kubernetes: Not companies per se but essential platforms for deploying and scaling AI services.
- AWS, Azure, GCP: Major cloud providers offering managed AI services, GPUs, and infrastructure.
- Replicate: Platform for running open-source models (images, video) in the cloud.
- ComfyUI Community Plugins: Many developers (e.g., Teacher Húlúwá’s Layer Style plugin) extend ComfyUI for visual workflows.
- LM Studio / LM Mayhem: Tools for running LLMs locally on consumer hardware.
Note: This list is illustrative, not exhaustive. The AI ecosystem evolves rapidly, with new startups and research labs emerging frequently.
Conclusion
Familiarity with these 100 terms and notable companies/platforms enables clearer communication, better design of AI workflows, and more informed decision-making. As AI continues evolving, this glossary—grounded in practical examples like Promptus Cosyflows, local model deployment, and conversational avatars—provides a solid foundation for creators, developers, and leaders alike.
Feel free to revisit this glossary as you explore AI projects. Understanding these concepts will help you harness AI’s full potential in creative and technical endeavors. 🎉🚀
References
- Democratizing AI: The Promptus No-Code Revolution (claudiaperez.co.uk)
- Wan 2.1 Locally on PC (promptus.ai)
- Create Lifelike AI Talking Avatars with Promptus (claudiaperez.co.uk)
- Shelf’s AI Glossary: 80 Essential Artificial Intelligence Terms Explained (shelf.io)
