12 min read
Jun 16, 2025

100 AI Terms Everyone Should Know


A

  1. Artificial General Intelligence (AGI)
    AI matching or exceeding human performance across a broad range of tasks, unlike today’s specialized systems.
  2. AI Architecture
    Overall design of an AI system: models, data pipelines, APIs, training/inference infrastructure.
  3. AI Explainability
    Ability to interpret and understand AI decisions—key for trust and transparency.
  4. Autonomous Agents
    Software components that perform sequences of tasks in an environment with minimal human intervention, making decisions to achieve goals (see the sketch after this list).
  5. AI Stability
    Consistency of outputs across repeated runs or evolving tasks, crucial for reliable creative workflows (e.g., consistent character appearance across multiple generations). Promptus Cosyflows preserve parameters and seeds for stable outputs.
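
To make the Autonomous Agents entry concrete, here is a minimal Python sketch of the observe-decide-act loop. The "environment" is a toy number-guessing game and the decision policy is binary search; these are illustrative stand-ins, since a real agent would call an LLM or planner at the decision step.

```python
# A toy autonomous-agent loop: observe -> decide -> act, repeated until the
# goal is met. The environment (a guessing game) and policy (binary search)
# are placeholders for an LLM- or planner-driven agent.

def run_agent(target: int, low: int = 0, high: int = 100, max_steps: int = 20) -> str:
    for step in range(max_steps):
        guess = (low + high) // 2        # decide: pick the midpoint
        if guess == target:              # act and check whether the goal is met
            return f"found {target} in {step + 1} steps"
        if guess < target:               # observe feedback and update state
            low = guess + 1
        else:
            high = guess - 1
    return "step budget exhausted"

print(run_agent(target=42))  # converges without human intervention
```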

B

  1. Bias (LLM Bias)
    Systematic preferences or unfair tendencies in model outputs, reflecting biases in the training data.
  2. Batch Size
    Number of training examples processed together before updating model weights; an important training hyperparameter (see the sketch after this list).
  3. Benchmark
    Standardized test suite to measure and compare AI model performance (e.g., AIME 2024 for reasoning models).
  4. Black-Box Model
    System whose internal workings are opaque to users, making explainability and debugging harder.
  5. Blockchain for AI
    Using blockchain to track data provenance, model ownership, or compensation in generative workflows.
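
A minimal PyTorch sketch of where batch size appears in practice; the toy tensors are illustrative:

```python
# Batch size in practice: a PyTorch DataLoader groups examples so the model's
# weights are updated once per batch rather than once per example.
import torch
from torch.utils.data import DataLoader, TensorDataset

features = torch.randn(1000, 16)         # 1,000 toy examples with 16 features
labels = torch.randint(0, 2, (1000,))    # toy binary labels
dataset = TensorDataset(features, labels)

loader = DataLoader(dataset, batch_size=32, shuffle=True)  # the hyperparameter

for batch_features, batch_labels in loader:
    print(batch_features.shape)          # torch.Size([32, 16])
    break                                # one optimizer step would happen here
```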

C

  1. Chunking
    Dividing data (e.g., documents) into manageable pieces for processing, especially in retrieval-augmented generation; RAG pipelines, for example, use chunking to feed in relevant context (see the sketch after this list).
  2. Content Analytics
    Algorithms analyzing text, images, or video to extract insights, often used in AI pipelines. Underpins recommendation and personalization features.
  3. Content Quality Filters
    Mechanisms to ensure generated content meets standards (e.g., filtering out inappropriate or low-quality outputs).
  4. Context Length
    Maximum tokens an LLM can process at once; longer context enables understanding extended inputs.
  5. Conversational AI
    Systems (chatbots, voice assistants) designed for human-like dialogue; personality and empathy matter for engagement.
  6. Cosyflows / Visual Workflows
    (Promptus concept) Node-based interfaces where creators link operations visually rather than writing code or deep prompt engineering.
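
A simple character-window version of chunking, assuming pure Python; production pipelines often chunk by tokens or sentences instead:

```python
# Fixed-size chunking with overlap, a common way to prepare documents for
# RAG: overlapping windows keep sentences that straddle a boundary retrievable.

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    step = chunk_size - overlap          # how far each window advances
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

document = "All work and no play makes a dull pipeline. " * 50  # stand-in text
pieces = chunk_text(document)
print(len(pieces), "chunks; first chunk:", repr(pieces[0][:40]))
```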

D

  1. Data Labeling
    Tagging data (images, text) to train supervised models.
  2. Data Pipeline
    Sequence of stages for collecting, processing, and storing data for AI training or inference.
  3. Data Sanitization
    Removing or masking sensitive information (PII) in datasets for privacy compliance (see the sketch after this list).
  4. Data Validation
    Ensuring data accuracy and consistency before using it in AI systems.
  5. Dataset
    Structured collection of data used to train or evaluate models.
  6. Deep Learning
    Subset of machine learning using neural networks with many layers to learn hierarchical representations.
  7. Diffusion Models
    Generative models that iteratively denoise random noise to produce images or other data; widely used in image/video generation (e.g., Stable Diffusion).
  8. Domain Adaptation
    Adjusting a pre-trained model to perform well in a new but related domain.
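
A minimal sketch of data sanitization using only the standard library; it masks one kind of PII (email addresses), whereas real pipelines cover many more PII types and often use dedicated libraries:

```python
# Data sanitization sketch: mask email addresses before a dataset is used
# for training. A single regex is illustrative, not comprehensive.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitize(record: str) -> str:
    return EMAIL_RE.sub("[EMAIL REDACTED]", record)

print(sanitize("Contact jane.doe@example.com for details."))
# -> Contact [EMAIL REDACTED] for details.
```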

E

  1. Embeddings
    Dense vector representations encoding semantic relationships of text, images, or other modalities (see the sketch after this list).
  2. Ethics in AI
    Principles ensuring AI is developed and used responsibly—addressing fairness, transparency, accountability, and privacy.
  3. Evaluation Metrics
    Measures (accuracy, BLEU, FID, etc.) to quantify model performance for specific tasks.
  4. Explainability / Interpretability
    Techniques (e.g., LIME, SHAP) that help users understand why a model made a certain decision.
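
A sketch of how embeddings are compared, assuming NumPy; the vectors here are toy values, whereas in practice they come from an embedding model:

```python
# Embeddings sketch: cosine similarity between dense vectors measures
# semantic closeness.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cat = np.array([0.9, 0.8, 0.1])
kitten = np.array([0.85, 0.75, 0.2])
car = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(cat, kitten))  # high: related concepts sit close together
print(cosine_similarity(cat, car))     # lower: unrelated concepts sit far apart
```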

F

  1. Feature Engineering
    Selecting and transforming data attributes to improve model performance.
  2. Fine-Tuning (LLM Fine-Tuning)
    Further training of a pre-trained model on task-specific data to improve performance on that task.
  3. Foundation Model
    Pre-trained model (often large) serving as a base for downstream tasks, e.g., Stable Diffusion for images.
  4. Frameworks & Libraries
    Software (PyTorch, TensorFlow, etc.) for building and training AI models.
  5. Few-Shot / Zero-Shot Learning
    Ability of models to perform tasks with few or no labeled examples, often via prompt engineering or meta-learning (see the sketch after this list).
  6. Federated Learning
    Training models across decentralized devices while keeping data local for privacy.
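
A sketch of few-shot prompting; the prompt text itself is the technique, and the actual model call is provider-specific, so it is omitted here:

```python
# Few-shot prompting sketch: worked examples inside the prompt let a general
# LLM perform a task with no fine-tuning.
few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: "Absolutely loved it, would buy again."
Sentiment: positive

Review: "Broke after two days. Waste of money."
Sentiment: negative

Review: "The battery lasts all week and setup took minutes."
Sentiment:"""

# Send few_shot_prompt to any chat/completion API; the model is expected to
# continue with "positive". Zero-shot drops the examples entirely.
print(few_shot_prompt)
```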

G

  1. Generative Pre-trained Transformers (GPT)
    Family of transformer-based language models (e.g., ChatGPT) pre-trained on large texts.
  2. Grounding LLMs
    Providing relevant context (external knowledge) at inference time to improve factual accuracy.
  3. Gradient Descent
    Optimization algorithm adjusting model weights by following gradients to minimize loss (see the sketch after this list).
  4. GANs (Generative Adversarial Networks)
    Two-network framework (generator vs. discriminator) for image/audio generation.
  5. Guardrails
    Constraints ensuring AI outputs remain ethical, accurate, and non-harmful. E.g., content filters or rule-based checks in Promptus workflows.
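
A minimal gradient descent example on a one-dimensional toy loss; neural-network training applies the same idea to millions of weights simultaneously:

```python
# Gradient descent sketch: minimize the loss f(w) = (w - 3)^2 by stepping
# against its gradient f'(w) = 2(w - 3).
w = 0.0                 # initial weight
learning_rate = 0.1     # itself a hyperparameter

for _ in range(50):
    gradient = 2 * (w - 3)          # slope of the loss at the current w
    w -= learning_rate * gradient   # step downhill

print(round(w, 4))  # ~3.0, the minimum of the loss
```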

H

  1. Hallucinations
    When LLMs produce plausible-sounding but false or fabricated information.
  2. Human-in-the-Loop (HITL)
    Involving humans in model training or inference pipelines for oversight, correction, and quality control. See conversational/avatar workflows (claudiaperez.co.uk).
  3. Hybrid Search
    Combining semantic (vector-based) and keyword (sparse) search for better retrieval (see the sketch after this list).
  4. Hyperparameters
    Settings (learning rate, batch size, etc.) chosen before training that influence model behavior and performance.
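
A sketch of hybrid search scoring; the scores and weight are toy values, since production systems typically combine BM25 with an embedding index:

```python
# Hybrid search sketch: blend a keyword (sparse) score with a semantic
# (vector) score to rank documents.
def hybrid_score(keyword: float, vector: float, alpha: float = 0.5) -> float:
    return alpha * keyword + (1 - alpha) * vector

docs = {
    "doc_a": (0.9, 0.10),   # exact term match but weak meaning match
    "doc_b": (0.2, 0.95),   # paraphrase: few keywords, strong meaning match
}
ranked = sorted(docs, key=lambda d: hybrid_score(*docs[d]), reverse=True)
print(ranked)  # ['doc_b', 'doc_a']: blending surfaces the paraphrase
```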

I

  1. Inference
    Running a trained model on new data to obtain predictions or generated content.
  2. Image-to-Image / Text-to-Image
    Generation tasks converting input images or text prompts into new images (e.g., Stable Diffusion).
  3. Indexing
    Creating data structures (e.g., vector indexes) to speed up retrieval in large datasets.
  4. Interactive AI Interfaces
    Tools (visual workflows, chatbots) allowing users to steer AI processes in real-time. Promptus Cosyflows and conversational avatars illustrate this (claudiaperez.co.uk).
  5. IoT & Edge AI
    Running AI models on edge devices (phones, sensors) for low-latency, privacy-preserving applications. See “Wan 2.1 Locally on PC” for on-device AI and model compression discussion (promptus.ai).

J

  1. Jailbreaking (LLM Jailbreaking)
    Techniques to circumvent model guardrails to produce unintended outputs.
  2. JavaScript/Node Integration
    (Supplemental) Using AI models within web applications for interactive experiences.

K

  1. Knowledge Engineering
    Designing systems that leverage structured knowledge (ontologies, rules) alongside AI.
  2. Knowledge Graph
    Representation of entities and relationships enabling richer semantic understanding.
  3. Kubernetes for AI
    Deploying and scaling AI services in containerized environments.

L

  1. Large Language Model (LLM)
    Transformer-based language models trained on vast text corpora, capable of language understanding and generation.
  2. LLMOps
    MLOps practices tailored to LLM deployment, monitoring, and maintenance.
  3. Latency
    Time delay between input and model output; lower latency is crucial for interactive applications.
  4. Linear Regression / Classical Models
    Traditional ML baseline techniques, often contrasted with deep learning.
  5. Locking / Seed Control
    Fixing randomness (seeds) in generative models to ensure reproducibility; Promptus offers “Stability Mode” to preserve seeds across runs (see the sketch after this list).
  6. Lifecycle Management
    Managing the entire model lifecycle: data collection, training, deployment, monitoring, and retirement.
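
A standard-library sketch of seed control; a seeded random number generator stands in for a diffusion sampler, where locking the seed keeps a character or scene consistent across runs:

```python
# Seed control sketch: fixing the random seed makes a stochastic "generation"
# reproducible.
import random

def generate(seed: int) -> list[int]:
    rng = random.Random(seed)               # seeded, isolated RNG
    return [rng.randint(0, 9) for _ in range(5)]

print(generate(42))   # some sequence of five digits
print(generate(42))   # identical to the line above: same seed, same output
print(generate(7))    # a different seed gives a different output
```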

M

  1. MLOps
    Practices for deploying, monitoring, and maintaining ML systems in production.
  2. Multimodal Models
    Models handling multiple data types (text, image, audio, video) for richer interactions. Promptus MoMM workflows demonstrate combining text, image, and video models.
  3. Model Multi-Modality (MoMM)
    (Promptus concept) Combining different specialized AI models in one workflow for best results. See “Democratizing AI: The Promptus No-Code Revolution” for examples of combining Stable Diffusion, SDXL, and Veo 3 in one workflow (claudiaperez.co.uk).
  4. Model Compression / Distillation
    Techniques to shrink large models into smaller ones for efficient on-device inference. “Wan 2.1 Locally on PC” discusses lightweight vs. powerful local models (promptus.ai).
  5. Metadata
    Data describing other data (e.g., timestamps, labels) used for search and context.
  6. Metric Learning
    Training embeddings so semantically similar items are close in vector space.

N

  1. Neural Network
    Computational model inspired by the brain, consisting of interconnected layers of nodes (neurons).
  2. Natural Language Processing (NLP)
    Field focused on AI understanding and generating human language.
  3. Notation / Prompt Syntax
    (Supplemental) Formalism for writing prompts or specifying nodes in visual workflows.
  4. Normalization
    Preprocessing step scaling data to a standard range, improving training stability (see the sketch after this list).
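
A minimal min-max normalization example, assuming NumPy and toy data:

```python
# Normalization sketch: min-max scaling maps each feature to [0, 1] so that a
# large-valued feature does not dominate training.
import numpy as np

data = np.array([[10.0, 0.001],
                 [50.0, 0.005],
                 [90.0, 0.009]])  # two features on wildly different scales

mins, maxs = data.min(axis=0), data.max(axis=0)
normalized = (data - mins) / (maxs - mins)

print(normalized)  # both columns now span 0.0 to 1.0
```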

O

  1. Ontology
    Structured representation of concepts and relationships in a domain, often feeding knowledge graphs.
  2. Open Source AI
    Models and code released publicly (e.g., Stable Diffusion, DeepSeek R1), enabling community-driven innovation. See the DeepSeek R1 article on the Promptus blog for the impact of open-source reasoning models.
  3. Optimization
    Process of adjusting model parameters to minimize loss during training.
  4. Overfitting / Underfitting
    Overfitting: the model memorizes training data but fails on new data. Underfitting: the model is too simple to capture patterns.
  5. Ontology Alignment
    Mapping concepts across different ontologies to enable interoperability.

P

  1. Prompt Engineering
    Crafting inputs (text or structured) to steer LLMs or generative models toward desired outputs.
  2. Pre-training
    Initial training of a model on large generic datasets before fine-tuning for specific tasks.
  3. Privacy-Preserving AI
    Techniques (federated learning, differential privacy) ensuring user data remains confidential.
  4. Parameter Preservation
    (Promptus concept) Keeping consistent settings across generations for stable outputs. Cosyflows preserve parameters to avoid “building on quicksand.”
  5. Performance Tuning
    Adjusting hyperparameters or infrastructure to meet latency and throughput requirements.
  6. Personalization
    Tailoring model behavior or content to user preferences (e.g., creative memory in conversational AI).

Q

  1. Quality Assurance (QA)
    Processes for verifying AI outputs meet desired standards, especially for generative content.
  2. Quantization
    Reducing numeric precision of model weights (e.g., 16-bit to 8-bit) to run efficiently on limited hardware (see the sketch after this list).
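
A toy round-trip quantization example, assuming NumPy; the weights are made-up values, and real schemes (per-channel scales, zero points) are more involved:

```python
# Quantization sketch: map float32 weights to int8 via a scale factor, then
# reconstruct. A small round-trip error is the price of a 4x smaller memory
# footprint and faster integer arithmetic.
import numpy as np

weights = np.array([0.42, -1.37, 0.05, 2.1], dtype=np.float32)

scale = np.abs(weights).max() / 127                 # largest weight maps to 127
quantized = np.round(weights / scale).astype(np.int8)
restored = quantized.astype(np.float32) * scale

print(quantized)                          # e.g. [ 25 -83   3 127]
print(np.abs(weights - restored).max())   # small reconstruction error
```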

R

  1. Reinforcement Learning (RL)
    Training models via rewards/punishments in interactive environments (e.g., RLHF for LLM alignment).
  2. Retrieval-Augmented Generation (RAG)
    Integrating external knowledge retrieval (e.g., from a vector database) into generation for accuracy (see the sketch after this list).
  3. Responsible AI
    Ensuring AI systems are fair, transparent, and aligned with ethical guidelines.
  4. Reproducibility
    Ability to obtain consistent results across runs, aided by seed control and stable workflows.
  5. Robustness
    Model’s resilience to input variations or adversarial attacks.
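
A bare-bones RAG sketch in pure Python; retrieval here is naive word overlap over hard-coded chunks, whereas real systems use embeddings and a vector database:

```python
# RAG sketch: retrieve the most relevant chunk for a query, then prepend it to
# the prompt so the model answers from supplied facts rather than from memory.
chunks = [
    "The Eiffel Tower is 330 metres tall.",
    "Stable Diffusion is an open-source image model.",
    "Promptus Cosyflows are node-based visual workflows.",
]

def retrieve(query: str) -> str:
    words = set(query.lower().split())
    return max(chunks, key=lambda c: len(words & set(c.lower().split())))

query = "How tall is the Eiffel Tower?"
prompt = f"Answer using only this context:\n{retrieve(query)}\n\nQuestion: {query}"
print(prompt)  # send to any LLM; the retrieved fact grounds the answer
```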

S

  1. Semantic Search
    Searching by meaning rather than keywords, using embeddings to match concepts.
  2. Self-Supervised Learning
    Learning representations from unlabeled data by creating proxy tasks.
  3. Synthetic Data
    Artificially generated data for training when real data is scarce or sensitive.
  4. Scalability
    Ability of AI systems to handle growing data volumes or user loads efficiently.
  5. Safety & Guardrails
    Measures (filters, monitoring) preventing harmful or biased outputs. Important in any generative workflow to maintain ethical standards.
  6. Structured vs. Unstructured Data
    Structured: organized in schemas (tables). Unstructured: free-form (text, images), needing different processing.

T

  1. Transformer
    Neural architecture excelling in sequence tasks (text, image, video) via self-attention.
  2. Tokenization
    Breaking text into tokens (words/subwords) or representing other modalities for model input.
  3. Temperature
    LLM hyperparameter controlling randomness/creativity in generated text (see the sketch after this list).
  4. Transfer Learning
    Reusing a pre-trained model for a new but related task, often via fine-tuning.
  5. Transferability
    Model’s ability to generalize knowledge from one domain/task to another.
  6. Test-Time Training
    Adjusting model behavior at inference to improve performance (e.g., for longer video coherence).
  7. Trustworthy AI
    Building systems users can trust via transparency, explainability, and stability.
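
A sketch of how temperature reshapes a model's next-token distribution, assuming NumPy and toy logits:

```python
# Temperature sketch: next-token logits are divided by the temperature before
# softmax. Low values sharpen the distribution (predictable output); high
# values flatten it (more random, "creative" output).
import numpy as np

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    scaled = logits / temperature
    exps = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    return exps / exps.sum()

logits = np.array([2.0, 1.0, 0.5])                # toy next-token scores
print(softmax_with_temperature(logits, 0.2))      # near one-hot: top token dominates
print(softmax_with_temperature(logits, 1.0))      # the model's raw distribution
print(softmax_with_temperature(logits, 2.0))      # flatter: more diverse sampling
```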

Notable AI Companies & Platforms

Below are some prominent companies, platforms, and open-source initiatives shaping the AI landscape. Knowing them helps you understand model origins, capabilities, and integration options.

  • OpenAI: Pioneers of GPT models (ChatGPT, GPT-4), DALL·E image generation, and research on alignment and safety.
  • Google DeepMind / Google AI: Developers of models like Gemini and Imagen, with research in reasoning and multimodal AI.
  • Anthropic: Known for Claude models, focused on safety and interpretability.
  • Meta AI / Meta Platforms: Released LLaMA family, research on open-source LLMs.
  • Stability AI: Creator of Stable Diffusion, driving open-source image generation.
  • Hugging Face: Platform hosting countless open-source models, datasets, and tooling (Transformers library).
  • Runway ML: User-friendly creative tools, notably in video generation with Gen models.
  • Luma Labs: Innovators in video generative models (e.g., Dream Machine).
  • DeepSeek: Open-source reasoning models (R1) advancing AI inference and distillation.
  • Suno AI, Udio AI: AI music generation platforms driving audio creativity.
  • Flux Kontext (Black Forest Labs): Emerging framework for image editing and consistency.
  • Promptus: No-code visual workflows combining multiple models (Stable Diffusion, SDXL, Veo 3, etc.) in a unified interface.
  • Sand AI: Released MAGI-1, an open-source video generator.
  • LTX Studios: Open-source video generation (e.g., LTXV13B).
  • HeyGen: Avatar/video synthesis tools.
  • Shelf.io: Knowledge platform with AI glossary and enterprise-focused AI optimization resources.
  • Docker / Kubernetes: Not companies per se but essential platforms for deploying and scaling AI services.
  • AWS, Azure, GCP: Major cloud providers offering managed AI services, GPUs, and infrastructure.
  • Replicate: Platform for running open-source models (images, video) in the cloud.
  • ComfyUI Community Plugins: Many developers (e.g., Teacher Húlúwá’s Layer Style plugin) extend ComfyUI for visual workflows.
  • LM Studio / LM Mayhem: Tools for running LLMs locally on consumer hardware.

Note: This list is illustrative, not exhaustive. The AI ecosystem evolves rapidly, with new startups and research labs emerging frequently.

Conclusion

Familiarity with these 100 terms and notable companies/platforms enables clearer communication, better design of AI workflows, and more informed decision-making. As AI continues evolving, this glossary—grounded in practical examples like Promptus Cosyflows, local model deployment, and conversational avatars—provides a solid foundation for creators, developers, and leaders alike.

Feel free to revisit this glossary as you explore AI projects. Understanding these concepts will help you harness AI’s full potential in creative and technical endeavors. 🎉🚀

References

  1. Democratizing AI: The Promptus No-Code Revolution (claudiaperez.co.uk)
  2. Wan 2.1 Locally on PC (promptus.ai)
  3. Create Lifelike AI Talking Avatars with Promptus (claudiaperez.co.uk)
  4. Shelf’s AI Glossary: 80 Essential Artificial Intelligence Terms Explained (shelf.io)