GLOSSARY · 90 TERMS · AI · DEVOPS · COMPLIANCE

Shared vocabulary for consistent execution.

Authoritative definitions across AI governance, DevOps velocity, compliance frameworks, and modernization terminology. Consistent language is the first prerequisite of consistent delivery.

TERMS: 90 defined entries
DOMAINS: 4 (AI · DevOps · Compliance · Infra)
ALIASES: Yes, acronyms mapped
FILTERABLE: Yes, search below
THE TERMS

90 definitions. Searchable.

Use the filter to find a specific term. Each entry links to a full definition page with usage context and related concepts.

  • ai governance: Structures, policies, and controls ensuring responsible, compliant, and value-driven AI deployment.
  • ai readiness: An organization’s maturity across data, culture, tooling, and process to scale AI initiatives.
  • cmmc level 2: Cybersecurity Maturity Model Certification level focused on advanced practices for protecting controlled unclassified information.
  • fedramp: Federal Risk and Authorization Management Program establishing standardized cloud security assessment.
  • zero trust architecture: Security paradigm minimizing implicit trust, continuously verifying users, devices, and context.
  • retrieval augmented generation: Pattern combining vector-based retrieval with generative models to ground responses in source-of-truth context.
  • prompt injection: Adversarial manipulation of model instructions to override intended behavior or leak sensitive data.
  • change failure rate: Percentage of production changes resulting in degraded service requiring remediation.
  • deployment frequency: Rate at which an organization successfully releases to production; a key DORA metric.
  • mttr: Average time to restore service after an incident; a core resiliency indicator.
  • service level objective: Target reliability performance level for a service, measured via SLIs.
  • policy as code: Encoding governance and compliance rules in machine-enforceable formats executed in CI/CD and at runtime.
  • data minimization: Principle restricting data collection and retention to only what is strictly necessary for defined purposes.
  • model drift: Degradation of model performance over time due to changing data distributions or concept evolution.
  • threat modeling: Structured process to identify, categorize, and prioritize potential system threats for mitigation.
  • ai bill of materials: Inventory detailing datasets, models, parameters, and dependencies used in an AI system.
  • explainability: Degree to which model decision pathways can be interpreted and validated by humans.
  • governance matrix: Mapped framework aligning roles, controls, risks, and audit evidence for AI lifecycle stages.
  • copilot adoption: Structured enablement and governance activities driving responsible developer AI assistant usage.
  • mvp validation: Evidence-driven process confirming market desirability and feasibility before scaling build investment.
  • finops: Practice of aligning cloud spend with business value through cross-functional accountability and optimization.
  • feature flag: Mechanism enabling runtime toggling of functionality for safe deployment and experimentation.
  • vector embedding: Dense numerical representation of semantic meaning used for similarity search and retrieval.
  • model registry: Central system storing model versions, metadata, lineage, and promotion status.
  • data lineage: Traceable lifecycle of data origin, transformations, and downstream usage.
  • incident retrospection: Structured analysis of an incident to extract learnings and remediation actions.
  • continuous validation: Ongoing automated verification of model performance, drift, and operational constraints.
  • policy engine: Runtime or CI component evaluating declarative governance or compliance rules against events.
  • secret scanning: Automated detection of hardcoded credentials or sensitive tokens in code and configs.
  • concept drift: Shift in the underlying relationship between input features and target outputs over time.
  • benchmark dataset: Curated labeled data used to consistently evaluate model performance across iterations.
  • prompt template: Reusable structured input pattern for a language model to ensure consistent task performance.
  • synthetic data: Artificially generated data approximating statistical properties of real datasets for training or testing.
  • chain of thought: Intermediate reasoning steps a model generates to reach an answer; may be hidden or exposed.
  • hallucination rate: Observed frequency of unsupported or fabricated model outputs over evaluated scenarios.
  • tool orchestration: Coordinated invocation of external functions or APIs by an AI agent to accomplish multi-step tasks.
  • reasoning trace: Captured sequence of intermediate model planning or deliberation steps for debugging and evaluation.
  • agent intervention rate: Portion of agent-handled tasks requiring human takeover or override.
  • data redaction: Removal or masking of sensitive entities before model exposure.
  • prompt hygiene: Practices ensuring clarity, safety, and consistency in prompt construction and maintenance.
  • drift detection: Monitoring pattern identifying statistically significant change in model behavior or data distributions.
  • autonomy escalation: Controlled handoff from an automated agent flow to a supervised human resolution path.
  • golden dataset: High-quality, curated benchmark dataset used for regression evaluation.
  • risk register: Catalog of identified risks with scoring, mitigations, and ownership for ongoing governance.
  • evidence artifact: Documented proof (logs, screenshots, exports) demonstrating control operation or compliance status.
  • progressive delivery: Gradual release strategy (canaries, feature flags) to limit blast radius and observe impact.
  • scenario evaluation: Structured test harness executing representative tasks to score model or agent performance.
  • capability boundary: Explicitly defined operational scope limiting an agent’s accessible actions or tools.
  • token budget: Allocated limit on tokens or cost for a model interaction or reasoning chain.
  • guardrail: Safety or policy mechanism preventing or mitigating undesired model behaviors.
  • context window: Maximum token length a model can process in a single interaction.
  • latent representation: Compressed numerical encoding learned by a model capturing semantic structure.
  • semantic drift: Change in meaning or usage of business or domain terminology over time, impacting model performance.
  • policy drift: Gradual divergence between documented governance policies and actual operational behaviors.
  • evaluation harness: Automated framework executing tests to score model or agent performance across metrics.
  • fail fast rollback: Mechanism enabling rapid reversal of a deployment upon early anomaly signals.
  • feature flag debt: Accumulated complexity and risk from stale or orphaned feature toggles.
  • cost per successful task: Economic efficiency metric dividing total AI/agent compute and infrastructure spend by successful outcomes.
  • hallucination exception: Captured event where a model output is flagged as unsupported or safety-invalid.
  • model card: Documentation artifact summarizing intended use, limitations, and performance characteristics of a model.
  • red team scenario: Adversarial evaluation case probing system weaknesses or unsafe behavior potential.
  • prompt template registry: Version-controlled collection of approved prompt patterns.
  • structured logging: Consistent machine-parseable log format enabling reliable analysis across systems.
  • trace sampling: Selective retention of a subset of detailed execution traces for cost-effective observability.
  • secret rotation: Periodic replacement of credentials or keys to reduce the compromise impact window.
  • evidence reuse: Leveraging a single control implementation artifact across multiple compliance frameworks.
  • baseline drift: Shift in a previously recorded performance baseline requiring recalibration of targets.
  • scorecard: Concise dashboard summarizing KPIs or maturity indicators against targets.
  • semantic enrichment: Augmentation of text with additional contextual descriptors supporting retrieval or reasoning.
  • license utilization: Active use percentage of provisioned software or platform licenses.
  • acceptance rate: Ratio of AI assistant suggestions accepted versus total suggestions shown.
  • drift index: Composite indicator aggregating multiple drift signals into a single severity score.
  • hallucination taxonomy: Categorized schema of hallucination types for consistent classification and remediation.
  • change lead time: Elapsed time from code commit to production deployment.
  • intervention log: Record of human overrides or adjustments during agent operation, used for refinement.
  • memory ledger: Durable store of curated prior interactions or learnings reused by agents.
  • tool budget: Limit on the number or cost of external tool calls per agent task.
  • confidence threshold: Minimum confidence score required before automated action or response emission.
  • risk heat map: Visual matrix correlating impact and likelihood for prioritized mitigation focus.
  • metric normalization: Adjusting metrics to common scales to facilitate comparison across services or teams.
  • elastic scaling: Automatic adjustment of compute resources based on load conditions.
  • golden path: Opinionated, pre-approved implementation approach minimizing decision friction.
  • code provenance: Traceability of source code origin, contributions, and transformation history.
  • supply chain security: Practices protecting software dependencies, build processes, and artifact integrity.
  • dependency sbom: Software Bill of Materials enumerating components and dependencies in an application.
  • governance cadence: Recurring scheduled forum reviewing controls, metrics, and risk posture.
  • scenario pass rate: Percentage of evaluation scenarios meeting defined success criteria.
  • tenant isolation: Logical or physical segmentation ensuring one tenant cannot access another tenant’s data.
  • data retention matrix: Table defining retention duration and disposal method per data class.
  • variance analysis: Assessment of deviation between expected and actual performance values.
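Several of the delivery terms above (change failure rate, change lead time, mttr) are typically computed directly from deployment and incident records. A minimal sketch, assuming hypothetical in-memory records with (commit time, deploy time, caused-failure) tuples and a separate list of incident restore durations:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit_time, deploy_time, caused_failure)
deployments = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 17), False),
    (datetime(2024, 1, 2, 9), datetime(2024, 1, 3, 9), True),
    (datetime(2024, 1, 4, 9), datetime(2024, 1, 4, 12), False),
    (datetime(2024, 1, 5, 9), datetime(2024, 1, 5, 10), False),
]
# Hypothetical incident durations (time from detection to restored service)
incident_durations = [timedelta(hours=2), timedelta(minutes=30)]

# change failure rate: share of production changes causing degraded service
change_failure_rate = sum(1 for _, _, failed in deployments if failed) / len(deployments)

# change lead time: elapsed time from code commit to production deployment
lead_times = [deploy - commit for commit, deploy, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# mttr: average time to restore service after an incident
mttr = sum(incident_durations, timedelta()) / len(incident_durations)

print(f"change failure rate: {change_failure_rate:.0%}")
print(f"avg change lead time: {avg_lead_time}")
print(f"mttr: {mttr}")
```

In practice these inputs would come from a CI/CD system and an incident tracker rather than literals; the arithmetic is the same.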
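The vector embedding and retrieval augmented generation entries both rest on similarity search over dense vectors. A toy sketch of the retrieval step, with made-up 3-dimensional embeddings standing in for the hundreds of dimensions a real embedding model produces:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two dense vector embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings; a real system would call an embedding model here
query = [0.9, 0.1, 0.0]
docs = {"deploy guide": [0.8, 0.2, 0.1], "lunch menu": [0.0, 0.1, 0.9]}

# Retrieval step of RAG: rank documents by similarity to the query embedding,
# then feed the top hits to the generative model as grounding context
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # most relevant document
```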
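The feature flag and progressive delivery entries describe gating functionality at runtime and rolling it out gradually. One common implementation pattern, shown here as an illustrative sketch (the flag name and rollout logic are invented, not a specific product's API), hashes each user into a stable bucket so the same user always sees the same variant mid-rollout:

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministic percentage rollout: hash flag+user into a 0-99 bucket."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# The same user gets a stable answer at any given rollout percentage,
# which is what makes canary-style progressive delivery observable
print(flag_enabled("new-checkout", "user-42", 100))  # True at full rollout
print(flag_enabled("new-checkout", "user-42", 0))    # False when disabled
```

Flags left in place after full rollout are exactly the "feature flag debt" defined above.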
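Several agent-governance terms above fit together operationally: a confidence threshold decides when autonomy escalation fires, and the escalations feed the agent intervention rate. A minimal sketch with an assumed threshold value and hypothetical action names:

```python
CONFIDENCE_THRESHOLD = 0.85  # assumed minimum score before automated action

def route_action(action: str, confidence: float) -> str:
    """Emit the action automatically when confidence clears the threshold,
    otherwise hand off to a supervised human resolution path."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto: {action}"
    return f"escalated: {action}"  # autonomy escalation

# Hypothetical confidence scores from four agent-handled tasks
decisions = [route_action("close_ticket", c) for c in (0.92, 0.61, 0.88, 0.40)]

# agent intervention rate: portion of tasks requiring human takeover
intervention_rate = sum(d.startswith("escalated") for d in decisions) / len(decisions)
print(f"agent intervention rate: {intervention_rate:.0%}")
```

Each escalated decision would also be appended to the intervention log for later refinement of the threshold.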
NEXT STEP

Shared vocabulary applied to your program.

Every engagement begins with a written glossary alignment to eliminate ambiguity before architecture decisions.