GLOSSARY · 90 TERMS · AI · DEVOPS · COMPLIANCE
Shared vocabulary for consistent execution.
Authoritative definitions across AI governance, DevOps velocity, compliance frameworks, and modernization terminology. Consistent language is the first prerequisite of consistent delivery.
TERMS: 90 defined entries · DOMAINS: 4 (AI · DevOps · Compliance · Infra) · ALIASES: acronyms mapped
THE TERMS
90 definitions. Each entry links to a full definition page with usage context and related concepts.
- ai governance – Structures, policies, and controls ensuring responsible, compliant, and value-driven AI deployment.
- ai readiness – An organization’s maturity across data, culture, tooling, and process to scale AI initiatives.
- cmmc level 2 – Cybersecurity Maturity Model Certification level focused on advanced practices for protecting controlled unclassified information.
- fedramp – Federal Risk and Authorization Management Program establishing standardized cloud security assessment.
- zero trust architecture – Security paradigm minimizing implicit trust, continuously verifying users, devices, and context.
- retrieval augmented generation – Pattern combining vector-based retrieval with generative models to ground responses in source-of-truth context.
- prompt injection – Adversarial manipulation of model instructions to override intended behavior or leak sensitive data.
- change failure rate – Percentage of production changes resulting in degraded service requiring remediation.
- deployment frequency – Rate at which an organization successfully releases to production—key DORA metric.
- mttr – Mean time to restore: average time to recover service after an incident—core resiliency indicator.
- service level objective – Target reliability performance level for a service, measured via SLIs.
- policy as code – Encoding governance & compliance rules in machine-enforceable formats executed in CI/CD and runtime.
- data minimization – Principle restricting data collection and retention to only what is strictly necessary for defined purposes.
- model drift – Degradation of model performance over time due to changing data distributions or concept evolution.
- threat modeling – Structured process to identify, categorize, and prioritize potential system threats for mitigation.
- ai bill of materials – Inventory detailing datasets, models, parameters, and dependencies used in an AI system.
- explainability – Degree to which model decision pathways can be interpreted and validated by humans.
- governance matrix – Mapped framework aligning roles, controls, risks, and audit evidence for AI lifecycle stages.
- copilot adoption – Structured enablement and governance activities driving responsible developer AI assistant usage.
- mvp validation – Evidence-driven process confirming market desirability and feasibility before scaling build investment.
- finops – Practice of aligning cloud spend with business value through cross-functional accountability and optimization.
- feature flag – Mechanism enabling runtime toggling of functionality for safe deployment and experimentation.
- vector embedding – Dense numerical representation of semantic meaning used for similarity search and retrieval.
- model registry – Central system storing model versions, metadata, lineage, and promotion status.
- data lineage – Traceable lifecycle of data origin, transformations, and downstream usage.
- incident retrospection – Structured analysis of an incident to extract learnings and remediation actions.
- continuous validation – Ongoing automated verification of model performance, drift, and operational constraints.
- policy engine – Runtime or CI component evaluating declarative governance or compliance rules against events.
- secret scanning – Automated detection of hardcoded credentials or sensitive tokens in code and configs.
- concept drift – Shift in the underlying relationship between input features and target outputs over time.
- benchmark dataset – Curated labeled data used to consistently evaluate model performance across iterations.
- prompt template – Reusable structured input pattern for a language model to ensure consistent task performance.
- synthetic data – Artificially generated data approximating statistical properties of real datasets for training or testing.
- chain of thought – Intermediate reasoning steps a model generates to reach an answer—may be hidden or exposed.
- hallucination rate – Observed frequency of unsupported or fabricated model outputs over evaluated scenarios.
- tool orchestration – Coordinated invocation of external functions/APIs by an AI agent to accomplish multi-step tasks.
- reasoning trace – Captured sequence of intermediate model planning or deliberation steps for debugging and evaluation.
- agent intervention rate – Portion of agent-handled tasks requiring human takeover or override.
- data redaction – Removal or masking of sensitive entities before model exposure.
- prompt hygiene – Practices ensuring clarity, safety, and consistency in prompt construction and maintenance.
- drift detection – Monitoring pattern identifying statistically significant change in model behavior or data distributions.
- autonomy escalation – Controlled handoff from automated agent flow to supervised human resolution path.
- golden dataset – High-quality, curated benchmark dataset used for regression evaluation.
- risk register – Catalog of identified risks with scoring, mitigations, and ownership for ongoing governance.
- evidence artifact – Documented proof (logs, screenshots, exports) demonstrating control operation or compliance status.
- progressive delivery – Gradual release strategy (canaries, feature flags) to limit blast radius and observe impact.
- scenario evaluation – Structured test harness executing representative tasks to score model or agent performance.
- capability boundary – Explicitly defined operational scope limiting an agent’s accessible actions or tools.
- token budget – Allocated limit on tokens or cost for a model interaction or reasoning chain.
- guardrail – Safety or policy mechanism preventing or mitigating undesired model behaviors.
- context window – Maximum token length a model can process in a single interaction.
- latent representation – Compressed numerical encoding learned by a model capturing semantic structure.
- semantic drift – Change in meaning or usage of business/domain terminology over time impacting model performance.
- policy drift – Gradual divergence between documented governance policies and actual operational behaviors.
- evaluation harness – Automated framework executing tests to score model or agent performance across metrics.
- fail fast rollback – Mechanism enabling rapid reversal of a deployment upon early anomaly signals.
- feature flag debt – Accumulated complexity and risk from stale or orphaned feature toggles.
- cost per successful task – Economic efficiency metric dividing total AI/agent compute & infra spend by successful outcomes.
- hallucination exception – Captured event where model output is flagged as unsupported or safety-invalid.
- model card – Documentation artifact summarizing intended use, limitations, and performance characteristics of a model.
- red team scenario – Adversarial evaluation case probing system weaknesses or unsafe behavior potential.
- prompt template registry – Version-controlled collection of approved prompt patterns.
- structured logging – Consistent machine-parseable log format enabling reliable analysis across systems.
- trace sampling – Selective retention of a subset of detailed execution traces for cost-effective observability.
- secret rotation – Periodic replacement of credentials or keys to reduce compromise impact window.
- evidence reuse – Leveraging a single control implementation artifact across multiple compliance frameworks.
- baseline drift – Shift in previously recorded performance baseline requiring recalibration of targets.
- scorecard – Concise dashboard summarizing KPIs or maturity indicators against targets.
- semantic enrichment – Augmentation of text with additional contextual descriptors supporting retrieval or reasoning.
- license utilization – Active use percentage of provisioned software or platform licenses.
- acceptance rate – Ratio of accepted AI assistant suggestions to total suggestions shown.
- drift index – Composite indicator aggregating multiple drift signals into a single severity score.
- hallucination taxonomy – Categorized schema of hallucination types for consistent classification and remediation.
- change lead time – Elapsed time from code commit to production deployment.
- intervention log – Record of human overrides or adjustments during agent operation used for refinement.
- memory ledger – Durable store of curated prior interactions or learnings reused by agents.
- tool budget – Limit on number or cost of external tool calls per agent task.
- confidence threshold – Minimum confidence score required before automated action or response emission.
- risk heat map – Visual matrix correlating impact and likelihood for prioritized mitigation focus.
- metric normalization – Adjusting metrics to common scales facilitating comparison across services or teams.
- elastic scaling – Automatic adjustment of compute resources based on load conditions.
- golden path – Opinionated, pre-approved implementation approach minimizing decision friction.
- code provenance – Traceability of source code origin, contributions, and transformation history.
- supply chain security – Practices protecting software dependencies, build processes, and artifact integrity.
- dependency SBOM – Software Bill of Materials enumerating components and dependencies in an application.
- governance cadence – Recurring scheduled forum reviewing controls, metrics, and risk posture.
- scenario pass rate – Percentage of evaluation scenarios meeting defined success criteria.
- tenant isolation – Logical or physical segmentation ensuring one tenant cannot access another tenant’s data.
- data retention matrix – Table defining retention duration and disposal method per data class.
- variance analysis – Assessment of deviation between expected and actual performance values.
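Several of the delivery metrics above (change failure rate, change lead time, mttr) reduce to simple arithmetic over deployment and incident records. A minimal sketch in Python; the record shape and field names are hypothetical, not taken from any specific tooling:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    caused_failure: bool    # did this change degrade service?
    lead_time_hours: float  # elapsed time from commit to production

def change_failure_rate(deploys: list[Deployment]) -> float:
    """Percentage of production changes that degraded service."""
    return 100 * sum(d.caused_failure for d in deploys) / len(deploys)

def mean_lead_time(deploys: list[Deployment]) -> float:
    """Average commit-to-production time (change lead time)."""
    return sum(d.lead_time_hours for d in deploys) / len(deploys)

def mttr(restore_times_hours: list[float]) -> float:
    """Mean time to restore service across incidents."""
    return sum(restore_times_hours) / len(restore_times_hours)

deploys = [Deployment(False, 4.0), Deployment(True, 12.0),
           Deployment(False, 2.0), Deployment(False, 6.0)]
print(change_failure_rate(deploys))  # 25.0
print(mean_lead_time(deploys))       # 6.0
print(mttr([1.5, 0.5]))              # 1.0
```

Deployment frequency is simply the count of such records per unit time, so it is omitted here.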
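Feature flags and percentage-based progressive delivery are often paired: a stable hash buckets each user so the same user always sees the same variant during a canary rollout. A minimal sketch, assuming a hypothetical in-memory flag store (production systems use a dedicated flag service):

```python
import hashlib

# Hypothetical flag configuration; a real store would be external.
FLAGS = {"new-checkout": {"enabled": True, "rollout_percent": 20}}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into a percentage rollout."""
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Stable hash of flag + user maps to a bucket in [0, 100).
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < cfg["rollout_percent"]
```

Hashing flag and user together keeps rollout cohorts independent across flags; this is also why stale entries become feature flag debt, since every lookup still pays the branch.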
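Policy as code means a policy engine evaluates declarative rules against resources or events rather than relying on written procedure. A toy evaluator in Python; the rule schema and field names are invented for illustration (real deployments typically use a dedicated engine such as OPA):

```python
# Hypothetical declarative rules, e.g. loaded from version control.
POLICIES = [
    {"name": "no-public-buckets", "field": "acl", "forbid": "public"},
    {"name": "encryption-required", "field": "encrypted", "require": True},
]

def evaluate(resource: dict) -> list[str]:
    """Return the names of policies the resource violates."""
    violations = []
    for p in POLICIES:
        value = resource.get(p["field"])
        if "forbid" in p and value == p["forbid"]:
            violations.append(p["name"])
        if "require" in p and value != p["require"]:
            violations.append(p["name"])
    return violations
```

Run in CI, a nonempty result fails the pipeline; run at runtime, it feeds evidence artifacts for audit.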
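Drift detection in its simplest form compares a current window of a metric against a recorded baseline. The sketch below standardizes the shift in the mean; this is a deliberately crude signal (production monitors typically use tests such as PSI or Kolmogorov-Smirnov), and the threshold of 3 standard deviations is an assumption, not a standard:

```python
import statistics

def drift_score(baseline: list[float], current: list[float]) -> float:
    """Standardized shift of the current window's mean
    relative to the baseline distribution."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(current) - mu) / sigma

def drifted(baseline: list[float], current: list[float],
            threshold: float = 3.0) -> bool:
    """Flag drift when the shift exceeds the chosen threshold."""
    return drift_score(baseline, current) > threshold
```

A composite drift index, as defined above, would aggregate several such scores (data drift, concept drift, baseline drift) into one severity value.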
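A confidence threshold and autonomy escalation combine into a single gate: the agent acts only above a minimum confidence and otherwise hands off to a human. A minimal sketch; the return shape and default threshold are illustrative assumptions:

```python
def gated_action(prediction: str, confidence: float,
                 threshold: float = 0.85) -> tuple[str, str | None]:
    """Emit the automated response only above the confidence
    threshold; otherwise escalate to a human reviewer."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("escalate", None)

print(gated_action("refund approved", 0.92))  # ('auto', 'refund approved')
print(gated_action("refund approved", 0.40))  # ('escalate', None)
```

Logging each escalation into an intervention log is what makes the agent intervention rate measurable over time.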