Guardrails Needed for AI Analytics Agents, Not Increased Model Size

Imagine a VP of finance at a large retailer. She asks the company’s new AI analytics agent a straightforward question: “What was our revenue last quarter?” The response arrives within seconds.

Confident.

Clean.

Incorrect.

This situation is more common than many organizations want to admit. AtScale, a company that helps organizations implement governed analytics environments and ensure semantic consistency, has discovered that merely increasing model parameters does not solve the AI governance and context issues enterprises encounter.

When AI systems access inconsistent or ungoverned data, adding complexity to the model does not solve the problem; it makes it worse. Organizations across industries have rushed to build AI systems that analyze data, generate insights, and trigger automated workflows, and the typical response to shortcomings has been scale: larger models, more computing power, more features. The underlying belief is that a model that is large enough will eventually produce reliable results.

However, evidence suggests this assumption doesn’t hold. Recent TDWI research indicates that nearly half of respondents rate their AI governance initiatives as immature or very immature. The problem often lies less in the models’ capabilities than in the data lineage and business definitions those models depend on.

Why larger models don’t solve governance

The AI industry often assumes that building more advanced models will somehow correct performance errors. In enterprise analytics, that assumption breaks down quickly.

While scale may extend a model’s reasoning capabilities, it doesn’t enforce the business’s agreed definition of gross margin. It doesn’t fix metric discrepancies that have resided in separate dashboards for years, nor does it produce traceable lineage.

Governance issues don’t resolve with scale. Business rules buried in individual tools, inconsistent definitions across teams, and outputs without an audit trail are structural problems; a larger model doesn’t fix them, it just delivers unreliable answers more fluently.

At AtScale, we see a common pattern among our clients: when inconsistent data definitions are carried into the AI layer, the same issues persist, only faster and with less transparency than before.

Performance and responsibility are distinct roles. A model reasons. A governance layer defines what the model reasons over, constrains how business logic is applied, and ensures outputs can be traced back to a source of record. One cannot replace the other.

The real risk: Unconstrained agents in enterprise environments

The issue with AI agents rarely lies with the model itself; it lies in what the model has access to and whether anyone can see what it is doing.

Without common context, AI agents may interpret data differently across systems. In large enterprises, even small definitional discrepancies can lead to different outcomes. Structural risks typically arise from four main causes:

– Muddled data definitions: Agents pull from sources where the same metric can mean different things to different teams.
– Conflicting metrics: Departments disagree on a figure, so two agents return two answers with no clear way to tell which is correct.
– Opaque reasoning: Outputs lack traceable lineage back to how they were produced.
– Audit gaps: Without tracing outputs back to a governed source, catching errors or assigning accountability is unreliable.

These aren’t signs of AI failure. They indicate the infrastructure surrounding AI hasn’t kept pace.

What guardrails truly mean in AI analytics

Guardrails are often seen as constraints. However, they frequently provide the conditions necessary for AI agents to function with greater confidence.

Guardrails can align AI-generated outputs with established business logic. They also create a structure for autonomous agents to operate within, so that as autonomy grows, so does reliability. In analytics, guardrails typically take several specific forms:

– Shared data definitions: Consistent terms like revenue, churn, or margin, shared across systems.
– Business logic constraints: Rules directing calculations regardless of the tools or agents performing them.
– Lineage visibility: The ability to trace any output’s origin.
– Access controls: Permissions defining what data an agent can access.
– Standardized metrics: Consistent definitions across departments and platforms.

The goal isn’t to hinder AI’s performance but to give AI a solid foundation.
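To make the idea concrete, here is a minimal sketch, in Python, of what a guardrail check might look like before an agent’s query runs. Everything in it, the MetricDefinition class, the table and role names, is hypothetical and purely illustrative, not AtScale’s implementation; the point is that the agent receives the governed definition, or a refusal, instead of improvising one.

```python
# Illustrative only: hypothetical names, not an AtScale API.
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    name: str
    sql_expression: str       # business logic constraint: the one agreed calculation
    allowed_roles: set[str]   # access control: which roles or agents may query it
    source_table: str         # lineage: the governed source of record

GOVERNED_METRICS = {
    "revenue": MetricDefinition(
        name="revenue",
        sql_expression="SUM(net_sales_amount)",
        allowed_roles={"finance", "analytics_agent"},
        source_table="finance.fact_sales",
    ),
}

def resolve_metric(requested: str, agent_role: str) -> MetricDefinition:
    """Guardrail check: an agent only gets a metric that is defined and permitted."""
    metric = GOVERNED_METRICS.get(requested)
    if metric is None:
        raise ValueError(f"'{requested}' has no governed definition; refusing to guess.")
    if agent_role not in metric.allowed_roles:
        raise PermissionError(f"Role '{agent_role}' may not query '{requested}'.")
    return metric

# Usage: the agent asks for "revenue" and receives the agreed formula and its source,
# rather than inventing its own aggregation over whatever table it happens to find.
metric = resolve_metric("revenue", agent_role="analytics_agent")
print(metric.sql_expression, "from", metric.source_table)
```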

The role of the semantic layer as a constraint framework

A semantic layer mediates between the data and the applications and AI agents that use it: it defines business concepts, encodes the calculation logic behind them, and provides a common vocabulary for everything that queries the data.

A semantic layer doesn’t manipulate or duplicate data; it defines how the data is represented. By querying a governed semantic layer instead of the base tables, AI agents generate output based on business-defined logic rather than mere inference. This distinction becomes crucial when multiple AI agents on different systems must produce consistent outputs.
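As a rough illustration, not AtScale’s actual interface, a semantic-layer query might look something like the sketch below; the class, method, and table names are invented for the example. What matters is that the agent asks for a business concept and the layer supplies the governed calculation, so any agent asking the same question gets the same logic.

```python
# Illustrative only: SemanticLayer, its methods, and the names used here are hypothetical,
# not a real AtScale (or other vendor) client library.

class SemanticLayer:
    """Stands in for a governed semantic layer that owns metric and dimension definitions."""

    def __init__(self, definitions: dict[str, str]):
        self._definitions = definitions  # business term -> governed calculation

    def query(self, metric: str, group_by: str) -> str:
        # The agent never writes its own SQL against base tables; it asks for a
        # business concept, and the layer compiles the governed logic behind it.
        if metric not in self._definitions:
            raise KeyError(f"'{metric}' is not a governed metric.")
        return (
            f"SELECT {group_by}, {self._definitions[metric]} AS {metric} "
            f"FROM finance.fact_sales GROUP BY {group_by}"
        )

layer = SemanticLayer({"revenue": "SUM(net_sales_amount)"})

# Two different agents, on two different systems, asking the same business question
# get the same compiled logic, and therefore the same answer.
print(layer.query(metric="revenue", group_by="fiscal_quarter"))
```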

AtScale views the semantic layer as a context boundary that ensures AI agents interpret data through shared business definitions. It works less like a guardrail than like a common language, ensuring that systems operate with mutual understanding.

Governance is an architectural issue, not a model issue

Enterprise organizations are learning that AI governance involves more than building the largest model; it is about creating an environment in which the chosen model can work reliably. A well-designed, governed architecture, with shared definitions, traceable logic, and a common context across all systems, is likely to yield better, more reliable results than a larger model running against ungoverned data.
