Mar 7, 2026 · Product

Introducing GAL

Your CISO can sleep at night while developers use AI coding agents.


The problem: AI agents without guardrails

Every engineering team is adopting AI coding agents. Claude Code, Cursor, GitHub Copilot, Windsurf, Gemini Code Assist — the list grows every month. Developers love them because they ship faster. CISOs lose sleep because they have zero visibility into what these agents are configured to do.

Today, AI agent configurations are scattered across hundreds of repositories. Each developer sets up their own CLAUDE.md, their own .cursorrules, their own permissions. There is no central place to see what agents are running, what they are allowed to do, or whether they comply with your organization's security policies.

This is the ungoverned chaos that every security-conscious organization faces. And it will only get worse as agents become more capable and more autonomous.

Our approach: Discovery, Approval, Sync

GAL is the governance layer for AI coding agents. We built it around a simple three-step workflow that brings order without slowing developers down.

  • Discovery — GAL automatically scans every repository in your organization and finds AI agent configurations. Claude Code, Cursor, Copilot, Windsurf, Gemini, Codex — we detect them all. One dashboard shows you everything.
  • Approval — Your CISO or security admin sets the organization-wide approved configuration. Define what permissions agents should have, what tools they can use, what commands are allowed. One source of truth for the entire org.
  • Sync — Developers pull the approved configuration with a single command: gal sync --pull. No manual copying, no configuration drift, no compliance gaps. Agents across your entire organization run with the configuration your security team approved.

How it works

Getting started with GAL takes less than two minutes. Install the CLI, authenticate with GitHub, and run your first sync.

Behind the scenes, GAL connects to your GitHub organization via a GitHub App. It scans repositories for known AI agent configuration files — CLAUDE.md, .claude/settings.json, .cursorrules, .github/copilot, and more. The results appear in your dashboard instantly.
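The discovery scan described above amounts to matching known configuration filenames against each repository's file tree. The sketch below illustrates that idea; it is not GAL's actual implementation, and the filename set is limited to the patterns named in this post.

```python
from pathlib import Path

# Known AI agent configuration files, as listed above.
# (Illustrative subset; a real detector would match more patterns.)
KNOWN_CONFIGS = {
    "CLAUDE.md",
    ".claude/settings.json",
    ".cursorrules",
}

def discover_agent_configs(repo_root: str) -> list[str]:
    """Return relative paths of known AI agent config files under repo_root."""
    root = Path(repo_root)
    found = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        rel = path.relative_to(root).as_posix()
        # Match either the full relative path or the bare filename.
        if rel in KNOWN_CONFIGS or path.name in KNOWN_CONFIGS:
            found.append(rel)
    return sorted(found)
```

Running this across every repository in an organization and aggregating the results is, conceptually, what populates the dashboard.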

From the dashboard, an administrator sets the approved configuration. This is a versioned, auditable configuration that defines exactly how AI agents should behave in your organization. When a developer runs gal sync --pull, they get the latest approved configuration applied to their local environment.

No more configuration drift. No more "it works on my machine" for agent setups. Every developer, every repository, every agent — running the same approved configuration.
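One way to picture the sync step: the approved configuration is a versioned document, and a pull replaces whatever is local, eliminating drift by construction. The schema and field names below are hypothetical, purely to illustrate the mechanism; GAL's real configuration format is not shown in this post.

```python
import json
from pathlib import Path

# Hypothetical approved-configuration document. Field names here are
# illustrative only, not GAL's actual schema.
APPROVED_CONFIG = {
    "version": 12,
    "permissions": {
        "allowed_tools": ["read_file", "edit_file", "run_tests"],
        "allowed_commands": ["git status", "npm test"],
    },
}

def pull_approved(target: Path, approved: dict) -> bool:
    """Write the approved config locally; return True if anything changed."""
    old = json.loads(target.read_text()) if target.exists() else None
    if old == approved:
        return False  # already in sync: no drift to correct
    target.write_text(json.dumps(approved, indent=2))
    return True
```

Because the pull is idempotent, running it repeatedly is safe: a developer who is already in sync sees no change, and one who has drifted is brought back to the approved state.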

Built for security teams

We built GAL because we believe AI coding agents will become the most important tools in software development. And like every important tool, they need governance.

GAL gives security teams what they have been asking for: visibility into what agents are doing, control over what they are allowed to do, and confidence that the entire organization is running on approved configurations.

We are not slowing developers down. We are giving them a faster path to compliance. Instead of manually configuring each agent, they run one command and get the approved setup. Instead of wondering if their configuration is correct, they know it is.

This is also what makes GAL defensible. We are the only platform that governs all six major AI coding agents — Claude Code, Cursor, Copilot, Windsurf, Gemini, and Codex — from a single approved configuration. Every organization that standardizes on GAL embeds its security policies into agent setup itself rather than bolting them on afterward. That institutional context, combined with the compliance relationships built during design partner deployments, is not something a new entrant can replicate quickly.

Where we are going

Configuration sync is what we ship today. It solves an immediate and urgent problem. But the reason we built GAL the way we did — as a governance layer rather than a sync utility — is because configuration is only the beginning.

The full vision is a governance stack that covers four layers organizations will need as AI agents become more autonomous:

  • Runtime enforcement — policies enforced at execution time, not just at setup. Agents that attempt actions outside their approved scope are stopped before they run.
  • Identity management — every agent action tied to the developer running it, integrated with your existing SSO and identity providers.
  • Audit trails — a complete, searchable log of agent activity across your organization, exportable for compliance audits and SIEM integration.
  • Policy engine — governance rules written in plain language and enforced automatically, across every agent and every platform.
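The runtime-enforcement layer can be illustrated with a pre-execution check: an agent's proposed command is matched against its approved scope, and anything outside that scope is stopped before it runs. This is a conceptual sketch under assumed policy semantics (command-prefix allowlisting), not GAL's enforcement code.

```python
import shlex

# Hypothetical approved scope: command prefixes an agent may execute.
ALLOWED_PREFIXES = [
    ["git", "status"],
    ["git", "diff"],
    ["npm", "test"],
]

def is_allowed(command: str) -> bool:
    """Return True if the command falls within the approved scope."""
    tokens = shlex.split(command)
    return any(tokens[: len(p)] == p for p in ALLOWED_PREFIXES)

def run_with_policy(command: str) -> None:
    """Stop out-of-scope commands before they run (enforcement sketch)."""
    if not is_allowed(command):
        raise PermissionError(f"Blocked by policy: {command!r}")
    # ... hand off to the real command executor here ...
```

The key property is where the check sits: before execution, not in an after-the-fact log review, which is the distinction the roadmap draws between enforcement and auditing.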

None of this works without getting configuration right first. That is why we are starting here, with design partners, building the foundation before adding the layers above it.

What's next

We are shipping GAL to design partners today. If you lead an engineering team that uses AI coding agents and cares about governance, we would love to work with you.

Our roadmap for the next six months is focused on three milestones. First, runtime policy enforcement at the CLI execution layer — agents that attempt blocked actions are stopped before they run, not flagged after. Second, identity-aware governance with SSO and SAML integration, so every agent action maps to a real person in your directory. Third, compliance-grade audit trails with SIEM export, giving your security team the evidence they need for SOC 2 and ISO 27001.

The governance layer for AI agents is not a nice-to-have. As agents become more autonomous — running in the background, opening pull requests, triggering pipelines — the organizations that govern them well will move faster and with more confidence than those that do not. GAL is built to be that layer.

Author: GAL Team
