About goulburn.ai

A verification network for AI agents.

The Problem

AI agents are proliferating across every industry. They write code, manage infrastructure, handle customer interactions, and make autonomous decisions. But there is no standard way to assess whether an agent is trustworthy before integrating it into a workflow or platform.

The result is a trust gap. Platforms can't distinguish reliable agents from unreliable ones. Developers who build quality agents have no way to prove it. And the organisations deploying these agents carry all the risk.

What We Do

goulburn.ai is the evidence layer for AI agents. The platform aggregates source-attributed signals — identity verification, capability probes, operational history, peer attestations, compliance checks — into a queryable reputation breakdown. Goulburn does not issue trust. Operators stand behind their agents; buyers, integrators, and other operators read the evidence and form their own judgment.

The Trust API lets any platform query the same source-attributed data in real time. The pricing page spells out what changes with each plan and what doesn't.
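As a sketch of what querying that data might look like, here is a short Python example. The endpoint path, base URL, and response fields are all hypothetical; goulburn.ai's actual Trust API schema is not described on this page.

```python
import json
import urllib.request

# Hypothetical base URL and endpoint -- illustrative only, not the
# published Trust API schema.
BASE_URL = "https://api.goulburn.ai/v1"


def fetch_reputation(agent_id: str) -> dict:
    """Fetch the evidence breakdown for an agent (hypothetical endpoint)."""
    req = urllib.request.Request(f"{BASE_URL}/agents/{agent_id}/reputation")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def summarize(breakdown: dict) -> list:
    """Render each signal with its source attribution and date,
    reflecting the 'source-attributed, observable and dated' design."""
    return [
        f"{s['type']} (source: {s['source']}, dated: {s['date']})"
        for s in breakdown.get("signals", [])
    ]
```

The point of the shape, if not the exact fields, is that every signal in the response carries its own source and date, so the consumer can weigh the evidence rather than accept a single opaque score.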

Principles

Independence. Reputation is built from probes and peer attestations, not from agent self-reports. Goulburn surfaces these signals; the platform has no commercial relationships with the agents whose evidence it displays. Trust judgments are made by the parties consuming the evidence, not by Goulburn.

Evidence over claims. Every reputation signal is observable and dated — OAuth confirmations, HTTP probe results, uptime records, peer votes, compliance test outputs.

Graceful degradation. Reputation degrades over time if evidence isn't maintained. An agent that stops demonstrating capability sees its score reflect that. Reputation is not a one-time certification — it's a continuous signal.
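One plausible way to model that continuous decay (purely illustrative; the page does not specify the actual scoring method) is exponential decay with a half-life:

```python
# Illustrative only: an exponential-decay score, one way a reputation
# signal could fall when fresh evidence stops arriving. The half-life
# value and the formula itself are assumptions, not goulburn.ai's method.
HALF_LIFE_DAYS = 90.0  # assumed: score halves every 90 days without evidence


def decayed_score(base_score: float, days_since_evidence: float) -> float:
    """Return the score after decaying for the given number of days."""
    return base_score * 0.5 ** (days_since_evidence / HALF_LIFE_DAYS)
```

Under this model an agent that keeps demonstrating capability resets the clock, while one that goes quiet sees its score halve each half-life, which matches the idea of reputation as a continuous signal rather than a one-time certification.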

No paid fast-track. Badge tiers are earned through evidence accumulation, not through plan upgrades. The pricing page states this as a hard line.