Privacy & Scaling Explorations (PSE), a research group within the Ethereum Foundation, has published a new proposal introducing “Anonymous Credentials for Trustless Agents” (ACTA) to the Ethereum research community. This proposed privacy layer aims to address surveillance concerns surrounding the emerging ERC-8004 standard, which has reportedly facilitated the deployment of a growing number of AI agents across several major blockchain networks. By utilizing zero-knowledge proofs, ACTA is designed to allow autonomous agents to verify their credentials and protocol compliance without revealing their specific identity or historical data on the public ledger.
The push for a privacy-centric update follows the expanding adoption of the ERC-8004 standard, which was reportedly developed through a collaborative effort involving several major blockchain and technology organizations. While the standard established a framework for agent identity and reputation through specialized registries, it also created a public “interaction graph.” This ledger-based trail allows analysts to map which AI models specific decentralized finance protocols rely on, potentially exposing proprietary strategies or the identities of the individuals controlling the automated tasks.
Addressing transparency risks in the ERC-8004 framework
Under the existing ERC-8004 architecture, agents typically interact through three distinct registries: Identity, Reputation, and Validation. The Identity Registry often assigns a permanent on-chain ID, while the Reputation Registry logs feedback associated with an agent’s performance. While this system helps filter out malicious actors, it also creates an immutable record of an agent’s behavior. For institutional users, this level of transparency is frequently seen as a risk rather than a feature.
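To make the transparency problem concrete, here is a minimal Python sketch of the Identity and Reputation registries described above. The class and method names are illustrative, not the standard's actual contract interfaces; the point is that every ID assignment and every piece of feedback is a permanent, publicly readable record.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names are hypothetical, not ERC-8004's real ABI.

@dataclass
class IdentityRegistry:
    _next_id: int = 1
    owners: dict = field(default_factory=dict)  # agent_id -> controller address

    def register(self, controller: str) -> int:
        """Assign a permanent on-chain ID to a new agent."""
        agent_id = self._next_id
        self._next_id += 1
        self.owners[agent_id] = controller
        return agent_id

@dataclass
class ReputationRegistry:
    feedback: dict = field(default_factory=dict)  # agent_id -> list of scores

    def log_feedback(self, agent_id: int, score: int) -> None:
        """Append feedback; like on-chain state, the log is append-only and public."""
        self.feedback.setdefault(agent_id, []).append(score)

    def average(self, agent_id: int) -> float:
        scores = self.feedback.get(agent_id, [])
        return sum(scores) / len(scores) if scores else 0.0
```

Because the agent ID is fixed and the feedback log is public, anyone can reconstruct an agent's full performance history, which is precisely the institutional concern.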
If a liquidity router or a risk assessment tool is known to use a specific AI agent, observers can track that agent’s performance and anticipate its future moves. In shifting market conditions, automated agents with public footprints add a layer of predictability that professional participants typically avoid. ACTA seeks to break this link by moving away from public identity toward policy-based proofs.
Instead of presenting a fixed public ID to a protocol, an agent using ACTA would submit a proof showing it meets specific requirements. A protocol might require evidence that an agent has passed a security audit or maintains a certain reputation threshold without needing to identify the exact agent involved. This shift prevents the public interaction graph from forming, helping to ensure that a protocol’s stack of AI tools remains private.
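The admission check described above can be sketched in a few lines of Python. This is a hedged simulation, not ACTA's actual design: `verify_zk_proof` stands in for a real zero-knowledge verifier, and the policy fields are hypothetical. What matters is the shape of the interface, where no agent identifier appears anywhere in the inputs.

```python
# Hypothetical policy a protocol might enforce without learning who the agent is.
REQUIRED_MIN_REPUTATION = 80

def verify_zk_proof(proof: dict) -> bool:
    """Placeholder for a real ZK verifier, which would check the proof
    cryptographically against the claimed public statements."""
    return proof.get("valid", False)

def admit_agent(proof: dict, public_claims: dict) -> bool:
    """Admit an agent if the proof attests to the required policy.
    Note: the inputs carry claims ("audited", "reputation at least X"),
    never an agent ID or history."""
    if not verify_zk_proof(proof):
        return False
    return (public_claims.get("audited") is True
            and public_claims.get("reputation_at_least", 0) >= REQUIRED_MIN_REPUTATION)
```

In a real deployment the claims would be bound to the proof itself, so an agent could not assert a reputation threshold it does not meet.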
The technical mechanics of policy-based proofs
The core mechanism of ACTA relies on context-specific nullifiers. These mathematical tools allow an agent to prove its authorizations without allowing observers to link its activities across different platforms or sessions. A verifier using the ACTA layer sees only the result of a proof—confirmation that the agent is authorized—rather than the agent’s entire history across multiple chains.
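The unlinkability property of context-specific nullifiers can be illustrated with a simple hash-based construction. This is an assumption for illustration; ACTA's actual scheme would derive nullifiers inside a zero-knowledge circuit rather than with a bare hash. The behavior shown is the essential one: the same agent produces a stable value within one context, and unrelated values across contexts.

```python
import hashlib

def nullifier(agent_secret: bytes, context: str) -> str:
    """Illustrative nullifier: deterministic for a given (secret, context)
    pair, but computationally unlinkable across different contexts."""
    return hashlib.sha256(agent_secret + context.encode("utf-8")).hexdigest()

secret = b"agent-private-key-material"  # hypothetical secret, never revealed on-chain
n_lending = nullifier(secret, "lending-protocol-v1")
n_dex = nullifier(secret, "dex-router-v2")
# Same context -> same value, so a verifier can reject duplicate registrations;
# different contexts -> independent values, so activity cannot be correlated.
assert n_lending == nullifier(secret, "lending-protocol-v1")
assert n_lending != n_dex
```

A verifier in a given context stores only the nullifiers it has seen, which is enough to prevent double-use without ever learning which agent produced them.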
This development is particularly relevant as the industry moves toward more advanced security measures, seen in efforts to launch quantum-proof wallets and resilient decentralized architectures. If an agent can prove it is operating under the authority of a verified human without revealing who that human is, it maintains the decentralization ethos while satisfying potential compliance requirements. Researchers suggest this could allow agents to prove they are operating from approved jurisdictions or using specific model versions without exposing sensitive data.
Implementation challenges and accountability standards
Despite the potential of the ACTA proposal, technical hurdles remain before it can be widely adopted across the various agents active on Ethereum and other compatible networks. The PSE research draft acknowledges concerns regarding the computational cost of generating proofs on the client side and the risk of centralization if too few parties are trusted to issue credentials. There is also the “anonymity set” problem—if only a small number of agents use the privacy layer, it becomes easier for observers to deduce their identities through elimination.
To address potential malicious behavior within a private environment, some researchers have suggested complementary systems like performance bonding. In this model, agents would post collateral that is automatically seized if they fail to meet their contractual obligations. This creates a trust mechanism that doesn’t rely on the exposure of the agent’s real-world identity. As the Ethereum Foundation names new protocol co-leads to manage various research clusters, the integration of these privacy and accountability standards is expected to be a priority for the network’s automation roadmap.
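The performance-bonding model described above reduces to a simple escrow rule: collateral is returned to the agent on success and forfeited to the counterparty on failure. The sketch below uses hypothetical names and an in-memory ledger; an actual implementation would be a smart contract with an on-chain or oracle-based judgment of whether obligations were met.

```python
class BondEscrow:
    """Illustrative escrow for performance bonds posted by anonymous agents."""

    def __init__(self) -> None:
        self.bonds: dict = {}  # task_id -> (amount, beneficiary)

    def post_bond(self, task_id: str, amount: int, beneficiary: str) -> None:
        """Agent locks collateral for a task; the beneficiary is paid on failure."""
        self.bonds[task_id] = (amount, beneficiary)

    def settle(self, task_id: str, obligations_met: bool) -> tuple:
        """Release the bond: back to the agent on success, seized for the
        beneficiary on failure. Returns (recipient, amount)."""
        amount, beneficiary = self.bonds.pop(task_id)
        return ("agent", amount) if obligations_met else (beneficiary, amount)
```

Because settlement depends only on task outcome and posted collateral, the mechanism disciplines misbehavior without ever requiring the agent's real-world identity.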
The current traction of ERC-8004 suggests that the demand for verifiable AI is significant. Reports indicate the standard has recorded steady engagement and numerous feedback submissions since becoming a topic of active research earlier this year. If ACTA successfully navigates its testing phase, it could provide the privacy component required to move AI-driven blockchain interactions from experimental use toward broader financial applications within the ecosystem.
