ACE Platform - EU AI Act Compliance Statement
Effective Date: December 1, 2025 · Version: 1.0.0
Regulation: EU AI Act (Regulation (EU) 2024/1689)
1. Executive Summary
This document describes how the ACE (Agentic Context Engineering) platform complies with the European Union Artificial Intelligence Act (EU AI Act, Regulation (EU) 2024/1689).
Key Points:
- ACE is classified as a LIMITED RISK AI system
- ACE is NOT intended for any HIGH-RISK applications under Annex III
- ACE complies with transparency obligations under Article 50
- Users (deployers) maintain full oversight and control
2. System Description
2.1 What is ACE?
ACE is an AI-powered pattern learning platform for software developers that:
- Receives execution traces (coding task descriptions, steps taken, results)
- Analyzes traces using AI (Anthropic Claude) to identify patterns
- Stores learned patterns in a structured "playbook"
- Retrieves relevant patterns for future coding tasks
2.2 AI Components
| Component | AI Model | Function | Provider |
|---|---|---|---|
| Reflector | Claude Sonnet 4.5 | Analyzes traces, identifies patterns | Anthropic |
| Curator | Claude Haiku 4.5 | Merges, deduplicates patterns | Anthropic |
| Embeddings | Jina Code Embeddings | Semantic similarity matching | Jina AI (via Sentence Transformers) |
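Pattern retrieval relies on semantic similarity between embedding vectors. As an illustration only (toy 4-dimensional vectors standing in for real code embeddings, and a plain cosine-similarity function rather than the platform's actual retrieval code):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings; real code embeddings have hundreds of dimensions.
query     = np.array([0.9, 0.1, 0.0, 0.2])
pattern_a = np.array([0.8, 0.2, 0.1, 0.3])  # semantically close pattern
pattern_b = np.array([0.0, 0.9, 0.8, 0.1])  # unrelated pattern

sim_a = cosine_similarity(query, pattern_a)
sim_b = cosine_similarity(query, pattern_b)
assert sim_a > sim_b  # the closer pattern ranks higher for retrieval
```

Retrieval then returns the highest-scoring patterns for the current coding task.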
2.3 Value Chain Position
```
┌─────────────────────────────────────────────────────────────────┐
│                      EU AI ACT VALUE CHAIN                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  GPAI MODEL PROVIDER     AI SYSTEM PROVIDER        DEPLOYER     │
│      (Anthropic)         (Code Engine/ACE)          (Users)     │
│           │                      │                     │        │
│    Claude models            ACE platform               │        │
│   (Sonnet, Haiku)            (API + SDK)               │        │
│           │                      │                     │        │
│           ▼                      ▼                     ▼        │
│     ┌─────────┐           ┌───────────┐          ┌─────────┐    │
│     │  GPAI   │ ────────▶ │    ACE    │ ───────▶ │   End   │    │
│     │ Models  │integrated │ Platform  │   used   │  Users  │    │
│     └─────────┘   into    └───────────┘    by    └─────────┘    │
│           │                      │                     │        │
│      Anthropic's           Code Engine's          Customer's    │
│      obligations            obligations           obligations   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
3. Risk Classification
3.1 Classification Analysis
Under Article 6 of the EU AI Act, AI systems are classified based on their intended purpose and potential impact.
| Risk Level | Criteria | ACE Status |
|---|---|---|
| Unacceptable (Art. 5) | Social scoring, manipulation, exploitation | NOT APPLICABLE |
| High-Risk (Annex III) | Employment, education, credit, justice, etc. | NOT INTENDED |
| Limited Risk (Art. 50) | AI systems requiring transparency | APPLICABLE |
| Minimal Risk | General AI systems | APPLICABLE |
3.2 Why ACE is NOT High-Risk
ACE does not fall under Annex III high-risk categories because:
| Annex III Category | ACE Analysis |
|---|---|
| Biometrics | ACE does not process biometric data |
| Critical Infrastructure | ACE is not used for infrastructure management |
| Education | ACE does not determine educational access or outcomes |
| Employment | ACE is explicitly prohibited from employment decisions |
| Essential Services | ACE is not used for credit, insurance, or benefits |
| Law Enforcement | ACE has no law enforcement applications |
| Migration | ACE has no immigration/asylum applications |
| Justice | ACE has no legal/judicial applications |
3.3 Safeguards Against High-Risk Use
To ensure ACE is not used for high-risk purposes, we implement:
- Acceptable Use Policy - Explicitly prohibits Annex III uses
- Terms of Service - Legally binding prohibition on high-risk use
- Technical Controls - No features designed for HR/employment use
- Monitoring - Usage patterns reviewed for policy compliance
4. Transparency Compliance (Article 50)
4.1 AI Disclosure
Users are informed that they are interacting with an AI system through:
| Disclosure Point | How Implemented |
|---|---|
| Website | Clear statement that ACE uses AI (Claude by Anthropic) |
| API Documentation | AI processing described in SDK docs |
| Terms of Service | Section 2 explicitly describes AI processing |
| Privacy Policy | Section 3.2 details AI components and purposes |
| In-App | Management interface shows AI-generated patterns |
4.2 AI Processing Information
Users are informed about:
- Which AI models process their data (Claude Sonnet, Claude Haiku)
- What decisions AI makes (pattern retention, similarity matching)
- How to override AI decisions (management interface)
- How to disable AI processing (configuration settings)
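The disclosures above can be honored programmatically. As a minimal sketch only, with hypothetical setting names (the real configuration API may differ):

```python
from dataclasses import dataclass

# Hypothetical configuration object illustrating the "disable AI processing"
# control described above; actual field names are assumptions.
@dataclass
class AceConfig:
    auto_learning: bool = True        # whether Reflector/Curator run on new traces
    similarity_threshold: float = 0.85

def on_trace_received(trace: dict, config: AceConfig) -> str:
    """Route an incoming trace based on the user's AI-processing preference."""
    if not config.auto_learning:
        return "stored-only"  # trace is stored; no AI analysis is performed
    return "analyzed"         # trace is passed to the AI components

config = AceConfig(auto_learning=False)  # user opts out of AI processing
assert on_trace_received({"task": "fix bug"}, config) == "stored-only"
```

The key point for Article 50 purposes is that opting out is a first-class setting, not a support request.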
5. Human Oversight (Article 14)
5.1 Oversight Mechanisms
Users maintain oversight through:
| Mechanism | Description |
|---|---|
| Pattern Review | All AI-generated patterns visible in management interface |
| Manual Override | Users can modify, delete, or add patterns manually |
| Voting System | Upvote/downvote to influence pattern confidence |
| Learning Control | Can disable automatic learning entirely |
| Configuration | Adjustable thresholds for AI decisions |
| Data Export | Full export of all data at any time |
| Data Deletion | Complete deletion of projects/accounts |
5.2 Effective Oversight Design
The system is designed so that:
- No AI decision is irreversible
- Users can inspect all AI reasoning (patterns include evidence)
- AI suggestions can be ignored or overridden
- Human judgment is final in all cases
6. Data Governance (Article 10)
6.1 Training Data
ACE does NOT train AI models. We use pre-trained models from Anthropic (Claude). Anthropic is responsible for their training data governance.
6.2 User Data Processing
For user-submitted data, we ensure:
| Requirement | Implementation |
|---|---|
| Relevance | Only data user explicitly submits is processed |
| Quality | Validation of input format and structure |
| Rights | User owns their data; we process under contract |
| Minimization | Only necessary data retained |
| Security | Encryption in transit and at rest |
7. Technical Documentation (Article 11)
We maintain documentation covering:
- System architecture and data flows
- AI component specifications
- Security measures and controls
- API specifications
- Operational procedures
8. Record-Keeping (Article 12)
We maintain comprehensive logs:
| Log Type | Contents | Retention |
|---|---|---|
| API Access Logs | Requests, responses, timestamps | 90 days |
| Audit Logs | Administrative actions, token views | 1 year |
| Learning Logs | Patterns created, updated, deleted | 1 year |
| Error Logs | System errors and exceptions | 90 days |
9. Accuracy and Robustness (Article 15)
9.1 Accuracy Measures
- Confidence scoring based on helpful/harmful feedback
- 85% similarity threshold prevents redundant patterns
- Low-confidence patterns (<30%) automatically pruned
- User feedback refines pattern quality
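The accuracy measures above can be sketched as follows. This is an illustration only, assuming a simple helpful/harmful ratio; the production confidence formula may differ:

```python
def confidence(helpful: int, harmful: int) -> float:
    """Ratio-based confidence from user feedback; 0.5 is a neutral prior."""
    total = helpful + harmful
    return helpful / total if total else 0.5

PRUNE_THRESHOLD = 0.30  # patterns below 30% confidence are pruned

patterns = [
    {"id": "p1", "helpful": 9, "harmful": 1},  # confidence 0.90 -> kept
    {"id": "p2", "helpful": 1, "harmful": 4},  # confidence 0.20 -> pruned
]

kept = [p for p in patterns
        if confidence(p["helpful"], p["harmful"]) >= PRUNE_THRESHOLD]
assert [p["id"] for p in kept] == ["p1"]
```

Because pruning is driven by user votes, human feedback (Section 5.1) directly governs which AI-generated patterns survive.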
9.2 Robustness Measures
- Pydantic schema validation on all inputs
- Rate limiting prevents system overload
- Graceful degradation on AI failures
- System functions without AI if needed
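The first and third robustness measures can be sketched together: Pydantic rejects malformed input, and a failed validation degrades gracefully instead of crashing the service. The `Trace` field names below are assumptions, not the platform's actual schema:

```python
from typing import Optional
from pydantic import BaseModel, ValidationError

class Trace(BaseModel):
    task: str          # coding task description
    steps: list[str]   # steps taken
    result: str        # outcome of the task

def ingest(raw: dict) -> Optional[Trace]:
    """Validate an incoming trace; malformed input is skipped, not fatal."""
    try:
        return Trace(**raw)
    except ValidationError:
        return None  # graceful degradation: skip learning, keep serving

ok = ingest({"task": "fix auth bug", "steps": ["reproduce", "patch"],
             "result": "tests pass"})
bad = ingest({"task": 123})  # wrong type and missing fields: rejected
assert ok is not None and bad is None
```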
10. Cybersecurity (Article 15)
| Control | Implementation |
|---|---|
| Authentication | Token-based API authentication |
| Authorization | Multi-tenant isolation, role-based access |
| Encryption | TLS 1.3 in transit, AES-256 at rest |
| Token Security | SHA-256 hashing, encrypted storage via Clerk |
| Audit Trail | Token view logging with IP, user agent |
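The SHA-256 token handling in the table above can be sketched as follows; this is an illustration of the technique, not the platform's actual implementation:

```python
import hashlib
import hmac
import secrets

def hash_token(token: str) -> str:
    """Store only the SHA-256 digest; the raw token is never persisted."""
    return hashlib.sha256(token.encode()).hexdigest()

def verify_token(presented: str, stored_digest: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(hash_token(presented), stored_digest)

token = secrets.token_urlsafe(32)  # shown to the user exactly once
digest = hash_token(token)         # only this digest reaches the database

assert verify_token(token, digest)
assert not verify_token("wrong-token", digest)
```

A leaked digest cannot be replayed as a credential, since authentication requires the original token.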
11. Provider Obligations Summary
As an AI System Provider under the EU AI Act, Code Engine:
| Obligation | Status | Evidence |
|---|---|---|
| Risk Classification | Complete | This document, Section 3 |
| Transparency | Complete | ToS, Privacy Policy, Documentation |
| Human Oversight Design | Complete | Management interface, configuration API |
| Data Governance | Complete | Privacy Policy, data handling procedures |
| Technical Documentation | Complete | Architecture docs, API specs |
| Record-Keeping | Complete | Logging infrastructure, Logfire |
| Accuracy Measures | Complete | Confidence scoring, pruning, feedback |
| Cybersecurity | Complete | Security audit, encryption, access controls |
12. Deployer Guidance
If you integrate ACE into your own products, you may be a "deployer" under the EU AI Act with your own obligations:
12.1 Your Potential Obligations
| If You... | You May Need To... |
|---|---|
| Integrate ACE into a product | Assess if YOUR product is high-risk |
| Process end-user data through ACE | Conduct data protection impact assessment |
| Operate in the EU | Ensure compliance with EU AI Act deployer rules |
| Use ACE for HR purposes | STOP - This is prohibited |
12.2 Resources
13. Contact for Compliance Inquiries
For questions about EU AI Act compliance:
- Email: compliance@code-engine.app
- Subject: "EU AI Act Inquiry - ACE"
For compliance documentation requests (enterprise customers): legal@code-engine.app
This document demonstrates Code Engine's commitment to responsible AI development and regulatory compliance.