The Infrastructure of Care: Ethical Architecture Beyond Compliance
We don’t need ethical AI — we need ethical infrastructure that makes care the default.
The Premise: Compliance is Not Ethics
"AI ethics" and "responsible tech" are often treated as policy or compliance problems to be audited after the fact. This is a fundamental mistake. Ethics can and should be compiled into the system's architecture. Telemetry boundaries, data retention logic, observability scope, and recovery behaviors all encode a moral stance.
What if we treated “care” as an architectural quality, as measurable and non-negotiable as “availability” or “latency”?
1. Designing the Infrastructure of Care
This approach manifests in specific architectural choices:
- Data Minimalism: Treat data minimization not merely as a legal requirement, but as a form of energy efficiency and user respect.
- Empathetic Recovery: Design recovery policies by first asking "who is harmed by this downtime?" The answer determines the recovery strategy.
- Transparency Logs as Moral Prosthetics: Use audit logs not only for security, but to give users a clear view into how the system perceives and acts upon them.
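The three choices above can be sketched as explicit policy objects. This is a minimal, hypothetical illustration, not a prescribed API: the names `RetentionPolicy`, `RecoveryPolicy`, and `transparency_entry` are invented for this sketch.

```python
from dataclasses import dataclass
from datetime import timedelta


@dataclass(frozen=True)
class RetentionPolicy:
    """Data minimalism: collect only named fields, expire them quickly."""
    fields: tuple          # explicit allowlist; anything unnamed is never stored
    ttl: timedelta         # a retention window, not "forever by default"


@dataclass(frozen=True)
class RecoveryPolicy:
    """Empathetic recovery: the strategy is chosen by who is harmed."""
    harmed_party: str      # e.g. "patient" vs. "internal dashboard viewer"
    strategy: str          # e.g. "failover_now" vs. "queue_and_notify"


def transparency_entry(user_id: str, action: str, basis: str) -> dict:
    """Transparency log: record how the system perceived and acted on a user."""
    return {"user": user_id, "action": action, "basis": basis}


# Care encoded as configuration: short retention, harm-driven recovery,
# and a log entry the user could actually read.
telemetry = RetentionPolicy(fields=("error_code", "region"), ttl=timedelta(days=7))
outage = RecoveryPolicy(harmed_party="patient", strategy="failover_now")
entry = transparency_entry("u42", "recommendation_shown", "explicit_opt_in")
```

Note that none of this requires a compliance review to enforce: the allowlist and the TTL are the architecture's stance, checked wherever the policy object is consumed.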
2. Case Links: From Open Science to AI Agents
This model is heavily influenced by the norms of open science, where provenance, versioning, and reproducibility are paramount. We can apply the same rigor to AI agents, embedding principles of consent and reversibility directly into their operational design.
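One way to embed consent and reversibility directly into an agent's operational design is to make every action carry its own inverse and pass a consent gate before it runs. The sketch below is hypothetical; `ConsentingAgent`, `ReversibleAction`, and the scope strings are invented names for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class ReversibleAction:
    """An action the agent may take, shipped with its own inverse."""
    name: str
    apply: Callable[[], None]
    undo: Callable[[], None]


class ConsentingAgent:
    """Agent that refuses actions outside consented scopes and can roll back."""

    def __init__(self, consented_scopes: Set[str]):
        self.consented_scopes = consented_scopes
        self.history: List[ReversibleAction] = []  # provenance: what ran, in order

    def perform(self, action: ReversibleAction, scope: str) -> bool:
        if scope not in self.consented_scopes:
            return False               # no consent, no action
        action.apply()
        self.history.append(action)
        return True

    def rollback(self) -> None:
        while self.history:
            self.history.pop().undo()  # reverse in LIFO order


# Usage: drafting is consented, so it runs; rollback restores the prior state.
inbox: List[str] = []
agent = ConsentingAgent(consented_scopes={"email:draft"})
draft = ReversibleAction("draft", lambda: inbox.append("hi"), lambda: inbox.pop())
allowed = agent.perform(draft, scope="email:draft")  # True; inbox is ["hi"]
agent.rollback()                                     # inbox is [] again
```

The `history` list doubles as provenance in the open-science sense: a versioned record of exactly what the agent did, in a form that can be replayed or reversed.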
3. Toward Care as an Architectural Metric
We can begin to measure care through feedback loops, trust events, and clearly defined restitution pathways. This allows us to define a new kind of Service Level Objective: an "SLO of Care," a target over concrete signals such as time-to-restitution and acknowledged trust events, rather than a vague assertion that the system acts in its users' best interest. This moves ethics from an ideal to an operational discipline.
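As a concrete sketch, an SLO of Care can be computed the same way an availability SLO is: as the fraction of bad events repaired within a defined window. The signal names (`TrustEvent`, `restitution_hours`) and the 24-hour window are assumptions for illustration, not a standard.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TrustEvent:
    """One user interaction, flagged if it harmed or confused the user."""
    harmed: bool
    restitution_hours: Optional[float]  # time taken to make it right; None if unresolved


def care_slo(events: List[TrustEvent], max_hours: float = 24.0) -> float:
    """Fraction of harmful events resolved within the restitution window.

    Analogous to availability = good_requests / total_requests, but over
    trust events rather than HTTP responses.
    """
    harmed = [e for e in events if e.harmed]
    if not harmed:
        return 1.0  # no harm observed: the objective is trivially met
    repaired = sum(
        1 for e in harmed
        if e.restitution_hours is not None and e.restitution_hours <= max_hours
    )
    return repaired / len(harmed)


events = [
    TrustEvent(harmed=False, restitution_hours=None),
    TrustEvent(harmed=True, restitution_hours=3.0),   # repaired in time
    TrustEvent(harmed=True, restitution_hours=None),  # still unresolved
]
print(care_slo(events))  # 0.5: one of two harms was repaired within the window
```

Once the metric exists, it can be alerted on, budgeted, and reported exactly like latency or availability, which is what makes care non-negotiable rather than aspirational.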