TL;DR: No AI is permitted in the development of Inertia. This includes (but is not limited to) writing code or tests, writing or responding to issues or PRs, language translation, and reviewing security practices in the project.
Pull Requests generated with AI will be closed without review.
This is going to sound a bit like a university ethics essay; I’m sorry in advance.
If you are building a secure messaging system for users who may face state-level adversaries (and if you think that’s not you, look at what’s happening in the USA right now), your responsibility extends beyond functionality to verifiable trust. Projects like Signal demonstrate this clearly: when compelled by legal process, they were able to show they possessed virtually no user data, because their systems were designed that way from first principles. That level of assurance depends not just on what is built, but on how it is built. Inertia follows the same philosophy: every component must be understandable, auditable, and defensible under pressure.
AI-assisted development introduces an unavoidable gap in informed trust. These systems are opaque, externally controlled, and not fully auditable. Using them in the development process implicitly extends trust to parties and mechanisms outside the project’s control. For users in high-risk environments, this is not an acceptable trade-off. They must be able to rely on a system whose properties can be explained without reference to black-box tooling.
There is also a question of accountability. Security-conscious software requires clear human ownership of every decision. AI-generated or AI-influenced code blurs that responsibility, making it harder to attribute intent, reasoning, and potential failure points. The lessons of the XZ Utils backdoor incident are directly relevant: a sophisticated, long-term supply chain compromise succeeded in part because trust and authorship became diffuse and difficult to scrutinise. Even in the XZ case, it was at least possible to isolate the individual contributor who made the malicious changes. Introducing AI into the development process risks compounding this problem, adding another layer where intent cannot be clearly established. AI-enabled pull requests (especially in the case of “vibe coding”, where the nominal author has, by definition, never actually seen the code) make the commit author inherently untrustworthy regardless of their prior contributions to the project, since they are not the actual author of the request or the code. In a domain where mistakes or compromises can have severe real-world consequences, this ambiguity is ethically unacceptable.
Finally, this policy reflects alignment with the principles behind Reticulum and its design philosophy. The Zen of Reticulum emphasises simplicity, autonomy, and minimising reliance on centralised or opaque systems. Inertia is not just a frontend application; it involves porting and implementing security-critical components in Swift, where correctness, determinism, and full comprehension of the code are essential. Introducing AI into this process would conflict with those principles by adding opaque influences into parts of the system that must be rigorously understood. Avoiding AI is therefore a deliberate choice to preserve conceptual integrity, ensure that all security properties are derived from fully understood mechanisms, and maintain alignment with the decentralised, self-reliant ethos that Reticulum embodies.