Why “Trustless”? Why .AI?
Why “Trustless”?
“Trustless” is a word with two meanings that are, paradoxically, largely opposite to each other. The resolution of this paradox is the main root of our innovation, and the cognitive dissonance it creates stimulates critical thinking.
In its original historical meaning, “trustless” stands for “untrustworthy”. This negative connotation derives from the concept of “faithless”, ingrained deep in Western culture through the Christian concept of faith, whereby blind, uncritical faith in scripture is not only good and right but something that, if challenged, could even get you burned at the stake - as so many Christian reformers, men and women, learned through history.
In its second, more recently developed meaning, it stands for the concepts of “untrusting” and “distrustful”, i.e. doing away with the assumption of unverified trust in anything and anyone. Such a concept is at the root of the invention of democracy and of why it works: by ensuring that properly-conducted democratic elections appear trustworthy to most people, even though there would be huge economic and political incentives to corrupt them. It is also at the heart of our greatest engineering safety successes - from commercial aviation to nuclear weapons safety controls - by informing their certification and oversight mechanisms.
We chose the name TRUSTLESS because it aligns with the concept of “Trustless Computing”, the philosophy and mission of the NGO from which we spun off, the Trustless Computing Association, as expressed in its Trustless Computing Paradigms and its Trustless Computing Certification Body Initiative.
Trustless Computing is “computing that does not require unverified trust in any person, entity or technology”. It carries to their ultimate consequences the philosophies behind similar initiatives, such as the Trust No One model by US security expert Steve Gibson and the Zero Trust model proposed by John Kindervag - but goes orders of magnitude beyond them.
Trustless Computing refers to an IT service or IT experience in which no unverified upfront trust is assumed in anyone or anything, except for the inherent self-guaranteeing resiliency, transparency and accountability of all organizational processes that are critically involved in its operation, lifecycle and certification governance - whose quality can be assessed by moderately educated and informed citizens, just as with democratic electoral systems.
The term “trustless computing” was also chosen because it means the opposite of Trusted Computing™ – the dominant security paradigm for the most secure IT systems of the last few decades, as brought forward by the largest IT companies via the Trusted Computing Group and GlobalPlatform. That security paradigm is based on the fallacious assumption that a subset of critical components in a secure IT system can be trusted “a priori” because those groups say so, according to highly partial and opaque certifications that do not enable end-users, or experts they trust, to assess the basis of such claims.
Why “.AI”?
In an online video, Elon Musk summarized (00:00-00:06) his view of the future: "The least scary future I can imagine is one where we have at least democratized AI". Further on in the same video (06:50-08:10), he states of AI: "The rate of improvement is really dramatic. We have to find a way to make sure that the advent of digital superintelligence is one that is symbiotic with humanity. I think this is the single greatest existential risk that we face and the most pressing one. I am not normally an advocate of regulation and oversight. But this is a case where you have a very serious danger to the public". He goes on to say: "Therefore there needs to be a public body that has insight and oversight to ensure that everyone is developing AI safely. This is extremely important. The danger of AI is much greater than the danger of nuclear warheads, by a lot. And no one would suggest that we allow anyone to build nuclear warheads, that would be insane. Mark my words: AI is far more dangerous than nukes, far".
A very similar conclusion has been reached by Nick Bostrom - arguably the most influential researcher and expert on the future of the most advanced AI, variously called superintelligence, artificial general intelligence (AGI) or the singularity - in two of his recent papers: one on the dangers of openness in advanced AI (“Strategic Implications of Openness in AI Development”, pdf) and one on the consequent need for extremely strict global oversight (“The Vulnerable World Hypothesis”, pdf). It may not be a coincidence that, not long after those writings, Elon Musk resigned as chairman of the OpenAI initiative, founded on the naive libertarian idea that making the most advanced AI tools widely available would produce beneficial and democratic AI.
Once our Trustless Computing Certification Body, its governance model, and the low-level hardware and software platform of our Seevik Pod and Phone have proven to provide the communications and transactions of the most wealthy, powerful and politically exposed with radically unprecedented levels of security and privacy - while concurrently ensuring legitimate lawful access - they will be positioned to become the certifications and technical platform of choice for the root-of-trust subsystems of the most security-critical, privacy-critical or safety-critical AI systems, such as self-driving cars, drones and social media feed systems, including the most advanced AI systems themselves.
It is this ability to concurrently ensure extreme levels of privacy and extreme resiliency for legitimate lawful access that would enable them to be used to implement the “extremely strict global oversight” while maintaining democracy and civil freedoms - which are in turn necessary for democratic oversight over AI.
Such a central role would enable TRUSTLESS.AI and the Trustless Computing Certification Body to realize both huge profits and huge public good.