Creating trustworthy AI

Dr. Matt Katz (subatomicdoc) on Twitter alerted me to a fascinating article about regulatory frameworks for artificial intelligence and the creation of “trustworthy AI”. There has been a recent spate of attention to bias and discrimination in artificial intelligence. I agree that all algorithms carry some bias, though how much is debatable. I have previously argued for open-sourcing the algorithms and inviting public scrutiny, which would remove any doubt. However, most companies prefer to keep them under proprietary wraps, citing commercial “secrets”.

The “Aletheia Framework” offers an insight into “trustworthy” AI. Here’s a quick rundown from Wikipedia:

[Image: Van Gogh’s Pair of Shoes (1886), a painting that reveals (aletheia) a whole world. Heidegger discusses this particular work in “The Origin of the Work of Art”.]

Aletheia (Ancient Greek: ἀλήθεια) is truth or disclosure in philosophy. It was used in Ancient Greek philosophy and revived in the 20th century by Martin Heidegger. Aletheia is variously translated as “unclosedness”, “unconcealedness”, “disclosure” or “truth”. The literal meaning of the word ἀλήθεια is “the state of not being hidden; the state of being evident”; it also means factuality or reality.[1] It is the opposite of lethe, which literally means “oblivion”, “forgetfulness”, or “concealment”.[2] As the name of a Greek goddess, Aletheia is the daughter of Zeus according to Pindar’s First Olympian Ode, while Aesop’s Fables[4] state she was crafted by Prometheus.

The Aletheia Framework worksheet (PDF): https://radoncnotes.com/wp-content/uploads/2020/12/aletheia-framework-worksheet-v1.pdf

The website says:

Once the AI has been applied, the framework includes a five-step continuous automated checking process, which, if comprehensively applied, tracks the decisions the AI is making to detect bias or malfunction and allow human intervention to control and correct it. Anyone can access The Aletheia Framework™; however, it is likely to be most applicable to members of big or small organisations who are accountable for managing risks, assurance, ethics and compliance. Critique in the spirit of improvement is encouraged and updated versions will be published in the future.
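The quoted description is abstract, so here is a minimal, purely illustrative sketch of what a continuous automated check with human escalation could look like in practice. The Aletheia Framework does not publish any code; the names (Decision, ReviewQueue, check_batch, DISPARITY_THRESHOLD) and the simple approval-rate disparity metric below are my own assumptions.

```python
# Illustrative sketch only: imagines "continuous automated checking" of a
# deployed model's decisions. Each batch is tested for a simple bias signal
# (approval-rate gap between two groups) and, if a threshold is exceeded,
# the batch is routed to a human reviewer. All names are hypothetical.

from dataclasses import dataclass, field
from typing import List

DISPARITY_THRESHOLD = 0.2  # assumed tolerance for the approval-rate gap between groups


@dataclass
class Decision:
    group: str        # protected attribute, e.g. "A" or "B"
    approved: bool    # the AI's decision


@dataclass
class ReviewQueue:
    """Holds batches flagged for human intervention."""
    flagged: List[List[Decision]] = field(default_factory=list)

    def escalate(self, batch: List[Decision], reason: str) -> None:
        print(f"Escalating batch of {len(batch)} decisions: {reason}")
        self.flagged.append(batch)


def approval_rate(batch: List[Decision], group: str) -> float:
    members = [d for d in batch if d.group == group]
    return sum(d.approved for d in members) / len(members) if members else 0.0


def check_batch(batch: List[Decision], queue: ReviewQueue) -> None:
    """One pass of the automated check: flag the batch if group disparity is large."""
    gap = abs(approval_rate(batch, "A") - approval_rate(batch, "B"))
    if gap > DISPARITY_THRESHOLD:
        queue.escalate(batch, f"approval-rate gap {gap:.2f} exceeds {DISPARITY_THRESHOLD}")


if __name__ == "__main__":
    queue = ReviewQueue()
    batch = [Decision("A", True)] * 8 + [Decision("B", True)] * 3 + [Decision("B", False)] * 5
    check_batch(batch, queue)   # this skewed batch triggers an escalation
```

The point of the sketch is simply that the “automated” part detects a signal and the “human intervention” part is an explicit escalation path, not an afterthought.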

I am not sure if the framework will have any legal backing or whether the organisations seeking to deploy the AI will adhere to the listed ideals.

I am reiterating my view that machine learning should be deployed to find efficiency gains and automate routine tasks. If you outsource “thinking” itself to an algorithm, it becomes difficult to justify the perverse outcomes that follow.
