Trustworthy AI explained

Trustworthy AI refers to artificial intelligence systems designed and deployed to be transparent, robust and respectful of data privacy.

Trustworthy AI makes use of a number of privacy-enhancing technologies (PETs), including homomorphic encryption, federated learning, secure multi-party computation, differential privacy, and zero-knowledge proofs.[1][2]

The concept of trustworthy AI also encompasses the need for AI systems to be explainable, accountable, and robust. Transparency in AI involves making the processes and decisions of AI systems understandable to users and stakeholders. Accountability ensures that there are protocols for addressing adverse outcomes or biases that may arise, with designated responsibilities for oversight and remediation. Robustness and security aim to ensure that AI systems perform reliably under various conditions and are safeguarded against malicious attacks.[3]

ITU standardization

Trustworthy AI is also a work programme of the International Telecommunication Union, an agency of the United Nations, initiated under its AI for Good programme. Its origin lies in the ITU-WHO Focus Group on Artificial Intelligence for Health, where the strong need for privacy, combined with the need for analytics, created demand for standards in these technologies.

When AI for Good moved online in 2020, the TrustworthyAI seminar series was launched to begin discussions on such work, which eventually led to the standardization activities.[4]

Multi-Party Computation

Secure multi-party computation (MPC) is being standardized under "Question 5" (the incubator) of ITU-T Study Group 17.[5]
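As background on the technique itself, MPC lets several parties jointly compute a function over their inputs without revealing those inputs to one another. A minimal, illustrative Python sketch of additive secret sharing, one common MPC building block (this is a generic textbook construction, not any particular ITU standard):

```python
import secrets

P = 2**61 - 1  # a Mersenne prime used as the modulus for the shares

def share(value, n_parties=3):
    # Split a value into additive shares: any subset of fewer than
    # n_parties shares reveals nothing about the value.
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two secret inputs are shared; each party adds its shares locally,
# and only the final sum is revealed on reconstruction.
a_shares, b_shares = share(20), share(22)
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
# reconstruct(sum_shares) == 42, yet no party saw 20 or 22.
```

Real MPC protocols add further machinery (e.g. for multiplication and malicious security), but the share-locally, combine-at-the-end pattern is the same.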

Homomorphic Encryption

Homomorphic encryption allows computation on encrypted data: the result remains encrypted and unknown to those performing the computation, but can be deciphered by the original encryptor. It is often developed with the goal of enabling data to be processed in jurisdictions other than the one in which it was created (e.g. under the GDPR).
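The additive variant of this property can be illustrated with a toy Paillier cryptosystem, where multiplying two ciphertexts yields an encryption of the sum of their plaintexts (the primes below are demo-sized, far too small for real security):

```python
import math
import secrets

# Toy Paillier keypair: demo-sized primes, NOT a secure key size.
p, q = 999983, 1000003
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # private decryption helper

def encrypt(m):
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:  # r must be a unit mod n
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds plaintexts.
# decrypt((encrypt(20) * encrypt(22)) % n2) == 42
```

Only the key holder can run `decrypt`; a third party can combine ciphertexts without ever learning the underlying values.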

ITU has collaborated with the HomomorphicEncryption.org standardization meetings since their early stages; this initiative has developed a standard on homomorphic encryption. The fifth homomorphic encryption meeting was hosted at ITU headquarters in Geneva.

Federated Learning

The zero-sum masks used by federated learning for privacy preservation are also used extensively in the multimedia standards of ITU-T Study Group 16 (VCEG), such as JPEG, MP3, H.264, and H.265 (also known as MPEG).
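The masking idea itself can be sketched as follows: each client adds a mask to its model update, the masks are constructed to cancel in aggregate, so the server recovers only the sum of the updates and never an individual contribution (an illustrative sketch, not a production secure-aggregation protocol):

```python
import random

def apply_zero_sum_masks(updates, seed=1234):
    # One mask per client; the last mask is chosen so that all
    # masks sum to zero (the "zero-sum" property).
    rng = random.Random(seed)
    masks = [rng.uniform(-1.0, 1.0) for _ in range(len(updates) - 1)]
    masks.append(-sum(masks))
    return [u + m for u, m in zip(updates, masks)]

# Each client's individual update is hidden by its mask, but the
# aggregate (the value the server needs) is unchanged.
client_updates = [0.2, 0.5, 0.3]
masked = apply_zero_sum_masks(client_updates)
```

In real secure aggregation the masks are derived from pairwise keys between clients rather than a shared seed, but the cancellation principle is the same.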

Zero-knowledge proof

Pre-standardization work on the topic of zero-knowledge proofs has previously been conducted in the ITU-T Focus Group on Digital Ledger Technologies.
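A zero-knowledge proof lets a prover demonstrate knowledge of a secret without revealing the secret itself. The core idea can be sketched with a toy non-interactive Schnorr proof of knowledge of a discrete logarithm, made non-interactive via the Fiat-Shamir heuristic (the parameters below are illustrative and far too small for real security):

```python
import hashlib
import secrets

# Toy parameters: p = 2q + 1 with q prime; g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

x = secrets.randbelow(q)   # prover's secret
y = pow(g, x, p)           # public value, y = g^x mod p

def challenge(t):
    # Fiat-Shamir: derive the challenge by hashing the transcript.
    return int(hashlib.sha256(str((g, y, t)).encode()).hexdigest(), 16) % q

def prove(x):
    k = secrets.randbelow(q)
    t = pow(g, k, p)                  # commitment
    s = (k + challenge(t) * x) % q    # response
    return t, s

def verify(y, t, s):
    # Accept iff g^s == t * y^c (mod p); the check never uses x.
    return pow(g, s, p) == (t * pow(y, challenge(t), p)) % p
```

The verifier learns that the prover knows x with y = g^x mod p, but the transcript (t, s) reveals nothing useful about x itself.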

Differential privacy

The application of differential privacy in the preservation of privacy was examined at several of the "Day 0" machine learning workshops at AI for Good Global Summits.
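Differential privacy bounds how much any single individual's data can affect a released statistic, typically by adding calibrated noise. A minimal sketch of the Laplace mechanism for a counting query (a standard textbook construction, not tied to any specific workshop material):

```python
import random

def dp_count(true_count, sensitivity=1.0, epsilon=1.0):
    # Laplace mechanism: noise with scale b = sensitivity / epsilon
    # yields epsilon-differential privacy for a counting query.
    b = sensitivity / epsilon
    # The difference of two Exponential(1/b) samples is Laplace(0, b).
    noise = random.expovariate(1 / b) - random.expovariate(1 / b)
    return true_count + noise
```

A smaller epsilon means more noise and stronger privacy; the analyst trades accuracy for a formal privacy guarantee.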

Notes and References

  1. "Advancing Trustworthy AI". National Artificial Intelligence Initiative, US Government. Retrieved 2022-10-24.
  2. "TrustworthyAI". ITU. Archived from the original on 2022-10-24 at https://web.archive.org/web/20221024211101/https://www.itu.int/en/ITU-T/Workshops-and-Seminars/2022/0901/Pages/TrustworthyAI.aspx. Retrieved 2022-10-24.
  3. "'Trustworthy AI' is a framework to help manage unique risk". MIT Technology Review. Retrieved 2024-06-01.
  4. "TrustworthyAI Seminar Series". AI for Good. Retrieved 2022-10-24.
  5. Shulman, R.; Greene, R.; Glynne, P. (2006-03-21). "Does implementation of a computerised, decision-supported intensive insulin protocol achieve tight glycaemic control? A prospective observational study". Critical Care. 10 (1): P256. doi:10.1186/cc4603. ISSN 1364-8535.