Regulatory Systems for Trustworthy AI & AI Governance
In industrial manufacturing, embedded AI computers can support computation and image processing to increase efficiency and accuracy. More broadly, Artificial Intelligence brings value by assisting people with analysis and decision-making in fields such as business, manufacturing, finance, healthcare, and transportation. However, AI inference also carries risks around privacy, security, safety, and system transparency.
In the area of AI governance, many countries and international organizations have promoted the trustworthiness of AI through risk management frameworks and oversight mechanisms. In 2019, the Organisation for Economic Co-operation and Development (OECD) adopted its AI Principles, the first intergovernmental standard on Artificial Intelligence; in the same year, the United States launched the American AI Initiative to promote American leadership in AI. In 2020, the United States government released the Guidance for Regulation of Artificial Intelligence Applications, following Executive Order 13859, to balance AI innovation with regulatory policy. In 2021, the European Commission proposed the Artificial Intelligence Act to regulate and oversee providers and users of AI systems.
To achieve the goal of Trustworthy AI, many countries have been measuring and evaluating its risks and benefits through regulatory systems. The National Institute of Standards and Technology (NIST) examined Trustworthy AI evaluation and published the Artificial Intelligence Risk Management Framework (AI RMF) to help assess whether an AI system is conformant and accurate. The framework also lists the characteristics of trustworthy AI systems: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful bias managed. These characteristics can be used to evaluate AI systems and the solutions they provide.
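To make the idea of checking a system against these characteristics a bit more concrete, here is a minimal Python sketch of a scoring checklist. The characteristic names come from the NIST list above; everything else (the Assessment class, the 0-5 scale, and the threshold) is a hypothetical illustration of how a team might track its own review, not part of the AI RMF itself.

```python
from dataclasses import dataclass

# Trustworthiness characteristics as listed in the NIST AI RMF.
CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair with harmful bias managed",
]


@dataclass
class Assessment:
    """A reviewer's scores for one AI system (illustrative 0-5 scale)."""
    system_name: str
    scores: dict[str, int]

    def gaps(self, threshold: int = 3) -> list[str]:
        # Characteristics scoring below the threshold need remediation.
        return [c for c in CHARACTERISTICS if self.scores.get(c, 0) < threshold]


if __name__ == "__main__":
    # Hypothetical review of an industrial vision-inspection model.
    review = Assessment(
        system_name="vision-inspection-model",
        scores={c: 4 for c in CHARACTERISTICS} | {"privacy-enhanced": 2},
    )
    print("Needs attention:", review.gaps())
    # Prints: Needs attention: ['privacy-enhanced']
```

Even a lightweight checklist like this helps turn the framework's characteristics from abstract principles into items a team can actually track release after release.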
Let’s wrap up this article and see you all again next week. :)