Responsible AI

From Failures to Foundations: Building a Responsible AI Future

The Internet is replete with examples of artificial intelligence (AI) failures, or ‘AI gone wrong’ stories. One is the story of Amazon’s experimental AI-based recruiting tool, which was scrapped for showing bias against women. The hiring tool did not rate candidates in a gender-neutral way and preferred male candidates over female applicants, exposing the hidden bias within the AI-powered system.

The red-hot field of AI offers unlimited opportunities, but there are perils too. While failures are never welcome, in the end they can be essential prerequisites for success. Have our failures stopped us from exploring the world of AI? The answer is no. That is why, with the ever-growing AI and machine learning (ML) landscape, our emphasis on the responsible, trustworthy and ethical use of artificial intelligence has increased too.

Focusing on responsible AI strategies

Artificial intelligence disruption isn’t limited to one sector or industry. It is transforming everything and impacting our everyday lives. Today, AI/ML technologies are being developed and used in healthcare, finance, agriculture, logistics, security and defense, and many other areas to achieve efficiency and to stay ahead of the game.

With the increasing penetration of AI, government departments, companies and firms are introducing responsible AI strategies based on best practices and ethical principles to avoid pitfalls and to make transparent, trustworthy and unbiased decisions.

The carefully crafted responsible AI strategies also help implement these principles into practice while harnessing the true potential of artificial intelligence. Below are some of the key principles and best practices developed and adopted by different departments, companies and corporations in order to leverage artificial intelligence in a responsible and trusted manner.

The Department of Defense’s AI ethical principles, which are aligned with the Department’s AI Strategy, direct the U.S. military to lead in AI ethics and the lawful use of AI systems through responsible, equitable, traceable, reliable and governable AI practices. This reflects the DOD’s commitment to its ethical principles, including the protection of privacy and civil liberties.

Microsoft has developed six key principles, designed to put people first and help empower others. These principles for responsible AI are: accountability, inclusiveness, reliability and safety, fairness, transparency, and privacy and security.

Google’s recommended practices for AI are: using a human-centered design approach, identifying multiple metrics to assess training and monitoring, examining raw data, understanding the limitations of the dataset and model, and testing and continuing to monitor and update the system after deployment.
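The practice of identifying multiple metrics can be made concrete with a simple fairness check. The sketch below is illustrative only: the function names and synthetic decision data are invented for this example, not drawn from any vendor's toolkit. It computes the selection-rate gap between two groups (a demographic parity difference), one common proxy for the kind of hidden bias that derailed the Amazon hiring tool.

```python
# Illustrative sketch of one fairness metric a team might monitor.
# All names and data here are hypothetical, for demonstration only.

def selection_rate(decisions):
    """Fraction of positive decisions (e.g., 'advance to interview')."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    """Absolute gap in selection rates between two groups.

    A value near 0 suggests the model selects both groups at
    similar rates; a large gap flags the model for closer review.
    """
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

# Synthetic screening decisions (1 = selected, 0 = rejected) per group.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate 0.25

gap = demographic_parity_difference(group_a, group_b)
print(f"Selection-rate gap: {gap:.3f}")  # 0.375
```

A single number like this is never conclusive on its own, which is why the guidance stresses *multiple* metrics, examined alongside the raw data and re-checked continuously after deployment.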

The National Institute of Standards and Technology’s (NIST) essential building blocks of AI trustworthiness include accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience) and mitigation of harmful bias.

IBM’s Trust and Transparency Principles state that the purpose of AI is to augment human intelligence, that data and insights belong to their creator, and that new technology, including AI systems, must be transparent and explainable.
