Responsible AI refers to the design, development, deployment, and use of AI systems in a manner that is ethical, safe, transparent, regulatory-compliant, and beneficial to society at large.
It acknowledges that AI can deliver major benefits across many sectors and industries, and for society more broadly, but that it also creates serious risks and potential harms.
Responsible AI aims to embed ethical principles into AI systems and workflows to mitigate those risks and harms while maximizing AI's benefits.
A wide range of organizations, from tech giants such as Microsoft and Google to international bodies such as the OECD and the World Economic Forum, have published principles-based frameworks for responsible AI.
While individual frameworks differ in emphasis and detail, several common themes and requirements for responsible AI can be identified.