What is Responsible AI (RAI) and why is it important?
Transcript (auto-generated)
Jody Nelson at SRES Shorts. I'd like to discuss Responsible AI, also known as RAI. Although there is no universal definition of RAI, many large organizations like Google, Microsoft, IBM, and others publicly publish their Responsible AI commitments on their websites and emphasize its importance. It is important for us to understand all the possible issues, limitations, and unintended consequences of both our AI data and the AI model itself. Much of Responsible AI comes down to organizational culture.
We need to establish organization-wide ethical values and AI principles, and then monitor how those principles are applied in actual practice, generally through some form of audit. When we deal with the cultural aspects of this, we are not monitoring just the AI product itself and its outputs; we are also monitoring the management processes that built those products. This is very important. In doing so, we need humans involved, actual people, and they must have a clear form of accountability. We also need subject matter experts who understand the AI architecture and the organization's strategy for AI.
Additionally, we want some kind of ethics board, some form of review of what's going on, to make sure we meet our principles and are establishing our values correctly. So this is not just an effort for ML coders. Certain themes are common across many organizations and their principles. Generally, we talk about transparency and explainability. These are critical, although we should caution that they can also raise cybersecurity concerns, so we have to take that into consideration as well. Other things to consider: fairness, accountability as mentioned before, and privacy of the user.
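To make the fairness auditing mentioned above a little more concrete, here is a minimal sketch of one common check: measuring the demographic parity difference, i.e., the gap in positive-outcome rates between groups. The function name, data, and group labels are illustrative assumptions, not part of any specific organization's framework.

```python
# Hypothetical sketch of one step in an RAI audit: checking a model's
# outputs for group fairness via demographic parity difference.
# All names and data below are illustrative assumptions.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-outcome rate per group."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Example: hypothetical loan-approval predictions (1 = approved) for two groups
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A result near zero suggests similar outcome rates across groups; a large gap is a signal for the audit or ethics board to investigate further, not a verdict on its own.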