
New artificial intelligence (AI) platforms and tools are emerging every day to help developers, data scientists, and business analysts. However, this rapid development of new technology is also increasing the complexity of AI deployments beyond many organizations' capacity to keep those systems responsible and accountable.
That’s the conclusion of a recent survey of 1,240 executives published by MIT Sloan Management Review and Boston Consulting Group (MIT SMR and BCG), which looked at the progress of responsible AI initiatives and the adoption of both internally and externally built AI tools, which the researchers call “shadow AI”.
Also: Meet the post-AI developer: More creative, more business-focused
The promise of AI comes with consequences, suggest the study’s authors, Elizabeth Renieris (Oxford’s Institute for Ethics in AI), David Kiron (MIT SMR), and Steven Mills (BCG): “For example, generative AI has proven to be unpredictable, posing risks to organizations that are not prepared for its many use cases.”
Many companies “have been caught off guard by the spread of shadow AI use across the enterprise,” say Renieris and her co-authors. In addition, the rapid pace of AI development “makes it difficult to use AI responsibly and puts pressure on responsible AI programs to keep up.”
They warn that the dangers posed by shadow AI continue to grow. For example, companies’ increasing reliance on a burgeoning supply of third-party AI tools, along with the rapid adoption of generative AI (algorithms such as ChatGPT, Dall-E 2, and Midjourney that use training data to generate realistic or seemingly realistic text, images, or audio) is exposing them to new commercial, legal, and reputational risks that are difficult to track.
The researchers discuss the importance of responsible AI, which they define as “a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact.”
Another difficulty comes from the fact that many companies “appear to be scaling back internal resources dedicated to responsible AI as part of a broader trend of industry layoffs,” the researchers warn. “These reductions in responsible AI investment are happening, arguably, when they’re needed most.”
Also: How to use ChatGPT: Everything you need to know
For example, the widespread employee use of the ChatGPT chatbot has caught many organizations by surprise, and may have security implications. The researchers said responsible AI frameworks were not written to “deal with the sudden, unimaginable number of risks introduced by generative AI tools”.
The survey found that 78% of organizations report accessing, purchasing, licensing, or otherwise using third-party AI tools, including commercial APIs, pre-trained models, and data. More than half (53%) rely exclusively on third-party AI tools, with no internally designed or developed AI technologies of their own.
Responsible AI programs “must include both internally built and third-party AI tools,” Renieris and her co-authors urge. “The same ethical principles should apply, no matter where the AI system comes from. In the end, if something goes wrong, it doesn’t matter to the person who is negatively affected whether the tool was built or purchased.”
While the co-authors caution that “there is no silver bullet for mitigating third-party AI risks, or any type of AI risk for that matter,” they urge a multi-pronged approach to ensure responsible AI in today’s wide-open environment.
Also: ChatGPT and the new AI are devastating
Such measures may include the following:
- Assessment of the vendor’s responsible AI practices
- Contractual language mandating adherence to responsible AI principles
- Pre-certification and vendor audits (if available)
- Internal product-level reviews (where a third-party tool is integrated into a product or service)
- Compliance with relevant regulatory requirements or industry standards
- A comprehensive set of policies and procedures, such as guidelines for ethical AI development, risk assessment frameworks, and monitoring and auditing protocols
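To make those criteria concrete, here is a minimal, purely illustrative Python sketch of how an organization might encode such a vendor-review checklist in a machine-readable form. The `VendorAIReview` class, its field names, and the all-items-must-pass approval rule are assumptions for illustration only and are not drawn from the MIT SMR/BCG report.

```python
from dataclasses import dataclass, field


@dataclass
class VendorAIReview:
    """Hypothetical checklist for reviewing a third-party AI tool.

    The fields mirror the multi-pronged measures listed above; the
    structure and the pass/fail rule are illustrative assumptions.
    """
    vendor: str
    tool: str
    responsible_ai_practices_assessed: bool = False   # vendor's responsible AI practices reviewed
    contract_mandates_rai_principles: bool = False    # contractual language in place
    precertified_or_audited: bool = False             # pre-certification / vendor audit (if available)
    product_level_review_done: bool = False           # internal review where the tool is embedded
    regulatory_compliance_confirmed: bool = False     # relevant regulations or industry standards
    policies_and_monitoring_in_place: bool = False    # policies, risk assessment, monitoring protocols
    notes: list = field(default_factory=list)

    def open_items(self) -> list:
        """Return the names of checklist items that are still unmet."""
        checks = {
            "responsible_ai_practices_assessed": self.responsible_ai_practices_assessed,
            "contract_mandates_rai_principles": self.contract_mandates_rai_principles,
            "precertified_or_audited": self.precertified_or_audited,
            "product_level_review_done": self.product_level_review_done,
            "regulatory_compliance_confirmed": self.regulatory_compliance_confirmed,
            "policies_and_monitoring_in_place": self.policies_and_monitoring_in_place,
        }
        return [name for name, done in checks.items() if not done]

    def approved(self) -> bool:
        """A tool is approved only when every checklist item is satisfied."""
        return not self.open_items()


if __name__ == "__main__":
    review = VendorAIReview(vendor="ExampleVendor", tool="text-generation API")
    review.responsible_ai_practices_assessed = True
    review.contract_mandates_rai_principles = True
    print("Approved:", review.approved())
    print("Open items:", review.open_items())
```

In practice each item would carry evidence, owners, and review dates rather than a plain boolean, but even a lightweight record like this makes it easier to see which third-party tools have cleared which checks.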
The specter of legislation and government mandates may make such measures a necessity as more AI systems are introduced, the co-authors warn.