What constitutes an AI risk – and how should the C-suite manage it?


“Potential can be exploited” with the right measures


By Kenneth Araullo

As artificial intelligence (AI) becomes increasingly integrated into business operations, it introduces a complex set of risks that require careful management. These risks range from potential regulatory violations and cybersecurity vulnerabilities to ethical dilemmas and privacy concerns.

Given the significant consequences of mismanaging AI, it is critical for directors and executives to develop comprehensive risk management strategies to effectively mitigate these threats.

Edward Vaughan (pictured above), a management liability associate at Lockton, has highlighted the complex challenges and responsibilities associated with integrating AI into business operations, particularly the potential liability for directors and officers.

“To be prepared for the potential regulatory scrutiny or claims settlement that comes with the introduction of a new technology, it is essential that boards carefully consider the introduction of AI and ensure that sufficient risk mitigation measures are in place,” said Vaughan.

AI significantly increases productivity, streamlines operations and promotes innovation across various sectors. However, Vaughan points out that these benefits come with significant risks, such as potential harm to customers, financial losses and increased regulatory scrutiny.

“Companies’ disclosure of AI use is another potential source of risk. As investor interest in AI grows, companies and their boards may be tempted to overstate the extent of their AI capabilities and investments. This practice, known as ‘AI washing,’ recently led a plaintiff in the U.S. to file a securities class action lawsuit against an AI-enabled software platform company, arguing that investors were misled,” he said.

Furthermore, the regulatory landscape is evolving, as evidenced by laws such as the EU AI Act, which calls for greater transparency in how companies use AI.

“Just as disclosures may overstate AI capabilities, companies may also underestimate their exposure to AI-related disruptions or fail to disclose that their competitors are adopting AI tools more quickly and effectively. Cybersecurity risks or faulty algorithms leading to reputational damage, competitive harm or legal liability are potential consequences of poorly implemented AI,” Vaughan said.

Who is responsible for these risks?

For directors and executives, these evolving challenges underscore the importance of monitoring AI integration and understanding the risks involved. Their responsibilities range from ensuring legal and regulatory compliance to preventing AI from causing competitive or reputational harm.

“Allegations of poor AI governance practices, claims of failure of AI technology, or misrepresentations may be made against directors and officers in the form of a breach of directors’ duties. Such claims could damage a company’s reputation and result in a D&O class action lawsuit,” he said.

Additionally, protecting AI systems from cyber threats and ensuring data privacy are critical given the vulnerabilities associated with digital technologies. Vaughan notes that transparent communication with investors about the role and impact of AI is also critical to managing expectations and avoiding misrepresentations that could lead to legal challenges.

Directors could face negligence claims due to AI-related errors such as discrimination or data breaches, which could lead to significant legal and financial consequences. Misrepresentation claims could also arise if AI-generated reports or disclosures contain inaccuracies.

In addition, directors must ensure that adequate insurance is in place to cover potential AI-related losses, a point highlighted by insurers such as Allianz Commercial, which have specifically warned about the impact of AI on cybersecurity, regulatory risks and misinformation management.

Risk management for AI-related risks

To effectively manage these risks, Vaughan suggests that boards implement comprehensive decision-making protocols for evaluating and adopting new technologies.

“Boards, in consultation with internal and external advisors, may consider establishing an AI ethics committee to advise on the implementation and management of AI tools. This committee may also be able to help monitor new policies and laws related to AI. If a company does not have the in-house expertise to develop, use and maintain AI, this can be done through a third party,” he said.

Ensuring that employees are well trained and equipped to use AI tools responsibly is critical to maintaining operational integrity. Establishing an AI ethics committee can provide valuable guidance on the ethical use of AI, monitor legislative developments, and address concerns related to AI bias and intellectual property.

Finally, Vaughan said that while AI offers significant opportunities for growth and innovation, it also requires a careful approach to governance and risk management.

“As AI continues to evolve, it is critical for companies and their boards to clearly understand the risks associated with this technology. With the appropriate measures, the exciting potential of AI can be exploited and the risk minimized,” said Vaughan.


2024-05-06 19:00:32

www.insurancebusinessmag.com