Risks to Consider Before Integrating AI and ML into Your Business
April 2022 - Hank Clark, Chief Strategy and Technology Officer, CSO Group
Organisations worldwide have made significant investments in artificial intelligence (AI) and machine learning (ML) in recent years.
For the most part, AI and ML have been recognised as crucial investments to support digital transformation for many businesses. Australia's federal government has also earmarked $124.1 million for investment in strengthening Australian leadership in developing and adopting responsible AI, demonstrating the technology's inherent value.
Despite the advantages, when it comes to transforming business operations, AI and ML aren't a quick fix or band-aid solution for every problem, and each comes with potential risks to the business. AI and ML tools can be skewed, compromised, or maliciously used, causing disruption and chaos for businesses, whether the impact is intentional or not. As such, it's essential that business leaders carefully consider the risks associated with these technologies before integrating them into the existing technology stack.
Bad information in, bad information out
One of the biggest risks with AI and ML tools is the quality of the data they rely on. Unclean or poisoned data can impact the output, so it's essential that organisations maintain good governance and best practice around their use of data. However, there are risks with AI and ML that go deeper than data quality.
There are also challenges surrounding the models themselves. Machines do what they're told, so organisations place a significant amount of trust in the assumption that the algorithms used in AI and ML tools have been considered, designed, defined, and coded correctly. However, a lack of transparency around these algorithms and their decision-making creates risk. Organisations need to trust that the coding has been completed with appropriate levels of quality control and with the right outcomes in mind. Unfortunately, AI and ML tools are also subject to the biases of those who create them, which means there can be missing context that leads to incorrect results and, in turn, poor decisions.
Organisations may fall into a pattern of blind trust and overreliance on the outcomes of AI and ML tools, which can be difficult for businesses to recover from. To mitigate this, organisations must invest in good governance, embedded into the design process, to avoid any potentially unintended consequences around biased AI and ML tools.
How to get started
On a fundamental level, there needs to be better alignment within the organisation between data officers and the chief security officer. It's not enough to leave risk management with the technical specialists; it's also critical to have business leaders onside. The IT team is practiced at thinking like an adversary, so IT team members intrinsically understand how data and technologies like AI and ML can be misused. However, business leaders may better understand how their competitors are operating and the operational impact technical disruption would have. As such, it's crucial for the two roles to converge and maintain a strong working partnership.
Digital environments aren't static; the landscape is constantly evolving, and it's essential that organisations tune and optimise their tools on an ongoing basis to keep pace. Taking a proactive approach to managing and sustaining good governance, especially for tools like AI and ML, can better position organisations for future success.
It's also important for organisations to work closely with their service providers and vendors, and to collaborate with the wider ecosystem, to help build better AI and ML tools, especially in environments that are increasing in complexity.
CSO Group provides organisations with effective cybersecurity services, risk management, and protection. For more information or to find out how CSO Group can assist you, please contact the CSO team.