Machine learning is the order of the day when it comes to utilising AI to provide enhanced analytical capability, but what if it can be 'influenced', and not in a good way, by adversaries playing the long game?
As the availability of data to train machine learning models increases, there appears to be no validation of the origin of these data sets, let alone any accreditation to confirm their legitimacy.
Train With Confidence
“If you have put garbage in a computer nothing comes out but garbage, but this garbage, having passed through a very expensive machine, is somehow ennobled and none dare to criticise it” - Rory Bremner
Taking it for granted that our systems are training on accurate data could have catastrophic consequences in years to come. Whilst the technology world rushes in at breakneck speed to deliver the next generation of, say, object recognition or projectile impact fallout, few are checking that the model is being provided with authentic and accredited data. An adversary can potentially infect the data sets available today with deliberately misleading assets so that the target ends up protected rather than detected.
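To make the risk concrete, here is a minimal, purely illustrative sketch of label-flipping data poisoning. Everything in it is hypothetical (synthetic features standing in for imagery, a simple scikit-learn classifier, arbitrary poisoning rates); it is not any particular vendor's method, just a demonstration that quietly relabelling a fraction of 'target present' examples degrades the detector's ability to find the target.

```python
# Illustrative label-flipping poisoning sketch (hypothetical data and model).
# An adversary who can edit a fraction of the training labels can teach a
# detector to miss the very class it is supposed to find.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for "target present" (1) vs "target absent" (0) features.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

def poison_labels(labels, flip_fraction):
    """Flip a fraction of 'target present' labels to 'target absent'."""
    poisoned = labels.copy()
    target_idx = np.flatnonzero(poisoned == 1)
    n_flip = int(flip_fraction * len(target_idx))
    flip_idx = rng.choice(target_idx, size=n_flip, replace=False)
    poisoned[flip_idx] = 0
    return poisoned

for flip_fraction in (0.0, 0.2, 0.4):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, flip_fraction))
    recall = recall_score(y_test, model.predict(X_test))
    print(f"{flip_fraction:.0%} of target labels flipped -> recall {recall:.2f}")
```

The point of the sketch is simply that the model's headline accuracy can look healthy while its recall on the class the adversary cares about quietly collapses, which is exactly the "protected rather than detected" outcome described above.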
Garbage In Garbage Out (GIGO)
A relatively new company, Advai (https://advai.co.uk), is challenging the current acceptance that data sets are merely a tool to train the model. They use in-house developed systems to corrupt the data and test the performance of the algorithm, the aim being to determine how well the algorithm performs its designed task. The result is twofold: it educates those developing AI algorithms about the vulnerability of their algorithm, and it delivers a resulting model that is more resistant to being fooled by nefariously manipulated training data. The Advai blog provides a number of examples where manipulated data sets can have a significant impact on the performance of the AI. These include controlling the stock exchange, the challenges of IBM's 'Doctor' Watson, and one that the defence and intelligence sector may be interested in: fooling an image classifier (https://advai.co.uk/a-simple-black-box-adversarial-ai-attack-on-an-image-classifier/).
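Advai's post describes their attack in their own terms; purely as an illustration of the general idea, and not a reproduction of their method, the sketch below shows what a 'black-box' attack on an image classifier means in practice. The attacker never sees the model's internals, only its predictions, and searches for a small random perturbation that flips the predicted label. The dataset, model and parameters are all stand-ins chosen for brevity.

```python
# Minimal black-box evasion sketch (illustrative only, not Advai's method):
# the attacker can only query model.predict, and searches for a small
# random perturbation that changes the predicted class.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in image classifier; the attacker treats predict() as a black box.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

def black_box_attack(image, predict, epsilon=0.15, max_queries=500):
    """Random-search attack: keep querying until the label flips, or give up."""
    original_label = predict(image[None])[0]
    for _ in range(max_queries):
        noise = rng.uniform(-epsilon, epsilon, size=image.shape)
        candidate = np.clip(image + noise, 0.0, 1.0)
        if predict(candidate[None])[0] != original_label:
            return candidate  # adversarial example found
    return None

adversarial = black_box_attack(X_test[0], model.predict)
print("original prediction:", model.predict(X_test[:1])[0])
if adversarial is not None:
    print("adversarial prediction:", model.predict(adversarial[None])[0])
```

Even this crude random search will often find a perturbation small enough that a human would still read the digit correctly while the classifier does not, which is why testing models against corrupted inputs before deployment matters.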
Microsoft Steps into the Fray
Advai isn't the only company concerned about the threat vulnerabilities of AI and its associated technologies. Microsoft has also seen the damage that targeted attacks against AI algorithms and learning systems could do to trust in the technology; after all, if people don't believe what the machine is telling them, they won't use it. In December 2021 Microsoft released a white paper named 'AI Security Risk Management Framework' (click the image below for the full white paper) with the aim to "empower organizations to reliably audit, track, and improve the security of the AI systems". The document states three main goals:
Provides a comprehensive perspective to AI system security. We looked at each element of the AI system lifecycle in a production setting: from data collection, data processing, to model deployment. We also accounted for AI supply chains, as well as the controls and policies with respect to backup, recovery, and contingency planning related to AI systems.
Outlines machine learning threats and recommendations to abate them. To directly help engineers and security professionals, we enumerated the threat statement at each step of the AI system building process. Next, we provided a set of best practices that overlay and reinforce existing software security practices in the context of securing AI systems.
Enables organizations to conduct risk assessments. The framework provides the ability to gather information about the current state of security of AI systems in an organization, perform gap analysis, and track the progress of the security posture.
Furthermore, Microsoft partnered with MITRE to develop an 'Adversarial ML Threat Matrix', which can be seen below. More recently, however, this has been relaunched as ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) in an interactive format, so users can explore each area in more detail. The interactive version can be accessed by clicking the threat matrix below.
A Regulated Future?
There is currently no firm proposal to regulate the field of AI. Regulation is likely to come at some point, but it's unlikely to be as a result of the problem identified above. That said, governments might start insisting on performance standards for AI-based solutions provided within their procurement frameworks. Regulation in general is more likely to come as a result of pressure from human rights organisations. They have seen the threat posed by the rush for control over private data through AI systems, not only in privacy terms but also in the underperformance of many algorithms creating unfairness on an epic scale. It is something we will need to prepare for, and rightly so, as we are going to rely on AI to make decisions that used to be the preserve of real people, where Vulcan objectivity wasn't always the right path to take.