The nightmare scenario for AI skeptics often plays out much like the fictional Skynet from the Terminator movies: an intelligent machine goes rogue and destroys all of humanity. Despite tremendous developments in machine learning, we're still far away from that scenario, the development of killer robots notwithstanding.
AI offers both tremendous benefits and serious threats to cybersecurity, and studying the current state of affairs provides fascinating insight into the very nature of AI itself.
Proliferation Brings Weaknesses
With every step of development in AI comes an added security threat. Take the widespread use of virtual assistants such as Siri and Alexa. These programs make our lives a lot easier and put a more human face on what is, essentially, self-learning software.
The level of access these programs have in our lives makes them a prime target for nefarious actors. Indeed, in May 2018, the New York Times reported that researchers in the United States and China had been able to command these virtual assistant programs to unlock passwords and dial phone numbers, all without the knowledge of the owners.
It doesn't take a huge leap of imagination to see how this sort of thing could occur at institutions that use AI in customer service operations that collect sensitive data. A hack of such systems could prove catastrophic, and the increasing number of ransomware attacks suggests that most organizations are a step behind when it comes to sniffing out AI-backed cyberattacks.
Financial companies are not the only ones at risk. These days a number of healthcare and manufacturing services companies integrate AI into a variety of their operations, and the implications of an attack on these systems could be just as severe.
The threat of AI in cyberattacks is unique in that the very nature of viruses has changed. What was once a program designed to exploit a vulnerability within a piece of software has become a far more powerful Trojan horse.
These days viruses can either attack your system head-on or embed themselves deep within it and learn it inside out. In the latter case, it could be a simple piece of code embedded within customer experience software, for example, that flies under the radar of most antivirus programs.
Once it has learned the system, this piece of code mutates and launches its attack. The conundrum with these kinds of attacks is that the more the targeted system fights back, the more evolved the attacks become, since the virus is "learning" more about its target's defenses with every exchange.
Indeed, many pieces of malware these days launch "soft" attacks on a system to provoke its defenses into action, and once those defenses are understood, the real attack commences.
This self-evolving, learning mechanism renders almost every antivirus protection out there obsolete. However, it poses just as big an opportunity as it does a risk.
The Opportunities Inherent in Machine Learning
Current security processes involve time-consuming manual intervention and validation. Once a threat is identified, a person needs to review it, classify it, and act on it if appropriate. As you can imagine, this is not much of a defense against an ever-evolving threat.
Machine learning technology has begun to make its mark on cybersecurity within institutions. Firms such as Darktrace and Shape Security boast complex, self-learning algorithms that protect their clients from cyber threats.
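These vendors don't publish their algorithms, but the core idea behind behavior-based defense can be sketched in a few lines: learn a statistical baseline of "normal" activity, then flag anything that deviates sharply from it. The toy detector below is purely illustrative, not any vendor's actual method, and the traffic figures are invented.

```python
from statistics import mean, stdev

class BaselineDetector:
    """Toy behavior-based anomaly detector: learns a baseline of normal
    activity (here, requests per minute) and flags sharp deviations."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold  # deviations beyond this many std-devs are anomalous
        self.baseline = []

    def observe(self, value):
        """Record a normal observation to refine the baseline."""
        self.baseline.append(value)

    def is_anomalous(self, value):
        """Return True if `value` sits far outside the learned baseline."""
        mu = mean(self.baseline)
        sigma = stdev(self.baseline) or 1e-9  # avoid division by zero
        return abs(value - mu) / sigma > self.threshold

# Hypothetical traffic from one host, in requests per minute
detector = BaselineDetector()
for rpm in [98, 103, 101, 97, 105, 99, 102, 100]:
    detector.observe(rpm)

print(detector.is_anomalous(101))  # prints False: typical traffic
print(detector.is_anomalous(900))  # prints True: sudden spike, e.g. data exfiltration
```

Real products model far richer behavior than a single metric, of course, but the principle is the same: the system learns what "normal" looks like for a particular network rather than matching known attack signatures.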
In a case study highlighted on Darktrace's website, the firm details how its Enterprise Immune System technology helped lending analytics firm Ipreo respond to threats early, nipping them in the bud.
One of the biggest benefits highlighted in that case study makes for interesting reading. Comparing the Darktrace technology to existing cyber defense technologies, the Chief Information Security Officer at Ipreo, Chris Ampofo, explains how the new technology simply learns your behavior and then responds and reacts to threats on its own.
In other words, its biggest strength is the very thing that makes the threats it faces so formidable.
While AI continues to revolutionize the cybersecurity space, the fact of the matter is that security remains a step behind the threats that are out there. A major reason for this is the open-source nature of machine learning algorithms.
Any skilled hacker can take a basic piece of AI code and turn it into malware. The financial incentives for doing so are obvious, and the upfront cost of such an attack is minimal.
Developing a strong defense, however, requires a substantial investment of time and money in infrastructure, and currently only large institutions can afford these upfront costs. This leaves a large section of the populace vulnerable to such attacks.
A major challenge for cybersecurity firms is making their software more accessible and reducing the black-box nature of their code. The black box is a problem almost every AI firm faces, and reducing this mystery goes a long way toward getting firms and institutions to accept AI as a solution.
AI presents, much like any other disruptive phenomenon, massive opportunities as well as potential for nefarious use. Classifying machine learning tech as inherently bad is taking an extremely simplistic view.
The correct way to look at it would be to recognize it as a tool, no different from a hammer. One can use it to either pound nails into a wall to hang a beautiful painting or use it to break into someone's home and steal said painting.
Ultimately, its power comes from the use to which it is put.