Security in the age of artificial intelligence

Cyber threats are growing in scale and complexity; nearly three-quarters of respondents to a 2015 ISACA study said they expect their organization to be the target of sophisticated attacks using multiple vectors and methodologies. As attacks become more automated and harder to detect, traditional cybersecurity practices – many built around manual processes – simply can't keep up.

Given that backdrop, it's easy to see cybersecurity as the next big opportunity for machine learning and artificial intelligence (AI). Security analysts don't have the bandwidth to sift through the mountains of security-related data coming from the growing array of connected devices. What's worse, it's easy to spend far too much time chasing down false positives while the real dangers lurk elsewhere.

AI and machine learning can help IT and security professionals identify risks faster and anticipate problems before they occur. Built into a cybersecurity solution, machine learning can parse reams of historical security data, build a picture of a specific attack from its variables and relationships, and "learn" from that knowledge to predict the next one. Drawing on insights from big data, these tools can rapidly distinguish appropriate activity from suspicious activity, raising red flags around the clock when anomalies occur.
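To make the anomaly-detection idea concrete, here is a minimal sketch of how historical event data might be scored. The feature set, the isolation-forest model, and the use of scikit-learn are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: unsupervised anomaly detection over historical event data.
# The four features per event (bytes sent, bytes received, failed-login count,
# hour of day) are hypothetical; real systems use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-in for a store of historical, mostly benign events.
historical_events = rng.normal(loc=[5_000, 20_000, 0.2, 13],
                               scale=[1_500, 6_000, 0.5, 4],
                               size=(10_000, 4))

# Learn what "normal" looks like from the historical data.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(historical_events)

# Score new events: -1 marks an anomaly worth an analyst's attention.
new_events = np.array([
    [5_200, 21_000, 0, 14],   # looks like routine traffic
    [900_000, 150, 30, 3],    # huge outbound transfer, many failed logins, 3 a.m.
])
print(detector.predict(new_events))   # e.g. [ 1 -1]
```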

In addition, once a machine learning algorithm identifies a threat, it can quickly take action to prevent data loss. AI software also adapts more readily to the continuously evolving threat landscape, whereas traditional security mechanisms such as rules or signatures require constant care and feeding from administrators to stay current.
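As a rough illustration of that automated response step, the sketch below wires an anomaly score from the detector above into a containment action. The quarantine and notification hooks are hypothetical stand-ins for whatever firewall, EDR, or orchestration API a real platform would expose, and the threshold is arbitrary.

```python
# Sketch: turning an anomaly score into an automated containment step.
# quarantine_host and notify_analyst are stand-ins for real response APIs.
def quarantine_host(source_ip: str) -> None:
    print(f"[action] isolating {source_ip} from the network")

def notify_analyst(source_ip: str, score: float) -> None:
    print(f"[alert] {source_ip} flagged with anomaly score {score:.3f}")

def handle_event(event_features, source_ip: str, detector, threshold: float = -0.2):
    """Score one event with a fitted detector and act if it looks malicious."""
    score = detector.decision_function([event_features])[0]
    if score < threshold:
        quarantine_host(source_ip)        # contain first to limit data loss
        notify_analyst(source_ip, score)  # then queue the event for human review
    return score

# Example, reusing the detector and the suspicious event from the previous sketch:
handle_event([900_000, 150, 30, 3], "10.0.0.42", detector)
```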

Still early

Dozens of tool vendors are folding AI and machine learning capabilities into their offerings. Yet new research from MIT's Computer Science and Artificial Intelligence Laboratory and PatternEx makes the case that AI and machine learning on their own are not a complete recipe for cybersecurity success. Rather, the research argues that a combination of human experts and machine learning systems, which it calls supervised machine learning, is the optimal defense strategy. This "AI squared" approach is 10 times better at catching threats than machine learning alone and reduces false positives by a factor of five, the researchers conclude.
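The general shape of that human-plus-machine loop can be sketched in a few lines: an unsupervised detector surfaces the most suspicious events, an analyst labels them, and a supervised model is retrained on the growing label set. This is only an illustration of the pattern under toy assumptions, not the researchers' actual AI squared system.

```python
# Sketch of an analyst-in-the-loop detection cycle (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(1)

def daily_events(n=1_000, n_attacks=10):
    """Toy event stream: mostly routine traffic plus a few large exfiltrations."""
    normal = rng.normal([5_000, 20_000], [1_500, 6_000], size=(n - n_attacks, 2))
    attacks = rng.normal([120_000, 500], [20_000, 200], size=(n_attacks, 2))
    return np.vstack([normal, attacks])

def analyst_label(event) -> int:
    """Stand-in for a human analyst; 1 = malicious, 0 = benign."""
    return int(event[0] > 50_000)   # toy rule in place of real judgment

detector = IsolationForest(random_state=0).fit(daily_events())
labeled_X, labeled_y = [], []

for day in range(5):
    events = daily_events()
    scores = detector.decision_function(events)
    for idx in np.argsort(scores)[:20]:        # 20 most suspicious events today
        labeled_X.append(events[idx])          # analyst reviews each one
        labeled_y.append(analyst_label(events[idx]))
    # Retrain the supervised model on every label collected so far.
    classifier = RandomForestClassifier(random_state=0).fit(labeled_X, labeled_y)
```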

While AI and machine learning hold a lot of promise for cybersecurity, deployments are still limited. In a survey by C.A. Walker and HPE, 42% of security professionals said they are not using any kind of big data security analytics solution, and 38% said they still create status reports manually.

AI and machine learning are also likely to affect cybersecurity in another big way. As these automated technologies take on a more prominent role in the data center, offloading manual processes such as event monitoring and provisioning, they could open the door to new types of security vulnerabilities. One recent study found that early adopters of this new generation of intelligent machines harbor concerns about security as they hand off more functions to fully autonomous, self-learning robots.

Many experts believe these issues will subside over time and see a bright future for AI and machine learning as a way to safeguard corporate information assets. The cybersecurity showdown of the future will likely be fought not between humans but between machines.

Posted by Beth Stackpole on 29 Jun
  • advanced persistent threats, artificial intelligence, automation, cybersecurity, HPE, machine learning