Artificial Intelligence has been seen by many as the latest solution to a growing threat: the rise of cyber attacks in recent years. Machine learning and other AI techniques can be embedded within algorithms in virtually any software. Given that today's world largely runs on digital infrastructure, AI seems to be the answer to cybercrime damages projected to cost $6 trillion per year by 2021.
However, if AI can exponentially improve the effectiveness of cybersecurity, it can also make the task even more complex. The problem is that AI can also be used and modified by hackers, who are always eager to evolve and exploit the latest technology available on the market to cause harm.
IBM Chairperson and President Ginni Rometty recently said, “Cybercrime, by definition, is the greatest threat to every profession, every industry, and every company in the world.”
In an attempt to tackle this pressing issue, companies and institutions are now trying to develop proper cybersecurity strategies using AI. As recent figures from Webroot show, AI is used by approximately 87% of US cybersecurity professionals.
Most of these AI applications center on the introduction of Machine Learning capabilities. AI/ML is thus embedded in a whole array of cyber measures known as Intelligent Security solutions: protocols, software, or even raw code added to the IT systems of a company or institution. AI then adds another layer of security on top of traditional safety protocols, one capable of learning from threats, security breaches, and other data collected through its mechanisms. AI, in other words, works through vast amounts of data and learns from it.
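To make the idea of "learning from collected data" concrete, here is a minimal sketch of how a monitoring tool might learn a baseline of normal activity from historical data and then flag outliers. It uses a simple z-score test over hypothetical hourly login-attempt counts (the function names and numbers are illustrative, not from any real product; production systems use far richer models).

```python
# Minimal sketch: learn a baseline of normal activity from past data,
# then flag values that deviate sharply from it (z-score method).
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn the mean and standard deviation of historical counts."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` std-devs above the learned mean."""
    mu, sigma = baseline
    return (value - mu) / sigma > threshold

# Hypothetical hourly login-attempt counts collected over past hours.
history = [42, 38, 51, 45, 40, 47, 44, 39, 48, 43]
baseline = fit_baseline(history)

print(is_anomalous(46, baseline))   # ordinary traffic -> False
print(is_anomalous(500, baseline))  # brute-force-like spike -> True
```

The key point the article makes is captured here: the detector is not hand-coded with a fixed limit; the limit is derived from the data itself, so it adapts as the system observes more traffic.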
However, as stated previously, what works for cybersecurity experts also works for hackers. The vast amount of data on the internet (it is estimated that more than 2.5 quintillion bytes are produced every single day) can be used from the other side of the cybersecurity spectrum. In the same way that AI-powered systems can use this data to help prevent security breaches, hackers can use the same data to crack them.
AI expert Ahmed Banafa explained the process quite accurately in a recent article, “For example, AI can be used to automate the collection of certain information — perhaps relating to a specific organization — which may be sourced from support forums, code repositories, social media platforms and more. Additionally, AI may be able to assist hackers when it comes to cracking passwords by narrowing down the number of probable passwords based on geography, demographics and other such factors.”
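Banafa's password example can be illustrated with a short, purely hypothetical sketch: rather than brute-forcing every possible string, an attacker's tool combines a handful of gathered personal facts (a name, a birth year, a city) with common suffixes to produce a small, targeted guess list. All names and data here are invented for illustration.

```python
# Illustrative sketch: narrowing a password guess list using known facts
# about a target, instead of brute-forcing the full search space.
from itertools import product

def candidate_passwords(facts, suffixes=("", "1", "123", "!")):
    """Combine personal facts with common suffixes into likely guesses."""
    guesses = set()
    for fact, suffix in product(facts, suffixes):
        guesses.add(fact + suffix)
        guesses.add(fact.capitalize() + suffix)
    return guesses

# Hypothetical facts scraped from public profiles.
facts = ["rex", "1990", "boston"]
guesses = candidate_passwords(facts)

print("rex123" in guesses)  # True
print(len(guesses))         # a few dozen guesses, not billions
```

The point is the asymmetry the article describes: even a naive version of this targeting shrinks the search from billions of possibilities to a few dozen, and AI-driven profiling pushes that narrowing much further.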
AI may just be the perfect companion for what has traditionally been called the mastery of deception. Hackers have always tried to overcome antivirus software, firewalls, anti-malware tools, and other security barriers, and to do so they have tried every imaginable way to blend into a system without being recognized. Thanks to Artificial Intelligence, the possibilities multiply enormously.
Hopefully, cybersecurity experts will always stay one step ahead of the hackers.