Earlier this year (2018), the annual Consumer Electronics Show (CES) took place, bringing technology companies from around the world to unveil and showcase their latest and greatest inventions. It comes as no surprise that artificial intelligence (AI) took centre stage.
With the press debating whether it’s the superhero or supervillain of our day, one thing’s for sure: it’s causing a stir. While most rave about AI’s exciting potential, some argue it’s a victory of marketing that has yet to deliver real improvements to the world we live in.
Let’s look at a positive example: the banking industry. Developments across the industry, both individual and collective, are forcing financial institutions away from the traditional banking model towards openness and collaboration, transforming both backend and frontend processes.
New technology has necessitated discussion around how best to protect consumer data, and as a result, this year will see some major shake-ups to data protection regulations. Cybersecurity is essential for any digital network, and AI – or, more specifically, machine learning – will arguably be our best line of defence against hackers. Machine learning is already being used to monitor networks, learning what normal activity looks like so that deviations can be flagged. As Darktrace CEO Nicole Eagan describes it, it’s the only way to defend networks against the ‘unknown unknowns’ – the inside jobs that your anti-virus software won’t find.
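To make that idea concrete, here is a minimal sketch of how anomaly-based monitoring can work: a model is trained only on examples of normal traffic and flags anything that deviates, with no need for signatures of known attacks. The feature set, numbers and library choice (scikit-learn’s IsolationForest) are illustrative assumptions, not a description of any specific vendor’s product.

```python
# Minimal sketch of anomaly-based network monitoring.
# Feature set (bytes sent, duration, port) and the contamination rate
# are illustrative assumptions, not any vendor's actual model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" traffic: one row per connection
# [bytes_sent, duration_seconds, destination_port]
normal_traffic = np.column_stack([
    rng.normal(50_000, 10_000, 1_000),   # typical transfer sizes
    rng.normal(2.0, 0.5, 1_000),         # typical connection durations
    rng.choice([80, 443], 1_000),        # usual web ports
])

# Learn what "normal" looks like; no labelled attack data is needed
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# A connection that doesn't fit the learned pattern:
# a huge transfer over an unusual port
suspicious = np.array([[5_000_000, 120.0, 6667]])
print(model.predict(suspicious))  # -1 means "anomaly": flag for review
```

The key design point is that nothing in this approach depends on having seen the attack before, which is what makes it suited to the ‘unknown unknowns’.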
But it’s a double-edged sword, because this technology isn’t just at the disposal of those ‘doing good’. Likened to ‘what food is to humans’, AI feeds off data, making itself stronger. This means, for example, that it can generate and replicate phishing messages so realistic that targets readily fall for them and download malicious attachments.
Take the Google Play Store as an example, where hackers were recently found hiding malware and porn ads inside gaming apps. The new malware was downloaded more than 3 million times, concealed inside nearly 70 different games that were seemingly intended for kids and teens. Such insidious behaviour is hard to track and, of course, is potentially damaging for any associated brand.
So, what does this prove, and what’s the answer: is AI a facilitator of evil or an impediment to it? Hacking stories show us time after time that no company, however large or small, is beyond the reach of malicious intent.
But when we look at the rise of artificial intelligence, it’s easy to get carried away with a dystopian vision of sentient machines rebelling against humans. In doing so we forget one thing: more often than not, it’s the people behind AI who are driving the wrongdoing. AI’s self-replicating nature, combined with its power and its capacity to grow and scale, means its impact can be colossal, but it’s humans who set it off on this destructive path in the first place.
In my opinion, there’s a pressing need to find a moral compass to direct the intelligent machines with which we’re increasingly sharing our lives. The good news is that the responsibility is in our hands. We have the capacity to teach these machines behaviour and reactions based on both context and training.
My honest answer? AI itself is not the problem; the problem is the lack of understanding of its true potential and of the standards it needs to be governed by.
And to that point, its ‘singularity’ – its capacity for independent thought – is both its pro and its con.
So, take a moment to think: what impact will artificial intelligence have on your business? And most importantly, with all of the hype surrounding AI, how can you ensure your company is at the forefront of these discussions?
And while thinking of the answer, remember one thing: AI can emulate the ‘better angels of our nature’; we just need to show it how.