Cyber Security

Changing Cyber Threat Landscape: Part 2

Pete Burnap, Professor of Data Science & Cybersecurity at Cardiff University, is a leading researcher in the field of cyber security analytics – the fusion of artificial intelligence, cybersecurity and risk management. In Part 2 of our analysis of the changing cyber threat landscape, we spoke to Pete to discover more about his team's research into the psychological triggers that cyber criminals look to exploit.
Pete Burnap
Professor of Data Science & Cybersecurity at Cardiff University

To begin, I’m going to give an overview of some of the work we’re doing at Cardiff University School of Computer Science & Informatics to better understand the psychological effects of cybercrime. Our research looks at the human factors behind cybercrime, including susceptibility, motivations and social dynamics. The types of questions we’re looking to answer include: How can using emotive language affect the likelihood of an attack being successful? How do people become susceptible to clicking on malicious links or giving their information away in phishing attacks? Why do people commit cyber attacks in the first place?

Our Approach

When conducting our research, we’ve adopted a process that looks at the problem through three different lenses.

  1. Risk Assessment and Modelling

The first lens is risk assessment and modelling. With this we're really focused on understanding how different parts of interconnected systems depend on each other. By mapping this out we can then start to uncover the cascading effects that go through various systems and subsystems. This gives us an insight into how networks might become infected if an adjacent system were compromised. Flipping it around, we can then start to model better approaches to system management.
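
As a toy illustration of the dependency-mapping idea (a hedged sketch, not our actual tooling), systems can be represented as nodes in a directed graph and dependencies as edges, so the cascading reach of a single compromise can be read straight off the graph. The system names and edges below are invented.

```python
# Hypothetical sketch: model system dependencies as a directed graph and
# trace which downstream systems a single compromise could cascade into.
# System names and edges are invented for illustration.
import networkx as nx

# An edge A -> B means "B depends on A", so a compromise of A can reach B.
deps = nx.DiGraph()
deps.add_edges_from([
    ("auth-server", "web-portal"),
    ("auth-server", "vpn-gateway"),
    ("web-portal", "payments"),
    ("file-share", "hr-database"),
])

compromised = "auth-server"
at_risk = nx.descendants(deps, compromised)  # every system reachable downstream
print(f"A compromise of {compromised} could cascade to: {sorted(at_risk)}")
```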

  2. Risk Communication, Governance, Decision Making

This feeds into the second lens which looks at risk communication, governance and decision making. This is where we do a lot of work with our psychologists and our criminologists, and even the political side of international relations. If we can quantify and qualify risk, how do we then communicate it across our organisations? This then becomes about creating secure cultures and enabling people to behave in a secure way.

  3. Data Analytics

The third pillar brings in data analytics. Here we use machine learning to identify patterns across datasets, highlighting any outliers or elements in the data that might lead us to a conclusion. We use data and analytics from both software- and human-driven behaviours within networks to understand what is happening on our systems.
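
To give a flavour of the outlier-detection step (a minimal sketch rather than a production pipeline), an off-the-shelf method such as scikit-learn's IsolationForest can flag observations whose behavioural features look nothing like the rest. The telemetry features here are invented for illustration.

```python
# Minimal sketch of outlier detection over host/network telemetry.
# The three feature columns (bytes sent, process count, failed logins)
# are invented; real pipelines use far richer behavioural features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[5000, 40, 1], scale=[500, 5, 1], size=(500, 3))
unusual = np.array([[90000, 300, 40]])      # one wildly atypical observation
X = np.vstack([normal, unusual])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)                   # -1 = outlier, 1 = inlier
print("flagged rows:", np.where(labels == -1)[0])
```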

This loops all the way back around to risk assessment: by tying subsystem-level monitoring into risk assessment, we connect what is happening on our systems to how it will affect us and where failures might cascade.

The Power of Emotions in Cybercrime

Twitter as a case study

In one recent study, we looked at Twitter posts to uncover whether there was any evidence that emotional content influences why people click on malicious links. Twitter has emerged as one of the most popular platforms for real-time updates on entertainment and current affairs. However, due to its 280-character restriction and automatic shortening of URLs, it is continuously targeted by cybercriminals carrying out drive-by download attacks, where a user’s system is infected by merely visiting a Web page.

As part of the study, we gathered around 4 million tweets from seven different sporting events over three years and identified those tweets that were used to carry out drive-by download attacks. Cybercriminals propagate malware across the platform by exploiting popular events and trending hashtags: a malicious URL is obscured within an enticing tweet and used as clickbait to lure users to a malicious Web page.

What we discovered

Our results show that both social and content factors are statistically significant for the size and survival of information flows for both malicious and benign tweets. In the malicious data sample, negative and fear-related words were used on average 0.39 times per tweet. In contrast, the benign data sample included only 0.09 such words per tweet on average. Sentiments of anger followed a similar pattern, with the malicious dataset recording 0.21 words per tweet versus 0.07 in the benign dataset.

While we can't really suggest any causality or prior determination by the attackers to use these words, we can clearly see from the sample of tweets that they are heavily laden with emotive words to try and entice people to click on them. And it's not just negative words either. Expressions of joy, for example, had a mean frequency of 0.71 per malicious tweet versus 0.17 per benign tweet. So emotive aspects, both negative and positive, are clearly being used.
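
For intuition, per-tweet frequencies like those quoted above can be produced by counting hits against an emotion lexicon and averaging over each sample. The tiny lexicon and tweets below are invented; a real study would use an established emotion lexicon over millions of tweets.

```python
# Illustrative only: mean emotion-word frequency per tweet for two samples.
# The lexicon and tweets are invented; a real study would use an
# established emotion lexicon over millions of tweets.
FEAR_WORDS = {"warning", "danger", "urgent", "risk"}

def mean_frequency(tweets, lexicon):
    """Average number of lexicon hits per tweet across a sample."""
    hits = [sum(word in lexicon for word in t.lower().split()) for t in tweets]
    return sum(hits) / len(tweets)

malicious = [
    "urgent warning your account is at risk click here",
    "danger final notice act now to stay safe",
]
benign = [
    "great match today congratulations to the team",
    "highlights from the opening ceremony are up",
]

print("malicious:", mean_frequency(malicious, FEAR_WORDS))  # 2.0
print("benign:   ", mean_frequency(benign, FEAR_WORDS))     # 0.0
```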

Retweeting

Once we had an understanding of how much emotion was present in tweets containing malicious and non-malicious links, we built statistical models to find explanatory factors for why tweets were retweeted.

We found some statistically significant results: tweets containing sentiments of fear were 2.4 times more likely to be retweeted, sadness 1.32 times more likely and anticipation 1.3 times more. Irrespective of whether it is deliberate, attackers typically adopt emotive language to entice their victims, and it clearly works.
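
Figures such as "2.4 times more likely to be retweeted" typically come from a regression model whose exponentiated coefficients are read as odds ratios. The sketch below assumes a logistic model over per-tweet emotion counts, fitted to synthetic data; it is not the exact model from our study.

```python
# Hedged sketch: fit a logistic model of "was retweeted" on per-tweet
# emotion counts and read exponentiated coefficients as odds ratios.
# Data here are synthetic; the coefficients are not our study's results.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
fear = rng.poisson(0.3, n)          # fear-word count per tweet
sadness = rng.poisson(0.2, n)       # sadness-word count per tweet
X = sm.add_constant(np.column_stack([fear, sadness]))

true_logit = -1.0 + 0.9 * fear + 0.3 * sadness
retweeted = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

result = sm.Logit(retweeted, X).fit(disp=0)
odds_ratios = np.exp(result.params)  # [intercept, fear, sadness]
print(odds_ratios)                   # exp(0.9) ~ 2.46 for fear, for example
```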

Going back to what Damon said in Part 1, this could well be an entry point that cybercriminals are looking to exploit to hack into your organisation's systems. That means that it is more important than ever to make your end users and colleagues aware that this technique exists and empower them to be more vigilant.

How Artificial Intelligence can help

Lastly, I’d like to discuss a project we recently did in collaboration with Airbus to better understand how we can detect, in real time, things like ransomware running on our systems.

First we ran a set of malicious samples (malware) and a set of benign samples (typical things you’d find on a computer: Word documents, PDFs, etc.), and we collected the system behaviours while each was running. Over time we built neural network models using those system behaviours, such that we were able to train the neural network to recognise, at each second, what was malicious and what was benign.
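
To sketch the general shape of such a model (this is an assumption for illustration, not the project's exact architecture), a recurrent network can read one vector of machine-activity features per second and emit a running probability that the sample is malicious. The window length and feature count below are hypothetical.

```python
# Rough sketch, not the project's exact architecture: a recurrent network
# that reads one vector of machine-activity features per second (CPU load,
# memory use, packets sent/received, ...) and outputs P(malicious) at
# every timestep. Window length and feature count are hypothetical.
import tensorflow as tf

SECONDS, FEATURES = 20, 8

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SECONDS, FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),  # keep per-second outputs
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(malicious) each second
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would pair behaviour sequences with per-second labels:
# model.fit(behaviour_sequences, per_second_labels, epochs=..., batch_size=...)
```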

You can see in the diagram below how much CPU, network traffic, etc. each sample uses as it passes through the system. Every second, our neural network gives a percentage indicating how sure the model is that something is malicious.

[Diagram: Matilda Rhode, PhD student at Cardiff University]

Our results were highly accurate, with the neural network able to tell us whether a sample was malicious or benign at around a 94% success rate within four seconds. From there we were able to hook the underlying processes and terminate them before the malware took over the system.
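
The response end of that pipeline can be sketched with psutil: once the model's confidence for a running process crosses a threshold, the process is terminated. The score_process function below is a hypothetical stand-in for the neural network's output; this is illustrative, not the Airbus implementation.

```python
# Illustrative response step, not the Airbus implementation: once the
# classifier's confidence that a process is malicious crosses a threshold,
# terminate it. score_process is a hypothetical stand-in for the model.
import psutil

THRESHOLD = 0.94

def score_process(proc):
    """Hypothetical: return the model's P(malicious) for this process."""
    return 0.0  # placeholder; a real version would query the neural network

def sweep_and_terminate():
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            if score_process(proc) >= THRESHOLD:
                proc.kill()  # stop the malware before it takes over the system
                print(f"terminated {proc.info['name']} (pid {proc.info['pid']})")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # process exited or is protected; skip it

sweep_and_terminate()
```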

What this means is that, in theory, if we could get this kind of neural network incorporated into a piece of next-gen antivirus software, then as an organisation you could achieve constant, automated monitoring of your systems. As Damon mentioned, being that proactive can only be a good thing for your organisation.

Join us for the next Hodge Tech Series virtual roundtable on 7th October where we will be discussing the "new" way of working.

Supported By
Hodge Tech Series
