Terrorism recruitment on the dark web
What kind of person is susceptible to being recruited into joining a terrorist group? How do internet users become involved in online markets selling weapons, drugs and other contraband?
These are some of the questions security expert Michael Osborne is working on as part of the European project PROTON to understand criminal behaviour online. The study models, as a whole, the attributes involved in terrorist recruitment and cybercrime, so that researchers can learn what increases or decreases criminal behaviour on a large scale and inform a policy response. This means researching both popular social media platforms and what’s known as the dark web, a part of the internet accessible only via specialised software and known for its anonymity.
“It’s about modelling the journey into recruitment. And then you can start playing with the attributes to see, if we do this or that, whether it is less likely that someone becomes radicalised or not,” Osborne said.
Key to this model is establishing a generic profile of the type of internet user who may be susceptible to becoming involved with terrorism or online crime, and understanding their behaviour through a series of “signposts.” Osborne likened the approach to a technique used by police and investigators that involves building a profile of a criminal suspect. Investigators create a picture of the behaviours someone perpetrating a specific crime is likely to exhibit, helping them narrow down possible suspects.
While there is a wealth of research on these patterns of criminality in the physical world, Osborne said not enough is known about how terrorist recruitment and other cybercrime works online.
How would you define the dark web or dark internet for anyone not familiar with it?
There’s lots of names for it, but it’s essentially those areas that Google can’t reach. So they are private forums, websites essentially where you need some kind of registration (…), areas that have nothing to do with anything bad but they’re not searchable. Then there are other areas that are very strictly anonymous because that’s what the participants want there, either for good purposes or for protection; for example, if you want to make a statement about a government and you want to be able to do that without being traceable. However, it’s also used for cybercrime.
What type of crime is the project looking at specifically, are we talking about online marketplaces for illegal goods in the model of the Silk Road before it was closed by police in 2013?
Exactly. There has been some research that shows that the people who get into criminal activity [online] had a previous life in cyberspace before they got into criminality. Maybe they were just a hacker, but they had a trace. Before they became nefarious they had a benign presence on the web or the dark web. And the idea was to try and understand if one could create profiles in the sense of the sort of person that might be likely to migrate from a normal use of the web to something like running a dark web market, for example.
This is not at the individual level, this is one of the important things we have to say. We’re looking at collective attributes to create what we call a digital persona, which is a set of characteristics for a typical sort of person, not an individual. The aim is that, if we could create such personas, one could maybe identify certain people who would be more likely to evolve into doing something bad.
So the idea is to establish a profile that can be used later for prevention outreach?
It’s more about understanding with the tools we have, so we’re able to create these personas that could be used by whoever for prevention. Because there wasn’t really much material or research in this space, in the cyber domain, the question was if we could get enough data together.
One of the things we did is look at Twitter. And if you’re trying to create a cohort of users from Twitter, which is a humongous amount of data, you have to filter it in order to get to a body of like people, and then see if there are any personality traits for that group.
A lot of it is about dealing with a mass of data, which is very different from the physical world because there the problem is you only have a handful of candidates for collecting data. It’s the opposite in cyberspace where it’s really about filtering out everything except for the subgroup you actually want.
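The filtering Osborne describes, reducing a humongous stream down to a cohort of like users, can be sketched roughly as follows. This is only an illustration: the record fields, keyword list and threshold are hypothetical assumptions, not the project's actual criteria.

```python
# Illustrative sketch of cohort filtering over a large stream of posts.
# Field names and keywords are invented for the example.

def filter_cohort(posts, keywords, min_matches=2):
    """Keep only posts whose text matches enough of the given keywords."""
    cohort = []
    for post in posts:
        text = post["text"].lower()
        hits = sum(1 for kw in keywords if kw in text)
        if hits >= min_matches:  # discard everything except the subgroup
            cohort.append(post)
    return cohort

posts = [
    {"user": "u1", "text": "Great recipe for pasta tonight"},
    {"user": "u2", "text": "topic-a discussion and topic-b links here"},
    {"user": "u3", "text": "only topic-a mentioned once"},
]
cohort = filter_cohort(posts, ["topic-a", "topic-b"])
print([p["user"] for p in cohort])  # prints ['u2']
```

In practice the same idea would run over millions of posts, with the surviving subgroup then analysed collectively for shared personality traits.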
What are “signposts,” and how do they form a part of understanding online crime?
An example is ISIS. They often try to recruit people by getting them into a chatroom where they can have a one-to-one conversation. Essentially they want to build a relationship with that person.
It would start with someone who is interested, in the sense that [the person] feels some kind of injustice at what is going on. [This person] might look at Twitter, where there will be links to all sorts of YouTube videos, then YouTube videos would be liked by people. (…) And we were looking at ways to see what we could link between these different names, between identifiers.
This kind of model building relies on creating profiles of users. Did you come across any ethical problems?
There was no personal profiling. No profiling of individuals. This is why we developed this concept of a persona. It’s essentially how you group like individuals and do a collective analysis on their texts. On Twitter, for example, we anonymised the data so we broke the link to the original. We get data in large streams, so we immediately broke the link to whichever pseudonym was being used on the Twitter account.
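Breaking the link to a pseudonym at ingestion time, while still allowing a collective analysis per persona, could look something like the following sketch. The one-way hashing with a discarded salt is an assumption about how such anonymisation might be done, not a description of the project's actual pipeline; the field names are likewise invented.

```python
# Sketch: replacing account pseudonyms with one-way identifiers as a
# stream is ingested, so stored records cannot be traced back to the
# original handle. The salt lives only in this process and is never
# persisted, making the mapping irreversible afterwards.

import hashlib
import secrets

SALT = secrets.token_bytes(16)  # discarded when the run ends

def anonymise(record):
    token = hashlib.sha256(SALT + record["handle"].encode()).hexdigest()[:12]
    return {"id": token, "text": record["text"]}  # original handle dropped

stream = [
    {"handle": "@alice", "text": "first post"},
    {"handle": "@alice", "text": "second post"},
    {"handle": "@bob", "text": "another post"},
]
anon = [anonymise(r) for r in stream]

# Within one run, the same pseudonym maps to the same token, so texts
# can still be grouped into a persona without knowing who wrote them.
assert anon[0]["id"] == anon[1]["id"]
assert anon[0]["id"] != anon[2]["id"]
```

Because the salt is random per run, the tokens cannot be recomputed later to re-identify an account, which matches the idea of immediately breaking the link to the pseudonym.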
By Sam Edwards
Photo credits: Gery Wibowo
19 March 2019