Exclusive Talk with Toby Lewis, Global Head of Threat Analysis at Darktrace

Toby Lewis, Head of Threat Analysis
Prior to joining Darktrace, Toby spent 15 years working on the UK Government’s response to cyber security threats, including as the UK National Cyber Security Centre’s Deputy Technical Director for Incident Management. He has specialist expertise in Security Operations, having worked across Cyber Threat Intelligence, Incident Management, and Threat Hunting. He has presented at several high-profile events, including the NCSC’s flagship conference, CyberUK, the SANS CyberThreat conference, and the Cheltenham Science Festival. He was a lead contributor to the first CyberFirst Girls Competition, championing greater gender diversity in STEM and cyber security. Toby is a Certified Information Systems Security Professional (CISSP) and holds a Master’s in Engineering from the University of Bristol.
Q1: Please tell us about your role at Darktrace. What made you excited to join the Darktrace team? 

Toby: My role here at Darktrace is Global Head of Threat Analysis. My day-to-day job involves overseeing the 100 or so cybersecurity analysts we have spread from New Zealand to Singapore, the UK, and most major time zones in the US. My main focus is how we use the Darktrace platform to work with our customers: how do we ensure that our customers get the most out of our cybersecurity expertise and support when using AI to secure their networks?

The other half of my role at Darktrace is subject matter expertise. This involves talking to reporters like yourself, or to our customers who want to hear more about what Darktrace can do to help them from a cybersecurity perspective, and discussing the context of current events. That part of my role was born out of a nearly 20-year career in cybersecurity. I first started in government and was one of the founding members of the National Cyber Security Centre here in the UK. It was a natural progression to continue my career at Darktrace.

Let’s get back to the original question of what excited me about joining them. Over the last 15+ years, I’ve worked in threat intelligence, incident response, incident management, and anything to do with security operations. A lot of that work was very reactive. We had to wait for somebody to become compromised, and we would then spend time understanding what was going on. What did the attackers do? How did they get there? From an attacker’s perspective, we could garner all this great threat intelligence, and we could then share that threat intelligence with whomever we thought needed protecting. But there always had to be a sacrificial lamb. There always had to be somebody who had to get hit, somebody who had to be compromised first so you could learn from their misfortune.

One of the things that really excited me about Darktrace was the idea that it’s actually not fed by threat intelligence or knowledge of what attackers have done in the past. Using AI to learn the defender’s environment, Darktrace protects against anything that doesn’t look like the defender, rather than against what someone thinks an attacker looks like. It is a powerful way of detecting things that have never been seen before, which was exciting for me.

Q2: What are some of the biggest challenges companies face in terms of securing their organizations in 2022? How does Darktrace play a role? 

Toby: There have been several competing things happening simultaneously. On the one hand, you have the increasing use of SaaS and the cloud. On the other hand, we have got this big thing called Covid. Even as people return to the office, I don’t think we will lose hybrid work or working from home. 

Networks are no longer constrained to a tight perimeter that firewalls can secure. Your data is in the cloud, on some SaaS provider, or on a third-party website. Fundamentally, you’re allowing your users to log in from home or over the internet. Across all these scenarios, your users can now access your data from wherever they are in the world. The trade-off is that if users can access your data from anywhere, so can an attacker. So, it becomes a question of how you would defend against that.

How do you change your cybersecurity posture from a very traditional, barbed-wire-fence methodology focused on defending the physical perimeter to one where anybody with internet access can have a go at penetrating the network? The big thing that we have seen is the power of credentials. Under the old model, the perimeter plus a username and password would have sufficed. Now, with widespread internet access, it’s no longer enough. An attacker can take advantage of the fact that people use weak passwords, that they reuse passwords, and that those passwords get compromised and leaked online. When users reuse the same password across multiple sites and it gets leaked, many other accounts can fall prey and be impacted.

From a Darktrace perspective, recognizing that credentials have become a powerful tool in an attacker’s arsenal, we need to start thinking about how to defend the network. When somebody logs on with a username and password, how do you know they are who they say they are? You have mechanisms like multi-factor authentication (MFA), but MFA isn’t a silver bullet. It’s not a case of “you have MFA, and therefore all your security worries are over.” We know companies that deploy MFA solutions still get targeted. We know there are weaknesses in some forms of MFA, such as SMS-based MFA, so we know it can’t be a silver bullet.

Using something like Darktrace’s Self-Learning AI helps us understand users’ behaviors so that when somebody does log on, we can determine whether that’s expected behavior based on how we have seen them operate before. Then, when they gain access to the environment, begin to move around laterally, and access services, all of those data points provide a point of comparison with what we know that user has done in the past. That allows us to detect those unusual events without firing on a known bad IP address or a known string of text from a malware beacon.
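
To make the idea concrete, here is a minimal sketch of behavioral baselining for log-on events, assuming hypothetical features such as source country, device, and a coarse time-of-day bucket. It illustrates the general technique of comparing new activity against a learned per-user profile, not Darktrace’s actual models.

```python
# Illustrative sketch only: a toy per-user behavioral baseline for log-on events.
# Feature names and thresholds are hypothetical assumptions.
from collections import Counter
from dataclasses import dataclass


@dataclass
class LoginEvent:
    user: str
    source_country: str
    device: str
    hour: int  # 0-23


class UserBaseline:
    """Accumulates a simple frequency profile of past log-ons per user."""

    def __init__(self):
        self.history: dict[str, Counter] = {}

    def observe(self, event: LoginEvent) -> None:
        key = (event.source_country, event.device, event.hour // 6)  # 6-hour bucket
        self.history.setdefault(event.user, Counter())[key] += 1

    def rarity(self, event: LoginEvent) -> float:
        """Return 0.0 (routine) .. 1.0 (never seen before for this user)."""
        counts = self.history.get(event.user)
        if not counts:
            return 1.0  # no history at all: maximally unusual
        key = (event.source_country, event.device, event.hour // 6)
        return 1.0 - counts[key] / sum(counts.values())


baseline = UserBaseline()
baseline.observe(LoginEvent("alice", "GB", "laptop-01", 9))
baseline.observe(LoginEvent("alice", "GB", "laptop-01", 10))
# A log-on from an unfamiliar country, host, and hour scores close to 1.0.
print(baseline.rarity(LoginEvent("alice", "RU", "unknown-host", 3)))
```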

Q3: Can you comment on some of the cyber security breaches that took place in 2022? Ex: NVIDIA and Samsung 

Toby: The NVIDIA breach was interesting because when it first struck, it was maybe a day or two after the Russian invasion of Ukraine, and I think everyone was wondering, is this the retaliation? Is this the cyberwar that everyone has been predicting? NVIDIA is a strange place for a cyberwar to start, but was this what we expected to see? And then it transpired that it probably wasn’t anything to do with Russia at all. 

Ransomware is the type of incident I have spent the last four or five years focused on, more than any other. When we first saw it, it felt like yet another ransomware attack. But as the incident evolved and more information came out, it was interesting to note that maybe this wasn’t a ransomware attack. Maybe the motivation wasn’t purely about getting financial information or a financial advantage through a ransom payment. We saw threats of attackers publishing stolen source code online. We noticed strange demands: “If you do not meet these demands, we will publish your source code.” We saw demands around things like removing the hash-rate limiter that restricts crypto mining on NVIDIA GPUs. The attackers demanded that if the company wasn’t willing to do that, it should at least open source its drivers and software so that they could do it themselves.

It became one of the first attacks I have seen where it was not about trying to get a direct financial return, and not about trying to have an ideological impact from a hacktivist perspective, but about getting a company to change its business practices. That said, there’s probably some financial gain further down the line in crypto mining using NVIDIA GPUs.

Q4: Tell us about Darktrace’s self-learning AI. How does Darktrace use self-learning AI to stop cyber disruption? 

Toby: Darktrace’s approach is very different from other cybersecurity companies. Our focus is not on learning about the attacker and the methods they might use but on learning about the defender and building an understanding of normal behaviors within that organization. Self-Learning AI is constantly evolving and learns ‘on the job.’ 

We learn about our customers, including how users interact with their devices, how devices interact with each other, and what technologies different users use, for example. On a more technical level, we’re connecting data points based on packets hitting our sensors or on log events collected through an API integration. Over time, we learn behaviors and build a unique data set for each customer’s environment – understanding what is normal and what isn’t. From there, we can enforce that normal. If there is something anomalous or malicious, we can easily identify those behaviors and notify security teams in real time.
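
As a rough illustration of what “learning normal and flagging deviations” can look like at the network layer, the sketch below incrementally records which peers and ports each device talks to and flags connections outside that learned pattern. The device names and fields are hypothetical, and the logic is a deliberate simplification of the approach described, not Darktrace’s implementation.

```python
# Hedged sketch: incrementally learn which internal peers and ports each device
# normally talks to, then flag connections that fall outside that pattern.
from collections import defaultdict


class ConnectionBaseline:
    def __init__(self):
        # device -> set of (peer, port) pairs seen while learning
        self.peers = defaultdict(set)

    def learn(self, device: str, peer: str, port: int) -> None:
        self.peers[device].add((peer, port))

    def is_anomalous(self, device: str, peer: str, port: int) -> bool:
        return (peer, port) not in self.peers[device]


baseline = ConnectionBaseline()
# Learning phase: events arriving from network sensors or API integrations
for device, peer, port in [("ws-042", "fileserver", 445), ("ws-042", "mail", 993)]:
    baseline.learn(device, peer, port)

# Later: a workstation suddenly speaking RDP to a domain controller stands out.
print(baseline.is_anomalous("ws-042", "dc-01", 3389))  # True -> raise for review
```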

Once something has been flagged as suspicious, we can then start applying some degree of cybersecurity context on top. We determine that this is unusual, but also that this looks like an admin account and that it appears to be interacting with your domain controller. That cybersecurity context tells us whether an anomalous event is more worrying than just a strange, random event in your environment.
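
A toy example of layering context on top of an anomaly might look like the following, where an unusual event is promoted when a privileged account touches a critical asset. The account lists, weights, and scoring are illustrative assumptions only, not Darktrace logic.

```python
# Hedged sketch of adding security context to a raw anomaly score.
PRIVILEGED_ACCOUNTS = {"admin", "svc-backup"}   # hypothetical examples
CRITICAL_ASSETS = {"dc-01", "dc-02"}            # e.g. domain controllers


def prioritise(anomaly_score: float, account: str, target: str) -> float:
    """Boost an anomaly's priority when privileged accounts touch critical assets."""
    score = anomaly_score
    if account in PRIVILEGED_ACCOUNTS:
        score *= 1.5
    if target in CRITICAL_ASSETS:
        score *= 2.0
    return min(score, 1.0)


# An unusual event from an admin account against a domain controller is
# promoted well above an equally unusual but low-consequence event.
print(prioritise(0.4, "admin", "dc-01"))     # 1.0
print(prioritise(0.4, "jsmith", "printer"))  # 0.4
```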

The key thing here is to focus not on the attackers but on the defenders. We build a very tight profile of what we understand about each of our customers, because if there’s something alien to the environment – if an attacker is trying to get in, or even an insider is trying to move around and access data they normally wouldn’t – all of that stands out from the normal behavior profile. Even if we didn’t know they were mounting their attack from a known bad IP address, the behavior stands out compared to the other users in that environment. It’s certainly enough for us to believe that the activity is worth investigating.

Because self-learning AI has such a deep understanding of environments and normal behaviors, it can autonomously respond when something deviates from that normal. Darktrace’s response capability, Antigena, can quarantine devices until the human team can respond. 
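
The autonomous-response idea can be sketched in a few lines: when a device’s anomaly score crosses a threshold, its new connections are blocked until an analyst reviews it. This is a simplified illustration of the concept with made-up names and thresholds, not how Antigena actually enforces its actions.

```python
# Hedged sketch of autonomous response: quarantine on a high anomaly score.
QUARANTINE_THRESHOLD = 0.9  # hypothetical cut-off
quarantined: set[str] = set()


def handle_event(device: str, anomaly_score: float) -> None:
    # Called for every scored event; quarantines without waiting for a human.
    if anomaly_score >= QUARANTINE_THRESHOLD and device not in quarantined:
        quarantined.add(device)
        print(f"{device} quarantined pending analyst review")


def allow_connection(device: str) -> bool:
    # Enforcement point consulted for every new connection attempt.
    return device not in quarantined


handle_event("ws-042", 0.95)
print(allow_connection("ws-042"))  # False until an analyst releases it
```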

Q5: How does Darktrace’s solution contrast to other AI approaches? What makes Darktrace different from its competitors? 

Toby: There are two key differentiators to highlight when answering this question. The first comes back to the idea that we often throw around the word artificial intelligence (AI), but there isn’t just one way of doing it. When we look across organizations implementing AI into their technologies, it’s often an add-on – it’s not something at the core of what they do. But Darktrace has had AI at its core since its founding in 2013. 

There is a difference between supervised and unsupervised learning, and between self-learning and pre-trained AI models. If you’re looking at a pre-trained AI model, you rely entirely on the training data and the information fed into the model before it is deployed into a customer environment. Is that truly reflective of all the cyber threats that exist? Does it fully encompass ransomware, cybercrime, nation-states’ sophisticated hackers, or the Anonymous groups currently targeting Russia? What happens if attackers change their tradecraft so radically that previous models no longer match the activity we’d expect to see?

From Darktrace’s perspective, we recognize that attackers are incredibly diverse, broad, and too many to count. Trying to build up a set of models based on attackers is an impossible task to perfect. That’s why we focus on the defenders instead. That’s the big difference with self-learning AI: it learns the customer environment to differentiate between normal and anomalous behaviors, whereas pre-trained AI cannot evolve to detect unknown or never-before-seen threats.

The other aspect that makes us unique is our ability not just to detect and alert on activity but also to respond. We can apply cybersecurity context and take direct, targeted action with Darktrace’s Antigena autonomous response technology when we see suspicious or unusual activity. Responding autonomously is important because we know that ransomware actors are more likely to attack when an organization’s defenses are at their weakest – such as at 3 am on a Sunday when security teams are asleep. From a defender’s perspective, this means security teams don’t have to triage every alert or run a 24/7 security operations center. Antigena is already operating in the background, containing the attack as it’s happening and giving human teams time to wake up, respond, and understand what’s going on before a full eviction. Again, that approach is unique across the cybersecurity industry.

Q6: What is Darktrace Cyber AI Research Center? What are some of the most innovative research and patents to come out of Darktrace’s research? 

Toby: A year ago, I joined an organization that now boasts around 60 patents. It’s an organization where R&D is at the core of what it’s doing. We have invested heavily in how we do research and development. We have a group of researchers, predominantly based in Cambridge, that is genuinely at the center of AI research for its use in cybersecurity.

Until now, this research went straight into the core of our product to make our products better, but we were the only ones who really benefited from it, and our customers only indirectly.

The idea we’re starting to think about is how we can share and publish this information. That’s ultimately how the research center originated. We are taking the current R&D work that we’re already doing to support our products and customers and asking: how can we share some of this with the broader community? How can we give a little bit of an insight into the work we’re doing? 

A part of that is about answering the questions of AI critics. Critics will say AI is just a magic box doing things they don’t understand. But opening up our Research Center lets us show you how it works. Let us show you the research that underpins what we are doing. And again, that research has been ongoing since our founding in 2013. As we move into 2022 and beyond, we have been looking more and more at how we can use AI in different parts of the cybersecurity operations domain.

It is not just about the detection or the response that I have already alluded to; it is also about starting to look at that Prevent model. What can we do to warn our customers about where the weak points are in their environments? How can we reassure our customers that we have complete visibility of their environment? Is there an area here that is a specific concern well before an attack occurs? Can we get our customers to start shoring up their defenses based on how we are using AI to identify weak points, hotspots, and bottlenecks in their environment?

Q7: What are some of the use cases of Darktrace’s self-learning AI solution? Tell us about Darktrace’s latest partnerships in the tech industry. 

Toby: When Darktrace first started doing this work, it was geared toward the network level – packets, bits, and bytes flying around the network – and being able to profile that sort of activity and understand normal. As time has gone on, we have found more diverse ways of interacting and bringing in data from our customers. That data doesn’t just exist at a network layer; it exists in the cloud, SaaS, endpoints, and more. 

Some of the big pushes we have made in the last few years (partly accelerated as our customers reacted to Covid) have been focused on how we integrate with other products. How do we bring their data to us? How do we bring their data to our AI so that we are greater than the sum of the individual parts?

I have worked with customers whose technology stacks are incredibly diverse, with many competing vendors. But, generally speaking, those tools operate in isolated silos. You have one product that might tell you one thing, then you copy and paste the results from there and put them in another tool. Then you allow it to churn, and you copy and paste it again. You’re bouncing from one tool to the next. From my perspective, one of the great things to see when I talk to our customers and our development teams is that we have successfully integrated with other major tech vendors. Ultimately, we want customers not to treat security as a siloed model.

One of the biggest partnerships we launched last year was with Microsoft. It’s been great to get a good, rich understanding of how Microsoft developed its telemetry, such as the security audit logs from Microsoft 365 and Defender. Now, we can bring all those data points into Darktrace, apply our AI on top, and provide an additional layer of assurance for customers using a Microsoft-first technology stack. It is a powerful way of augmenting an existing security stack.
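
One way to picture that kind of integration is a small pipeline that normalizes SaaS audit events into the same shape as network events so a single behavioral model can score both. The fetch function and field names below are hypothetical placeholders, not a real Microsoft or Darktrace API.

```python
# Hedged sketch of fusing SaaS telemetry with network telemetry.
# fetch_m365_audit_events() stands in for a real API integration and is
# hypothetical; the field names are illustrative only.

def fetch_m365_audit_events() -> list[dict]:
    # Placeholder: in practice this would pull sign-in/audit logs via an API.
    return [
        {"user": "alice", "operation": "FileDownloaded", "client_ip": "203.0.113.9"},
        {"user": "alice", "operation": "MailboxLogin", "client_ip": "198.51.100.4"},
    ]


def to_common_event(raw: dict) -> tuple[str, str, str]:
    """Normalize SaaS telemetry into the same shape as network events."""
    return raw["user"], raw["operation"], raw["client_ip"]


for raw in fetch_m365_audit_events():
    user, operation, source = to_common_event(raw)
    # Each normalized event would then be scored against that user's learned
    # profile, exactly like an on-premise network event.
    print(user, operation, source)
```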

Q8: What do you foresee as the biggest trends in cybersecurity in 2022? 

Toby: If you had asked me this three weeks ago, several things would have come to mind. The first was ransomware. I think ransomware has probably been one of the most significant topics of the last two years. There is this idea of attackers targeting a network for purely financial gain: locking out a network, stealing data, and encrypting it.

One of those predictions is that companies choose to pay a ransom not necessarily because their files are encrypted but because the business can’t continue normal operations. For all the hassle an attacker must go through to encrypt files, maybe they don’t actually need to encrypt them to have the same impact on a business.

We are starting to see ransomware spread to other parts of the network estate and to authentication services – things like Active Directory servers, for example – such that attackers are not actually encrypting the data, but they are stopping your network from running. When people can’t run their companies, they are losing millions of dollars a day. Suddenly, paying a ransom of $10 million as a one-off way to get your Active Directory controller back online doesn’t seem as bad.

Another trend we have been following is the rise of the insider threat – this idea of the Great Resignation. People are now free to make life changes they previously held back on due to COVID lockdowns. Do they opt for a career change or move to a new job? Is this the opportunity to keep working from home as their de facto way of working? Or is their company mandating a return to the office?

We are seeing this almighty churn start, with staff moving from organization to organization. Not only are they potentially a risk to their employer by walking out with sensitive data in their possession, but what does the process look like when they actually leave? How is that company locking down access to its environment so that nobody can come in after that employee has left, use an account that was left active, log on, and gain access to the network?

Finally, we have the Russia-Ukraine conflict. Many assumed this would be the first major military conflict where cyber was a critical factor in deciding elements of the battlespace. Arguably, we have not seen that evolve as we thought. But does it still change the cybersecurity landscape? Has the invasion of Ukraine brought Western entities, and even criminal entities, together to fight a common foe? Will we see a truce on the horizon? I don’t know. But it certainly means there is a lot more uncertainty due to the global disruption.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
