Frontline of defence or Achilles’ heel: the critical role of IT and security personnel in organisational security
Executive summary
System administrators and security analysts play a critical role in delivering organisational cyber security and resilience.
However, these roles are often under-resourced and overloaded with repetitive tasks. This is a dangerous combination that creates vulnerability to common cyber attacks based on human error.
Automation is only part of the solution – investing in new technology without appropriately resourcing teams risks exacerbating these vulnerabilities.
The frontline of security
System administrators and security analysts play a critical role in delivering organisational cyber security and resilience. These are the people who patch vulnerable systems, configure access and security controls around services, and triage alerts from automated monitoring and detection systems.
Often this work will be conducted by contracted third parties, working for managed service providers (MSPs), managed security services providers (MSSPs), or in external Security Operations Centres (SOCs). In other cases these roles will be conducted in-house within an organisation.
These people should be at the core of an organisation’s cyber security and resilience. However, in practice, working conditions and poor organisational spending choices can often turn these teams from the frontline of defence into an organisation’s Achilles’ heel.
The prospect of automating more of this work, including through the use of AI, will be attractive to many organisations. However, if this comes at the expense of investment in the people doing this work then it risks exacerbating vulnerabilities in an organisation’s security posture.
Admins and analysts
The people who work in these roles are generally well educated and interested in the field of cyber security, but the day-to-day experience of these roles is often challenging. Analysts and administrators are often overwhelmed with repetitive work, battling with outdated software, and with little or no budget for improvements.
These teams are a high-priority target for threat actors seeking users with access and high levels of privilege. These are people that organisations should be protecting because of the privileges they hold on systems, but too often organisations create an environment that turns these teams from a defensive asset into a vulnerability.
People working under heightened cognitive load for long periods, and without a sense of job satisfaction, are more prone to errors such as misconfiguring systems. They are also more susceptible to phishing and other forms of social engineering, with greater consequences for the organisation given the higher level of privilege they hold on the organisational network.
MSPs are especially enticing for threat actors since they serve as a gateway to numerous client organisations, enabling attackers to cause far greater disruption than by targeting a single entity. The use of remote management and monitoring tools by MSPs also creates a convenient point of entry for attackers.
This pattern was exemplified in January 2024 when Black Hunt, a ransomware group, attacked a large MSP in Paraguay, gaining entry via unsecured remote desktop protocols. Over 330 of the provider’s servers were encrypted, and the web services of more than 300 client companies were disrupted.
A fundamental issue in the cyber security industry
The ‘boring’ but fundamental risks associated with human error are consistently under-emphasised in the cyber security industry’s work. Instead, the industry focuses on novel threats. For example, the rapid rise of artificial intelligence (AI) has raised concerns about AI-powered attacks, spanning from the potential of bad actor GPTs to AI-enhanced malware.
This fascination with novel threats is natural; it is rooted in human psychology, which draws us to new and interesting themes. The discovery of a new zero-day exploit is reported in the industry press in much the same way as the release of a flagship product from a top tech company.
Similarly, the industry does not cover threat actors equally but tends to focus on a relatively narrow group of high-profile threat actors associated with states historically viewed as the US's adversaries. This focus has been demonstrated in academic research showing that cyber threat intelligence reporting prioritises high-end threats targeting high-profile victims.
Unsurprisingly then, the cyber security industry offers products and services that are marketed as offering technological solutions to high-end, novel threats. The claim that a product uses ‘good’ AI to defeat ‘bad’ AI attacks is a prime example of this dynamic. However, in the market for lemons that is the cyber security industry, determining the effectiveness of these products is extremely difficult.
In practice, as Ian Levy, the former technical director of the UK's National Cyber Security Centre, has argued, many of these products function exactly like medieval amulets: the user believes they are efficacious precisely because they do not understand them. Given the choice between investing more in an existing team and buying a new 'amulet', companies tend to choose the latter because it is easier and psychologically comforting.
The artificial administrator is no panacea
Automation, including through the application of artificial intelligence, has the potential to reduce the cognitive burden associated with these roles, reducing the likelihood of error in simple, repetitive tasks. It raises the prospect of humans being freed up to do more consequential and interesting work.
Yet alongside those potential benefits, there are also likely to be developments that reinforce negative dynamics in the sector. Maintaining and operating these systems will require additional work. In some cases this will leave humans with more repetitive work to carry out, while the machine does the interesting work of cyber security.
Given the cyber security industry’s history, it is likely that AI security products will be aimed at high-end, novel threats – the ones that feature in marketing material – but fail to address longstanding ‘boring’ vulnerabilities around human error. Far from reducing the workload of human staff, it may in the long run increase it as these teams are tasked with ripping out the ‘digital asbestos’ of poorly designed and integrated AI systems.
The use of AI may also create new categories of work. Like humans, AIs are subject to bias, vary in their trustworthiness, and can be subverted by external threat actors. This creates the prospect of AI insider threat. As companies begin to offer AI-based solutions for monitoring AI insider threat, this risks creating an entirely new set of systems that human admins and analysts are required to implement and monitor.
Conclusion
While it can be tempting to look to technology that promises quick fixes, in the long run automation without investment in people will exacerbate risk. Before focusing on the high-end, sophisticated threats that are less likely to affect your organisation, it is better to strengthen the organisation's security fundamentals. Technology is unlikely to be the solution here; it will require investment in the people and teams that deliver this work on a daily basis.
Likewise, it is crucial to give people responsibility and the ability to act. As automation becomes more widespread, a growing proportion of the work of administrators and analysts will centre on making decisions based on evidence presented by the machine. For this work to be engaging and valuable – and not simply another repetitive, underappreciated task – these teams need the resources and the support from within the organisation to make meaningful decisions. The alternative risks making these roles ever more repetitive and unengaging, leading to more errors and elevated security risk.
What you should do
Spend less money on amulets, and more on people.
Shiny amulets are good for box ticking, but less good for driving organisational resilience. A strong security posture comes from well-resourced teams with the right mix of people.
Focus more on basic cyber hygiene.
Executives often fixate on novel, high-end threats rather than addressing the persistent sources of vulnerability that can be exploited by actors with limited technical ability.
Treat errors as a learning experience.
People will make errors, particularly when they are overworked and underappreciated. A culture in which errors are admitted and rectified, rather than punished, will ultimately be more resilient and secure.
Tyburn St Raphael is a security boutique. Our experts come from UK government, military, and academic backgrounds and have decades of experience in delivering intelligence-led investigations and solutions. We provide businesses – including one of the world's largest private equity houses – with training designed to develop best security practices through impactful exercises. We also support entities through digital crises, including ransomware, fraud, and online threats. We can be contacted via info@tyburn-str.com.