Elisa AI Cyber Outlook 2024 – Overview of the impact of AI from the cybersecurity perspective

Reading time: 7 min

In a very short time, impressive technological leaps in AI (artificial intelligence) have significantly impacted the operations and operating environments of a wide variety of organisations.

As with any major change, people have mixed feelings about AI. Developments in technology, especially on the scale wrought by AI, offer enormous opportunities for developing the way things are done.

On the other hand, big change is also scary – especially when we’re talking about technologies where it’s very hard for most people to understand how they work. This uncertainty drags down the pace of operational development.

In this blog post, I discuss AI from a cybersecurity perspective. My aim is to reduce this uncertainty and to accelerate the pace at which the technology is harnessed to enable even more productive operations.

Four perspectives on AI

AI is not a new phenomenon, but the recent explosion in its capabilities and popularity can be explained by three factors [1]:

1) Increased computing power

2) Growth in the amount and quality of data, and in the interconnections between devices

3) Advanced algorithms

AI systems can be roughly divided into two categories: “predictive AI” (PredAI) systems and “generative AI” (GenAI) systems. GenAI solutions have been getting a lot of media attention due to the popularity of tools like ChatGPT.

According to research by Gartner [2], when it comes to GenAI, the senior management of organisations are particularly concerned about challenges related to privacy, risks related to misuse, and fears related to unemployment. Good planning and increasing knowledge are potential solutions to the first two concerns.

I look at AI and cybersecurity from four perspectives: AI as a tool, as a product, as a weapon, and as a shield for cyber defence. For each perspective, I describe the kinds of threats that the use of AI technologies brings and how organisations should proactively prepare for them.

AI as a tool

The “AI as a tool” perspective concerns the use of AI technologies as part of everyday life. The tools that employees use might be customised ones acquired from a supplier, or they could be free, browser-based tools, such as ChatGPT.

Whatever the solution, the fact is that the organisation is relying on a solution produced by a third party in order to benefit its own operations.

Benefits

Examples of the benefits of using AI as a tool include:

  • Better decisions thanks to the ability to process huge amounts of data and compile relevant information for decision-makers from the data
  • Time savings when AI tools help with daily chores, for example content production
  • Better and more consistent end results and cost savings through AI-automated processes


Threats

Threats related to the use of AI tools fall into two categories from a cybersecurity point of view. The first is the loss of confidentiality of information given to AI systems.

AI tools can be fed or “prompted” with information, or they can be given access to the organisation’s datasets. According to an analysis by Cyberhaven [3], 11% of the data that employees enter into ChatGPT is classified as confidential. For this reason, many companies have banned their employees from using ChatGPT.

Information entered into or given to AI tools may end up being freely used by the supplier of the tool or its subcontractors. Suppliers of AI tools are also a very interesting target for criminals, both for the tools’ source code and for the information fed into them.

Another threat is people placing too much trust in AI tools and their output, resulting in bad decisions or the use of inaccurate content. There have been a number of examples of general-purpose, free tools being relied on even in important matters.

In one incident in the USA, ChatGPT was used to gather sources for legal cases, and in Australia, Google’s Bard tool was used when the Australian parliament investigated the practices of large consulting firms. What these cases have in common is that people relied on the output of “smart” tools without checking the sources.

This resulted in incorrect conclusions being drawn when the AI tools produced information that was false or fabricated.

Solutions

These threats are not new, but AI has highlighted their potential impact. The solutions to these problems are not new either, but they have become more important for organisations’ ability to function. Organisations need formal processes for systematically evaluating new acquisitions from the point of view of information security risks.

AI tools emphasise how carefully organisations need to check, for example, the wording of contracts and agreements – especially how and where the solution provider may use the information given to the tool. Organisations should also pay special attention to how the tool works and how reliable the information it produces is.

Employees should be trained on the effects of misinformation and disinformation and on the importance of analysing sources critically.

In addition, the organisation should determine which tools employees can use in their work tasks and provide instructions for using those tools to avoid the worst mistakes.
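
One simple technical guardrail that can back up such instructions is to redact obviously confidential fragments from prompts before they leave the organisation. The Python sketch below is a minimal, hypothetical example: the patterns and the “Project Aurora” codename are invented, and a real deployment would rely on a proper data-loss-prevention classifier rather than a handful of regular expressions.

```python
import re

# Toy patterns only: a real deployment would use a proper DLP
# classifier tuned to the organisation's own data types.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}(?: ?[A-Z0-9]{2,4}){3,8}\b"),
}
BLOCKLIST = {"project aurora"}  # hypothetical internal codename


def redact(prompt: str) -> str:
    """Mask obvious confidential fragments before a prompt leaves the organisation."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    for term in BLOCKLIST:
        prompt = re.sub(re.escape(term), "[INTERNAL REDACTED]", prompt, flags=re.IGNORECASE)
    return prompt


print(redact("Summarise Project Aurora for jane.doe@example.com, IBAN FI21 1234 5600 0007 85."))
```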

AI as a product

The difference between the “AI as a product” perspective and the previous one is that, in this case, organisations are creating the AI solutions at least partly themselves. They are, therefore, responsible for the development and maintenance of the tool, often with the assistance of a partner.

According to Statista [4], the AI market will reach approximately USD 180 billion by the end of 2024 and will grow by 20–30% annually.

So, it’s no wonder that so many technology companies are trying to get on, or stay on, the crest of this huge wave. The development of AI solutions has many of the same problems as developing “normal” software, but it also has unique challenges.

Threats

As with the previous perspective, threats from AI as a product can be roughly divided into two categories [5]. The first relates to the confidentiality of information. For an AI product to work as desired, it needs huge amounts of high-quality training data.

When a lot of high-quality data is collected, the resulting dataset is likely to be valuable from many points of view.

Compiling and processing data is laborious and very likely subject to strict legal obligations. If the data is leaked, whether by accident or through criminal action, the leak will cause many kinds of damage.

Another threat is a loss of integrity in the AI model itself or in the training data. At worst, this can lead to the AI product being compromised and outputting incorrect, false or biased results.

For example, an Air Canada chatbot incorrectly told a passenger they could get a refund, against company policy, and the American technology and real estate company Zillow acquired a record number of properties based on incorrect valuations.

In the Zillow case in particular, their overconfidence in their own solution led to significant financial losses and 20% of the company’s employees losing their jobs.

Solutions

To address the threats described above, organisations developing AI solutions should focus especially on training their software developers. The processes and methods of secure software development must also take better account of the threats and vulnerabilities typical of AI systems.

Over the last couple of years, a lot of high-quality material on the topic has been produced by e.g. OWASP, NIST and MITRE. In addition, organisations should invest in supply chain risk management, especially when creating solutions from components that another party is responsible for developing and maintaining.
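
One concrete integrity control is sketched below, under stated assumptions: the organisation maintains a manifest of expected SHA-256 digests for its training datasets and third-party model artefacts, and the pipeline refuses to run if any file has drifted. The file names and manifest format are hypothetical; this is a minimal sketch, not a complete supply chain control.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so that large training sets stay memory-friendly."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_manifest(manifest_path: Path) -> bool:
    """Compare every artefact against its pinned digest before training or deployment."""
    manifest = json.loads(manifest_path.read_text())  # {"file name": "expected sha256", ...}
    ok = True
    for name, expected in manifest.items():
        if sha256_of(Path(name)) != expected:
            print(f"Integrity failure: {name} has changed since the manifest was created")
            ok = False
    return ok


# Hypothetical usage: stop the pipeline if any dataset or model file has drifted.
# assert verify_manifest(Path("training-manifest.json"))
```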

AI as a weapon

This perspective examines the use of AI technology as part of the criminal’s toolbox. AI solutions can significantly enhance operations in organisations that operate legally and strive for good, but the same is unfortunately also true for criminals.

According to the UK’s National Cyber Security Centre [6], AI will not revolutionise the activities of criminals in the next couple of years. Instead, advanced technologies are improving existing tactics, techniques and procedures – especially social engineering attacks.

Threats

Currently, it appears that criminals are benefitting from the opportunities brought by AI in very similar ways to legitimate organisations. AI helps criminals to create even better software more quickly – especially malware. AI is also helping criminals at the beginning of their careers to rapidly develop their skills and abilities.

A telling example is a case in which a person managed to create malware using ChatGPT alone, without writing a single line of code himself – and no malware-detection software was able to identify the result as malicious. AI applications can also find vulnerable targets more quickly and automate at least part of an attack.

AI has also made social engineering attacks more sophisticated, for example through more fluent use of the victim’s native language and through “deepfakes”. Perhaps the most famous (alleged) example of a successful deepfake attack occurred in Hong Kong.

A finance employee took part in what they thought was a legitimate video call to discuss a money transfer of around USD 25 million, but criminals had faked the images and voices of the meeting participants.

The employee knew all of the other participants in person, and because the video call and the people in it seemed so real, the employee finally agreed to complete the money transfer.

Solutions

Managing these threats requires organisations to have mature threat exposure management, as well as the ability to manage vulnerabilities and incidents. More organisations are also likely to pay increasing attention to penetration testing and “purple teaming”.

Since the pool of experts in these areas is quite limited, organisations need to plan how they can develop these competences internally in the next few years.

The best way to prevent attacks based on social engineering and deepfakes is to increase awareness among employees. With training and active communication, their ability to recognise even sophisticated scams can be significantly improved.

AI as a shield for cyber defence

The final perspective relates to how organisations can use AI in their own cyber defence. In a study by Deloitte [7], 69% of respondents considered it necessary to use AI in cyber defence in the future.

Gartner estimates [8] that less than a third of investments related to cyber defence and AI will prove to be profitable in the next few years. In the same assessment, Gartner suggests that organisations will be able to significantly reduce the amount of training needed for novice cyber defence experts, especially with the support of GenAI.

In the near future, AI is expected to be especially useful for detecting and responding to threats, making it possible to automate and simplify processes related to identity and access management [9]. This would have a significant impact on preventing cyber incidents. Just as with criminals, the benefits of AI in producing and testing higher-quality code will help organisations with their cybersecurity efforts.
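
To make the detection use case concrete, here is a deliberately simple, hypothetical sketch of the underlying idea: establish a baseline of normal behaviour and flag large deviations. The numbers are invented, and real PredAI detection engines use far richer models and many more signals than a single z-score.

```python
from statistics import mean, stdev

# Invented hourly counts of failed logins for one account.
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]
latest_hour = 42

mu, sigma = mean(baseline), stdev(baseline)
z_score = (latest_hour - mu) / sigma

# Anything far beyond the normal spread deserves a closer look.
if z_score > 3:
    print(f"ALERT: {latest_hour} failed logins (z-score {z_score:.1f}), possible brute-force attempt")
```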

AI is expected to very quickly bring support to experts working in operational information security tasks and in software development [10].

In particular, GenAI’s ability to present complex technical issues in simpler terms helps people react to events and alarms in the IT environment more quickly and more successfully.
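
As an illustration of that ability, the hypothetical sketch below asks a GenAI model to turn a raw security alert into a plain-language summary with a suggested first response step. It assumes the openai Python SDK (version 1.x) and an OPENAI_API_KEY in the environment; the alert data and model name are placeholders, and, as discussed earlier, only sanitised data should ever be sent to an external service.

```python
import json

from openai import OpenAI  # assumes the openai 1.x SDK and an OPENAI_API_KEY in the environment

# An invented raw alert, roughly as it might arrive from a SIEM.
alert = {
    "rule": "T1110-brute-force",
    "src_ip": "203.0.113.7",
    "account": "svc-backup",
    "failed_logins": 42,
    "window_minutes": 10,
}

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your organisation has approved
    messages=[
        {
            "role": "system",
            "content": "Explain this security alert in two plain-language sentences "
                       "and suggest one sensible first response step.",
        },
        {"role": "user", "content": json.dumps(alert)},
    ],
)
print(response.choices[0].message.content)
```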

Article in Finnish: Elisa AI Cyber Outlook 2024 – Katsaus tekoälyn vaikutuksiin kyberturvallisuuden näkökulmasta

Sources:

1. UK Department for Science, Innovation & Technology: Capabilities and risks from frontier AI: A discussion paper on the need for further research into AI risk

2. Gartner: Data Interactive: Leaders’ Top Generative AI Concerns by Industry

3. Cyberhaven: 11% of data employees paste into ChatGPT is confidential

4. Statista: Artificial intelligence worldwide market size

5. Gartner: Generative AI Adoption: Top Security Threats, Risks and Mitigations & NIST: Trustworthy and Responsible AI: Adversarial Machine Learning – A Taxonomy and Terminology of Attacks and Mitigations

6. UK National Cyber Security Centre: The near-term impact of AI on the cyber threat

7. Deloitte: Securing the future: AI in cybersecurity

8. Gartner: Predicts 2024: AI & Cybersecurity — Turning Disruption Into an Opportunity

9. Gartner: Identity and Access Intelligence Innovation With Generative AI

10. Gartner: 4 Ways Generative AI Will Impact CISOs and Their Teams

Written by

Teemu Mäkelä
Chief Information Security Officer

The author is Elisa’s Chief Information Security Officer, responsible for cybersecurity as a whole. Mäkelä has worked in the information security and telecommunications field for almost 20 years and has experience both from large international ICT companies and from consulting in the field. In autumn 2020, Teemu Mäkelä received the Information Security Manager of the Year (Vuoden Tietoturvapäällikkö®) award.