Even the most skeptical have realized it by now: there is no way around artificial intelligence. Whether it is chatbots in customer service, predictive maintenance in industry, fraud detection in finance, credit risk assessment, voice assistants or smart home applications: the list of AI applications is almost endless, and with technological advances and the increasing availability of data, new areas of application are constantly opening up. Here, the same applies to IT security as to other areas: AI is both a curse and a blessing. It helps to make systems more secure, but it is also increasingly being used in cyber attacks. It is time to focus on defensive tactics that stand up to new, AI-based attack methods.

AI in corporate use: a classification

  • According to Next Move Strategy Consulting, the market for artificial intelligence will – unsurprisingly – be characterized by strong growth over the next ten years: The current value of almost 100 billion US dollars is expected to increase twenty-fold to almost two trillion US dollars by 2030.
  • According to LearnBonds, sales of AI software will rise to over 126 billion US dollars by 2025, compared to 22.6 billion US dollars in 2020. One in five employees will have to hand over some of their tasks to AI.
  • A McKinsey analysis has also found that AI technologies have the potential to increase global economic output by an average of 1.2 percent per year by 2030.
  • According to the ifo Institute, 13.3% of companies in Germany currently use AI and 9.2% are planning to use it. A further 36.7% of the companies surveyed are discussing possible application scenarios.
  • The most common use cases for AI in companies include automating business processes, analyzing data for decision-making and improving product quality and performance.
  • At the same time, however, many fear the negative consequences of the AI wave: almost two thirds of Germans are concerned that the use of AI could lead to job losses. According to YouGov, a total of 45% of Germans are skeptical about the use of artificial intelligence.

Another negative aspect that can go hand in hand with the use of AI, alongside its many positive achievements, is the rise of increasingly dangerous cyber attacks. In the past, launching an attack often required a high level of IT expertise as well as considerable time and effort; today, with the help of AI, even non-experts can become hackers with just a few clicks. Companies and authorities are called upon to face up to this development.

Advantages of AI for Cybersecurity

  1. Improved threat analysis
  2. Optimized identification of attack precursors
  3. Improved access control and password practices
  4. Minimization and prioritization of risks
  5. Automated threat detection
  6. Improved efficiency and effectiveness of employees

Disadvantages of AI for Cybersecurity

  1. Challenges regarding reliability and accuracy
  2. Concerns about data protection and security
  3. Lack of transparency
  4. Distortion of training data and algorithms

How is AI used by cyber criminals?

Social engineers use AI to craft more precise phishing campaigns and deepfakes. Hackers use AI-supported password guessing and CAPTCHA cracking to gain unauthorized access to sensitive data. Today's attackers move so efficiently and adopt new methods so quickly that companies often struggle to automate controls and install security patches fast enough to keep up. What is needed is a continuous threat management program that detects the biggest threats and actively prioritizes them. AI underpins many new attack methods and tactics, which are also increasingly automated, so hackers now operate more broadly and at greater scale than ever before.

These AI-supported attack methods are increasingly coming into focus:

  • Phishing and Social Engineering: AI is used to create personalized and convincing phishing emails. These can entice employees to disclose sensitive information or open malicious links.
  • Adversarial Attacks: Criminals can use AI to generate specially manipulated data that causes AI systems, such as image recognition tools or security mechanisms, to make incorrect or unexpected decisions (a toy illustration follows after this list).
  • Automated Attacks: AI-driven bots can automatically detect vulnerabilities in systems, exploit them and carry out attacks – without any human intervention.
  • Detection of security vulnerabilities: AI can be used to analyze large amounts of data and identify potential security vulnerabilities in systems or networks, which are then exploited for attacks.
  • Obfuscation of malware: AI helps to develop malware that is difficult to detect because it adapts its characteristics to the environment or changes on its own to circumvent conventional security mechanisms.
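To make the adversarial-attack point above more concrete, here is a minimal, self-contained sketch: a toy "benign vs. malicious" classifier is trained on synthetic data, and a single sample is nudged against the model's weights until the prediction flips. The two-feature data, the perturbation budget and the FGSM-style sign step are illustrative assumptions, not a description of any real product or attack.

```python
# Toy adversarial example: flip a classifier's decision with a small,
# targeted perturbation (FGSM-style sign step). All values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two synthetic clusters standing in for "benign" (0) and "malicious" (1) samples
X = np.vstack([rng.normal(-1.0, 0.5, (200, 2)), rng.normal(1.0, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X, y)

x = np.array([[1.2, 1.0]])            # a clearly "malicious" sample
w = clf.coef_[0]                      # weight vector of the linear model
eps = 1.5                             # assumed perturbation budget
x_adv = x - eps * np.sign(w)          # step against the weights to lower the score

print("original :", clf.predict(x)[0], f"p(malicious)={clf.predict_proba(x)[0, 1]:.2f}")
print("perturbed:", clf.predict(x_adv)[0], f"p(malicious)={clf.predict_proba(x_adv)[0, 1]:.2f}")
```

Real adversarial attacks follow the same principle, but against far higher-dimensional inputs such as images, malware binaries or network telemetry.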

The major advantage: companies can use artificial intelligence for their own purposes and beat hackers at their own game, because AI is just as suitable for defending systems and data. With the help of innovative AI tools, many attacks can be detected early and countermeasures can be taken automatically to minimize the impact. Examples include the aforementioned improved access controls, threat analyses and prioritization of risks.

How can companies protect themselves against AI-supported attacks?

A single solution or firewall is not enough: anyone who really wants to protect their systems, data and employees from AI-supported attacks in the long term needs a combination of technical solutions, training and proactive security strategies. First, the workforce must be made aware of the risks of AI-supported attacks in dedicated security awareness training; employees should be able to recognize suspicious activities and react appropriately. In addition, clear security guidelines and procedures must be implemented for dealing with AI technologies and potential attacks, including guidelines for accessing sensitive data, using AI tools and handling suspicious activities. Furthermore, network traffic, system activities and other indicators of potential attacks must be continuously monitored and analyzed in order to detect and prevent suspicious activities at an early stage.

Technical security measures are another aspect: these include tools for detecting and defending against attacks, such as intrusion detection systems (IDS), intrusion prevention systems (IPS), firewalls, antivirus software and endpoint security solutions. Regular software and operating system updates help to close security gaps and minimize potential points of attack. Attack simulations and penetration tests also help to test the company's resilience to AI-supported attacks and identify vulnerabilities. Another part of a holistic security strategy is undoubtedly close cooperation and consultation with other organizations, government agencies and other stakeholders: discussions and meetings can be used to exchange information about new threats, attack techniques and best practices, and to learn from the experiences of others.

Continuous Threat Exposure Management (CTEM)

When it comes to defending against AI-generated cyber attacks, sooner or later the term Continuous Threat Exposure Management (CTEM) comes up. With the help of such an approach, organizations prepare themselves for constantly changing security threats and develop fast and efficient response options. CTEM supports the continuous monitoring and evaluation of threats and security risks. The goal is to constantly re-evaluate, control and contain an organization’s exposure to potential threats and vulnerabilities. In contrast to traditional approaches to security monitoring, which are often based on reactive measures, CTEM focuses on proactive and continuous monitoring to identify and address potential threats at an early stage.

``Gartner predicts that companies that prioritize their cybersecurity investments based on a CTEM program will see cybersecurity breaches drop by more than 60 percent by 2026.``

Five key points: How to successfully defend against AI-supported cyber attacks

  1. Use 24/7 monitoring. First of all, organizations should continuously monitor their networks, systems, applications and data, as this is the only way to identify potential security threats at an early stage.
  2. Evaluate and prioritize risks. Security risks should be analyzed and prioritized based on their threat potential, potential impact and likelihood of attack in order to allocate resources effectively and focus on the most important threats.
  3. Automate processes. Innovative automation solutions and advanced analytics techniques such as machine learning and artificial intelligence can be used to process large volumes of security data and identify unusual or anomalous activity (see the sketch after this list).
  4. Integrate threat data. By bringing together data from multiple sources, such as security information and event management (SIEM), threat intelligence and vulnerability management, you get a comprehensive picture of the security situation.
  5. Plan for continuous adjustments. Constant adjustments and improvements are important to respond to changing threat landscapes and new security risks – especially with fast-moving AI, where new approaches can emerge almost daily.
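As a concrete illustration of point 3, here is a minimal sketch of machine-learning-based anomaly detection on authentication events using an Isolation Forest. The feature set (hour of day, failed attempts, data volume, new-device flag) and all thresholds are illustrative assumptions; in practice the features would come from a SIEM export or log pipeline.

```python
# Minimal sketch: flagging anomalous login events with an Isolation Forest.
# The feature names and distributions are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for features extracted from a SIEM export:
# [hour_of_day, failed_attempts, MB_transferred, new_device_flag]
normal = np.column_stack([
    rng.normal(13, 3, 5000),          # logins cluster around business hours
    rng.poisson(0.2, 5000),           # occasional failed attempts
    rng.gamma(2.0, 5.0, 5000),        # typical outbound data volume
    rng.integers(0, 2, 5000) * 0.05,  # rarely a new device
])
suspicious = np.array([
    [3, 12, 900.0, 1.0],              # night-time login, many failures, large upload
    [2, 8, 450.0, 1.0],
])
events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.001, random_state=0)
model.fit(events)

scores = model.decision_function(events)   # lower = more anomalous
flags = model.predict(events)              # -1 = anomaly, 1 = normal
for idx in np.where(flags == -1)[0]:
    print(f"event {idx}: anomaly score {scores[idx]:.3f} -> escalate for review")
```

The point of the sketch is the workflow, not the specific model: extract numeric features from security telemetry, train an unsupervised model on the bulk of "normal" events, and escalate the outliers for human review.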

CTEM covers all of the above and helps organizations to improve their security practices, shorten response times to security incidents and minimize the risk of security breaches and data loss.

AI will play an increasingly important role in cybersecurity in the future. It has the potential to support IT and security experts, drive innovation and improve information security. At the same time, however, organizations are called upon to put cyber criminals who use AI for their own purposes in their place. It is the decisions we can make as humans that determine whether AI acts as a “good guy” or a “bad guy”.

Microsoft 365 Copilot - Is your company ready for AI?

  • Microsoft 365 Copilot is an AI assistant that is integrated directly into the Microsoft Office programs, SharePoint and Exchange.
  • The system supports employees in everyday tasks and thus increases the efficiency of different departments.
  • The introduction of Copilot has significant implications for companies’ data protection and therefore requires comprehensive coordination and guidance.

Assessment of Copilot Readiness

Licensing Copilot for Microsoft 365 represents a significant change to a company's IT security architecture compared with the use of ChatGPT, Gemini or other AI assistants based on large language models (LLMs). Unlike ChatGPT & Co., Copilot does not just access predefined data. The Microsoft tool retrieves additional information from the Internet and – even more importantly – from the company's own data. Copilot uses data from the SharePoint server, for example, and can also access emails, chats and documents via Microsoft Graph. This means that information that was previously only visible locally to individual employees and groups may now also surface in Copilot's responses and content.

``The implementation of Copilot in an organization may have an impact on existing GDPR compliance, depending on how Copilot is used and what data is being processed. It is therefore advisable to re-check compliance after the introduction of Copilot to ensure that no breaches or risks arise.``

Oliver Teich (Strategic Consultant)

Check and/or implement authorization models

Microsoft itself advises in the Copilot documentation: “It is important that you use the permission models available in Microsoft 365 services such as SharePoint to ensure that the right users or groups have the right access to the right content in your organization.”

It is not enough to check the permissions of users and groups. Other access paths such as guest access, local SharePoint permissions, share links and external and public access should also be carefully reviewed.

Note: People who do not belong to your company can also have access to data via shared team channels.

Note: Copilot does not carry over labels assigned via Microsoft Purview Information Protection (MPIP) into its responses. Although the system ensures that only data the respective user is authorized to access is used for AI-generated content, the response itself does not receive an MPIP label.

Overall, a strict need-to-know policy should therefore be implemented in the company. With Copilot, it is more important than ever that employees only have access to the data that is relevant to their respective tasks. It is advisable to implement a zero-trust architecture based on the principle of least privilege, or at least a strict review of all access permissions if this is not possible.
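To support the review of share links and external access recommended above, the following is a minimal sketch that lists broadly scoped sharing links on files in a SharePoint document library via Microsoft Graph. The site ID and token are placeholders; pagination, folder recursion and error handling are omitted, and authentication (for example via an app registration with Sites.Read.All) is outside the scope of the example.

```python
# Minimal sketch: listing sharing links on files in a SharePoint document library
# via Microsoft Graph, to spot anonymous or organization-wide links before a
# Copilot rollout. SITE_ID and TOKEN are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
SITE_ID = "<your-site-id>"                      # assumption: site already identified
TOKEN = "<access-token-with-Sites.Read.All>"    # placeholder; acquire via MSAL/app registration
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Files in the default document library (no pagination or folder recursion here)
items = requests.get(
    f"{GRAPH}/sites/{SITE_ID}/drive/root/children", headers=HEADERS, timeout=30
).json().get("value", [])

for item in items:
    perms = requests.get(
        f"{GRAPH}/sites/{SITE_ID}/drive/items/{item['id']}/permissions",
        headers=HEADERS, timeout=30,
    ).json().get("value", [])
    for perm in perms:
        link = perm.get("link")
        if link and link.get("scope") in ("anonymous", "organization"):
            # Broadly scoped links deserve review: Copilot may surface this
            # content to anyone covered by the link's scope.
            print(f"{item['name']}: {link['scope']} link, roles={perm.get('roles')}")
```

Files with anonymous or organization-wide links are exactly the ones Copilot may surface to a much larger audience than the author intended, so they are a sensible starting point for a need-to-know cleanup.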

Checking the data protection policy

Microsoft claims that both Microsoft 365 and Copilot comply with the General Data Protection Regulation. The company promises on its website: “Microsoft Copilot for Microsoft 365 complies with our existing privacy, security and compliance obligations to Microsoft 365 commercial customers, including the General Data Protection Regulation (GDPR) and the European Union (EU) Data Boundary.”

``Check whether you need to carry out a data protection impact assessment (DPIA) for the use of Copilot. A DPIA is a systematic analysis of the impact of data processing on the protection of personal data.``

Oliver Teich (Strategic Consultant)

Evaluation of additional agreements

However, the German Federal and State Data Protection Conference (DSK) and other bodies, such as ENISA, take the view that the Data Protection Addendum (DPA) offered by Microsoft does not adequately meet the requirements of European data protection law. They recommend that companies conclude a supplementary data processing agreement with Microsoft, or at least review the existing terms carefully. The State Commissioner for Data Protection and Freedom of Information of North Rhine-Westphalia describes the relevant considerations in a handout. In essence, the experts recommend: “A supplementary agreement to the DPA to be concluded between the controller and Microsoft should make it clear that this supplementary agreement takes precedence over all conflicting contractual texts included by Microsoft in the event of a conflict.” This supplementary agreement should regulate the following points, among others:

  • Microsoft’s own responsibility in the context of data processing for business activities that are triggered by the provision of products and services to the customer,
  • Obligation to follow instructions, disclosure of processed data, fulfillment of legal regulations,
  • Implementation of technical and organizational measures in accordance with Art. 32 GDPR,
  • Deletion of personal data and
  • Information about sub-processors.

If such agreements have already been made or evaluated, they should at least be subjected to a new data protection impact assessment as part of the Copilot roll-out.

Data might leave the boundaries of the Microsoft 365 service

In general, Microsoft promises that all data in the Microsoft 365 system will be stored and processed within the EU. In the context of Copilot, however, the company points out two exceptions to this principle:

  • For example, a graph-based chat can be combined with web content. The Bing query required for this can then also contain internal company data – and thus end up at Microsoft. To be on the safe side, all Bing functions of Copilot should therefore be deactivated.
  • Plugins can also be installed for Copilot. Here, Microsoft explicitly recommends: “Review the privacy policy and terms of use of the plug-in to determine how it handles your organization’s data.” Companies that use Copilot should therefore generally not allow plug-ins in the system or require a separate data protection and risk assessment for each plug-in used.

Review IT security strategy

In a study on the use of AI language models in companies, the German Federal Office for Information Security (BSI) comes to the conclusion that, in addition to many advantages, these systems can also harbor new IT security risks or increase the threat potential of known IT threats.

The BSI therefore advises: “In response to these potential threats, companies or authorities should carry out a risk analysis for the use of large AI language models in their specific application before integrating them into their work processes. They should also evaluate misuse scenarios to determine whether they pose a threat to their workflows. Based on this, existing security measures can be adapted and, if necessary, new measures can be taken and users can be informed about the potential dangers.”

Before introducing the Copilot system, companies should therefore urgently gain an overview of the current status of their IT security architecture. To this end, not only Microsoft 365, but also all other programs, apps, services and plugins used should be checked. Microsoft itself recommends the introduction of a zero-trust model for Copilot.

Works council may be required to approve AI deployment

The move into the AI future cannot be decided by management or the IT department alone. Because a system such as Copilot has a significant impact on workflows and processes, an existing works council must be involved in planning the introduction, and even in a pilot project.

As the AI systems can monitor the performance and behavior of employees, the works council has a right of co-determination and can even demand the conclusion of a works agreement on the use of AI.

Employee training

Probably the most important step in the introduction of the Copilot system in Microsoft 365 is the training of employees. The following points should be communicated clearly and comprehensibly to all those who will later work with Copilot:

  • The AI’s results should never be accepted without verification. Microsoft itself admits: “The answers generated by generative AI are not guaranteed to be 100% reliable.” This rather guarded wording means that the AI sometimes invents information. Before relying on data provided by Copilot, employees should therefore always verify it independently of the Copilot system. This is because Microsoft only provides Copilot's information on a best-effort basis and assumes no liability for the accuracy of the system's statements.
  • Using Copilot means that a so-called semantic index is created for each user. This index is used to generate future content that sounds authentic and matches the user's style. To do this, the AI analyzes the characteristics and habits of its users over several weeks.
  • All requests to the AI are initially saved and can later be viewed by the user (and senior administrators) at any time in the Copilot interaction history. This applies not only to entries in applications such as Word, PowerPoint or Excel, but also to team meetings in which Copilot's automatic transcription function has been activated.

``The creation of individual language profiles can be compatible with EU data protection law if a number of factors are taken into account and complied with. Copilot offers various options for controlling and managing the creation of these profiles, for example by selecting the data sources, setting the data protection level, and enabling deletion, access and correction of the data by the user.``

Oliver Teich (Strategic Consultant)

Ready for the AI revolution with Copilot

Copilot offers great possibilities: it simplifies everyday work, automatically creates meeting transcripts, designs presentations and prepares data in an easy-to-read format. However, these powerful capabilities also mean a far-reaching intervention in a company's data protection structure.

The introduction of the Copilot system must therefore be organized, supported and managed on many levels. Only if a company is fully prepared for the AI assistant can it take full advantage of the system's possibilities and opportunities. If mistakes are made during implementation, on the other hand, there is a risk of real data leaks within the office architecture as well as regulatory problems.

Deepfake phishing - identify dangers and take countermeasures

Social engineering is as old as mankind – and still works remarkably well. This type of attack is based on gaining trust and getting the victim to do things they should not actually do: revealing passwords or other sensitive information, for example. Attackers are increasingly turning to so-called deepfakes. Deepfakes are manipulated videos, images or audio recordings in which artificial intelligence and machine learning are used to insert a person's face or voice into another scene. This technology can be used to create convincing fakes that are almost indistinguishable from real content. Attackers no longer have to rely solely on manipulative emails for their deceptions; using AI technology that is freely available on the internet, they can, for example, place a phone call in which the caller sounds like the CEO of the targeted company.

Enhanced deepfakes - increased risk of manipulation

The technology required for this, which is based on complex machine learning algorithms, has developed rapidly in recent years. As a result, ever more sophisticated deepfakes are appearing. Some are used for online fraud, others as fake news. Politicians in particular are among the victims, such as Joao Doria, the governor of the Brazilian state of São Paulo: in 2018, a video was published that appeared to show him at a sex orgy; it later turned out to be a fake. Denunciation is one danger, targeted disinformation another. Fake news is already commonplace before elections, and fake videos are appearing more and more frequently and are quickly spread online. It is already possible to put words into politicians' mouths that they never said. In the past, manipulated clips were easy to spot at a careful glance; now, even experts have to look at least twice.

Attacks on companies on the rise

Deepfake technology harbors a number of IT risks for companies. In deepfake phishing, for example, cyber criminals try to use deepfake content to trick victims into making unauthorized payments or disclosing sensitive information. A recent case from Hong Kong shows how this works: according to police reports, a finance employee of a multinational company transferred 25 million US dollars to fraudsters who successfully impersonated the company's CFO in a video conference. “In the video conference with several people, it turned out that all the people present were fake,” said Chief Superintendent Baron Chan Shun-ching. The employee initially suspected that the invitation to the video conference was a phishing email. During the video call, however, he put aside his doubts, as the other participants looked and sounded exactly like his colleagues. Believing that everyone else in the call was genuine, he agreed to transfer a total of 200 million Hong Kong dollars – the equivalent of around 25.6 million US dollars. This is just one of many cases in which fraudsters have used deepfakes to manipulate publicly available video and other footage in order to defraud companies of their money.

This example shows just how convincing deepfake videos have become: https://www.youtube.com/watch?v=WFc6t-c892A

Two types of deepfake phishing attacks

Such deepfake phishing campaigns are becoming more effective and more frequent as AI technology advances. CISOs are well advised to prepare their employees to defend against such attacks. One way to do this is to explain to them what deepfake phishing is and how it works. Essentially, there are two types of deepfake phishing attacks:

  • Real-time attacks: In a successful real-time attack, the spoofed audio or video is so sophisticated that the victim believes the person on the phone or in a video conference is who they claim to be, such as a colleague or customer. In these interactions, the attackers often create a strong sense of urgency, confronting victims with imaginary deadlines, penalties and other consequences of delay in order to put them under pressure and get them to react rashly.
  • Non-real-time attacks: In non-real-time attacks, cyber criminals impersonate another person with fake audio or video messages and then distribute bogus instructions in that person's name via asynchronous communication channels such as chat, email, voicemail or social media. This type of communication relieves the criminals of the pressure to respond credibly in real time and allows them to perfect the deepfake clips before distributing them. An attack that does not take place in real time can therefore be very sophisticated and arouse less suspicion among victims.

Compared to text-based phishing campaigns, deepfake video or audio clips sent by email also have a higher chance of passing through security filters. Attacks that do not take place in real time also allow attackers to increase their reach. For example, an attacker can pretend to be the CFO and send the same audio or video message to all employees in the finance department, increasing the likelihood that someone will fall for it and disclose confidential information. In both types of attack, social media activity usually provides attackers with enough information to strike strategically, when targets are most likely to be distracted or most susceptible.

Identify deepfake phishing

The identification of deepfake phishing attacks is based on four pillars:

  • Phishing in general: Every manager and every employee must internalize this principle: phishing is based on tempting victims to make rash decisions. Therefore, a sense of urgency should immediately trigger an alarm in every interaction. For example, if a person – be it the CEO or important customers – asks for an immediate bank transfer or product delivery, everyone should pause and check whether it is a legitimate request.
  • Deepfake characteristics in videos: The company's security officers should raise employees' awareness of known and new attack methods through continuous training. One advantage: most people find deepfake phishing training particularly interesting, engaging and educational, because it is almost entertaining to watch deepfake videos and identify suspicious visual clues. Typical signs of a deepfake video include unrealistic blinking, inconsistent lighting and unnatural facial movements. Other indications of a fake are flickering at the edges of the inserted face, especially around the hair, the eyes and other facial features.
  • Deepfake characteristics in audio files: Pronunciation errors often occur in text-to-speech (TTS) systems, especially when the spoken word does not correspond to the language the system was trained on. Monotone speech output points to insufficient training data, and current forgery methods still have difficulty imitating certain features such as accents correctly. Varying input data can lead to unnatural sounds, and the need to capture the semantic content before synthesis can greatly delay the generation of high-quality fakes (a minimal illustration of the monotony cue follows after this list). Tip: The Fraunhofer AISEC website is a good place to start for training on how to identify manipulated audio data.
  • Confirmation of identity: In case of urgent requests, employees should politely point out that, due to the increased number of phishing attacks, the person must confirm their identity using two-factor authentication via separate channels. Alternatively, in the case of suspicious interactions by phone or email, the other person should disclose information that can only be known to both parties. A typical example would be a question about the length of service. Close coworkers may even ask more personal questions, such as how many children the other person has or when they last ate together. This may be uncomfortable, but it is an effective and efficient mechanism to expose fraudsters.
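As a purely illustrative companion to the audio cues above, here is a minimal sketch of the "monotone speech" heuristic: it estimates the pitch contour of a recording with librosa and reports how much the voiced pitch varies. The file name and the 20 Hz threshold are assumptions chosen only to make the idea concrete; this is a teaching example, not a reliable deepfake detector, and it should at most complement the organizational checks described above.

```python
# Minimal illustration of the "monotone speech" cue: estimate the pitch contour
# and report its variation. Low variation alone does not prove a deepfake.
import numpy as np
import librosa

def pitch_variation_hz(path: str) -> float:
    """Standard deviation of the voiced pitch contour in Hz."""
    y, sr = librosa.load(path, sr=16000)
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    return float(np.nanstd(np.where(voiced, f0, np.nan)))

# "suspicious_voicemail.wav" is a placeholder file name
score = pitch_variation_hz("suspicious_voicemail.wav")
if score < 20.0:  # assumed threshold; natural speech usually varies far more
    print(f"pitch variation only {score:.1f} Hz - flag for manual review")
else:
    print(f"pitch variation {score:.1f} Hz - no monotony cue; other checks still apply")
```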