ITIS 499 · SENIOR PROJECT · AY 2025–2026 SEM 2

AI in Cybersecurity

Enhancing Threat Detection and Response

🎓Supervisor: Dr. Ali Zolait
👩‍💻Maryam Abdulla  ·  202100437
👩‍💻Emaan Rashid Latif  ·  202109038
Submitted: May 12, 2026

Table of Contents

// ABSTRACT

Abstract

This research project examines the application of artificial intelligence (AI) in detecting and preventing cyber threats, focusing specifically on the context of Bahrain. As Buczak and Guven (2016) point out, modern cyberattacks are growing highly complex, making it virtually impossible to rely on manual human effort alone for threat detection and mitigation. This shift makes the development of automated, intelligent systems a necessity.

To explore this, we used a mixed-methods approach, combining a thorough review of existing academic literature with empirical data from a survey of 250 IT students and professionals in Bahrain. This research assessed local user awareness, trust, and perceived reliance on AI-driven security tools, alongside the organizational drivers behind adopting these technologies.

Our survey data shows that respondents hold a generally positive attitude toward the integration of AI in security operations, with an overall attitude mean score of 3.64 out of 5.00. Although 60% of respondents view AI as an essential asset in modern security, they also highlighted significant implementation challenges. These concerns mirror the warnings of Guidotti et al. (2018) regarding the lack of algorithmic transparency, accuracy issues, and the risk of human over-reliance on automated systems. Similarly, Nallapareddy and Katta (2025) argue that AI is most effective when deployed to support and augment human analysts rather than replace them entirely.

Keywords: Bahrain · UOB · Artificial Intelligence · Cybersecurity · Threat Detection · Mixed-Methods Research

Acknowledgments

This research has been completed thanks to Allah, the Almighty's blessings. We would like to express our deepest gratitude and appreciation to everyone who supported and contributed to the successful completion of this senior project research.

Our sincere thanks go to Dr. Ali Zolait for his continuous guidance, encouragement, and valuable feedback throughout every stage of this work. His patience, dedication, and insightful advice have been instrumental in shaping and improving the quality of our research.

We would also like to extend our heartfelt appreciation to our families and friends for their endless support, understanding, and motivation during this journey. Finally, we thank all the respondents who took the time to participate and provide valuable input that helped us accomplish our research objectives.

// CH.01

Introduction

1.1 Research Background

This section introduces the fundamentals of Artificial Intelligence (AI) in cybersecurity and its real-life applications in digital threat detection and response. The background is intended to give readers the context needed to appreciate the current study and the setting in which it was conducted.

1.1.1 Overview of Cybersecurity

Cybersecurity is described by NIST (2023) as the process of protecting computers, systems, networks, and data from attacks or any damage, thereby ensuring uninterrupted functioning and safety. With most organizations nowadays—including financial institutions, healthcare facilities, educational institutions, and governmental bodies—utilizing computers for various purposes, cybersecurity has become one of the most important tasks within IT.

NIST (2023) noted that the primary objective of cybersecurity is to maintain the CIA triad: confidentiality, integrity, and availability. "Confidentiality" means that data is kept private; "integrity" means that data is not altered without authorization; and "availability" means that systems and data are accessible when required.

Furthermore, Buczak and Guven (2016) reported that cyber threats have evolved and become harder to manage in recent years. Phishing attacks, malware, ransomware, and large-scale data breaches are becoming more widespread and damaging. Traditional security technologies that depend on known attack signatures and fixed rules are no longer sufficient to address these new types of threats.

1.1.2 Evolution of Artificial Intelligence

Over the years, artificial intelligence has undergone significant transformation. Early AI systems were predominantly rule-based, requiring developers to manually program all the logic needed for execution. These systems performed well on simple, structured problems; however, they broke down when presented with novel or unexpected inputs.

According to Shone et al. (2018), the introduction of machine learning changed all of that. Machine learning enables systems to learn progressively from data rather than follow a fixed set of instructions. Deep learning followed, using a branch of AI known as neural networks to process highly complex data. Thanks to these developments, AI is far more adaptable and effective in handling rapidly evolving environments, a critical need in cybersecurity.

Nallapareddy and Katta (2025) noted that in the cybersecurity world, this represents a shift from simply recognising signature-based threats to learning and adapting. Older systems could only detect familiar attacks, whereas newer AI-powered systems can identify unusual patterns even without prior exposure to a specific attack type.

1.1.3 Role of AI in Cybersecurity

AI is used for several specific jobs in the field of cybersecurity. The most apparent is threat detection; AI systems can more quickly and effectively process large amounts of data and identify anything that looks suspicious.

Nallapareddy and Katta (2025) noted that AI not only detects but is also involved in incident response. Once a threat is verified, AI can help determine how dangerous the threat is and which systems are affected, and in some instances it can even respond automatically by isolating the compromised device or blocking suspicious traffic.

Furthermore, Buczak and Guven (2016) reported that AI is helping security teams reduce false alarms. AI can filter out irrelevant noise and direct attention to genuine threats. In summary, AI is not a replacement for human decision-making in cybersecurity; rather, it is a tool for transforming security operations into a faster, more accurate, and more efficient process.
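The alert-filtering role described above can be illustrated with a small sketch. The scoring signals, weights, and threshold here are illustrative assumptions, not values from any real security product; the point is only that a scored triage step lets genuine threats surface while routine noise is suppressed.

```python
# Hypothetical sketch: scoring and ranking security alerts so analysts
# see likely true threats first. All weights below are assumptions.

def score_alert(alert):
    """Combine simple risk signals into a single priority score (0..1)."""
    score = 0.0
    if alert.get("source_reputation") == "bad":
        score += 0.5
    score += 0.3 * alert.get("anomaly_score", 0.0)  # 0..1 from a detector
    if alert.get("asset_criticality") == "high":
        score += 0.2
    return score

def triage(alerts, threshold=0.5):
    """Keep alerts at or above the threshold, highest priority first."""
    kept = [a for a in alerts if score_alert(a) >= threshold]
    return sorted(kept, key=score_alert, reverse=True)

alerts = [
    {"id": 1, "source_reputation": "good", "anomaly_score": 0.1},
    {"id": 2, "source_reputation": "bad", "anomaly_score": 0.9,
     "asset_criticality": "high"},
    {"id": 3, "source_reputation": "good", "anomaly_score": 0.4},
]
print([a["id"] for a in triage(alerts)])  # -> [2]: only the genuine threat survives
```

In this toy run, two low-risk alerts are filtered out and only the high-risk one reaches the analyst, mirroring the noise-reduction benefit the literature describes.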

1.2 Research Problem

Cybersecurity threats have become more sophisticated, faster, and harder to detect using traditional defence mechanisms. Conventional security systems often rely on static rules, manual monitoring, and reactive responses, making them insufficient against modern attacks such as zero-day exploits, advanced persistent threats (APTs), and large-scale data breaches. While AI offers promising capabilities—such as real-time threat detection, automated response, and predictive analysis—its integration into cybersecurity is not without challenges. Organizations face uncertainties regarding the accuracy, reliability, and ethical implications of AI-based systems, as well as concerns about over-reliance on automation.

Therefore, this research seeks to:

  • Identify the factors that influence the successful implementation of AI in threat detection and response systems.
  • Assess the impact of these factors on the overall performance, reliability, and decision-making quality of AI-driven cybersecurity solutions.

1.3 Research Questions

1.4 Research Objectives

1.5 Research Significance

This research is significant because cyberattacks have increased in frequency and severity. Buczak and Guven (2016) state that existing systems rely on fixed signatures or rules for threat detection, a strategy that does not work well against new types of attacks. AI, in particular machine learning and deep learning, offers an approach that can learn over time and detect threats that traditional approaches may miss.

Shone et al. (2018) showed that AI-based intrusion detection systems (IDS) can process vast amounts of network traffic and system logs faster than human analysts and detect suspicious activity sooner, giving security personnel more time to respond.

Furthermore, Nallapareddy and Katta (2025) discussed how AI also helps to speed up incident response. The automated response system is a method that can be used to quickly contain threats before severe damage is done. This level of speed is not easily possible in a large, complex environment with manual operations.

1.6 Gaps Analysis

While there is already significant research on the use of AI in cybersecurity, several gaps remain. Shone et al. (2018) and Tang et al. (2020) both found that many current studies focus primarily on improving detection accuracy and often fail to consider the practical problems of deployment within an organization's environment.

Capuano et al. (2022) also identified a lack of explainability and trust. Without tools that make AI decisions more explainable, it is difficult for organizations to fully trust and rely on AI-based decision-making.

Furthermore, Ahmad et al. (2019) found that most studies view threat detection and incident response as two distinct entities. There has been little research that examines the use of AI in an integrated workflow, including detection, response, and continuous learning. Finally, there is not enough research that focuses on human and organizational issues, such as the willingness of security teams to embrace AI and the level of training required.

1.7 Theoretical Framework

1.7.1 AI-Driven Threat Detection Models

Nallapareddy and Katta (2025) argued that traditional security tools struggle to defend against previously unseen attacks because they rely on fixed rules or known signatures. In contrast, AI-based systems use machine learning and deep learning algorithms that can identify abnormal or harmful activity across systems, networks, and user behaviour, even when that activity does not match any previously known pattern.

Shone et al. (2018) classified network traffic as normal or malicious using supervised learning algorithms such as Support Vector Machines and Random Forest. Deep learning architectures such as Deep Neural Networks and Long Short-Term Memory (LSTM) networks can capture significantly more intricate patterns and abnormalities concealed in massive datasets. Because they learn from data, AI models can identify not only known threats but also new and unknown ones, including zero-day attacks.
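The supervised approach described above can be sketched in a few lines. This is a minimal illustration, not a reproduction of any cited study: the traffic features (packet rate, packet size, distinct ports) and the synthetic data are assumptions chosen only to show how a Random Forest learns a benign/malicious boundary from labelled examples.

```python
# Minimal sketch of supervised traffic classification with a Random Forest.
# Features and data are synthetic placeholders, not real network captures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Assumed features: [packets/sec, mean packet size, distinct ports contacted]
normal = rng.normal([50, 500, 3], [10, 100, 1], size=(200, 3))
attack = rng.normal([400, 80, 40], [50, 20, 5], size=(200, 3))  # e.g. a port scan
X = np.vstack([normal, attack])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Two unseen flows: one typical, one scan-like.
print(clf.predict([[45, 520, 2], [390, 75, 45]]))  # -> [0 1]
```

The same pipeline generalizes to deep models (DNNs, LSTMs) by swapping the classifier, at the cost of more data and computation.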

Dharmesh et al. (2023) found that another key component of AI-driven threat detection is anomaly detection. AI can establish a profile of how a network should operate and then look for activities that are abnormal. Arif et al. (2023) explored the use of AI to be predictive—predicting the direction of attack based on past data to then fortify the system before the attack.
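The anomaly-detection idea above, profiling normal behaviour and flagging deviations, can be sketched with a simple statistical baseline. The metric (logins per hour), the baseline values, and the three-sigma threshold are all illustrative assumptions; real systems use far richer behavioural features.

```python
# Sketch of anomaly detection: learn a profile of "normal" activity,
# then flag observations that deviate strongly from it.
# Baseline values and threshold are made-up for illustration.
import statistics

baseline_logins_per_hour = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]  # assumed normal history
mu = statistics.mean(baseline_logins_per_hour)
sigma = statistics.stdev(baseline_logins_per_hour)

def is_anomalous(value, z_threshold=3.0):
    """Flag a value more than z_threshold standard deviations from normal."""
    return abs(value - mu) / sigma > z_threshold

print(is_anomalous(5))   # typical hour -> False
print(is_anomalous(60))  # sudden burst of logins -> True
```

Because the model encodes only what "normal" looks like, it can flag activity it has never seen before, which is why this technique suits insider threats and zero-day attacks.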

Kreinbrink (2019) noted challenges such as imbalanced datasets, biased models, and the complexity of deep learning systems. This is why many researchers recommend combining AI analysis with human review, pairing machine speed and accuracy with human judgment and oversight.

Figure 1. AI-Driven Threat Detection Models (Sarker, 2022)

1.7.2 Incident Response Frameworks

Incident response encompasses the formal incident handling process, which involves the following stages: Preparation, Detection, Containment, Eradication, Recovery, and Post-incident Review. With the inclusion of AI, each of these stages can become quicker and more accurate.

This theoretical model is based on the NIST Incident Response Life Cycle. Nallapareddy and Katta (2025) found that AI can automate the early stages of detection and prioritize alerts, reducing the time spent confirming that an incident is real.

According to NIST (2023), organizations are now incorporating Security Orchestration, Automation, and Response (SOAR) into their incident response strategies. SOAR tools, when integrated with artificial intelligence, can correlate security alerts from different sources, provide context for these alerts, and suggest or even carry out remediation measures, reducing key performance indicators such as mean time to detect (MTTD) and mean time to respond (MTTR).
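The two indicators SOAR aims to reduce can be computed directly from incident timestamps. The incident records below are hypothetical; the sketch only shows how MTTD (occurrence to detection) and MTTR (detection to resolution) are derived.

```python
# Sketch: computing mean time to detect (MTTD) and mean time to respond
# (MTTR) from hypothetical incident timestamps.
from datetime import datetime

incidents = [
    {"occurred": datetime(2025, 1, 1, 9, 0),
     "detected": datetime(2025, 1, 1, 9, 30),   # 30 min to detect
     "resolved": datetime(2025, 1, 1, 11, 30)}, # 120 min to resolve
    {"occurred": datetime(2025, 1, 2, 14, 0),
     "detected": datetime(2025, 1, 2, 14, 10),  # 10 min to detect
     "resolved": datetime(2025, 1, 2, 15, 40)}, # 90 min to resolve
]

def mean_minutes(pairs):
    """Average the time between each (start, end) pair, in minutes."""
    deltas = [(end - start).total_seconds() / 60 for start, end in pairs]
    return sum(deltas) / len(deltas)

mttd = mean_minutes((i["occurred"], i["detected"]) for i in incidents)
mttr = mean_minutes((i["detected"], i["resolved"]) for i in incidents)
print(mttd, mttr)  # -> 20.0 105.0
```

Faster AI-assisted triage shortens the occurred-to-detected gap, directly lowering MTTD, while automated containment lowers MTTR.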

Alevizos and Dekker (2024) highlighted that a continuous learning loop represents one of the more significant developments in AI-driven incident response. Once a particular incident has been resolved, data from the event can be used to retrain the AI system so that it detects and responds better to similar incidents in the future.

1.8 Research Methodology

1.8.1 Data Collection

Data for this study were collected from two main sources: online surveys and a systematic literature review.

(a) Primary data: An online questionnaire was designed in Google Forms and distributed through digital channels. Participants rated their agreement with statements on a five-point Likert-type scale. Responses were tallied and sorted automatically and subsequently checked to eliminate missing and/or inconsistent data from the analysis.

(b) Secondary data: Academic journals and conference papers published between 2016 and 2025 were reviewed from sources including IEEE Xplore, ScienceDirect, SpringerLink, ACM Digital Library, MDPI Open Access, Wiley Online Library, NIST Official Website, ResearchGate, ProQuest, Google Scholar, AIS Electronic Library, and IJMDSA Journal Portal.

1.8.2 Data Analysis

Survey responses were summarized using descriptive statistics. For background questions, the researchers compiled the data and calculated the percentage of respondents in each category. For Likert-scale questions, mean scores were computed to show whether the group mostly agreed or disagreed with each statement. Charts were produced to make the patterns easy to see.
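The analysis steps above can be sketched with a small example. The responses shown are made-up placeholders, not the study's actual data; the sketch only shows the two computations used: category percentages for background questions and mean scores for Likert items (1 = Strongly Disagree, 5 = Strongly Agree).

```python
# Sketch of the survey analysis: percentages for a background question
# and a mean score for one Likert item. Responses are invented examples.
from collections import Counter

field = ["IT", "CS", "IT", "Cybersecurity", "IT"]   # background question
likert_q1 = [4, 5, 3, 4, 4]                          # one Likert statement

percentages = {k: 100 * v / len(field) for k, v in Counter(field).items()}
mean_score = sum(likert_q1) / len(likert_q1)

print(percentages)           # -> {'IT': 60.0, 'CS': 20.0, 'Cybersecurity': 20.0}
print(round(mean_score, 2))  # -> 4.0 (group leans toward "Agree")
```

A mean above the scale midpoint of 3.0, as here, is read as overall agreement with the statement, matching how the study interprets its attitude scores.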

1.8.3 Ethical Considerations

All data were collected ethically. Participation in the survey was voluntary and anonymous, and no personal or sensitive information was recorded. All secondary sources were properly cited following academic standards.

1.9 Research Structure

This research consists of a total of six chapters:

Figure 2. Research Structure

// CH.02

Literature Review

2.1 Cybersecurity

2.1.1 Definition of Cybersecurity

As defined by NIST (2023), cybersecurity refers to the processes adopted to safeguard computers, computer networks, and any information stored within such devices against unauthorized access, destruction, or disruption. The core concepts in this area are confidentiality (keeping data private), integrity (protecting data from unauthorized modification), and availability (ensuring that data remains accessible).

According to NIST (2023), cybersecurity involves protecting systems so that stored data remains safe, computing systems run without interruption, and business operations continue. Security encompasses both policy and behavior within organizations.

2.1.2 Importance of Cybersecurity

According to NIST (2023), the significance of cybersecurity has increased tremendously because most organizations now depend on technology to store their critical information and conduct their business. A cyberattack can lead to devastating effects such as financial losses, data destruction, and legal ramifications. Healthcare organizations, financial institutions, educational organizations, and even government agencies are at the highest risk of cyberattacks because of the value of their information.

Shilpa et al. (2024) noted that a poor security system creates great risks of being attacked or hacked as it makes it relatively simple to steal valuable data from an organization or cause irreparable damage to their IT infrastructure, which, in turn, leads to a loss of customer and partner confidence.

2.1.3 Common Cybersecurity Threats

The cyber threats that modern organizational networks are exposed to are complex. Buczak and Guven (2016) explained malware as malicious software designed to disrupt operations or compromise data integrity. The growing threat of ransomware is described by NIST (2023) as hostile encryption of file systems for the purpose of financial extortion. Russell and Norvig (2021) explained Advanced Persistent Threats (APTs) as actors that can break into systems for extended periods to gain strategic intelligence. Apruzzese et al. (2020) cover zero-day exploits, which take advantage of system vulnerabilities before vendor patches are released.

2.1.4 Incident Response in Cybersecurity

The concept of incident response refers to the sequential approach employed by organizations in reaction to a detected cyberattack. As per NIST (2023), the procedure involves the following six stages: preparation, detection and analysis, containment, eradication, recovery, and post-incident activity. This set of actions is critical for minimizing the potential harm inflicted upon businesses while restoring their operational capabilities promptly.

Nallapareddy and Katta (2025) argued that successful incident response requires security technologies combined with human decision-makers. The existing approach is insufficient because it relies heavily on manual intervention, leading to delays in addressing cyberattacks, particularly when they are large or intricate.

2.1.5 Traditional Cybersecurity Approaches

Buczak and Guven (2016) argued that conventional security software, while previously reliable, is no longer capable of coping with contemporary cybersecurity challenges. Most traditional applications rely on signature-based detection, which can only identify malware that has already been catalogued in its database. Consequently, when attackers develop new malware or modify existing variants, conventional security tools fail to detect the threat.

Conventional security tools are also known for excessive alerting: they issue large volumes of alerts, most of which turn out to be false positives. Security personnel become fatigued after receiving thousands of false alerts daily. Furthermore, conventional security tools lack interoperability; they are designed to operate independently, making it difficult to coordinate security efforts across the entire infrastructure.
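The signature-based limitation described in this section can be demonstrated concretely. The payload bytes below are invented placeholders: the point is that even a trivial modification to a catalogued sample changes its hash completely, so an exact-signature lookup misses the variant.

```python
# Sketch of why signature-based detection fails against modified malware:
# the signature database stores hashes of known samples, and any change
# to a sample yields a different hash. Payloads here are invented.
import hashlib

known_signatures = {hashlib.sha256(b"EVIL_PAYLOAD_V1").hexdigest()}

def signature_match(sample: bytes) -> bool:
    """Exact lookup against the catalogue of known-bad hashes."""
    return hashlib.sha256(sample).hexdigest() in known_signatures

print(signature_match(b"EVIL_PAYLOAD_V1"))  # True: catalogued sample is caught
print(signature_match(b"EVIL_PAYLOAD_V2"))  # False: one-character variant evades detection
```

This is the gap that the behaviour-based and anomaly-based AI methods discussed later are meant to close.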

2.2 Artificial Intelligence (AI)

Artificial Intelligence (AI) has become one of the most important technologies in modern computing. It focuses on enabling machines to perform tasks that normally require human intelligence, such as learning, reasoning, problem-solving, and decision-making.

2.2.1 Definition and Overview of AI

In simple terms, Artificial Intelligence (AI) is the practice of making machines, especially computer systems, replicate the way human intelligence works. This involves learning from new data, using logic to solve problems, and correcting its own mistakes. Russell and Norvig (2021, p. 4) argued that AI is focused on the design of "intelligent agents": programs that can perceive what is happening in their environment and then take the right actions to reach specific goals.

2.2.2 History of Artificial Intelligence

The development of artificial intelligence has passed through several stages. In its early stages, AI was very basic and ran on a predefined rule set; when the input did not match the rules, it could not produce an output. Buczak and Guven (2016) argued that the emergence of machine learning revolutionized this paradigm, enabling computational engines to identify intrinsic patterns in data without explicit human programming. Today, AI has advanced further through deep learning, which uses artificial neural networks that mimic some cognitive functions of the human brain.

2.2.3 AI Tools and Techniques

Different approaches can be used by AI depending on the goals of the application. Machine learning comprises three main types of learning: supervised learning, which trains on labelled examples; unsupervised learning, which finds structure in unlabelled data; and reinforcement learning, which improves through trial and error guided by rewards.

Shone et al. (2018) discovered that deep learning uses neural networks to work with highly sophisticated data—designs such as CNNs and RNNs help computers in detecting images, recognizing voice patterns, and identifying suspicious activities within a network.

2.2.4 Applications of AI in Different Fields

Tang et al. (2020) discussed that AI is being incorporated into different industries: in the medical industry for disease detection and health monitoring; in finance and banking for fraud detection and risk assessment; in transport for traffic management and self-driving cars; in manufacturing for predictive maintenance.

Nallapareddy and Katta (2025) noted that in cybersecurity, AI plays an essential role in developing defense systems—helping in detecting potential threats, analyzing network behaviors, and stopping cyberattacks automatically.

2.2.5 Factors Impacting the Effectiveness of AI

According to Buczak and Guven (2016), for an AI model to be effective, a lot of unbiased data needs to be fed into the model so that it learns to produce reliable output. Corrupted data will lead to a biased model that makes mistakes in practice even if it worked well in testing scenarios.

Guidotti et al. (2018) argued that the second factor is transparency in AI systems—being able to explain why the AI came to a particular conclusion. Many intelligent machines, especially those employing deep learning, have been referred to as "black boxes" because the reasons for the decisions are hard to comprehend. This creates mistrust, which can be especially problematic in areas like cybersecurity where reliability is key.

2.3 Integration of AI in Cybersecurity

2.3.1 Relationship between AI and Cybersecurity

AI and cybersecurity are linked because of the massive amount of data produced by computer systems every day. AI technologies are adept at processing such volumes because their algorithms can detect hidden trends and unusual behavior within the data.

Alevizos and Dekker (2024) said that one of the strengths of using AI for cybersecurity purposes is its ability to adapt to changes. Static rules are characteristic of traditional security methods; therefore, any changes in threats are challenging to incorporate into the existing systems. On the contrary, an AI-based security system can learn from each threat and update its defenses accordingly.

2.3.2 AI for Enhancing Threat Detection

Buczak and Guven (2016) proposed that AI performs better at detecting threats because it uses data-driven approaches to identify hazardous patterns, regardless of whether that type of attack has occurred before.

Shone et al. (2018) noted that AI systems can simultaneously analyze old and current data to detect if the network traffic is legitimate or malicious. Deep learning techniques can find increasingly subtle patterns hidden within large datasets to improve detection results in high-traffic networks where attackers use sophisticated methods to evade detection.

Tang et al. (2020) discussed anomaly detection: developing a model of how typical network traffic should look and then raising alerts whenever a deviation occurs. This method is effective in detecting insider threats and zero-day attacks.

2.3.3 Incident Response and Threat Analysis using AI

Nallapareddy and Katta (2025) report that the capabilities of AI extend beyond threat detection. The remediation process is usually lengthy and tedious for people; AI assists by automating activities and decision-making, leading to faster action and better-informed responses. According to NIST (2023), security incidents should be triaged and prioritized according to their severity and possible implications. AI can determine the scope of security incidents, correlate them with network activity, and carry out automatic actions such as blocking malicious traffic and isolating affected systems.

AI is also extremely helpful in retrospective analysis and in understanding the root causes of intrusions. This information is vital for developing preventive measures that reduce the chances of future attacks. Nevertheless, human involvement remains crucial; people must be able to verify and check AI decisions for correctness.
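The automated containment actions described in this subsection can be sketched as a simple policy. The action names, severity scale, and rules below are hypothetical; a real deployment would call firewall or EDR APIs, and, as the text stresses, would keep a human analyst in the loop to verify the AI's decisions.

```python
# Hypothetical sketch: mapping a triaged incident to containment actions.
# Severity rules and action names are illustrative assumptions only.

def respond(incident):
    """Map a triaged incident to a queue of containment actions."""
    actions = []
    if incident["severity"] >= 8:                       # assumed 0-10 scale
        actions.append(("isolate_host", incident["host"]))
    if incident.get("malicious_ip"):
        actions.append(("block_ip", incident["malicious_ip"]))
    # Human oversight: every automated decision is surfaced for review.
    actions.append(("notify_analyst", incident["id"]))
    return actions

incident = {"id": "INC-042", "severity": 9, "host": "ws-17",
            "malicious_ip": "203.0.113.5"}
for action in respond(incident):
    print(action)
```

Encoding containment as data (a list of proposed actions) rather than immediate side effects is what makes the analyst-review step possible.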

2.3.4 Technical Factors Influencing AI-based Threat Detection

Buczak and Guven (2016) reported several important technical considerations that influence the efficiency of AI for threat detection. First, the quality of data input is of crucial importance—large amounts of reliable data are required, while inaccurate data may result in flawed conclusions and/or a high false positive rate.

Guidotti et al. (2018) discovered that the complexity of the AI model is an essential factor. More complex deep-learning algorithms are likely to have better results in terms of threat identification, yet they might require significant computing resources and may not be easily deployable in settings with limited computational capacity.

Apruzzese et al. (2020) noted that the robustness of a tool needs to be assessed—the capability of the system to resist manipulation attempts by an adversary who tries to mislead the AI by presenting carefully chosen inputs.

2.3.5 Future Trends in AI-driven Cybersecurity

According to Capuano et al. (2022), the current primary focus lies on developing Explainable AI, which seeks to make the rationale behind AI decisions comprehensible to security experts. Improved explainability of alert reasoning will increase human users' confidence in the system.

Arif et al. (2023) discussed predictive AI: using past data to predict when and where future attacks might occur so that defenses can be prepared in advance. Researchers are also refining AI algorithms to strengthen cloud computing and network security.

Despite impressive achievements, many hurdles remain: privacy, discrimination, and algorithmic bias should not be overlooked. Another alarming trend is adversaries' use of artificial intelligence to conduct more advanced attacks, fueling what Nallapareddy and Katta (2025) describe as a digital arms race.

// CH.03

Research Model & Hypotheses

3.1 Research Model

This research is based on a research model which consists of three main variable categories: independent variables, a dependent variable, and moderating organizational factors.

3.1.1 Conceptual Research Framework

The conceptual framework captures the connections between the different aspects involved in the research study. The central assumption is that AI can significantly improve the security situation by helping teams identify risks faster and more accurately, respond more quickly to attacks, and maintain efficiency under difficult circumstances.

According to Venkatesh et al. (2016) and Capuano et al. (2022), AI will deliver results effectively only when the workers have been professionally trained and when the usefulness and usability of the AI tools are clear to the team. As Tang et al. (2020) suggest, this combination of human factors and technical skills represents a popular and successful strategy in studying AI applications within organizations.

Figure 3. Research Framework

3.1.2 Independent Variables

AI-based Threat Detection Capabilities: Refers to the ability of AI-powered devices to recognize cyber threats using data analysis. Traditional software is limited to identifying previously documented attacks, while AI leverages machine learning. Shone et al. (2018) explore the possibility of using sophisticated learning algorithms like deep learning to uncover hidden threats in big data. According to Dharmesh et al. (2023), anomaly detection is useful in recognizing new zero-day attacks and internal threats.

AI-based Incident Response Capabilities: Refers to the ability of AI systems to help or even react on their own when faced with a threat—including ranking alerts based on importance, correlating incidents, and performing automatic actions such as removing infiltrated hosts or blocking suspicious traffic (Nallapareddy and Katta, 2025). Moreover, AI allows implementing a feedback loop for incident management as illustrated by Alevizos and Dekker (2024).

Technical Factors of AI Systems: Includes data quality and accuracy, the ability to scale, and whether the tool is well-integrated into the existing security architecture. According to Buczak and Guven (2016), AI systems are only as good as the data they receive. Apruzzese et al. (2020) caution against building vulnerable models that can be easily manipulated by providing malicious inputs.

3.1.3 Dependent Variable

Effectiveness of Cybersecurity Threat Detection and Response: Defined as a firm's capability to detect cyber threats and react to security incidents effectively, avoiding damage and returning to regular operations. Indicators of effectiveness include an improved threat detection rate, shortened incident response time, and a reduced rate of false positives. According to Shone et al. (2018) and Nallapareddy and Katta (2025), AI-driven capabilities have positively affected these areas through fast processing and automated activities.
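Two of the effectiveness indicators named above can be computed from a standard evaluation of a detector. The counts below are a hypothetical test run, not results from this study; the sketch only shows how detection rate (recall) and false positive rate follow from the four confusion-matrix counts.

```python
# Sketch: effectiveness indicators from hypothetical evaluation counts.
tp, fn = 90, 10    # real attacks caught / missed
fp, tn = 5, 995    # benign events wrongly flagged / correctly ignored

detection_rate = tp / (tp + fn)        # share of real attacks detected (recall)
false_positive_rate = fp / (fp + tn)   # share of benign events wrongly flagged

print(detection_rate)        # -> 0.9
print(false_positive_rate)   # -> 0.005
```

A well-performing AI detector pushes the first number toward 1.0 while keeping the second near 0, which is exactly the trade-off the dependent variable measures.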

3.1.4 Organizational Factors (Moderating Variables)

Organizational Knowledge and Skills: The level of technical understanding and expertise that security teams possess in relation to AI-based cybersecurity tools. As Capuano et al. (2022) noted, teams with high levels of AI knowledge are better positioned to unlock the full potential of these tools, while untrained teams may mismanage even the most advanced systems.

Perceived Usefulness of AI: The degree to which security professionals believe that using AI tools will improve their job performance. As Venkatesh et al. (2016) highlighted, when users genuinely believe a tool helps them detect threats faster or reduces false alarms, they are more likely to trust it and use it consistently.

Perceived Ease of Use: The degree to which security professionals believe that using AI cybersecurity tools requires minimal effort. When a tool is perceived as intuitive and user-friendly, people are more likely to adopt it into their daily routines and trust it for critical decisions.

3.2 Research Hypotheses

H1
AI-based threat detection capabilities have a positive impact on the effectiveness of cybersecurity threat detection and response.
H2
AI-based incident response capabilities have a positive impact on the effectiveness of cybersecurity threat detection and response.
H3
Technical factors of AI systems positively influence the effectiveness of cybersecurity threat detection and response.
H4
Organizational knowledge and expertise positively moderate the relationship between AI-based capabilities and the effectiveness of cybersecurity threat detection and response.
H5
Perceived usefulness of AI positively moderates the relationship between AI-based capabilities and the effectiveness of cybersecurity threat detection and response.
H6
Perceived ease of use of AI systems positively moderates the relationship between AI-based capabilities and the effectiveness of cybersecurity threat detection and response.
// CH.04

Methodology

4.1 Introduction

The present chapter provides an overview of the design and methodology used in the research. Overall, this chapter is the roadmap of the procedures employed to determine the influence of artificial intelligence on improving cybersecurity. A questionnaire survey was used to collect quantitative data, which allowed the researchers to gather participants' opinions and knowledge related to the subject matter.

4.2 Research Strategy

4.2.1 Research Approach

This research uses a quantitative research methodology. Emphasis is placed on using numerical data to observe trends in what has been collected. The data was gathered with a questionnaire distributed via Google Forms. As per Saunders et al. (2019), this methodology allows different perspectives to be converted into percentages and visualized easily.

4.2.2 Research Philosophy

In terms of epistemology, our research adopts a positivist approach: knowledge is gained not through people's subjective experiences or narratives but from observations and measurable information. To stay objective, the researchers utilized a five-point Likert scale ranging from "Strongly Disagree" to "Strongly Agree". This technique transforms subjective information into objective statistics for further calculation.

4.3 Research Methods

4.3.1 Questionnaire-Based Survey

We employed a structured questionnaire developed using Google Forms. The survey was organized into sections. Demographic questions were asked first to establish participants' age, gender, and fields of study. The largest section was devoted to attitudes towards AI technology in cybersecurity: participants were asked about its efficiency in threat detection and incident response, as well as its overall usefulness. Almost all questions were phrased as statements evaluated on a five-point Likert scale ranging from "Strongly Disagree" to "Strongly Agree".

4.3.2 Advantages of Using Google Forms

Google Forms was selected for developing and distributing our questionnaire. Since the survey was distributed online via an easy-to-share link, we were able to reach 250 respondents.

4.3.3 Limitations of the Method

Although the survey method proved beneficial in obtaining the needed data, certain limitations should be mentioned. Because only closed questions were used, respondents could not give more elaborate explanations of their answers. Additionally, it is possible that some respondents answered quickly without reading all the statements carefully. Finally, since the survey was self-administered online with no researcher present to clarify questions, respondents' individual interpretations of the statements might affect the results.

4.4 Sample Selection

4.4.1 Target Population

The target population of this research is people who have a certain degree of knowledge or experience in artificial intelligence and cybersecurity in the academic or professional sphere—university students studying IT-related subjects including Information Systems, Computer Science, Engineering, and Cybersecurity, as well as professionals working in IT and cybersecurity. The purposeful choice was made since members of this population are more capable of giving knowledgeable views regarding the use of AI in the environment of cybersecurity.

4.4.2 Sampling Technique

We used convenience sampling as our sampling technique, a non-probability technique whereby the subjects most easily accessible to the researcher are selected. We distributed the questionnaire through WhatsApp groups consisting of university students and others with knowledge of IT or cybersecurity. As Creswell and Creswell (2018) note, convenience sampling is often used in student-conducted scholarly studies.

4.4.3 Sample Size

The researchers had collected 250 responses by the time data collection was stopped. Each submission was reviewed before analysis to ensure that it was complete and that the individual had engaged with the questions. Submissions that left mandatory questions blank were deleted, as were those that resembled random marking. Following this review, all 250 responses were validated and retained for analysis.

4.5 Data Collection

4.5.1 Data Collection Tool

The data collection tool was Google Forms, a free online survey and questionnaire creation tool. A questionnaire can be constructed and distributed in real time via any messaging application (in this case, WhatsApp), and all responses are stored automatically in a single location without manual collection.

4.5.2 Questionnaire Structure

Primary data collection was conducted over a six-month period from November 1, 2025, to May 1, 2026. The questionnaire was categorized into four parts:

Section A: Basic information (demographics)
Section B: General awareness of AI and cybersecurity
Section C: AI in cybersecurity (research constructs)
Section D: Overall opinion

4.5.3 Data Collection Procedure

The questionnaire was distributed via WhatsApp because it was the fastest and most direct means of reaching individuals with the appropriate backgrounds. The first page contained an introductory paragraph stating that participation was voluntary, that responses would be kept fully anonymous, and that they would be used only in academic research. Respondents were also informed that the survey would take approximately five to seven minutes and that there were no right or wrong answers. The final analysis was conducted on 250 valid and complete responses.

4.5.4 Research Framework

The framework shows how the key factors that constitute the effectiveness of cybersecurity threat detection and response are interconnected. The independent variables (H1–H3) are hypothesized to directly affect the dependent variable. In addition, the model includes three moderating variables (H4–H6) concerning human and usability aspects. Overall, the framework emphasizes both technical and human factors.

4.6 Data Analysis

4.6.1 Data Preparation

Before beginning analysis, all responses received through the survey were examined to ensure that the collected data was clean and reliable—data cleaning is an important step before carrying out analysis, a necessity highlighted by Creswell and Creswell (2018). Each submission was verified to ensure responses to all mandatory questions were present. Where there were signs that the respondent had not devoted sufficient attention (such as selecting the same option throughout scale questions, known as "straight-lining"), submissions were removed.
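The screening rules described above can be sketched as a short script. This is an illustrative sketch only: the field names (`gender`, `likert`) and helper names are hypothetical, not the actual columns of the Google Forms export.

```python
# Hypothetical sketch of the screening pass: drop submissions with blank
# mandatory fields or with "straight-lining" on the Likert items.

def is_straight_lined(likert_answers):
    """Flag a submission where every scale item received the same option."""
    return len(set(likert_answers)) == 1

def clean_responses(responses, mandatory_fields):
    """Keep only submissions that answered all mandatory questions and
    show no sign of straight-lining on the Likert items."""
    valid = []
    for r in responses:
        if any(r.get(f) in (None, "") for f in mandatory_fields):
            continue  # a mandatory question was left blank
        if is_straight_lined(r["likert"]):
            continue  # same option selected throughout the scale items
        valid.append(r)
    return valid
```

For example, given three submissions where one straight-lines and one leaves a mandatory field blank, only the remaining submission survives the pass.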

4.6.2 Data Analysis Techniques

The analysis was based on descriptive statistics—interpreting the data to make summaries and descriptions without going further into any statistical inferences. For demographics, frequency counts and percentages were computed for every variable. For the Likert scale sections, means for statements and for constructs were computed to understand the level of agreement/dissent from the average point.
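The two descriptive computations used here can be illustrated with a minimal sketch. The function names are ours, and averaging each respondent's item scores before averaging across respondents is one common convention for construct means; the chapter does not specify the exact aggregation.

```python
from collections import Counter

def frequency_table(values):
    """Frequency counts and percentages for one demographic variable."""
    counts = Counter(values)
    n = len(values)
    return {cat: (freq, round(100 * freq / n, 1)) for cat, freq in counts.items()}

def construct_mean(item_scores):
    """Construct mean: average each respondent's items (1-5 Likert codes),
    then average those per-respondent scores across the sample."""
    per_respondent = [sum(items) / len(items) for items in item_scores]
    return round(sum(per_respondent) / len(per_respondent), 2)
```

With the full dataset, `frequency_table` reproduces tables like the gender distribution, and `construct_mean` yields values such as the 3.64 overall-opinion mean.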

4.6.3 Tools Used for Analysis

The software utilized during the analysis phase was Google Forms itself, since the survey was administered on the platform throughout the data collection period (November 1, 2025 to May 1, 2026). Google Forms automatically generated visual summaries for all questions as responses were collected, including pie charts and bar charts showing the distribution of participant responses across the five-point Likert scale.

// CH.05

Data Analysis & Results

5.1 Introduction

The following chapter contains the analysis and interpretation of the data collected through the online survey questionnaire. The aim here is to find out the level of awareness among the participants regarding the application of AI technology to enhance threat detection capability in improving cybersecurity.

5.2 Demographic Analysis

5.2.1 Gender Distribution

The survey results revealed that most respondents were female, accounting for 63.6% of the total sample, while male respondents made up the remaining 36.4%.

Table 1. Gender Distribution
Gender | Frequency | Percentage
Female | 159 | 63.6%
Male | 91 | 36.4%
Total | 250 | 100%

5.2.2 Age Group Distribution

The largest group of participants fell within the 20–25 years age range, representing 45.2% of the sample. This was followed by participants aged 18–20 years (31.2%), respondents aged 26 and above (14.4%), and those under 18 (9.2%).

Table 2. Age Group Distribution
Age Group | Frequency | Percentage
Under 18 | 23 | 9.2%
18–20 | 78 | 31.2%
20–25 | 113 | 45.2%
26 and above | 36 | 14.4%
Total | 250 | 100%

5.2.3 Educational Level

Most respondents (60.4%) were enrolled in a Bachelor's Degree program. The next largest group was composed of 16% enrolled in a Master's Degree program, followed by Associate Diploma holders (15.6%).

Table 3. Educational Level
Educational Level | Frequency | Percentage
Associate Diploma | 39 | 15.6%
Bachelor's Degree | 151 | 60.4%
Master's Degree | 40 | 16%
PhD | 8 | 3.2%
Other | 12 | 4.8%
Total | 250 | 100%

5.2.4 Field of Study

Students specializing in Information Technology made up the largest group (49.6%), followed by Business (18%), Engineering (17.2%), Science (9.6%), and other disciplines (5.6%).

Table 4. Field of Study
Field of Study | Frequency | Percentage
Information Technology | 124 | 49.6%
Business | 45 | 18%
Engineering | 43 | 17.2%
Science | 24 | 9.6%
Other | 14 | 5.6%
Total | 250 | 100%

5.3 Data Validity and Reliability

5.3.1 Validity Analysis

Content validity was established through an extensive literature review during the questionnaire design phase. Each question was developed based on established constructs from prior studies on AI adoption and cybersecurity effectiveness. The questionnaire was evaluated and approved by the supervisor before distribution.

Table 5. KMO and Bartlett's Test Results
Test | Value
KMO Measure of Sampling Adequacy | 0.933
Bartlett's Test (Approx. Chi-Square) | 2151.226
df | 66
Sig. | <.001
Table 6. Factor Loadings and AVE
Construct | Item | Factor Loading | AVE
AI-based Threat Detection | V14 | 0.860 | 0.767
 | V15 | 0.888 |
 | V16 | 0.903 |
 | V17 | 0.850 |
AI-based Incident Response | V18 | 0.841 | 0.788
 | V19 | 0.907 |
 | V20 | 0.907 |
 | V21 | 0.894 |
Technical Features of AI | V22 | 0.898 | 0.833
 | V23 | 0.923 |
 | V24 | 0.913 |
 | V25 | 0.916 |
Knowledge and Skills | V26 | 0.890 | 0.807
 | V27 | 0.891 |
 | V28 | 0.920 |
 | V29 | 0.893 |
Usefulness of AI | V30 | 0.903 | 0.840
 | V31 | 0.931 |
 | V32 | 0.914 |
 | V33 | 0.919 |
Ease of Use | V34 | 0.918 | 0.859
 | V35 | 0.945 |
 | V36 | 0.916 |
 | V37 | 0.928 |
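As a check on Table 6, the average variance extracted (AVE) of a construct is the mean of its squared standardized factor loadings. A short sketch (our own helper function, not SPSS output) using the AI-based Threat Detection loadings (V14–V17) reproduces the reported value of 0.767:

```python
def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized factor loadings."""
    return sum(l * l for l in loadings) / len(loadings)

# Loadings for AI-based Threat Detection (V14-V17) from Table 6
ave = average_variance_extracted([0.860, 0.888, 0.903, 0.850])
print(round(ave, 3))  # 0.767
```

The same helper applied to the AI-based Incident Response loadings yields 0.788, matching the table.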

5.3.2 Reliability Analysis

Cronbach's Alpha was used to test the internal consistency of the six constructs. Values of alpha varied between 0.897 and 0.935, suggesting a very high level of reliability in all constructs.

Table 7. Reliability Statistics
Construct | Items | Cronbach's Alpha | Interpretation
AI-based Threat Detection (ATIDC) | 4 | 0.935 | Excellent
AI-based Incident Response (AIIRC) | 4 | 0.926 | Excellent
Technical Features of AI (TFIS) | 4 | 0.901 | Excellent
Knowledge and Skills (SKILL) | 4 | 0.897 | Good
Usefulness of AI (USEFULNESS) | 4 | 0.909 | Excellent
Ease of Use (EASE) | 4 | 0.914 | Excellent
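Cronbach's alpha follows the standard formula alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores). A minimal implementation is sketched below, assuming item scores are arranged as one list per item (an arrangement we chose for illustration):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a k-item scale.
    `items` is a list of k lists, each holding one item's scores
    across all respondents (sample variances, ddof = 1)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))
```

Perfectly correlated items yield alpha = 1.0; alpha falls as item responses diverge, which is why values between 0.897 and 0.935 indicate strong internal consistency.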

5.4 Data Analysis

This section reports a descriptive analysis of the findings for all constructs from Sections B, C, and D. A mean score >3.0 indicates overall agreement; <3.0 indicates overall disagreement.

Construct mean summary: AI awareness 3.50; cybersecurity awareness 3.56; threat detection 3.58; incident response 3.49; overall opinion 3.64 (all on a five-point scale).

5.4.1 General Awareness of AI

The total mean was 3.50, with moderate agreement on the four items. The statement "I believe AI technologies are becoming increasingly important in cybersecurity practices" had the highest mean rating of 3.56, with 60.0% of participants agreeing or strongly agreeing. Overall, the trend is positive, indicating that awareness of AI among the sample is fairly solid.

5.4.2 General Awareness of Cybersecurity

The second construct yielded an overall mean value of 3.56. The item with the highest score was "I believe I am aware of cybersecurity policies in my organization or school" with a mean of 3.60 and 61.2% agreement. Most respondents possess a decent amount of cybersecurity knowledge and awareness.

5.4.3 AI-based Threat Detection Capabilities

Overall mean of 3.58. The highest-rated item was "I believe AI improves the accuracy of detecting cyber threats" (mean 3.60, 61.2% agreement). The consistency of responses, with agreement rates above 58% across all items, indicates respondents have a generally positive perception of AI's threat detection capabilities.

5.4.4 AI-based Incident Response Capabilities

The overall mean for this construct was the lowest at 3.49. "I believe using AI makes handling cybersecurity incidents easier" scored mean 3.55 with 55.2% agreement. The lowest item was "I believe AI can help an organization's systems respond faster to any cyber-attack incidents" with mean 3.42 and 53.2% agreement. This may suggest respondents are slightly less confident about AI's role during the response phase.

5.4.5 Technical Features of AI Systems

Average of 3.58. The strongest item was "I believe AI systems can adapt to new cybersecurity threats over time" with mean 3.63 and 63.6% agreement, highlighting the technical benefits of learning and evolving AI.

5.4.6 Knowledge and Skills

Overall average of 3.57. The statement "I believe lack of technical expertise can limit the effectiveness of AI in cybersecurity" had the highest average at 3.64 with 61.2% agreement. Respondents were less sure about organizations investing adequately in training—pointing to a practical challenge organizations need to address.

5.4.7 Perceived Usefulness of AI

Overall mean of 3.57. "I believe AI is useful for improving cybersecurity in the organization" and "I believe using AI improves the overall performance of cybersecurity operations" both scored mean 3.61 with agreement rates of 64.0% and 64.4% respectively.

5.4.8 Perceived Ease of Use

Ease of use had the lowest overall mean (3.40) among all constructs. "I believe it is easy to integrate AI tools into existing cybersecurity systems" had the lowest mean at 3.44. Lower ratings in learning simplicity and integration make sense given that many respondents were students who may not have first-hand experience of enterprise-grade AI security tools.

5.4.9 Effectiveness of Cybersecurity Threat Detection and Response

Overall mean of 3.52. "I believe AI enhances overall cybersecurity performance in the organization" had the highest mean at 3.58 and 58.8% agreement. The relative lack of consensus for false alarm reduction and attack prevention may demonstrate awareness that these outcomes depend on the quality of the AI model and the expertise available to maintain it.

5.4.10 Overall Opinion

Section D produced the highest construct mean in the entire survey at 3.64. "I support the use of AI in cybersecurity" scored the highest individual mean at 3.68 with 64.8% agreement. The consistently high agreement scores suggest that overall sentiment among respondents is strongly positive toward AI in cybersecurity.

5.5 Hypotheses Testing

Results from the multiple linear regression analysis indicate that the model is highly predictive of the dependent variable (F = 198.928, p < 0.001). USEFULNESS (B = 0.750, p < 0.001) and EASE (B = 0.272, p = 0.028) emerged as significant predictors, providing statistical support for the hypotheses that perceived usefulness and ease of use are critical to cybersecurity effectiveness.

Table 8. Results of Multiple Linear Regression
Variable | B | Std. Error | Beta | t-value | Sig. | VIF
Constant | -.149 | .327 | | -.455 | .650 |
Technical Features (TFIS) | .200 | .116 | .186 | 1.717 | .089 | 7.607
AI Incident Response (AIIRC) | .099 | .081 | .131 | 1.233 | .220 | 7.314
AI Threat Detection (ATIDC) | -.139 | .108 | -.140 | -1.282 | .202 | 7.710
Knowledge and Skills (SKILL) | .133 | .140 | .121 | .953 | .343 | 10.536
Ease of Use (EASE) | .272 | .122 | .178 | 2.226 | .028 | 4.161
Usefulness of AI (USEFULNESS) | .750 | .147 | .504 | 5.090 | <.001 | 6.373

5.6 Decision

The overall model is highly significant (F = 198.928, p < 0.001, R² = 0.918): the six independent variables together explain 91.8% of the variability in cybersecurity effectiveness. At the level of individual hypotheses, H5 (Perceived Usefulness, p < 0.001) and H6 (Perceived Ease of Use, p = 0.028) were found to be statistically significant. Hypotheses H1–H4 were not significant as individual predictors, likely due to high multicollinearity among the predictors (VIF values up to 10.5).

Table 9. Summary of Hypotheses Results
Hypothesis | Statement | Beta | Sig. | Result
H1 | AI Threat Detection → Effectiveness | -0.140 | 0.202 | Not Supported
H2 | AI Incident Response → Effectiveness | 0.131 | 0.220 | Not Supported
H3 | Technical Features → Effectiveness | 0.186 | 0.089 | Not Supported
H4 | Knowledge and Skills → Effectiveness | 0.121 | 0.343 | Not Supported
H5 | Perceived Usefulness → Effectiveness | 0.504 | <0.001 | Supported
H6 | Ease of Use → Effectiveness | 0.178 | 0.028 | Supported
Table 10. Model Summary
R | R Square | Adjusted R Square | Std. Error of the Estimate
0.958 | 0.918 | 0.916 | 0.284
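As a consistency check on Table 10, the adjusted R² can be recovered from R², the sample size (n = 250), and the number of predictors (p = 6) via the standard formula; the quick sketch below reproduces the reported 0.916.

```python
def adjusted_r_squared(r2, n, p):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Table 10 values: R^2 = 0.918, n = 250 respondents, p = 6 predictors
print(round(adjusted_r_squared(0.918, 250, 6), 3))  # 0.916
```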
// CH.06

Conclusion & Future Work

6.1 Introduction

This chapter presents the results of the research on the use of artificial intelligence to improve the detection of cybersecurity threats. The research was conducted at the University of Bahrain and involved a study to determine how students and Information Technology personnel view the use of AI-based tools for enhancing cybersecurity.

6.2 Summary of Findings

Data collection was completed using an online survey on Google Forms. In total, the survey received 250 valid responses from students and professionals from technology-related industries. Questions related to six key areas were asked: awareness of AI, awareness of cybersecurity, application of AI for threat detection, application of AI for incident response, technical factors, and organizational factors.

Most participants appeared to be aware of what AI is and how it is applied in cybersecurity, and most saw it as an advantage against cyber threats. The largest group of respondents (49.6%) came from the IT field. There was also a high level of cybersecurity awareness: participants were familiar with typical threats such as phishing and malware.

In terms of threat detection, the use of AI technology was positively assessed. Participants believed that this approach helped in detecting cyber threats faster and with higher efficiency compared to traditional methods. AI received similarly positive assessments regarding incident response. Concerning organizational factors, the higher participants rated AI's usefulness and accessibility, the more confidence they had in its overall effectiveness, closely corresponding to predictions made by Venkatesh et al. (2016) based on the UTAUT theory.

6.3 Conclusion

The current research evaluated how AI-based capabilities are perceived in relation to cybersecurity effectiveness, with a particular focus on the Bahrain context. Survey data from 250 participants revealed generally positive attitudes, with mean scores ranging from 3.40 to 3.64 across all constructs.

Multiple linear regression analysis (F = 198.928, p < 0.001, R² = 0.918) demonstrated that the six constructs collectively explain 91.8% of the variance in perceived cybersecurity effectiveness. Among the six hypotheses tested, two were statistically supported: H5 (Perceived Usefulness, β = 0.504, p < 0.001) and H6 (Perceived Ease of Use, β = 0.178, p = 0.028).

The remaining four hypotheses (H1–H4) were not statistically significant as individual predictors, most likely attributable to multicollinearity among the independent variables. Nevertheless, the positive Beta values for H2–H4 suggest directional relationships consistent with the theoretical framework, although the Beta for H1 was slightly negative.

These findings align with the UTAUT framework advanced by Venkatesh et al. (2016). Organizations that invest in AI technology without ensuring that end-users find it useful and easy to work with are unlikely to realize its full potential.

6.4 Project Contributions and Limitations

6.4.1 Contributions to the Cybersecurity Field

6.4.2 Limitations of the Study

6.5 Recommendations

6.5.1 Practical Recommendations for IT Professionals

6.5.2 Academic Recommendations for Cybersecurity Education

6.5.3 Policy Recommendations for Organizations in Bahrain

6.6 Future Work

// REFS

References

References managed via mybib.com

// APPENDIX

Appendix A & B — Questionnaire

SURVEY ON

AI in Cybersecurity: Enhancing Threat Detection and Response

This survey is conducted as part of a senior research project examining the integration of Artificial Intelligence (AI) in cybersecurity. The purpose of this study is to explore how AI-based technologies enhance threat detection, incident response, and overall cybersecurity effectiveness within organizations.

Prepared by: Maryam Abdulla (202100437) & Emaan Rashid Latif (202109038)
Supervised by: Dr. Ali Zolait

Section A: Basic Information
Gender: Male / Female
Age: Under 18 / 18–20 / 20–25 / 26 and above
Type of degree: Associate Diploma / Bachelor's / Master's / PhD / Other
Field of study: Information Technology / Engineering / Business / Science / Other
Section B: General Awareness of AI & Cybersecurity

Scale:

1 – Strongly Disagree 2 – Disagree 3 – Neutral 4 – Agree 5 – Strongly Agree

B1. General Awareness of Artificial Intelligence

A. I believe I know what Artificial Intelligence (AI) is
B. I believe I know that AI is used in cybersecurity
C. I believe I understand that AI can help detect online threats
D. I believe AI technologies are becoming increasingly important in cybersecurity practices

B2. General Awareness of Cybersecurity

A. I believe I know what Cybersecurity means
B. I believe I am aware of common cybersecurity threats (e.g., phishing, malware, hacking)
C. I believe I take steps to protect my personal information online
D. I believe I am aware of cybersecurity policies in my organization or school
Section C: AI in Cybersecurity (Research Constructs)

C1. AI-based Threat Detection Capability

A. I believe AI can help to find cyber threats faster than traditional methods
B. I believe AI can notice unusual online activities quickly
C. I believe AI can help to detect new cyber threats that were not seen before
D. I believe AI improves the accuracy of detecting cyber threats

C2. AI-based Incident Response Capabilities

A. I believe AI can help an organization's systems respond faster to any cyber attack incidents
B. I believe AI-based incident response can reduce the damage caused by cyber attacks
C. I believe using AI makes handling cybersecurity incidents easier
D. I believe AI helps organizations take automated actions during cyber incidents

C3. Technical Features of AI Systems

A. I believe AI systems are accurate and effective in addressing cybersecurity threats
B. I believe AI systems are reliable, which makes cybersecurity better
C. I believe AI systems can process large amounts of data and work well
D. I believe AI systems can adapt to new cybersecurity threats over time

C4. Knowledge and Skills

A. I believe organizations have trained people who use AI in cybersecurity properly
B. I believe technical knowledge of AI capabilities can increase the benefits of using AI
C. I believe organizations provide sufficient training for employees to use AI in cybersecurity
D. I believe lack of technical expertise can limit the effectiveness of AI in cybersecurity

C5. Usefulness of AI

A. I believe AI is useful for improving cybersecurity in the organization
B. I believe AI helps to make better security decisions
C. I believe AI adds value to cybersecurity tasks that are achieved in the organization
D. I believe using AI improves the overall performance of cybersecurity operations

C6. Ease of Use

A. I believe AI cybersecurity tools are easy to use
B. I believe learning to use AI tools for cybersecurity is simple
C. I believe using AI-based tools is more comfortable
D. I believe it is easy to integrate AI tools into existing cybersecurity systems

C7. Effectiveness of Cybersecurity Threat Detection and Response

A. I believe AI improves the speed of detecting cybersecurity threats
B. I believe AI reduces the number of false alarms in cybersecurity systems
C. I believe AI improves the organization's ability to prevent cyber attacks
D. I believe AI enhances overall cybersecurity performance in the organization
Section D: Overall Opinion
A. I support the use of AI in cybersecurity
B. I believe AI will be important for the future of cybersecurity
C. I believe organizations should invest more in AI technologies for cybersecurity
D. I would recommend the adoption of AI tools for cybersecurity purposes