Enhancing Threat Detection and Response
This research project examines the application of artificial intelligence (AI) in detecting and preventing cyber threats, focusing specifically on the context of Bahrain. As Buczak and Guven (2016) point out, modern cyberattacks are growing highly complex, making it virtually impossible to rely on manual human effort alone for threat detection and mitigation. This shift makes the development of automated, intelligent systems a necessity.
To explore this, we used a mixed-methods approach, combining a thorough review of existing academic literature with empirical data from a survey of 250 IT students and professionals in Bahrain. This research assessed local user awareness, trust, and perceived reliance on AI-driven security tools, alongside the organizational drivers behind adopting these technologies.
Our survey data shows that respondents hold a generally positive attitude toward the integration of AI in security operations, with an overall attitude mean score of 3.64 out of 5.00. Although 60% of the respondents view AI as an essential asset in modern security, they also highlighted significant implementation challenges. These concerns mirror the warnings of Guidotti et al. (2018) regarding the lack of algorithmic transparency, accuracy issues, and the risk of human over-reliance on automated systems. Similarly, Nallapareddy and Katta (2025) argue that AI is most effective when deployed to support and augment human analysts rather than to replace them entirely.
This research has been completed thanks to Allah, the Almighty's blessings. We would like to express our deepest gratitude and appreciation to everyone who supported and contributed to the successful completion of this senior project research.
Our sincere thanks go to Dr. Ali Zolait for his continuous guidance, encouragement, and valuable feedback throughout every stage of this work. His patience, dedication, and insightful advice have been instrumental in shaping and improving the quality of our research.
We would also like to extend our heartfelt appreciation to our families and friends for their endless support, understanding, and motivation during this journey. Finally, we thank all the respondents who took the time to participate and provide valuable input that helped us accomplish our research objectives.
In this section, the basics of Artificial Intelligence (AI) in cybersecurity are introduced, with a focus on real-life applications of AI in digital threat detection and response. This background is intended to give readers the context they need to appreciate the current study and the circumstances in which it was conducted.
Cybersecurity is described by NIST (2023) as the process of protecting computers, systems, networks, and data from attacks or any damage, thereby ensuring uninterrupted functioning and safety. With most organizations nowadays—including financial institutions, healthcare facilities, educational institutions, and governmental bodies—utilizing computers for various purposes, cybersecurity has become one of the most important tasks within IT.
NIST (2023) noted that the primary objective of cybersecurity is to preserve the CIA triad: confidentiality, integrity, and availability. "Confidentiality" refers to keeping data private; "integrity" means data is not altered without authorization; and "availability" means systems and data are accessible when required.
Furthermore, Buczak and Guven (2016) reported that cyber threats have evolved and become harder to manage in recent years. Phishing attacks, malware, ransomware, and large-scale data breaches are becoming more widespread and damaging. The typical security technologies that depend on known attack signatures and fixed rules are no longer sufficient to address these new types of threats.
Over the years, artificial intelligence has undergone significant transformation. Early AI systems were predominantly rule-based, requiring developers to manually program all the logic needed for execution. These systems performed well on simple, structured problems; however, they broke down when presented with novel or unexpected inputs.
According to Shone et al. (2018), all of that changed with the introduction of machine learning. Machine learning enables systems to learn progressively from data rather than following a fixed set of instructions. Deep learning followed, a subfield of machine learning built on neural networks capable of processing highly complex data. Thanks to these developments, AI is far more adaptable and effective in handling rapidly evolving environments—a critical need in cybersecurity.
Nallapareddy and Katta (2025) noted that in the cybersecurity world, this represents a shift from simply recognising signature-based threats to learning and adapting. Older systems could only detect familiar attacks, whereas newer AI-powered systems can identify unusual patterns even without prior exposure to a specific attack type.
AI is used for several specific jobs in the field of cybersecurity. The most apparent is threat detection; AI systems can more quickly and effectively process large amounts of data and identify anything that looks suspicious.
Nallapareddy and Katta (2025) noted that AI not only detects but is also involved in incident response. Once a threat is verified, AI can help determine how dangerous the threat is and how affected systems are, and in some instances even automatically respond by isolating the compromised device or blocking suspicious traffic.
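The kind of automated triage described above can be illustrated with a small sketch. The scoring fields, weights, and thresholds below are invented for illustration and are not taken from any specific tool discussed in this research; they show only the general shape of scoring a threat and mapping severity to a response.

```python
# Illustrative alert-triage sketch. All field names, weights, and
# thresholds are hypothetical, chosen only to demonstrate the idea of
# severity scoring followed by an automated response decision.

def triage(alert: dict) -> str:
    """Score an alert's severity and choose a response action."""
    score = 0
    if alert.get("known_malware"):
        score += 50
    if alert.get("lateral_movement"):
        score += 30
    score += min(alert.get("failed_logins", 0), 20)

    if score >= 70:
        return "isolate_host"      # quarantine the compromised device
    if score >= 40:
        return "block_traffic"     # block the suspicious connection
    return "flag_for_analyst"      # leave the decision to a human

alert = {"known_malware": True, "lateral_movement": True, "failed_logins": 3}
print(triage(alert))  # isolate_host (score 50 + 30 + 3 = 83)
```

In a real deployment the score would come from a trained model rather than hand-set weights, but the final step, mapping severity to containment actions such as isolating a device or blocking traffic, follows this pattern.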
Furthermore, Buczak and Guven (2016) reported that AI is helping security teams reduce false alarms. AI can filter out irrelevant noise and direct attention to genuine threats. In summary, AI is not a replacement for human decision-making in cybersecurity; rather, it is a tool for transforming security operations into a faster, more accurate, and more efficient process.
Cybersecurity threats have become more sophisticated, faster, and harder to detect using traditional defence mechanisms. Conventional security systems often rely on static rules, manual monitoring, and reactive responses, making them insufficient against modern attacks such as zero-day exploits, advanced persistent threats (APTs), and large-scale data breaches. While AI offers promising capabilities—such as real-time threat detection, automated response, and predictive analysis—its integration into cybersecurity is not without challenges. Organizations face uncertainties regarding the accuracy, reliability, and ethical implications of AI-based systems, as well as concerns about over-reliance on automation.
Therefore, this research seeks to:
This research is significant because cyberattacks have increased in frequency and severity. Buczak and Guven (2016) state that existing systems rely on fixed signatures or rules for threat detection, a strategy that does not work well against new types of attacks. AI—in particular, machine learning and deep learning—provides an approach that can learn over time and detect threats that signature-based methods may fail to identify.
Shone et al. (2018) showed that AI-based intrusion detection systems (IDS) can process vast amounts of network traffic and system logs faster than human analysts and detect suspicious activity sooner, giving security personnel more time to respond.
Furthermore, Nallapareddy and Katta (2025) discussed how AI also helps to speed up incident response. Automated response systems can quickly contain threats before severe damage is done; this level of speed is not easily achievable with manual operations in a large, complex environment.
While there is already significant research around the use of AI in cybersecurity, there are some gaps that are not well represented. Shone et al. (2018) and Tang et al. (2020) both discovered that while many current research works are primarily concerned with enhancing detection accuracy, they often fail to consider the practical problems of deployment within an organization's environment.
Capuano et al. (2022) also discovered that there is a lack of explainability and trust. If there are no tools in place to make AI decisions more explainable, then it will be difficult for organizations to fully trust and rely on AI-based decision-making.
Furthermore, Ahmad et al. (2019) found that most studies view threat detection and incident response as two distinct entities. There has been little research that examines the use of AI in an integrated workflow, including detection, response, and continuous learning. Finally, there is not enough research that focuses on human and organizational issues, such as the willingness of security teams to embrace AI and the level of training required.
Nallapareddy and Katta (2025) argued that traditional security tools struggle to defend against previously unseen attacks because they rely on fixed rules or known signatures. In contrast, AI-based systems use machine learning and deep learning algorithms that can identify abnormal or harmful activity across systems, networks, and user behaviour, even when that activity does not match any previously known pattern.
Shone et al. (2018) classified network traffic as normal or malicious using supervised learning algorithms such as Support Vector Machines or Random Forest. Deep learning abilities like Deep Neural Networks and Long Short-Term Memory (LSTM) can establish significantly more intricate patterns and abnormalities concealed in massive datasets. Capable of learning from data, AI models can identify not only known threats but new and unknown ones as well, including zero-day attacks.
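As a toy illustration of this data-driven classification (not the actual SVM, Random Forest, or deep learning models cited above), the sketch below learns a centroid for labelled "normal" and "malicious" traffic and classifies a new flow by proximity. The two flow features and all values are invented.

```python
from statistics import mean

# Toy nearest-centroid classifier over two invented "flow features":
# (packets per second, bytes per packet). A minimal stand-in for the
# supervised learners (SVM, Random Forest) discussed in the text.

normal = [(10, 500), (12, 480), (9, 510)]     # labelled benign flows
attack = [(200, 60), (180, 80), (220, 50)]    # labelled malicious flows

def centroid(rows):
    """Component-wise mean of a list of feature tuples."""
    return tuple(mean(col) for col in zip(*rows))

C_NORMAL, C_ATTACK = centroid(normal), centroid(attack)

def classify(flow):
    """Assign the label of the nearer centroid (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(flow, c))
    return "malicious" if dist2(C_ATTACK) < dist2(C_NORMAL) else "normal"

print(classify((190, 70)))   # malicious
print(classify((11, 495)))   # normal
```

The key property this mirrors is that the decision boundary is learned from labelled data rather than written as a fixed signature, which is what lets such models generalize to variants they have not seen verbatim.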
Dharmesh et al. (2023) found that another key component of AI-driven threat detection is anomaly detection. AI can establish a profile of how a network should operate and then look for activities that are abnormal. Arif et al. (2023) explored the use of AI to be predictive—predicting the direction of attack based on past data to then fortify the system before the attack.
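The profile-then-deviate idea behind anomaly detection can be sketched very simply: learn what "normal" looks like from a baseline window, then flag observations that deviate too far from it. The request counts and the 3-sigma threshold below are invented for illustration.

```python
from statistics import mean, stdev

# Minimal anomaly-detection sketch: build a profile of normal activity
# from a baseline window, then flag observations more than three
# standard deviations from the mean. All counts are invented.

baseline = [102, 98, 110, 95, 105, 101, 99, 104]   # requests/min, normal period
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observation, threshold=3.0):
    """True when the observation lies outside the learned profile."""
    return abs(observation - mu) / sigma > threshold

print(is_anomalous(103))   # False: consistent with the baseline
print(is_anomalous(480))   # True: large deviation, worth an alert
```

Production systems model many features jointly and use far richer statistics, but the principle is the same: no signature of the attack is needed, only a model of normal behaviour.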
Kreinbrink (2019) noted challenges such as imbalanced datasets, biased models, and the complexity of deep learning systems. This is why many researchers have recommended combining AI analysis with human review, pairing machine speed and accuracy with human judgment and oversight.
Figure 1. AI-Driven Threat Detection Models (Iqbal H. Sarker, 2022)
Incident response encompasses the formal incident handling process, which involves the following stages: Preparation, Detection, Containment, Eradication, Recovery, and Post-incident Review. With the inclusion of AI, all of these can be quicker and more accurate.
Nallapareddy and Katta (2025) grounded this theoretical model in the NIST Incident Response Life Cycle. AI can automate the early detection stages and prioritize warnings, reducing the time needed to confirm that an incident is genuine.
According to NIST (2023), organizations are now incorporating Security Orchestration, Automation, and Response (SOAR) tools into their incident response strategies. SOAR tools, when integrated with artificial intelligence, can correlate security alerts from different sources, provide context for these alerts, and suggest or even carry out necessary remediation measures, reducing key performance indicators such as mean time to detect (MTTD) and mean time to respond (MTTR).
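To make the two performance indicators concrete, the sketch below computes MTTD (average time from intrusion to detection) and MTTR (average time from detection to containment) over a hypothetical incident log; all timestamps are invented.

```python
from datetime import datetime

# Hypothetical incident timeline illustrating how MTTD (mean time to
# detect) and MTTR (mean time to respond) are computed. Timestamps
# are invented for the example.

incidents = [
    # (intrusion began,     detected,            contained)
    ("2025-01-05 02:00", "2025-01-05 02:45", "2025-01-05 04:00"),
    ("2025-01-12 13:30", "2025-01-12 13:45", "2025-01-12 14:30"),
]

def minutes_between(a, b):
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(b, fmt) - datetime.strptime(a, fmt)
    return delta.total_seconds() / 60

mttd = sum(minutes_between(start, det) for start, det, _ in incidents) / len(incidents)
mttr = sum(minutes_between(det, end) for _, det, end in incidents) / len(incidents)
print(f"MTTD: {mttd:.0f} min, MTTR: {mttr:.0f} min")  # MTTD: 30 min, MTTR: 60 min
```

When AI shortens the detection stage (the first interval) or automates containment (the second), both averages fall, which is precisely the improvement the SOAR literature reports.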
Alevizos and Dekker (2024) highlighted that a continuous learning loop represents one of the more significant developments in AI-driven incident response. Once a particular incident has been resolved, data from the event can be used to retrain the underlying AI model so that it detects and responds better to similar incidents in the future.
Data for this study were collected from two main sources: online surveys and a systematic literature review.
(a) Primary data: Google Forms was used to design an online questionnaire, which was distributed via digital means. Participants rated their agreement with statements on a five-point Likert-type scale. Responses were tallied and sorted automatically and subsequently checked to eliminate missing or inconsistent data from the analysis.
(b) Secondary data: Academic journals and conference papers published from 2016 to 2025 were reviewed from sources including IEEE Xplore, ScienceDirect, SpringerLink, ACM Digital Library, MDPI Open Access, Wiley Online Library, NIST Official Website, ResearchGate, ProQuest, Google Scholar, AIS Electronic Library, and IJMDSA Journal Portal.
Survey responses were summarized using simple descriptive techniques. For background questions, the researchers compiled frequency counts and percentages of respondents. For Likert-scale questions, mean scores were calculated to show whether the group, on average, agreed or disagreed with each statement. Charts were produced so the patterns would be easy to see.
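The summaries described above amount to frequency counts for background questions and mean scores for Likert items. A minimal sketch, using invented responses (the real dataset is not reproduced here), with Likert answers coded 1 ("Strongly Disagree") to 5 ("Strongly Agree"):

```python
from collections import Counter
from statistics import mean

# Sketch of the descriptive analysis: percentages for a background
# question and a mean score for one Likert item. All responses are
# invented for illustration.

field_of_study = ["IS", "CS", "CS", "Cyber", "IS", "CS"]
likert_item = [4, 5, 3, 4, 4, 2]   # one attitude statement, coded 1-5

counts = Counter(field_of_study)
percentages = {k: 100 * v / len(field_of_study) for k, v in counts.items()}
print(percentages)                                    # {'IS': 33.3..., 'CS': 50.0, 'Cyber': 16.6...}
print(f"item mean: {mean(likert_item):.2f} / 5.00")   # item mean: 3.67 / 5.00
```

Construct-level scores in the study follow the same pattern, averaging the item means belonging to each construct.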
All data were collected ethically. Participation in the survey was voluntary and anonymous, and no personal or sensitive information was recorded. All secondary sources were properly cited following academic standards.
This research consists of a total of six chapters:
Figure 2. Research Structure
As defined by NIST (2023), cybersecurity refers to the processes adopted to safeguard computers, computer networks, and any information stored within such devices against unauthorized access, destruction, or disruption. Core concepts in this area include confidentiality (keeping data confidential), integrity (protecting data from unauthorized modification), and availability (ensuring accessibility of the data).
According to a panel of experts at NIST (2023), cybersecurity involves protecting systems in such a manner as to ensure the safety of stored data, the uninterrupted operation of computing systems, and the continuity of business operations. Security includes both policy and behavior within organizations.
According to NIST (2023), the significance of cybersecurity has increased tremendously because most organizations now depend on technology to store their critical information and conduct their businesses. A cyberattack can lead to devastating effects such as financial losses, data destruction, and legal ramifications. Health organizations, financial institutions, educational organizations, and even government agencies are at the highest risk of cyberattacks because of the value of their information.
Shilpa et al. (2024) noted that a poor security system creates great risks of being attacked or hacked as it makes it relatively simple to steal valuable data from an organization or cause irreparable damage to their IT infrastructure, which, in turn, leads to a loss of customer and partner confidence.
The cyber threats that modern organizational networks are exposed to are complex. Buczak and Guven (2016) explained malware as malicious software designed to affect operations or compromise data integrity. The growing threat of ransomware is described by NIST (2023) as hostile encryption of file systems for the purpose of financial extortion. Russell and Norvig (2021) explained Advanced Persistent Threats (APTs) as actors that can break into systems for extended periods to gain strategic intelligence. Zero-day exploits are covered by Apruzzese et al. (2020)—they take advantage of vulnerabilities in systems prior to when vendor patches are released.
The concept of incident response refers to the sequential approach employed by organizations in reaction to a detected cyberattack. As per NIST (2023), the procedure involves the following six stages: preparation, detection and analysis, containment, eradication, recovery, and post-incident activity. This set of actions is critical for minimizing the potential harm inflicted upon businesses while restoring their operational capabilities promptly.
Nallapareddy and Katta (2025) argued that successful incident response requires security technologies combined with the involvement of human decision-makers. The existing approach is insufficient because it relies heavily on manual intervention, leading to delays in addressing cyberattacks, particularly when they are massive or intricate.
Buczak and Guven (2016) argued that conventional security software, while previously reliable, is no longer capable of coping with contemporary cybersecurity challenges. Most traditional applications rely on signature-based detection, which can only identify malware that has already been catalogued in its database. Consequently, when attackers develop new malware or modify existing variants, conventional security tools fail to detect the threat.
Conventional security tools are also known for excessive alerting, issuing large volumes of alerts, many of which are false positives. Security personnel become fatigued after receiving thousands of false alerts daily. Furthermore, conventional security tools lack interoperability: they are designed to operate independently, making it challenging to coordinate security team efforts across the entire infrastructure.
Artificial Intelligence (AI) has become one of the most important technologies in modern computing. It focuses on enabling machines to perform tasks that normally require human intelligence, such as learning, reasoning, problem-solving, and decision-making.
In simple terms, Artificial Intelligence (AI) is the way we make machines—especially computer systems—copy the way human intelligence works. This basically involves three main steps: learning from new data, using logic to solve problems, and being able to fix its own mistakes. Russell and Norvig (2021, p. 4) argued that AI is focused on the design of "intelligent agents"—smart programs that can look at what is happening in their environment and then take the right actions to reach specific goals.
The development of artificial intelligence has seen several stages. At its early stages, AI was very basic and ran based on a predefined rule set. In cases where the input did not align with the rule set, it could not provide an output. Buczak and Guven (2016) argued that the emergence of machine learning revolutionized this paradigm, enabling computational engines to identify intrinsic patterns in data without explicit human programming. Presently, AI is more advanced through deep learning, which involves the use of artificial neural networks that mimic some cognitive functions in human beings.
Different approaches can be used by AI based on the goals of the application. Machine learning comprises three main types of learning: supervised learning (learning from labelled examples), unsupervised learning (finding structure in unlabelled data), and reinforcement learning (learning through trial and feedback).
Shone et al. (2018) discovered that deep learning uses neural networks to work with highly sophisticated data—architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) help computers detect images, recognize voice patterns, and identify suspicious activities within a network.
Tang et al. (2020) discussed that AI is being incorporated into different industries: in the medical industry for disease detection and health monitoring; in finance and banking for fraud detection and risk assessment; in transport for traffic management and self-driving cars; in manufacturing for predictive maintenance.
Nallapareddy and Katta (2025) noted that in cybersecurity, AI plays an essential role in developing defense systems—helping in detecting potential threats, analyzing network behaviors, and stopping cyberattacks automatically.
According to Buczak and Guven (2016), for an AI model to be effective, a lot of unbiased data needs to be fed into the model so that it learns to produce reliable output. Corrupted data will lead to a biased model that makes mistakes in practice even if it worked well in testing scenarios.
Guidotti et al. (2018) argued that the second factor is transparency in AI systems—being able to explain why the AI came to a particular conclusion. Many intelligent machines, especially those employing deep learning, have been referred to as "black boxes" because the reasons for the decisions are hard to comprehend. This creates mistrust, which can be especially problematic in areas like cybersecurity where reliability is key.
AI and cybersecurity are closely linked because of the massive amount of data computers produce every day. AI technologies are well suited to processing such massive amounts of data because their algorithms can detect hidden trends as well as unusual behavior within it.
Alevizos and Dekker (2024) said that one of the strengths of using AI for cybersecurity purposes is its ability to adapt to changes. Static rules are characteristic of traditional security methods; therefore, any changes in threats are challenging to incorporate into the existing systems. On the contrary, an AI-based security system can learn from each threat and update its defenses accordingly.
Buczak and Guven (2016) proposed that AI performs better at detecting threats since it uses data-driven approaches to identify hazardous patterns, even when the attack type has never been observed before.
Shone et al. (2018) noted that AI systems can simultaneously analyze old and current data to detect if the network traffic is legitimate or malicious. Deep learning techniques can find increasingly subtle patterns hidden within large datasets to improve detection results in high-traffic networks where attackers use sophisticated methods to evade detection.
Tang et al. (2020) discussed anomaly detection—developing a model of what typical network traffic should look like and then setting up alerts whenever any deviation occurs. Such a method is effective in detecting insider threats and zero-day attacks.
Nallapareddy and Katta (2025) report that the capabilities of AI extend beyond threat detection. The remediation process is usually lengthy and involves tedious work for people; AI assists by automating activities and decision-making, leading to faster action and better-informed responses. According to NIST (2023), security incidents should be triaged and prioritized depending on their severity and possible implications. AI can determine the scope of security incidents, correlate such incidents to network activities, and implement automatic actions like blocking malicious traffic and isolating affected systems.
AI is also extremely helpful in retrospective analysis and understanding the root causes of intrusions. This information is vital in developing preventive actions to minimize the chances of future attacks. Nevertheless, the involvement of humans is still crucial—people must be able to verify and check AI decisions for correctness.
Buczak and Guven (2016) reported several important technical considerations that influence the efficiency of AI for threat detection. First, the quality of data input is of crucial importance—large amounts of reliable data are required, while inaccurate data may result in flawed conclusions and/or a high false positive rate.
Guidotti et al. (2018) discovered that the complexity of the AI model is an essential factor. More complex deep-learning algorithms are likely to have better results in terms of threat identification, yet they might require significant computing resources and may not be easily deployable in settings with limited computational capacity.
Apruzzese et al. (2020) noted that the robustness of a tool needs to be assessed—the capability of the system to resist manipulation attempts by an adversary who tries to mislead the AI by presenting carefully chosen inputs.
According to Capuano et al. (2022), the current primary focus lies on developing Explainable AI, which seeks to make the rationale behind computations comprehensible to security experts. Improved explainability of alerts' reasoning will increase confidence in the system among human users.
Arif et al. (2023) discussed predictive AI—using past data to predict when and where future attacks might occur in order to better prepare for them. Researchers are also refining AI algorithms to strengthen cloud computing and network security.
Despite impressive achievements, there are many hurdles—the topics of privacy, discrimination, and biased algorithms should not be overlooked. Another alarming trend is the use of artificial intelligence by adversaries for conducting more advanced attacks, thus fueling what Nallapareddy and Katta (2025) describe as a digital arms race.
This research is based on a research model which consists of three main variable categories:
The conceptual framework captures the connection between the different aspects involved in the research study. The central assumption postulates that AI can significantly improve the security situation by helping teams identify risks faster and more accurately, respond to attacks more quickly, and maintain efficiency under difficult circumstances.
According to Venkatesh et al. (2016) and Capuano et al. (2022), AI will deliver results effectively only when the workers have been professionally trained and when the usefulness and usability of the AI tools are clear to the team. As Tang et al. (2020) suggest, this combination of human factors and technical skills represents a popular and successful strategy in studying AI applications within organizations.
Figure 3. Research Framework
AI-based Threat Detection Capabilities: Refers to the ability of AI-powered devices to recognize cyber threats using data analysis. Traditional software is limited to identifying previously documented attacks, while AI leverages machine learning. Shone et al. (2018) explore the possibility of using sophisticated learning algorithms like deep learning to uncover hidden threats in big data. According to Dharmesh et al. (2023), anomaly detection is useful in recognizing new zero-day attacks and internal threats.
AI-based Incident Response Capabilities: Refers to the ability of AI systems to help or even react on their own when faced with a threat—including ranking alerts based on importance, correlating incidents, and performing automatic actions such as removing infiltrated hosts or blocking suspicious traffic (Nallapareddy and Katta, 2025). Moreover, AI allows implementing a feedback loop for incident management as illustrated by Alevizos and Dekker (2024).
Technical Factors of AI Systems: Includes data quality and accuracy, the ability to scale, and whether the tool is well-integrated into the existing security architecture. According to Buczak and Guven (2016), AI systems are only as good as the data they receive. Apruzzese et al. (2020) caution against building vulnerable models that can be easily manipulated by providing malicious inputs.
Effectiveness of Cybersecurity Threat Detection and Response: Defined as the level of capability within a firm to detect any cyber threats and effectively react to security threats to avoid damage and get back to regular operations. Signs of effectiveness include improvement in the threat detection rate, shortened incident response time, and a decrease in cases of false positives. According to Shone et al. (2018) and Nallapareddy and Katta (2025), competencies developed through the use of AI have helped positively affect these areas through fast processing and automated activities.
Organizational Knowledge and Skills: The level of technical understanding and expertise that security teams possess in relation to AI-based cybersecurity tools. As Capuano et al. (2022) noted, teams with high levels of AI knowledge are better positioned to unlock the full potential of these tools, while untrained teams may mismanage even the most advanced systems.
Perceived Usefulness of AI: The degree to which security professionals believe that using AI tools will improve their job performance. As Venkatesh et al. (2016) highlighted, when users genuinely believe a tool helps them detect threats faster or reduces false alarms, they are more likely to trust it and use it consistently.
Perceived Ease of Use: The degree to which security professionals believe that using AI cybersecurity tools requires minimal effort. When a tool is perceived as intuitive and user-friendly, people are more likely to adopt it into their daily routines and trust it for critical decisions.
The present chapter provides an overview of the design and methodology used in the research. Overall, this chapter is the roadmap of procedures employed to determine the influence of artificial intelligence on improving cybersecurity. To collect quantitative data, a questionnaire survey was used, which allowed collecting opinions and knowledge related to the subject matter from participants.
This research uses a quantitative research methodology. Emphasis is placed on the use of numbers to observe trends in the collected data. The data was collected with the help of a questionnaire distributed on Google Forms. As per Saunders et al. (2019), this methodology allows us to convert different perspectives into percentages and visualize them easily.
In terms of epistemology, our research adopts the positivist approach. This means that knowledge is derived from observation and measurable information rather than from people's subjective experiences or narratives. The researchers utilized a Likert scale ranging from "Strongly Agree" to "Strongly Disagree" to stay objective. This technique helps transform subjective information into objective statistics for further calculations.
We employed a structured questionnaire developed using Google Forms. The survey was designed to collect all information in sections. Initially, demographic questions were asked to find out participants' age, gender, and fields of study. The largest section was devoted to attitudes towards AI technology in cybersecurity—participants were asked about its efficiency in threat detection and counterattacks as well as its usefulness overall. Almost all questions were phrased in the form of statements evaluated on a five-point Likert scale ranging from "Strongly Agree" to "Strongly Disagree."
Google Forms was selected for developing and distributing our questionnaire. Since the survey was distributed online via an easy-to-share link, we were able to reach a considerable number of respondents—250 in total.
Although the survey method proved beneficial in obtaining the needed data, certain limitations should be mentioned. Due to the use of closed questions, respondents could not give more elaborate explanations of their responses. Additionally, it is possible that respondents answered quickly without reading all the statements carefully. Finally, since the survey was conducted via the Internet, respondents' differing interpretations of the statements might affect the results, with no opportunity for clarification.
The target population of this research is people who have a certain degree of knowledge or experience in artificial intelligence and cybersecurity in the academic or professional sphere—university students studying IT-related subjects including Information Systems, Computer Science, Engineering, and Cybersecurity, as well as professionals working in IT and cybersecurity. The purposeful choice was made since members of this population are more capable of giving knowledgeable views regarding the use of AI in the environment of cybersecurity.
We have used convenience sampling as our sampling technique—a non-probability sampling technique whereby the subjects most easily accessible to the researcher are selected. We distributed the questionnaire through WhatsApp groups consisting of university students and anyone with knowledge about IT or cybersecurity. According to the suggestion by Creswell & Creswell (2018), convenience sampling is often used in student-conducted scholarly studies.
The researchers had collected 250 responses by the time data collection was stopped. Each submission was reviewed before proceeding with analysis to ensure that it was complete and that the individual had engaged with the questions. Any submissions that left mandatory questions blank were to be deleted, as were any that resembled random marking. Following this review, all 250 responses were validated and retained for analysis.
The data collection tool was Google Forms, a free online survey and questionnaire platform. Questionnaires built with it can be distributed instantly through messaging applications such as WhatsApp, and all responses are stored automatically in a single location, with no manual collection required.
Primary data collection was conducted over a six-month period from November 1, 2025, to May 1, 2026. The questionnaire was organized into four parts.
The questionnaire was distributed via WhatsApp because it was the fastest and most direct way of reaching individuals with the appropriate backgrounds. The first page contained an introductory paragraph stating that participation was voluntary, that responses would be kept fully anonymous, and that they would be used only for academic research. Respondents were also informed that the survey would take approximately five to seven minutes and that there were no right or wrong answers. The final analysis was based on 250 valid and complete responses.
The framework shows how the key factors that determine the effectiveness of cybersecurity threat detection and response are interconnected. The variables associated with H1–H3 are hypothesized to directly affect the dependent variable, while three supporting variables associated with H4–H6 capture human and usability aspects. Overall, the framework emphasizes both technical and human factors.
Before the analysis began, all survey responses were examined to ensure that the collected data was clean and reliable; Creswell and Creswell (2018) highlight data cleaning as an essential step before analysis. Each submission was checked for responses to all mandatory questions, and submissions showing signs of inattention, such as selecting the same option throughout the scale questions (known as "straight-lining"), were removed.
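The straight-lining screen described above can be sketched in a few lines. The record layout and data below are hypothetical, not the actual survey export:

```python
# Flag "straight-lined" submissions where every Likert item received the
# same answer, a common sign the respondent did not read the statements.
# The field names and values here are invented for illustration.

def is_straight_lined(likert_answers):
    """Return True if all scale answers are identical."""
    return len(set(likert_answers)) == 1

responses = [
    {"id": 1, "likert": [4, 5, 4, 4, 3, 4]},   # varied -> keep
    {"id": 2, "likert": [3, 3, 3, 3, 3, 3]},   # straight-lined -> drop
]

clean = [r for r in responses if not is_straight_lined(r["likert"])]
print([r["id"] for r in clean])  # -> [1]
```

A real screen would likely combine this with a completion-time check, but the set-based test captures the core rule.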
The first stage of the analysis used descriptive statistics, summarizing and describing the data without drawing statistical inferences. For the demographic variables, frequency counts and percentages were computed; for the Likert-scale sections, means were computed for individual statements and for each construct to gauge the level of agreement or disagreement relative to the scale midpoint.
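As a minimal sketch of this descriptive procedure, using the gender counts reported later in the chapter but an invented Likert item:

```python
from collections import Counter
from statistics import mean

# Gender counts mirror the reported sample (159 female, 91 male, n = 250);
# the Likert answers are illustrative only.
gender = ["Female"] * 159 + ["Male"] * 91
likert_item = [4, 5, 3, 4, 2]

counts = Counter(gender)
percentages = {g: 100 * c / len(gender) for g, c in counts.items()}
print(round(percentages["Female"], 1))  # -> 63.6
print(round(mean(likert_item), 2))      # -> 3.6
```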
The survey was administered through Google Forms from November 1, 2025, until May 1, 2026, and the platform was also used for the first pass of the analysis: it automatically generated visual summaries for every question as responses arrived, including pie charts and bar charts showing the distribution of answers across the five-point Likert scale.
The following chapter presents the analysis and interpretation of the data collected through the online survey questionnaire. The aim is to determine participants' level of awareness regarding the application of AI to enhance threat detection and thereby improve cybersecurity.
The survey results revealed that most respondents were female, accounting for 63.6% of the total sample, while male respondents made up the remaining 36.4%.
| Gender | Frequency | Percentage |
|---|---|---|
| Female | 159 | 63.6% |
| Male | 91 | 36.4% |
| Total | 250 | 100% |
The largest group of participants fell within the 20–25 years age range, representing 45.2% of the sample. This was followed by participants aged 18–20 years (31.2%), respondents aged 26 and above (14.4%), and those under 18 (9.2%).
| Age Group | Frequency | Percentage |
|---|---|---|
| Under 18 | 23 | 9.2% |
| 18–20 | 78 | 31.2% |
| 20–25 | 113 | 45.2% |
| 26 and above | 36 | 14.4% |
| Total | 250 | 100% |
Most respondents (60.4%) were enrolled in a Bachelor's degree program. The next largest groups were Master's degree students (16%) and Associate Diploma students (15.6%), with the remainder holding a PhD (3.2%) or another qualification (4.8%).
| Educational Level | Frequency | Percentage |
|---|---|---|
| Associate Diploma | 39 | 15.6% |
| Bachelor's Degree | 151 | 60.4% |
| Master's Degree | 40 | 16.0% |
| PhD | 8 | 3.2% |
| Other | 12 | 4.8% |
| Total | 250 | 100% |
Students specializing in Information Technology made up the largest group (49.6%), followed by Business (18%), Engineering (17.2%), Science (9.6%), and other disciplines (5.6%).
| Field of Study | Frequency | Percentage |
|---|---|---|
| Information Technology | 124 | 49.6% |
| Business | 45 | 18.0% |
| Engineering | 43 | 17.2% |
| Science | 24 | 9.6% |
| Other | 14 | 5.6% |
| Total | 250 | 100% |
Content validity was established through an extensive literature review during the questionnaire design phase. Each question was developed based on established constructs from prior studies on AI adoption and cybersecurity effectiveness. The questionnaire was evaluated and approved by the supervisor before distribution.
| Test | Value |
|---|---|
| KMO Measure of Sampling Adequacy | 0.933 |
| Bartlett's Test — Approx. Chi-Square | 2151.226 |
| df | 66 |
| Sig. | <.001 |
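The Bartlett statistic reported above can in principle be reproduced from the item correlation matrix. The sketch below applies the standard formula, chi² = -[(n - 1) - (2p + 5)/6] ln|R| with df = p(p - 1)/2, to synthetic data; with p = 12 items it yields df = 66, as in the table, though the chi-square value itself depends on the actual responses:

```python
import numpy as np

# Bartlett's test of sphericity on synthetic, deliberately correlated data.
# Only the formula mirrors the reported test; the numbers will differ.
rng = np.random.default_rng(0)
n, p = 250, 12                                # 12 items -> df = 66
base = rng.normal(size=(n, 1))
X = base + 0.8 * rng.normal(size=(n, p))      # shared factor induces correlation

R = np.corrcoef(X, rowvar=False)
chi2 = -((n - 1) - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
df = p * (p - 1) // 2
print(df)        # -> 66
print(chi2 > 0)  # large and positive for correlated items
```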
| Construct | Item | Factor Loading | AVE |
|---|---|---|---|
| AI-based Threat Detection | V14 | 0.860 | 0.767 |
| | V15 | 0.888 | |
| | V16 | 0.903 | |
| | V17 | 0.850 | |
| AI-based Incident Response | V18 | 0.841 | 0.788 |
| | V19 | 0.907 | |
| | V20 | 0.907 | |
| | V21 | 0.894 | |
| Technical Features of AI | V22 | 0.898 | 0.833 |
| | V23 | 0.923 | |
| | V24 | 0.913 | |
| | V25 | 0.916 | |
| Knowledge and Skills | V26 | 0.890 | 0.807 |
| | V27 | 0.891 | |
| | V28 | 0.920 | |
| | V29 | 0.893 | |
| Usefulness of AI | V30 | 0.903 | 0.840 |
| | V31 | 0.931 | |
| | V32 | 0.914 | |
| | V33 | 0.919 | |
| Ease of Use | V34 | 0.918 | 0.859 |
| | V35 | 0.945 | |
| | V36 | 0.916 | |
| | V37 | 0.928 | |
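The AVE column can be cross-checked directly, since AVE is the mean of the squared standardized loadings. Using the loadings reported for the first construct:

```python
# Average Variance Extracted (AVE) = mean of squared standardized loadings.
# Loadings below are the reported values for AI-based Threat Detection (V14-V17).
loadings = [0.860, 0.888, 0.903, 0.850]

ave = sum(l ** 2 for l in loadings) / len(loadings)
print(round(ave, 3))  # -> 0.767, matching the reported AVE
```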
Cronbach's alpha was used to test the internal consistency of the six constructs. Alpha values ranged from 0.897 to 0.935, indicating a high level of reliability across all constructs.
| Construct | Items | Cronbach's Alpha | Interpretation |
|---|---|---|---|
| AI-based Threat Detection (ATIDC) | 4 | 0.935 | Excellent |
| AI-based Incident Response (AIIRC) | 4 | 0.926 | Excellent |
| Technical Features of AI (TFIS) | 4 | 0.901 | Excellent |
| Knowledge and Skills (SKILL) | 4 | 0.897 | Good |
| Usefulness of AI (USEFULNESS) | 4 | 0.909 | Excellent |
| Ease of Use (EASE) | 4 | 0.914 | Excellent |
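For reference, Cronbach's alpha is computed as k/(k-1) × (1 - Σ item variances / variance of summed scores). The sketch below applies this to a small synthetic response matrix; it does not reproduce the survey data:

```python
import numpy as np

# Cronbach's alpha for a hypothetical 5-respondent x 4-item Likert matrix.
X = np.array([
    [4, 4, 5, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
], dtype=float)

k = X.shape[1]
item_vars = X.var(axis=0, ddof=1)        # sample variance per item
total_var = X.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 3))  # -> 0.933
```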
This section reports a descriptive analysis of the findings for all constructs from Sections B, C, and D. A mean score >3.0 indicates overall agreement; <3.0 indicates overall disagreement.
The overall mean was 3.50, reflecting moderate agreement across the four items. The statement "I believe AI technologies are becoming increasingly important in cybersecurity practices" received the highest mean (3.56), with 60.0% of participants agreeing or strongly agreeing. Overall, the trend is positive, indicating reasonably solid awareness of AI among the sample.
The second construct yielded an overall mean of 3.56. The highest-scoring item was "I believe I am aware of cybersecurity policies in my organization or school" (mean 3.60, 61.2% agreement), suggesting that most respondents possess a reasonable level of cybersecurity knowledge and awareness.
This construct had an overall mean of 3.58. The highest-rated item was "I believe AI improves the accuracy of detecting cyber threats" (mean 3.60, 61.2% agreement). The consistency of responses, with agreement rates above 58% throughout, indicates a generally positive perception of AI's threat detection capabilities.
This construct's overall mean of 3.49 was among the lowest in the survey. "I believe using AI makes handling cybersecurity incidents easier" scored a mean of 3.55 with 55.2% agreement, while the lowest item, "I believe AI can help an organization's systems respond faster to any cyber-attack incidents," had a mean of 3.42 and 53.2% agreement. This may suggest respondents are somewhat less confident in AI's role during the response phase.
The construct averaged 3.58. The strongest item was "I believe AI systems can adapt to new cybersecurity threats over time" (mean 3.63, 63.6% agreement), highlighting the technical benefit of AI that learns and evolves.
Overall average of 3.57. The statement "I believe lack of technical expertise can limit the effectiveness of AI in cybersecurity" had the highest average at 3.64 with 61.2% agreement. Respondents were less sure about organizations investing adequately in training—pointing to a practical challenge organizations need to address.
The overall mean was 3.57. "I believe AI is useful for improving cybersecurity in the organization" and "I believe using AI improves the overall performance of cybersecurity operations" both scored a mean of 3.61, with agreement rates of 64.0% and 64.4% respectively.
Ease of use had the lowest overall mean (3.40) among all constructs. "I believe it is easy to integrate AI tools into existing cybersecurity systems" had the lowest mean at 3.44. Lower ratings in learning simplicity and integration make sense given that many respondents were students who may not have first-hand experience of enterprise-grade AI security tools.
The overall mean here was 3.52. "I believe AI enhances overall cybersecurity performance in the organization" had the highest mean at 3.58, with 58.8% agreement. The relative lack of consensus on false alarm reduction and attack prevention may reflect awareness that these outcomes depend on the quality of the AI model and the expertise available to maintain it.
Section D produced the highest construct mean in the entire survey at 3.64. "I support the use of AI in cybersecurity" scored the highest individual mean at 3.68 with 64.8% agreement. The consistently high agreement scores suggest that overall sentiment among respondents is strongly positive toward AI in cybersecurity.
Results from the multiple linear regression analysis indicate that the model is highly predictive of the dependent variable (F = 198.928, p < 0.001). USEFULNESS (B = 0.750, p < 0.001) and EASE (B = 0.272, p = 0.028) emerged as significant predictors, providing statistical support for the hypotheses that perceived usefulness and perceived ease of use are critical to cybersecurity effectiveness.
| Variable | B | Std. Error | Beta | t-value | Sig. | VIF |
|---|---|---|---|---|---|---|
| Constant | -.149 | .327 | — | -.455 | .650 | — |
| Technical Features (TFIS) | .200 | .116 | .186 | 1.717 | .089 | 7.607 |
| AI Incident Response (AIIRC) | .099 | .081 | .131 | 1.233 | .220 | 7.314 |
| AI Threat Detection (ATIDC) | -.139 | .108 | -.140 | -1.282 | .202 | 7.710 |
| Knowledge and Skills (SKILL) | .133 | .140 | .121 | .953 | .343 | 10.536 |
| Ease of Use (EASE) | .272 | .122 | .178 | 2.226 | .028 | 4.161 |
| Usefulness of AI (USEFULNESS) | .750 | .147 | .504 | 5.090 | <.001 | 6.373 |
The overall model is highly significant (F = 198.928, p < 0.001, R² = 0.918): the six independent variables together explain 91.8% of the variability in cybersecurity effectiveness. At the level of individual hypotheses, H5 (Perceived Usefulness, p < 0.001) and H6 (Perceived Ease of Use, p = 0.028) were statistically significant. Hypotheses H1–H4 were not significant as individual predictors, most likely because of the high multicollinearity among the constructs (VIF values up to 10.5).
| Hypothesis | Statement | Beta | Sig. | Result |
|---|---|---|---|---|
| H1 | AI Threat Detection → Effectiveness | -0.140 | 0.202 | Not Supported |
| H2 | AI Incident Response → Effectiveness | 0.131 | 0.220 | Not Supported |
| H3 | Technical Features → Effectiveness | 0.186 | 0.089 | Not Supported |
| H4 | Knowledge and Skills → Effectiveness | 0.121 | 0.343 | Not Supported |
| H5 | Perceived Usefulness → Effectiveness | 0.504 | <0.001 | ✓ Supported |
| H6 | Ease of Use → Effectiveness | 0.178 | 0.028 | ✓ Supported |
| R | R Square | Adjusted R Square | Std. Error of the Estimate |
|---|---|---|---|
| 0.958 | 0.918 | 0.916 | 0.284 |
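The diagnostics behind the tables above, OLS coefficients and variance inflation factors, can be sketched as follows. The data here are synthetic and deliberately collinear, so only the procedure, not the numbers, mirrors the study:

```python
import numpy as np

# Synthetic predictors sharing a common factor z, giving high collinearity.
rng = np.random.default_rng(42)
n = 250
z = rng.normal(size=n)
X = np.column_stack([z + 0.3 * rng.normal(size=n) for _ in range(3)])
y = X @ np.array([0.5, 0.2, 0.1]) + rng.normal(size=n)

# OLS coefficients (with intercept) via least squares.
Xc = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Xc, y, rcond=None)

def r_squared(preds, target):
    A = np.column_stack([np.ones(len(preds)), preds])
    b, *_ = np.linalg.lstsq(A, target, rcond=None)
    resid = target - A @ b
    return 1 - resid.var() / target.var()

# VIF_j = 1 / (1 - R_j^2), regressing predictor j on the remaining predictors.
vifs = [1 / (1 - r_squared(np.delete(X, j, axis=1), X[:, j]))
        for j in range(X.shape[1])]
print([round(v, 1) for v in vifs])  # well above 1 for collinear predictors
```

With predictors this strongly correlated, the joint model can fit well while no single coefficient is individually significant, which is the pattern the hypothesis table above shows.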
This chapter has presented the results of the analysis of the research on artificial intelligence for improving the detection of cybersecurity threats. The study was conducted at the University of Bahrain and examined how students and Information Technology personnel view the use of AI-based tools for enhancing cybersecurity.
Data collection was completed using an online survey on Google Forms. In total, 250 valid responses were received from students and professionals in technology-related fields. The questions covered six key areas: awareness of AI, awareness of cybersecurity, application of AI for threat detection, application of AI for incident response, technical factors, and organizational factors.
Most participants appeared to understand what AI is and how it is applied in cybersecurity, and most saw it as an advantage against cyber threats. The largest group of respondents (49.6%) came from the IT field. Cybersecurity awareness was also high: participants were familiar with common threats such as phishing and malware.
In terms of threat detection, the use of AI technology was positively assessed: participants believed it helped detect cyber threats faster and more efficiently than traditional methods. AI received similarly positive assessments regarding incident response. Concerning organizational factors, the higher participants rated AI tools' usefulness and accessibility, the more confidence they had in the tools' overall effectiveness, consistent with the predictions of the UTAUT model as discussed by Venkatesh et al. (2016).
The current research evaluated how AI-based capabilities are perceived in relation to cybersecurity effectiveness, with a particular focus on the Bahrain context. Survey data from 250 participants revealed generally positive attitudes, with mean scores ranging from 3.40 to 3.64 across all constructs.
Multiple linear regression analysis (F = 198.928, p < 0.001, R² = 0.918) demonstrated that the six constructs collectively explain 91.8% of the variance in perceived cybersecurity effectiveness. Among the six hypotheses tested, two were statistically supported: H5 (Perceived Usefulness) and H6 (Perceived Ease of Use).
The remaining four hypotheses (H1–H4) were not statistically significant as individual predictors, most likely because of multicollinearity among the independent variables. The positive Beta values for H2–H4 nevertheless suggest directional relationships consistent with the theoretical framework, although the Beta for H1 (AI Threat Detection) was slightly negative.
These findings align with the UTAUT framework advanced by Venkatesh et al. (2016). Organizations that invest in AI technology without ensuring that end-users find it useful and easy to work with are unlikely to realize its full potential.
SURVEY ON ENHANCING THREAT DETECTION AND RESPONSE
This survey is conducted as part of a senior research project examining the integration of Artificial Intelligence (AI) in cybersecurity. The purpose of this study is to explore how AI-based technologies enhance threat detection, incident response, and overall cybersecurity effectiveness within organizations.
Prepared by: Maryam Abdulla (202100437) & Emaan Rashid Latif (202109038)
Supervised by: Dr. Ali Zolait
Scale: