
IDS Analysis Scheme




  1. IDS Analysis Scheme

  2. Objectives: To compare anomaly-based and signature-based detection; to explain the hybrid characteristics of an IDS; to explain the benefits and drawbacks of an IDS.
  3. IDS: A burglar alarm on the doors and windows of your home is an IDS for your home; the IDS used to protect your network operates in a similar manner. It locates intrusive activity by examining network traffic, host logs, system calls, and other areas that can signal an attack against your network.
  4. IDS Terminology: Alert/Alarm: a signal suggesting that a system has been or is being attacked. True positive: a legitimate attack which triggers an IDS to produce an alarm. False positive: an event signaling an IDS to produce an alarm when no attack has taken place. False negative: a failure of an IDS to detect an actual attack. True negative: when no attack has taken place and no alarm is raised. Noise: data or interference that can trigger a false positive. (Scarfone, Karen; Mell, Peter (February 2007). "Guide to Intrusion Detection and Prevention Systems (IDPS)". NIST Special Publication 800-94. http://csrc.ncsl.nist.gov/publications/nistpubs/800-94/SP800-94.pdf. Retrieved 1 January 2010.)
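The four alert outcomes above form a small confusion matrix. As a minimal sketch (the function name and inputs are hypothetical, not part of any IDS product), each event can be mapped to the terminology like this:

```python
def classify_outcome(attack_occurred: bool, alarm_raised: bool) -> str:
    """Map an (attack, alarm) pair to standard IDS terminology."""
    if attack_occurred and alarm_raised:
        return "true positive"    # legitimate attack triggered an alarm
    if attack_occurred and not alarm_raised:
        return "false negative"   # IDS missed a real attack
    if not attack_occurred and alarm_raised:
        return "false positive"   # alarm with no attack (e.g. triggered by noise)
    return "true negative"        # no attack, no alarm

print(classify_outcome(True, False))  # → false negative
```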
  5. IDS Terminology: Site policy: guidelines within an organization that control the rules and configurations of an IDS. Site policy awareness: the ability an IDS has to dynamically change its rules and configurations in response to changing environmental activity. Confidence value: a value an organization places on an IDS based on past performance and analysis, to help determine its ability to effectively identify an attack. Alarm filtering: the process of categorizing attack alerts produced by an IDS in order to distinguish false positives from actual attacks. (Scarfone & Mell 2007, op. cit.)
  6. Evaluate IDS By looking at the following: Triggers Monitoring locations Hybrid characteristics
  7. 1. IDS Triggers
  8. IDS Triggers: Current IDSs use two major triggering mechanisms to generate intrusion alarms: anomaly detection and signature-based detection. A trigger mechanism refers to the action that causes the IDS to generate an alarm.
  9. IDS Triggers: A NIDS generates an alarm if it sees a packet to a certain port with certain data in it. A HIDS generates an alarm if a certain system call executes. A system call is a request made by any program to the OS to perform a task; improper use of system calls can easily cause a system crash.
  10. Anomaly Detection: We are drowning in the overflow of data being collected world-wide while starving for knowledge. Anomalous events occur relatively infrequently; however, when they do occur, their consequences can be quite dramatic, and quite often in a negative sense.
  11. What are anomalies? Anomaly is a pattern in the data that does not conform to the expected behaviour Also referred to as outliers, exceptions, peculiarities, surprise, etc. Anomalies translate to significant (often critical) real life entities Cyber intrusions A web server involved in ftp traffic Credit card fraud An abnormally high purchase made on a credit card
  12. What are anomalies? In the accompanying figure, N1 and N2 are regions of normal behaviour; points O1 and O2 are anomalies, as are the points in region O3.
  13. Key Challenges: Defining a representative normal region is challenging; the boundary between normal and outlying behaviour is often not precise; the exact notion of an outlier differs across application domains; labeled data for training/validation may be unavailable; malicious adversaries adapt; data might contain noise; and normal behaviour keeps evolving.
  14. Input Data Most common form of data handled by anomaly detection techniques is Record Data Univariate Multivariate
  16. Input Data – Nature of Attributes: Attributes may be binary, categorical, continuous, or hybrid (a mix of these types).
  17. Types of Anomaly: Point anomalies, contextual anomalies, and collective anomalies. (* Varun Chandola, Arindam Banerjee, and Vipin Kumar, "Anomaly Detection: A Survey", ACM Computing Surveys, 2008.)
  18. Point Anomalies: An individual data instance is anomalous with respect to the rest of the data (e.g., points O1 and O2 in the earlier figure).
  19. Contextual Anomalies: An individual data instance is anomalous within a context; this requires a notion of context. Also referred to as conditional anomalies. (* Xiuyao Song, Mingxi Wu, Christopher Jermaine, Sanjay Ranka, "Conditional Anomaly Detection", IEEE Transactions on Knowledge and Data Engineering, 2006.)
  20. Collective Anomalies: A collection of related data instances is anomalous. Requires a relationship among data instances (sequential data, spatial data, graph data). The individual instances within a collective anomaly are not anomalous by themselves; only the anomalous subsequence (or sub-region/sub-graph) as a whole is.
  21. Output of Anomaly Detection Label Each test instance is given a normal or anomaly label This is especially true of classification-based approaches Score Each test instance is assigned an anomaly score Allows the output to be ranked Requires additional threshold parameter
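The label-versus-score distinction above can be illustrated with a hedged sketch; the connection names, scores, and threshold below are made up for illustration:

```python
# Hypothetical anomaly scores for five test instances (higher = more anomalous).
scores = {"conn-1": 0.12, "conn-2": 0.91, "conn-3": 0.45, "conn-4": 0.88, "conn-5": 0.05}

# Score output: the analyst can rank instances and triage the most anomalous first.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Label output: converting scores to labels requires an additional threshold parameter.
THRESHOLD = 0.8
labels = {name: ("anomaly" if s >= THRESHOLD else "normal") for name, s in scores.items()}

print(ranked[0])         # → ('conn-2', 0.91)
print(labels["conn-4"])  # → anomaly
```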
  22. Applications of Anomaly Detection Network intrusion detection Insurance / credit card fraud detection Healthcare informatics / medical diagnostics Industrial damage detection Image processing / video surveillance Novel topic detection in text mining … etc.
  23. Intrusion Detection: Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for intrusions. Intrusions are defined as attempts to bypass the security mechanisms of a computer or network. Challenges: traditional signature-based intrusion detection systems are based on signatures of known attacks and cannot detect emerging cyber threats, and there is substantial latency in the deployment of newly created signatures across the computer system. Anomaly detection can alleviate these limitations.
  24. Fraud Detection: Fraud detection refers to the detection of criminal activities occurring in commercial organizations. Malicious users might be actual customers of the organization or might be posing as customers (identity theft). Types of fraud: credit card fraud, insurance claim fraud, mobile/cell phone fraud, insider trading. Challenges: fast and accurate real-time detection; the cost of misclassification is very high.
  25. Classification Based Techniques: Main idea: build a classification model for normal (and, when available, rare anomalous) events based on labelled training data, and use it to classify each new unseen event. Classification models must be able to handle skewed (imbalanced) class distributions. Categories: Supervised classification techniques require knowledge of both normal and anomaly classes and build a classifier to distinguish between normal events and known anomalies. Semi-supervised classification techniques require knowledge of the normal class only; they use a modified classification model to learn normal behavior and then detect any deviation from normal behavior as anomalous.
  26. Classification Based Techniques: Advantages: Supervised techniques produce models that can be easily understood and achieve high accuracy in detecting many kinds of known anomalies; semi-supervised techniques also produce understandable models and can accurately learn normal behaviour. Drawbacks: Supervised techniques require labels from both the normal and anomaly classes and cannot detect unknown and emerging anomalies; semi-supervised techniques require labels from the normal class and can have a high false alarm rate, since previously unseen (yet legitimate) data records may be flagged as anomalies.
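One way to sketch the semi-supervised idea is a toy nearest-neighbour one-class detector: it is trained on normal instances only and flags anything too far from them. This is an illustrative sketch, not one of the techniques cited on these slides; the data points and threshold are hypothetical:

```python
import math

def nn_distance(x, normal_data):
    """Distance from x to its nearest neighbour in the normal training set."""
    return min(math.dist(x, n) for n in normal_data)

# Semi-supervised: train on the normal class only, flag large deviations.
normal_train = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (1.0, 0.95)]
RADIUS = 0.5  # hypothetical threshold, tuned on validation data in practice

def is_anomaly(x):
    return nn_distance(x, normal_train) > RADIUS

print(is_anomaly((1.05, 1.0)))  # close to the normal cluster → False
print(is_anomaly((4.0, 4.0)))   # far from anything seen in training → True
```

Note the drawback mentioned above: a legitimate but previously unseen point far from the training data would also be flagged, which is exactly the false-alarm risk of semi-supervised techniques.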
  27. Rule Based Techniques: Creating new rule-based algorithms (PN-rule, CREDOS). Adapting existing rule-based techniques, e.g. the robust C4.5 algorithm [John95]. Adapting multi-class classification methods to the single-class classification problem. Association rules: rules with support higher than a pre-specified threshold may characterize normal behaviour [Barbara01, Otey03]; an anomalous data record occurs in fewer frequent itemsets than a normal data record [He04]; frequent episodes can describe temporal normal behaviour [Lee00, Qin04]. Case-specific feature/rule weighting: case-specific feature weighting [Cardey97] is decision tree learning where, for each rare-class test example, the global weight vector is replaced with a dynamically generated weight vector that depends on the path taken by that example; in case-specific rule weighting [Grzymala00], the LERS (Learning from Examples based on Rough Sets) algorithm increases the rule strength for all rules describing the rare class.
  28. Contextual Anomaly Detection Detect contextual anomalies. Key Assumption : All normal instances within a context will be similar (in terms of behavioural attributes), while the anomalies will be different from other instances within the context. General Approach : Identify a context around a data instance (using a set of contextual attributes). Determine if the test data instance is anomalous within the context (using a set of behavioural attributes).
  29. Collective Anomaly Detection Detect collective anomalies. Exploit the relationship among data instances. Sequential anomaly detection Detect anomalous sequences Spatial anomaly detection Detect anomalous sub-regions within a spatial data set Graph anomaly detection Detect anomalous sub-graphs in graph data
  30. What are Intrusions? Intrusions are actions that attempt to bypass the security mechanisms of computer systems. They are usually caused by: attackers accessing the system from the Internet; insider attackers, i.e. authorized users attempting to gain and misuse non-authorized privileges. (Figure: a typical intrusion scenario, in which an attacker scans the computer network for a machine with a vulnerability and compromises it.)
  31. IDS Analysis Strategy Misuse/signature detection is based on extensive knowledge of patterns associated with known attacks provided by human experts Existing approaches: pattern (signature) matching, expert systems, state transition analysis, data mining Major limitations: Unable to detect novel & unanticipated attacks Signature database has to be revised for each new type of discovered attack Anomaly detection is based on profiles that represent normal behaviour of users, hosts, or networks, and detecting attacks as significant deviations from this profile Major benefit - potentially able to recognize unforeseen attacks. Major limitation - possible high false alarm rate, since detected deviations do not necessarily represent actual attacks Major approaches: statistical methods, expert systems, clustering, neural networks, support vector machines, outlier detection schemes
  32. Intrusion Detection: An intrusion detection system is a combination of software and hardware that attempts to perform intrusion detection and raises the alarm when a possible intrusion happens. Traditional IDS tools (e.g. SNORT) are based on signatures of known attacks. Example of a SNORT rule (MS-SQL "Slammer" worm): alert udp any any -> any 1434 (content:"|81 F1 03 01 04 9B 81 F1 01|"; content:"sock"; content:"send") Limitations: the signature database has to be manually revised for each new type of discovered intrusion; signatures cannot detect emerging cyber threats; and there is substantial latency in the deployment of newly created signatures across the computer system. Data mining can alleviate these limitations.
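The content matches in the SNORT rule above amount to substring checks against the packet payload. A rough sketch of that matching idea (this is not SNORT's actual engine; the helper name and sample packets are hypothetical):

```python
# Byte patterns from the Slammer rule above; a packet matches the signature
# only if it is UDP to port 1434 and its payload contains every pattern.
SLAMMER_CONTENTS = [bytes.fromhex("81F10301049B81F101"), b"sock", b"send"]

def matches_slammer(proto: str, dst_port: int, payload: bytes) -> bool:
    return (proto == "udp" and dst_port == 1434
            and all(pat in payload for pat in SLAMMER_CONTENTS))

benign  = ("udp", 53,   b"example dns query")
wormish = ("udp", 1434, bytes.fromhex("81F10301049B81F101") + b"...sock...send...")
print(matches_slammer(*benign))   # → False
print(matches_slammer(*wormish))  # → True
```

This also makes the limitation concrete: the check matches only these exact byte strings, so a new worm with a different payload sails past it until a new rule is written and deployed.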
  33. Data Mining for Intrusion Detection Increased interest in data mining based intrusion detection Attacks for which it is difficult to build signatures Attack stealthiness Unforeseen/Unknown/Emerging attacks Distributed/coordinated attacks Data mining approaches for intrusion detection Misuse detection Building predictive models from labelled data sets (instances are labelled as “normal” or “intrusive”) to identify known intrusions High accuracy in detecting many kinds of known attacks Cannot detect unknown and emerging attacks Anomaly detection Detect novel attacks as deviations from “normal” behaviour Potential high false alarm rate - previously unseen (yet legitimate) system behaviours may also be recognized as anomalies Summarization of network traffic
  34. Data Mining for Intrusion Detection: Misuse detection builds predictive models: a classifier is learned from a labelled training set (with continuous, categorical, and temporal attributes plus a class label) and applied to a test set. Attacks can also be summarized using association rules, and anomaly detection complements both. Example of a rule discovered: {Src IP = 206.163.37.95, Dest Port = 139, Bytes in [150, 200]} --> {ATTACK}
  35. MINDS – Anomaly Detection on Real Network Data: Anomaly detection was used at the University of Minnesota and the Army Research Lab to detect various intrusive/suspicious activities; many of these could not be detected using widely used intrusion detection tools like SNORT. Anomalies/attacks picked up by MINDS (Minnesota Intrusion Detection System) include scanning activities, non-standard behavior, policy violations, and worms. (Architecture: network data captured by net-flow tools or tcpdump passes through feature extraction and filtering; a known-attack detection module uses labels to report detected known attacks; an anomaly detection module assigns anomaly scores and feeds detected novel attacks into association pattern analysis, which produces summaries and characterizations of attacks for a human analyst via the MINDSAT tool.)
  36. Feature Extraction: Three groups of features. Basic features of individual TCP connections: source & destination IP (features 1 & 2), source & destination port (features 3 & 4), protocol (feature 5), duration (feature 6), bytes per packet (feature 7), number of bytes (feature 8). Time-based features: for the same source (destination) IP address, the number of unique destination (source) IP addresses inside the network in the last T seconds (features 9 (13)); the number of connections from the source (destination) IP to the same destination (source) port in the last T seconds (features 11 (15)). Connection-based features: for the same source (destination) IP address, the number of unique destination (source) IP addresses inside the network in the last N connections (features 10 (14)); the number of connections from the source (destination) IP to the same destination (source) port in the last N connections (features 12 (16)).
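A time-based feature such as feature 9 can be sketched as a window query over flow records; the flow tuples and IP addresses below are hypothetical:

```python
def unique_dsts_last_T(flows, src_ip, t_now, T):
    """Feature 9 sketch: unique destination IPs contacted by src_ip in the last T seconds.
    flows: list of (timestamp, src, dst) tuples (hypothetical flow records)."""
    return len({dst for ts, src, dst in flows
                if src == src_ip and t_now - T <= ts <= t_now})

flows = [
    (100, "10.0.0.5", "10.0.1.1"),
    (101, "10.0.0.5", "10.0.1.2"),
    (102, "10.0.0.5", "10.0.1.3"),
    (150, "10.0.0.5", "10.0.1.1"),  # repeat destination, outside the burst
]
# A scan shows up as many unique destinations in a short window.
print(unique_dsts_last_T(flows, "10.0.0.5", t_now=103, T=5))  # → 3
print(unique_dsts_last_T(flows, "10.0.0.5", t_now=151, T=5))  # → 1
```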
  37. Typical Anomaly Detection Output 48 hours after the “slammer” worm Anomalous connections that correspond to the “slammer” worm Anomalous connections that correspond to the ping scan Connections corresponding to UM machines connecting to “half-life” game servers
  38. "Slammer" Worm: SQL Slammer is a computer worm that caused a denial of service on some Internet hosts and dramatically slowed down general Internet traffic. It spread rapidly, infecting most of its 75,000 victims within ten minutes. Discovered: January 24, 2003. Also known as: SQL Slammer Worm [ISS], DDOS.SQLP1434.A [Trend], W32/SQLSlammer [McAfee], Slammer [F-Secure], Sapphire [eEye], W32/SQLSlam-A [Sophos]. Type: worm. Systems affected: Windows 2000, Windows 95, Windows 98, Windows Me, Windows NT, Windows XP. W32.SQLExp.Worm targets systems running Microsoft SQL Server 2000 and Microsoft Desktop Engine (MSDE) 2000; the worm sends 376 bytes to UDP port 1434, the SQL Server Resolution Service port. Although titled the "SQL Slammer worm", the program did not use the SQL language; it exploited a buffer overflow bug in Microsoft's flagship SQL Server and Desktop Engine database products, for which a patch had been released six months earlier in MS02-039. The worm had the unintended payload of performing a denial-of-service attack due to the large number of packets it sends.
  39. IDS Triggers – Anomaly Detection: Also referred to as profile-based detection; you must build profiles for each user group on the system (some systems might automatically build profiles for individual users). A profile incorporates a typical user's habits, the services he normally uses, and an established baseline for the activities a normal user routinely performs to do his job. A user group represents a group of users who perform similar functions on the network; you can build user groups based on job classification, such as engineers or clerks. How you assign the groups is not important, as long as the users in each group perform similar activities on the network.
  40. IDS Triggers – Anomaly Detection: Building and updating these profiles represents a significant portion of the work required to deploy an anomaly-based IDS, and the quality of your profiles is directly related to how successful your IDS is at detecting attacks against your network. The most common approaches to building user profiles are: statistical sampling, rule-based approaches, and neural networks.
  41. Anomaly Detection – Statistical Sampling: For profile creation, alarms are based on deviations from your defined normal state; you measure deviation from normal by calculating the standard deviation. You control the sensitivity of your IDS by varying the number of standard deviations required to generate an alarm, which roughly regulates the number of false positives your IDS generates, because small user deviations are then less likely to generate alarms.
  42. Anomaly Detection – Statistical Sampling: Standard deviation measures the spread of a data set around its mean (average). When your data follows a well-defined distribution, each standard deviation covers a known percentage of the data. For example, in a normal distribution roughly 68 percent of the data falls within one standard deviation of the mean, 95 percent within two standard deviations, and 99.7 percent within three; only about 0.3 percent of the data falls outside three standard deviations from the mean. By using this process, you can define statistically how abnormal specific data is.
  43. Statistics Based Techniques: Key assumption: normal data instances occur in high-probability regions of a statistical distribution, while anomalies occur in its low-probability regions. General approach: estimate a statistical distribution from the given data, then apply a statistical inference test to determine whether a test instance belongs to this distribution. If an observation is more than 3 standard deviations away from the sample mean, it is an anomaly; anomalies have a large value of |x − μ| / σ.
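The 3-sigma rule above can be sketched directly: fit a mean and standard deviation to baseline data assumed normal, then flag test values by their deviation. The baseline numbers below are hypothetical:

```python
import statistics

def fit_normal_profile(train):
    """Estimate mean and standard deviation from (assumed normal) training data."""
    return statistics.mean(train), statistics.stdev(train)

def is_anomaly(x, mu, sigma, k=3):
    """3-sigma rule: flag x if it lies more than k standard deviations from the mean."""
    return abs(x - mu) / sigma > k

# Hypothetical baseline measurements (e.g. logins per hour for one user group).
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 9.7, 10.0, 10.3, 9.9]
mu, sigma = fit_normal_profile(baseline)

print(is_anomaly(10.2, mu, sigma))  # within normal variation → False
print(is_anomaly(55.0, mu, sigma))  # far outside the profile → True
```

Varying `k` is exactly the sensitivity knob described on slide 41: a smaller `k` catches more deviations but raises more false alarms.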
  44. Statistics Based Techniques: Advantages: utilize existing statistical modeling techniques to model various types of distributions; provide a statistically justifiable solution for detecting anomalies. Drawbacks: with high-dimensional data it is difficult to estimate parameters and to construct hypothesis tests; parametric assumptions might not hold true for real data sets.
  45. Types of Statistical Techniques: Parametric techniques assume that the normal (and possibly anomalous) data is generated from an underlying parametric distribution, and learn the parameters from the training sample. Non-parametric techniques do not assume any knowledge of parameters; they estimate the density of the distribution non-parametrically, e.g. with histograms or Parzen window estimation.
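A minimal sketch of the non-parametric (histogram) idea: score a test value by how rarely its bin occurred in training. The bin width and data below are hypothetical choices:

```python
def histogram_score(x, train, bin_width=1.0):
    """Non-parametric sketch: score x by how rarely its histogram bin was seen
    in training. Rare bins → high anomaly score (1 - relative frequency)."""
    bins = {}
    for v in train:
        b = int(v // bin_width)
        bins[b] = bins.get(b, 0) + 1
    freq = bins.get(int(x // bin_width), 0) / len(train)
    return 1.0 - freq

train = [2.1, 2.5, 2.9, 3.1, 3.4, 3.8, 2.2, 3.3]
print(histogram_score(2.4, train))  # → 0.5   (well-populated bin, low score)
print(histogram_score(9.7, train))  # → 1.0   (empty bin, maximal score)
```

No distributional form is assumed here, which is the point of the non-parametric family; the price is sensitivity to the bin width.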
  46. Anomaly Detection – Rule-Based Approach: Analyze the normal traffic for different users over a period of time and then create rules that model this behavior; any other behavior can then be considered abnormal and generate an alarm. Creating the rules that define normal behavior can be a complicated task.
  47. Anomaly Detection – Neural Networks: Neural networks are a form of artificial intelligence that attempts to approximate the working of biological neurons, such as those found in the human brain. You train these systems by presenting them with a large amount of data and rules about data relationships; this information is used to adjust the connections between the neurons. After the system is trained, network traffic is used as a stimulus to the neural network to determine whether the traffic is considered normal.
  48. Anomaly Detection – Neural Networks Multi-layer Perceptrons Measuring the activation of output nodes [Augusteijn02] Extending the learning beyond decision boundaries Equivalent error bars as a measure of confidence for classification [Sykacek97] Creating hyper-planes for separating between various classes, but also to have flexible boundaries where points far from them are outliers [Vasconcelos95] Auto-associative neural networks Replicator NNs [Hawkins02] Hopfield networks [Jagota91, Crook01] Adaptive Resonance Theory based [Dasgupta00, Caudel93] Radial Basis Functions based Adding reverse connections from output to central layer allows each neuron to have associated normal distribution, and any new instance that does not fit any of these distributions is an anomaly [Albrecht00, Li02] Oscillatory networks Relaxation time of oscillatory NNs is used as a criterion for novelty detection when a new instance is presented [Ho98, Borisyuk00]
  49. Anomaly Detection – Issues: The user profiles form the heart of an anomaly-based IDS. Some systems use an initial training period that monitors the network for a predetermined period of time; this traffic is then used to create a user baseline, which determines what normal traffic on the network looks like. The disadvantages of this approach: if users' jobs change over time, their accounts start generating false alarms; and a determined attacker can gradually train the system, incrementally, until his actual attack traffic appears as normal traffic on the network.
  50. Anomaly Detection – Benefits: It can easily detect many insider attacks or account theft; if an account belonging to an office clerk starts attempting network administration functions, for example, this probably triggers an alarm. An attacker is not sure what activity generates an alarm: with a signature-based IDS, an attacker can test in a lab environment which traffic generates alarms and then craft tools that bypass it, but with an anomaly detection system the attacker does not know the training data that has been used, and therefore cannot assume any particular action will go undetected. It is not based on signatures for specific, known attacks: because it is based on a profile, it can generate alarms for previously unpublished attacks, as long as the new attack deviates from normal user activity, so it can detect new attacks the first time they are used.
  51. Anomaly Detection – Drawbacks: High initial training time; no protection of the network during training; difficulty defining "normal"; user profiles must be updated as habits change; false negatives are generated if attack traffic appears normal; and the alarms it raises can be complicated and hard to understand.
  52. False Negative When an IDS fails to generate an alarm for known intrusive activity, it is called a false negative. False negatives represent actual attacks that the IDS missed even though it is programmed to detect the attack. Most IDS developers tend to design their systems to prevent false negatives. It is difficult, however, to totally eliminate false negatives. Furthermore, as you sensitize your system to report fewer false negatives, you tend to increase the number of false positives that get reported. It is a constant trade-off.
  53. Signature-Based Detection It looks for intrusive activity that matches specific signatures. These signatures are based on a set of rules that match typical patterns and exploits used by attackers to gain access to your network. Highly skilled network engineers research/study known attacks and vulnerabilities to develop the rules for each signature.
  54. Signature-Based Detection: Benefits: Signatures are based on known intrusive activity; detected attacks are well-defined; the system is easy to understand; and attacks can be detected immediately after installation.
  55. Signature-Based Detection: Drawbacks Maintaining state information (event horizon*) Updating signature database Attacks that circumvent the IDS (false negatives) Inability to detect unknown attacks
  56. *Event Horizon: The maximum amount of time over which an attack signature can be successfully detected (from the initial data to the final data needed to complete the attack signature) is known as the event horizon. The IDS must maintain state information during this event horizon. The important point to understand is that your IDS cannot maintain state information indefinitely; therefore, it uses the event horizon to limit the amount of time that it stores the state information.
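The event-horizon bookkeeping can be sketched as a state table with timed expiry; the horizon value and signature name below are hypothetical:

```python
# Sketch of event-horizon state handling: partial attack state is kept only for
# a bounded time window and discarded once it ages past the horizon.
EVENT_HORIZON = 60  # seconds an IDS keeps partial-match state (hypothetical)

state = {}  # signature-id → timestamp of the first matching packet

def note_partial_match(sig_id, now):
    state.setdefault(sig_id, now)

def purge_expired(now):
    for sig_id in [s for s, ts in state.items() if now - ts > EVENT_HORIZON]:
        del state[sig_id]

note_partial_match("multi-step-attack", now=0)
purge_expired(now=30)                # still within the horizon: state retained
print("multi-step-attack" in state)  # → True
purge_expired(now=100)               # past the horizon: state discarded
print("multi-step-attack" in state)  # → False
```

This also illustrates the corresponding false-negative risk: an attack whose steps are spread further apart than the horizon never completes a match.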
  57. 2. IDS Monitoring Locations
  58. IDS Monitoring Locations Examine where an IDS watches for the intrusive traffic IDS typically monitors one of two locations: The host The network
  59. HIDS: Checks for intrusions by examining information at the host or OS level. These IDSs examine many aspects of your host, such as system calls, audit logs, and error messages. (* Agent = IDS agent)
  60. HIDS: Benefits: Because a host-based IDS examines traffic after it reaches the target of the attack (assuming the host is the target), it has first-hand information on the success of the attack. With a network-based IDS, alarms are generated on known intrusive activity, but only a HIDS can determine the actual success or failure of an attack. A HIDS can also use the host's own IP stack to easily deal with variable Time-To-Live (TTL)* attacks, which are difficult to detect using a network-based IDS.
  61. *Variable Time-to-Live Attacks All packets traveling across the network have a TTL value. Each router that handles the packet decreases the TTL value by one. If the TTL value reaches zero, the packet is discarded. An attacker can launch an attack that includes bogus packets with smaller TTL values than the packets that make up the real attack. If the network-based sensor sees all the packets, but the target host sees only the actual attack packets, the attacker has managed to distort the information that the sensor used, causing the sensor to potentially miss the attack.
  62. Variable Time-to-Live Attacks: The figure illustrates this attack: the fake packets start with a TTL of 3, whereas the real attack packets start with a TTL of 7. The sensor sees both sets of packets, but the target host sees only the real attack packets. Although this attack is possible, it is not easy to use in practice, because it requires a detailed understanding of the network topology and the location of IDS sensors.
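The figure's logic can be sketched as a small simulation: a packet reaches an observer only if its initial TTL exceeds the hop count to that observer. The hop counts below are hypothetical:

```python
# Sketch of the TTL trick: a packet survives to an observer only if its initial
# TTL exceeds the number of router hops to that observer.
def seen_by(packets, hops):
    return [name for name, ttl in packets if ttl > hops]

packets = [("fake-1", 3), ("fake-2", 3), ("real-1", 7), ("real-2", 7)]

SENSOR_HOPS = 2  # NIDS sensor near the network edge (hypothetical)
TARGET_HOPS = 5  # target host deeper in the network (hypothetical)

print(seen_by(packets, SENSOR_HOPS))  # sensor sees fake AND real packets
print(seen_by(packets, TARGET_HOPS))  # host sees only the real attack packets
```

The sensor's view of the attack is thus polluted with packets the target never receives, which is exactly the distortion the slide describes.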
  63. HIDS: Drawbacks: Limited network view: most host-based IDSs do not detect port scans against the host, and it is almost impossible for a host-based IDS to detect reconnaissance ("spy") scans against your network, even though these scans are a key indicator of further attacks. Must operate on every OS on the network. A HIDS must communicate its information to some type of central management facility, but an attack might take a host's network communication offline, after which that host cannot communicate any information to the central management facility.
  64. NIDS: Benefits A network-based IDS examines packets to locate attacks against the network. The IDS sniffs the network packets and compares the traffic against signatures for known intrusive activity. Benefits: Overall network perspective Does not have to run on every OS on the network
  65. NIDS: Drawbacks: Bandwidth: as network pipes grow larger and larger, it is difficult to monitor all the traffic going across the network at a single point in real time without missing packets, so more sensors need to be installed at multiple locations throughout the network. Fragment reassembly: network packets have a maximum size; if a connection needs to send data that exceeds this maximum bound, the data must be sent in multiple packets (fragmentation). When the receiving host gets the fragmented packets, it must reassemble the data, and not all hosts perform the reassembly process in the same order: some OSs start with the last fragment and work toward the first, while others start at the first and work toward the last. The order does not matter if the fragments do not overlap, but if they overlap, the results differ for each reassembly process. Encryption: a network-based IDS cannot examine the contents of encrypted traffic.
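The overlapping-fragment ambiguity can be sketched by reassembling the same two fragments under "first fragment wins" and "last fragment wins" policies; the offsets and payloads below are made up for illustration:

```python
# Sketch of why overlapping fragments matter: two hosts reassembling the same
# fragments in a different order recover different payloads.
def reassemble(fragments, favor="first"):
    """fragments: list of (offset, data). 'first' keeps the earliest-seen copy
    of each overlapping byte; 'last' lets later fragments overwrite."""
    order = fragments if favor == "last" else reversed(fragments)
    buf = {}
    for offset, data in order:
        for i, byte in enumerate(data):
            buf[offset + i] = byte
    return bytes(buf[i] for i in sorted(buf))

frags = [(0, b"GET /safe.html"), (5, b"evil.html")]  # bytes 5-13 overlap

print(reassemble(frags, favor="first"))  # → b'GET /safe.html'
print(reassemble(frags, favor="last"))   # → b'GET /evil.html'
```

If the NIDS reassembles one way and the target host the other, the attacker can show the sensor a harmless request while the host sees the malicious one.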
  66. Hybrid Characteristics
  67. Hybrid Characteristics Hybrid systems combine the functionality from several different IDS categories to create a system that provides more functionality than a traditional IDS. Some hybrid systems might incorporate multiple triggering techniques, such as anomaly and signature-based detection. Other hybrid IDSs might combine multiple monitoring locations, such as host-based and network-based monitoring. The major hurdle to constructing a hybrid IDS is getting the various components to operate in harmony, and presenting the information to the end user in a user-friendly manner.
  68. Hybrid Characteristics: Benefits Different IDS technologies are combined. A combined host-based and network-based system, for example, provides the overall network visibility of a network-based IDS, as well as detailed host-level visibility. Combining anomaly detection with misuse detection can produce a signature-based IDS that can detect previously unknown attacks. Each hybrid system needs to be analyzed on its unique strengths.
  69. Hybrid Characteristics: Drawbacks Getting these different technologies to work together in a single IDS can be difficult Normally, hybrid systems attempt to merge multiple diverse intrusion detection technologies. Combining these technologies can produce a stronger IDS. Presenting the information from these multiple technologies to the end user in a coordinated fashion can also be a challenge. Each hybrid system needs to be examined to understand its strengths and weaknesses
  70. Summary: The common triggering mechanisms are anomaly detection and signature-based detection. Anomaly detection is more complex than signature-based detection, but it provides the capability to detect previously unpublished attacks. Each type of IDS has its own strengths and weaknesses.
  71. Summary: Anomaly detection can uncover critical information in data and is highly applicable in various application domains. The nature of the anomaly detection problem depends on the application domain, so different problem formulations need different approaches.
  72. Exercise - Discuss What are the two major types of IDS monitoring? What are the two types of IDS triggering? What are some drawbacks to anomaly detection? What is the difference between a false positive and a false negative?