
Bayesian Bot Detection Based on DNS Traffic Similarity



  1. Bayesian Bot Detection Based on DNS Traffic Similarity Ricardo Villamarín-Salomón, José Carlos Brustoloni Department of Computer Science University of Pittsburgh SAC '09, Proceedings of the 2009 ACM symposium on Applied Computing 陳怡寧

  2. Outline • Introduction • Bayesian method • Methodology • Experimental results • Discussion and limitations • Conclusion

  3. Introduction -- Problem • Many botnets have centralized command and control (C&C) servers with fixed IP addresses or domain names. • In such botnets, bots can be detected by their communication with hosts whose IP address or domain name is that of a known C&C server. • To evade detection, botmasters are increasingly obfuscating C&C communication, e.g., using fast-flux or P2P.

  4. Introduction -- Goal • Hypothesis: • Regardless of obfuscation, commands tend to cause similar activities in bots belonging to the same botnet, • through which they can be distinguished from other hosts. • Assume at least one bot in a botnet is known. • Then use a Bayesian approach to find other hosts with similar DNS traffic.

  5. Fast-flux C&C lookup (reconstructed from the slide's figure; M: mothership, B1: name servers, B2: web servers): (1) a normal host queries an FQDN at its normal DNS server, where the domains queried can be analyzed; (2) the DNS server is referred to B1 and (3) queries B1; (4) B1 asks M how to answer; (5) M answers with B2 and (6) B1 relays that answer; (7) the host sends an HTTP GET to B2; (8) B2 redirects the GET to M; (9) M responds with the malicious website; (10) the host downloads the website.

  6. Bayesian method (1/4) • B: blacklist (domain names of known C&C servers) • D_I: domain names queried by hosts in H_bl (hosts that queried a name in the blacklist B) • D_N: domain names queried by hosts in H - H_bl, which may include infected hosts not in H_bl as well as uninfected hosts.
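
A minimal sketch of these sets, assuming a hypothetical log `queries` that maps each host to the set of domain names it queried (this log format is an assumption, not the paper's data structure):

```python
def partition_domains(queries, blacklist):
    # H_bl: hosts that queried a blacklisted domain (known infected)
    h_bl = {h for h, domains in queries.items() if domains & blacklist}
    # D_I: domain names queried by hosts in H_bl
    d_i = set().union(*(queries[h] for h in h_bl))
    # D_N: domain names queried by hosts in H - H_bl (may include
    # infected hosts not in H_bl as well as uninfected hosts)
    d_n = set().union(*(queries[h] for h in queries.keys() - h_bl))
    return h_bl, d_i, d_n
```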

  7. Bayesian method (2/4) • Assign a score to every q ∈ Q indicating the probability that a host making it is infected. • Assign to each host a score that combines the scores of all the queries it made.

  8. Bayesian method (3/4) • q_j: a query • I_{h_i}: whether the host h_i is infected • P(q_j | I_{h_i}): the probability that a host h_i will send query q_j, given its infection status • By Bayes' rule, the score of q_j is S_h(q_j) = P(I_h=1 | q_j) = P(q_j | I_h=1) P(I_h=1) / [ P(q_j | I_h=1) P(I_h=1) + P(q_j | I_h=0) P(I_h=0) ]
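
A minimal sketch of this per-query score, assuming the likelihoods are estimated as the fraction of hosts in each class that issued q_j (the count-based estimation is an assumption, not spelled out on the slide):

```python
def query_score(n_bl, total_bl, n_rest, total_rest, prior=0.5):
    p_q_inf = n_bl / total_bl        # P(q_j | I_h = 1): fraction of H_bl hosts that sent q_j
    p_q_clean = n_rest / total_rest  # P(q_j | I_h = 0): fraction of other hosts that sent q_j
    num = p_q_inf * prior
    den = num + p_q_clean * (1.0 - prior)
    return num / den if den > 0 else prior

# With prior = 0.5 this reduces to p_q_inf / (p_q_inf + p_q_clean);
# e.g., if only blacklisted hosts queried a domain, the score is 1.
```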

  9. Bayesian method (4/4) • Assume P(I_{h_i}=1) = 0.5 • An extreme case: • If the only host querying the said domain belongs to H_bl, S_h(q_j) will be 1 (and 0 if h doesn't belong to H_bl) • So we need to tune this value…

  10. Beta distribution (1/2) • The Beta distribution is a continuous probability distribution defined on the interval (0, 1), parameterized by two positive shape parameters, α and β. • The tuning calculation is based on: • observed DNS traffic • x: the prior belief that a domain name that was never queried before will be queried by an infected host.

  11. Beta distribution (2/2) • n: the number of trials • s: the number of successes involving q • N_qj: the total number of times a query q_j has been made during the traffic monitoring period • f = α + β, a constant interpreted as the strength we want to give to x • α = f·x • The tuned score shrinks the raw score toward x: S'_h(q_j) = (f·x + N_qj·S_h(q_j)) / (f + N_qj) • With f=1, x=0.5, N_qj=0, the result is 0.5 => avoiding extreme values
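
A minimal sketch of this tuning, using the slide's example values as defaults:

```python
def smoothed_score(raw_score, n_q, x=0.5, f=1.0):
    # raw_score: S_h(q_j); n_q: N_qj, the number of times q_j was observed;
    # shrink the raw score toward the prior x with strength f = alpha + beta
    return (f * x + n_q * raw_score) / (f + n_q)

print(smoothed_score(1.0, 0))    # 0.5: an unseen query stays at the prior
print(smoothed_score(1.0, 100))  # ~0.995: evidence dominates the prior
```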

  12. Select indicators • Previous studies [14][15] show that robust indicators are obtained by taking the geometric mean of the host's most extreme S'_h(q) values (closest to 0 and 1). [14] Gary Robinson, "Spam Detection," [Online] http://radio.weblogs.com/0101454/stories/2002/09/16/spamDetection.html [15] Greg Louis, "Bogofilter Calculations: Comparing Geometric Mean with Fisher's Method for Combining Probabilities," [Online] http://www.bgl.nu/bogofilter/fisher.html

  13. Combined score • I(h) and N(h) indicate how likely it is that a host is infected or non-infected, respectively: I(h) = 1 - (∏_q (1 - S'_h(q)))^(1/n), N(h) = 1 - (∏_q S'_h(q))^(1/n) • Combined score definition: C(h) = (I(h) - N(h)) / (I(h) + N(h)), which lies in [-1, 1] • Modify C(h) so that we get a score between 0 and 1: P(h) = (1 + C(h)) / 2 • P(h) indicates our degree of belief that a host is infected.
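
A minimal sketch of the host-level combination, following the geometric-mean formulation of Robinson [14][15] that slides 12-13 reference (the selection of extreme values from slide 12 is assumed to have happened already):

```python
import math

def host_score(scores):
    # scores: smoothed S'_h(q) values for the host's queries, ideally
    # only the most extreme ones (closest to 0 and 1; see slide 12)
    n = len(scores)
    i_h = 1.0 - math.prod(1.0 - s for s in scores) ** (1.0 / n)
    n_h = 1.0 - math.prod(scores) ** (1.0 / n)
    if i_h + n_h == 0.0:
        return 0.5                   # no evidence either way
    c_h = (i_h - n_h) / (i_h + n_h)  # C(h), in [-1, 1]
    return (1.0 + c_h) / 2.0         # P(h), in [0, 1]

print(host_score([0.9, 0.95, 0.8]))  # high query scores -> P(h) near 1
```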

  14. Methodology • In these experiments, they use two sets: • computers known with certainty to be infected (variants of the same bot were run on controlled computers to collect DNS traffic of infected hosts) • hosts confidently known to be uninfected. • In the infected-host set, the traces of some hosts were altered to mask them (the others remain unmodified => unmasked hosts). • The Bayesian method is applied to the merged traces to observe: • which uninfected hosts were classified as such • which masked hosts were identified as infected, based on non-blacklisted names that both masked and unmasked hosts queried.

  15. Blacklist and Bot Specimens • Malware samples: MWCollect • Blacklist of C&C servers: Shadowserver • Bot selection criteria: • had the same name in both VirusTotal and Kaspersky antivirus • contacted the same known C&C server • had distinct MD5 signatures • Selected: Backdoor.Win32.SdBot.cmz, Net-Worm.Win32.Bobic.k

  16. DNS Data Collection • Uninfected hosts • CSL-1: 89 PCs in instructional laboratories of the University of Pittsburgh, February 13-14, 2008 • CSL-2: 89 PCs in instructional laboratories of the University of Pittsburgh, February 14-15, 2008 • Infected hosts • sandnet + a DNS server + bot specimens

  17. Test Traces • Altered traces: names are obfuscated by appending a non-existent ccTLD (.nv) to each blacklisted name. • SdBot-V1-1-T: the traces of all infected hosts except SdBot-V1-1 are altered.
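
A minimal sketch of this alteration (the helper name and example domain are hypothetical):

```python
def mask_name(name, blacklist):
    # mask an infected host's trace by appending the non-existent
    # ccTLD ".nv" to each blacklisted name it queried, so the
    # blacklist alone can no longer identify the host
    return name + ".nv" if name in blacklist else name

print(mask_name("evil-cnc.example.com", {"evil-cnc.example.com"}))
# -> evil-cnc.example.com.nv
```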

  18. Evaluation Metrics • Recall, or True Positive Rate: TPR = TP / (TP + FN) • False Positive Rate: FPR = FP / (FP + TN)
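
The same metrics in code, from standard confusion-matrix counts:

```python
def tpr(tp, fn):
    return tp / (tp + fn)  # fraction of infected hosts correctly flagged

def fpr(fp, tn):
    return fp / (fp + tn)  # fraction of uninfected hosts wrongly flagged
```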

  19. Experimental Results • We wanted to find parameters that could yield good classification results with trace CSL-1-SdBot-T, and then see if these same parameters were effective on trace CSL-2-Bobic.k-T. • We set T_h=0.95, P(I_h)=0.5, and the threshold of P(h) to 0.9. • How about T_l?

  20. Selecting T_l

  21. FPR & TPR

  22. True Positive • TPs were caused by the name ad.doubleclick.net, which was queried by 0.87% of the uninfected hosts and by the only misclassified masked host.

  23. CSL-2-Bobic.k-T

  24. Discussion and Limitations • FPs occur: • if the parameters are not well tuned • if a domain name is queried only by infected hosts and one or a few of the uninfected hosts. • FNs occur: • if the parameters are not well tuned • when very popular domain names are queried by both infected and uninfected hosts during a time period.

  25. Conclusion • Proposed and evaluated a Bayesian method for botnet detection. • In this study, we found that the technique successfully recognized C&C servers with multiple domain names, while at the same time generating few or no false positives.

  26. Comments • The sample size of the DNS traffic of infected hosts is too small. • Are the parameters of the Bayesian method really suitable for all kinds of bots? • We could use the bots found by M8000 as seeds and collect their DNS traffic to find other unidentified infected hosts.
