
Lecture 14




  1. Lecture 14 Data mining and knowledge discovery • Introduction, or what is data mining? • Data warehouse and query tools • Decision trees • Case study: Profiling people with high blood pressure • Summary

  2. What is data mining? • Data is what we collect and store, and knowledge is what helps us to make informed decisions. • The extraction of knowledge from data is called data mining. • Data mining can also be defined as the exploration and analysis of large quantities of data in order to discover meaningful patterns and rules. • The ultimate goal of data mining is to discover knowledge.

  3. Why data mining? • The explosive growth of data: from terabytes to petabytes • Data collection and data availability • Automated data collection tools, database systems, the Web, a computerised society • Major sources of abundant data • Business: Web, e-commerce, transactions, stocks, … • Science: remote sensing, bioinformatics, scientific simulation, … • Society and everyone: news, digital cameras, YouTube • We are drowning in data, but starving for knowledge!

  4. Why Not Traditional Data Analysis? • Tremendous amount of data • Algorithms must be highly scalable to handle terabytes of data • High dimensionality of data • A microarray data set may have tens of thousands of dimensions

  5. High complexity of data • Data streams and sensor data • Time-series data, temporal data, sequence data • Structured data, graphs, social networks and multi-linked data • Heterogeneous databases and legacy databases • Spatial, spatiotemporal, multimedia, text and Web data • Software programs, scientific simulations • New and sophisticated applications

  6. Knowledge Discovery (KDD) Process • Data mining is the core of the knowledge discovery process, which flows: Databases → Data Cleaning and Data Integration → Data Warehouse → Selection of Task-relevant Data → Data Mining → Pattern Evaluation → Knowledge

  7. KDD Process: Several Key Steps • Learning the application domain • relevant prior knowledge and goals of application • Creating a target data set: data selection • Data cleaning and preprocessing: (may take 60% of effort!) • Data reduction and transformation • Find useful features, dimensionality/variable reduction, invariant representation

  8. KDD Process: Several Key Steps • Choosing functions of data mining • summarization, classification, regression, association, clustering • Choosing the mining algorithm(s) • Data mining: search for patterns of interest • Pattern evaluation and knowledge presentation • visualization, transformation, removing redundant patterns, etc. • Use of discovered knowledge
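The KDD steps above can be sketched end to end in a few lines. Everything here is invented for illustration (the records, field names, and the trivial age-threshold "pattern"); real pipelines operate on far larger data with real mining algorithms:

```python
# Minimal KDD sketch: clean -> select -> mine -> evaluate (toy data).
raw = [
    {"age": 34, "income": 52000, "bought": "yes"},
    {"age": None, "income": 48000, "bought": "no"},   # missing value
    {"age": 61, "income": 75000, "bought": "yes"},
    {"age": 29, "income": 31000, "bought": "no"},
]

# 1. Data cleaning: drop records with missing fields.
clean = [r for r in raw if all(v is not None for v in r.values())]

# 2. Data selection / transformation: keep only task-relevant features.
target = [(r["age"], r["bought"]) for r in clean]

# 3. "Data mining": a trivial pattern search (does an age threshold
#    separate buyers from non-buyers?).
pattern = all(b == ("yes" if a >= 34 else "no") for a, b in target)

# 4. Pattern evaluation and presentation.
print(f"cleaned {len(clean)} of {len(raw)} records; pattern holds: {pattern}")
```
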

  9. Data Mining: Confluence of Multiple Disciplines • Database technology • Statistics • Machine learning • Pattern recognition • Algorithms • Visualization • Other disciplines

  10. Architecture: Typical Data Mining System • Graphical user interface • Pattern evaluation • Data mining engine • Knowledge base • Database or data warehouse server (data cleaning, integration and selection) • Data sources: database, data warehouse, World-Wide Web and other information repositories

  11. Data Mining Functionalities (1) • Frequent patterns, association, correlation vs. causality • Diaper → Beer [support 0.5%, confidence 75%] (correlation or causality?) • Classification and prediction • Construct models (functions) that describe and distinguish classes or concepts for future prediction • E.g., classify countries based on climate, or classify cars based on gas mileage • Predict some unknown or missing numerical values
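The two bracketed figures in a rule like Diaper → Beer are its support and confidence. A sketch of how they are computed, over invented toy transactions (the 0.5%/75% figures on the slide come from a much larger dataset):

```python
# Support and confidence for an association rule A -> B over toy data.
transactions = [
    {"diaper", "beer", "milk"},
    {"diaper", "beer"},
    {"diaper", "bread"},
    {"beer", "chips"},
    {"diaper", "beer", "chips"},
]

def support(itemset, db):
    # Fraction of transactions containing every item of the itemset.
    return sum(itemset <= t for t in db) / len(db)

def confidence(antecedent, consequent, db):
    # Of the transactions containing the antecedent, how many also
    # contain the consequent?
    return support(antecedent | consequent, db) / support(antecedent, db)

print(support({"diaper", "beer"}, transactions))       # 3 of 5 -> 0.6
print(confidence({"diaper"}, {"beer"}, transactions))  # 3 of 4 -> 0.75
```
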

  12. Data Mining Functionalities (2) • Cluster analysis • Class label is unknown: group data to form new classes, e.g., cluster houses to find distribution patterns • Maximizing intra-class similarity & minimizing inter-class similarity • Outlier analysis • Outlier: a data object that does not comply with the general behavior of the data • Noise or exception? Useful in fraud detection and rare-events analysis
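A minimal outlier-analysis sketch in the spirit of the fraud-detection example: flag values far from the mean in units of standard deviation. The transaction amounts and the 2.0 cutoff are illustrative assumptions, not from the slides:

```python
# Simple z-score outlier rule on invented transaction amounts.
from statistics import mean, stdev

amounts = [52, 49, 55, 48, 51, 50, 53, 240]  # one suspicious transaction

mu, sigma = mean(amounts), stdev(amounts)
# Keep values more than 2 standard deviations from the mean.
outliers = [x for x in amounts if abs(x - mu) / sigma > 2.0]
print(outliers)
```

Whether such a flagged value is noise or a genuine exception (e.g., fraud) still requires domain judgement, as the slide notes.
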

  13. Data Mining Functionalities (3) • Trend and evolution analysis • Trend and deviation: e.g., regression analysis • Sequential pattern mining: e.g., digital camera → large SD memory card • Periodicity analysis • Similarity-based analysis • Other pattern-directed or statistical analyses

  14. Top-10 Most Popular DM Algorithms: 18 Identified Candidates (I) • Classification • #1. C4.5: Quinlan, J. R. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993. • #2. CART: L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth, 1984. • #3. k-Nearest Neighbours (kNN): Hastie, T. and Tibshirani, R. 1996. Discriminant Adaptive Nearest Neighbor Classification. TPAMI 18(6). • #4. Naive Bayes: Hand, D. J. and Yu, K. 2001. Idiot's Bayes: Not So Stupid After All? Internat. Statist. Rev. 69, 385-398.
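Of the classifiers listed above, kNN is simple enough to sketch directly: classify a query point by majority vote among its k nearest training points. The 2-D points and k = 3 here are invented for illustration:

```python
# Minimal k-nearest-neighbours classifier on toy 2-D points.
from collections import Counter
import math

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.8), "B"), ((4.9, 5.1), "B")]

def knn(query, k=3):
    # Rank training points by Euclidean distance to the query.
    ranked = sorted(train, key=lambda p: math.dist(query, p[0]))
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

print(knn((1.1, 0.9)))  # nearest neighbours are class "A"
```
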

  15. (II) • Statistical Learning • #5. SVM: Vapnik, V. N. 1995. The Nature of Statistical Learning Theory. Springer-Verlag. • #6. EM: McLachlan, G. and Peel, D. 2000. Finite Mixture Models. J. Wiley, New York. • Association Analysis • #7. Apriori: Rakesh Agrawal and Ramakrishnan Srikant. Fast Algorithms for Mining Association Rules. In VLDB '94. • #8. FP-Tree: Han, J., Pei, J., and Yin, Y. 2000. Mining frequent patterns without candidate generation. In SIGMOD '00.
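The key idea behind Apriori (#7) is that a k-itemset can only be frequent if all of its (k-1)-subsets are frequent, which prunes the candidate space level by level. A sketch on invented transactions with a minimum support of 2 occurrences:

```python
# Level-wise Apriori frequent-itemset mining on toy transactions.
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"},
                {"b", "c"}, {"a", "b", "c"}]
min_support = 2  # minimum number of supporting transactions

def frequent_itemsets(db, min_sup):
    items = sorted({i for t in db for i in t})
    frequent, k = [], 1
    level = [frozenset([i]) for i in items]
    while level:
        # Count support; keep only the frequent itemsets of this size.
        kept = [s for s in level if sum(s <= t for t in db) >= min_sup]
        frequent += kept
        # Candidate generation: unions of frequent k-sets that have
        # size k+1, pruned by the Apriori property (every subset of a
        # candidate must itself be frequent).
        k += 1
        cand = {a | b for a in kept for b in kept if len(a | b) == k}
        level = [c for c in cand
                 if all(frozenset(s) in kept for s in combinations(c, k - 1))]
    return frequent

print(sorted(map(sorted, frequent_itemsets(transactions, min_support))))
```
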

  16. (III) • Link Mining • #9. PageRank: Brin, S. and Page, L. 1998. The anatomy of a large-scale hypertextual Web search engine. In WWW-7, 1998. • #10. HITS: Kleinberg, J. M. 1998. Authoritative sources in a hyperlinked environment. SODA, 1998.
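PageRank (#9) can be sketched as power iteration: repeatedly redistribute each node's rank along its outgoing links, damped so some rank always flows uniformly to all pages. The three-node link graph below is invented; the damping factor 0.85 is the value used in the original paper:

```python
# Power-iteration PageRank on a tiny invented link graph.
def pagerank(links, d=0.85, iters=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Every node gets the uniform (1 - d) share...
        new = {n: (1 - d) / len(nodes) for n in nodes}
        # ...plus a damped share of each in-neighbour's rank.
        for n, outs in links.items():
            for m in outs:
                new[m] += d * rank[n] / len(outs)
        rank = new
    return rank

# A links to B and C; B links to C; C links back to A.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))
```

Here "C", which receives links from both other pages, ends up with the highest rank.
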

  17. (IV) • Clustering • #11. K-Means: MacQueen, J. B., Some methods for classification and analysis of multivariate observations, in Proc. 5th Berkeley Symp. Mathematical Statistics and Probability, 1967. • #12. BIRCH: Zhang, T., Ramakrishnan, R., and Livny, M. 1996. BIRCH: an efficient data clustering method for very large databases. In SIGMOD '96. • Bagging and Boosting • #13. AdaBoost: Freund, Y. and Schapire, R. E. 1997. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 55, 1 (Aug. 1997), 119-139.
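K-Means (#11) alternates two steps until the centres stop moving: assign each point to its nearest centre, then move each centre to the mean of its assigned points. A one-dimensional sketch with invented data and initial centres:

```python
# One-dimensional K-Means: assignment step, then update step.
def kmeans_1d(points, centres, iters=20):
    for _ in range(iters):
        clusters = {c: [] for c in centres}
        for p in points:                      # assignment step
            nearest = min(centres, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        centres = sorted(sum(v) / len(v)      # update step
                         for v in clusters.values() if v)
    return centres

data = [1.0, 1.5, 2.0, 9.0, 10.0, 11.0]
print(kmeans_1d(data, [1.0, 9.0]))  # -> [1.5, 10.0]
```
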

  18. (V) • Sequential Patterns • #14. GSP: Srikant, R. and Agrawal, R. 1996. Mining Sequential Patterns: Generalizations and Performance Improvements. In Proceedings of the 5th International Conference on Extending Database Technology, 1996. • #15. PrefixSpan: J. Pei, J. Han, B. Mortazavi-Asl, H. Pinto, Q. Chen, U. Dayal and M-C. Hsu. PrefixSpan: Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth. In ICDE '01. • Integrated Mining • #16. CBA: Liu, B., Hsu, W. and Ma, Y. M. Integrating classification and association rule mining. KDD-98.

  19. (VI) • Rough Sets • #17. Finding reduct: Zdzislaw Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer Academic Publishers, Norwell, MA, 1992 • Graph Mining • #18. gSpan: Yan, X. and Han, J. 2002. gSpan: Graph-Based Substructure Pattern Mining. In ICDM '02.

  20. Top-10 Algorithm Finally Selected at ICDM’06 • #1: C4.5 (61 votes) • #2: K-Means (60 votes) • #3: SVM (58 votes) • #4: Apriori (52 votes) • #5: EM (48 votes) • #6: PageRank (46 votes) • #7: AdaBoost (45 votes) • #7: kNN (45 votes) • #7: Naive Bayes (45 votes) • #10: CART (34 votes)

  21. Conferences and Journals on Data Mining • KDD Conferences • ACM SIGKDD Int. Conf. on Knowledge Discovery in Databases and Data Mining (KDD) • SIAM Data Mining Conf. (SDM) • (IEEE) Int. Conf. on Data Mining (ICDM) • Conf. on Principles and practices of Knowledge Discovery and Data Mining (PKDD) • Pacific-Asia Conf. on Knowledge Discovery and Data Mining (PAKDD)

  22. Other related conferences • ACM SIGMOD • VLDB • (IEEE) ICDE • WWW, SIGIR • ICML, CVPR, NIPS • Journals • Data Mining and Knowledge Discovery (DAMI or DMKD) • IEEE Trans. On Knowledge and Data Eng. (TKDE) • KDD Explorations • ACM Trans. on KDD


  25. Data warehouse • Modern organisations must respond quickly to any change in the market. This requires rapid access to current data normally stored in operational databases. • However, an organisation must also determine which trends are relevant. This task is accomplished with access to historical data that are stored in large databases called data warehouses.

  26. The main characteristic of a data warehouse is its capacity. A data warehouse is really big – it includes millions, even billions, of data records. • The data stored in a data warehouse is • time dependent – linked together by the times of recording – and • integrated – all relevant information from the operational databases is combined and structured in the warehouse.

  27. Query tools • A data warehouse is designed to support decision-making in the organisation. The information needed can be obtained with query tools. • Query tools are assumption-based – a user must ask the right questions.

  28. How is data mining applied in practice? • Many companies use data mining today, but refuse to talk about it. • In direct marketing, data mining is used for targeting people who are most likely to buy certain products and services. • In trend analysis, it is used to determine trends in the marketplace, for example, to model the stock market. • In fraud detection, data mining is used to identify insurance claims, cellular phone calls and credit card purchases that are most likely to be fraudulent.

  29. Motivation: Finding latent relationships in data • What products were often purchased together? Beer and diapers?! • What are the subsequent purchases after buying a PC? • What kinds of DNA are sensitive to this new drug? • Can we automatically classify web documents?

  30. Applications • Market basket data analysis (shelf space planning/increasing sales/promotion) • cross-marketing • catalog design • sale campaign analysis • Web log (click stream) analysis • DNA sequence analysis

  31. Data mining tools Data mining is based on intelligent technologies already discussed in this book. It often applies such tools as neural networks and neuro-fuzzy systems. However, the most popular tool used for data mining is a decision tree.

  32. Decision trees A decision tree can be defined as a map of the reasoning process. It describes a data set by a tree-like structure. Decision trees are particularly good at solving classification problems.

  33. ID3: a training set of 12 examples, each described by three attributes (height, hair colour, eye colour) and labelled with one of two classes, w or e: • (tall, blond, blue) → w • (short, silver, blue) → w • (short, black, blue) → w • (tall, blond, brown) → w • (tall, silver, blue) → w • (short, blond, blue) → w • (short, black, brown) → e • (tall, silver, black) → e • (short, black, brown) → e • (tall, black, brown) → e • (tall, black, black) → e • (short, black, black) → e
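ID3 chooses the attribute with the highest information gain, i.e., the one that reduces class entropy the most. A sketch on the 12 examples above (reading the three attributes as height, hair colour and eye colour is an assumption based on their values):

```python
# Information gain for each attribute of the 12 ID3 training examples.
from math import log2
from collections import Counter

examples = [
    (("tall", "blond", "blue"), "w"), (("short", "silver", "blue"), "w"),
    (("short", "black", "blue"), "w"), (("tall", "blond", "brown"), "w"),
    (("tall", "silver", "blue"), "w"), (("short", "blond", "blue"), "w"),
    (("short", "black", "brown"), "e"), (("tall", "silver", "black"), "e"),
    (("short", "black", "brown"), "e"), (("tall", "black", "brown"), "e"),
    (("tall", "black", "black"), "e"), (("short", "blond", "black"), "e"),
]

def entropy(labels):
    counts = Counter(labels)
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in counts.values())

def gain(attr_index):
    # Entropy before the split minus the weighted entropy after it.
    total = entropy([label for _, label in examples])
    rem = 0.0
    for v in {x[attr_index] for x, _ in examples}:
        subset = [label for x, label in examples if x[attr_index] == v]
        rem += len(subset) / len(examples) * entropy(subset)
    return total - rem

attrs = ["height", "hair", "eyes"]
best = max(range(3), key=gain)
print(attrs[best])  # eye colour separates w from e best
```
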

  34. A decision tree consists of nodes, branches and leaves. • The top node is called the root node. The tree always starts from the root node and grows down by splitting the data at each level into new nodes. The root node contains the entire data set (all data records), and child nodes hold respective subsets of that set. • All nodes are connected by branches. • Nodes that are at the end of branches are called terminal nodes, or leaves.

  35. How does a decision tree select splits? • A split in a decision tree corresponds to the predictor with the maximum separating power. The best split does the best job in creating nodes where a single class dominates. • One of the best known methods of calculating the predictor’s power to separate data is based on the Gini coefficient of inequality.
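A split's separating power can be scored as the weighted Gini impurity of the child nodes it creates; the best split minimises that impurity, i.e., produces children where a single class dominates. The records below (age vs. high blood pressure, echoing the case study) and the two candidate thresholds are invented for illustration:

```python
# Gini impurity of the children produced by a candidate split.
def gini_impurity(labels):
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_impurity(records, predicate):
    left = [label for x, label in records if predicate(x)]
    right = [label for x, label in records if not predicate(x)]
    n = len(records)
    return (len(left) / n) * gini_impurity(left) \
         + (len(right) / n) * gini_impurity(right)

# Records: (age, has high blood pressure?)
records = [(25, "no"), (30, "no"), (42, "no"),
           (55, "yes"), (61, "yes"), (68, "yes")]

# Compare two candidate age thresholds; lower impurity is better.
for threshold in (40, 50):
    print(threshold, round(split_impurity(records, lambda x: x < threshold), 3))
```

Splitting at age 50 yields two pure children (impurity 0.0), so the tree would prefer it over splitting at 40.
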

  36. Major Issues in Data Mining(1) • Mining methodology • Mining different kinds of knowledge from diverse data types, e.g., bio, stream, Web • Performance: efficiency, effectiveness, and scalability • Pattern evaluation: the interestingness problem • Incorporation of background knowledge • Handling noise and incomplete data • Parallel, distributed and incremental mining methods • Integration of the discovered knowledge with existing one: knowledge fusion

  37. (2) • User interaction • Data mining query languages and ad-hoc mining • Expression and visualization of data mining results • Interactive mining of knowledge at multiple levels of abstraction • Applications and social impacts • Domain-specific data mining & invisible data mining • Protection of data security, integrity, and privacy

  38. Summary(1) • Data mining: Discovering interesting patterns from large amounts of data • A natural evolution of database technology, in great demand, with wide applications • A KDD process includes data cleaning, data integration, data selection, transformation, data mining, pattern evaluation, and knowledge presentation

  39. (2) • Mining can be performed in a variety of information repositories • Data mining functionalities: characterization, discrimination, association, classification, clustering, outlier and trend analysis, etc. • Data mining systems and architectures • Major issues in data mining

  40. Thank you

  41. An example of a decision tree

  42. The Gini coefficient The Gini coefficient is a measure of how well the predictor separates the classes contained in the parent node. Corrado Gini, an Italian statistician, introduced it as a rough measure of the amount of inequality in the income distribution of a country.

  43. Computation of the Gini coefficient The Gini coefficient is calculated as the area between the curve and the diagonal divided by the area below the diagonal. For a perfectly equal wealth distribution, the Gini coefficient is equal to zero.
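The computation described above can be sketched directly: sort the incomes, build the Lorenz curve from cumulative income shares, and compare its area to the area under the diagonal. The incomes are invented, and the area uses a simple trapezoidal approximation:

```python
# Gini coefficient from the Lorenz curve (trapezoid rule).
def gini_coefficient(incomes):
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    # Cumulative income shares give the points of the Lorenz curve.
    lorenz, cum = [0.0], 0.0
    for x in xs:
        cum += x
        lorenz.append(cum / total)
    # Area under the Lorenz curve; each step spans 1/n of the x-axis.
    area = sum((lorenz[i] + lorenz[i + 1]) / (2 * n) for i in range(n))
    # Area between diagonal and curve, divided by area below diagonal.
    return (0.5 - area) / 0.5

print(gini_coefficient([10, 10, 10, 10]))  # perfectly equal -> 0.0
```

As the slide states, a perfectly equal distribution gives a coefficient of zero; concentrating all income in one person pushes it towards one.
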

  44. Selecting an optimal decision tree: (a) Splits selected by Gini

  45. Selecting an optimal decision tree: (b) Splits selected by guesswork
