
Knowledge Acquisition and Modelling




  1. Knowledge Acquisition and Modelling Decision Tables and Decision Trees

  2. Decision Trees • Map of a reasoning process • Described in a tree-like structure • A graphical representation of a decision situation • Decision points are connected by arcs and terminate in ovals (the leaves) • Main components • Decision points, represented by nodes • Actions • Particular choices from a decision point, represented by arcs (or straight lines) • Can be drawn left to right or top down

  3. Decision Trees • Essentially flowcharts • A natural order of ‘micro decisions’ (Boolean yes/no decisions) to reach a conclusion • In simplest form all you need is • A start (the root) • A cascade of Boolean decisions (each with exactly two outbound branches) • A set of terminal nodes representing all the ‘leaves’ of the decision tree

  4. An Example • Bank loan application • Classify application: approved class, denied class • Criteria: target class approved if 3 binary attributes have certain values: • (a) borrower has good credit history (credit rating in excess of some threshold) • (b) loan amount less than some percentage of collateral value (e.g., 80% of home value) • (c) borrower has income to make payments on loan • Possible scenarios = 2^3 = 8 • If the parameters for splitting the nodes can be adjusted, the number of scenarios grows exponentially.
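A minimal Python sketch of this approval rule; the function name, the 650 credit-rating threshold, and the input format are illustrative assumptions, not part of the slides:

    # Hypothetical loan-approval rule: all three binary criteria must hold.
    def approve_loan(credit_rating: float, loan_amount: float,
                     collateral_value: float, income_ok: bool) -> str:
        good_history = credit_rating > 650               # (a), assumed threshold
        low_ltv = loan_amount < 0.80 * collateral_value  # (b)
        if good_history and low_ltv and income_ok:       # (c) is income_ok
            return "approved"
        return "denied"

    # Three binary attributes give 2^3 = 8 possible scenarios.
    print(approve_loan(700, 150_000, 200_000, True))   # approved
    print(approve_loan(600, 150_000, 200_000, True))   # denied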

  5. How They Work • Decision rules - partition sample of data • Terminal node (leaf) indicates the class assignment • Tree partitions samples into mutually exclusive groups • One group for each terminal node  • All paths • start at the root node • end at a leaf (terminal node = decision node)

  6. How They Work • Each path represents a decision rule • joining (AND) of all the tests along that path • separate paths that result in the same class are disjunctions (ORs) • All paths - mutually exclusive • for any one case - only one path will be followed • false decisions on the left branch • true decisions on the right branch

  7. Disjunctive Normal Form • Non-terminal node: model identifies an attribute to be tested • test splits the attribute into mutually exclusive disjoint sets • splitting continues until a node holds one class (terminal node or leaf) • Structure is in disjunctive normal form • limits the form of a rule to conjunctions (AND-ing) of terms • allows disjunction (OR-ing) over a set of rules
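For instance, a hypothetical two-path class description in DNF (my illustration, not from the slides) would read:

    class = (A AND B) OR (NOT A AND C)

Each parenthesized conjunction is the set of tests along one root-to-leaf path, and the OR joins the separate paths that end in the same class.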

  8. Predicting Commute Time • If we leave at 10 AM and there are no cars stalled on the road, what will our commute time be? • [Decision tree diagram: root ‘Leave At’ branches on 8 AM, 9 AM, 10 AM; 8 AM → Long; 9 AM → ‘Accident?’ (Yes → Long, No → Medium); 10 AM → ‘Stall?’ (Yes → Long, No → Short)]

  9. Inductive Learning • Making a series of Boolean decisions and following the relevant branch: • Did we leave at 10 AM? • Did a car stall on the road? • Is there an accident on the road? • Answering each yes/no question allows traversal of tree to reach a conclusion

  10. Decision Tree as a Rule Set • IF hour == 8am THEN commute time = long • IF hour == 9am AND accident == yes THEN commute time = long • IF hour == 9am AND accident == no THEN commute time = medium • IF hour == 10am AND stall == yes THEN commute time = long • IF hour == 10am AND stall == no THEN commute time = short
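A direct translation of this rule set into Python; the function and argument names are assumptions for illustration:

    # Hypothetical encoding of the commute-time rule set above.
    def commute_time(hour: str, accident: bool, stall: bool) -> str:
        if hour == "8am":
            return "long"
        if hour == "9am":
            return "long" if accident else "medium"
        if hour == "10am":
            return "long" if stall else "short"
        raise ValueError(f"unexpected hour: {hour}")

    print(commute_time("10am", accident=False, stall=False))  # short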

  11. How to Create a Decision Tree • Make a list of attributes that we can measure • These attributes (for now) must be discrete • We then choose a target attribute that we want to predict • Then create an experience table that lists what we have seen in the past

  12. Sample Experience Table

  13. Choosing Attributes • The previous experience decision table showed 4 attributes: hour, weather, accident and stall • But the decision tree only showed 3 attributes: hour, accident and stall • Why is that?

  14. Choosing Attributes • Methods for selecting attributes show that weather is not a discriminating attribute • We use the principle of Occam’s Razor: Given a number of competing hypotheses, the simplest one is preferable

  15. Decision Tree Algorithms • The basic idea behind any decision tree algorithm is as follows (a sketch in code follows this slide): • Choose the best attribute(s) to split the remaining instances and make that attribute a decision node • Repeat this process recursively for each child • Stop when: • All the instances have the same target attribute value • There are no more attributes • There are no more instances
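A compact sketch of that recursion, assuming instances are dicts mapping attribute names to values; best_attribute is left as a trivial placeholder (ID3 would pick the split with the lowest weighted entropy, as on slide 19), and all names here are mine:

    from collections import Counter

    def best_attribute(instances, attributes, target):
        return attributes[0]   # placeholder; ID3 uses the entropy heuristic

    def build_tree(instances, attributes, target, default=None):
        if not instances:                        # no more instances
            return default
        labels = [row[target] for row in instances]
        majority = Counter(labels).most_common(1)[0][0]
        if len(set(labels)) == 1 or not attributes:
            return majority                      # pure node, or no attributes left
        best = best_attribute(instances, attributes, target)
        tree = {best: {}}
        for value in {row[best] for row in instances}:
            subset = [row for row in instances if row[best] == value]
            rest = [a for a in attributes if a != best]
            tree[best][value] = build_tree(subset, rest, target, majority)
        return tree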

  16. Identifying the Best Attributes • Refer back to our original decision tree • [Decision tree diagram: root ‘Leave At’ branches on 8 AM, 9 AM, 10 AM; 8 AM → Long; 9 AM → ‘Accident?’ (Yes → Long, No → Medium); 10 AM → ‘Stall?’ (Yes → Long, No → Short)] • How did we know to split on leave at, then on stall and accident, and not on weather?

  17. ID3 Heuristic • To determine the best attribute, we look at the ID3 heuristic • ID3 splits attributes based on their entropy • Entropy is a measure of disorder (uncertainty) in the data • Entropy is minimized when all values of the target attribute are the same • If we know that commute time will always be short, then entropy = 0 • Entropy is maximized when there is an equal chance of all values for the target attribute (i.e. the result is random) • If commute time = short in 3 instances, medium in 3 instances and long in 3 instances, entropy is maximized

  18. Entropy • Calculation of entropy: • Entropy(S) = −Σ(i=1 to l) (|Si| / |S|) · log2(|Si| / |S|) • S = set of examples • Si = subset of S with value vi under the target attribute • l = size of the range of the target attribute
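A minimal Python implementation of this formula (the input format, a list of target-attribute values, is an assumption):

    import math
    from collections import Counter

    # Entropy(S) = -sum over classes of (|Si|/|S|) * log2(|Si|/|S|)
    def entropy(labels):
        total = len(labels)
        return -sum((n / total) * math.log2(n / total)
                    for n in Counter(labels).values())

    # Slide 17's example: 3 short, 3 medium, 3 long maximizes entropy.
    print(entropy(["short"] * 3 + ["medium"] * 3 + ["long"] * 3))  # ~1.585 = log2(3)
    print(entropy(["short"] * 9))                                  # 0.0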

  19. ID3 • ID3 splits on attributes with the lowest entropy • We calculate the entropy for all values of an attribute as the weighted sum of subset entropies as follows: • Σ(i = 1 to k) (|Si| / |S|) · Entropy(Si), where k is the size of the range of the attribute we are testing • We can also measure information gain, the reduction in entropy achieved by the split (so the lowest weighted entropy gives the highest gain), as follows: • Gain = Entropy(S) − Σ(i = 1 to k) (|Si| / |S|) · Entropy(Si)
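A sketch of the weighted-entropy and information-gain calculations; rows are assumed to be dicts mapping attribute names to values (my format, not the slides'):

    import math
    from collections import Counter

    def entropy(labels):
        total = len(labels)
        return -sum((n / total) * math.log2(n / total)
                    for n in Counter(labels).values())

    # Weighted sum of subset entropies for one candidate attribute.
    def split_entropy(rows, attribute, target):
        total = len(rows)
        result = 0.0
        for value in {row[attribute] for row in rows}:
            subset = [row[target] for row in rows if row[attribute] == value]
            result += (len(subset) / total) * entropy(subset)
        return result

    # Information gain = entropy before the split - weighted entropy after.
    def information_gain(rows, attribute, target):
        before = entropy([row[target] for row in rows])
        return before - split_entropy(rows, attribute, target)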

  20. ID3 • Given our commute time sample set, we can calculate the entropy of each attribute at the root node

  21. Problems with ID3 • ID3 is not optimal • Uses expected entropy reduction, not actual reduction • Must use discrete (or discretized) attributes • What if we left for work at 9:30 AM? • We could break down the attributes into smaller values…

  22. Problems with ID3 • If we broke down leave time to the minute, we might get something like this: • [Tree diagram: one branch per departure minute, e.g. 8:02 AM → Long, 8:03 AM → Medium, 9:05 AM → Short, 9:07 AM → Long, 9:09 AM → Long, 10:02 AM → Short] • Since entropy is very low (in fact zero, with a single instance per leaf) for each branch, we have n branches with n leaves. This would not be helpful for predictive modeling.

  23. Problems with ID3 • Can use a technique known as discretization • choose cut points, such as 9 AM, for splitting continuous attributes • cut points generally lie in a subset of boundary points, where a boundary point is a point between two adjacent instances (in a sorted list) that have different target attribute values
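A minimal sketch of collecting candidate cut points at boundary points; the function name and the minutes-past-midnight encoding are my assumptions:

    # Candidate cut points for a continuous attribute: midpoints between
    # adjacent sorted values whose target labels differ (boundary points).
    def boundary_cut_points(values, labels):
        pairs = sorted(zip(values, labels))
        cuts = []
        for (v1, l1), (v2, l2) in zip(pairs, pairs[1:]):
            if l1 != l2 and v1 != v2:
                cuts.append((v1 + v2) / 2)
        return cuts

    # Departure times as minutes past midnight, with commute-time labels.
    times = [482, 483, 545, 547, 549, 602]    # 8:02, 8:03, 9:05, 9:07, 9:09, 10:02
    labels = ["long", "medium", "short", "long", "long", "short"]
    print(boundary_cut_points(times, labels)) # [482.5, 514.0, 546.0, 575.5]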

  24. Pruning (another technique for attribute selection) • Pre-Pruning • Decide during the building process when to stop adding attributes • (possibly based on their information gain) • May be problematic • Individually, attributes may not contribute much to a decision • But in combination with other attributes they may have a significant impact • Post-Pruning • waits until the full decision tree has been built and then prunes the attributes • Subtree Replacement • Subtree Raising

  25. Subtree Replacement • Entire subtree is replaced by a single leaf node • [Diagram: tree with root A; A’s children are node B and leaf 5; B’s children are node C and leaf 4; C’s children are leaves 1, 2, 3]

  26. Subtree Replacement • Node 6 replaced the subtree • Generalizes the tree a little more, but may increase accuracy • [Diagram: the same tree with C’s subtree collapsed: A’s children are B and leaf 5; B’s children are the new leaf 6 and leaf 4]
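A toy sketch of subtree replacement on the nested-dict trees used in the earlier sketches; the majority-class choice is a common post-pruning rule, and the accuracy check against a validation set is omitted here:

    from collections import Counter

    # Collect the class labels at the leaves of a {attr: {value: subtree}} tree.
    def leaf_labels(tree):
        if not isinstance(tree, dict):   # already a leaf
            return [tree]
        return [label
                for branches in tree.values()
                for subtree in branches.values()
                for label in leaf_labels(subtree)]

    # Replace a whole subtree by its majority-class leaf.
    def replace_with_leaf(tree):
        return Counter(leaf_labels(tree)).most_common(1)[0][0]

    subtree = {"C": {1: "yes", 2: "yes", 3: "no"}}
    print(replace_with_leaf(subtree))    # "yes" (the majority class)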

  27. Subtree Raising • Entire subtree is raised onto another node • [Diagram: tree with root A; A’s children are node B and leaf 5; B’s children are node C and leaf 4; C’s children are leaves 1, 2, 3]

  28. Subtree Raising • Entire subtree is raised onto another node • [Diagram: subtree C raised to replace B: A’s child is now C, with leaves 1, 2, 3] • This was not discussed in detail, as it is not clear whether it is really worthwhile (it is very time consuming)

  29. Problems with Decision Trees • While decision trees classify quickly, the time to build a tree may be higher than for other types of classifiers • Decision trees suffer from errors propagating through the tree • A very serious problem as the number of classes increases

  30. Error Propagation • Since decision trees work by a series of local decisions, what happens when one of these local decisions is wrong? • Every decision from that point on may be wrong • We may never return to the correct path of the tree

  31. Decision tree representation of salary decision • [Diagram from Modern Systems Analysis and Design, Fourth Edition, by Jeffrey A. Hoffer, Joey F. George, Joseph S. Valacich]

  32. Decision Tables • Used to lay out in tabular form all possible situations which a decision may encounter and to specify which action to take in each of these situations. • A matrix representation of the logic of a decision • Specifies the possible conditions and the resulting actions

  33. Terminology • Decision Table • A decision table is a tabular form that presents a set of conditions and their corresponding actions. • Condition Stubs • Condition stubs describe the conditions or factors that will affect the decision or policy. • They are listed in the upper section of the decision table. • Action Stubs • Action stubs describe, in the form of statements, the possible policy actions or decisions. • They are listed in the lower section of the decision table. • Rules • Rules describe which actions are to be taken under a specific combination of conditions. • They are specified by first inserting different combinations of condition attribute values and then putting X's in the appropriate columns of the action section of the table.

  34. Example • [Example figure from Modern Systems Analysis and Design, Fourth Edition, by Jeffrey A. Hoffer, Joey F. George, Joseph S. Valacich]

  35. Decision Table Methodology • 1. Identify Conditions & Values • Find the data attribute each condition tests and all of the attribute's values. • 2. Compute Max Number of Rules • Multiply together the numbers of values for the condition data attributes. • 3. Identify Possible Actions • Determine each independent action to be taken for the decision or policy. • 4. Enter All Possible Rules • Fill in the values of the condition data attributes in each numbered rule column. • 5. Define Actions for each Rule • For each rule, mark the appropriate actions with an X in the decision table. • 6. Verify the Policy • Review completed decision table with end-users. • 7. Simplify the Table • Eliminate and/or consolidate rules to reduce the number of columns.

  36. A Simple Example • Scenario: A marketing company wishes to construct a decision table to decide how to treat clients according to three characteristics: • Gender, • City Dweller, and • Age group: A (under 30), B (between 30 and 60), C (over 60). • The company has four products (W, X, Y and Z) to test market. • Product W will appeal to male city dwellers. • Product X will appeal to young males. • Product Y will appeal to female middle-aged shoppers who do not live in cities. • Product Z will appeal to all but older males.

  37. 1. Identify Conditions & Values • The three data attributes tested by the conditions in this problem are • gender, with values M and F; • city dweller, with values Y and N; and • age group, with values A, B, and C, • as stated in the problem.

  38. 2. Compute Maximum Number of Rules • The maximum number of rules is 2 x 2 x 3 = 12
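A quick enumeration cross-check of that count, assuming Python's itertools (the value encodings are from the example above):

    from itertools import product

    genders = ["M", "F"]
    city_dweller = ["Y", "N"]
    age_groups = ["A", "B", "C"]

    rules = list(product(genders, city_dweller, age_groups))
    print(len(rules))   # 12, matching 2 x 2 x 3
    print(rules[0])     # ('M', 'Y', 'A')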

  39. 3. Identify Possible Actions • The four actions are: • market product W, • market product X, • market product Y, • market product Z.

  40. 4. Enter All Possible Rules

  41. 5. Define Actions for each Rule

  42. Full table
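The full table can also be reconstructed in code from the product-appeal rules on slide 36. The derivation below is mine, so treat the printed columns as a cross-check rather than the slides' exact table; the column ordering follows the rule of thumb on slide 49 (first condition alternating fastest):

    from itertools import product

    # Which products get an X for a given client profile (slide 36's rules).
    def actions(gender, city, age):
        marked = []
        if gender == "M" and city == "Y":
            marked.append("W")   # W: male city dwellers
        if gender == "M" and age == "A":
            marked.append("X")   # X: young (under-30) males
        if gender == "F" and city == "N" and age == "B":
            marked.append("Y")   # Y: middle-aged non-city females
        if not (gender == "M" and age == "C"):
            marked.append("Z")   # Z: everyone except older males
        return marked or ["-"]

    # Iterating age slowest and gender fastest reproduces rule columns 1-12.
    for i, (age, city, gender) in enumerate(product("ABC", "YN", "MF"), 1):
        print(f"rule {i:2}: gender={gender} city={city} age={age} -> "
              + " ".join(actions(gender, city, age)))

Run this way, rules 2, 4, 6, 7, 10, 12 all come out with action Z only, which matches the simplification described on the next slides.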

  43. 6. Verify the Policy • Let us assume that the client agreed with our decision table.

  44. 7. Simplify the Table • There appear to be no impossible rules. • Note that rules 2, 4, 6, 7, 10, 12 have the same action pattern. • Rules 2, 6 and 10 have • two of the three condition values (gender and city dweller) identical, and • all three values of the non-identical attribute (age) covered, • so they can be condensed into a single column 2. • Rules 4 and 12 have an identical action pattern, but they cannot be combined, because the indifferent attribute "Age" does not have all its values covered in these two columns. • Age group B is missing. • (A sketch of this merge test follows below.)
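A hedged sketch of that condensation test: rules with identical actions can merge into one column when they agree on every condition but one, and that condition's values are all covered (names and data layout are my own):

    # Can a group of rules (tuples of condition values) be condensed into a
    # single column with a "don't care" in position idx? They must agree on
    # every other condition and cover every value of condition idx.
    def can_condense(rule_group, idx, all_values):
        others = [r[:idx] + r[idx + 1:] for r in rule_group]
        agree_elsewhere = len(set(others)) == 1
        covers_all = {r[idx] for r in rule_group} == set(all_values)
        return agree_elsewhere and covers_all

    ages = ["A", "B", "C"]
    print(can_condense([("F", "Y", "A"), ("F", "Y", "B"), ("F", "Y", "C")],
                       2, ages))   # True: rules 2, 6, 10 merge
    print(can_condense([("F", "N", "A"), ("F", "N", "C")],
                       2, ages))   # False: rules 4 and 12, age B missing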

  45. The revised table is as follows:

  46. Step 1. Identify Conditions & Values • We first examine the problem and identify the data attributes upon which the decision or policy depends. • We then list the possible values of each data attribute. • Often, answering the question: "What do I need to know in order to take action in this situation?" will help identify the appropriate condition attributes.

  47. Step 2. Compute Maximum Number of Rules • A rule is determined by a different combination of the condition attribute values. • Since we have listed these values in the previous step, the multiplication rule of counting tells us that there will be no more columns than the product of the number of values for each of the condition attributes. • This can be easily verified by constructing a tree diagram listing all possible values of each attribute for each branch of the preceding attribute. • The number of leaves of the tree will be the product described above. • Since some combinations of attribute values may be impossible, the actual number of rules may be less than the maximum.

  48. Step 3. Identify Possible Actions • The actions describe the decisions to be made or the policy rules to be followed. • Asking the question, "What are the different options for implementing the decision or policy?", should help identify the possible actions.

  49. Step 4. Enter All Possible Rules • We now begin to build the decision table by listing • the condition descriptions in the left margin of the upper part of the table and • the action descriptions in the left margin of the lower part. • Then we write consecutive numbers from 1 to the maximum number of rules across the top. • In the rule columns and the condition rows, we list all possible combinations of condition attribute values. • A rule of thumb for arranging the rule combinations is to alternate the possible values for the first condition, then repeat each value of the second condition as many times as there are values in the first condition, repeat each value of the third condition as many times as needed to cover one iteration of the second condition values, etc.
