Search and the New Economy, Session 5: Mining User-Generated Content • Prof. Panos Ipeirotis
Today’s Objectives • Tracking preferences using social networks • Facebook API • Trend tracking using Facebook • Mining positive and negative opinions • Sentiment classification for product reviews • Feature-specific opinion tracking • Economic-aware opinion mining • Reputation systems in marketplaces • Quantifying sentiment using econometrics
Top-10, Zeitgeist, Pulse, … • Tracking top preferences has been around forever
Online Social Networking Sites • Preferences listed and easily accessible
Facebook API • Content easily extractable • Easy to “slice and dice”: • List the top-5 books for 30-year-old New Yorkers • List the book with the highest increase across the female population last week • …
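As a hedged illustration of this kind of slicing, the sketch below counts favorite books over profile records assumed to have already been pulled through the Facebook API; the field names (age, city, favorite_books) and the sample data are invented for the example, not actual API fields.

```python
from collections import Counter

# Hypothetical records already extracted via the Facebook API;
# the field names and values below are illustrative, not actual API fields.
profiles = [
    {"age": 30, "city": "New York", "gender": "F",
     "favorite_books": ["Freakonomics", "The Tipping Point"]},
    {"age": 30, "city": "New York", "gender": "M",
     "favorite_books": ["Freakonomics", "Moneyball"]},
    # ... more profiles ...
]

def top_books(profiles, n=5, **filters):
    """Count favorite books among profiles matching the given attribute filters."""
    counts = Counter()
    for p in profiles:
        if all(p.get(k) == v for k, v in filters.items()):
            counts.update(p["favorite_books"])
    return counts.most_common(n)

# "Top-5 books for 30-year-old New Yorkers"
print(top_books(profiles, n=5, age=30, city="New York"))
```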
Today’s Objectives • Tracking preferences using social networks • Facebook API • Trend tracking using Facebook • Mining positive and negative opinions • Sentiment classification for product reviews • Feature-specific opinion tracking • Economic-aware opinion mining • Reputation systems in marketplaces • Quantifying sentiment using econometrics
Customer-generated Reviews • Amazon.com started with books • Today there are review sites for almost everything • In contrast to “favorites” lists, reviews give us information even for less popular products
Questions • Are reviews representative? • How do people express sentiment?
[Screenshot: a product review, annotated with its rating (1 … 5 stars) and its helpfulness votes from other customers]
Do People Trust Reviews? • Law of large numbers: a single review, no; multiple reviews, yes • Peer feedback: number of “useful” votes • Perceived usefulness is affected by: • Identity disclosure: users trust real people • Mixture of objective and subjective elements • Readability, grammaticality • Negative reviews that are useful may increase sales! (Why?)
Are Reviews Representative? • What is the shape of the distribution of the number of stars? Guess? • [Figure: four candidate histograms of review counts over 1–5 stars]
Observation 1: Reporting Bias • [Figure: observed histogram of review counts over 1–5 stars] • Why? • Implications for WOM strategy?
Possible Reasons for Biases • People don’t like to be critical • People do not post if they do not feel strongly about the product (positively or negatively)
Observation 2: The SpongeBob Effect • SpongeBob SquarePants versus the Oscar winners
Oscar Winners 2000–2005: Average Rating 3.7 Stars
SpongeBob DVDs: Average Rating 4.1 Stars
And the Winner is… SpongeBob! • If the SpongeBob effect is common, then ratings do not accurately signal the quality of a resource
What is Happening Here? • People choose movies they think they will like, and often they are right • Ratings only tell us that “fans of SpongeBob like SpongeBob” • Self-selection • Oscar winners draw a wider audience • Their ratings are much more representative of the general population • When SpongeBob gets a wider audience, his ratings drop
Effect of Self-Selection: Example • 10 people see SpongeBob’s 4-star ratings • 3 are already SpongeBob fans, rent the movie, and award 5 stars • 6 already know they don’t like SpongeBob and do not see the movie • The last person doesn’t know SpongeBob, is impressed by the high ratings, rents the movie, and rates it 1 star • Result: • Average rating remains unchanged: (5+5+5+1)/4 = 4 stars • 9 of the 10 consumers did not really need the rating system • The only consumer who actually used the rating system was misled
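The arithmetic of this example can be reproduced in a few lines; the sketch below simply encodes the ten consumers described above.

```python
# The 10-person example from the slide, reproduced numerically.
# Each tuple: (watches_movie, rating_given or None)
audience = (
    [(True, 5)] * 3 +      # 3 SpongeBob fans rent the movie and award 5 stars
    [(False, None)] * 6 +  # 6 people who dislike SpongeBob never rent it
    [(True, 1)]            # 1 newcomer trusts the rating and is disappointed
)

ratings = [r for watched, r in audience if watched]
print(sum(ratings) / len(ratings))   # -> 4.0: the average is unchanged,
                                     # even though the only rating-driven
                                     # viewer was misled
```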
Bias-Resistant Reputation System • We want P(S), but we collect data on P(S|R) • S = consumer is satisfied with the resource • R = resource was selected (and reviewed) • However, P(S|E) ≈ P(S|E,R) • E = consumer expects to like the resource • The likelihood of satisfaction depends primarily on the expectation of satisfaction, not on the selection decision • If we can collect the prior expectation, the gap between the evaluation group and the feedback group disappears • whether you select the resource or not doesn’t matter
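To make this concrete, the sketch below simulates a made-up population in which the expectation E drives both selection R and satisfaction S; every probability in it is invented for illustration. It shows the naive estimate P(S|R) overstating P(S), while conditioning on expectation, P(S|E,R), stays close to P(S|E).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Made-up generative model: expectation E drives both selection R and satisfaction S.
E = rng.random(N) < 0.3                  # 30% expect to like the resource
R = np.where(E, rng.random(N) < 0.8,     # fans usually select it
                rng.random(N) < 0.1)     # non-fans rarely do
S = np.where(E, rng.random(N) < 0.9,     # satisfaction depends on expectation,
                rng.random(N) < 0.2)     # not on the selection decision itself

print("P(S)     =", S.mean())        # true satisfaction rate in the population
print("P(S|R)   =", S[R].mean())     # what naive ratings measure (inflated)
print("P(S|E)   =", S[E].mean())     # satisfaction among those who expect to like it
print("P(S|E,R) =", S[E & R].mean()) # ~ P(S|E): selection adds little once E is known
```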
Bias-Resistant Reputation System • Before viewing, ask: “I think I will …” Love this movie / Like this movie / It will be just OK / Somewhat dislike this movie / Hate this movie • After viewing, ask: “I liked this movie …” Much more than expected / More than expected / About the same as I expected / Less than I expected / Much less than I expected • The prior-expectation question splits raters into big fans, everyone else, and skeptics
Conclusions • Reporting bias and self-selection bias exist in most cases of consumer choice • Bias means that user ratings do not reflect the distribution of satisfaction in the evaluation group • Consumers have no idea what “discount” to apply to ratings to get a true idea of quality • Many current rating systems may be self-defeating • Accurate ratings promote self-selection, which leads to inaccurate ratings • Collecting prior expectations may help address this problem
OK, we know the biases • Can we extract more knowledge? • Can we dig deeper than the numeric ratings? • “Read the reviews!” • “There are too many!”
Independent Sentiment Analysis • Often we need to analyze opinions • Can we provide review summaries? • What should the summary be?
Basic Sentiment Classification • Classify full documents (e.g., reviews, blog postings) based on the overall sentiment • Positive, negative, and (possibly) neutral • Similar to, but also different from, topic-based text classification • In topic-based classification, topic words are important • Diabetes, cholesterol → health • Election, votes → politics • In sentiment classification, sentiment words are more important, e.g., great, excellent, horrible, bad, worst, etc. • Sentiment words are usually adjectives, adverbs, or specific expressions (“it rocks”, “it sucks”, etc.) • Useful when doing aggregate analysis
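A minimal sketch of document-level sentiment classification, assuming scikit-learn is available; the four training reviews and their labels are invented toy data, and a real system would train on thousands of labeled reviews.

```python
# Bag-of-words features + Naive Bayes: sentiment words such as "great"
# and "horrible" end up carrying most of the weight.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_reviews = [
    "great camera, excellent pictures",
    "it rocks, best purchase ever",
    "horrible battery, worst camera I have owned",
    "bad zoom and it sucks in low light",
]
train_labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_reviews, train_labels)

print(model.predict(["the pictures are great but the battery is bad"]))
```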
Can we go further? • Sentiment classification is useful, but it does not find what the reviewer liked and disliked • Negative sentiment does not mean that the reviewer does not like anything about the object • Positive sentiment does not mean that the reviewer likes everything • Go to the sentence level and the feature level
Extraction of features • Two types of features: explicit and implicit • Explicit features are mentioned and evaluated directly • “The pictures are very clear.” • Explicit feature: picture • Implicit features are evaluated but not mentioned • “It is small enough to fit easily in a coat pocket or purse.” • Implicit feature: size • Extraction: frequency-based approach • Focus on frequent features (main features) • Infrequent features can be listed as well
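A much-simplified sketch of the frequency-based idea: the original approach mines frequent noun phrases, whereas this toy version just counts words outside a small stopword list and keeps those above a support threshold. The reviews, stopword list, and threshold are all illustrative.

```python
from collections import Counter
import re

# Toy review corpus; a real system would process thousands of reviews.
reviews = [
    "The pictures are very clear.",
    "Great picture quality and good battery life.",
    "The battery died quickly but the zoom is excellent.",
    "Picture is sharp; the zoom could be better.",
]

STOPWORDS = {"the", "and", "is", "are", "but", "very", "a", "could", "be"}

# Count how often each candidate word appears across all reviews.
counts = Counter(
    w for review in reviews
    for w in re.findall(r"[a-z]+", review.lower())
    if w not in STOPWORDS
)

# Keep only features with enough support ("frequent features").
min_support = 2
frequent_features = [w for w, c in counts.items() if c >= min_support]
print(frequent_features)   # -> ['picture', 'battery', 'zoom']
```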
Identify opinion orientation of features • Using sentiment words and phrases • Identify words that are often used to express positive or negative sentiment • There are many ways (dictionaries, WordNet, collocation with known adjectives, …) • Use the orientation of nearby opinion words as the feature’s orientation, e.g., • Sum the scores: −1 for each negative word near the feature, +1 for each positive word near the feature
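A minimal sketch of this sum rule; the two word lists are tiny stand-ins for a real opinion lexicon (e.g., one seeded from WordNet), and the window size is an arbitrary choice for the example.

```python
# Tiny stand-in lexicons; a real system would use a full opinion lexicon.
POSITIVE = {"great", "excellent", "clear", "good", "sharp"}
NEGATIVE = {"horrible", "bad", "worst", "blurry", "weak"}

def feature_orientation(sentence, feature, window=4):
    """Sum +1/-1 for positive/negative opinion words near the feature."""
    words = sentence.lower().replace(".", "").split()
    if feature not in words:
        return 0
    i = words.index(feature)
    nearby = words[max(0, i - window): i + window + 1]
    score = sum(+1 for w in nearby if w in POSITIVE)
    score += sum(-1 for w in nearby if w in NEGATIVE)
    return score   # > 0 positive, < 0 negative, 0 neutral/unknown

print(feature_orientation("The pictures are very clear", "pictures"))    # +1
print(feature_orientation("Horrible battery and weak zoom", "battery"))  # -2
```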
Two types of evaluations • Direct opinions: sentiment expressions about some objects/entities, e.g., products, events, topics, individuals, organizations, etc. • E.g., “the picture quality of this camera is great” • Subjective • Comparisons: relations expressing similarities, differences, or an ordering of two or more objects • E.g., “car X is cheaper than car Y.” • Objective or subjective • Compares feature quality • Compares feature existence
Visual Summarization & Comparison • [Figure: bar charts of positive (+) and negative (−) opinion counts per feature (Picture, Battery, Zoom, Size, Weight): a summary for Digital Camera 1 and a comparison of Digital Camera 1 against Digital Camera 2]
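A sketch of how such a summary/comparison chart could be drawn with matplotlib; the per-feature opinion counts for the two cameras are made-up numbers, not results from the slides.

```python
import matplotlib.pyplot as plt
import numpy as np

# Made-up positive/negative opinion counts per feature for two cameras.
features = ["Picture", "Battery", "Zoom", "Size", "Weight"]
cam1 = {"pos": [40, 15, 25, 30, 20], "neg": [-5, -20, -10, -4, -8]}
cam2 = {"pos": [25, 30, 10, 28, 22], "neg": [-12, -6, -18, -5, -6]}

x = np.arange(len(features))
width = 0.35
fig, ax = plt.subplots()
# Positive counts point up, negative counts point down from the zero line.
ax.bar(x - width / 2, cam1["pos"], width, label="Camera 1 (+)")
ax.bar(x - width / 2, cam1["neg"], width, label="Camera 1 (-)")
ax.bar(x + width / 2, cam2["pos"], width, label="Camera 2 (+)")
ax.bar(x + width / 2, cam2["neg"], width, label="Camera 2 (-)")
ax.axhline(0, color="black", linewidth=0.8)
ax.set_xticks(x)
ax.set_xticklabels(features)
ax.set_ylabel("Number of opinions")
ax.legend()
plt.show()
```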
Today’s Objectives • Tracking preferences using social networks • Facebook API • Trend tracking using Facebook • Mining positive and negative opinions • Sentiment classification for product reviews • Feature-specific opinion tracking • Economic-aware opinion mining • Reputation systems in marketplaces • Quantifying sentiment using econometrics
Are Customers Irrational? • BuyDig.com gets a price premium of $11.04 (customers pay more than the minimum price)
Price Premiums @ Amazon • Are customers irrational?
Why Not Buy the Cheapest? You buy more than a product • Customers do not pay only for the product • Customers also pay for a set of fulfillment characteristics • Delivery • Packaging • Responsiveness • … • Customers care about the reputation of sellers! • Reputation systems are review systems for humans
Basic idea • Conjecture: price premiums measure reputation • Reputation is captured in text feedback • Examine how text affects price premiums (and do sentiment analysis as a side effect)
Outline • How we capture price premiums • How we structure text feedback • How we connect price premiums and text
Data Overview • Panel of 280 software products sold by Amazon.com × 180 days • Data from the “used goods” market • Amazon Web Services facilitate capturing transactions • No need for any proprietary Amazon data
Data: Capturing Transactions • [Timeline: Jan 1 – Jan 8] • We repeatedly “crawl” the marketplace using Amazon Web Services • While the listing appears, the item is still available: no sale
Data: Capturing Transactions • [Timeline: Jan 1 – Jan 10] • We repeatedly “crawl” the marketplace using Amazon Web Services • When the listing disappears, the item has been sold
Data: Transactions • Capturing transactions and “price premiums” • [Timeline: Jan 1 – Jan 10; the listing disappears after Jan 8, so the item is recorded as sold on Jan 9] • When an item is sold, its listing disappears
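A sketch of the sale-detection logic implied by these slides: compare the listings seen in consecutive crawls and treat a listing that disappears as a sale around the date of the later crawl. The snapshot data below stands in for what the repeated Amazon Web Services crawls would return.

```python
# Listing IDs observed on each crawl date (illustrative stand-in data).
snapshots = {
    "Jan 8": {"listingA", "listingB", "listingC"},
    "Jan 9": {"listingA", "listingC"},
    "Jan 10": {"listingA"},
}

# A listing present in one crawl but missing from the next is treated as sold.
dates = list(snapshots)
for prev, curr in zip(dates, dates[1:]):
    sold = snapshots[prev] - snapshots[curr]
    for listing in sold:
        print(f"{listing} sold around {curr}")   # e.g. "listingB sold around Jan 9"
```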
Data: Variables of Interest • Price Premium: the difference between the price charged by a seller and the listed price of a competitor • Price Premium = (Seller Price − Competitor Price) • Calculated for each seller-competitor pair, for each transaction • Each transaction generates M observations (M = number of competing sellers) • Alternative definitions: • Average Price Premium (one per transaction) • Relative Price Premium (relative to the seller’s price) • Average Relative Price Premium (combination of the above)
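A small sketch of these price-premium variables for a single hypothetical transaction; the seller and competitor prices are invented, and the relative versions follow the “relative to the seller’s price” definition above.

```python
# One transaction by a focal seller; each competing listing yields one observation.
seller_price = 45.00
competitor_prices = [38.50, 41.00, 44.00]   # M = 3 competing listings (made-up)

price_premiums = [seller_price - c for c in competitor_prices]   # one per seller-competitor pair
avg_premium = sum(price_premiums) / len(price_premiums)          # one per transaction
rel_premiums = [p / seller_price for p in price_premiums]        # relative to the seller's price
avg_rel_premium = sum(rel_premiums) / len(rel_premiums)          # average relative premium

print(price_premiums)                                  # [6.5, 4.0, 1.0]
print(round(avg_premium, 2), round(avg_rel_premium, 3))  # 3.83 0.085
```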