Computational Social Systems: Reputation and Non-Cooperative Computation


Presentation Transcript


  1. Computational Social Systems: Reputation and Non-Cooperative Computation Moshe Tennenholtz Technion

  2. Acknowledgements • Many thanks to Dov Monderer for many discussions on Internet reputation systems, and to Yoav Shoham and Rann Smorodinsky for joint work on multi-party computation games. • The part of the talk dealing with sequential information elicitation is joint work with Rann Smorodinsky.

  3. The Internet: A Computational Social System The Internet allows several remarkably powerful capabilities: • Powerful search capabilities based on page-ranking technology • Reputation-based commerce adopting user-ranking technology • Information aggregation and elicitation by brands based on voting technology • The above are based on viewing the Internet as a computational social system, where computers/people/organizations provide input about one another or about product/service features. As a result, the theory of social choice and game theory may provide essential tools for understanding and improving upon these technologies.

  4. Social Choice: Voters and Alternatives [Diagram: voters Alice, Bob, and Chris; alternatives Yahoo, M'Soft, and Amazon]

  5. The Internet: Voters and Alternatives Coincide Positive Reputation Systems: An important page is a page that important pages link to. [Diagram: Yahoo, M'Soft, and Amazon linking to one another]

  6. The Basic Setup • G=(V,E) – a (positive) reputation system setting: V – agents; E ⊆ V×V – the set of positive feedbacks/links; R(v) = {u ∈ V : (u,v) ∈ E} – the supporters (support set) of v. The social ranking S takes a graph G and returns a ranking (total pre-order) S(G): V → {1, 2, …, |V|} of its nodes.
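
As a concrete illustration of this setup, here is a minimal Python sketch of a setting and its support sets; the agent names and the dictionary representation are my own, not the talk's.

```python
# A reputation system setting G = (V, E): nodes are agents, and an edge
# (u, v) is a positive feedback/link from u to v. Names are illustrative.
from typing import Dict, Set

Graph = Dict[str, Set[str]]  # each agent mapped to the agents it links to

def supporters(G: Graph, v: str) -> Set[str]:
    """R(v): the support set of v, i.e. the agents that link to v."""
    return {u for u, targets in G.items() if v in targets}

G: Graph = {
    "Alice": {"Yahoo"},
    "Bob": {"Yahoo", "Amazon"},
    "Chris": {"Amazon"},
    "Yahoo": set(),
    "Amazon": set(),
}

print(supporters(G, "Yahoo"))   # {'Alice', 'Bob'}
```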

  7. Requirements • Classical social choice attempts to identify good social rules for the aggregation of individual preferences into a social preference, by introducing a set of postulates/axioms/requirements. • Classical requirements of the theory of social choice, such as the independence of irrelevant alternatives, make no sense in our setting: the social ranking of agents based on individual rankings will change when new alternatives are added, since these alternatives are agents that may link to the previously existing alternatives/agents. • Google's PageRank is a particular approach to aggregating individual preferences into a social preference in this setting!

  8. Positive Reputation Systems: The importance relation R(David) is more important than R(Chris). [Diagram: David is supported by Alice (low rank, 5), Bob (low rank, 5), and Jon (high rank, 100); Chris is supported by Jeff (low rank, 5)]

  9. Positive Reputation Systems: The importance relation R(Chris) is more important than R(David). [Diagram: supporters Alice (rank 5), Bob (rank 5), Jon (rank 2), Jane (rank 5), and Jeff (rank 5) linking to David and Chris]

  10. Positive Reputation Systems: The importance relation • Given a reputation system setting G=(V,E) and a social ranking S(G), R(vi) is more important than R(vj) if there is a 1-1 mapping f: R(vj) → R(vi) such that v ≤ f(v) for every v ∈ R(vj), and either f is not onto or there exists v ∈ R(vj) such that v < f(v).
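
This comparison can be checked directly. The following is a minimal sketch, assuming support sets are represented only by the lists of their members' ranks (higher = ranked higher); the greedy sorted matching relies on the standard fact that a pointwise-dominating injection exists iff the sorted rank sequences dominate.

```python
from typing import List

def more_important(ranks_i: List[int], ranks_j: List[int]) -> bool:
    """R(vi) is more important than R(vj), where ranks_i / ranks_j hold
    the ranks of the supporters of vi / vj (higher = ranked higher)."""
    if len(ranks_j) > len(ranks_i):
        return False            # no 1-1 mapping f: R(vj) -> R(vi) exists
    a = sorted(ranks_j, reverse=True)
    b = sorted(ranks_i, reverse=True)
    # A dominating injection (v <= f(v) pointwise) exists iff the k-th
    # largest rank on the j-side never exceeds the k-th largest on the i-side.
    if any(x > y for x, y in zip(a, b)):
        return False
    # Strictness: either f is not onto, or some comparison is strict.
    return len(ranks_j) < len(ranks_i) or any(x < y for x, y in zip(a, b))

print(more_important([100, 5], [5]))    # True: {5} maps into {100, 5}, not onto
print(more_important([5, 5], [5, 5]))   # False: only equality is possible
```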

  11. Positive Reputation Systems: Transitivity • Transitivity [T]: Given a positive reputation system setting G=(V,E) and a social ranking S(G), for every vi, vj ∈ V, if R(vi) is more important than R(vj) then vi > vj.

  12. Positive Reputation Systems: Transitivity David should be ranked higher than Chris, since his support is stronger. [Diagram: David is supported by Alice (low rank, 5), Bob (low rank, 5), and M'Soft (high rank, 100); Chris is supported by Amazon (high rank, 100); David's and Chris's ranks are marked "?"]

  13. Positive Reputation Systems: Beyond Transitivity Chris should not be ranked higher than David (but may be ranked similarly), since no one in Chris's support is as strong as someone in David's support. [Diagram: Alice (low rank, 5), Bob (low rank, 5), and M'Soft (high rank, 100) support David; David's and Chris's ranks are marked "?"]

  14. Positive Reputation Systems: Weak Monotonicity • Weak Monotonicity [M]: Given a positive reputation system setting G=(V,E) and a social ranking S(G), for every vi, vj ∈ V, if R(vi) is not more important than R(vj) but vi > vj, then there must exist v1 ∈ R(vi) and v2 ∈ R(vj) such that v1 > v2.

  15. Generality • Generality [G]: A positive reputation system S should associate with any reputation system setting G a social ranking S(G).

  16. An Impossibility Result • Theorem: There is no social reputation rule that satisfies G, T, and M.

  17. A Possibility Result • Theorem: We can satisfy any pair of the postulates G, T, M.

  18. General Transitive Ranking • Iteration 0 – rank the nodes according to their in-degree. • Iteration I+1 refines the ranking of iteration I: A) Choose a node v such that R(v) > R(t) and there is no s such that R(s) > R(v) [according to the rankings in iteration I, where s, v, t refer to nodes of the same rank in that iteration]. B) Refine the ranking, so that the nodes of v's rank in iteration I are partitioned into two: v, together with all nodes in its (previous) rank whose support has the same power, above the rest of the nodes (including t) in that rank. A sketch of this loop appears below.
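
A minimal sketch of this procedure, under two assumptions of mine: supports are compared with the sorted-domination test from the previous sketch, and a split temporarily assigns the fractional rank r+0.5 (echoing the 3.5 on slide 20) before renormalizing. This is an illustrative reading of the slide, not the paper's exact algorithm.

```python
from typing import Dict, List, Set

Graph = Dict[str, Set[str]]

def supporters(G: Graph, v: str) -> Set[str]:
    return {u for u, targets in G.items() if v in targets}

def dominates(ri: List[float], rj: List[float]) -> bool:
    """rj's supporter ranks map 1-1 into ri's with v <= f(v) (weak domination)."""
    if len(rj) > len(ri):
        return False
    a, b = sorted(rj, reverse=True), sorted(ri, reverse=True)
    return all(x <= y for x, y in zip(a, b))

def normalize(rank: Dict[str, float]) -> Dict[str, float]:
    """Ranks are ordinal: compress the values to 1..k, preserving order."""
    levels = sorted(set(rank.values()))
    return {v: float(levels.index(r) + 1) for v, r in rank.items()}

def transitive_ranking(G: Graph) -> Dict[str, float]:
    # Iteration 0: rank by in-degree.
    rank = normalize({v: float(len(supporters(G, v))) for v in G})
    while True:
        sr = {v: [rank[u] for u in supporters(G, v)] for v in G}
        for v in G:
            tied = [t for t in G if rank[t] == rank[v]]
            strictly_weaker = [t for t in tied
                               if dominates(sr[v], sr[t])
                               and not dominates(sr[t], sr[v])]
            maximal = not any(dominates(sr[s], sr[v])
                              and not dominates(sr[v], sr[s]) for s in tied)
            if strictly_weaker and maximal:
                # Split the class: v and the nodes with equally powerful
                # support move above the rest of their rank class.
                r0 = rank[v]
                for u in tied:
                    if dominates(sr[u], sr[v]) and dominates(sr[v], sr[u]):
                        rank[u] = r0 + 0.5
                rank = normalize(rank)
                break
        else:
            return rank   # no class was refined: the ranking is stable

G: Graph = {"a": {"c"}, "b": {"c", "d"}, "c": {"d"}, "d": set()}
print(transitive_ranking(G))   # {'a': 1.0, 'b': 1.0, 'c': 2.0, 'd': 3.0}
```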

  19. General Transitive Ranking [Diagram: eight nodes – Kim, Bob, Jon, Alice, Mark, Jeff, Helen, and Jane – shown with their iteration-I ranks (3, 2, 3, 3, 5, 5, 4, 6)]

  20. General Transitive Ranking Notice that the second-lowest agent in the support of Alice (Jane) is ranked higher than the two lowest agents in the support of Bob (while the supports are of equal size), so we do not get into cycles. [Diagram: the same eight nodes at iteration I+1; one rank is refined from 3 to 3.5]

  21. General Transitive Ranking [Diagram: Bob, Chris, Jane, Alice, and David with initial ranks 1, 1, 1, 0, 2]

  22. General Transitive Ranking [Diagram: one tie is split; the ranks become 1, 1, 1.5, 0, 2]

  23. General Transitive Ranking [Diagram: a further refinement; the ranks become 1.4, 1, 1.5, 0, 2]

  24. General Transitive Ranking [Diagram: a further refinement; the ranks become 1.4, 1.3, 1.5, 0, 2]

  25. General Transitive Ranking [Diagram: the final ranking, normalized to 2, 1, 3, 0, 4]

  26. Negative Reputation Systems Chris provides negative feedback about Bob, and Bob provides negative feedback about Alice. [Diagram: Chris → Bob → Alice, with negative-feedback edges] Ranking agents based on such information is the basis of reputation-based commerce!

  27. Negative Reputation Systems Chris complains about Bob, and Bob complains about Alice. [Diagram: Chris → Bob → Alice, with complaint edges] Given a reputation system setting G=(V,E) and a social ranking S(G), R(vi) is more reliable than R(vj) if there is a 1-1 mapping f: R(vj) → R(vi) such that v ≤ f(v) for every v ∈ R(vj), and either f is not onto or there exists v ∈ R(vj) such that v < f(v).

  28. Negative Reputation Systems: Transitivity • B-Transitivity [BT]: Given a negative reputation system setting G=(V,E) and a social ranking S(G), for every vi, vj ∈ V, if R(vi) is more reliable than R(vj) then vi < vj.

  29. Negative Reputation Systems: Transitivity No one complains about Chris, who should be ranked the highest. This means that Bob should be ranked the lowest. Alice is ranked in between: Chris > Alice > Bob. [Diagram: Chris → Bob → Alice, with complaint edges]

  30. Negative Reputation Systems: Weak Monotonicity • B-Weak-Monotonicity [BM]: Given a negative reputation system setting G=(V,E) and a social ranking S(G), for every vi, vj ∈ V, if R(vi) is not more reliable than R(vj) but vi < vj, then there must exist v1 ∈ R(vi) and v2 ∈ R(vj) such that v1 > v2.

  31. Negative Reputation Systems: Weak Monotonicity Chris should not be ranked higher than David (but may be ranked similarly), since no complaint about David is by someone as reliable as at least one of the agents who complain about Chris. [Diagram: Alice and Bob (high rank, 100), Jon and Judith (low rank, 5); complaints point to David and Chris, whose ranks are marked "?"]

  32. An Impossibility Result – Negative Reputation Systems • Theorem: There is no social reputation rule that satisfies G, BT, and BM.

  33. A Possibility Result – Negative Reputation Systems • Theorem: We can satisfy any pair of the postulates G, BT, BM.

  34. Reputation Systems with both negative and positive feedbacks • Two types of edges/links – good and bad. • Rb(v) – the agents who provide negative feedback on v. • Rg(v) – the agents who provide positive feedback on v. • R(v) – the agents that link/point to v. R(vi) is socially stronger than R(vj) if Rb(vi) is less reliable than or as reliable as Rb(vj), and Rg(vi) is more important than or as important as Rg(vj), with at least one strict comparison; a sketch follows.
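
A minimal sketch of this combined comparison, reusing the same 1-1-mapping dominance test for both the "more important" and "more reliable" relations; the function names and the rank-list representation are my own.

```python
from typing import List

def dominates(ri: List[int], rj: List[int]) -> bool:
    """The ranks in rj map 1-1 into ri with v <= f(v) (weak domination)."""
    if len(rj) > len(ri):
        return False
    a, b = sorted(rj, reverse=True), sorted(ri, reverse=True)
    return all(x <= y for x, y in zip(a, b))

def strictly(ri: List[int], rj: List[int]) -> bool:
    return dominates(ri, rj) and not dominates(rj, ri)

def socially_stronger(good_i, bad_i, good_j, bad_j) -> bool:
    """R(vi) is socially stronger than R(vj): Rg(vi) is at least as
    important as Rg(vj), Rb(vi) is at most as reliable as Rb(vj),
    and at least one of the two comparisons is strict."""
    if not (dominates(good_i, good_j) and dominates(bad_j, bad_i)):
        return False
    return strictly(good_i, good_j) or strictly(bad_j, bad_i)

# vi: praised by a rank-100 agent, no complaints; vj: praised by a
# rank-5 agent and complained about by a rank-100 agent.
print(socially_stronger([100], [], [5], [100]))   # True
```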

  35. Reputation Systems with both negative and positive feedbacks • Tc – for every vi, vj ∈ V, if R(vi) is socially stronger than R(vj) then vi > vj. • Mc – for every vi, vj ∈ V, if R(vi) is not socially stronger than R(vj) but vi > vj, then there must exist v1 ∈ Rg(vi) and v2 ∈ Rg(vj) such that v1 > v2, or there must exist v3 ∈ Rb(vi) and v4 ∈ Rb(vj) such that v3 < v4.

  36. Reputation Systems with both negative and positive feedbacks • Theorem: There is no social reputation rule that satisfies G, Tc, and Mc. • Theorem: We can satisfy any pair of the postulates G, Tc, Mc.

  37. Relaxing the axioms: strongly connected systems • One issue brought up by practitioners is that it may be useful to restrict our attention to strongly connected graphs, where there is a directed path between any pair of nodes. • We refer to the related axiom as WG ("weak generality").

  38. Relaxing the axioms: very weak monotonicity • The complaint against weak monotonicity is that vi might be preferable to vj (e.g., in positive reputation systems) although transitivity does not hold and no one who links to vi is preferable to someone who links to vj, simply because the number of agents that link to vi is much larger than the number of agents that link to vj. • One (strong) relaxation is very weak monotonicity (VWM): Given a positive reputation system setting G=(V,E) and a social ranking S(G), for every vi, vj ∈ V where |R(vi)| ≤ |R(vj)|+1, if R(vi) is not more important than R(vj) but vi > vj, then there must exist v1 ∈ R(vi) and v2 ∈ R(vj) such that v1 > v2.

  39. Relaxing the axioms • Theorem: There is no social reputation rule that satisfies WG, T, and VWM. • Similar results can be obtained for negative reputation systems.

  40. Further work • The approach presented is a normative one, but a complementary study deals with a descriptive approach, where a sound and complete axiomatization is provided for known reputation systems. • In a pending paper, Altman and Tennenholtz provide such an (ordinal, graph-theoretic) representation of Google's PageRank. • Other parts of the study refer to agent incentives and to the uniqueness of the ranking procedure.

  41. Conclusion (reputation systems) • We introduced an axiomatic study of reputation systems, adopting a social choice setting where the set of voters and the set of alternatives coincide. • We provided impossibility and possibility results for a variety of settings, including both positive and negative reputation systems.

  42. The Internet: A Computational Social System The Internet allows several remarkably powerful capabilities: • Powerful search capabilities based on page-ranking technology • Reputation-based commerce adopting user-ranking technology • Information aggregation and elicitation by brands based on voting technology • The above are based on viewing the Internet as a computational social system, where computers/people/organizations provide input about one another or about product/service features. As a result, the theory of social choice and game theory may provide essential tools for understanding and improving upon these technologies.

  43. Information Aggregation and Elicitation: Motivation • Voting about product or service features is among the most popular tools on the Internet sites of brand-name companies, and it widely exploits the power of the Internet. • When visiting a brand's web-site, a participant may be asked to learn about a new product/service feature and vote for or against it. • Visitors are typically interested in learning the public's opinion but might not be willing to spend the time learning the required information, challenging these very popular market-research tools.

  44. Example 1: should we offer the position? A candidate for the economics department has already written 11 research papers, and the department would like to decide whether to make her a job offer based on the quality of the papers. There are 11 agents (committee members), each of whom is given one paper to read in order to make a recommendation. Initially, each paper may be "good" or "bad" with equal probability, and the department has chosen to make an offer if a majority of the papers are "good". Each committee member values a correct recommendation by the committee at 1000 USD, but values the time he needs to spend reading the paper at 400 USD.

  45. Example 1: should we offer the position? A simple mechanism asks all the agents, simultaneously, for their recommendations. The strategy tuple where all agents choose to read the papers and report truthfully is not an equilibrium. Consider the perspective of agent 1: assuming all other agents replied (truthfully or not), agent 1 can alter the outcome only if the other 10 replies split evenly between 0 and 1, which has probability of approximately 0.25. Therefore, by guessing, and assuming all other agents compute, he gains 0.25 × 500 + 0.75 × 1000 = 875 USD. By computing, an agent gains at most 1000 − 400 = 600 USD, so agent 1 has no incentive to compute (and the same holds for all 11 agents).
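
The arithmetic can be checked directly; the slide rounds the pivot probability C(10,5)/2^10 ≈ 0.246 to 0.25, which gives the 875 figure.

```python
from math import comb

# Probability that the other 10 recommendations split 5-5 (agent 1 pivotal).
p = comb(10, 5) / 2**10
print(round(p, 3))                       # 0.246

# Guessing: when pivotal, the guess is correct half the time (worth 500
# in expectation); otherwise the majority is decided anyway (worth 1000).
guess = p * 500 + (1 - p) * 1000
# Computing: a correct recommendation (1000) minus the reading cost (400).
compute = 1000 - 400
print(round(guess), compute)             # 877 vs. 600: guessing dominates
```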

  46. Example 1: should we offer the position? This elicitation mechanism also fails if only agent 1 has the above cost and all other agents have zero costs (the same analysis holds for agent 1). If, however, agents 2, 3, …, 11 are asked first for their recommendations, and agent 1 is approached only if there is a tie among the ten recommendations, then all agents have enough incentive to invest the effort! This illustrates the power of sequential mechanisms.

  47. Model • N = {1, 2, …, n} – a finite set of agents. • Each agent j has a unique secret sj ∈ {0,1} that he may compute. • Let 0.5 ≤ q < 1 be the prior probability that sj = 1, and assume these events are independent (the results apply for all 0 < q < 1). • Agents may compute their own secrets. However, computation is costly, and agent j pays cj ≥ 0 for computing sj. • Assume w.l.o.g. that c1 ≥ c2 ≥ … ≥ cn.

  48. Model (Cont.) • Agents are interested in computing some joint binary parameter (e.g., the majority vote, or whether they have a consensus) that depends on the vector of private inputs, defined by an anonymous function G: {0,1}^n → {0,1}; see the sketch below. • Agent j has a utility of vj from learning the real value of G. • We will assume that vj > cj. • We use w.l.o.g. the convention that vj = 1 (the more general case is equivalent to the case where the value of agent j is 1 but the cost is cj/vj).
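
For concreteness, the two joint parameters mentioned above can be written as anonymous (symmetric) functions of the input vector; this is a sketch, with `bits` standing for the vector of secrets.

```python
from typing import Sequence

def majority(bits: Sequence[int]) -> int:
    """1 iff a strict majority of the inputs are 1 (odd n avoids ties)."""
    return 1 if 2 * sum(bits) > len(bits) else 0

def consensus(bits: Sequence[int]) -> int:
    """1 iff all agents agree (all zeros or all ones)."""
    return 1 if all(bits) or not any(bits) else 0

print(majority([1, 0, 1]), consensus([1, 1, 1]))   # 1 1
```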

  49. Sequential Mechanisms • Hi = {0,1}^i – the set of histories of length i • H0 – the null/empty history • H = H0 ∪ H1 ∪ … ∪ Hn • A sequential mechanism is a pair (g,f), where g: H → N determines the agent to be approached, and f: H → {0,1,*} is a function that expresses a decision about whether to halt and output either 0 or 1, or continue the elicitation process (see the sketch below). • We assume that if g(h) = j then g(h') ≠ j for every h' such that h is a prefix of h'.
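
As a minimal sketch, the tie-breaking mechanism of Example 1 can be written as such a pair (g,f), with None standing for '*' and a hypothetical `ask` standing in for an agent computing and reporting a secret.

```python
import random
from typing import List, Optional

def g(h: List[int]) -> Optional[int]:
    """Who to approach next: agents 2..11 in order, then agent 1 on a tie."""
    if len(h) < 10:
        return len(h) + 2
    if len(h) == 10 and sum(h) == 5:
        return 1              # agent 1 is approached only when surely pivotal
    return None

def f(h: List[int]) -> Optional[int]:
    """Halt with the output (0/1), or None (i.e. '*') to keep eliciting."""
    if len(h) >= 10 and sum(h) != 5:
        return 1 if sum(h) > 5 else 0    # majority of the 11 already decided
    if len(h) == 11:
        return 1 if sum(h) >= 6 else 0   # agent 1 broke the 5-5 tie
    return None

def ask(j: int) -> int:
    # Hypothetical stand-in: agent j computes s_j and reports it.
    return random.randint(0, 1)

history: List[int] = []
while f(history) is None:
    history.append(ask(g(history)))
print("output:", f(history))
```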

  50. Strategies and Equilibrium • Σ = {don't compute and submit 0; don't compute and submit 1; compute and submit 0; compute and submit 1; compute and if 0 submit 0, else submit 1; compute and if 1 submit 0, else submit 1} • A pure strategy for agent j is a function xj: H → Σ. • An equilibrium for the mechanism A = (g,f) is a vector of n strategies, one for each agent, such that each agent's strategy is a best response to the other agents' strategies. • A mechanism A is appropriate for G at 0.5 ≤ q < 1 if there exists an equilibrium where G is surely computed for every vector of agents' secrets. • Such an equilibrium is referred to as a computing equilibrium, and A is called q-appropriate.
