

  1. SNOW 2014 Data Challenge Symeon Papadopoulos (CERTH) David Corney (RGU) Luca Aiello (Yahoo! Labs) WWW 2014 Seoul, April 8th

  2. Overview of Challenge • Goal: Detection of newsworthy topics in a large and noisy set of tweets • Topic: a news story represented by a headline + tags + representative tweets + representative images (optional) • Newsworthy: a topic that ends up being covered by at least some major online news sources • Topics are detected per timeslot (small, equally sized time intervals); see the sketch below • Participants submit up to a fixed maximum number of topics per timeslot
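As a rough illustration of the per-timeslot setup, here is a minimal Python sketch of bucketing tweets into equal-sized intervals. The function name and the 15-minute default are assumptions for illustration (96 timeslots over a one-day collection would correspond to 15-minute slots), not part of the challenge toolkit.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def bucket_tweets(tweets, start, slot_minutes=15):
    """Group (timestamp, text) pairs into equal-sized timeslots.

    Returns {slot_index: [text, ...]}; topic detection then runs
    independently on each slot's tweets.
    """
    slots = defaultdict(list)
    slot = timedelta(minutes=slot_minutes)
    for ts, text in tweets:
        slots[int((ts - start) / slot)].append(text)  # 0-based slot index
    return slots

# Hypothetical usage with two toy tweets:
tweets = [
    (datetime(2014, 2, 25, 18, 3), "breaking: ..."),
    (datetime(2014, 2, 25, 18, 21), "update: ..."),
]
by_slot = bucket_tweets(tweets, start=datetime(2014, 2, 25, 18, 0))
# by_slot -> {0: ['breaking: ...'], 1: ['update: ...']}
```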

  3. Challenge Activity Log • Challenge definition (Dec 2013) • Challenge toolkit and registration (Jan 20, 2014) • Development dataset collection (Feb 3, 2014) • Rehearsal dataset collection (Feb 17, 2014) • Test dataset collection (Feb 25, 2014) • Results submission (Mar 4, 2014) • Paper submission (Mar 9, 2014) • Results evaluation (Mar 5-18, 2014) • Workshop (Apr 7, 2014)

  4. Some statistics • Registered participants: 25 • India: 4, Belgium: 3, Germany: 3, UK: 3, Greece: 3, Ireland: 2, USA: 2, France: 2, Italy: 1, Spain: 1, Russia: 1 • Participants that signed the Challenge agreement: 19 • Participants that submitted results: 11 • Participants that submitted papers: 9

  5. Evaluation Protocol • Defined several evaluation criteria: • Newsworthiness → Precision/Recall, F-score (see formula below) • Readability → scale [1-5] • Coherence → scale [1-5] • Diversity → scale [1-5] • List of reference topics • Set up precise evaluation guidelines • Blind evaluation (i.e. the evaluator does not know which method a topic comes from) via a Web UI • Participants submitted topics for 96 timeslots; manual evaluation covered 5 sample timeslots • Result validation and analysis
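The slides do not state which F-score variant was used; a standard assumption is the balanced F1 combining precision P and recall R:

```latex
F_1 = \frac{2 \cdot P \cdot R}{P + R}
```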

  6. Results – Reference topic recall Recall was computed with respect to 59 reference topics. These were partitioned into three groups (20, 20, 19), and each of the three evaluators manually matched participants' topics against the group assigned to them.

  7. Results – Pooled topic recall (1/2) • Each evaluator independently judged each participant's topics as newsworthy or not • Selected all topics marked newsworthy by at least two evaluators • Manually extracted the unique topics (70 in total, partially overlapping with the reference topic list) • Manually matched each participant's correct topics to the list of newsworthy topics • Computed precision, recall and F-score (see the sketch below)
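A minimal sketch of the pooling and scoring steps just described. All names are illustrative (the actual topic matching was manual), assuming precision is taken over a participant's submitted topics and recall over the pooled list:

```python
def majority_newsworthy(votes, min_votes=2):
    """Keep topic ids marked newsworthy by at least `min_votes` of the 3 evaluators.

    votes: {topic_id: number of evaluators who judged it newsworthy}
    """
    return {t for t, n in votes.items() if n >= min_votes}

def precision_recall_f1(n_matched, n_submitted, n_pool):
    """Scores for one participant.

    n_matched:   participant topics matched to a pooled newsworthy topic
    n_submitted: topics the participant submitted (in the evaluated slots)
    n_pool:      size of the pooled newsworthy-topic list (70 here)
    """
    p = n_matched / n_submitted if n_submitted else 0.0
    r = n_matched / n_pool if n_pool else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1
```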

  8. Results – Pooled topic recall (2/2)

  9. Results - Readability

  10. Results - Coherence

  11. Results - Diversity

  12. Results – Image Relevance

  13. Results – Aggregate (1/2) • For each criterion Ci, we computed each team's score relative to the best team on that criterion: Ci*(team) = Ci(team) / max_j Ci(team_j) • We then aggregated the normalized scores: Ctot = 0.25·Cref·Cpool + 0.25·Cread + 0.25·Ccoh + 0.25·Cdiv where Cref is computed from the recall of reference topics, Cpool from the F-score of the pooled topics, and Cread, Ccoh and Cdiv from readability, coherence and diversity respectively.
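A direct transcription of the two formulas above into Python; the team dictionaries and criterion names are placeholders, not challenge data:

```python
def normalize(raw):
    """Ci*(team) = Ci(team) / max_j Ci(team_j) for one criterion."""
    best = max(raw.values())
    return {team: score / best for team, score in raw.items()}

def aggregate(c_ref, c_pool, c_read, c_coh, c_div):
    """Ctot per team; the reference/pooled product forms one 0.25-weighted term."""
    n = {name: normalize(c) for name, c in
         [("ref", c_ref), ("pool", c_pool), ("read", c_read),
          ("coh", c_coh), ("div", c_div)]}
    return {t: 0.25 * n["ref"][t] * n["pool"][t] + 0.25 * n["read"][t]
               + 0.25 * n["coh"][t] + 0.25 * n["div"][t]
            for t in c_ref}
```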

  14. Results – Aggregate (2/2) We tried several alternative aggregation schemes. The top three teams were the same!

  15. Program
  15:20-15:30: Carlos Martin-Dancausa and Ayse Goker: Real-time topic detection with bursty n-grams.
  16:00-16:20: Gopi Chand Nuttaki, Olfa Nasraoui, Behnoush Abdollahi, Mahsa Badami, Wenlong Sun: Distributed LDA based topic modelling and topic agglomeration in a latent space.
  16:20-16:40: Steven van Canneyt, Matthias Feys, Steven Schockaert, Thomas Demeester, Chris Develder, Bart Dhoedt: Detecting newsworthy topics in Twitter.
  16:40-17:00: Georgiana Ifrim, Bichen Shi, Igor Brigadir: Event detection in Twitter using aggressive filtering and hierarchical tweet clustering.
  17:00-17:20: Gerard Burnside, Dimitrios Milioris, Philippe Jacquet: One day in Twitter: Topic detection via joint complexity.
  17:20-17:30: Georgios Petkos, Symeon Papadopoulos, Yiannis Kompatsiaris: Two-level message clustering for topic detection in Twitter.
  17:30-17:40: Winners' announcement!

  16. Limitations – Lessons Learned • Did not take into account time • However, methods that produce a newsworthy topic earlier should be rewarded • Did not take into account image relevance • since we considered it an optional field • Coherence and diversity had extreme values in numerous cases • e.g. when a single relevant tweet was provided as representative • Evaluation turned out to be a very complex task! • Assessing only five slots (out of the 96) is definitely a compromise: (a) consider use of more evaluators/AMT, (b) consider simpler evaluation tasks

  17. Plan • Release evaluation resources • list of reference topics • list of pooled newsworthy topics • evaluation scores • Papers • SNOW Data Challenge paper • Resubmission of participants’ papers with CEUR style • Submission to CEUR-ws.org • Open-source implementations? • Further plans?

  18. Thank you!
