Filippo Menczer

Source: Wikipedia, the free encyclopedia.
Filippo Menczer
Born: 16 May 1965
Alma mater: Sapienza University of Rome; University of California, San Diego
Scientific career
Fields: Cognitive science, Computer science, Physics
Institutions: Indiana University Bloomington
Website: cnets.indiana.edu/fil/

Filippo Menczer is an American and Italian academic. He is a University Distinguished Professor and the Luddy Professor of Informatics and Computer Science at the Luddy School of Informatics, Computing, and Engineering, Indiana University Bloomington. He is the director of the Observatory on Social Media,[1] a member of the Indiana University Network Science Institute,[2] a former director of the Center for Complex Networks and Systems Research,[3] a senior research fellow of the Kinsey Institute, a fellow of the Center for Computer-Mediated Communication,[4] and a former fellow of the Institute for Scientific Interchange in Turin, Italy. In 2020 he was named a Fellow of the ACM.

Education, career, service

Menczer holds a Laurea in physics from the Sapienza University of Rome and a Ph.D. in computer science and cognitive science from the University of California, San Diego. He serves on the editorial boards of Network Science,[5] EPJ Data Science,[6] PeerJ,[7] and the Harvard Kennedy School Misinformation Review.[8] He was a co-chair of the Web Science 2014 Conference[9] and general co-chair of the NetSci 2017 Conference.

Research

Menczer's research focuses on Web science, social networks, social media, social computation, Web mining, data science, distributed and intelligent Web applications, and modeling of complex information networks. He introduced the idea of topical and adaptive Web crawlers, a specialized and intelligent type of Web crawler.[10][11]
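
A topical crawler prioritizes which links to visit next according to how relevant the linking page is to a target topic. The minimal sketch below illustrates that best-first principle; the toy in-memory "web", the keyword-overlap score, and all names here are illustrative assumptions, not Menczer's published algorithms.

```python
# Minimal best-first topical crawler over a toy in-memory "web".
# Illustrative only: the pages, keyword-overlap score, and frontier
# policy are assumptions, not Menczer's published algorithms.
import heapq

# Toy web: url -> (page text, outgoing links)
PAGES = {
    "a": ("networks and complex systems", ["b", "c"]),
    "b": ("cooking recipes and baking", ["d"]),
    "c": ("social networks research", ["d", "e"]),
    "d": ("complex networks analysis", []),
    "e": ("sports scores", []),
}
TOPIC = {"networks", "complex", "social"}

def score(text: str) -> float:
    """Fraction of topic keywords that appear in the page text."""
    return len(TOPIC & set(text.split())) / len(TOPIC)

def topical_crawl(seed: str, budget: int = 10) -> list[str]:
    # The frontier is a priority queue keyed on the topical score of
    # the page that linked each URL (negated: heapq is a min-heap).
    frontier = [(-1.0, seed)]
    visited, relevant = set(), []
    while frontier and len(visited) < budget:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        text, links = PAGES[url]
        s = score(text)
        if s > 0:
            relevant.append(url)
        for link in links:
            if link not in visited:
                # Best-first: links from on-topic pages are visited earlier.
                heapq.heappush(frontier, (-s, link))
    return relevant

print(topical_crawl("a"))  # ['a', 'c', 'd']
```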

Menczer is also known for his work on social phishing, a type of phishing attack that leverages friendship information from social networks,[12][13][14] semantic similarity measures for the Web,[15][16][17][18][19][20][21] and studies of search engine coverage and search engine bias.[22][23][24][25]

The group led by Menczer has analyzed and modeled how information and misinformation spread through social media.[26][27][28][29] This work has been cited in coverage of misinformation campaigns such as the Pizzagate conspiracy theory[30] and the online propaganda targeting Syria's White Helmets,[31] and in taking down voter-suppression bots on Twitter.[32] Menczer and coauthors have also found a link between online COVID-19 misinformation and vaccination hesitancy.[33]

Analysis by Menczer's team demonstrated the echo-chamber structure of information-diffusion networks on Twitter during the 2010 United States elections.[34] The team found that conservatives almost exclusively retweeted other conservatives while liberals retweeted other liberals. Ten years later, this work received the Test of Time Award at the 15th International AAAI Conference on Web and Social Media (ICWSM).[35] As these patterns of polarization and segregation persist,[36] Menczer's team has developed a model that shows how social influence and unfollowing accelerate the emergence of online echo chambers.[37]
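
The mechanism of that model can be illustrated with a toy adaptive-network simulation: agents nudge their opinions toward like-minded neighbors (social influence) and rewire ties away from dissimilar ones (unfollowing), so disagreement across remaining ties collapses. The update rules and parameter values below are illustrative assumptions, not those of the published model.

```python
# Toy adaptive-network model of echo-chamber formation: agents nudge
# opinions toward like-minded neighbors (influence) and rewire away
# from dissimilar ones (unfollowing). All parameters are illustrative
# assumptions, not those of the published model.
import random
import networkx as nx

random.seed(1)
N, STEPS = 100, 20000
TOL, MU, REWIRE_P = 0.4, 0.1, 0.3  # tolerance, influence rate, unfollow prob.

G = nx.erdos_renyi_graph(N, 0.08, seed=1)
opinion = {v: random.uniform(-1, 1) for v in G}

def disagreement(G, opinion):
    """Mean opinion distance across ties; low values indicate echo chambers."""
    return sum(abs(opinion[u] - opinion[v]) for u, v in G.edges()) / G.number_of_edges()

print(f"initial disagreement across ties: {disagreement(G, opinion):.2f}")
for _ in range(STEPS):
    u = random.randrange(N)
    if not G.degree[u]:
        continue
    v = random.choice(list(G.neighbors(u)))
    if abs(opinion[u] - opinion[v]) < TOL:
        # Social influence: u moves a little closer to v's opinion.
        opinion[u] += MU * (opinion[v] - opinion[u])
    elif random.random() < REWIRE_P:
        # Unfollowing: drop the discordant tie and follow someone else.
        w = random.randrange(N)
        if w != u and not G.has_edge(u, w):
            G.remove_edge(u, v)
            G.add_edge(u, w)
print(f"final disagreement across ties: {disagreement(G, opinion):.2f}")
```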

Menczer and colleagues have advanced the understanding of information virality, in particular the prediction of which memes will go viral based on the structure of early diffusion networks[38][39] and how competition for finite attention helps explain virality patterns.[40][41] In a 2018 paper in Nature Human Behaviour, Menczer and coauthors used a model to show that when agents in a social network share information under conditions of high information load and/or low attention, the correlation between the quality and popularity of information in the system decreases.[42] An erroneous analysis in the paper suggested that this effect alone would be sufficient to explain why fake news is as likely to go viral as legitimate news on Facebook. When the authors discovered the error, they retracted the paper.[43]
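
The limited-attention mechanism can be sketched with a toy agent-based simulation of this general kind: agents either post new memes of random quality or reshare (with a quality bias) from a finite feed, and the quality-popularity correlation weakens as feeds shrink. This is an illustration of the model class only, not the retracted paper's implementation; every parameter here is an assumption.

```python
# Toy limited-attention model: agents post memes of random quality or
# reshare (quality-biased) from a finite feed. As feeds shrink, the
# quality-popularity correlation weakens. Illustrative assumptions only;
# not the retracted paper's implementation. Requires Python 3.10+.
import random
from statistics import correlation

random.seed(7)
N_AGENTS, STEPS, P_NEW = 50, 5000, 0.2

def simulate(feed_len: int) -> float:
    feeds = [[] for _ in range(N_AGENTS)]
    quality, shares = {}, {}
    next_id = 0
    for _ in range(STEPS):
        a = random.randrange(N_AGENTS)
        if random.random() < P_NEW or not feeds[a]:
            meme, next_id = next_id, next_id + 1  # post a new meme
            quality[meme] = random.random()
            shares[meme] = 0
        else:
            # Reshare: higher quality is favored, but only memes still
            # in the finite feed can compete for attention.
            meme = max(feeds[a], key=lambda m: quality[m] + random.gauss(0, 0.5))
        shares[meme] += 1
        # The meme reaches every other feed; older items fall off the end.
        for b in range(N_AGENTS):
            if b != a:
                feeds[b] = ([meme] + feeds[b])[:feed_len]
    memes = list(shares)
    return correlation([quality[m] for m in memes], [shares[m] for m in memes])

for feed_len in (20, 5, 1):
    print(f"feed length {feed_len:2d}: quality-popularity r = {simulate(feed_len):+.2f}")
```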

Following influential publications on the detection of astroturfing[44][45][46][47][48] and social bots,[49][50] Menczer and his team have studied the complex interplay between cognitive, social, and algorithmic factors that contribute to the vulnerability of social media platforms and people to manipulation,[51][52][53][54] and focused on developing tools to counter such abuse.[55][56] Their bot detection tool, Botometer, was used to assess the prevalence of social bots[57][58] and their sharing activity.[59] Their tool to visualize the spread of low-credibility content, Hoaxy,[60][61][62][63] was used in conjunction with Botometer to reveal the key role played by social bots in spreading low-credibility content during the 2016 United States presidential election.[64][65][66][67][68] Menczer's team also studied perceptions of partisan political bots, finding that Republican users are more likely to confuse conservative bots with humans, whereas Democratic users are more likely to confuse conservative human users with bots.[69] Using bot probes on Twitter, Menczer and coauthors demonstrated a conservative political bias on the platform.[70]

As social media platforms have increased their countermeasures against malicious automated accounts, Menczer and coauthors have shown that coordinated campaigns by inauthentic accounts continue to threaten information integrity on social media, and have developed a framework to detect such coordinated networks.[71] They have also demonstrated new forms of social media manipulation by which bad actors can grow influence networks[72] and conceal the high volume of content with which they flood the network.[73]

Menczer and colleagues have shown that political audience diversity can be used as an indicator of news source reliability in algorithmic ranking.[74]

Textbook

The textbook A First Course in Network Science by Menczer, Fortunato, and Davis was published by Cambridge University Press in 2020.[75] The textbook has been translated into Japanese, Chinese, and Korean.

Projects

  • Observatory on Social Media (OSoMe, pronounced awesome):[76] A research center that studies and visualizes how information spreads online.[77] Includes data and tools to visualize Twitter trends and diffusion networks, detect social bots, and more.[78][79]
  • Botometer:[80] A machine learning tool to detect social bots on Twitter, previously known as BotOrNot. Includes a public API, a social bot dataset repository, and the BotAmp tool[81] to assess the role of automated accounts in boosting a given topic; a usage sketch of the API client follows this list.
  • Hoaxy:[82] An open-source search and network visualization tool to study the spread of narratives on Twitter. Includes a public API.
  • Fakey:[83] A mobile game for news literacy. Fakey mimics a social media news feed in which players must distinguish real news stories from fake ones.
  • Scholarometer:[84] A social tool to facilitate citation analysis and help evaluate the impact of scholars.[85][86][87][88]
  • Kinsey Reporter:[89] A mobile survey platform for sharing, exploring, and visualizing anonymous data about sexual behavior, developed in collaboration with the Kinsey Institute. Reports are submitted via Web or smartphone, then available for visualization or offline analysis via a public API.[90][91]
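
For illustration, the Botometer API could be queried with the project's Python client (botometer-python) roughly as follows. The credentials are placeholders, and the Twitter-based API this client targets may no longer be operational following Twitter API changes.

```python
# Hypothetical usage of the Botometer Python client (botometer-python).
# Credentials are placeholders; the Twitter-based API this client talks
# to may no longer be operational following Twitter API changes.
import botometer

twitter_app_auth = {
    "consumer_key": "...",     # placeholder
    "consumer_secret": "...",  # placeholder
}
bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="...",        # placeholder
    **twitter_app_auth,
)

# Score a single account; the response includes bot-likelihood scores.
result = bom.check_account("@OSoMe_IU")
print(result)
```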

References

  1. ^ "Observatory on Social Media (OSoMe)". Retrieved February 5, 2023.
  2. ^ "IUNI". Retrieved March 18, 2019.
  3. ^ "Center for Complex Networks and Systems Research (CNetS)". Retrieved May 8, 2014.
  4. ^ "Center for Computer-Mediated Communication". Retrieved March 18, 2019.
  5. ^ "Editorial Board". Network Science. Retrieved March 18, 2019.
  6. ^ "Editorial Board". EPJ Data Science Editorial Board. Retrieved March 18, 2019.
  7. ^ "PeerJ Academic Editors". PeerJ. Retrieved March 18, 2019.
  8. ^ "HKS Misinformation Review Editorial Board". Retrieved February 5, 2023.
  9. ^ "Web Science 2014". Retrieved May 4, 2014.
  10. S2CID 5931711.
  11. .
  12. .
  13. ^ Lenz, Ryan (July 22, 2007). "School Conducts Anti-Phishing Research". The Washington Post.
  14. S2CID 2011198.
  15. .
  16. .
  17. .
  18. .
  19. .
  20. .
  21. .
  22. Network World. March 15, 2006. Archived from the original on May 4, 2014. Retrieved May 4, 2014.
  23. .
  24. .
  25. ^ "Egalitarian engines". The Economist. November 17, 2005.
  26. PMID 23483885.
  27. .
  28. .
  29. .
  30. ^ Robb, Amanda (November 16, 2017). "Anatomy of a Fake News Scandal". Rolling Stone. Retrieved 18 March 2019.
  31. ^ Solon, Olivia (18 December 2017). "How Syria's White Helmets became victims of an online propaganda machine". The Guardian. Retrieved 18 March 2019.
  32. ^ Bing, Christopher (November 2, 2018). "Exclusive: Twitter deletes over 10,000 accounts that sought to discourage U.S. voting". Reuters. Retrieved 18 March 2019.
  33. S2CID 247939732.
  34. ^ Conover, Michael; Jacob Ratkiewicz; Matthew Francisco; Bruno Gonçalves; Filippo Menczer; Alessandro Flammini (2011). "Political Polarization on Twitter". Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media.
  35. ^ "ICWSM-2021 Award Winners". Retrieved February 5, 2023.
  36. S2CID 234356375.
  37. .
  38. .
  39. ^ Matson, John (December 17, 2013). "Twitter Trends Help Researchers Forecast Viral Memes". Scientific American.
  40. PMID 22461971.
  41. ^ McKenna, Phil (April 13, 2012). "Going viral on Twitter is a random act". New Scientist.
  42. S2CID 23363010.
  43. ^ Dancyger, Lilly (10 January 2019). "Researchers Retract Widely Cited Fake-News Study". Rolling Stone. Retrieved 18 March 2019.
  44. S2CID 1958549.
  45. ^ Ratkiewicz, Jacob; Michael Conover; Mark Meiss; Bruno Gonçalves; Alessandro Flammini; Filippo Menczer (2011). "Detecting and Tracking Political Abuse in Social Media". Proc. Fifth International AAAI Conference on Weblogs and Social Media.
  46. ^ Giles, Jim (27 October 2010). "Twitter tool roots out disguised mass postings". New Scientist.
  47. ^ Keller, Jared (November 10, 2010). "When Campaigns Manipulate Social Media". The Atlantic.
  48. ^ Silverman, Craig (November 4, 2011). "Misinformation Propagation". Columbia Journalism Review.
  49. S2CID 1914124.
  50. ^ Urbina, Ian (August 10, 2013). "I Flirt and Tweet. Follow Me at #Socialbot". The New York Times.
  51. S2CID 4410672.
  52. ^ Menczer, Filippo (November 27, 2016). "Misinformation on social media: Can technology save us?". The Conversation. Retrieved 18 March 2019.
  53. ^ Bergado, Gabe (December 14, 2016). "The Man Who Saw Fake News Coming". Inverse. Retrieved 18 March 2019.
  54. PMID 29146827.
  55. ^ Ciampaglia, Giovanni Luca; Menczer, Filippo (June 20, 2018). "Misinformation and biases infect social media, both intentionally and accidentally". The Conversation. Retrieved 18 March 2019.
  56. ^ Zamudio-Suaréz, Fernanda (December 22, 2016). "A Professor Once Targeted by Fake News Now Is Helping to Visualize It". The Chronicle of Higher Education. Retrieved 18 March 2019.
  57. S2CID 15103351.
  58. ^ Chong, Zoey (March 14, 2017). "Up to 48 million Twitter accounts are bots, study says". CNET. Retrieved 18 March 2019.
  59. ^ Wojcik, Stefan; Messing, Solomon; Smith, Aaron; Rainie, Lee; Hitlin, Paul (April 9, 2018). "Bots in the Twittersphere". Pew Research Center. Retrieved 18 March 2019.
  60. ^ Gershgorn, Dave (December 21, 2016). "There's a new tool to visualize how fake news is spread on Twitter". Quartz. Retrieved 18 March 2019.
  61. ^ Kauffman, Gretel (December 22, 2016). "Indiana University tech tool 'Hoaxy' shows how fake news spreads". The Christian Science Monitor. Retrieved 18 March 2019.
  62. ^ Skallerup Bessette, Lee (January 9, 2017). "Hoaxy Visualizes the Spread of Online News". The Chronicle of Higher Education. Retrieved 18 March 2019.
  63. ^ Reaney, Patricia (December 21, 2016). "U.S. university launches tool to show how fake news spreads". Reuters. Retrieved 18 March 2019.
  64. PMID 30459415.
  65. .
  66. ^ Ouellette, Jennifer (21 November 2018). "Study: It only takes a few seconds for bots to spread misinformation". Ars Technica. Retrieved 18 March 2019.
  67. ^ Boyce, Jasmin (November 21, 2018). "'Relatively few' Twitter bots were needed to spread misinformation and overwhelm fact checkers, study finds". NBC News. Retrieved 18 March 2019.
  68. ^ de Haldevang, Max (November 20, 2018). "Twitter could have partly blocked Russia's 2016 election attack with CAPTCHAs". Quartz. Retrieved 18 March 2019.
  69. S2CID 225633835.
  70. .
  71. .
  72. .
  73. .
  74. .
  75. .
  76. ^ "OSoMe: Home". Retrieved 18 March 2019.
  77. ^ Hotz, Robert Lee (October 1, 2011). "Decoding Our Chatter". The Wall Street Journal.
  78. ^ "OSoMe Tools". Observatory on Social Media. Retrieved 18 March 2019.
  79. .
  80. ^ "Botometer". Retrieved 5 February 2023.
  81. ^ "BotAmp". Retrieved 5 February 2023.
  82. ^ "Hoaxy". Hoaxy. Retrieved 5 February 2023.
  83. ^ "Fakey". Fakey. Retrieved 5 February 2023.
  84. ^ "Scholarometer". Retrieved 18 March 2019.
  85. ^ Kolowich, Steve (December 15, 2009). "Tenure-o-meter". Inside Higher Ed.
  86. PMID 22984414.
  87. .
  88. ^ Van Noorden, Richard (November 6, 2013). "Who is the best scientist of them all?". Nature.
  89. ^ "Kinsey Reporter". Retrieved 18 March 2019.
  90. ^ "Kinsey Reporter". Scientific American. Retrieved May 4, 2014.
  91. ^ Healy, Melissa (February 14, 2014). "Want to dish about Valentine's Day sex? There's an app for that". Los Angeles Times.
