Unless you’ve been hiding in a remote location with no access to news or social media, you know that the topic of “fake news” has been top of mind, not only in the media but also within librarianship (I found numerous library guides on the topic). Since information and media literacy are among the superhero skills that librarians bring to the table, the question is: what can we do to help solve this pervasive problem?
On May 11th I attended a workshop to help develop a “schema for credibility.” The workshop was hosted by Meedan (an organization that builds tools to help journalists) and Hacks/Hackers (an organization made up of journalists and technologists). As a sign of the increasingly fluid boundaries of academia, the workshop was held on the San Francisco campus of Northwestern University, right in the heart of the financial district. (Truth be told, I had no idea that Northwestern had a Bay Area outpost before this meeting, but I can report that it is quite a lovely outpost indeed.)
This workshop grew out of a discussion at MisinfoCon, and the gathering (which included folks from Silicon Valley tech companies, journalism, academia… and me) was intended as a way to explore the problem space and make a start on defining what might go into a credibility schema: key characteristics that could be used in machine learning to infer whether a news article is authentic or has been designed to mislead or misinform. The holy grail of such an effort would be to automatically identify quality (as well as distortion and propaganda) in published news. Although the end goal is a machine-driven process, what established, human-driven practices and social processes already exist to evaluate sources and promote quality? Those named included scholarly peer review, practices within the Wikipedia community (the value placed on evaluating reliable sources and maintaining a neutral point of view), and librarianship. Good work is also going on within scholarly communications, digital humanities, social media studies, and library and information science.
Since we seemed to be operating under a rough version of the Chatham House Rule, I won’t go into details about who attended the event or what they revealed, but it was very heartening to see expressions of interest in this topic from a group of participants representing a diverse set of organizations.
This exploration of credibility relates to work we are undertaking here in OCLC Research. My colleagues Ixchel Faniel, Lynn Silipigni Connaway, Erin Hood and Brittany Brannon (along with a crackerjack team from the University of Florida George A. Smathers Libraries and the Rutgers University School of Communication and Information) are working on an IMLS National Leadership Grant: “Researching Students’ Information Choices: Determining Identity and Judging Credibility in Digital Spaces.” The team will study 180 students, from primary to graduate school, working in the science, technology, engineering and mathematics (STEM) disciplines. Using a task-based methodology, the project team will observe students’ cognition in action to understand their choices, behaviors and rationale around credibility. This research project complements the work of library practitioners who help students and others, on a daily basis, understand and be better informed about what they find online.
I hope that the library community will be inspired to connect with this initiative. We have much to add to the discussion. The next convening will be on June 7th at Columbia University’s Brown Institute. If there is more interest and momentum, there may be a larger gathering to discuss these issues in greater depth and build consensus around a schema. Please let me know if you’d like a point of contact for any of these future efforts.