The transformation from data journalism to computational journalism

For a long time, journalism was a traditional profession: people investigated issues, talked to sources, wrote their findings down and published them. Then the Internet arrived, and journalists went onto the Net, emailing and surfing the Web as a new way to contact sources, gather information and disseminate the news.
Ultimately, this evolved into a new type of journalist: one who collects data freely available on the Net and aggregates it in such a way that it reveals new insights. This is called data journalism. Now yet another type of journalist is appearing, one that computes: computational journalism. I would suggest that computational journalism is an extension of data(-driven) journalism. Of course, data as such are meaningless; only through some filtering – aggregation, comparisons, etc. – can sense be made of large amounts of data, a process possibly made easier through the use of visualizations.
Data and computational journalism, especially as used in investigative journalism, have been around for quite some time already. However, they received a great push through the use of APIs and the increased accessibility of databases through the Internet in general. The data repository of the Guardian is a good example of the latter. Still, analyzing data and visualizing the findings to convey the journalist's message can be quite tricky. A source on creative data visualization – and on visualizations gone wrong – can be found at Flowing Data.
The video below is a lecture on computational journalism's research agenda, given at the Journalism and Media Studies Centre of Hong Kong University.

Media Research Seminar: Computational Journalism: Mapping the Research Agenda from JMSC HKU on Vimeo.

Published by

Maurice Vergeer

I am Maurice Vergeer, working at the Communication Science department of Radboud University Nijmegen, in the Netherlands.