4 Apr 2017

What I learned from teaching data journalism

I have been teaching an Introduction to Data Journalism module to a group of MA Journalism students at City University London this year.

They were a varied group of Interactive, Investigative, Finance and Erasmus students - of many different backgrounds, from many different countries and with many different interests. What united them was their interest in using data to help find new stories and improve their current storytelling.

Through teaching them some data-led techniques to help achieve this aim, I also learned about how best to start understanding the process of data journalism. It helped me clarify some thoughts I already had about data journalism, while also challenging some other assumptions I had settled into.

Build the foundation

It's important to build the groundwork first, and this involves finding data: how to source information, where to find open data, and where to go if you can't find information on your subject.

There's no point giving your students data that's already been sourced and cleaned unless they understand how the data got to this point in the first place. They need to know what the starting point of data journalism is, and like all other strands of journalism, that's your source.

This means taking them through the process of using open data portals such as the World Bank's, as well as talking about how we can find our own data through scraping, information requests and other means.

Take it slow

Analysing data is the bread and butter of the practice. This is where we find our stories, how we prise valuable and engaging information from otherwise untapped and uninteresting data.

It's the essential - and fun - part where we find out what our story is. And so it's important that we invest the necessary time to look into this section of the data journalism process.

The variety of my students' skillsets when they started the course was surprising. While a minority had used statistical programmes such as R to crunch data, many more had come from arts-based degrees and were daunted by the prospect of "lots of numbers in a spreadsheet".

Accommodating the two groups to find common ground in data analysis was key, and I thought it was the right decision to spend many weeks on different statistical analysis platforms in order to provide a variety of tools with which to analyse data.

Don't go straight in with the fun stuff

Almost everyone, when they want to get into data journalism, wants to learn how to visualise data. This usually involves wanting to create a pretty choropleth map or a complex interactive as soon as possible.

This, of course, is a mistake. It is useless in itself, unless it's twinned with visual and data literacy.

There are plenty of bad visualisations online and this is partly because of people who have the skills to build graphics but don't have the understanding of how to communicate statistics. I therefore spent the whole first term trying to help my students understand the best practices for visualising the data, without really going into detail on many different tools that could be used for doing so.

Simple and complete is better than complicated and unrefined

While, as said above, it's important to emphasise statistical literacy before going in with the "fun stuff", that's not to say aspiring data journalists cannot tell visual-led stories.

There's a lot that can be told through simple visualisations such as bar charts and line charts - and minor adaptations of these. And so while I focused on visual literacy instead of an arsenal of tools in my first term of teaching, I still highlighted some platforms for creating basic visualisations.

As then-Guardian Data Editor Simon Rogers has previously said, "anyone can do" data journalism. Introducing students - all keen to create many different types of visualisations - to free visualisation platforms such as Highcharts and Datawrapper allowed them to start producing data-led stories without getting carried away with overly complicated - and potentially flawed - visualisations.

This allowed them to practise the basics first, learning best visualisation practices as they went, before moving on to more advanced (and fun) stuff.

What this says about data-driven journalism

None of the points above is ground-breaking, but they do reinforce one important thing: you have to get the basics right first.

It's important to remember that the core of data-led journalism is in the analysis. In the finding of stories that other reporters couldn't find. In uncovering stories in vast quantities of information that the ordinary population does not have time to discover for themselves.

Data journalism can often be beautiful, attractive and technically brilliant, but none of this matters if the foundation isn't there.

Students will be champing at the bit to work on the huge interactive visualisations that are probably the reason they're interested in the practice in the first place. But I think it's important to focus on the sourcing and analysis of data first - as this is our starting point and the way that we discover groundbreaking stories.

1 Oct 2016

Sourcing information for journalism: Where to find your data

There are many different sources we can go to when looking for data to tell a story.

Some of these are more complex, like scraping, freedom of information requests or querying APIs, but there are also many websites that already host public datasets that are easy to find, download and analyse.

These data portals make it easy both for beginners to get into data and for more seasoned data journalists to keep finding useful information to assist their stories.

Sourcing data can often be the first stumbling block in the data journalism process, but there are many sites you can go to to find useful and up-to-date data. Below is a list of some places to go for reliable and informative data, giving you somewhere to start if you’re struggling to figure out where to find your story.

The ONS release calendar tells you what new datasets will be released in the coming days

Office for National Statistics

Government releases are a good source of up-to-date information, and the ONS is one of the best places to get data on the UK's demography, economy and business.

As well as getting to grips with the national picture, we can also use the ONS to get information on variables that shift across the country. For example, we can see differences in immigration across the country's different local authorities.

World Bank

The World Bank’s portal releases free and open data about development across the world. When looking for data on every country in the world, the World Bank is often a good starting point. 

It has information on demographics, global finances and public health and safety. The site has an interface ready for you to trawl through a multitude of datasets in search of global trends and issues, and all the data is ready to download.
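As well as the browsable interface, the World Bank exposes its datasets through a public web API, so you can pull figures straight into your own analysis. A minimal sketch, assuming the v2 endpoint pattern and the SP.POP.TOTL (total population) indicator code, which only builds the query URL - fetching it is left to whichever HTTP tool you prefer:

```python
# Sketch: composing a query URL for the World Bank's public API (v2).
# The country code and indicator code here are illustrative examples.

def world_bank_url(country_code, indicator, fmt="json"):
    """Return a World Bank API v2 URL for one country and one indicator."""
    return ("https://api.worldbank.org/v2/country/"
            f"{country_code}/indicator/{indicator}?format={fmt}")

# Total population for the United Kingdom, returned as JSON:
print(world_bank_url("GB", "SP.POP.TOTL"))
```

Keeping the URL-building in a small function like this makes it easy to loop over a list of countries or indicators when you need a comparison table.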

Data.gov.uk and Data.gov

Data.gov.uk is the UK government's data portal, releasing information on the topics the government works on. The USA has a similar model with Data.gov, giving people access to their data, and many governments are beginning to adopt similar initiatives.

One warning, however: governments may not release information that makes them look bad. If you want to make sure you’re getting the full story about an institution, never just consult one source on it.


Eurostat

Eurostat is the European Union's own open data agency. If you're looking to compare the UK against other European countries, or are looking to cover a story across the EU, Eurostat contains a variety of publications containing statistics on EU member states.

This site has information on economic output, labour markets and demographics – to name just a few. Considering that the UK's relationship with the EU is likely to dominate headlines for years to come, this source is going to become increasingly important.

Open Corporates

When looking into companies across the world, Open Corporates is an important source. It is the largest global open database of companies. Its eventual aim is to list a URL for every company in the world.

United Nations

The UN Data Portal has information on many different variables, broken down by countries across the world. If the World Bank doesn't have data on an international topic, it is worth going to the UN to see if it has it.

The UN Refugee Agency is a similar portal dedicated to one specific issue that is current in the news: data on migrants.


Data.police.uk

Data.police.uk is a hub of data on crime and policing in England, Wales and Northern Ireland. You can access CSVs on street-level information and explore the site's API for data about individual police forces and neighbourhood teams.

This is a very handy site for seeing how the police are performing on a local basis. You can compare crimes by location and time, enabling you to find any correlations or patterns that are out there. The Metropolitan Police also publishes its data on each crime in London on police.uk.
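The site's API takes a latitude, longitude and month and returns street-level crimes near that point. A minimal sketch, assuming the crimes-street endpoint and its lat/lng/date parameters as documented on the site; it only assembles the request URL, which you can then fetch with requests, curl or your browser:

```python
# Sketch: composing a street-level crime query for the data.police.uk API.
# The coordinates below (central London) are an illustrative example.

def street_crime_url(lat, lng, date):
    """URL for street-level crimes near a point in a given month (YYYY-MM)."""
    return ("https://data.police.uk/api/crimes-street/all-crime"
            f"?lat={lat}&lng={lng}&date={date}")

# All recorded street-level crimes around Charing Cross in January 2016:
print(street_crime_url(51.5074, -0.1278, "2016-01"))
```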


Nomis

Nomis is a good source for official labour market statistics – you can get detailed data based on local areas, and can search summary statistics by local authority, ward or constituency.


MyNHS

Want to see the data that the NHS and local councils use to monitor performance and shape the services you use? MyNHS gives you this chance: it is one of the best places to get your data on the UK's health service.


WhatDoTheyKnow

WhatDoTheyKnow.com aggregates freedom of information requests and responses, making them available and open for us to find and analyse. We can also use this resource before sending off our own requests, checking whether the data we want has already been released.

World Health Organisation

The World Health Organisation's data portal is a huge library with maps, reports and country-specific statistics. From air pollution to child stunting, epidemics and data relating to the Sustainable Development Goals, the WHO has lots of information that it is opening up to academics, researchers and journalists.

30 May 2016

Poisonous statistics: How bad numbers could influence a generation's future

£350m per week. According to some, that's the amount of money the UK gives to the EU. Of course, it's not. We instead pay around £250m per week, due to the rebate that reduces the amount we pay - but that doesn't stop people saying and believing the first number.

I've spent the last few weeks working with Full Fact to check some of the statistics in the EU referendum. From household income to immigration, jobs to red tape, we haven't yet found a claim that we can fully endorse - they're either completely wrong or at least misleading.

These claims are coming from major politicians with huge followings. Prime Minister David Cameron; ex-London Mayor Boris Johnson; Labour In leader Alan Johnson; Ukip leader Nigel Farage.

All of these people are getting away with twisting numbers to suit their own ends. Politicians have always done this - and they will always do so.

But there's something wrong when campaigns can keep on repeating the same incorrect statistic - that the UK sends £350 million a week to the EU - without any consequences.

A quick explanation

A quick word on why this figure is plainly wrong and misleading. The UK's rebate, or discount, reduces what we would otherwise pay. In 2015, we paid the EU £13 billion - working out at £250 million a week.

But then there are the EU payments made to the government, which bring our net contribution to around £8.5 billion, or £160 million a week. This is the UK's net contribution: still a big cost, but less than half the figure that many people now believe is true.
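The weekly figures above follow from simple division of the annual ones. A quick check, where the £4.5 billion for payments back to the government is inferred from the article's £13 billion and £8.5 billion figures rather than stated directly:

```python
# Checking the article's weekly figures against the annual ones.
gross_after_rebate = 13_000_000_000   # paid to the EU in 2015, after the rebate
payments_back = 4_500_000_000         # inferred: 13bn gross minus 8.5bn net
net = gross_after_rebate - payments_back

print(round(gross_after_rebate / 52 / 1e6))  # 250 (million pounds per week)
print(round(net / 52 / 1e6))                 # 163 (million pounds per week, "around £160m")
```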

This can be balanced against other ways in which the EU contributes to the UK: grants to British researchers, for example. The remain camp would then argue that it can also be weighed against advantages in business, trade and employment. Full Fact's guide to EU contributions goes into all of this in more detail.

Our chart showing how much the UK actually sends the EU annually (Telegraph Graphics)

Why does it matter?

The number's been featured on the side of the Vote Leave bus for weeks. It's been repeated by numerous public figures and campaigners, plastered all over social media. My own friends and family have repeated the number at me when the subject arises. It's become a fact for people.

But the problem is that it's not a fact. The UK Statistics Authority itself has said so. Sir Andrew Dilnot, chair of the UK Statistics Authority, said he was disappointed by the Brexit campaign's repetition of the claim, branding it "misleading and undermines trust in official statistics".

And yet the leave campaign are still going around saying it without any consequences. Every time it's repeated, "£350m per week" gains traction. It gets spread around more people and slowly becomes reality. Just this week, the figure was repeated live on TV during a BBC EU debate, allowing thousands of people to be persuaded by a dodgy statistic.

Where is the accountability for politicians and campaigns using poisonous statistics? They could influence the history of the United Kingdom - based on the misuse of numbers.

Tim Harford has previously written a piece on how politicians have poisoned statistics, and his points are only made clearer by what we're seeing in the EU campaign. Still, he gives us a gleam of light in the face of this misuse of statistical 'evidence'. He concludes:

But despite all this despair, the facts still matter. There isn’t a policy question in the world that can be settled by statistics alone but, in almost every case, understanding the statistical background is a tremendous help.

So the facts do still matter. That's reassuring. We just have to figure out which facts matter - and hopefully before the EU referendum vote on 23 June.

And for the future, there needs to be accountability for politicians and their use of statistics. They can't get away, as the Leave campaign might, with altering the history of a country through the misuse of data.

21 Feb 2016

Mapping with CartoDB: Solutions to problems faced by the journalist user

CartoDB is a great mapping tool for journalists. I've used it personally and professionally, to help tell the data-driven stories I produce.

It can be used to visually improve the stories you seek to tell, using interactive maps to help readers engage with your stories. You can produce these maps with no coding knowledge, all with the simple upload of an Excel file to the website, which will do all the hard work for you.

As soon as you upload your data, CartoDB will often show you a map of it immediately, which can be customised easily to improve it. This customisation can vary from simply changing the map type or information window, to editing the CSS to play with how the map shows your data.

But CartoDB is not perfect. There are drawbacks with using this tool, as I'll try to explain below, as well as highlight some ways to get around these issues.

Area names not matching

In the cleaning process, data journalists know that they have to look at their data's consistency - and this applies for area names.

Unless you're mapping large, major areas, such as countries, the chances are that you'll have to merge datasets - matching up area names in your dataset with shape files you've had to download yourself (often in the form of .kml files).

Often, when mapping UK constituencies or local authorities (shape file here), area names can prove an issue, as there are different ways to spell or present them. The text strings in your data (even if it's downloaded from government websites) may not match up with the text strings in the shape file you downloaded - and so you won't be able to merge them.

This can be the difference between having "York Central" and "Central York"; "Wyre and Preston North" and "Wyre & Preston North"; "Weston-Super-Mare" and "Weston Super Mare". If any of these inconsistencies are present between your two spreadsheets, then when you come to merge them in CartoDB you will have gaps in your map.

This means you have to be careful when preparing your dataset, before you upload it to CartoDB. Look at your shape file and your dataset, and check that the two match up on area names. Checking will save you time later. Excel's IF function could be useful here, to ask whether your two columns are the same (once you have brought the two datasets together for your test).

Alternatively, using area codes sidesteps this problem altogether - avoiding the possibility of variations in human input and making your work more reliable.
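If you'd rather script the check than do it in Excel, the same comparison is a few lines of pandas. A minimal sketch, assuming both tables have a hypothetical "name" column; the sample rows mirror the "&"/"and" and hyphen mismatches described above, and normalising both sides before comparing catches them:

```python
import pandas as pd

# Normalise the common sources of mismatch: case, ampersands and hyphens.
def normalise(name):
    return name.lower().replace("&", "and").replace("-", " ").strip()

# Hypothetical extracts from your dataset and your shape file's attribute table.
data = pd.DataFrame({"name": ["Wyre & Preston North", "Weston-Super-Mare"]})
shapes = pd.DataFrame({"name": ["Wyre and Preston North", "Weston Super Mare"]})

# Flag dataset rows with no counterpart in the shape file.
matched = data["name"].map(normalise).isin(shapes["name"].map(normalise))
print(data.loc[~matched, "name"])  # any rows listed here would leave gaps in the map
```

Running this before every upload means a stray spelling shows up as a printed row, rather than as a silent hole in your published choropleth.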

Area codes are individual and less susceptible to human error, and therefore are the best options to use in geolocating

Regional differences

Living in the UK? In England or Wales? Scotland? Northern Ireland? Thanks to devolution, each of these nations has its own statistical agency, and so maps comparing a variable across the whole of the UK are rare.

This can be an issue if you're writing for a British newspaper, wanting to show differences across the whole country. You may get shape files and data for English regions, and perhaps Scotland and Wales - but often Northern Ireland may be missing.

Feel free to email me if you can't get a shape file for Northern Ireland - I have one somewhere. You may have to add this as a different layer on your map, if you don't wish to merge the files yourself (this can have implications for the map's size).

Even if you manage to get the shape files, consistent data across the whole of the UK can be rare. Be wary of comparing regional differences from different spreadsheets. These datasets may not be comparable, and so it's always best to try and seek the data you want all from one source for reliability.

As Simon Rogers says in his book Facts are Sacred:

"It is often easier to get across statistics from European countries via Eurostat than it is to get figures for the whole of the UK at a local level. That is because Eurostat has a single operation to combine data from across the European Union into single accessible datasets by coordinating all the national statistics agencies.

"The UK, with increasingly disparate data sources, needs that now. And it's kind of what we expect from the Office for National Statistics. The title says it all."

A choropleth map of the UK without Northern Ireland is an all too common sight

Mobile responsiveness

CartoDB maps are easy to embed in frames on your website, but they can be tricky to view on mobile. A map can often take over the whole screen, making it hard to scroll past to get to the rest of your story.

The map itself can also show too much information, or have the wrong focus, which means it's actually just confusing for your mobile audience, instead of helping them understand your story.

Advice to combat this would include:
  • Check the dimensions that are described in your iframe code. If the height is too much, the map can be hard to scroll past on mobile.
  • Keep the map free of clutter, such as lots of shapes, lines or dots. On mobile, too many of these can make the map unusable. 
  • The same goes for CartoDB's optional add-ons, such as sharing options and a search map. Unless it's important for your story, cut it. 
  • If your information windows have lots of information, they can dominate the screen when the reader clicks on them. This can crowd out what is underneath the window, which may be important context, and can hinder the reader. Omit all but essential information, and reserve what else you wish to tell for elsewhere in your story. 
CartoDB has now teamed up with Nutiteq, described as "pioneers in native mobile mapping", which could see developments in how their maps are viewed and engaged with on our smaller handheld screens.

Other geocoding issues

Other visual journalists I have spoken to have mentioned that it can sometimes be tricky to geocode data in CartoDB. There are options available for this in the tool, but they can be difficult.

I would suggest you geocode your data before entering it into the CartoDB platform. Either line up your shape file and check its suitability with your data, or create longitude and latitude columns before uploading your dataset.

This geocoding resource is invaluable: you can input a list of addresses into it, and it will automatically return the longitude and latitude of each point for you. It will also present them on a map for you, so you can quickly check the geographical distribution (and have a quick, first check to make sure it's accurate).

When merging spreadsheets by a column in order to geolocate your data against a shape file, make sure that the area names match up exactly

The restrictions of a third-party tool

Using a tool you haven't created yourself has obvious restrictions, in that you haven't planned and developed it with your specific needs in mind. You won't be able to do everything you want with its browser tool.

There are ways to get around this however. There are several ways to improve your maps in the tool, such as using HTML to make the data inside the information windows flow better, as well as filter and SQL query options.

CartoDB is also much more than just a browser tool: it is available open source, so you can hone Carto for your own ends. There is also the CartoDB.js library, which has several uses, such as wrapping its APIs into complete visualisations.