A couple of things in my stream this morning about using newly liberated data:
Hadley pointing to Jo Ivens’s piece “Open data: we can’t just rely on developers” for demsoc.org
“I am a policy type who cares about social justice and believes that the voluntary and social enterprise sectors can have a central role in making communities better. Open data is a potentially really important part of this, but not if the only way it can be used is by filtering it through a set of technical experts or developers – that’s too much like what we have today.”
A big hurrah for a “policy type” jumping in and getting involved. More of these, please!
Charlie pointing to Simon Rogers on the Guardian DataBlog about the Data Journalism Workflow
“Before a dataset results in a data journalism story, there’s a whole process of sifting and finessing and generally sorting the data out. The split is roughly 70% tidying up the data, 30% doing the fun stuff of visualising and presenting it. So, how do we get through that 70%?”
The rest of the piece and the flowchart show that it’s not quite this simple… there’s also the tricky matter of finding the story…
And this is where these things come together in my mind. My experience in this field is ancient and comes from a time when things were all much simpler, my child. But I think there’s stuff to learn.
A formative experience for me was sitting down as a team towards the end of the first year of Joint Reviews to work out what to say in an annual report. We’d got a shedload of evidence, we’d made some sense of statistical returns, and we’d collected quantitative and qualitative data as we went – although, since we were learning as we went, not always consistently. It was a mess, but one with some level of order, and a couple of us had enough of an overview to be able to guide others through it.
I prepared for the meeting in the way that “developers” are described in Jo’s piece. I looked for patterns, any patterns, interesting correlations or just consistent messages from the majority of places we’d studied.
Andrew Webster came at it from the other end and changed my way of thinking. He posited hypotheses based on “what we’d expect to be happening if things were well run” and then we tried to see what the data told us about those things – it made for some more uncomfortable findings, but much more interesting, engaging stories and, I expect, did much more to shift actual practice than anything we could have found looking from my direction.
It’s that kind of thinking that I don’t immediately see in hackdays or govcamp sessions: something that starts with policy or “what we want to happen”, with the stories we want to be able to tell, and then calls on supporting evidence, rather than the other way round. I hope that I’m wrong – that I’m just not paying close enough attention, or that it’s coming soon. I think we need more of it.