Good evening all. As a young professional, I suppose you have to be open to the fact that your ideas and understanding of the world might shift. And especially in my field: there is a lot of innate optimism about the transformative capabilities of technology that veils our judgement. Technology's purpose is improvement, right? That is kind of the premise of it, and certainly the approach to urban informatics that we have adopted in our work so far.
Beyond the usual debate of whether the flood of personalised data that is collectable these days has any privacy implications we should be worried about (I reckon it does, but that is probably the subject of another blogpost!), there are some very serious questions we need to contend with in this age of real time information. What is the impact of instant feedback loops? Are these always good?
This brings me back to a great report that Dan Hill authored, I think before I officially joined Arup, and that we subsequently tried to generalise for a broader range of readers, beyond the City of Melbourne, for whom (in the context of the C40) the report was originally written. You can find a blogpost on Dan's blog that talks about the original report, and a link here to the updated, generalised version for reference. The differences are really minimal, and if I am honest, I think Dan nailed the graphic design that was later 'undone'. Oh well! I guess corporate branding gets the best of all of us!
Somewhere in this report, the fundamental question is asked: "if bad decisions are made due to poor information, does better information enable us to make better decisions?" Almost instantly, you want to say 'yes'. And probably, on the whole, that holds true. The major caveat for me rests in a few key considerations about data: all information is not equal under the sun. Real time data can be of varying quality, collected for many different purposes and the size and reach of the feedback loop can encourage certain behavioural outcomes. The perverse side of real time data is what I want to explore with you tonight.
The idea for this blogpost came to me once again whilst listening to Ira Glass on This American Life. In 17 years of broadcasting, This American Life has perfected the art of investigative journalism. And last week's episode, titled 'The Right to Remain Silent', is no exception. One of the two stories that Ira covers is about Compstat, a breakthrough in the reporting of crime statistics in the US. Compstat started in NYC during the Giuliani years of 'Zero Tolerance', which marked a real turning point in the city's crime rates. It is lauded internationally as an example of how better data has helped target crime better and deliver safer neighbourhoods. And for the most part, it must be true. New York City is nowhere near as dangerous as it was when my parents lived on the Upper West Side in 1968 and did not venture out of their little pocket of relative safety. Neither of my parents could imagine traipsing around Harlem to take photographs like I did when I was taking my architectural photography class at Columbia.
So it's undeniable: the city has changed, and it has become more attractive to knowledge workers, families, hipsters and Midwesterners as a result. And on the whole, that is a pretty positive achievement. Having said that, the story of Adrian Schoolcraft, covered by Ira and his team, starts to cast some serious doubts on the effectiveness of Compstat as an almost instant feedback mechanism for reporting crime.
It appears that in the early days, Compstat was delivering miracles. Police precincts were seeing week-on-week improvements in the number of arrests, citations, summonses, you name it. There was also a sharp decline in the number of serious offences. NYPD had been riding this wave of optimism and newfound confidence in the police's ability to deliver real societal shift, it would appear. What Adrian Schoolcraft's story reveals is that NYPD, especially after a couple of years of uninterrupted improvement in the figures, kind of got hooked on the idea of improving figures and wanted to deliver the expected arrests and so on, at any cost. Literally.
Adrian was a police officer himself and spent his last couple of years on the force recording team meetings where hard KPIs were pushed onto the officers: you must deliver this many citations, this many of this type of crime, and that many summonses. You must also help us collect the data that will prove that there is no serious crime here, so that our curve can look as God intended it... This is the kind of evidence Schoolcraft has on record and has been able to provide subsequently through the judicial system.
I won't spoil the rest of the story for you, but what appears clear in this example is that humans are very good at gaming the game. Once the rules are established and the behaviours rewarded, all that matters is the target, not what the target represents.
This is, I am sure, something we all see within the organisations we operate in, especially consultancies, where sales and billability are simple and overused measures of success. Take billability: as a metric to measure the utilisation of a staff member, it incentivises staff to book time to projects, probably inflating the time spent on those projects and ultimately hurting their profitability. Measuring billability AND reporting on it in a short cycle also breeds a short-term view of utilisation, not a long-term, strategic view of the work pipeline that builds through client relationships, business development and strategic marketing activities. So once again, looking through the prism of billability, a team can look great: busy, utilised and billing the client. But what happens when those quarterly results are issued and all projects are being run at a loss?
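To make the point concrete, here is a toy sketch of the divergence between the two measures. All of the figures (hourly cost, fees, hours) are invented for illustration, not taken from any real consultancy:

```python
# Hypothetical figures: a team that looks great on billability
# can still run every project at a loss once fixed fees are counted.

HOURLY_COST = 100  # assumed cost to the firm of one staff hour

projects = [
    # hours booked to the job vs the fixed fee agreed with the client
    {"hours_booked": 400, "fixed_fee": 35_000},
    {"hours_booked": 250, "fixed_fee": 22_000},
]

available_hours = 700  # total hours the team could have worked

booked = sum(p["hours_booked"] for p in projects)
billability = booked / available_hours  # the metric being reported weekly

profit = sum(p["fixed_fee"] - p["hours_booked"] * HOURLY_COST
             for p in projects)         # what actually matters quarterly

print(f"billability: {billability:.0%}")  # 93% -- the dashboard looks great
print(f"profit:      ${profit:,}")        # -$8,000 -- both projects at a loss
```

The short feedback loop reports the first number every week; the second only surfaces at quarter's end, which is exactly the mismatch described above.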
I know this sounds like a rant, but the consulting business model is built on this premise. And usually, billability is a pretty good proxy for utilisation and profitability, until, in a market like today's Australian property and development market, margins are tight, projects are fewer and competition is stiff. That's when your tight feedback loop is not really serving the purpose you intended it for...
This behavioural outcome has been observed in the field of behaviour design, and the work of Martin Tomitsch and Elmar Trefz at Sydney Uni on the Neighbourhood Scoreboards is a great example of this. The research project used a cluster of terrace houses and their inhabitants to pilot the public display of energy consumption data as a means to reduce consumption. The idea is that if the results are public, you are more likely to want to perform well and be able to gloat to your neighbours. It's fair to say that the research is well designed, and there are a whole bunch of lessons learnt from this project that Martin has generously shared in his publications.
A key insight, though, is the fact that people don't react that well to absolute numbers. Knowing that I am consuming X kWh today means nothing to me if I have nothing to compare it to. People like competing against themselves, and therefore it is comparative data (energy consumption today vs. this time last week, last month, this day last year, etc.) that really gets people interested in participating in the 'game'. So far, so good.
However, because it was the day-on-day decrease in energy consumption that got households ahead in this competition, the more you could cut your consumption day on day, the higher you scored. And this bred a whole lot of strange domestic behaviours, such as not washing dishes and clothes for a week and then having a 'bad', i.e. energy intensive, day where all washing activities take place. I bet the cockroaches loved it.
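You can see why this works with a back-of-the-envelope sketch. Assume (hypothetically; this is my simplification, not the actual scoreboard algorithm) that points are awarded in proportion to each day-on-day cut in consumption, and that increases score nothing. The kWh figures are invented:

```python
# Hypothetical scoring rule: the bigger your day-on-day cut in
# consumption, the more points you earn. Increases score nothing.

def score(daily_kwh):
    """Sum of all day-on-day reductions in kWh across the week."""
    return sum(max(prev - cur, 0)
               for prev, cur in zip(daily_kwh, daily_kwh[1:]))

steady  = [10, 10, 10, 10, 10, 10, 10]  # washing spread evenly: 70 kWh total
batched = [6, 6, 6, 34, 6, 6, 6]        # hoard the washing for one 'bad' day
genuine = [10, 10, 9, 9, 9, 9, 9]       # a real, modest saving: 65 kWh total

print(score(steady), sum(steady))    # 0 points, 70 kWh
print(score(batched), sum(batched))  # 28 points, 70 kWh -- same energy, big score
print(score(genuine), sum(genuine))  # 1 point, 65 kWh -- less energy, tiny score
```

The batching household uses exactly as much energy as the steady one, and more than the genuinely frugal one, yet tops the scoreboard, which is the gamed behaviour in a nutshell.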
I suppose these are three examples that show really clearly that real time data does incentivise behaviour, and that we can be willing participants in gaming the game to get better numerical outcomes. What real time data doesn't do, though, is apply a moral compass to the behaviours; it allows the focus to be on the behaviour for the sake of the numbers, not for the sake of the outcome. Getting the balance right between directly measuring impact and keeping the overall objective of the behaviour in check is the real challenge of behaviour design. I don't have all the answers, but I look forward to looking for them through my projects in the coming years.