Is evaluation overrated?

Policy wonks everywhere insist on hard, quantitative evaluation as an accountability device. The European Commission is spearheading the effort to drive the adoption of quantitative evaluation in traditionally “soft” areas, like social cohesion or social innovation. The message is quite simple: these are tough times for public budgets. You want something funded, you’d better make a strong case for it. It makes sense. How could it be wrong?

And yet, I wonder. Evaluation is theoretically rock-solid when it measures output in the same units as its input. The gold standard of that would be the famed Return on Investment (ROI): invest dollars. Reap dollars. Compute a ratio. Easy. When you invest dollars to reap, say, an increase in the heron population, or in the expected reduction in lung cancer incidence, things start to get blurred. And if you are comparing an increase in the heron population with an expected reduction in lung cancer incidence, they get really blurred.

I should know. I am a veteran of a similar battle.

In the 1980s, led by influential thinkers like the late David Pearce, Mrs. Thatcher’s environmental advisor, environmental economists tried to quantify the economic value of environmental goods. Their goal was to teach humanity to abandon the idea that the environment was there for free, and to start treating it as a scarce resource. This scene had its stronghold at University College London, where Pearce directed a research center and an M.Sc. program. I joined the latter in 1992. Our main tool was an augmentation of cost-benefit analysis, that old evaluation workhorse of the New Deal era. We had all kinds of clever hacks to translate environmental benefits into dollars or pounds: hedonic pricing, contingent valuation, travel cost analysis. Once something is measured in money, it can be compared against anything else. Hard-nosed, quantitative evaluation ensued. Or did it?

As we moved on from our London classrooms to practice, we found out things were not nearly that simple. First, we had a very big theoretical issue: we were trying to emulate markets in order to value environmental goods, because, according to standard economic theory, well-behaved markets assign to goods exactly those prices that maximize collective well-being. However, the mathematical conditions for that to hold are very peculiar, such that they are rarely, if ever, observed in real life. Joseph Stiglitz, one of my favorite economists, was awarded a Nobel prize for showing that, if you remove just one of those conditions (perfect and symmetric information), the properties of the model break down in a big way. But even if you were prepared to take a leap of faith in the underpinning theory, man, getting to those values was hard. Very. Data are usually not available and impossibly expensive to generate, so people resorted a lot to surveys (“contingent valuation”, as we called the technique – it sounds more scientific). Bad move: that just got us entangled in cognitive psychology paradoxes explored in detail by Daniel Kahneman and Amos Tversky, who showed conclusively that humans simply do not value as (theoretical) markets do – and earned another Nobel.

Then there was very unfortunate politics. Just about the only people prepared to generously fund environmental evaluation were the biggest, baddest polluters. A whole body of literature sprang up as a consequence of the infamous Exxon Valdez oil spill, as Exxon fought in court to avoid having to pay for damages to the Alaskan environment: we studied those papers in London. Their authors had the means to do a real evaluation exercise, but the people footing their bill had very strong preferences over its outcome. Not an easy situation.

We certainly succeeded in advancing the cause of evaluation as a requirement. Environmental impact assessment, used in America since the late 1960s, was made a requirement for many public projects in Europe by a 1985 directive. Money was spent. A lot of consultants took some random course and started offering environmental impact evaluation as a service. But as to bringing about objective, evidence-backed evaluation, I am not so sure. Even now, 25 years later, environmentalists and general contractors are fighting court battles, each wielding their own environmental impact assessment, or simply claiming that the other side has intentionally commissioned a partial EIA to rig the debate (this is happening around the planned high speed rail link from Turin to Lyon). That does not mean EIA is not being useful: it does mean, however, that it is not objective. The promise of “hard-nosed evidence” was delusional. I suspect this is fundamental, not only contingent: evaluation implies, you know, values. The ROI embeds a set of values, too: namely, it implies that all the information that matters is embedded in price signals, so if you are making money you must be advancing social well-being.

I am curious to try an alternative path to evaluation: the emergence of a community that participates in a project, volunteers time, offers gifts. For example: in the course of a project I manage at the Council of Europe, called Edgeryders, I created a short introductory video in English. A member of our community uploaded it to Universal Subtitles, transcribed the audio into English subtitles and created a first translation into Spanish. Two weeks later, the video had been translated into nine languages, just as a gift. That does not happen every day: it made our lonely bunch of Eurocrats very happy, and – alongside a veritable stream of Twitter kudos, engagement on our online platform and other community initiatives like the map of citizen engagement – we took it as a sign we were doing something right. That’s evaluation: a vote, expressed in man-hours, commitment, good thinking. Such an evaluation is not an add-on activity performed by an evaluator, but rather an emergent property of the project itself; as such, it is quite likely to be very fast, relatively cheap, and merciless in exposing failures to convince citizens of the value the project is bringing to the table.

Granted, online community projects like Edgeryders or Kublai lend themselves particularly well to being assessed this way – they contain thousands of hours of citizen-donated high quality human labor, a quite natural accounting unit for evaluation. But this criterion might be more generalizable than we think, or become so relatively soon. Recently a friend – the CEO of a software development company – astonished me with the following remark:

In the present day and age, half of a programmer’s work is nurturing a community on Github.

So it’s not just me: in more and more areas of human activity, complexity has become unmanageable unless you tackle it by collective “swarm” intelligence. In other words, more and more problems can – and maybe have to – be framed in terms of growing an online community around them. If that is true, that community can be used as a basis for evaluation activities. It should be a no-brainer: I have never met an ecologist or a social worker who thinks that assessing an environmental or social cohesion impact with ROI makes the slightest sense. If we can figure out a theoretically sound and practically feasible path to evaluation, we can and should get rid of ROI for nonprofit activities altogether. And good riddance.

The apprentice crowdsorcerer: learning to hatch online communities

I am working on the construction of a new online community, to be called Edgeryders. This is still a relatively new kind of activity, deploying knowledge that has not been fully codified yet. There is no instruction manual that, when adhered to, guarantees good results: some things work, but not every time; others work more or less every time, but we don’t know why.

This is not the first time I have done this, and I am discovering that, even in such a wonderfully complex and unpredictable field, one can learn from experience. A lot. Some Edgeryders features we imported from the Kublai experience, like logo crowdsourcing and recruiting staff from the fledgling community. Other design decisions are inspired by projects of people I admire, like Evoke or CriticalCity Upload; and many are inspired by mistakes, both my own and other people’s.

It is a strange experience, both exalting and humiliating. You are the crowdsorcerer, the expert, the person who can evoke order and meaning from the Great Net’s social magma. You try: you say your incantations, wave your magic wand and… something happens. Or not. Sometimes everything works just fine, and it’s hard to resist the temptation to claim credit for it; other times everything you do backfires or fizzles out, and you can’t figure out what you are doing wrong to save your life. Maybe there is no mistake – and no credit to claim when things go well. Social dynamics is not deterministic, and even our best efforts cannot guarantee good results in every case.

As far as I can see, the skill I am trying to develop – let’s call it crowdsorcery – requires:

  1. thinking in probability (with high variance) rather than deterministically. An effective action is not the one that is sure to recruit ten good-level contributors, but the one that reaches out to one thousand random strangers. Nine hundred will ignore you, ninety will contribute really lame stuff, nine will give you good-level contributions and one will have a stroke of genius that will turn the project on its head and influence the remaining ninety-nine (the nine hundred are probably a lost cause in every scenario). The trick is that no one, not even the genius him- or herself, knows in advance who that random genius is: you just need to move in that general direction, and hope he or she will find you.
  2. monitoring and reacting rather than planning and controlling (adaptive stance). It is cheaper and more effective: if a community displays a natural tropism, it makes more sense to encourage it and try to figure out how to use it for your purposes than to fight it. In the online world, monitoring is practically free (even “deep monitoring” à la Dragon Trainer), so don’t be stingy with web analytics.
  3. building a redundant theoretical arsenal instead of going pragmatic (“I do this because it works”). Theory asks interesting questions, and I find that trying to read your own work in the light of theory helps crowdsorcerers and -sorceresses build themselves better tools and encourages their awareness of what they do. I am thinking a lot along complexity-science lines and using a little run-of-the-mill network math. For now.
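Point 1 is, at bottom, a claim about probability: with a one-in-a-thousand chance of a stroke of genius, a handful of hand-picked contributors will almost never contain the genius, while a wide net very often will. A minimal simulation sketch (the 900/90/9/1 shares come from the text above; the function name and trial count are my own illustrative choices):

```python
import random

# Shares from the text: out of 1000 random strangers, 900 ignore you,
# 90 contribute lame stuff, 9 contribute good-level work,
# and 1 has a stroke of genius.
P_GENIUS = 1 / 1000

def outreach(n_strangers, trials=10_000, seed=42):
    """Estimate the probability that reaching n_strangers
    yields at least one stroke of genius."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if any(rng.random() < P_GENIUS for _ in range(n_strangers)):
            hits += 1
    return hits / trials

# Recruiting ten contributors almost never surfaces the genius;
# casting a wide net does so about two times out of three.
print(outreach(10))    # roughly 0.01, i.e. 1 - 0.999**10
print(outreach(1000))  # roughly 0.63, i.e. 1 - 0.999**1000
```

The closed-form check in the comments (the probability of at least one success in n independent draws is 1 − (1 − p)ⁿ) is why "move in that general direction" beats targeted recruiting here: the payoff grows with reach, not with selection.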

These general principles translate into design choices. I have decided to devote a series of posts to the choices my team and I are making in the building of Edgeryders. You can find them here (for now, only the first one is online). If you find errors or have suggestions, we are listening.

Three futures for Kublai

Kublai Camp 2011 is happening today; it is the third of its kind and the first one that I can’t take part in. My friend Tito Bianchi at the Ministry of Economic Development asked me to make a short video telling the people convened how I envision Kublai’s future. I am happy to oblige: in the video above (in Italian) I outline three scenarios, two of which I would approve of and one I would not. They are:

  1. shutdown at the end of the next cycle, moving on. We have gained a lot of useful knowledge we can deploy elsewhere, and that was the whole point of the exercise.
  2. devolution of the project to its community, maintaining its public mission. This would be an extraordinary outcome: a public policy so appreciated that its beneficiaries step in to do the heavy lifting themselves. But it is a tricky one to pull off, and at this point in time I deem it unlikely to happen for reasons I explain in the video.
  3. entrenchment and drift of Kublai into a kind of business planning online help desk, feeding into the plethora of contests for startups, creative projects etcetera. I think this outcome would be tired and – in the context of Italy’s constitutional architecture – not suited to a central government agency. I think it should be avoided.

I am curious to see what happens. More info on Kublai here.