Do you need a Personal Charity Manager?

‘Charity’ – by flickr user Howard Lake under CC-BY-SA 2.0 license

As an offshoot of some recent work, I’ve been thinking a lot about intermediaries and user agents, which act on behalf of individuals to help them achieve their goals. Whether they are web browsers and related plugins that remember things for you or help you stay focused, or energy-switching platforms like Cheap Energy Club that help you get the best deal on energy, these intermediaries provide value by helping you follow through on your best intentions. I don’t trust myself to keep on top of the best mobile phone tariff for me, so I delegate that to a third party. I know that when I’m tired or bored, I’ll get distracted by YouTube, so I use a browser plugin to remove that option when I’m supposed to be working.

Intermediaries, user agents, personal information managers, impartial advisers – however you refer to them, they help us by overcoming our in-built tendencies to forget, to make bad choices in the heat of the moment, or to disregard important information. Behavioural economics has revealed us to be fundamentally less rational in our everyday behaviour than we think. Research into the very real concept of willpower shows that all the little everyday decisions we have to take exact a toll on our mental energy, meaning that even with the best intentions, it’s very unlikely that we consistently make the best choices day-to-day. The modern world is incredibly complex, so anything that helps us make more informed decisions, and actually act consistently in line with those decisions on a daily basis, has got to be a good thing.

Most of these intermediary systems operate on our interactions with the market and public services, but few look at our interactions with ‘third sector’ organisations. This is an enormous opportunity. Nowhere else is the gap between good intentions and actual behaviour more apparent than in the area of charitable giving. If asked in a reflective state of mind, most people would agree that they could and should do more to make the world a better place. Most people would agree that expensive cups of coffee, new clothes, or a holiday are not as important as alleviating world hunger or curing malaria. Even if home comforts are deserved, we would probably like to cut down on them just a little bit, if doing so would significantly help the needy (ethicist Peter Singer suggests donating just 10% of your income to an effective charity).

But on a day-to-day basis, this perspective fades into the background. I want a coffee, I can easily afford to buy one, so why not? And anyway, how do you know the money you donate to charity is actually going to do anything? International aid is horribly complex, so how can an ordinary person with a busy life possibly work out what’s effective? High net worth individuals employ full time philanthropy consultants to do that for them. So even if we recognise on an abstract, rational level that we ought to do something, the burden of working out what to do, the hassle of remembering to do it, and the mental effort of resisting more immediate conflicting urges, are ultimately overwhelming. The result is inertia – doing nothing at all.

Many charities attempt to bypass this by catching our attention with adverts which tug at the heartstrings and present eye-catching statistics. As a result, until recently I went about giving to charity in a completely haphazard way – one-off donations to whoever managed to grab my attention at the right moment. But wouldn’t it be better if we could take our rational, considered ethical commitments and find ways to embed them in our lives, to make them easy to adhere to, reducing the mental and administrative burden? I’ve found several organisations that can help you work out how to give more effectively and stay committed to giving (see Giving What We Can). But there is even more scope for intermediaries to provide holistic systems to help you develop and achieve your ethical goals.

Precisely what form they take (browser plugins, online services, or real, human support?), and what we call them (Personal Charity Managers, Ethical Assistants, Philanthropic Nudges, Moral Software Agents), I won’t attempt to predict. They wouldn’t be a panacea; ethical intermediaries will never replace careful, considered moral deliberation, rigorous debate about right and wrong, and practising virtue in daily life. But as services that practically help us follow through on our carefully considered moral beliefs, and manage our charitable giving, they could be revolutionary.

Nudge Yourself

It’s just over five years since the publication of Nudge, the seminal pop behavioural economics book by Richard Thaler and Cass Sunstein. Drawing on research in psychology and behavioural economics, it revealed the many common cognitive biases, fallacies, and heuristics we all suffer from. We often fail to act in our own self-interest because our everyday decisions are affected by ‘choice architectures’: the particular way a set of options is presented. ‘Choice architects’ (as the authors call them) cannot help but influence the decisions people make.

Thaler and Sunstein encourage policy-makers to adopt a ‘libertarian paternalist’ approach: acknowledge that the systems they design and regulate inevitably affect people’s decisions, and design them so as to induce people to make decisions which are good for them. Their recommendations were enthusiastically picked up by governments (in the UK, the Cabinet Office even set up a dedicated Behavioural Insights Team). The dust has now settled on the debate, and the approach has been explored in a variety of settings, from pension plans to hygiene in public toilets.

But libertarian paternalism has been criticised as an oxymoron: how is interference with an individual’s decisions, even when in their genuine best interests, compatible with respecting their autonomy? The authors responded that non-interference was not an option. In many cases, there is no neutral choice architecture. A list of pension plans must be presented in some order, and if you know that people tend to pick the first one regardless of its features, you ought to make it the one that seems best for them.

Whilst I’m sympathetic to Thaler and Sunstein’s response to the oxymoron charge, the ethical debate shouldn’t end there. Perhaps the question of autonomy and paternalism can be tackled head-on by asking how individuals might design their own choice architectures. If I know that I am liable to make poor decisions in certain contexts, I want to be able to nudge myself to correct that. I don’t want to rely solely on a benevolent system designer or policy-maker to do it for me. I want systems to ensure that my everyday, unconsidered behaviours, made in the heat of the moment, are consistent with my life goals, which I define in more carefully considered, reflective states of mind.

In our digital lives, choice architectures are everywhere, highly optimised and A/B tested, designed to make you click exactly the way the platform wants you to. But there is also the possibility that they can be reconfigured by the individual to suit their will. An individual can tailor their web experience by configuring their browser to exclude unwanted aspects and superimpose additional functions onto the sites they visit.

This general capacity – for content, functionality and presentation to be altered by the individual – is a pre-requisite for refashioning choice architectures in our own favour. Services like RescueTime, which blocks certain websites for certain periods, represent a very basic kind of user-defined choice architecture which simply removes certain choices altogether. But more sophisticated systems would take an individual’s own carefully considered life goals – say, to eat healthily, be prudent, or get a broader perspective on the world – and construct their digital experiences to nudge behaviour which furthers those goals.
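To make the idea concrete, here is a minimal sketch of what a user-defined choice architecture might look like in code. Everything here is a hypothetical illustration of my own invention – the `Goal` structure, the `decide` function, and the example sites are assumptions, not any real product’s API:

```python
# Hypothetical sketch: a user-defined choice architecture expressed as
# goal-driven rules, one small step beyond blanket site blocking.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str                                   # the user's own life goal
    blocked_sites: set = field(default_factory=set)
    active_hours: range = range(0, 24)          # hours when the goal applies

def decide(goals, site, hour):
    """Return 'block' if any currently active goal excludes the site."""
    for goal in goals:
        if hour in goal.active_hours and site in goal.blocked_sites:
            return "block"
    return "allow"

# The user defines this rule in a reflective state of mind...
focus = Goal("stay focused", {"youtube.com"}, range(9, 17))

# ...and the system enforces it in the heat of the moment.
decide([focus], "youtube.com", 10)   # during work hours: blocked
decide([focus], "youtube.com", 20)   # in the evening: allowed
```

The point of the sketch is that the rules belong to the individual, not the platform: the same enforcement machinery could serve any goal the user cares to encode.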

Take, for instance, online privacy. Research by behavioural economist Alessandro Acquisti and colleagues at CMU has shown how effective privacy nudges can be. The potential for user-defined privacy nudges is strong. In a reflective, rational state, I may set myself a goal to keep my personal life private from my professional life. An intelligent privacy management system could take that goal and insert nudges into the choice architectures which might otherwise induce me to mess up. For instance, by alerting me when I’m about to accept a work colleague as a friend on a personal social network.
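That friend-request nudge could be as simple as a single rule. The sketch below is purely illustrative – the contexts, function name, and warning text are assumptions, and no real social network exposes an interface like this:

```python
# Hypothetical sketch of the privacy nudge described above: warn when a
# friend request would bridge contexts the user chose to keep separate.

def nudge_on_friend_request(requester_context, network_context):
    """Return a warning message if contexts conflict, else None."""
    if requester_context == "work" and network_context == "personal":
        return ("You set a goal to keep your work and personal lives "
                "separate. Accept this request anyway?")
    return None

# A colleague sends a request on a personal network: the nudge fires.
warning = nudge_on_friend_request("work", "personal")

# A friend on the same personal network: no interference at all.
no_warning = nudge_on_friend_request("personal", "personal")
```

Crucially, the nudge only interrupts when my own stated goal is at stake; in every other case the existing choice architecture is left alone.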

Next generation nudge systems should enable a user-defined choice architecture layer, which can be superimposed over the existing choice architectures. This would allow individuals to A/B test their decision-making and habits, and optimise them for their own ends. Ignoring the power of nudges is no longer a realistic or desirable option. We need intentionally designed choice architectures to help us navigate the complex world we live in. But the aims embedded in these architectures need to be driven by our own values, priorities and life goals.