Tuesday, May 31, 2016

Interview with Futurezone about Technological Unemployment and the Meaning of Life




I recently had the privilege of being interviewed for the Austrian website Futurezone about the future of work and the meaning of life. The interview was conducted via email in English and then translated into German. The official published version is available here; I have reproduced the English email responses below, so there are some discrepancies between the two versions. I don't really speak German (beyond a minimal, direction-asking competence), so if anyone does and can produce a better re-translation, I'd be interested in seeing it.

Here it is...

1. Do you think that we are headed towards a world of technological unemployment? If so, by when might it reach a critical point?

I'm wary of making predictions about this, particularly since similar historical concerns about technological unemployment have always proved to be unfounded. There are some reasons to think that 'this time it's different'. If we create AI with general and not simply narrow intelligence then I think we are heading for a radically different world. But I'm not sure when this will happen.

I do, however, think that irrespective of achieving general AI, technology is dramatically altering the types of jobs that are done by humans (e.g. Autor's polarisation effect). This is happening already and is having a noticeable impact on job stability and income.

2. What would it mean for your argument if only part of the world would reach a point where machines take over most of the work?

My argument is about the impact on meaning and human flourishing. If only part of the world reaches a point where machines take over most work, then presumably the same issues would arise in that part of the world. The only thing that would disrupt the analysis is what happens in the rest of the world and how people there react. Will they be jealous of those who have achieved technological unemployment or thankful that they still have jobs? Will people try to migrate away from or towards those regions? I can see arguments for both sides, and I try to outline them in the paper (i.e. reasons why people would want to have jobs and reasons why they would want to avoid them).

I should also add that the international picture would depend very much on how the income distribution problem is resolved.

3. What would happen to the quest for social status in a world where differences in income and class would presumably be leveled?

People will fight for status in other ways, I suspect. One thing I didn't discuss in the paper in much detail, but which I have discussed elsewhere (see here and here) is the suggestion that games and other leisure activities will become a major outlet for the unemployed. I imagine that success and proficiency in these games could be a significant source of status.

One minor point: I'm not sure that 'class' would be leveled in a world of technological unemployment with income redistribution. It depends on what you mean by 'class', but my interpretation of the term doesn't rest everything on levels of income.

4. Is the concept of “meaning in life” even valid in a world where machines optimally manage societies?

As long as there are humans around to ask questions about what it is all for, then I think the concept has validity. Unless you mean something by 'validity' that I don't quite grasp. To me, it just means whether or not it is sensible to ask the question and look for an answer. I think meaning will always be important to humans because it has been since the birth of civilisation.

5. Do you think these intelligent machines might have their own agenda?

'Agenda' might be the wrong way of putting it. Machines could have goals that are antithetical or inconsistent with our own, and if they have great power and adaptable intelligence this could lead to problems. This is something that is widely discussed in the literature about AI risk and motivates the doomsaying pronouncements of people like Elon Musk and Stephen Hawking.

6. If people can no longer contribute to objectively “good” developments and their activities become restricted to the domains of arts and culture, what happens to those that don’t have talent?

I could probably write many thousands of words about this. It depends on a couple of things. Is talent innate and fixed? If not, then people could use their free time to learn and master new skills in the worlds of art and culture (or games, as I mentioned above). If it is fixed, how exactly is it fixed? Is it genetic or biological in some way? If so, then developments in human enhancement technology might enable people to overcome those limitations. But then we get into questions about whether enhanced artistic or cultural creation is as valuable as unenhanced equivalents. Maybe it is less authentic and meaningful. Contrast the achievements of an athlete who wins based on innate talent with an athlete who uses performance enhancing drugs. Do we value the former more than the latter? My sense is that we do, but I'm not sure that it is a defensible distinction.

7. To what extent could Eastern philosophical approaches help people find meaning without the need to actively contribute?

I think there is great wisdom in some of the Eastern approaches to enlightenment and self-transcendence, particularly in the Buddhist tradition. This could be a major source of inspiration and meaning for people. My friend and colleague James Hughes (from the Institute for Ethics and Emerging Technologies) explores the intersection between Buddhist thought and transhumanism in his work.

8. Is the ceding of control of our society to computers a bigger limitation to our freedom than compulsory work?

Potentially. I wrote another paper about this that you might be interested in. It's called 'The Threat of Algocracy' and discusses ways in which the takeover of public decision-making by machines could threaten core values in liberal democratic society, specifically values associated with participation in, and comprehension of, the decision-making processes that interfere with our freedom.

9. Could technological unemployment pose a threat to the future of mankind, by breaking our “spirit”?

This is an interesting topic. If we feel completely inferior to machines and are unable to find another source of meaning in life, then it is possible that we would end up in a state of listless, frustrated boredom. I think that would amount to breaking our spirit.

10. Wouldn’t VR as an escapist solution invalidate the need for technological progress to reach a point where machines do the work?

I'm not sure I understand this question. First, I'm not convinced that VR is an escapist solution. This is something I discuss briefly in the paper (the 'primacy of the real' objection) and in more detail in this podcast. In other words, I'm not convinced that the kinds of experience and meaning found in a VR world are necessarily less meaningful than those in the real world. Second, as long as there is a demand or necessity for people to work in order to secure basic needs, VR couldn't function as an escapist solution.

11. In your paper you assume that we will find a way to ensure egalitarian distribution of the gains from automation. How confident are you in that outcome, especially in light of political willingness, economic motivations, developmental differences and the elites’ willingness to give up their relative advantage?

I have some optimism: several countries in Europe are either experimenting with or seriously considering something like a basic income guarantee, and the cultural conversation about the idea is really taking off. But I doubt that we would ever achieve an egalitarian distribution (if by that you mean parity of income). I think there will always be some relative advantage. In the paper I don't assume an egalitarian distribution of the gains; I only assume that people will have access to everything they need, i.e. that they live in a world of relative abundance.

Monday, May 30, 2016

Is Effective Altruism actually Effective?


Effective Altruism Logo


(Part one; part two; part three)

This is going to be my final post on the topic of effective altruism (for the time being anyway). I’m working my way through the arguments in Iason Gabriel’s article ‘Effective Altruism and its Critics’. Once I finish, Iason has kindly agreed to post a follow-up piece which develops some of his views.

As noted in previous entries, effective altruism (EA) is understood for the purposes of this series as the belief that one should do the most good one can do through charitable donations. EA comes in ‘thick’ and ‘thin’ varieties. The thick version, which is what this series focuses on, has three key commitments: (i) welfarism, i.e. it believes that you should aim to improve individual well-being; (ii) consequentialism, i.e. it adopts a consequentialist approach to ethical decision-making; and (iii) evidentialism, i.e. it insists upon favouring interventions with robust evidential bases.

Criticisms of this ‘thick’ version of EA can be organised into three main branches. The first argues that EA ignores important justice-related considerations. The second argues that EA is methodologically biased. And the third argues that EA is not that effective. I’ve looked at the first and second branches in previous entries. Today, I take on the third.

In many ways, this might be the most interesting — and for proponents of EA the most painful — criticism. After all, the chief virtue of EA is that it is supposed to be effective: it allows its adherents to claim that they are genuinely doing the most good they could do through their actions. But what if this is false? What if following the EA philosophy turns out to be relatively ineffective? There are three ways to flesh out this concern. I’ll go through them in turn.


1. Does EA neglect important counterfactuals?
Counterfactual analysis is at the heart of the EA worldview. Suppose you meet a homeless person on the street and they are begging for money. You have £1 in your pocket. You could give it to them, but the EA proponent will urge you to think it through. What could you have done with that £1 if you donated it elsewhere? What was the opportunity cost of giving it to the homeless person?

When you engage in this kind of counterfactual analysis, the EA proponent is confident that you’ll see that there were better ways to allocate that money. In particular, the EA proponent will suggest that there are charities in the developing world where the same £1 could have done orders of magnitude more good than it could do for that homeless person in your home country (assuming you live in a developed nation). You shouldn’t succumb to the proximate emotional appeal of your interaction with that person; you should be more rational in your decision-making. That’s the way you will do the most good. This is illustrated in the diagram below. It shows how you could give your money to the homeless person or to GiveWell’s top-ranked charity. When you think of it in these terms, the rational (and most morally effective) choice becomes obvious.
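To make the opportunity-cost comparison concrete, here is a minimal sketch in Python. This is my own illustration, not anything from the EA literature; the 'units of good per £' figures are invented:

    # A toy version of the EA counterfactual comparison.
    # The effectiveness figures are invented purely for illustration.
    options = {
        "homeless person on the street": 1.0,   # hypothetical units of good per £1
        "GiveWell top-ranked charity": 100.0,   # 'orders of magnitude' more effective
    }

    best = max(options, key=options.get)
    for option, good in options.items():
        # The opportunity cost of an option is the good forgone elsewhere.
        forgone = options[best] - good
        print(f"{option}: {good:.0f} units of good (forgone: {forgone:.0f})")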



This kind of counterfactual analysis doesn’t just apply to decisions about how to allocate your spare cash. It also applies to decisions about how to spend your time. Many altruistically-inclined people might like to spend their time and energy working for a worthy charitable cause. But the EA proponent will argue that this isn’t necessarily the best use of their talents. Again, you should ask: what will happen if I don’t work for that organisation? Will someone else fill in for me? Are there better things I could be doing with my time and energy? One of the leading EA organisations is 80,000 Hours, a charity that helps young people make career decisions. One of their more well-known bits of advice (which, to be fair, can be overemphasised by critics) is that many people should not work directly for charitable organisations. Instead, they should earn to give, i.e. take up a lucrative gig with an investment bank (or somesuch) and donate the cash they earn to worthy causes.



There is a seductive appeal to this kind of counterfactual analysis, particularly for those who like to be thoughtful and rational in their decision-making. But it cuts both ways. It can be used to call into question the advice that EAs give. After all, there are many possible counterfactual worlds to consider when making a decision. EAs emphasise a couple; they don’t emphasise all. In particular, they don’t often pause to ask: what would happen if I didn’t donate to GiveWell’s top-ranked charity? What would that counterfactual world look like?

This is where Gabriel makes an interesting argument. He doesn’t explain it in these terms but it’s like a charitable variant of the efficient market hypothesis. In very crude terms, the efficient market hypothesis tells us that we can’t beat the market when making investment decisions. The market factors all relevant information into market prices already; we can’t assume that we have some informational advantage. This idea is often illustrated with the story of a man who sees a five-dollar note on the street. He’s tempted to pick it up, but his friend is an economist and a fan of the efficient market hypothesis. The friend tells him that the fiver must be illusory: if it really existed, it would already have been picked up.

When you think about it, the market for charitable donations could be susceptible to a similar phenomenon. If you see some apparently worthy charitable cause — such as GiveWell’s top-ranked charity — going under-funded, you should ask yourself: why isn’t it being funded already? After all, there are a number of very wealthy and well-staffed philanthropic organisations out there. If they are rational, they should be donating all their money to GiveWell’s top-ranked organisations. Your donations shouldn’t really be able to make any significant difference:

The Efficient Charities Problem: If large philanthropic organisations are trying to be effective, and if they accept that evaluators like GiveWell are correct in the advice they give, then they should be giving all their money to the top-ranked charities. This would mean that your individual decision to donate to those charities won’t make any (counterfactual) difference.


How seriously should we take this problem? As you can see, there are a number of conditions built into its formulation. Each one of those conditions could be false. It could be that large philanthropic organisations are not trying to be effective (or are so irrational as to be incapable of appreciating effectiveness). This is not implausible. The efficient market hypothesis is also clearly vulnerable to this problem: people are not rational utility maximisers; investors constantly believe that they can beat the market. The same could be true for philanthropic organisations. That said, and as Gabriel points out, it is unlikely to be true across the board. There are at least some well-known philanthropic organisations (the Gates Foundation springs to mind) that do try to be rational and effective in their decision-making.

This leads us to the second possibility: they do not accept the evaluations given by organisations such as GiveWell. This is also not implausible. As noted in previous entries, there are reasons to question the value assumptions and methodologies used by such organisations (though GiveWell is responsive to criticisms). But this should not be reassuring to proponents of EA. The Gates Foundation is well-staffed and has a huge research budget. If they aren’t reaching the same conclusions as GiveWell and other EA organisations, then the counterfactual analysis of how to do the most good may not yield the clear and simple answers that EAs seem to promote.

There is, however, a third possibility. It could be that large philanthropic organisations do not donate to the top-ranked charities because they want to assist the EA movement. In other words, they want to grow EA as a social movement and they know that if they donate all their resources to those charities they would risk crowding out the people who want to do good and make it more difficult for them to identify other effective interventions.

Interestingly, Gabriel argues that there is some evidence for this third possibility. GiveWell has received considerable funding recently — enough to allow it to fully fund its top charities for a couple of years. But it has chosen not to do so, thereby retaining the need for ordinary and enthusiastic EAs to donate to those organisations. This may be a clever long-term strategy. It may allow the EA movement to grow to a scale where it can do even more good in the future. But if this is what is happening, it has two slightly disturbing features: (i) it undermines the EA claim that individual decisions really can make a big difference; and (ii) it is resting hope on a speculative future in which EA achieves mass appeal, not on the goodness of individual charitable donations.


2. Does EA neglect motivation and overemphasise rationality?
This criticism was cut out of the final version of Gabriel’s paper (in response to reviewer comments) but he thinks (personal correspondence) that it is still worthy of consideration. The gist of it is that the EA movement underplays the psychological barriers to achieving the kind of social change they want to achieve.

As we just saw, it’s possible (maybe even probable) that the current goal of the EA movement is to change the societal approach to charitable giving. This wide-scale change will require a lot of people to change the way they think and act. In encouraging this change, the EA movement prioritises rational, evidential, counterfactual analysis. It highlights neglected causes, and urges its followers to do things that may seem initially counter-intuitive (e.g. earning to give). How successful is it likely to be in achieving this wide-scale social change?

Two problems arise. First, the EA movement may be underestimating the difficulty of sustaining a counter-intuitive lifestyle choice like earning to give. Some people may not be able to stay in a highly lucrative career, and others may find their attitudes altered by working in a ruthless and highly competitive industry like investment banking. They may consequently lose the desire to earn to give. Gabriel notes that EA proponents respond to this criticism by highlighting apparently successful individual cases. But he urges proponents to be cautious in using these examples. The members of the EA movement at present are largely self-selecting. They tend to be wealthy, largely male, highly educated, non-religious and so forth. If EA is to succeed it will have to attract more adherents, and these are less likely to be drawn from this demographic. It may be more difficult for such people to sustain the compartmentalisation inherent in the earning-to-give mentality.

To be fair to EA proponents, I think the importance of the earning to give approach tends to be over-emphasised by their critics. The 80,000 hours site does try to give career advice that is based on the individual’s characteristics. The earning to give approach is only really recommended for those with the right mix of attributes. Still, it has to be said that there is something in the logic of the EA position which seems to favour the suppression of one’s personal desires for the greater good. But this is a general feature of utilitarian ethical theories.

The other problem is that by emphasising cold rational analysis, EAs may be underplaying the importance of moral sentiment, particularly when it comes to creating a movement with mass social appeal. Again, the relatively self-selecting group that currently embraces EA may love to engage in detached, evidential assessment of charitable causes, and may love to second-guess their emotional reactions to charitable nudging; but others may require some emotional seduction. That is something EAs may need to ramp up if they want to achieve the kind of social change they desire.


3. Do EAs neglect the importance of systemic change?
We come, at last, to the most popular critique of the EA movement. This one has featured in virtually every critical piece I have read on the topic. Others have assessed it at length. I will only provide a rough overview here.

The gist of the objection is that EAs are too conservative and individualistic in terms of the interventions they promote. They focus on how individuals can make a difference through the careful analysis of their charitable decision-making. But in doing this they take as a given the systems within which these individuals operate. They consequently neglect the importance of systemic change in achieving truly significant improvements in the well-being of the global citizenry. In its most common form, this criticism laments the fact that it is the institutions of global capitalism that are responsible for the welfare inequalities that EAs are trying to mitigate through their actions. If we really want to improve things we must alter those institutions themselves, not simply try to make incremental and piecemeal improvements within those systems.

Critics go on to argue that EAs, through their methods of evaluation and their general philosophy, don’t simply ignore or downplay the importance of systemic change; they actually hinder it. One reason for this is that EAs are too responsive to changes in evidence when it comes to issuing recommendations. This means that they shift their interests and priorities over time. Achieving systemic change requires constancy of purpose. It means ignoring the naysayers and critics; ignoring at least some of the facts on the ground until you achieve the desired change.

Now, as I say, I think others have dealt with this criticism at admirable length. I’ll mention two possible responses. First, I think it is possible for the EA to acknowledge the critic’s point and argue that nothing in the EA philosophy necessitates ignoring the importance of systemic change. It is perfectly coherent for the EA to (a) think carefully about how their charitable donations could achieve the most good and pick the ones that do; and (b) think carefully about how to initiate the kinds of systemic change that might improve things even further. In other words, I don’t think that EA has the corrosive effect that the critics seem to believe it does. This is one of those cases where it seems possible to sustain both goals at the same time.

Second, I think proponents of systemic change often underestimate (or simply ignore) the moral risks involved in their projects. Not only is it likely to be difficult to achieve systemic change, it is also uncertain whether the outcome will be genuinely morally better. Communist reformers in the 20th century presumably thought that their reforms would create a better world. It seems pretty clear that they were wrong. Are proponents of systemic change in the institutions of global capitalism sure that their preferred reforms will make the world a better place? I think there is reason to be cautious.
Again, this isn’t to dismiss the importance of systemic change, nor to reject the value of utopian thinking; it is simply to suggest that we should inject some realism into our aspirational thinking. This applies equally well to those who think EA will achieve dramatic changes in global well-being.


4. Conclusion
Okay, that brings me to the end of my contribution to this series. I’ll let Iason himself have the last word in the next entry.

Thursday, May 26, 2016

New paper - Should we use commitment contracts to regulate student use of cognitive enhancement drugs?




I have a new paper coming out in the journal Bioethics. It's about the philosophy of education and student use of cognitive enhancement drugs. It argues that universities might be justified in regulating their students' use of enhancement drugs, but only in a very mild, non-compulsory way, and it proposes a system of voluntary commitment contracts as one interesting model. The details are below.


Title: Should we use commitment contracts to regulate student use of cognitive enhancement drugs?
Journal: Bioethics
Links: Philpapers; Academia; Official
Abstract: Are universities justified in trying to regulate student use of cognitive enhancing drugs? In this paper I argue that they can be, but that the most appropriate kind of regulatory intervention is likely to be voluntary in nature. To be precise, I argue that universities could justifiably adopt a commitment contract system of regulation wherein students are encouraged to voluntarily commit to not using cognitive enhancing drugs (or to using them in a specific way). If they are found to breach that commitment, they should be penalised by, for example, forfeiting a number of marks on their assessments. To defend this model of regulation, I adopt a recently-proposed evaluative framework for determining the appropriateness of enhancement in specific domains of activity, and I focus on particular existing types of cognitive enhancement drugs, not hypothetical or potential forms. In this way, my argument is tailored to the specific features of university education, and common patterns of usage among students. It is not concerned with the general ethical propriety of using cognitive enhancing drugs. 

The Life Cycle of Prescriptive (Legal) Theories

Caspar David Friedrich - The Stages of Life


Legal officials have to make decisions. Take the judge as an example. He or she is confronted with legal disputes every day, some involving private legal disputes (e.g. breach of contract), some involving purported criminal acts (e.g. alleged murder), some involving the infringement of constitutional rights (e.g. limitations on the right to free speech). When confronted with these decisions the judge must decide whose case will prevail, which interests to prioritise, and what can and cannot be done as a matter of law. Oftentimes these disputes involve contentious matters of political morality. For example, the judge may be asked: can the legislature of the country ban controversial forms of speech on the grounds that they offend the interests of minority groups, or does the right to free speech trump any such offensiveness?

Judges might try to decide these cases by directly engaging with the moral and political issues they raise. But oftentimes they are reluctant to do so. There is a worry that judges are not politically empowered to use such criteria in making decisions: they should simply apply the law, whatever it is. It is for others, usually directly elected assemblies, to weigh the values inherent in these controversial political matters. And what’s true for the judge is true for other legal officials. Bureaucrats and regulators are also granted decision-making authority, and there are occasions on which this authority brings them face to face with controversial questions of political morality. They too are often reluctant to engage directly with these matters, as they feel it is contrary to their political-legal role.

Worries about the legitimacy of such decision-making authority have often led legal scholars to propose apolitical prescriptive legal theories. These are theories that propose decision-making procedures that are shorn of any concern for the controversial political content at the heart of legal disputes, and that allow legal officials to make their decisions in an objective, neutral fashion. Or so, at least, these theories often claim; anyone who has read up on these theories will know that they often fail to be objective and neutral.

Indeed, there is a common life-cycle to many prescriptive legal theories. They start off strong, purporting to provide an apolitical solution to the legal official’s problem, only to become attenuated and weakened over time. They then either persist in the attenuated form or die off. This life-cycle is articulated in a recent paper by David Pozen and Jeremy Kessler. I want to describe their proposed model of the life-cycle in this post. I do so because I think it is an interesting idea, and because once you know about it you will start to spot the pattern elsewhere. I’ll give an example of one prominent contemporary debate in applied ethics that shares this pattern at the end of this post.


1. The General Idea
Pozen and Kessler’s life-cycle consists of six major stages. They don’t give these stages names, but I will since I like giving names to things in order to make them more memorable:

T1 - Birth: A decision procedure is introduced that purports to allow legal officials to resolve highly politicised legal conflicts in a way that does not appeal directly to the political values at stake in those conflicts. In other words, a decision procedure is introduced that depoliticises a decision-making function.
T2 - Critique: The proposed decision procedure is batted about for a while and critics start to spot flaws in it. Some of these are quite academic and technical, some of them are more value-laden. The most common, in legal contexts, is to point out how the procedure fails to yield the ‘right’ decision on some matter that is subject to universal (or near-universal) approval.
T3 - Response: Proponents of the theory respond to the critiques by modifying the decision procedure in such a way as to avoid the technical and political objections.
T4 - Iteration: This process of critique and response cycles back and forth for some period of time. At each stage the theory adapts to accommodate a critique by incorporating commitments or assumptions that bring it back closer to the original highly politicised conflict.
T5 - Maturity: The theory reaches a point where it becomes so adulterated and attenuated that it essentially starts to reflect the ‘conflict-ridden field it had promised to transcend’. In other words, we arrive back at the same state we were in at around the time of the theory’s birth. At this point in time one of two things will happen:
T6(a) - Death: The theory falls out of favour and (possibly) something new is proposed in its stead.
T6(b) - Persistence: The theory persists, albeit in the highly adulterated and attenuated form. There are several reasons why this may happen (discussed in Pozen and Kessler’s paper); the main one is simply that the language and structure of the theory has certain side benefits for those who continue to couch their arguments in its terms.

Pozen and Kessler's Life Cycle of Legal Theories



The net result is that the prescriptive theories tend to ‘work themselves impure’ over time. This model will probably seem a little abstract right now. Pozen and Kessler illustrate it with several examples in their paper. I’ll go through one of them below. Before that, however, I want to note a couple of things. First, as the authors themselves point out, there is nothing particularly novel about this model. Similar life cycle models have been proposed in other fields. A notable example would be the model of scientific theories proposed in Thomas Kuhn’s famous book The Structure of Scientific Revolutions. Kuhn argued that scientific theories are originally proposed to explain some set of observations. Over time, new observations are made that seem to conflict with the theory. The theory is forced to accommodate these observations by adding auxiliary hypotheses or sub-theories to account for the anomalies. This results in some adulteration and attenuation of the theory, until eventually there is some ‘paradigm shift’ to a new theory.

Nevertheless, there is something unique about prescriptive legal theories that makes them particularly susceptible to the life cycle proposed by Pozen and Kessler. Apolitical prescriptive theories tell legal officials how they ought to resolve controversial moral-political debates. But they do so by encouraging them to avoid direct engagement with the values that are at stake. The problem is that those values are what ultimately matter and they consequently have a way of re-surfacing over time. There is a sense then in which the theories can never really do what they purport to do (these are my words, not Pozen and Kessler’s): they are always forced to encompass the moral contestation that sparked their formulation. To be clear, this is not true for all prescriptive legal theories — some are more honest and upfront about their attempt to accommodate core political values — but it is true for those that go down the apolitical route.


2. The Life-Cycle of Constitutional Originalism
One of the examples used in Pozen and Kessler’s article is that of originalist theories of constitutional interpretation. I’ll set out this example here because it is the one I am most familiar with and provides a very clear illustration of the life cycle (for those who don’t know, I’ve written two academic articles that are critical of the more philosophical versions of this theory). There are some very detailed and interesting histories of originalism out there. The gist of that history is as follows.

The US Supreme Court under Earl Warren (and, in the early years, under Warren Burger) was renowned for making a series of progressive and significant constitutional decisions. Some of these were widely celebrated (e.g. Brown v Board of Education, on the desegregation of schools) while others were more hotly contested (e.g. Roe v Wade, on the right to abortion). The more hotly contested decisions provoked a backlash among conservative lawyers and legal scholars. They felt that decisions like Roe v Wade involved judges stepping beyond their constitutional authority and making judgments of political morality that were the proper preserve of the legislature or executive.

This backlash led to originalism. In its initial form, originalism promised to provide judges with a simple decision procedure that allowed them to reach determinate outcomes in controversial cases without implicating controversial political values. It thus prevented them from overstepping their constitutional authority. The decision procedure required them to interpret the provisions of the constitution in accordance with their originally intended meaning. That is to say, the meaning that the drafters and ratifiers of those constitutional provisions would have intended them to have. This would reduce the judicial task to one of factual and historical analysis; not normative or moral theorising.

In its original (!) form, originalism was simple and (to a certain mindset) appealing. It soon ran into difficulties. Critics pointed out that there was not always good evidence for the intentions of the original framers and ratifiers; and that the whole concept of a single original intent was philosophically and factually problematic. What’s more, critics argued that if you followed the originalist decision procedure to the hilt, you would have to overturn widely-accepted precedents like Brown v Board of Education. The challenge was to modify the theory so as to accommodate these critiques and enable consistency with widely-accepted precedents.

This led to several cycles of modification and elaboration. Originalists dropped their commitment to intent and switched instead to the originally understood public meaning. They acknowledged that certain provisions within the constitution might be vague or ambiguous and hence that there was room for moral or political creativity when it came to applying those provisions. They also started to draw distinctions between the normative and semantic versions of the theory, and between the interpretive and constructive tasks of the judge. Taking this more sophisticated theoretical structure onboard, scholars engaged in more detailed historical inquiries that allowed them to account for decisions like Brown v Board of Education. Indeed, so modified and elaborated did the theory become that one prominent liberal living constitutionalist (Jack Balkin) argued that it was possible to reconcile originalism and living constitutional theories of interpretation. The consequence was that originalism became so weak a theory that virtually anyone could embrace it and apply it in a way that accommodates different political values. We got back to where we started.

And yet, as Pozen and Kessler note, originalism is one of those theories that seems to persist in its attenuated state rather than dying off. They argue that this is because the language and structure of the theory has side benefits for those who endorse it. In particular, with its complex structure and refined reasoning, it may tend to ‘enhance the power and prestige of lawyers as a privileged expert class, while raising barriers to entry for nonlegal actors’ (Pozen and Kessler 2016, 51).


3. Conclusion - Is Effective Altruism Working itself Impure?
I don’t have too much to say in response to this. I haven’t collected systematic evidence on the life cycle of all prescriptive legal theories, but the model proposed by Pozen and Kessler seems intuitively right to me. Furthermore, I think I see it in operation in other fields. One example which springs to mind is the ongoing debate about effective altruism (EA). I’ve been writing a series of posts about this theory, so it is at the forefront of my thinking at the moment.

As noted in that series, when it originally burst onto the scene, EA seemed to provide an attractive, rational and evidentially robust procedure for making decisions about charitable donation. This is a hotly contested field, with many different causes competing for our attention, often seeming to be equally worthy of our money. EA promised to cut through some of the noise. It adopted simple, appealing metrics of effectiveness, highlighted underappreciated causes, and allowed its followers to feel good about their charitable decision-making by convincing them that by prioritising certain charities they were doing the most good with their limited resources. But critics have started to identify flaws in this initially appealing theory. They argue that it ignores important moral goods, prioritises biased or incomplete metrics of effectiveness, and is not quite as rational or effective as its proponents would have you believe. Fans of EA have typically responded by trying to accommodate some of these criticisms, and by expanding the range of metrics and considerations that can go into the assessment of charitable donations.

We are at the early stages in this process of critique and response, so it’s not entirely clear where things will end up. But I suspect it may end up following the life cycle outlined by Pozen and Kessler. In other words, I think that as the theory of EA grows to encompass the anomalies and omissions highlighted by its critics, it may become so attenuated as to leave us largely where we started. Where once EA provided clear guidance on which charities to support, it will eventually end up endorsing many, mutually inconsistent ones. It will thus fail to provide the clarity and simplicity it once promised. Where things will go from there is anyone’s guess. Will EA die out, or will the language and structure of EA have side benefits that enable its persistence?

Of course, this is all somewhat speculative but it will be interesting to see whether EA does indeed follow this life cycle.

Wednesday, May 25, 2016

Is Effective Altruism Methodologically Biased?


The roundabout playpump - A flawed intervention?


(Part One; Part Two)

After a long hiatus, I am finally going to complete my series of posts about Iason Gabriel’s article ‘Effective Altruism and its Critics’ (changed from the original title ‘What’s wrong with effective altruism?’). I’m pleased to say that once I finish the series I am also going to post a response by Iason himself, which follows up on some of the arguments in his paper. Let me start today, however, by recapping some of the material from previous entries and setting the stage for this one.

Gabriel’s article takes a critical look at the leading objections to effective altruism (EA). EA, for present purposes, is defined as the practice of trying to do the most good you can through charitable donations. In typical EA arguments, this practice brings with it a number of key commitments, three of which figure prominently: (i) welfarism, i.e. EAs think you should try to improve individual well-being; (ii) consequentialism, i.e. EAs tend to favour consequentialist approaches to ethics; and (iii) evidentialism, i.e. EAs look to policy interventions with a robust evidential base.

Gabriel considers three main objections to this form of EA. The first is that it is unjust; the second that it is methodologically biased; and the third that it is not as effective as its proponents claim. I’ve looked at the first of these objections already. Today, I look at the second. That objection breaks down into three main sub-types of objection. I’ll discuss each of these in turn.

[Reader's note: I am basing this series on the original pre-published version of Gabriel's article because that's what I used when I originally structured this series and presented the taxonomy of objections. There have been some changes to the wording and framing of the critiques discussed below but, as best I can tell, it covers the same ground.]


1. Is EA too measurement focused and reductionist?
The first methodological critique highlights the evidential bias of the EA philosophy. The critique manifests itself in a couple of different ways. One of them is a variant on the classic ‘what gets measured gets managed’ concern. EAs place a premium on improving outcomes that are susceptible to quantification and measurement. This causes them to downplay other, less measurable and quantifiable outcomes, that might be equally morally worthy. To put the objection more formally:

  • (1) EAs emphasise moral goals that are readily measurable and quantifiable.
  • (2) There are many important moral goals that are not so readily measurable and quantifiable.
  • (3) Therefore, EAs tend to ignore important moral goals.

Unlike the previous round of objections, the concern here is not that EAs fail to recognise other important moral goods. Rather, the concern is that their evidentialist methodology biases them away from these other moral goods. To give an example, there might be some value that is intrinsic to political processes that respect and honour human rights. At the same time, it might be very difficult to measure and quantify those outcomes. Contrariwise, there might be some value to individual health and well-being that is relatively easier to measure and quantify. When it comes to deciding between policies, this will cause EAs to prefer policies that emphasise the latter moral goal to the former, even though they acknowledge the value of the former.

This can have two particularly negative consequences. The first is simply that proponents of EA become absorbed in assessing the relative merit of interventions that target measurable and quantifiable outcomes and forget to consider the less measurable and quantifiable. The other is that EAs become accustomed to standards of proof that are unreasonable in many domains. For instance, EAs love randomised controlled trials (RCTs), but RCTs are often only appropriate for small scale changes where it is possible to have control groups and to precisely measure outcomes. They are often not appropriate to larger country-wide or international reforms. Does this mean we should abandon these initiatives? Or does it mean that EAs need to moderate their standards of proof? That’s an issue that needs to be resolved.

Another, more specific version of the measurement objection worries that EAs tend to be reductionist when it comes to assessing the value of different interventions. One example of this is the tendency for EAs to rely on the DALY measure (disability-adjusted life years) when assessing interventions. The DALY measure allows us to make indirect inferences about a person’s subjective well-being and to compare different people according to this metric. This makes it a very attractive measurement system for EAs. The fear is that overreliance on it reduces everything to a comparison of subjective well-being.
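For readers unfamiliar with the metric, here is a minimal sketch of the basic DALY arithmetic (years of life lost plus years lived with disability). The numbers are invented, and refinements such as age-weighting and discounting are omitted:

    # A simplified DALY calculation: DALYs = YLL + YLD.
    # All input figures below are invented for illustration.
    def dalys(deaths, years_lost_per_death, cases, disability_weight, years_with_condition):
        yll = deaths * years_lost_per_death                       # years of life lost
        yld = cases * disability_weight * years_with_condition    # years lived with disability
        return yll + yld

    # An intervention averting 10 deaths (30 years lost each) and 200 cases
    # of a condition with disability weight 0.2 lasting 5 years:
    print(dalys(10, 30, 200, 0.2, 5))  # 300 + 200 = 500 DALYs averted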

How can EAs respond to these objections? Gabriel identifies a number of possibilities, some of which are already happening. One example is that GiveWell — possibly the leading charity evaluator — has moved away from overreliance on the DALY measure and instead favours interventions that are supported by multiple lines of independent analysis. Gabriel thinks that EAs should also be more upfront about the bounded nature of the information they provide. They could do this by concluding that some intervention is ‘unprovable’ rather than ‘unproven’. He also thinks that they should engage more with other potential metrics, such as the Multidimensional Poverty Index, which evaluates outcomes in non-welfarist terms.


2. Is EA too individualistic?
The second version of the methodological critique argues that EA is overly individualistic in its focus. That is to say, it prioritises interventions that improve individual well-being and either ignores or downplays those that improve collective or community-based goods. Enhancing and empowering local communities is often a goal for NGOs, and it is also something favoured by certain schools of political morality, but because EAs are so resolutely welfarist in their outlook, they tend to value communities in instrumental ways, i.e. as vehicles for improving individual outcomes. This is similar to the reductionist critique given above (and, indeed, in the final version of the article Gabriel merges them together).

To put the objection in quasi-formal terms:

  • (4) EAs emphasise moral goods that accrue to the individual (i.e. that enhance individual well-being etc).
  • (5) There are important moral goods that accrue to the community.
  • (6) Therefore, EAs ignore an important set of moral goods.

The objection is defended and elaborated along similar lines to the previous one. Gabriel uses a thought experiment to highlight its practical consequences:

Medicine: Suppose it is known that condom distribution is more effective in minimising the harm caused by HIV/AIDS than the provision of anti-retroviral drugs (ARVs). This is because ARVs only help those who have the disease, while condoms can prevent people from contracting it. You are faced with the choice of funding two different programs. The first allocates all the money to condom distribution. The second allocates 90% to condom distribution and 10% to ARVs. Which do you choose?

Gabriel argues that if the evidence does indeed support the view that condom distribution is more effective than the provision of ARVs, then EAs will tend to favour the first program. It is, after all, the one that does the most good for the money provided. The problem is that this does not sit easily with most people. The idea of leaving those with the disease untreated seems wrong. Gabriel suggests that this might have something to do with the value of hope to communities. People want to live in a society that will care for them if they are sick, even if this is not the most cost-effective approach. They want to have the hope that they will be looked after. Furthermore, hope may be an important resource for communities undergoing hardship, one that enables them to take collective action to address problems that cannot be addressed at the individual level. You get more buy-in at the community level if people have some sense of hope.
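To see why a strict cost-effectiveness calculus favours the first program, consider a toy calculation (all of the effectiveness figures are invented for illustration):

    # Toy comparison of the two programs in the Medicine thought experiment.
    # Invented figures: each unit of money spent on condom distribution averts
    # 0.10 units of harm; each unit spent on ARV provision averts 0.02 units.
    budget = 1000.0
    condom_rate, arv_rate = 0.10, 0.02

    program_1 = budget * condom_rate                                  # 100% condoms
    program_2 = 0.9 * budget * condom_rate + 0.1 * budget * arv_rate  # 90/10 split

    print(program_1)  # 100.0 units of harm averted
    print(program_2)  # 92.0 units of harm averted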

The upshot of this, for Gabriel, is that EAs shouldn’t move so quickly from claims about cost-effectiveness of policies at the individual level to claims about the overall value or desirability of a policy.


3. Is EA too instrumentalistic?
The final methodological critique holds that EAs are overly instrumental in their evaluation of policies. That is to say, they compare interventions based on the outcomes they achieve and not on the procedures they use to achieve those outcomes. This creates a problematic bias in their recommendations. Procedures that are inclusive and democratic in nature are often slower and messier than more non-inclusive and technocratic procedures. Consequently, EAs tend to favour technocratic interventions. This causes them to downplay or ignore important procedural values.

  • (7) EAs assess interventions in instrumental terms (i.e. how efficiently they achieve the desired outcome); they often ignore or downplay the values attached to the procedures that lead to those outcomes.
  • (8) There are intrinsically valuable procedures (i.e. democratic and inclusive procedures) that may be less efficient than other technocratic and non-inclusive procedures.
  • (9) Therefore, EAs tend to favour technocratic and non-inclusive procedures for achieving their desired outcomes.

Gabriel again uses a thought experiment to support the argument:

Participation: Some villages need help developing a water and sanitation system to combat the spread of waterborne parasites. You can fund one of two projects that help them in this regard. The first will hire a group of contractors to build the system - something they have done successfully in the past. The second will work with members of the community and help them build and develop the system themselves. This has also worked in the past but because villagers are not experts in this area of construction the systems tend to be less functional.

The complaint is that EAs would naturally choose the first project because it is the more effective of the two. But the second project might have numerous advantages that go unappreciated by the standard EA methodology. It values the agency and autonomy of the villagers; it allows them to build capacity and understanding; and it can assist with the acceptability and perceived legitimacy of the intervention.

This objection applies at national scales too. There are concerns that large-scale philanthropic projects can subvert democratic processes in favour of technocratic solutions, and thereby worsen the governance problems in certain developing nations.

Gabriel thinks that EAs need to be more sensitive to this problem. They need to appreciate the importance of popular control over social outcomes and the value of strong, democratic decision-making procedures. It strikes me, however, that many EAs are already sensitive to this problem. Indeed, Will MacAskill’s book Doing Good Better opens with a lengthy critique of the ‘PlayPump’, a device that allowed villagers to pump water by means of a children’s roundabout: the idea was that water could be pumped while the children played. The pump was a failure for several reasons, one of which (highlighted by MacAskill) is that nobody really consulted the villagers who were being given these things. Now, perhaps MacAskill thinks that non-consultation was a problem purely because it led the inventors and promoters of the PlayPump to favour an ineffective intervention, but there is still some sensitivity to the value of more inclusive procedures on display.


4. Conclusion
As you can see, each of these criticisms is a variation on the same basic theme: EAs prioritise certain ways of assessing the value of charitable interventions, and this causes them to ignore or downplay something of importance. The response to each criticism is the same: either EAs argue that it is right to downplay or ignore those things, or they must try to expand their metrics and methodologies to include them.

Monday, May 23, 2016

Vacancy - Research Assistant on the Algocracy and Transhumanism Project





I'm hiring a research assistant as part of my Algocracy and Transhumanism project. It's a short-term contract (5 months only) and available from July onwards. The candidate would have to be able to relocate to Galway for the period. Details below. Please share this with anyone you think might be interested.

Algocracy and the Transhumanist Project, IRC New Horizons NUI Galway
Whitaker Institute, NUI Galway
Ref. No. NUIG 067-16
Applications are invited from suitably qualified candidates for a full time, fixed term position as a Research Assistant with the Algocracy and Transhumanism Project at the Whitaker Institute, National University of Ireland, Galway. This position is funded by the Irish Research Council and is available for a five month period from July 2016.
The project critically evaluates the interaction between humans and artificially intelligent, algorithm-based systems of governance. It focuses on the role of algorithms in public decision-making processes and the increased integration between humans and technology. It examines how technology creates new governance structures and new governance subjects, and the effect this has on core political values such as liberty and equality. Further information about the project can be found on the project webpage: http://algocracy.wordpress.com
Job Description: The post holder will perform a variety of duties associated with the project. They will participate in research, preparation and editing of interviews with leading experts in the areas of algorithmic governance and human enhancement. They will prepare literature reviews. They will review and edit manuscripts for publication. They will assist in the organisation of research seminars and one major workshop. They will contribute to the project webpage and provide general assistance in disseminating project results. The post holder will report to Dr John Danaher.
Qualifications: Candidates should have completed a degree in a relevant field of study. Given the broad, interdisciplinary nature of the project, this includes (but is not limited to) law, philosophy, politics, sociology, psychology and information systems. Ideally, the candidate will have some experience in analytical and philosophical modes of research. Candidates should have a strong academic record and good IT skills. Ideal candidates will be professional, highly motivated, able to work effectively in a team environment, creative, and enthusiastic about research. Strong analytical, writing, and organisational abilities are important prerequisites. Support/training will be provided to a successful candidate interested in furthering their own academic/research career.
Salary: €21,850 per annum, pro rata for this five-month contract. Start date: July 2016.
NB: Work permit restrictions apply to this category of post.
Further information on research and working at NUI Galway is available at http://www.nuigalway.ie/our-research/. Further information on the Whitaker Institute is available at www.whitakerinstitute.ie.
Informal enquiries concerning the post may be made to Dr John Danaher – john.danaher@nuigalway.ie
To Apply: Applications, including a covering letter, CV, and the contact details of three referees, should be sent via e-mail (in Word or PDF only) to Gwen Ryan: gwen.ryan@nuigalway.ie
Please state reference number NUIG 067-16 in the subject line of your e-mail application.
Closing date for receipt of applications is 5.00 pm on Wednesday, 15th June 2016.
National University of Ireland, Galway is an equal opportunities employer.

Friday, May 13, 2016

Episode #3 - Sven Nyholm on Love Enhancement, Deep Brain Stimulation and the Ethics of Self Driving Cars


This is the third episode of the Algocracy and Transhumanism project podcast. In this episode I talk to Sven Nyholm, who is an Assistant Professor of Philosophy at the Eindhoven University of Technology. Sven has a background in Kantian philosophy and currently does a lot of work on the ethics of technology. We have a wide-ranging conversation, circling around three main themes: (i) how technology changes what we value (using the specific example of love enhancement technologies); (ii) how technology might affect the true self (using the example of deep brain stimulation technologies); and (iii) how to design ethical decision-making algorithms (using the example of self-driving cars).

The work discussed in this podcast on deep brain stimulation and the design of ethical algorithms is being undertaken by Sven in collaboration with two co-authors: Elizabeth O'Neill (in the case of DBS) and Jilles Smids (in the case of self-driving cars). Unfortunately we neglected to mention this during our conversation. I have provided links to their work above and below.

Anyway, you can download the podcast here, listen below or subscribe on Stitcher or iTunes.



 

Show Notes


0:00 - 1:30 - Introduction to Sven

1:30 - 7:30 - The idea of love enhancement

7:30 - 10:30 - Objections to love enhancement

10:30 - 12:30 - The medicalisation objection to love enhancement

12:30 - 21:10 - Medicalisation as an evaluative category mistake

21:10 - 24:00 - Can you favour love enhancement and still value love in the right way?

24:00 - 28:10 - Evaluative category mistakes in other debates about technology

28:10 - 30:50 - The use of deep brain stimulation (DBS) technology

30:50 - 35:20 - Reported effects of DBS on personal identity

35:20 - 41:20 - Narrative Identity vs True Self in debates about DBS

41:20 - 46:25 - Is the true self an expression of values? Can DBS help in its expression?

46:25 - 50:30 - Use of DBS to treat patients with Anorexia Nervosa

50:30 - 55:20 - Ethical algorithms in the design of self-driving cars

55:20 - 1:02:40 - Is the trolley problem a useful starting point?

1:02:40 - 1:06:30 - The importance of legal and moral responsibility in the design of ethical algorithms

1:06:30 - 1:09:00 - The importance of uncertainty and risk in the design of ethical algorithms

1:09:00 - end - Should moral uncertainty be factored into the design?  


Links

  • Jilles Smids (Sven's Co-author on ethical algorithms for self-driving cars)

Wednesday, May 11, 2016

New Paper - Robots, Law and the Retribution Gap




Apologies for the dearth of posts lately; I'll be back to more regular blogging soon enough. To fill the gap, here's a new paper I have coming out in the journal Ethics and Information Technology. In case you are interested, the idea for this paper originated in this blogpost from late 2014. I was somewhat ignorant of the literature back then; I know more now.

Title: Robots, Law and the Retribution Gap
Journal: Ethics and Information Technology
Links: Philpapers; Academia; Official
Abstract: We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises from a mismatch between the human desire for retribution and the absence of appropriate subjects of retributive blame. I argue for the potential existence of this gap in an era of increased robotisation; suggest that it is much harder to plug this gap than it is to plug those thus far explored in the literature; and then highlight three important social implications of this gap.

Tuesday, May 3, 2016

Episode #2: James Hughes on the Transhumanist Political Project


James Hughes

This is the second episode of the Algocracy and Transhumanism project podcast. In this episode I interview Dr. James Hughes, executive director of the Institute for Ethics and Emerging Technologies and current Associate Provost for Institutional Research, Assessment and Planning at the University of Massachusetts Boston. James is a leading figure in both transhumanist thought and political activism. He is the author of Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future. I spoke to James about the origins of the transhumanist project, the political values currently motivating transhumanist activists, and some more esoteric and philosophical ideas associated with transhumanism. You can download the podcast here. You can listen below. You can also subscribe on Stitcher and iTunes.




Show Notes

0:00 - 1:00 - Introduction to James  
1:00 - 11:00 - The History of Transhumanist Thought (Religious and Mythical Origins) 
11:00 - 17:00 - Transhumanism and the Enlightenment Project  
17:00 - 25:30 - Transhumanism and Disability Rights Movement  
25:30 - 34:30 - The Political Values for Hiveminds and Cyborgs  
34:30 - 41:00 - The Dark Side of Transhumanist Politics  
41:00 - 43:00 - Technological Unemployment and Technoprogressivism  
43:00 - 51:00 - Building Better Citizens through Human Enhancement  
51:00 - 1:01:55 - The Threat of Algocracy?  
1:01:55 - 1:07:55 - Internal and External Moral Enhancement   

Links