Friday, March 16, 2018

Unjust Sex and the Opposite of Rape (1)

William Hogarth, Before

‘The Opposite of Rape’ is the title of an article by the legal philosopher John Gardner. It was published in late 2017 in the Oxford Journal of Legal Studies. In the article, Gardner tries to delineate the differences between good sex and rape. Doing so, he argues, helps to identify certain blindspots in contemporary debates about sexual ethics, particularly debates about the importance of consent. For some reason, several people have asked me for my thoughts on this article. I’m not quite sure why. The subtext of each of these requests, at least as I read it, has been that there is something unusual or improper about Gardner’s analysis — that his discussion of ‘the opposite’ of rape is unwelcome. Initially, I thought nothing of this, and I put the article at the end of my already extensive reading list. But then the requests kept coming, so I figured I had better read the thing.

Now that I have done so, I’m not quite sure what the fuss was about. Obviously, many philosophical discussions of rape and sexual assault are fraught with risk. If you take an overly analytical perspective on the different ethical grades of sexual activity (good/bad/indifferent/permissible/forbidden), you risk sounding detached and unsympathetic to the plight of victims of sexual assault. If you use an inapt or awkward analogy, the risk escalates. Gardner might be guilty of some of these things. But, overall, I find his argument relatively unobjectionable. Indeed, if I had any complaint, it would be that it is not particularly original. What Gardner says about the distinction between ‘rape’ and ‘good’ sex, and about the dangers of focusing too much on consent in discussions of both, has been said already. What he adds is a particular conceptual framework for thinking about the distinction (viz. that ‘good sex’ is ‘teamwork’), which has one potentially controversial implication that he tries to avoid.

Over the next two posts, I want to justify my take on Gardner’s article. I do so by initially comparing it to another article — ‘Unjust Sex vs Rape’ by Ann Cahill — which I think advances a very similar thesis. This comparison should help to confirm my suspicion that Gardner’s argument is not hugely original. I will then consider the one potentially disturbing implication of Gardner’s argument and assess the extent to which he avoids it.

1. Sex in the Gray Area
“All sex is rape.” You have probably come across this quote. It is often attributed to the radical feminist Catharine MacKinnon, and it is usually held up as a reductio of the radical feminist school of thought. Surely not all sex can be rape? The quote is, however, of dubious provenance. MacKinnon herself has denied ever saying it, and I have never seen a direct quote from her work confirming that she did. Nevertheless, there are aspects of her theory of sexuality that might be taken to endorse it, or something pretty close. It would take a long time to explain MacKinnon’s theory in full, but the gist of it is that women’s sexuality is shaped by patriarchal social structures. From an early age, women learn and internalise a certain understanding of their own sexuality and men learn and internalise a complementary understanding. Specifically, they learn that:

Women’s sexuality is, socially, a thing to be stolen, sold, bought, bartered, or exchanged by others. But women never own or possess it, and men never treat it, in law or in life, with the solicitude with which they treat property. To be property would be an improvement. 
(MacKinnon 1989, 172 - taken from Cahill 2016, 749)

In other words, women adopt a passive mode of sexuality and men an aggressive mode. Sex is something that men take from women; and it is something that is defined solely by reference to male desires and needs. While this doesn’t quite mean that all sex under conditions of patriarchy is equivalent to rape, it does mean that all sex under conditions of patriarchy should be treated with suspicion. Because women are indoctrinated by patriarchal norms, their own expressions of consent to sex are not necessarily dispositive. This puts MacKinnon in something of a bind as it seems like no heterosexual sex could be morally legitimate from the perspective of her critique. As Cahill puts it:

Although MacKinnon has vociferously denied that her theory of sexual violence entails the claim that all heterosexual sex under patriarchal conditions is rape, one cannot find in her work a detailed, compelling articulation of how any given heterosexual interaction could sufficiently escape the pernicious effects of patriarchal structures. 
(Cahill 2016, 749)

Cahill, and others, try to escape from this bind. Unlike MacKinnon, they think that there are circumstances in which sex can escape from the pernicious effects of patriarchy; but like MacKinnon, they think that some heterosexual sex, even if freely and affirmatively consented to by women, is normatively suspicious. As a result, they call for a more nuanced understanding of the spectrum of possible forms of heterosexual sex. At one end of the spectrum lies rape and sexual assault — both of which are undoubtedly impermissible and rightly criminalised. At the other end lies ‘good’ sex — which is undoubtedly permissible and probably good. In between, there is a gray area where sex can be indifferent/‘meh’ or, if not quite as bad as rape and sexual assault, not good either. Cahill calls this latter type of gray area sex ‘unjust sex’. It’s bad, but not as bad as rape and probably not worthy of criminalisation either (though, to be clear, in the article under discussion Cahill doesn’t discuss criminalisation and I am not sure what her opinion on it is).

What are the distinctive features of unjust sex? The feminist scholar Nicola Gavey interviewed women about the phenomenon in her 2005 book Just Sex?. In these interviews, the women all felt that they had not been raped, but they had had sex under conditions in which they felt unable to express their true desires not to have sex. As Gavey put it, each of these women ‘found herself going along with sex that was neither desired nor enjoyed because she did not feel it was her right to stop it or because she did not know how to refuse’ (2005, 136 - sourced from Cahill 2016, 748). Cahill argues that in cases of unjust sex women typically go along with having sex because doing so helps to ease tension that might otherwise arise between themselves and their partners, and/or because it is the quickest way to achieve their own desires, such as the desire to sleep.

These descriptions give us a general sense of what unjust sex involves. The question now is whether anything more can be said. Is there a philosophically richer analysis of unjust sex that explains why it is bad and why it is different from rape and sexual assault? Cahill has tried to answer that question in her work.

2. What makes sex unjust?
Rape* and unjust sex have some obvious similarities. They both lie on the negative end of the sexual spectrum: they are, in other words, both ‘bad’ forms of sex. What justifies this joint designation? It cannot be the lack of consent. Although the absence of consent is usually taken to be the hallmark of rape, it is not the hallmark of unjust sex. Cahill and Gavey are adamant about that. In cases of unjust sex, women do signal consent to sexual activity and, what’s more, take themselves to have signalled consent to sex. This is important because it suggests that consent — whatever its merits as a moral concept — does not automatically render sex morally commendable. This is something that Gardner is also keen to emphasise, as we shall see when we discuss his work.

So if it’s not the lack of consent that justifies the joint designation what does? The answer, according to Cahill, is the disregard for female sexual desire. In both rape and unjust sex, female sexual desire is weightless/powerless: its presence, or more importantly, its absence, makes no difference to whether the sexual interaction occurs or not. Rape adds to this a total disregard for female sexual agency.* In rape, the sexual interaction occurs against the will of the woman, not just against her desire. She experiences the total dismantling or disenabling of her sexual agency.

Unjust sex is different. In cases of unjust sex, female sexual agency is not dismantled. Indeed, one of the distinctive features of all the cases discussed by Gavey is that the women were usually explicitly or implicitly required to express their agency by signalling consent. The men with whom they had sex sought some input from them. Nevertheless, their agency was highly restricted. Cahill uses a nice metaphor to explain what happens. She suggests that in cases of unjust sex female sexual agency is ‘hijacked’. The men (or, more broadly, the patriarchal structures within which the sexual interaction takes place) hijack the woman’s agency and look for it to be expressed in a particular way: to accredit the sex and, if necessary, use this against the woman at a later point. Cahill puts it like this:

…the woman’s agency can be deployed only to facilitate the specific sexual interaction whose content (that is, the particular acts that will make up the interaction) is predetermined and remains largely unmarked by the specific quality of the woman’s sexual subjectivity. Her sexual agency is employed in a weak way, as a mere accreditation of the sexual interaction that is being offered to her. 
(Cahill 2016, 755)

How does this hijacking take place? What happens is that it is clear to the woman in the situation that there is only one valid way in which to express her agency, namely: to affirm the proposed sexual interaction. If she refuses or suggests an alternative, her response will be questioned and challenged, and she will be perceived as being difficult or obtuse. At this point actual coercion may be used to force her into the sexual activity, moving us from unjust sex to rape. This hijacking has an insidious effect on female sexual agency. Because there is no obvious dismantling or overpowering of agency — as there is in cases of rape — female sexual agency does appear to make a causal difference to the sexual activity. But the difference it makes is minimal and so its true limitations are hidden.

3. Conclusions and Implications
That brings us to the end of part one. I have gone through Cahill’s analysis of unjust sex in some detail both because I think it is interesting and because it points in a very similar direction to Gardner’s analysis in ‘The Opposite of Rape’. I’ll explain why in part two, but to set up that discussion let me mention two important propositions that I think it is possible to draw out of Cahill’s analysis (even if she does not explicitly endorse them herself):

Proposition 1: There is more to ‘bad’ (i.e. morally unwelcome) forms of sex than rape and sexual assault; or, to put it a different way, the mere presence of sexual consent is not enough to make a sexual interaction morally commendable.

Proposition 2: Because of this, an excessive focus on consent in discussions of sexual ethics can be misleading, and possibly unhelpful, because it does not move the needle sufficiently toward a normative ideal of male-female sexual relations.

What I will argue in part two is that Gardner’s analysis leads him to endorse the same propositions, albeit from a different direction. Instead of starting with ‘unjust’ forms of sex he starts with ideal forms.

* Note: I’m just going to refer to ‘rape’ from here on out, rather than ‘rape and sexual assault’. This is not because I don’t appreciate the distinction between the two things, but simply for convenience since they can be lumped together for the purposes of this discussion.

* Note: Cahill, like other feminist scholars, uses the term ‘agency’ in contradistinction to the term ‘autonomy’. This is because ‘autonomy’, to her, connotes an atomistic individual acting apart from social institutions and relations; ‘agency’ is supposed to correct for this and refers to the powers and capacities that we have as a function of our interaction with social institutions and relations. There is a rich literature on this distinction though it should be noted that modern theories of autonomy (e.g. the theory of ‘relational autonomy’) are not that different from feminist theories of agency. A classic paper on the topic is Kathryn Abrams ‘From autonomy to agency: feminist perspectives on self-direction’.

Monday, March 12, 2018

Should we care about inequality? A Critical Analysis of Pinker's Optimism

There is a spectre haunting the developed world — the spectre of inequality. In the wake of Thomas Piketty’s surprising bestseller Capital in the 21st Century and the disruptive presidential campaigns of Donald Trump and Bernie Sanders in 2016, a consensus seems to have emerged: developed economies are growing more unequal. We went through a golden era in the middle part of the 20th century: economic growth and equality went hand in hand. The rising tide lifted all boats. Then, somewhat unexpectedly, things changed. From 1980 onwards, growth and equality diverged. Disproportionate shares of income went to the top. Today, the incomes of the top 0.1% are incomprehensible. In 2015, the richest 62 people in the world owned more than the bottom 3.5 billion. In 2017, the richest man in the world broke through the $100 billion barrier in personal wealth.

The rise in inequality is generally assumed to be a bad thing. Its effects have been felt most sharply among the middle classes, particularly in the USA. They have lost their steady, well-paying jobs-for-life. China and automation have seen to that. They are struggling with debt and drug addiction. They feel shut out from politics and political processes. The wealthy buy their way to power and influence. They lobby hard for tax reforms that benefit them the most. The result is an increasingly polarised and violent polity. It’s hard to see any chink of light in this world.

Unless you are Steven Pinker. In his recent book Enlightenment Now, Pinker casts doubt on the emerging consensus about inequality. He doesn’t deny the increase, but he calls for greater perspective and critical awareness about what it all means. In doing so, he offers two main objections to the emerging consensus: (i) a philosophical objection, which argues that the social/ethical significance of inequality has been overstated; and (ii) a measurement objection, which argues that the statistics that are bandied about in the debate can be misunderstood.

In this post, I want to examine both of these objections. I do so in a spirit of self-education and constructive criticism. I’m one of the people who has bought into the emerging consensus. Some of what Pinker says seems quite plausible to me; some less so. I want to determine which parts of his critique have merit and which do not. That said, I have no desire to engage in the popular sport of Pinker-bashing. Several critical reviews of Pinker’s book have emerged in the past couple of weeks and I’ve been relatively unimpressed by the majority of them. For some odd reason, they all want to challenge Pinker’s interpretation of the Enlightenment and criticise his failure to engage more seriously with Enlightenment thinkers. This seems odd to me since Pinker clearly wasn’t trying to write an intellectual history. His book is about contemporary trends in human society. But, then, perhaps Pinker is a victim of his title. If he had called his book ‘Optimism Now’ it might have been more accurate and he could have sidestepped some of this ire (though, no doubt, he would have attracted other kinds). Anyway, these issues are peripheral to this post. I’m not trying to offer an overall evaluation of his book. I just want to focus on the chapter about inequality.

[Interpretive Note: Throughout this post, when I refer to ‘inequality’ or ‘equality’ I am, unless I explicitly state otherwise, referring to inequality in outcomes, specifically the distribution of income, wealth or other material goods. I am not referring to inequality of opportunity or inequality of legal protection/rights etc.]

1. The Philosophical Objection: Inequality is not intrinsically bad
Is inequality a bad thing? That depends on what you mean by ‘bad’. Philosophers distinguish between two different senses in which something can be bad. Something can be intrinsically bad, i.e. bad in and of itself. Or something can be instrumentally bad, i.e. bad because of its consequences or effects. If the emerging consensus view is that inequality is a bad thing, in what sense is it bad?

Pinker’s primary philosophical objection to all the fretting that we do about inequality is that it fails to put the badness of inequality in its right place. Inequality might be instrumentally bad. It might, for example, have a negative impact on democratic politics. But it is not, according to Pinker, intrinsically bad, nor is its counterpart (equality) intrinsically good. This is a common enough philosophical view. Michael Huemer and Harry Frankfurt, to name but two philosophers, have defended it. Their thesis can be easily conveyed through a thought experiment (this is my invention not theirs). Imagine the following two scenarios and then ask yourself which is better:

Scenario 1: Yourself and two other people are stranded on a desert island. None of you have any resources when you arrive on the island. There is no food or shelter available and the island is exposed to repeated hurricanes.

Scenario 2: Yourself and two other people wash up on an island owned by a billionaire businessman. He welcomes you with open arms and allows you to live there. Everyone is given shelter and as much food as they desire, but the businessman prefers you to the others. He allows you to live in his main residence and to play with all his cool toys; he asks the others to live in smaller guest residences and stay away from the main house.

Scenario 1 exhibits perfect equality; scenario 2 exhibits considerable inequality. Nevertheless, few people would prefer Scenario 1 to Scenario 2. Why not? The answer is obvious: the equality in scenario 1 comes with a hefty price: it is equality of immiseration only. The argument made by Frankfurt and Huemer is that comparing these kinds of scenarios demonstrates that equality itself is not intrinsically valuable. What really matters is individual well-being. It’s important that people are given enough to survive and thrive, not that they are made perfectly equal. This is a position known as sufficientarianism. It is opposed to egalitarianism and it is Pinker’s philosophical starting point. I try to illustrate its logic in the diagrams below. The ‘sufficiency threshold’ in these diagrams is supposed to illustrate what is intrinsically valuable. The ‘bars’ indicating the distribution of wealth around that sufficiency threshold represent how equal or unequal the scenario is. The claim is that this lacks intrinsic value.

I would like to resist this critique. I would like to argue that equality is an intrinsic good, but it is not the only intrinsic good and that it can be overridden by others. Welfare and well-being are intrinsic goods that must be weighed against equality, and in certain circumstances they count for more. In a society in which everyone is below the sufficiency threshold, the most important thing to do is to bring everyone above it. But once that is achieved, the intrinsic value of equality might become more apparent and significant. For example, contrast Scenario 2 (above) with the following:

Scenario 3: Same as scenario 2 but when you arrive on the tropical island the billionaire decides to give everyone an equal share of his wealth and equal access to his resources.

I think most people would agree that scenario 3 is superior to scenario 2, and one possible explanation for this is that the intrinsic value of equality does matter above the sufficiency threshold. The problem with this is that there are other possible explanations for the superiority of scenario 3 over scenario 2. It could be that equalising wealth has the effect of increasing the overall level of utility or life satisfaction in the relevant world. But I tend to think you could run another version of the thought experiment in which the overall level of utility (above the sufficiency threshold) is kept constant but the distribution of utility is equalised. I think that world would still be better than one in which there is an unequal distribution of utility. This would again suggest that equality has intrinsic value, at least above a threshold of well-being. I’ve tried to illustrate this point in the diagram below, suggesting that we must compare the overall merits of a social arrangement along two dimensions: (i) the welfare dimension and (ii) the equality dimension, and that society A is preferable to B, even if B is preferable to C and D.

All of this may be academic, however, because I think Pinker’s larger point is that we do not live in a world in which everyone is above the sufficiency threshold, even though things are getting better, and so bringing people above that threshold should be our priority for the time being.

There are other ways to defend the intrinsic value of equality. Some people defend the intrinsic value of equality on the grounds that being treated equally is important in order to respect the agency/capacity of the individual. Shlomi Segall, for example, has argued that inequality in outcomes is intrinsically bad whenever it results from forces beyond the individual’s control. This implies that inequality might be tolerable when it results from forces within the individual’s control. What matters is that individual effort and responsibility determine one’s share in the social surplus. When this is not the case, inequality of outcomes is intrinsically bad.

Pinker has a response to this, although he does not flag it as such. He argues that this kind of critique results from a conflation of concepts. Specifically, he thinks people who make it are confusing inequality with theft or unfairness. Just because one person is richer than another does not mean that the former stole from the latter. Pinker uses the example of JK Rowling, author of the Harry Potter books, to make this point. She made a lot of money from those books, but she didn’t steal from anyone in the process, and the people who bought her books enjoyed them (or, at least, the majority did). She certainly gained a lot, but they all gained something too. It was a positive sum game in which more of the surplus flowed to her than to the individual readers. But there was no theft or underhand tactics involved. To bolster this point, Pinker cites experimental studies suggesting that people are content with unequal distributions of this sort, if they result from people getting what they deserve. If those who put in more effort get more in return, people seem to be happy with this.

I agree with Pinker that it is important to keep inequality conceptually distinct from theft and fairness. But it would be foolish to deny that they are often connected and that on at least some occasions inequality does result from theft or unfairness. Indeed, I take this to be one of the key arguments against current manifestations of inequality in the Western world. As Piketty points out in his discussion of income inequality, it is difficult to explain the high incomes of the CEOs of large corporations in meritocratic terms. It is highly improbable that they deserve to take home 50 to 100 times the salary of their workers. They certainly don’t add that much extra value to the output of their firms (this is obvious when you consider that financial CEOs still earned huge sums during the worst of the financial crisis). It’s more likely that they are taking an unfair share of their firm’s profits. This seems to be borne out by other evidence. David Weil, for example, in his excellent book The Fissured Workplace argues that the increased use of precarious, outsourced, platform labour by companies results in an unfair distribution of the benefits and burdens of economic productivity. It is something that makes life much worse for most workers (less pay, fewer benefits, less security) and much better for company managers and shareholders (who take a greater share of firm profits as a result). The one moderating effect is that it may also make things better for consumers by offering them lower prices. Pinker makes much of this moderating effect, which we’ll return to later.

Overall then, Pinker thinks that concerns about inequality need to be put in their place. Inequality is not intrinsically bad, nor is it to be conflated with theft and unfairness. You could have perfectly equal societies in which everyone suffers terribly; and highly unequal societies in which everyone thrives. We should prefer the latter to the former. This is because individual welfare and well-being are the true intrinsic goods and count for far more in our moral deliberations. I agree with the overall spirit of this, but I would argue (a) that inequality might still be an intrinsic bad, just one that only becomes significant when societies cross a threshold of well-being; and (b) that some of the inequality we currently see in developed economies is caused by theft/unfairness.

2. The Measurement Objection: Lies, Damned Lies
The second part of Pinker’s critique focuses on measures of inequality. He claims that they are often misunderstood and misread. To understand his point, it’s worth briefly describing the two main measures of inequality that are used in contemporary debates:

Gini Coefficient: This is a measure of the statistical dispersion of values in a frequency distribution. Though it can be used to measure different types of statistical dispersion it is most commonly used to measure the dispersion in the values of incomes earned across a society. The Gini Coefficient is expressed as a number between 0 and 1. A society that scores 0 on the Gini coefficient is one of perfect equality, i.e. everyone in the society earns the same; a society that scores 1 is one of perfect inequality, i.e. one person gets everything. Developed economies typically have Gini coefficients that range from about 0.25 to 0.50.

Income Shares: This is a measure of the proportion of total income earned by different income cohorts across a society. The cohorts occupy different deciles or centiles in the distribution of incomes. If you have paid attention to recent debates about inequality you will be familiar with this lingo. It’s where all the talk of the top 10% or 1% comes from. What those labels mean in practice is that the top 10% (top decile) are those whose incomes fall within the top 10% of the income distribution; and the top 1% (top centile) are those whose incomes fall within the top 1%. These cohorts can take different overall shares of the total income earned in a society in a given year. For example, Piketty presents data in his book suggesting that the top 10% of the income distribution in Europe take about 35% of the total income, whereas in the USA they take 50% (circa 2010).
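To make these two measures concrete, here is a short illustrative sketch (my own invention, with made-up numbers, not data from Pinker or Piketty) that computes a Gini coefficient and a top-decile income share from a list of incomes:

```python
# Illustrative only: a hypothetical ten-person economy.

def gini(incomes):
    """Gini coefficient via the standard sorted-income shortcut:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
    with x sorted ascending and i running from 1 to n."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

def top_share(incomes, fraction=0.10):
    """Share of total income earned by the top `fraction` of earners."""
    xs = sorted(incomes, reverse=True)
    k = max(1, round(len(xs) * fraction))  # size of the top cohort
    return sum(xs[:k]) / sum(xs)

incomes = [10, 12, 15, 18, 20, 25, 30, 40, 60, 170]  # in, say, $1,000s
print(round(gini(incomes), 3))       # Gini for this toy economy
print(round(top_share(incomes), 3))  # fraction of income going to the top 10%
```

In this toy economy the Gini comes out at roughly 0.49 and the top decile takes about 42% of total income, which would sit between the European and US top-decile shares Piketty reports.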

It is typically these two measures that are used to bolster claims about increasing inequality in developed countries, though there are some others too. Pinker doesn’t deny the figures used by Piketty and others to support the emerging consensus: they really do show an increase in inequality in developed countries. He does think, however, that people misunderstand the significance of the figures. As I read him, he offers five correctives to the consensus view:

(a) Relative measures vs Absolute measures: The inequality figures track relative differences between people within a society not absolute differences. This gets back to his main philosophical objection. He thinks that how people are doing relative to one another is less important than how people are doing individually (i.e. in terms of their personal level of well-being). He thinks that poverty — which is usually measured in absolute terms (*though some countries use relative measures) — is a far more important measure than inequality. Poverty, so measured, seems to have been declining over the past fifty years.

(b) Global Inequality vs Developed World Inequality: The inequality figures reported by the likes of Piketty offer a biased snapshot of the global situation. If you look at measures of inequality across all countries, you see that there has been a marked decline. The increase is only seen in developed economies. The diagram below, which is taken from Pinker’s book, illustrates this point. It shows the percentage gain in real income across the different percentiles of the global income distribution between 1988-2008. As you can see, most income cohorts gained substantially in that period. It is only those in developed countries, who are between the top 30% and top 10% of the global income distribution, who did not gain substantially (though they did gain something). They are now, relatively speaking, worse off than their peers, particularly those in the global top 10%. Their plight is real and should not be ignored, but it should be kept in perspective.

(c) Anonymous vs Individualised Measures: The measures, particularly the income share measures, do not track particular individuals across their lifetimes but, rather, track abstract income cohorts in any given year. This is important because the data consequently don’t account for income mobility, i.e. the ability of people to move from one income cohort to another. Pinker cites a study showing that half of all Americans find themselves within the top 10% for at least one year in their lives, which might be taken to indicate that income mobility is not so bad.

(d) Measures sometimes overlook the effect of social transfers: Most developed economies operate reasonably generous welfare systems. This corrects for some of the inequality in earned income and means that the situation of those in the bottom end of the income distribution may not be as bad as is first thought. When trying to assess the overall level of inequality it is important to include the effect of social transfers. For example, Pinker cites a study indicating that the US Gini coefficient was a relatively high 0.51 in 2013 before taxes and transfers, but a more moderate 0.38 after taxes and transfers.

(e) Measures ignore the impact of abundance economics: Inequality measures do not account for the effects of technology and globalisation on the volume, availability and cost of goods and services. This is one of Pinker’s main points in the book. He argues that globalisation, by reducing the cost of goods and services, increases their affordability even for those on very low incomes. Furthermore, he argues that gains in technology mean that people have access to luxuries that were formerly not available. As he puts it “a dollar today, no matter how heroically adjusted for inflation, buys far more betterment of life than a dollar yesterday.” (2018, 117) The lifestyles of those on moderate incomes today would have been envied by aristocrats in a former era.

Each of these points has some merit. It is undoubtedly true that inequality statistics must be interpreted sensibly. The mere fact that inequality is increasing in developed nations does not mean that the world is going to hell in a handbasket; and there are subtleties in the data that are easily glossed over. But I also think that Pinker’s criticisms of the statistics have to be interpreted sensibly too. While points (a) and (b) seem relatively sound to me, my reading of the literature suggests that points (c) - (e) are less compelling. Inequality researchers are not idiots and they have tried to address these points. The picture that emerges from their attempts to do so indicates that things are less rosy than Pinker lets on.

For example, recent studies on income mobility are not entirely encouraging. Admittedly, a lot of this depends on how you define mobility. Jonathan Hopkin makes this point in relation to a 2014 Harvard study on social mobility which suggested that mobility had not changed much in the US in the past 50 years. The problem with this is that the study focused on mobility between different income cohorts, not on changes in how much income was earned by those cohorts. So, for example, the study showed that those who were born into the bottom 20% of the income distribution had an 8-9% likelihood of joining the top 20% in their lifetimes (not an altogether encouraging stat, even on face value), and that this figure hadn’t changed much since the 1970s. But this ignored the fact that the gap between incomes at the top and bottom has grown in this period of time, meaning that the starting difference counts for more than it did in 1971. In other words, we should be more concerned about the relative lack of mobility nowadays because stratification has raised the stakes. On top of this, a more recent study from Raj Chetty and his colleagues suggests that ‘absolute income mobility’ in the US — i.e. the percentage of children earning more than their parents — has dropped quite dramatically when you compare children born in the 1940s with those born in the 1980s (from 90% to 50%).

In a similar vein, recent studies on the effect of social transfers on inequality are not entirely encouraging. Gabriel Zucman, writing with Thomas Piketty and Emmanuel Saez, has just published a paper that tries to assess the impact of social transfers on income inequality. It finds that the overall effect is negligible, doing very little to offset the massive increase in income inequality between the bottom 50% and the top 10%. The paper argues that the bottom 50% did see a 21% increase in post-tax income between 1980 and 2014, but this increase was less than the increase in national income (and should be compared to a 205% increase in income to the top 1% during the same period). Furthermore, real incomes after tax actually stagnated because most of the transfers went to the elderly and middle classes, and came in the form of in-kind health spending or public goods spending.

Finally, although I am sympathetic to Pinker’s point about abundance economics, I think it too needs to be kept in perspective. There have undoubtedly been technological gains, and some things have become cheaper and more abundant. But there are also some things that have massively increased in price. The obvious examples of this are healthcare, education, and housing. These increases are not insignificant. They are leading to the immiseration of younger generations who now graduate from college with increased levels of personal debt only to work in an economy that doesn’t provide jobs with the same kinds of health benefits that their parents enjoyed and doesn’t pay enough for them to be able to afford housing in the urban centres in which they are employed.

In sum, Pinker is right to urge caution with respect to inequality statistics. You need to know what they actually tell you in order to keep the information in perspective. Nevertheless, even if you take his observations onboard, I think there are reasons to think that the picture is less rosy than he claims.

4. Conclusion
According to the emerging consensus, inequality is increasing in developed economies and this is a bad thing. In this post, I have considered Steven Pinker’s criticisms of that emerging consensus. Although Pinker does not dispute the rise of inequality, and accepts that it creates serious problems in some societies, he thinks those problems need to be kept in perspective. This is partly because inequality is not intrinsically bad and its presence does not mean that you live in an unfair society (the Philosophical Objection); and it is partly because inequality statistics can be misunderstood (the Measurement Objection).

I have evaluated both objections in this post. Although I agree with parts of what Pinker has to say, I think there are reasons to believe that inequality is intrinsically bad, and that its badness becomes more significant when you achieve a certain level of societal well-being. Furthermore, I think his critique of inequality measurements is not entirely fair. Researchers have addressed some of the issues he raises and when they do the picture is not as positive as he claims.

Monday, March 5, 2018

The Extended Mind, Ethical Parity and the Replaceability Criterion: A Critical Analysis

I was recently watching Netflix’s sci-fi series Altered Carbon. The basic premise of the show — which is based on a series of books by Richard Morgan — is that future humans develop a technology for uploading their minds to digital ‘stacks’. These stacks preserve the identity (“soul”) of the individual and can be transferred between different physical bodies, even after one of them has been ‘killed’. This has many social repercussions, one of which is that biological death — i.e. the destruction or fatal mutilation of the body — becomes a relatively trivial event. An inconvenience rather than a tragedy. As long as the stack is preserved, the individual can survive by being transplanted to another body.

The triviality of biological form is explored in many ways in the show. Violence is common. There are various clubs that allow people to destroy one another’s bodies for sport. There is also considerable inequality when it comes to access to new bodies. The wealthy can afford to clone their preferred body types and routinely transfer between them; the poor have to rely on social distribution schemes, often ending up in the bodies of condemned prisoners. Bodies in general have become commodities: things to be ogled, prodded, bought and sold. The show has been criticised for its gratuitous nudity — the male and female performers are frequently displayed partially or fully nude — but the showrunner has defended this, arguing that it is what you would expect in a world in which the body has become disposable. I think there is some truth to this. I think our attitude toward our bodies would be radically different if they were readily ‘fungible’ (i.e. capable of being replaced by an identical or ‘as good’ item).

What if the same were true of our minds? What if we could swap out parts of our minds as readily as we swap out the batteries in an old remote control? Destroying a part of someone’s mind is currently held to be a pretty serious moral offence. If I intentionally damaged the part of your brain that allowed you to remember faces, you’d hardly take it in your stride. But suppose that as soon as I destroyed the face-recognition part you could quickly replace it with another, functionally equivalent part? Would it be so bad then?

These are not purely speculative questions. Neuroscientists and neurotechnologists are hard at work on ‘brain prosthetics’ that could enable us to swap out brain systems. Furthermore, there are plenty of philosophers and cognitive scientists who claim that we already routinely do this with parts of our minds. They take a broad interpretation of what counts as a part of a ‘mind’, claiming that our ‘minds’ extend beyond the boundaries of our bodies and are distributed between our brains, our bodies, and our surrounding environments. Some of them argue that if we take this cognitive extension seriously, it leads us to an ‘ethical parity’ thesis (Levy 2007). This thesis holds that interferences with the non-neural parts of our minds carry just as much moral weight as interferences with the neural parts. This has two possible consequences, depending on the context and the nature of the interference: (i) it could mean that we ought to take non-neural interferences more seriously than we currently do; or (ii) that we should be less worried about neural interferences than we currently are.

In this post, I want to look at some arguments for taking the ethical parity thesis seriously. I do so by investigating an article by Jan-Hendrik Heinrichs which is skeptical of strong claims to ethical parity. I agree with much of what Heinrichs has to say, but his argument rests a lot of weight on the ‘replaceability’ criterion that I alluded to above and I’m not sure that this is a good idea. I want to explain why in what follows.

1. Understanding the Case for Ethical Parity
The ethical parity principle (EPP) was originally formulated by Neil Levy in his 2007 book Neuroethics. It came in two forms (2007, 67), both of which were premised on accepting that mental processes/systems are not confined to the brain:

Strong Parity: Since the mind extends into the external environment, alterations of external props used for thinking are (ceteris paribus) ethically on a par with alterations of the brain.

Weak Parity: Alterations of external props are (ceteris paribus) ethically on a par with alterations of the brain, to the precise extent to which our reasons for finding alterations of the brain problematic are transferable to alterations of the environment in which it is embedded.

The Strong EPP works from something called the ‘extended mind hypothesis’, which holds that mental states can be constituted by a combination of the brain and the environment in which it is embedded. To use a simple example, the mental act of ‘remembering to pick up the milk’ could, according to the extended mind hypothesis, be constituted by the combined activity of my eyes/brain decoding the visual information on the screen of my phone and the device itself displaying a reminder that I need to pick up the milk. The use of the word ‘constituted’ is important here. The extended mind hypothesis doesn’t merely claim that the mental state of remembering to pick up the milk is caused by or dependent upon my looking at the phone; it claims that it is emergent from and grounded in the combination of brain and smartphone. It’s more complicated than that, of course, and I have examined the hypothesis in detail in previous blogposts. Suffice to say, proponents of the hypothesis don’t allow just any external prop to form part of the mind; they have some criteria for determining whether an external prop really is part of the mind and not just something that plays a causal role in it. I’ll return to this below.

Heinrichs thinks there is a major problem with the Strong EPP. He says that the argument for it is flawed. If you look back at the formulation given above, you’ll see that it presents an enthymeme. It claims that because the mind is extended, external mental ‘realisers’ (to use the jargon common in this debate) carry the same moral weight as internal mental realisers. But that inference can only be drawn if you accept another, hidden premise, as follows:

  • (1) The mind extends into the external environment, i.e. external props contribute (in a constitutive way) to mental processes.

  • [Hidden premise: (2) All contributors to mental processes are on a par when it comes to their moral value]

  • (3) Therefore, alterations of external mental props that contribute to mental processes are ethically on a par with alterations of the brain.

The problem is that the hidden premise is not persuasive. Not all contributors to mental processes are morally equivalent. Some contributors could be redundant, trivial or easily replaceable and that seems like it could make a difference. I could destroy your smartphone, but you might have another one with the exact same information recorded in it. You might have suffered some short-term harm from the destruction but to claim that it is on a par with, say, destroying your hippocampus, and thereby preventing you from ever remembering where you recorded the information about buying the milk, would seem extreme. So parity cannot be assumed, even if we accept the extended mind hypothesis.

The Weak EPP corrects for this problem with the Strong EPP by making moral reasons part and parcel of the parity claim. Although not stated clearly, the Weak EPP effectively says that (because of mental extension) the reasons for finding interferences with internal mental parts problematic transfer over to external mental parts, and vice versa. Furthermore, the Weak EPP doesn’t require the extended mind hypothesis, which many find implausible. It can work from more modest distributed/embodied theories of cognition, which hold that both the body and its surrounding environment play a critical causal role in certain mental processes, even if they aren’t technically part of the mind. An example here might be the use of a pen and paper while solving a mathematical problem. While in use, the pen and paper are critical to the resolution of the puzzle, so much so that it makes sense to say that the cognitive process of solving the puzzle is not confined to the brain but is rather distributed between the brain and the two external props. This is true even if you don’t think the pen and paper are part of the mind. There is, in other words, an important dependency relation between the two such that if you find it problematic to disrupt someone’s internal, math-solving, brain-module while they were trying to solve a problem, you should also find it problematic to do the same thing to their pen and paper when they are mid-solution (and vice versa).

But even the Weak EPP has its problems. When exactly do the reasons transfer over? What reasons could we have for finding internal and external interferences in mental processes problematic? In short: when might some form of weak ethical parity arise?

2. Three Criteria for Parity: Original Value, Integration and Replaceability
Heinrich’s article focuses on three criteria that he thinks are relevant when considering whether there is ethical parity. I want to consider each in turn.

The first criterion focuses on the distinction between original value and derivative value. It’s easy to explain the distinction; harder to defend it. Go back to the pen and paper example from the previous section. You could argue that the pen and paper have no original/intrinsic value in this scenario. The value that they have is entirely derivative: it derives from the fact that they are currently playing an important part in your mathematical problem-solving process. If you had acted differently, they would have no value. For example, if you transferred to a different pen and paper because you made a critical error the first time round, the original pen and paper would have no value; or if you tried to solve the problem in your head, they would never acquire any value. In other words, you, and all your constituent parts, have original/intrinsic value, whereas the value of the external props and artifacts is entirely dependent on the uses to which you put them. Focusing on this distinction scuppers most claims to ethical parity. External props can never have quite the same moral weight as internal mental realisers because they will always lack original value.

Original Value Criterion: You and your constituent parts have original/intrinsic value but external props and artifacts have merely derivative value. Thus there will always be an important ethical distinction between what’s internal to you and what’s not.

The criterion is easy to explain because it has intuitive pull: we probably do think about ourselves (and what is a proper part of ourselves) in this way. But I think it is slightly more difficult to defend because it depends on a number of contested claims. The first contested claim concerns what actually counts as a proper part of ourselves. If we accept the extended mind hypothesis, then external props could count as proper parts of our selves and hence could have the same original value as internal parts (Heinrichs seems to accept this point). The second contested claim follows from this and concerns whether or not internal parts are always intrinsically valuable. If we cast a more critical eye over our internal parts, we might find that some of them do not really count as proper parts of our selves because they do not form some essential or integral element of who we are. In that case their destruction could be ethically trivial. For example, the destruction of one of my neurons is hardly an ethical tragedy: I can survive perfectly fine without it. This suggests, to me, that a single neuron lacks intrinsic value: it is not an essential part of who I am, even if it is internal to my body/brain. The third contested claim concerns whether or not all external props lack intrinsic value. I think this could be challenged. Some external props might have their own, independent value, e.g. aesthetic beauty. Admittedly, this is a tangential point in this debate, but it is worth bearing in mind.

The second criterion for assessing ethical parity is the degree of integration between the user and the external prop. Even though Heinrichs makes much of the original/derivative distinction he acknowledges that some external props could be so closely integrated with a person’s cognitive processes that their value, even though derivative, could be very high. Consider the surgeon who relies on robotic arms to help her complete a delicate operation; or the blind person who uses a cane to help them navigate. There is a high degree of integration between the external props and the user in both of these cases. If you broke down the robotic arms, or stole the cane, you would be doing something with a lot of moral disvalue. This is because the user depends so heavily on the prop that you would seriously disrupt their mental/cognitive processes by interfering with it.

Integration Criterion: When a user is highly integrated with an external prop it can have a high degree of moral value.

But how do you assess degrees of integration? Various sub-criteria have been proposed over the years. Richard Heersmink has argued that there are eight sub-criteria of integration, including (i) the amount of information that flows between the user and prop; (ii) the reliability of that information; (iii) the durability of the prop; (iv) the degree of trust placed in the prop; (v) the procedural transparency of the prop; (vi) the informational transparency of the prop; (vii) the degree of individualisation/customisation of the prop and (viii) how much the prop transforms the capabilities of the user. All of these seem sensible, and I agree that the more integrated a user is with the external prop the higher the moral value attached to it. But as Heinrichs points out, Heersmink’s criteria work best when we are dealing with information technologies, and not with other kinds of external props (e.g. brain stimulation devices).

This leads Heinrichs to consider another criterion, one that he thinks is particularly important: the replaceability criterion. To explain how he thinks about it, I will quote directly from his article:

Replaceability Criterion: “Generally, an irreplaceable contributor to one and the same cognitive process is, ceteris paribus, more important [i.e. carries more value] than a replaceable one.” (Heinrichs 2017, 11)

Using this criterion, Heinrichs suggests that many internal parts are irreplaceable and so their destruction carries a lot of moral weight, whereas many external props are replaceable and so their destruction carries less weight. That said, he also accepts that some external props could be irreplaceable, which means that destroying them would mean doing a serious wrong to an agent. However, he argues that such irreplaceability needs to be assessed over two different timescales. An external prop might be irreplaceable in the short-term — when mid-activity — but not in the long-term. Someone could steal a blind person’s cane while they are walking home, thereby doing significant harm to them with respect to the performance of that activity, but the cane could be easily replaced in the long-term. The question is whether this kind of long-term replaceability makes any moral difference. Intuitively, it seems like it might. Destroying something that is irreplaceable in both the short and long-term would seem to be much worse than destroying something that is replaceable in the long-term. Both are undoubtedly wrong, but they are not ethically on a par.

This brings us, at last, to the question posed in the introduction. If technology continues to advance, and if we develop more external props that allow us to easily replace parts of our brains and bodies — if, in some sense, the component parts of all mental processes are readily fungible — will that mean that there is something trivial about the destruction of the original biological parts? Here’s where the replaceability criterion starts to get into trouble. If you accept that the degree of replaceability makes a moral difference, you start slipping down a slope to a world in which many of our commonsense moral beliefs lose traction. The destruction of limbs and brain parts could be greeted with equanimity because they can be easily replaced. The counterintuitive nature of this world has led others to argue that the replaceability criterion should be deployed with some caution in this context. It clearly doesn’t capture everything we care about when it comes to understanding interpersonal wrongs: there are intrinsic wrongs/harms associated with destroying parts of someone’s body or mental processes that need to be taken very seriously, even if those parts are easily replaceable. Replaceability cannot erase all wrongdoing.

But this suggests to me that more needs to be said about when replaceability really matters and when it doesn’t. It’s possible, after all, that our moral intuitions about right and wrong should not be trusted in a world of perfect technological fungibility. One suggestion I have is that the intrinsic/instrumental value distinction could play an important role in determining when replaceability really matters. Some things are intrinsically valuable: any replacement with a functional equivalent will fail to provide the same level of value. Consider a beloved family pet. You could replace it with another pet, but it wouldn’t be the same. Other things are instrumentally valuable: replacement with a functional equivalent would provide the same level of value. Consider a knife or fork that you use to eat your food. If one falls on the ground and is replaced by a functional equivalent we don’t lament the loss of the original.

So I think the critical question, then, is whether the parts of our cognitive/mental processes (or biological systems) have some intrinsic value, such that any replacement would fail to provide the same level of value, or whether they are merely instrumentally valuable: they matter because they help to sustain us in certain activities and, ultimately, sustain our personal identities. I tend to favour the instrumental view, which would imply that nostalgia for our original biological parts is irrational in a future of perfect technological fungibility. This does not mean that there is nothing wrong with attacking someone and interfering with those parts. It just means that it might be less wrong than it is in our current predicament.

That might be a disturbing conclusion for some.

Saturday, March 3, 2018

Episode #37 - Yorke on the Philosophy of Utopianism


In this episode I talk to Christopher Yorke. Christopher is a PhD candidate at The Open University. He specialises in the philosophical study of utopianism and is currently completing a dissertation titled ‘Bernard Suits’ Utopia of Gameplay: A Critical Analysis’. We talk about all things utopian, including what a 'utopia' is, why space exploration is associated with utopian thinking, and whether Bernard Suits is correct to say that games are the highest ideal of human existence. You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 2:00 - Why did Christopher choose to study utopianism?
  • 6:44 - What is a 'utopia'? Defining the ideal society
  • 14:00 - Is utopia practically achievable?
  • 19:34 - Why are dystopias easier to imagine than utopias?
  • 23:00 - Blueprints vs Horizons - different understandings of the utopian project
  • 26:40 - What do philosophers bring to the study of utopia?
  • 30:40 - Why is space exploration associated with utopianism?
  • 39:20 - Kant's Perpetual Peace vs the Final Frontier
  • 47:09 - Suits's Utopia of Games: What is a game?
  • 53:16 - Is game-playing the highest ideal of human existence?
  • 1:01:15 - What kinds of games will Suits's utopians play?
  • 1:14:41 - Is a post-instrumentalist society really intelligible?

Wednesday, February 28, 2018

But is it cheating? Some thoughts on robots and sexual infidelity

[This is a short article that I wrote in collaboration with Neil McArthur, for promotional reasons, when the Robot Sex book (pictured above) was coming out. Since it is unlikely to be published now, I thought I would share it here. It's not the most rigorous piece I've ever written, but I think the core insight is worthwhile.]

The comedian Richard Herring recently created a series of sketches centred around the question ‘is it cheating if you have sex with a robot?’ As someone who has been researching the topic of sexual relationships with robots for several years, I am dismayed to find that this is the most common question I get asked. The advent of sophisticated sex robots raises a number of important ethical questions for society, but the cheating question does not seem to be among them.

But since others think it is important it probably behooves me to provide an answer. Here’s my best shot.

First, I presume that people have something like the following in mind when they ask the question:

“Given that I am in a serious and committed relationship with another human being, if I have sex with a robot, does this count as cheating on my human partner?”

To answer that, you need to think about what it means to ‘cheat’. There are at least two distinct meanings of the word in common parlance. The first, which is specific to intimate relationships, is that cheating occurs when one person engages in sexual contact with someone other than their ‘official’ intimate partner. The second, which applies more generally, is that cheating occurs when you break the rules of a given practice to gain an advantage. To avoid confusion, we can refer to the first as ‘cheating*’ and the second as simply ‘cheating’.

When people ask the cheating-question, they usually focus on cheating*, but I argue that this is a mistake. They really should focus on cheating. Why? Well, for one thing, cheating* is not an issue in some relationships. Some people have ‘arrangements’ that open the doors to intimate sexual contact with third parties. They have established internal ground rules for their relationships that say that this is permissible, under certain circumstances. They care about breaking those rules, not about infidelity per se.

Furthermore, the focus on cheating* forces us into endless debates about which forms of intimate contact count as cheating*. Do you cheat* if you kiss someone else? What about if you send them explicit text messages? This encourages people to take an overly technical and legalistic approach to their relationships — to hope that they can avoid their partner’s ire or disappointment by falling outside the technical definition of cheating*. They consequently overlook or ignore what’s really bothering their partner about their conduct: the sense of betrayal or emotional harm. They care about cheating* when they should care about cheating.

If we accept this shift in focus, the cheating-question becomes relatively easy to resolve. It’s simply a question of whether the internal rules of the relationship forbid the use of sex robots. That’s something that the parties to the relationship should determine for themselves, through negotiation and agreement. In a liberal society, this seems like the right approach: intimate partners should be able to determine the rules of engagement for their own relationships without being dictated to by societal norms (provided that their own rules don’t breach other legitimate laws).

But, of course, it’s not that simple. Most people don’t set down explicit rules of engagement for their relationships -- even though they probably should. It could save a lot of heartache and upset if they did. Instead, they figure things out as they go along and rely on general social conventions to fill the gaps in any rules they may have agreed. This is not an unusual practice. In education, for example, official assessments are often governed by explicit rules that determine what counts as cheating, but those rules don’t cover every possible form of cheating or address novel technologies that enable new forms of cheating. Assessors rely on background norms of fair play to fill in the gaps. These background norms form the basis for ‘default rules’ that apply until they are overridden (or confirmed) by explicitly articulated rules.

This means that even if we do focus on cheating rather than cheating* we cannot completely avoid the technical questions as to whether having sex with a robot counts as cheating. We have to also consider whether society’s default rules forbid the use of sexbots in relationships. This is tricky since it is a novel and emerging technology and we don’t have agreed-upon societal expectations in relation to it. We only have analogies. At the moment, the default rule in most Western societies seems to favour monogamy. Things may be changing in this respect, of course, and certain pockets of society may have clearly adopted non-monogamous default rules, but within the 'pockets' that I frequent I don't see anything happening to shift the presumption in favour of monogamy. This default rule holds that having intimate contact with a person other than your ‘official’ partner is a form of cheating. If you are entering into a relationship with someone, you would need to explicitly agree to deviate from this default rule. But what about other tricky cases that threaten the default rules? For example, what are society’s default rules in relation to the use of masturbatory sex toys and pornography? Things seem much fuzzier here. Historically, I suspect that the use of both would have counted as a form of intimate betrayal (i.e. cheating). Nowadays, I am not so sure. People now often take it as a given that their partners will watch pornography or use sex toys without their explicit consent.

Using these two examples as a guide suggests that whether a couple needs an explicit override or not with respect to sexbots depends on whether they think the sexbot is more like a person or like a sex toy. For the foreseeable future, sexbots will not be persons, but they will look and act in person-like ways. And because they look and act like persons, it’s unlikely that sexbots will be viewed simply as another sex toy. They will lie in a zone of uncertainty. That means, for the foreseeable future, if you plan on using a sex robot whilst in a committed relationship, you should probably explicitly negotiate for this with your partner. That’s the only way to be sure you are not cheating. But in the long-term, as sex robots proliferate, their use may be normalised, and so not something that needs to be explicitly negotiated.

Monday, February 12, 2018

Taking the Relational Turn: How should we think about the moral status of animals, robots and Others?

How should we think about the moral status of non-human (or pre-human) entities? Do animals/robots/foetuses have moral status? If so, why? It is important to get the answer right. Entities with moral status are objects of moral concern. We typically owe duties to them and they may have rights against us. Furthermore, we don’t want to make any moral errors. We don’t want to mistreat a proper object of moral concern or impose burdensome and unnecessary duties. How can we avoid this?

David Gunkel and Mark Coeckelbergh try to provide some answers in their paper ‘Facing Animals: A Relational, Other-Oriented Approach to Moral Standing’. As you might guess from the title, the paper is primarily about the moral status of animals, but the position defended therein is of broader ethical significance. In essence, Gunkel and Coeckelbergh argue that when thinking about the moral status of animals (and other entities) we should take the ‘relational turn’:

The Relational Turn: When thinking about the moral status of non-human entities we should focus less on their intrinsic metaphysical properties and more on how we relate to them.

In the remainder of this post, I want to set out Gunkel and Coeckelbergh’s case for the relational turn, explaining what that means in more concrete terms, and offering some critical reflections of my own.

1. Against the Properties Approach
Gunkel and Coeckelbergh present the relational turn as an alternative to what they claim is the dominant approach in the field of animal ethics: the properties approach. The properties approach answers questions of moral status by focusing on the ontological properties of the entity in question. To give an example, two of the most famous voices in the field of animal ethics are Peter Singer and Tom Regan. Both make strong arguments in favour of the moral status of animals, but do so from different moral traditions. Singer is a utilitarian; Regan is a Kantian. Nevertheless, both build their arguments from claims about the ontological properties of animals. For Singer, what matters for moral status is the capacity for suffering. If animals have this capacity, then they are proper objects of moral concern, and we have a duty to prevent their suffering. For Regan, what matters is whether animals can be the ‘subject of a life’. If they exemplify this property, then they are proper objects of moral concern and we have corresponding duties toward them.

Both arguments are quintessential examples of the properties approach in action. To put that approach on a more formal footing, we can say that the following represents the Singer/Regan-style argument for moral status:

  • (1) Any entity that exemplifies property P [‘capacity for suffering’/‘being the subject of a life’] has moral status.

  • (2) Animals exemplify property P.

  • (3) Therefore, animals have moral status.

This is an abstract template. Singer and Regan fill it out in particular ways, and those ways have proven quite influential, but you could dispute their take on it. Perhaps it is some other property (or combination of properties) that really matters when it comes to determining moral status (e.g. the capacity for conversation/speech or the capacity for religious belief)? It is worth stressing this flexibility of the properties approach: it becomes important below.

Despite this flexibility, Gunkel and Coeckelbergh argue that this properties approach to animal ethics is fundamentally misguided. They offer four main criticisms. These criticisms do not target particular premises of the Singer/Regan-style argument; instead, they take issue with the entire Singer/Regan enterprise.

The first criticism is that the properties approach proceeds from an unexamined anthropocentric bias. In other words, proponents of the approach start with properties that humans clearly exemplify, such as sentience or self-awareness, and then work outwards from those properties to determine the moral status of animals. If animals are sufficiently human-like with respect to those properties, they will be welcomed into the community of moral concern. If they are not, they are excluded. This, then, is a critique of the reasoning procedure followed by proponents of the properties approach.

The second criticism is that the properties approach faces significant epistemological problems. Many of the properties favoured in Singer/Regan-style arguments are epistemically opaque. How can we know if an animal suffers or is the subject of a life? We don’t have direct epistemic access to these states of being. We have to infer them from outward behaviour, and this leads to many interminable disputes. Is the dog really suffering because it yelps? Does it have the concept of itself as a continuing being? We can never know for sure. Of course, if this is really a problem, then it is a problem for how we determine the moral status of humans too. After all, we don’t have direct access to another human being’s inner mental life. But Gunkel and Coeckelbergh argue that there is just much more ambiguity and doubt in the case of animals.

The third criticism is that the properties approach creates an illusion of neutrality when it comes to determining moral status. The idea is that the presence or absence of the relevant properties can be objectively and neutrally determined. It is a matter of fact whether or not an animal can suffer; it is a matter of fact whether they are the subject of a life. These are matters to be determined by scientists and animal behaviourists, not ethicists. But this ignores how deeply moral/ethical the determination of moral status really is.

The fourth criticism is that the properties approach often involves sticking with a traditional and defective method for determining moral status. The decisions as to which properties ‘count’ are ones that are typically made before we are born and are deeply embedded in social norms and practices. This is why, historically, women and slaves were excluded from moral communities. To persist with the properties approach is to persist with these dubious social and cultural norms.

I have to say I have some problems with each of these criticisms. I certainly don’t think any of them poses a fatal problem for the properties approach. On the contrary, most of them seem to be either unavoidable (the first and second criticisms) or just problems with how the method has or might be employed (the third and fourth criticisms). These problems seem surmountable to me. But I’ll set those concerns to the side for now and consider the merits of the ‘relational’ alternative.

2. What does the relational turn entail?
As noted in the intro, taking the relational turn involves focusing on how other beings relate to us and enter into our lives, and not on their metaphysical properties. It is our relations to these ‘Others’ that raise ethical questions about their status, not some prior knowledge of their metaphysical properties. In promoting this relational approach, Gunkel and Coeckelbergh are heavily influenced by the work of the phenomenologist Emmanuel Levinas. He argued that ontology does not precede morality. On the contrary, the primary fact of existence is its relationality, i.e. the fact that we are in the world with others who intrude upon us in various ways. This intrusion necessitates a moral response, and as part of that response we start to parse our relations into ontological categories. Moral engagement with the Other is the more fundamental fact of existence.

Here’s where things get a little obscure and linguistically challenging. Levinas (and others) explain this way of thinking by asking whether other beings in the world ‘take on a face’. This ‘taking on a face’ seems to be the equivalent of taking on ‘moral status’. Gunkel and Coeckelbergh like this terminology and argue that the ‘face-taking’ question is the central one in animal ethics because it is distinct from the properties question. They formulate the ‘face-taking’ question in the following terms:

Face-taking question: What does it take for an animal (or an ‘Other’) to supervene and be revealed as having face? Or, to put it another way, under what practical conditions does an animal get included in a moral community?

Asking this question takes us away from the properties-oriented mindset. To further explain the shift, Gunkel and Coeckelbergh reference the work of Donna Haraway, who argued that the crucial question in animal ethics is not whether animals can suffer but whether they can ‘work’ or ‘play’, whether we can enter into embodied interpersonal interactions with them, and so on. Gunkel and Coeckelbergh then focus on the conditions under which animals start to enter into meaningful and morally significant relations with us. Their discussion gets quite detailed, but they single out two things that seem to be quite important in determining whether animals get included or not.

The first is the ‘naming’ of an animal. Giving an animal a proper name is a speech act with moral consequences. It draws the animal inside your moral circle. This is an idea that often features in media representations of animals. I recall many TV shows from when I was younger that involved plotlines in which a child named some farm animal that they were later told was going to be slaughtered and eaten. The prior naming gave the subsequent awareness of slaughter a moral seriousness that it would otherwise have lacked. We feel closer to the animals we name and care more about their fate.

The second important condition is the physical location of the animal. Animals that live outside our homes — in the fields and countryside — are different from animals that share our homes. By inviting them into our homes we invite them into our moral circles:

For an animal, it matters a great deal where it is, in which place it is, and what techniques and technologies have been used to position it. For example, a ‘‘pet’’ is in the house. This means it is part of the human domicile, the sphere of the ‘‘who-s’’ as opposed to the ‘‘what-s’’.
(Gunkel and Coeckelbergh 2014, 727)

These are two clear examples of how we might answer the face-taking question. They give us a sense of the conditions under which animals can ‘take on a face’. But where does it actually get us? Gunkel and Coeckelbergh acknowledge that taking the relational turn does not necessarily give us clear ethical guidance:

Note that this analysis of conditions of possibility for relations does not in itself advance a straightforward normative position; it does not say that we should treat domesticated farm animals in a more personal way. 
(2014, 730)

But they claim that this was not their goal. Their goal was to get us to think differently about the question of moral status.

3. Criticisms and Reflections
Gunkel and Coeckelbergh’s case for the relational turn is an interesting one, and I think the basic idea of the relational turn is worth taking seriously, but I have some concerns about their whole project. I would like to close by offering these up for consideration.

First, I’m not convinced that taking the relational turn draws us that far away from the properties approach. I guess it all depends on what you mean by a ‘property’, but I would argue that the face-taking question posed by Gunkel and Coeckelbergh is very much cut from the same cloth as the properties approach they are so keen to criticise. As noted above, the properties approach follows an abstract argument template. The kinds of properties that are relevant to determinations of moral status could be different from those appealed to by Singer and Regan. There is some in-built flexibility. Indeed, I think it could include the relational properties (does the Other ‘have a name’ or ‘live in close proximity to us’) mentioned by Gunkel and Coeckelbergh. If there is any real distinction between the approaches it is that the Singer/Regan approach focuses on properties that are (allegedly) intrinsic to the animal, whereas the Gunkel/Coeckelbergh approach focuses on properties that arise from the relations between the animal and its environment. But I don’t see why that distinction means the two approaches have to be in opposition to one another; both sets of properties could be crucial when determining moral status.

Second, in not offering any normative guidance, and in claiming this was not their intention, Gunkel and Coeckelbergh are doing something that I find a little bit disingenuous. After all, surely it is the normative question that motivates this entire inquiry? We want to know when and whether we are making errors in the ascription of moral status. That’s certainly what motivates the Singer/Regan-style argument. To shift focus to the more descriptive question — ‘under what conditions do animals enter our moral communities’ — is at best an interesting diversion and, at worst, a distraction from what really matters. I suspect that if we want to answer the normative questions, we will need to stick with the more traditional properties-style of reasoning.

Finally, although Gunkel and Coeckelbergh criticise the properties approach on the grounds that it is anthropocentric and (potentially) premised on defective moral traditions, it seems to me that the relational approach is equally susceptible to these critiques. Indeed, focusing on relational properties in preference to intrinsic properties makes human beings far more central to ascriptions of moral status than the properties approach of Singer and Regan does. Furthermore, the relational approach also risks morally ossifying traditional conceptions of how we ought to relate to animals and other non-human entities. For example, we might continue to think that we don’t need to care about the cows in the fields because (a) we don’t give them names and (b) we don’t invite them to live in our homes. If anything, the Singer/Regan approach is more potentially disruptive of this traditional moral complacency about animals. Again, I appreciate that Gunkel and Coeckelbergh don’t make normative claims on behalf of the conditions they identify, but in not doing so I fear they make such complacency more excusable.

Wednesday, February 7, 2018

The Quantified Relationship: Target Article with Replies

Along with Sven Nyholm and Brian Earp, I have just published a target article in the American Journal of Bioethics on the use of quantified self technologies in intimate relationships. There are eleven response papers, including one from us responding to the responses.  You can access the issue here (and -- shhh! -- a version of the target article here). Full details of each paper below:

The Quantified Relationship (2018) AJOB 18(2): 3-19  by Danaher, Nyholm and Earp
Abstract: The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this possible can also track and gamify aspects of our interpersonal, romantic relationships. Several authors have begun to challenge the ethical and normative implications of this development. In this article, we build upon this work to provide a detailed ethical analysis of the Quantified Relationship (QR). We identify eight core objections to the QR and subject them to critical scrutiny. We argue that although critics raise legitimate concerns, there are ways in which tracking technologies can be used to support and facilitate good relationships. We thus adopt a stance of cautious openness toward this technology and advocate the development of a research agenda for the positive use of QR technologies.