Monday, June 29, 2015

Technological Unemployment and Personal Well-being: Does work make us happy?




Let’s assume technological unemployment is going to happen. Let’s assume that automating technologies will take over the majority of economically productive labour. It’s a controversial assumption, to be sure, but one with some argumentative basis. Should we welcome this possibility? On previous occasions, I have outlined some arguments for thinking that we should. In essence, these arguments claimed that if we could solve the distributional problems arising from technological unemployment (e.g. through a basic income guarantee), then freedom from work could be a boon in terms of personal autonomy, well-being and fulfillment.

But maybe this is wrong. Maybe the absence of work from our lives will make us miserable and unfulfilled? Today, I want to look at an argument in favour of this alternative point of view. The argument comes from Nicholas Carr’s recent book on automation. Carr has a bit of a reputation as a technology-doomsayer. But I think he sometimes makes some reasonable points. His argument on work is quite interesting. When I first read it, I didn’t think much of it. But upon re-reading, I saw that it is slightly more subtle and interesting than I first supposed.

Carr’s argument rests on two main claims: (i) the importance of the ‘flow’ state in human well-being; (ii) our inability to be good judges of what will get us into such ‘flow’ states. These two claims directly challenge the typical anti-work arguments. Let’s see exactly how it all fits together.


1. A Simple Anti-Work Argument
We start by considering the anti-work view, i.e. the one that is opposed to what Carr has to say. I won’t consider any particular proponent of this view, though there are many. Instead, I’ll consider a simple, generic version of it.

The anti-work view is premised on the notion that work is generally unpleasant and undertaken against our will. Proponents of the view highlight the valorisation and glorification of the work ethic in contemporary capitalist societies. They claim that we have all been duped into making a virtue of an economic necessity. Work is labour undertaken for some economic reward (or hope of such a reward), but we don’t really get to choose our preferred form of labour. The market dictates what is economically valuable. If we are lucky, we get to do something we don’t hate. But even if we are lucky, we will soon find that work invades our lives. We will spend the majority of our time doing it, and the time that we are not working will be spent recovering from or preparing for it. And it gets even worse. In the modern era, there is a creeping erosion of our leisure time, and a collapse in the possibility of achieving a work-life balance. Communications technologies mean that we are always contactable, always switched on, and always working.

Wouldn’t it be so much better if we could remove these work-related pressures from our lives? If machines could take over all economically important labour, we would be free to spend our time as we wish. We could pursue projects of genuine personal, social and moral interest. We could rebalance our lives, spending more time at leisure, engaging in what Bob Black has called the ‘ludic life’. Surely, this would be a more healthful, meaningful and fulfilling existence?

To put all this into a slightly more formal argument:


  • (1) If we are free to choose how to spend our time (rather than being forced to work for a living), then we will engage in activities that confer greater levels of well-being and meaning on our lives.
  • (2) If there is technological unemployment, we will be free to spend our time as we please.
  • (3) Therefore, if there is technological unemployment, we will be able to engage in activities that confer greater levels of well-being and fulfillment on our lives.


There are several problems with this argument. For one thing, I suspect that premise (2) is unpersuasive in its present form. The notion that freedom from work will automatically free us up to spend our time as we please sounds naive. As hinted at above, a lack of employment could lead to a severe existential crisis, as people would still need to find the resources to meet their basic needs. That might make them even less ‘free’ than they were before they lost their jobs. Unless that distributional problem can be addressed, premise (2) will be a weak link in the chain of reasoning.

But as I mentioned above, let’s assume that this particular issue can be resolved. Focus could then shift to premise (1). This is the one that Carr seems to cast into doubt.


2. The Importance of Flow and the Paradox of Work
Carr’s argument centres around the concept of the ‘flow state’. This is something that was brought to popular attention by the psychologist Mihaly Csikszentmihalyi. It is a state of mental concentration and immersion that is characterised by a strong positive affective experience (sometimes described as ‘rapture’ or ‘joy’). It is distinct from states of extreme mental concentration that are characterised by negative affective experience. A flow state is something you have probably experienced at some point in your life. I know I sometimes get it while writing.

The interesting thing, from Carr’s perspective, is that the flow state seems to be an important component of well-being and fulfillment. And, perhaps more importantly, that we aren’t very good at identifying the activities that help us to bring it about. This is due to the ‘paradox of work’, which was also described by Csikszentmihalyi.

In a series of experiments, Csikszentmihalyi used something called the Experience Sampling Method (ESM) to gauge what sorts of activities most increased people’s feelings of subjective well-being and happiness. The ESM samples experimental subjects’ moods at intervals throughout a typical day. The subjects wear a device (in the original studies it was a pager) that beeps them at certain times and asks them to complete a short survey. The survey itself asks them to explain what they were doing at that moment in time, what skills they were deploying, the challenges they faced and their psychological state.

In the 1980s, Csikszentmihalyi used this method on groups of workers from around Chicago. The workers came from different industries. Some were in skilled jobs; some were in unskilled jobs. Some were blue-collar; some were white-collar. They were given pagers that beeped on seven occasions during the course of the day, and they were asked to complete the associated surveys each time.

The results were interesting. Csikszentmihalyi and his colleagues found that people were happier working than they were during leisure time. People felt fulfilled and challenged by work-related activities; whereas they felt bored and anxious during their time off. And yet, despite this, people said that they didn’t like working and that they would prefer to be taking time off. This is where the so-called ‘paradox of work’ comes into play. According to the results of the ESM, people are happier at work than they are at leisure; and yet people still express a desire not to be working.
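
To make the shape of this finding a little more concrete, here is a minimal sketch in Python of how ESM responses might be summarised. The records are invented purely to illustrate the analysis; they are not Csikszentmihalyi’s data. The paradox shows up when average reported affect is higher at work while the stated wish to be doing something else is also higher at work.

```python
from statistics import mean

# Each record is one ESM "beep": the respondent's context, how positive
# they felt (0-10), and whether they wished they were doing something else.
# The values are invented to illustrate the analysis, not to report results.
samples = [
    {"context": "work",    "affect": 7, "wish_elsewhere": True},
    {"context": "work",    "affect": 6, "wish_elsewhere": True},
    {"context": "work",    "affect": 8, "wish_elsewhere": False},
    {"context": "leisure", "affect": 4, "wish_elsewhere": False},
    {"context": "leisure", "affect": 5, "wish_elsewhere": True},
    {"context": "leisure", "affect": 3, "wish_elsewhere": False},
]

def summarise(records, context):
    """Average affect and share wishing to be elsewhere, for one context."""
    subset = [r for r in records if r["context"] == context]
    return {
        "mean_affect": mean(r["affect"] for r in subset),
        "share_wishing_elsewhere": sum(r["wish_elsewhere"] for r in subset) / len(subset),
    }

for ctx in ("work", "leisure"):
    print(ctx, summarise(samples, ctx))

# The 'paradox of work' pattern: affect is higher at work, yet the expressed
# wish to be elsewhere is also higher at work.
```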

What are we to make of this? Carr thinks that the results of Csikszentmihalyi’s study provide an example of a broader psychological phenomenon: the problem of miswanting. This is something that has been documented by the psychologists Daniel Gilbert and Timothy Wilson: people often want things that they think will make them happy but that end up having the opposite effect. In this respect, certain social conventions surrounding the importance of spending time with one’s friends and family may be encouraging people to block out the positive feelings associated with work, and biasing them in favour of activities that don’t really make them happy.

But why is it that leisure time is not as fulfilling as work? The answer comes from the importance of having some level of challenge and pressure in one’s life. Csikszentmihalyi identifies nine different factors that contribute to the attainment of the flow state. These include achieving the right balance of mental exertion and anxiety. Too much external pressure, arousal and anxiety and you won’t be able to enter a flow state; too little and you will also miss it. The problem is that during ‘down time’ we often fail to have the right amount of pressure, arousal and anxiety. Consequently, we lapse into the bored and listless state that Csikszentmihalyi found amongst his experimental subjects. Work has the benefit of imposing a structure and schedule that encourages the right level of arousal and anxiety.
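
Csikszentmihalyi’s point about balance is often summarised with a simple challenge-skill model: flow arises when a demanding task is roughly matched to one’s abilities, too much challenge relative to skill tips into anxiety, and too little tips into boredom or apathy. Here is a toy Python encoding of that classification; the thresholds are my own arbitrary choices rather than anything empirically calibrated.

```python
def classify_state(challenge, skill):
    """Crude version of the challenge-skill quadrant model.

    Both inputs are self-ratings on a 0-10 scale. The thresholds below are
    illustrative only.
    """
    HIGH = 5.0
    if challenge >= HIGH and skill >= HIGH and abs(challenge - skill) <= 2:
        return "flow"      # demanding task, well matched to ability
    if challenge > skill + 2:
        return "anxiety"   # over-stretched: too much pressure
    if skill > challenge + 2:
        return "boredom"   # under-stretched: too little pressure
    if challenge < HIGH and skill < HIGH:
        return "apathy"    # neither challenged nor engaged
    return "control"       # competent, but not fully absorbed

print(classify_state(8, 7))   # flow
print(classify_state(9, 3))   # anxiety
print(classify_state(2, 8))   # boredom
print(classify_state(2, 2))   # apathy
```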

Carr sums up the position in the following quote:

…a job imposes a structure on our time that we lose when we’re left to our own devices. At work, we’re pushed to engage in the kinds of activities that human beings find most satisfying. We’re happiest when we’re absorbed in a difficult task, a task that has clear goals and that challenges us not only to exercise our talents but to stretch them. We become so immersed in the flow of our work…that we tune out distractions…Our usually wayward attention becomes fixed on what we’re doing.
(Carr 2015, 16)

In short, as Carr sees it, we are often happiest while working.


3. The Case against Anti-Work and Technological Unemployment
How does all this translate into an argument against technological unemployment? The simplest thing to say is that the evidence introduced by Carr casts into doubt the conditional claim embodied in premise (1). This premise seems to be claiming that there is a causal link between the freedom to choose how to fill one’s time and the level of well-being and fulfillment that one experiences. This now seems to be in doubt. It looks like mere freedom to choose how to fill one’s time is not enough. One must fill one’s time with the right kinds of activities. People might be able to do this without the rigid structure of a job — Carr himself concedes as much — but often they will not. They will be tempted to rest on their laurels and won’t have the pressures and challenges required for truly immersive engagement.

This, then, is the problem with technological unemployment: the kinds of automating technology that take away human jobs will also take away the pressures, anxieties and structures needed to attain flow. Indeed, the situation will be exacerbated if the same kinds of automating technology filter into our leisure time as well (e.g. if people start to use automating technologies to assist with the challenging and difficult aspects of their hobbies). In short:


  • (4) The attainment of flow states is an important component of human well-being.
  • (5) If left to their own devices, people are often bad judges of what will get them into a flow state: they may need the pressure and structure imposed by employment to get them to engage in the right sorts of activity (support: Csikszentmihalyi’s work)
  • (6) Therefore, mere freedom to choose how to spend one’s time is no guarantee that the time will be spent engaging in activities that confer greater levels of fulfillment and well-being.


The result is the negation of premise (1).

Is this argument any good? Even if I concede premise (4), I have a few worries. For one thing, I worry about the over-reliance on Csikszentmihalyi’s work. I know the concept of the flow state is widely endorsed, but I’m not so sure about the paradox of work. The study Carr refers to was performed during the 1980s. Has it been confirmed in subsequent studies? I don’t know and I simply have to plead ignorance on the psychological science front here (if you know of follow-up studies or similar studies please let me know in the comments section). One thing that does strike me, however, is that in discussing this one example, Carr refers to the notion that people were socially conditioned into thinking that leisure time should be more pleasurable than work. It seems to me that there is a countervailing type of social conditioning that tries to glorify the ideal of being ‘busy’ and ‘working’. Could this be tricking us into thinking that our working lives are more valuable than they actually are?

The second worry I have relates to premise (5). As someone who effectively sets their own agenda for work, I see no reason to suppose the absence of the employment-relation would rob us of the ability to achieve true flow states. In particular, I see no reason to suppose that waged labour is the only thing that could provide us with the pressures, challenges and structures needed to engage in truly immersive activity. Indeed, it seems somewhat patronising to suggest that employment is the best way for most people to achieve this. There are plenty of other pressures and challenges in life (e.g. self-imposed goal setting and reinforcement from one’s social peers). Indeed, modern technology may actually help to provide a framework for such pressures and challenges outside of waged labour, for example through social-sharing and gamification. I’m not saying these are good things; I am just saying there are other ways of achieving the end that Carr seems to desire.

That said, I do think there is something to worry about when it comes to automation and personal fulfillment. There is a danger that automation will be used by people to avoid all seemingly unpleasant or challenging activities, in the private sphere as well as in the economic sphere. But the danger associated with this must be kept in perspective. There is a tendency among automation doomsayers to assume that automation will take over everything and we will be left with nothing. But this is just as naive as the view that being free to choose one’s activities will make one happier. Automating some activities can free us up to pursue others, i.e. to exercise our creativity and ingenuity in other ways. The potential benefits of this, when weighed against the degrading and negative aspects of waged labour, ought to be kept in mind.

Anyway, that’s it for this post. To briefly recap, anti-work enthusiasts often make the case against work by appealing to the notion that being free to spend one’s time as one chooses will allow one to engage in activities that confer greater fulfillment and well-being. Carr, relying on the work of Csikszentmihalyi, argues that this is too simplistic. People are often bad judges of what kinds of activities confer the most benefits. In particular, they are bad at choosing activities that will help them to reach a flow state. Csikszentmihalyi’s studies suggest that people are often happier working than they are at leisure. This is because they need some pressure and challenge in life. Work may be the best source of this pressure and challenge. Although I think this is an interesting argument, and I agree about the simplicity of some anti-work arguments, it seems to me to have several weaknesses. In particular, it seems to over-rely on one study; ignore many of the negative aspects of work; and assume too readily that work is the best (or only) source of pressure and challenge.

Thursday, June 25, 2015

The Logic of Surveillance Capitalism




You have probably noticed it already. There is a strange logic at the heart of the modern tech industry. The goal of many new tech startups is not to produce products or services for which consumers are willing to pay. Instead, the goal is to create a digital platform or hub that will capture information from as many users as possible, to grab as many ‘eyeballs’ as you can. This information can then be analysed, repackaged and monetised in various ways. The appetite for this information-capture and analysis seems to be insatiable, with ever-increasing volumes of information being extracted and analysed from an ever-expanding array of data-monitoring technologies.

The famous Harvard business theorist Shoshana Zuboff refers to this phenomenon as surveillance capitalism and she believes that it has its own internal ‘logic’ that we need to carefully and critically assess. The word ‘logic’ is somewhat obscure in this context. To me, logic is the study of the rules of inference and argumentation. To Zuboff, it means something more like the structural requirements and underlying principles of a particular social institution — in this instance the institutions of surveillance capitalism. But there’s no sense in getting hung up about the word. The important thing is to understand the phenomenon.

And that’s what I want to do in this post. I want to analyse Zuboff’s characterisation and assessment of the logic of surveillance capitalism. That assessment is almost entirely negative in nature, occasionally hyperbolically so, but it contains some genuinely provocative insights. Those insights are marred, however, by the fact that Zuboff’s writing is esoteric and not always enjoyable to read, largely due to her opaque use of language. I’m going to try to simplify and repackage what she has to say here.

Zuboff identifies four key features in the logic of surveillance capitalism. In doing so, she explicitly follows Google’s chief economist, Hal Varian, who identified these same features. These four features are: (i) the drive toward more and more data extraction and analysis; (ii) the development of new contractual forms using computer-monitoring and automation; (iii) the desire to personalise and customise the services offered to users of digital platforms; and (iv) the use of the technological infrastructure to carry out continual experiments on its users and consumers.




Each of these four features has important social repercussions. Let’s look at them in more depth.


1. Data Extraction and Analysis
The first feature of surveillance capitalism is probably the most obvious. It is the insatiable appetite for data extraction and analysis. This is what many refer to under the rubric of ‘big data’, and it is what people worry about when they worry about data protection and privacy. Zuboff says that there are two things you need to understand about this aspect of surveillance capitalism.

First, you need to understand the sources of the data, i.e. what it is that makes it fair to refer to this as the era of ‘big data’. There are several such sources, all of which feed into ever-increasing datasets that are far beyond the ability of any human being to comprehend. The most obvious source of data is the data from computer-mediated transactions. The infrastructure of modern computing is such that every computer-mediated transaction is recorded and logged. This means that there is a rich set of transaction-related data to be mined. In addition to this, there is the rise of the so-called internet of things, or internet of everything. This is the world being inaugurated by the creation of smart devices that can be attached to every physical object in the world, and can be used to record and upload data from those objects. Think about the computers in cars, lawnmowers, thermostats, wristwatches, washing machines and so on. Each one of these devices represents an opportunity for more data to be fed to the institutions of surveillance capitalism. On top of that, there are the large datasets kept by governments and other bureaucratic agencies that have been digitised and linked to the internet, and the vast array of private and personal surveillance equipment. Virtually everything can now be used as a datasource for surveillance capitalism. What’s more, the ubiquity of data-monitoring is often deliberately obscured or ‘hidden in plain sight’. People simply do not realise how often, or how easily, their personal data can be collected by the institutions of surveillance capitalism.

Second, you need to understand the relationship between the data-extracting companies, like Google, and the users of their services. The relationship is asymmetric and characterised by formal indifference and functional independence. Each of these features needs to be unpacked. The asymmetry in the relationship is obvious. The data is often extracted in the absence of any formal consent or dialogue. Indeed, companies like Google seem to have adopted an ‘extract first, ask later’ attitude. The full extent of data extraction is often not revealed until there is some scandal or leak. This was certainly true of the personal data about wi-fi networks extracted by Google’s Street View project. The formal indifference in the relationship concerns Google’s attitude toward the content of the data it extracts. Google isn’t particularly discriminating in what it collects: it collects everything it can and finds uses for it later. Finally, the functional independence arises from the economic use to which the extracted data is put. Big data companies like Google typically do not rely on their users for money. Rather, they use the information extracted as a commodity they can sell to advertisers. The users are the product, not the customers.

It is worth dwelling on this functional independence for a moment. As Zuboff sees it, this feature of surveillance capitalism constitutes an interesting break from the model of the 20th century corporation. As set out in the work of economists like Berle and Means, the 20th century firm was characterised by a number of mutual interdependencies between its employees, its shareholders and its customers. Zuboff uses the example of the car-manufacturing businesses that dominated America in the mid-20th century. These companies relied on large and stable networks of employees and consumers (often one and the same people) for their profitability and functionality. As a result, they worked hard to establish durable careers for their employees and long-term relationships with their customers. It is not clear that surveillance companies like Google are doing the same thing. They do not rely on their primary users for profitability and often do not rely on human workers to manage their core services. Zuboff thinks that this is reflected in the fact that the leading tech companies are far more profitable than the car-manufacturers ever were, while employing far fewer people.

For what it’s worth, I fear that Zuboff may be glorifying the reality of the 20th century firm, and ignoring the fact that many of Google’s customers (and Facebook’s and Twitter’s) are also primary users. So there are some interdependencies at play. But it might be fair to say that the interdependencies have been severely attenuated by the infrastructure of surveillance capitalism. Companies really do require fewer employees, with less stable careers; and there is not the same one-to-one relationship between service users and customers.

One final point about data extraction and analysis. There is an interesting contrast to be made between the type of market envisaged by Varian, and made possible by surveillance capitalism, and the market that was beloved by the libertarian free-marketeers of the 20th century. Hayek’s classic defence of the free market, and attack on the centrally-planned market, was premised on the notion that the information needed to make sensible economic decisions was too localised and diffuse. It could not be known by any single organisation or institution. In a sense, the totality of the market was unknowable. But surveillance capitalism casts this into doubt. The totality of the market may be knowable. The implications of this for the management of the economy could be quite interesting.


2. New Contractual Forms
Whereas data extraction and analysis are obvious features of surveillance capitalism, the other three features are slightly less so. The first of these, and arguably the most interesting, concerns the new forms of contractual monitoring and enforcement that are made possible by the infrastructures of surveillance capitalism.

These infrastructures allow for real-time monitoring of contractual performance. They also allow for real-time enforcement. You will no longer need to go to court to enforce the terms of a contract or terminate a contract due to breach of terms. The technology allows you to do that directly and immediately. Varian himself gives some startling examples (I’m here quoting Zuboff describing Varian’s ideas):

New Contractual Monitoring and Enforcement: “If someone stops making monthly car payments, lenders can ‘instruct the vehicular monitoring system not to allow the car to be started and to signal the location where it can be picked up.’ Insurance companies, he suggests, can rely on similar monitoring systems to check if customers are driving safely and thus determine whether or not to maintain their insurance or pay claims.” 
(Zuboff 2015, 81)

I can imagine similar scenarios. My health insurance company could use the monitoring technology in my smartwatch to check to see whether I have been doing my 10,000 steps a day. If I have not, they could refuse to pay for my medical care. All sorts of social values could be embedded into these new contractual forms. The threat of withdrawing key services or disabling products will be ever-present.
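
To see just how mechanical this kind of enforcement could be, here is a deliberately crude Python sketch of the hypothetical smartwatch scenario just described. Everything in it, the step threshold, the grace period, the ‘suspend cover’ response, is invented for illustration; it does not describe any real insurer’s system.

```python
from statistics import mean

DAILY_STEP_TARGET = 10_000   # hypothetical contractual condition
GRACE_DAYS = 3               # hypothetical allowance for missed days

def enforce_policy(step_log):
    """Automatically decide whether the insured kept their side of the bargain.

    step_log maps dates to step counts uploaded from a wearable device.
    The decision rule is invented for illustration only.
    """
    missed = [day for day, steps in step_log.items() if steps < DAILY_STEP_TARGET]
    if len(missed) > GRACE_DAYS:
        # No court, no negotiation: the service is simply withdrawn.
        return f"cover suspended ({len(missed)} days below target)"
    return f"cover maintained (average {mean(step_log.values()):.0f} steps/day)"

week = {
    "2015-06-22": 11200, "2015-06-23": 4300, "2015-06-24": 9800,
    "2015-06-25": 2100,  "2015-06-26": 12050, "2015-06-27": 3800,
    "2015-06-28": 7600,
}
print(enforce_policy(week))   # cover suspended (5 days below target)
```

The point of the sketch is how little human judgment remains in the loop: the ‘enforcement’ is nothing more than a conditional running over uploaded data.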

Zuboff argues that if such a system of contractual monitoring and enforcement becomes the norm, it will represent a radical restructuring of our current political and legal order. Indeed, she argues that it would represent an a-contractual form of social organisation. Contract, as conceived by the classic liberal writers, is a social institution built upon a foundation of trust, solidarity and the rule of law. We know that we cannot monitor and intervene in another person’s life whenever we wish; thus, when we rely on others for goods and services, we trust that they will fulfil their promises. We have recourse to the law if they do not. But this recourse to the law is in explicit recognition of the absence of perfect control.

Things are very different in Varian’s imagined world. With perpetual contractual monitoring and enforcement, there is no real need for social solidarity and trust. Nor is there any real need for the residual coercive authority of the law. This is because there is the prospect of perfect control. The state need no longer be a central mediator and residual enforcer of promises. Indeed, there is no real need for the act of promising anymore: you either conform and receive the good/service; or you don’t and have it withdrawn/disabled. Your promise to conform is irrelevant.

This new contractual world has one other important social repercussion. According to Zuboff, under the traditional contractual model there was a phenomenon of anticipatory conformity. People conformed to their contractual obligations, when they were otherwise unwilling to do so, because they wished to avoid the coercive sanction of the law. In other words, they anticipated an unpleasant outcome if they failed to conform. She believes that Varian’s model of contractual monitoring and enforcement will give rise to a distinct phenomenon of automatic conformity. The reality of perpetual monitoring and immediate enforcement will cause people to conform instinctively and habitually. They will no longer choose to conform; they will do so automatically. The scope of human agency will be limited.

This is all interesting and provocative stuff. I certainly share some of Zuboff’s concerns about the type of monitoring and intervention being envisaged by the likes of Varian. And I agree that it could inaugurate a radical restructuring of the political-legal order. But it may not come to that. Just because the current technology enables this type of monitoring and intervention doesn’t necessarily mean that we will allow it to do so. The existing political-legal order still dominates and has a way of (eventually) applying its principles and protections to all areas of social life. And there is still some scope for human agency to shape the contents of those principles and protections. These combined forces may make it difficult for insurance companies to set up the kind of contractual system Varian is imagining. That said, I recognise that there are countervailing social forces pushing toward that kind of system too, the desire to control and minimise risk being one of them. There is a battle of ideas to be fought here.


3. Personalisation and Customisation of Services
The third structural feature of surveillance capitalism is its move towards the customisation and personalisation of services. Google collects as much personal data as it can in order to tailor what kinds of searches and ads you see when you use its services. Other companies do the same. Amazon tries to collect information about my book preferences; Netflix tries to collect information about my viewing habits. Both do so in an effort to customise the experience I have when I use their services, recommending particular products to me on the basis of what they think I like.
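
The mechanics behind this kind of personalisation can be surprisingly simple. Here is a minimal Python sketch of the generic idea, recommending items consumed by users whose histories overlap with yours. It is a toy illustration of the general technique, not a description of Amazon’s or Netflix’s actual systems.

```python
from collections import Counter

# Invented viewing histories: user -> set of item ids.
histories = {
    "alice": {"it_follows", "vertigo", "alien"},
    "bob":   {"vertigo", "alien", "solaris"},
    "carol": {"alien", "solaris", "stalker"},
}

def recommend(user, histories, top_n=2):
    """Suggest unseen items favoured by users with overlapping histories."""
    seen = histories[user]
    scores = Counter()
    for other, items in histories.items():
        if other == user:
            continue
        overlap = len(seen & items)      # crude similarity measure
        for item in items - seen:        # only recommend unseen items
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("alice", histories))     # ['solaris', 'stalker']
```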

Zuboff thinks that there is something of a ‘Faustian pact’ at the heart of all this. People trade personal information for the benefits of the personal service. As a result, they have given up privacy for an economic good. Varian thinks that there is nothing sinister or worrisome in this. He uses the analogy of the doctor-patient or lawyer-client relationships. In both cases, the users of services share highly personal information in exchange for the benefits of the personal service, and no one thinks there is anything wrong about this. Indeed, it is typically viewed as a social good. Giving people the option of trading privacy for these personalised services can improve the quality of their lives.

But Zuboff resists this analogy. She argues that something like the doctor-patient relationship is characterised by mutual interdependencies (i.e. the doctor relies on the patient for a living; the patient relies on the doctor to stay alive) and is protected and grounded by the rule of law. The disclosures made by the patient or client are limited, and subject to an explicit consensual dialogue between the service user and the service provider. The relationship between Google and its users is not like this. The attempts at consensual dialogue are minimal (and routinely ignored). It is not characterised by mutual interdependencies; it often operates in a legal vacuum (extract first, ask questions later); and there are no intrinsic limits to the extent of the information being collected. In fact, the explicit goal of companies like Google is to collect so much personal information that they know us better than we know ourselves.




The Faustian pact at the heart of all this is that users of these digital services are often unaware of what they have given up. As Zuboff (and others) put it: surveillance capitalism has given rise to a massive redistribution of privacy rights, from private citizens to surveillance companies like Google. Privacy rights are, in effect, decision rights: they confer an entitlement to choose where on the spectrum between complete privacy and total transparency people should lie. Surveillance capitalism has allowed large companies to exercise more and more control over these kinds of decisions. They collect the information and they decide what to do with it.

That said, Zuboff thinks people may be waking up to the reality of this Faustian pact. In the aftermath of the Snowden leak, and other data-related scandals, people have become more sensitive to the loss of privacy. Legal regimes (particularly in Europe) seem to be resisting the redistribution of privacy rights. And some companies (like Apple in recent times) seem to be positioning themselves as pro-privacy.


4. Continual Experimentation
The final feature of surveillance capitalism is perhaps the most novel. It is the fact that the technological infrastructure allows for continual experimentation and intervention in the lives of its users. Because of the information collected from user profiles, geographical locations and so on, it is easy to test variations of a digital service on different groups of users. There are some famous examples of this too, Facebook’s attempt to manipulate the moods of its users being the most widely known and discussed.

Varian argues that continual experimentation of this sort is necessary. Most methods of big data analytics do not allow companies to work out relationships of cause and effect. Instead, they only allow them to identify correlational patterns. Experimental intervention is needed in order to tease apart the causal relationships. This information is useful to companies in their effort to personalise, customise and generally improve the services they are offering.
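
Varian’s point here is the familiar one about observational data versus experiment: correlations in logged data cannot, by themselves, tell you whether a design change causes a change in user behaviour, whereas random assignment can. Below is a minimal Python sketch of the kind of randomised test he has in mind, with entirely synthetic click probabilities standing in for real user behaviour.

```python
import random
from statistics import mean

random.seed(0)

def simulated_click(variant):
    """Return 1 if a simulated user clicks, 0 otherwise.

    The underlying probabilities are made up so the sketch has something
    to estimate; a real platform would simply observe its users.
    """
    p = 0.10 if variant == "A" else 0.13
    return 1 if random.random() < p else 0

# Random assignment is what licenses a causal reading of the difference.
results = {"A": [], "B": []}
for _ in range(20_000):
    variant = random.choice(["A", "B"])
    results[variant].append(simulated_click(variant))

for v in ("A", "B"):
    print(v, f"click-through rate: {mean(results[v]):.3f} (n={len(results[v])})")
```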

Zuboff gets a little bit mystical at this stage in her analysis. She argues that this sort of continual intervention and experimentation gives rise to reality mining. This is distinct from data-mining. With continual experimentation, all the objects, persons and events in the real world can be captured and altered by the technological infrastructure. Indeed, the distinction between the infrastructure and the external world starts to break down. As she puts it herself:

Data about the behaviors of bodies, minds and things take their place in a universal real-time dynamic index of smart objects within an infinite global domain of wired things. This new phenomenon produces the possibility of modifying the behaviors of persons and things for profit and control. In the logic of surveillance capitalism there are no individuals, only the world-spanning organism and all the tiniest elements within it. 
(Zuboff 2015, 85)

I’m not sure what Zuboff means by an ‘infinite domain of wired things’. But setting that aside, it seems to me that, in this quote, with its mention of the “world-spanning organism”, Zuboff is claiming that the apotheosis of surveillance capitalism is the construction of a Borg-like society, i.e. a single collective organism that consumes reality with its technological appendages. The possibility and desirability of such a society is something I discussed in an earlier post.


5. Conclusion
To sum up, Zuboff thinks that there are four key structural features to surveillance capitalism. These four features constitute its internal logic. Each of the features has important social and political implications.

The first feature is the trend toward ever-greater levels of data extraction and analysis. The goal of companies like Google is to extract as much data from you as possible and convert it into a commodity that can be bought and sold. This extractive relationship is asymmetrical and devoid of the mutual interdependencies that characterised 20th century corporations like General Motors.

The second feature is the possibility of new forms of contractual monitoring and enforcement. The infrastructure of surveillance capitalism allows for contracts to be monitored and enforced in real-time, without the need for legal recourse. This would constitute a radical break with the classic liberal model of contractual relationship. There would be no need for trust, solidarity and rule of law.

The third feature is the desire to personalise and customise digital services, based on the data being extracted from users. Though there may be some benefits to these personal services, the infrastructure that enables them has facilitated a considerable redistribution of privacy rights from ordinary citizens to surveillance capitalist firms like Google and Facebook.

The fourth and final feature is the capacity for continual experimentation and intervention in the lives of service users. This gives rise to what Zuboff calls reality-mining, which in its most extreme form will lead to the construction of a ‘world-spanning organism’.

Tuesday, June 23, 2015

Drunken Consent to Sex and Personal Responsibility




(Previous entry)

This post is part of an ongoing series on the ethics of intoxicated consent to sexual relations. The series is working its way through the arguments from Alan Wertheimer’s work on this topic.

In the previous entry, I looked at something called the ‘responsibility claim’. According to this claim, one ought to be held responsible for actions one performs whilst voluntarily intoxicated. This claim is widely accepted, and indeed forms the basis for the legal system’s attitude toward criminal responsibility and voluntary intoxication. There are many potential arguments in its favour, several of which were reviewed in the previous entry. We won’t go through them again. Instead, we’ll simply assume that the responsibility claim is correct and go on to assess its implications for intoxicated consent to sexual relations. This means we are looking for an answer to the question: if you are responsible for actions performed whilst voluntarily intoxicated (including actions like killing or raping another person) does it thereby follow that your consent to sex whilst voluntarily intoxicated is valid?

We will look at two arguments in this post. The first, from Heidi Hurd, argues that if we accept the responsibility claim, then we ought to accept that intoxicated consent is valid. To do otherwise would be inconsistent. The second, from Susan Estrich, argues that Hurd is wrong since we never hold victims of crime responsible for what happens to them whilst drunk. Wertheimer thinks that both arguments are mistaken and that a more nuanced approach to the implications of the responsibility claim is required. We won’t get to his personal opinion today, but we will cover his criticisms of Hurd and Estrich.


1. Hurd’s Argument: Intoxicated Consent can be Valid
When considering the implications of the responsibility claim, it is tempting to adopt an analogical mode of reasoning. This involves imagining a situation in which someone is drunk and is/is not held responsible for what they did, and then reasoning from this case to the case of intoxicated consent to sex. The analogical mode makes sense given the nature of the responsibility claim. As noted in the previous entry, the responsibility claim is not an ironclad and exceptionless rule. It merely states that one can be (and often is) held responsible for one’s drunken acts; this does not exclude the possibility of certain exceptions to this rule. Given the non-absolute nature of the responsibility claim, it makes sense to work our way through a range of representative and analogous cases in order to work out its implications for the consent case. Little surprise then that both Hurd and Estrich adopt this analogical mode of reasoning.

Hurd’s analogy is the most direct. She focuses on cases in which a drunken man and a drunken woman have sex. She asks us first to imagine a case in which a drunken man has sex with a woman who does not signal consent; she then asks us to imagine a case in which a drunken woman does signal consent to the sex. She then poses the question: how could it be that the man would be held criminally responsible for raping the woman in the first case, while the woman’s intoxicated consent would be declared invalid in the second? Her position is neatly summed up in the following quote:

…we should be loathe to suggest that the conditions of responsibility vary among actors, so that the drunken man, who has sex with a woman he knows is not consenting is responsible for rape, while the drunken woman who invites sex is not sufficiently responsible to make such sex consensual. 
(Hurd 1996, p 141)

To put this argument in slightly more formal terms:


  • (1) If a voluntarily intoxicated man has sexual intercourse with a woman who does not signal consent, he is criminally responsible for rape. (call this Case A)

  • (2) A voluntarily intoxicated woman who consents to sex is like the voluntarily intoxicated man in Case A in all morally important respects.

  • (3) Therefore, the consent of a voluntarily intoxicated woman is valid.



Analogical arguments of this sort are never logically watertight. The conclusion only follows on a defeasible and probabilistic basis. The argument depends entirely on the strength of the analogy, i.e. how morally similar are the two cases? Hurd thinks they are pretty similar because the degree of moral accountability faced by the man is quite high. Why shouldn’t the same be true for the woman?

Wertheimer thinks that Hurd’s argument is much too quick. He thinks there are two major problems with it. The first is that she fails to appreciate the different sorts of capacities that are required for responsibility vis-a-vis valid consent. It is generally true to say that responsibility tracks capacity. That is, in order to be responsible for X you must have the mental and physical capacity to do X. In the criminal law, the kinds of mental capacities that are singled out for responsibility are quite low. In order to be held criminally responsible for someone’s death you must either intend their death or serious injury, or be reckless with respect to their death. Unless an intoxicated person lacks consciousness, it is relatively easy for them to have such mental capacities. The same is true for rape, where the requisite mental capacity is the intention to sexually penetrate another human being. It is very difficult to see how anything short of severe intoxication could impair this capacity (though it is possible to imagine scenarios in which penetration occurred but was not intentional). You could reject the criminal law’s position on all this and argue that higher capacities are required, but if you don’t reject the legal position then the problem is that consent really does seem to require higher mental capacities. As Wertheimer points out, responsibility concerns how we should respond to what an agent has done; consent concerns what it is morally permissible for us to do to that agent. It seems plausible to suggest that the ‘transformations [brought about by consent] require a deeper expression of the agent’s will than the intentions required for culpability for wrongdoing’ (Wertheimer 2000, 387).

The second problem with Hurd’s argument is that it fails to identify the limitations of ascribing responsibility to a particular agent. In saying that a person is responsible for some particular act (X), one is not committed to saying that the person is responsible for all the consequences of X (i.e. for bearing all the moral burdens associated with the act). To use a simple analogy, if I choose to go cycling without a helmet, I assume a certain risk of being in an accident. But it does not follow that I should necessarily bear all the costs associated with an accident, should one occur. That cost is likely to be borne socially, by either a governmental or private insurance fund, because that is a more appropriate way of distributing the associated burden. The same might be true in the case of intoxicated decisions to consent to sex. The agent might be responsible for some of her intoxicated behaviour, but not for all the consequences of her consent.


2. Estrich’s Argument: Intoxicated Consent may not be Valid
We move on then to consider Estrich’s opposing argument. This argument holds that the responsibility claim does not entail the validity of intoxicated consent. In making this argument, Estrich relies on analogies that are, in my experience, extremely common in the debate about consent and intoxication. She also uses these analogies to directly engage with the issue of distributing moral burdens associated with particular kinds of activities.

She points out that we don’t always hold the victims of other sorts of crimes morally accountable for getting themselves into a position where it was more likely for those crimes to occur. Instead, we resolutely impose moral accountability on those who carry out the crimes. She thinks it follows from this that we shouldn’t leap to the conclusion that someone who signals consent whilst voluntarily intoxicated is morally accountable for that consent. She gives the following examples to illustrate her point:

[we would not impose the risk burden on] people who walk alone on dangerous streets at night and get mugged or people who forget to lock their cars or leave the back windows of their houses wide open. 
(Estrich 1992, 10)

The analogies can be made even stronger. I would argue that we wouldn’t impose the moral burden on such people even if they were voluntarily intoxicated at the time. I could walk down a dangerous street whilst voluntarily intoxicated, but that wouldn’t mean that I wasn’t being criminally assaulted if I got beaten up by a group of thugs when I did. Why shouldn’t we adopt a similar approach in the case of intoxicated consent?

To put this in slightly more formal terms:


  • (4) People who perform actions that raise the risk of their being victims of crime (whilst voluntarily intoxicated) do not necessarily bear the moral burdens associated with those crimes (e.g. people who walk down dangerous alleyways or who forget to lock their cars are not responsible if they are mugged or robbed); the criminals bear the moral burden.

  • (5) The woman who consents whilst voluntarily intoxicated is like (or might be like) the victims of crime in these cases in all morally important respects.

  • (6) Therefore, the woman may not necessarily bear the moral burden associated with her intoxicated consent.


There is some appeal to this argument. It echoes the strong anti-victim blaming arguments one finds elsewhere in the literature on sexual assault. To take an obvious example, we (rightly) would not say that a woman who wore a revealing skirt should shoulder some of the moral burden associated with her sexual assault, even if it was true that wearing such a skirt raised the probability of her being sexually assaulted.

Still, there are problems with the argument. These can be detected in the slightly equivocal expression of premise (5). This expression is mine, not Estrich’s or Wertheimer’s, but it reveals a certain weakness in the case that Estrich is trying to make. To understand this weakness, you have to bear in mind two things. First, throughout this series of posts we are imagining cases in which a woman clearly signals consent to sexual relations whilst voluntarily intoxicated; not simply cases in which women are victims of sexual assault whilst voluntarily intoxicated. The latter sort of cases are, no doubt, common, but they are very different from the former because they may not involve any signal of consent. To put it more pithily, we are imagining cases in which a woman gets drunk, signals consent to sex, and engages in sex; we are not imagining cases in which a woman gets drunk and then someone engages in sexual activity with her without her consent. The second thing we need to bear in mind is the ‘moral magic’ of consent. Consent, if valid, transforms an impermissible act into a permissible one. Thus, determining whether or not valid consent is present always changes our moral interpretation of a given scenario.

These two things suffice to block the analogies upon which Estrich seeks to rely. In her imagined cases, the people really are victims of crime. There is no ambiguity or uncertainty about it. If I walk down the dangerous alleyway, whilst intoxicated, and someone physically attacks me, then I am a victim of crime. The physical attack would be criminal irrespective of whether I was drunk at the time or whether I foolishly assumed the risk associated with walking down that alleyway. This is very different from the intoxicated consent scenario. There is ambiguity and uncertainty in these cases. If the consent is valid, then the act is not criminal; if it is not valid, then it is criminal. The moral interpretation of the case varies dramatically depending on which is true.

To sum up, neither Hurd’s nor Estrich’s argument from analogy settles the matter with respect to the implications of the responsibility claim. Hurd is wrong to suppose that responsibility for criminal activity whilst voluntarily intoxicated entails the validity of consent whilst voluntarily intoxicated. The sorts of capacities required for valid consent seem like they might be higher than the capacities required for responsibility, and hence voluntary intoxication might affect the former but not the latter. Similarly, Estrich is wrong to suppose that an analogy can be drawn between cases in which someone is a clearcut victim of crime (whilst voluntarily intoxicated) and cases in which the moral status of intoxicated consent is in issue.

Monday, June 15, 2015

How might algorithms rule our lives? Mapping the logical space of algocracy


IBM Blue Gene

This post is a bit of an experiment. As you may know, I have written a series of articles looking at how big data and algorithm-based decision-making could affect society. In doing so, I have highlighted some concerns we may have about a future in which many legal-bureaucratic decisions are either taken over by or made heavily dependent on data-mining algorithms and other artificial intelligence systems. I have used the term ‘algocracy’ (rule by algorithm) to describe this state of affairs.

Anyway, one thing that has bothered me about these past discussions is their relative lack of nuance when it comes to the different forms that algocratic systems could take. If we paint with too broad a brush, we may end up ignoring both the advantages and disadvantages of such systems. Cognisant of this danger, I have been trying to come up with a better way to taxonomise and categorise the different possible forms of algocracy. In the past couple of weeks, I think I may have come up with a way of doing it.

The purpose of this post is to give a very general overview of this new taxonomy. I say it is ‘experimental’ because my hope is that by sharing this idea I will get some useful critical feedback from interested readers. My goal is to develop this taxonomic model into a longer article that I can publish somewhere else. But I don’t know if the model is any good. So if you are interested in the idea, I would appreciate any thoughts you may have in the comments section.

So what is this taxonomy? I’ll try to explain it in three parts. First, I’ll explain the inspiration for the taxonomy — namely: Christian List’s analysis of the logical space of democratic decision-procedures. Second, I’ll explain how my taxonomy — the logical space of algocratic decision-procedures — works. And third, I will very briefly explain the advantages and disadvantages of this method of categorisation.

I’m going to try my best to keep things brief. This is something at which I usually fail.


1. List’s Logical Space of Democracy
I’ve written two posts about List’s logical space of democracy. If you want a full explanation of the idea, I direct your attention to those posts. The basic idea behind it is that politics is all about constructing appropriate ‘collective decision procedures’. These are to be defined as:

Collective Decision Procedures: Any procedures which take as their input a group of individuals’ intentional attitudes toward a set of propositions and which then adopt some aggregation function to issue a collective output (i.e. the group’s attitude toward the relevant propositions).

Suppose that you and I form a group. We have to decide what to do this weekend. We could go rollerblading or hillwalking. We each have our preferences. In order to come up with a collective decision, we need to develop a procedure that will take our individual preferences and aggregate them into a collective output. This will determine what we do this weekend.

But how many different ways are there of doing this? One of List’s key insights is that there is a large space of logically possible decision procedures. We could adopt a simple majority rule system. Or we could adopt a dictatorial rule, preferring the opinion of one person over all others. Or we could demand unanimity. Or we could do some sort of sequential ordering: he who votes first, wins. I won’t list all the possibilities here. List gives the details in his paper. As he notes there, there are 2^4 (i.e. 16) logically possible decision procedures in this simple two-person, two-option case.


This might seem odd, since there are really only two possible collective outputs, but List’s point is that there are still 16 possible ways of mapping the individual inputs onto those outputs. By itself, this mapping out of the space of logically possible decision procedures isn’t very helpful. As soon as you have larger groups with more complicated decisions that need to be made, you end up with unimaginably vast spaces of possible decision procedures. For instance, List calculates that if you had ten voters faced with two options, you would have 2^1024 possible collective decision procedures (one for each way of mapping the 2^10 = 1,024 possible combinations of individual attitudes onto a collective output).

So you have to do something to pare down the space of logical possibilities. List does this by adopting an axiomatic method. He specifies some conditions (axioms) that any democratic decision procedure ought to satisfy in advance, and then limits his search of the logical space of possible decision procedures to the procedures that satisfy these conditions. In the case of democratic decision procedures, he highlights three conditions that ought to be satisfied: (i) robustness to pluralism (i.e. the procedure should accept any possible combination of individual attitudes); (ii) basic majoritarianism (i.e. the collective decision should reflect the majority opinion); and (iii) collective rationality (i.e. the collective output should meet the basic criteria for rational decision making). And since it turns out that it is impossible to satisfy all three of these conditions at any one time (due to classic ‘voting paradoxes’), the space of functional democratic decision procedures is smaller than we might first suppose. We are left then with only those decision procedures that satisfy at least two of the mentioned conditions. Once you pare the space of possibilities down to this more manageable size you can start to think more seriously about its topographical highlights.
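
To make the counting concrete, here is a small Python sketch for the two-person, two-option case from earlier (rollerblading vs hillwalking). It enumerates all 2^4 = 16 possible aggregation functions and then filters them by two simple conditions, respecting unanimity and treating the two voters symmetrically. These are stand-ins for List’s actual axioms, which are more involved, but they show how imposing conditions pares down the space.

```python
from itertools import product

OPTIONS = ("rollerblading", "hillwalking")
PROFILES = list(product(OPTIONS, repeat=2))     # 4 possible input profiles

# An aggregation function assigns a collective choice to every profile,
# so there are 2^4 = 16 of them in total.
functions = [dict(zip(PROFILES, outputs))
             for outputs in product(OPTIONS, repeat=len(PROFILES))]
print(len(functions))                           # 16

def respects_unanimity(f):
    """If both of us prefer the same option, that option should win."""
    return all(f[(o, o)] == o for o in OPTIONS)

def anonymous(f):
    """Swapping the two voters' preferences should not change the outcome."""
    return all(f[(a, b)] == f[(b, a)] for a, b in PROFILES)

acceptable = [f for f in functions if respects_unanimity(f) and anonymous(f)]
print(len(acceptable))                          # only 2 survive these conditions
```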

Anyway, we don’t need to understand the intricacies of List’s model. We just need to understand the basic gist of it. He is highlighting how there are many possible ways of implementing a collective decision procedure, and how only a few of those procedures will meet the criteria for a morally or politically acceptable collective decision procedure. I think you can perform a similar analysis when it comes to understanding the space of possible algocratic decision procedures.


2. The Logical Space of Algocratic Decision-Procedures
To appreciate this method of mapping out the logical space, we first need to appreciate what an algocratic decision procedure actually is. In its most general terms, an algocratic decision procedure is any public decision procedure in which a computerised algorithm plays a role in the decision-making process (this can be via data-mining, predictive analytics etc). Take, for example, the use of facial recognition algorithms in detecting possible instances of criminal fraud. In his book The Formula, Luke Dormehl mentions one such procedure being used by the Massachusetts registry of motor vehicles. This algorithm looks through the photographs stored on the RMV’s database in order to weed out faces that seem to be too similar to one another. When it finds a matching pair, it automatically issues letters revoking the licenses of the matching drivers. This is a clearcut example of an algocratic decision procedure.
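
The basic pipeline behind a system like this can be sketched in the abstract: compute a numerical signature (an embedding) for each licence photo, compare the signatures pairwise, and flag any pair whose similarity exceeds a threshold. The Python sketch below is my own schematic reconstruction, not the RMV’s actual algorithm; the embedding vectors and the threshold are placeholders.

```python
from itertools import combinations
import math

THRESHOLD = 0.98   # placeholder: a real system would tune this carefully

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def flag_possible_duplicates(embeddings):
    """Return pairs of licence holders whose face embeddings look too alike.

    `embeddings` maps a licence id to a pre-computed face embedding. The short
    made-up vectors below stand in for the output of a real face-recognition model.
    """
    flagged = []
    for (id_a, emb_a), (id_b, emb_b) in combinations(embeddings.items(), 2):
        if cosine_similarity(emb_a, emb_b) >= THRESHOLD:
            flagged.append((id_a, id_b))
    return flagged

faces = {
    "licence-001": [0.91, 0.20, 0.35],
    "licence-002": [0.90, 0.21, 0.36],   # suspiciously close to 001
    "licence-003": [0.10, 0.80, 0.59],
}
print(flag_possible_duplicates(faces))   # [('licence-001', 'licence-002')]
```

What makes the real system algocratic is not the matching step itself, but the fact that the revocation letters go out without a human reviewing the flagged pairs.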

Moving beyond this general definition, three parameters appear to define the space of possible algocratic decision procedures. The first is the particular domain or type of decision-making. Legal and bureaucratic agencies make decisions across many different domains. Planning agencies make decisions about what should be built and where; revenue agencies sort, file and search through tax returns and other financial records; financial regulators make decisions concerning the prudential governance of financial institutions; energy regulators set prices in the energy industry and enforce standards amongst energy suppliers; the list goes on and on. In the formal model I outline below, the domain of decision-making is ignored. I focus instead on two other parameters defining the space of algocratic procedures. But this is not because the domain is unimportant. When figuring out the strengths or weaknesses of any particular algocratic decision-making procedure, the domain of decision-making should always be specified in advance.

The second parameter concerns the main components of the decision-making ‘loop’ that is utilised by these agencies. Humans in legal-bureaucratic agencies use their intelligence when making decisions. Standard models of intelligence divide this capacity into three or four distinct tasks. I’ll adopt a four-component model here (this follows my earlier post on the topic of automation):

(a) Sensing: collecting data from the external world.
(b) Processing: organising that data into useful chunks or patterns and combining it with action plans or goals.
(c) Acting: implementing action plans.
(d) Learning: the use of some mechanism that allows the entire intelligent system to learn from past behaviour (this property is what entitles us to refer to the process as a ‘loop’).

Although individual humans within bureaucratic agencies have the capacity to perform these four tasks themselves, the work of an entire agency can also be conceptualised in terms of these four tasks. For example, a revenue collection agency will take in personal information from the citizens in a particular state or country (sensing). This will typically take the form of tax returns, but may also include other personal financial information. The agency will then sort that collected information into useful patterns, usually by singling out the returns that call for greater scrutiny or auditing (processing). Once they have done this, they will actually carry out audits on particular individuals, and reach some conclusion about whether the individual owes more tax or deserves some penalty (acting). Once the entire process is complete, they will try to learn from their mistakes and triumphs and improve the decision-making process for the coming years (learning).

The important point in terms of mapping out the logical space of algocracy is that algorithmic systems could be introduced to perform one or all of these four tasks. Thus, there are subtle and important qualitative differences between different types of algocratic system, depending on how much of the decision-making process is taken over by the computer.

In fact, it is more complicated than that, and this is what brings us to the third and final parameter. This one concerns the precise relationship between humans and algorithms for each task in the decision-making loop. As I see it, there are four possible relationships: (1) humans could perform the task entirely by themselves; (2) humans could share the task with an algorithm (e.g. humans and computers could perform different parts of the analysis of tax returns); (3) humans could supervise an algorithmic system (e.g. a computer could analyse all the tax returns and identify anomalies and then a human could approve or disapprove its analysis); and (4) the task could be fully automated, i.e. completely under the control of the algorithm.

This is where things get interesting. Using the last two parameters, we can construct a grid which we can use to classify algocratic decision-procedures. The grid looks something like this:




This grid tells us to focus on the four different tasks in the typical decision-making loop and ask of each task: how is this task being distributed between the humans and algorithms? Once we have answered that question for each of the four tasks, we can start coding the algocratic procedures. I suggest that this be done using square brackets and numbers. Within the square brackets there would be four separate number locations. Each location would represent one of the four decision-making tasks. From left-to-right this would read: [sensing; processing; acting; learning]. You then replace the names of those tasks with numbers ranging from 1 to 4. These numbers would represent the way in which the task is distributed between the humans and algorithms. The numbers would correspond to the numbers given previously when explaining the four possible relationships between humans and algorithms. So, for example:


[1, 1, 1, 1] = Would represent a non-algocratic decision procedure, i.e. one in which all the decision-making tasks are performed by humans.

[2, 2, 2, 2] = Would represent an algocratic decision procedure in which each task is shared between humans and algorithms.

[3, 3, 3, 3] = Would represent an algocratic decision procedure in which each task is performed entirely by algorithms, but these algorithms are supervised by humans with some residual possibility of intervention.

[4, 4, 4, 4] = Would represent a pure algocratic decision procedure in which each task is performed by algorithms, with no human oversight or intervention.


This coding system allows us to easily work out the extent of the logical space of algocratic decision procedures. Since there are four tasks and four possible ways in which each of those tasks could be distributed between humans and algorithms, there are 4^4 = 256 logically possible procedures (bear in mind that this is relative to a particular decision-making domain — if we factored in all the different decision-making domains we would be dealing with a truly vast space of possibilities).
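To make the coding scheme concrete, here is a minimal Python sketch (my own illustration, not part of the original analysis) that encodes a procedure as a four-position code and enumerates the full space; the task names and relationship labels simply restate the definitions given above.

```python
# Minimal sketch of the proposed coding scheme (illustrative only).
# A procedure is a 4-tuple [sensing, processing, acting, learning],
# where each entry takes one of the four values described above:
#   1 = humans only, 2 = shared, 3 = algorithm supervised by humans,
#   4 = fully automated (no human oversight).
from itertools import product

TASKS = ("sensing", "processing", "acting", "learning")
RELATIONSHIPS = {
    1: "performed entirely by humans",
    2: "shared between humans and an algorithm",
    3: "performed by an algorithm under human supervision",
    4: "fully automated",
}

def describe(code):
    """Spell out how each task in the decision-making loop is distributed."""
    return {task: RELATIONSHIPS[level] for task, level in zip(TASKS, code)}

# The full logical space: one of the four relationships assigned to each task.
space = list(product(RELATIONSHIPS, repeat=len(TASKS)))
print(len(space))              # 256, i.e. 4 ** 4

# The two examples compared in the conclusion below:
print(describe((1, 3, 1, 3)))  # supervised algorithms handle processing and learning
print(describe((3, 1, 3, 1)))  # supervised algorithms handle sensing and acting
```

Nothing hangs on the Python itself; the point is simply that the codes are small enough to enumerate and compare systematically.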


3. Conclusion: The Utility of this Mapping Exercise?
So that’s it: that’s the basic gist of my proposal for mapping out the logical space of algocracy. I’m aware that this method has its limitations. In particular, I’m aware that coding the different possible algocracies in terms of general tasks like ‘sensing’ and ‘processing’, or particular relationships in terms of ‘sharing’ or ‘supervising’, leaves something to be desired. There are many different ways in which data could be collected and processed, and there are many different ways in which tasks could be shared and supervised. Thus, this coding method is relatively crude. Nevertheless, I think it is useful for at least two reasons (possibly more).

First, it allows us to see, at a glance, how complex the phenomenon of algocracy really is. In my original writings on this topic, I used relatively unsophisticated conceptual distinctions, sometimes referring to systems that pushed humans ‘off’ the loop or kept them ‘on’ the loop. This system is slightly more sophisticated and allows us to appreciate some of the nuanced forms of algocracy. Furthermore, with this coding method we can systematically think our way through the different ways in which an algocratic system can be designed and implemented in a given decision-making domain.

Second, this coding method allows us to single out broadly problematic types of algocracy and subject them to closer scrutiny. As mentioned in my original work, there are moral and political problems associated with algocratic systems that completely undermine or limit the role of humans within those systems. In particular, there can be problems when the algorithms make the systems less amenable to human understanding and control. At the same time, there are types of algocracy that are utterly benign and possibly beneficial. For example, I would be much more concerned about a [1, 3, 1, 3] system than I would be about a [3, 1, 3, 1] system. Why? Because in the former system the algorithm takes over the main cognitive components of the decision-making process (the data analysis and learning), whereas in the latter the algorithm takes over some of the ‘drudge work’ associated with data collection and action implementation.

What do you think? Is this a useful system for mapping out the space of possible algocratic decision procedures? Or is it just needlessly confusing?

Sunday, June 14, 2015

Four Concepts of Consent and their Relevance to Sexual Offences




I’ve been focusing on the ethics of consent and sexual relations quite a bit recently. This is to help with a number of papers that I’m working on at the moment. Most recently, I have been looking into intoxication and consent, but I thought I would take a break from that discussion today to take a very brief look at some of Peter Westen’s work on the general concept of consent.

Westen’s 2004 book The Logic of Consent is widely recommended for its dense and nuanced treatment of consent as a defence to criminal charges. I have never read it myself, but I have read a number of reviews and one of Westen’s own papers on the topic. One of Westen’s key contributions is his attempt to identify the distinct consent-concepts that operate in legal and philosophical debates about the nature of consent. Four such concepts feature in his book, and I want to share them in this post.


1. A Motivating Example: The Valdez Case
Philosophical discussions of consent often rely on thought experiments. These hypotheticals help us to tease apart difficult conceptual questions. But there is a limit to their utility. They are often contrived and abstracted from real-world dynamics. Important details and nuances are sometimes papered over in order to achieve philosophical insight. But that insight is often limited by its lack of connection to the real world.

One of the more useful aspects of Westen’s work is that he uses examples drawn from real-world criminal law cases. This helps to anchor the conceptual analysis in something more concrete and practically significant. I’ll use one of these examples to motivate the following discussion about the different concepts of consent. I’m going to describe the facts of this case in fairly neutral terms, despite its obviously horrendous nature:

Valdez Case: This is a case that was decided in Texas back in 1993. It concerned a man named Joel Rene Valdez. He broke into the apartment of a woman named Elisabeth Wilson. He was wielding a knife at the time. He ordered her to take off her pants. Certain that she was either going to be stabbed or raped, she agreed to sexual intercourse if Valdez wore a condom. He did and she offered no further resistance.

Valdez was arrested but a grand jury refused to indict him on the grounds that the victim had consented to the sex. On the face of it this seems absurd. Most of us would agree that this is a clearcut case of rape. But Westen argues that the failure to indict may have been attributable to confusion about the precise nature of consent. This confusion is in turn aided by the fact that different jurisdictions use the term ‘consent’ in different ways in their statutes and laws. Hence, conceptual clarification is needed.

(In case you are wondering, Valdez was indicted by a second grand jury and then sentenced to 40 years. You can read about the case here.)


2. Four Different Concepts of Consent
Westen argues that four different concepts of consent operate in philosophical and legal discussions. These four concepts are organised around three distinctions. I’ll discuss the distinctions first and then the concepts.

The first distinction is between ‘factual’ and ‘legal’ consent. The former is what is true as a matter of fact; the latter is what is needed to meet the legal test for the application of the concept. A 16-year-old may, as a matter of fact, consent to sexual intercourse, but legally her consent would not be recognised due to her age. Factual/legal distinctions are common in the law; in tort law, for example, we distinguish between factual and legal causation. The second distinction is between ‘attitudinal’ and ‘expressive’ forms of consent. The former concerns the subjective state of mind of the victim; the latter concerns the outward signs that express that state of mind. The third distinction is between ‘actual’ and ‘imputed’ consent. This is somewhat similar to the first, insofar as the law sometimes constructs or imputes the presence of consent that is lacking in actuality. In each case, the gap between law and reality arises because there can be a difference between what is actually true and what we would normatively prefer to be true. Hence, underlying all three distinctions is a larger distinction between factual and prescriptive understandings of consent. The former try to accurately describe objective and subjective facts about the victim; the latter prescribe certain conditions that must be met in order for consent to be legally valid.

Though technically one could have several different combinations of these distinctions, in reality four concepts tend to dominate in legal and philosophical debates about the nature of consent. The first two are:

Factual Attitudinal Consent: This type of consent is concerned purely with the victim’s subjective state of mind. Did she assent to what was happening to her (all things considered)? According to Westen, the victim in the Valdez case consented in the factual attitudinal sense of the term. Her all-things-considered preference was, at the time, to have protected sexual intercourse with Valdez. She preferred this to the alternatives of being stabbed or running the risk of contracting HIV.

Factual Expressive Consent: This type of consent focuses on the objectively observable signals or indicators of the victim’s subjective attitude. Did she clearly communicate (verbally or non-verbally) her assent to what happened to her? There can be much debate about what counts as a clear communication of consent, but, again, in the Valdez case it would appear that we have factual expressive consent. The victim communicated a willingness to engage in sexual intercourse if a condom was worn.

Westen suggests that the reason Valdez was not indicted the first time round was that the grand jury worked with a purely factual conception of consent. The reason for finding this failure outrageous has to do with the incorporation of prescriptive conditions into general definitions of consent. When we think about consent from a prescriptive perspective, we don’t just care about what the victim’s attitudes and expressions happened to be at the relevant time. We also care about how they acquired those attitudes and expressions. This gives us Westen’s third concept of consent:

Prescriptive Attitudinal and/or Expressive Consent: This type of consent focuses on the victim’s attitudes and expressions as well as a number of other prescriptive conditions. There are three general families of prescriptive conditions: (i) freedom conditions (was the victim forced or coerced into the sexual encounter?); (ii) knowledge conditions (did the victim know exactly what they were consenting to or were they deceived or defrauded in some way?); and (iii) competence conditions (is the victim of the right age and do they have the mental capacity to consent?).

In the prescriptive sense, it is clear that the victim in the Valdez case did not consent to the sexual intercourse. She was forced into her factual attitudinal and expressive consent by the threat of violence. This brings us finally to Westen’s last concept of consent:

Prescriptively Imputed Consent: This type of consent uses normatively motivated legal fictions to impute consent to someone who did not otherwise meet the factual or prescriptive conditions of consent. This arises in three different ways: (i) constructed consent, which is where someone does not consent to X, but they consent to Y and consent to Y is said to include or incorporate consent to X; (ii) informed consent, which is where someone consents to a potential risk of X and so when X actually happens it is deemed to be their tough luck; and (iii) hypothetical consent, which is where someone temporarily lacks capacity to consent but consent is attributed to them because, if they had capacity, they would have consented to X.

Informed and hypothetical consent are not hugely relevant to the debate about sexual offences; they are more common in medicine. But constructed consent can play, and historically has played, a role in sexual offences. Westen argues that the infamous marital rape exemption (which held that a husband could never be guilty of raping his wife) involved constructed consent: the idea was that, in consenting to marriage, the wife had also irrevocably consented to sex with her husband. Obviously, we no longer accept the normative rationale underlying this type of imputed consent. (Note: I have some doubts about this example, simply because I am not sure whether a wife’s original consent to marriage was deemed to be important during the heyday of the marital exemption.)

I have mapped out the four concepts of consent below. This gives a better sense of the conceptual territory that Westen describes.






3. Conclusion
Conceptual clarifications of the sort undertaken by Westen can sometimes be more frustrating than illuminating. Nevertheless, I think there is some genuine utility to the four-part distinction. The main reason for this is that once you appreciate the distinctions you gain a better understanding of the problems that arise in the criminal law. Consent is central to the definition of all the major sexual offences. Ideally, we would want that concept to be understood in prescriptive terms. But in some jurisdictions the concept of consent is left incompletely or imperfectly defined. This creates confusion when juries (and judges) are asked to determine whether or not a victim consented to some sexual act. They may often interpret it in factual terms, when they ought to be interpreting it in prescriptive terms.

Wednesday, June 10, 2015

Voluntary Intoxication and Responsibility

(Previous entry - should probably be read to understand the context)

If you voluntarily consume alcohol and then go out and commit a criminal act, should you be held responsible for that act? Many people seem to think that you should. Indeed, within the criminal law, there is an oft-repeated slogan saying that “voluntary intoxication is no excuse”. In other words, someone cannot simply avoid criminal liability by intoxicating themselves prior to committing a criminal act. There is, however, some leeway in this. The criminal law distinguishes between crimes of basic and specific intent, and intoxication may undermine specific intent, but this still would not allow someone to completely escape liability. For example, murder is classed as a crime of specific intent, and so it may be possible to avoid a murder charge if you can prove you were so severely intoxicated at the time as to lack specific intent, but the lesser charge of manslaughter would still apply.

This all seems sensible, but it leads to some problems when dealing with sexual offences. Alcohol and intoxication play a significant role in many sexual encounters. Suppose Ann and Bob meet at a party, where they have both been drinking heavily. They seem to enjoy one another’s company, and Bob invites Ann back to his apartment. He asks her if she wants to have sex. She nods her head. They proceed to have sexual intercourse.

Many people might be inclined to view this as a perfectly innocent sexual encounter. But others are less sure. As I noted in the previous post, some people believe that it is impossible to consent to sex whilst voluntarily intoxicated. Nevertheless, many of the same people hold that it is possible to be held responsible for a crime you committed whilst voluntarily intoxicated. This means that, in the above example, Bob may be guilty of raping Ann (to approach it from the traditional male-to-female perspective). This looks kind of odd. Although there is no formal contradiction involved, it seems inconsistent to suggest that you can be responsible for doing X whilst voluntarily intoxicated, but that you cannot validly consent to X whilst voluntarily intoxicated.

Alan Wertheimer thinks that this inconsistency can be overcome. In his article, ‘Intoxicated Consent to Sexual Relations’ he argues that it is possible to hold someone responsible for some of the actions they perform whilst voluntarily intoxicated, and at the same time deny the validity of their consent to sex whilst voluntarily intoxicated. I’ll examine the first part of his argument in this post, i.e. why he thinks we are responsible for what we do whilst voluntarily intoxicated.


1. Two Traditional Arguments for Responsibility whilst Intoxicated
At the outset, it is worth remembering Wertheimer’s definition of intoxicated consent. He holds that intoxicated consent arises when someone takes a substance that causes them to act in a manner that is not consistent with their higher order preferences and desires. This is distinct from mere substance-affected consent, which arises when a substance is consumed that causes an action to be performed that would not otherwise have been performed, but which does nothing to disturb one’s higher order preferences.

Following this definition, there seem to be at least three ways in which someone could perform an act, or consent to something, whilst voluntarily intoxicated. First, they could deliberately cause themselves to become intoxicated because they wish to act in a manner that is not in keeping with their higher order preferences and desires. We could call this the Ulysses case since it is redolent of the story of Ulysses tying himself to the mast of his ship so that he could hear the Sirens’ song. There is probably a paradox involved in this case: if someone wishes to do something that is out of keeping with their higher order preferences and desires, does that not suggest that they are acting in accordance with an even higher-order preference? The Ulysses case should be contrasted with the reckless indifference case, in which someone becomes intoxicated while well aware of the risk that they might act in a (specific) manner that is out of keeping with their higher order preferences and desires, but chooses to run that risk anyway. It should also be contrasted with the unexpected outcome case, where someone does something whilst voluntarily intoxicated that was genuinely beyond the realm of reasonable expectation.




These three cases can be treated differently when it comes to assessing the responsibility claim. I suspect most people will feel that a person should be held responsible for actions performed whilst intoxicated in the Ulysses and reckless indifference cases. The unexpected outcome case is trickier. I would have more sympathy for someone in this predicament, but I think the outcome would really have to be beyond the realm of reasonable expectation.

Anyway, the important question is this: why do we think that people should be held responsible for (at least some of) the actions they perform whilst voluntarily intoxicated? Wertheimer identifies two arguments made by the legal theorist Heidi Hurd. The first is the reasoning argument:


  • (1) If one can appreciate and guide one’s conduct in accordance with moral reasons for action, then one is responsible for one’s actions.
  • (2) Voluntary intoxication never (or rarely) undermines one’s ability to appreciate and guide one’s conduct in accordance with moral reasons for action.
  • (3) Therefore, voluntary intoxication never (or rarely) undermines one’s responsibility for one’s actions.


This is hastily constructed, but it is based on R. Jay Wallace’s theory of responsibility, which holds that the capacity to grasp and appreciate moral obligations is central to moral responsibility.

The second argument from Hurd is the so-called incentive argument:


  • (4) We should not give people an incentive to commit criminal acts.
  • (5) If voluntary intoxication excused people from criminal responsibility, they would have an incentive to commit criminal acts.
  • (6) Therefore, we should not excuse people from criminal responsibility due to voluntary intoxication.


This argument is consequentialist in nature. It is based on the belief that if you excused criminal behaviour committed whilst the offender was voluntarily intoxicated, you would be effectively writing a blank cheque for future criminals. They would then know that all they had to do to absolve themselves of responsibility would be to get blindingly drunk prior to committing a criminal offence.


2. Criticisms of the Two Traditional Arguments
Are these two arguments sufficient to explain our willingness to hold intoxicated people responsible? Wertheimer has his doubts. There are two problems with the reasoning argument. First, while it may well be true that most intoxicated people have the ability to grasp and understand moral reasons for action, this doesn’t suffice to explain cases in which we are willing to hold them responsible even if they were completely unable to reason. Imagine a case. Bob drives to a party and gets blindingly drunk. His intoxication is such that he is no longer able to guide his conduct in accordance with moral reasons for action. He gets into his car and drives home. Along the way he mows down a pedestrian named Ann. She dies from her injuries the following morning. Do we hold Bob responsible for his actions? Wertheimer thinks we would, even though he may not have had the capacity required by Hurd and Wallace’s argument. (If you don’t feel the intuitive pull of that case, imagine an alternative one in which Bob deliberately chooses to get so drunk at T1 that he knows he will then drive home in a state that absolves him of responsibility under Hurd and Wallace’s account - is he responsible then?). The second problem with the reasoning argument is more serious. It actually has nothing to do with voluntary intoxication. On this account, all that matters is that the capacity for moral reasoning is intact (or not). If a person is involuntarily intoxicated, but still able to grasp and understand moral reasons for action, they would still be deemed responsible. How they got to their present state is irrelevant.

The incentive argument also has its problems. For one thing, it is not clear whether it concerns moral responsibility at all. As noted, the incentive argument is consequentialist in nature. It justifies holding someone responsible for their actions because of the future benefits of doing so. This is controversial. Although consequentialist theories of responsibility have found favour in some quarters, many think that responsibility is a necessarily backward-looking concept. In other words, we hold people responsible for what they have done, not because of what others might do in the future. Still, consequentialist considerations do feature fairly heavily in legal decisions about whether or not to impose legal liability on someone for what they have done. So perhaps we could distinguish between moral and legal approaches to responsibility and see the incentive argument as part of a general argument in favour of imposing liability on the voluntarily intoxicated.

If these two arguments are problematic, are there any others that might justify holding people responsible for what they do whilst voluntarily intoxicated?


3. Tracing and Intoxicated Responsibility
Wertheimer thinks there is a family of responsibility-related theories that would justify this practice. Classically, responsibility for one’s actions at a particular moment in time (T2) was said to require the satisfaction of two conditions: (i) a control condition, i.e. did you control your action at T2? and (ii) a knowledge condition, i.e. did you know what you were doing at T2? To these two conditions, modern responsibility theorists often add a third, a ‘tracing’ condition. This condition holds that you can be held responsible for an action at T2, even if you do not satisfy the two conditions just mentioned at T2, provided that your responsibility can be traced back to an earlier moment in time (T1). If at that earlier moment in time you were in control of and had knowledge of your actions, your responsibility could carry forward to T2.

But that’s just a very abstract sketch of the tracing condition. It needs a bit more content. Wertheimer identifies two variations on the tracing condition. The first can be characterised in the following manner:

Flow Through View: “If an agent has a fair opportunity at T1 to control or guide her behaviour at T2, then our moral response to her use of those opportunities at T1 flows through to her behaviour at T2, even if she is unable to guide or control her behaviour at T2” (Wertheimer 2000, 383).

This might sound like a repeat of what I just said about the abstract nature of the tracing condition, but there are some subtleties hidden within it. According to the flow through view, there is an intimate connection between the opportunity for control at T1 and the future act at T2. The opportunity for control at T1 must be such that it allowed for the prevention of the later act. Furthermore, according to the flow through view, moral responsibility is a diachronic phenomenon, not a static or synchronic phenomenon. Your responsibility for action at any particular moment in time is never completely dependent on how you acted at that moment in time; it is dependent on how you acted across connected moments in time.

The flow through view should be contrasted with the second variation on tracing, which does link responsibility to your characteristics at particular moments in time. More precisely, it views the agent acting at T1 as being distinguishable from the agent acting at T2:

The Dual Self View: If agent B1 chooses to become voluntarily intoxicated at T1, in such a manner that agent B2 acting at T2 is nonresponsible, then we may hold agent B1 responsible for the decision made at T1.

Here, we are holding someone responsible for what they did at the earlier time (which may have been causally necessary for what happened at T2), not for what they did at the later time. The responsibility does not flow through time, but we may still be justified in ascribing blame. For example, if their choice at T1 was motivated in the manner described in the Ulysses or reckless indifference cases above, then it seems like blame could attach to what they did at T1. Furthermore, that blame would have some link to the quality of the act performed at T2.

Wertheimer says he isn’t sure which of these two theories is correct, but he thinks that they are much better able to account for the belief that the voluntarily intoxicated are responsible than the reasoning or incentive arguments.

Of course, the ultimate question is how this links back into the debate about intoxicated consent. Some people think that intoxicated responsibility necessarily entails the validity of intoxicated consent. But Wertheimer doesn’t think so. I’ll delve into the consent-related material in the next post.

Sunday, June 7, 2015

Intoxicated Consent to Sexual Relations: A Map of Moral Claims


Consent is moral magic. It transforms an impermissible act into a permissible one. But deciding when and whether to respect a particular token or signal of consent is an ethically fraught business. Can children consent to medical treatment? Can adults with early stage dementia consent to give away all their earthly possessions? Is a smile or a nod sufficient for consent? Is it possible to consent to something by doing or saying nothing? Can you consent to have something done to you while you are asleep, if you provided the consent in writing in advance? Questions of this nature abound.

One of the most contentious of all these questions has to do with the correct attitude toward consent to sexual relations that occurs when one or more of the parties to a particular sexual encounter are voluntarily intoxicated. To take a typical and all-too-frequent case, suppose that Ann and Bob meet at a party. Ann has voluntarily consumed several alcoholic drinks. Nevertheless, she seems alert and aware of what is going on. Bob starts talking to her. She seems to enjoy his company and later on he asks her if she wishes to come back to his apartment. She says yes. When they get back they start kissing. She asks him if he has a condom. He says he does. They then engage in sexual intercourse.

Was this sexual activity morally permissible? That’s the question I wish to explore in the remainder of this post. Throughout, I will approach the topic from a heterosexual and gynocentric perspective. In other words, I will interpret it to mean ‘was it permissible for Bob to have sex with Ann?’ This makes sense given that the story says nothing about Bob’s level of intoxication, and it also makes sense given that this is the typical direction from which the issue is approached in law and popular culture. Sexual violence has historically been a predominantly male-to-female phenomenon.

To answer the permissibility question I’m going to enlist the help of Alan Wertheimer’s excellent article ‘Intoxicated Consent to Sexual Relations’. It is a long and thoughtful analysis of this issue, brimming with interesting arguments and insights, a real model of the conceptual-empirical analysis I like in contemporary philosophy. Unfortunately, there is so much to the article that there is no way I can cover it all in one blog post. Hence, I’m going to write a series of connected but standalone pieces on Wertheimer’s article. In the remainder of this post, I address two issues. First, I try to clarify exactly what it is that Wertheimer is interested in. And second (and probably more interestingly) I map the space of possible claims one could make about this topic.


1. What is intoxicated consent to sexual relations?
Put simply, the permissibility question that I asked at the outset has to do with the case in which a woman signals consent to sexual relations whilst voluntarily intoxicated. That is why the story I told had such a simple and relatively uncontroversial structure. For example, I said nothing about whether Ann woke up the following day and could not remember the sexual encounter. Nor did I suggest that Bob repeatedly offered her drinks during the course of the evening. Those kinds of details often feature in real-life cases, and they can make the moral calculations both easier and more complicated. But they are not relevant to this discussion. To appreciate this, three clarifications are in order.

First, this discussion is only about voluntary intoxication. It is not about involuntary intoxication. If Bob had plied Ann with alcohol, or forced her to consume it, prior to the sexual activity, the moral status of that activity would be much more straightforward. It would be clearly impermissible, since Bob could be said to have either deliberately or recklessly used the intoxicant to weaken Ann’s capacity for consent.

Second, this discussion assumes that there is some unambiguous signal of consent being given by the voluntarily intoxicated party. This is a significant assumption. I have written previously on the problems with setting standards in relation to the signals of consent. Do you need a clear verbal signal of willingness to continue with an act? Or is an absence of unwillingness sufficient? Should we adopt a ‘no means no’ standard of consent? Or should we go with a ‘yes means yes’ standard? These questions deserve attention, but they are being bracketed for present purposes. Also bracketed are any concerns relating to deception, fraud or coercion on the part of the other individual.

Third, and finally, this discussion is only interested in intoxicated consent. This should be distinguished from a related phenomenon, which Wertheimer refers to as substance-affected consent:

Substance-affected consent: Arises when A takes some substance and later consents to sexual activity, where, ‘but for’ their consumption of this substance, they would not have consented to the activity.

Intoxicated consent: Arises when A takes some substance and later consents to sexual activity, and the substance in question causes them to consent to an activity that is not consistent with their stable higher order preferences and desires.

Substance-affected consent is, presumably, common and oftentimes uncontroversial. If Ann had a headache and took some headache pills before going to a party where she met Bob, with whom she really wanted to have sex, then it may be that her consumption of the headache pills was an integral part of the causal chain leading to her subsequent consent to sexual activity. But it doesn’t seem like her subsequent consent is ethically problematic. It is only when the consumption of the substance tips over into intoxication that things become problematic. In the intoxicated state, Ann may consent to something that is out of keeping with what she really wants. The borderline between substance-affected consent and intoxicated consent might be difficult to determine in many real-life cases, but again we are bracketing that difficulty in this analysis. We’re assuming that we are dealing with cases in which the consent is of the intoxicated variety.


2. Intoxicated Consent and the Space of Connected Moral Claims
Now that we have clarified the phenomenon of interest, we can proceed to consider the different possible claims one could make about the ethical status of intoxicated consent. This is where Wertheimer makes one of his more useful contributions to the debate. He identifies five interconnected moral claims that are often bandied about in the literature:


The Impermissibility Claim: It is impermissible for B to have sex with A whilst A is voluntarily intoxicated, even if A has signalled consent. (There is also the complementary ‘permissibility claim’.)

The Intoxication Claim: Intoxication necessarily negatives or undermines consent to sexual activity, i.e. the mere fact of A’s voluntary intoxication is sufficient to undermine A’s consent.

The Responsibility Claim: A is responsible for acts they perform whilst voluntarily intoxicated.

The Responsibility-Entails-Validity Claim: If A is responsible for the acts they perform whilst voluntarily intoxicated, it follows that A’s consent to sexual activity whilst voluntarily intoxicated is valid. (This is given the abbreviated name of ‘the validity claim’ throughout Wertheimer’s article.)

The Consistency Claim: If we are going to hold A responsible for the acts they perform whilst voluntarily intoxicated, we should also, in order to be consistent, deem A’s consent whilst voluntarily intoxicated to be morally valid.


Each of these claims has its defenders. Ultimately, we are most interested in the truth or falsity of the impermissibility claim. But the truth or falsity of that claim is partly dependent on the others. Thus, we need to figure out what our attitude ought to be in relation to those other claims. Some people accept the intoxication claim on the grounds that they think intoxication necessarily compromises the capacity to consent. Those who accept that claim must also accept the impermissibility claim. That is to say, the intoxication claim entails the impermissibility claim.

Some people accept the responsibility claim. Indeed, this is the mainstream position in the criminal law. It is widely agreed that voluntary intoxication does not excuse someone from legal responsibility for their actions. The responsibility claim entails the falsity of the intoxication claim because it maintains that intoxication does not completely undermine the capacity for moral accountability. Nevertheless, the responsibility claim does not, in and of itself, entail the permissibility claim. This is because it may be morally appropriate to differentiate between different types of intoxicated act. Thus, holding A responsible for petty theft might be alright, but validating their consent to sexual intercourse might not. This is why the validity claim is needed. When that claim is combined with the responsibility claim, they jointly contradict the impermissibility claim. The consistency claim is then a background consideration, which provides some support for the validity claim. This is because those who support the responsibility claim and the consistency claim will be more inclined to accept the validity claim, and hence more inclined to reject the impermissibility claim.
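The entailments just described can also be written out in a compact propositional form. The letter abbreviations are my own shorthand, not Wertheimer’s: I is the intoxication claim, R the responsibility claim, V the responsibility-entails-validity claim, C the consistency claim, and IMP the impermissibility claim.

```latex
\begin{align*}
  I         &\Rightarrow \mathit{IMP}      && \text{(if intoxication undermines consent, the sex is impermissible)}\\
  R         &\Rightarrow \neg I            && \text{(responsibility presupposes intoxication does not destroy accountability)}\\
  R \land V &\Rightarrow \neg \mathit{IMP} && \text{(responsibility plus validity jointly contradict impermissibility)}
\end{align*}
% C is not a strict entailment; it merely lends support to V.
```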

This might sound rather convoluted, but the relationships between the claims are illustrated by the diagram below.




Wertheimer’s own view is that the responsibility claim is correct (and hence the intoxication claim should be rejected). Nevertheless, he is more iffy about the validity claim. It seems wrong to suggest that if someone is voluntarily intoxicated they are somehow morally responsible for what happens to them. He thinks that the permissibility of intoxicated consent depends on a whole range of other factors that are imperfectly captured by this simple map of the logical space of moral claims. His view is an interesting one, and I hope to explore some more of its details in future posts. That’s all for now.