Friday, March 31, 2017

Robot Rights: Intelligent Machines (Panel Discussion)





I participated in a debate/panel discussion about robot rights at the Science Gallery (Trinity College Dublin) on 29 March 2017. A video from the event is above. Here's the description from the organisers:

What if robots were truly intelligent and fully self aware? Would we give them equal rights and the same protection under the law as we provide ourselves? Should we? But if a machine can think, decide and act on its own volition, if it can be harmed or held responsible for its actions, should we stop treating it like property and start treating it more like a person with rights?

Moderated by Lilian Alweiss from the Philosophy Department at Trinity College Dublin, the panellists include Conor McGinn, Department of Mechanical & Manufacturing Engineering, Trinity College Dublin; John Danaher, Law Department, NUI Galway; and Eoghan O'Mahoney from McCann FitzGerald.

Join us as we explore these issues as part of our HUMANS NEED NOT APPLY exhibition with a panel discussion featuring leaders in the fields of AI, ethics and law.

Tuesday, March 28, 2017

BONUS EPISODE - Pip Thornton on linguistic capitalism, Google's ad empire, fake news and poetry



[Note: This was previously posted on my Algocracy project blog; I'm cross-posting it here now. The audio quality isn't perfect but the content is very interesting. It is a talk by Pip Thornton, the (former) Research Assistant on the project].

My post as research assistant on the Algocracy & Transhumanism project at NUIG has come to an end. I have really enjoyed the five months I have spent here in Galway - I have learned a great deal from the workshops I have been involved in, the podcasts I have edited, the background research I have been doing for John on the project, and also from the many amazing people I have met both in and outside the university.

I have also had the opportunity to present my own research to a wide audience and most recently gave a talk on behalf of the Technology and Governance research cluster entitled A Critique of Linguistic Capitalism (and an artistic intervention) as part of a seminar series organised by the Whitaker Institute's Ideas Forum, which I managed to record.

Part of my research involves using poetry to critique linguistic capitalism and the way language is both written and read in an age of algorithmic reproduction. For the talk I invited Galway poet Rita Ann Higgins to help me explore the differing 'value' of words, so the talk includes Rita Ann reciting an extract from her award-winning poem Our Killer City, and my own imagining of what the poem 'sounds like' - or is worth - to Google. The argument central to my thesis is that the power held by the tech giant Google, as it mediates, manipulates and extracts economic value from the language (or more accurately the decontextualised linguistic data) which flows through its search, communication and advertising systems, needs both transparency and strong critique. Words are auctioned off to the highest bidder, and become little more than tools in the creation of advertising revenue. But there are significant side effects, which can be both linguistic and political. Fake news sites are big business for advertisers and Google, but also infect the wider discourse as they spread through social media networks and national consciousness. One of the big questions I am now starting to ask is just how resilient language is to this neoliberal infusion, and what it could mean politically. As the value of language shifts from conveyor of meaning to conveyor of capital, how long will it be before the linguistic bubble bursts?

You can download it HERE or listen below:



Track Notes



  • 0:00 - Introduction and background 
  • 4:30 - Google Search & autocomplete - digital language and semantic escorts 
  • 6:20 - Linguistic Capitalism and Google AdWords - the wisdom of a linguistic marketplace?
  • 9:30 - Google Ad Grants - politicising free ads: the Redirect Method, A Clockwork Orange and the neoliberal logic of countering extremism via Google search 
  • 16:00 - Google AdSense - fake news sites, click-bait and ad revenue  -  from Chicago ballot boxes to Macedonia - the ads are real but the news is fake 
  • 20:35 - Interventions #1 - combating AdSense (and Breitbart News) - the Sleeping Giants Twitter campaign 
  • 23:00 - Interventions #2 - Gmail and the American Psycho experiment 
  • 25:30 - Interventions #3 - my own {poem}.py project - critiquing AdWords using poetry, cryptography and a second hand receipt printer 
  • 30:00 - special guest poet Rita Ann Higgins reciting Our Killer City 
  • 33:30 - Conclusions - a manifestation of postmodernism? sub-prime language - when does the bubble burst? commodified words as the master's tools - problems of method


Relevant Links


Monday, March 20, 2017

Abortion and the Violinist Thought Experiment




Here is a simple argument against abortion:


  • (1) If an entity (X) has a right to life, it is, ceteris paribus, not permissible to terminate that entity’s existence.
  • (2) The foetus has a right to life.
  • (3) Therefore, it is not permissible to kill the foetus or terminate its existence.


Defenders of abortion will criticise at least one of the premises of this argument. Many will challenge premise (2). They will argue that the foetus is not a person and hence does not have a right to life. Anti-abortion advocates will respond by saying that it is a person or that it has some other status that gives it a right to life. This gets us into some abstruse questions on the metaphysics of personhood and moral status.

The other pro-choice strategy is to challenge premise (1) and argue that there are exceptions to the principle in question. Indeed, exceptions seem to abound. There are situations in which one right to life must be balanced against another and in those situations it is permissible for one individual to kill another. This is the typical case of self-defence: someone immediately and credibly threatens to end your life and the only way to neutralise that threat is to end theirs. Killing them is permissible in these circumstances. A pro-choice advocate might argue that there are some circumstances in which pregnancy is analogous to the typical case of self-defence, i.e. there are cases where the foetus poses an immediate and credible threat to the life of the mother and the only way to neutralise that threat is to end the life of the foetus.

The trickier scenario is where the mother’s life is unthreatened. In those cases, if the foetus has a right to life, anti-abortionists will argue that the following duty holds:

Gestational duty: If a woman’s life is unthreatened by her being pregnant, she has a duty to carry the foetus to term.

The rationale for this is that the woman’s right to control her body cannot trump the foetus’ right to life. In the moral pecking order, the right to life ranks higher than the right to do with one’s body as one pleases.

It is precisely this understanding of the gestational duty that Judith Jarvis Thomson challenged in her famous 1971 article ‘A Defense of Abortion’. She did so by way of some ingenious thought experiments featuring sick violinists, expanding babies and floating ‘people-seeds’. Much has been written about those thought experiments in the intervening years. I want to take a look at some recent criticism and commentary from John Martin Fischer. He tries to show that Thomson’s thought experiments don’t provide as much guidance for the typical case of pregnancy as we initially assume, but this, in turn, does not provide succour for the opponents of abortion.

I’ll divide my discussion up over two posts. In this post, I’ll look at Fischer’s analysis of the Violinist thought experiment. In the next one, I’ll look at his analysis of the ‘people seeds’ thought experiment.


1. The Violinist Thought Experiment
The most famous thought experiment from Thomson’s article is the one about the violinist. Even if you know nothing about the broader abortion debate, you have probably come across this thought experiment. Here it is in all its original glory:

The Violinist: ‘You wake up in the morning and find yourself back to back in bed with an unconscious violinist. A famous unconscious violinist. He has been found to have a fatal kidney ailment, and the Society of Music Lovers has canvassed all the available medical records and found that you alone have the right blood type to help. They have therefore kidnapped you, and last night the violinist’s circulatory system was plugged into yours, so that your kidneys can be used to extract poisons from his blood as well as your own. The director of the hospital now tells you, “Look, we’re sorry the Society of Music Lovers did this to you — we would never have permitted it if we had known. But still, they did it, and the violinist is now plugged into you. To unplug you would be to kill him. But never mind, it’s only for nine months. By then he will have recovered from his ailment, and can safely be unplugged from you.”’ (1971: 132)

Do you have a duty to remain plugged into the violinist? Thomson argues that you don’t; that intuitively, in this case, it is permissible to unplug yourself from the violinist. That doesn’t mean we would praise you for doing it — we might think it is morally better for you to stay plugged in — but it does mean that we don’t think you are blameworthy for unplugging. In this case, your right to control your own body trumps the violinist’s right to life.

Where does that get us? The argument is that the case of the violinist is very similar to the case of pregnancy resulting from rape. In both cases you are involuntarily placed in a position whereby somebody else’s life is dependent on being attached to your body for nine months. By analogy, if your right to control your own body trumps the violinist’s right to life, it will also trump the foetus’ right to life:


  • (4) In the violinist case, you have no duty to stay plugged into the violinist (i.e. your right to control your own body trumps his right to life).
  • (5) Pregnancy resulting from rape is similar to the violinist case in all important respects.
  • (6) Therefore, probably, you have no duty to carry the foetus to term in the case of pregnancy resulting from rape (i.e. your right to control your own body trumps the foetus’ right to life).


Since it will be useful for later purposes, I’ve tried to map the basic logic of this argument from analogy in the diagram below. The diagram is saying that the two cases are sufficiently similar so that it is reasonable to suppose that the moral principle that applies to the first case carries over to the second.



2. Fischer’s Criticism of the Violinist Thought Experiment
In his article, ‘Abortion and Ownership’, Fischer challenges Thomson’s intuitive reaction to The Violinist. His argumentative strategy is subtle and interesting. He builds up a chain of counter-analogies (i.e. analogies in which the opposite principle applies) and argues that they are sufficient to cast doubt on the conclusion that your right to control your own body trumps the violinist’s right to life.

He starts with a thought experiment from Joel Feinberg:

Cabin Case 1: “Suppose that you are on a backpacking trip in the high mountain country when an unanticipated blizzard strikes the area with such ferocity that your life is imperiled. Fortunately, you stumble onto an unoccupied cabin, locked and boarded up for the winter, clearly somebody else’s private property. You smash in a window, enter, and huddle in a corner for three days until the storm abates. During this period you help yourself to your unknown benefactor’s food supply and burn his wooden furniture in the fireplace to keep warm.” (Feinberg 1978, 102)

Feinberg thinks that in this case you have a right to break into the cabin and use the available resources. The problem is that this clearly violates the cabin-owner’s right to control their property. Still, the fact that you are justified in violating that right tells us something interesting. It tells us that, in this scenario, the right to life trumps the right to control one’s own property.

So what? The right of the cabin-owner to control his/her property is very different from your right to control your body (in the case of the violinist and pregnancy-from-rape). For one thing, the violation in the case of the cabin owner is short-lived, lasting only three days until the storm abates. Furthermore, it requires no immediate interference with their enjoyment of the property or with their body. We are explicitly told that the cabin is unoccupied at the time. So, on an initial glance, it doesn’t seem like Cabin Case 1 tells us anything interesting about abortion.

Fischer begs to differ. He tries to construct a series of thought experiments that bridge the gap between the Cabin Case 1 and The Violinist. He does so by first imagining a case in which the property-owner is present at the time of the interference and in which the interference will continue for at least nine months:

Cabin Case 2: "You have secured a cabin in an extremely remote and inaccessible place in the mountains. You wish to be alone; you have enough supplies for yourself, and also some extras in case of an emergency. Unfortunately, a very evil man has kidnapped an innocent person and [left] him to die in the desolate mountain country near your cabin. The innocent person wanders for hours and finally happens upon your cabin…You can radio for help, but because of the remoteness and inaccessibility of your cabin and the relatively primitive technology of the country in which it is located, the rescue party will require nine months to reach your cabin…You can let the innocent stranger into your cabin and provide food and shelter until the rescue party arrives in nine months, or you can forcibly prevent him from entering your cabin and thus cause his death (or perhaps allow him to die)." (Fischer 1991, 6)

Fischer argues that, intuitively, in this case the innocent person still has the right to use your property and emergency resources and you have a duty of beneficence to them. In other words, their right to life trumps your right to control and use your property. Of course, a fan of Thomson’s original thought experiment might still resist this by arguing that the rights violation in this second Cabin Case is different because it does not involve any direct bodily interference. So Fischer comes up with a third variation that involves precisely that:

Cabin Case 3: The same scenario as Cabin Case 2, except that the innocent person is tiny and injured and would need to be carried around on your back for the nine months. You are physically capable of doing this.

Fischer argues that the intuition doesn’t change in this case. He thinks we still have a duty of beneficence to the innocent stranger, despite the fact that it involves a nine-month interference with our right to control our property and our bodies. The right to life still trumps both. This is important because Cabin Case 3 is, according to Fischer, very similar to the Violinist.

What Fischer is arguing, then, is sketched in the diagram below. He is arguing that the principle that applies in Cabin Case 1 carries over to Cabin Case 3 and that there is no relevant moral difference between Cabin Case 3 and the Violinist. Thomson’s original argument is, thereby, undermined.



For what it’s worth, I’m not entirely convinced by this line of reasoning. I don’t quite share Fischer’s intuition about Cabin Case 3. I think that if you really imagined the inconvenience and risk that would be involved in carrying another person around on your back for nine months you might not be so quick to imply a duty of beneficence. That reveals one of the big problems with this debate: esoteric thought experiments can generate different intuitive reactions.


3. What does this mean for abortion?
Let’s suppose Fischer is correct in his reasoning. What follows? One thing that follows is that the right to life trumps the right to control one’s body in the case of the Violinist. But does it thereby follow that the right to life trumps the right to control one’s body in the case of pregnancy from rape? Not necessarily. Fischer argues that there could be important differences between the two scenarios, overlooked in Thomson’s original discussion, that warrant a different conclusion in the rape scenario. A few examples spring to mind.

In the case of pregnancy resulting from rape, both the woman and the rapist will have a genetic link with the resulting child and will be its natural parents. The woman is likely to have some natural affection and feelings of obligation toward the child, but this may be tempered by the fact that the child (innocent and all as it is) is a potential reminder (trigger) of the trauma of the rape that led to its existence. The woman may give the child up for adoption — and thereby absolve herself of legal duties toward it — but this may not dissolve any natural feelings of affection and obligation. Furthermore, the child may be curious about its biological parentage in later years and may seek a relationship with its natural mother or father (it may need to do so because it requires information about its genetic lineage). All of which is to say that the relationship between the mother and child is very different from the relationship between you and the violinist or you and the tiny innocent person you have to carry on your back. Those relationships normatively and naturally dissolve after the nine-month period of dependency. This is not true in the case of the mother and her offspring. The interference with her rights lingers.

These differences may be sufficient to warrant a different conclusion in the case of pregnancy resulting from rape. But this is of little advantage to the pro-choice advocate, for it says nothing about other pregnancies. There are critics of abortion who are willing to concede that it should be an option in cases of rape. They argue that this doesn’t affect the gestational duty in the larger range of cases where pregnancy results from consensual sexual intercourse. That’s where Thomson’s other thought experiment (People Seeds) comes into play. I’ll look at that thought experiment, along with Fischer’s analysis of it, in the next post.

Tuesday, March 14, 2017

How to Plug the Robot Responsibility Gap




Killer robots. You have probably heard about them. You may also have heard that there is a campaign to stop them. One of the main arguments that proponents of the campaign make is that they will create responsibility gaps in military operations. The problem is twofold: (i) the robots themselves will not be proper subjects of responsibility ascriptions; and (ii) as they gain autonomy, there is more separation between what they do and the acts of the commanding officers or developers who allowed their use, and so less ground for holding these people responsible for what the robots do. A responsibility gap opens up.

The classic statement of this ‘responsibility gap’ argument comes from Robert Sparrow (2007, 74-75):

…the more autonomous these systems become, the less it will be possible to properly hold those who designed them or ordered their use responsible for their actions. Yet the impossibility of punishing the machine means that we cannot hold the machine responsible. We can insist that the officer who orders their use be held responsible for their actions, but only at the cost of allowing that they should sometimes be held entirely responsible for actions over which they had no control. For the foreseeable future then, the deployment of weapon systems controlled by artificial intelligences in warfare is therefore unfair either to potential casualties in the theatre of war, or to the officer who will be held responsible for their use.

This argument has been debated a lot since Sparrow first propounded it. What is often missing from those debates is some application of the legal doctrines of responsibility. Law has long dealt with analogous scenarios — e.g. people directing the actions of others to nefarious ends — and has developed a number of doctrines that plug the potential responsibility gaps that arise in these scenarios. What’s more, legal theorists and philosophers have long analysed the moral appropriateness of these doctrines, highlighting their weaknesses, and suggesting reforms that bring them into closer alignment with our intuitions of justice. Deeper engagement with these legal discussions could move the debate on killer robots and responsibility gaps forward.

Fortunately, some legal theorists have stepped up to the plate. Neha Jain is one example. In her recent paper ‘Autonomous weapons systems: new frameworks for individual responsibility’, she provides a thorough overview of the legal doctrines that could be used to plug the responsibility gap. There is a lot of insight to be gleaned from this paper, and I want to run through its main arguments in this post.


1. What is an autonomous weapons system anyway?

To get things started we need a sharper understanding of robot autonomy and the responsibility gap. We’ll begin with the latter. The typical scenario imagined by proponents of the gap is one in which some military officer or commander has authorised the battlefield use of an autonomous weapons system (AWS), and that AWS has then used its lethal firepower to commit some act that, if it had been performed by a human combatant, would almost certainly be deemed criminal (or contrary to the laws of war).

There are two responsibility gaps that arise in this typical scenario. There is the gap between the robot and the criminal/illegal outcome. This gap arises because the robot cannot be a fitting subject for attributions of responsibility. I looked at the arguments that can be made in favour of this view before. It may be possible, one day, to create a robot that meets all the criteria for moral personhood, but this is not going to happen for a long time, and there may be reason to think that we would never take claims of robot responsibility seriously. The other gap arises because there is some normative distance between what the AWS did and the authorisation of the officer or commander. The argument here would be that the AWS did something that was not foreseeable or foreseen by the officer/commander, or acted beyond their control or authorisation. Thus, they cannot be fairly held responsible for what the robot did.

I have tried to illustrate this typical scenario, and the two responsibility gaps associated with it, in the diagram below. We will be focusing on the gap between the officer/commander and the robot for the remainder of this post.



As you can see, the credibility of the responsibility gaps hinges on how autonomous the robots really are. This prompts the question: what do we mean when we ascribe ‘autonomy’ to a robot? There are two competing views. The first describes robot autonomy as being essentially analogous to human autonomy. This is called ‘strong autonomy’ in Jain’s paper:

Strong Robot Autonomy: A robotic system is strongly autonomous if it is ‘capable of acting for reasons that are internal to it and in light of its own experience’ (Jain 2016, 304).

If a robot has this type of autonomy it is, effectively, a moral agent, though perhaps not a responsible moral agent due to certain incapacities (more on this below). A responsibility gap then arises between a commander/officer and a strongly autonomous robot in much the same way that a responsibility gap arises between two human beings.

A second school of thought rejects this analogy-based approach to robot autonomy, arguing that when roboticists describe a system as ‘autonomous’ they are using the term in a distinct, non-analogous fashion. Jain refers to this as emergent autonomy:

Emergent Robot Autonomy: A robotic system is emergently autonomous if its behaviour is dependent on ‘sensor data (which can be unpredictable) and on stochastic (probability-based) reasoning that is used for learning and error correction’ (Jain 2016, 305).

This type of autonomy has more to do with the dynamic and adaptive capabilities of the robot, than with its powers of moral reasoning and its capacity for ‘free’ will. The robot is autonomous if it can be deployed in a variety of environments and can respond to the contingent variables in those environments in an adaptive manner. Emergent autonomy creates a responsibility gap because the behaviour of the robot is unpredictable and unforeseeable.
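To make the contrast concrete, here is a minimal sketch in Python of why emergent autonomy frustrates prediction. The sensor fields, weights and action labels are invented for illustration only (they are not drawn from any real system); the point is simply that the output depends on run-time sensor data and a probability-based policy, neither of which the deploying commander can fully anticipate in advance.

```python
import random

# A toy 'emergently autonomous' decision policy. Everything here is illustrative;
# a real system would involve learned models and far richer sensor data.

def read_sensors():
    # Sensor data is environment-dependent and unpredictable at deployment time.
    return {"heat": random.random(), "movement": random.random()}

def stochastic_policy(sensors, weights):
    # Probability-based reasoning: the same inputs can yield different outputs,
    # and the weights themselves would drift as the system learns from feedback.
    score = weights["heat"] * sensors["heat"] + weights["movement"] * sensors["movement"]
    return "engage" if random.random() < score else "hold"

weights = {"heat": 0.6, "movement": 0.4}  # learned parameters, not hand-authorised rules

for step in range(3):
    print(step, stochastic_policy(read_sensors(), weights))
    # The commander who deployed the system never saw these particular sensor
    # readings and could not have predicted this particular sequence of actions.
```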

Jain’s goal is to identify legal doctrines that can be used to plug the responsibility gap no matter what type of autonomy we ascribe to the robotic system.


2. Plugging the Gap in the Case of Strong Autonomy
Suppose a robotic system is strongly autonomous. Does this mean that the officer/commander that deployed the system cannot be held responsible for what it does? No; in fact, legal systems have long dealt with this problem and have developed two distinct doctrines for handling it. The first is the doctrine of innocent agency or perpetration; the second is the doctrine of command responsibility.



The doctrine of innocent agency or perpetration is likely to be less familiar. It describes a scenario in which one human being (the principal) uses another human being (or, as we will see, a human-run organisational apparatus) to commit a criminal act on their behalf. Consider the following example:

Poisoning-via-child: Grace has grown tired of her husband. She wants to poison him. But she doesn’t want to administer the lethal dose herself. She mixes the poison in with sugar and she asks her ten-year-old son to ‘put some sugar in daddy’s tea’. He dutifully does so.

In this example, Grace has used another human being to commit a criminal act on her behalf. Clearly that human being is innocent — he did not know what he was really doing — so it would be unfair or inappropriate to hold him responsible (contrast with a hypothetical case in which Grace hired a hitman to do her bidding). Common law systems allow for Grace to be held responsible for the crime through the doctrine of innocent agency. This applies whenever one human being uses another human being with some dispositional or circumstantial incapacity for responsibility to perform a criminal act on their behalf. The classic cases involve taking advantage of another person’s mental illness, ignorance or juvenility.

Similarly, but perhaps more interestingly, there is the civil law doctrine of perpetration. This doctrine covers cases in which one individual (the indirect perpetrator) gets another (the direct perpetrator) to commit a criminal act on their behalf. The indirect perpetrator uses the direct perpetrator as a tool and hence the direct perpetrator must be at some sort of disadvantage or deficit relative to the indirect perpetrator. The German Criminal Code sets this out in Section 25 and has some interesting features:

Section 25 of the Strafgesetzbuch: The Hintermann is the indirect perpetrator. He or she uses a Vordermann as the direct perpetrator. The Hintermann possesses Handlungsherrschaft (act hegemony) and exercises Willensherrschaft (domination) over the will of the Vordermann.

Three main types of Willensherrschaft are recognised: (i) coercion; (ii) taking advantage of a mistake made by the Vordermann; or (iii) possessing control over some organisational apparatus (Organisationsherrschaft). The latter is particularly interesting because it allows us to imagine a case in which the indirect perpetrator uses some bureaucratic agency to carry out their will. It is also interesting because Article 25 of the Rome Statute establishing the International Criminal Court recognises the doctrine of perpetration, and the ICC has held in its decisions that it covers perpetration via an organisational apparatus.

Let’s now bring it back to the issue at hand. How do these doctrines apply to killer robots and the responsibility gap? The answer should be obvious enough. If robots possess the strong form of autonomy, but they have some deficit that prevents them from being responsible moral agents, then they are, in effect, like the innocent agents or direct perpetrators. Their human officers/commanders can be held responsible for what they do, through the doctrine of perpetration, provided those officers/commanders intended for them to do what they did, or knew that they would do what they did.

The problem with this, however, is that it doesn’t cover scenarios in which the robot acts outside or beyond the authorisation of the officer/commander. To plug the gap in those cases you would probably need the doctrine of command responsibility. This is a better-known doctrine, though it has been controversial. As Jain describes it, there are three basic features to command responsibility:

Command Responsibility: A doctrine allowing for ascriptions of responsibility in cases where (a) there is a superior-subordinate relationship where the superior has effective control over the subordinate; (b) the superior knew or had reason to know (or should have known) of the subordinates’ crimes and (c) the superior failed to control, prevent or punish the commission of the offences.

Command responsibility covers both military and civilian commanders, though it is usually applied more strictly in the case of military commanders. Civilian commanders must have known of the actions of the subordinates; military commanders can be held responsible for failing to know when they should have known (a so-called ‘negligence standard’).

Command responsibility is well-recognised in international law and has been enshrined in Article 28 of the Rome Statute on the International Criminal Court. For it to apply, there must be a causal connection between what the superior did (or failed to do) and the actions of the subordinates. There must also be some temporal coincidence between the superior’s control and the subordinates’ actions.
Again, we can see easily enough how this could apply to the case of the strongly autonomous robot. The commander that deploys that robot could be held responsible for what it does if they have effective control over the robot, if they knew (or ought to have known) that it was doing something illegal, and if they failed to intervene and stop it from happening.
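Purely as an illustration of the doctrine's logical structure (the type and field names below are my own, and real legal analysis is obviously not reducible to booleans), the three conditions can be read as a simple conjunction:

```python
from dataclasses import dataclass

@dataclass
class SuperiorAssessment:
    # Illustrative flags only, one per condition of the doctrine as described above.
    effective_control: bool            # (a) superior-subordinate relationship with effective control
    knew_or_should_have_known: bool    # (b) actual or constructive knowledge of the offences
    failed_to_prevent_or_punish: bool  # (c) failure to control, prevent or punish

def command_responsibility(s: SuperiorAssessment) -> bool:
    # The doctrine applies only when all three conditions hold together.
    return (s.effective_control
            and s.knew_or_should_have_known
            and s.failed_to_prevent_or_punish)

# A commander who deploys an AWS, ought to have known what it was doing,
# and fails to intervene:
print(command_responsibility(SuperiorAssessment(True, True, True)))   # True
# The same commander where the robot's behaviour was genuinely unforeseeable:
print(command_responsibility(SuperiorAssessment(True, False, True)))  # False
```

As the next paragraph suggests, it is the knowledge condition (b) that becomes doubtful once the robot's behaviour is unpredictable.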

The problem with this, however, is that it assumes the robot acts in a rational and predictable manner — that its actions are ones that the commander could have known about and, perhaps, should have known about. If the robot is strongly autonomous, that might hold true; but if the robot is emergently autonomous, it might not.


3. Plugging the Gap in the Case of Emergent Autonomy
So we come to the case of emergent autonomy. Recall, the challenge here is that the robot behaves in a dynamic and adaptive manner. It responds to its environment in a complex and unpredictable way. The way in which it adapts and responds may be quite opaque to its human commanders (and even its developers, if it relies on certain machine learning tools) and so they will be less willing and less able to second guess its judgments.

This creates serious problems when it comes to plugging the responsibility gap. Although we could imagine using the doctrines of perpetration and/or command responsibility once again, we would quickly be forced to ask whether it was right and proper to do so. The critical questions will relate to the mental element required by both doctrines. I was a little sketchy about this in the previous section. I need to be clearer now.

In criminal law, responsibility depends on satisfying certain mens rea (mental element) conditions for an offence. In other words, in order to be held responsible you must have intended, known, or been reckless/negligent with respect to some fact or other. In the case of murder, for example, you must have intended to kill or cause grievous bodily harm to another person. In the case of manslaughter (a lesser offence) you must have been reckless (or in some cases grossly negligent) with respect to the chances that your action might cause another’s death.

If we want to apply doctrines like command responsibility to the case of an emergently autonomous robot, we will have to do so via something like the recklessness or negligence mens rea standards. The traditional application of the perpetration doctrine does not allow for this. The principal or Hintermann must have intention or knowledge with respect to the elements of the offence committed by the Vordermann. The command responsibility doctrine does allow for the use of recklessness and negligence. In the case of civilian commanders, a recklessness mental element is required; in the case of military commanders, a negligence standard is allowed. So if we wanted to apply perpetration to emergently autonomous robots, we would have to lower the mens rea standard.



Even if we did that it might be difficult to plug the gap. Consider recklessness first. There is no uniform agreement on what this mental element entails. The uncontroversial part of it is that in order to be reckless one must have recognised and disregarded a substantial risk that the criminal act would occur. The controversy arises over the standards by which we assess whether there was a consciously disregarded substantial risk. Must the person whose conduct led to the criminal act have recognised the risk as substantial? Or must he/she simply have recognised a risk, leaving it up to the rest of us to decide whether the risk was substantial or not? It makes a difference. Some people might have different views on what kinds of risks are substantial. Military commanders, for instance, might have very different standards from civilian commanders or members of the general public. What we perceive to be a substantial risk might be par for the course for them.

There is also disagreement as to whether the defendant must consciously recognise the specific type of harm that occurred or whether it is enough that they recognised a general category of harm into which the specific harm fits. So, in the case of a military operation gone awry, must the commander have recognised the general risk of collateral damage or the specific risk that a particular, identified group of people would be collateral damage? Again, it makes a big difference. If it is the more general category that must be recognised and disregarded, it will be easier to argue that commanders are reckless.

Similar considerations arise in the case of negligence. Negligence covers situations where risks were not consciously recognised and disregarded but ought to have been. It is all about standards of care and deviations therefrom. What would the reasonable person or, in the case of professionals, the reasonable professional have foreseen? Would the reasonable military commander have foreseen the risk of an AWS doing something untoward? What if it is completely unprecedented?

It seems obvious enough that the reasonable military commander must always foresee some risk when it comes to the use of AWSs. Military operations always carry some risk and AWSs are lethal weapons. But should that be enough for them to fall under the negligence standard? If we make it very easy for commanders to be held responsible, it could have a chilling effect on both the use and development of AWSs.

That might be welcomed by the Campaign against Killer Robots, but not everyone will be so keen. They will say that there are potential benefits to this technology (think about the arguments made in favour of self-driving cars) and that setting the mens rea standard too low will cut us off from these benefits.

Anyway, that’s it for this post.

Thursday, March 9, 2017

TEDx Talk: Symbols and Consequences in the Sex Robot Debate




The video from the TEDx talk I did last month is now available for your viewing pleasure. A text version is available here. Some people worry about the symbolic meaning of sex robots and their consequences for society. I argue that these worries may be misplaced.


Wednesday, March 8, 2017

Virtual Sexual Assault: A Classificatory Scheme


Party scene from Second Life


In 1993, Julian Dibbell wrote an article in The Village Voice describing the world’s first virtual rape. It took place in a virtual world called LambdaMOO, which still exists to this day. It is a text-based virtual environment. People in LambdaMOO create virtual avatars (onscreen ‘nicknames’) and interact with one another through textual descriptions. Dibbell’s article described an incident in which one character (Mr. Bungle) used a “voodoo doll” program to take control of two other users’ avatars and force them to engage in sexual acts.

In 2003, a similar incident took place in Second Life. Second Life is a well-known virtual world. It is visual rather than textual. People create virtual avatars that can interact with other users’ avatars in a reasonably detailed virtual environment. In 2007, the Belgian Federal Police announced that they would be investigating a ‘virtual rape’ incident that took place in Second Life back in 2003. Little is known about what actually happened, but taking control of another character’s avatar and forcing it to engage in sexual acts was not unheard of in Second Life.

More recently, in October 2016 to be precise, the journalist Jordan Belamire reported how she had been sexually assaulted while playing the VR game QuiVR, using the HTC Vive. The HTC Vive (for those that don’t know) is an immersive VR system. Users don a headset that puts a virtual environment into their visual field. The user interacts with that environment from a first person perspective. QuiVR is an archery game where players fight off marauding zombies. It can be played online with multiple users. Players appear in a disembodied form as a floating helmet and pair of hands. The only indication of gender comes through choice of name and voice used to communicate with other players. Jordan Belamire was playing the game in her home. While playing, another user — with the onscreen name of ‘BigBro442’ — started to rub the area near where her breasts would be (if they were depicted in the environment). She screamed at him to ‘stop!’ but he proceeded to chase her around the virtual environment and then to rub her virtual crotch. Other female users of VR have reported similar experiences.

These three incidents raise important ethical questions. Clearly, there is something undesirable about this conduct. But how serious is it and what should we do about it? As a first step to answering this question, it seems like we need to have a classificatory scheme for categorising the different incidents of virtual rape and sexual assault. Maybe then we can say something useful about their ethical importance? Prima facie, there is something different about the virtual sexual assaults that took place in LambdaMOO and QuiVR and these differences might be significant. In this post, I’m going to try to pin down these differences by developing a classificatory scheme.

In developing this scheme, I am heavily indebted to Litska Strikwerda’s article “Present and Future Instances of Virtual Rape…”. What I present here is a riff on the classificatory scheme she develops in her article.


1. Defining Virtual Sexual Assault
I’ll start with a couple of definitions. I’ll define ‘virtual sexual assault’ in the following manner:

Virtual Sexual Assault: Unwanted, forced or nonconsensual, sexually explicit behaviour that is performed by virtual representations acting in a virtual environment.

This is a pretty vague definition. This is deliberate: I want it to cover a range of possible scenarios. There are many different kinds of virtual representations and virtual environments and hence many forms that virtual sexual assault can take. Nevertheless, since the focus is on sexual behaviour, we have to assume that these virtual representations and environments include beings who are capable of symbolically representing sexual acts. The paradigmatic incident of virtual sexual assault would thus be a scenario like the one in Second Life where two humanoid avatars engage in sexual behaviour. You may wonder what it means for sexual behaviour to be ‘unwanted, forced or nonconsensual’ in a virtual environment. I’ll assume that this can happen in one of two ways. First, if one of the virtual representations is depicted as not wanting or not consenting to the activity (and/or one is depicted as exerting force on the other). Second, and probably more importantly, if the human who controls one of the virtual representations does not want or consent to the activity being represented.

That’s virtual sexual assault. What about virtual rape? This is much trickier to define. Rape is a sub-category of sexual assault. It is the most serious and violative kind of sexual assault. But its definition is contested. Most legal definitions of rape focus on penetrative sex and get into fine details about specific organs or objects penetrating specific bodily orifices. The classic definition is ‘penile penetration of the vagina’, but this has been broadened in most jurisdictions to include oral and anal penetration. As Strikwerda points out, these biologically focused definitions might seem to rule out the concept of ‘virtual rape’. They suggest that rape can only take place when the right biological organ violates the right biological orifice. This is not possible if actions take place through non-biological virtual representations.

So I’m going to be a bit looser and less biologically-oriented in my definition. I’m going to define a virtual rape as any virtual sexual assault in which the represented sexual behaviour depicts what would, in the real world, count as rape. Thus, for instance, a virtual sexual assault in which one character is depicted as sexually penetrating another, without that other’s consent (etc) would count as a ‘virtual rape’.

Due to its less contentious nature, I’ll focus mainly on virtual sexual assault in this post.


2. Who is the perpetrator and who is the victim?
These definitions bring us to the first important classificatory issue. When thinking about virtual sexual assault we need to think about who is the victim and who is the perpetrator. The three incidents described in the introduction involved humans interacting with other humans through the medium of a virtual avatar. Thus, the perpetrators and victims were, ultimately, human-controlled. But one of the interesting things about actions in virtual worlds is that they need not always involve human controlled agents. They could also involve purely virtual agents.* A couple of years back, I wrote a blogpost about the ethics of virtual rape. The blogpost focused on games in which human controlled players were encouraged to ‘rape’ onscreen characters. These raped characters were not being controlled by other human players. They existed solely within the game environment. It was a case of a human perpetrator and a virtual victim. We could also imagine the reverse happening — i.e. a situation where purely virtual characters sexually assault human controlled characters — as well as a case involving two purely virtual characters.

This suggests that we can categorise the possible forms of virtual rape and virtual assault, using a two-by-two matrix, with the categories varying depending on whether they involve a virtual perpetrator/victim or a human perpetrator/victim. As follows:



In the top left-hand corner we have a case involving a virtual perpetrator and a virtual victim. In the top right-hand corner we have a case involving a virtual perpetrator and a human victim. In the bottom left-hand corner we have a case involving a human perpetrator and a virtual victim. And in the bottom right-hand corner we have a case involving a human perpetrator and human victim.

Is it worth taking all four of these cases seriously? My sense is that it is not. At least, not right now. The virtual-virtual case is relatively uninteresting. Unless we assume that virtual agents have a moral status and are capable of being moral agents/patients, the interactions they have with one another seem to be of symbolic significance only. That’s not to say that symbols are unimportant. They are important and I have discussed their importance on previous occasions. It is just that cases of virtual sexual assault involving at least one moral agent/patient seem like they are of more pressing concern. That’s why I suggest we limit our focus to cases involving at least one human participant.


3. How do the human agents interact with the virtual environment?
If we limit our focus in this way, we run into the next classificatory problem. How exactly do the human agents interact with the virtual environment? It seems like there are two major modes of interaction:

Avatar interaction: This is where the human creates a virtual avatar (character, on-screen representation) and uses this avatar to perform actions in the virtual world.

Immersive interaction: This is where the human dons some VR helmet and/or haptic clothing/controller and acts in the virtual world from a first person perspective (i.e. they act ‘as if’ they were really in the virtual world). They may still be represented in the virtual world as an avatar, but the immersive equipment enables them to see and potentially feel what is happening to that avatar from the first person perspective.

Avatar interaction is the historical norm but immersive interaction is becoming more common with the advent of the Oculus Rift and rival technologies. As these technologies develop we can expect the degree of immersion to increase. This is important because it reduces the psychological and physical distance between us and what happens in the virtual world. This ‘distance’ could have a bearing on how morally harmful or morally blameworthy the conduct is deemed to be.

Anyway, we can use the distinction between avatar and immersive interaction to construct another two-by-two matrix for classifying cases of virtual sexual assault. This one focuses on whether we have a human victim or perpetrator and the mode of interaction. This one is a little bit more complicated than the previous one. To interpret it, suppose that you are the human victim/perpetrator and that you either interact with the virtual world using an avatar or using immersive technology. If you are the human victim and interact using an avatar, for instance, there are two further scenarios that could arise: either you are assaulted by another human or by a virtual agent. This means that for each of the four boxes in the matrix there are two distinct scenarios to imagine.



This, then, is the classificatory scheme I propose for dealing with virtual sexual assault (and rape). I think it is useful because it focuses on cases involving at least one human agent and encourages us to think about the mode of interaction, the role of the human in that interaction (victim or perpetrator), and to consider the ethics of the interaction from the perspective of the victim or perpetrator. All of the scenarios covered by this classificatory scheme strike me as being of ethical and, potentially, legal interest. We should be interested in cases involving human perpetrators because what they do in virtual worlds probably says something about their moral character, even if the victim is purely virtual. And we should be interested in cases involving human victims because what happens to them (the trauma/violation they experience) is of ethical import, irrespective of whether the perpetrator was ultimately human or not. Finally, we should care about the mode of interaction because it can be expected to correlate with the degree of psychological/physical distance experienced by the perpetrator/victim and possibly with the degree of moral harm implicated by the experience.
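For anyone who finds code clearer than matrices, here is a minimal sketch of the scheme in Python. The type and field names are my own illustrative choices, and, as in the matrix above, the mode of interaction is tracked from the perspective of the human participant we are focusing on; the sketch simply enumerates the combinations while setting aside the purely virtual-virtual cases.

```python
from dataclasses import dataclass
from enum import Enum
from itertools import product

class Agent(Enum):
    HUMAN = "human-controlled"
    VIRTUAL = "purely virtual"

class Interaction(Enum):
    AVATAR = "avatar interaction"
    IMMERSIVE = "immersive interaction"

@dataclass(frozen=True)
class Case:
    perpetrator: Agent
    victim: Agent
    mode: Interaction  # mode of interaction of the human participant in focus

    def of_interest(self) -> bool:
        # Per the discussion above: set aside the purely virtual-virtual cases.
        return Agent.HUMAN in (self.perpetrator, self.victim)

# Enumerate the scheme.
for perp, vict, mode in product(Agent, Agent, Interaction):
    case = Case(perp, vict, mode)
    if case.of_interest():
        print(f"{perp.value} perpetrator vs {vict.value} victim ({mode.value})")
```

On this rendering, the LambdaMOO and Second Life incidents fall under human perpetrator/human victim via avatar interaction, while the QuiVR incident is human perpetrator/human victim via immersive interaction.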

There is one complication that I have not discussed in developing this scheme. That is the distinction that some people might like to draw between robotic interactions and virtual ones. Robotic interactions would involve embodied, artificial agents acting with humans or other robots in the real world. There are cases in which robot-human interactions can be distinguished from virtual interactions of the sort discussed here (I wrote an article about some of the issues before). But there is one scenario that I think should fall under this classificatory scheme. That is the case where humans interact via robotic avatars (i.e. remote controlled robots). I think these can be classed as avatar-interactions or (if they involve haptic/immersive technologies) as immersive interactions. The big difference, of course, is that the effects of robotic interactions are directly felt in the real world.


That’s enough for now. In the future, I will try to analyse the different scenarios from an ethical and legal perspective. In the meantime, I’d be interested in receiving feedback on this classificatory scheme. Is it too simple? Does it miss something important? Or is it overly complicated?


* Of course, there is no such thing as a purely virtual agent. All virtual agents (for now) have ultimately been created or programmed into being by humans. What I mean by ‘purely’ virtual is that they are not under the immediate control of a human being, i.e. their actions in the game are somewhat autonomous.

Monday, March 6, 2017

Episode #20 - Karen Yeung on Hypernudging and Big Data


[If you like this blog, consider signing up for the newsletter...]

In this episode I talk to Karen Yeung. Karen is a Chair in Law at the Dickson Poon School of Law, King's College London. She joined the School to help establish the Centre for Technology, Ethics and Law & Society (‘TELOS’), of which she is now Director. Professor Yeung is an academic pioneer in the field of regulation studies (or ‘regulatory governance’ studies) and is a leading scholar concerned with critically examining governance of, and governance through, new and emerging technologies. We talk about her concept of 'hypernudging' and how it applies to the debate about algorithmic governance.

You can download the episode here. You can also listen below or subscribe on Stitcher or iTunes (via RSS).


Show Notes

  • 0:00 - Introduction
  • 2:20 - What is regulation? Regulation vs Governance
  • 6:35 - The Different Modes of Regulation
  • 11:50 - What is nudging?
  • 15:40 - Big data and regulation
  • 21:15 - What is hypernudging?
  • 32:30 - Criticisms of nudging: illegitimate motive, deception and opacity
  • 41:00 - Applying these criticisms to hypernudging
  • 47:35 - Dealing with the challenges of hypernudging
  • 52:40 - Digital Gerrymandering and Fake News
  • 59:20 - The need for a post-liberal philosophy?
 

Relevant Links

     

Saturday, March 4, 2017

Is Judicial Review Compatible with Democracy?




[If you like this blog, consider signing up for the newsletter...]

Many countries have constitutions that protect individual rights. Strong form judicial review (hereinafter ‘strong JR’) is the practice whereby courts, usually the ‘supreme’ court in a given jurisdiction, have the final power to strike down legislation that they perceive to be in conflict with constitutionally protected rights. The United States and Ireland are two jurisdictions in which strong JR prevails; the UK and New Zealand are two jurisdictions where it does not. In the US, judicial review was not enshrined in the original text of the constitution, but it was endorsed in the famous case of Marbury v Madison; in Ireland, the practice was explicitly enshrined in the text of the 1937 Constitution.

Strong JR is a controversial practice. It allows judges — who are usually unelected and relatively unaccountable officials — to strike down legislation that has been passed by the majority-approved legislature. Its democratic legitimacy is, consequently, contested. But it also has many supporters, particularly among legal academics, who defend it as an appropriate check on the ‘tyranny of the majority’ and a potentially progressive force for good.

Some go even further. In his 1996 book Freedom’s Law, the late Ronald Dworkin argued that not only was strong JR a potentially progressive check on the tyranny of the majority, it was also consistent with (possibly protective of) democracy. Thus, far from there being any tension between strong JR and democratic decision-making, there was actually considerable harmony between the two, both in theory and in practice.

Jeremy Waldron is a long-time critic of this school of thought. He argues that strong JR is democratically problematic and that, in many instances, the legislative process is the more appropriate forum for settling matters of individual rights and policy. He wrote a critique of Dworkin’s attempted reconciliation of strong JR and democracy in 1998.

Sadly, Waldron is not the most reader-friendly of political theorists. This blog post is my attempt to make sense of the objections he presents in his 1998 critique. On my reading, Waldron challenges two of Dworkin’s arguments. I’ll give these arguments names. I’ll call the first one the ‘deliberative argument’ and the second one the ‘necessary connection argument’. I’ll look at each in turn.

As a preliminary to doing that, it is worth noting that arguments for and against strong JR can usually be divided into two main categories:

Consequentialist Arguments: These are arguments that claim that strong JR is good/bad because it yields better/worse results (according to some measure of betterness) than alternative procedures for settling rights disputes.

Proceduralist Arguments: These are arguments that claim that strong JR is good/bad because it creates a procedure that is/is not consistent with the requirements of democracy.

Consequentialist arguments are most commonly marshalled in defence of strong JR (though they are sometimes used against it) and proceduralist arguments are most commonly marshalled against strong JR. What’s interesting about Dworkin’s arguments is that they try to dispute the proceduralist arguments against strong JR, while at the same time defending it on consequentialist grounds.


1. The Deliberative Argument for Strong JR
Public deliberation is often thought to be part of what makes democracy a preferable mode of government. Democracy is not simply about getting individuals to vote on competing propositions and policy preferences; it is also about getting them to deliberate on these propositions and preferences. They get to defend particular policy preferences on rational grounds; they get to present reasons to one another.

But not all exercises in public deliberation are ideal. Sometimes public debate on contentious policies ignores the important values and facts that are at stake in those debates (witness the rise of so-called ‘post-truth’ politics); sometimes public debate is reduced to fear-mongering, demonisation and ad hominem attacks. Surely anything that could improve the quality of public deliberation would be desirable?

That’s what Dworkin’s deliberative argument claims on behalf of strong JR. He thinks that strong JR can raise the standard of national debate:

When an issue is seen as constitutional…and as one that will ultimately be resolved by courts applying general constitutional principles, the quality of public argument is often improved, because the argument concentrates from the start on questions of political morality…a sustained national debate begins, in newspapers and other media, in law schools and classrooms, in public meetings and around dinner tables. That debate better matches [the] conception of republican government, in its emphasis on matters of principle, than almost anything the legislative process on its own is likely to produce.  

(Dworkin 1996, 345)

To put this in an argumentative form:


  • (1) Good quality public deliberation is important/essential for democracy: anything that facilitates or encourages it is consistent with (or contributes to) democracy.
  • (2) Strong JR facilitates/encourages good quality public deliberation (possibly more so than the legislative process on its own).
  • (3) Therefore, strong JR is consistent with/contributes to the conditions of democratic rule.


Waldron challenges premise (2) of this argument. I see five distinct challenges in what he has to say. The first is a modest challenge to the bracketed portion of premise (2):


  • (4) Counterexamples: In countries without strong JR, good quality public deliberation can take place on matters of public importance.


Waldron appeals here to his experience of the UK and New Zealand, arguing that national debates about issues such as abortion are just as robust and well-informed in those jurisdictions as they are in the US. This may well be true, but, as I say, this is only a modest challenge to Dworkin’s argument. It doesn’t call into question Dworkin’s larger point, which is that strong JR is at least consistent with (and possibly facilitative of) good quality deliberation. The four remaining challenges take issue with this larger point, suggesting that strong JR can actually undermine good quality public deliberation.



  • (5) Contamination problem: Strong JR can contaminate public deliberation by replacing the core moral questions with abstruse and technical questions of constitutional/legal interpretation.


This is probably Waldron’s most important critique of the deliberative argument. Take a contentious issue like whether the death penalty should be implemented. In the US, any debate about the desirability of the death penalty has to get into debates about the interpretation of the 8th amendment (the “cruel and unusual punishment” clause). It has to ask whether the death penalty in general is ‘cruel’ and ‘unusual’ or whether particular methods of execution are; it has to look at the history of constitutional jurisprudence on the interpretation of the 8th amendment. This often distracts from the core moral questions around theories of punishment and their respective moral desirability. We cannot have a full-blooded moral debate about the death penalty; we have to have one filtered through the lens of the 8th amendment.

For what it’s worth, I think this critique of strong JR is correct: public debate that is contaminated by legalistic concepts is less than ideal. But let’s not romanticise legislative debates either. They are often contaminated by political considerations that prevent elected representatives from engaging in a full-blooded moral debate.

Another problem, related to contamination, is this:


  • (6) Mystification problem: When public debates have to be filtered through legal concepts and ideas, this has a mystificatory effect: the ordinary public is put off by the technical and legalistic framing, and is less able to contribute to the debate.


People often assume that legal arguments can only be properly understood after years of technical, professional training. It’s only once you have been schooled in the fine arts of legal interpretation, have familiarised yourself with the key legal doctrines and precedents, and mastered the legal mode of speech, that you will be able to say anything worthwhile. This excludes people from public debate.
This has another negative implication:


  • (7) Anticipatory Defence Problem: People are unwilling to support or defend certain policy views for fear that they may be inconsistent with or contrary to the interpretation of certain constitutional provisions.


This is subtly different from the contamination and mystification problems. Here, the problem is not just that strong JR changes how we talk about certain issues; it also prevents certain issues (certain points of view) from ever getting to the table. Why? Because participants in the public discourse are put off by the possibility of judicial strikedown of a proposed reform. Daly (2017) has documented several examples of this happening in the Irish political context. Politicians in this jurisdiction often conveniently avoid controversial legal reforms on the ground that they may be ‘unconstitutional’, despite the fact that the Irish Supreme Court gives considerable latitude to the legislature to decide on the content of constitutional rights.

Finally, as Waldron points out, there is something a bit shallow about the kind of public deliberation Dworkin celebrates in the US:


  • (8) Spectator Problem: When questions of political/public morality are finally settled/determined by courts, the general public tends to be reduced to mere spectators in political/legal reform.


In other words, Americans can have as many fine public debates as they like on the legality of abortion and same-sex marriage, but these debates have very little practical import. The law on those issues is only going to change when the US Supreme Court decides to overturn its previous decisions. The public deliberation is, consequently, a little hollow or empty.

This strikes me as being a good critique of what happens in the US. I am not sure that it applies as well to a country like Ireland where the constitutional referendum procedure is used quite regularly. Referenda can effectively bring judicially determined topics back onto the public playing field.



That’s Waldron’s critique of the deliberative argument. In the end, this argument is not too important. The far more interesting argument is the next one.


2. The Necessary Connection Argument


The necessary connection argument is a philosophically deeper defence of strong JR. It maintains that there is a necessary connection between certain individual rights and democracy. That is to say, unless those rights are protected there can be no (legitimate) democratic governance. It then adds to this observation the claim that strong JR can sometimes help to protect these rights from majoritarian encroachment and hence strong JR is consistent with democracy at a fundamental level (i.e. it is one of the things that makes democracy possible).

Let’s set this argument out in formal terms first:


  • (9) Certain rights are necessary for democracy; if those rights are limited/encroached there can be no legitimate democratic governance.

  • (10) Sometimes, strong JR can protect those rights from majoritarian encroachment.

  • (11) Therefore, strong JR is not anti-democratic; it can be protective of democracy.


This is a weak version of the argument. It concludes merely that strong JR is consistent with democratic rule. A stronger version of the argument (which is probably closer to what Dworkin actually believes) would modify premises (10) and (11) to something like the following:


  • (10*) Strong JR is a good way (or is often a good way) to protect those rights from majoritarian encroachment.

  • (11*) Therefore, strong JR is not anti-democratic; in fact, it is often a good way to protect democratic rule.


Obviously, this stronger version of the argument is harder to defend. It requires some empirical support.

Now that we have a clearer sense of what the argument looks like, let’s burrow down into its key premises, starting with (9). Why is it that certain rights are necessary for democracy? Which rights might those be? Waldron identifies two broad classes of rights that animate Dworkin’s discussion:

Participatory Rights: These are rights that make it possible to actually participate in democratic decision-making (e.g. right to vote; right to be heard).

Legitimacy Rights: These are rights that, if they were not protected in a given society, would mean that any decision made through a democratic decision-procedure in that society would lack legitimacy (e.g. freedom of conscience, freedom of association, freedom of speech).

The connection between participatory rights and democratic rule is logical: it would not be possible to have democratic rule if you did not have those participatory rights. The connection between legitimacy rights and democracy is normative: we could imagine a world in which we can all participate in democratic decision-making procedures and yet those procedures would lack legitimacy because our opinions were suppressed or brainwashed into us.

The problem with legitimacy rights is that the potential pool of rights that fit within this category is indeterminate. There is much (reasonable) disagreement about what exactly is necessary for democratic legitimacy. We might all agree that a right to vote is essential, but should the voting system be first past the post or proportional representation? Do we need to protect certain minority groups in order to ensure democratic legitimacy? Must we privilege the opinions of some in certain contexts? Must all opinions be freely expressible or can some be legitimately curtailed? This indeterminacy creates problems for Dworkin’s argument.

Let’s assume that premise (9) is defensible. This moves us on to premise (10) (or 10* if you prefer). Waldron deals with a number of objections and replies to this premise. Some of them are of academic interest only; some of them are technical or esoteric. I’m going to ignore these and focus instead on the two major criticisms he launches against Dworkin.

The first criticism takes issue with Dworkin’s results-oriented approach to assessing the merits of certain rights-based decisions. In suggesting that strong JR can be a good way to protect democratic rights, Dworkin dangerously tiptoes into suggesting that the ‘ends justify the means’ when it comes to such rights. As long as the procedure (strong JR in this instance) protects the democratic rights, it does not matter if the procedure was non-democratic in nature. Indeed, Dworkin says almost exactly this in his writings:

I see no alternative but to use a result-driven rather than a procedure-driven standard…The best institutional structure is the one best calculated to produce the best answers to the essentially moral question of what the democratic conditions actually are, and to secure compliance with those conditions. 
(Dworkin 1996, 34)

Waldron challenges this with a thought experiment:

Voting System Decision: Suppose two countries are debating whether to switch from using a first past the post voting system to using a single transferable vote system. Suppose it is true that the single transferable vote system is better for democracy. In country A (say the United Kingdom), the constitutionally recognised monarch grows exasperated by the parliamentary debate on the topic and decides to resolve the issue by decree, favouring the single transferable vote system. In country B, the issue is debated in public and decided upon by a popular referendum, implementing the single transferable vote (Waldron points out that this was what actually happened in New Zealand). Both countries end up with the same system. This system is democratically preferable. But can we really say that both systems are equally protective of democracy?

Waldron suggests we cannot: the means matter. To elaborate on this point, when it comes to assessing any decision about democratic rights, there are two variables to keep in mind: (i) the result of the decision and (ii) the means to that result. Both can be consistent with democracy, or not (as the case may be). That means that when it comes to assessing such decisions, we have four possible scenarios with which to contend:

Scenario A: a democratic result reached by democratic means.

Scenario B: a non-democratic result reached by democratic means.

Scenario C: a democratic result reached by non-democratic means.

Scenario D: a non-democratic result reached by non-democratic means.



Waldron criticises Dworkin for assuming that scenario C is on a par with scenario A. They are not. Scenario A is clearly preferable to scenario C: it is more consistent with and protective of democratic values. This strikes me as being a relatively uncontroversial point, at least in principle. But I think that the really tricky question is how we should rank scenarios B and C. Should you favour a democratic means over a democratic result? Waldron seems to suggest that he might follow Dworkin to some extent by ranking C above B, though his comments about the tyranny of the majority (below) suggest he has doubts about this.

That’s the major criticism of the necessary connection argument. The other point that Waldron makes is slightly more subtle and takes issue with the implication underlying the argument. In appealing to strong JR’s capacity to resist majoritarian encroachments on democratic rights, Dworkin, like many, is presuming the so-called ‘tyranny of the majority’. Waldron doesn’t like this. He thinks the phrase ‘tyranny of the majority’ trips too easily off the tongue and leads us to ignore other forms of tyranny.

The problem he has is this. Proponents of strong JR too readily assume that if the majority is or can be tyrannous it follows that strong JR is a legitimate check on that tyranny. That does not follow. All forms of decision-making have the potential to be illegitimate/tyrannous. Strong JR doesn’t pass muster simply because majority rule is acting up. Strong JR can be — and historically often has been — a conservative and anti-democratic force. So just because one system has its faults does not mean that we can jump to the defence of an alternative system. That system could be just as bad (possibly even worse).

Okay, so that’s it — that’s my summary of Waldron’s critique of Dworkin. To briefly recap, Dworkin defended strong JR on the grounds that it facilitated good quality public discourse. Waldron disputed this, suggesting that it can contaminate and mystify public discourse, leading politicians to act in a conservative and precautionary manner, and reducing most ordinary citizens to mere spectators. Dworkin also defended strong JR on the grounds that it protected rights that were essential to democratic rule. Waldron also disputed this, suggesting that when it comes to the protection of such rights the means matter: if we can protect them by democratic means this is better than protecting them through judicial review.

Wednesday, March 1, 2017

The Carebot Dystopia: An Analysis




The world is ageing. A demographic shift is underway. According to some figures (Suzman et al 2015), the proportion of the worldwide population aged 65 or older will exceed the proportion aged under 5 by the year 2020. And the shift is faster in some countries. Japan is a striking example. Demographers refer to it as a ‘super-ageing’ society. By 2030, they estimate that one in three Japanese people will be aged 65 or over. One in five will be over 75.

Old age, in its current form (i.e. absent any breakthroughs in anti-ageing therapies), is a state of declining health and increasing dependency. The care burden facing our ageing societies is, consequently, a matter of some concern. Some people are turning to robots for the answer, advocating that they be used as companions and carers for the elderly, both to shift the burden off the shrinking youth population and to cut costs.

Is this the right way to go? Robert and Linda Sparrow think not. In their 2006 article “In the hands of machines? The future of aged care”, they paint a dystopian picture. They describe a future in which robots dominate the care industry and the elderly are increasingly isolated from and forgotten by their younger human peers:

The Carebot Dystopia: “We imagine a future aged-care facility where robots reign supreme. In this facility people are washed by robots, fed by robots, monitored by robots, cared for and entertained by robots. Except for their family or community service workers, those within this facility never need to deal or talk with a human being who is not also a resident. It is clear that this scenario represents a dystopia…” 
(Sparrow and Sparrow 2006, 152)

But is it really so clear? I want to try to answer that question in this post. I do so with the help of Mark Coeckelbergh’s recent article “Artificial agents, good care and modernity.” Coeckelbergh’s article performs two important functions. First, it tries to develop an account of good care that can be used to evaluate the rise of the carebots. And second, it explains how the use of carebots is tied into the broader project of modernity in healthcare. He criticises this project because not only does it try to mechanise healthcare, it also tries to mechanise human beings.

I’ll explain both parts of his critique in this post. As will become clear, my own personal view is less dystopian than that of Coeckelbergh (and Sparrow & Sparrow), but I think it is important to understand why he (and they) find the rise of the carebots so disturbing.


1. Ten Features of Good Care
Coeckelbergh starts his article by identifying ten features of good care. There is little to be done apart from listing these ten features and explaining the rationale behind their articulation. I’m going to label these ten features using the notation F1, F2 (and so on) because I’ll be referring to them in a later argument and it is easier to do this using these abbreviations:

F1 - ‘Good care attempts to restore, maintain and improve the health of persons’. This feature speaks for itself really. It seems obvious enough that care should aim at maintaining and restoring health and well-being, but Coeckelbergh notes that this excludes certain controversial practices from the realm of care (e.g. euthanasia). That’s not because he objects to those practices, but because he wants to develop an account of care that is relatively uncontroversial.

F2 - ‘Good care operates according to bioethical principles and professional standards and codes of ethics’. Again, this feature is pretty straightforward. Ethical principles and codes of practice are widespread in medicine nowadays (with most contemporary codes being developed, initially, as a response to the abuse of powers by Nazi doctors during WWII).

F3 - ‘Good care involves a significant amount of human contact’. This feature will obviously be controversial in the context of a debate about carebots since it seems to automatically exclude them. But there is clearly a powerful intuition out there — shared by many — which holds that human contact is an important part of therapy and well-being. That said, the intuition is probably best explained in terms of other, more specific properties or characteristics of human-human contact (such as those that follow).

F4 - ‘Good care is not just physical but also psychological (emotional and relational)’. This cashes out what is important about human-human contact in terms of empathy, sympathy and other emotional relations. This is a key idea in many theories of care. It requires that we take a ‘holistic’ approach to care. We don’t just fix the broken body but also the mind (of course, holding to some rigid distinction between body and mind is problematic, but we’ll ignore that for the time being).

F5 - ‘Good care is not only professional but should involve relatives, friends and loved ones’. Again, this cashes out what is important about human-human contact in terms of the specific relationships we have with people we also love and care about. It’s important that we don’t feel abandoned by them to purely professional care.

F6 - ‘Good care is not experienced solely as a burden but also (at least sometimes) as meaningful and valuable’. This one speaks for itself really. What is interesting about it is how it switches focus from the cared-for to the carer. The claim is that care is better when the carer gets something out of it too.

F7 - ‘Good care involves skilled know-how next to skilled know-that’. This might require some explanation. The idea is that good care depends not just on propositional or declarative knowledge being dispensed by some professional expert (like a doctor), but also on more tacit and implicit forms of manual and psychological knowledge. The suggestion is that care is a craft and that the carer is a craftsman/woman.

F8 - ‘Good care requires an organisational context in which there are limits to the division of labour so as not to make the previous criteria impossible to meet’. This feature points to problems that arise from excessive specialisation (assembly-line style) in healthcare. If the care task is divided up into too many discrete stages and distributed among too many individuals, it will be impossible to develop the rich, empathetic, craftsman-style relationship that good care requires.

F9 - ‘Good care requires an organisational context in which financial considerations are not the only or main criterion of organisation’. This feature is related to the previous one. It suggests that a capitalistic, profit-maximising logic is antithetical to good care.

F10 - ‘Good care requires the patient to accept some degree of vulnerability and dependency’. This feature brings the focus back to the cared-for and suggests that they have to shoulder some of the burden/responsibility for ensuring that the care process goes well. They cannot resist their status as someone who is dependent on others. They need to embrace this status (to at least some extent).

There is probably much to be said about each of these ten features. Some could be disputed (as, indeed, I have already disputed F3) and others may need to be finessed in light of criticisms, but we will set these complications to the side for now and consider how these ten features of good care can be used to criticise the rise of the carebots.




2. The Case Against Carebots
There is a simple argument to be made against carebots:


  • (1) Good care requires features F1…F10.
  • (2) If we use carebots, or, rather, if their use becomes widespread, it will not be possible to satisfy all of the required features (F1…F10) of good care.
  • (3) Therefore, the rise of the carebots is contrary to good care.


This simple argument leads to some complex questions. Obviously premise (2) is too vague in its current form. It prompts at least two further questions: (i) Which features of good care, exactly, are blocked by the use of carebots? and (ii) why does the use of carebots block these features?

The first of these questions is important because some of the features clearly have nothing to do with carebots and are unlikely to be undermined by their use. For example, the attitude of the cared-for, the adherence to professional ethical codes, the organisational context, and the ability to maintain, restore and improve health would seem to be relatively unaffected by the use of carebots. There could certainly be design and functionality issues when it comes to the deployment of carebots — it could be that they are not great at maintaining health and well-being — but these are contingent and technical problems, not in principle problems. Once the technology improves, these problems could be overcome. The deeper question is whether there are certain limitations that the technology could not (or is highly unlikely to) overcome.

That’s where features F3…F7 become important. They are the real focus when it comes to opposition to carebots. As I said previously, F3 (the need for human contact) is unhelpful in the present context because it stacks the deck against the deployment of carebots. So let’s leave that to the side. The more important features are F4…F7, which cash out in more detail why human-human contact is important. There is serious concern that carebots would block the satisfaction of those features of good care.

This brings us to the second question: why? What is it about robots that prevents those features from being satisfied? The arguments are predictable. The claim, broadly speaking, is that robots won’t be able to provide the empathy and sympathy needed, they won’t be able to develop the skills needed for care-craftsmanship, they cannot be our loved ones, and they cannot experience the care-giving task as a meaningful one. Why not? Because robots are not (and are unlikely to be) persons. They may give the illusion of being persons, but they will lack the rich, phenomenological, inner mental life of a person. They may provide a facade or pretense of personhood, nothing more.

This is my way of putting it. Coeckelbergh is more subtle. He acknowledges that carebots may actually help satisfy the conditions of good care if they are mere tools, i.e. if they merely assist human caregivers in certain tasks. The danger is that they won’t be mere tools. They will be artificial agents that take over certain tasks. Of course, it is not clear what it means to say that an artificial agent will ‘take over’ a task — the caregiving task is multifaceted and consists of many sub-tasks. But here Coeckelbergh focuses on the perception and experience of humans involved in care. He is a proponent of an experiential approach to robot ethics — one that prioritises the felt experiences of humans over any supposed objective reality.

So he argues that carebots will undermine good care “if and insofar as [they] appear as agents that takeover care tasks” (2015, 273). And these appearances matter, in turn, because the robots who appear as agents will be unable to satisfy the features of good care:

”insofar as the machine is perceived as ‘taking over’ the task of care and as taking on the role of the human care agent, then, if the ideal of care articulated above is assumed, it seems that something would be expected from the machine that the machine cannot give: the machine cannot take up this role and responsibility, cannot care in the ways defined above. It may appear to have emotions, but be unable to fulfil what is needed for care as articulated above.” 
(Coeckelbergh 2015, 273)


Is it a good argument? I’ve voiced my opposition to this kind of thing before. I have three major objections. The first is that robots could be persons and have a rich inner mental life. To my mind, there is no good ‘in principle’ objection to this. That said, this is just a conceptual possibility, not an immediate practical reality. The second objection is that I am a performativist/behaviourist when it comes to the ethics of our interactions with others (including robots and human beings). I think we never have access to another person’s inner mental life. We may infer it from their outer behaviour, but this outer behaviour is ultimately all we have to go on. If robots are performatively equivalent to humans in this respect, they will be able to fulfil the same caregiving roles as human agents. Indeed, I’m surprised that Coeckelbergh, with his preference for the experiential approach, doesn’t endorse something similar. In this respect I find the ‘experiential’ framing of his objection to carebots a little odd. His preoccupation with appearances does not go that deep: his objection is ultimately metaphysical in nature. The appearances only matter if the robots do not, in fact, have the requisite capacities for empathy, sympathy and so on. That said, I accept that carebots are unlikely to be performatively equivalent to human beings in the near future. So I fall back on my third objection, which is that in many instances carebots will be able to complement, not undermine, human-to-human relationships.

This final objection, however, is challenged by Coeckelbergh’s other argument about modernisation in healthcare. Let’s look at that now.


3. Healthcare, Modernity and the Machine
The argument in the previous section was about carebots blocking the route to good care because of what they are and how they interact with humans. As such, it was focused on the robots themselves. This next argument shifts focus from the robots to the general social-economic context in which they are deployed. The idea underlying it is that robots are a specific manifestation of a much more general problem.

That problem is one of modernisation in healthcare. It is a problem that goes to the heart of the capitalistic and ‘neoliberal’ model of organisation. The idea is that capitalistic modes of production and service provision are mechanistic at an organisational level. Think of Henry Ford’s assembly-line. The goal of that model of production was to break the task of building a car up into many discrete, specialised tasks, so as to maximise the productivity of labour power. The production process was thus treated as a machine. The machine was built out of capital and labour power.

This has bad consequences for the humans that are part of that production machine. The individual workers in the assembly-line are dehumanised and automatised. They are reduced to mere cogs in the capitalistic machine. They are alienated from their labour and the products of their labour.

Coeckelbergh uses this Marxist line of thought to build his critique of carebots. His claim is that modern healthcare has been subjected to the same mechanical organisational forces. I’ll let him describe the problem:

…All kinds of practices become shaped by this kind of thinking and this way of organizing work, even if they do not literally resemble industrial production processes or assembly lines. For health care work, it means that under modern conditions, care work has become ‘labour’, which (1) is wage labour (care is something one does for money) and (2) involves modern employment relations (with professionalization, disciplining, formalization of the work, management, etc.) and (3) involves relations between care giver and care receiver in which the receiver is in danger of appearing to the care giver as an object…in which the care is made into a commodity, a product or a service. 
(Coeckelbergh 2015, 275)

The problem with carebots is that they simply reinforce and accelerate this process of mechanisation. They contribute to the project of modernity and that project is (or should be) disturbing:

Here the worry is that the machine is used to automate health care as part of its further modernization, and that this has the alienation effects mentioned. This is not about the machine ‘taking over’; it is about humans becoming machines. 
(Coeckelbergh 2015, 275)

To set all this out a little more formally:


  • (4) The mechanisation of service provision is bad (because of its alienating and dehumanising effects) and so anything that contributes to the process of mechanisation is bad/not to be welcomed.
  • (5) The use of carebots contributes to the mechanisation of service provision in health care.
  • (6) Therefore, the use of carebots is bad.


This is an interesting argument. It involves a much more totalising critique: a critique of modern society and how it treats its citizens and subjects. Robots are challenged because they are a particular manifestation of this more general social trend.

Is the argument any good? I have some concerns. Because it is part of this more totalising critique, its persuasiveness is largely contingent on how much you buy into that larger critique. If you are not a strong Marxist, if you don’t accept the Marxist concept of alienation, if you embrace an essentially mechanical and materialist view of humanity, then the criticisms have much less bite.

Furthermore, even if you do buy into those larger critiques, there is at least some reason to doubt that the use of carebots is all that bad. There are good reasons to object to the mechanisation of service provision because of how it treats human service providers: they are treated as cogs in a machine not as fully autonomous beings in themselves. Replacing them with machine labour might be thought to free them from this dehumanising process. Thus automation might be deemed a net benefit because of its potential to liberate humans from certain capitalistic forces. This is an argument that I have made on other occasions, and it is embraced by some on the academic left. That said, this argument only focuses on the workers and service providers, not on the people to whom the service is provided. There may be a dehumanising effect on them. But that’s really what the first of Coeckelbergh’s arguments was about.

Anyway, that’s it for now. To briefly recap, Coeckelbergh has provided two arguments against carebots. The first focuses on the conditions of good care and suggests that robots are unable to satisfy those conditions. The second focuses on the project of modernity and its mechanising effects. It worries about carebots to the extent that they contribute to that project. Both arguments have their merits, but it’s unclear whether they truly support the ‘dystopian’ concerns outlined at the start of this post.