Monday, August 21, 2017

The 'In Principle' Objection to Privatisation

A few years ago, the Irish government privatised the water supply. Rather than water being a freely provided public service, funded out of general taxation, water was now to be a privately supplied good, with each household paying an annual fee that varied depending on usage. It proved to be quite a controversial move, leading to numerous protests and a significant loss of legitimacy for the government. So much so, in fact, that the future of water privatisation in Ireland is currently uncertain.

The privatisation of formerly public services often proves controversial. Privatisation is a major feature of the so-called ‘neo-liberal’ agenda. It is often favoured by economists and policy wonks on the grounds of efficiency. The typical argument is this: if we learned nothing else from the history of communism and socialism, it is that government agencies aren’t particularly good at supplying scarce resources. The incentives are out of whack. They are often hugely wasteful, and tend to over-supply or under-supply goods and services. Private agents, motivated by profit and incentivised by prices, are often much more efficient, supplying just as much as the market demands, at a price that maximises societal welfare. (Note: this isn’t always true: for certain goods and services even classical economists will agree that private markets can fail to be efficient — I talk about this in more detail in my posts on Hayek’s famous argument against centrally planned economies).

And yet the process still proves controversial, with many philosophers and activists resisting the wave of privatisation. Sometimes their arguments are strictly empirical in nature: they disagree with the economists and policy wonks who insist upon the efficiency of private markets and the inefficiency of the state. And there are, indeed, empirical studies that support their disagreements. But sometimes their arguments are more philosophical or normative in nature: they hold that, irrespective of the consequences of privatisation, there is something morally suspect about the process. It leads to the selling off of the public sphere and the erosion of public authority and legitimacy. They challenge privatisation on principled grounds, not empirical ones.

Avihay Dorfman and Alon Harel are possibly the leading defenders of this ‘in principle’ objection to privatisation. Over the past few years, they have authored a number of papers that try, with increasing degrees of sophistication and rigour, to present a robust, non-empirical objection to privatisation. They argue that there are some ‘intrinsically public goods’ that should never be handed over to private agents, and they work hard to identify the key properties of these goods.

Although I cannot hope to do justice to the full body of their work on this topic, I do want to look at their main line of argument in the remainder of this post. I do so by analysing and evaluating one of their most recent papers, entitled ‘Against Privatisation as Such’, which appeared in the Oxford Journal of Legal Studies in 2016. Their argument in that paper is that privatisation is objectionable because it undermines public engagement with and responsibility for certain kinds of decision.

1. Two Conceptions of Privatisation
To understand the argument, we first need to understand what it means to privatise something. We all have an intuitive and commonsense understanding of what this entails. The opening example of the privatisation of Irish water gives us some sense of what happens. When the government ‘privatises’ the provision of a particular good or service, it transfers decision-making authority for the provision of that good or service to a private agent (company/corporation). This private agent will then follow a slightly different set of incentives/reasons than a public agency would when supplying the good or service. The hope is that they will follow a set of incentives/reasons that enables them to supply the good or service in a more efficient manner.

This gives us two distinct ways of understanding the process of privatisation. Both of these have been identified and discussed in the academic debate:

The Reasons View - To privatise the provision of a good or service, X, is to change (wholly or partially) the reasons for which someone supplies that good or service.

The Agency View - To privatise the provision of a good or service, X, is to transfer the decision-making authority in relation to that good or service to a private agent.

Dorfman and Harel note that many theorists seem to endorse the Reasons View, and in doing so they often stumble upon an interesting way in which to defend privatisation. One of the features of the Reasons View is that it pays little attention to the identity of the decision-maker when it comes to classifying a particular decision as being ‘private’ or ‘public’. Instead, it focuses on the reasons utilised by the decision-maker. So on this view what is distinctive about private decision-making processes is that they are motivated by things like profit and loss and other relevant economic variables, and not by concerns like fairness, justice and the common good. These concerns are more typically associated with public decision-making processes.

One of the consequences of this method of categorisation is that it is possible for private agents to act in a public-spirited way (or vice versa, i.e. for public agents to act in a privatised way). You could imagine a private company being contracted by the government to provide a good or service on the basis that they direct themselves to the common good. You could also imagine private companies and contractors acting for a combination of reasons, some of which are strictly economic in nature and others of which are more public spirited. Indeed, this might lead you to defend privatisation on the grounds that it gives you the ‘best of both worlds’: it brings the efficiencies of the private sector without necessarily losing the public touch.

The essence of Dorfman and Harel’s case against privatisation is that the Reasons View gets it wrong. Privatisation is not solely or even primarily about changing the reasons for which a decision is made; it is really about the transferal of authority. This means that the Agency View is more correct, and when you understand privatisation in terms of agency, you begin to see why it might be objectionable in principle: because it changes the nature and locus of legitimacy in society.

The defence of this argument comes in two parts. The first part highlights the flaws in the Reasons View; the second explains why the transferal of authority is so problematic.

2. Against the Reasons View
Dorfman and Harel don’t present their case against the Reasons View in formal terms, but I’m going to do so, for ease of exposition. Their argument is a very simple one and works like this:

  • (1) If the Reasons View of privatisation were correct, then all we should care about (when it comes to debating the pros and cons of the process) are the reasons motivating the decisions, not who makes them.

  • (2) We do not only care about the reasons motivating particular decisions; we also care about the identity of the agent making those decisions.

  • (3) Therefore, the Reasons View of privatisation must be incorrect.

The key to this argument is the second premise. Dorfman and Harel develop a few lines of support for this premise. One of them is to look at how people talk about privatisation. They consider the work of Richard Bauman, who once identified five characteristics of privatisation:

(1) the complete or partial sell-off (through asset or share sales) of major public enterprises; (2) the deregulation of a particular industry; (3) the commercialization of a government department; (4) the removal of subsidies to producers; and (5) the assumption by private operators of what were formerly exclusively public services, through, for example, contracting out. 
(Bauman 2000, 2 - sourced in Dorfman and Harel 2016)

They argue that it is difficult to make sense of Bauman’s five characteristics if you favour the Reasons View. While some of the characteristics could be understood in terms of changing the reasons for which a decision is made (specifically, characteristics 2, 3 and 4), others cannot. Indeed, the other characteristics are probably best understood in terms of the Agency View. Dorfman and Harel then argue that if someone like Bauman wished to stick with the Reasons View he would need to explain away the fact that some of the relevant characteristics of privatisation are concerned with agency.

Another line of argument comes from the debate about the normative justification for punishment. The typical rationales for punishment are either retributive or consequentialist in nature. The retributive rationale holds that it is intrinsically good to punish people in proportion to their wrongdoing. The consequentialist rationale holds that punishment is justified if it achieves some desirable end (e.g. deterrence). Both rationales are, to some extent, concerned with the reasons motivating the decision to punish. This might suggest that the debate about the justification of punishment plays out in an arena that is shaped by the Reasons View. But this is not the case. Many of the participants in the debate about the normative justification of punishment, be they retributive or consequentialist in their leanings, hold that there is another condition that must be satisfied before punishment can be legitimate. They hold that the punishment must be administered by a public official. Indeed, most theorists of punishment implicitly or explicitly assume that private individuals are never the appropriate administrators of punishment. This is why they usually balk at the notion of individuals taking it upon themselves to punish wrongdoers (so-called ‘vigilante justice’). It is difficult to explain this in terms of the Reasons View.

A final line of support comes from legal doctrine. Dorfman and Harel reference two cases in their article that highlight how courts often reject the privatisation of public services on grounds that are unrelated to the Reasons View. One case comes from the Indian Supreme Court and had to do with the outsourcing of police services on short-term contracts. The court rejected this practice on the grounds that policing was something that must ‘necessarily…be delivered by forces that are and personnel who are completely under the control of the state’ (Dorfman and Harel 2016, 410). The other case is Israeli and had to do with the privatisation of prison services. This was rejected by the court on the grounds that only public officials are normatively competent to deny someone’s liberty.

In short, Dorfman and Harel reject the Reasons View because it conflicts with how we characterise and critique the process of privatisation. They argue that if you pay attention to both of these things you will find that the identity of the agent is a paramount concern.

3. In Defence of the Agency View
The preceding line of argument only gets us so far. We have cause to reject the Reasons View, and we have found considerable concern with the identity of the agent making the decisions. The problem is that this concern is somewhat mysterious in character. Why must punishment (or whatever) be administered by a public agent? The defender of the Agency View owes us some account of that.

Dorfman and Harel try to step up to the plate and provide us with exactly that. Their argument is quite complex and convoluted. I’ll present a simplified version of it here. In essence, it has to do with the need for certain decisions (punishment being a good example) to be publicly legitimised. If you are going to make decisions that could harm people, or deprive them of core rights and responsibilities, or redistribute those rights and responsibilities, you need those decisions to be publicly legitimate. The problem, according to Dorfman and Harel, is that you will never get that legitimacy if you privatise those decisions. The reason for this is that privatisation necessarily undermines and erodes public responsibility for, and engagement with, the relevant decision, both of which are essential for legitimacy.

The argument seems to work like this:

  • (4) In order for particular decisions to have public legitimacy they must emanate from the public (i.e. the public must be engaged with the decision-making process and must be able to take responsibility for the decision).

  • (5) In order for a decision to emanate from the public it must be made by an agent who defers to the public in a particular way.

  • (6) A private agent, contracted to make those decisions, cannot defer to the public in the right way.

  • (7) Therefore, privatised decision-making lacks the legitimacy that is required for particular decisions.

You could challenge several aspects of this argument. It is certainly a little bit sketchy about which decisions require this form of legitimation, and there is plenty of disagreement in the academic literature about the conditions that must be satisfied in order for a decision to be legitimate. Nevertheless, most of the action in Dorfman and Harel’s article centres on premises (5) and (6).

Let’s consider premise (5). Many ‘public’ decisions are made by individual decision-makers (public officials, government ministers, government agencies, etc.). Members of the general public may have little direct influence and involvement in those decisions. Nevertheless, in order for the decisions to be legitimate, the individual decision-maker must defer to the public in making the decisions. They must see and understand themselves to be public servants. They must take the public’s views into consideration and be answerable to the public for what they do. In short, they must be part of a normative community/institution that engages with and answers to the general public. That’s the view that underlies premise (5).

Premise (6) is then defended on the grounds that private agents, who are contracted to make similar decisions, can never show the right kind of deference. This is fleshed out in what Dorfman and Harel term the ‘different contracts’ argument. A public official is employed under a particular set of norms, norms that require their integration into and answerability to the general public. A private agent is employed under a contractual agreement. Their duties and obligations will be set by the terms and conditions of that contractual agreement. Their duty is not to the general public, it is to the contract. There is consequently distance between them and the general public, not deference.

You might respond to this by arguing that deference to the public could be built into the terms and conditions of the contractual agreement. But Dorfman and Harel argue that this won’t work. They argue that every privatisation agreement will have to afford the private agent a ‘zone of permissibility’ (or ‘autonomy’) in which they are free to exercise their own judgment about what to do. This zone of permissibility will necessarily remove them from the kind of public deference that is required. The reason for this is that in order to make any sense at all, a privatisation agreement must defer to the skills and judgment of the private agent. Recall that the whole point of privatisation is that private agents are able to provide a good or service more efficiently than public agents. This necessarily implies a zone of permissibility. The problem is that within that zone of permissibility, the private agent will have an ‘immunity’ or ‘claim right’ against the state: they will be legally entitled to resist state interference and direction. This is what distances them from the public.

Dorfman and Harel concede that, in principle, a private agent could be fully integrated into a public agency, but they argue that in such a case the private agent would cease to be ‘private’. They also concede that there are some seemingly public agencies that have zones of autonomy that seal them off from political interference. They give the example of an independent election monitoring agency. But they argue that such agencies are not ‘private’ in any meaningful sense. They still serve and engage with the public in the fulfilment of their duties. They are not employed or constituted under the same kind of private contractual agreement as a private agent.

4. Concluding Thoughts
That’s Dorfman and Harel’s argument in very broad outline. Suffice to say there is a lot of detail and nuance missing from this summary. For those who want that detail and nuance, I recommend reading the original paper. Granting that my summary is imperfect, I nevertheless want to close with three critical reflections on the argument presented above.

First, I’m not sure I am entirely convinced by the whole ‘different contracts’ line of argument. It seems to me that contracts of employment (or service provision) are pretty fluid things. There is a classical notion of contracts that views them as strictly private agreements, untethered from general norms and outside influences. But this classical notion is obviously flawed, and certainly does not represent any contemporary legal position on the nature of a contractual agreement. Nowadays, so-called private contractual agreements are frequently influenced, directed and constrained by public policy. Consumer protection legislation, for example, severely restricts what can be put into a private contract. Given this, I’m not sure why the contractual agreement underlying a privatisation arrangement couldn’t be heavily influenced and constrained by public policy, and, more importantly, why the contract couldn’t insist on deference to the public. I’m also not sure that I am convinced by the claim that a ‘zone of permissibility’ makes that big a difference in this regard since, presumably, the employment contracts of public officials will include similar zones of permissibility. Those zones don’t completely distance public officials from the general community, nor do they grant them a relevant claim right.

Second, I’m somewhat puzzled by the distinction between the Reasons View and the Agency View. One problem I have is that Dorfman and Harel’s defence of the Agency View seems to bring them full circle, back to a position that is very similar to the Reasons View. Think about it. When it boils down to it, their major line of objection is that privatisation agreements give private agents a ‘zone of permissibility’ in making decisions. But why is this so objectionable? Because it means they do not defer to the public in making decisions, i.e. they do not take the public appropriately into account in their decision-making. This seems pretty close to saying that the problem is that they don’t act for sufficiently ‘public’ reasons. Now, I’m sure Dorfman and Harel would respond by saying that their argument is not just about reasons, it is also about the formal legal immunities that the contractual agreement would grant the private agent. But if those formal immunities are up for grabs — as I suggested in the previous paragraph — then it seems like reasons for action would be the only relevant difference.

Third, I worry that the argument as a whole is a little bit too clever. It seems to come perilously close to being true by definition. Privatisation is being defined as the process whereby decision-making authority is transferred to a private agent through a contractual agreement that distances them from the public. Distancing thus becomes the defining feature of privatisation. And this distancing, in turn, is why the process is objectionable. Any so-called privatisation arrangement that doesn’t involve this distancing is not really privatisation at all.

That said, I do find something compelling in the line of argument sketched by Dorfman and Harel. I do think public responsibility and engagement are important in some contexts, and I do worry that privatisation agreements tend to be more corrosive of those virtues. But that’s not to say that they are always and everywhere more corrosive, or that they might not have other, countervailing virtues.

Friday, August 18, 2017

Does Human Nature Exist? On the Philosophy of Human Nature

We hear the term bandied about all the time. A man cheats on his wife. We are told that this is simply part of his 'nature’ - that men have evolved to be philanderers. Two young men fight on the streets, taunting and goading each other on. This too is said to be part of their nature - they have evolved modules that predispose them toward violence and jockeying for status. Some people have dedicated their lives to studying and identifying all the constituent elements of human nature, convinced that their inquiries are unearthing important truths about the human condition.

Are they right? Is there a tractable concept or idea of human nature that can form the basis of their inquiries? Or are they like theologians debating the properties of angels dancing on the head of a pin? This is a fundamentally philosophical question. It has nothing to do with particular claims about human nature — such as the two, highly contentious claims with which I opened this post — and everything to do with the concept of human nature. How ought it be understood or defined?

According to some philosophers, there is no such thing as human nature. According to them, to think that humans (or other animals) have some stable ‘nature’ is contrary to one of the central tenets of modern evolutionary biology. Others think that there is a defensible concept of human nature. In this post, I want to take a look at some of the arguments that are presented in this debate.

I do so through the lens of Edouard Machery’s article ‘A Plea for Human Nature’. As you might guess from the title, Machery is one of the philosophers who thinks that there is a defensible concept of human nature. He defends his view by looking at two arguments from the work of David Hull, one of the leading critics of the concept of human nature. Let’s take a look at what he has to say.

1. Two Concepts of Human Nature
Machery’s defence of human nature hinges on a particular understanding of human nature. He argues that there are two concepts of human nature at play in the contemporary debate. One of them is the ‘essentialist view’ of human nature:

Essentialist view of Human Nature = The claim that human nature is determined by the set of necessary and sufficient properties of humanness, coupled with the claim that the properties that are part of human nature are distinctive of human beings.

This is a classic, Platonic view. It is premised on the belief that every object, event or state of affairs (every ‘kind’) has a set of necessary and sufficient properties that determine its ontological status (a set of ‘essences’). The essence of being a triangle, for example, would be ‘having three sides’. Any object with more (or fewer) than three sides could not be a triangle. In the case of human beings, this essentialist view usually translates into the claim that things like intelligence, humour, morality, reason, and language are distinctively and essentially human. They are what define us and mark us out as different from other animals. They constitute our nature as human beings.

The essentialist view is to be contrasted with something Machery calls the ‘nomological view’:

Nomological view of Human Nature = The claim that human nature is the set of properties that humans tend to have due to the evolution of their species.

The nomological view does not try to identify what is distinctive or special about human beings. It simply tries to identify properties that humans exhibit that are best explained by their evolutionary (and not by their cultural) heritage. Examples of properties that proponents of this view claim to be part of human nature would include bipedalism, sexual dimorphism, and large brains.

Machery’s defence of human nature works like this: He argues that most of the critics of human nature have taken aim at the essentialist view, not the nomological view. He favours the nomological view and thinks that it withstands the criticisms usually levelled at the essentialist view.

2. Hull’s Anti-Essentialism Argument
To see how this plays out, let’s first consider David Hull’s famous anti-essentialist argument. Here is the relevant text from Hull’s paper setting out this argument:

Generations of philosophers have argued that all human beings are essentially the same, that is, they share the same nature… In this paper, I argue that if ‘biology’ is taken to refer to the technical pronouncements of professional biologists, in particular evolutionary biologists, it is simply not true that all organisms that belong to Homo Sapiens as a biological species are essentially the same… periodically a biological species might be characterised by one or more characters which are both universally distributed among and limited to the organisms belonging to that species, but such states of affairs are temporary, contingent and relatively rare. 
(Hull 1986, 3)

In this passage, Hull is highlighting one of the key findings of evolutionary biology. Since the Neo-Darwinian synthesis was formulated in the first half of the 20th century, evolutionary biology has been wedded to anti-essentialist thinking. Indeed, one of the most vigorous defenders of evolution — Richard Dawkins — starts his book-length defence of the truth of evolution with a chapter outlining its anti-essentialism.

According to the modern view, species are not immutable Platonic kinds. They are all part of one great tree of life. Individual organisms reproduce by exchanging and combining genetic material. This, allied with occasional mutations in DNA, leads to variation in their offspring. The core truth of evolutionary biology is that life (across space and time) is just one teeming mass of variation, with some stable clusters of organisms within it. These stable clusters only exchange genetic material with one another and they form what we call ‘species’. But their clustering is just a contingent accident of evolutionary history and even within these breeding populations there is considerable variation in offspring.

As a result, there is no ‘essence’ to any particular species. As soon as you identify a property that you think is shared by all (and only) members of a particular species, you are sure to find another member of that species who lacks that property. This knocks the essentialist view of human nature on the head. What's more, it is consistent with our everyday experience of humanity. For every allegedly distinctive property of humanity — reason, morality, language — we can find other animals who share some version of those properties or humans who lack them. Some defenders of essentialism might try to avoid this problem by focusing on ‘statistically characterised essences’, i.e. by claiming that rather than there being a specific set of properties that humans must have (in order to count as humans), there is instead a set of properties of which an individual must share a certain proportion (in order to count as human). Hull argues that this doesn’t work because, in practice, it has proved impossible to define species membership using such clusters of properties.

Hull’s argument could then be reconstructed like this:

  • (1) If there is a human nature, it will be because there is a set of necessary and sufficient properties that are distinctively human (i.e. shared by all and only members of the species Homo Sapiens).
  • (2) There is no set of necessary and sufficient properties that are distinctively human (nor any statistical set).
  • (3) Therefore, there is no such thing as human nature.

Machery concedes premise (2) of this argument. He thinks Hull is absolutely right to claim that there are no essences of humanness. Where he thinks the argument goes wrong is in the first premise, i.e. in the assumption that the essentialist view is the only game in town. He argues that if we adopt the nomological view, we end up with something that is unscathed by Hull’s argument. Indeed, the nomological view is designed to be consistent with evolutionary theory. It does not claim that there are particular properties that all and only members of Homo Sapiens share. It merely claims that there are some properties that we tend to share as a result of our evolutionary history. These properties could be shared by other species and may not be shared by some members of our own species. That does not mean they are not part of our nature.

3. Criticisms of the Nomological View
The nomological view is not beyond criticism. Though it may avoid the clutches of Hull’s argument, there are some potential problems. Machery discusses two in his paper.

The first is that the nomological approach is too reformatory. That is to say, it moves us too far away from the traditional conception of human nature, such that the concept of human nature no longer performs the function we expect of it in our scientific and everyday discourse. When people refer to something being part of human nature, they have in mind those properties and traits that are distinctively human. The nomological view doesn't give them this.

Machery responds to this by arguing that the concept of human nature has played many roles in human history and although the nomological concept fails to fulfil some of those roles, it does fulfil others. In particular, he thinks it helps to mark out humans as a special group in evolutionary history and to identify properties that are likely to be shared by members of this group, irrespective of culture or background.

The second problem with the nomological view is that it might be over-inclusive. That is to say, it might include too many properties within the definition of human nature. There is a terrible tendency to assume that every trait or property that is shared by the majority of humans must have its origins in our evolutionary history — i.e. to suggest the ‘universals’ of the human condition are aspects of human nature. But this cannot be right. Machery gives the example of the belief that water is wet. This is a universal belief, but clearly it cannot be part of human nature. It is a belief prompted by exposure to water, not by evolutionary processes. The problem is that, trivially, evolution has contributed to the belief that water is wet because it has provided us with the sensory apparatus that enables us to form this belief. Nevertheless, it doesn’t seem right to claim that the belief is part of our nature.

The solution to this problem, according to Machery, is to argue that although evolution does trivially contribute to the existence of any trait or disposition shared by humanity, not all such traits and dispositions can be ultimately explained by evolutionary processes. The phrase ‘human nature’ should be reserved for those traits that can be ultimately explained by these processes.

This, however, is easier said than done. Many of the most contentious debates between evolutionary psychologists and their critics, for example, tend to centre on whether certain, seemingly universal (or near-universal), traits can be best explained by evolutionary processes or not. To return to the opening examples of young men’s dispositions toward violence or philandering: some people will want to argue that these traits are products of our evolutionary histories; some will want to argue that they are the result of certain consistently present environmental factors.

So in short, even if we accept the nomological view of human nature, there will be plenty of debate left about the actual contents of human nature. Philosophy alone cannot resolve those debates but it can, at least, clarify what we are debating about.

Thursday, August 17, 2017

The Reality of Virtual Reality: A Philosophical Analysis

The Holodeck - Star Trek

There is an apple in front of me. I can see it, but I can’t touch it. The reason is that the apple is actually a 3-D rendered model of an apple. It looks like an apple, but exists only within a virtual environment — one that is projected onto the computer screen in front of me. I can interact with the apple. I have an avatar that I can control on the screen. That avatar is a virtual projection of myself. It can pick up the apple, throw it around the virtual room, or eat it. But I can’t touch the apple or interact with it using my own physical hands.

Is the apple real? Of course not: it’s virtual. But are virtual objects (or events or states of affairs) ever real? This question is of considerable importance. We already live a significant amount of our lives online (in ‘virtual’ worlds). We interact with people virtually. We deal in virtual goods and services. And if the prognostications of technological enthusiasts are anything to go by, we will probably live more and more of our lives in virtual worlds in the future. With the emergence of immersive virtual reality (VR) headsets such as the Oculus Rift, the Samsung Gear, and Sony PlayStation VR, we can now participate in highly realistic and engaging virtual activities. It would be nice to know whether any of these qualify as being ‘real’, particularly given that the technology is marketed to us as virtual reality.

The reality (or unreality) of the virtual is, fundamentally, a philosophical question. And, fortunately, philosophers have already begun to answer it. The philosopher Philip Brey, in particular, has developed a sophisticated framework for thinking about the reality of virtual reality. He argues that some virtual objects and events are obviously not real (they are merely representations or simulacra), but others are every bit as real as their real world analogues. He suggests that we use John Searle’s theory of social reality to tell the difference.

I want to analyse and evaluate Brey’s proposed framework in the remainder of this post.

1. The Physical and the Functional
To warm up, let’s think a little bit more about the opening example of the virtual apple. As Brey points out, this ‘apple’ clearly exists in some form. It is not a mirage or hallucination. It really exists within the virtual environment. But its existence has a distinctive metaphysical quality to it. It does not exist qua real apple. You cannot bite into it or taste its flesh. But it does exist qua representation or simulation. In this sense it is somewhat like a fictional character. Sherlock Holmes is not real: there was no one by that name living at 221b Baker Street in London in the late 1800s, nor did anyone answering his description solve the crimes that befuddled the hapless inspectors from Scotland Yard. But Sherlock Holmes clearly does exist qua fictional character. There are agreed upon facts about his appearance, habits, and intellect, as well as what he did and did not do qua fictional character.

So Sherlock Holmes and the apple have a simulative reality, but nothing more. They do not and cannot exist qua real apple or real person. Why not? The answer seems to lie in the essentially physical nature of apples and detectives. An apple does not exist qua real apple unless it has certain physical properties and attributes. It has to have mass, occupy space, consist in a certain mix of proteins, sugars and fats, and so on. A virtual apple cannot have those properties and hence cannot be the same thing as a real apple.

The same goes for detectives like Sherlock Holmes, although there are some complexities here. Human detectives have to have mass, occupy space, and consist in a certain mix of proteins and metabolic processes. But do all detectives have to have these properties? Here we get into one of the great debates in philosophy. It seems to be at least conceivable that there could be a virtual detective that could solve real world crimes in the same manner as Sherlock Holmes. Imagine a really advanced artificial intelligence (AI) that is constantly fed data about crimes and criminal behaviour. It spots patterns and learns how to solve crimes based on this data. You could then feed information about new crimes into the AI and it could spit out a solution. This AI program would then be a ‘real’ detective, not a mere simulation or representation of a detective. In fact, you don’t really have to imagine such a detective. Companies like PredPol are already creating them.

We can draw some lessons from these examples. First, we can see that there are at least some kinds of entities — like apples and human detectives — that are essentially physical in nature. We can call them essentially physical kinds. These are objects, events and states of affairs that must have some specific physical properties in order to qualify as an instance of the relevant kind. Virtual versions of these kinds can never be real; they can only be simulations or representations. But then there are other kinds that are not essentially physical in nature. A ‘detective’ would seem to be an example. A detective is a non-physical functional kind: an entity qualifies for membership of the class of detectives in virtue of the function it performs — attempting to investigate and solve crimes — not in virtue of any physical properties it might have. Virtual versions of these kinds can be every bit as real as their real-world equivalents.

Some functional kinds are essentially physical in nature. A lever is a functional kind. A wooden stick can be counted as a ‘real’ instance of a lever in virtue of the function it performs, but it can only perform that function because it has certain physical characteristics. Just try lifting a heavy object with a virtual lever — one simulated on the screen of your smartphone. You won’t be able to do it. On the other hand, a spirit level does not require any particular physical shape or constitution. You can quite happily assess the levelness of your bookshelf with a spirit level that has been simulated on the screen of your smartphone.

Furthermore, the term ‘non-physical functional kind’ is something of a misnomer. Objects and entities that belong to that class will have some physical instantiation (after all virtual objects are physically instantiated, in some symbolic form, in computer hardware); it’s just that they don’t require any particular or specific physical characteristics in order to perform the relevant function.

2. Social Kinds and Social Reality
So there are some essentially physical kinds: virtual instances of these kinds can only be simulacra. There are also some non-physical functional kinds: virtual instances of these kinds can be as real as their real world equivalents. Are there any other kinds whose virtual instances can be every bit as real as their real world equivalents? Yes, there are: social kinds. These are a sub-category of non-physical functional kinds, which are particularly interesting because of their practical importance and their ontological origins.

In terms of their importance, it goes without saying that large chunks of the reality with which we engage on a daily basis are social in nature. Our relationships, jobs, financial assets, property, legal obligations, credentials, social status, and so on, are all socially constructed and sustained. Brey argues that much of this social reality can be recreated in virtual form. He argues that we can use John Searle’s theory of social reality as a guide to when and whether social kinds can be ‘ontologically reproduced’ (as he puts it) in virtual form.

To understand his proposal, we need first to understand Searle’s theory. Searle distinguishes physical kinds and social kinds along two dimensions:* their ontology (what they are) and their epistemology (how we come to know of their existence). He argues that physical kinds are distinctive in virtue of the fact that they are ontologically objective and epistemically objective. An apple does not depend on the presence of a human mind for its existence — it is thus ontologically objective. Furthermore, we can come to know of its existence through intersubjectively agreed upon methods of inquiry — it is thus epistemically objective.

Social kinds are distinctive because they are ontologically subjective and epistemically objective. Money depends on human minds for its existence. Gold, silver, paper and other physical tokens do not count as money in virtue of their physical properties or characteristics (contrary to what people often believe). They count as money because human minds have conferred the functional status of money on them through an exercise of collective imagination. In other words, particular physical tokens only count as money because most of us agree that they count as money. In theory, we can confer the functional status of money on any token, be it an exquisitely sculpted metal coin or a digital register of bank balances. In practice, certain tokens are better suited to the functional task than others. This is due to their durability and incorruptibility. Nevertheless, this hasn’t stopped us from conferring the functional status of money on virtual tokens. Indeed, most money that is in existence today is virtual in nature: it only exists in digital bank balances; it does not, and never will, exist in the form of notes or coins. We happily pay for goods and services with this ‘virtual’ money, even though it lacks physical tangibility. This virtual money is still epistemically objective in nature. I cannot unilaterally imagine more money into my bank account. My current financial status is a matter of intersubjectively agreed upon fact.

Searle argues that many social kinds share these twin properties of ontological subjectivity and epistemic objectivity. Examples include marriages, property, legal rights and duties generally, corporations, political offices and so on. He calls these ‘institutional facts’. These are social kinds that come into existence through the collective agreement upon a constitutive rule. The constitutive rule takes the form ‘X counts as Y in context C’. In the case of money, the constitutive rule might read something like ‘Precious metal coins with features a, b, c count as money for the purposes of purchasing goods and services’. Searle doesn’t think that we explicitly formulate constitutive rules for all social objects and events. Some constitutive rules are implicit in how we behave and act; others are more explicit.
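Searle’s ‘X counts as Y in context C’ formula is mechanical enough that it can be sketched as a toy program. The following is purely my own illustration (the rule names, features and contexts are invented for the example, and this is in no way Searle’s or Brey’s formalism); it just makes vivid the point that the conferred status depends on the collectively accepted rule, not on the physical token:

```python
# Toy model of constitutive rules of the form 'X counts as Y in context C'.
# This is an illustrative sketch only, not Searle's or Brey's own formalism;
# all rule names, features and contexts below are invented for the example.
from dataclasses import dataclass


@dataclass(frozen=True)
class ConstitutiveRule:
    x_feature: str  # feature a token must have (the 'X' term)
    status: str     # functional status collectively conferred (the 'Y' term)
    context: str    # the context 'C' in which the rule applies


def statuses_of(token_features: set, context: str, rules: list) -> set:
    """Return the statuses conferred on a token in a given context."""
    return {r.status for r in rules
            if r.x_feature in token_features and r.context == context}


rules = [
    ConstitutiveRule("state-minted coin", "money", "commerce"),
    ConstitutiveRule("digital bank balance", "money", "commerce"),
]

# The physical form of the token matters less than the accepted rule:
print(statuses_of({"digital bank balance"}, "commerce", rules))      # {'money'}
print(statuses_of({"digital bank balance"}, "desert island", rules))  # set()
```

Note how a purely virtual token (a digital bank balance) counts as money just as readily as a coin, so long as a rule covers it in the relevant context — which is the point Brey exploits below.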

What’s interesting about Searle’s theory is that it means that much of our everyday social reality is, in a sense, already ‘virtual’ in nature. It doesn’t depend on any physical, real world properties or characteristics for its existence. Money, marriages, property, rights, duties, political offices and the like do not exist ‘out there’ in the physical world; they exist inside our (collective) minds. They are fictional projections of our minds over the physical reality we inhabit. In principle, we can project the same social reality over anything, including the representations and simulations that exist within virtual reality. Thus, according to Brey, we can ontologically recreate things like money, marriage, rights, duties, political offices, and so forth in virtual worlds. All it takes is some collective imagination and will.

3. Conclusion: What's real and what's not?
Brey’s view on the reality of virtual reality can be summarised as follows:

Essentially physical kinds (i.e. entities that must have some specific physical properties or characteristics) can never be ontologically reproduced in a virtual environment; their virtual form can only ever be a simulation or representation (e.g. apples, chairs, cars etc.).

Non-physical functional kinds (i.e. entities that perform functions that do not depend on any particular physical properties or characteristics) can be ontologically reproduced in a virtual environment; their virtual form can be every bit as real as their real world equivalents.

Social kinds (i.e. a sub-set of non-physical functional kinds whose existence depends on collective coordination and agreement upon a constitutive rule of the form ‘X counts as Y in C’) can be ontologically reproduced in a virtual environment; their virtual form can be every bit as real as their real world equivalents.

Note that this theory covers virtual objects, events and states of affairs. It does not include virtual actions. As Brey points out in his paper on the topic, virtual actions have to be treated differently for the simple reason that virtual actions are typically performed by human controllers of characters operating in virtual worlds. As such, virtual actions can have ‘extravirtual’ origins and effects, and this means that they share a much more fluid relationship with reality than do virtual objects and events. Virtual actions are constantly spilling over into the real world. It would require another post to clarify the exact ontological status and classification of these acts, but suffice to say virtual actions are often every bit as real as real world actions.

For what it is worth, I think Brey’s theory is pretty much spot on. There clearly are some objects and events that require a particular physical instantiation and this can never be recreated in virtual form; and there are also clearly other objects and events that do not depend on a particular physical instantiation (that are ‘multiply realisable’ - to use the philosophical parlance). I also agree that much of our everyday social reality can be recreated in virtual form because it depends for its existence on collective agreement. I think this is an important observation because its consequences could be far-reaching. We can certainly quibble about the utility of Searle’s specific theory of how social kinds come into existence, but there is general agreement that much of the social world is constructed by the minds of human actors. (If you are interested in a slightly different theory of social kinds, see my previous post on the philosophy of social construction).

That said, I think that there might be an alternative approach to differentiating between the virtual and the real that is overlooked by the theory. I’m not sure that defining everything that is represented or created on a computer as ‘virtual’ captures what we really mean by the term. Indeed, I tend to favour something closer to an exclusionary definition of the virtual. In other words, I would prefer a definition of the virtual that necessarily excludes reality: that holds that the virtual can never be real.

Furthermore, even though I agree with the theory in its current form, I think there will be much disagreement over specific cases. For any particular object or event, people might disagree about whether it requires some essential physical property or characteristic or not. Consider the debate about the human mind. There are some philosophers, called functionalists, who think that the human mind can be realised in multiple different physical forms. It is, consequently, not an essentially physical kind. There are others who think that only a human brain could instantiate a mind. This means that it is an essentially physical kind. We can expect disagreements of this sort to arise over allegedly real objects and events that are instantiated in virtual worlds, even if we agree on the general principles that apply to distinguishing that which is real from that which is merely a simulation. To be fair, Brey recognises this point. One of his main observations is that virtual objects and events tend to exist in an ontologically uncertain/contested state.

* Searle uses slightly different terminology in his work. He distinguishes between brute facts and institutional facts. 

Wednesday, August 9, 2017

Podcast - Why we should create artificial offspring

I recently had the pleasure of being a guest on the RoboPsych podcast. I was interviewed by hosts Tom Guarriello and Julie Carpenter about my recent paper 'Why we should create artificial offspring'.  The paper is an extended thought experiment, arguing that creating artificial offspring might be good for humanity.  The podcast explored many of the key ideas in the paper and some other issues too. You can listen below or follow this link to the RoboPsych website. While there, you should check out the other episodes. There have been a number of interesting guests and topics.

Friday, July 28, 2017

New Paper: In Defence of the Epistemological Objection to Divine Command Theory

I have a new paper coming out in the journal Sophia. It's about the so-called 'epistemological objection' to divine command theory. This builds on some of my previous posts on the topic, albeit at much greater length and in more detail. The paper argues that DCT's inability to account for the moral obligations of reasonable non-believers is a problem that undermines its credibility as a metaethical theory. Full details below, along with links to the pre-publication version of the paper.

Title: In Defence of the Epistemological Objection to Divine Command Theory
Journal: SOPHIA: An international journal of philosophy and traditions 
Links: Philpapers; Academia; Research Gate
Abstract: Divine Command Theories (DCTs) come in several different forms, but at their core all of these theories claim that certain moral statuses (most typically the status of being obligatory) exist in virtue of the fact that God has commanded them into existence. Several authors argue that this core version of the DCT is vulnerable to an epistemological objection. According to this objection, DCT is deficient because certain groups of moral agents lack epistemic access to God’s commands. But there is confusion as to the precise nature and significance of this objection, and there are several critiques of its key premises. In this article I try to clear up this confusion and address these critiques. I do so in three ways. First, I offer a simplified general version of the objection. Second, I address the leading criticisms of the premises of this objection, focusing in particular on the role of moral risk/uncertainty in our understanding of God’s commands. And third, I outline four possible interpretations of the argument, each with a differing degree of significance for the proponent of the DCT.

Wednesday, July 26, 2017

Episode #27 - Gilbert on the Ethics of Predictive Brain Implants


In this episode I am joined by Frédéric Gilbert. Frédéric is a philosopher and bioethicist who is affiliated with quite a number of universities and research institutes around the world. He is currently a Scientist Fellow at the University of Washington (UW), in Seattle, US but has a concomitant appointment with the Department of Medicine, at the University of British Columbia, Vancouver, Canada. On top of that he is an ARC DECRA Research Fellow, at the University of Tasmania, Australia. We talk about the ethics of predictive brain implants.

You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 1:50 - What is a predictive brain implant?
  • 5:20 - What are we currently using predictive brain implants for?
  • 7:40 - The three types of predictive brain implant
  • 16:30 - Medical issues around brain implants
  • 18:45 - Predictive brain implants and autonomy
  • 22:40 - The effect of advisory implants on autonomy
  • 35:20 - The effect of automated implants on autonomy
  • 38:17 - Empirical findings on the experiences of patients
  • 47:00 - Possible future uses of PBIs
  • 51:25 - Dangers of speculative neuroethics

Sunday, July 23, 2017

The Everlasting Check: Understanding Hume's Argument Against Miracles

"I have discovered an argument [...] which, if just, will, with the wise and learned, be an everlasting check to all kinds of superstitious delusion"
(Hume, Of Miracles)

Miraculous events lie at the origins of most religions. Jesus’s resurrection from the dead. Moses’s parting of the Red Sea. Mohammed’s journeys on the back of a winged horse. Joseph Smith’s shenanigans with the Angel Moroni. All these events are, in common parlance, miraculous. If you wish to be a religious believer, you must accept the historical occurrence of at least some of these originating miraculous events. The problem is that you don’t get to observe them — to verify them with your own eyes. You must rely on the testimony of others, often handed down to you through religious texts or a lineage of oral histories. Is this testimonial evidence ever sufficient to warrant belief in the miraculous?

David Hume famously argued that it wasn’t. In his essay Of Miracles, which appears as section 10 of his larger work An Enquiry Concerning Human Understanding, Hume argues that testimonial evidence is unlikely to ever be sufficient to warrant belief in a miraculous event. Hume’s argument has been the subject of much interpretation and debate over the past 250 or so years. Much of that debate obscures or misrepresents what Hume actually argued. Fortunately, the philosopher Alexander George has recently published a beautiful exposition and analysis of Hume’s argument, titled The Everlasting Check: Hume on Miracles, which corrects the record on a number of key points.

George’s analysis is somewhat similar to that of Robert Fogelin, which I have written about previously. In essence, both authors argue that Hume’s argument is routinely misinterpreted as providing an a priori (or ‘in principle’) case against the possibility of testimonial proof of miracles. But this is emphatically not what Hume argues: Hume merely argues that testimonial proof of the miraculous is extremely unlikely. The mistake stems from the fact that Hume’s essay is broken into two parts, and many people assume that both parts present two separate arguments: an a priori argument and an a posteriori argument. They do not. Both parts must be read together as presenting one single argument.

Although Fogelin and George reach similar conclusions, they do so by subtly different means. George’s exposition has the advantage of being more thorough, more up to date, and ultimately more straightforward. On top of that, George makes some important points about how Hume defines miracles and how he relates the evidence for the occurrence of natural laws to the evidence for the reliability of testimony. When you understand these points, much of Hume’s argument falls neatly into place.

So what I want to do over the remainder of this post is to share George’s analysis of Hume’s argument. By the end, I’m hoping that the reader will appreciate the strengths (and limitations) of Hume’s analysis.

1. The Basic Structure of Hume’s Argument
Hume’s argument is about proof of miracles. But what is a ‘miracle’? Hume is pretty clear about this. He defines a miracle as:

Miracle = A violation of the laws of nature.

The problem is that this raises a further question: what is a law of nature? Some people argue that a law of nature is an exceptionless pattern in the natural world. The law of energy conservation or the second law of thermodynamics would be common examples. But to say that a particular law, L, is an exceptionless pattern is also somewhat ambiguous. Is the pattern truly exceptionless? In other words, is it some ontological necessity of the universe? Or is it simply that we have never observed an exception? In other words, is it a strong epistemic inference from our observations?

George argues that Hume favoured a strictly epistemic understanding of the laws of nature. He viewed laws of nature as well-confirmed regularities. We say that there is a law of conservation of energy because we have never observed energy coming into being from nothing; we have only ever observed it being changed from one form to another. But who knows, maybe some day we will observe an exception and this exception will itself be well-confirmed and hence we will have to revise our original conception of the law. That said, laws of nature are, for Hume, very very well-confirmed. We usually have the strongest possible evidence in their favour.

This epistemic understanding of miracles is consistent with Hume’s general empiricism, and it allows for miracles to come in degrees: an event can be more or less miraculous depending on how well-confirmed the regularity with which it is inconsistent really is. Indeed, Hume himself distinguished so-called ‘marvels’ from ‘miracles’ on the grounds that the former were inconsistent with less well-confirmed regularities than the latter.

This is all by way of saying that Hume’s target is best understood in the following manner (note: this is my gloss on Hume, not George’s):

Miracle* = A violation of a very well-confirmed regularity that is observed in the natural world.

What then of Hume’s argument? That argument has a very simple structure, consisting of two premises and a conclusion. George uses formal mathematical terminology to describe the two premises of the argument, referring to them as the ‘first lemma’ and ‘second lemma’, respectively. He also refers to the conclusion as a ‘theorem’. I will follow suit though I don’t think it is strictly necessary:

  • (1) If the falsehood of testimony on behalf of an alleged miraculous religious event is not “more miraculous” than the event itself, then it is not rational to believe in the occurrence of that event on the basis of that testimony. [First Lemma]
  • (2) The falsehood of the testimony we have on behalf of alleged religious miraculous events is not more miraculous than those events themselves. [Second Lemma]
  • (3) Therefore, it is not rational to believe that those miraculous religious events have occurred. [Hume’s Theorem]

I have modified the wording of the lemmas and the theorem slightly from how they appear in George’s original text. I did this in order to make the argument more logically coherent and more consistent with Hume’s aims. As we will see, Hume doesn’t try to argue against all testimonial proofs of the miraculous; merely against the historical testimonial proof provided on behalf of the major religions. Indeed, Hume specifies under what conditions it might be acceptable to believe in a miracle.

For those who care about how this argument maps onto the structure of Hume’s essay, the first lemma is defended in the first part and the second lemma is defended in the second part. Again, to reiterate what was said above, it is important that we don’t disconnect the two parts. You cannot derive Hume’s Theorem from the first part alone.

2. Establishing the First Lemma
The first lemma of Hume’s argument is what has generated most of the debate and controversy. It is significant because it states a general principle or test that should apply to the evaluation of testimonial evidence. As such, it has significance beyond the debate about the occurrence of religious miracles. Here’s how Hume introduces it:

[N]o testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous, than the fact, which it endeavours to establish. And even in that case, there is a mutual destruction of arguments, and the superior only gives us an assurance suitable to that degree of force, which remains, after deducting the inferior. 
(Hume, Of Miracles para 13)

Here’s the basic idea: the evidence we have in favour of the laws of nature is strong. We have observed the same regularities over and over again. So our epistemic confidence in their status is very very high. A miraculous event, by definition, contradicts these regularities. To believe in the miracle we would need to have even stronger evidence in favour of its occurrence than we do in favour of the laws of nature. When the evidence supplied comes in the shape of testimony, then we will only reach this standard when the probability of the testimony being false is lower (i.e. more miraculous) than the probability of the miracle itself.
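Hume’s condition is often given a Bayesian gloss. The following reconstruction uses modern probabilistic notation, which is emphatically not Hume’s own (he wrote a century before this formalism existed); $M$ and $T$ are my labels for the miracle and the testimony:

```latex
% Bayesian reconstruction of the first lemma (modern notation, not Hume's).
% M: the miraculous event occurred.  T: the testimony was given.
% Belief in M on the basis of T is warranted only if
\[
  P(M \mid T) > P(\neg M \mid T),
\]
% which, by Bayes' theorem, holds if and only if
\[
  P(T \mid M)\,P(M) > P(T \mid \neg M)\,P(\neg M).
\]
```

Since $P(M)$ is vanishingly small (the miracle violates a very well-confirmed regularity), the left-hand side wins only if $P(T \mid \neg M)$ — the probability of the testimony being given falsely — is smaller still. That is, the falsehood of the testimony must be ‘more miraculous’ than the event itself.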

The major philosophical problem with this analysis is that it relies on the commensurability of the testimonial evidence in favour of the miraculous and evidence in favour of the laws of nature. In order to establish the respective improbabilities of the two propositions (testimony is false; miracle is true) we need to establish some common metric along which those probabilities can be measured. This is where Hume made his most important philosophical contribution to the debate. He argued that all evidence ultimately stems from our observations of the world around us. Any particular proposition that we hold to be true about the world (e.g. that dead men stay dead; that energy is conserved; that there is a universal tendency toward increased entropy) is ultimately warranted by observations we have made about the world. This is clearly true for the laws of nature, but it is also true for testimonial proofs. The reason why we are usually confident of testimony is because there is a well-confirmed regularity to the effect that when someone says that an event occurred it usually did occur. To put it another way, Hume is saying that there is a law of testimony. This law of testimony is grounded in our experiences of the relationship between testimony and events in the real world.

Drawing out this equivalence is critical to Hume’s project, and it is a point that interpreters of Hume often miss, according to George. It is only through establishing this equivalence that Hume is entitled to say that we can meaningfully compare the relative probabilities of testimonial proof and a putative law of nature. As he puts it:

Given Hume’s central commensurating analysis in Part 1 of his essay, it is indeed meaningful to evaluate whether the relation is greater than holds between an event’s occurrence and a testimony’s falsehood (because we can now appreciate that there is a lawlike claim about nature with which the falsehood of that testimony conflicts). 
(George 2016, 15)

This equivalence is what justifies the general principle stated in Hume’s first lemma. We are only warranted in believing a miracle on the basis of testimony if the warrant for the law of testimony in that instance outweighs the warrant for the law of nature. This only happens when the falsehood of the testimony is more improbable than the falsehood of the law of nature.
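To make the force of the lemma concrete, here is a worked numeric illustration. The numbers are entirely my own toy assumptions (Hume offers no figures): a tiny prior for the miracle, a high chance of testimony if it occurred, and a modest rate of false or mistaken miracle reports.

```python
# Toy numeric illustration of the first lemma (my numbers, not Hume's).
# M = the miracle occurred; T = the testimony was given.
p_m = 1e-9              # prior: M violates a very well-confirmed regularity
p_t_given_m = 0.99      # witnesses would very likely report a miracle they saw
p_t_given_not_m = 1e-3  # rate of false or mistaken miracle reports

# Bayes' theorem: P(M | T)
posterior = (p_t_given_m * p_m) / (
    p_t_given_m * p_m + p_t_given_not_m * (1 - p_m))
print(f"P(M|T) = {posterior:.2e}")  # roughly 1e-06
```

Even with testimony in hand, the posterior stays around one in a million: because false testimony (here, one in a thousand) is vastly less ‘miraculous’ than the event itself (one in a billion), the testimony barely moves the needle. Only if the false-report rate dropped below the prior for the miracle would belief become warranted — exactly Hume’s condition.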

3. Establishing the Second Lemma
The first lemma is interesting in and of itself, but it doesn’t get us anywhere close to establishing that it is irrational to believe in historically testified miracles. Obviously, the suspicion underlying the first lemma is that the warrant we have for the law of testimony is much weaker than the warrant we have for any candidate law of nature. After all, even though testimony is usually accurate, there are plenty of times when it isn’t. People misunderstand what they have seen. They fabulate and overinterpret. They suffer from a confirmation bias, whereby they assume that what they saw is consistent with their prior beliefs/desires. They also, sometimes, lie outright for their own gain.

The purpose of the second lemma is to establish that the law of testimony breaks down in the case of historically testified miracles: that the falsehood of the testimony in favour of those miraculous events is not more miraculous than the events themselves. Hume does this by setting out four main lines of argument in favour of the second lemma. These can be briefly summarised as:

(A) The Reliability Argument: Testimonial evidence is strongest when it meets certain conditions of reliability. Specifically, when it comes from (i) many witnesses; (ii) of good sense, education and integrity; (iii) who have reputations that could be tarnished by the evidence they are presenting; and (iv) who testify to an event that occurred in public, which would have enabled the detection of fraud. Hume’s contention is that the evidence for religious miracles doesn’t meet these conditions.

  • (B) The Propensity to the Marvellous Argument: People in general have a propensity to the marvellous. As Hume puts it ‘the passion of surprize and wonder, arising from miracles, being an agreeable emotion, gives a sensible tendency towards the belief of those events, from which it is derived…[we] love to partake of satisfaction at second-hand or by rebound, and place a pride and delight in exciting the admiration of others.’ In other words, we have an emotional tendency to accept and repeat claims to the miraculous. What’s more, this tendency is even higher in the case of religious miracles because of the authority and power that is often granted to accepted religious prophets and missionaries. This means there is a tendency to trade accuracy for emotional satisfaction. This may not be done deliberately; it may be entirely subconscious; but it still undermines credibility.

(C) The ‘Ignorant and Barbarous’ Peoples Argument: The people from whom testimony of historical religious miracles emanates are ‘ignorant and barbarous’. They were likely to be predisposed to credulity and misunderstanding; they were not sceptical and disinterested observers. Obviously, Hume’s language in expressing this argument is antiquated and un-PC.

(D) The ‘End to Commonsense’ Argument: When it comes to religious affairs in general, Hume argues that there is an ‘end of commonsense’. In other words, people don’t follow commonsense rules of reasoning and inference when it comes to religious matters. Systems of religious thought tend to blind people to the truth and we should consequently weigh religious claims accordingly. Hume uses a thought experiment to make his point here. He asks us to imagine a group of people who testified to the resurrection of Elizabeth I in order to found a new system of religion. How much weight would we accord their testimony? Very little. Hume suggests that all ‘men of sense’ would ‘reject the fact…without farther examination’.

The first and third of these arguments have to do with the credibility and reliability of the witnesses we have for religious miracles. The strength of these arguments depends on how accurately they represent the historical record. The second and fourth arguments are more general, and focus on people’s tendency to be misled when it comes to the marvellous and the miraculous, particularly when it is religious in nature.

It is important to emphasise that these arguments do not undermine all testimony in favour of miracles. Hume is very clear about this. He thinks it would be possible to have proof of a miracle that overcomes the test established by the first lemma. Indeed, he uses a thought experiment — the ‘eight days of darkness’ thought experiment — to illustrate the conditions under which testimony would provide proof of the miraculous. The thought experiment asks us to imagine that “all authors, in all languages, agree, that, from the first of January 1600, there was a total darkness over the whole earth for eight days: Suppose that the tradition of this extraordinary event is still strong and lively among the people: That all travellers, who return from foreign countries, bring us accounts of the same tradition, without the least variation or contradiction”. This admittedly sets a high bar, but that’s what we would expect when testimony is going up against a law of nature, and it at least shows that testimony might be sufficient on some occasions.

3. Limitations of Hume’s Argument
Hume’s argument has many limitations. My belief is that the general principle established by the first lemma is fairly robust: it is difficult to see how else testimony could override a law of nature. The arguments adduced in support of the second lemma are much less robust. If you read any religious apologetics, you will know that arguments (A) and (C) are highly contested. Apologists introduce all sorts of arguments for thinking that the testimony we have is more reliable than Hume claims. For instance, in relation to the resurrection of Jesus, they will argue that we do have multiple witnesses (perhaps as many as 500), that some of them were well-educated and disinterested, and that some of them did suffer greatly for what they believed. Now, I tend to think that Hume is still, broadly speaking, correct and that the apologetical arguments ultimately fail, but clearly a lot more work would be needed to address each and every one of these contentions and thereby shore up the support for the second lemma.

I also think that arguments (B) and (D) are pretty contentious. Hume is certainly on to something. There are lots of putative miracle claims out there, and systems of religious thought sometimes do come with benefits for believers which may cause them to be more credulous than they ought to be. Furthermore, psychological evidence suggests that we do have a propensity to over-ascribe agency to events in the natural world, and to misinterpret what our senses have shown us. These errors probably lie at the foundation of many miracle claims. But to dismiss all religion as anathema to commonsense and rationality seems to go too far to me. I think, along with Oscar Wilde, that ‘commonsense’ is not that common and that it is a mistake to assume that secular/non-religious people have some innate epistemic superiority over their religious brethren. It’s more complicated than that.

Despite this, I would argue that Hume provides a pretty good framework for evaluating religious miracle claims and that while he may be wrong (or too superficial and glib) on some of the critical details, anyone who cares about this issue in the modern day plays on the terrain that he defined nearly three centuries ago.

Friday, July 21, 2017

The Argument from Irreducible Complexity

Bacterial flagella

When I was a student, well over a decade ago now, intelligent design was all the rage. It was the latest religiously-inspired threat to Darwinism (though it tried to hide its religious origins). It argued that Darwinism could never account for certain forms of adaptation that we see in the natural world.
What made intelligent design different from its forebears was its seeming scientific sophistication. Proponents of intelligent design were often well-qualified scientists and mathematicians, and they dressed up their arguments with the latest findings from microbiology and abstruse applications of probability theory. My sense is that the fad for intelligent design has faded in the intervening years, though I have no doubt that it still has its proponents.

That’s all really by way of an apology for the following post, which is going to revisit some of the arguments touted by intelligent design proponents, arguments that have long been challenged and dismissed by scientists and philosophers alike. My excuse for this is that I have recently been reading Benjamin Jantzen’s excellent book An Introduction to Design Arguments which goes through pretty much every single design argument in the history of Western thought, and subjects them all to fair and serious criticism. He has two chapters on arguments from the intelligent design movement: one based on Michael Behe’s argument from irreducible complexity and one based on William Dembski’s argument from specified complexity. Both arguments get at the same basic point, but arrive there by different means. I want to look at the argument from irreducible complexity in the remainder of this post, summarising some of Jantzen’s thoughts on it.

I’m hoping that this is of interest to people who are familiar with the idea of irreducible complexity as well as those who are not. If nothing else, the following analysis helps to clarify the structure of the argument from irreducible complexity, and to highlight some important conceptual issues when it comes to interpretation of natural phenomena.

1. The Argument Itself
Let’s start by clarifying the structure of the argument. The basic idea is that certain natural phenomena, specifically features of biological organisms, display a property that cannot be accounted for by mainstream evolutionary theory. In Behe’s case the relevant property is that of irreducible complexity. But what is this property? To answer that, we’ll need to look at Behe’s definition of an ‘irreducibly complex system’:

Irreducibly complex system (ICS) = Any single system which is composed of several well-matched, interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to cease functioning. 
(Behe 1996, 39)

There is a problem with this definition, but we’ll get to that in the next section. For now, it would probably help us to wrap our heads around the notion if we had an example of an ICS. Behe’s favourite example is the bacterial flagellum. This is a thin, filament-like appendage that protrudes from the cell membrane of many species of bacteria. It is used to help propel the bacteria through liquid. When observed with the aid of a microscope, one of the remarkable features of a bacterial flagellum is that it functions like a rotary motor, where the flagellum is like a freely-rotating axle, supported by a complex assemblage of protein parts. Behe’s contention, and we can accept this, is that if you removed one component from the complex assemblage it would cease to function as a rotary motor.

A slightly more familiar example, and one also used by Behe, is a mousetrap (an old-fashioned, spring-loaded one). This is made up of fewer functional parts, but every one of them is essential if the mousetrap is going to perform its desired function of trapping — and unfortunately killing — mice and other small vermin. Thus, it is an ICS because if you remove one of the parts it ceases to function as intended.

Hopefully this suffices for understanding the property of irreducible complexity. What about the argument from irreducible complexity? That argument begins by identifying an ICS and then works like this:

  • (1) X is an irreducibly complex system. (For illustrative purposes say ‘X= bacterial flagellum’)

  • (2) If X is an irreducibly complex system, then X must have been brought about by intelligent design.

  • (3) Therefore, X (the bacterial flagellum) must have been intelligently designed.

Two important interpretive points about this argument. First, note that the use of the variable-term X is significant. While the bacterial flagellum is the most widely-discussed example, the idea behind the argument is that there are many such ICSs in nature and hence many things in need of an explanation in terms of intelligent agency. Second, note the conclusion. The claim is not that God must have created the bacterial flagellum but, rather, that an intelligent designer did. For tactical reasons, proponents of intelligent design liked to hide their religious motivations, trying to claim that their theory was scientific, not religious in nature. This was largely done in order to get around certain legal prohibitions on the teaching of religion under US constitutional law. I’m not too interested in that here though. I view the intelligent design movement as a religious one, and hence the arguments they proffer as on a par with pretty much all design arguments.

Now that we are clear on the structure of the argument, we can proceed to critically evaluate it. There are two major criticisms I want to discuss, both drawn from Jantzen’s book.

2. The First Criticism: Problems with the concept of irreducible complexity
The first criticism takes issue with the first premise. Appropriately enough. That premise claims that there are readily identifiable ICSs in the natural world. But is this true? Go back for a moment to Behe’s definition (given above). It defines an ICS in relation to a so-called ‘basic function’. The idea is that the basic function of the bacterial flagellum is to propel a bacterium through liquid. All the protein parts of the rotary-motor are directed towards the performance of that basic function, and this is what makes it right and proper to say that removal of one of those parts would cause the system to cease functioning. The same goes for the mousetrap. The basic function of the mousetrap is to capture and kill mice. All the parts of the system are geared toward that end.

That probably sounds fine, but there’s a subtle interpretive problem lurking in the background. It’s easy enough to say that the basic function of the mousetrap is to trap and kill mice. After all, we know the purpose for which it was designed. We know why all the parts are arranged in the format that they are. When it comes to natural objects, it’s a very different thing. Every object, organism, or event in the physical world causes many effects. A mouth is a useful food-grinding device, but it is also a breeding ground for bacteria, a signalling tool (e.g. smiles and smirks), a pleasure organ, and more. To say that one of these effects constitutes its ‘basic function’ is contentious. As Jantzen puts it:

Physical systems that were not crafted by human hands do not come with inscriptions telling us what they are for. 
(Jantzen 2014, 191)

We cannot read the basic function of an alleged ICS off the book of nature. We need interpretive principles. One such principle would be to appeal to the intentions of an intelligent designer. But proponents of intelligent design don’t like to do this because they try to remain agnostic about who the designer is. Furthermore, even if they admitted to being orthodox theists, there would be problems. The mind of God is a mysterious thing. Many a theologian has squandered a career trying to guess at His intentions. Some say we should not even try: God has beyond-our-ken reasons for action.

Another possibility is to try to naturalise the concept of a basic function. But this too poses a dilemma for the proponent of intelligent design. One popular way of naturalising basic functions is to appeal to the theory of evolution by natural selection — i.e. to argue that the basic function of a system is the one that was favoured by natural selection — but since the goal of intelligent design theorists is to undermine natural selection this solution is not available to them. The other way to do it is to define the basic function of a system in terms of the causal contribution that the system makes to some larger system. Thus, for instance, you can say that the basic function of the lens of the eye is to focus light rays because this contributes to the larger system that enables us to see.

The main problem with this second approach is that it simply pushes the problem back a further step. It defines the basic functionality of a sub-system by reference to the functionality of the larger system of which it is a part. But then the question becomes: what is the function of that larger system? It’s only once we have settled the answer to that question that we can figure out whether the sub-system is indeed an ICS, which lands us back with the original problem: that basic functions cannot simply be read off the book of nature.

To summarise:

  • (4) In order to successfully identify an ICS, you must be able to identify the basic function of the system in question.

  • (5) In order to determine the basic function of a system you must either: (a) appeal to the intentions of the designer of the system; (b) appeal to the purpose for which the system has been naturally selected; or (c) identify the causal contribution that the system makes to some super-system with function y.

  • (6) A proponent of intelligent design cannot appeal to the intentions of the designer, since they wish to remain agnostic about the intentions of the designer.

  • (7) A proponent of intelligent design cannot appeal to natural selection, since their goal is to deny its truth.

  • (8) Appealing to the causal contribution that the system makes to some super-system simply pushes the problem back a step.

This leads to the negation of premise (1), i.e. the claim that we have successfully identified an ICS.

This is a somewhat technical objection and it’s unlikely to have much intuitive appeal. It just seems too obvious to most people that the basic function of something like the bacterial flagellum is to propel a bacterium; that the basic function of the eye is to see; that the basic function of the teeth is to grind food; and so on. It’s only if you really interrogate our reasons for thinking that this is obvious that you begin to see the problem.

Fortunately, there are other ways to object to the argument.

3. Second criticism: The problem of evolutionary co-optation
The main criticism of the argument from irreducible complexity focuses on premise (2) of the argument. That premise claims that the only possible explanation for the existence of an ICS is that it was brought into being by an intelligent designer. But why think that? Aren’t there other plausible explanations for the existence of an ICS? Couldn’t natural selection do the trick?

  • (9) Natural selection can explain the existence of an ICS.

The proponent of intelligent design says ‘no’. They argue that natural selection — if they accept the idea at all — can only work in a gradual, step-wise fashion. This might enable the development of some systems that display interesting adaptations and functionality, but it can only work if every step in the chain has a function (i.e. contributes positively to the organism’s survival and reproduction). The problem is that an ICS cannot evolve in a gradual, step-wise fashion. Suppose you have forty different protein parts that need to be arranged in a very precise way in order for the bacterial flagellum to function as it does. It is beyond the bounds of credibility to believe that this could happen through random mutations in an organism’s genetic code. Too many things have to line up in a precise order for that to happen. It would be like having a forty-wheeled combination lock, randomly spinning each wheel, and then hoping to end up with the right sequence. You might get two or three in the right place, but not all forty. You need intelligent designers to bring about improbable (and functional) arrangements.

  • (10) Natural selection cannot explain the existence of an ICS because natural selection only accounts for gradual, step-wise changes. An ICS cannot emerge from gradual stepwise changes.

The evolutionist’s response to this is pretty straightforward: you’re thinking about it in the wrong way. It may well be true that the bacterial flagellum is, currently, irreducibly complex, such that if you altered or removed one part it would no longer function as a rotary motor. But that doesn’t mean that the parts that currently make up the flagellum couldn’t have had other functions over the course of evolutionary history, or couldn’t have contributed to other systems that are not irreducibly complex over that period of time. The flagellum is the system that has emerged at the end of an evolutionary sequence, but evolution did not have that system in mind when it started out. Evolution isn’t an intelligently directed process. Anything that works (that contributes to survival or reproduction) gets preserved and multiplied, and the bits and pieces that work can get merged into other systems that work. So one particular protein may have contributed to a system that helped an organism at one point in time, but then get co-opted into another, newer, system at a later point in time.

That’s the critical point. The history of evolution is a history of co-optation. Just like the mechanic who might take a part from an old car engine in order to make a new improved one, so too does evolution repurpose and reorganise parts into new, improved systems. This is effectively what other microbiologists have pointed out in response to Behe. They’ve noted that the proteins in the bacterial flagellum have other uses in other biological systems. Furthermore, many evolutionary texts are filled with examples of the co-optation process. Jantzen has a very nice example in his book about the evolution of flying insects. He highlights research showing how they evolved from sea-dwelling crustacean ancestors. In the process, the thoracic gill plates of the ancestors (whose original purpose was to facilitate oxygen respiration under water) were repurposed in order to enable the insects to push themselves along the surface of the water. They then evolved to enable the insects to ‘sail’ along the surface of the water, before finally (and I’m skipping several steps here) evolving into full-blown wings.

  • (11) Natural selection can explain the evolution of an ICS through the process of co-optation, i.e. through the fact that the component parts of biological systems often get repurposed and reorganised into new systems over the course of evolutionary history.
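The contrast between the combination-lock intuition and stepwise retention can be made vivid with a toy simulation. This is my own illustration, and it is looser than real co-optation: it assumes each ‘part’ can be kept once it contributes, which is precisely the possibility that co-optation restores and that the lock analogy quietly denies.

```python
import random

random.seed(1)
WHEELS, POSITIONS = 40, 10  # forty wheels, ten positions each
target = [random.randrange(POSITIONS) for _ in range(WHEELS)]

# One-shot assembly: all forty wheels must land right simultaneously.
p_one_shot = (1 / POSITIONS) ** WHEELS  # about 1e-40: effectively never

# Stepwise retention: a wheel that lands right is kept (as a co-opted
# part retains its contribution); only the wrong wheels keep spinning.
spins = 0
state = [None] * WHEELS
while state != target:
    for i in range(WHEELS):
        if state[i] != target[i]:
            state[i] = random.randrange(POSITIONS)
            spins += 1

print(p_one_shot)  # vanishingly small
print(spins)       # a few hundred spins, not 10**40
```

On average each wheel needs about ten spins, so the whole lock is solved in a few hundred attempts. The astronomical improbability only appears if every part must fall into place at once, with nothing preserved along the way.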

This might still leave a puzzle as to why natural selection has favoured the creation of ICSs. After all, ICSs are highly vulnerable to change: mess around with one component and the system ceases to function. Why wouldn’t there be some inbuilt redundancy of parts? There are many responses to this. It is quite possible that an organism (or, rather, species) could survive the loss of one ICS. There are, after all, many ways of making a living, as the diversity of life on earth proves. But also, vulnerable and fragile systems can emerge from less vulnerable ones. A.G. Cairns-Smith famously used the example of a stone arch to illustrate the point. An arch is irreducibly complex. Remove one stone and the whole thing collapses. But arches are built by having scaffolding in place during the construction process. It’s only once the keystone is in place that the scaffolding is removed and the system becomes more vulnerable to change. Many alleged ICSs could have emerged through an analogous process.

Okay so that’s it for this post. Hopefully this has effectively explained the concept of irreducible complexity and the two main criticisms of the argument. If you have read this far, I trust it has been of interest to you, even if it does retread old ground.