Sunday, October 22, 2017

Freedom and the Unravelling Problem in Quantified Work


A Machinist at the Tabor Company where Frederick Taylor (founder of 'scientific management') consulted.


[This is a text version of a short talk I delivered at a conference on ’Quantified Work’. It was hosted by Dr Phoebe Moore at Middlesex University on the 13th October 2017 and was based around her book ‘The Quantified Self in Precarity’.]

Surveillance has always been a feature of the industrial workplace. With the rise of industrialism came the rise of scientific management. Managers of manufacturing plants came to view the production process as a machine, not just as something that involved the use of machines. The human workers were simply parts of that machine. Careful study of the organisation and distribution of the machine parts could enable a more efficient production process. To this end, early pioneers in scientific management (such as Frederick Taylor and Lillian and Frank Gilbreth) invented novel methods for surveilling how their workers spent their time.

Nowadays, the scale and specificity of our surveillance techniques have changed. Our digitised workplaces enable far more information to be collected about our movements and behaviour, particularly when wearable smart-tech is factored into the mix. The management philosophy underlying the workplace has also changed. Where Taylor and the Gilbreths saw the goal of scientific management as creating a more consistent and efficient machine, we now embrace a workplace philosophy in which the ability to rapidly adapt to a changing world is paramount (the so-called ‘agile’ workplace). Acceleration and disruption are now the aim of the game. Workers must be equipped with the tools to enable them to navigate an uncertain world. What’s more, work now never ends — it follows us home on our laptops and phones — and we are constantly pressured to be available to work, while maintaining overall health and well-being. Employers are attuned to this and have instituted various corporate wellness programmes aimed at enhancing employee health and well-being, while raising productivity. The temptation to use ‘quantified self’ technology to track and nudge employee behaviour is, thus, increasing.

These are the themes addressed in Phoebe’s book, and I think they prompt the following question, one that I will seek to answer in this talk:

Question: Does the rise of ‘quantified self’ surveillance threaten our freedom in some new or unique way?

In other words, do these new forms of workplace surveillance constitute something genuinely new or unprecedented in the world of work, or are they really just more of the same? I consider two answers to that question.


Answer 1: No, because work always, necessarily, undermines our freedom
The first answer is the sceptical one. The notion that work and freedom are mutually inconsistent is a long-standing one in left-wing circles. Slavery is the epitome of unfreedom. Work, it is sometimes claimed, is a form of ‘waged’ or ‘economic’ slavery. You are not technically owned by your employer (after all, you could be self-employed, as many of us now are in the ‘gig’ economy) but you are effectively compelled to work out of economic necessity. Even in countries with generous social welfare provision, access to this provision is usually tied to the ability and willingness to work. There is, consequently, no way to escape the world of work.

I’ve covered arguments of this sort previously on my blog. My favourite comes from the work of Julia Maskivker. The essence of her argument is this:

(1) A phenomenon undermines our freedom if: (a) it limits our ability to choose how to make use of our time; (b) it limits our ability to be the authors of our own lives; and/or (c) it involves exploitative/coercive offers.
(2) Work, in modern society, (a) limits our ability to choose how to make use of our time; (b) limits our ability to be the authors of our own lives; and (c) involves an exploitative/coercive offer.
(3) Therefore, work undermines our freedom.

Now, I’m not going to defend this argument here. I did that on a previous occasion. Suffice to say, I find the premises in it plausible, with something reasonable to be said in defence of each. I’m not defending it because my present goal is not to consider whether work does, in fact, always undermine our freedom, but, rather, to consider what the consequences of accepting this view are for the debate about quantified work practices.

You could argue that if you accept it, then there is nothing really interesting to be said about the freedom-affecting potential of quantified work. If work always undermines our freedom, then quantified work practices are just more in a long line of freedom-undermining practices. They do not threaten something new or unique.

I am sympathetic to this claim but I want to resist it. I want to argue that even if you think freedom is necessarily undermined by work, there is the possibility of something new and different being threatened by quantified work practices. This is for three reasons. First, even if the traditional employer-employee relationship undermines freedom, there is usually some possibility of escape from that freedom-undermining characteristic in the shape of downtime or leisure time. Quantified work might pose a unique threat if it encourages and facilitates more surveillance in that downtime. Second, quantified work might threaten something new if its utility is largely self-directed, rather than other-directed. In other words, if it is imposed from the bottom up, by workers themselves, and not from the top down, by employers. Finally, quantified work might threaten something new simply due to the scale and ubiquity of the available surveillance technology.

As it happens, I think there are some reasons to think that each of these three things might be true.


Answer 2: Yes, due to the unravelling problem
The second answer maintains that there is something new and different in the modern world of quantified work. Specifically, it claims that quantified work practices pose a unique threat to our freedom because they hasten the transition to a signalling economy, which in turn leads to the unravelling problem. I take this argument from the work of Scott Peppet.

A ‘signalling’ economy is to be differentiated from a ‘sorting’ economy. The difference has to do with how information is acquired by different economic actors. Information is important when making decisions about what to buy and who to employ. If you are buying a used car, you want to know whether or not it is a ‘lemon’. If you are buying health insurance, the insurer will want to know if you have any pre-existing conditions. If you are looking for a job, your prospective employer will want to know whether you have the capacity to do it well. Accurate, high-quality information enables more rational planning, although it sometimes comes at the expense of those whose informational disclosures rule them out of the market for certain goods and services. In a ‘sorting’ economy, the burden is on the employer to screen potential employees for the information they deem relevant to the job. In a ‘signalling’ economy, the burden is on the employee to signal accurate information to the employer.

With the decline in long-term employment, and the corresponding rise in short-term, contract-based work, there has been a remarkable shift away from a sorting economy to a signalling economy. We are now encouraged to voluntarily disclose information to our employers in order to demonstrate our employability. Doing so is attractive because it might yield better working conditions or pay. The problem is that what initially appears to be a voluntary set of disclosures ends up being a forced/compelled disclosure. This is due to the unravelling problem.

The problem is best explained by way of an example. Imagine you have a bunch of people selling crates of oranges on the export market. The crates carry a maximum of 100 oranges, but they are carefully sealed so that a purchaser cannot see how many oranges are inside. What’s more, the purchaser doesn’t want to open the crate prior to transport because doing so would cause the oranges to go bad. But, of course, the purchaser can easily verify the total number of oranges in the crate after transport by simply opening it and counting them. Now suppose you are one of the people selling the crates of oranges. Will you disclose to the purchaser the total number of oranges in the crate? You might think that you shouldn’t because, if you are selling fewer than the others, disclosure would put you at a disadvantage on the market. But a little bit of game theory tells us that we should expect the sellers to disclose the number of oranges in the crates. Why so? Well, if you had 100 oranges in your crate, you would be incentivised to disclose this to any potential purchaser. Doing so makes you an attractive seller. Correspondingly, if you had 99 oranges in your crate, and all the sellers with 100 oranges have disclosed this to the purchasers, you should also disclose this information. If you don’t, there is a danger that a potential purchaser will lump you in with anyone selling 0-98 oranges. In other words, because those with the maximum number of oranges in their crates are sharing this information, purchasers will tend to assume the worst about anyone not sharing the number of oranges in their crate. But once you have disclosed the fact that you have 99 oranges in your crate, the same logic will apply to the person with 98 oranges, and so on all the way down to the seller with 1 orange in their crate.
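To see how quickly the disclosure cascade runs to the bottom, here is a minimal sketch in Python (my own illustration, not something from Peppet's paper) in which buyers value any silent seller at the average of the crate counts that could still be hidden, and a seller discloses as soon as their true count beats that pessimistic estimate:

```python
# A toy model of informational unravelling over crates holding 1..100 oranges.

def unravel(crate_counts):
    silent = set(crate_counts)      # sellers (identified by crate count) who have not disclosed
    disclosed = set()
    while silent:
        assumed = sum(silent) / len(silent)           # buyers' estimate for any non-discloser
        movers = {c for c in silent if c > assumed}   # sellers who now gain by disclosing
        if not movers:                                # nobody left with an incentive to move
            break
        disclosed |= movers
        silent -= movers
    return disclosed, silent

disclosed, silent = unravel(range(1, 101))
print(f"{len(disclosed)} of 100 sellers disclose; only {sorted(silent)} stay silent")
# -> 99 of 100 sellers disclose; only the seller with a single orange stays 'silent',
#    and by that point silence reveals exactly what they have anyway.
```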

This is informational unravelling in practice. The seller with only 1 orange in their crate would much rather not disclose this fact to the purchasers, but they are ultimately compelled to do so by the incentives in operation on the market. The claim I am making here — and that Peppet makes in his paper — is that unravelling is also likely to happen on the employment market. The more valuable information we have about ourselves, the more we are incentivised to disclose this to our employers in order to maintain our employability. Those with the best information will do so voluntarily and willingly, but ultimately everybody will be forced to do so in an effort to differentiate themselves from other, potentially ‘inferior’, employees.

This could have a pretty dramatic effect on our freedom. If quantified self technologies enable more and more valuable information to be tracked and disclosed, there will be more and more unravelling, which will in turn lead to more and more forced disclosures. This could result in something quite different from the old world of workplace surveillance, partly because it is being driven from the bottom up, i.e. workers do it themselves in order to secure some perceived advantage. There are laws in place that prevent employers from seeking certain information about their employees (e.g. information about health conditions) but those laws usually only cover cases where the employer demands the information. Where the information is being supplied, seemingly willingly, by masses of gig workers looking to increase their employability, the situation is rather different. This could be compounded by the fact that the types of information that are desirable in the new, agile, workplace will go beyond simple productivity metrics into information about general health and well-being. New and more robust legal protections may be required to redress this problem of seemingly voluntary disclosure.

I’ll close on a more positive note. Even though I think the unravelling problem is worth taking seriously, the argument I have presented is premised on the assumption that the information derived from quantified self technologies is in fact valuable. This may not be the case. It may turn out that accurately signalling something like the number of hours you slept last night, the number of calories you consumed yesterday, or the number of steps you have taken, is not particularly useful to employers. In that case, the scale of the unravelling problem might be mitigated. But we should still be cautious. There is a distinction to be drawn between information that is genuinely valuable (i.e. has some positive link to economic productivity) and information that is simply perceived to be valuable (i.e. thought to be of value by potential employers). Unfortunately, the latter is what really counts, not the former. I see this all the time in my own job. Universities are interested in lots of different metrics for gauging the success of their employees (papers published, number of citations, research funding received, number of social media engagements, number of paper downloads etc. etc.). Many of these metrics are of dubious value. But that doesn’t matter. They are perceived as having some value and so academic staff are encouraged to disclose more and more of them.





Saturday, October 14, 2017

Some things you wanted to know about robot sex* (but were afraid to ask)




BOOK LAUNCH - BUY NOW!

I am pleased to announce that Robot Sex: Social and Ethical Implications (MIT Press, 2017), edited by myself and Neil McArthur, is now available for purchase. You can buy the hardcopy/ebook via Amazon in the US. You can buy the ebook in the UK as well, but the hardcopy might take another few weeks to arrive. I've never sold anything before via this blog. That all changes today. Now that I actually have something to sell, I'm going to turn into the most annoying, desperate, cringeworthy and slightly pathetic salesman you could possibly imagine...

...Hopefully not. But I would really appreciate it if people could either (a) purchase a copy of the book and/or (b) recommend it to others and/or (c) review it and generally spread the word. Academic books are often outrageously expensive, but this one lies at the more reasonable end of the spectrum ($40 in the US and £32 in the UK). I appreciate it is still expensive though. To whet your appetite, here's a short article I put together with Neil McArthur that covers some of the themes in the book.

----------------------------------------------------------------

Sex robots are coming. Basic models exist today and as robotics technologies advance in general, we can expect to see similar advances in sex robotics in particular.

None of this should be surprising. Technology and sex have always gone hand-in-hand. But this latest development in the technology of sex seems to arouse considerable public interest and concern. Many people have questions that they want answered, and as the editors of a new academic book on the topic, we are willing to oblige. We present here, for your delectation, *some* of the things you might have wanted to know about robot sex, but were afraid to ask.


1. What is a sex robot?
A ‘robot’ is an embodied artificial agent. A sex robot is a robot that is designed or used for the purpose of sexual stimulation. One of us (Danaher) has argued that sex robots will have three additional properties: (a) human-like appearance, (b) human-like movement and behaviour, and (c) some artificial intelligence. Each of these properties comes in degrees. The current crop of sex robots, such as the Harmony model developed by Abyss Creations, possess them to a limited extent. Future sex robots will be more sophisticated. You could dispute this proposed definition, particularly its fixation on human-likeness, but we suggest that it captures the kind of technology that people are interested in when they talk about ‘sex robots’.


2. Can you really have sex with a robot?
In a recent skit, the comedian Richard Herring suggested that the use of sex robots would be nothing more than an elaborate form of masturbation. This is not an uncommon view and it raises the perennial question: what does it mean to ‘have sex’? Historically, humans have adopted anatomically precise definitions of sexual practice: two persons cannot be said to have ‘had sex’ with one another until one of them has inserted his penis into the other’s vagina. Nowadays we have moved away from this heteronormative, anatomically-obsessive definition, not least because it doesn’t capture what same-sex couples mean when they use the expression ‘have sex’. In their contribution to our book, Mark Migotti and Nicole Wyatt favour a definition that centres on ‘shared sexual agency’: two beings can be said to ‘have sex’ with one another when they intentionally coordinate their actions to a sexual end. This means that we can only have sex with robots when they are capable of intentionally coordinating their actions with us. Until then it might really just be an elaborate form of masturbation -- emphasis on the 'elaborate'.


3. Can you love a robot?
Sex and love don’t have to go together, but they often do. Some people might be unsatisfied with a purely sexual relationship with a robot and want to develop a deeper attachment. Indeed, some people have already formed very close attachments to robots. Consider, for example, the elaborate funerals that US soldiers have performed for their fallen robot comrades. Or the marriages that some people claim to have with their sex dolls. But can these close attachments ever amount to ‘love’? Again, the answer to this is not straightforward. There are many different accounts of what it takes to enter into a loving relationship with another being. Romantic love is often assumed to require some degree of reciprocity and mutuality, i.e. it’s not enough for you to love the other person, they have to love you back. Furthermore, romantic love is often held to require free will or autonomy: it’s not enough for the other person to love you back, they have to freely choose you as their romantic partner. The big concern with robots is that they wouldn’t meet these mutuality and autonomy conditions, effectively being pre-programmed, unconscious sex slaves. It may be possible to overcome these barriers, but it would require significant advances in technology.


4. Should we use child sex robots to treat paedophilia?
Robot sex undoubtedly has its darker side. The darkest of all is the prospect of child sex robots that cater to those with paedophiliac tendencies. In July 2014, in a statement that he probably now regrets, the roboticist Ronald Arkin suggested that we could use child sexbots to treat paedophilia in the same way that methadone is used to treat heroin addiction. After all, if the sexbot is just an artificial entity (with no self-consciousness or awareness) then it cannot be harmed by anything that is done to it, and if used in the right clinical setting, this might provide a safe outlet for the expression of paedophiliac tendencies, and thereby reduce the harm done to real children. ‘Might’ does not imply ‘will’, however, and unless we have strong evidence for the therapeutic benefits of this approach, the philosopher Litska Strikwerda suggests that there is more to be said against the idea than in its favour. Allowing for such robots could seriously corrupt our sexual beliefs and practices, with no obvious benefits for children.


5. Will sex robots lead to the collapse of civilisation?
The TV series Futurama has a firm answer to this. In the season 3 episode, ‘I Dated a Robot’, we are told that entering into sexual relationships with robots will lead to the collapse of civilisation because everything we value in society — art, literature, music, science, sports and so on — is made possible by the desire for sex. If robots can give us ‘sex on demand’, this motivation will fade away. The Futurama-fear is definitely overstated. Unlike Freud, we doubt that the motivations for all that is good in the world ultimately reduce to the desire for sex. Nevertheless, there are legitimate concerns one can have about the development of sex robots, in particular the ‘mental model’ of sexual relationships that they represent and reinforce. Others have voiced these concerns, highlighting the inequality inherent in a sexual relationship with a robot and how that may spill over into our interactions with one another. At the same time, there are potential upsides to sex robots that are overlooked. One of us (McArthur) argues in the book that sex robots could distribute sexual experiences more widely and lead to more harmonious relationships by correcting for imbalances in sex drive between human partners. Similarly, our colleague Marina Adshade argues that sex robots could improve the institution of marriage by making it less about sex and more about love.

This is all speculative, of course. The technology is still in its infancy but the benefits and harms need to be thought through right now. We recommend viewing its future development as a social experiment, one that should be monitored and reviewed on an ongoing basis. If you want to learn more about the topic, you should of course buy the book.


~ Full Table of Contents ~



I. Introducing Robot Sex
1. 'Should we be thinking about robot sex?' by John Danaher 
2. 'On the very idea of sex with robots?' by Mark Migotti and Nicole Wyatt

II. Defending Robot Sex
3. 'The case for sex robots' by Neil McArthur 
4. 'Should we campaign against sex robots?' by John Danaher, Brian Earp and Anders Sandberg 
5. 'Sexual rights, disability and sex robots' by Ezio di Nucci

III. Challenging Robot Sex
6. 'Religious perspectives on sex with robots' by Noreen Hertzfeld 
7. 'The Symbolic-Consequences argument in the sex robot debate' by John Danaher 
8. 'Legal and moral implications of child sex robots' by Litska Strikwerda

IV. The Robot's Perspective
9. 'Is it good for them? Ethical concern for the sexbots' by Steve Petersen 
10. 'Was it good for you too? New natural law theory and the paradox of sex robots' by Joshua Goldstein

V. The Possibility of Robot Love
11. 'Automatic sweethearts for transhumanists' by Michael Hauskeller
12. 'From sex robots to love robots: Is mutual love with a robot possible' by Sven Nyholm and Lily Eva Frank

VI. The Future of Robot Sex
13. 'Intimacy, Bonding, and Sex Robots: Examining Empirical Results and Exploring Ethical Ramifications' by Matthias Scheutz and Thomas Arnold
14. 'Deus sex machina: Loving robot sex workers and the allure of an insincere kiss' by Julie Carpenter
15. 'Sex robot induced social change: An economic perspective' by Marina Adshade









Sunday, October 1, 2017

Episode #30 - Bartholomew on Adcreep and the Case Against Modern Marketing


In this episode I am joined by Mark Bartholomew. Mark is a Professor at the University of Buffalo School of Law. He writes and teaches in the areas of intellectual property and law and technology, with an emphasis on copyright, trademarks, advertising regulation, and online privacy. His book Adcreep: The Case Against Modern Marketing was recently published by Stanford University Press. We talk about the main ideas and arguments from this book.

You can download the episode here or listen below. You can also subscribe on iTunes and Stitcher (RSS is here).


Show Notes

  • 0:00 - Introduction
  • 0:55 - The crisis of attention
  • 2:05 - Two types of Adcreep
  • 3:33 - The history of advertising and its regulation
  • 9:26 - Does the history tell a clear story?
  • 12:16 - Differences between Europe and the US
  • 13:48 - How public and private spaces have been colonised by marketing
  • 16:58 - The internet as an advertising medium
  • 19:30 - Why have we tolerated Adcreep?
  • 25:32 - The corrupting effect of Adcreep on politics
  • 32:10 - Does advertising shape our identity?
  • 36:39 - Is advertising's effect on identity worse than that of other external forces?
  • 40:31 - The modern technology of advertising
  • 45:44 - A digital panopticon that hides in plain sight
  • 48:22 - Neuromarketing: hype or reality?
  • 55:26 - Are we now selling ourselves all the time?
  • 1:04:52 - What can we do to redress adcreep?
 

Relevant Links


   

Thursday, September 28, 2017

Algorithmic Governance: Developing a Research Agenda Through Collective Intelligence




I have a new paper, just published, on the topic of algorithmic governance. This one is a bit different from my usual fare. It's a report from a 'collective intelligence' workshop that I ran with my colleague Michael Hogan from the psychology department at NUI Galway. It tries to develop a research agenda for the study of algorithmic governance by harnessing the insights from an interdisciplinary group of scholars. It's available in open access format at the journal Big Data and Society. Just click on the paper title below to read the full thing.

Title: Algorithmic Governance: Developing a research agenda through collective intelligence
Journal: Big Data and Society
Authors: John Danaher, Michael J Hogan, Chris Noone, Rónán Kennedy, Anthony Behan, Aisling De Paor, Heike Felzmann, Muki Haklay, Su-Ming Khoo, John Morison, Maria Helen Murphy, Niall O’Brolchain, Burkhard Schafer and Kalpana Shankar
Abstract: We are living in an algorithmic age where mathematics and computer science are coming together in powerful new ways to influence, shape and guide our behaviour and the governance of our societies. As these algorithmic governance structures proliferate, it is vital that we ensure their effectiveness and legitimacy. That is, we need to ensure that they are an effective means for achieving a legitimate policy goal that are also procedurally fair, open and unbiased. But how can we ensure that algorithmic governance structures are both? This article shares the results of a collective intelligence workshop that addressed exactly this question. The workshop brought together a multidisciplinary group of scholars to consider (a) barriers to legitimate and effective algorithmic governance and (b) the research methods needed to address the nature and impact of specific barriers. An interactive management workshop technique was used to harness the collective intelligence of this multidisciplinary group. This method enabled participants to produce a framework and research agenda for those who are concerned about algorithmic governance. We outline this research agenda below, providing a detailed map of key research themes, questions and methods that our workshop felt ought to be pursued. This builds upon existing work on research agendas for critical algorithm studies in a unique way through the method of collective intelligence.





Tuesday, September 26, 2017

How to Build a Rawlsian Algorithm for Self-Driving Cars


Google's Self-Driving Car - via Becky Stern on Flickr

Swerve or slow down? That is the question. The question that haunts designers of self-driving cars. The dilemma will be familiar to students of moral philosophy. Suppose an autonomous car is driving down an urban street. You are the passenger. Suddenly, from behind a parked car, a group of pedestrians stumbles out into the middle of the road. If the car brakes and continues on its current course, it will not slow down in time to avoid colliding with the group. If it does collide with them, it will almost certainly kill them all. If the car swerves, it will collide with a solid wall, almost certainly killing you. What should it be programmed to do?

Some philosophical thought experiments are completely fanciful — the infamous trolley problems upon which this story is based are good examples of this. This particular philosophical thought experiment is not. It’s likely that self-driving cars will encounter some variant of the swerve or slow down dilemma in the course of their operation. After all, human drivers encounter these dilemmas on a not infrequent basis. And no matter how safe and risk averse the cars are programmed to be, they will have unplanned encounters with reckless pedestrians.

So what should be done? The answer is that the car should be programmed to follow some sort of moral algorithm — a rule (or set of rules) that tells it how to behave in these scenarios. One possibility is that it should be programmed to follow an act-utilitarian algorithm: the probability of death for you and the pedestrians should be calculated (the car could be fed the latest statistics about deaths in these kinds of scenarios and update accordingly), and it should pick the option that maximises the overall survival rate. Alternatively, the car could be programmed to follow a ‘heroic self-sacrifice’ algorithm, i.e. whenever it encounters a scenario like this, it should sacrifice the car and its passenger, not the pedestrians. Either way, the same outcome is likely: the car should swerve, not slow down. More selfish algorithms are possible too. Maybe the car should follow a ‘Randian algorithm’ whereby the interests of the driver trump the interests of the pedestrians?

In this post, I want to look at yet another possible algorithm that could be adopted in the swerve or slow down dilemma: a Rawlsian one. This is a proposal that has recently been put forward by Derek Leben, a philosopher at the University of Pittsburgh. I find the proposal fascinating because not only is it philosophically interesting, it also sounds eminently feasible. At the same time, it forces us to confront some uncomfortable truths about ethics in the age of autonomous vehicles.

I’ll break the discussion down into three parts. First, I’ll briefly explain Rawlsianism - the philosophy that inspires the algorithm. Second, I’ll outline how a Rawlsian algorithm would work in practice. And third, I’ll address some of the objections one could have to the use of the Rawlsian algorithm. This post is very much a summary of Leben’s article, which I encourage everyone to read. It’s one of the more interesting pieces of applied philosophy that I have read in recent times.


1. What is a Rawlsian Algorithm?
Leben’s proposal is obviously based on the work of John Rawls, who was the most influential political philosopher of the 20th century. Rawls’s most famous work was the 1971 classic A Theory of Justice in which he outlined his vision for a just society. We don’t need to get too mired in the details of the theory for the purposes of understanding Leben’s proposal; a few choice elements are all we need.

First, we need to appreciate that Rawls’s theory is a form of liberal contractarianism. That is to say, Rawls works from the basic liberal assumption that people are moral equals (i.e. no one person has the right to claim moral authority over another without certain legitimacy conditions being met). This moral assumption creates problems because we often need to exercise some coercive control over one another’s behaviour in order to secure mutually beneficial outcomes.

This problem is easily highlighted by thinking about some of the classic ‘games’ that are used to explain the issues that arise when two or more people must cooperate for mutual gain. The Prisoners’ Dilemma is the most famous of these. The set-up will be familiar to many readers (if you know it, skip this paragraph and the next). Two prisoners are arrested for the same crime and put in separate jail cells. The police are convinced that they have enough evidence to charge them with an offence that attracts a two-year sentence; however, they would like to charge at least one of them with a more severe offence that attracts a ten-year sentence. To enable this, the police offer each prisoner the same deal. If one of them ‘squeals’ on their partner and the partner remains silent, they can get off free and the partner will be charged with the ten-year offence. If they both squeal on each other, they both get charged with a five-year offence. If they both remain silent, they will be charged with the two-year offence. If you were one of these prisoners, what would you do? Before answering that, take a look at the payoff matrix for the game, which is illustrated below. The strategies of ‘squealing’ and ‘staying silent’ have been renamed ‘defect’ and ‘cooperate’ in keeping with the convention in the literature.



So now that you have looked at the payoff matrix, return to the question: what should you do? If we follow strict principles of rationality, the answer is that you should squeal on your partner. In the language of game theory, doing so ‘strictly dominates’ staying silent: it always yields a higher payoff (or, in this instance, a lesser sentence), no matter what your opponent does. The difficulty with this analysis, though, is that it yields an outcome that is clearly worse for both prisoners than remaining silent. They both end up in jail for five years when they could have got away with a two-year sentence. In technical terms, we say that the ‘rational’ choice in the game yields a ‘Pareto inefficient’ outcome. There is another combination of choices in the game that would make every player better off without a loss to anyone else (an outcome that is ‘Pareto optimal’).
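For anyone who wants to check the dominance claim by hand, here is a small Python sketch using the sentences from the story (lower is better):

```python
# Prisoners' Dilemma sentences (in years) from the example above.
# Key: (my choice, partner's choice) -> my sentence.
SENTENCE = {
    ("cooperate", "cooperate"): 2,    # we both stay silent
    ("cooperate", "defect"):    10,   # I stay silent, my partner squeals
    ("defect",    "cooperate"): 0,    # I squeal, my partner stays silent
    ("defect",    "defect"):    5,    # we both squeal
}

def strictly_dominates(a, b):
    """True if choice a gives me a strictly lower sentence than choice b,
    whatever my partner does."""
    return all(SENTENCE[(a, other)] < SENTENCE[(b, other)]
               for other in ("cooperate", "defect"))

print(strictly_dominates("defect", "cooperate"))   # True: squealing strictly dominates
# Yet mutual defection (5, 5) is Pareto inefficient next to mutual silence (2, 2).
```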

The Prisoners’ Dilemma is just a story, but the interaction it describes is supposed to be a common one. Indeed, one of Rawls’s key contentions — and if you don’t believe me read his lecture notes on political philosophy — is that coming up with a way to solve Prisoners’ Dilemma scenarios is central to liberal political theory. Somehow, we have to move society out of the Pareto inefficient outcomes and into the Pareto optimal ones. The obvious way to do this is to establish a state with a monopoly on the use of violence. The state can then threaten people with worse outcomes if they fail to cooperate. But coercive threats don’t sit easily with the liberal conscience. There has to be some morally defensible reason for allowing the state to exercise its muscle in this way.

That’s where the Rawlsian algorithm comes in. Like other liberal theorists, Rawls argued that the authority we grant the state has to be such that reasonable people would agree to it. Imagine that everyone is getting together to negotiate the ‘contract of state’. What terms and conditions would they agree to? One difficulty we have in answering this question is that we are biased by our existing predicament. Some of us are well-off under the current system and will, no doubt, favour its terms and conditions. Others are less well-off and will favour renegotiation. To figure out what reasonable people would really agree to, we need to rid ourselves of these biases. Rawls recommended that we do this by imagining that we are negotiating the contract of state from behind a ‘veil of ignorance’. This veil hides our current predicament from us. As a result, we don’t know where we will end up after the contract has been agreed. We might be among the better off; but then again we might not.

Rawls’s key claim then is that if we were negotiating from behind the veil of ignorance, we would adopt the following decision rule:

Maximin decision rule: Favour those terms and conditions (policies, rules, procedures, etc.) that maximise the benefits to the worst-off members of society.

Or, to put it another way, favour the distribution of the benefits and burdens of social living that ‘raises the floor’ to its highest possible level.

This maximin decision rule is in effect a ‘Rawlsian’ algorithm. How could it be implemented in a self-driving car?


2. The Rawlsian Algorithm in Practice
To implement a Rawlsian algorithm in practice, you need to define three variables:

Players: First, you need to define the ‘players’ in the game in which you are interested. In the case of the swerve or slow down ‘game’ the players are the passenger(s) (i.e. you - P1) and the pedestrians. For ease of analysis, let’s say there are four of them (P2... P5).

Actions: Second, you need to define the actions that can be taken in the game. In our case, the actions are the decisions that can be made by the car’s program. There are two actions in this game: slow down (A1) and swerve (A2).

Utility Functions: Third, you’ll need to define the utility functions for the players in the game, i.e. the payoffs they receive for each action taken. In our case, the payoffs can be recorded as the probability of survival for each of the players. This will be a number between 0 and 1. Presumably, actual tables of data could be assembled for this purpose based on records of past accidents of this sort, but let’s say for our purposes that if the car slows down and collides with the four pedestrians, it lowers their probability of survival from 0.99 to 0.05. And if it swerves, it lowers your probability of survival from 0.99 to 0.01. (Just note that this means we are assuming that the pedestrians have a slightly higher probability of survival from collision in this scenario than the passenger does)

This gives us the following payoff matrix for the game:
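Spelled out from the figures above (each entry is that player's probability of survival), the matrix is roughly:

                     P1 (passenger)    P2-P5 (each pedestrian)
    A1: slow down    0.99              0.05
    A2: swerve       0.01              0.99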



With this information in place, we can easily program the car to follow a maximin decision procedure. Remember, the goal of this procedure is to ‘raise the floor’ to its highest possible level. This can be done by following three simple steps:

Step One: Identify the worst payoffs for each possible action and compile them into a set. In our case, the worst payoff for A1 is the 0.05 probability of survival and the worst payoff for A2 is the 0.01 probability of survival. This gives us the following set of worst outcomes: (0.05, 0.01).

Step Two: From this set of worst outcomes, identify the best possible outcome and the actions that yield it. Call this outcome a. In our case, outcome a is the 0.05 probability of survival and the action that yields it is A1.

Step Three: If there is only one action that yields a, implement this action. If there is more than one action that yields a, then you need to ‘mask’ for a (i.e. eliminate the outcomes equal to a from the analysis) and repeat steps one and two again (i.e. maximise for the second-worst outcome). You repeat this process until either (i) you identify a unique action that yields an outcome a, or (ii) only one outcome is left in the game and there is a tie between two or more actions that yield it, in which case you randomise between those actions (because the choice between them doesn’t matter from the maximin perspective).

In the case of the swerve or slow down dilemma, the algorithm is very simply applied. Following step two there will be only one action (A1) that yields the least-bad outcome in the game (the 0.05 probability of survival). This is the action that will be selected by the car. This means the car will slow down rather than swerve. This is in keeping with Rawls’s maximin procedure since it raises the worst possible outcome from a 0.01 probability of survival to a 0.05 probability of survival. This is, admittedly, somewhat counterintuitive because it means that more people are likely to die, but we return to this point below.
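As a sanity check on those three steps, here is a minimal Python sketch (my own reconstruction, not Leben's code) of the maximin procedure with the masking and randomisation options, fed the survival probabilities used above:

```python
import random

# Survival probabilities per action for each player (P1 = passenger, P2-P5 = pedestrians).
# These are the illustrative figures from the text, not real crash statistics.
PAYOFFS = {
    "A1: slow down": [0.99, 0.05, 0.05, 0.05, 0.05],
    "A2: swerve":    [0.01, 0.99, 0.99, 0.99, 0.99],
}

def maximin_choice(payoffs):
    """Pick the action whose worst outcome is best; on ties, mask the compared
    outcome and move to the next-worst, randomising only if nothing separates
    the remaining actions (reading the mask-and-repeat steps as a leximin rule)."""
    candidates = {a: sorted(v) for a, v in payoffs.items()}   # each action's payoffs, worst first
    n = len(next(iter(candidates.values())))
    for depth in range(n):
        best_floor = max(v[depth] for v in candidates.values())                      # steps one and two
        candidates = {a: v for a, v in candidates.items() if v[depth] == best_floor}  # keep the best floors
        if len(candidates) == 1:
            return next(iter(candidates))                                            # unique action found
    return random.choice(list(candidates))                                           # full tie: randomise

print(maximin_choice(PAYOFFS))   # -> "A1: slow down" (a floor of 0.05 beats a floor of 0.01)
```

Swapping the 0.05 and 0.01 figures between passenger and pedestrians flips the output to swerving, which anticipates the sensitivity point discussed below.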

The Rawlsian algorithm is illustrated below.

The Rawlsian Algorithm - diagram from Leben 2017


Two points should be noted before we proceed. First, two aspects of this decision-procedure are not found in Rawls’s original writing: (i) the ‘masking’ procedure and (ii) the randomisation option. These are modifications introduced by Leben, but they make a lot of sense and I would not be inclined to challenge them (I’ve long been a fan of randomisation options in the design of moral algorithms). Second, the maximin procedure can yield significantly different outcomes if you modify the probabilities of survival ever so slightly. For example, if you reversed the probabilities of survival so that the pedestrians had the 0.01 probability of survival following collision and the driver had the 0.05, the maximin procedure would favour swerving over slowing down. This is despite the fact that the utilitarian choice in both cases is the same.


3. Objections to the Rawlsian Algorithm
One thing I like about Leben’s proposal is that it is eminently practicable. Sometimes discussions about moral algorithms are fantastical because they demand information that we simply do not have and could not hope to have. That doesn’t seem to be true here. We could assign reasonable figures to the probability of survival in this scenario that could be quickly calculated and updated by the car’s onboard computer. Furthermore, I like how it puts another option on the table when it comes to the design of moral algorithms. To date, much of the discussion has focused on standard act-utilitarian versus deontological algorithms. This is largely due to the fact that the discussion has been framed in terms of trolley problem dilemmas, which were first invented to test our intuitions with respect to those moral theories.

That said, there are some obvious concerns. One could reject Rawls’s views and so reject any algorithm based on them, but as Leben notes, his job is not to defend Rawlsianism as a whole. That’s too large a task. Other concerns can be tied more specifically to the application of Rawlsianism to the swerve or slow down scenario. Leben discusses three in his article.

The first is that the utility functions are incomplete. The survival probabilities are just one factor among many that we should be considering. Some people would claim that not all lives are equal. Some people are young; some are old. The young have more of their lives left to live. Perhaps they should be favoured in any crash algorithm over the old? In other words, perhaps there should be some weighting for ‘life years’ included in the Rawlsian calculation. Leben points out that, if you wanted to, you could include this information. The QALY (quality adjusted life years) measure is in widespread use in healthcare contexts and could inform the car’s decision-making. It might be a little bit more difficult to implement this in practice. The car would have to be given access to everybody’s QALY score and this would have to be communicated to the car prior to its decision. This is not impossible — given ongoing developments in smart tech and mass surveillance, people could be forced to carry devices on their person that communicated this information to the car — but allowing for it would have other significant social costs that should be borne in mind.

The second is that applying the Rawlsian algorithm might create a perverse incentive. Remember, the maximin decision procedure tries to avoid the worst possible outcomes. This means, bizarrely, that it actually pays to be the person with the highest probability of death in a swerve or slow down dilemma. We see this clearly above: the mere fact that the passenger had a higher probability of death from collision with the wall was enough to save his/her skin, despite the fact that doing so would raise the probability of death for more people. This might give people a perverse incentive not to take precautions to protect themselves from harm on the roads. But this incentive is probably overstated. Even though the kinds of dilemmas covered by the algorithm are not implausible, they are still going to be relatively rare. The benefits of taking precautions in all other contexts are likely to outweigh the costs of doing so when you land yourself in a swerve or slow-down type scenario.

The third and final concern is simply the one I noted above: that the maximin procedure yields a very counterintuitive result in the example given. It says the car should collide with the pedestrians even though this means that more people are likely to die. This is pretty close to a typical utilitarianism vs Rawlsianism concern and so brings us into bigger issues in moral philosophy. But Leben says a couple of sensible things about this. One is that how counterintuitive this is will depend on how much Rawlsian Kool Aid we have imbibed. Rawls argued that we should think about social rules from behind a ‘thick’ veil of ignorance, i.e. a veil that masks us from pretty much everything we know about our current selves, leaving us with just our basic rational and cognitive faculties. If we really didn’t know who we might end up being in the swerve or slow down dilemma, we might be inclined to favour the maximin rule. The other point, which is probably more important, is that every moral rule that is consistently followed yields counterintuitive results. So if we’re after totally intuitive results when it comes to designing self-driving cars, we are probably on a fool’s errand. Still, as I discussed previously when looking at the work of Hin Yan Liu, the fact that self-driving cars might follow moral rules more consistently than humans ever could, might tell against them for other reasons.

Anyway, that's it for this post.




Friday, September 22, 2017

Episode #29 - Moore on the Quantified Worker




In this episode, I talk to Phoebe Moore. Phoebe is a researcher and a Senior Lecturer in International Relations at Middlesex University. She teaches International Relations and International Political Economy and has published several books, articles and reports about labour struggle, industrial relations and the impact of technology on workers' everyday lives. Her current research, funded by a BA/Leverhulme award, focuses on the use of self-tracking devices in companies. She is the author of a book on this topic entitled The Quantified Self in Precarity: Work, Technology and What Counts, which has just been published. We talk about the quantified self movement, the history of workplace surveillance, and a study that Phoebe did on tracking in a Dutch company.


You can download the episode here, or listen below. You can also subscribe on iTunes and Stitcher.


Show Notes

  • 0:00 - Introduction
  • 1:27 - Origins and Ethos of the Quantified Self Movement
  • 7:39 - Does self-tracking promote or alleviate anxiety?
  • 10:10 - The importance of gamification
  • 13:09 - The history of workplace surveillance (Taylor and the Gilbreths)
  • 16:27 - How is workplace quantification different now?
  • 20:26 - The Agility Agenda: Workplace surveillance in an age of precarity
  • 29:09 - Tracking affective/emotional labour
  • 34:08 - Getting the opportunity to study the quantified worker in the field
  • 38:18 - Can such workplace self-tracking exercises ever be truly voluntary?
  • 41:05 - What were the key findings of the study?
  • 46:07 - Why was there such a high drop-out rate?
  • 49:37 - Did workplace tracking lead to increased competitiveness?
  • 53:32 - Should we welcome or resist the quantified worker phenomenon?

Relevant Links




Wednesday, September 20, 2017

The Moral Problem of Accelerating Change





Holmes knew that killing people was wrong, but he faced a dilemma. Holmes was a member of the crew onboard the ship The William Brown, which sailed from Liverpool to New York in early April 1842. During its Atlantic crossing, The William Brown ran into trouble. In a tragedy that would repeat itself 70 years later during the fateful first voyage of The Titanic, the ship struck an iceberg off the coast of Canada. The crew and half the passengers managed to escape to a lifeboat. Once there, tragedy struck again. The lifeboat was too laden with people and started to sink. Something had to be done.

The captain made a decision. The crew would have to throw some passengers overboard, leaving them to perish in the icy waters but raising the boat in the water. It was the only way anyone was going to get out alive. Holmes followed these orders and was complicit in the deaths of 14 people. But the remaining passengers were saved. Holmes and his fellow crew were their saviours. Without doing what they did, everyone would have died. For his troubles, Holmes was eventually prosecuted for murder, but the jury refused to convict him of murder, convicting him instead of manslaughter, and Holmes served only six months in jail.

I discuss this case every year with students. Most of them share the jurors’ sense that although Holmes intentionally killed people, he didn’t deserve much blame for his actions. In that context, most of us would have been hard pressed to act differently. Indeed, many of my students think he should avoid all punishment for his actions.

Holmes’s story illustrates an important point: morality is contextual. What we ought to do is dependent on what is happening around us. Sometimes our duties and obligations can change. You probably don’t think about this phenomenon too much, taking it as a natural and obvious feature of the moral universe, but the contextual nature of morality poses a challenge during times of accelerating technological change.

That’s one of the central ideas motivating Shannon Vallor’s recent book Technology and the Virtues. I’m still working my way through it (I’ve read approximately 65 pages at the time of writing) but it is provoking many thoughts in my mind and I feel I have to get some of them down on the page. This post is my first attempt to do so, examining one of the key arguments developed by Vallor over the opening chapters of the book.

That argument comes in two parts. The first part claims that there is a particularly acute and important moral problem facing us in the modern age. Vallor calls this the problem of ‘acute technosocial opacity’; I’m going to give it a slightly different name: the moral problem of accelerating change. The second part argues for a solution to this problem: developing a technology-sensitive virtue ethics. I’m going to analyse and evaluate both parts of the argument in what follows.

Before I get into the details, a word of warning. What I am about to say is highly provisional. As noted, I’m still reading Vallor’s book. I am very conscious of the fact that the problems I raise with certain aspects of her argument might be addressed later in the book. So take what I am about to say with a hefty grain of salt.


1. The Moral Problem of Accelerating Change
We are living through a time of accelerating technological change. This is one of the central theses of futurists like Ray Kurzweil. In his infamous 2005 book The Singularity is Near, Kurzweil maps out the exponential improvements in various technologies, including computing speed, size and density of transistors, data storage and so on. Some of these improvements are definitely real: Moore’s law — the observation that the number of transistors that can fit on an integrated circuit doubles every two or so years — is the most famous example. But Kurzweil and his fellow futurists take the idea much further, arguing that converging trends in artificial intelligence, biotech, and nanotech hold truly revolutionary potential for human society. Kurzweil believes that we are heading towards a ‘singularity’ where humans and machines will merge together and we will suffuse the cosmos with our intelligence. Others are less optimistic, thinking that the singularity holds much darker promises.

You don’t have to be a fully signed-up Kurzweilian to believe that there is something to the notion of accelerating change. We all have a sense that things are changing pretty quickly. Jobs that were once stable and dependable sources of income have been automated or eliminated. Digital and smart technologies that were non-existent ten years ago are embedding themselves in our daily lives, turning us all into screen-obsessed zombies. This is to say nothing of the advances in other technologies, such as AI, 3-D printing and brain-computer interfaces. You might think that we can handle all this change — that although things are moving quickly they are not moving so quickly that we cannot keep up. But this assessment might be premature. One of the key insights of Kurzweil’s work — one that has been taken on board by others — is that accelerating change has a way of sneaking up on us. A doubling of computer speed year-on-year is not that spectacular for the first few years, particularly if you start from a low baseline, but after ten or twenty years the changes become truly astronomical. It’s like that old puzzle about the lily pad that doubles in size every day. If it covers half the pond on day 47, when does it cover the entire pond? Answer: on day 48. One more day is enough to completely wipe out the pond.

Accelerating change poses a significant moral challenge. We all seek moral guidance — even the committed moral relativists among us try to figure out what they ought to do. But as noted in the introduction, moral guidance is often contextual. It depends, critically, on two variables: (i) what is happening in the world around us and (ii) what is within our power to control. Once upon a time, no one would have said that you had a moral obligation to vaccinate your children. It wasn’t within your power to do so. But with the invention of vaccines for the leading childhood illnesses, as well as the copious volumes of evidence in support of their safety and efficacy, what was once unimaginable has become something close to a moral duty. Some people still resist vaccinations, of course, but they do so knowing that they are taking a moral risk: that their decision could impose costs on their child and the children of others. Consequently, there is a moral dimension to their choice that would have been historically unfathomable.

Accelerating change ramps up the problem of moral contextuality. If our technological environment is rapidly changing, it’s hard to offer concrete guidance and moral education to people about what they ought to do. They may face challenges and have powers that are beyond our ability to predict. This is something that most historical schools of moral thought did not envisage. As Vallor notes:

The founders of the most enduring classical traditions of ethics — Plato, Aristotle, Aquinas, Confucius, the Buddha — had the luxury of assuming that the practical conditions under which they and their cohorts lived would be, if not wholly static, at least relatively stable…the safest bet for a moral sage of premodern times would be that he, his fellows, and their children would confront essentially similar moral opportunities and challenges over the course of their lives. 
(Vallor 2016, 6)

All of this suggests that the following argument is worthy of our consideration:


  • (1) In order to provide practical and useful moral guidance to ourselves and our cohorts, we must be able to predict and understand the moral context in which we will operate.
  • (2) Accelerating technological change makes it extremely difficult to predict and understand the moral context in which we and our cohorts will operate.
  • (3) Therefore, accelerating technological change impedes our ability to provide practical and useful moral guidance.


Support for premise (1) derives from the preceding discussion of moral contextuality. If what we ought to do depends on the context, we need to know something about that context in order to provide practical guidance. Support for premise (2) derives from the preceding discussion of accelerating change. Admittedly, I haven’t provided a robust case for accelerating change, but I would suggest that there is something to the idea that is worth taking seriously. I also think the argument as a whole is worthy of serious scrutiny. The question is whether there is any solution to the problem it identifies.


2. The Failures of Abstract Normative Ethics
One possible solution lies in abstract normative principles. Students of moral philosophy will no doubt be suspicious of premise (1). They will know that modern ethical theories — in particular the theories associated with Immanuel Kant and proponents of utilitarianism — offer a type of moral guidance that makes no appeal to the context in which a moral choice must be made.

Consider Kant’s famous categorical imperative. There are various formulations of it, but the most popular and widely discussed is the ‘universalisation’ formulation (note: this is my wording, not Kant’s):

Categorical Imperative: You ought to only act on a maxim of the will that you can, at the same time, will as a universal maxim.

In other words, whenever you are about to do something ask yourself: would it be acceptable for everyone else, in this circumstance, to act as I am about to act? Are my choices universalisable? If not, then you are making special exceptions for yourself and not acting in a moral way. Note how this principle is supposed to ‘float free’ of all contexts. It should work whatever fate may throw your way.

Consider also the basic principle of utilitarianism. Again, there are many formulations of utilitarianism, but they all involve something like this:


Utilitarian Principle: Act in a way that maximises the amount of pleasure (or some other property like ‘happiness’ or ‘desire satisfaction’) and minimises the amount of pain, for the greatest number of people.

This principle also floats free of context. No matter what circumstance you find yourself in, you should always aim to maximise pleasure and minimise pain.

Vallor finds both of these solutions to the problem of accelerating change lacking. The issue is essentially the same for both. Although they may seem to be context-free, abstract moral principles, translating them from their abstract form into practical guidance requires far greater knowledge of moral context than initially seems to be the case. To know whether the rule you wish to follow is truly universalisable, you have to be able to predict its consequences in multiple scenarios. But prediction of that sort is elusive in an era of rapid technological change. The same goes for figuring out how to maximise pleasure and minimise pain. This has been notoriously difficult for utilitarians given the complex causal relationships between acts and consequences. This was true even before the era of accelerating technological change. It will hardly be better in it.

For what it is worth, I think Vallor is correct in this assessment. Although abstract moral principles might seem like a solution to the problem of accelerating change, they falter in practice. That said, I think there is some value to the abstraction. Having a general rule of thumb that can apply to all contexts can be a useful starting point. We are always going to find ourselves in new situations and new contexts, irrespective of changes to our technologies. In those contexts we will have to work with the moral resources we have. I may walk into a new context and not know what choice is universalisable or likely to maximise pleasure, but I can at least know what sorts of evidence I should seek out to inform my choice.


3. The Virtue Ethical Solution
Vallor favours a different solution to the problem of accelerating change. She argues that instead of finding solace in abstract moral principles, we should look to the great virtue ethical traditions of the past. These are the traditions associated with Aristotle, Confucius and the Buddha. These traditions emphasise moral character, not moral principles. The goal of moral education, according to these traditions, is to train people to develop virtuous character traits that will enable them to skilfully navigate the ethical challenges that life throws their way.

Why is this a compelling solution to the problem of accelerating change? An analogy might help. As a university lecturer in the 21st century, I am very aware of the challenge of educating students for the future. The common view of higher education is that it is about conveying information. A lecturer stands at a lectern and tries to transfer his/her notes into the minds of the students. The students learn specific propositions, theories and facts that they later regurgitate in exams and, if we are lucky, in their professional lives. The problem with this common view is that it seems ill-equipped to deal with the challenges of the modern world. The information that I have in my notes will soon be outdated. For example, if I am teaching students about the law, I have to be cognisant of the fact that the rules and cases that I am explaining to them today may be overturned or reformed in the future. When the students step out into the professional world, they will have to cope with these new laws — ones they haven’t learned about in the course of their education.

So education cannot simply be an information-dump. It wouldn’t be very useful if it were. This is why there is such an emphasis on ‘skills-based’ education in universities today. The goal of education should not be to get students to learn facts and propositions, but to develop skills that will enable them to handle new information and knowledge in the future. The skill of critical thinking is probably foremost amongst the skills that universities try to cultivate among their students. Most course descriptions nowadays suggest that critical thinking is a key learning objective of college education. As I understand it, this skill is supposed to enable students to critically assess and evaluate any kind of information, argument, theory or policy that might come their way. The successful critical thinker is, consequently, capable of facing the challenges of a changing world.

The goal of virtue ethics is much the same. Virtue ethical traditions try to cultivate moral skills among their adherents. The virtuous person doesn’t just learn a list of rules and regulations that they slavishly follow in all circumstances, they cultivate an ability to critically reflect upon new moral challenges and judge for themselves what the best moral solution might be. This may require casting off the principles that once seemed sensible. As Vallor puts it:

Moral expertise thus entails a kind of knowledge extending well beyond a cognitive grasp of rules and principles to include emotional and social intelligence: keen awareness of the motivations, feelings, beliefs, and desires of others; a sensitivity to the morally salient features of particular situations; and a creative knack for devising appropriate practical responses to those situations, especially where they involve novel or dynamically unstable circumstances. 
(Vallor 2016, 26)

The claim then is that cultivating moral expertise is the ideal way in which to provide moral guidance in an era of accelerating change:

[Ask yourself] which practical strategy is more likely to serve humans best in dealing with [the] unprecedented moral questions [raised by technological advances]: a stronger commitment to adhere strictly to fixed rules and moral principles (whether Kantian or utilitarian)? Or stronger and more widely cultivated habits of moral virtue, guided by excellence in practical and context-adaptive moral reasoning? 
(Vallor 2016, 27)

This is a direct challenge to premise (1) of the argument from accelerating change. The claim is that we do not need to know the particulars of every moral choice we might face in the future to provide moral guidance to ourselves and our cohorts. We just need to develop the context-adaptive skill of moral expertise.


4. Criticisms and Concerns
Of course, the devil is in the detail. Vallor’s book is an attempt to map out and defend exactly what this skill of moral expertise might look like in an era of accelerating technological change. As already noted, I haven’t read the whole book. Nevertheless, I have some initial concerns about the virtue ethical solution that I want to highlight. I know that Vallor is aware of most of these, so hopefully they will be addressed later on.

The first is the problem of parochiality. Prima facie, virtue ethics seems like an odd place to find solace in a time of technological change. The leading virtue ethical traditions are firmly grounded in the parochial concerns of now-dead civilisations: Ancient Greece (Aristotle), China (Confucius) and India (the Buddha). Indeed, Vallor herself acknowledges this, as is clear from the quote I provided earlier on about the luxury these iconic figures had in assuming that things would be roughly the same in the future.

Vallor tries to solve this problem in two ways. First, she tries to argue that there is a ‘thin’ core of shared commitments across all of the leading virtue ethical traditions. This core of commitments can be divorced, to some extent, from the parochial historical concerns of Ancient Greece, China and India. These commitments include: (i) a belief in flourishing as the highest ideal of human existence; (ii) a belief in virtues as character traits shared by certain exemplary figures; (iii) a belief that there is a practical path to the cultivation of moral expertise; and (iv) some conception of human nature that is relatively fixed and stable. Second, she tries to identify virtues that are particularly relevant to our era. She does this by adopting Alasdair MacIntyre’s theory of virtues, which argues that virtues are always tied to the inherent goods of particular social practices. She then tries to argue that there is a set of goods inherent to modern technosocial practice. These goods are mainly tied to our growing global interconnectedness, and the consequent need to cultivate global wisdom, community and justice.

Both of these attempts to overcome the problem of parochiality are interesting and worthy of greater consideration. I hope to examine them in more depth at a later stage. I want to focus, however, on one aspect of Vallor’s ‘thin’ theory of virtues because I think it reveals another important problem: the problem of human nature. As she notes, all virtue ethical theories share the idea that the goal of moral practice should be to promote human flourishing. They also share the belief that the path to this goal is determined by some conception of human nature. It is because there is a relatively stable and fixed human nature that we can meaningfully identify certain practices and traits as conducive to human flourishing. Vallor accepts that the ‘thick’ details of this theory will vary between the traditions, but also seems committed to the notion that there is some stable core to what is conducive to human flourishing. For example, when commenting on the need to develop social bonds and a sense of community, she says:

Humans in all times and places have needed cooperative social bonds of family, friendship, and community in order to flourish; this is simply a fact of our biological and environmental dependence. 
(Vallor 2016, 50)

This quote shows, I think, how the virtue ethical solution to the problem of accelerating change is to swap abstract and fixed principles for an abstract and fixed human nature. I think this is problematic.

I’m certainly not a denier of human nature. I think there probably are some stable and relatively fixed aspects of human nature, at least for humans as they are currently constituted. But that’s the crucial point. One of the biggest moral challenges posed by technological development is the fact that it is no longer just the environment around us that is changing. Technologies of human-machine integration or human enhancement threaten the historical stability of our ‘biological and environmental dependence’. Two potential technological developments seem to pose a particular challenge in this regard:

The Hyperagency Challenge: This arises from the creation of enhancement technologies that allow us to readily control and manipulate the constitutive aspects of our agency, i.e. our beliefs, desires, moods, motivations and dispositions. If all these things can be erased, changed, overridden, and altered, the idea that there is an internal, fixed aspect of our nature that can serve as a moral guide becomes more questionable. I’ve written two papers about this challenge in the past, so I won’t say any more about it here.

The Hivemind Challenge: This arises from the creation of technologies that blur the boundary between human and machine, and enable greater surveillance and interconnectedness of human beings. As I’ve noted in the past, such technologies could, in extreme forms, erode the existence of a stable, individual moral agent. Since most virtue ethical traditions (even the more communitarian ones) assume that the target of moral education is the individual agent, this challenge also calls into question the utility of virtue ethics as a guide to our changing times. Indeed, if we do become a global hivemind, the idea of ‘human’ nature would seem to go out the window.

I don’t know how seriously we should take these challenges. You could argue that the technologies that will make them possible are hypothetical and outlandish — that for the time being we will have a relatively stable nature that can serve as the basis for a technomoral virtue ethics. But if the relevant technologies could be realised, it might call into question the long-term sustainability of a virtue ethical solution to the problem of accelerating change.

The final problem I have is the problem of calibration. This is a more philosophical worry. It is a worry about the philosophical coherence of virtue ethics itself. The claim made by many virtue ethicists is that moral expertise is a skill that is cultivated through practice. The moral expert is someone who can learn from their experiences and the experiences of others, and use their judgment to hone their ability to ‘see’ what is morally required in new contexts. What has never been quite clear to me is how the moral expert is supposed to calibrate their moral sensibility. How do they know that they are honing their skill in the right direction? How can they meaningfully learn from their experiences without some standards against which to evaluate what they have done? I’m not exactly sure what the answer is, but it seems to me that it will require some appeal to abstract moral standards. The budding moral expert will have to assess their actions by appealing to standards such as the general preferability of pleasure over pain, the desirability of individual autonomy and control, and the typical superiority of impartial universal rules over partial and parochial ones. In sum, the contrast between virtue ethics and abstract moral principles and standards may not be so sharp in practice. We may need both if we are going to successfully navigate these changing times.