Science & morality
Can science help us understand moral questions? Can it help to tell us what we should and shouldn’t do?
‘We’re on the edge of understanding the scientific basis of morality and ethics at this point.’ Ard: ‘We are?’
Transcript
Ard: Let’s say I want you to work out the value of a human being. So what’s the value of each of us here?
PA: Well you could do that using economics, couldn’t you? If you wanted to, at the crude level.
Ard: I don’t mean just the monetary value.
PA: I’m asking you to try to clarify your question.
Ard: So I think when I say ‘value of a human being’, it’s something along the lines of do I think David has some kind of intrinsic value so that I shouldn’t kill him, for example.
PA: Well we’re on the edge of understanding the scientific basis of morality and ethics at this point.
Ard: We are?
PA: And moral principles, in my view, emerge from two sources. One is our ethological history, our evolutionary history: we have learned, if you like, to contribute to stable societies through particular patterns of behaviour, and those have now so pervaded our behaviour that we regard them as our moral fibre.
And the second one is that we, with our big brains, can reflect, in quiet moments at least, on the consequences of our actions. So we’re not just automata in terms of our evolutionary history: we are reflective human beings. I’m not going to kill you because you might have similar views about me, so let’s compromise and not kill each other.
Ard: But why is that a scientific explanation?
PA: Well it’s a way of looking for the roots of morality. And if the roots of morality are ultimately the stability of societies, then you have to explore using the scientific method – whatever that quite means – but in terms of evidence, looking at genetics, looking at histories – what ultimately leads to stable societies.
Ard: And that will give us a science of ethics?
PA: Yes.
David: So for you that’s going to be in the genes, ultimately, isn’t it?
PA: Yes, ultimately in the genes, in the sense that we, with particular types of genes for not killing each other...
David: So you think a morality, a scientific morality, is ultimately going to have to look to biology, to evolutionary theories, genetic theories?
PA: Oh, absolutely, yes.
David: It’s going to have to be built from what we know about altruism and the genetics of altruism.
PA: Yes. And morality, if you like, is the ultimate emergent property of the gene.
Ard: So isn’t there a worry there? I have this sense that I don’t want to kill David because I feel it would be a bad thing to do. But once I realise that it’s just my genes or my history telling me that, there’s nothing more to it than that.
PA: But there is more than that, because you know that he might be thinking the same: to kill you.
Ard: That’s right.
PA: Society is a network of compromises, and we know that if we don’t go around randomly killing, then we’re more likely to survive.
Ard: That’s true. So as long as David doesn’t randomly kill, and you don’t randomly kill, I’m fine. But I can do whatever I like.
David: It sounds to me like civilisation is a very large Mexican stand-off!
PA: Well I’m afraid that’s largely true.
Ard: You were saying that goodness is linked to fitness in evolution, but in evolution things do annihilate one another, so…
PA: Because sometimes they…
Ard: Is that good?
PA: They only have an immediate view of their fitness.
Ard: Okay, so the goodness is much more complicated than evolution?
PA: Yes, and probably more far-sighted than even we are. I mean, we are the most far-sighted of all the creatures that there are, but whether we’re far-sighted enough, who knows?
Ard: But the science of good versus evil will come out of our understanding evolution?
PA: That’s a very deep question I think, because, in a sense, the bigger our brain, the more able it is for us to transcend physical evolution.
Ard: Sure.
PA: We can look to the future and see the consequences of sacrifice now. Well, in principle.
David: It sounds like, for you, what’s good and what’s bad is a human construction. We’ll agree what’s good and what’s bad. Is that right?
PA: And it changes.
David: And it changes. Whereas for you, Ard, some things just have to be good.
Ard: Yeah, I think some things…
David: Transcendently good? Have I used that word?
PA: Like what?
Ard: Like cruelty, I think, is always wrong: generosity is good, whether we agree on it or not. There are societies that think that cruelty is… They advocate cruelty towards certain other groups.
PA: Yes.
Ard: I think they’re wrong. And I think they’re wrong regardless of what…
PA: So inhibition of the aspirations of others is bad? Is wrong?
Ard: I think that, for example…
PA: Whereas encouragement of the aspirations of anyone is good? Even Hitler?
Ard: No, that’s not what I’m saying. I’m saying that cruelty towards others can be bad, irrespective of whether the society thinks it is or it isn’t a good thing. So a classic example would be slavery, which is a kind of cruelty towards others. I would say that’s wrong, regardless of whether society itself thinks it’s a good thing.
PA: It depends what you mean by slavery, doesn’t it? We’re all slaves, in a certain sense.
Ard: Yeah.
PA: We are all employed, so we are slaves under the masters: our paymasters.
‘I think it’s really interesting to think about how brain chemistry can influence our moral values. It’s certainly evidence that they’re not set in stone…’
Transcript
Ard: And so, I’m just thinking about myself in my currently sleep-deprived state. My judgement is probably a little impaired, and maybe my sentiments are impaired, but also my ability to make decisions is a little impaired, so I’m more likely to make poor decisions because I just don’t take the time to think about them.
MC: I think most people probably think that their moral sentiments, their moral values, are set in stone: that they’re very difficult to change. They feel central to who we are. But in our work we’ve shown that we can actually shift around people’s moral decisions and their moral judgements by giving them a pill, like an antidepressant drug, or a drug that boosts dopamine in the brain, and this, I think, is evidence that our values really do shift around to the extent that our brain chemistry is shifting around.
We’ve shown, for example, that a single dose of an antidepressant drug nearly doubles the amount of money people are willing to pay to avoid shocking someone else. And it makes people less willing to say it’s morally acceptable to harm one person in order to save many others.
So I think it’s really interesting to think about how brain chemistry can influence our moral values. It’s certainly evidence that they’re not set in stone, which I think is really encouraging, because it suggests that these intractable conflicts that involve a disagreement in moral values could potentially be resolved.
And it’s interesting to think about whether, one day, there actually might be medications or pills that could actually change people’s moral behaviour. I don’t think we’re there yet. I think it’s a long way before we would have the technology to target moral sentiments in this specific way, and one reason for that is it’s so difficult to define what is morality, which I’m sure you guys are very sympathetic with. But the fact that we’ve shown different chemicals in the brain do influence moral decisions is preliminary evidence that this could, maybe, be feasible.
David: And worrying.
MC: And worrying. But I think there’s no need to worry about an off-the-shelf morality pill, because that could never exist. I think morality is way too complex to be delivered in pill form.
‘It couldn’t be that an empirical investigation of the structure of our bodies or brains could reveal that there is no such thing as an authoritative moral requirement.’
Transcript
Ard: On this topic of morals, what do you think is at stake if someone were to say, ‘Well, you know, I’m just going to let go of this idea that there is any kind of real normativity to morals, that they are really out there. They’re just something that we’ve created and, finally, science has told us that that’s all that they are’? Some people say, ‘Well, who cares?’ Is that going to matter? Or is something at stake?
JC: Well, science, I think, could never tell us that that’s all that they are. It couldn’t be that an empirical investigation of the structure of our bodies or brains could reveal that there is no such thing as an authoritative moral requirement.
Ard: Well, there are people who claim that it does, but you just don’t think that’s right? On what basis? Why do you think that’s not right?
JC: I think you could certainly say that science exhausts all the reality that there is. And so anything not explicable in terms of the procedures of science just has no status. But that would be scientism, not science.
David: Yeah, what is scientism?
JC: I think we all have, I certainly have, an enormous respect and admiration for science. I think it’s one of the greatest of human achievements. But scientism is very different: that is the non-scientific dogma or doctrine that science exhausts all the reality that there is. And that could not be established scientifically, so it’s not a scientific claim.
David: I’m not quite sure. What does that mean? ‘It exhausts all the reality there is’.
JC: One way of putting it would be that everything, ultimately, is grounded in some ultimate physical entities, particles or forces, and that there are no truths which aren’t, in principle, explicable ultimately in those terms.
But I think, what we’ve just been talking about – namely normativity, authority, the idea of a requirement which is incumbent on me however I happen to feel, however my drives or inclinations push me – that’s something which it’s very hard to see as explicable in those terms.
David: Yes.
Ard: So, are you saying that, partially, the argument that science can explain everything is itself a non-scientific argument because it stands outside of science in order to make that claim?
JC: Yes.
Ard: So maybe a last question along these lines. Some scientists will say the beauty of science, or of mathematics even, is that science has a method by which we all end up agreeing. That’s not true of moral truths: we don’t have a science of morality that allows us to adjudicate in some kind of clear way – it’s a lot more fuzzy. So, given that it’s more fuzzy, it’s therefore not nearly as good a method as the scientific method is, because the scientific method allows us to agree.
JC: Yes, it’s interesting that. I mean, the great moral philosopher Bernard Williams, who died not that long ago, made a similar point when he said that in science, there’s always hope for convergence, but he saw no prospects for such convergence in ethics. I think in fact there is increasing convergence.
Ard: Okay.
JC: Things which were matters of debate 200 years ago – the rightness or wrongness of slavery – are no longer serious matters of debate. And similarly, there are ways of dealing with people who don’t share your particular preferences, say, sexual preferences or political preferences. There seems to be a convergence towards a less authoritarian… Of course, there are no guarantees in this, but I think those who take a religious view of the cosmos as a whole must believe that there is a right answer which, in principle, ought to be converged upon sooner or later, though no one can predict how long the search will take.
‘There are many attempts nowadays to derive morality from science. They always introduce by the back door some concept of the good life which they take for granted without discussion.’
TranscriptArd: People are often nervous about the idea there is moral truth out there.
GE: Yes.
Ard: And they’re nervous because they worry that you’ll use that to hit them over the head.
GE: What?
Ard: You’ll use that to whack them.
David: And say it must be like this.
Ard: It must be like this. And others say, ‘Well, you know, our morals can be explained on evolutionary grounds; evolution gives us morals.’ What do you make of that?
GE: There are many attempts nowadays to derive morality from science. Some are from evolutionary grounds, some are from neuroscience, and so on. They always introduce, by the back door, some concept of the good life which they take for granted without discussion, and they assume that’s the right thing. They then go on to talk about morality arising out of evolution. But the fact that people have behaved in a certain way does not mean it’s a good way to behave, yet a lot of people say that because people did that, it’s good. People try to say that because it promotes people living together, it’s good. Well, who said that people living together is good? It may be good for survival, but that’s not the same as good in an ethical sense.
Now, for instance, a book called The Moral Molecule is based on the idea that a good life is a happy life. Well, if a good life is a happy life, we can solve it by giving everybody drugs and we’ll all feel happy. And then is that a moral life? Of course it isn’t.
Ard: Can science explain morality? It’s very natural. You look around you – you see technology, how it’s changed our lives, medicine, how it’s made our lives better, healthier, in really dramatic ways, and that has come through the power of science.
GE: Yes.
Ard: And so it’s very natural to think we’re going to use those same methods and solve the perennial problems of what is the good life.
GE: Well, there are two very different questions here. Will science solve the problem of what a good life is? Or will it help you to live a good life once you’ve decided what it is?
Ard: Exactly.
GE: Okay, science can do the second, to some extent, although, of course, technology is only a fraction of the solution: a huge part of living the good life is to do with psychology, sociology, philosophy, ethics, and so on.
Can science tell you what a good life is? My answer is an unequivocal, no. There’s no chance science can tell you what a good life is, because there’s no scientific experiment for what is good and what is bad.
And as I’ve said already, whenever people claim they’ve got an explanation from evolutionary theory, or genetics, or neurobiology, they always import, behind the scenes, a concept of what the good life is, and they don’t tell you they’re doing it. They take it for granted, and you have to learn to challenge them when they say this is what will make things good: ‘Well, how do you know it’s good?’
And they will keep on coming back to you with some assumption about what is good and what is bad, and that is what science cannot do.
Morality is a completely different dimension. Science can explain what I would call the lower reaches of morality. It can explain certain behaviours which tend to enable societies to live together. To call that good or bad, it’s simply... it’s the wrong dimension. It’s not a moral dimension.
‘One of my favourite quotes is from Blaise Pascal, who says, “Human beings are the glory and the scum of the universe.”’
Transcript
David: When I was at university in the ’80s, moral sentiment was… All the talk was about altruism and that we find it very difficult. We might be altruistic to someone we’re related to, but that’s about it.
MC: I think what’s the most fascinating aspect of human nature, to me, is the fact that we harbour these benevolent sentiments at the same time as being quite selfish and even malevolent in some situations.
One of my favourite quotes is from Blaise Pascal, who says, ‘Human beings are the glory and the scum of the universe,’ which is just so evocative, right?
And it’s absolutely true: we care a lot about fairness, for example. We’re really motivated to achieve fair outcomes and will even incur personal costs to ensure that outcomes are fair. And this has been shown many times in the lab, and I’m sure you talked a lot about this with Martin [Nowak].
We like to cooperate. We like to cooperate for the sake of cooperating. We like to do good for the sake of doing good. And that’s why you can make people more generous just by reminding them about moral norms. You can also make people more generous by reminding them about their reputation.
So not only do we care about doing good, we know that other people care about that, and so then that gives us a selfish reason to do good. And one of the great debates in the altruism and cooperation literature over the past – well, as long as we’ve been studying it, really – is this question of does ‘true’ altruism exist? Are people willing to sacrifice themselves to help someone else, even maybe a stranger, for the sake of that other person? Or does it all come down to selfish value? And I don’t think that question has been fully resolved, yet. But there are certainly hints of evidence that we do genuinely value the welfare of others for its own sake, and not for the sake of what it can bring to us.
David: I’m amazed to hear you say, ‘We haven’t answered that question fully.’ I would have thought… When you say ‘we’, do you mean academics?
MC: Yes.
David: Because the rest of the world… There are several thousand years of clear evidence, surely. I mean, I don’t see that it’s a question. Yes, people do. They’re willing to be completely unselfish.
MC: Of course they are, behaviourally, but the unresolved question, in my mind, is, when I help someone, am I helping them because I truly care about them? Or am I helping them because it feels good to me?
Ard: Yeah. Scratch an altruist, watch a hypocrite bleed, right?
David: But when you say, ‘It feels good to me…’
MC: It feels good, yeah.
David: Would that…? I mean, if you’re standing by the side of the road and a child who you’ve never met trips and is going to be hit by a bus, and you reach in, and you risk life and limb, and you pull them back. You didn’t have time to think, ‘Now, am I related to them?’ And, ‘Are there a lot of people watching?’
MC: Of course.
David: ‘Will people clap?’ You just did it.
MC: Yeah.
David: Now, is that being hypocritical or selfish, or is that just doing it? Is that the moral sentiment just making you do something good?
MC: Yeah, it’s making you do something good.
David: So there’s no question then, surely?
MC: So again the debate is not about whether people do good.
David: No.
MC: Even really, really profound heroic acts, like risking your life to save a stranger, people do do this, of course. There’s no argument about that. The question in academic research – which might be the kind of question which only academics who think about this all the time care about – is the question of, what is the motivation? And I think your point is a good one, which is to say that maybe a lot of these more selfish kinds of motivations actually take some time to compute.
There’s research by David Rand, who went into the narrative accounts of people who won the Carnegie Medal for heroism (so these are people who have risked their lives to save a stranger). He analysed the narratives of these experiences and looked for language indicating whether people thought about it or whether they just did it impulsively, and overwhelmingly the evidence shows that people are not deliberating in these kinds of situations, which is, I think, pretty good evidence for a pure, altruistic motive. But it’s not the smoking gun. I think we would need to be very clever in order to find that smoking gun.
‘I don’t think that science is in a position to tell you what we ought and ought not do. It is in a position to tell you why we’ve done it.’
Transcript
Ard: What does science tell us about questions like euthanasia or abortion or war, for example?
AR: So science can answer a lot of empirical, factual questions about these matters, okay? But what science can also do is explain why the debates about these issues between two people who fundamentally disagree are intractable: why it’s a mistake to look for resolution of these disputes, why those who hold one side or the other aren’t either morally right or morally wrong, and why the search for some more fundamental basis on which we could absolutely adjudicate these questions is a mistake.
Ard: So what should we do then? Given that, say, David and I really disagree about something…
AR: I think it’s an important factor for moral toleration of these disputes. And, at least in some cases, we can come to understand why people have held them over time, why cultures have held to very radically incompatible mores and norms, and even identify how the environmental circumstances in which these mores emerged have changed in a way to make them no longer ones we ought to support.
Foot-binding is a nice example. And a lot of the disputes that we have, cross-culturally, about differences in moral norms are to be unravelled and understood in the way that we now understand foot-binding: as a practice which, at its start, was adaptive for individuals and, by the end, was maladaptive for everybody.
Ard: And so your argument is to say we shouldn’t do foot-binding anymore because it’s not adaptive, or should we…?
AR: No. I don’t think that it is in a position to tell you what we ought and ought not do: it is in a position to tell you why we’ve done it and what the consequences of continuing or failing to do it are, okay? But it can’t adjudicate ultimate questions of value, because those are expressions of people’s emotions and, dare I say, tastes. And we understand now what the basis of those differences are from what we understand from neuroscience, or at least we’re beginning. I mean, when I say what neuroscience or cognitive neuroscience can teach us, I’m talking about what my projections and hopes are for the future of a science which has only just begun.
Ard: I think what we both agree on is that science does not answer questions like, ‘What is the value of a human being?’ I think what Alex would say is that the question, ‘What is the value of a human being?’ is thus in fact not a very well-posed question. Whereas I would say…
AR: To some extent these questions are ill-posed. To some extent they are pseudo questions, and to some extent they can be answered by science. That’s what I hold, and there’s no residue left after those three categories are exhausted.
‘Our neurobiology is not a moral or immoral system: it’s an amoral system. I don’t think a neurobiologist would have said, “Look, I must have a brain that loves. I must have a brain that hates.” I think it’s just, “I must have a brain that survives.”’
Transcript
David: What about the good? People sometimes feel that the good is also beautiful: that good things are beautiful and bad things tend to be ugly. Is there a…?
SZ: It is important to emphasise a finding which I think we left out of the discussion: that moral beauty also correlates with activity in the medial orbitofrontal cortex.
David: Does it? Really?
SZ: This is not our finding, it’s a finding from somebody in Japan. But in the sense of moral beauty, what I mean…
David: It lights up that same bit as mathematical beauty and…?
SZ: Yes, yes, yes.
Ard: But what is moral beauty?
SZ: Well, for example, if I put you in a situation where you’re very hungry and I can give you a very, very nice-looking steak, but you can give up that steak, and remain hungry, and give it to a child who is poor and hungry, then in the first case you have satisfied yourself. You had your reward and pleasure. In the second case, you have satisfied your moral sense, and in such conditions the activity in the medial orbitofrontal cortex goes up. Sorry, I should have said that before. So there is a connection in terms of brain activity in terms of moral beauty, and visual beauty, and musical beauty, and mathematical beauty.
Ard: That’s fascinating.
SZ: And the experience of someone beautiful as being somebody morally good probably reflects that.
Ard: So we get fooled sometimes by our brains, looking at somebody beautiful and thinking…?
SZ: You get fooled, that’s right.
Ard: But on the other hand, you get fooled because there is something to it. A true moral act, like sharing your food when you’re hungry, actually lights up the same part of the brain.
SZ: Yes, yes, yes, yes, yes.
Ard: So, I’ve seen you write that the fact that we exploit people is part of our neurobiology.
SZ: Yes, yes, yes.
Ard: Would you say that’s true?
SZ: Well, you see, I think in terms of that, I’ve got great difficulty in terms of discussing these issues with people who have got no interest in neurobiology, because they think that hate is evil – it’s bad – and love is good. I don’t think that’s the way it works. I think hate and love are part of the makeup of the brain. Hate and love have both served their function in achieving great things, and also in destroying great things. So, to me, it is an amoral system. It’s not a moral or immoral system, it’s an amoral system.
Ard: Our brains, you mean? Our neurobiology?
SZ: Yes, our neurobiology. I don’t think a neurobiologist would have said, ‘Look, I must have a brain that loves. I must have a brain that hates.’ I think it’s just, ‘I must have a brain that survives, and I must have a brain that achieves.’
David: Well, love and hatred are a bit like light and dark. You can’t have the one without the other?
SZ: Yes, they are.
David: No matter how lovely the light might be, if there was no dark, you wouldn’t see the light.
SZ: Yes, it’s part of our repertoire.
Ard: But I think what you’re saying is that these neurobiological states are neither good nor evil, in and of themselves.
SZ: Yes.
Ard: The category of good and evil is something outside of those neurobiological states. It’s a different category.
SZ: Yes, I mean, I would say that good and evil, and the urge to destroy, and the urge to love, and the urge to compassion, all have strong survival values.
Ard: But that doesn’t make them right or wrong?
SZ: In terms of neurobiology, it doesn’t make them right or wrong. It’s just these are states, and this…
Ard: But just in terms of us as human beings, right or wrong is a different category?
SZ: I don’t know. I mean, what is right and what’s wrong? I think the killing of millions of people throughout the ages has been tolerated and accepted, and indeed welcomed, rapturously. So, at that time, presumably people did not think of it as wrong.
Ard: Would you say that they were mistaken?
SZ: Well, who am I to say whether they were mistaken or not? At the time they did it, they did it without qualms.
Ard: Yeah, that’s true.
SZ: Equally, people have shown great compassion and sometimes have not asked themselves whether showing great compassion was necessarily a good thing, but they’ve done it. So I think that these are biological things. I’m not sure that they are written out there.
David: When I heard you talking about the brain trying to stabilise reality, like keep the leaf green, what jumped to mind was this tendency that people want to say that there are moral absolutes, moral reality – this is always good and that’s always bad – and I just wondered whether it’s ridiculous to wonder if that tendency derives from that strategy the brain has of trying to just keep things stable.
SZ: I think so, and there’s another similarity there, such as you find with illusions, such as you find with the contradiction between the laws of gravity and the laws of quantum mechanics. You accept them both; you accept they’re both valid in their own right: they don’t clash. So you would classify some people as good, some people as bad, some people as moral, some as immoral. You stabilise the world for yourself in this way. It’s a very easy pigeon-holing classification.
David: So it might not be that you can say this is morally, absolutely good and this is morally bad, but that whole drive to make a moral world in some ways looks, to me, like it leaps off from that very ancient thing that the brain does. It wants to say, ‘I must be able to categorise things as good or bad.’
SZ: Yes. Yes, yes, yes, yes, yes, absolutely. I think it’s part of the imperative working of the brain to stabilise things or to categorise things, and if they come into conflict, you just put them into separate categories.
Ard: I think that the point is that you stabilise the colour green because that makes it much easier for you to understand what’s really there. So it may be that you stabilise…
David: …morals because they’re really there. You have an answer for everything.
Ard: No, no, no, no, not because it helps you. Even though it may not always be perfect, it helps you see more clearly what’s really there. It has a use. It has a…
SZ: But another word for stabilising would be categorising…
David: And making sense of.
‘I would never trust scientists to tell me what is moral or immoral because they’re a bit like philosophers: they’re sort of narrow in how they look at things.’
Transcript
David: If you say that the natural part of us is going to be aggressive and selfish and bad, in some way, then you’re either left… you’ve got to say, well, where does the good part come from? And it seems to me in religion we say, well, it comes from God, and then if you’re not religious, you say, well, it comes from rationality. So it seems to me that rationality steps in for the atheist where God used to be.
FdW: That’s what happened during the Renaissance: the philosophers did that. They said, well, religion, let’s move that to the side, and we philosophers, we will propose rationality as an explanation of human morality. More recently there have been proposals, from Sam Harris and people like that, that science is going to solve the moral issue: science is going to tell us what is moral and immoral.
Ard: And what do you think about those kinds of proposals, because they sound attractive… science…
FdW: I would never trust scientists to tell me what is moral or immoral, because they’re a bit like philosophers: they’re sort of narrow in how they look at things. And if you look narrowly enough – take, for example, the utilitarian view, which is very popular amongst philosophers, that you do the greatest good for the greatest number of people – if you follow that rule, I could give a very good scientific explanation of why slavery would be beneficial: slavery is actually, rationally, a very good system. What’s wrong with slavery? We could have that argument and I might win it. You know, I might say slavery is good, even though we now recognise that…
David: Well, the utilitarian would say, look, if we have to enslave a few people to benefit a larger number of people, then that’s the greatest good for the greatest number of people, which is in fact the argument that was made.