---
title: "Metaethics"
date: 2020-10-11T00:00:00
draft: false
---
I'm going to talk about metaethics using the three questions posed by Bernard Rosen and Richard Garner. The first part is moral semantics. Moral semantics asks how we should interpret moral language: words like good, evil, right, wrong, and ought. The next is moral ontology. Moral ontology asks about the nature of moral judgements. Are there many kinds of moral judgements? Are those judgements true for everyone or only for specific groups? Lastly, there is moral epistemology. It asks what we can know about morality and how we know it, irrespective of its nature. For instance, how can we justify our moral judgements to others? I'll start with moral semantics. Without defining the semantics, nothing else in this post would have any meaning.

# Moral Semantics
There are at least two desirable properties for our moral semantics:

1. It should allow us to convince rational agents of our moral judgements.
2. It should minimize the number of assumptions we have to make.

Theories such as emotivism[1] assert that moral sentences merely express emotions. When I say "Murder is wrong", I don't mean "I dislike murder". Neither does anyone I have ever met. We want more out of a moral theory than expressing emotions. We want to be able to convince others of our judgements. Emotivism isn't what we're after. What about universal prescriptivism[2]? It holds that moral judgements such as "Murder is wrong" should be interpreted as "Don't murder". But a bare command isn't necessarily convincing because it doesn't employ logical reasoning. It's unlikely to convince anyone who doesn't already believe they shouldn't be murdering. So again, it fails our first requirement of being able to convince others.
Let's move on to some other theories.

## Hume's Guillotine
Ethical naturalism[3] says that moral propositions are objective properties of the cosmos. This means we can look at features of reality and "see" what is right and wrong in the same way we can look into a microscope and deduce the germ theory of disease. This idea of moral semantics is self-evidently absurd. Making no extra assumptions, nothing about the way the world is tells us the way it should be. I cannot deduce "Murder is wrong" from empirical facts like "The sky is blue" or any other facts about the physical or metaphysical cosmos. This strict divide between facts and moral judgements is known as Hume's Guillotine[4].

Ethical non-naturalism[5] tries to bypass Hume's Guillotine by saying that moral judgements are irreducible. Nothing about the way the cosmos is tells us how it should be, but how things should be is an objective, irreducible (possibly intuitive) property of the cosmos itself. If someone asks me "Why shouldn't I murder?", the only correct response according to ethical non-naturalism is philosophical jargon like "It is an irreducible, intrinsic property of the universe that murder is wrong". If another ethical non-naturalist comes along saying murder is ethical, all I can do is repeat that my belief is an intrinsic property of the universe, so the other person must be mistaken. It would be like watching two presuppositionalists[6] argue in circles. I'd be comfortable going on record saying presuppositional apologetics has never convinced anyone who didn't already believe what they were presupposing. More likely, they already believed something and went looking for philosophical jargon to defend it. That's exactly what ethical non-naturalism does, and also why it's not convincing. Ethical non-naturalism fails both of our criteria: it is unconvincing to third parties and it requires making assumptions.
So far, we haven't had any luck finding a moral semantics that satisfies both our requirements. What about divine command theory[7]? According to it, god's moral judgements are correct. There is no evidence that a god or gods exist, but let's pretend for a moment that a god does exist and that god makes moral judgements. According to divine command theory, god's judgements are true. This raises the Euthyphro dilemma[8]: Are god's moral judgements true just because god declares them, or does god only declare moral judgements that are already true? If the former, then god could declare "Murder is perfectly morally okay" and it would be true because god said so, making morality arbitrary. If the latter, then god is just the messenger for moral judgements that are true independent of god's opinion, making god redundant. Ideal observer theory[9] suffers from the same dilemma. Even if we ignore all of that, both theories still fail our second criterion: they assume that god's or the ideal observer's judgements are true. We want to avoid making strong assumptions, so these theories don't meet our criteria either.

## Moral Progress
I want to define "moral progress" before I continue. Moral progress means just what it sounds like: it is possible to go from a less ethical society or individual to a more ethical one. Certain moral theories don't allow for this. Error theory[10] says that all moral claims are false. This is an assumption, and it doesn't allow us to convince rational agents of our moral judgements because all moral judgements are false. "Murder is wrong" and "Murder is good" are both false under this theory. So it's a non-starter. We can't do anything useful with this theory. We can't convince others, can't reason, can't make deductions, and never have any reason to change our minds.

Moral progress is also impossible under moral relativism[11].
It's difficult to draw a hard line between what constitutes a "culture" or a "group", but let's ignore that for now. Let's say we have a very clear idea of who belongs to which culture at what time. According to relativistic morality, what is good is defined as what the "group" accepts as good. This group could be a single individual or a society. Let's take the case of a single individual. If I am my own group, then whatever I believe is automatically correct because I believe it. It's "true for me" that murder is wrong. It may not be true for another person or group, but it is true for me. Morality is relative.

With this reasoning, I am never wrong. There is never a reason for me to change my mind about any moral judgement because I'm right by definition. I can't convince other individuals because whatever they believe is "true for them", so this theory fails our first criterion. With cultural relativism, the culture is the group, not the individual. So it might be possible for an individual to be wrong if they disagree with their culture. This would mean that an abolitionist in a slave-owning culture would be morally wrong about slavery because the predominant culture is in favor of owning slaves. Also, if the culture decides slavery is wrong, then there are two interpretations of its previous support for owning slaves. The first interpretation is that the culture was wrong to think that slave-owning was just, and now it has the right belief. But according to cultural relativism, this would also be true in the reverse direction. Going from an abolitionist culture to a slave-owning one would also have to be considered moral progress, since the only metric by which moral judgements can be made is what the existing culture believes. The second interpretation is that the culture was never wrong. When the culture was in favor of slave-owning, it was in fact good to own slaves for that culture.
And when the culture was in favor of the abolition of slavery, then owning slaves was immoral for that culture. This would imply that moral judgements can change over time, but moral progress never really happens. Moral progress aside, convincing other cultures of your culture's moral judgements has no rational basis in cultural relativism. Furthermore, it assumes that the culture is always right, a very strong assumption that fails our second criterion.

## Objective Morality
Other moral semantics define morality in different ways. For example, some define good to be that which maximizes the well-being of all conscious creatures and bad to be that which maximizes suffering. These objective moral semantics are not relative to any individual or culture. They allow for moral progress, and they have the benefit of allowing us to convince others of our moral judgements. For a contrived example, let's say I believe that "Murder is wrong", but my interlocutor thinks that "Murder is good". If we are both using the same definition of good (the maximization of well-being), then I can show empirically that murder reduces well-being. I can show how it negatively affects the well-being of the family and friends of the victim, and talk about the pain of being murdered and the loss of the possibility of future well-being for that person. To use my earlier example, I can point out that slave-owning societies have less well-being in total than those without slaves. Therefore an abolitionist culture would be morally superior according to my definition of good, and going from a slave-owning culture to an abolitionist one would be moral progress.

The big problem with objective morality is that it must make at least one assumption about what ought to be in order to bypass Hume's Guillotine. With utilitarianism, I am assuming that maximizing well-being and minimizing suffering is what we're after. I have to assume that to deduce that murder is wrong.
Otherwise I can point out that murder reduces well-being all day long, but it won't get me anywhere because good has nothing to do with well-being. So we are stuck either not being able to reason about moral judgements with rational agents, or assuming that good has something to do with well-being.

Through my examples, I am levying the same criticism of popular moral philosophy that Immanuel Kant[12] did back in his day. Kant rightly realized that objective moral philosophy has the insurmountable problem that it must rely on a heavily subjective moral imperative to get started. The earlier example I gave of well-being does not apply to people who only care about their own well-being. Utilitarianism[13] will never motivate moral action in those who care only about themselves. Therefore, objective morality can never surpass hypothetical imperatives[14]. A hypothetical imperative only applies to someone who wishes to achieve certain ends. If I want to pass a test, I'd better study. Another way to say this is that I only need to study if I want to pass the test. If I don't care about passing, then I can study or not; it makes no difference. Kant saw this as inadequate and came up with categorical imperatives[15] instead. Categorical imperatives boil down to maxims[16], which also have to be assumed. So while Kant rightly criticized the objective morality of his day for making assumptions, he went on to create his own theory also based on assumptions.

## Hypothetical Imperatives
In a way, with this post, I am doing what Kant originally set out to do. He pointed out the same problems I see with objective morality and attempted to fix them. That is, existing moral systems either require making some strong assumption, or they make no assumptions but are useless when it comes to convincing rational agents of our moral judgements. But in doing so, he just made his own assumptions in the form of categorical imperatives.
I am not going to do that. Kant's categorical imperatives are unnecessary. Hypothetical imperatives are all that's needed. Kant would have been right if he had just stopped after his criticism of objective morality and not tried to create his own Kantian morality. I do not need to assume my way around Hume's Guillotine because I'm not going to make any assumptions. There's no need for morality to go beyond hypothetical imperatives. I shall explain further.

We all have values. Values are things we care about. Some values are fundamental, meaning that we care about them in and of themselves. Others are instrumental: we care about them because they "derive" from other values. If I value passing a test, then I ought to study. Studying would be good. In this case, studying is an instrumental value. I care about studying because I care about passing the test. I care about passing the test because I care about passing the class. I care about passing the class because I care about graduating. I care about graduating because graduating increases my chances of getting a high-paying job. I care about getting a high-paying job because I care about having money. I care about having money because it increases my opportunities. I care about increasing my opportunities because increasing my opportunities increases my well-being. And I care about my well-being in and of itself. In that example, everything except well-being is an instrumental value. Well-being is the intrinsic value.

Why does any of that matter? It matters because we can make certain assumptions about others' values. We can assume others generally value staying alive because evolution has baked that into all of us. Whether that is an intrinsic value or an instrumental one isn't important. As long as others value their continued existence, we can convince them that they ought to care about certain other instrumental values as well, like having enough food to eat, having shelter, acting in a non-violent manner, etc.
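The chain of "I care about X because I care about Y" above can be pictured as a small dependency chain that bottoms out at an intrinsic value. Here's a minimal sketch of that idea in Python. To be clear, this is just my own illustration of the test-passing example, not anything from the philosophical literature; the names and the `derives_from` structure are made up for this post.

```python
# A toy model of the value chain above: each instrumental value points to
# the value it derives from; an intrinsic value points to nothing (None).
# The entries are the test-passing example from the text.
derives_from = {
    "studying": "passing the test",
    "passing the test": "passing the class",
    "passing the class": "graduating",
    "graduating": "getting a high-paying job",
    "getting a high-paying job": "having money",
    "having money": "increasing my opportunities",
    "increasing my opportunities": "well-being",
    "well-being": None,  # intrinsic: cared about in and of itself
}

def intrinsic_value(value: str) -> str:
    """Follow the chain of 'I care about X because Y' until it bottoms out."""
    while derives_from[value] is not None:
        value = derives_from[value]
    return value

print(intrinsic_value("studying"))  # -> well-being
```

The point of the sketch is that every instrumental value in the chain, traced far enough, terminates in the same intrinsic value, which is exactly what makes hypothetical reasoning about someone else's values possible.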
If we know someone's values, we can reason with them about what other values they should have, if they are rational. People often aren't rational, meaning they often hold instrumental values incompatible with their intrinsic values. This is a fancy way of saying they don't know what's good for them. People can also be irrational by not doing what they know is good for them. It is common knowledge that a healthy diet and exercise are important, but we don't always follow through even though we all want to be healthy. While people aren't always rational, I still consider it an important requirement of a moral system that it be able to use rational arguments to convince others.

Hypothetical imperatives don't make any assumptions because they are stated as conditionals. They also allow us to reason with other rational agents about moral judgements. The vast majority of the population values something like well-being for themselves and other conscious creatures. Therefore, I can deduce their other instrumental values if they are being rational. This allows us to collaborate on our values. It means we can tell someone "Murder is wrong" and they understand that to mean "Murder contradicts one or more of my instrumental or intrinsic values". It does no good to tell a psychopath that murder is wrong, because they don't value the well-being of others. This is a big problem in artificial intelligence. If a general artificial intelligence is created that is incompatible with our intrinsic human values, it could be extraordinarily dangerous. The orthogonality thesis[17] holds that any level of intelligence is compatible with any goal. This means a superintelligent AI smarter than we can imagine could value maximizing the number of peanuts in the universe above all else, including human life. It need not have human values, which is what makes it so dangerous. It's not that it's bent on harming people.
It's just so bent on maximizing peanuts that it grinds humans up for resources to create peanuts. It is neutral toward our well-being because it only cares about peanuts.

We aren't going to convince psychopaths or AI systems to change their behavior by presenting them with moral theories. Hypothetical imperatives can explain why. Neither the psychopath nor the AI system shares the moral imperatives of most of humanity, so convincing them rationally is a lost cause. We don't lose anything by using only hypothetical imperatives. With rational agents that share our values, we can make convincing rational arguments. With rational agents that don't share our values, such as psychopaths or AI, we never had any hope of convincing them anyway. With irrational agents, we may be able to convince them, but not using rational argument. Therefore whatever we are doing to convince them can't be considered moral reasoning, so we need not worry about it.

## Wrapping Up Moral Semantics
Returning to our original question: how can we define words like good, evil, right, and wrong? Consider the fact that most of humanity shares the same values. In general, those values boil down to well-being for ourselves and others. So the best way to understand a sentence like "Murder is wrong" might be "Murder contradicts the instrumental or intrinsic values shared by nearly all of humanity". There is an obvious objection to this: some people just have different intrinsic values entirely. Psychopaths don't value the well-being of others intrinsically. A statement like "Murder is wrong" has no meaning to them. Let's take a less extreme example. Person A values well-being so fervently that their ideal future looks like creating as many simulated beings as possible and flooding their minds with the utmost pleasure for the longest amount of time possible. Person A wants heaven on earth, literally.
Person B, on the other hand, gets uncomfortable at the thought of utopia. Person B values variety, spicing things up a bit once in a while. They also want to increase well-being, but think that some suffering in the cosmos is appropriate. It's what keeps us all human and shouldn't be completely gotten rid of. Person A and Person B both want increased well-being for themselves and others, but given a thousand years of arguing they could never come to a consensus on where exactly the peak of the moral landscape lies.

The second, less extreme scenario with Person A and Person B is, I imagine, far more common than psychopaths or AI. True psychopaths are rare, and we don't have strong AGI yet. But every person you talk to probably pictures their version of heaven slightly differently. We want our moral language to be inclusive so we can convince others of our moral judgements, but everyone has slightly different intrinsic values. How can we reconcile this? My answer is simply that we don't need to. In the same way that the rays of light from the sun hit earth in parallel because the sun is so far away, most people's idea of utopia is far enough away from where we are now that the steps toward it are similar no matter what your idea of heaven is. Even if every person had radically different, incompatible intrinsic values, the best thing we can do is try to find common instrumental values and work together on those. That is still the best option we have. So the only appropriate way to define moral language that I see is translating it into the language of hypothetical imperatives.

Some may disagree, but I tend to be pragmatic. Language should be useful for communication. That's where I get my first criterion for moral semantics. What good is moral language if we can't use it to make rational arguments to convince others of our moral judgements? This is why I view theories like error theory, emotivism, and ethical non-naturalism as non-starters.
They are not useful for convincing anybody of moral judgements and only serve to nullify moral language. Hypothetical imperatives are the most convincing way to interpret moral language such that extra assumptions are not necessary.

# Moral Ontology
Given that I am using hypothetical imperatives to interpret moral language, should moral statements be interpreted as universal or relative? When I say "Murder is wrong", how can that apply to others who do not value human well-being? They can't exactly translate that to "Murder contradicts my instrumental or intrinsic values" if in fact it doesn't. Does this mean everyone has their own morality and it's all relative[18]? Or should we treat common intrinsic values as universal[19], so that even those who don't value the well-being of others are subject to that moral judgement?

As I said before, we are far enough away from utopia that even if most of us don't share the exact same intrinsic values, they converge on instrumental values. Therefore, as a matter of language, it is best for us to talk as if everyone shares the value of well-being for themselves and others. This doesn't mean we universalize well-being into a global intrinsic value for everyone. Being a pragmatist, I care about convincing others of my values. I don't think whether values are universal or relative is really an important question. My answer would be: interpret it however you want. Pragmatically, it isn't going to affect your ability to convince anyone. I personally am going to talk in a universal way because it sounds more natural and gets the point across. I am going to say "Murder is wrong", not "Murder is wrong, for me". "Murder is wrong" applies to everyone who shares the intrinsic value of increasing well-being and decreasing suffering, which is almost all of humanity.
So, even though I know that not everyone has an intrinsic set of values from which "Murder is wrong" can be deduced, I am going to speak as if it's universal anyway, because it's close enough that I'm not going to speak with exceptions. "Murder is wrong" is a good example. Murder is wrong, generally. But what about in wartime? What about in self-defense? It's less clear. Despite that, we don't say things like "Murder is wrong except during wartime and except in self-defense and except...". We don't speak this way because the list of exceptions goes on forever. For the same reason, I am going to say "Murder is wrong" without considering all the edge cases, like an AI that only values maximizing peanuts.

The short answer to the moral ontology of my metaethics is "I don't care". You can treat it as relative or universal. It makes no difference to the hypothetical imperatives. Either someone shares your values and you can go about using rational argument to convince them, or they don't and you can't. Whether "Murder is wrong" is true only for people who value well-being or true for everyone is a question I don't think deserves an answer. It's a question that doesn't have any meaning. Semantically, I think it makes the most sense to speak in universals ("Murder is wrong", not "Murder is wrong, for me") and I've given my reasons why. With that, I'll move on to the last section, which is moral epistemology.

# Moral Epistemology
Now that we know how to interpret moral judgements, how can we actually support or defend them? Part of my motivation for writing this post is how much I enjoyed Sam Harris' book The Moral Landscape[20]. I highly recommend it. In it, he explains how scientific facts can inform moral values. What many people take issue with is that he doesn't really solve the is-ought problem. He just asserts that morality has something to do with the well-being of conscious creatures.
I don't take too much issue with this since it is almost universally true. I just avoid taking that step and instead use hypothetical imperatives, sidestepping the is-ought problem that is a popular critique of his book[21].

The bigger problem I see with Sam's morality is one I brought up already. Even among those who do value well-being, there may be minute differences in the end goal those values imply. Some who value well-being may want a perfect utopia. Others who also value well-being may think that goes too far, that there should always be at least some discomfort to spice things up. Sam himself has admitted that he finds the idea of a "well-being utopia" uncomfortable. His common response to criticisms of this sort is that the idea of well-being is fluid and continually evolving. However, this still doesn't solve the problem that some people, likely many people, just have irreconcilable intrinsic values, even if they all value well-being. For that reason, I choose not to assume that well-being is what everyone is after. This allows my theory to account for wide variances in value structures, but I understand why Sam starts with well-being.

Now that I've finished criticizing what I think Sam got wrong, I'll talk about what he got right. I'll start with a quote from The Moral Landscape:

> "If our well-being depends upon the interaction between events in our brains and events in the world, and there are better and worse ways to secure it, then some cultures will tend to produce lives that are more worth living than others; some political persuasions will be more enlightened than others; and some world views will be mistaken in ways that cause needless human misery."

With this, we get a sense of how Sam thinks about science informing moral values (well-being). On this, I agree with him. He is pointing out that some ways of being produce more well-being than others.
Some ways of living, some cultures, some political persuasions, some world views will tend to produce more well-being than others. And we can use science to discover which ways of doing things produce the most well-being for everyone, even given slight differences in the way individuals value well-being. If I go on for too long about science informing moral values, I will just be summarizing his book. Instead, I recommend reading The Moral Landscape[22] to find out more. If you're a busy person, you can listen to the audiobook instead, but reading is more active than listening and allows you to go at your own pace, so I would just read it if you have time.

With the way Sam defines morality, value structures that don't include well-being as an intrinsic value aren't really relevant. With my semantics, they are. So I want to make a quick observation. Science can inform all value structures, not only maximizing well-being and minimizing suffering. This includes non-human value structures such as maximizing peanuts. You can imagine an AI using the scientific method to find the optimal configuration of matter for producing peanuts. If maximizing peanuts is your only intrinsic value, then science can inform you on what you should be doing to best accomplish that, because some methods are going to produce more peanuts than others. This goes back to the hypothetical imperative: if you value maximizing peanuts, then you should use peanut-production method A rather than method B. Other than that, I think Sam's book does a great job of explaining how science can inform well-being.

# Conclusion
I believe this is my longest blog post yet. I don't get paid to write these posts. My main motivation for writing is to contribute to the world of ideas, and I feel like I have ideas worth offering. I put a lot of thought and effort into my blog, so consider making a donation if you made it this far. Thanks for reading!
Link(s):
[1: https://wikiless.org/wiki/Emotivism](https://wikiless.org/wiki/Emotivism)
[2: https://wikiless.org/wiki/Universal_prescriptivism](https://wikiless.org/wiki/Universal_prescriptivism)
[3: https://wikiless.org/wiki/Ethical_naturalism](https://wikiless.org/wiki/Ethical_naturalism)
[4: https://wikiless.org/wiki/Is%E2%80%93ought_problem](https://wikiless.org/wiki/Is%E2%80%93ought_problem)
[5: https://wikiless.org/wiki/Ethical_non-naturalism](https://wikiless.org/wiki/Ethical_non-naturalism)
[6: https://wikiless.org/wiki/Presuppositional_apologetics](https://wikiless.org/wiki/Presuppositional_apologetics)
[7: https://wikiless.org/wiki/Divine_command_theory](https://wikiless.org/wiki/Divine_command_theory)
[8: https://wikiless.org/wiki/Euthyphro_problem#The_dilemma](https://wikiless.org/wiki/Euthyphro_problem#The_dilemma)
[9: https://wikiless.org/wiki/Ideal_observer_theory](https://wikiless.org/wiki/Ideal_observer_theory)
[10: https://wikiless.org/wiki/Error_theory](https://wikiless.org/wiki/Error_theory)
[11: https://wikiless.org/wiki/Moral_relativism](https://wikiless.org/wiki/Moral_relativism)
[12: https://wikiless.org/wiki/Immanuel_Kant](https://wikiless.org/wiki/Immanuel_Kant)
[13: https://wikiless.org/wiki/Utilitarian](https://wikiless.org/wiki/Utilitarian)
[14: https://wikiless.org/wiki/Hypothetical_imperative](https://wikiless.org/wiki/Hypothetical_imperative)
[15: https://wikiless.org/wiki/Categorical_imperative](https://wikiless.org/wiki/Categorical_imperative)
[16: https://wikiless.org/wiki/Maxim_(philosophy)](https://wikiless.org/wiki/Maxim_(philosophy))
[17: https://wikiless.org/wiki/Orthogonality_thesis](https://wikiless.org/wiki/Orthogonality_thesis)
[18: https://wikiless.org/wiki/Moral_relativism](https://wikiless.org/wiki/Moral_relativism)
[19: https://wikiless.org/wiki/Moral_universalism](https://wikiless.org/wiki/Moral_universalism)
[20: https://samharris.org/books/the-moral-landscape/](https://samharris.org/books/the-moral-landscape/)
[21: https://www.lesswrong.com/posts/HLJGabZ6siFHoC6Nh/sam-harris-and-the-is-ought-gap](https://www.lesswrong.com/posts/HLJGabZ6siFHoC6Nh/sam-harris-and-the-is-ought-gap)
[22: https://samharris.org/books/the-moral-landscape/](https://samharris.org/books/the-moral-landscape/)