author    Nicholas Johnson <nick@nicholasjohnson.ch>  2023-02-14 00:00:00 +0000
committer Nicholas Johnson <nick@nicholasjohnson.ch>  2023-02-14 00:00:00 +0000
commit    e933093f7be23412121e33a077af3732e75cd3704c5002d66af6673b2502f686 (patch)
tree      778fca0de636090d5a04748137464b9bb24975781e77f59be2e226c85e9cf6d9 /content/entry/metaethics.md
parent    eeb545eb531c6a75cf33e26a5e2cc76d54922f09b96c31e4ff3607788892ab9d (diff)
Convert refs: metaethics
Diffstat (limited to 'content/entry/metaethics.md')
-rw-r--r--  content/entry/metaethics.md | 48
1 file changed, 11 insertions(+), 37 deletions(-)
diff --git a/content/entry/metaethics.md b/content/entry/metaethics.md
index a2bb77e..d5f71da 100644
--- a/content/entry/metaethics.md
+++ b/content/entry/metaethics.md
@@ -2,7 +2,6 @@
title: "Metaethics"
date: 2020-10-11T00:00:00
draft: false
-makerefs: false
---
I'm going to talk about metaethics using the 3 questions posed by Bernard Rosen and Richard Garner. The first part is moral semantics. Moral semantics asks how we should interpret moral language, words like good, evil, right, wrong and ought. The next is moral ontology. Moral ontology asks about the nature of moral judgments. Are there many kinds of moral judgments? Are those judgments true for everyone or only specific groups? Lastly, there is moral epistemology. It talks about what we can know about morality and how we know it, irrespective of its nature. For instance, how can we justify our moral judgments to others? I'll start with moral semantics. Without defining the semantics, nothing else in this post would have any meaning.
@@ -12,19 +11,19 @@ There are at least 2 desirable properties for our moral semantics:
1. It should allow us to convince rational agents of our moral judgments.
2. It should minimize the number of assumptions we have to make.
-Theories such as emotivism[1] assert moral sentences just express emotions. When I say "Murder is wrong", I don't mean "I dislike murder". Neither does anyone I have ever met. We want more out of a moral theory than expressing emotions. We want to be able to convince others of our judgments. Emotivism isn't what we're after. What about universal prescriptivism[2]? It holds that moral judgments such as "Murder is wrong" should be interpreted as "Don't murder". But just commanding someone to do something isn't necessarily convincing because it doesn't employ logical reasoning. It's unlikely to convince anyone that doesn't already believe they shouldn't be murdering. So again, it fails our first requirement of being able to convince others. Let's move on to some other theories.
+Theories such as [emotivism](https://www.wikipedia.org/wiki/Emotivism) assert moral sentences just express emotions. When I say "Murder is wrong", I don't mean "I dislike murder". Neither does anyone I have ever met. We want more out of a moral theory than expressing emotions. We want to be able to convince others of our judgments. Emotivism isn't what we're after. What about [universal prescriptivism](https://www.wikipedia.org/wiki/Universal_prescriptivism)? It holds that moral judgments such as "Murder is wrong" should be interpreted as "Don't murder". But just commanding someone to do something isn't necessarily convincing because it doesn't employ logical reasoning. It's unlikely to convince anyone that doesn't already believe they shouldn't be murdering. So again, it fails our first requirement of being able to convince others. Let's move on to some other theories.
## Hume's Guillotine
-Ethical naturalism[3] says that moral propositions are objective properties of the cosmos. This means that we can look at features of reality and "see" what is right and wrong in the same way that we can look into a microscope and deduce the germ theory of disease. This idea of moral semantics is self-evidently absurd. If we make no extra assumptions, nothing about the way the world is tells us the way it should be. I cannot deduce "Murder is wrong" from empirical facts like "The sky is blue" or any other facts about the physical or metaphysical cosmos. This strict divide between facts and moral judgments is known as Hume's Guillotine[4].
+[Ethical naturalism](https://www.wikipedia.org/wiki/Ethical_naturalism) says that moral propositions are objective properties of the cosmos. This means that we can look at features of reality and "see" what is right and wrong in the same way that we can look into a microscope and deduce the germ theory of disease. This idea of moral semantics is self-evidently absurd. If we make no extra assumptions, nothing about the way the world is tells us the way it should be. I cannot deduce "Murder is wrong" from empirical facts like "The sky is blue" or any other facts about the physical or metaphysical cosmos. This strict divide between facts and moral judgments is known as [Hume's Guillotine](https://www.wikipedia.org/wiki/Is%E2%80%93ought_problem).
-Ethical non-naturalism[5] tries to bypass Hume's Guillotine by saying that these moral judgments are irreducible. Nothing about the way the cosmos is tells us how it should be, but how things should be is an objective, irreducible (possibly intuitive) property of the cosmos itself. If someone asks me "Why shouldn't I murder?", the only correct response according to ethical non-naturalism is philosophical jargon like "It is an irreducible, intrinsic property of the universe that murder is wrong". If another ethical non-naturalist comes along saying murder is ethical, all I can do is repeat that my belief is an intrinsic property of the universe, so the other person must be mistaken. It would be like watching two presuppositionalists[6] argue in circles. I'd be comfortable going on record saying presuppositional apologetics has never convinced anyone who didn't already believe what they were presupposing. More likely, they already believed something and went looking for philosophical jargon to defend it. That's exactly what ethical non-naturalism does and also why it's not convincing. Ethical non-naturalism fails both of our criteria because it is unconvincing to third parties and requires making assumptions.
+[Ethical non-naturalism](https://www.wikipedia.org/wiki/Ethical_non-naturalism) tries to bypass Hume's Guillotine by saying that these moral judgments are irreducible. Nothing about the way the cosmos is tells us how it should be, but how things should be is an objective, irreducible (possibly intuitive) property of the cosmos itself. If someone asks me "Why shouldn't I murder?", the only correct response according to ethical non-naturalism is philosophical jargon like "It is an irreducible, intrinsic property of the universe that murder is wrong". If another ethical non-naturalist comes along saying murder is ethical, all I can do is repeat that my belief is an intrinsic property of the universe, so the other person must be mistaken. It would be like watching two [presuppositionalists](https://www.wikipedia.org/wiki/Presuppositional_apologetics) argue in circles. I'd be comfortable going on record saying presuppositional apologetics has never convinced anyone who didn't already believe what they were presupposing. More likely, they already believed something and went looking for philosophical jargon to defend it. That's exactly what ethical non-naturalism does and also why it's not convincing. Ethical non-naturalism fails both of our criteria because it is unconvincing to third parties and requires making assumptions.
-So far, we haven't had any luck finding a moral semantics that satisfies both our requirements. What about divine command theory[7]? According to it, god's moral judgments are correct. There is no evidence that a god or gods exist, but let's pretend for a moment that a god does exist and that god makes moral judgments. According to divine command theory, god's judgments are true. This raises the Euthyphro dilemma[8]: Are god's moral judgments true just because god declares them, or are god's moral judgments true because god only declares true moral judgments? If the former is true, then god can declare "Murder is perfectly morally okay" and it would be true because god said so and morality would be arbitrary. If the latter is true, then god is just the messenger for moral judgments that are true independent of god's opinion. Therefore god would be superfluous. Ideal observer theory[9] suffers from the same dilemma. Even if we ignore all of that, both theories still fail our second criterion. The assumption is that god or the ideal observer's judgments are true. We want to avoid making strong assumptions, so these theories don't satisfy our criteria either.
+So far, we haven't had any luck finding a moral semantics that satisfies both our requirements. What about [divine command theory](https://www.wikipedia.org/wiki/Divine_command_theory)? According to it, god's moral judgments are correct. There is no evidence that a god or gods exist, but let's pretend for a moment that a god does exist and that god makes moral judgments. According to divine command theory, god's judgments are true. This raises the [Euthyphro dilemma](https://www.wikipedia.org/wiki/Euthyphro_problem#The_dilemma): Are god's moral judgments true just because god declares them, or are god's moral judgments true because god only declares true moral judgments? If the former is true, then god can declare "Murder is perfectly morally okay" and it would be true because god said so and morality would be arbitrary. If the latter is true, then god is just the messenger for moral judgments that are true independent of god's opinion. Therefore god would be superfluous. [Ideal observer theory](https://www.wikipedia.org/wiki/Ideal_observer_theory) suffers from the same dilemma. Even if we ignore all of that, both theories still fail our second criterion. The assumption is that god or the ideal observer's judgments are true. We want to avoid making strong assumptions, so these theories don't satisfy our criteria either.
## Moral Progress
-I want to define "moral progress" before I continue. Moral progress means just what it sounds like: that it is possible to go from a less ethical society or individual to a more ethical one. Certain moral theories don't allow us to do this. Error theory[10] says that all moral claims are false. This is an assumption and it doesn't allow us to convince rational agents of our moral judgments because all moral judgments are false. "Murder is wrong" and "Murder is good" are both false under this theory. So it's a non-starter. We can't do anything useful with this theory. We can't convince others, can't reason, can't make deductions, and never have any reason to change our minds.
+I want to define "moral progress" before I continue. Moral progress means just what it sounds like: that it is possible to go from a less ethical society or individual to a more ethical one. Certain moral theories don't allow us to do this. [Error theory](https://www.wikipedia.org/wiki/Error_theory) says that all moral claims are false. This is an assumption and it doesn't allow us to convince rational agents of our moral judgments because all moral judgments are false. "Murder is wrong" and "Murder is good" are both false under this theory. So it's a non-starter. We can't do anything useful with this theory. We can't convince others, can't reason, can't make deductions, and never have any reason to change our minds.
-Moral progress is also impossible under moral relativism[11]. It's difficult to draw a hard line between what constitutes a "culture" or a "group", but let's ignore that for now. Let's say we have a very clear idea of who belongs to which culture at what time. According to relativistic morality, what is good is defined as what the "group" accepts as good. This group could be a single individual or a society. Let's take the case of a single individual. If I am my own group, then whatever I believe is automatically correct because I believe it. It's "true for me" that murder is wrong. It may not be true for another person or group, but it is true for me. Morality is relative.
+Moral progress is also impossible under [moral relativism](https://www.wikipedia.org/wiki/Moral_relativism). It's difficult to draw a hard line between what constitutes a "culture" or a "group", but let's ignore that for now. Let's say we have a very clear idea of who belongs to which culture at what time. According to relativistic morality, what is good is defined as what the "group" accepts as good. This group could be a single individual or a society. Let's take the case of a single individual. If I am my own group, then whatever I believe is automatically correct because I believe it. It's "true for me" that murder is wrong. It may not be true for another person or group, but it is true for me. Morality is relative.
With this reasoning, I am never wrong. There is never a reason for me to change my mind about any moral judgment because I'm right by definition. I can't convince other individuals because whatever they believe is "true for them", so this theory fails our first criterion. With cultural relativism, the culture is the group, not the individual. So, it might be possible for an individual to be wrong if they disagree with their culture. This would mean that an abolitionist in a slave-owning culture would be morally wrong about slavery because the predominant culture is in favor of owning slaves. Also, if the culture decides slavery is wrong, then there are two interpretations that can be made of their previous support of owning slaves. The first interpretation is that the culture was wrong to think that slave-owning was just, and now they have the right belief. But according to cultural relativism, this would also be true in the reverse direction. Going from an abolitionist culture to a slave-owning one would also have to be considered moral progress, since the only metric by which moral judgments can be made is what the existing culture believes. The second interpretation is that the culture was never wrong. When the culture was in favor of slave owning, it was in fact good to own slaves for that culture. And when the culture was in favor of the abolition of slavery, then owning slaves was immoral for that culture. This would imply that moral judgments can change over time, but moral progress never really happens. Moral progress aside, convincing other cultures of your culture's moral judgments has no rational basis in cultural relativism. Furthermore, it assumes that the culture is always right, a very strong assumption that fails our second criterion.
@@ -33,7 +32,7 @@ Other moral semantics define morality in different ways. For example, some defin
The big problem with objective morality is it must make at least one assumption about what ought to be in order to bypass Hume's Guillotine. With utilitarianism, I am assuming that maximizing well-being and minimizing suffering is what we're after. I have to assume that to deduce that murder is wrong. Otherwise I can point out that murder reduces well-being all day long, but it won't get me anywhere because good has nothing to do with well-being. So we are stuck with either not being able to reason about moral judgments with rational agents, or assuming that good has something to do with well-being.
-Through my examples, I am leveling the same criticism at popular moral philosophy that Immanuel Kant[12] did back in his day. Kant rightly realized that objective moral philosophy has the insurmountable problem that it must rely on a "heavily subjective" moral imperative to get started. The earlier example I gave of well-being does not apply to people who only care about their own well-being. Utilitarianism[13] will never motivate moral action from those who only care about themselves. Therefore, objective morality can never surpass hypothetical imperatives[14]. A hypothetical imperative only applies to someone who wishes to achieve certain ends. If I want to pass a test, I'd better study. Another way to say this is I only need to study if I want to pass the test. If I don't care about passing, then I can study or not. It makes no difference. Kant saw this as inadequate and came up with categorical imperatives[15] instead. Categorical imperatives boil down to maxims[16], which also have to be assumed. So while Kant rightly criticized the objective morality of his day for making assumptions, he went on to create his own theory also based on assumptions.
+Through my examples, I am leveling the same criticism at popular moral philosophy that [Immanuel Kant](https://www.wikipedia.org/wiki/Immanuel_Kant) did back in his day. Kant rightly realized that objective moral philosophy has the insurmountable problem that it must rely on a "heavily subjective" moral imperative to get started. The earlier example I gave of well-being does not apply to people who only care about their own well-being. [Utilitarianism](https://www.wikipedia.org/wiki/Utilitarian) will never motivate moral action from those who only care about themselves. Therefore, objective morality can never surpass [hypothetical imperatives](https://www.wikipedia.org/wiki/Hypothetical_imperative). A hypothetical imperative only applies to someone who wishes to achieve certain ends. If I want to pass a test, I'd better study. Another way to say this is I only need to study if I want to pass the test. If I don't care about passing, then I can study or not. It makes no difference. Kant saw this as inadequate and came up with [categorical imperatives](https://www.wikipedia.org/wiki/Categorical_imperative) instead. Categorical imperatives boil down to [maxims](https://www.wikipedia.org/wiki/Maxim_%28philosophy%29), which also have to be assumed. So while Kant rightly criticized the objective morality of his day for making assumptions, he went on to create his own theory also based on assumptions.
## Hypothetical Imperatives
In a way, with this post, I am doing what Kant originally set out to do. He pointed out the same problems I see with objective morality and attempted to fix them. That is, existing moral systems all either require making some strong assumption or they don't make any assumptions but are useless when it comes to convincing rational agents of our moral judgments. But in doing so, he just made his own assumptions in the form of categorical imperatives. I am not going to do that. Kant's categorical imperatives are unnecessary. Hypothetical imperatives are all that's needed. Kant would have been right if he had just stopped after his criticism of objective morality and not tried to create his own Kantian morality. I do not need to assume my way around Hume's Guillotine because I'm not going to make any assumptions. There's no need for morality to go beyond hypothetical imperatives. I shall explain further.
@@ -42,7 +41,7 @@ We all have values. Values are things we care about. Some values are fundamental
Why does any of that matter? It matters because we can make certain assumptions about others' values. We can assume others generally value staying alive because evolution has baked that into all of us. Whether that is an intrinsic value or instrumental isn't important. As long as others value their continued existence, we can convince them that they ought to care about certain other instrumental values as well like having enough food to eat, having shelter, acting in a non-violent manner, etc. If we know someone's values, we can reason with them about what other values they should have, if they are rational. People often aren't rational, meaning they often have instrumental values incompatible with their intrinsic values. This is a fancy way of saying they don't know what's good for them. People can also be irrational by not doing what they know is good for them. It is common knowledge that a healthy diet and exercise are important, but we don't always do that even though we all want to be healthy. While people aren't always rational, I still consider it an important requirement of a moral system to be able to use rational arguments to convince others.
-Hypothetical imperatives don't make any assumptions because they are stated as conditionals. They also allow us to reason with other rational agents about moral judgments. The vast majority of the population values something like well-being for themselves and other conscious creatures. Therefore, I can deduce their other instrumental values if they are being rational. This allows us to collaborate on our values. It means we can tell someone "Murder is wrong" and they understand that to mean "Murder is in contradiction with one or more of my instrumental or intrinsic values". It doesn't do any good to tell a psychopath that murder is wrong because they don't value the well-being of others. This is a big problem in artificial intelligence. If a general artificial intelligence is created that is incompatible with our intrinsic human values, it could be extraordinarily dangerous. The orthogonality thesis[17] explains that any level of intelligence is compatible with any goal. This means a superintelligent AI smarter than we can imagine could value maximizing the number of peanuts in the universe above all else, including human life. It need not have human values which is what makes it so dangerous. It's not that it's bent on harming people. It's just so bent on maximizing peanuts that it grinds humans up for resources to create peanuts. It is neutral toward our well-being because it only cares about peanuts.
+Hypothetical imperatives don't make any assumptions because they are stated as conditionals. They also allow us to reason with other rational agents about moral judgments. The vast majority of the population values something like well-being for themselves and other conscious creatures. Therefore, I can deduce their other instrumental values if they are being rational. This allows us to collaborate on our values. It means we can tell someone "Murder is wrong" and they understand that to mean "Murder is in contradiction with one or more of my instrumental or intrinsic values". It doesn't do any good to tell a psychopath that murder is wrong because they don't value the well-being of others. This is a big problem in artificial intelligence. If a general artificial intelligence is created that is incompatible with our intrinsic human values, it could be extraordinarily dangerous. The [orthogonality thesis](https://www.wikipedia.org/wiki/Orthogonality_thesis) explains that any level of intelligence is compatible with any goal. This means a superintelligent AI smarter than we can imagine could value maximizing the number of peanuts in the universe above all else, including human life. It need not have human values which is what makes it so dangerous. It's not that it's bent on harming people. It's just so bent on maximizing peanuts that it grinds humans up for resources to create peanuts. It is neutral toward our well-being because it only cares about peanuts.
We aren't going to convince psychopaths or AI systems to change their behavior by presenting them with moral theories. Hypothetical imperatives can explain why this is. Both the psychopath and the AI system do not share the same moral imperatives as most of humanity, so convincing them rationally is a lost cause. We don't lose anything by using only hypothetical imperatives. With rational agents that share our values, we can make convincing rational arguments. With rational agents that don't share our values such as psychopaths or AI, we never had any hope of convincing them anyway. With irrational agents, we may be able to convince them, but not using rational argument. Therefore whatever we are doing to convince them can't be considered moral reasoning, so we need not worry about it.
@@ -54,14 +53,14 @@ The second less extreme scenario I gave with Person A and Person B is far more c
Some may disagree, but I tend to be pragmatic. Language should be useful for communication. That's where I get my first criterion for moral semantics. What good is moral language if we can't use it to make rational arguments to convince others about our moral judgments? This is why I view theories like error theory, emotivism, and ethical non-naturalism as non-starters. They are not useful for convincing anybody of moral judgments and only serve to nullify moral language. Hypothetical imperatives are the most convincing way to interpret moral language such that extra assumptions are not necessary.
# Moral Ontology
-Given that I am using hypothetical imperatives to interpret moral language, should moral statements be interpreted as universal or relative? When I say "Murder is wrong", how can that apply to others who do not value human well-being? They can't exactly translate that to "Murder contradicts my instrumental or intrinsic values" if it in fact doesn't. Does this mean everyone has their own morality and it's all relative[18]? Or should we treat common intrinsic values as universal[19], so that even those who don't value the well-being of others are subject to that moral judgment?
+Given that I am using hypothetical imperatives to interpret moral language, should moral statements be interpreted as universal or relative? When I say "Murder is wrong", how can that apply to others who do not value human well-being? They can't exactly translate that to "Murder contradicts my instrumental or intrinsic values" if it in fact doesn't. Does this mean everyone has their own morality and [it's all relative](https://www.wikipedia.org/wiki/Moral_relativism)? Or should we treat [common intrinsic values as universal](https://www.wikipedia.org/wiki/Moral_universalism), so that even those who don't value the well-being of others are subject to that moral judgment?
As I said before, we are far enough away from utopia that even if most of us don't share the exact same intrinsic values, they converge on the same instrumental values. Therefore, as a matter of language, it is best for us to talk as if everyone shares the value of well-being for themselves and others. This doesn't mean we universalize well-being into a global intrinsic value for everyone. Being a pragmatist, I care about convincing others of my values. I don't think it's really an important question whether values are universal or relative. My answer to this would be: interpret it however you want. Pragmatically, it isn't going to affect your ability to convince anyone. I personally am going to talk in a universal way because it sounds more natural and gets the point across. I am going to say "Murder is wrong", not "Murder is wrong, for me". "Murder is wrong" applies to everyone who shares the intrinsic value of increasing well-being and decreasing suffering, which is almost all of humanity. So, even though I know that not everyone has an intrinsic set of values from which "Murder is wrong" can be deduced, I am going to speak as if it's a universal anyway because it's close enough that I'm not going to bother with exceptions. "Murder is wrong" is a good example. Murder is wrong, generally. But what about in wartime? What about in self-defense? It's less clear. But despite that, we don't say things like "Murder is wrong except during wartime and except in self-defense and except...". We don't speak this way because the list of exceptions goes on forever. For the same reason, I am going to say "Murder is wrong" without considering all the edge cases like an AI that only values maximizing peanuts.
The short answer to the moral ontology of my metaethics is "I don't care". You can treat it as relative or universal. It makes no difference to the hypothetical imperatives. Either someone shares your values and you can go about using rational argument to convince them or they don't and you can't. Whether you want to say "Murder is wrong" is true for only people that value well-being or it's true for everyone is a question I don't think deserves an answer. It's a question that doesn't have any meaning. Semantically, I think it makes the most sense to speak in universals ("Murder is wrong", not "Murder is wrong, for me") and I've given my reasons why. With that, I'll move on to the last section which is moral epistemology.
# Moral Epistemology
-Now that we know how to interpret moral judgments, how can we actually support or defend moral judgments? Part of my motivation for writing this post is how much I enjoyed Sam Harris' book The Moral Landscape[20]. I highly recommend it. In it, he explains how scientific facts can inform moral values. What many people take issue with is that he doesn't really solve the is-ought problem. He just asserts that morality has something to do with the well-being of conscious creatures. I don't take too much issue with this since it is almost universally true. I just avoid taking that step and instead use hypothetical imperatives to avoid running into the is-ought problem that is a popular critique of his book[21].
+Now that we know how to interpret moral judgments, how can we actually support or defend moral judgments? Part of my motivation for writing this post is how much I enjoyed Sam Harris' book [The Moral Landscape](https://samharris.org/books/the-moral-landscape/). I highly recommend it. In it, he explains how scientific facts can inform moral values. What many people take issue with is that he doesn't really solve the is-ought problem. He just asserts that morality has something to do with the well-being of conscious creatures. I don't take too much issue with this since it is almost universally true. I just avoid taking that step and instead use hypothetical imperatives to avoid running into [the is-ought problem that is a popular critique of his book](https://www.lesswrong.com/posts/HLJGabZ6siFHoC6Nh/sam-harris-and-the-is-ought-gap).
The bigger problem I see with Sam's morality is one I brought up already. Even for those who do value well-being, there may be minute differences in the end goal those values imply. Some that value well-being may want a perfect utopia. Others that also value well-being may think that goes too far, that there should always be at least some discomfort to spice things up. Sam himself has admitted before that he finds the idea of a "well-being utopia" uncomfortable. His common response to criticisms of this sort is that the idea of well-being is fluid and continually evolving. However, this still doesn't solve the problem that some people, likely many people, just have irreconcilable intrinsic values, even if they all value well-being. For that reason, I choose not to assume that well-being is what everyone is after. This allows my theory to account for wide variances in value structures, but I understand why Sam starts with well-being.
@@ -69,34 +68,9 @@ Now that I've finished criticizing what I think Sam got wrong, I'll talk about w
> "If our well-being depends upon the interaction between events in our brains and events in the world, and there are better and worse ways to secure it, then some cultures will tend to produce lives that are more worth living than others; some political persuasions will be more enlightened than others; and some world views will be mistaken in ways that cause needless human misery."
-With this, we get a sense of how Sam thinks science can inform moral values (well-being). On this, I agree with him. He is pointing out that some ways of being produce more well-being than others. Some ways of living, some cultures, some political persuasions, some world views will tend to produce more well-being than others. And we can use science to discover which ways of doing things produce the most well-being for everyone, even given slight differences in the way individuals value well-being. If I go on for too long about science informing moral values, I will just be summarizing his book. Instead of that, I would recommend reading The Moral Landscape[22] to find out more. If you're a busy person, you can listen to the audiobook instead. Reading is more active than listening and also allows you to go at your own pace, so I would just read it if you have time.
+With this, we get a sense of how Sam thinks science can inform moral values (well-being). On this, I agree with him. He is pointing out that some ways of being produce more well-being than others. Some ways of living, some cultures, some political persuasions, some world views will tend to produce more well-being than others. And we can use science to discover which ways of doing things produce the most well-being for everyone, even given slight differences in the way individuals value well-being. If I go on for too long about science informing moral values, I will just be summarizing his book. Instead of that, I would recommend reading [The Moral Landscape](https://samharris.org/books/the-moral-landscape/) to find out more. If you're a busy person, you can listen to the audiobook instead. Reading is more active than listening and also allows you to go at your own pace, so I would just read it if you have time.
With the way Sam defines morality, the other value structures without well-being as an intrinsic value aren't really relevant. With my semantics, they are. So I want to make a quick observation. Science can inform all value structures, not only maximizing well-being and minimizing suffering. This includes non-human value structures such as maximizing peanuts. You can imagine an AI using the scientific method to find the optimal configuration of matter for producing peanuts. If maximizing peanuts is your only intrinsic value, then science can inform you about what you should be doing to best accomplish that because some methods are going to produce more peanuts than other methods. This goes back to the hypothetical imperative. If you value maximizing peanuts and method A produces more peanuts than method B, then you should use method A. Other than that, I think Sam's book does a great job at explaining how science can inform well-being.
# Conclusion
I believe this is my longest blog post yet. I don't get paid to write these posts. My main motivation for writing is to contribute to the world of ideas and I feel like I have ideas worth offering. I put a lot of thought and effort into my blog, so consider making a donation if you made it this far. Thanks for reading!
-
-
-Link(s):
-[1: https://www.wikipedia.org/wiki/Emotivism](https://www.wikipedia.org/wiki/Emotivism)
-[2: https://www.wikipedia.org/wiki/Universal_prescriptivism](https://www.wikipedia.org/wiki/Universal_prescriptivism)
-[3: https://www.wikipedia.org/wiki/Ethical_naturalism](https://www.wikipedia.org/wiki/Ethical_naturalism)
-[4: https://www.wikipedia.org/wiki/Is%E2%80%93ought_problem](https://www.wikipedia.org/wiki/Is%E2%80%93ought_problem)
-[5: https://www.wikipedia.org/wiki/Ethical_non-naturalism](https://www.wikipedia.org/wiki/Ethical_non-naturalism)
-[6: https://www.wikipedia.org/wiki/Presuppositional_apologetics](https://www.wikipedia.org/wiki/Presuppositional_apologetics)
-[7: https://www.wikipedia.org/wiki/Divine_command_theory](https://www.wikipedia.org/wiki/Divine_command_theory)
-[8: https://www.wikipedia.org/wiki/Euthyphro_problem#The_dilemma](https://www.wikipedia.org/wiki/Euthyphro_problem#The_dilemma)
-[9: https://www.wikipedia.org/wiki/Ideal_observer_theory](https://www.wikipedia.org/wiki/Ideal_observer_theory)
-[10: https://www.wikipedia.org/wiki/Error_theory](https://www.wikipedia.org/wiki/Error_theory)
-[11: https://www.wikipedia.org/wiki/Moral_relativism](https://www.wikipedia.org/wiki/Moral_relativism)
-[12: https://www.wikipedia.org/wiki/Immanuel_Kant](https://www.wikipedia.org/wiki/Immanuel_Kant)
-[13: https://www.wikipedia.org/wiki/Utilitarian](https://www.wikipedia.org/wiki/Utilitarian)
-[14: https://www.wikipedia.org/wiki/Hypothetical_imperative](https://www.wikipedia.org/wiki/Hypothetical_imperative)
-[15: https://www.wikipedia.org/wiki/Categorical_imperative](https://www.wikipedia.org/wiki/Categorical_imperative)
-[16: https://www.wikipedia.org/wiki/Maxim_(philosophy)](https://www.wikipedia.org/wiki/Maxim_(philosophy))
-[17: https://www.wikipedia.org/wiki/Orthogonality_thesis](https://www.wikipedia.org/wiki/Orthogonality_thesis)
-[18: https://www.wikipedia.org/wiki/Moral_relativism](https://www.wikipedia.org/wiki/Moral_relativism)
-[19: https://www.wikipedia.org/wiki/Moral_universalism](https://www.wikipedia.org/wiki/Moral_universalism)
-[20: https://samharris.org/books/the-moral-landscape/](https://samharris.org/books/the-moral-landscape/)
-[21: https://www.lesswrong.com/posts/HLJGabZ6siFHoC6Nh/sam-harris-and-the-is-ought-gap](https://www.lesswrong.com/posts/HLJGabZ6siFHoC6Nh/sam-harris-and-the-is-ought-gap)
-[22: https://samharris.org/books/the-moral-landscape/](https://samharris.org/books/the-moral-landscape/)