---
title: "Making Sense of Metaethics"
date: 2023-02-23T00:00:00
draft: false
---
## Motivation
It's been over two years since I wrote my journal entry, [metaethics](/2020/10/11/metaethics/). Looking back, that entry is much longer than it needed to be. I said way too much and didn't explain myself well. And I no longer agree with everything I wrote in it.

This entry supersedes that one. I aim to make this one shorter and more comprehensible.

## Laying The Foundation
Let's start with facts everyone can agree on.

First, there are things that we value. For example, having enough food to stay alive, achieving a high social status, and being around people we like. Even if some of our value structures can't be easily codified into statements, it doesn't matter: by definition of what a value is, we want to navigate toward a future where our values are fulfilled.

So far so obvious. Now let's move into the most controversial section of this entry.

## Interpreting Moral Language
Moral language like "should", "ought", "good", "evil", "right", and "wrong" should be interpreted as signalling either alignment or misalignment between values and actions. So if I say, "You shouldn't play the lottery," what I mean is that you playing the lottery runs contrary to my values somehow. One plausible reason is that I care about your well-being: I know that playing the lottery will probably cause you to lose money, and that your well-being will probably be worse if you have less money.

There are other ways to interpret moral language to which I give some credence. [Emotivism](https://www.wikipedia.org/wiki/Emotivism) suggests that moral statements express feelings. [Prescriptivism](https://www.wikipedia.org/wiki/Universal_prescriptivism) proposes viewing moral statements as imperatives.

I agree that moral language can also express emotions and imperatives. My interpretation is fully compatible with that. But the problem with interpreting moral statements entirely as expressions of emotion or entirely as imperatives is that **people use moral statements as statements of fact**. My claim is that the facts moral statements refer to are facts about how certain actions affect one's values.

## Refuting Hume's Guillotine
At this point, I'd like to address some likely criticism, namely [Hume's Guillotine](https://www.wikipedia.org/wiki/Is%E2%80%93ought_problem). For those who don't know, Hume's Guillotine is the idea that you can't derive an "ought" from an "is". In other words, no description of how the world is tells you how it should be. It's a strict separation of facts and values.

Under my interpretation of moral language, [values are facts](https://www.wikipedia.org/wiki/Moral_realism). There's no distinction. As long as I value my health, I should exercise. That's a fact. If anyone disagrees, I'd be happy to reference the science that shows exercise improves health. "But science can't prove that it's Good to be healthy." I value my health. No further justification is necessary.

Hume says "You can't get an ought from an is." I say "The is *is* the ought." **Values are facts about what we care about and moral statements are facts about the effects of different actions on those values.** How else could moral statements possibly be interpreted while also lining up with how they're actually used?

## Intrinsic and Instrumental Values
Now that I've defended how I interpret moral language, I'd like to elaborate on values a bit. Values can be separated into two categories: [intrinsic and instrumental](https://www.thoughtco.com/intrinsic-and-instrumental-value-2670651).

Intrinsic values are things we care about in and of themselves, like pleasure and happiness. Instrumental values are things we care about because they bring us closer to satisfying our intrinsic values. For example, I care about exercise because I care about my health. I care about my health because I want to feel good. And I want to feel good just because.

Let's assume that the lottery player in my earlier example values playing the lottery solely as an instrumental goal. Their real goal is having more money. Let's also assume that they value having money because they think it will make them happy. Playing the lottery and having money are both instrumental to the happiness. But if I could prove that playing the lottery doesn't lead to having more money, or that having more money doesn't make one happy, then those instrumental values would disappear, assuming the lottery player is rational.

It's important to note that a rational agent cannot be reasoned out of their intrinsic values, but they can be reasoned into and out of instrumental values. This is where science comes in. Science can tell us what we should do given our values and what we should instrumentally value given our intrinsic values.
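
To make the structure of that example concrete, here's a minimal sketch in Python. Everything in it (the names, the "beliefs" dictionary, the pruning rule) is my own illustrative assumption rather than anything from this entry: it just treats each instrumental value as a claim justified by an empirical belief, and keeps it only while that belief and the value it serves both hold up.

```python
# Toy model of the lottery example (all names here are made up for illustration).
# Intrinsic values are taken as given; instrumental values are justified by an
# empirical belief and by the value they're supposed to serve.

intrinsic_values = {"happiness"}

instrumental_values = {
    "having more money": {"serves": "happiness", "belief": "money makes me happy"},
    "playing the lottery": {"serves": "having more money", "belief": "playing the lottery earns money"},
}

# Suppose science settles the empirical claims like this:
beliefs = {
    "money makes me happy": True,
    "playing the lottery earns money": False,  # lottery tickets lose money on average
}

def rational_values(intrinsic, instrumental, beliefs):
    """Keep an instrumental value only if its justifying belief is true and the
    value it serves survives the same test, directly or transitively."""
    kept = set(intrinsic)
    changed = True
    while changed:
        changed = False
        for value, link in instrumental.items():
            if value not in kept and beliefs.get(link["belief"]) and link["serves"] in kept:
                kept.add(value)
                changed = True
    return kept

print(rational_values(intrinsic_values, instrumental_values, beliefs))
# -> {'happiness', 'having more money'}; "playing the lottery" drops out
#    because the belief that justified it turned out to be false.
```

The only point of the toy model is that intrinsic values sit at the root and never get pruned by evidence, while every instrumental value is hostage to a factual claim that science can confirm or refute.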

I won't go into more detail on that, since a very good book has already been written on how science can inform human values. If you're interested, the book is [The Moral Landscape](https://www.samharris.org/books/the-moral-landscape) by [Sam Harris](https://www.samharris.org).

## Well-Being
In The Moral Landscape, Sam starts with well-being as his ethical foundation. So let's talk about how that works in my moral semantics.

First, I'll start with the observation that [any level of intelligence is compatible with almost any intrinsic value or goal](https://www.wikipedia.org/wiki/Orthogonality_thesis). Humans, though, as products of evolution by natural selection, largely share a common set of goals. Generally speaking, we want to promote well-being for ourselves and others.

As a matter of convenience, we make the (usually correct) assumption that whoever we're dealing with has intrinsic values similar to ours, and thus we can attempt to reason with them. In the case of psychopaths, this may not be true, but they're rare enough that it doesn't matter. So I take no issue with Sam's starting with well-being as the entry point for thinking about ethical questions.

However, as we continue to make technological progress, we must keep in mind that artificial intelligences do not necessarily share human values like well-being. Unless we make sure that AI values are aligned with human values (a very difficult, unsolved problem), **it's not possible to reason morally with AI the same way we can with humans**.

## Being Wrong About Instrumental Values
In terms of human values, the problem isn't differences in intrinsic values. It's differences in instrumental values. Many people possess instrumental values which harm their own intrinsic values. They're just too unwise to see it.

If we agree that well-being is what we're after, and that some actions, attitudes, cultures, political ideologies, and systems of government are better than others at promoting it, then moral statements are equivalent to facts about which states of affairs bring about well-being.

## The Importance of Moral Semantics
Even though people aren't perfectly rational, even though their intrinsic values might change over time, and even though they're often wrong about their instrumental values, we still need a way to at least try to reason with each other about morals. That's what my moral semantics is meant to provide. Coming together on intrinsic and instrumental values is the foundation of cooperation. The more we agree on our instrumental values, the better things will be for us.

Getting clear on what we mean by moral language is more than just a philosophical exercise. If we can't agree on what moral statements mean or whether they have meaning at all, how can we reason with each other about morals? We'll be more likely to talk past each other and resort to violence. That's why moral semantics has practical importance.