On Moral Relevance

I’ve been trying to decide for a while what moral relevance is made of. At the moment I have an answer that seems to serve in day to day life, but I still have the feeling that I haven’t fully resolved the question, a nagging feeling that I’m not yet capturing everything.

Backing up a step, what do I even mean when I say, “moral relevance”?

I consider a thing to be morally relevant if I think it’s something that I should take into account when choosing my actions. It gets accounted for so that I do not violate any preferences that it has. In short, violating its preferences is a bad thing, and to be avoided. It matters.

A meaningful amount of the problem lies here. I have an intuitive felt sense of what it means to matter, and what it means for a thing to be bad rather than good, but I don’t think I could give a technical explanation of these concepts. I am in some senses a moral realist – it seems to me that there is something more than game theory in being kind to others. I don’t think that you can find an atom of good or the charge carrier of morality, but you’d be hard-pressed to find an atom of human, either, and they certainly seem to exist.

Axioms seem to be necessary for establishing systems, from what I can tell. Investigating this question, it seems that in at least some systems, with enough progress, you can then go back and prove your axioms (see this Quora answer), but note that what I’ve found also supports another claim I’ve made – that a system with no axioms neither does nor can prove anything; it has no affordances with which to do anything.

So, my moral axioms:

  1. There is an axis on which events can be measured; this axis runs from “bad” to “good”, and any given action, fully contextualized, has a location on this axis.
  2. The degree to which an event is bad or good depends strongly (although possibly not exclusively) on the degree of suffering that descends from it.
  3. Evil is a quality that bad events can have, but do not necessarily have, and which requires intentionality: an evil action is one where a bad event will result, the mind that initiated the action was aware of this, and either didn’t care or (increasing the degree of evil) actively wanted this outcome.
  4. For two events identical in all respects except for the presence or absence of evil, the event that involved evil is worse (“more bad”).

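If I squint at these axioms, they gesture at a scoring function, something like the sketch below. Everything specific in it – the Event fields, the evil_penalty weight, subtracting suffering straight off the score – is a placeholder of my own choosing, not something the axioms actually pin down.

```python
from dataclasses import dataclass

@dataclass
class Event:
    suffering: float                   # total suffering that descends from the event (axiom 2)
    goodness: float = 0.0              # whatever else, if anything, pushes the event toward "good"
    agent_foresaw_harm: bool = False   # the initiating mind knew a bad outcome would result
    agent_wanted_harm: bool = False    # the initiating mind actively wanted that outcome

def moral_score(event: Event, evil_penalty: float = 1.0) -> float:
    """Place an event on the bad-to-good axis of axiom 1: negative is bad, positive is good."""
    score = event.goodness - event.suffering
    # Axioms 3 and 4: intentionality makes an otherwise-identical bad event worse.
    if event.agent_foresaw_harm:
        score -= evil_penalty
    if event.agent_wanted_harm:
        score -= evil_penalty          # actively wanting the harm increases the degree of evil
    return score
```
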
From this, it can be seen that moral relevance comes from the capacity to suffer. Rocks are not morally relevant, because they don’t, as far as we know, experience suffering. People are morally relevant, because they do. This gets… problematic, if one considers engineered beings. Does a person who is identical to me in every respect, except that they experience suffering at twice the intensity, with twice the upper bound for suffering, matter twice as much as I do? Does a person identical to me in every respect, except that she has no capacity to suffer whatsoever, not matter at all?

I notice, once more, that I am confused.

On reflection, I think that a version of me who didn’t experience suffering at all might not be morally relevant. If she’s not bothered in the hypothetical case where I hit her with a stick as she walks past, I’m not sure where the bad lies.

Similarly, an action that hurts the double-suffering version of me is at least as bad as taking said action towards me. Is it twice as bad? She suffers twice as much, after all.

The incentive gradients are all screwy here, though. This scaling suggests that the right thing to do is to create people capable of as much suffering as possible, and then make sure the world (or other people) never hurts them, and that seems obviously incorrect.

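To make the screwiness concrete: if badness is just the suffering that results, then the credit for averting a harm scales with the victim’s capacity to suffer. The function below is a toy of my own, not a rule I endorse.

```python
def prevention_value(harm_intensity: float, suffering_multiplier: float) -> float:
    """Moral credit for averting a harm, under the naive 'badness = resulting suffering' rule.
    suffering_multiplier is the engineered capacity to suffer (1.0 = baseline me)."""
    return harm_intensity * suffering_multiplier

# Averting the same stick-swing counts double for the double-suffering copy...
print(prevention_value(10, 1.0))  # 10.0
print(prevention_value(10, 2.0))  # 20.0
print(prevention_value(10, 0.0))  # 0.0 - and counts for nothing with the copy who can't suffer
# ...so the gradient points toward engineering beings with ever-higher multipliers
# and then standing guard over them.
```
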
I need to think more on this topic, clearly, if I’m getting outcomes like this. What are your thoughts, reader?