Death sucks a lot.
I talked about this somewhat in “On Grief”: how painful death is for those of us who are still here. Losing people you care about, forever, is agonizing. To never see them again, never make them smile again, all the unhad conversations and silent moments together: it’s unacceptable, but we’re left with it anyway.
There’s also, of course, the loss to the guest of honor at the funeral. All your hopes and dreams, everything you wanted to do and say, the experiences you could have had, all gone. All value gone, turned into a game over screen with no continues and nobody to see it.
We should not leave out the loss to the world, either. A person is gone, taking with them their unique viewpoint and all of their experience. Everything they knew, every skill they had, every idea, gone, just like that, because of a biological hiccup.
It’s unacceptable. It is not to be borne.
The only real resolution I see to this problem is a superhuman artificial intelligence, aligned to human values. Nothing else offers a complete solution the way more intelligence does. With the intelligence we humans have, we’ve pushed death back, raising the average lifespan bit by bit as generations passed, but we still die in the end.
Even if SENS (Strategies for Engineered Negligible Senescence) is fully successful, and aging becomes a managed process rather than a threat, we still face risks – I’ve been told someone ran the actuarial calculations and found that in the absence of aging we’d live about eight hundred years on average before dying in an accident. That’s certainly far better, but we’d still have to say goodbye.
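The quoted figure is consistent with a simple constant-hazard model: if every year carries the same fixed probability p of dying in an accident, the expected lifespan is roughly 1/p, so an eight-hundred-year average corresponds to p = 1/800 per year. A minimal simulation sketch, where the annual risk is back-derived from the quoted average rather than taken from any real actuarial table:

```python
import random

random.seed(42)

# Hypothetical constant annual accident risk, chosen so 1/p = 800 years.
ANNUAL_ACCIDENT_RISK = 1 / 800

def simulate_lifespan():
    """Years lived before a fatal accident, under a constant annual hazard."""
    years = 0
    while random.random() > ANNUAL_ACCIDENT_RISK:
        years += 1
    return years

lifespans = [simulate_lifespan() for _ in range(10_000)]
mean = sum(lifespans) / len(lifespans)
# Geometric distribution: expected value is (1 - p) / p, i.e. about 800 years.
print(f"simulated average lifespan: {mean:.0f} years")
```

The point of the model isn’t precision – real accident risk isn’t constant – but it shows how a small, irreducible yearly hazard still caps the average lifespan even with aging solved.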
Aside from that, of course, we’re still left with war, disease, and violence. We’ve reduced these, but they still end lives, and none of it touches existential risks – biotech, nanotech, unaligned artificial intelligence, climate change. SENS can’t address any of these, and solving one doesn’t resolve the others – except for AI. An optimizer more intelligent than we are could actually address them all.
So that’s the case for working on alignment – but why MIRI?
MIRI is working on a research program called HRAD – Highly Reliable Agent Design. The machine learning systems we’ve been building thus far are largely trained by rewarding the system for making the ‘right’ choice on training problems, which nudges it slightly in that direction each time. The problem with doing things this way is that we end up with a black box – we don’t know in any detail what drives its decisions, and giving it novel inputs is a crapshoot. One classic (possibly apocryphal) cautionary tale involves a system intended to scan photographs for tanks. The researchers fed it a bunch of pictures with tanks and a bunch without, then tested it on the held-out photographs in the set, which it classified perfectly.
When they presented it to the military, though, the military’s own tests found it did no better than chance. It turned out that in all of the training data, the pictures of tanks were taken on cloudy days and the non-tank pictures on sunny days. The system had learned to distinguish weather, not tanks.
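The failure mode is easy to reproduce with a toy model. The sketch below is a synthetic reconstruction, not the original system: the feature names and numbers are invented for illustration. A logistic-regression classifier is trained on data where brightness (weather) perfectly correlates with the label, so it latches onto brightness instead of the noisier “shape” feature, and its accuracy collapses once that correlation breaks:

```python
import math
import random

random.seed(0)

def make_example(tank, confounded):
    # Noisy "shape" score: higher when a tank is present, but with heavy overlap.
    shape = random.gauss(1.0 if tank else -1.0, 2.0)
    if confounded:
        # The training flaw: every tank photo is cloudy (dark), every non-tank sunny.
        brightness = -1.0 if tank else 1.0
    else:
        # Deployment: weather is unrelated to tanks.
        brightness = random.choice([-1.0, 1.0])
    return (brightness, shape), (1.0 if tank else 0.0)

def dataset(n, confounded):
    return [make_example(i % 2 == 0, confounded) for i in range(n)]

def train(data, lr=0.1, epochs=200):
    # Plain logistic regression via stochastic gradient descent.
    w_b = w_s = bias = 0.0
    for _ in range(epochs):
        for (b, s), y in data:
            p = 1.0 / (1.0 + math.exp(-(w_b * b + w_s * s + bias)))
            err = p - y  # gradient of the logistic loss
            w_b -= lr * err * b
            w_s -= lr * err * s
            bias -= lr * err
    return w_b, w_s, bias

def accuracy(model, data):
    w_b, w_s, bias = model
    hits = sum(1 for (b, s), y in data
               if (w_b * b + w_s * s + bias > 0) == (y == 1.0))
    return hits / len(data)

model = train(dataset(200, confounded=True))
train_acc = accuracy(model, dataset(200, confounded=True))
test_acc = accuracy(model, dataset(200, confounded=False))
print(f"training accuracy: {train_acc:.2f}, deployment accuracy: {test_acc:.2f}")
```

Because brightness separates the training set perfectly while the shape score is noisy, gradient descent pours its weight into brightness; once the weather is decorrelated from the labels, performance drops toward chance, just as in the story.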
We need AGI that values what we value. I think MIRI is doing the best work in this direction, and I want to be part of it.
Because I want to do the highest leverage thing that I can.
To stop the AI from eating us all.
To end death.