G. K. Chesterton noted that you shouldn't remove a fence until you understand why it was put there in the first place. This is a great analogy for the traditions which have been passed down to us by religion. Technology allows us to remove a lot of fences, but are we sure we understand why they were there to begin with?
Expanding on Nick Bostrom's metaphor for technological development — it's like drawing balls of various shades from an urn, and if you ever draw a pure black ball, the world ends...
Clearly there are some traditions which should be abandoned. And just as clearly there are some which should be preserved. But what are we to do with the vast majority which fall in the middle?
There's significant evidence that traditional practices were beneficial and adaptive. But many reject traditions as relics of a barbarous past. How do we determine which traditions are which?
Religion is a very old technology, and the wisdom it contains is often hard to identify. This is unlike the technology we create intentionally, where the goal is always obvious. When these two technologies come into conflict, how should we decide between them? It's easy and attractive to go with the new, but by dispensing with the old we may be incurring harms that will only manifest years or decades later.
Religion is often viewed as being completely different from technology, but in actuality religion is a technology, one of the oldest forms of it in fact. It's a particularly important form of technology, one that we abandon at our peril.
For many years this was the most listened to episode of my podcast. So it seemed logical to update it, though that also means it's one you might have already heard.
If AIs are going to act as independent agents, then they should be praised for their virtues and judged for their sins just as we all are. And in fact this judgment is one of the few things that will ensure their morality.
Per Scott Alexander, it's possible that the right optimizes for survival, while the left optimizes for thriving. If so, would a society entirely optimized around thriving actually function?
AI risk has been in the news a lot lately. One way to reduce that risk is to make sure that AIs are moral. But what does morality even mean when you're talking about robots and AIs?
Burning Man aspires to reinvent the culture of the Earth. I don't think it's going to, which is to say in 1,000 years I don't think that people will look back on this era as the Dawn of the Burners. But what will they think of us? That's the question we cover in this episode.
Artificial Intelligence has many attributes that previously we've associated only with the divine.
This is cross posted from my Patheos column which you can find here: https://www.patheos.com/blogs/dispatchesendofworld/
I'm still figuring out how to do anchor links. Also I really need to figure out how to write shorter reviews...
Religion and technology relate to each other in strange and subtle ways. Religion must, of necessity, grapple with technology, but how to do that remains unclear.
This is the first post of my new Patheos column. If you could subscribe, that would be fantastic.
An update of one of my past posts. I actually changed a lot more than I thought I would. Also, since it's not appearing on Substack, I went a little bit crazy with footnotes. Let me know what you think of how I handled them on audio.
At the end of this episode I announce some significant changes to my blog, so make sure to stay till then.
Before then I discuss decision making: how we need to spend most of our time perfecting our habits and thinking carefully about big decisions, when in reality we spend most of our time doing neither, obsessed with things that don't matter...
I'm not a hardcore prepper, but I'm always surprised by how little preparation most people are willing to make, particularly compared to how much they're willing to panic.
It is not just our ability to cause harm, but our ability to mitigate harm which has grown in an unprecedented fashion. Life has done whatever it could get away with for billions of years, but in the last few hundred humans have come along, able to inflict or prevent great harms, with the consciousness to decide whether and how. Recent debates have pitted maximalists from both sides: those who believe we need to do everything possible to prevent certain harms, and those who think that any attempt to prevent harm is likely to cause more harm because it stalls progress.
In 2014, just a few days before Christmas, I was fired and served with a lawsuit. The next two years were some of the toughest of my life. (Which actually makes me pretty lucky.) This is the story of that lawsuit, how it started and how it ended. I made a lot of mistakes; hopefully this will help you avoid making the same ones.
I recently read a book which claimed to describe the central problem of the modern world: What's Our Problem? by Tim Urban. No one can say he lacked for ambition, but I feel his analysis overlooks several large problems. In particular, I think he doesn't go nearly deep enough into how technology has changed the rules of the game.
He has divided the political landscape up into golems and genies, and he asserts that we just need stronger genies. The problem is that technology has developed in such a way that it actively sabotages genies while empowering golems. And this problem is not going to go away...
There have been some competing explanations for the Nord Stream explosions. Seymour Hersh claims the US did it. The New York Times claims it was pro-Ukrainian forces. And a somewhat obscure blogger claims he has evidence that it was the Russians. How is one to decide? And is it even necessary to decide? Is it perhaps more important to have a robust framework for conspiracy theories in general than to have firm opinions about specific theories? How has the modern world made the whole enterprise more difficult?