With the enormous increase in the power of AI (specifically LLMs), people are using them for all sorts of things, hoping to find areas where they're better than, or at least cheaper than, humans. FiveThirtyNine (get it?) is one such attempt, and they claim that AI can do forecasting better than humans.
Scott Alexander, of Astral Codex Ten, reviewed the service and concluded that it still has a long way to go. I have no doubt that this is the case, but one can imagine that it will not always be. What then? My assertion is that at the point when AI forecasting does “work” (should that ever happen), it will make the problems of superforecasting even worse.
What are the problems of superforecasting?
...
Journey of the Mind: How Thinking Emerged from Chaos by: Ogi Ogas and Sai Gaddam
Against the Grain: A Deep History of the Earliest States by: James C. Scott
This post represents a new feature (experiment?): I plan to occasionally write posts which take advantage of one or more books I read recently, but which aren’t actually reviews of those books. See, for example, my last post: Superminds, States, and the Domestication of Humans.
Although the books feature heavily in these posts, I assume my adoring fans still want actual reviews. But it doesn’t make sense to wait until the next book review collection for those reviews to appear, nor does it make sense to cram them into the original essay, which was about something else. So I thought that instead I would have the reviews quickly follow the essay as a sort of supplementary material. That’s what this is. Let me know what you think.
How durable is the state? How resistant is it to being overthrown? How closely does it reflect our desires? Is it possible it has its own desires?
But, maybe more importantly, how does all this affect the possibility of a very close election in November?