For the Pretend Very Serious people who controlled ~all funding in EA and "AI safety" for ~15 years, a verbatim prediction of this headline would have been treated with deep contempt, as proof you were not Very Serious like them. Reality was out of their bounds.
@ESYudkowsky I also take this as an example of how one can reason perfectly but real-life outcomes can still defy the reasoned predictions.
@yishan I'd call it absurd to claim that anyone, including me, was reasoning anything close to perfectly at any point. Learn from this the imperfection of your reasoning processes; do not learn the perfection of your reasoning and the imperfection of mere reality.
So far I have not found any clearly superior reasoning to yours on the alignment issue, and combined with my agreement with your above statement, that is the only reason I don't assume we're definitely going to die on this road. (Indeed, I am working on climate change because if we don't die, then we've got this big problem we still need to solve and should not be ignoring.) But it is a serious thing to consider (hence the quote I gave), because I don't see a flaw in the reasoning so far.
@ESYudkowsky @yishan i think the point he is making isn't about "perfect" reasoning vs "imperfect" reality so much as it is about the limited power of reasoning in sufficiently uncertain, highly contingent environments. More of a meta-reasoning point about where it might be optimal to allocate cognition.