We interviewed Nick Bostrom, the author of the New York Times bestseller "Superintelligence" and one of the most influential experts on AI risk.

Timestamps:
2:11 - Will AI be used for extreme bad? What are the odds?
4:44 - What are the biggest concerns with AI?
6:16 - People greatly underestimate what superintelligence will be.
7:19 - Can we trust anyone with creating a superintelligent AI?
8:49 - Are the big AI companies taking AI safety seriously enough?
10:39 - Is it good or bad for multiple AI systems to undergo the transition to superintelligence at the same time?
13:10 - How do we prevent AI from becoming biased, as we saw with Google Gemini?
16:17 - Is there a danger of AI being too truthful or too opinionated?
18:10 - When do AIs gain moral status? How do we know?
19:40 - What is truth for an AI?
22:37 - What has surprised Bostrom the most regarding AI in the past 10 years?
26:13 - Is AI moving faster or slower than Bostrom imagined 10 years ago?
26:38 - Will we see another AI winter, or AGI first?
28:21 - Is energy a limiting factor in AI?
30:00 - What is Deep Utopia?
33:18 - When will we reach superintelligence and Deep Utopia?
36:00 - Can a Deep Utopia be a problem for humanity?
39:29 - Could a utopian society create a simulated reality as a way to escape?
42:28 - If we are in a simulation, is our consciousness simulated?
47:48 - Elon Musk or Sam Altman?
48:17 - Artificial superintelligence before or after 2035?
49:40 - How can you get Nick Bostrom's book "Deep Utopia"?