Examples of Superintelligence Risk
In talking to people who don't think Superintelligence Risk is a thing we should be prioritizing, it's common for them to want an example of the kind of thing I'm asking about. Unfortunately, I have...
Conversation with Bryce Wiedenbeck
A few days ago I spoke with Bryce Wiedenbeck, a CS professor at Swarthmore who teaches AI, as part of my project of assessing superintelligence risk. Bryce's views were relatively similar to Michael's: AGI is...
Window Vent
During the summer (here) night-time outdoor temperatures are typically pretty pleasant, but sleeping inside can still be uncomfortable because of heat stored up during the day. The normal fan-based...
Technical Distance to AGI
I think the largest piece of the disagreement among ML people over how to prioritize AI risk may turn out to be: how likely is it that we soon get AGI that looks a lot like the systems we have now?...
Superintelligence Risk Project Update
I've now been working on my project of assessing risk from superintelligence for a little over a week, though I was traveling at the end of last week. To keep myself motivated, and to let other people...