When people think about AI Risk, they imagine an artificial superintelligence malevolently and implacably pursuing its goals, with humans standing by, powerless to stop it. But what if AI Risk is more subtle? What if the same processes that have been used so successfully for image recognition are turned toward increasing engagement? What if that engagement ends up looking a lot like addiction? What if the easiest way to create that addiction is by eroding the mental health of the user? And what if it's something we're doing to ourselves?