Faculty Associate Jon Penney argues that the law banning TikTok has failed. "[A]s of this writing, TikTok remains accessible in the U.S. But it has failed even if it eventually succeeds in forcing ...
Nishant Shah warns that Meta's move away from content moderation represents a dangerous lack of oversight. "A mix of human and algorithmic detection, flagging, scrutiny, resolution, and oversight has ...
Meta's recent overhaul of its content moderation approach marks a significant shift in platform governance. To explore these implications, BKC's Institute for Rebooting Social Media gathered insights ...
Bruce Schneier and Nathan Sanders detail the different ways that evolving AI technologies will be used in drafting legislation.
Read the full conversation from Tech Policy Press.
The end of one wildly popular platform is a chance to overhaul the broken social media industry. Rebecca Rinkevich offers insight into the shifting media landscape in the wake of TikTok's short-lived ...
Susan Benesch weighs in on the political implications of Meta's recent change in speech policies.
Affiliate Ram Shankar Siva Kumar and coauthors suggest ways that Microsoft and other tech giants can mitigate the security risks inherent in emerging AI technologies.
Faculty Associate Leah Plunkett tackles the unregulated world of child influencers. "Where is the line between children appearing occasionally in online family videos and child labor? Which practices ...
BKC Faculty Associate and Campus AI board member Dariusz Jemielniak argues that we need a clear rule, a Fourth Law of Robotics, as an addition to Asimov's classic code.
Ben Brooks and Michelle Fang argue that legislators ought to be more concerned that other nations openly sharing AI models could undercut U.S. dominance in the field.
We've gotten the hang of correcting and preventing human actors' mistakes, but how should we prepare for the new kinds of mistakes wrought by AI? ask Bruce Schneier and Nathan Sanders.