Ethics Untangled
Ethics Untangled is a series of conversations about the ethical issues that affect all of us, with academics who have spent some time thinking about them. It is brought to you by the IDEA Centre, a specialist unit for teaching, research, training and consultancy in Applied Ethics at the University of Leeds.
Find out more about IDEA, including our Masters programmes in Healthcare Ethics and Applied and Professional Ethics, our PhDs and our consultancy services, here:
ahc.leeds.ac.uk/ethics
Ethics Untangled is edited by Mark Smith at Leeds Media Services.
Music is by Kate Wood.
53. How should social media platforms regulate AI-generated content? With Jeffrey Howard
AI-generated content is a familiar and increasingly prevalent feature of social media. Users post text, video, audio and images created by AI, sometimes making clear that they are doing so, sometimes not. This isn't always a problem, but some uses of AI-generated content do pose significant dangers. So do social media platforms need policies in place specifically to deal with this form of content? Jeffrey Howard is Professor of Political Philosophy and Public Policy at University College London. In a paper co-authored with Sarah Fisher and Beatriz Kira, he argues that policies targeting AI-generated content specifically aren't necessary or helpful. It was great to get the chance to talk to him about why he thinks this, and about how platforms should moderate this type of content without shutting down valuable free speech.
Ethics Untangled is produced by IDEA, The Ethics Centre at the University of Leeds.
Bluesky: @ethicsuntangled.bsky.social
Facebook: https://www.facebook.com/ideacetl
LinkedIn: https://www.linkedin.com/company/idea-ethics-centre/