AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.
If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas:
https://ted.com/membership
Follow TED!
Twitter:
https://twitter.com/TEDTalks
Instagram:
https://www.instagram.com/ted
Facebook:
https://facebook.com/TED
LinkedIn:
https://www.linkedin.com/company/ted-conferences
TikTok:
https://www.tiktok.com/@tedtoks
The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit https://TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.
Watch more:
https://go.ted.com/sashaluccioni
https://youtu.be/eXdVDhOGqoE
TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution–Non Commercial–No Derivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy:
https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at
https://media-requests.ted.com
#TED #TEDTalks #AI
Share this page with your family and friends.