Great article, Helen! What role do you think AI companies should play in preventing this? If we look at the past 10 years of tech, it is clear that the Twitters/Googles/Facebooks of the world profit heavily off of disseminating fake news (largely off the back of the ad tech that powers most of their revenue models). How can capitalist incentives in AI technology avoid the exact same pitfalls with their products, and even if they could, would they ever really want to?
I have been thinking about deepfakes and their lack of real utility or need, and I guess the best part about them is that I can make porn now and blame it on robots if my boss finds it (dirty, dirty boss). If we really have walked over the cliff's edge, how do those who don't break their necks climb back up the mountainside?
I heard a stat on this podcast (https://www.theringer.com/2023/3/21/23649894/the-ai-revolution-could-be-bigger-and-weirder-than-we-can-imagine) that 1 in 10 ML scientists believe their AI innovations will bring about doomsday, yet they still think they are worth creating. What is it about AI innovation that would drive someone to build something that might end humankind as we know it? What problems are we really even solving with these tools? How to stop paying accountants and artists?
At least we still have bananas (https://www.bbc.com/future/bespoke/follow-the-food/the-pandemic-threatening-bananas.html)... never mind...
Thank you for your kind words! And dirty bosses are gonna be dirty bosses hahaha. Funny enough, I did know 3 people way back in 2017/18-ish who had been working on AI-generated influencers, including NSFW ones. AFAIK they did make solid money with it, even though back then it was a very niche market and the tech wasn't as impressive as it is today.
You posed some really important and tough questions, and I don't think anyone has a definitive answer. I do have some counterintuitive thoughts and ideas around it, and I'll elaborate on them in the upcoming essay -- don't worry, it's gonna be a short one -- to see how they might resonate, counterintuitive as they are. I don't think we'd need to worry if AIs were really smart and running the show. Our problem is that AIs are not nearly as smart as they should and could be, yet they are already running the world.
Of course, incentives, as you mentioned. The business model that the internet lives on is another dealbreaker. I've already planned a later part of this series around it. (Geez, you are like a mindreader!)
Another awesome read, Helen. This is all terrifying, but it’s the world that we created. I’ve always been an optimist, as I believe optimism is what drives progress. And so, whenever someone said that we’re f*cked, I’d (spontaneously, but also eagerly) respond “but these are exciting times, look at the bright side; look at what our human intelligence has been able to create”. But now I’m no longer *that* optimistic (and am often the one saying that we’re f*cked), and I wish I still were. Disseminating untruth has become as easy as buying a laptop and getting an internet connection. It’s a zero-cost game. And many people find it fun and exciting to deceive, as cynicism, individualism, and egoism have reached unbearable levels. It’s a game whose potentially devastating consequences are anything but clear to those who play it. Our civilization has proven it knows how to self-correct several times in history, and I want to think that will be the case again whenever that point of no return gets alarmingly close. But this is dreadful!
Other than that, I’m in love with your writing, your thinking, and your eclectic knowledge. Thank you for putting this out.
Awww thank you Silvio! Indeed, it's terrifying, but most of the time media outlets and personalities tend to focus only on the phenomenon of "fake news" and whatnot, instead of its fundamental impact. So that's the intent of this essay: not to be alarmist or pessimist, but to remind ourselves that "hey, this has been happening for a long time in wars and espionage, now it's all over the place, and it damages our social fabric more than we think". I feel that only by being realist and acknowledging the potential dangers and harms of these very powerful tools can we prepare for and work towards an optimistic future. Sounds counterintuitive, but so is "si vis pacem, para bellum".
One reason that I chose to riff on Freud's book title is that the book was written in 1930, and it posed a key question: can our collective social forces subdue the violent side of human nature that always seeks to break those social boundaries? Alas, we all know what happened 9 years later. I framed my question in a similar way, but I do hope that we fare better this time.
I'm curious to get your perspective: what is the relationship between trust and hope in the context of advancements in AI?
As a self-labeled tech-optimist, I *hope* humans have good intentions and that we do the right things. But reading your reflections on trust prompts me to wonder if being hopeful falls short. Trusting requires me to assume more responsibility than hoping does.