5 Comments

Great article Helen! What role do you think AI companies should play in preventing this? If we look at the past 10 years of tech, it is clear that the Twitters/Googles/Facebooks of the world profit heavily off of disseminating fake news (largely off the back of the ad tech that powers most of their revenue models). How can capitalist desires in AI technology avoid the exact same pitfalls with their products, and even if they could, would they ever really want to?

I have been thinking about deepfakes and their lack of real utility/need, and I guess the best part about them is that I can make porn now and blame it on robots if my boss finds it (dirty dirty boss). If we really have walked over the cliff's edge, how do those who don't break their necks climb back up the mountainside?

I heard a stat on this podcast (https://www.theringer.com/2023/3/21/23649894/the-ai-revolution-could-be-bigger-and-weirder-than-we-can-imagine) that 1 in 10 ML scientists believe their AI innovations will bring about doomsday, yet they still think those innovations are worth creating. What is it about AI innovation that would drive someone to build something that might end humankind as we know it? What problems are we really even solving with these tools? How to stop paying accountants and artists?

At least we still have bananas (https://www.bbc.com/future/bespoke/follow-the-food/the-pandemic-threatening-bananas.html)...nevermind...

Another awesome read, Helen. This is all terrifying, but it’s the world that we created. I’ve always been an optimist, as I believe optimism is what drives progress. And so, whenever someone said that we’re f*cked, I’d (spontaneously, but also eagerly) respond “but these are exciting times, look at the bright side; look at what our human intelligence has been able to create”. But now I’m no longer *that* optimistic (and often am the one saying that we’re f*cked), and I wish I still were. Disseminating untruth has become as easy as buying a laptop and getting an internet connection. It’s a zero-cost game. And many people find it fun and exciting to deceive, as cynicism and individualism and egoism have reached unbearable levels. It’s a game whose potentially devastating consequences are anything but clear to those who play it. Our civilization has shown it knows how to self-correct several times in history, and I want to think that’s going to be the case again whenever that point of no return gets alarmingly close. But this is dreadful!

Other than that, I’m in love with your writing, your thinking, and your eclectic knowledge. Thank you for putting this out.

I'm curious to get your perspective - what is the relationship between trust and hope in the context of advancements in AI?

As a self-labeled tech optimist, I *hope* humans have good intentions and that we do the right things. But reading your reflections on trust prompts me to wonder if being hopeful falls short. Trusting requires me to assume more responsibility than hoping does.
