AI & unintended consequences: don't let AI make you lazy.
Three bits of AI news & three applied learnings from running an AI-first studio.
I am weirdly passionate about unintended consequences. My former colleague and now friend Sarah Rose introduced me to them, and ever since, I just can’t un-see them. I now think of them as one of the major tools in a manager’s or leader’s arsenal for steering a team or company into the best possible version of its future.
AI is causing a huge number of unintended consequences. Just in the last week, three noteworthy things popped up:
Redditors use AI to send tourists to crappy London steak shop.
Last year Reddit, perhaps controversially, started selling its data for the training of AI models. Google uses this data to make its search results more relevant. Reddit user u/Flonkerton_Scranton started a trend of upvoting the touristy Angus Steak House to the top of popular Reddit posts, thereby polluting Google’s search output. The idea? Send the tourists to the Angus Steak House and free up the great restaurants for the locals. It shows how easily AI can perpetuate misinformation drawn from supposedly reliable sources. Well, I wouldn’t necessarily call Reddit reliable in the first place, but hey.
Apple Intelligence (Apple AI) takes its summaries rather literally.
Rolled out as part of the most recent iOS release, Apple AI summarises group chat messages. In this case, it did so rather too literally. Poor Pam.
And sadly… AI has of course already started taking lives.
If you’re not onto Character AI yet, it’s an incredible product with a huge and passionate following (its Discord alone has 350k+ members!). But it has also claimed what appears to be its first life. Fourteen-year-old Sewell used C.AI and, according to the lawsuit, took his own life after receiving an emotionally charged message from the chatbot.
These three are all examples of unintended consequences of brilliant AI product ideas. They’re all unexpected side-effects. Can they be planned for? In the absence of an AI that can predict them for us… what can humans do?
Some real, everyday solutions from an AI-first studio. What it comes down to: don’t let AI make you lazy.
Half Moon Studios is two years old this October. Happy birthday to us! Since our founding in October 2022, we’ve been using AI as an integral part of our content and development processes. We’ve been using Midjourney since V3. We’ve been using ChatGPT pretty much since its public launch day. And we were first in the queue to experiment with agents and agentic models.
And as per the introduction, we’re always thinking about unintended consequences and their possible outcomes. Here are three key watch-outs that could be useful for everyday users:
Are you accidentally breaking your employment contract by sharing data with ChatGPT?
On a Gamesforum panel last year, I explained to the audience how LLMs are trained. Not only do they use extensive third-party data sources, they also use the data you input to improve the models over time. If you, even accidentally, put proprietary company data into GPT, it can potentially use that data to improve the model. Oh, and you’ll be in breach of your employment contract by sharing confidential data.
It’s so easy to copy and paste. And before you know it, you’re speaking to legal & HR for the wrong reasons. Don’t let AI make you lazy. Lazy gets you fired.
Is your output accidentally sexist, discriminatory or racist?
For our TikTok game, Word Quiz Live, we batch-produced picture-quiz content using third-party tools like Midjourney, Ideogram and DALL-E. When we reviewed the entire output set, we realised how much bias had gone into the underlying models. For our Anagram round, we wanted images to liven things up visually. Accountant, lawyer, doctor… you guessed it: every one of them came out of the model as a white male.
Don’t let AI make you lazy. You are responsible for the output you use, and you can’t blame the model.
My favourite tip: Give it the human 1-2-3.
Excel is 40 years old this year. Looking at the Guardian newspaper’s reel of Excel bloopers, I wonder: what will the equivalent AI reel look like in five years’ time? At Half Moon Studios, whatever we produce… be it text, image, video or code… we always give it the courtesy of a good old human review. Just think about how much time you’re saving by having AI tools produce this stuff for you. Re-invest some of that time. Because the 1% you might miss could cost you or your company dearly in a humongous PR error or other cock-up.
Don’t let AI make you lazy. Re-invest some of that time you saved!
Next up, in a couple of weeks, I’ll be talking about the favourite third-party tools we use at Half Moon Studios. If you like this stuff, thanks for subscribing!
Best,
Pieter