Trust in the Age of AI
“AI generated content is coming for your attention!” “AI agents are replacing humans on the internet!” “You are going to be scammed by AI!” “AI tooling is going to send developers to the poorhouse!” Headlines like these are probably flashing across your various social media feeds, and some of those posts were probably created by AI bots whose sole purpose is farming content for your clicks and attention. It’s easy to lose trust in what you see and read on the internet.
So how do you know who to turn to, or who to trust, as humans lose pace with the sheer volume of output created by generative AI? Even better, as a security professional, how do you maintain trust in the software your company develops when AI helps develop that too? AI and its applications are rapidly transforming how work gets done, how we think about interaction, and how we trust what we read and see in the digital world. Kyle Hill put out a compelling YouTube video on how and why generative AI is eroding trust on the internet even further, and it is definitely worth a watch.
This is a topic I’ve been thinking over for the past few years, and here is my 100% human-generated, non-AI-created To Do list for trusting in the era of AI. As security professionals, we have been withholding trust for years and don't spend it easily. Many of those lessons still apply, with slight updates:
Trust Humans (a little) - We humans are in this together, and we're the same batch of schmoes you've always known. To establish trust that you're dealing with a human and not an AI, show up and meet the other person. Until you've met someone in person and exchanged social media accounts, you don't really have a good way to trust that they're not being impersonated. Right now there isn't a good way to establish trust between real life and the internet, but that's because there hasn't been a pressing, universal need for one. Until we figure out a driver's license or ID card-like scheme for the internet, the best guidance we have is to meet people in real life, or rely on people you've met in real life, to establish trust in the human behind the social media account.
Trust Tech (2FA++) - Scripts, bots, and impersonators have been around for a long time. This is the technical version of validating someone's social media account by meeting them in person: instead of requiring an introduction that happens outside the internet, these methods rely on channels that AI doesn't have easy access to, such as a shared secret exchanged out of band (see the sketch below).
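To make that concrete, here is a minimal sketch of one such channel: time-based one-time passwords (TOTP), built with the pyotp library. The secret and the exchange scenario are illustrative assumptions; the point is that the secret never travels over a channel an AI bot can scrape.

```python
# Minimal TOTP sketch using the pyotp library. The secret here is
# hypothetical; in practice you'd exchange it in person or over another
# trusted, out-of-band channel and store it on both sides.
import pyotp

# Generate a shared secret once and give a copy to the person you trust.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Later, the person proving their identity reads the current code off
# their device and sends it to you.
code = totp.now()

# You verify it against your copy of the secret. A bot scraping public
# posts never sees the secret, so it can't produce a valid code.
print("verified:", totp.verify(code))  # True within the 30-second window
```

The same idea generalizes to any out-of-band proof: a phone call to a number you already know, a code word agreed on over coffee, or a signed message with a key you verified in person.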
Get your Ducks in a Row - Sometimes we've found an AI that we can trust to be an AI, and we want it to do something useful for us. One main limitation of AI models is the number of tokens they have to "think" about a given task. Since most people use text inputs, a token is roughly a whole word or part of a word. If you overwhelm a model by giving it too much context or instructions that run too long, it will run out of tokens and the conversation will end or go off the rails; essentially, if you talk too much to ChatGPT, it will get overwhelmed and start making mistakes. AI copilot code-writing software currently performs best when asked to complete lines, not programs. When instructing AI to help out, don't give it all the ducks to look after; limit your scope to a single, discrete, duckling-sized task (see the sketch below).
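As a rough illustration, here is a sketch that counts tokens with the tiktoken library before sending a prompt, and naively splits an oversized one into paragraph-sized chunks. The 4,000-token budget is an illustrative number, not any particular model's real limit.

```python
# Rough sketch: count tokens with the tiktoken library and split an
# oversized prompt into smaller, duckling-sized tasks. The budget below
# is an illustrative assumption, not any specific model's limit.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
TOKEN_BUDGET = 4_000

def fits_budget(text: str, budget: int = TOKEN_BUDGET) -> bool:
    """True if the text fits inside the token budget."""
    return len(enc.encode(text)) <= budget

def split_into_tasks(big_prompt: str, budget: int = TOKEN_BUDGET) -> list[str]:
    """Naively split an oversized prompt at paragraph boundaries.
    (A single paragraph bigger than the budget is left as its own chunk.)"""
    chunks, current = [], []
    for paragraph in big_prompt.split("\n\n"):
        candidate = "\n\n".join(current + [paragraph])
        if fits_budget(candidate, budget):
            current.append(paragraph)
        else:
            if current:
                chunks.append("\n\n".join(current))
            current = [paragraph]
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Real chunking strategies are smarter than this, but the principle holds: measure the work before handing it over.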
Count the Fingers (Weirdness Checks) - A problem people noticed right away was that when AI created images of hands, it couldn't really get them right. We all learned that if a photo showed someone with a weird hand and extra fingers, it was AI generated (or an artist's satire of AI). That's an example of a weirdness check that most of us are familiar with by now. As AI tooling becomes ubiquitous, get used to running weirdness checks: either use them to weed out AI-generated content (e.g., by looking for "As an AI model" in product reviews to find automatically generated ones; a toy version follows below), or adapt to the weirdness when using AI for productive purposes.
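Here's a toy version of that review check. The phrase list is an illustrative assumption; real detection needs far more signals than a handful of regexes, but telltale boilerplate still catches the laziest bots.

```python
# Toy weirdness check: flag review text containing telltale AI boilerplate.
# The phrase list is illustrative; real detection needs far more than this.
import re

AI_TELLS = [
    r"as an ai (language )?model",
    r"i (do not|don't) have personal (opinions|experiences)",
    r"my knowledge cutoff",
]

def looks_ai_generated(text: str) -> bool:
    """True if the text contains any known giveaway phrase."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in AI_TELLS)

reviews = [
    "Great blender, survived my smoothie phase.",
    "As an AI model, I cannot taste food, but this blender has 5 stars!",
]
print([r for r in reviews if looks_ai_generated(r)])
```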
Vendor & Open Source Software (OSS) Risk Management - If you're like me, you don't have a pet AI model that you've built from scratch and trained yourself. This means that all your AI needs will be met by "Somebody Else's Software" or hosted on "Somebody Else's Computer." If you're consuming AI, treat the provider as a vendor and do all the risk management things. If you're building AI, vet the OSS libraries and training data by looking at contributor and repository risk signals (a sketch follows below). If you're concerned about your vendors using AI insecurely, count the fingers whenever they're handing stuff over.
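As an example of what "repository risk signals" might look like in practice, here is a sketch that pulls a few coarse health indicators from the GitHub REST API. The chosen signals and the example repository are illustrative; a real vetting process would weigh many more factors.

```python
# Sketch: pull coarse health signals for a dependency from the GitHub
# REST API. Which signals matter, and how much, is a judgment call;
# these are illustrative. Unauthenticated requests are rate-limited.
import requests

def repo_risk_signals(owner: str, repo: str) -> dict:
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}", timeout=10
    )
    resp.raise_for_status()
    data = resp.json()
    return {
        "stars": data["stargazers_count"],
        "last_push": data["pushed_at"],        # stale repos are a risk signal
        "open_issues": data["open_issues_count"],
        "archived": data["archived"],          # archived means unmaintained
    }

# Hypothetical usage; swap in the dependency you're actually vetting.
print(repo_risk_signals("psf", "requests"))
```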
Secure the (rest of the) Iceberg - That old security metaphor is still floating around because it's just so dang useful. If you’re new to the game and haven't had the pleasure of encountering the security iceberg, the basic gist is that the ice above the water, the part you can see, is only about 10% of the whole; it was the hidden 90% below the waterline that sank the Titanic. AI presents new risks, new attack vectors, new attack surfaces, and new threats to establishing trust, but all of the old stuff is still there too. If you haven't cleaned house and gotten the old-school security habits down pat, focusing on AI-originated risk may feel like a bit of a misdirection.
AI is coming. It's too alluring to ignore, and the promises it whispers are irresistible. Don't trust all of it, but don't abandon all hope either. Keep your ear to the ground and put a little trust in humans.
There are always more words to spend on a topic like this one, but I've hit my budget for now. Stay secure, and never forget the humans.