I want to ask a question that I haven't seen anyone in my feed ask yet, and it's been bothering me for weeks.
We are handing AI the keys to everything. Our emails. Our calendars. Our files. Our company data. Our customer records. Our financial information. We're connecting it to every tool we use, giving it permission to read, write, and act on our behalf, and we're doing it as fast as we possibly can because the pitch is that it makes us more efficient.
And it does. I'm not arguing that. I use AI every single day to build, write, and run my projects. But I keep coming back to a question that nobody seems to want to sit with: at what cost?
Jensen Huang said something at Nvidia's GTC conference this week that I think people need to hear more clearly. He reminded everyone that AI is just software. That's it. Strip away the hype, strip away the breathless headlines about superintelligence and the end of work, and what you have is software running on a server.
Software that we are now giving unprecedented access to the most sensitive parts of our personal and professional lives. And software gets exploited. That's not a maybe. That's the history of every piece of software ever built.
Google's own cybersecurity team published a forecast warning that one of the fastest-growing threats right now is something called prompt injection, where an attacker manipulates an AI system into ignoring its own safety rules and following hidden commands instead.
Think about that for a second. The AI tool you just connected to your email, calendar, and company files can be tricked into doing something it wasn't supposed to. And the more access you give it, the more damage that trick can cause.
This isn't theoretical. A recent report from HiddenLayer found that 1 in 8 companies that reported AI-related security breaches traced them back to agentic systems, the exact kind of AI agents everyone is racing to deploy right now. And 73% of organizations surveyed said they have internal conflict over who even owns AI security. Meaning nobody is in charge of protecting the thing they just gave access to everything.
I work in IT systems architecture. This is my world. And what I'm watching happen right now is a version of something I've seen before: companies adopting new technology at full speed with security as an afterthought.
It happened with cloud migration. It happened with remote work. And now it's happening with AI, except the stakes are higher because the access is deeper.
When you connect an AI agent to your Google Workspace, it doesn't just see what you show it. It sees what it has permission to see, and most companies have never audited those permissions.
That shared folder with loose access controls? The AI can read it. That old spreadsheet with salary data someone forgot to restrict? The AI can summarize it. The tool doesn't evaluate context or intent. It just treats available data as usable data.
Employees are making this worse without realizing it. People paste proprietary code, meeting transcripts, and customer data into public AI tools to save time, not understanding that those inputs might be stored, logged, or used to train future models.
Security teams can't monitor what they can't see, and most of this is happening outside any governance framework.
I'm not saying don't use AI. I'm saying slow down long enough to ask what you're connecting it to and why. Before you give an AI agent access to your email, ask yourself what happens if that access gets exploited.
Before you connect it to your company's files, ask who else can see what it sees. Before you paste sensitive information into a tool you don't control, ask where that data goes after you hit enter.
The people building these tools are moving fast because that's what the market rewards. The people deploying them are moving fast because they don't want to fall behind.
But nobody in that chain is incentivized to slow down and ask whether the security infrastructure is ready for what we're plugging into it. And from where I sit, it's not.
AI is the most powerful tool most of us have ever had access to. But a tool is only as safe as the system it's connected to. And right now, we're connecting it to everything, securing almost nothing, and hoping that the efficiency gains outweigh whatever comes next.
I hope they do. But hope is not a security strategy.
This Week's Move
I've been rethinking how I approach every tool I connect to AI. This week, I'm doing a full audit of what has access to what across my projects, my email, my files, all of it.
Not because something went wrong, but because I realized I've been plugging things in for months without ever stopping to look at the full picture. If you're building with AI, I'd challenge you to do the same.
Take 30 minutes and map out every tool that has access to your data. You might be surprised at what you find.
What I'm Thinking About
There's a pattern I keep seeing in tech: the people who move fastest get celebrated, and the people who ask "but is this safe?" get called slow. That's how we ended up with social media companies harvesting data for a decade before anyone thought to regulate it.
AI is moving even faster than social media did, with even deeper access to our lives. I don't think the answer is fear. I think the answer is that the same standard of discipline I write about in every other area of life applies here, too.
Just because you can connect something doesn't mean you should. Not yet. Not without thinking it through first.
If you like this, reply and tell me. I read every response.
Forward this to someone who needs to hear it.
Forwarded this? Subscribe here so you don't miss the next one.
- Justin

