“I think the name of the game is just doing everything you can to minimize risk as much as possible… All we can really do is raise awareness about the difference between the security that Salesforce provides to the orgs versus what clients and customers and stewards of the orgs are responsible for protecting within the data of the Salesforce org.” – John Crimmings, DevOps consultant at Slalom Consulting and member of the Low-Code Security Alliance
Agentforce promises speed. But what’s the tradeoff when AI-powered platforms move faster than our security standards?
In this episode of “From Code to the Cloud,” Andrew Davis, founder of the Institute for Transformational Leadership and author of Flow Engineering, sits down with John Crimmings, DevOps Consultant at Slalom Consulting and member of the Low-Code Security Alliance, to explore what’s changing—and what’s not—in Salesforce security.
Here are five essential takeaways from the episode:

What Prompt Engineering Reveals
Crimmings opens the discussion with his hands-on exploration of Agentforce in a standard Salesforce org. As a system administrator, he could perform critical actions, such as changing user emails or profiles, simply by prompting the AI.
At first, it felt like a security violation. But that reaction revealed something deeper: Our perception of risk often shifts when the interface changes, even if the permissions haven’t.
“You feel like you’re doing something you shouldn’t be doing because you’re talking to an agent about it,” Crimmings explains.
The real risk isn’t about unauthorized access—it’s about how AI simplifies and accelerates administrative actions, stripping away the review steps that usually prevent accidents. When permissions are granted but guardrails aren’t enforced, trouble is only a prompt away.
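To see how wide that prompt-level blast radius already is, one practical starting point is to audit which active users hold the kind of broad admin rights an agent inherits when it acts on their behalf. The sketch below is illustrative only: it assumes the simple_salesforce Python library and placeholder credentials, and the two permission flags it checks (Modify All Data and Manage Users) are examples of elevated access, not an exhaustive list.

```python
# Rough audit of which active users hold broad admin rights that an
# AI agent could exercise when running in their context.
# Assumes the simple_salesforce library and placeholder credentials;
# the permission flags queried are illustrative, not exhaustive.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="admin@example.com",   # placeholder credentials
    password="********",
    security_token="********",
)

# Permission set (and profile-owned permission set) assignments that
# grant Modify All Data or Manage Users to active users.
soql = """
    SELECT Assignee.Username, PermissionSet.Label
    FROM PermissionSetAssignment
    WHERE Assignee.IsActive = true
      AND (PermissionSet.PermissionsModifyAllData = true
           OR PermissionSet.PermissionsManageUsers = true)
"""

for rec in sf.query_all(soql)["records"]:
    print(f'{rec["Assignee"]["Username"]:40} {rec["PermissionSet"]["Label"]}')
```

Even a rough list like this makes the conversation concrete: every name on it represents a set of actions that is now only a prompt away.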
Security Has a Marketing Problem

Agentforce demos are compelling. The value is immediate: faster processes, better UX, improved customer experiences. But nobody asks for a security demo.
“People are always wanting to see demos about what Agentforce can do… Nobody is knocking down our door to get a demo about security,” says Crimmings.
This imbalance mirrors a pattern in public health: Everyone wants the benefits of progress, but few want to think through the hygiene that keeps it safe. The problem isn’t just technical—it’s cultural. Security must become part of the narrative from the beginning, not an afterthought once the product is live.
The Gap That No One Sees—Until It’s Too Late
A major theme in the episode is what the Low-Code Security Alliance calls the shared responsibility gap—the disconnect between what Salesforce secures (the platform) and what the customer must secure (the data, permissions, and business logic inside it).
This gap becomes even more critical with tools like Agentforce, where speed and scale outpace review and documentation. Most orgs lack clear, consistent models for who should see what. And without that, you can’t enforce policy, spot abuse, or even recognize when an exposure has occurred.
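One concrete way to start building that visibility is to review the setup audit trail on a schedule, so configuration changes, whether made by an admin, an integration, or an agent acting as one, at least leave a trail someone reads. This is a minimal sketch under the same assumptions as before (simple_salesforce, placeholder credentials); querying SetupAuditTrail generally requires the "View Setup and Configuration" permission.

```python
# Sketch of a weekly review of recent setup changes, so unexpected
# admin actions are at least visible. Assumes simple_salesforce and
# placeholder credentials.
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="********",
                security_token="********")  # placeholder credentials

soql = """
    SELECT CreatedDate, CreatedBy.Name, Section, Display
    FROM SetupAuditTrail
    WHERE CreatedDate = LAST_N_DAYS:7
    ORDER BY CreatedDate DESC
"""

for rec in sf.query_all(soql)["records"]:
    who = rec["CreatedBy"]["Name"] if rec["CreatedBy"] else "system"
    print(f'{rec["CreatedDate"]}  {who:25}  {rec["Display"]}')
```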
AI amplifies whatever shape your current security model is in—for better or worse.

Technical Debt Isn’t Just Code—It’s Access Control Too
Most Salesforce orgs were already struggling to maintain clean, rational permission models before AI; Agentforce will only intensify the mess.
Many orgs are the product of years of patches, vendor implementations, and undocumented decisions. Others are run by small teams who assume they’re “too simple” to need strong access controls. Both extremes are dangerous.
The solution isn’t to slow down innovation but to pair it with thoughtful, ongoing rationalization of users, roles, and automation. Otherwise, platform teams risk accelerating into chaos.
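What that rationalization looks like in practice will vary, but one recurring check is simple: active users who still hold permission sets yet haven’t logged in for months. The sketch below again assumes simple_salesforce and placeholder credentials, and the 90-day threshold is arbitrary; treat it as one input to an access-review cadence, not a complete program.

```python
# Sketch of a recurring access-rationalization check: active users who
# still hold permission sets but have not logged in for 90+ days.
# Assumes simple_salesforce and placeholder credentials; the threshold
# is arbitrary and should match your own access-review policy.
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="********",
                security_token="********")  # placeholder credentials

soql = """
    SELECT Assignee.Username, Assignee.LastLoginDate, PermissionSet.Label
    FROM PermissionSetAssignment
    WHERE Assignee.IsActive = true
      AND Assignee.LastLoginDate < LAST_N_DAYS:90
      AND PermissionSet.IsOwnedByProfile = false
"""

for rec in sf.query_all(soql)["records"]:
    print(f'{rec["Assignee"]["Username"]:40} '
          f'{rec["Assignee"]["LastLoginDate"]}  {rec["PermissionSet"]["Label"]}')
```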
Security Deserves a Front Seat
Agentforce and similar AI tooling represent a powerful leap forward in productivity. But as Andrew Davis and John Crimmings argue, we’ve seen this movie before. New tools promise speed. Risk is ignored—until it isn’t.
Security needs a seat at the table now—not after the breach. That means better visibility into org access, clearer models for user permissions, and a cultural shift that treats responsible development as a product feature, not a compliance checkbox.
Next Step…
Listen to the latest episode of “From Code to the Cloud” for in-depth insights, real-world analogies, and practical steps to align AI innovation with smarter security.