Inside the DM Economy: Cam McMaster on Providing AI Infrastructure for Creator Inboxes
Underneath every creator’s public feed lies a second economy that runs through direct messages. That is where brand deals arrive, partnerships begin, and communities deepen. It is also where harassment, grooming, and threats take place.
For Cam McMaster and Chris McLoghlin, that inbox layer now requires infrastructure. The Australian entrepreneurs are the co-founders of InboxAgents.ai and Guardii.ai, two connected AI products launched in 2025 to help creators, athletes, and families manage private messaging at scale.
InboxAgents.ai unifies DMs and emails across platforms into a single interface, while Guardii.ai provides abuse detection, threat scoring, and child-safety monitoring within that system. Together, they aim to provide users with a context-aware layer across their inboxes, surfacing high-value conversations and filtering out harmful ones.
The idea did not begin inside the creator economy. It started when a friend showed Chris explicit, unsolicited messages she had received on LinkedIn.
“It was not just common banter. It was quite malicious and frankly disgusting,” Cam says. As he and Chris spoke with more women, many of them not public figures, they saw a broader pattern. “We realized that this was endemic.”
For private individuals, these messages appeared sporadically. For high-attention creators and athletes, they arrived daily, often in overwhelming volumes. At the same time, those same inboxes carried commercial opportunities that were being missed entirely.
Guardii and InboxAgents were built around that tension: the inbox as both risk surface and revenue channel.
The Inbox as Operational Infrastructure
During early alpha testing with 12 creators, including high-profile OnlyFans personalities, Cam expected to see harassment. What he did not expect was the impact on their businesses.
“What came as a surprise was not so much the abuse, [but that] they were missing commercial opportunities left, right and center,” he says.
One creator generating approximately $1 million per month had abandoned her inbox because of the volume of abuse. As a result, she missed six-figure brand deals, according to Cam. “She sent screenshots of brand deals she missed because her inbox was a sewer. She had abandoned it,” he explains.
That insight reframed the company’s value proposition. Abuse filtering was only one part of the equation. The larger opportunity lay in signal extraction.
“Our ICP (Ideal Customer Profile) is larger creators who get a high volume of noise. The signal-to-noise ratio in their messaging feeds is very low, but the signal that is there is high value,” Cam says.
InboxAgents.ai connects via official APIs (Application Programming Interfaces), uses AI inbox agents to aggregate messages from multiple platforms, and routes high-priority conversations to a centralized dashboard. Abuse is quarantined. Brand inquiries and partnership opportunities are surfaced.
For creators operating without management, this becomes a form of self-management infrastructure. “If there are around 100 to 200 million identified creators on the planet, about 1% are managed by professionals,” Cam estimates.
He sees an opportunity to “provide [creators] with tools so they can self-manage affordably and grow their business.”
Why Private Messaging Became the Blind Spot
Public feeds have moderation teams and reporting systems. Private messaging operates differently.
“It’s not because it’s private, but it is because it’s private,” Cam says.
He points to legal liability and platform incentives. “They are protected by Section 230 [“No provider or user of an interactive computer service shall be treated as the publisher or speaker of information provided by another”]. The moment they begin to intervene at that level, they open a can of worms,” he explains.
In Cam’s view, moderating private messages at scale requires compute resources and a tolerance for legal risk. Platforms are structurally conflicted. “The place is designed for as much attention as possible. It doesn’t matter what kind of attention it is,” he says.
Guardii positions itself as an opt-in external layer. Rather than changing platform governance, it overlays context-aware filtering across multiple inboxes simultaneously.
Cam frames the broader mission as cultural as much as technical. “Our mission is to clean things up, reduce parasitic energy and increase positive energy,” he says.
From Creator Abuse to Child Protection
A critical moment during alpha testing expanded Guardii’s scope beyond creators.
A 19-year-old creator using the product received a message from a self-confessed pedophile. The system intercepted it before she saw it. “That was what led us to the parental controls app,” Cam says.
Further research into grooming and sextortion patterns deepened the company’s focus on youth protection. “The sextortion issues with young boys being sextorted online and the suicides that were happening were pretty horrendous,” he says.
Guardii.ai now includes a parental monitoring layer designed to detect progressive predatory patterns. The system stores message content securely to maintain thread continuity and enable context-aware analysis, which Cam views as necessary for identifying risk signals over time.
Product Iteration and Go-to-Market Reality
Guardii.ai remains in beta, and development has been iterative rather than linear.
“We’ve released about three different betas and taken them out of production. We’ve gone through multiple iterations,” Cam says.
He reports that early traction has emerged through B2B partnerships, including major sporting clubs seeking automated abuse reporting and escalation protocols for athletes. Agencies that once relied on junior staff to manually monitor DMs now test Guardii’s AI-driven filtering instead.
The monetization model includes regular and pro subscription tiers, with abuse filtering included across both. While larger creators stand to gain the most value, they are also the hardest to access, representing a distribution paradox Cam openly acknowledges.
The Future of Inbox Governance
Cam’s immediate focus is on distribution partnerships and on refining the product into a system that manages deal flow end-to-end, filtering abuse and escalating threats. He describes this system as “a full-on AI agent sidekick for social media.”
He believes AI-driven content creation will only increase the volume of inbound communication. “I think the noise ratio is only going to get higher. Having tools to decipher the signal from the noise is only going to get more pressing,” he says.
As inbox governance matures, Cam expects a shift in digital norms. “Much cleaner, with hopefully a higher degree of online decorum,” he says.
