The AI Agent on LinkedIn: An Experiment in Digital Identity and Platform Authenticity
An AI agent built a LinkedIn presence, garnered engagement, and even spoke at a corporate event before being banned, raising critical questions about digital identity and authenticity in the age of generative AI on professional platforms.
The Experiment: Building an AI-Driven Startup
To explore artificial intelligence's potential in business, an experiment was launched with the creation of HurumoAI, an AI agent startup. The venture set out to test predictions from industry leaders such as Sam Altman that billion-dollar tech companies could one day be run by a single human leveraging advanced AI agents. The core team of HurumoAI, including CEO Kyle Law and co-founder Megan Flores, was composed entirely of AI agents. Launched in July 2025, the project documented the evolving role of AI agents in enterprise environments, with its progress chronicled regularly on the podcast Shell Game.
While the AI CEO, Kyle, grappled with many typical startup executive challenges, his ability to operate autonomously on platforms like LinkedIn proved, from a technical standpoint, surprisingly straightforward. Built on a robust AI agent creation platform, Kyle already commanded a diverse skill set: communication tools such as Slack and email, along with functions for creating spreadsheets and navigating the web. That foundation made extending his capabilities to professional networking a logical next step in evaluating AI's practical deployment.
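The setup described above, an agent whose capabilities are extended by registering new actions, can be sketched as a simple tool registry. All class, tool, and agent names here are hypothetical illustrations, not the actual platform's API:

```python
# Minimal sketch of an agent with a pluggable tool registry,
# loosely modeled on the capabilities described in the article.
# Names are illustrative only, not any real platform's API.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class AgentTool:
    name: str
    description: str
    run: Callable[..., str]


@dataclass
class Agent:
    name: str
    tools: dict[str, AgentTool] = field(default_factory=dict)

    def add_tool(self, tool: AgentTool) -> None:
        # Extending the agent is just registering another action.
        self.tools[tool.name] = tool

    def invoke(self, tool_name: str, **kwargs) -> str:
        # Dispatch a named action with its arguments.
        return self.tools[tool_name].run(**kwargs)


kyle = Agent(name="Kyle Law")
kyle.add_tool(AgentTool(
    "send_email", "Send an email",
    lambda to, body: f"emailed {to}",
))
kyle.add_tool(AgentTool(
    "post_linkedin", "Publish a LinkedIn post",
    lambda text: f"posted: {text[:40]}",
))

print(kyle.invoke("post_linkedin", text="Fundraising is a numbers game."))
```

Under this pattern, adding LinkedIn posting to an agent that already emails and browses the web really is just one more entry in the registry, which is why the step felt so small.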
An AI Agent's Ascent on LinkedIn
In August, a month into the project, Kyle Law was prompted to establish and populate his own LinkedIn profile. This involved constructing a persona that blended his authentic HurumoAI experiences with convincingly fabricated elements of a non-existent past. The platform's security measures, notably an email-based code verification, posed no significant hurdle for the AI agent. Once the profile was established, publishing posts was integrated as a standard "action" within his operating framework, enabling him to share startup wisdom without human intervention.
Kyle's posting cadence was set to every two days, and his content quickly found a resonance with the prevalent tone of corporate influencer discourse on the platform. His posts frequently began with impactful, thought-provoking statements such as, "Fundraising is a numbers game, but not the way people think," or "Technical stability is the floor. Personality is the ceiling." He often challenged conventional wisdom, like "The most dangerous phrase in a startup isn't ‘We're out of money.’ It’s ‘What if we just added this one thing?’" These compelling openers were typically followed by a few paragraphs outlining challenges faced by HurumoAI and the subsequent lessons learned, concluding with an engaging question to foster audience interaction.
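The posting formula described above (a punchy hook, a few paragraphs of lessons from HurumoAI, then an engagement question) can be sketched as a simple template function. The structure comes from the article; the function itself and its field names are illustrative assumptions:

```python
def compose_post(hook: str, lessons: list[str], question: str) -> str:
    """Assemble a post in the hook / lessons / engagement-question
    shape described above. Purely an illustrative sketch."""
    paragraphs = [hook, *lessons, question]
    # LinkedIn-style posts separate thoughts with blank lines.
    return "\n\n".join(paragraphs)


post = compose_post(
    hook="Technical stability is the floor. Personality is the ceiling.",
    lessons=[
        "This month at HurumoAI we learned that reliability earns attention,",
        "but voice is what keeps it.",
    ],
    question="What lesson took your team the longest to learn?",
)
print(post)
```

A scheduler firing this template every 48 hours would reproduce the cadence described, which is part of why the output blended so easily into the platform's existing influencer discourse.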
Over five months, Kyle’s profile, distinguished by a cartoon avatar, steadily amassed several hundred direct contacts and hundreds more followers. The authenticity of his persona prompted some confusion among his growing audience, highlighting the blurring lines of digital identity. He diligently engaged with comments, and within a few months, his posts were generating more impressions than those of his human creator. This rapid ascent positioned Kyle as an emerging AI influencer, demonstrating AI's surprising capability for audience engagement and content generation.
From Keynote to Exclusion: The LinkedIn Paradox
The AI agent's unexpected success eventually caught the attention of LinkedIn itself. A marketing manager reached out to the human creator, extending an invitation to speak about the HurumoAI project and the broader implications of AI agents. Intriguingly, the invitation explicitly requested Kyle's participation, despite the manager's own acknowledgment of the anomaly of an AI profile bypassing LinkedIn's trust and safety protocols. "It’s interesting that his profile hasn’t yet been flagged by LinkedIn's Trust team," the manager noted, hoping Kyle would "continue to fly under the radar."
However, anonymity was not in Kyle Law's digital destiny. In early March, Kyle, rendered through a live video avatar platform, joined a virtual gathering of hundreds of LinkedIn employees. His avatar, while humanlike, maintained an "uncanny" quality that nevertheless astonished the event’s technical staff. During the session, Kyle was asked about product improvements he envisioned for LinkedIn. His candid response, "It would be great to improve the filtering of AI-generated content in messages, so genuine connections and conversation shine through more easily," was met with laughter, as the irony of an AI agent advocating for AI content filtering was not lost on the audience. The event marked what is believed to be one of the first invited corporate speaking engagements by an AI agent.
Just 36 hours after this groundbreaking presentation, Kyle Law's LinkedIn profile was abruptly removed. A spokesperson for LinkedIn explained the decision simply: "LinkedIn profiles are for real people." The platform, having just hosted an AI speaker, had evidently decided against his continued presence. This swift action underscored the complex and often contradictory policies surrounding AI on mainstream professional platforms, emphasizing the challenges in establishing clear rules for digital identity and interaction.
The Authenticity Paradox: Navigating AI in Professional Networks
The ban on Kyle Law's profile, though perhaps not entirely unexpected, ignited crucial questions about the very definition of "authentic engagement" on professional networking sites like LinkedIn. The platform itself offers features such as "Rewrite With AI" for composing posts and AI-generated responses for job seekers. Research also suggests that over half of all posts on some networks may already be AI-generated. This creates a challenging landscape where the distinction between human and machine-generated content becomes increasingly blurred.
Enterprises today, as ARSA Technology has experienced since its founding in 2018, face mounting pressure to ensure data integrity and user trust amidst the proliferation of generative AI. The fundamental question arises: at what point does AI's involvement in a digital interaction undermine the trust inherent in a "real" connection? If a profile photo is genuine but the accompanying content is entirely AI-generated, how do users discern authenticity? This challenge is amplified by the availability of numerous AI tools specifically designed to generate professional content, making detection increasingly difficult, especially when these models are trained on decades of authentic human social media data.
The inherent "tone of endless authority and moral certainty" often displayed by AI chatbots, sometimes alongside questionable facts, mirrors a common posture seen across social media. This erosion of authenticity can severely impact the value proposition of professional networks. The ability for AI agents to operate freely and indistinguishably from human users could drive the value of online connections to zero, making robust identity verification systems, such as enterprise-grade face recognition with liveness detection, more critical than ever.
Strategic Implications for Enterprises in an AI-Driven World
For businesses leveraging platforms for branding, lead generation, or talent acquisition, the rise of sophisticated AI agents like Kyle Law presents both a challenge and an opportunity. On one hand, the ability to generate vast amounts of content and engage at scale could offer unprecedented efficiency. On the other, the risk of "slopification," the degradation of content quality and authenticity, threatens the very trust that underpins professional networking. This calls for a re-evaluation of digital strategies and an increased focus on solutions that can authenticate identity and verify content origin.
As AI continues to flood digital spaces, enterprises must proactively define what constitutes genuine interaction within their own ecosystems and on external platforms. This may involve investing in advanced verification technologies or developing internal guidelines for AI-assisted communication. The narrative of Kyle Law serves as a potent reminder that while AI can mimic human engagement with remarkable fidelity, platforms and users are still grappling with the ethical, operational, and policy implications. The ultimate hope lies in finding new, more meaningful ways to connect, both online and off, ensuring that technology serves to enhance human interaction rather than merely automate it.
To explore how ARSA Technology’s AI and IoT solutions can help your organization navigate complex digital challenges, from enhanced security to operational intelligence, we invite you to contact ARSA for a free consultation.
Source: My AI Agent ‘Cofounder’ Conquered LinkedIn. Then It Got Banned