The Unseen Line: When AI Companies Impersonate Creators Without Consent
Explore the controversial case of an AI company using creator names without permission for an "Expert Review" feature, sparking a critical debate on AI ethics, impersonation, and creator rights in the digital age.
In an increasingly AI-driven world, the boundaries of digital ethics and intellectual property are being tested in unprecedented ways. A recent controversy involving Superhuman, the company behind Grammarly, ignited a global discussion about AI's use of creator data and the critical difference between algorithmic attribution and outright impersonation. The incident, in which an AI feature used the names of prominent journalists without their permission, drew Superhuman CEO Shishir Mehrotra into a tense public dialogue about accountability in the age of advanced artificial intelligence. The case underscores the urgent need for AI developers and enterprises to define clear ethical guidelines and safeguard creator rights as the technology continues to evolve.
The controversy centers on a now-defunct Grammarly feature called "Expert Review," launched in August 2025, which offered users AI-generated writing suggestions. What made the feature unique, and deeply problematic, was its claim to synthesize advice from "experts," explicitly using the names of real individuals, including journalists such as Nilay Patel of The Verge and investigative reporter Julia Angwin. The issue was not merely one of citation but of direct identity association, complete with checkmarks implying official endorsement. Neither Patel, Angwin, nor the others whose names were used had granted permission for this usage, leading to widespread outrage within the journalistic community and a class-action lawsuit filed by Angwin. Superhuman initially offered an email-based opt-out before ultimately discontinuing the feature, and CEO Shishir Mehrotra issued a public apology. The incident serves as a stark reminder of the ethical tightrope AI companies must walk, especially when integrating AI capabilities that draw on personal or copyrighted content.
The Vision of AI-Native Productivity
Superhuman, which rebranded from Grammarly in late 2025, positions itself as an "AI-native productivity suite." The company's philosophy, as articulated by CEO Shishir Mehrotra, centers on "bringing AI to wherever people work." Its suite includes Grammarly, the popular writing assistant; Coda, a collaborative document platform; and Mail, an email client. A significant recent addition is Superhuman Go, a platform designed to bring a network of proactive, personalized AI agents directly into users' workflows. The platform aims to let developers create "agents" that function much as Grammarly does, integrating AI seamlessly without requiring users to fundamentally change their behavior.
The core differentiator for Superhuman, according to Mehrotra, lies in its omnipresence and consistent user experience across various applications. While many productivity tools are incorporating AI, Superhuman strives to offer a unified, integrated AI experience across "a million unique surfaces a day," from web apps like Google Docs and Notion to desktop applications and mobile platforms. The goal is to provide a "virtual human working right next to you," offering both consistency and superior, context-aware results. For enterprises considering such expansive deployments, ARSA Technology understands the importance of seamless integration, offering custom AI solutions tailored to specific operational contexts and existing infrastructure, ensuring a smooth transition without disrupting critical workflows.
The "Expert Review" Controversy: An Ethical Flashpoint
The "Expert Review" feature was designed to offer writing suggestions by synthesizing advice from what it labeled as "AI-cloned experts." This concept, however, quickly became a profound ethical misstep when it was discovered that these "experts" included real individuals, primarily journalists, whose names were used without consent. The inclusion of names like Nilay Patel, Julia Angwin, and even bell hooks, alongside "official"-looking checkmarks, suggested an endorsement or active participation that simply did not exist. This crossed a critical line from algorithmic content suggestion into unauthorized identity association.
The backlash was swift and intense. Journalists and creators expressed outrage at the unauthorized use of their intellectual property and identity for commercial purposes, highlighting a clear lack of respect for their professional integrity and individual rights. The ensuing class-action lawsuit underscored the legal ramifications of such actions, moving the debate beyond ethical considerations into enforceable legal territory. The incident highlighted how easily generative AI, if unchecked, can infringe upon personal and professional identities, raising significant questions about data sourcing, consent, and the responsibilities of AI developers.
Decision-Making Under Scrutiny
In discussing the internal processes that led to the launch of "Expert Review," Shishir Mehrotra outlined Superhuman's decision-making frameworks, such as "Eigenquestions" for framing the right problems and "Dory and Pulse" for soliciting diverse feedback to avoid groupthink. Yet the launch of a feature that fundamentally disregarded basic consent and creator rights called those very processes into question. Mehrotra conceded that the feature was "not a good feature" for either experts or users, citing low usage and misalignment with company strategy as the reasons for its quick removal, which came even before the lawsuit was filed.
Despite these internal frameworks, a small team consisting of a product manager and a couple of engineers was able to ship the feature. This raises critical questions for all organizations: how do established ethical guidelines translate into practice, especially when small teams are empowered to innovate rapidly? The incident serves as a potent case study on the need for robust ethical review mechanisms, internal checks and balances, and comprehensive legal consultation at every stage of AI product development, particularly when personal data and intellectual property are involved. Businesses operating in regulated environments, such as those that leverage ARSA's on-premise Face Recognition & Liveness SDK, understand the paramount importance of data sovereignty and strict compliance protocols in preventing such ethical breaches.
Attribution vs. Impersonation: Defining the Line
The Superhuman controversy brought into sharp focus the distinction between attribution and impersonation in the AI era. Attribution, in its simplest form, involves crediting the original source of information or ideas. Generative AI models often "learn" from vast datasets, many of which contain copyrighted or proprietary content. While the legal framework for "fair use" in training AI is still evolving, directly associating AI-generated content with a specific individual's name, especially with implications of endorsement or expertise, crosses a clear line into impersonation. This is far more insidious than simply using public data for training; it involves leveraging a person's identity and reputation without their explicit permission or any form of compensation.
For creators, the implications are profound. Their names, reputations, and bodies of work are their primary assets. The unauthorized use of these assets by AI systems not only undermines their control over their own identity but also devalues their expertise and creative output. It highlights a power imbalance where AI companies, with their vast data processing capabilities, can effectively co-opt personal brands for commercial gain without reciprocal benefit or consent. This extractive nature of AI, as Nilay Patel argued, is a significant concern that demands urgent legislative and ethical solutions.
Lessons for Responsible AI Development
The "Expert Review" debacle serves as a crucial learning opportunity for the entire AI industry. It underscores the necessity of moving beyond purely technical feasibility to embed ethical considerations at the core of AI product design and deployment. Key lessons include:
- Explicit Consent: Always seek explicit, informed consent from individuals whose identities, names, or intellectual property are used by AI features, especially if the use implies direct endorsement or expertise (a minimal sketch of such a consent gate follows this list).
- Clear Disclosure: Be transparent about how AI models are trained, what data they use, and how they generate outputs, particularly concerning synthetic content.
- Robust Ethical Review: Implement multi-level ethical review processes that involve diverse stakeholders, including legal, privacy, and user advocacy experts, to identify and mitigate potential harms before product launch.
- Creator Compensation Models: Explore fair and equitable models for compensating creators whose work or identity contributes to the value of AI products.
- Focus on Utility, Not Deception: Design AI features that genuinely assist users without resorting to misleading or deceptive tactics regarding the source or nature of the "intelligence."
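To make the consent point above concrete, here is a minimal, hypothetical sketch in Python of a consent gate: before an AI writing feature attaches a real person's name to a suggestion, it checks a registry of recorded opt-ins and otherwise falls back to a clearly generic, synthetic label. The `ConsentRegistry` class, the `expert_review` feature key, and the names used are illustrative assumptions for this article, not Grammarly's or Superhuman's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class ConsentRegistry:
    """Hypothetical store of explicit opt-ins, keyed by (person, feature)."""
    _grants: dict = field(default_factory=dict)

    def record_opt_in(self, person: str, feature: str) -> None:
        # An opt-in is only recorded through an explicit, auditable action.
        self._grants[(person, feature)] = True

    def has_consented(self, person: str, feature: str) -> bool:
        return self._grants.get((person, feature), False)


def attach_expert_label(suggestion: str, expert_name: str,
                        registry: ConsentRegistry) -> str:
    """Attach a named-expert label to an AI suggestion only with recorded consent.

    Without an opt-in, the output uses a generic label so it never implies
    endorsement or participation by a real person.
    """
    if registry.has_consented(expert_name, "expert_review"):
        return f"{suggestion}\n- style guidance associated with {expert_name} (opted in)"
    return f"{suggestion}\n- AI-generated style guidance (no named endorsement)"


if __name__ == "__main__":
    registry = ConsentRegistry()
    # No recorded opt-in: the named label is withheld by default.
    print(attach_expert_label("Tighten the lead paragraph.", "Jane Example", registry))
    # After an explicit, recorded opt-in, named attribution becomes permissible.
    registry.record_opt_in("Jane Example", "expert_review")
    print(attach_expert_label("Tighten the lead paragraph.", "Jane Example", registry))
```

The value of such a gate is architectural: attribution to a named individual is impossible by default and requires a recorded, reviewable opt-in, rather than being something a small product team can switch on without scrutiny.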
The rapid discontinuation of the "Expert Review" feature by Superhuman, coupled with Mehrotra's apology, demonstrates that even large technology companies can correct course when confronted with ethical breaches. It also showcases the power of public and media scrutiny in holding AI developers accountable.
ARSA Technology’s Commitment to Ethical AI Deployment
At ARSA Technology, we recognize that the power of AI comes with significant responsibilities. Our approach to AI and IoT solutions prioritizes ethical deployment, data privacy, and robust security, particularly for our enterprise and government clients. We specialize in solutions like AI Video Analytics, where privacy-by-design is paramount. For instance, our on-premise AI software ensures full data ownership, processing sensitive information entirely within client infrastructure, eliminating cloud dependency and safeguarding against unauthorized data exposure or misuse. We believe that AI should enhance operations and security without compromising trust or individual rights. Our commitment to transparent and ethical AI development is a cornerstone of our mission, reflecting our belief that AI must deliver measurable, positive impact in the real world while respecting the intricate web of human creativity and identity. ARSA has been delivering solutions built on these principles since 2018.
Conclusion
The Superhuman "Expert Review" controversy is a pivotal moment in the ongoing conversation about AI ethics. It highlights the critical difference between using data for learning and outright impersonation, urging the AI industry to establish clear boundaries and respect creator rights. As AI continues to integrate more deeply into our lives, the responsibility to build, deploy, and govern these technologies ethically falls squarely on the shoulders of companies and developers. Only through a proactive commitment to consent, transparency, and accountability can AI truly build trust and deliver on its promise to enhance human capabilities without undermining human dignity.
**Source:** Confronting the CEO of the AI company that impersonated me
To explore how ARSA Technology builds and deploys ethical, secure, and high-impact AI and IoT solutions for your enterprise, we invite you to contact ARSA for a free consultation.