Grammarly Removes AI Feature Which Used Real Authors’ Identities, Faces Class Action Lawsuit

Grammarly has officially disabled its "Expert Review" artificial intelligence feature following a wave of public backlash and the filing of a class action lawsuit alleging the company misappropriated the identities and reputations of prominent journalists and authors without their consent. The writing assistant platform, which recently expanded its suite of generative AI tools, now faces intense legal scrutiny over its decision to monetize the names and likenesses of both living and deceased writers to provide AI-generated feedback to its subscribers.

The controversy centers on a tool designed to simulate the editorial voices of subject-matter experts, which the company marketed as a way to elevate professional and academic writing. However, the discovery that the tool leveraged the specific identities of real-world professionals—ranging from tech journalists to world-renowned historians—has ignited a broader debate regarding the ethical boundaries of generative AI and the commercial exploitation of personal identity.

The Rise and Sudden Fall of the Expert Review AI Feature

Launched in August as part of a broader rollout of eight specialized AI agents, the Expert Review feature was initially integrated into Grammarly’s Free and $12-per-month Pro subscription plans. The company promoted the tool as a sophisticated feedback mechanism that could analyze a user’s text and provide critiques based on "insights from subject-matter experts and trusted publications."

According to archived versions of Grammarly’s website, the feature promised to provide AI-generated feedback "based on publicly available expert content." Users were given the ability to personalize their experience by selecting the names of specific authors, essentially prompting the AI to mimic the editorial style and expertise of those individuals.

In its initial marketing materials, Grammarly described the agent as a tool to help users meet "rigorous academic or professional standards tailored to the user’s field." By drawing on what it termed "influential perspectives," the company sought to position itself as more than just a grammar checker, evolving instead into a comprehensive editorial partner.

Investigative Reports Reveal Identity Misappropriation

The feature remained largely under the radar until an investigative report by Wired revealed the extent to which real-world identities were being used. The report highlighted that Grammarly was offering AI-generated edits and critiques attributed to specific, identifiable writers and academics. The list included high-profile journalists from outlets such as The New York Times, Bloomberg, and The Verge, as well as academic figures.

Following the report, numerous writers discovered their names were being used as "expert sources" within the Grammarly interface. The platform included a disclaimer stating that references to experts were for "informational purposes only" and did not indicate an official affiliation or endorsement. However, critics argued that this disclaimer was insufficient, as the primary value proposition of the tool was built upon the established reputations of the writers it claimed to emulate.

Historian Mar Hicks was among the first to voice public concern after seeing their identity included in the tool. In a post on the social media platform Bluesky, Hicks characterized the practice as a potential form of defamation, stating that companies should not be allowed to misappropriate intellectual property and falsely attribute AI-generated statements to real people.

Growing Backlash from the Journalism Community

The reaction from the media industry was swift and overwhelmingly negative. Platformer founder Casey Newton, whose name was also invoked by the feature, criticized Grammarly for what he described as a deliberate choice to monetize the identities of real people without their involvement or compensation. Newton noted that the tool essentially allowed AI models to "hallucinate" advice on behalf of real people and hide that functionality behind a paid subscription wall.

Other journalists expressed concern that the tool could damage their professional standing. If a Grammarly user received poor or inaccurate advice that was falsely attributed to a specific journalist, that journalist’s reputation for accuracy and expertise could be unfairly tarnished.

The scope of the misappropriation appeared vast. Reports indicate that the tool listed editorial staff from a wide array of prestigious publications, including The Atlantic, PC Gamer, Gizmodo, Digital Foundry, and Tom’s Guide. Even Mashable’s sister sites, such as IGN and Rock Paper Shotgun, were reportedly included in the database of expert identities.

The Controversy Over Deceased Authors’ Identities

Beyond the use of living writers, Grammarly also faced condemnation for including the identities of deceased authors and scholars. The Expert Review feature reportedly offered "insights" attributed to the late astronomer Carl Sagan and the influential intersectional academic bell hooks.

This aspect of the tool drew particularly sharp criticism from researchers and activists. Sarah J. Jackson, a prominent researcher, noted that the use of bell hooks’ name was a violation of her memory and legacy. The inclusion of deceased figures raised unique ethical and legal questions, as these individuals cannot defend their own likenesses or opt out of being included in AI training sets or commercial products.

The use of deceased figures also highlighted the limitations of Grammarly’s initial response to the backlash. When the company first addressed the concerns, it suggested that writers who did not wish to be included could email the company to opt out. This "opt-out" model was criticized for placing the burden of discovery and action on the victims of the misappropriation, and it offered no solution for those who were no longer alive to voice their objections.

Class Action Lawsuit Filed in New York District Court

The situation escalated from a public relations crisis to a legal battle when Julia Angwin, an award-winning investigative journalist and writer for The New York Times, filed a class action lawsuit against Superhuman Platform, Inc., the developer behind Grammarly. The lawsuit, filed in a New York District Court, accuses the company of violating long-standing statutes regarding the commercial use of a person’s name and likeness.

Angwin, represented by the law firm Peter Romer-Friedman Law PLLC, seeks to represent a broad class of writers and editors whose identities were allegedly exploited by Grammarly. The complaint argues that for over a century, New York law has prohibited businesses from using an individual’s name for commercial profit without obtaining express written consent.

"The law does not provide an exception for technology companies or AI," said Peter Romer-Friedman in a public statement. He emphasized that the fundamental principles of identity rights apply regardless of whether the misappropriation occurs through traditional advertising or through the deployment of generative AI models.

The lawsuit seeks both monetary damages for the impacted writers and a permanent injunction to prevent Grammarly from ever using writers’ identities in this manner again. Since the filing, Angwin has stated that a significant number of writers have expressed interest in joining the litigation, suggesting the class could eventually include hundreds of members.

Corporate Response and the Disabling of Expert Review

As the legal and public pressure mounted, Superhuman CEO Shishir Mehrotra announced on Wednesday that the company would pull the Expert Review feature offline. In a post on LinkedIn, Mehrotra acknowledged that the company had received "valid critical feedback" and admitted that the tool had misrepresented the voices of the experts it featured.

Mehrotra offered an apology, stating that the company intended the tool to help users discover influential perspectives and provide a way for experts to build deeper relationships with their audiences. He acknowledged, however, that the execution fell short of these goals.

Despite disabling the feature, Mehrotra indicated that the company does not intend to abandon the concept entirely. He stated that Grammarly would "reimagine" the feature to ensure that experts have real control over how they are represented, including the ability to choose not to be represented at all. This statement has done little to appease critics, many of whom believe the feature should never have been launched in its original form.

Broader Implications for the AI Industry

The Grammarly controversy serves as a landmark case in the ongoing tension between AI innovation and intellectual property rights. As tech companies race to integrate generative AI into every facet of digital life, the question of where "publicly available data" ends and "protected identity" begins has become a central legal battlefield.

Industry analysts suggest that this case could set a precedent for how AI companies handle the names and reputations of professionals. If the class action lawsuit is successful, it could force other AI developers to be much more transparent and cautious about how they label and market their models, particularly when those models are trained on or mimic specific human beings.

For many in the writing community, the issue is not just about legalities but about the fundamental value of human expertise. Critics like Dan Saltzstein of The New York Times have argued that creating a tool that "makes up" advice from people who have spent decades refining their craft is an affront to the profession of writing.

As the legal proceedings move forward in New York, the tech industry will be watching closely to see how the court balances the rapid development of AI technology with the established rights of individuals to control their own names and reputations in the marketplace. For now, Grammarly remains under intense pressure to prove that its "reimagined" future for AI will respect the very experts it claims to value.
