Superhuman, the company behind the popular Grammarly service, has announced the launch of an experimental tool called 'Expert Reviews.' Its key feature lets users receive text analysis and edits stylized as feedback from a specific famous writer. Users can choose as their 'critic' either living authors (such as Stephen King or Margaret Atwood) or long-deceased classics like William Shakespeare or Jane Austen. The tool is already available in limited testing for some Superhuman AI users.

This event marks a new turn in the race for AI writing assistants, where Grammarly competes with giants like Microsoft (Copilot) and Google. While previous tools offered general grammar and style advice, the focus is now on hyper-personalization and authoritative 'branding' of feedback. This is also part of the trend toward digital immortality and the commercial use of personalities, where AI models recreate the style, opinions, and communication manner of real people, often without their direct consent.

Technically, the tool is built on large language models (LLMs) fine-tuned on corpora of texts by specific authors: their books, articles, interviews, and critical notes. The system analyzes the user-submitted text and then generates detailed feedback, attempting to imitate the chosen writer's vocabulary, rhythm, typical remarks, and even sense of humor. For example, 'Shakespeare' might point out weaknesses in the dramatic structure of a dialogue, while 'Ernest Hemingway' could advise using more concise and energetic sentences. The developers emphasize that this is not a direct simulation of a personality but a stylization of expert opinion.
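The persona-stylization step described above can be sketched as a simple prompt-construction layer in front of a fine-tuned model. The sketch below is purely illustrative: the `PERSONA_PROMPTS` text and the `build_review_prompt` function are assumptions for demonstration, not Superhuman's actual implementation, and the model call itself is left as a stub.

```python
# Illustrative sketch: turning a chosen 'critic' into system/user
# messages for an LLM. The persona descriptions are hypothetical.

PERSONA_PROMPTS = {
    "Ernest Hemingway": (
        "You are a writing critic who gives feedback in the style of "
        "Ernest Hemingway: terse, concrete, favoring short declarative "
        "sentences and cutting every unnecessary word."
    ),
    "William Shakespeare": (
        "You are a writing critic who gives feedback in the style of "
        "William Shakespeare: attentive to dramatic structure, rhythm, "
        "and the music of dialogue."
    ),
}

def build_review_prompt(persona: str, draft: str) -> tuple:
    """Return (system, user) messages for a persona-stylized review.

    Mirrors the developers' framing: the prompt asks for a stylized
    expert opinion, not an impersonation of the real person.
    """
    system = PERSONA_PROMPTS[persona] + (
        " Offer a stylized expert opinion only; do not claim to be "
        "the real person."
    )
    user = "Review the following draft and suggest edits:\n\n" + draft
    return system, user

# Usage: the returned pair would be sent to the fine-tuned model.
system_msg, user_msg = build_review_prompt(
    "Ernest Hemingway", "The rain fell in the evening and we waited."
)
```

In a real product the `system` message would be paired with (or replaced by) weights fine-tuned on the author's corpus; prompt-only stylization is the cheapest approximation of the same idea.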

So far, market and expert reactions have been restrained, but the first alarmed voices are already being raised. Lawyers and copyright specialists point to a legal gray area: although the works of deceased authors have entered the public domain, using an author's name and style in a commercial product in a way that suggests personal endorsement could be challenged by heirs or literary estates. Ethicists and some writers express concern over the use of living authors' personas without their knowledge, which could be viewed as a form of deepfake in the professional sphere.

For the industry, this means further commodification of authorial style and a blurring of the boundaries of creative uniqueness. For users, it means a powerful and inspiring tool for learning to write and for receiving unconventional feedback. But there are risks: excessive trust in stylized AI advice could suppress a writer's own voice and create an illusion of genuine 'co-authorship' with the classics, which is ultimately a marketing ploy.

The tool's prospects depend on two factors: legal regulation and community acceptance. In the near future, lawsuits from heirs or foundations managing writers' legacies are likely, which would force Superhuman to refine its legal language and possibly enter into licensing agreements. The question of expanding the list of 'experts' also remains open: will politicians, scientists, or bloggers appear there? The feature's success will determine whether 'imitating authorities' becomes a standard capability in AI assistants for creative tasks or remains a niche experiment.