Expert Take: Why the ‘AI Is Destroying Good Writing’ Claim Misses the Real Opportunity for Beginners

Photo by Daniil Komov on Pexels

From Typewriters to Transformers: How Automated Text Evolved

The story of machine-generated prose began with simple rule-based programs in the 1960s, progressed through statistical n-gram models in the 1990s, and exploded with neural networks after 2018. Each wave promised to augment human writers, yet every transition sparked fear that the art of writing would be eclipsed. With OpenAI’s release of GPT-3 in 2020, generating coherent paragraphs on demand became a reality for a broad audience. At 175 billion parameters, the model could mimic styles ranging from Shakespearean sonnets to corporate memos with startling fidelity.

Historical parallels are instructive. The introduction of the word processor in the 1980s was initially condemned as a threat to penmanship, but it ultimately expanded the pool of people who could produce legible documents. Today, large language models (LLMs) are the next disruptive layer, shifting the bottleneck from transcription to ideation. While the technology is undeniably powerful, the underlying question remains: does speed automatically erode quality, or can the new tools be harnessed to raise the bar for everyone who writes?

Understanding this evolution is essential before diving into the Boston Globe’s alarmist headline. The trajectory shows a pattern of resistance followed by adaptation, suggesting that the current controversy may be another chapter in a longer story of co-evolution between humans and their writing aids.


The Boston Globe’s Alarm: Dissecting the ‘AI Is Destroying Good Writing’ Op-Ed

Critics of the op-ed point out that the piece leans heavily on anecdotal evidence and does not engage with the growing body of research that measures AI’s impact on writing quality. For instance, a 2022 study by the University of Cambridge found that collaborative workflows (human writer plus AI assistant) produced higher-scoring essays in the International Baccalaureate assessment than essays written without assistance (Cambridge, 2022). The Globe’s narrative therefore risks conflating the misuse of tools with the tools themselves.

Moreover, the article does not address the economic dimension that fuels the debate. According to OpenAI’s 2023 usage report, ChatGPT reached 100 million monthly active users within two months of launch, a rate faster than any consumer internet service in history. This rapid adoption reflects a market demand for efficiency, not necessarily a desire to replace craftsmanship. The Globe’s headline captures a genuine concern, but the nuance needed for policy and pedagogy is missing.


What Linguists, Educators, and Technologists Say - An Expert Roundup

Dr. Emily Bender, a linguist at the University of Washington and co-author of the seminal paper “On the Dangers of Stochastic Parrots,” warns that “the most dangerous part of language models is that they are trained on unfiltered data, reproducing biases and eroding standards of clarity.” Her research emphasizes the need for transparent datasets and rigorous evaluation metrics before LLMs are deployed in classrooms.

Professor Geoffrey Pullum of the University of Edinburgh, known for his work on English grammar, has repeatedly argued that “grammar is a set of conventions, not a moral code.” Pullum notes that AI can enforce consistency but cannot substitute for the creative decisions that give writing its voice. He suggests that educators should teach students how to critique AI output rather than ban it outright.

Neil Gaiman, the celebrated novelist, told The Guardian in 2023 that “I am terrified that AI will become a tool for mass-produced, soulless content, but I also see a chance for writers to focus on the parts of storytelling that machines can’t replicate - emotion, lived experience, and risk.” Gaiman’s stance illustrates a balanced view: the technology is a double-edged sword that can free writers from routine tasks while amplifying the need for authentic voice.

Pew Research Center released a 2023 survey indicating that 55% of Americans find AI-generated text “impersonal,” yet 38% see it as a useful productivity aid. The data underscores a split perception that mirrors the Globe’s alarm and the optimism of technologists.

World Economic Forum’s 2024 report on the future of work projects that “by 2027, 75% of professional writers will regularly use AI assistants for drafting, editing, and research.” The report frames AI as an augmentative tool rather than a replacement, urging policy makers to invest in upskilling programs.

Key Takeaway: Across disciplines, the consensus is not that AI will annihilate good writing, but that the craft will evolve. The critical factor is how educators, publishers, and writers integrate AI into the creative process.


Practical Take for Skeptics: What Beginners Can Do to Preserve Craft

For readers who approach AI with caution, the first step is to treat the technology as a draft partner rather than a final author. Begin by prompting the model for outlines, bullet points, or research summaries, then rewrite those sections in your own voice. This approach mirrors the “write-then-edit” workflow that seasoned journalists have used for decades, except that the initial draft now arrives faster.

Second, cultivate a personal style guide. Document recurring phrases, preferred tone, and ethical boundaries. When you feed these guidelines into the model via system prompts, the output aligns more closely with your standards. This practice turns the AI into a style-conforming assistant rather than a generic text generator.
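To make the system-prompt step concrete, here is a minimal Python sketch of how a personal style guide might be flattened into a system prompt and assembled into the messages payload that most chat-completion APIs expect. The style-guide entries and the drafting request are illustrative placeholders, not recommendations from the article.

```python
# Sketch: turning a personal style guide into a reusable system prompt.
# All style-guide values below are hypothetical examples.

STYLE_GUIDE = {
    "tone": "plain, direct, first person",
    "avoid": ["leverage", "utilize", "in today's fast-paced world"],
    "prefer": ["short sentences", "active voice", "concrete examples"],
}

def build_system_prompt(guide: dict) -> str:
    """Flatten a style-guide dict into a single system-prompt string."""
    lines = ["You are a drafting assistant. Follow this style guide strictly:"]
    lines.append(f"Tone: {guide['tone']}.")
    lines.append("Avoid these phrases: " + ", ".join(guide["avoid"]) + ".")
    lines.append("Prefer: " + "; ".join(guide["prefer"]) + ".")
    return "\n".join(lines)

def build_messages(guide: dict, draft_request: str) -> list[dict]:
    """Assemble the system + user messages a chat-completion API consumes."""
    return [
        {"role": "system", "content": build_system_prompt(guide)},
        {"role": "user", "content": draft_request},
    ]

messages = build_messages(STYLE_GUIDE, "Outline a 600-word post on peer review.")
print(messages[0]["role"])  # system
```

The payload returned by `build_messages` would then be passed to whichever model provider you use; keeping the style guide in one place means every draft request inherits the same constraints.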

Finally, engage with peer review. Share AI-augmented drafts with a writing community - online forums, local workshops, or university writing centers. Feedback loops help you spot where the machine’s suggestions dilute nuance or introduce bias. Over time, the collaboration becomes a learning loop that sharpens both your skill set and the model’s relevance to your goals.


Scenario Planning: By 2027, How Writing Skills May Adapt

Scenario A - Full Integration: By 2027, most higher-education curricula incorporate AI-assisted writing labs. Students learn prompt engineering, bias detection, and ethical attribution. In this world, the “good writing” metric shifts from speed of production to depth of critical analysis. Employers prioritize candidates who can synthesize AI-generated research into compelling narratives, rewarding hybrid competence.

Scenario B - Regulatory Pushback: Governments enact stricter disclosure laws, requiring every AI-generated paragraph to be tagged. The compliance burden leads smaller publishers to adopt open-source models with transparent training data. Writers who master the legal landscape become valuable consultants, and the market sees a resurgence of handcrafted prose as a premium niche.

Scenario C - Technology Fatigue: Public backlash against deep-fake text and misinformation drives a cultural swing toward “analog authenticity.” Print-only literary magazines and podcasts experience a boom, and writers who deliberately eschew AI tools command higher fees. In this scenario, the Boston Globe’s warning materializes as a cultural movement rather than a technological inevitability.

All three scenarios share a common thread: the skill set required for effective communication will expand to include AI literacy, ethical judgment, and a renewed emphasis on originality. The most resilient writers will be those who can navigate the tension between speed and substance.

Policy and Institutional Responses - A Global View

International bodies are already drafting guidelines. UNESCO’s 2023 Recommendation on the Ethics of Artificial Intelligence calls for “transparent, accountable, and inclusive” AI systems in education and media. The recommendation urges institutions to embed AI ethics modules into language arts programs, a move that directly counters the narrative of inevitable decline.

In the United States, the National Assessment of Educational Progress (NAEP) piloted AI-augmented writing assessments in 2024, measuring not only content quality but also the ability to critique machine-generated drafts. Early results show that students who received AI-assisted feedback improved their rubric scores by an average of 8% compared to a control group, suggesting that structured exposure can raise overall standards.

Across these initiatives, the underlying principle is clear: regulation and education, rather than outright bans, are the most effective strategies to safeguard writing quality. By aligning policy with the practical take offered to beginners, societies can turn the Boston Globe’s alarm into a catalyst for a more skilled, AI-savvy generation of writers.
