AI Writing Is Now Widespread Online — New Research Explores Its Impact

A Stanford University study led by Weixin Liang reveals the growing presence of AI-generated content across domains including corporate press releases, job listings, and UN reports. By the end of 2024, AI-assisted writing had become a significant part of business communication, appearing in 24% of corporate press releases and 15% of job postings. This shift marks a major transformation in how communications are crafted, driven by the rapid spread of large language models (LLMs) following ChatGPT's launch in late 2022. The study, one of the largest empirical investigations of AI writing adoption to date, analyzed over 300 million documents from 2022 to 2024 and highlights accelerating adoption of AI-assisted writing, especially among smaller firms.
The implications of this trend are profound, presenting both opportunities and challenges. AI can improve efficiency and help non-native speakers communicate more effectively, yet it raises concerns about authenticity and the homogenization of content. The study warns of pitfalls such as 'model collapse,' in which models trained on growing amounts of AI-generated text degrade and become unreliable. The shift risks diminishing creativity and credibility in communications across industries that rely on AI-aided writing. As AI-generated content becomes more prevalent, balancing efficiency with authenticity will be crucial for businesses and institutions. Researchers are also examining long-term regulatory and ethical considerations, underscoring the need to integrate AI into communication practices carefully to avoid adverse effects on decision-making and knowledge sharing.
RATING
The article provides a comprehensive overview of the increasing prevalence of AI-generated content and its implications for communication. It effectively highlights both the benefits and potential drawbacks of AI writing, such as efficiency versus authenticity. The topic is timely and relevant, resonating with ongoing debates about the role of AI in society. However, the article could be improved in several areas, notably source quality and transparency. The reliance on a single primary source and the lack of direct links to the original research limit the ability to independently verify the claims. Additionally, while the article is generally clear and well-structured, it would benefit from a broader range of perspectives to improve its balance and engagement. Overall, the story offers valuable insights into the impact of AI-generated content but would be strengthened by greater transparency and source diversity.
RATING DETAILS
The article presents several specific claims, such as the prevalence of AI-generated content in corporate press releases, job listings, and UN press releases. The claim that 24% of corporate press releases and 14% of UN press releases were AI-generated by the end of 2024 is significant and requires verification against the original research data. The story also cites the analysis of over 300 million online documents, a substantial claim that needs corroboration. While the article quotes Weixin Liang and references a Nature paper on 'model collapse,' it lacks direct citations or links to these sources, making it difficult to verify their accuracy independently.
The article provides a balanced view of the potential benefits and drawbacks of AI-generated content. It discusses the efficiency and productivity gains of AI, such as helping non-native speakers communicate and organize their thoughts more effectively. However, it also highlights concerns about homogenization, loss of authenticity, and the dangers of recursive feedback loops. While these points are well presented, the article would benefit from additional perspectives, such as those of industry professionals or ethicists, to give a fuller picture of the debate surrounding AI-generated content.
The article is generally clear and well-structured, with a logical flow of information. It effectively introduces the topic of AI-generated content, outlines the key findings of the research, and discusses the implications. The language is accessible and avoids technical jargon, making it easy for a general audience to understand. The use of direct quotes from Weixin Liang adds to the clarity and engagement of the piece. However, the article could benefit from clearer explanations of some complex concepts, such as 'model collapse' and recursive feedback loops.
The article relies primarily on the research led by Weixin Liang at Stanford University, which is a credible source. However, it lacks a variety of sources or external validation from other experts in the field, and the absence of direct quotes from other researchers or industry professionals limits the depth of its sourcing. Additionally, the article does not provide links to the original research or related studies, which would enhance the credibility and reliability of the information presented.
The article provides some context about the research and its findings but lacks transparency in methodology and source attribution. It mentions the analysis of 300 million documents but does not explain how AI-generated content was identified or the criteria used. The absence of direct links to the original research or a detailed methodology section makes it difficult for readers to assess the transparency and reliability of the claims. Furthermore, the article does not disclose any potential conflicts of interest, which could affect impartiality.
Sources
- https://acceleratelearning.stanford.edu/initiative/digital-learning/ai-and-education/
- https://provost.stanford.edu/2025/01/09/report-of-the-ai-at-stanford-advisory-committee/
- https://news.stanford.edu/stories/2025/01/report-outlines-stanford-principles-for-use-of-ai
- https://scale.stanford.edu/news
- https://wallyboston.com/storm-stanford-ai-writing-system/