New York Times Cuts Ties with Freelance Book Reviewer Over AI Use
The Times found 'uncomfortably similar' language between the reviewer's work and a previous Guardian review.
Mar. 31, 2026 at 5:35pm
The New York Times has permanently ended its relationship with a freelance book reviewer, Alex Preston, after an investigation found that he had used an AI tool to help draft a review, resulting in language and details similar to a previous review published in The Guardian. Preston admitted to the Times that he had failed to catch the overlapping material before publication, which the Times deemed a 'serious violation' of its standards.
Why it matters
The incident highlights the challenges media outlets face in adapting to the growing use of AI tools in content creation, and the need for robust fact-checking and plagiarism-detection processes to maintain the integrity of published work. It also raises questions about how far AI should be used in journalism and whether clear guidelines are needed to govern its use.
The details
The Times review in question was of the novel 'Watching Over Her' by Jean-Baptiste Andrea, published in January 2026. A reader alerted the Times that the review contained language and details similar to a Guardian review of the same book published four months earlier. After investigating, the Times spoke to Preston, who admitted to using an AI tool to help draft the piece and failing to catch the overlapping material from the Guardian review before publication. The Times added an editor's note to the review acknowledging Preston's use of AI and the similarity to the Guardian piece, and permanently cut ties with the freelancer.
- The New York Times review of 'Watching Over Her' was published in January 2026.
- The Guardian review of the same book was published in August 2025.
- The Times was alerted to the similarity between the two reviews in March 2026.
- The Times investigation and decision to cut ties with the freelancer occurred in March 2026.
The players
Alex Preston
A freelance book reviewer who had written for the New York Times since 2021, including six previous reviews. He admitted to using an AI tool to help draft a review, resulting in similarities to a previous Guardian review of the same book.
The New York Times
The prominent American newspaper that permanently cut ties with the freelance book reviewer after an investigation found he had violated the publication's standards by using AI to help draft a review.
The Guardian
The British newspaper that published a review of 'Watching Over Her' four months before the New York Times review, which contained similar language and details.
Jean-Baptiste Andrea
The author of the novel 'Watching Over Her', which was the subject of the reviews in question.
What they’re saying
“I used an A.I. editing tool improperly on a draft I had written and missed the overlapping language from the Guardian piece. I took responsibility immediately and apologized to the New York Times.”
— Alex Preston, Freelance Book Reviewer
What’s next
The New York Times has stated that it will continue to investigate its processes and policies around the use of AI tools in content creation to ensure the integrity of its published work.
The takeaway
The incident underscores the need for media outlets to establish clear guidelines and robust fact-checking procedures around the use of AI in journalism, to maintain public trust and the integrity of their reporting. It also serves as a cautionary tale for freelance writers and reviewers about the risks of relying too heavily on AI tools without proper oversight and attribution.