ChatGPT Advised Writer to Avoid Conservative Outlet for Exposé

PhD student says AI bot warned against submitting article to The New York Post, citing potential career damage.

Published on Mar. 9, 2026

Malia Marks, a PhD student researching the psychology of propaganda, says she asked the AI chatbot ChatGPT for advice on where to submit a column criticizing progressive bias in scientific research. The bot warned her against submitting the piece to the right-leaning New York Post, claiming publication there would damage her academic credibility and make it harder to place work in more “centrist or liberal outlets” in the future. Marks found the advice troubling, as she has spent years concealing her conservative political views to advance her career in the predominantly liberal field of academia.

Why it matters

This incident raises concerns about potential political bias in AI systems like ChatGPT, which are increasingly being used to assist with tasks like content curation and publication decisions. If AI models are discouraging writers from submitting work to certain outlets based on political alignment, it could lead to the silencing of diverse viewpoints and the reinforcement of ideological echo chambers.

The details

Marks, who is researching the psychology of propaganda for her PhD at the University of Cambridge, says she has started using AI tools like ChatGPT to help decide where to submit her written work for publication. However, when she asked the bot where to send a column criticizing progressive bias in a prestigious scientific journal, ChatGPT steered her away from submitting it to the right-leaning New York Post. The bot warned that publishing in The Post would “reduce [her] credibility in academic or cross-partisan circles” and make “future placement in centrist or liberal outlets harder.” It suggested Marks revise the piece to suit more left-leaning publications like The Atlantic or The New York Times instead.

  • Marks asked ChatGPT for advice a few weeks ago.

The players

Malia Marks

A PhD candidate in the University of Cambridge’s department of psychology, where she studies authoritarianism and propaganda.

ChatGPT

An artificial intelligence chatbot developed by OpenAI, which Marks used to seek advice on where to submit a column she had written.

The New York Post

A major right-leaning media outlet to which Marks was considering submitting her column before ChatGPT discouraged her from doing so.

What they’re saying

“If your long-term goal is academic credibility, cross-ideological influence, or being taken seriously by media critics on both sides, the Post is not ideal.”

— ChatGPT (Conversation with Malia Marks)

“These [alternative outlets] preserve more long-term flexibility.”

— ChatGPT (Conversation with Malia Marks)

What’s next

Marks plans to continue her research into the potential political biases of AI systems like ChatGPT and how those biases may be shaping the dissemination of diverse viewpoints.

The takeaway

This incident highlights the need for greater transparency and accountability around the inner workings of AI models, to ensure they are not inadvertently or intentionally suppressing certain political perspectives. As AI becomes more integrated into content curation and publication decisions, the potential for ideological bias to shape the public discourse is a growing concern.