The “human baseline” approach posits that an artificial intelligence succeeds ethically when its decision-making mirrors, or marginally improves upon, that of a competent human. In the classic “trolley problem,” this implies that if an AI can consistently choose the “lesser of two evils” more reliably than a panicked human, it has cleared the ethical bar.
However, as the media and journalism industry increasingly integrates generative AI and automated editorial systems, it is becoming clear that a “slightly better than human” standard is insufficient. In the context of information dissemination, a human-level baseline for AI is not a gold standard; it is a liability.
While comparing AI to the human baseline in moral dilemmas reveals the machine’s capacity for consistency, it fails to account for the unique accountability required in journalism.
Because audiences in 2026 are entering an era of “breaking verification,” in which trust is the ultimate currency, an AI that is merely “slightly better” than a biased human is ethically insufficient. To be truly ethical, AI in media must move beyond mimicking human choice to provide a level of transparency and evidentiary rigor that exceeds any individual journalist’s capability.
Our newsrooms are facing a speed-versus-verification dilemma: the human baseline for a journalist is a constant trade-off between breaking the story first and getting it fully right. AI’s logic is fundamentally different; it shifts control from individual journalists to automated systems optimized for engagement and scalability. An AI that performs “slightly better” than a journalist at producing content quickly may therefore be ethically inferior if its underlying logic lacks the transparency and evidentiary rigor that define journalistic integrity.
Because so much content is published so quickly, in so many formats, and across so many platforms, audiences are finding it increasingly difficult to distinguish fact from fiction.
“‘Breaking verification’ will replace ‘breaking news’ in 2026, and trust will decide who survives,” according to Vinay Sarawagi, co-founder and CEO of The Media GCC.
Audiences need to see the evidence and sources behind what they encounter online, because seeing is no longer believing. If AI only performs as well as humans at spotting fakes, it is not enough; to resolve the trust crisis, AI must be dramatically better at citing its sources.
In 2005, Wallach and Allen argued that the principal goal of artificial morality as a discipline is to design artificial agents that act as if they were moral agents. They distinguish between operational morality, in which an AI simply follows pre-programmed human safety rules, and functional morality, in which a system can independently navigate moral dilemmas. In journalism, an AI that merely mirrors an editor’s baseline choices is operating within the first, more limited framework. If the media is to serve the public’s best interests, a journalistic AI must move toward a functional morality that transcends basic human instinct and provides the transparency and accountability the public expects.
From a strategic standpoint, “slightly better” is a recipe for disaster. If AI-generated content triggers a libel suit or drags down a company’s stock price, the defense that the AI was slightly more accurate than an average human is a losing argument. As the media shifts into what is being termed the “Answer Economy,” the traditional value proposition of a newsroom is being disrupted. When AI models synthesize reports into a single summary, the value of a news organization is no longer just the “answer” or the scoop itself, but the auditable trail of evidence that allows that answer to be verified (Seo Ai Club, 2025). An AI that only meets the human baseline for producing a plausible-sounding summary, without this rigorous, machine-readable proof of its sources, fails to meet the ethical demands of a 2026 audience.
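To make the idea of machine-readable proof concrete, here is a minimal sketch in Python of what an evidence trail attached to a single AI-generated claim could look like. The class names, fields, and URL are hypothetical illustrations, not the schema of any real system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class SourceCitation:
    """One piece of evidence backing a generated claim (hypothetical schema)."""
    url: str            # where the evidence lives
    publisher: str      # who published it
    quote: str          # the exact passage relied upon
    retrieved_at: str   # ISO-8601 timestamp of retrieval


@dataclass
class EvidenceTrail:
    """An auditable record that travels with a single AI-generated claim."""
    claim: str
    citations: list[SourceCitation] = field(default_factory=list)

    def is_auditable(self) -> bool:
        # The claim clears the bar only if every citation carries
        # both a URL and a verbatim quote a reader can check.
        return bool(self.citations) and all(
            c.url and c.quote for c in self.citations
        )


trail = EvidenceTrail(
    claim="The merger was announced on Tuesday.",
    citations=[
        SourceCitation(
            url="https://example.com/press-release",  # placeholder, not a real source
            publisher="Example Newswire",
            quote="The companies announced the merger Tuesday morning.",
            retrieved_at=datetime.now(timezone.utc).isoformat(),
        )
    ],
)
print(trail.is_auditable())  # -> True
```

The design point is simply that the claim and its evidence travel together, so a reader, an editor, or another machine can audit the answer without having to trust the generator.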
Note: This is an essay originally written for a course on AI and business strategy at Johns Hopkins University.
References
Wallach, Wendell, and Colin Allen. “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches.” Ethics and Information Technology 7, no. 3 (September 2005): 149–155. https://link.springer.com/article/10.1007/s10676-006-0004-4.
Li, Haoran, et al. “Artificial Intelligence and Journalistic Ethics: A Comparative Analysis.” Journalism and Media 6, no. 3 (August 2025): 105. https://www.mdpi.com/2673-5172/6/3/105.
Mee, S., et al. “Moral Judgments of Human vs. AI Agents in Moral Dilemmas.” Scientific Reports 13, no. 1 (February 2023). https://pmc.ncbi.nlm.nih.gov/articles/PMC9951994/.
Simon, Felix. “How AI Reshapes Editorial Authority in Journalism.” Digital Content Next, June 2025.
Reuters Institute. “How Will AI Reshape the News in 2026? Forecasts by 17 Experts Around the World.” Reuters Institute for the Study of Journalism, January 2025.
Seo Ai Club. “The Answer Economy: A Comprehensive Analysis of Answer Engine Optimization Tracking Software and Strategic Market Leadership.” Seo Ai Club, January 2025.