I Just Got My AI Certificate. Here’s What I Actually Learned.

To borrow a phrase from Howard Stern, I am something of a king of all media. Radio, television, digital — I’ve worked in all three, and I’ve had to learn the rules of each one from scratch.

It started in college, when I was fortunate to land a part-time gig as a DJ at Laser 104.1 in Allentown, Pennsylvania. I had no idea what I was doing. I figured it out. An internship at FOX 29 in Philadelphia got me into television, which turned into a career. Eight years in, I went back to school at Temple University to earn my MBA, while working overnight shifts producing the morning news at 6abc. I wasn’t the most rested student, but I graduated.

Then came the internet. Nobody handed me a manual for that either. I self-taught: reading voraciously, experimenting constantly, and talking to anyone who knew more than I did. That’s how I’ve always operated.

So when AI started reshaping the media business — not gradually, but all at once — I decided I wasn’t going to learn it from the sidelines. I enrolled in Johns Hopkins University’s AI for Business Strategy course. This week, I received my certificate of completion.

Before anyone rolls their eyes — yes, I know what “AI certificate” sounds like. It sounds like a LinkedIn flex. It isn’t that. It’s the result of months of real coursework: essays, lectures, readings from the World Economic Forum and MIT and McKinsey, and a final project where I built a full AI proposal for a school district from the ground up. Twelve weeks. Real work.

Here’s what I actually walked away with.


The thing nobody tells you about AI

The biggest surprise wasn’t the technology. It was realizing how little most business leaders — including me — understand about what AI actually does inside an organization.

We talk about AI as if it were a feature you add. A button you push. It’s not. AI doesn’t just automate tasks. It reorganizes how work gets done. The McKinsey framing that stuck with me: companies are moving toward “minimum viable organizations” — lean structures in which AI handles structured, repeatable work, and humans focus on oversight, judgment, and context.

That changes everything.


What the course actually covered

The curriculum was broader than I expected. We started with the AI landscape — the history, the current state, who the major players are and why. Then it got practical fast: how businesses are actually deploying AI, how to optimize it, and, critically, what can go wrong.

The week on AI bias and risk was the one that hit me hardest. In journalism, we already live inside the trust crisis. Audiences can’t tell what’s real anymore. An AI that performs “slightly better than a human” at spotting misinformation isn’t good enough — that was the core of an essay I wrote for the course. The bar for AI in media has to be higher than the human baseline, because the stakes of getting it wrong are higher.

We also covered generative AI in depth — not just what it is, but how to use it responsibly for actual business purposes. And the final weeks got into scaling AI projects and managing them at the enterprise level. What does it look like when you’re not just piloting something, but running it at scale across an organization?

The final project brought it all together. I built a full vendor proposal — a fictional AI company called EduAI Solutions — pitching an AI-powered learning platform to a real school district. Every section had to hold up: the executive summary, the implementation strategy, the data privacy compliance, the cost structure. It was the most useful assignment I’ve done in any course, because it forced me to think like someone responsible for the outcome, not just someone writing about it.


What this means for journalism

I came in thinking AI was something I needed to manage in my newsroom. I left understanding it’s something I need to lead through.

Two-thirds of U.S. newsrooms have already integrated AI into at least one workflow. The roles being created — AI Ethics Editors, Automated Content Managers, Data Journalists — are no longer niche. They’re becoming core. And the journalists who thrive won’t just be good storytellers. They’ll need data literacy, an understanding of how large language models work and where they fail, and the judgment to know when to trust the machine and when to override it.

That’s a different journalist from the one I trained to be. It’s the one I’m working to become.


ABL: Always Be Learning

Here’s the thing I’ve told younger journalists for years: the moment you think you’ve figured it out, you’re done. The industry moves too fast. The audience moves too fast. You have to stay a student.

Every transition in my career has required me to start over as a learner. Radio to TV. TV to digital. The people who get left behind in this business aren’t the ones who admit they don’t know something. They’re the ones who pretend they do.

I chose Johns Hopkins specifically because the course is big-picture focused. Not “here’s how to prompt ChatGPT.” It’s about strategy — how AI changes the structure of organizations, how leaders need to think about deploying it, and what the risks look like at scale.

The next frontier I’m focused on is agentic AI — systems that don’t just answer questions but take actions, make decisions, and complete multi-step tasks on their own. That’s where this technology is heading fast, and it has enormous implications for media organizations. I’m already working to understand it.

Getting this certificate at this stage of my career wasn’t about proving something to anyone else. It was about staying useful — to my team, to my company, to myself. The executives and media leaders who will matter in the next five years aren’t the ones who handed AI questions off to someone else. They’re the ones who got in the room, got their hands dirty, and figured out what they were looking at.

I don’t have all the answers. But I know which questions to ask now. And I know where to go next.


Bob Monek is a veteran broadcast journalist and media executive who has worked in radio, television, and digital media. He completed the AI for Business Strategy certificate program at Johns Hopkins University in April 2026.

What ONA 2026 Taught Me About AI, Newsrooms, and the Leadership Gap

I went to my first Online News Association conference four years ago, and I came back energized. Conferences can make you feel this way — like the ideas alone are enough to change something.

Then life took over. Deadlines and the pace of a daily newsroom became overwhelming. ONA became something I kept meaning to get back to.

This year I finally did. What I found at ONA in Chicago, not surprisingly, is that the conversation has changed dramatically — from whether to use AI to something more difficult: how to use it without breaking the human systems that make journalism work.

That’s the right question. And it’s overdue.

Four years ago, AI in the newsroom was peripheral — something experimental, easy to ignore. This year, it was the center of gravity. Every room, every hallway, every lunch. Not “should we use AI” — that debate is over.

From Evangelism to Pragmatism

What stood out most wasn’t any single tool or framework, though there were plenty worth bringing home. It was the tone.

There’s less evangelism now and more pragmatism. People are talking about AI the way they talk about any other production tool — what does it actually do well, where does it fall down, and who’s responsible when it gets something wrong?

That shift matters. AI isn’t the hard part anymore. Alignment is.

The 80/20 Reality

The Associated Press put a number on something many newsrooms are starting to feel: 80% of a process can be automated, but at least 20% has to remain human.

Editing. Fact-checking. Judgment.

That ratio isn’t just technical — it’s a policy position. And having something that concrete to hand to a skeptical newsroom is more useful than any demo.

The Most Useful Work Isn’t Flashy

Some of the best ideas came from smaller organizations doing unglamorous work.

City Bureau is using generative AI to synthesize civic meeting notes. Sahan Journal has built custom GPTs to create personalized media kits for sales calls — not an editorial use case, a revenue one, and it works.

And then there was El Vocero de Puerto Rico’s cautionary story about an AI agent they’d named Victor. Over time, Victor started producing bad data and sending emails nobody had asked for.

Victor got fired.

The lesson wasn’t that AI fails. It’s that you have to keep managing it long after launch — which is not how most organizations treat a tool once it’s deployed.

Culture Is the Bottleneck

The sessions on culture and leadership hit closest to home.

CNN talked about putting leaders visibly at the front of AI adoption — not as evangelists, but as practitioners. The fastest way to reduce fear in a newsroom is to show that the people above you are figuring it out too, not just mandating it from a distance.

Reuters has reorganized around cross-functional squads: editorial, product, engineering, and data science working together on specific problems instead of handing work off in sequence.

That’s not a workflow tweak. It’s a structural change. And it requires a level of trust most organizations don’t build quickly.

The Gap That’s Already Here

The tools are here. They’re getting better, quickly.

What’s lagging is the culture and leadership needed to use them well — and that’s not a technology problem. It’s a people problem. Which means it’s slower, harder, and ultimately more important than any product release.

One number from a Thomson Reuters Foundation study has stayed with me: 81% of journalists in the Global South already use AI daily or weekly. Only 13% operate under any formal newsroom policy.

That’s not a regional anomaly. It’s a preview of what happens when technology outruns leadership.

And right now, it is.

My Oscar Predictions: Confidence, Confusion, and the Inevitability of Being Wrong

Oscar season is when I briefly convince myself that reading Variety over coffee and skimming the New York Times arts section at night qualifies as rigorous film scholarship. For a few weeks each year, I develop confident opinions about categories I forgot existed the previous spring, casually reference “industry buzz,” and begin sentences with “it feels like…” — awards-season shorthand for “I have absolutely no proof, but I am emotionally invested.”

Part of this confidence comes from experience. For roughly the past fifteen years, Oscar Sunday hasn’t meant sitting on the couch with snacks and a ballot. It has meant working — watching red-carpet arrivals in real time, producing coverage as winners are announced, and helping feed the bottomless appetite for Oscars content across ABC7NY.com and the wider ABC Owned Television Stations. After enough ceremonies spent toggling between live feeds, social-media spikes, and last-second headline rewrites, you begin to believe you understand how the night unfolds. Or at least you learn how to project calm while quietly bracing for the unexpected.

Those years also come with memories that make it difficult to treat the Oscars as a purely dignified cultural institution. There was the night the broadcast veered into live-television infamy when Will Smith walked onstage and smacked Chris Rock — a moment that turned a relatively boring ceremony into a breaking-news event heard around the world. There was the unforgettable envelope mix-up in 2017, when La La Land briefly won Best Picture before graciously, and somewhat publicly, returning the trophy to Moonlight. And of course, there was Ellen DeGeneres’s 2014 selfie — perhaps the most efficient demonstration in history of how Hollywood glamour and smartphone logistics can collide.

My cinematic worldview, however, was formed long before I was filing Oscars stories on deadline. I grew up on Disney, which probably explains my enduring faith in orchestral swells and emotional clarity. Pixar later deepened the imprint by making me cry over toys, robots, and, on one memorable occasion, real estate. My favorite film of all time remains Casablanca, a movie so elegantly constructed it makes modern plotting feel like it was completed during a lunch break. Paul Newman is still my favorite actor — effortless, intelligent, impossibly cool — though George Clooney in Michael Clayton came dangerously close to redefining what a perfectly calibrated movie performance looks like. And Katharine Hepburn remains the gold standard, proof that wit and authority never go out of style.

All of which is to say: my Oscar picks arrive with both passion and baggage.

This year I managed to see all but one of the Best Picture nominees, an accomplishment I intend to reference casually for the foreseeable future. I also saw exactly one of the short films — The Singers, which is genuinely worth seeking out and, more importantly, allows me to speak with suspicious confidence about a category where most people are bluffing.

If I’m being completely honest, none of this year’s Best Picture nominees truly floored me. I admired elements, respected the ambition, occasionally enjoyed myself — but I never quite experienced that rare electric jolt that makes you feel you’re watching something you’ll carry around for decades. Personal taste and strategic prediction don’t always align. I didn’t particularly care for Sinners, yet Michael B. Jordan’s performance feels impossible to ignore. Sometimes you simply have to respect the craft even when the movie itself leaves you unmoved. Sean Penn, meanwhile, appears to have delivered the kind of supporting performance that arrives pre-packaged with historical significance — the sort that inspires solemn nodding and decisive ballot marking.

Penn’s film, One Battle After Another, was one of the year’s more satisfying surprises for me: ambitious without being exhausting, confident without being self-important. Watching it steadily accumulate awards-season victories has felt less like suspense and more like observing a well-organized transfer of power.

The best actress race offers a gentler internal conflict. I was genuinely charmed by Kate Hudson in Song Sung Blue, a reminder that movie-star magnetism remains both real and undervalued. But Jessie Buckley’s dominance this season has been so overwhelming that resisting her now would feel less like independent thinking and more like arguing with gravity.

And so, after months of watching, reading, overanalyzing, and filling out prediction grids with the seriousness of a tax return, I arrive at the same conclusion as much of the industry. When the final envelope is opened, the night will likely belong to One Battle After Another — a title that doubles as an apt description of awards season itself: a long, suspenseful march toward an outcome we all pretend to be surprised by.

Earned, Not Assumed: Why AI Augmentation Depends on Who Gets Access to Training

The global labor market is undergoing a structural transformation driven in significant part by artificial intelligence. The World Economic Forum Future of Jobs Report 2025 projects that 92 million roles may be displaced globally by 2030, while 170 million new roles could emerge. However, AI’s impact is not best understood as simply job creation or destruction. From a task-based labor economics perspective, technological change operates at the level of discrete tasks rather than entire occupations. AI decomposes jobs into component activities, automating those that are routine, codifiable, and data-intensive, while recomposing remaining work around judgment, oversight, and complex problem-solving. In this sense, AI is less about eliminating work than about reorganizing how value is created within occupations—reshaping skill premiums and redefining the boundaries between human and machine contributions.

Similarly, McKinsey & Company stresses that AI is reshaping organizational design and career paths, moving beyond marginal productivity gains. They highlight the concept of “reconfiguring work,” where organizations build “minimum viable organizations”—a lean, technology-amplified operating model designed around AI-native workflows. In this model, AI handles an increasing share of structured and repeatable tasks, while human work shifts toward oversight, judgment-intensive functions, and areas requiring contextual interpretation.

Broadcast and digital journalism, a field I have navigated for over three decades, serves as an illustrative microcosm for these shifts. It demonstrates both the rapid creation of technical roles and the friction of workforce adaptation. A separate analysis from Microsoft Research examining occupational exposure to generative AI suggests that journalism-related roles involve tasks highly susceptible to automation, particularly drafting, summarization, and transcription. Rather than forecasting wholesale job elimination, the study highlights task-level vulnerability, indicating that portions of newsroom workflows may be restructured as AI systems assume responsibility for routine content production.

Emerging AI Roles in Journalism

Recent studies show that over 65% of U.S. newsrooms have experimented with or deployed AI tools in at least one workflow, driving the creation of highly specific roles. Digital platforms leverage algorithms for personalization and fact-checking, creating a critical need for AI Ethics Editors to ensure integrity. Broadcasters utilizing AI for transcription and translation are hiring Automated Content Managers to handle these dynamic workflows. Meanwhile, the ability to scan massive datasets for market trends has elevated the Data Journalist from a niche specialty to a core newsroom necessity.

Evolving Demands for AI Skills

Journalists now require technical fluency—prompt engineering, data literacy, and understanding LLM constraints—alongside traditional skills. As AI automates copy generation, uniquely human skills command a premium. Empathy in interviewing, high-level investigative intuition, and ethical decision-making are becoming more vital than ever.

Research by Northwestern professor Nick Diakopoulos highlights this shift. While editorial job postings declined significantly post-ChatGPT (from 28,566 to 18,156), listings requiring AI skills tripled. Although the timing coincides with the emergence of generative AI, the contraction in editorial roles likely reflects a confluence of technological, economic, and platform-market forces rather than direct substitution alone. Still, the growth in AI-related listings reveals four emerging role types: “AI-doers” (building tools), “AI-users” (applying tools), “AI-strategizers” (planning), and “AI-reporters” (covering AI). Crucially, demand also surged for human capabilities like ethics, critical thinking, and fact-checking—skills that directly complement AI’s weaknesses.

The Challenges of Upskilling and Reskilling

Despite the clear need for AI fluency, the media industry faces significant structural hurdles in reskilling a workforce already stretched thin by the 24-hour news cycle:

  • The Hidden Costs of Training: The financial burden extends beyond software licensing to implementation time, workflow redesign, and productivity disruption. Many newsrooms accumulate what can be described as “integration debt”—the deferred organizational costs that arise when AI systems are layered onto legacy processes without structural redesign. When tools are adopted faster than governance standards, editorial protocols, and workforce capabilities evolve, inefficiencies compound, requiring future investments in retraining, oversight, and workflow correction.
  • The New Digital Divide: A two-speed transformation is emerging. Large national networks possess the capital to build proprietary AI infrastructure and dedicated oversight teams, while under-resourced local newsrooms lack comparable investment capacity. This widening “AI divide” risks exacerbating existing inequalities in reporting depth, investigative capacity, and technological resilience across the industry.
  • Union Resistance: Automation anxiety is increasingly manifesting in labor negotiations. Organizations such as The NewsGuild and the Writers Guild of America have pushed for contractual guardrails governing AI deployment, seeking transparency, attribution protections, and limits on automation. While these efforts aim to protect workers, negotiations can slow implementation timelines as management and labor debate control, accountability, and long-term job security.
  • The Curriculum–Skill Gap: AI capabilities are evolving more rapidly than formal training programs. While academic institutions often emphasize ethical frameworks and media theory, employers increasingly seek operational competencies such as prompt design, model evaluation, and workflow integration. This misalignment leaves mid-career professionals navigating a fragmented retraining landscape without standardized pathways.

Conclusion

The same newsrooms introducing AI ethics editors and data journalists are struggling to train the veteran reporters and producers sitting beside them. The dominant pattern currently appears to be augmentation combined with role hybridization, but optimism must be earned, not assumed. The tripling of AI skill requirements signals opportunity, but journalists who fail to upskill risk being quietly filtered out of the hiring market. Augmentation is only a win for those with access to training.

AI integration represents not incremental optimization but an active restructuring of value creation. Newsrooms cannot rely solely on external hiring to fill emerging skill gaps; as McKinsey & Company notes, recruitment alone is neither cost-efficient nor strategically sustainable. Instead, organizations must invest in disciplined strategic workforce planning while cultivating superagency—an organizational condition in which journalists possess both the technical literacy and institutional authority to actively direct, interrogate, and refine AI systems within their domains. By strengthening human-in-the-loop capabilities such as ethical reasoning, investigative judgment, and contextual analysis, newsrooms can ensure that AI enhances journalistic rigor rather than eroding it.


References

Corden, Jez. “Microsoft reveals 40 jobs about to be destroyed by (and safe from) AI.” Windows Central. https://www.windowscentral.com/artificial-intelligence/microsoft-reveals-40-jobs-about-to-be-destroyed-by-and-safe-from-ai

Diakopoulos, Nick. “The Impact of Generative AI on Journalistic Labor.” Generative AI in the Newsroom. https://generative-ai-newsroom.com/the-impact-of-generative-ai-on-journalistic-labor-e87a6c333245

Fu, Angela. “As AI enters newsrooms, unions push for worker protections.” Poynter. https://www.poynter.org/business-work/2023/artificial-intelligence-writers-guild-unions-journalism-jobs/

McKinsey & Company. “The critical role of strategic workforce planning in the age of AI.” McKinsey People & Organizational Performance Practice, February 2025.

McKinsey & Company. “Generative AI and the future of work in America.” McKinsey Website. https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-work-in-america

Research.com. “2026 AI, Automation, and the Future of Journalism Degree Careers.” Research.com. https://research.com/advice/ai-automation-and-the-future-of-journalism-degree-careers

World Economic Forum. “Future of Jobs Report 2025.” World Economic Forum Website. https://reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf

AI Ethics in Journalism: Beyond Human Baseline

The “human baseline” approach posits that the ethical success of artificial intelligence is achieved when its decision-making mirrors or marginally improves upon that of a competent human. In the classic “trolley problem,” this implies that if an AI can consistently choose the “lesser of two evils” with more precision than a panicked human, it has cleared the ethical bar.

However, as the media and journalism industry increasingly integrates generative AI and automated editorial systems, it is becoming clear that a “slightly better than human” standard is insufficient. In the context of information dissemination, a human-level baseline for AI is not a gold standard; it is a liability.

While comparing AI to the human baseline in moral dilemmas reveals the machine’s capacity for consistency, it fails to account for the unique accountability required in journalism. Because audiences in 2026 are caught in a “breaking verification” crisis where trust is the ultimate currency, an AI that is merely “slightly better” than a biased human is ethically insufficient. To be truly ethical, AI in media must move beyond mimicking human choice to provide a level of transparency and evidentiary rigor that transcends a journalist’s capability.

Our newsrooms are facing a speed-versus-verification dilemma. The human baseline for a journalist is the constant tradeoff between breaking the story first and being 100% accurate. AI’s logic is fundamentally different: it shifts control from individual journalists to automated systems optimized for engagement and scalability. Therefore, an AI that performs “slightly better” than a journalist at producing content quickly may be ethically inferior if its underlying logic lacks the transparency and evidentiary rigor that defines journalistic integrity.

With so much information published in so many forms across so many platforms, audiences are having a difficult time distinguishing fact from fiction.

“‘Breaking verification’ will replace ‘breaking news’ in 2026, and trust will decide who survives,” according to Vinay Sarawagi, co-founder and CEO of The Media GCC.

Audiences need to see evidence and sources to back up what they see online, because seeing is no longer believing. If AI only does as well as humans at spotting fakes, it’s not enough. To solve the trust crisis, AI must be dramatically better at citing sources.

In 2005, Wallach and Allen argued that the principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. They distinguish between operational morality, in which an AI simply follows pre-programmed human safety rules, and functional morality, in which a system can independently navigate moral dilemmas. In journalism, an AI that merely mirrors an editor’s baseline choices is operating within a limited framework. If the media is to serve the public’s best interests, a journalist AI must move toward a functional morality that transcends basic human instinct and provides the transparency and accountability the public expects.

From a strategic standpoint, “slightly better” is a recipe for disaster. If AI-generated content results in a libel suit or negatively impacts a company’s stock price, the defense that AI is slightly more accurate than an average human is a losing argument. As the media shifts into what is being termed the “Answer Economy,” the traditional value proposition of a newsroom is being disrupted. When AI models synthesize reports into a single summary, the value of a news organization is no longer just the “answer” or the scoop itself, but the auditable trail of evidence that allows that answer to be verified (Seo Ai Club, 2026). If an AI only meets the human baseline for producing a plausible-sounding summary without providing this rigorous, machine-readable proof of its sources, it fails to meet the ethical demands of a 2026 audience.

Note: This is an essay originally written for a course on AI and business strategy at Johns Hopkins University.

References

Wallach, Wendell, and Colin Allen. “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches.” Ethics and Information Technology 7, no. 3 (September 2005): 149–155. https://link.springer.com/article/10.1007/s10676-006-0004-4.

Li, Haoran, et al. “Artificial Intelligence and Journalistic Ethics: A Comparative Analysis.” Journal of Journalism and Media 6, no. 3 (August 2025): 105. https://www.mdpi.com/2673-5172/6/3/105.

Mee, S., et al. “Moral judgments of human vs. AI agents in moral dilemmas.” Scientific Reports 13, no. 1 (February 2023). https://pmc.ncbi.nlm.nih.gov/articles/PMC9951994/.

Simon, Felix. “How AI reshapes editorial authority in journalism.” Digital Content Next (June 2025).

Reuters Institute. “How will AI reshape the news in 2026? Forecasts by 17 experts around the world.” Reuters Institute for the Study of Journalism (January 2025).

Seo Ai Club. “The Answer Economy: A Comprehensive Analysis of Answer Engine Optimization Tracking Software and Strategic Market Leadership.” Seo Ai Club (January 2025).

Character vs Reputation: The True Measure of Success

I recently listened to an episode of Freakonomics Radio titled “If You’re Not Cheating, You’re Not Trying,” featuring an interview with disgraced cyclist Floyd Landis. The conversation eventually turned to John Wooden’s famous maxim: “Be more concerned with your character than your reputation, because your character is what you really are, while your reputation is merely what others think you are.”

Landis, perhaps unsurprisingly for a man whose career was defined by a massive deception, rejected Wooden’s idealism. He argued that in the “real world,” reputation is the only thing that functions. It’s the currency that buys you the contract, the sponsorship, and the adoration. To Landis, character is just a consolation prize you cling to once your reputation has been torched.

I understand his cynicism. But I fundamentally disagree with it.

Landis views reputation and character as two separate assets you can trade, like stocks. But Wooden’s point was deeper: Reputation is merely the shadow cast by character. You can manipulate the shadow for a while – stand in the right light, distort the angle, make yourself look larger than you are – but eventually, the sun moves. The shadow always snaps back to the reality of the object casting it.

In my media career, I’ve seen this physics play out repeatedly. We live in an industry obsessed with the “shadow” – the ratings, the viral potential, the race to be first. I’m certainly not perfect; I’ve made mistakes in my career. But I’ve learned that the “reputation” of a news organization isn’t built on its speed; it’s built on its credibility. It’s built on the boring, invisible machinery of character: fact-checking, sourcing, and the refusal to cut corners when no one is watching.

A journalist can fake their way to a scoop once. They can build a reputation for being “first.” But if that reputation isn’t grounded in the character trait of accuracy, the fall is inevitable. When the correction comes – and it always does – the reputation doesn’t just dip; it evaporates.

Consider the case of Janet Cooke, a Washington Post writer whose heartbreaking profile of an 8-year-old heroin addict won a Pulitzer Prize. The unraveling of her reputation began, ironically, with a celebration of it.

Her former employer, the Toledo Blade, initially rushed to publish a tribute to their former staffer. But the tone shifted when editors compared the Associated Press biography—based on Cooke’s own resume—against their internal personnel files. While Cooke claimed to be a magna cum laude Vassar graduate with a master’s degree, the Blade’s records told the truth: she had only attended Vassar for a year and held a standard bachelor’s degree. Because the character didn’t match the reputation, the entire structure collapsed. Her prize-winning article, “Jimmy’s World,” was exposed as a complete lie, and the Pulitzer was returned.

I’m seeing a similar tension now as I study the business strategy and ethics of Artificial Intelligence. The temptation in the AI space is to let the “reputation” of the technology—the hype, the valuation, the promise of an AI future—outpace the “character” of the build (safety, bias, alignment).

Landis would argue that we should ride the hype wave because “that’s how the world treats you.” But history suggests that tech bubbles built on reputation without underlying substance always burst. The companies that last are the ones where the internal reality matches the external promise.

Warren Buffett famously said, “It takes 20 years to build a reputation and five minutes to ruin it.” Unlike Landis, Buffett doesn’t see reputation as a mask to wear; he sees it as a fragile byproduct of integrity.

Bob Iger, the retiring CEO of Disney, reinforces this in his memoir, The Ride of a Lifetime: “True authority and true leadership come from knowing who you are and not pretending to be anything else.” For Iger, character and decency are not merely “soft skills,” but strategic advantages that define a company’s success.

Floyd Landis believes he was punished for playing the game. I would argue he was punished for mistaking the shadow for the man. Ultimately, the spotlight always falls on a person’s character, not their reputation.

Late-Night TV’s Crisis: Adapting to Audience Changes

CBS pulled the plug on The Late Show, but the real story isn’t politics—it’s a failure to follow the audience into the digital age.

Some people on social media think The Late Show was canceled because of Trump. He’s celebrating on Truth Social, but it’s doubtful he had anything to do with it. The more likely reason is precisely what Paramount said: a financial decision.

A business that loses $40 million a year is unlikely to remain in business. Blame a shrinking linear audience, rising production costs, and a failure to evolve into a digital-first, everywhere-content machine. Whether politics played a factor is pure speculation, but the financial and market pressures are undeniable.

Look at the big picture: TV talk shows, regardless of daypart, are mostly being watched in social media clips or replaced outright by podcasts, especially video podcasts. I mostly listen rather than watch, but over a billion people now watch podcasts on YouTube.

Streaming has changed everything. In June, streaming accounted for 46% of viewership while broadcast and cable combined for 41.9%. YouTube now leads all platforms in TV and streaming time, according to Nielsen.

If you’re like me, you’re not staying up for late-night shows—you’re catching the clips on YouTube, TikTok, or wherever they land.

The Late Show has declined from nearly 4 million nightly linear viewers a decade ago, but it still gets over 2.5 million viewers and leads the pack. However, Colbert lags behind Kimmel and Fallon on the platforms where more people are watching.

The Tonight Show has 32.7 million YouTube subscribers and 19.2 million on Instagram. Jimmy Kimmel Live follows with 20.7 million on YouTube and 4.3 million on Instagram.

The Late Show? 10 million on YouTube and 3.7 million on Instagram.

It’s not just losing the attention war, but also the ad war. According to The Hollywood Reporter, brands spent an estimated $32.2 million on The Late Show this year, compared to over $50 million each for Kimmel and Fallon. ABC and NBC also bundle in digital ad packages. CBS doesn’t.

With late-night linear ad spend falling from $439 million in 2018 to $221 million in 2024, it’s shocking CBS didn’t chase the audience—and the money—harder.

From all the reports, The Late Show’s downfall looks like a case of a legacy business failing to adapt fast enough.

And for the late-night shows still standing, the future’s uncertain. Even Jimmy Kimmel asked last year if they’ll still exist in a decade.

“There’s a lot to watch, and now people can watch anything at any time; they’ve got all these streaming services. It used to be Johnny Carson was the only thing on at 11:30 p.m., so everybody watched. Then David Letterman was on after Johnny, so people watched those two shows. But now there are so many options. Maybe more significantly, the fact that people are easily able to watch your monologue online the next day really cancels out the need to watch it when it’s on the air. And once people stop watching it when it’s on the air, networks are going to stop paying for it to be made,” he said on the Politickin’ podcast.

As Kimmel noted, good programming is expensive, and appointment TV doesn’t fit this on-demand world.

Podcasts are cheaper and created for how people consume now—scrolling on phones, watching whenever they want.

That may sound like a doomed scenario, but audiences — and algorithms — are fickle. Creators have to stay nimble, and legacy media must evolve.

At the end of the day, content is still king. Late-night isn’t dead — it’s evolving. The shows still deliver; the challenge is distributing and monetizing them across every platform that matters.


Mastering Manoa Climb: My Personal Victory Over One Steep Hill

There are steeper hills.

For me, though, Manoa Climb has been torture.

I’ve done a lot of biking over the last couple of years, almost 500 miles this year. I struggle with a few hills and often walk, but none has been more frustrating than Manoa Climb.

It’s a roughly 5% grade covering a half-mile. I have attempted this hill many times, but I have never completed it without walking at least part of the way.

The hill always comes at the end of my rides, when I am most tired, especially after a long one.

I did 20 miles today, pacing myself in the hopes I would finally conquer this hill.

The base of the hill is especially steep and usually where I run out of pedal power. Today, I pushed hard up that base, motivated by a few walkers. It’s always more motivating when there’s an audience. I knew in my heart that getting halfway up the hill was key.

When I got to the bend in the road, about a third of the way up, I felt the momentum building. This time I kept pedaling, and I finally reached the top without stopping.

So whatever hilltop you are trying to reach, I hope my little story inspires you. Keep climbing!

Challenges and Frustrations of New York City Commuters with Amtrak and NJ Transit

Yeah, that’s me. Stuck on an Amtrak train this summer. Over the last 16 years, I have been a New York City commuter who has relied on Amtrak and NJ Transit. This year is the worst that I can remember. I’ve been stuck on trains for hours at a time and even been forced to stay in the city because the delay was so long that I would have needed to turn around almost as soon as I got home.

The New York Times provides a great analysis of how we got to this point. A lot of people like to blame current management, but the real problem with Amtrak began years ago, when leadership from Washington to New York City failed to plan for the future. The century-old tunnels that urgently need replacement should have been addressed 20, even 30, years ago. The electrical system, which dates back to the 1930s, should have been upgraded decades ago as well. Instead, politicians kicked the can down the road, and here we are with a failing Amtrak rail system that, by extension, also impacts NJ Transit.

And it’s not just the tunnels and wires, but also the trains. Many of the engines are decades old, according to conductors with whom I have spoken. The bathrooms are often a hot mess. I’ve been in cars where the doors don’t lock or the toilets don’t flush. Talk about gross.

As extreme heat takes hold during the summer, the delays just get worse. The old trains and infrastructure just can’t handle it. While Amtrak has been better since the June meltdown, I still get a daily reminder of potential delays.

NJ Transit is another story; its commuters suffered more hell on the rails this past week. They too have endured weeks of frustration, and as my colleague NJ Burkett reported earlier this month, we are still years away from real change.

The frustration for many commuters is not just the delay or the cancellation but the lack of contingency plans when something goes haywire. A few weeks back, a train from Philadelphia to Boston got stuck south of Trenton at 1:00 a.m. Those passengers sat on that train until the next one came along – my morning train, nearly six hours later. That is just wrong, but there is no contingency plan to get stranded commuters off a train and onto another form of transportation, unless they are stuck at a station, where there may at least be alternatives.

The immediate future likely holds more delays, cancellations, and frustration, and it is clear Amtrak is a long way from where we need it to be. Let’s just hope nothing else stands in the way of building the new tunnel and finishing the Portal Bridge replacement, or the wait for a better tomorrow will only get longer.

The Media Industry Is Not Dead

This New York Times article about how the media industry is losing its future is pretty doom-and-gloom, but I’d counter that media industry revenue continues to grow, hitting all-time highs year after year. It’s certainly more competitive than ever, but I’d rather have an industry with a wealth of opportunities than one with only a few. And how amazing is it to be alive and working in an industry fueled by such rapid technological change? Look how far we have come in such a short amount of time!

The article reminded me of Bob Iger’s book (paid link), his thoughts on disruption, and why many businesses have failed. He wrote, “Courage. The foundation of risk-taking is courage, and in ever-changing, disrupted businesses, risk-taking is essential, innovation is vital, and true innovation occurs only when people have courage. This is true of acquisitions, investments, and capital allocations, and it particularly applies to creative decisions. Fear of failure destroys creativity.”

We can’t be afraid of the future. Change may be disruptive to how things are, but how we adapt makes growth possible.