NAB CEO Presses for AI Protections for Radio, TV

LeGeyt: “Broadcasters’ expressive content is particularly valuable for AI ingestion precisely because it is vetted and trusted.”


The president/CEO of the National Association of Broadcasters called for more protections for journalism and broadcasters at a forum about artificial intelligence.


He said content created by broadcasters is particularly vulnerable to misuse by AI “precisely because it is vetted and trusted.”

Curtis LeGeyt participated in the AI Insight Forum, a roundtable bipartisan discussion hosted by Sen. Chuck Schumer (D-N.Y.) on Wednesday.

According to text of his remarks provided by NAB, LeGeyt emphasized the job local broadcasters do in providing news, sports and entertainment as well as the role they play as “first informers and emergency lifelines.”

He said this is particularly important in light of the glut of misinformation online and lack of clear guardrails. In many cases, he said, broadcasters are important gatekeepers in combatting such misinformation.

“The nation’s broadcasters represent one of the last bastions of truly local, unbiased journalism,” LeGeyt said. “Study after study shows that local broadcasters are the most trusted source of news and information. And [broadcasters’] unique community connection and role as a lifeline during times of emergency truly sets us apart from other mediums, especially when the internet and cellular wireless networks fail.”

LeGeyt reminded those in attendance — which included senators and members of the entertainment community — that broadcast news services are freely available over the air in every community in the country, but that local news production is costly. He quoted a statistic from the all-news WTOP(FM) in Washington, which is said to spend more than $12 million a year to run its newsrooms.

He said broadcasters are facing an onslaught of AI challenges because stations must spend an increasing amount of time vetting stories and verifying sources created by AI.

He noted the significant number of fake photos and videos released on social media after the attacks on Israel in October. One network sifted through thousands of images and videos and found that only 10% of them were deemed usable or authentic, he said.

“The proliferation of easy-to-use AI tools and lack of legal guardrails are creating a perfect misinformation storm,” he said, noting that nearly 70% of Americans report coming across fake news on social media.

He shared examples of broadcasters working to combat online misinformation such as CBS News launching a unit called CBS News Confirmed to investigate misinformation and deepfakes.

“These necessary efforts further increase the cost of providing our audiences with the trusted news and information they rely on and heighten the need to ensure that broadcasters are fairly compensated when our programming content is accessed through both existing and emerging tech platforms, including AI systems,” LeGeyt said.

He also raised the issue of unauthorized capture of copyrighted material without compensation, a trend that has the potential to hinder investment in journalism.

That ingestion of copyrighted content by AI platforms is causing significant harm, he said, as it relies on broadcasters’ work product without compensating them.

“Broadcasters’ expressive content is particularly valuable for AI ingestion precisely because it is vetted and trusted,” he said.

“If broadcasters are not compensated for use of their valuable, expressive works, they will be less able to invest in local news content creation. Having fewer resources to invest in local news and content would negatively impact the communities served by those stations.”

Specifically, there is growing concern regarding AI tools being used to create images, video and audio that replicate the likeness of a trusted radio or TV personality to spread misinformation or perpetuate fraud.

“The use of AI to doctor, manipulate and distort information is a significant and growing problem that must be addressed in balance with the First Amendment,” he said.

He mentioned a video clip of a routine discussion between two broadcast TV anchors that was edited and manipulated to create a hateful, racist, anti-Semitic rant. The underlying AI technology used to perpetrate these events must be held accountable, he said.

LeGeyt concluded that the use of AI to manipulate and distort information is a significant, burgeoning problem that needs to be addressed. He said the trust, integrity and authenticity of journalism are at stake and that the country needs to continue to have robust, meaningful conversations about the problem.


The author is the former editor of TV Technology and a long-time contributor to Radio World. She has served as editor-in-chief of two housing finance magazines and written about topics as varied as broadcasting, education, chess, music, sports and the connected home environment.