Technology

Majority of Americans think generative AI programs should credit sources they rely on

Overall, 54% of Americans say artificial intelligence programs that generate text and images, like ChatGPT and DALL-E, need to credit the sources they rely on to produce their responses. A much smaller share (14%) says the programs don’t need to credit sources, according to a new Pew Research Center survey. About a third say they’re not sure.

Source: Many Americans think generative AI programs should credit the sources they rely on

Can journalism survive AI? 

Can journalism survive artificial intelligence (AI)? The answer will depend on whether journalism can adapt its business models to the AI era, and on whether policymakers intervene to correct market imbalances, enforce intellectual property rights, and ensure that journalism has a fighting chance in the era of generative AI. Last year alone, the U.S. journalism industry slashed 2,700 jobs, and 2.5 newspapers closed each week on average.

Source: Can journalism survive AI? | Brookings

Why watermarking won’t work

Can detection be universal without empowering those with access to exploit it? If not, how can we prevent misuse of the system itself by those who control it? Once again, we find ourselves back at square one, asking: who gets to decide what is real? Without standards and public education, AI watermarking will serve as little more than a plaster, failing to address the underlying problem of misinformation.

Source: Why watermarking won’t work

OpenAI wants to use video generation model Sora to break into Hollywood 

The first people to gain access to Sora are “red teamers” looking for vulnerabilities in the software, but OpenAI is also giving Hollywood notables advance access so they can explore the ways the generative AI technology could assist their work. According to Bloomberg, “a few big-name actors and directors” have been invited to take Sora for a test drive.

Source: OpenAI wants to use video generation model Sora to break into Hollywood – Tubefilter

AI Spending to Surpass $13 Billion by 2028, Media Analysts Predict

AI spending is expected to crest above $13 billion by 2028, with the spread falling fairly evenly across analytics, development/delivery and customer experiences like personalization and discovery, media analysts announced at a Series Mania presentation on Thursday. However, the analysts do not anticipate the content creation apocalypse that has underscored much AI coverage of late.

Source: AI Spending to Surpass $13 Billion by 2028, Media Analysts Predict

ELVIS Act signed into law in Tennessee to protect artists’ voice and likeness from AI

The bipartisan ELVIS Act was signed into law on Thursday (March 21) by Tennessee Governor Bill Lee at a honky-tonk in Nashville. The act officially goes into effect on July 1 and updates the state’s existing right of publicity law. The bill was introduced in January to amend Tennessee’s Protection of Personal Rights law to include protections for songwriters, performers, and music industry professionals against the misuse of their voices by artificial intelligence (AI).

Source: ELVIS Act signed into law in Tennessee to protect artists’ voice and likeness from the misuse of AI

House TikTok bill gives ByteDance 6 months to sell. That’s unlikely.

A forced sale of TikTok within 180 days, as House-passed legislation requires, would be one of the thorniest and most complicated transactions in corporate history, posing financial, technical and geopolitical challenges that experts said could render a sale impractical and increase the likelihood the app will be banned nationwide. A sale would require severing a company worth potentially $150 billion from its technical backbone, all while facing legal challenges and resistance from China.

Source: House TikTok bill gives ByteDance 6 months to sell. That’s unlikely.

France Fines Google Amid A.I. Dispute With News Media

French regulators on Wednesday said Google failed to notify news publishers that it was using their articles to train its artificial intelligence algorithms, part of a wider ruling against the company for its negotiating practices with media outlets. The disclosure by the French competition authority accompanied a fine of €250 million, or about $270 million, for failing to negotiate fair licensing deals with media companies over the use of their article links in search results.

Source: France Fines Google Amid A.I. Dispute With News Media

Why AI watermarks miss the mark in preventing misinformation

Watermarking has been floated by Big Tech as one of the most promising methods to combat the escalating AI misinformation problem online. But so far, the results don’t seem promising, according to experts and a review of misinformation conducted by NBC News. The technologies are still in their infancy and deployed only in a limited way, yet watermarking has already proven easy to bypass.

Source: Why AI watermarks miss the mark in preventing misinformation

DeepFake detectors have become indispensable

For around three years, a field of research has been developing around the detection of DeepFakes. There are two main approaches. The first involves spotting suspicious behaviour by a person in a video. An AI can be fed a large number of authentic videos of a celebrity, so that it learns to immediately detect any anomalies in their gestures or speech. The second, more general technique involves identifying the differences between DeepFakes and real videos.
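
The second, more general technique boils down to supervised binary classification: train a model on labeled real and fake footage and have it score new frames. As a purely illustrative sketch (the tiny CNN, the 224×224 face-crop input, and the random stand-in data are assumptions for illustration, not details from the article), a minimal version in PyTorch might look like this:

```python
# Illustrative sketch of a general DeepFake detector: a binary classifier
# that scores individual video frames as real (0) or fake (1).
# Architecture and data here are placeholders, not the article's method.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Tiny CNN that produces a single 'fake' logit per frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (batch, 32, 1, 1)
        )
        self.head = nn.Linear(32, 1)          # raw logit; sigmoid gives P(fake)

    def forward(self, frames):                # frames: (batch, 3, H, W)
        return self.head(self.features(frames).flatten(1))

model = FrameClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# One training step on a batch of labeled frames (1 = DeepFake, 0 = authentic).
frames = torch.randn(8, 3, 224, 224)          # stand-in for cropped face frames
labels = torch.randint(0, 2, (8, 1)).float()

optimizer.zero_grad()
loss = loss_fn(model(frames), labels)
loss.backward()
optimizer.step()
```

The person-specific first approach differs mainly in its training data: rather than a generic real-versus-fake corpus, the model learns one individual’s gestures and speech patterns from authentic footage and flags deviations as anomalies.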

Source: Artificial intelligence: DeepFake detectors have become indispensable
