Axel Springer, one of the largest media companies in Europe, is collaborating with OpenAI to integrate journalism into ChatGPT's artificial intelligence (AI) technologies, the German publisher said in a statement on its blog on Dec. 13.
The collaboration involves using content from Axel Springer's media brands to advance the training of OpenAI's large language models. It aims to deliver a better ChatGPT user experience with up-to-date, authoritative content across diverse topics, along with increased transparency through attribution and links to the full articles.
Generative AI chatbots have long grappled with factual accuracy, sometimes producing false information, commonly known as "hallucinations." Initiatives to reduce these AI hallucinations were announced in June in a post on OpenAI's website.
AI hallucinations occur when artificial intelligence systems generate factually incorrect information that is misleading or unsupported by real-world data. Hallucinations can manifest in various forms, such as producing false information, inventing nonexistent events or people, or providing inaccurate details about certain topics.
The combination of AI and journalism has presented challenges, including concerns about transparency and misinformation. An Ipsos Global study revealed that 56% of Americans and 64% of Canadians believe AI will exacerbate the spread of misinformation, and globally, 74% think AI facilitates the creation of realistic fake news.
The partnership between OpenAI and Axel Springer aims to ensure that ChatGPT users can generate summaries from Axel Springer's media brands, including Politico, Business Insider, Bild, and Die Welt.
However, the potential for AI to combat misinformation is also being explored, as seen with tools like AI Fact Checker and Microsoft's integration of GPT-4 into its Edge browser.
The Associated Press has responded to these concerns by issuing guidelines restricting the use of generative AI in news reporting, emphasizing the importance of human oversight.
In October 2023, a team of scientists from the University of Science and Technology of China and Tencent's YouTu Lab developed a tool to combat "hallucinations" by artificial intelligence (AI) models.