The use of artificial intelligence in journalism raises concerns about transparency and truth-telling
Sports Illustrated, once known for its sterling writing, is facing reputational damage after articles on its website were found to be credited to authors who apparently don’t exist. While the publication denied reports that the stories were generated by artificial intelligence (AI), the incident highlights the ethical dilemma facing media companies as they experiment with AI in an industry built on truth and transparency. This is not the first time a media company has drawn backlash for using AI in journalism, raising questions about the future of the industry and the need for clear guidelines.
Conflicting accounts of what happened
According to a report by Futurism, Sports Illustrated published product reviews attributed to authors who could not be identified. The portrait of one such author, Drew Ortiz, was found on a website that sells AI-generated headshots. When questioned, Sports Illustrated removed all of the authors with AI-generated portraits from its website without explanation. An unnamed source at the magazine confirmed that AI had been used in content creation, contradicting the magazine’s denial.
Not the first such situation
Sports Illustrated is not the first media company to face criticism for using AI in journalism. Gannett, the newspaper chain, paused an experiment earlier this year in which AI was used to generate articles on high school sports events after errors were discovered; the articles were published under the byline “LedeAI.” Similarly, CNET used AI to create explanatory news articles about financial-services topics, attributed to “CNET Money Staff,” without disclosing the technology’s involvement until the experiment was discovered and written about by other publications.
The importance of transparency
The controversy surrounding the use of AI in journalism highlights the need for transparency. Media companies should clearly disclose the role of technology in content creation to maintain trust with their audiences. While some companies, like BuzzFeed, have been upfront about their experiments with AI, others have been less forthcoming. The Associated Press, for example, has used automation to help produce articles since 2014 and includes a note at the end of each such story explaining the role of technology in its production.
The ethical dilemma
The use of AI in journalism raises ethical concerns. Journalists are expected to adhere to principles of truth-telling and transparency, but AI blurs the line between human and machine-generated content. As the technology grows more sophisticated, readers may find it increasingly difficult to discern whether an article was written by a person or produced by a machine. This challenges the very essence of journalism and the trust that readers place in news organizations.
The future of AI in journalism
While the use of AI in journalism has its benefits, such as generating content quickly and efficiently, it also poses risks. The industry must navigate the ethical implications of using AI while maintaining the values of truth and transparency. Clear guidelines and standards should be established to ensure that AI is used responsibly and that readers are not deceived. As AI continues to advance, media organizations must find a balance between embracing new technology and upholding the principles of journalism.
The controversy surrounding Sports Illustrated’s AI-generated articles raises important questions about the future of journalism in the age of artificial intelligence. The incident is a reminder that transparency and truth-telling are essential to keeping readers’ trust. As AI continues to evolve, media companies must confront the ethical dilemmas it poses while upholding the values on which journalism is built.