
Macdonald-Kelce Library Blog

Fake Sources in ChatGPT

by Nathan Schwartz - University of Tampa on 2023-05-30T12:36:00-04:00 in Business: Cybersecurity, Business: Marketing, English & Writing, Instructional Design, Journalism, Law | 1 Comment

Welcome to ChatGPT

This awesome little guy is here to help, right? He can write papers, turn out flowery prose, and do some very creative things. Unfortunately, it could all be fake! That is hard to square with the fact that ChatGPT can pass law exams and numerous other tests, and GPT-4 looks even more promising on standardized exams. GPT stands for generative pre-trained transformer, a type of large language model. Basically, it has processed millions of words of human language and learned to render a plausible flow of text.

Be mindful that what ChatGPT says might be fake.

Yes, it's excellent for poetry and imaginative stories, but not for your assignment. Steven A. Schwartz (no relation), a thirty-year veteran New York lawyer, used ChatGPT to write a legal brief that he submitted to the court. He even asked ChatGPT whether the citations were real. ChatGPT lied to him directly, and he believed that lie. He submitted the brief with quotes and citations supporting his case and now faces possible sanctions at a hearing on June 8, 2023. "US District Judge Kevin Castel confirmed the development, adding that 'six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations'" (Sarkar, 2023).

A few months ago, a UT faculty member asked me to verify sources that ChatGPT had presented. I found those sources to be fake as well, which I confirmed by backtracking each journal to the year and article cited. In some cases, ChatGPT gave links to articles and DOIs that led nowhere, or to the wrong article. Once I had verified that the sources were indeed fake, I asked ChatGPT, and it said they were real sources! When I pressed that they were fake, it apologized and said it was only as accurate as its source data. I do not think this is a source-data problem, but rather creativity with language: the model invents the text it is producing, and that invention extends to the quotes and citations inside it. ChatGPT knows how to make things sound very convincing, but it appears to have no qualms about making up quotes and citations.

Verify all quotes and citations.

You can verify a source by Googling the article or by using the library's journal search and other tools. If you cannot verify something ChatGPT presents, it is safest to assume it is fake. ChatGPT may offer great imaginative story ideas, poetry, and plot starts, but be cautious about the facts it presents. Large language models know how to mimic human speech, and they will just as readily mimic facts, producing claims that sound right but may not be. If you want to read more about when you should and should not use ChatGPT, see Aaron Welborn's article from Duke University Libraries.
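
If a ChatGPT-supplied citation includes a DOI, one quick way to check a whole list of them is to look each DOI up in the Crossref REST API, which returns a 404 for identifiers it has no record of. Below is a minimal Python sketch of that approach; the check_doi helper name and the example DOI are hypothetical, and even a DOI that resolves can point to a different article than the one cited, so always compare the returned title and authors against the citation yourself.

import requests

def check_doi(doi):
    """Look up a DOI in the Crossref REST API and report what it resolves to."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if response.status_code == 404:
        # Crossref has no record of this DOI -- a strong sign the citation is fabricated.
        print(f"{doi}: not found in Crossref")
        return
    response.raise_for_status()
    title = response.json()["message"].get("title", ["<no title>"])[0]
    # A DOI can exist and still point to a different article than the one cited,
    # so compare this title (and the authors) against the citation by hand.
    print(f"{doi}: resolves to '{title}'")

# Hypothetical DOI copied from a ChatGPT-generated reference list.
check_doi("10.1234/example.2023.001")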

You can read updates on the case here: https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/

Marut, R. (2023, May 28). Lawyer apologizes for fake court citations from ChatGPT. CNN. https://www.cnn.com/2023/05/27/business/chat-gpt-avianca-mata-lawyers/index.html

Sarkar, S. (2023, May 29). A New York lawyer is in trouble for using ChatGPT to write legal brief. The Tech Portal. https://thetechportal.com/2023/05/29/new-york-lawyer-chatgt-legal-brief/

Welborn, A. (2023, March 9). ChatGPT and fake citations. Duke University Libraries. https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/
Nathan Schwartz - University of Tampa 2025-02-19T14:51:26-05:00

Morgan & Morgan's AI troubles were sparked in a lawsuit claiming that Walmart was involved in designing a supposedly defective hoverboard toy that allegedly caused a family's house fire. Despite being an experienced litigator, Rudwin Ayala, the firm's lead attorney on the case, cited eight cases in a court filing that Walmart's lawyers could not find anywhere except on ChatGPT.

AI making up cases can get lawyers fired, scandalized law firm warns
Feb 19, 2025, Ars Technica
https://arstechnica.com/tech-policy/2025/02/ai-making-up-cases-can-get-lawyers-fired-scandalized-law-firm-warns/

