Critical Information Analysis in Relation to the Threat of Disinformation in Digital Media and AI Hallucinations – An Educational Context

Katarzyna Borawska-Kalbarczyk

Abstract


Introduction: The rapid development of the Internet has led to numerous complications in the use of digital information. It has enabled the manipulation of messages, the deliberate creation of false content, and new forms of information distortion driven by advances in artificial intelligence (AI) tools.
Research Aim: This article aims to analyze the nature and causes of selected disruptions in digital information (fake news, disinformation, and AI hallucinations). The author emphasizes the importance of critically analyzing media content in the face of false information, a topic explored in the final section of the text.
Evidence-based Facts: In recent years, the number of publications on disinformation in digital media has increased significantly. Experts consider it one of the most serious global threats in the digital space. Additionally, there is growing concern about AI-generated hallucinations, such as those produced by ChatGPT. Studies in the literature focus on the essence of disinformation, the reasons for its spread, and its consequences, particularly its role in limiting critical thinking.
Summary: Given the threats posed by false information, special attention should be directed toward children and adolescents, the most active group of Internet users. Awareness of the causes of false information, the dangers of consuming it, and the ways to counteract the harmful effects of disinformation highlights the need to enhance the digital skills of young people. These competencies enable students to protect themselves effectively from information manipulation and to build resilience against propaganda and false narratives.


Keywords


fake news; disinformation; information hallucinations; AI literacy; critical thinking



DOI: http://dx.doi.org/10.17951/lrp.2025.44.3.61-76
Date of publication: 2025-09-29 20:43:55
Date of submission: 2025-02-11 14:29:14



Copyright (c) 2025

This work is licensed under a Creative Commons Attribution 4.0 International License.