The Alignment Problem as a Cultural and Legal Challenge. Artificial Intelligence, Interpretability, and the Search for Meaning

Karol Kasprowicz

Abstract


The article analyses the AI alignment problem, which constitutes a fundamental challenge of cross-cultural communication between human interpretive frameworks and algorithmic optimisation. The author argues that effective AI alignment requires the integration of cultural sensemaking practices and legal frameworks, which differ across societies. The analysis leads to the conclusion that current regulatory attempts, including the European Union's Artificial Intelligence Act (EU AI Act) and national AI strategies, face three interrelated challenges: ensuring the interpretability of algorithmic decisions, managing the indeterminism inherent in AI systems, and resolving controversies over knowledge acquisition. Through an analysis of emerging phenomena such as AI agents, the "capture" of regulation by global technology corporations (Big Tech), and the rise of so-called AI nationalism, the author demonstrates that alignment failures stem not only from technical limitations but also from insufficient attention to diverse cultural logics of interpretation. The author proposes a framework that enables AI systems to adapt to different contexts while preserving their core functionality. He concludes that solving the alignment problem requires computational cultural modelling capable of navigating value pluralism, and warns that unless technical safety mechanisms are integrated with societies' cultural frameworks, AI systems may become tools of exploitation and control rather than partners serving the social good.
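The abstract's closing call for "computational cultural modelling" can be made concrete with a minimal sketch. The example below is not drawn from the article; every dimension name and score in it is an invented assumption. It illustrates one approach found in the works cited in the bibliography (e.g. Masoud et al. and Tao et al., who evaluate language models against Hofstede's cultural dimensions): representing both a culture and a model's extracted value profile as vectors of dimension scores and comparing them by cosine similarity.

```python
# A minimal, hypothetical sketch of "computational cultural modelling":
# score an AI system's value profile against Hofstede-style cultural
# dimension vectors. All names and numbers are illustrative assumptions,
# not the article's method.

from math import sqrt

# Hofstede-style dimensions, normalised to [0, 1] (illustrative values only).
CULTURAL_PROFILES = {
    "culture_A": {"individualism": 0.9, "power_distance": 0.3, "uncertainty_avoidance": 0.4},
    "culture_B": {"individualism": 0.2, "power_distance": 0.8, "uncertainty_avoidance": 0.7},
}

def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two dimension->score mappings."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def best_matching_culture(model_profile: dict) -> tuple:
    """Return the cultural profile that the model's value profile resembles most."""
    scores = {c: cosine_similarity(model_profile, p) for c, p in CULTURAL_PROFILES.items()}
    return max(scores.items(), key=lambda kv: kv[1])

# Example: a (hypothetical) value profile extracted from model responses.
model_profile = {"individualism": 0.8, "power_distance": 0.4, "uncertainty_avoidance": 0.5}
culture, score = best_matching_culture(model_profile)
print(f"Closest cultural profile: {culture} (similarity {score:.2f})")
```

Such a vector comparison is only a starting point: the pluralism the author describes would require the system to detect which cultural context applies and adjust its behaviour accordingly, rather than optimise against a single fixed profile.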


Keywords


alignment; artificial intelligence; interpretability; regulation; sensemaking; culture

Full text:

PDF (English)

Bibliography


LITERATURE

Acemoglu D., The Simple Macroeconomics of AI, “Economic Policy” 2025, vol. 40(121), DOI: https://doi.org/10.1093/epolic/eiae042.

Asimov I., Runaround, “Astounding Science Fiction” 1942, no. 3.

Becker A., More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, New York 2025.

Bennett M., A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, New York–Boston 2023.

Bengio Y., Government Interventions to Avert Future Catastrophic AI Risks, “Harvard Data Science Review” 2024, no. 5 (Special Issue), DOI: https://doi.org/10.1162/99608f92.d949f941.

Biagini G., Towards an AI-Literate Future: A Systematic Literature Review Exploring Education, Ethics, and Applications, “International Journal of Artificial Intelligence in Education” 2025, DOI: https://doi.org/10.1007/s40593-025-00466-w.

Bostrom N., Deep Utopia: Life and Meaning in a Solved World, 2024.

Bostrom N., Superintelligence: Paths, Dangers, Strategies, Oxford 2014.

Christian B., The Alignment Problem: Machine Learning and Human Values, New York 2020.

Christian B., Griffiths T., Algorithms to Live By: The Computer Science of Human Decisions, Dublin 2017.

Clark A., Chalmers D.J., The Extended Mind, “Analysis” 1998, vol. 58(1), DOI: https://doi.org/10.1093/analys/58.1.7.

Coeckelbergh M., Why AI Undermines Democracy and What to Do About It, Cambridge 2024.

Conseil d’État, Artificial Intelligence and Public Action: Building Trust, Serving Performance, Paris 2022.

Crawford K., Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, New Haven 2021, DOI: https://doi.org/10.12987/9780300252392.

Czubkowska S., Bógtechy. Jak wielkie firmy technologiczne przejmują władzę nad Polską i światem, Kraków 2025.

De Vos M., The European Court of Justice and the March Towards Substantive Equality in European Union Anti-discrimination Law, “International Journal of Discrimination and the Law” 2020, vol. 20(1), DOI: https://doi.org/10.1177/1358229120927947.

Dell’Acqua F., Ayoubi C., Lifshitz H., Sadun R., Mollick E., Mollick L., Han Y., Goldman J., Nair H., Taub S., Lakhani K., The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise, “Harvard Business School Strategy Unit Working Paper” 2025, no. 25-043, DOI: https://doi.org/10.3386/w33641.

Dolniak P., Kuźma T., Ludwiński A., Wasik K., Sztuczna inteligencja w wymiarze sprawiedliwości. Między prawem a algorytmami, Warszawa 2024.

Elliott A., Making Sense of AI: Our Algorithmic World, Cambridge 2022.

Elliott A., The Culture of AI: Everyday Life and the Digital Revolution, London 2019, DOI: https://doi.org/10.4324/9781315387185.

Fishman E., Chokepoints: American Power in the Age of Economic Warfare, New York 2025.

Frazer K., A Different Alignment Problem: AI, the Rule of Law, and Outdated Legal Institutions and Practices, “Journal of Business & Technology Law” 2023, vol. 19, DOI: https://doi.org/10.2139/ssrn.4849429.

G’sell F., Regulating under Uncertainty: Governance Options for Generative AI, 2024, DOI: https://doi.org/10.2139/ssrn.4918704.

Gadamer H.-G., Truth and Method, London 1989.

Goodman B., Flaxman S., European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’, “AI Magazine” 2017, vol. 38(3), DOI: https://doi.org/10.1609/aimag.v38i3.2741.

Haidt J., The Righteous Mind: Why Good People Are Divided by Politics and Religion, New York 2012.

Hao K., Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, New York 2025.

Hofstede G., Hofstede G.J., Minkov M., Cultures and Organizations: Software of the Mind, New York 2010.

Horn E., The Future as Catastrophe: Imagining Disaster in the Modern Age, New York 2018, DOI: https://doi.org/10.7312/horn18862.

Hutter R., Hutter M., Chances and Risks of Artificial Intelligence – a Concept of Developing and Exploiting Machine Intelligence for Future Societies, “Applied System Innovation” 2021, vol. 4(2), DOI: https://doi.org/10.3390/asi4020037.

Karpiuk-Wawryszuk N., Kasprowicz K., Legal Cultures and Strategies for Implementing Artificial Intelligence Regulations: Case Studies of the United States, People’s Republic of China and European Union, “Teka Prawnicza” 2025, vol. 18(1), DOI: https://doi.org/10.32084/tkp.9619.

Kirk H.R., Gabriel I., Summerfield C., Vidgen B., Hale S.A., Why Human–AI Relationships Need Socioaffective Alignment, “Humanities and Social Sciences Communications” 2025, vol. 12(728), DOI: https://doi.org/10.1057/s41599-025-04532-5.

Korbak T. et al., Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety, 15.7.2025, https://arxiv.org/abs/2507.11473 (access: 20.6.2025).

LaCroix T., Artificial Intelligence and the Value Alignment Problem: A Philosophical Introduction, Peterborough 2025.

LaCroix T., Luccioni A., Metaethical Perspectives on ‘Benchmarking’ AI Ethics, “AI and Ethics” 2025, vol. 5, DOI: https://doi.org/10.1007/s43681-025-00703-x.

Lem S., Golem XIV, Kraków 1981.

Lessig L., Code and Other Laws of Cyberspace, New York 1999.

Lessig L., Code Is Law: On Liberty in Cyberspace, “Harvard Magazine” 2000, vol. 102(3).

Levesque H.J., On Our Best Behaviour, “Artificial Intelligence” 2014, vol. 212, DOI: https://doi.org/10.1016/j.artint.2014.03.007.

Locke J., Second Treatise of Government, 1689.

Mamak K., Whether to Save a Robot or a Human: On the Ethical and Legal Limits of Protections for Robots, “Frontiers in Robotics and AI” 2021, vol. 8, DOI: https://doi.org/10.3389/frobt.2021.712427.

Marcus G., Taming Silicon Valley: How We Can Ensure That AI Works for Us, Cambridge 2024.

Marcus G., Davis E., Rebooting AI: Building Artificial Intelligence We Can Trust, New York 2019.

Masoud R., Liu Z., Ferianc M., Treleaven P.C., Rodrigues M.R., Cultural Alignment in Large Language Models: An Explanatory Analysis Based on Hofstede’s Cultural Dimensions, [in:] Proceedings of the 31st International Conference on Computational Linguistics, eds. O. Rambow, L. Wanner, M. Apidianaki, H. Al-Khalifa, B.D. Eugenio, S. Schockaert, Abu Dhabi 2025.

Mollick E., Co-Intelligence: Living and Working with AI, New York 2024.

Murgia M., Code Dependent: How AI Is Changing Our Lives, London 2024.

Narayanan A., Kapoor S., AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, Princeton 2024, DOI: https://doi.org/10.1515/9780691249643.

O’Connor C., Weatherall J.O., The Misinformation Age: How False Beliefs Spread, New Haven 2020, DOI: https://doi.org/10.2307/j.ctv8jp0hk.

Olejnik Ł., Propaganda: From Disinformation and Influence to Operations and Information Warfare, New York 2025, DOI: https://doi.org/10.1201/9781003499497.

Park D.H., Cho E., Lim Y., A Tough Balancing Act: The Evolving AI Governance in Korea, “East Asian Science, Technology and Society: An International Journal” 2024, vol. 18(2), DOI: https://doi.org/10.1080/18752160.2024.2348307.

Pasquinelli M., The Eye of the Master: A Social History of Artificial Intelligence, London–New York 2023.

Payne K., The Geopolitics of AI, “The RUSI Journal” 2024, vol. 169(5), DOI: https://doi.org/10.1080/03071847.2024.2413274.

Peters U., Carman M., Cultural Bias in Explainable AI Research: A Systematic Analysis, “Journal of Artificial Intelligence Research” 2024, vol. 79, DOI: https://doi.org/10.1613/jair.1.14888.

Przegalinska A., Triantoro T., Converging Minds: The Creative Potential of Collaborative AI, Boca Raton 2024, DOI: https://doi.org/10.1201/9781032656618.

Russell S.J., Human Compatible: Artificial Intelligence and the Problem of Control, New York 2019.

Russell S.J., Norvig P., Artificial Intelligence: A Modern Approach, New Jersey 2010.

Saha D., Chattopadhyay A., Singh A.K., Talukdar P.P., Towards Culturally-Aware and Explainable AI: A Survey, [in:] Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society, 2024.

Sautoy M. du, The Creativity Code: Art and Innovation in the Age of AI, Cambridge 2020, DOI: https://doi.org/10.4159/9780674240407.

Simon Z.B., History in Times of Unprecedented Change: A Theory for the 21st Century, London 2019, DOI: https://doi.org/10.5040/9781350095083.

Simon Z.B., The Epochal Event: Transformations in the Entangled Human, Technological and Natural Worlds, Cham 2020, DOI: https://doi.org/10.1007/978-3-030-47805-6.

Suleyman M., Bhaskar M., The Coming Wave: AI, Power and the Twenty-First Century’s Greatest Dilemma, London 2023, DOI: https://doi.org/10.17104/9783406814143.

Sunstein C.R., Artificial Intelligence and the First Amendment, “George Washington Law Review” 2024, vol. 92(6).

Tao Y., Viberg O., Baker R.S., Kizilcec R.F., Cultural Bias and Cultural Alignment of Large Language Models, “PNAS Nexus” 2024, vol. 3(9), DOI: https://doi.org/10.1093/pnasnexus/pgae346.

Torres E.P., Human Extinction: A History of the Science and Ethics of Annihilation, New York 2024, DOI: https://doi.org/10.4324/9781003246251.

Turing A., Computing Machinery and Intelligence, “Mind” 1950, vol. 59(236), DOI: https://doi.org/10.1093/mind/LIX.236.433.

United Nations Secretary-General, Our Common Agenda, 2021.

Whorf B., Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf, ed. J.B. Carroll, Cambridge 1956.

Zajdel J., Limes Inferior, Warszawa 1982.

Załuski W., Game Theory in Jurisprudence, Kraków 2014.

Zuboff S., The Age of Surveillance Capitalism, New York 2019.

Zucman G., The Hidden Wealth of Nations: The Scourge of Tax Havens, Chicago 2015, DOI: https://doi.org/10.7208/chicago/9780226245560.001.0001.

ONLINE SOURCES

AI Security Institute, The Alignment Project, https://alignmentproject.aisi.gov.uk (access: 20.7.2025).

Amodei D., The Urgency of Interpretability, 2025, https://www.darioamodei.com/post/the-urgency-of-interpretability (access: 14.5.2025).

Ariai F., Demartini G., Natural Language Processing for the Legal Domain: A Survey of Tasks, Datasets, Models, and Challenges, 25.10.2024, https://arxiv.org/abs/2410.21306 (access: 5.8.2025).

Armstrong S., Levinstein B., Low Impact Artificial Intelligences, 30.5.2017, https://arxiv.org/abs/1705.10720 (access: 13.8.2025).

ARC Prize, https://arcprize.org/leaderboard (access: 2.7.2025).

Bengio Y. (ed.), International AI Safety Report: The International Scientific Report on the Safety of Advanced AI, January 2025, https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf (access: 17.10.2025).

Bravansky M., Trhlik F., Barez F., Rethinking AI Cultural Alignment, 7.3.2025, https://arxiv.org/abs/2501.07751 (access: 10.6.2025).

Caputo N., Rules, Cases, and Reasoning: Positivist Legal Theory as a Framework for Pluralistic AI Alignment, 28.10.2024, https://arxiv.org/abs/2410.17271 (access: 3.4.2025).

Chen R., Arditi A., Sleight H., Evans O., Lindsey J., Persona Vectors: Monitoring and Controlling Character Traits in Language Models, 5.8.2025, https://arxiv.org/abs/2507.21509 (access: 19.10.2025).

Chollet F., Knoop M., Kamradt G., Landers B., ARC Prize 2024: Technical Report, 5.12.2024, https://arxiv.org/abs/2412.04604 (access: 2.7.2025).

Code of Practice for AI, https://code-of-practice.ai/?section=safety-security (access: 10.8.2025).

Conseil d’État, Intelligence artificielle et action publique : construire la confiance, servir la performance, 31.8.2022, https://www.conseil-etat.fr/publications-colloques/etudes/intelligence-artificielle-et-action-publique-construire-la-confiance-servir-la-performance (access: 12.7.2025).

Courts and Tribunals Judiciary, Artificial Intelligence (AI): Guidance for Judicial Office Holders, 12.12.2023, https://www.judiciary.uk/wp-content/uploads/2023/12/AI-Judicial-Guidance.pdf (access: 19.10.2025).

Crawford K., Joler V., Anatomy of an AI System: The Amazon Echo as an Anatomical Map of Human Labor, Data and Planetary Resources, https://anatomyof.ai (access: 13.8.2025).

Dell’Acqua F., Falling Asleep at the Wheel: Human/AI Collaboration in a Field Experiment on HR Recruiters, 2023, https://www.almendron.com/tribuna/wp-content/uploads/2023/09/falling-asleep-at-the-whee.pdf (access: 15.5.2025).

Everitt T., Lea G., Hutter M., AGI Safety Literature Review, 21.5.2018, https://arxiv.org/abs/1805.01109 (access: 20.5.2025).

Future of Life Institute, Pause Giant AI Experiments: An Open Letter, 22.3.2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments (access: 20.7.2025).

GitHub, Abstraction and Reasoning Corpus for Artificial General Intelligence v1 (ARC-AGI-1), https://github.com/fchollet/ARC-AGI (access: 10.8.2025).

GitHub, Supporting Open Source and Open Science in the EU AI Act, https://github.blog/wp-content/uploads/2023/07/Supporting-Open-Source-and-Open-Science-in-the-EU-AI-Act.pdf (access: 10.8.2025).

Glaese A. et al., Improving Alignment of Dialogue Agents via Targeted Human Judgements, 28.9.2022, https://arxiv.org/abs/2209.14375 (access: 20.6.2025).

Global AI Regulation Tracker, https://www.techieray.com/GlobalAIRegulationTracker (access: 20.6.2025).

Hackenburg K., Tappin B.M., Hewitt L., Saunders E., Black S., Lin H., Fist C., Margetts H., Rand D.G., Summerfield C., The Levers of Political Persuasion with Conversational AI, 18.7.2025, https://arxiv.org/abs/2507.13919 (access: 21.7.2025).

Hadfield-Menell D., Dragan A., Abbeel P., Russell S., Cooperative Inverse Reinforcement Learning, 9.6.2016, https://arxiv.org/abs/1606.03137 (access: 15.8.2025).

Hastings-Woodhouse S., Kokotajlo D., We Should Not Allow Powerful AI to Be Trained in Secret: The Case for Increased Public Transparency, 27.5.2025, https://www.aipolicybulletin.org/articles/we-should-not-allow-powerful-ai-to-be-trained-in-secret-the-case-for-increased-public-transparency (access: 20.6.2025).

Hiltzik M., Here’s the Number That Could Halt the AI Revolution in Its Tracks, 25.7.2025, https://www.latimes.com/business/story/2025-07-25/heres-the-number-that-could-halt-the-ai-revolution-in-its-tracks (access: 10.8.2025).

Horwitz J., Meta’s AI Rules Have Let Bots Hold ‘Sensual’ Chats with Kids, Offer False Medical Info, 14.8.2025, https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines (access: 15.8.2025).

Human Rights Watch, Netherlands: Landmark Court Ruling Against Welfare Fraud Detection System, 5.2.2020, https://www.hrw.org/news/2020/02/05/netherlands-landmark-court-ruling-against-welfare-fraud-detection-system (access: 12.7.2025).

Jahani E., Manning B.S., Zhang J., TuYe H.-Y., Alsobay M., Nicolaides C., Suri S., Holtz D., As Generative Models Improve, People Adapt Their Prompts, 19.7.2024, https://arxiv.org/abs/2407.14333v1 (access: 30.7.2025).

Jarovsky L., Luiza’s Newsletter, https://www.luizasnewsletter.com (access: 10.8.2025).

Jha R., Zhang C., Shmatikov V., Morris J.X., Harnessing the Universal Geometry of Embeddings, 18.5.2025, https://arxiv.org/abs/2505.12540 (access: 25.6.2025).

Kokotajlo D., Alexander S., Larsen T., Lifland E., Dean R., AI 2027, 3.4.2025, https://ai-2027.com (access: 19.10.2025).

Kostikova A., Wang Z., Bajri D., Pütz O., Paaßen B., Eger S., LLMs: A Data-Driven Survey of Evolving Research on Limitations of Large Language Models, 25.5.2025, https://arxiv.org/abs/2505.19240 (access: 20.8.2025).

Krakovna V., Orseau L., Kumar R., Martic M., Legg S., Penalizing Side Effects Using Stepwise Relative Reachability, 4.6.2018, https://arxiv.org/abs/1806.01186 (access: 14.8.2025).

LessWrong, Misalignment Classifiers: Why They’re Hard to Evaluate Adversarially, and Why We’re Studying Them Anyway, 15.8.2025, https://www.lesswrong.com/posts/jzHhJJq2cFmisRKB2/misalignment-classifiers-why-they-re-hard-to-evaluate (access: 16.8.2025).

Takeshita M., Rzepka R., Araki K., Towards Theory-based Moral AI: Moral AI with Aggregating Models Based on Normative Ethical Theory, 20.6.2023, https://arxiv.org/abs/2306.11432 (access: 5.8.2025).

Mishra R., Varshney G., Exploiting Jailbreaking Vulnerabilities in Generative AI to Bypass Ethical Safeguards for Facilitating Phishing Attacks, 16.7.2025, https://arxiv.org/abs/2507.12185 (access: 2.8.2025).

Moral Machine Platform, MIT Media Lab, https://www.moralmachine.net (access: 5.8.2025).

Narayanan A., Kapoor S., AI as Normal Technology: An Alternative to the Vision of AI as a Potential Superintelligence, 15.4.2025, https://knightcolumbia.org/content/ai-as-normal-technology (access: 15.5.2025).

Natale S., Biggio F., Arora P., Downey J., Fassone R., Grohmann R., Guzman A., Keightley E., Ji D., Obia V., Przegalinska A., Raman U., Ricaurte P., Villanueva-Mansilla E., Global AI Cultures: How a Cultural Focus Can Empower Generative Artificial Intelligence, 8.8.2025, https://cacm.acm.org/opinion/global-ai-cultures (access: 15.8.2025).

National Security Commission on Artificial Intelligence, Final Report, 2021, https://www.dwt.com/-/media/files/blogs/artificial-intelligence-law-advisor/2021/03/nscai-final-report--2021.pdf (access: 15.8.2025).

Poe R.L., Why Fair Automated Hiring Systems Breach EU Non-Discrimination Law, 7.11.2023, https://arxiv.org/abs/2311.03900 (access: 25.7.2025).

RenAIssance Foundation, The Rome Call for AI Ethics, 28.2.2020, https://www.romecall.org/the-call (access: 20.6.2025).

Schreiber M., Bias in Large Language Models – and Who Should Be Held Accountable, 13.2.2025, https://law.stanford.edu/press/bias-in-large-language-models-and-who-should-be-held-accountable (access: 10.8.2025).

United Nations Development Programme, Human Development Report 2025: A Matter of Choice: People and Possibilities in the Age of AI, 2025, https://hdr.undp.org/system/files/documents/global-report-document/hdr2025reporten.pdf (access: 10.8.2025).

Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A.N., Kaiser Ł., Polosukhin I., Attention Is All You Need, 2.8.2023, https://arxiv.org/abs/1706.03762 (access: 20.7.2025).

Vijay S., Priyanshu A., KhudaBukhsh A.R., When Neutral Summaries Are Not That Neutral: Quantifying Political Neutrality in LLM-Generated News Summaries, 13.10.2024, https://arxiv.org/abs/2410.09978 (access: 2.8.2025).

Wang G., Li J., Sun Y., Chen X., Liu C., Wu Y., Lu M., Song S., Yadkori Y.A., Hierarchical Reasoning Model, 4.8.2025, https://arxiv.org/abs/2506.21734 (access: 20.7.2025).

LEGAL ACTS

Basic Act on the Development of Artificial Intelligence and Establishment of Trust (Korean AI Act).

National New Generation Artificial Intelligence Governance Specialist Committee, Ethical Norms for New Generation Artificial Intelligence, 21.10.2021.

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No. 300/2008, (EU) No. 167/2013, (EU) No. 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L 2024/1689, 12.7.2024).

United Kingdom, National AI Strategy, September 2021.

CASE LAW

Judgment of the District Court of The Hague of 5 February 2020 in the case of System Risk Indication (SyRI), C/09/550982/HA ZA 18-388.

Judgment of the United States Court of Appeals for the Eleventh Circuit of 22 May 2024, Snell v. United Specialty Insurance Co., No. 22-12581.

Judgment of the United States Court of Appeals for the Eleventh Circuit of 21 June 2024, United States v. Deleon, No. 23-10478.

United States District Court for the District of Massachusetts, Universal Music Group et al. v. Suno Inc., No. 1:24-cv-10893, 24 June 2024.

United States District Court for the Southern District of New York, The New York Times Co. v. Microsoft Corp. et al., No. 1:23-cv-11195, 27 December 2023.




DOI: http://dx.doi.org/10.17951/sil.2025.34.2.441-479
Date of publication: 2025-10-05 22:44:27
Date of submission: 2025-08-28 00:46:41




Copyright (c) 2025 Karol Kasprowicz

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.