GT News

Taxes, accounting, law and more. All the key news for your business.

October 8, 2024

Using AI – when is it necessary to be transparent?


As a general rule, a person who uses another author’s work in his or her own work must at the very least cite the author of that work.[1] However, content created by AI cannot be a copyrightable work under Czech law.[2]

However, this does not mean that AI users can never be obliged to disclose that they have used generative AI in the preparation of a work. Such disclosure may be required by the terms and conditions of the generative AI in question.

For example, the terms and conditions of perhaps the most popular generative AI, ChatGPT[3], require that users who wish to publish a work and have used ChatGPT to edit, revise, or draft it must state that the work was not created entirely by humans. Authors should thus accompany their works with, for example, the following clause: The author created this text partly using GPT-4, the OpenAI large language model. After creating a draft of the work, the author reviewed it, modified it as he or she saw fit, and assumes full responsibility for the content of the work.[4]

In contrast, the terms and conditions of the generative AI Claude require[5] that users inform their clients about the use of AI – but only in certain “risk areas.” These include, for example, the automatic generation of articles by journalists or media outlets, the provision of recommendations with potential legal implications, the provision of financial advice, the assessment of a person’s creditworthiness, the evaluation of language and professional exams, patient diagnosis, and the use of AI in evaluating the resumes of potential employees, among others.

However, the terms and conditions of some generative AIs do not require labelling at all. For example, works created using Gemini may be freely shared and published without any labelling.[6]

The obligation to label content generated by AI is therefore, for the time being, regulated mainly by the terms and conditions of the generative AI used. But as of 2 August 2026, Regulation (EU) 2024/1689 of the European Parliament and of the Council, the so-called Artificial Intelligence Act (“AI Act”), will apply in full.[7]

Under the AI Act, only providers (e.g., OpenAI), i.e., entities placing an AI system (e.g., GPT-4) on the market or putting it into service, will be required to label content generated or modified by AI, with some exceptions.[8] This marking will have to be machine-readable and, at the same time, effective in line with the state of the art, e.g. using metadata. Marking will not be necessary only if the AI system in question performs merely an assistive function and standard editing, or does not substantially alter the input data or its semantics.

Given the level of penalties that providers will face (up to 3% of worldwide annual turnover), they can be expected to be rather cautious. For example, ChatGPT already attaches metadata to images generated by DALL·E 3.[9] The creator of an image can then be verified from the stored metadata.
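To illustrate what machine-readable marking can look like in practice: the C2PA standard embeds a signed provenance manifest directly in the file, inside JUMBF boxes whose label contains the ASCII string "c2pa". The following Python sketch is a rough heuristic only – it merely checks whether that label appears anywhere in a file's bytes; it does not validate the manifest or its signatures, which requires dedicated C2PA tooling:

```python
def looks_like_c2pa(path: str) -> bool:
    """Rough heuristic: scan a file's raw bytes for the C2PA JUMBF label.

    C2PA provenance data is stored in JUMBF boxes labelled "c2pa".
    A hit only suggests that a manifest may be present; a miss does not
    prove the content is human-made (metadata is easily stripped), and
    real verification means validating the manifest's signatures with
    proper C2PA tools.
    """
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data


# Hypothetical usage on a downloaded image:
# if looks_like_c2pa("generated.jpg"):
#     print("File may carry a C2PA provenance manifest")
```

Note that the absence of such metadata proves nothing: saving, re-encoding, or screenshotting an image typically discards it, which is one reason the AI Act requires marking to be "effective in line with the state of the art" rather than prescribing a single technique.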

Special obligations will apply to the provider of a “chatbot.” The provider will have to inform its customers that they are communicating with AI unless this fact is already apparent from the context. 

A user who uses an AI system that creates “deepfakes”[10] will have to disclose that the content has been artificially created or manipulated. The same obligation will apply to a user who uses AI to prepare or edit a text informing about a fact in the public interest. However, these obligations do not apply to a person who uses AI solely for their personal non-professional activities.



 

Conclusion

Labelling of content generated by AI is not required by current legislation, and even the AI Act does not impose a general obligation on users to label AI-generated content. Nevertheless, it is always important to thoroughly check the conditions for sharing content created or co-created by the respective AI system. At the same time, the fact that a user is not required to disclose the use of AI does not mean that such information is not already embedded in the generated content itself, for example as metadata.

[1] Section 31(1) of the Copyright Act

[2] Section 2(1) of the Copyright Act

[3] Sharing & publication policy | OpenAI

[4] Freely translated and modified OpenAI sample disclaimer, available here

[5] Usage Policy | Anthropic

[6] Additional Terms and Conditions for Gemini API | Google AI for Developers

[7] L_202401689CS.000101.fmx.xml (europa.eu)

[8] “provider” means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;

[9] C2PA in DALL·E 3 | OpenAI Help Center. Note: C2PA is the designation of the technical standard used by, among others, ChatGPT for attaching metadata to a file.

[10] “deep fake” means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful;