
AI labeling requirement: what applies in Austria

Alexander Schurr
December 12, 2024
From August 1, 2026, the EU AI Act requires that all generative AI content must be clearly marked as AI-generated.

The transparency and AI labeling requirements were clearly regulated for the first time by the EU AI Regulation (AI Act). The aim is to make users aware that they are interacting with AI-based systems and to protect them from misleading or manipulative content. But what does that mean in practice? Here is an overview of current and future requirements:

AI labeling requirement: as of 2025

  • There is currently no legal regulation in the EU or Austria that obliges you to provide generated texts or images with a notice that they were created with AI.
  • This applies regardless of whether you use the content for private, business, or public purposes.
  • Exceptions:

    Transparency obligations in the EU currently only apply in specific contexts:

    1) When the use of AI has a significant impact on the user:

      • Example: A chatbot in customer service that provides automated answers must clearly indicate that it is AI.
      • There is no such obligation for texts such as blog articles.
      • This means: make it clear when and where AI is being used, especially in direct contact with customers!
      • Examples: “You're talking to an AI-powered system” or “This answer was created by an AI.”
    2) If the content is potentially deceptive:

      If AI-generated content appears deceptively real or could be used specifically for manipulation, it must be marked accordingly.
      • Examples:
        • Deepfake videos that imitate a real person.
        • AI-generated images that are passed off as authentic evidence images.

    AI labeling requirement: August 2026

    From August 1, 2026, with the full implementation of the EU AI Act, the following applies:

    1. Generative AI content:
      Texts, images, and videos created with tools such as ChatGPT or DALL-E must be clearly marked as AI-generated.
    2. Deepfakes:
      Content that is based on real people or appears deceptively real must be clearly declared as AI-generated.

    Texts, images and videos from generative AI systems must not be published without identification — not even in internal systems or documentation.

    Example: A blog that was written by an AI would then have to be marked with a note such as “This article was created by an AI.”

    Loophole:

    • If you significantly revise an AI-generated text so that it has been substantially changed by human intervention, the labeling requirement does not apply.
    • Reason: The revised content is no longer considered “generative AI content” but a human work with supporting AI use.

    What does “significant revision” mean?

    • You need to make significant changes, such as:
      • Content adjustments (e.g. insert new arguments, restructure sections).
      • Stylistic revisions (e.g. word choice, tonality, sentence structure).
      • Additional research or your own ideas added.
    • Minimal corrections (such as spelling, grammar, or minor adjustments) are not considered a “major revision.” In this case, the labeling requirement remains in place.

    Partial AI support

    • Example: 70% of an article is created by AI and 30% is supplemented by human editing.
      Labeling is required, as the text is mostly AI-generated.

    AI transparency obligations: a recommendation for today

    Even though there is currently no obligation, voluntary labeling can be useful to promote trust and transparency.
    Examples:

    • “This image was created using AI.”
    • “This text was generated with the help of AI and edited.”

    Important practical tips

    1. Use standardized formulations:
      Use consistent and clear labels, such as:
      • “Built with AI technology.”
      • “This content was generated by artificial intelligence.”
    2. Transparency in communication:
      Make it clear when and where AI is being used, particularly in direct contact with customers.
    3. Act in a future-proof manner:
      Even though some regulations will only become binding from 2026, it is advisable to start labeling today. In this way, you avoid pressure to adapt and potential penalties.
    4. Internal training:
      Sensitize your teams to transparency obligations, particularly marketing, IT and legal departments.

    Conclusion: Transparency as the key to trust

    As of today, you don't have to mark anything when you create images or texts with AI — unless there are deceptive or legally relevant aspects. However, from 2026, labeling will be mandatory for generative content. The transparency obligations surrounding AI-generated content are an essential part of the EU AI Act and help build trust in AI systems. Companies and individuals should ensure that all content created by AI is clearly labeled — today and in the future.

    Would you like assistance in implementing the new rules? Contact us — the KI Company provides you with expertise and helps you ensure compliance and transparency!