YouTube has introduced new rules for labeling AI-generated videos, addressing the challenge of transparency amid the growing prevalence of AI-generated content on its platform.
Starting now, YouTube requires creators to label any video that looks convincingly real but was made with altered or synthetic media, including AI, to prevent confusion over its authenticity. This covers videos in which the appearance of people, places, or events has been artificially generated or significantly altered, such as changing someone's voice, swapping faces, or modifying the visuals of locations and events.
Penalties for violations are possible, aimed explicitly at those who repeatedly and at scale ignore the labeling requirement. YouTube plans to roll out the policy gradually, starting with its mobile app and later expanding to the web and TV versions. Labels indicating the use of altered or synthetic media will appear in video descriptions, with details on the extent of the editing or digital creation involved.
For videos touching on sensitive topics such as news, elections, finance, or health, the platform will make these labels more prominent by placing them directly on the video player. However, no label is needed when AI is used for behind-the-scenes tasks such as writing scripts or generating video ideas, nor for videos that are clearly fantastical or only altered in inconsequential ways, such as through color correction or minor special effects.
YouTube is also taking new steps to handle requests to take down synthetic or altered content that depicts identifiable people, and it expects to share more details on how these requests will be handled in the months ahead.