
Paper

Entry Type: Paper
Year:
Posted at: June 2, 2021
Tags: image

Overview

  • By applying "Style Cloaks" — noise that is barely (if at all) perceptible to the human eye — the tool misleads diffusion-model training or makes it difficult ("cloak" means a mantle or disguise)
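The cloaking idea can be sketched as a bounded optimization: find a small perturbation that shifts an image's feature embedding toward a target style while staying under a perceptibility budget. The sketch below is a toy, not Glaze itself: it uses a hypothetical linear feature extractor `W` in place of a real image encoder, and a simple projected signed-gradient loop in place of the paper's perceptual-loss formulation.

```python
import numpy as np

def style_cloak(x, x_target, W, budget=0.05, steps=100, lr=0.01):
    """Toy cloak: find delta with ||delta||_inf <= budget that pulls the
    (linear) feature embedding of x toward that of x_target."""
    phi_t = W @ x_target                         # target style embedding
    delta = np.zeros_like(x)
    for _ in range(steps):
        residual = W @ (x + delta) - phi_t       # feature-space error
        grad = 2 * W.T @ residual                # gradient of squared error wrt delta
        delta -= lr * np.sign(grad)              # signed-gradient step (PGD-style)
        delta = np.clip(delta, -budget, budget)  # project onto the L_inf ball
    return delta
```

The budget parameter plays the role of the perturbation level p in the paper's evaluation (e.g. p=0.05): a larger budget shifts the embedding further but makes the cloak more visible.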

Abstract

Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to displace many in the professional artist community. In particular, models can learn to mimic the artistic style of specific artists after “fine-tuning” on samples of their art. In this paper, we describe the design, implementation and evaluation of Glaze, a tool that enables artists to apply “style cloaks” to their art before sharing online. These cloaks apply barely perceptible perturbations to images, and when used as training data, mislead generative models that try to mimic a specific artist. In coordination with the professional artist community, we deploy user studies to more than 1000 artists, assessing their views of AI art, as well as the efficacy of our tool, its usability and tolerability of perturbations, and robustness across different scenarios and against adaptive countermeasures. Both surveyed artists and empirical CLIP-based scores show that even at low perturbation levels (p=0.05), Glaze is highly successful at disrupting mimicry under normal conditions (>92%) and against adaptive countermeasures (>85%).

Motivation

Architecture

Results

Further Thoughts

Personal impressions after reading the paper

  • Surveying more than 1,000 artists is quite impressive…

Links