Glaze: Protecting Artists from Style Mimicry

source link: https://glaze.cs.uchicago.edu/


About Us

We are an academic research group of PhD students and CS professors interested in protecting Internet users from invasive uses of machine learning.

We are not motivated by profit or any political agenda; our only goal is to explore how ethical security techniques can be used to build practical solutions and (hopefully) help real users.

The team:

  • Shawn Shan
  • Jenna Cryan
  • Emily Wenger
  • Prof. Heather Zheng
  • Prof. Ben Zhao (Faculty lead)
  • Prof. Rana Hanocka (Collaborator)
  • Karla Ortiz (Artist & Collaborator)
  • Lyndsey Gallant (Artist & Collaborator)
  • Nathan Fowkes (Artist & Collaborator)

You can reach the UChicago SAND Lab team by email.


Risks and Limitations

  • Changes made by Glaze are more visible on art with flat colors and smooth backgrounds, e.g., animation styles. While this is not unexpected, we are searching for methods to reduce the visual impact for these styles.
  • Unfortunately, Glaze is not a permanent solution against AI mimicry. AI evolves quickly, and systems like Glaze face the inherent challenge of remaining future-proof (Radiya-Dixit et al.). Techniques we use to cloak artworks today might be overcome by a future countermeasure, possibly rendering previously protected art vulnerable. Glaze is not a panacea, but a necessary first step toward artist-centric protection tools that resist AI mimicry. We hope that Glaze and follow-up projects will provide some protection to artists while longer-term (legal, regulatory) efforts take hold.
  • Despite these risks, we designed Glaze to be as robust as possible, and we have tested it extensively against known systems and countermeasures. To the best of our knowledge, it is the only tool available today that lets artists proactively protect their style while posting their art online. This is ongoing research, and we will continuously update the tool to improve its robustness.

Downloads

  • March 18: Glaze Beta2 is now available for download from the project website.
  • Glaze generates a cloaked version of each image you want to protect. During this process, none of your artwork ever leaves your own computer. Then, instead of posting the original artwork online, you can post the cloaked artwork to protect your style from AI art generators (see the sketch after this list).

  • We will not commercialize our protection tool in any way. It is free to use upon release and is intended solely for research purposes, with the goal of protecting artists.

  • If you are interested in news and updates about the application release, please join the Glaze-announce mailing list.
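
Below is a minimal sketch of that posting workflow, assuming a hypothetical cloak() function standing in for the Glaze app's local processing (the app is a standalone tool, not a Python library). It only illustrates the data flow: cloaking runs entirely on the artist's machine, and only the cloaked copies are ever uploaded.

    # Hypothetical batch-cloaking workflow; cloak() is a placeholder for
    # the Glaze app's local processing, not its real algorithm.
    from pathlib import Path
    from PIL import Image

    def cloak(img: Image.Image) -> Image.Image:
        """Placeholder: the real tool returns a subtly perturbed image."""
        return img

    src, dst = Path("portfolio"), Path("portfolio_cloaked")
    dst.mkdir(exist_ok=True)
    for p in src.glob("*.png"):
        cloak(Image.open(p)).save(dst / p.name)  # upload files from dst only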


What Is Glaze?

Glaze is a tool that helps artists prevent their artistic styles from being learned and mimicked by new AI-art models such as MidJourney, Stable Diffusion, and their variants. It is a collaboration between the University of Chicago SAND Lab and members of the professional artist community, most notably Karla Ortiz. Glaze has been evaluated via a user study involving over 1,100 professional artists. At a high level, here's how Glaze works:
  • Suppose we want to protect the artwork in artist Karla Ortiz's online portfolio from being taken by AI companies and used to train models that imitate her style.
  • Our tool adds very small changes to Karla's original artwork before it is posted online. These changes are barely visible to the human eye: the artwork still appears nearly identical to the original, yet it prevents AI models from copying Karla's style. We refer to these added changes as a "style cloak" and the changed artwork as "cloaked artwork" (a code sketch of the idea appears below the figure).
[Figure: Glaze adds a barely visible "style cloak" to the original artwork before it is posted online]
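
To make this concrete, here is a minimal sketch of how a style cloak might be computed, assuming a pretrained feature extractor phi (e.g., the image encoder of a generative model) and a target image: the same artwork rendered in a decoy style. It illustrates feature-space perturbation under a simple pixel-level budget; Glaze's actual optimization, described in our research paper, differs (for one, it bounds perceptual distance rather than raw pixel change).

    # Sketch only: nudge the cloaked image's features toward a decoy style
    # while keeping pixel changes small. Not Glaze's actual implementation.
    import torch

    def compute_cloak(original, target, phi, budget=0.05, steps=500, lr=0.01):
        """original, target: image tensors in [0, 1]; phi: feature extractor."""
        delta = torch.zeros_like(original, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        target_feat = phi(target).detach()  # features of the decoy-style render
        for _ in range(steps):
            cloaked = (original + delta).clamp(0, 1)
            loss = torch.nn.functional.mse_loss(phi(cloaked), target_feat)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-budget, budget)  # keep the change barely visible
        return (original + delta).clamp(0, 1).detach()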

For example, Stable Diffusion today can learn to create images in Karla's style after seeing just a few pieces of Karla's original artwork (taken from her online portfolio). However, if Karla uses our tool to cloak her artwork, adding tiny changes before posting it to her online portfolio, then Stable Diffusion will not learn Karla's artistic style. Instead, the model will interpret her art as a different style (e.g., that of Vincent van Gogh). Someone prompting Stable Diffusion to generate "artwork in Karla Ortiz's style" would instead get images in the style of Van Gogh (or some hybrid). This protects Karla's style from being reproduced without her consent. You can read our research paper (currently under peer review) for details.

[Figure: mimicry results after training on cloaked art; the generated images differ visibly from Karla's style]

With Glaze (above), a model trains on cloaked versions of Karla's art and learns a style different from her original one. When asked to mimic Karla, it produces art that is distinctly different from hers.
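
For context, here is a hedged sketch of the mimicry attempt itself, using the open-source diffusers library; the checkpoint path is a placeholder for a Stable Diffusion model that a mimic has fine-tuned on a handful of scraped images. If those images were cloaked by Glaze, the prompt below returns the decoy style rather than Karla's.

    # Sketch of a style-mimicry prompt against a fine-tuned model.
    # "path/to/finetuned-model" is a placeholder checkpoint, fine-tuned
    # on scraped (original or cloaked) artwork.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "path/to/finetuned-model", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("artwork in the style of Karla Ortiz").images[0]
    image.save("mimicry_attempt.png")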


