Getty Images CEO Craig Peters has a plan to defend photography from AI
How much of an existential threat to Getty Images’ entire business model is generative AI? Verge editor-in-chief Nilay Patel sat down with Getty Images CEO Craig Peters at the 2023 Code Conference to talk about Getty’s plan to get in the game… and protect its IP. #Code2023 #AI #Technology
What do you think makes an exceptional photograph?
You mean to protect his profits 😂
Very curious how this lawsuit against Stability AI / Midjourney plays out. Big implications for genAI training depending on the result.
Getty Images is one of the most corrupt companies and exploits the sources of its images.
He can’t possibly know that the model is 100% incapable of creating an image of Kelce and Swift (or any other likeness). Whether an image resembles a person is NOT a matter of whether that person’s name was mentioned in the prompt; it is a matter of whether the output image resembles that person. Even if the model has not been trained explicitly on those likenesses, it has learned all the components of a person necessary to recreate such a likeness given a sufficiently granular, descriptive prompt (or by random chance).
The model would have no way of knowing that such an output was not allowed unless it HAD been trained on those likenesses and was explicitly told to avoid them. And if they instead limit how granular descriptions can be, to avoid recreation of specific scenarios, likenesses, or images, they are effectively neutering the whole purpose of a model that is supposed to generate images based on your descriptions.
Generative AI is one of the most individually empowering technologies ever created. Stable Diffusion in particular lets anyone not only create images but train their own models on any images they want. Let’s not throw that baby out with the bathwater when enacting laws to regulate AI. Empowering individuals is just as valuable as protecting copyrights.
So a locked-down generative AI tool that you pay for… vs. non-locked-down generative AI that is (or can be) free? Good luck with that, Getty lol.
If you can’t beat it, join it. Got it.