Sora 2 Cameos: What Could Go Wrong, Probably Will…

"Deepfake" is such a negative word. So much so that OpenAI has rebranded them as "Cameos" in the latest update to its text-to-video tool, Sora. That means you are now the center of attention in that trending video. It sounds fun, but inevitably, what could go wrong… probably will.

For the past week, I've been watching videos created by Sora 2, the most downloaded app since ChatGPT, which lets users type a text prompt to generate amazing videos. It is a spectacular improvement over the first version, thoughtfully just called Sora (1?). And if you haven't seen the pepperoni pizza parachute or the Ronald McDonald blasting hamburgers videos, you're probably missing out on the amazing creativity of some of the users.

However, aside from the fun of creating videos out of your imagination, the new version of Sora has a feature that's a little more troubling, called Cameos. What's a cameo? Well, it's all about you, and that may not be a good thing.

This is not a real person. This is not a real interview.

[FADE IN CAMEO SCENE]

Cameos sound harmless enough. As OpenAI describes it, "Giving you the power to step into any world or scene and letting your friends cast you in theirs." Record a short selfie, and moments later you can be cast into a video of your own making. You can also appear in a video edited by your friends, or in a completely new video your friends create. Let that sink in for a moment.

By default, your cameo is currently shared with all of your friends (aka mutuals) on Sora. This allows those people in your network to create their own videos of you doing whatever they want. As Washington Post journalist Geoffrey Fowler recently wrote, this can be fun and entertaining, especially among a group of people you trust with your image. It can also be a bit embarrassing when your friend's sense of humor differs from yours.

What is troubling is that you, as the cameo owner, do not approve these videos before they are published. Once you grant your likeness, it can be used without your say. Later (later!), you can remove a video from Sora, but not from any other platform where it may have been reposted. You might not even know it is out there, and this could be harmful in so many ways.

[Voiceover: “And Now This…”]

The more troubling aspect is how this can contribute to fraud, personalized scams, and, of course, misinformation. Since the introduction of ChatGPT in 2022, phishing cyberattacks have increased 138%, according to a McKinsey & Company study. And while misinformation is harder to measure, there have been notable cases of people falling for fake videos, such as when news journalist Chris Cuomo responded to a fake video of U.S. Representative Alexandria Ocasio-Cortez. Even the professionals are vulnerable.

Tech-savvy people might feel less vulnerable because they know what to look for. OpenAI and rival Google (with its Veo 3 tool) both promote the digital watermarks their platforms place on generated videos as safeguards. Is this enough? To the less tech-savvy, the blinking cartoonish face of Sora's watermark may look like any other emoji stamped onto online content. The icon reads "Sora," which may mean nothing to the unfamiliar.

Who is going to teach people how to read these watermarks? And does a hard-to-understand watermark distance these companies from responsibility for any negative outcomes?

Here are just a few things to consider:

  • Cameo creators own their likeness and need to be able to control when and where it is used.
  • Is Sora a video tool for creatives (movie industry and amateurs alike)? If so, what is the purpose of the cameo feature? Or is this a social media strategy?
  • Social media platforms need a consistent, obvious method of identifying generative AI videos. This may require government regulation to create standards, and public awareness campaigns to diminish misinformation.

[FADE TO BLACK]

OpenAI's charter notes its commitment to the safe development of AI to benefit all of humanity. Further, it states, "We are committed to providing public goods that help society navigate the path to AGI."

This is a notable statement, and it is what you would expect from an organization that began as a non-profit. A product that can be harmful without controls for users does not squarely fit into the space of safety and benefiting humanity.

OpenAI is becoming aware of these issues, as raised in Washington Post and New York Times reporting. However, as this case demonstrates, the "Race for AI" is driving product over purpose, and aspects of safety, governance, and corporate morals feel like an afterthought. And while the new Sora and its Cameos are a novelty in the moment, where they go next in an unbounded AI world remains to be seen.


Thanks for reading. Consider adding a comment and sharing your thoughts on this topic below.

Bonus

Working with AI video generators is getting easier. I asked AI to help describe a scene that matched my article, and after several tries, and some editing to join scenes, this is what it helped me create.
