When AI Avatars Look (and Sound) Like Us — What Happens Next?

One of the areas we’re exploring at Automattic is how generative AI might help us improve video content — for support, training, and product demos.

And one of the most fascinating frontiers is AI avatars: realistic video presenters created entirely by machines, sometimes based on real people.

I argued in a recent article that once AI video becomes genuinely usable (fast, high quality, and accessible to anyone), the way we communicate online will fundamentally change.

We’re not quite there yet, but we’re getting close. And before this shift becomes mainstream, I think we need to start asking some hard questions.

Who Owns a Digital Double?

If an avatar is built using someone’s likeness, voice, or mannerisms, who owns it? The individual? The company? The AI provider?

Right now, the legal landscape is unsettled, and that ambiguity creates uncomfortable grey areas.

Could a former employee’s AI avatar keep working years after they’ve left?

Should it?

What Does Informed Consent Look Like?

Most consent today is static — a one-time agreement.

But AI-generated likenesses can be reused, remixed, and repurposed indefinitely.

Should consent expire? Be revocable?

What happens when the technology evolves — and a previously cartoonish avatar becomes photorealistic?

Should AI Avatars Be Labelled?

When a customer watches a support video, should they be told if the presenter isn’t real?

Some might not care.

Others might feel misled.

As AI-generated humans become more convincing, the pressure for transparency will only grow. But how should we signal it — subtly, clearly, legally?

What’s the Line Between a Deepfake and a Useful Fake?

The same tech that powers helpful avatars can also create harmful deepfakes.

That means the line between legitimate and malicious use isn’t technical — it’s ethical.

What guardrails should we put in place?

Can content be watermarked or verified without ruining the experience?
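One promising direction is provenance metadata rather than visible watermarks: industry efforts such as C2PA's Content Credentials attach signed information about how a piece of media was made, without touching the pixels. As a rough illustration of the idea only (not of any particular standard), here's a minimal Python sketch in which an AI-generated video carries a signed manifest that a player could check before deciding whether to show an "AI-generated" label. The field names, the generator name, and the shared-key setup are all hypothetical.

```python
# Minimal sketch of metadata-based provenance (illustrative only, not C2PA):
# bind an "AI-generated" claim to the exact video bytes and sign it, so a
# player can verify the label without the video itself being altered.

import hashlib
import hmac
import json

SHARED_KEY = b"hypothetical-demo-key"  # real systems would use asymmetric keys / a trust service

def make_manifest(video_bytes: bytes, generator: str) -> dict:
    """Create a provenance manifest tied to the exact video content."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    claims = {"content_sha256": digest, "generator": generator, "ai_generated": True}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_manifest(video_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the manifest is authentic and matches the video."""
    payload = json.dumps(manifest["claims"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest was tampered with or not signed by a trusted party
    return manifest["claims"]["content_sha256"] == hashlib.sha256(video_bytes).hexdigest()

if __name__ == "__main__":
    video = b"...video bytes..."  # placeholder for real file contents
    manifest = make_manifest(video, generator="example-avatar-tool")
    print(verify_manifest(video, manifest))         # True: label is trustworthy
    print(verify_manifest(video + b"x", manifest))  # False: content was changed
```

The specific mechanism matters less than the principle: if the verification lives in metadata, the viewing experience stays exactly as it was, and the label is there for anyone (or any platform) that wants to check it.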

A Moment to Choose Well

We’re on the edge of a new visual era. The tools are powerful, and the creative possibilities are extraordinary.

But we’re not just building better videos — we’re helping define what’s acceptable, what’s fair, and what’s trustworthy.

So I’d love to hear your thoughts:
What rights should people have over their AI likenesses? How should companies approach consent, transparency, and long-term responsibility?

At Automattic, we don’t have all the answers. But asking the right questions — now, before the future arrives — feels like the right place to start.

