Meta has announced the global rollout of an AI-powered voice translation feature on Facebook and Instagram, enabling creators to reach broader audiences by automatically translating spoken content into multiple languages.
The tool, available in all markets where Meta AI is supported, lets creators generate translated audio tracks in their own voices, with optional lip-sync technology that matches mouth movements to the translated audio. CEO Mark Zuckerberg first previewed the feature at the Meta Connect 2024 conference.
In its initial phase, the feature will be offered to Facebook creators with more than 1,000 followers, while any public Instagram account will be able to use it. The system uses the creator's original voice to produce a translated version that preserves their tone and speaking style, and creators can preview the result before publishing.
A notification will appear on translated videos to inform viewers that AI-generated audio is being used.
Meta noted that the tool works best for videos where speakers appear clearly on camera, advising creators not to cover their mouths or add disruptive background music. It currently supports up to two speakers per video, provided their voices do not overlap.
The company said the feature is designed to help creators expand their reach beyond their native language. Meta is also adding analytics tools that measure performance across translated languages, giving creators insight into audience engagement.