
‘Safe word’ can help protect against AI deepfake frauds

Hyderabad: With artificial intelligence (AI) tools now capable of cloning human faces and voices with near-perfect accuracy, cybersecurity experts are urging citizens to adopt a simple yet effective safeguard: a personal ‘safe word’.

Deepfakes, or AI-generated audio and video impersonations, are increasingly being used by fraudsters to deceive people by posing as friends, relatives or officials. The scams often involve tricking victims into revealing personal information or transferring money.

Experts advise verifying identity with a safe word

To counter such attempts, experts recommend setting up a unique ‘safe word’ known only to close contacts. The code can be used to verify any suspicious message or call before sharing sensitive information.

“Even a short, personal code known only within a trusted circle can prevent identity fraud,” said a cybersecurity analyst. “AI-generated deepfakes sound and look real, but they can’t replicate personal knowledge.”

AI’s dual edge: empowerment and risk

While AI continues to empower industries from healthcare to entertainment, its misuse poses significant privacy and security risks. Authorities and digital safety advocates stress that awareness is the first line of defence.

Citizens are advised to avoid sharing personal or financial details without confirmation, cross-check identities through alternate channels, and remain alert to sudden or urgent financial requests from familiar voices.

As AI advances, experts say vigilance, combined with a simple safe word, could be one of the strongest defences against the growing menace of deepfake-based fraud.
