Researchers from the Queensland University of Technology (QUT) are calling on every business and organisation to develop a response plan for the growing threat of deepfake cyber attacks, warning that the perils are real.
In a paper published this year entitled ‘Brace Yourself! Why managers should adopt a synthetic media incident response playbook in an age of falsity and synthetic media,’ academics from QUT’s Centre for Behavioural Economics, Society and Technology warn that increasingly sophisticated and accessible AI programs have ushered in a new era of synthetic media.
AI can not only collect image and video data of a person from various sources and feed it into neural networks for mimicry, but also use deep neural networks to generate entirely new content.
This means counterfeit voices, images and videos that appear undeniably genuine can be easily created and manipulated by malicious actors to extract money, harm brand reputations, steal intellectual property (IP) and shake customer confidence.
“With such risks becoming more common, anyone whose reputation is of corporate value, including every CEO or board member that has been featured on earnings calls, YouTube videos, TED talks, and podcasts, must brace themselves for the risks of synthetic media impersonation,” the paper said.
Dan Halpin, the CEO and founder of Cybertrace, a private cyber-investigations firm, agrees that the world is now facing an alarming situation.
“Unfortunately, sophisticated deepfake scams targeting Australian businesses and organisations pose a growing threat,” he told the Epoch Times in an email.
He said industries that rely heavily on public figures, high-value transactions, or sensitive information are often at higher risk. These include financial institutions, politicians and governments, media and entertainment, and law enforcement and security.
Deepfake Threats Real and Already Happening
The warning comes after the CEO of Australia’s largest bank, the Commonwealth Bank of Australia (CBA), was impersonated as part of a cyber scam scheme.
In a video posted on Meta's Facebook platform, an AI-generated deepfake of Matt Comyn encouraged consumers to reach out for a guaranteed income of "$5,000 per week."
“We launched an automatic platform for passive income. You invest in gold, banks, stocks, funds and other profitable instruments. The platform will automatically generate profits,” Comyn appears to say during a news segment mimicking the production style of Nine News bulletins.
However, it was not too hard to see something was amiss. The Australian-born and educated CEO was speaking with an American accent, and the movement of his mouth did not quite match his face.
CBA has alerted customers to the scam on its official website.
“The scammers misuse well-known news brands and the CommBank brand to try and legitimise their scam,” the bank warned.
“Scammers have even used fraudulent, AI-generated videos of [Mr] Comyn, and others, to try and convince people to invest.”
To manage the risks, the researchers recommend a six-phase synthetic media incident response playbook comprising preparation, assessment, detection, containment and eradication, post-incident, and coordination procedures.
Halpin added that almost every business and organisation should be aware of, and alert to, deepfake threats.
“As technology evolves and becomes more accessible, any sector that relies on trust, credibility, or sensitive information should remain vigilant and adopt preventive measures against deepfake threats,” he said.
“Vigilance, employee education, and robust cybersecurity measures are crucial in mitigating the risks and maintaining trust in the face of this evolving threat landscape.”
We Are Wired to Believe Synthetic Media
Drawing on theories of human behaviour, the researchers argue that individuals can distinguish human from deepfake faces only about 50 percent of the time, and that consumers have little choice but to believe what they see, read, and hear online.
Due to an innate tendency to avoid cognitive overload, individuals are inclined to trust visually rich media while simplifying the evaluation process, as "the richness of audiovisual material requires the allocation of more cognitive resources, which can lead to cognitive overload."
“We evaluate the sources of richer audiovisual messages less systematically than leaner information presented via text, assigning more credibility to modalities such as video and audio than we do to text and images,” the paper said.
More worryingly, the researchers argue that deepfakes often cause people to stop trying to come to a genuine understanding of information, thereby eroding the credibility of other information sources.
“These new synthetic realities generate ‘reality apathy’ by causing people to give up trying to discern between what is authentic and synthetic, ceasing their efforts to become informed citizens and thereby potentially eroding the perceived credibility of fundamental civic media, politics, academia institutions—and organizations.”
Article cross-posted from our premium news partners at The Epoch Times.