2023 saw ChatGPT put ground-breaking AI capabilities into the hands of the masses, but it also introduced the troubling reality of AI-generated content and, with it, the risk of AI plagiarism by both students and professionals. As a result, “ChatGPT” emerged as the breakout word of 2023, but “deepfake” appears poised to claim that title in 2024. Deepfakes go beyond plagiarism: they are highly realistic yet fraudulent videos or images of influential individuals, created with AI tools and then maliciously disseminated across the internet, social media and even television for political or commercial gain.
Previously relegated to the realms of tech journalism and cyber-security, deepfakes were propelled into the mainstream spotlight by the Taylor Swift scandal at the start of the year, even garnering attention from the periodical Scientific American. Millions of fans found themselves deceived, or nearly deceived, by deepfake Taylor Swift advertisements, thrusting the issue into public consciousness and prompting widespread concern.
Fortunately, many deepfakes are discernibly fabricated, intentionally crafted for comedic or illustrative purposes, as seen in examples like the Seinfeld Pulp Fiction mashup or the Morgan Freeman impersonation. Moreover, vigilant users typically identify and flag even the more convincing deepfakes swiftly, mitigating the risk of widespread deception. However, with a flurry of crucial elections taking place around the world this year, the spectre of deepfake-driven political propaganda looms large. Consequently, media organisations, which play a pivotal role in disseminating global news, are intensifying their efforts to detect deepfakes.
A recent example of this proactive stance towards inauthentic images is the Princess of Wales’s doctored Mother’s Day family photo, which was uncovered by the vigilant detection protocols of the media outlets responsible for distributing it. While that incident was innocent, such detection measures may prove indispensable in identifying nefarious uses of deepfakes for commercial and political manipulation in 2024.