
The Weekly Round Up | Episode 5 | Kendrick Lamar Uses DeepFakes in New Music Video

This week we take a look at the tech behind Kendrick Lamar’s latest music video in which he uses deepfakes, how an AI is helping to get rid of annoying web cookie popups, Meta's new large language model, and more!
May 2022  · 6 min read

The first major story this week is about how a team of researchers is combating annoying cookie popups using AI.

These days nearly every website has a popup that asks for cookie consent, and some are even designed to make rejecting cookies difficult in an effort to push users into website-friendly choices that put their privacy at risk. Researchers from Google and the University of Wisconsin-Madison have created a system called CookieEnforcer to combat these pervasive popup designs.

The system works by scanning the rendering pattern of a site’s HTML elements to detect how the cookie notice will be shown. CookieEnforcer then analyses the notices and predicts which actions will disable all unnecessary cookies. Finally, the machine-learning model selects the chosen settings and closes the popup, saving website users from having to decipher and click through the notices on their own.
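To make that pipeline concrete, here is a toy sketch of the final step: given the clickable elements of a cookie notice, pick the action most likely to disable non-essential cookies. The real system runs a trained machine-learning model over the page's rendered HTML; this keyword heuristic, and all the names in it, are purely illustrative.

```python
# Illustrative stand-in for CookieEnforcer's action-prediction step.
# The actual system uses a trained ML model; this keyword heuristic and
# the function/constant names below are our own invention.

REJECT_HINTS = ("reject all", "decline", "necessary only", "refuse")
MANAGE_HINTS = ("settings", "preferences", "manage")

def pick_privacy_action(button_labels):
    """Return the button label most likely to disable optional cookies."""
    lowered = [(label, label.strip().lower()) for label in button_labels]
    # Best case: the popup offers an explicit reject/decline control.
    for label, text in lowered:
        if any(hint in text for hint in REJECT_HINTS):
            return label
    # Fallback: open the settings dialog, where per-category toggles
    # (analytics, marketing, ...) can be switched off one by one.
    for label, text in lowered:
        if any(hint in text for hint in MANAGE_HINTS):
            return label
    return None  # no privacy-preserving action found

print(pick_privacy_action(["Accept all", "Cookie settings", "Reject all"]))
# -> Reject all
```

A real implementation would of course have to locate these elements in the DOM and execute the clicks, which is where analysing the site's rendering pattern comes in.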

In tests on over 500 websites, the researchers found the system to be 91% effective at automatically disabling non-essential cookies.

An official public release date has yet to be announced, but the team has been clear about its intention to make CookieEnforcer widely available as a browser extension, hoping both to protect users from malicious or opaque design and to save them some time.

IBM releases research on AI skills gap in Europe

Next, a recent IBM report on the AI skills gap in Europe provides useful insights into the state of the AI job market. Employees, recruiters, and applicants in Germany, Spain, and the UK were surveyed to assess whether there’s a skills shortfall when it comes to AI job openings.

The report found a clear shortfall in both technical and non-technical skills in the AI job market. Employers noted that they were having trouble finding candidates with the AI knowledge and experience required for the job, and around a quarter of tech recruiters said they are struggling to find applicants who combine technical and soft skills.

With AI becoming more and more prevalent in every industry, specialist tech staff are working more closely than ever with business managers. As a result, being able to demonstrate the soft skills of interpersonal communication, strategic problem solving, and critical thinking can greatly improve employability and career development in AI.

The report also suggests upskilling and reskilling as potential solutions to the skills gap for organizations. In Spain and Germany, 42% of employees surveyed are upskilling through training on topics including programming languages, data engineering and analysis, and software engineering. The UK falls behind, with just 32% of staff receiving such training.

Meta’s large language model

Meta has publicly released a large language model in a bid to make such models more accessible. In a blog post on May 3rd, the company announced its Open Pretrained Transformer, also known as OPT-175B, a natural language processing system with 175 billion parameters.

Natural language processing is big right now, with systems like GPT-3 and PaLM taking the AI world by storm and demonstrating the power of these models with exciting use-cases like language generation, code completion, and more. Meta AI says the potential of these large language models is clear, limited only by the fact that they remain largely inaccessible to the wider research community.

Alongside the gargantuan language model, Meta is also publicly releasing the development notes, decision-making rationale, and behind-the-scenes documentation that went into building OPT-175B. The company has also placed high importance on the ecological footprint of the model: the system is on the scale of GPT-3 but was developed with one-seventh of the carbon footprint.

Kendrick Lamar uses deepfakes in his latest music video

Lastly, let’s talk about the use of deepfakes in the much-anticipated new release from Kendrick Lamar. Not only is “The Heart Part 5” the first release from the artist since his hit 2017 album DAMN., but its music video also features a series of deepfakes applied over the rapper’s face.

OJ Simpson, Kanye, Will Smith, and Kobe Bryant all make appearances, with Lamar’s face seamlessly blending into the deepfakes over the course of the video. The tech firm behind the video, Deep Voodoo, was created by “South Park” creators Trey Parker and Matt Stone and has used deepfake technology before to create controversial viral videos.

Typically, a deepfake video uses two machine learning models. One model generates new facial imagery based on a set of sample videos, while the other tries to detect whether the video is fake. When the second model can no longer tell that the video is counterfeit, the deepfake is deemed convincing enough to fool a human viewer as well. This setup is a type of machine learning architecture called a generative adversarial network (GAN), and it often works alongside autoencoders to manipulate or generate visual and audio content.
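As a rough illustration of that adversarial loop (not an actual deepfake pipeline, which trains deep networks on video frames), here is a toy GAN in plain Python: the "generator" is just the mean of a 1-D distribution, and the "discriminator" is a logistic classifier that tries to tell real samples from generated ones.

```python
import math
import random

random.seed(0)
REAL_MEAN = 5.0  # the "real footage" distribution the generator must mimic

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def batch(mean, n=64):
    return [random.gauss(mean, 1.0) for _ in range(n)]

gen_mean = 0.0   # generator parameter: starts far from the real data
w, b = 1.0, 0.0  # discriminator parameters (logistic regression)
lr = 0.05

for step in range(2000):
    real, fake = batch(REAL_MEAN), batch(gen_mean)
    # Discriminator update: push P(real) toward 1 and P(fake) toward 0
    gw = (sum((sigmoid(w * x + b) - 1) * x for x in real) / len(real)
          + sum(sigmoid(w * x + b) * x for x in fake) / len(fake))
    gb = (sum(sigmoid(w * x + b) - 1 for x in real) / len(real)
          + sum(sigmoid(w * x + b) for x in fake) / len(fake))
    w -= lr * gw
    b -= lr * gb
    # Generator update: shift its mean so fakes look real to the discriminator
    fake = batch(gen_mean)
    gm = sum((sigmoid(w * x + b) - 1) * w for x in fake) / len(fake)
    gen_mean -= lr * gm

# The generator's mean should drift from 0 toward the real mean of 5
print(f"learned generator mean: {gen_mean:.2f} (real mean: {REAL_MEAN})")
```

The same tug-of-war, with convolutional networks in place of these one-parameter models, is what lets a deepfake generator produce faces the detector can no longer flag.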

“The Heart Part 5” demonstrates that, in spite of the complex debate surrounding the ethical use of deepfakes, the technology can still be woven into artistic narratives to create meaning in a positive context.

Get the latest news on data science

For more on the latest in data science, make sure to subscribe to DataCamp's YouTube channel to get the latest Weekly Roundup.
