DEPT®

Amsterdam, Netherlands

Contact Information

Generaal Vetterstraat 66
Amsterdam 1059 BW
Netherlands
Phone: +31 88 040 0888
Email:

Tobias Cummins
Global SVP Clients & Head of DACH

Josefina Blattmann
Global Marketing Manager

Dimi Albers
Chief Executive Officer
Phone: +31 (0) 6 41 24 07 87

Marjan Straathof
Global SVP Marketing & Partner
Phone: +31 6 54713934


Basic Info

Core Competencies: Full Service, Digital, Mobile, Social Media, E-Commerce, SEO, Branded Content/Entertainment, Market Research/Consulting, Media Buying/Planning, Branding/Naming/Product Development, Design, Visual/Sound Identity, Strategy and Planning

Founded in: 2016

Employees: 3000

Awards: 66

Creative Work: 38

Clients: 25

How to prevent bias in your AI-generated content


As humans, we are biased

Where does bias come from?

First, a little background on how bias in AI occurs. I asked ChatGPT to tell us more about the topic, as I am obviously not a specialist in setting up an LLM (Large Language Model) myself.

This is what it answered: 

“Bias in AI content can occur in a number of ways, but it typically stems from biases in the data used to train AI models. Here are a few examples:

  • Biased training data: AI models can reflect the biases in their training data if the data is not diverse and inclusive.
  • Biased algorithms: Algorithms used to train and deploy AI models can introduce bias, such as facial recognition algorithms that are biased against people of color due to a lack of representative data.
  • Human bias: Bias can be introduced by the humans who design, train, and deploy AI models, especially if the team is not diverse and fails to recognize certain biases in the data or algorithms.
  • Lack of accountability: Without proper oversight and review, biased AI models may be deployed without anyone realizing the potential for harm.” 

How can you prevent bias in AI? 

Now that you know how bias in AI arises, it should be ‘rather easy’ to come up with ways to prevent it. The main lever is ensuring diversity in training samples, for example audio samples from speakers of different backgrounds and genders, but there are of course more things you can do during the training phase.
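To make that a bit more concrete, here is a minimal sketch of how you could check a training set for skew. It assumes a hypothetical metadata file called training_metadata.csv with made-up columns such as gender and accent; the point is simply how quickly an imbalance becomes visible.

  from collections import Counter
  import csv

  def attribute_share(rows, attribute):
      """Return the relative share of each value of a metadata attribute."""
      counts = Counter(row[attribute] for row in rows)
      total = sum(counts.values())
      return {value: count / total for value, count in counts.items()}

  # Hypothetical metadata file for an audio training set:
  # one row per sample, with columns such as "gender" and "accent".
  with open("training_metadata.csv", newline="", encoding="utf-8") as f:
      samples = list(csv.DictReader(f))

  for attribute in ("gender", "accent"):
      shares = attribute_share(samples, attribute)
      print(attribute, {value: f"{share:.0%}" for value, share in shares.items()})
      # If one group dominates (say 90% of the samples), the model will
      # likely perform worse for the under-represented groups.

A skewed distribution like that is exactly the kind of ‘biased training data’ the list above describes.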

As copywriters, however, you often have no influence on that phase. What you do influence are your prompts and how you use the output. Let’s put that into practice.

My team of copywriters and I asked ChatGPT multiple questions. A while ago, it gave rather biased answers (on a positive note: we did receive a disclaimer that the copy might violate the content policy). Since then, we’ve noticed the tool becoming less and less biased, as its makers regularly receive feedback, including on this topic.
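A lot of that influence sits in how you phrase the request. Below is a minimal sketch of a bias-aware prompt, written against the OpenAI Python client instead of the chat interface; the system instruction, the example question, and the model name are assumptions for illustration, not the exact prompts we used.

  from openai import OpenAI

  client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

  # The system message is where you can steer the copy away from
  # stereotypes before you even ask your actual question.
  system_instruction = (
      "You are a copywriting assistant. Use gender-neutral, inclusive "
      "language, avoid stereotypes about nationality, age, or profession, "
      "and flag any assumption you make about the target audience."
  )

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # example model name; use whichever model you have access to
      messages=[
          {"role": "system", "content": system_instruction},
          {"role": "user", "content": "Write a short job ad for a senior software engineer."},
      ],
  )

  draft = response.choices[0].message.content
  print(draft)
  # The draft is a starting point, not a final text: a human copywriter
  # still reviews it for bias before anything is published.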

 

 

Michelle den Elzen
Team Lead Copywriting, DEPT®