
AI Policy

Responsible AI Usage Policy for sentimeo

Artificial Intelligence (AI) is an increasingly important tool in the marketing landscape, offering unprecedented possibilities for content creation and customer engagement. However, with great power comes great responsibility. Therefore, this AI usage policy is designed to guide our team in the responsible, transparent, and ethical use of AI in their work. The aim of this policy is not to hinder creativity or innovation, but rather to ensure that our use of AI aligns with our overall corporate values and respects our customers’ rights.

Guidelines for Responsible AI Usage

Transparency

It’s crucial that we remain transparent about our use of AI. This includes acknowledging when AI has been used to create or modify content, whether through a blanket statement on our website or through language integrated into client contracts.

How we use AI

We use AI to assist in some content development at our company. To ensure transparency, accountability, quality and privacy, we adhere to internal AI usage standards. These standards help us safeguard against biases, maintain data security, and uphold our commitment to ethical marketing practices. One of these standards is that AI should be used to assist in content creation, not fully automate it. We ensure that every piece of content we develop is shaped and reviewed by people who have an understanding of our audience and AI’s limitations.

Tool Selection

The following AI tools have been approved for use in our company:

  • Jasper
  • Google Bard
  • ChatGPT
  • Canva
  • Rask
  • Loom
  • HeyGen

Accountability

Responsibility cannot be outsourced to a machine. Always remember that humans are ultimately accountable for the actions of the AI. AI is an assistant, not a replacement for good judgment. Our company policy is that we should NEVER publish or send something that has been written entirely by AI without human development or review for quality and accuracy. Additionally, in case of any negative outcomes from AI-assisted content, we must take responsibility and remediate as necessary.

Addressing Specific Issues

Bias

AI systems learn from the data they are fed, and thus can unintentionally perpetuate biases found in their training material. Many language models have filters to reduce the risk of bias or harmful outputs, but filters aren’t enough. It is our responsibility to ensure that content we produce is reviewed for potential bias and developed to be inclusive and accessible.

Privacy

We must protect the privacy of our customers. Use only the approved tools listed above, which have reliable privacy policies, and do not submit customer data to AI tools or LLMs. We must also protect our own intellectual property (IP); sticking to the approved list of tools helps ensure this.

Security

AI systems can be targets for cyber-attacks. Please review the approved list of AI tools and discuss with the security team any additional tools you subscribe to or use on company devices.

Ethical Considerations

AI should not be used to mislead or manipulate customers. All content created using AI should be ethical and in line with our corporate values. AI content should go through a review process to check for bias, inaccuracies and other risks.

Impersonation

It is our company policy that employees must not use AI to impersonate any person without that person’s express permission. AI makes it possible to create content “in the style of” public figures; as a matter of policy, we do not do that at our company. Designated employees may, with permission and review, use AI to mimic the writing style of a current sentimeo employee for the purposes of ghostwriting or editing content from that individual.

Training Employees on AI Usage

All sentimeo employees involved in creating content with AI receive appropriate training. This covers both the technical aspects of using AI and the ethical considerations outlined in this policy.