Conscious prompting
The purpose of these guidelines is to ensure that Spoon uses generative AI responsibly and ethically – an approach we call Conscious prompting. AI tools should be used to develop our business and streamline processes, and as creative tools for exploring new types of high-quality content. At the same time, we must stay focused on privacy, data security, and transparency, both internally and with clients.
These guidelines apply to everyone involved in any part of client delivery. This means they apply to all employees at Spoon, our subcontractors, and our partner agencies. Spoon is responsible for ensuring that external parties are aware of the guidelines: for subcontractors, this should be regulated through contracts; for partner agencies, by sharing these guidelines.
Always keep the following in mind:
2.1 Responsibility
Spoon is responsible for all AI-generated content used externally, and we also carry data controller responsibility under the GDPR. It is therefore important that those who create AI-generated content are aware of current laws, regulations, and guidelines regarding privacy, data security, ethics, and intellectual property/copyright.
2.2 What role do AI tools play in our production?
We view generative AI as a tool that enhances our expertise rather than replacing it. AI will allow more and more of our tasks to be automated and streamlined, but we have not yet decided how to integrate AI into our deliveries, and we are evaluating this continuously and attentively.
2.3 Use of AI tools
We do not yet maintain a definitive list of approved AI tools, so the responsibility for ensuring that deliveries follow these guidelines lies with the user of the tool. AI tools are not yet an integrated part of everyone's day-to-day work, and licenses for paid versions are therefore evaluated by the nearest manager. We have applied for a corporate ChatGPT solution, but processing may take time due to high demand.
3.1 Client approval
Before AI is used for final deliveries in a client project, the client must have approved the use of AI.
Idea generation, research, and sketch work (mock-ups, storyboards, etc.) may be carried out without client approval, provided any confidentiality or sensitive-information concerns are taken into account.
3.2 Quality control and source criticism
If you use AI tools, you are responsible for ensuring that the results are accurate, factual, and of high quality. This means, among other things, that you should examine the sources used to generate the content.
Generative text tools such as ChatGPT can fabricate facts and misinform, so all content used in production processes must be checked and quality assured before being taken further.
3.3 Refinement of content
Unless the client’s assignment clearly involves the use of purely AI-generated content, all content should always be refined by the responsible person before it is used. This is especially true for text content.
3.4 Transparency and credibility
Content created with the help of AI should always be labeled as AI-generated. This is especially true for the use of AI-generated visual material, images, and video. Under no circumstances should we use AI-generated material that could lead the recipient to believe that a fictional event is real. All content simulating a real situation should be particularly carefully labeled. This can include images or videos showing actual people, events, or places. The difference between real stories and fictional advertising should be very clear when AI tools have been used.
Sometimes it may be relevant to replace a stock image with an AI-generated image. As long as the image meets other requirements, it is sufficient to label the image as AI-generated. Labeling is done through clear crediting or text labeling over the image/video. We are in the process of developing a common standard for labeling AI-generated visual content.
4.1 Confidentiality
Different AI tools have different ways of handling the data you input, and most also use the data to train AI models. Therefore, you should never input confidential, privacy-infringing, or otherwise sensitive data into the tools. If you are unsure about the level of sensitivity, it is your responsibility to check this with the client manager or AI manager at the office.
● Never input personal data or confidential information from clients
● Anonymize clients and individuals
● Where possible, check that the tool does not use the data you input to train the model
● Never input documents you are not authorized to share
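As an illustrative sketch only (not an official Spoon tool), a simple pre-prompt redaction pass along the lines of the anonymization rule above could look like this; the client-name list, placeholder labels, and patterns are assumptions for the example:

```python
import re

# Assumed example names only, not real clients or individuals.
CLIENT_NAMES = ["Acme Corp", "Jane Doe"]

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def anonymize(text: str) -> str:
    """Replace e-mail addresses, phone numbers, and known client
    names with neutral placeholders before text is sent to an AI tool."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    for name in CLIENT_NAMES:
        text = text.replace(name, "[CLIENT]")
    return text

print(anonymize("Contact Jane Doe at jane@acme.com or +46 70 123 45 67."))
# → Contact [CLIENT] at [EMAIL] or [PHONE].
```

A pattern-based pass like this only catches obvious identifiers; it does not replace human judgment about whether text is safe to share at all.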
Everyone using AI tools should be aware of the tools’ shortcomings and weaknesses.
● Since AI tools are developed by humans and trained on human-created data, they can carry biases and errors. They can discriminate, and the data they are trained on often does not reflect equality and diversity; as a result, their output is often generic and uniform.
● AI tools have no moral compass and cannot make decisions based on ethical grounds.
● AI tools cannot assess the credibility of sources, which means the information they produce may be based on unethical, unreliable, or incorrect sources.
● Text tools like ChatGPT can confidently invent answers when they do not know them. This underscores the importance of everyone using generative AI in a conscious and responsible manner. This is where the concept of Conscious prompting comes in.
Conscious prompting is Spoon's own way of defining responsible use of AI and AI tools. It means having a conscious approach to AI when we prompt – and when we receive results from our prompts. We should approach AI with care, adhere to our ethical principles, and ensure high quality in everything we do. We must never abandon our ethics or compromise on truth and authenticity, and we should always deliver results that are inclusive, representative, and reflect diversity.
Prompting as an area of expertise is growing at the same pace as AI tools, and we will develop guides on prompting techniques. In the meantime, we want to be clear that producing usable content (in line with the above guidelines) often requires several rounds of prompting, and that the safety questions below should be asked whenever AI tools are used.
Before we prompt an AI tool, we should always ask ourselves:
● Is this something that should be prompted?
● Is this information I should share?
When we receive the result, we should always ask ourselves:
● Is it ethical to use this content?
● Is the content accurate?
● Does the content give a manipulated view of reality?
● Is the content representative?
● Does the content reflect diversity?
● Is the content inclusive?
To support all employees in working with AI tools, there will be Conscious Prompters at all offices. If you are unsure about the use of AI tools, you can always contact your nearest Conscious Prompter for guidance.