Research Consulting Ltd

Policy on the use of artificial intelligence

Purpose and scope

1. The purpose of this policy is to establish guidelines for the responsible and ethical use of artificial intelligence (AI) at Research Consulting. It applies to all employees, contractors, and third-party partners who utilise AI in any capacity.

2. This policy should be read in conjunction with other relevant policies, including our Employee Handbook, Quality Policy and Manual, IT Policy, Social Media Policy and Data Protection and Privacy Policy.

3. This policy focuses primarily on the use of generative AI, which is the kind of AI that can create novel content, such as text, images, or audio. However, the principles set out in this policy should also be applied to any other use of AI tools within the organisation, such as data analysis, automation, or optimisation.

4. The term ‘AI tool’ is used in this document to refer to any machine-based system that infers from input to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This includes software products licensed by the company for use in our projects and activities and any other external AI tool that an employee, contractor, or partner may use in the course of performing their duties for the company.

Overarching principles

5. We believe that AI can support and enhance our work, but it is not a substitute for original human thinking, advice, and consultancy. Our approach to AI is aligned with our company values as follows:

  • Connection: We support the responsible use of AI by keeping up to date with the latest developments and sharing our knowledge with the communities we serve.
  • Quality: We acknowledge our responsibility for the quality of any content we generate by or with the support of AI tools.
  • Integrity: We are transparent in our use of AI and mindful of its limitations and biases.
  • Trust: We respect privacy, confidentiality and intellectual property rights and refrain from using AI tools substantially in sensitive activities. 

Accountability and authorship

6. Research Consulting recognises the need for accountability in AI use. We will:

  • Establish clear accountability: the Managing Director is responsible for overseeing our use of company-licensed AI tools and ensuring compliance with this policy. Within the context of client projects, the Project Lead is responsible for determining our use of AI tools, including appropriate disclosure of this to our clients.
  • Document decision-making processes: We will maintain records of decisions made regarding AI use where this has a substantial impact on our chosen methodologies and/or the outputs of our work, including the choice of AI tool(s) to be used.

7. An AI tool cannot be listed as the author of any of Research Consulting's outputs, as AI tools cannot take responsibility for the quality and content of the work.

8. Research Consulting staff members remain responsible for the quality of any content they generate by or with the support of AI tools, including managing conflicts of interest and respecting copyright.

9. We refrain from using AI tools substantially in sensitive activities that could adversely impact other researchers, professionals or organisations, such as peer review of academic papers and formal evaluation of research proposals.

Acknowledging the use and limitations of AI

10. Where we make substantial use of AI in our work, we will be transparent about this to our colleagues and clients as follows:

  • Disclosing which AI tools have been used substantially in our work.
  • Acknowledging the potential limitations and biases inherent in the AI tools we use, including the potential lack of diversity in the training data used, and taking appropriate steps to mitigate and disclose these in our work.
  • Where reasonably required by the client, making the inputs (prompts) and outputs available, in line with open science principles. This information would typically be made available in an appendix or annex to our main outputs.

Data privacy and security

11. We acknowledge the importance of protecting the data used in AI processes. Research Consulting will:

  • Obtain explicit consent: Ensure that any personal data used in AI tools is collected with the knowledge and consent of relevant parties. 
  • Implement security measures: Employ robust security protocols to safeguard AI-generated insights and prevent unauthorised access to sensitive information.
  • Restrict data sharing: Ensure that personal and confidential data is not shared for AI training purposes without explicit permission from relevant parties.

12. Guidelines for the collection, storage and handling of data are set out in our Data Protection and Privacy Policy.

Compliance with legal and regulatory frameworks

13. Research Consulting commits to complying with relevant laws and regulations governing the use of AI. We will regularly review and update this policy to align with any changes in the legal landscape and will take reasonable steps to ensure the AI tools we use maintain an appropriate level of cybersecurity.

Training and resourcing

14. Research Consulting will make appropriate investments in AI tools and training to enable its staff to make effective use of them, taking account of ethical considerations, privacy concerns, and best practices in responsible AI use.

15. We will balance the use of AI tools with the need to provide opportunities for staff to develop and maintain their skills, expertise and creativity. Research Consulting will encourage staff and line managers to exercise their professional judgment in deciding when and how to use AI tools, and when to rely on their own abilities and experience.

Continuous monitoring and improvement

16. We recognise that AI is a rapidly evolving field and we are committed to reviewing and updating this policy as needed. To promote the ongoing responsible use of AI, Research Consulting will:

  • Monitor developments: Stay informed about advancements in AI ethics and standards and respond to these by periodically updating and enhancing our policies and practices.
  • Encourage feedback: Foster an environment where employees can provide feedback on AI tools and use, allowing for continuous improvement.
  • Engage in community discussions: Be an active participant in discussions and debates about the use of artificial intelligence within the communities we serve.