AI Usage Policy of LHLK Group
1. Introduction
The AI Usage Policy aims to enable responsible, transparent, and ethical use of Artificial Intelligence within the LHLK Group. It ensures that our handling of AI complies with legal and regulatory standards, aligns with our general corporate values, and respects the rights of our customers, partners, and stakeholders. Any use of AI within the LHLK Group includes the principle that the final decision on the use of AI-generated analyses, processes, and content lies with humans, i.e., ourselves.
We already use AI in the LHLK Group for various applications: automation of routine tasks, support in research and text summarization, idea generation and creative processes, or AI-supported content creation (texts, images, videos, podcasts). Other applications are conceivable or already being tested: data analysis and reporting for performance measurement, optimization of project management and workflows, automation in social media management, or marketing and campaign optimization, and much more.
In all these applications, the focus is on efficiency gains that we can use to improve work quality, whether through the automation of repetitive or time-consuming processes or through more precise analyses and error avoidance. Our teams can thus focus on more strategic, creative, and complex activities that contribute to personal development and achieve the best possible results for our customers. By delegating routine tasks to AI, we free up capacity for creativity and innovation, areas in which human judgment and expertise remain crucial. Our goal remains to improve the quality of our results through the use of AI by minimizing errors and by accelerating and improving decision-making processes.
Since AI cannot establish a relationship with the world on its own, it is up to us to decide how close we allow AI to come to us, our customers, and our stakeholders. For us, AI is therefore an intelligent tool that helps us create a trustworthy framework for communication that is relevant and forward-thinking, balancing disruption and stability.
2. Scope
This policy applies to machine-based systems that are designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infer, from the inputs they receive, how to generate outputs for explicit or implicit objectives, such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This includes tools that use AI for partial tasks.
The policy applies to all employees, contractors, and partners of the LHLK Group who use or interact with AI systems.
3. Guidelines
3.1. Responsible AI Use and Fairness
AI serves as an auxiliary tool and does not replace human processing and responsibility. The contractor guarantees the client that AI, where used as an auxiliary tool, takes on only a subordinate, supporting function and does not replace human processing.
Accordingly, employees must use AI systems responsibly and avoid any actions that could harm others, facilitate malicious activities, or violate privacy. We ensure that no discrimination or bias occurs in our work through AI. The results of AI-supported analyses are interpreted and used fairly.
3.2. Compliance with Laws and Regulations
AI systems must be used in accordance with all applicable laws and regulations, including intellectual property and privacy laws. This is particularly true for compliance with the GDPR and other data protection regulations when using customer data and data from research projects, as well as for the protection of personal rights.
In particular, it is ensured that the requirements of the European AI Regulation are complied with when using AI systems. In addition, the provisions of the GDPR are carefully observed when using AI systems. Personal data within the meaning of Art. 4 No. 1 GDPR is processed only on a valid legal basis (e.g., the consent of the data subject pursuant to Art. 6 para. 1 lit. a GDPR).
Using AI services to process personal data for the purpose of assessing the personality, work performance, physical and mental resilience, or cognitive or emotional abilities of individuals, or of predicting the criminality of individuals or groups, is prohibited.
3.3. Transparency
We are transparent with our customers and partners about the use of AI in our daily work. For the event of a dispute, the LHLK Group, as the contractor, maintains documentation recording the essential use of AI in the subject matter of the contract. This documentation contains at least information about which tools were used and the extent to which the deliverable was reworked by a human. In addition, reference is made to the requirements of Art. 50 of Regulation (EU) 2024/1689 (AI Regulation).
The client can request that the LHLK Group, as the contractor, confirms that the agreed guidelines for the use of AI have been complied with and that AI has only served as an auxiliary tool.
3.4. Quality and Validity
Employees are responsible for the results generated by AI systems and must be able to explain and justify these results. Employees must ensure that any content developed is reviewed by a human for factual, ethical, and legal accuracy before being used in a professional context.
For reasons of transparency (see 3.3), the LHLK Group, as the contractor, discloses the essential aspects of the use of AI in the subject matter of the contract within the framework of the documentation described in 3.3, as agreed with the client.
If projects heavily depend on the support of AI tools or if the price or schedule of the project was agreed upon based on the use of AI tools, this should be recorded in the contract or confirmed in writing by the client. If reduced quality requirements have been agreed upon, this should also be confirmed in writing.
3.5. Disclaimer
Employees who use AI systems in the course of their work in accordance with this policy and the applicable work instructions are generally exempt from personal liability for any violations or problems related to AI-generated content. This does not apply in cases of intentional or grossly negligent behavior.
Within the scope of its legal obligations, the contractor assumes responsibility for consequences caused by an employee's simple negligence in connection with the use of AI systems, provided that the use was in accordance with internal guidelines.
3.6. Project Classification
All LHLK Group projects and their sub-projects are classified into three confidentiality levels: Red, Orange, and Green. All projects are initially assigned to the Green confidentiality level by default. It is the responsibility of the project managers to independently classify the respective projects and sub-projects into a higher confidentiality level (Orange or Red) in line with the applicable contractual agreements and the nature and sensitivity of the data processed. Data protection requirements and customer-specific regulations must be given particular consideration.
The classification must be documented in a traceable manner and, if necessary, justified upon request to the AI Taskforce or the client.
Red – Strictly Confidential Level
Covers data classified as strictly confidential, protected information, or data of high value and high risk. The customer contract excludes the use of AI.
Examples:
• Projects under contracts from the public sector
• Financial figures that have not yet been published
• Strategies, figures, reports, plans, or other materials intended for internal use only
• Personal information, which may also include everyday conversations from recordings that allow personal conclusions about the participants
Approved list of tools for this level (Red):
• Microsoft 365 Copilot
• No other AI tools without prior consultation; approval depends on the customer contract
Orange – Medium Confidentiality Level
Contains customer data classified as internal or protected but of low value and low risk.
Examples:
• Unpublished press releases on “non-sensitive” topics;
• Unpublished content intended for social media, websites, or other public channels
Approved list of tools for this level (Orange):
• All approved LHLKI Dashboard tools in the areas of ChatGPT & Co, image generation, intelligent research, text assistant, and PRPs
• The tool MyGoodTape
• Microsoft 365 Copilot
• All other tools only after consultation
Green – Standard Confidentiality Level
Contains customer data that is not classified as confidential or already publicly accessible.
Examples of such data:
• All publicly accessible information
Approved list of tools for this level (Green):
• All approved tools in the various areas of the LHLKI Dashboard
• Microsoft 365 Copilot
3.7. Named AI Officer
Designated AI officers are responsible for overseeing the implementation of this policy, providing guidance and support to employees, and ensuring compliance with relevant laws and regulations. In the LHLK Group, this role is held by the following individuals: Ute, Frank, Kathi, David, Louis. In addition to the designated AI officers, it is ensured that all employees using AI systems have sufficient competence in the use of these systems (AI literacy) in accordance with Art. 4 of the AI Regulation. This includes compliance with all technical and legal principles for the use of AI systems.
The LHLK Group ensures that the AI literacy of users of AI systems within the LHLK Group is regularly refreshed where circumstances require. In particular, the LHLK Group ensures that employees, contractors, and partners have sufficient knowledge of the basics of artificial intelligence, the legal framework, copyright law in relation to the use of AI, data protection law in relation to the use of AI, and AI management.
3.8. Regular Reviews
The AI Taskforce conducts regular reviews of the AI system landscape and usage within the LHLK Group to ensure compliance with this policy, identify emerging risks, and recommend updates to the policy if necessary.
3.9. Incident Reporting
Employees are required to report any suspected violations of this policy, as well as any potential ethical, legal, or regulatory concerns related to the use of AI, to the AI Taskforce or their direct supervisor.
3.10. Policy Review
This policy is reviewed annually or as needed based on the development of AI technology and the legal environment. All significant changes to the policy are communicated to all employees.
Berlin, 23.07.2025