GenKI@UHB

Goal

The University of Bremen aims to leverage the possibilities of generative AI to support research, teaching and learning, as well as administration. Generative AI models permeate all areas of life and work, but their use also involves risks, such as the illegal processing of personal data and copyright infringements.

The goal is to build up AI expertise and test the technology. To this end, we are providing data protection-compliant access for all employees.

 

A guide and a checklist for using ChatAI in the GWDG Academic Cloud at the University of Bremen can be found below or as PDFs here:

Guide genKI

Checklist genKI

Contact

Project team

genKI@uni-bremen.de

Christina Gloerfeld
Martina Salm
Franziska Richter

 

Financial support

GenKI@UHB is supported by digitization funds from the state.

 

Project duration

1 July 2024 – 31 December 2025

Project Implementation

The implementation is carried out using the academic service portal of the state of Lower Saxony (Academic Cloud), operated by the GWDG (Gesellschaft für wissenschaftliche Datenverarbeitung mbH). Access via the service portal complies with data protection regulations and enables the testing of various LLMs (large language models), including ChatGPT.

Link: academiccloud.de/de/services/chatai/

During the project, a focus group helps identify users' needs and requirements, and accompanying measures are tested.

 

Guidance and Support

To promote the competent use of generative AI, measures such as education and training with information material, tutorials, videos, and workshops, as well as continuous support, are being implemented. The offer is aimed at all members of the university and is jointly provided by Administrative Unit 13, the ZMML, the CDO team, and HR Development. All measures build on the basic training courses in data protection and information security as well as on previous offers on the subject.

With a new guideline on the use of generative AI at the university, agreed between the university management, the Legal Office, the Staff Council, and the CDO team, we aim to establish a binding framework for its deployment and use:

 

- Guideline for the Use of Generative Artificial Intelligence at the University of Bremen

- Administrative Unit 13 (Ref. 13)

Link to the recommendations for use in teaching and studying

- ZMML

Link to the ZMML's information on the use of artificial intelligence

- Data protection and information security

Link to the DSB and ISB information and service portal


Courses and Events

Here you will find various information and educational offers relating to the application and use of generative AI in teaching and learning, as well as in research and administration. These range from the formulation of specific prompts to legal questions and the presentation of application scenarios.

 

Please contact us if a specific topic is missing.

 

- Courses offered by the Education and Training Center of the State of Bremen (AFZ)
Link to the AFZ pages

- Training and information events of the Multimedia Kontor Hamburg (MMKH)
Link to current training courses in the field of AI
Link to recordings of past events

- Open prompt catalog from KI-Campus and the Hochschulforum Digitalisierung
https://coda.io/@ki-campus/prompt-katalog

- Events and workshops for teaching and learning
Link to the collection of dates and events

- Seminars for technical and administrative staff
Link to the internal training program

- Further events are continually being added here

Guideline for the Use of ChatAI at the University of Bremen in the GWDG Academic Cloud

Adopted by the Rectorate on 12 August 2025

The University of Bremen offers secure access to generative AI services (genAI) via the GWDG Academic Cloud. After a one-year testing period, this service is now available to all university members. The following guide contains the most important information and recommendations for the use of these genAI tools to ensure that you make the best use of these resources. Please read it carefully, as it establishes the conditions for their permitted use.


The University of Bremen provides access to various generative AI (genAI) models that comply with data protection regulations via the Academic Cloud of the GWDG (Gesellschaft für wissenschaftliche Datenverarbeitung / Society for Academic Data-Processing). GWDG is a joint enterprise of the University of Göttingen and the Max Planck Society (www.gwdg.de). These genAI services are referred to as ChatAI, and their use is voluntary. Access is granted via a web interface using the University of Bremen account login data (federated login). Several different open-source models can be used, some of which are hosted internally. Externally hosted commercial models such as ChatGPT from OpenAI can also be used. The advantages are as follows:

  • GWDG processes all data in compliance with the GDPR (General Data Protection Regulation).

  • GWDG is contractually obligated to protect user data. Login data is only used by GWDG to verify user authentication and authorization; this is contractually secured.

  • Even with externally hosted tools, GWDG is not required to disclose personal user data to external service providers (note: the entire input content (prompts) is, however, passed on unfiltered).

  • Data entered will not be used to train AI models.

  • Input is not stored on GWDG servers, but exclusively locally in the browser used. With the external model (ChatGPT), however, Microsoft reserves the right to store the data for up to 30 days in order to prevent misuse.

academiccloud.de/services/chatai/

The University of Bremen provides secure AI-based systems using the GWDG platform, but this does not release you from your personal responsibility for the content you enter, your need to critically review the output, or your resulting use thereof.

The use of external generative AI-based systems (with the exception of GWDG’s ChatAI) is not recommended; research projects may be an exception to this rule. The City of Bremen is preparing to launch an administration-specific AI-based system (LLMoin), which will also be made available to the University of Bremen’s administrative staff.

This guide will be updated on an ongoing basis to reflect changing technical, legal, and ethical requirements.

  • Reflective Practice

Always carefully consider the use of generative AI. What are your goals and what results do you expect? Do you know how the selected model works as well as its strengths and limitations? Do you know what you are entering and how you are allowed to use the results?

Always check and verify the output against primary sources, subject-matter expertise, or peer review.

  • Critical Questioning

Do not pass on unverified AI outputs – take full responsibility for what you share and carefully evaluate the output. Is the information plausible, correct, and ethical? Are there any possible biases, discrimination, or factual errors?

  • Data Protection Compliance

Do not enter any personal data into the genAI systems unless you have legal authorization or consent. Check whether the results include personal data that may not be shared.

  • Information Security

Do not enter confidential information into AI systems. Note the classification and sensitivity of your information.

  • Copyright Compliance

Be careful not to enter or publish any copyrighted content. Verify all outputs for potential copyright violations and determine who has the rights to AI-generated content. When in doubt, do not publish AI-generated content.

  • Ethical Considerations

Critically assess outputs for biases, factual inaccuracies, or misrepresentations; label content generated with AI and, if necessary, document your AI usage process comprehensibly.

  • Prohibited Practices

Generative AI must not be used for creating profiles, automated grading or evaluations, generating plagiarized content, or providing misleading or non-transparent information.

  • Always check against the Checklist for Legally Compliant Use of Generative AI Services:

Checklist genKI

  • Central point of contact for inquiries and for the GenKI@UHB project:

www.uni-bremen.de/en/digital-transformation/projects/genkiuhb

Email: genki@uni-bremen.de

  • genAI in Teaching and Learning

Administrative Unit 13

Link to the recommendations for use in teaching and learning

ZMML

Link to ZMML information on the use of artificial intelligence

  • Data protection and information security

Link to the DSB and ISB information and service portal (in German only)

Generative Artificial Intelligence (genAI; German: GenKI)

AI applications that are trained with data to automatically generate content such as texts, images, sounds, videos, or program code when prompted.

Large Language Models (LLMs)

Text-generating systems (e.g., ChatGPT, Llama, Gemini, Claude) that are trained with large amounts of data. They have no understanding of the content, but calculate the statistically most likely next character or word sequence. Erroneous, fabricated, or factually incorrect outputs ("hallucinations") are possible.
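
To make the idea of "the statistically most likely next word" concrete, here is a minimal, purely illustrative Python sketch (the contexts, candidate words, and probabilities are invented for this example and are not taken from any model offered via ChatAI): it simply looks up a fixed probability table and returns the most probable continuation, without any understanding of the content.

# Toy illustration of next-word prediction (not a real LLM):
# pick the most probable continuation from a hand-made probability table.
toy_model = {
    # context -> {candidate next word: probability}
    "the cat sat on the": {"mat": 0.62, "sofa": 0.21, "roof": 0.17},
    "the capital of France is": {"Paris": 0.88, "Lyon": 0.07, "London": 0.05},
}

def next_word(context: str) -> str:
    # Return the candidate with the highest probability for a known context.
    candidates = toy_model[context]
    return max(candidates, key=candidates.get)

for context in toy_model:
    print(context, "->", next_word(context))

A real LLM learns such probabilities over an enormous vocabulary from its training data and generates text by repeatedly predicting the next token, which is also why fluently worded but factually incorrect continuations ("hallucinations", see the next entry) can occur.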

Hallucinations

AI provides convincingly formulated output that is factually incorrect. The content and argumentation seem coherent, but the AI system completely fabricated the information.

Bias / Distortions

Bias or misrepresentation embedded in training data or model design can carry over to AI outputs and lead to discrimination (e.g., unequal treatment of groups of people).

Checklist for the Use of Generative AI Services

The use of generative AI services, such as the GWDG’s ChatAI (which includes ChatGPT), inevitably raises legal questions. It is important to note that comprehensive legal guidance for many challenges related to AI services is still evolving.

This document provides an overview of key legal requirements. It does not constitute legal advice and does not replace individual legal counseling. All information has been carefully reviewed but is provided without guarantee that it is accurate, complete, or current.

For personal advice, contact the relevant services:

  • For general legal questions, reach out to the university’s Legal Office, Administrative Unit 13, or the Center for Multimedia in Higher Education (ZMML).
  • For data protection inquiries, contact the university’s data protection officer.
 

I have carefully read the terms of use, and I intend to use the AI services in compliance with them.

I am accessing the service via the Academic Cloud’s federated login (provided by GWDG for universities), ensuring secure authentication and encrypted data transmission.

 

I have assessed whether I intend to enter copyrighted material (e.g. images, text excerpts, or exam submissions) into the AI services.

If yes, I have secured the legal authorization to do so – such as obtaining written consent from the copyright holder – and documented this process.

I confirm that I have not processed any third-party personal data (as defined under Art. 4 (1) GDPR (DS-GVO)) as input in generative AI services, unless I possess legal authorization for processing personal data (e.g. explicit written consent from the data subject allowing AI processing).

This includes, but is not limited to, full names, personalized email addresses, a full postal address, photos/videos identifying individuals.

I confirm that I have not processed any special categories of personal data (as defined under Art. 9 (1) GDPR) as input in generative AI services without obtaining legal authorization for the special category data (e.g. explicit written consent from the data subject allowing AI processing).

Special categories of personal data include racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, as well as the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, health-related data, or data concerning a natural person’s sex life or sexual orientation.

Whenever I process third-party personal data, I have anonymized it prior to use as AI input to ensure no re-identification is possible – or I have obtained legal authorization for processing the personal data in the AI service (e.g. explicit written consent from the data subject allowing AI processing).

For AI-assisted decisions affecting individuals, I have documented with an auditable trail that the final decision primarily resulted from human evaluation, confirming that this does not constitute an automated individual decision under Art. 22 (2) GDPR.

I confirm that all inputs comply with contractual confidentiality agreements, legal requirements, and university policies to the best of my knowledge and belief. This particularly includes agreements on confidentiality, obligations of secrecy, and statutory requirements.

I confirm that I did not use any materials from confidential sources as input.

I am not making automated decisions based on AI outputs under Art. 22 (1) GDPR that have legal effects or would significantly affect natural persons (e.g. employment decisions). Final decision authority remains with a natural person (e.g. myself).

I have evaluated all AI-generated outputs intended for dissemination or publication for copyright-protected content (e.g. inadvertent reproductions of existing works). To this end, I have adhered to my university’s Legal Guidelines on the Use of AI in Teaching.

I have examined the AI output according to academic principles before disseminating or publishing it, including verifying subject-matter accuracy. To this end, I have adhered to the procedures and guidelines for ensuring good scientific practice at my university.

Entering copyrighted works (e.g. photographs, news articles) into an AI service as input is prohibited without the copyright holder’s consent.

Using third-party personal or sensitive data as input (e.g. full names, personalized email addresses, a full postal address, photos/videos identifying individuals, ethnic/biometric/medical data) without a legal basis (e.g. consent) is prohibited, since personal data must be anonymized (i.e. must be unlinkable to individuals).

Copying sensitive documents that have not been anonymized (e.g. CVs, personal cover letters, or third-party application materials) and using them as input in an AI service is prohibited.

Inputting non-public documents (e.g. business proposals, internal reports, or financial statements) without consent from affected parties is prohibited.

Using student exam submissions as input (e.g. written test answers, drawings) to assist you in grading without a data protection-compliant legal basis is prohibited.

Entering confidential material (e.g. closed-session committee minutes, unpublished research, or classified information) is prohibited.

Using AI services for automated evaluations that impact individuals’ rights and implement the results without a thorough review by a human is prohibited.