Good academic practice and generative AI

The use of generative AI in degree programmes at the University of Copenhagen must always comply with good scientific practice.

Introduction

At the University of Copenhagen, we view students’ use of generative AI through the lens of good academic practice. The degree programmes therefore work on how generative AI can best be integrated into studies, ensuring that our graduates possess the right qualifications and relevant knowledge. Read the University of Copenhagen’s position below.

Good academic practice with generative artificial intelligence

Due to the uncertainty about how generative artificial intelligence (AI) will affect research and research-based education, little consensus has yet been reached on how generative AI may and should be used. Research-based education must be linked to good research practice. 

At the time of writing, the Danish ethical guidelines for good research are being revised. In March 2024, however, the EU delivered recommendations for a code of conduct for researchers that incorporates generative artificial intelligence (European Commission, 2024). These recommendations can form the basis for how degree programmes design and introduce students to appropriate and ethically sound use of generative artificial intelligence. The main principles are listed here with an eye to the study and exam processes that make up research-based education:

  • Reliability: The information produced by generative AI during study and exam processes must be verifiable. This also involves being aware of possible issues of bias and inaccuracy in the output.

  • Honesty about how generative AI has been used during all parts of the study and exam processes. This principle includes disclosing when and for what generative AI has been used.

  • Respect for fellow students, teachers, colleagues, participants, society, ecosystems, cultural heritage and the environment. The responsible use of generative AI in study and exam processes must involve consideration of the consequences of using the technology. This concerns both the limitations of the technology and ethical issues.

  • Accountability for study products from idea to submission. Both study and exam processes and products are ultimately the student's responsibility. [i]

 

Students' 'independent work' in the light of generative artificial intelligence

The University Programme Order lays down that university programmes are to:

"qualify graduates for professional careers by providing them with expertise and methodological skills in one or more subject areas"

Uddannelsesbekendtgørelsen, 2021, §1

During their education, students are expected to develop the ability to "independently identify, formulate and solve complex issues within the relevant components of the subject area(s)" (Uddannelsesbekendtgørelsen, 2021, §2).

Independent work is not a clearly defined concept in the Danish educational tradition, even though it is central to the very idea of the purpose of education. The following text seeks to clarify the concept and translate it into an age in which generative AI is a factor in both society and the educational system.

 

Independent work inside and outside the university

Educating students to work independently in professional settings may cover functions both at and outside the University. In line with good academic practice, academics are expected to be able to plan and carry out tasks with a view to what is factually correct [reliability and honesty] and socially and ethically responsible [respect and accountability], as well as to identify problems in the performance of the work and propose qualified solutions to them.

Education should qualify graduates to play a part in the functions of society, but also to form them for what Illeris calls counter-qualification, i.e. "to be able to function more or less autonomously in society as it is and at the same time contribute to its change and transgression" (Illeris, 1985, p. 15).

For graduates of research-based programmes, the requirement for independent work can be considered parallel to the requirements for research publications: research is expected to present the state of the art and make clear what the publication contributes. It is not enough to know the academic tradition; students are also expected to try to transform it (Hammershøj, 2008, p. 7).

Learning objectives for university education place demands on students' individual work with sources and methods, their ability to transfer what they learn to other contexts, and their ability to collaborate with others. An important point is that students can contribute independently to collaborative knowledge production (Luo, 2024).

Students' independent work is discussed in the literature on academic staff members' supervision of students writing major independent assignments. The discussion concerns how supervisors best support students' ownership and autonomy in decisions regarding both the process and the product. Two key dimensions are:

"independent work in the product (content and form together) and independent work in the process. The first is about how students work with content, sources and methods, i.e. to what extent they use and reflect on them in their writing. The second is about how students plan and carry out their work process, including using their supervisor and other resources."

Rienecker et al., 2019, p. 70

Students can work independently in one dimension but not in the other. Until now, it has not been a Danish tradition to include writing and learning processes in exams.
 

Requirements for working independently and transparently at exams

In exams, a key point is that an educational institution will want to test students' attainment of the learning objectives so they can provide a credible certification. This involves all four principles of good academic practice with generative AI.

In an age without generative AI, this meant ensuring that it was not the supervisor or an unknown third party who planned and delivered the exam performance.

In the age of generative AI, it is about identifying students' (own) skills, processes and products rather than testing the work of generative AI. Or, as the UCPH regulations put it: "Each exam must be the result of the student's own independent performance within the framework of the exam in question" (section 4(3)). This requires that the institution, as part of its efforts to develop exams and assessments in an era of generative artificial intelligence, can argue that its exams are designed so that it is the student's acquisition of knowledge, skills and competencies that is tested in the chosen exam forms and practices. This argumentation can be used to justify and explain to everyone involved in the education the rationale behind the way exams are conducted.

This has long-term didactic implications for teaching. For students to participate in learning and exam processes as intended, the institution must support them in developing good strategies for using generative AI.

It is the student's task to vouch for the process and the product that constitute the academic content and formal expression of the exam performance, regardless of whether or how generative AI has been used. The institution's concern is, of course, that the student may be unable to take on this responsibility and may therefore, with or without intent, give misleading information about their own efforts, cf. section 4(2) of the regulations: "At an exam, students must not give misleading information about their efforts or results."

This means that the institution will want to know how the student prepared their exam performance. This transparency could be organised and communicated, for example, by having the student fill out a form describing how generative AI has been used and its impact, if any, on the final product.

 

Principles for exams and AI at the University of Copenhagen:

  • It is the student’s knowledge, skills and competencies that exams should test, and it is the responsibility of the degree programmes to ensure that exams are designed with this in mind.
  • The aids used in the study and exam process must be transparent, and arguments must be provided for the academic choices made, including the use of generative AI.
  • The content and form of the exam performance is the sole responsibility of the student, regardless of any aids used.

Students' independent work when using generative artificial intelligence in exams

The following criteria are proposed for assessing students' independent work when using generative AI. Students demonstrate that they work independently by:

  • their use, or non-use, of generative AI tools during the acquisition of the academic content and intended competences.
  • arguing for their use of generative AI tools in their approach to specific academic assignments.
  • being able to assess the processes and exercise professional judgement about the products generated by or with generative AI.
  • reflecting on how generative AI can affect the field now and in the future and contribute to developing and co-creating it.

Students who independently work with and reflect on generative AI can potentially contribute to transforming academic traditions by considering how generative AI can be part of future academic practices in a well-argued and critical manner.

The exam angle on the concept of independent work is therefore about testing what students are capable of, regardless of whether generative AI has been used. This may mean that the graduate, as a competent professional, can reflect on questions about generative AI and the field concerned. Such assessment interests can be included in exams, e.g. by asking the student in an oral thesis defence to reflect on their choice of relevant tools and their arguments for methodological choices, including generative AI.

The University of Copenhagen Academic Board on Education Strategy's working group on artificial intelligence and exams will devise tools that staff and students at UCPH can use in the work of adjusting study and exam practices on the degree programmes.


References:

Bekendtgørelse om universitetsuddannelser tilrettelagt på heltid, BEK nr 2285 af 01/12/2021 (2021). Retsinformation.

European Commission. (2024). Living guidelines on the responsible use of generative AI in research (p. 18) [ERA Forum Stakeholders' document].

Hammershøj, L. G. (2008). At forholde sig akademisk—Om opgaveskrivning på lange videregående uddannelser (p. 22). DPU.

Illeris, K. (1985). Modkvalificeringens pædagogik. Problemorientering, deltagerstyring og eksemplarisk indlæring. Unge Pædagoger.

Luo, J. (Jess). (2024). A critical review of GenAI policies in higher education assessment: A call to reconsider the "originality" of students' work. Assessment & Evaluation in Higher Education, 1–14.

Rienecker, L., Wichmann-Hansen, G., & Jørgensen, P. S. (2019). God vejledning af specialer, bacheloropgaver og projekter (2. udgave). Samfundslitteratur.


[i] The wording of the original EU principles for the responsible conduct of research using generative artificial intelligence:

  1. Reliability in ensuring the quality of research, reflected in the design, methodology, analysis and use of resources. This includes aspects related to verifying and reproducing the information produced by the AI for research. It also involves being aware of possible equality and non-discrimination issues in relation to bias and inaccuracies.
  2. Honesty in developing, carrying out, reviewing, reporting and communicating on research transparently, fairly, thoroughly and impartially. This principle includes disclosing that generative AI has been used.
  3. Respect for colleagues, research participants, research subjects, society, ecosystems, cultural heritage and the environment. Responsible use of generative AI should take into account the limitations of the technology, its environmental impact and its societal effects (bias, diversity, non-discrimination, fairness and prevention of harm). This includes the proper management of information, respect for privacy, confidentiality and intellectual property rights, and proper citation.
  4. Accountability for the research from idea to publication, for its management and organisation, for training, supervision and mentoring, and for its wider societal impacts. This includes responsibility for all output a researcher produces, underpinned by the notion of human agency and oversight.

 


About this page

This page reflects the status as of autumn 2024, and the text has been adopted by the University of Copenhagen’s Educational Strategy Council.