The University of Oxford has recently published new guidance on the use of Generative AI in Research, as well as new policies on the use of GenAI in summative assessments. The university has also published guidance on the safe and responsible use of GenAI tools. This follows a recent deal between the university and OpenAI, which will make ChatGPT Edu available across the university.
Much of this is very informative, and it is a positive step to have clearly articulated policies on the use of GenAI in a university context, especially ones that lay out guidance for both students and researchers. However, as with any blanket set of policies and recommendations, questions remain as to how they will apply within individual subjects and fields, and how the use of specific tools and methods should be reported, registered, and cited. This is particularly true in fields such as History, where declaring the use of specific tools and associated data has not been common practice, except among digital humanities specialists.
The policies introduce a key distinction between what the university terms substantive use and other uses of GenAI. Substantive use includes employing GenAI to “interpret and analyse data”, “formulate research aims”, or “identify research gaps”, as well as to produce transcriptions of interviews (interestingly, OCR is not mentioned). There are a few exclusions, mostly relating to the use of AI to overcome language and accessibility barriers.
