The University of Oxford has recently published new guidance on the use of Generative AI in Research, as well as new policies on the use of GenAI in summative assessments. The university has also published guidance on the safe and responsible use of GenAI tools. This follows a recent deal between the University and OpenAI, which will see ChatGPT Edu made available to all users at the university.
Much of this is very informative, and it feels like a positive step to have clearly articulated policies on the use of GenAI in a university context, especially a set which lays out guidance for both students and researchers. However, as with any blanket set of policies and recommendations, questions remain as to how these will apply within the context of individual subjects and fields, and how the use of specific tools and methods should be reported, registered, and cited. This is particularly the case in many fields, History included, where declaring the use of specific tools and associated data has not been common practice in the past, except among digital humanities specialists.
The policies introduce a key distinction centred on what the university terms substantive use: using GenAI to “interpret and analyse data”, “formulate research aims”, “identify research gaps”, or to produce transcriptions of interviews (interestingly, OCR is not mentioned). There are a few exclusions, mostly relating to the use of AI to overcome language and accessibility barriers.
This substantive use of individual GenAI tools is treated separately from applications “where GenAI is a functionality in existing software”, a distinction which is likely to become blurred very quickly as GenAI becomes embedded in everyday software. It’ll be interesting to see whether this is revised in the coming months and years. It’s good to see the University encouraging researchers to document their use of GenAI tools and be transparent about it, but I wonder whether more casual users of the technology can really be expected to systematically record their occasional interactions with AI chatbots, especially if they don’t consider these to be instrumental to their research process.
The policies also contain reminders about the environmental impact of GenAI, and suggest that researchers seek out smaller models that can be run locally. This was one of our principal recommendations on the Congruence Engine project last year. For now this seems like something that only digital humanities folks will take up with any degree of seriousness, but I’ll keep an eye out for any university-wide initiatives that encourage researchers to ditch the chatbots in favour of local, task-specific models.
Finally, while chat data will remain private for all users of ChatGPT Edu, it seems likely that in some cases prompts and responses will be liable to FOI requests, assuming these are logged in the system and the University retains access to them. Identifying information will of course be covered by FOI exemptions, but I’d be curious to know what kinds of data such requests could surface further down the line.
