tag: ai

Why I stopped listening to Hard Fork

Hard Fork is a weekly podcast published by the New York Times and presented by Kevin Roose and Casey Newton. The show features discussions about new developments in tech, which in recent years generally translates to ‘AI and little else’. I started listening to the podcast sometime around late 2023: ChatGPT had recently hit the headlines, and I was starting a new research fellowship on the Congruence Engine project, which explored the use of digital methods in historical and curatorial work. But although AI was increasingly a subject of mainstream news coverage, I didn’t feel like there were many places to keep up to speed with the latest developments in a way that was both detailed and accessible.

At first, Hard Fork seemed like a good way to get clued into the way that the world was being, or at any rate was about to be, transformed by the arrival of powerful new Large Language Models. I learned a lot by listening to Kevin and Casey discuss the inner workings of Silicon Valley, often through interviews secured with leading figures in the world of AI like Demis Hassabis or Dario Amodei. The tone, admittedly, was occasionally a bit grating, but in this instance it had the upside of making the fast-paced world of tech understandable to a relative novice to the field.

Read more →

GenAI at Oxford

The University of Oxford has recently published new guidance on the use of Generative AI in research, as well as new policies on the use of GenAI in summative assessments. The university has also published guidance on the safe and responsible use of GenAI tools. This follows a recent deal between the University and OpenAI, which will see ChatGPT Edu made available to all members of the university.

Much of this is very informative, and it feels like a positive step to have clearly articulated policies on the use of GenAI in a university context, especially ones which lay out guidance for both students and researchers. However, as with any blanket set of policies and recommendations, questions remain as to how these will apply within individual subjects and fields, and how the use of specific tools and methods should be reported, registered, and cited. This is particularly the case in fields such as History, where declaring the use of specific tools and associated data has not been common practice, except among digital humanities specialists.

The policies turn on a key distinction around what the university terms substantive use, which includes the use of GenAI to “interpret and analyse data”, “formulate research aims”, “identify research gaps”, or to produce transcriptions of interviews (interestingly, OCR is not mentioned). There are a few exclusions, mostly relating to the use of AI to overcome language and accessibility barriers.

Read more →