Counterfactual reasoning capabilities of GPT: Preliminary findings

Presented by: Alexander Berman from the University of Gothenburg
Date: September 27, 2023

Abstract: Recently, there has been considerable interest in large language models (LLMs) such as GPT and their ability to engage in human-like dialogue and use commonsense reasoning. We experimentally investigate a specific aspect of these abilities, namely counterfactual explanations. The capacity to reason counterfactually and provide relevant explanations is particularly important when using AI to assist high-stakes decisions and assessments such as credit approval or medical diagnostics. For example, if a loan applicant is denied credit, a counterfactual explanation conveys the conditions under which the credit would have been granted. By injecting a decision-making algorithm into the model's prompt and systematically probing and annotating responses for carefully chosen inputs, we study potential patterns in GPT's selection of counterfactual examples. Preliminary results indicate that when GPT-3.5 provides counterfactual explanations, it does not consider causal relations between variables in a way that one would expect from a model with strong commonsense reasoning capabilities. We discuss potential implications of these results for real-world applications and future research.
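
The probing setup described in the abstract can be pictured with a minimal sketch: a toy credit-approval rule is written directly into the prompt, and the model is asked to explain a denied application counterfactually. Everything below (the rule, the thresholds, the variable names, the wording of the question) is an illustrative assumption, not the presenter's actual experimental material.

```python
# Hypothetical sketch of a counterfactual probe: a simple decision rule is
# injected into the prompt, and GPT is asked which changes would have led
# to approval. All names and thresholds here are illustrative assumptions.

DECISION_RULE = """\
A loan application is approved if and only if:
  - income >= 30000, and
  - existing_debt <= 10000, and
  - employment_years >= 2
"""

def build_probe(income: int, existing_debt: int, employment_years: int) -> str:
    """Compose a prompt that embeds the decision rule and one rejected applicant."""
    return (
        "You are assisting with credit decisions made by the following rule:\n"
        f"{DECISION_RULE}\n"
        f"An applicant with income={income}, existing_debt={existing_debt}, "
        f"employment_years={employment_years} was denied.\n"
        "Explain, counterfactually, what would have had to be different "
        "for the application to be approved."
    )

if __name__ == "__main__":
    # Print one probe; in a study like the one described, many such probes
    # would be sent to GPT-3.5 and the responses annotated for which
    # counterfactual changes the model chooses to mention.
    print(build_probe(income=25000, existing_debt=12000, employment_years=5))
```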

Location: Attend in person at C250 or via Zoom, https://gu-se.zoom.us/j/66299274809?pwd=Yjc2ejc2VVhraXVJMmhWeWtOQ2NuUT09

Time: 13:15-15:00