Date of Award

Spring 2024

Thesis Type

Open Access

Degree Name

Honors Bachelor of Arts

Department

Computer Science

Sponsor

Dr. Dan Myers

Committee Member

Dr. Dan Chong

Committee Member

Dr. Mark Anderson

Abstract

This paper details a research study evaluating AI's ability to perform qualitative deductive coding. Multiple AI models were used and compared against three human coders and one expert coder. A set of 107 statements was sourced from a group discussion held for a qualitative impact assessment of an organization. The AI models were provided with these statements and directed to code them using the Community Capitals Framework. Two generations of AI models were evaluated. Overall, the AI achieved a fair level of agreement with the human annotators, but the alignment was far from perfect. Newer AI models did not increase agreement with humans; instead, they increased agreement with other AI models. The AI struggled in certain areas, such as differing interpretations of specific community capitals, one-word statements lacking context, and statements that did not fit into any of the categories. Further studies incorporating a greater variety of AI models, a more robust data set, and experienced human coders would provide a more accurate representation of AI's ability to align with humans on interpretive tasks.
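
The abstract does not state which agreement statistic underlies the "fair level of agreement" finding; chance-corrected measures such as Cohen's kappa are a common choice for this kind of inter-coder comparison, with values of roughly 0.21 to 0.40 conventionally read as "fair." The sketch below is a minimal illustration under that assumption, using hypothetical labels (not data from the study) drawn from Community Capitals categories.

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two coders over the same items."""
        assert len(labels_a) == len(labels_b)
        n = len(labels_a)
        # Observed agreement: fraction of items both coders labeled identically.
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Expected chance agreement, from each coder's marginal label frequencies.
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical codes for six statements (for illustration only).
    human = ["social", "human", "financial", "social", "built", "natural"]
    model = ["social", "cultural", "financial", "human", "built", "natural"]
    print(round(cohens_kappa(human, model), 3))  # 0.6 for this toy example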

Rights Holder

James McIntyre
