Date of Award

Spring 2024

Thesis Type

Open Access

Degree Name

Honors Bachelor of Arts

Department

Computer Science

Sponsor

Dr. Dan Myers

Committee Member

Dr. Dan Chong

Committee Member

Dr. Mark Anderson

Abstract

This paper continues research evaluating the capacity of artificial intelligence (AI) to perform qualitative coding tasks. The previous study found that AI models were inconsistent across their own runs and did not agree with human-coded data. Since that study, AI models have become substantially more capable, so this study re-evaluates how well the newest models (Claude 3 and Gemini) perform qualitative coding. When tested, the new models perform about the same as or better than previous models, depending on the metric. While Gemini and Claude 3 agree with human output no more than previous models did, they agree with each other slightly more and with themselves substantially more, as measured by the Kappa statistic. Disagreement persists around codewords that are not clearly distinguished from one another, such as the human, social, and cultural codewords. Overall model consistency has improved, however, so repeated runs of the same model are likely to agree. Although these models have not reached human-level ability, they show potential as a tool to expedite qualitative coding.
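For readers unfamiliar with the agreement measure the abstract cites, the following is a minimal sketch of how Cohen's kappa compares two coders' label assignments, correcting raw agreement for agreement expected by chance. The codewords and ratings below are hypothetical illustrations, not data from the study.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal code frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: codes assigned to five excerpts by a human and a model.
human = ["social", "cultural", "human", "social", "cultural"]
model = ["social", "human",    "human", "social", "cultural"]
print(round(cohen_kappa(human, model), 3))  # 0.706
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; the same computation can compare a model's output against a human coder's, against another model's, or against a second run of the same model.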

Rights Holder

James August Temple
