Since its release in November of last year, ChatGPT has raised our expectations of what machines are capable of. OpenAI's chatbot has already passed the US Medical Licensing Exam and a Wharton MBA exam.
Researchers tested ChatGPT on accounting exam questions in a study published in a journal of the American Accounting Association.
“When this technology first came out, everyone was worried that students could now use it to cheat. But opportunities to cheat have always existed. So for us, we’re trying to focus on what we can do with this technology now that we couldn’t do before to improve the teaching process for faculty and the learning process for students. Testing it out was eye-opening,” said study author David Wood, a professor at Brigham Young University (BYU), in a media statement.
Although ChatGPT’s performance was impressive, the BYU researchers found that the students did better: test takers averaged 76.7% on the exam, compared with ChatGPT’s 47.7%.
ChatGPT beat the student average on 11.3% of the questions, performing especially well in accounting information systems and auditing. It fared worse on tax, financial, and managerial assessments, possibly because it struggled with the mathematical processes those topics require.
By question type, ChatGPT did better on true/false and multiple-choice questions, but it struggled with short-answer questions.
Nevertheless, the study’s authors are confident that GPT-4 will fare far better on accounting exams by addressing the shortcomings of ChatGPT’s earlier iteration.