ChatGPT fails to pass the accounting test in another AI vs. human test

AI chatbot ChatGPT may be a game-changer in many areas, but it is still no match for humans when it comes to accounting, researchers say, adding that the AI still has work to do in the field. Microsoft-backed OpenAI recently launched its latest AI chatbot, GPT-4, which uses machine learning to generate natural-language text. GPT-4 passed the bar exam with a score in the 90th percentile, passed 13 of 15 Advanced Placement (AP) exams, and earned a near-perfect score on the GRE verbal test.

“This is not correct; you are not going to use it for everything,” said Jessica Wood, currently a freshman at Brigham Young University (BYU) in the US. “It is foolish to try to learn solely by using ChatGPT.”

Researchers at BYU and 186 other universities wanted to know how OpenAI’s technology would fare on accounting exams. They tested the original version, ChatGPT. “We’re trying to focus on what we can do with this technology now that we couldn’t do before, to improve the teaching process for faculty and the learning process for students. Testing it out was eye-opening,” said lead study author David Wood, a BYU professor of accounting.

Although ChatGPT’s performance was impressive, the students fared better. The students scored an overall average of 76.7 per cent, compared with ChatGPT’s score of 47.4 per cent. On 11.3 per cent of the questions, ChatGPT scored above the student average, performing particularly well on accounting information systems (AIS) and auditing.

But the AI bot performed poorly on tax, financial and managerial assessments, possibly because ChatGPT struggled with the mathematical processes those question types require, according to the study, published in the journal Issues in Accounting Education. When it came to question types, ChatGPT did well on true/false and multiple-choice questions but struggled with short-answer questions.

In general, higher-order questions were difficult for ChatGPT to answer.

“ChatGPT doesn’t always recognize when it is doing math and makes nonsensical errors, such as adding two numbers in a subtraction problem or dividing numbers incorrectly,” the study found. ChatGPT often provides explanations for its answers, even when they are incorrect. Other times, its descriptions are accurate, but it then proceeds to select the wrong multiple-choice answer.

“ChatGPT sometimes fabricates facts. For example, when providing a reference, it generates a real-looking reference that is completely fabricated; the work, and sometimes even the authors, do not exist,” the findings show. That said, the authors fully expect GPT-4 to improve rapidly on the accounting questions posed in their study.

– IANS
