According to a recent report published in JAMIA, researchers at the Indiana University Kelley School of Business conducted an online experiment in which participants viewed a COVID-19 screening session between a hotline agent (either human or chatbot) and a user with COVID-19 symptoms.

Organizations across industries are adopting AI-powered chatbots to help individuals during the COVID-19 pandemic, aiming to improve patient experiences and outcomes.

Patient anxiety is yet another face of the pandemic, one that the healthcare industry is trying to address with artificial intelligence (AI).

The study examined whether chatbots can provide adequate screening information. Participants initially showed a bias against chatbots' ability. However, once they perceived the ability of human and chatbot agents to be the same, participants viewed chatbots as positively as, or even more positively than, human agents.

This finding has significant implications for healthcare organizations struggling to meet user demand for screening services.

“The primary factor driving user response to screening hotlines—human or chatbot—is perceptions of the agent’s ability,” said Alan Dennis, the John T. Chambers Chair of Internet Systems at Kelley and corresponding author of the paper. “When ability is the same, users view chatbots no differently or more positively than human agents,” he added.

The researchers suggested that another reason could be that chatbots make patients feel more comfortable and less anxious about seeking medical care.

“This positive response may be because users feel more comfortable disclosing information to a chatbot, especially socially undesirable information because a chatbot makes no judgment,” researchers wrote.

“The CDC, the World Health Organization, UNICEF and other health organizations caution that the COVID-19 outbreak has provoked social stigma and discriminatory behaviors against people of certain ethnic backgrounds, as well as those perceived to have been in contact with the virus. This is truly an unfortunate situation, and perhaps chatbots can assist those who are hesitant to seek help because of the stigma.”

“Chatbots are scalable, so they can meet an unexpected surge in demand when there is a shortage of qualified human agents,” the authors wrote. “Chatbots can provide round-the-clock service at a low operational cost.”

The team also found that the main factor shaping an individual's perception of an agent's ability was the patient's trust in the hotline provider.

“Proactively informing users of the chatbot’s ability is important,” the authors wrote. “Users need to understand that chatbots use the same up-to-date knowledge base and follow the same set of screening protocols as human agents. Because trust in the provider strongly influences perceptions of ability, building on the organization’s reputation may also prove useful.”

Even before the pandemic, chatbots were regarded as a key technology for accelerating patient-provider interactions and online searches for medical information.