ChatGPT can never replace physicians – Here is why!


Artificial intelligence (AI) has enormous potential to revolutionize and improve health care by improving diagnostics, detecting medical errors, and reducing paperwork burden.

However, few seriously believed that AI could replace physicians until the launch of ChatGPT, which scored 66% and 72% on Basic Life Support and Advanced Cardiovascular Life Support tests, respectively, and performed at or near the passing threshold on the US Medical Licensing Examination.

AI is notoriously bad at context and nuance, which matter greatly in safe and effective patient care, where medical knowledge, concepts, and principles must be applied in real-world settings. Even so, the likelihood of automation for administrative healthcare jobs is relatively high (e.g., 91% for health information technicians), whereas Frey and Osborne estimate the likelihood of physicians' and surgeons' jobs being automated at only 0.42%.

Why? Although evidence suggests that fully autonomous robotic systems may be just around the corner, experts argue that a surgeon's job extends far beyond the surgical procedure itself. A physician must provide fully integrated care, combining treatment with compassion, a clinical skill that computer algorithms have yet to master. As a result, the enormous potential of AI in healthcare lies not in replacing physicians but in increasing their efficacy by redistributing workload and optimizing performance.

There are also ethical concerns about using conversational AI in medical practice. Training a model requires massive amounts of (high-quality) data, and current algorithms are frequently trained on biased data sets. The models are therefore vulnerable to availability, selection, and confirmation biases, which they can amplify rather than correct.

ChatGPT, for example, can produce biased results and perpetuate sexist stereotypes, problems that must be addressed before similar AI can be successfully and safely implemented in clinical practice. Other ethical concerns involve the legal framework: for example, it is unclear who is to blame when an AI physician makes an error.

ChatGPT, a chatbot-scientist

The launch of ChatGPT by the San Francisco-based company OpenAI has prompted many to consider the exciting ways artificial intelligence (AI) may change our lives very soon. The chatbot gained more than 1 million users in its first few days and 100 million in its first two months, making it the fastest-growing consumer application in history.

The hype surrounding ChatGPT is understandable: the model is (still) free, simple to use, and capable of conversing authentically on a wide range of topics in a manner nearly indistinguishable from human communication. ChatGPT has produced essays, scholarly manuscripts, and computer code, summarized scientific literature, and run statistical analyses.

Furthermore, AI may soon be capable of performing more complex tasks, such as designing experiments or conducting peer reviews, and ChatGPT has already performed admirably on some of these tasks.

In a recent experiment, researchers used existing publications to generate 50 research abstracts that could pass a plagiarism checker, an AI output detector, and human reviewers' scrutiny. On the one hand, ChatGPT's remarkable ability to write specialized texts suggests that similar tools may soon be capable of drafting complete research manuscripts, allowing scientists to focus on designing and performing experiments rather than on writing.

Conversational AIs, on the other hand, are merely language models trained to sound convincing; they cannot interpret or understand the content they produce. As a result, ChatGPT-generated manuscripts may be misleading because they can rest on unverified or fabricated sources. Worse, ChatGPT's ability to write text of surprising quality may deceive reviewers and readers, allowing dangerous misinformation to accumulate. Stack Overflow, a popular forum for computer programming discussions, has prohibited ChatGPT-generated text because the average rate of correct answers is too low, and posting answers created by ChatGPT is substantially harmful to the site and to users asking and looking for correct answers.

Blanco Gonzalez concluded that ChatGPT is ineffective for producing reliable scientific texts without significant human intervention. It lacks the knowledge and expertise to convey complex scientific concepts and information accurately and adequately. Furthermore, the chatbot appears to have an alarming tendency to invent references to sound convincing.

The ChatGPT creators have openly admitted that ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers, and that fixing this problem is difficult. If the limitations of conversational AI are not acknowledged, the publishing system may become overburdened with meaningless data and low-quality manuscripts.

Aside from the issue of unreliability, there are several other ethical concerns. There is no legal framework for deciding who owns the rights to an AI-generated work: the author of the manuscript, the developers of the AI, or the authors who contributed the training data. Furthermore, because ChatGPT frequently fails to disclose its sources, who is responsible when the chatbot plagiarizes? Most publishers agree that any use of AI should be acknowledged and that chatbots should not be listed as authors until these ethical difficulties are resolved.


Conversational AIs are here to stay as a powerfully disruptive technology, and we can expect them only to improve with further optimization and training. It makes no sense to prohibit or actively discourage their use when they can significantly improve many aspects of our lives by alleviating the burden of daunting and repetitive tasks. In medicine, AI could significantly improve efficacy by removing some of the suffocating paperwork, and optimized chatbots could significantly speed up and improve literature searches. Nonetheless, we should not be swayed by AI's enormous potential alone. To realize that potential in medicine and science, we should not implement AI hastily but rather advocate for its gradual introduction alongside open discussion of the risks and benefits.