John Lantos

Will AI replace doctors?



Will AI replace doctors? Well, it depends on what you think doctors ought to be or do. If a doctor is supposed to competently read an EKG, AI can already do that better than most. If a doctor is supposed to build a trusting relationship with a patient, AI still has a way to go. But so do many doctors.


Do machines think or feel, or feel themselves thinking? Alan Turing once answered such questions by declaring them meaningless: the only way to know, he said, would be “to be the machine and to feel oneself thinking.” Short of that, all we can do is apply the Turing test, that is, see whether a machine can communicate in a way that fools a human interlocutor into believing it is human.


Thomas Friedman thinks that, with the current iterations of AI, we are at “a Promethean moment.” Using ChatGPT, then, is the equivalent of stealing fire from the gods or, to unpack the mythic metaphor, of creating technology that gives us superhuman powers. Friedman asserts that, as with all advances in technology, this one is (to continue the ancient metaphors) a double-edged sword, one with the potential to be used for good or for evil. Which it will be, he thinks, depends on how we humans regulate it.

Noam Chomsky, writing with colleagues, has nothing but scorn for those who write “hyperbolic headlines” or make “injudicious investments.” The crux of Chomsky’s argument is that machines do not reason or use language the way humans do. As a result, they are merely souped-up search engines with a more user-friendly interface. To say that they “think,” in Chomsky’s view, would be like saying that Google or Spotify or Amazon “understand” what we are looking for. No, the argument goes; they merely look for similarities between things we have said or requested or liked and other things we have not yet explored. They guide us according to their algorithms and their deep-learning neural networks. That is not human thinking. But the argument misses the point. As Turing observed more than 70 years ago, “We do not wish to penalize the machine for its inability to shine in beauty competitions, nor penalize a man for losing a race against an airplane.”

Writing about the specific uses of AI in medicine, DeCamp and Tilburt warn us not to trust the algorithms. Trust, they suggest, can only exist between humans. But they don’t dig deep enough. I “trust” an airplane’s computer systems to land the plane safely. Sure, I’m reassured that a pilot is present to make decisions if things go wrong. But if things did go wrong, what are the chances that the pilot would make a better decision than the algorithm? That is an empirical question. In many areas of medicine, algorithms perform better than average physicians. In some cases, they perform better than the best physicians.


Meghan O’Gieblyn seems to agree with Chomsky when she writes that, with AI, “we are abdicating our duty to create meaning from our empirical observations – to define for ourselves what constitutes justice, morality, and quality of life. Meaning is an implicit human category.” She worries, along with Yuval Noah Harari, that we are in fact abdicating that duty. Instead of liberal humanism, which posits that we have a moral duty to understand ourselves and then make ethical decisions based on that self-knowledge, we now have what Harari calls “dataism.” Instead of listening to our feelings, we listen to the algorithms, since they know us better than we know ourselves.


What do all these views have in common? One answer is suggested by the reply that ChatGPT itself gave when Chomsky and colleagues asked whether the machine’s amorality was itself immoral: “It is the responsibility of the humans who design, train and use AI to ensure that it is aligned with their moral and ethical principles.”
