Chomsky et al. make some very interesting linguistic and philosophical points about ChatGPT and its variants (see NYT link).
“The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.”
The philosophical and ethical viewpoints expressed in the article are noteworthy, but perhaps even more important is the linguistic one, which ties language to the human thought process; that is what makes the article interesting and unique.
My own take on ChatGPT has been ambivalent: I see tremendous potential in it, but also some obvious faults. A couple of months ago, I tried playing around with it, especially on some obvious questions I had about optical forces, and the answers I got were far from satisfactory. At the time, I assumed the algorithm still had work to do and was probably in the process of learning and improving. The situation has not changed for the better, and I still see some major flaws. Chomsky's article highlights the linguistic aspects, which I had not come across in any other arguments against AI-based answer generators, and there is some more food for thought here.
This is indeed an exciting time for machine learning-based approaches to training an artificial thought process, but the question remains whether that process can ever emulate the capabilities of a human mind.
As humans, a part of us wants to see this achievement, and a part of us does not want it to happen. Can an artificial intelligence system have such a dilemma?