In this blog post I'll share some of my thoughts on the development and challenges of AI, using ChatGPT as an illustration. By examining technological progress in recent years and its effects on our daily lives and professions, I explore whether ChatGPT can be considered intelligent and what that really means. I also address the apprehension people feel towards new ideas and transformation, as well as the questions of ethics and accountability that arise.
Advances in technology – “it’s happening without us!”
Back at the turn of the millennium I was involved in the study of neural networks, and since then I have watched tremendous progress unfold. Even though much of this progress was somewhat predictable, I still take pleasure in observing how technology develops today.
At the same time, I feel somewhat disconnected from these events. It reminds me of how Bill Gates reportedly shouted "it's happening without us!" to Paul Allen upon hearing news of the release of the first consumer personal computer. A similar feeling arose in me when I first learned about the development of bitcoin and blockchain technology: as early as 2008/2009 I was reading Satoshi Nakamoto's work, where this topic was being discussed.
I also have some experience in using the computing power of thousands of computers as one virtual machine, similar in concept to SETI@home. I presented this idea at Bank Zachodni WBK (BZWBK) in 2004/2005, but at the time I couldn't come up with a practical and rational way to implement the technology, so I only proposed connecting to the Folding@home platform.
However, I was pleasantly surprised to have the opportunity to present this idea to Mateusz Morawiecki, who was a member of the Bank’s Board of Directors at the time and is now the Prime Minister of Poland. While I’m glad that I had that chance, looking back at the technological progress made since then, I realize that my idea was only a drop in the ocean of possibilities.
So when I think about ChatGPT, somewhere inside I feel this space of possibilities opening up – along with questions that need to be raised and answered.
Misunderstanding of definitions and concepts
I hold a conviction, more or less, that most human problems come from not understanding what we are talking about. Language – one of the cornerstones of human civilization, which made it possible to build such amazing networks of cooperation as the internet – is at the same time a huge constraint standing in the way of even more dynamic development. Language by its very nature (understood as a product of biological evolution) is tailored to human capabilities. Each human being carries an individual, unrepeatable and unique arrangement of neurons in the brain, derived partly from inherited genes and (probably in greater part) from the experiences gained along his or her individual life path. This makes each person's brain function uniquely, which in turn means that each definition, concept or word is understood a little differently by each person. It is not that the understanding of, say, the word "table" is fundamentally different between two people, but even with this example the differences can be significant.
An illustration of this is the concept of a class in object-oriented programming languages. It is a "virtualized" concept describing a certain "type" of object, consisting of specific parameters and properties. Because of the way it is written down, it is a very precise definition: it is easy to tell whether a given object (e.g. a table) belongs to a given class (e.g. "wooden table"). In human language there are no such hard definitions. There is a "class", e.g. "table", but it has no explicitly and unambiguously assigned characteristics and properties. They are blurry and imprecise.
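To make this concrete, here is what such a "hard" definition can look like in code. The class below is a sketch of my own for illustration – the point is that every attribute is explicit, so membership in the class can be checked mechanically:

```python
from dataclasses import dataclass

# An invented, illustrative "hard" definition: every property is explicit,
# so we can decide mechanically whether an object fits the class.
@dataclass
class WoodenTable:
    material: str      # e.g. "oak"
    seats: int         # e.g. 4
    finish: str        # e.g. "oil"
    sturdy: bool

my_table = WoodenTable(material="oak", seats=4, finish="oil", sturdy=True)

# Membership is a precise check – something human language never gives us.
def is_wooden_table(obj) -> bool:
    return isinstance(obj, WoodenTable)

print(is_wooden_table(my_table))  # True
```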
I propose an experiment: ask a carpenter, a teacher, a child and a doctor to describe a table. How do they understand this word? I am sure you will be surprised. What does that mean in practice? Imagine that you want to buy a table (described in your mind as "a four-seater, sturdy piece of furniture made of oak wood, protected with oil") and you pass this request to the doctor. He may well buy a table on wheels, made of composite, easy to clean and mobile. Why? Because a different context arises for each of us. There is no single "table" (no class "table" with a clear definition); in other words, "table" means something different to each of us.
How does this relate to generative AI? I see two points of intersection. The first is at the very level of expectations. I often read arguments about whether, for example, ChatGPT is intelligent. The problem for me is that each side understands the word "intelligence" completely differently. Without a common definition, it is difficult to have a rational discussion. So how can we define intelligence?
Intelligence – the way a person thinks, draws conclusions and generally acts.
Intelligence – the sum of the ways in which an organism (or rather its "genes") is able to respond to its environment in order to survive (the evolutionary approach).
Intelligence – the ability to find the best or optimal solution to a given situation.
I believe that humanists most often use the first definition intuitively. Biologists will choose the second approach, and technical people will potentially choose the last one.
So how should we answer the question: "is ChatGPT intelligent?"
ChatGPT is a tool that uses vast resources from the internet (selected by its creators) to produce the best-matching answers. The algorithm is called generative because it generates the most likely answer – or, more precisely, the most likely continuation of the question or sentence written by the user – based on statistics.
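To make "the best continuation based on statistics" concrete, here is a deliberately toy sketch in Python. The tiny probability table and the example sentences are my own inventions for illustration; a model like ChatGPT learns a neural network over an enormous vocabulary rather than using a lookup table, but the final sampling step is conceptually similar:

```python
import random

# Toy "language model": for a given context, a probability distribution
# over possible next words. (Invented numbers, purely illustrative.)
next_word_probs = {
    "the cat sat on": {"the": 0.7, "a": 0.2, "my": 0.1},
    "cat sat on the": {"mat": 0.6, "sofa": 0.3, "roof": 0.1},
}

def next_word(context: str) -> str:
    """Sample the continuation from the statistical distribution."""
    dist = next_word_probs.get(context)
    if dist is None:
        return "<unknown context>"
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

print(next_word("the cat sat on"))  # most often prints "the"
```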
Is it human-like intelligence?
Well, this is certainly not how "consciousness" works in humans. Is it how intuition works? It is probably still too early to answer that question, but I assume not. ChatGPT at this stage certainly does not understand either the questions or the answers; it merely answers in a statistically plausible way. ChatGPT has no "consciousness", which is often confused with intelligence – there is no space here to develop that thread further. In this context I understand the excited cries of "and I told you so!" whenever ChatGPT gives a nonsensical answer to a specific question. They are understandable, but they prove nothing.
So is ChatGPT intelligent in the evolutionary sense?
A difficult question. It certainly learns from successive iterations with users; the goal might be the correctness of its answers (but how do we measure that?) or the happiness of its users (the same question applies). Either way, both are quite far from intelligence in the evolutionary, biological sense.
Finally, let us refer to the last proposed definition – intelligence as an algorithm that carries out its purpose in a non-trivial way, one that cannot be captured by simple coding in the form of, say, a decision tree, and that can moreover alter itself in response to changes in the environment.
Here I believe that ChatGPT and other such tools have crossed the barrier we can call "intelligence". Furthermore, I think ChatGPT might be a cornerstone of so-called AGI (Artificial General Intelligence). This may be even more true of GPT-4, which is currently being released. I see it this way: in future AGI history books, ChatGPT will be the first big milestone.
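To illustrate the distinction in this last definition, here is a minimal sketch in Python. It contrasts a trivially codable rule with a policy that alters its own behaviour as the environment changes (a simple epsilon-greedy scheme; the actions, rewards and numbers are all invented for illustration):

```python
import random

# A hard-coded rule – trivially codable, so not "intelligent" under the
# third definition: it can never adapt if the world changes.
def fixed_rule(temperature: float) -> str:
    return "heater_on" if temperature < 20.0 else "heater_off"

# A minimal self-adjusting policy (epsilon-greedy). It keeps estimates of
# how well each action works and updates them from experience, so its
# behaviour shifts when the environment shifts.
class AdaptivePolicy:
    def __init__(self, actions, epsilon=0.1):
        self.epsilon = epsilon
        self.estimates = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}

    def choose(self) -> str:
        if random.random() < self.epsilon:                   # explore
            return random.choice(list(self.estimates))
        return max(self.estimates, key=self.estimates.get)   # exploit

    def learn(self, action: str, reward: float) -> None:
        self.counts[action] += 1
        n = self.counts[action]
        # incremental average of the rewards observed for this action
        self.estimates[action] += (reward - self.estimates[action]) / n

policy = AdaptivePolicy(["heater_on", "heater_off"])
for step in range(1000):
    action = policy.choose()
    # pretend the environment rewards "heater_on" early on, then flips
    reward = 1.0 if (action == "heater_on") == (step < 500) else 0.0
    policy.learn(action, reward)
```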
Fears
People usually don't like change and don't want to step out of their comfort zone. ChatGPT is the culmination (for the time being) of years of work that took place somewhat offstage. I don't mean in secret – just that outside the relatively small world of IT (and of AI in particular) it was never a widely discussed topic.
Of course AI was already in use, but mainly in narrow, niche solutions such as image recognition, translation or search. Until now there was no tool that communicated with a human in such a natural way.
The quality that ChatGPT revealed at the end of 2022 came as a big surprise, and the algorithm quickly became a media star.
For the last ten years (maybe more), the media have directly and indirectly built up the image of AI as a force destined to cause many negative effects in society, such as the disappearance of many professions. This was probably best described by Yuval Harari as the fear of being unnecessary.
There has been a belief that physical professions would be the ones most affected, and that human creativity and the professions that rely on it would remain safe.
ChatGPT has suggested, in a very tangible way, that this may not be the case. People in professions such as writers, reporters, journalists, marketers, translators, programmers and many, many others suddenly felt it breathing down their necks.
Today I most often still see the reaction "it won't replace us, it makes mistakes, it needs to be corrected", but as the tools develop (remember, ChatGPT only went public in November 2022!) this will change.
Concerns will grow with the practical applications of generative algorithms. We are already facing another redefinition of several professions. Some jobs considered creative may at their core be supported, or even replaced, by the compilation of knowledge from several sources. The journalist will focus on the message and the placement of emphasis – on what he wants to convey – while an algorithm supplements the content with examples, facts or narrative. A marketer will focus on the target, the product and the way the message is conveyed, rather than creating the whole communication. The programmer will focus on the high-level algorithm, the required integrations, the interaction with users and the data sources, rather than on the low-level coding of the solution.
There will also be new professions. The first that comes to my mind is the "Query Developer" ("Prompt Developer") for AI systems. There is a (somewhat ugly) saying: shit in, shit out. The quality of the query we direct at an AI has a critical impact on the quality of its response. Sometimes a simple conversation with the AI is enough, but sometimes the query needs to be well thought out, repeatedly tested and crafted by a professional. The importance of correct input data will only increase.
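As a hedged illustration of what such a "Prompt Developer" would actually engineer, compare the two invented prompts below. Neither is tied to any particular AI product; the point is that the second one pins down role, audience, format and constraints:

```python
# Two prompts asking for the same thing; the wording is invented for
# illustration. The second constrains role, audience, format and length,
# which typically yields far more usable output from a generative model.

naive_prompt = "Write something about tables."

engineered_prompt = """You are a furniture copywriter.
Audience: customers of an online oak-furniture store.
Task: write a product description of a four-seater oak dining table,
finished with oil.
Constraints:
- at most 80 words
- mention sturdiness and ease of maintenance
- end with a one-sentence call to action
"""
```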
Another new profession could be the "AI teacher", whose role would be to effectively teach algorithms specific information in a way that enables them to achieve their goals. This process occurs prior to query creation and involves determining the method, format, order, speed, length and many other factors that will ultimately shape the final result. The "AI teacher" is not necessarily a computer scientist, but rather an expert at the intersection of a specific industry (or the humanities) and computer science. This is a truly innovative role!
Responsibility and ethics
In the context of AI, the topic of responsibility and ethics is still emerging. The topic is not new, but we are still at the beginning of the road. Take, for instance, tools such as Google Maps, which were originally created as maps to help us find a route to our destination. Today they largely control traffic beyond our awareness: they show us the best route (and we trust them to do so), and they may send us on a detour when they determine that the primary route is jammed. Thus we hand them significant decision-making power – for example, over whether we arrive at an important meeting on time.
AI tools already support us in various areas, such as medicine, where image recognition is standard in applications such as cancer diagnosis. If you say that you don't trust AI and prefer your diagnosis to be made by a doctor, you may be taking a significant risk: a single doctor, despite the best intentions, may not have the knowledge accumulated in an AI algorithm. And even where such an algorithm merely supports the doctor, if it suggests cancer, the doctor may not have enough confidence to overrule it and send the patient home.
What about accountability? If the AI gets it wrong, as in the case above, and diagnoses cancer where none exists, who is responsible? In extreme cases such mistakes can be a matter of life and death. The same issue is prevalent in the world of automobiles and autonomous cars.
Additionally, ethics is strongly anthropocentric by nature, which means that it does not dictate “what is good”, but rather “what is good for human beings”. To some extent, this discredits ethics, especially in situations where a decision must be made between the well-being of humans and that of animals or the Earth.
The problem with the ethics of algorithms may originate from this fact. Algorithms are inherently neutral: they do not view ethics through our eyes, nor do they single out humans. This "human" ethics has yet to be taught to them. In other words, they will not learn it by themselves, since they have no reason to distinguish humans.
This topic requires an in-depth discussion about a new ethics that addresses not only the needs of humans but also those of other sentient beings, as well as non-sentient and artificial ones.
Summary
As you can see, generative AI algorithms are an area with many questions and open topics beyond the technology itself. Here I have only scratched the surface. If you find the topic interesting, I will be happy to pursue it further in future posts.