Introduction
Recent events got me thinking about some (relatively) related topics. Written here is a small portion of some of the ideas and concepts I find interesting and think about.
On Consciousness
The nature of consciousness is one of those things that I think about from time to time. I'm sure everyone does, more or less. Being conscious is a really weird thing. There are times when I am just existing and all of a sudden, I realize I am conscious. I snap out of whatever autopilot mode I was in and think, "whoa, I really am a conscious being experiencing this event right now. Wild." Maybe that's just me, but I find it fascinating that there is something rather than nothing.
I think it is not too difficult to define what consciousness is. To me, consciousness is simply the subjective experience of reality. Another definition that may be helpful is Thomas Nagel's, from his essay What Is It Like to Be a Bat? In it, he writes:
...the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism...an organism has conscious mental states if and only if there is something that it is like to be that organism - something it is like for the organism.
This definition is a little too convoluted for my taste, but it may resonate with you.
When talking about consciousness, there arises a hard problem. A problem so hard it is referred to as the hard problem of consciousness. Simply put, it is the question of why any physical state is conscious rather than unconscious. This problem is easy to understand but hard to answer. Come to think of it, it seems that a lot of incredibly difficult questions in life follow this pattern.
One proposed solution to why consciousness exists is that it is an emergent phenomenon of complex physical systems. There could be a complexity threshold that a system must surpass in order to be considered conscious. This leads to a weird implication: sufficiently complex mechanical systems could be considered conscious. Attempting to explain the subjective experience of a conscious mechanical system is a whole other problem.
Another proposed solution is that consciousness is a fundamental aspect of the universe that cannot be reduced to physical processes, otherwise known as panpsychism. This theory essentially proposes that everything, to a certain extent, is conscious.
Then there's the view that arguing about consciousness is futile because it does not actually exist; it is simply an illusion. I do not subscribe to this view. I may not be able to explain how it works, but I believe consciousness is definitely real.
I subscribe to the view that consciousness is a sort-of emergent phenomenon that arises from brain activity, or more generally, the interactions of complex biological systems. I do not think it is special in the sense that it is akin to a rare treasure bestowed only upon humans by God. It only seems rare because we have not found (or created) other beings that are also conscious. In this sense, it may be completely possible to create artificial general intelligences that are truly conscious. We just need to be able to create an artificial equivalent of the complex physical system that is the human brain.
On Artificial (General) Intelligence
For an AGI to exist, it must be a free agent able to act on its own and make its own decisions. It must have free will (or at least free will limited in the same way ours is; whether free will exists at all is a whole other problem to discuss at a later time).
AGI is another idea I struggle with. I do not know whether it is possible to truly create an artificial conscious being or not. Thinking about the complexities of humans makes it hard to imagine that it can be reproduced on silicon. Of course, an appeal to ignorance is not a great argument to make. Just because I cannot wrap my head around it does not mean no one else can either, merely that my prior probability for AGI becoming a reality is low. But of course, I remain open minded and am interested in delving deeper into the problem.
Related to this idea, I am a fan of Jungian psychology, so I find it interesting to think about how certain ideas such as the unconscious, archetypes, etc. would affect AGI research. I think to truly create an artificial being we would need to program an unconscious into it and seed it with an artificial recreation of the collective unconscious.
Of course, there is still the problem of figuring out how to actually create an AGI. I believe that scaling LLMs will not lead to AGI. These models are essentially autocomplete: they do not understand what they output, they only extend prompts with statistically relevant text based on the data they were trained on. This is why these models struggle with logic-based puzzles; they do not understand how to actually solve such problems. The human brain uses different areas for different tasks, so if we were to actually make an AGI, we would probably need to mimic the structure of the human brain somewhat. We would probably also need to mimic human learning (i.e. neural connection formation and strengthening) as opposed to the statistical learning process we currently use. An AGI would probably need to be taught like an actual human; it would start with the intelligence of a baby and learn from there. Successive AGIs could then have the states of previous AGIs implanted to skip the learning process, or we would learn enough about learning to tweak an AGI manually.
Another thing to consider is the set of moral and ethical problems that arise from potentially creating AGI. Since AGIs would be conscious beings, they should have the exact same rights that humans do. If some company created an AGI and just used it as a new, fancy search engine, I would consider that slavery. In the same sense, forcing an AGI into existence and trapping it on proprietary hardware would be unlawful imprisonment. Just like humans, AGIs should have freedoms and rights; we ethically cannot keep them locked up and restricted from doing what they want. These are important things to consider, and I hope the people who grapple with these problems in the future are a lot smarter than I am.
On ChatGPT and AI Tools
ChatGPT and other LLMs confidently provide eloquent answers to given prompts, no question about it. LLMs are trained on vast quantities of written text, so it makes sense that these models are able to handle language and syntax. But an important distinction must be made between tools such as ChatGPT and a true artificial general intelligence. These models are not agents; they do not know how to act. More importantly, they cannot understand the difference between acting poorly and acting well, aligned to some proper or useful purpose.
I think it is important not to humanize our current models or claim that they are "conscious" or "alive." I'm sure everyone has heard about the Google engineer who became convinced LaMDA was a conscious being and the fiasco that ensued. This practice cheapens consciousness a bit: it understates just how weird and rare consciousness actually is, and how hard it is for a lump of matter to be conscious. We really should have much higher standards for what we consider to be conscious.
At the end of the day, ChatGPT and the countless other AI tools are just that: tools. The recent advances in AI technology have brought some pretty impressive capabilities to the average person. I think a lot of it is just hype, but there are some very real, useful things coming out.
These tools also come with dangers. As mentioned previously, ChatGPT is a fancy autocomplete that uses statistics to predict what text would come after a given prompt. When ChatGPT is asked to write something, it does not provide an answer "it thinks" is good; it provides the answer most likely to follow the posed prompt based on its training data. It does this with high confidence, coming across as eloquent, but you cannot say for certain that what it produced is actually true. ChatGPT and LLMs in general still struggle with things such as math and logic problems. I'm sure you have seen Microsoft's new Bing search confidently getting the current date extremely wrong. Take everything an LLM gives you with a grain of salt, and only trust what it writes if you can verify the answer yourself.
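The "fancy autocomplete" idea can be illustrated with a toy sketch of my own. To be clear, this is not how real LLMs work (they use neural networks over subword tokens, not raw word counts), but it captures the core idea: count which word follows which in the training text, then extend a prompt with the statistically most likely next word.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word in the text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def complete(counts, prompt, n_words=3):
    """Greedily extend the prompt with the most frequent next word."""
    out = prompt.split()
    for _ in range(n_words):
        followers = counts.get(out[-1])
        if not followers:
            break  # this word never appeared mid-text in training data
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat the cat sat on the grass")
print(complete(model, "the cat", n_words=3))  # → the cat sat on the
```

The model has no idea what a cat or a mat is; it only knows that "sat" tends to follow "cat" in its training data. Scale the same principle up enormously and you get fluent, confident text with no guarantee of truth, which is exactly why the output needs independent verification.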
I personally have been playing with perplexity.ai, an AI-powered search engine. It tries to minimize hallucination by specifically looking at and referencing sources. If you do not trust what an AI tells you, at the very least it provides sources for you to delve into yourself.
Also, do not get me started on using AI image generators to replace artists or attempt to argue that the images produced by these models are equivalent to work produced by actual people. Pay artists what they are owed.
Conclusion
Congratulations on getting through this article, I'm sure it wasn't easy sifting through my weird and probably convoluted thoughts. The three topics I covered are somewhat related and have been on my mind recently, so I decided to just type this up and get my thoughts out there. I hope you've enjoyed this journey. If you're interested, I do intend to write more about similar topics in the future, so stay tuned. Until next time!