Intellect vs. Intelligence - The War Between Humans and AI

In an age where machines converse like humans and algorithms wield power, the rise of Large Language Models (LLMs) beckons us to ponder the boundaries of artificial intellect.

Many prominent voices in AI have asked, in an open letter, that developers and companies like OpenAI pause the training of AI systems more powerful than GPT-4. We've reached a point where we need to understand how the neural networks that make up Large Language Models (LLMs) actually work. We know it has something to do with probabilities, but the complexity of models with parameter counts reportedly in the trillions goes far beyond human comprehension.

If we know that LLMs are, at their core, simple probability machines, should we be worried about a consciousness forming?

Is the open letter asking for a training stop for LLMs beyond GPT-4 justified?

I will discuss these topics in this article.

GPT-4 and beyond - What does it mean?

Large Language Models are vast and complex probability machines. When a user inputs a phrase, it is split into tokens, and the model predicts a response based on the patterns it learned during training.
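
As a rough sketch of what this looks like in practice, here is tokenization with OpenAI's open-source tiktoken library (the example phrase is mine, and the exact token IDs depend on the tokenizer):

```python
import tiktoken

# cl100k_base is the encoding used by the GPT-4 family of models.
enc = tiktoken.get_encoding("cl100k_base")

tokens = enc.encode("The cat sat on the mat.")
print(tokens)                             # a list of integer token IDs
print([enc.decode([t]) for t in tokens])  # the text piece behind each ID
print(enc.decode(tokens))                 # decoding round-trips the phrase
```

Note that tokens are often sub-word pieces rather than whole words, which is why token counts and word counts differ.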

Before they dazzle us with their linguistic prowess, LLMs undergo two training phases: a pre-training stage, where they ingest massive amounts of text and learn linguistic patterns, followed by fine-tuning, where they are adapted to specific tasks.

LLMs don't read phrases the way humans do. Each word within a phrase is vectorized. That means a word like cat becomes a vector (the length of the vector varies depending on the model) of, for example, 300 numbers.

This is an excerpt of an example vectorization of the word "cat":

[0.007398007903248072, 0.0029612560756504536, -0.010482859797775745, 0.0741681158542633, 0.07646718621253967, -0.0011427050922065973, 0.026497453451156616, 0.010595359839498997, ...]

We can easily picture a 3-dimensional graph with x-, y-, and z-axes. Word vectors have many more dimensions; in the example above, 300. That's difficult for humans to visualize, but handling this many dimensions is trivial for computers.
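
As a minimal sketch of how a computer works with such high-dimensional vectors, the example below compares 300-dimensional word vectors using cosine similarity. The vectors here are random stand-ins; a trained model would supply real embeddings:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in 300-dimensional vectors; real embeddings come from a trained model.
embeddings = {word: rng.normal(size=300) for word in ["cat", "dog", "car"]}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: close to 1 means similar direction.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))
```

In a trained model, words with related meanings (like cat and dog) end up with higher similarity scores than unrelated ones.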

The word vectors are then fed into the trained neural network of the LLM. To "understand" the phrase, or rather to compute the probability of an answer, each layer of the network applies its learned weights to transform the vectors, until the model can produce the response with the highest probability of being a proper answer to the input phrase.
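
Here is a deliberately tiny sketch of that idea: one hidden layer transforms an input vector, and a softmax turns the result into a probability for every token in the vocabulary. The weights are random stand-ins, not a real model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab_size = 300, 50_000

# Toy stand-ins for trained weights; a real LLM has billions of such parameters.
W_hidden = rng.normal(size=(d_model, d_model)) * 0.01
W_output = rng.normal(size=(d_model, vocab_size)) * 0.01

def softmax(z):
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

x = rng.normal(size=d_model)        # the embedding of the current token
h = np.tanh(x @ W_hidden)           # one layer transforms the vector
probs = softmax(h @ W_output)       # a probability for every vocabulary token
print(probs.argmax(), probs.max())  # most likely next token and its probability
```

Real LLMs stack dozens of such layers with attention in between, but the end product is the same: a probability distribution over possible next tokens.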

Imagine each node in the network as a valve that adjusts the flow of water to direct it to the correct output faucet. When water enters the system, Oompa Loompas run around adjusting thousands of valves back and forth to control the flow.

The analogy quickly breaks down when we consider the vast number of parameters and valves required, but it helps us grasp the basics.

We don't know how all these layers interoperate. LLMs have become so large, with so many hidden layers and so many tokens, that we can no longer trace how GPT-4 arrives at the answer to an input prompt.

In the next part, I want to discuss the meaning of intelligence, why Artificial Intelligence should be called Artificial Intellect, and why that could be a significant risk for humans.

Intellect vs. Intelligence

Intellect and Intelligence sound similar, but there are subtle differences that we need to discuss.

Intellect is the ability to think deeply, analyze complex ideas, and engage in abstract reasoning. Intellect means applying existing ideas and thoughts to abstract topics, analyzing, and evaluating them. Intellect is thought functioning independently of emotion. Scientific or philosophical thinking often falls under the umbrella of intellectual thinking.

Intelligence is a broader category: the overall mental capability to understand, learn, reason, and solve problems. Intelligence encompasses emotional and intellectual thinking in equal measure; the two act intensely and harmoniously together.

There are many situations where one is satisfied intellectually with the reasoning behind decisions, but to understand them fully, there must be unity of intellect and emotion.

We need unity of intellect and emotion.

It's easy for engineers and businesses to debate the benefits of Artificial Intelligence when the debate stays on an intellectual level. The intellect will answer questions about the time savings language models create and the profits they will generate.

Intellect will save us time, increase profits, and make processes more efficient. But intellect is limited, as it is the result of our conditioning: the conditioning we've absorbed from books, teachers, parents, and society.

Intellect is understanding why we have borders, monetary systems, or even war. Intelligence means that we understand that we're all together in this world, that we're united and not separate from each other. That borders, states, and cities are artificial constructs. Intelligence means that we're human beings living on our earth and making something out of it.

The Problem with AI - It's Intellect Only

What we're teaching Large Language Models is intellect—the ability to reason based on our conditioning. We can't teach the model the emotional state, that deep feeling of life that can't be captured in intellectual ramifications and discussions.

Beneath the veneer of eloquent responses lies a latent danger: the perpetuation of societal biases, as LLMs unknowingly echo the prejudices buried within their training data.

We're training these models on a purely intellectual level, which is the danger that we're facing. They remain blind to the symphony of emotional intelligence, robbing them of the human capacity to understand beyond facts.

First, we must fully grasp how these models act below the surface. How do all these trillions of circuits calculating probabilities come to conclusions based on the input we provide?

Second, we're not even self-aware enough to use the full capacity of our own brains. We're stuck in fear, justifications, and cruelty, hoping that a new form of artificial intelligence will let us transcend our state by outsourcing our thinking.

Third, large language models do not learn. They store knowledge in their memory and answer our questions based on complex probabilities.

Learning is an active process. It happens in the moment in which we're faced with something new. The moment the situation has passed, it becomes a memory, thus knowledge. We have the significant skill to actively learn in the present, to understand the whole.

When we burden ourselves with the past, we're already conditioned and interpreting the present based on the knowledge that we've acquired. Currently, large language models cannot learn as we do, actively in the present and unburdened from conditioning.

Conclusion

Large Language Models are a new tool available to us. We're still in the process of understanding their capabilities and how to use them in our daily workflows. Many areas of work have already benefited from them, for example:

  • Coding: LLMs help users translate natural language into working code (see the sketch after this list).
  • Marketing: LLMs let us automate content creation and write content tailored to a specific target group.
  • Learning: LLMs let us ask questions, discuss, and access a vast knowledge base. We must be careful here to think critically about the answers and double-check their validity.
  • Proofreading: LLMs are great at proofreading texts and giving writers a quick opinion on their writing style.
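
For the coding use case, here is a minimal sketch using OpenAI's Python client (openai v1.x); the model name and prompt are just examples, and an API key must be set in the OPENAI_API_KEY environment variable:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model you have access to works here
    messages=[
        {"role": "user",
         "content": "Write a Python function that checks whether a string is a palindrome."},
    ],
)
print(response.choices[0].message.content)  # the generated code, as text
```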

There are many more use cases that I haven't listed here. Many non-technical people are just starting to integrate LLMs into their workflows.

I'm excited about LLMs' capabilities and look forward to seeing how they're utilized. At the same time, I'm worried about our ignorance and lack of understanding of how LLMs process information. The amount of compute LLMs use, and the trillions of transistors working together to process information, could lead to something unwanted in the future.

Even with our medical achievements, we're just scratching the surface of understanding how the human brain works. We know how parts of it work and which areas are active when we feel emotions, apply logic, or sense pain. But in its totality, the human brain remains a mystery to us.

Now we're creating models that run on thousands of GPUs and form vast neural networks to process information as we move toward generative AI.

As I said in the beginning, labeling it "intelligence" is not true to the meaning of intelligence, as the emotional aspect is missing.

We need to reevaluate our progress and reflect on the path we've taken. How far should we, and can we, go before we reach an inflection point with LLMs, where we've created something over which we no longer have control?

Even if control isn't lost, there is the question of what information LLMs will share as they are developed further. How will generative AI influence us over the coming years?

There are many open questions, and the need for regulation is justified.

I hope you enjoyed this article. If you have feedback or recommendations, feel free to contact me.
