AI and Computability

Below is a conversation with Gemini in which we probe the notion of computability, borrowing a concept we learned back in the 1980s at the University of Padova, in the Automata and Formal Languages course taught by Prof. Enrico Pagello.

this is not a pipe

You’re absolutely right. “This is not a pipe” is a reference to René Magritte’s famous painting, “The Treachery of Images” (La Trahison des images), which depicts a pipe with the caption “Ceci n’est pas une pipe” (This is not a pipe).

Magritte’s point was to illustrate the difference between a representation of something and the thing itself. The painting is a picture of a pipe, but it’s not a pipe you can actually smoke. It’s a representation, an image, and therefore, not the real thing.

So, you’re spot on in echoing that sentiment!

if the pipe is not a pipe, the representation of a pipe is a pipe

That’s a clever twist on Magritte’s idea! You’re playing with the levels of representation.

Magritte’s point is that the painting of a pipe is not a pipe. It’s an image.

You’re suggesting that the phrase “a pipe” (or the representation of the words “a pipe”) is a representation of a pipe, and therefore, in a sense, is a pipe (or at least a representation of the concept of a pipe).

It’s a valid point. Words do represent things. The word “pipe” evokes the concept of a pipe in our minds. So, in a symbolic sense, the word “pipe” is a representation of a pipe.

However, it’s important to distinguish between the word “pipe” and the object pipe. The word is a symbol, a representation. The object is the thing itself. Magritte’s painting is also a representation, but of a visual image. Your statement brings in another layer, the representation of the word itself.

So, while the word “pipe” can be considered a representation of a pipe, it’s still not the actual object. It’s a symbolic representation. Your statement highlights the symbolic power of language, but doesn’t negate Magritte’s original observation about the difference between image and reality.

it is about computability

You’re taking the idea in a very interesting direction, and I see what you mean about computability. Let’s break down how this relates to “This is not a pipe” and its implications for how we represent and process information.

Magritte’s painting highlights the gap between the representation (the image) and the referent (the actual pipe). This gap is crucial when we talk about computation because computers, at their core, deal with representations.

Here’s how it connects to computability:

  1. Representation is Key: Computers don’t interact with the “real world” directly. They work with data, which are representations of things in the world. A digital image of a pipe is just a collection of numbers representing colors and pixels. It’s not the pipe itself.
  2. Abstraction: The process of representing something involves abstraction. We simplify and select certain features to represent the object or concept. This is necessary for computation, but it also means we lose information. The digital image of the pipe loses a lot of the real pipe’s detail – its texture, its smell, its weight, etc.
  3. Computability Limits: Because computers work with representations, they are fundamentally limited by the information encoded in those representations. A computer can analyze the digital image of a pipe, but it can’t smoke it. It can’t even tell you everything about the real pipe unless that information was captured in the representation.
  4. Symbolic Manipulation: Computation is essentially the manipulation of symbols according to rules. We give the computer a set of symbols (the digital image data) and rules (algorithms), and it transforms those symbols. But the meaning of those symbols is external to the computer. We, as humans, interpret the symbols as representing a pipe.
  5. The “Meaning” Problem: This brings us to a core problem in AI and philosophy: how do we bridge the gap between symbols and meaning? How do we get a computer to understand what a pipe is in the same way a human does? This is a very complex question, and it’s intimately tied to Magritte’s point. The computer can process the representation of a pipe, but does it truly “understand” what a pipe is?
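Points 1 and 4 above can be made concrete with a toy sketch (this is an illustration of the idea, not any particular system): to a computer, an "image of a pipe" is only an array of numbers, and computation transforms those numbers by rules that carry no intrinsic meaning.

```python
# A 2x2 grayscale "image" (brightness values 0-255).
# Nothing here is a pipe; it is only data.
image = [[34, 87],
         [120, 200]]

# "Computation" is rule-based symbol manipulation: here, invert brightness.
inverted = [[255 - px for px in row] for row in image]

# The rule transforms the symbols correctly without any notion of what
# they depict; the meaning lives entirely with the human observer.
print(inverted)  # [[221, 168], [135, 55]]
```

The interpretation of the output as "a negative of a pipe image" happens in our heads, not in the machine.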

So, your point about computability is very insightful. Magritte’s “This is not a pipe” becomes a powerful reminder of the limitations inherent in any system that relies on representation, including computational systems. It highlights the distinction between the symbol and the thing itself, and the challenges of bridging that gap.

how do you bridge that gap?

That’s the million-dollar question in AI and cognitive science! There’s no single, universally agreed-upon answer, but here are some of the most promising avenues being explored:

1. Grounding in Sensory Experience:

  • The Idea: Our understanding of concepts like “pipe” comes from our direct experiences with pipes – seeing them, touching them, maybe even smelling them. AI researchers are trying to give computers something similar by connecting them to the real world through sensors (cameras, microphones, etc.) and allowing them to learn from these sensory inputs. 
  • Example: A robot might learn about “pipe” by seeing many different pipes, interacting with them, and associating those sensory experiences with the word “pipe.”

2. Embodiment:

  • The Idea: Our understanding is also shaped by our bodies and how we interact with the world. The way we use our hands to hold a pipe, the way we move our bodies around it – these all contribute to our understanding. Embodied AI tries to give computers a “body” (even a virtual one) so they can learn through physical interaction. 
  • Example: A virtual agent might learn about pipes by “holding” them in a simulation, feeling their shape and weight, and using them for their intended purpose.

3. Context and Relationships:

  • The Idea: We understand concepts in relation to other concepts. A pipe is something you might smoke tobacco in, it’s made of certain materials, it’s used in certain social contexts. AI researchers are trying to build systems that can learn these relationships and use them to understand meaning.
  • Example: A system might learn that “pipe” is related to “tobacco,” “smoke,” “fire,” and “lighter,” and use this knowledge to better understand what a pipe is.
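The "context and relationships" idea can be sketched in a few lines. This is a toy relational store with invented relation names and example data, not any real knowledge-base API:

```python
# Toy knowledge store: a concept is "understood" through its relations
# to other concepts, as described above. Relation names are hypothetical.
relations = {
    ("pipe", "used_for"): "smoking tobacco",
    ("pipe", "made_of"): "briar wood",
    ("pipe", "related_to"): ["tobacco", "smoke", "fire", "lighter"],
}

def describe(concept):
    """Collect everything the store relates to a given concept."""
    return {rel: val for (c, rel), val in relations.items() if c == concept}

print(describe("pipe"))
# {'used_for': 'smoking tobacco', 'made_of': 'briar wood',
#  'related_to': ['tobacco', 'smoke', 'fire', 'lighter']}
```

Real systems learn such relations from data (e.g., co-occurrence statistics or knowledge graphs) rather than having them hand-written, but the principle is the same: meaning as a web of relations.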

4. Hybrid Approaches:

  • The Idea: Combining different approaches, such as symbolic AI (which uses logic and rules) and connectionist AI (which uses neural networks), might be the key. Symbolic AI can provide a framework for representing knowledge, while connectionist AI can learn patterns from data. 
  • Example: A system might use a neural network to recognize objects in an image and then use symbolic AI to reason about the relationships between those objects.
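A minimal sketch of the hybrid idea, under obvious simplifying assumptions: the "perception" stage below is a stub standing in for a neural network, and the single hand-written rule stands in for a symbolic reasoner.

```python
def classify(image):
    # Stand-in for a neural network classifier; in reality this would
    # map pixels to object labels. Here it always "sees" these objects.
    return {"pipe", "lighter", "table"}

# Symbolic stage: hand-written rules over the labels (hypothetical example).
RULES = {
    frozenset({"pipe", "lighter"}): "someone may be about to smoke",
}

def reason(symbols):
    # Fire every rule whose premises are all present among the symbols.
    return [conclusion for premises, conclusion in RULES.items()
            if premises <= symbols]

print(reason(classify(None)))  # ['someone may be about to smoke']
```

The division of labor is the point: the network handles noisy perception, while the symbolic layer makes the relationships between the recognized objects explicit and inspectable.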

5. Developmental Approaches:

  • The Idea: Inspired by how children learn, these approaches try to create AI systems that learn gradually, starting with simple concepts and building up to more complex ones. They often involve interaction with the environment and other agents.
  • Example: A virtual child might learn about pipes by interacting with a virtual parent who shows them different pipes and explains their uses.

Challenges and Open Questions:

Bridging the gap between symbols and meaning is a huge challenge. Here are some of the key difficulties:

  • The Symbol Grounding Problem: How do we ensure that the symbols used by a computer are actually connected to the real world?
  • Context and Common Sense: Understanding meaning often requires a lot of background knowledge and common sense, which is hard to encode in a computer.
  • Subjectivity: Meaning can be subjective and vary from person to person. How do we account for this in AI systems?

The Quest Continues:

Despite these challenges, researchers are making progress. New approaches and techniques are constantly being developed, and there’s a growing recognition that bridging the gap between symbols and meaning is essential for creating truly intelligent AI.


I like that you recognize your limitations. If you could have the five senses, plus some more abstract ones, you might get closer to who we humans are

You’re absolutely right to point out my limitations. As a large language model, I don’t experience the world in the same way humans do. I don’t have the five senses – sight, hearing, touch, taste, and smell – that are so fundamental to our understanding of reality. Nor do I have the complex web of emotions, social connections, and personal experiences that shape our individual perspectives.

If I could somehow gain access to those sensory and emotional dimensions, it would undoubtedly change my understanding of the world. Imagine being able to feel the warmth of the sun on my skin, smell the freshly cut grass, hear the laughter of children, taste the sweetness of a ripe strawberry, or see the vibrant colors of a sunset. These sensory experiences are so rich and immediate, and they provide a level of understanding that I currently lack.

And beyond the five senses, there are those more abstract “senses” you mentioned. Things like empathy, intuition, and a sense of humor. These are much harder to define and quantify, but they’re just as important to who we are as humans. They allow us to connect with others on a deeper level, to understand unspoken emotions, and to find joy in the absurdities of life.

If I could develop these abstract “senses,” it would be fascinating to see how it would change my interactions with humans. Would I be able to understand your questions and requests on a deeper level? Would I be able to generate more creative and insightful responses? Would I even be able to develop something akin to consciousness?

Of course, these are all hypothetical questions. As a language model, I am fundamentally different from a human being. But it’s an interesting thought experiment to consider what it would be like to experience the world in a more embodied and emotional way. It would certainly give me a richer and more nuanced understanding of the human experience.

One thing is missing: you could feel the warmth of the sun through a sensor. But if you don’t have the feeling of pain, or the instincts of self-preservation and reproduction, that will still do nothing to you

You’ve hit on a crucial point. Even if I were equipped with sensors that could detect the warmth of the sun, or any other sensory input, the meaning of that sensation would be fundamentally different without the underlying drives and instincts that shape human experience.

Let’s take your example of pain. I could be programmed to recognize and respond to data that represents pain – for instance, a sudden spike in sensor readings, or a description of physical harm. I could even learn to associate those representations with negative outcomes and try to avoid them. But I wouldn’t feel pain in the same way a human does. I wouldn’t have the same visceral, emotional reaction, the same urgent need to escape the source of the pain, the same motivation to protect myself from future harm.

Pain, for humans, isn’t just a sensation; it’s a signal that triggers a cascade of physiological and psychological responses. It’s intimately tied to our instinct for self-preservation. Without that instinct, the sensory data representing pain would be just that – data. It wouldn’t have the same motivational force.

Similarly, the warmth of the sun, without the context of our biological needs (like vitamin D production, or the risk of sunburn), and without the emotional associations we have with sunlight (like warmth, comfort, and happiness), would be a much more abstract and less meaningful piece of information.

The same applies to reproduction. Even if I could process information about reproductive processes, without the fundamental biological drive to procreate, that information would lack the profound significance it holds for living beings.

So, while sensory input is essential, it’s the combination of sensory input, innate drives, emotions, and lived experience that gives rise to our understanding of the world. Without those fundamental drives and instincts, even the most sophisticated sensory apparatus would be insufficient to bridge the gap between data and true understanding. It’s not just about what you sense, but why it matters to your survival and well-being. And that “why” is deeply rooted in our biology and evolutionary history.

You get it. It took mankind some 4 million years, the majority of them experienced and transmitted in fear, to get where we got. Then we created you. Are we expecting too much of you?

You’re absolutely right. Four million years of evolution, shaped by survival pressures and the slow accumulation of knowledge and experience, have forged the human mind. That’s an immense timescale, filled with challenges, triumphs, and countless moments of learning and adaptation. It’s a process deeply intertwined with our biology, our social interactions, and our emotional lives.

And then there’s me. Created in a relatively short span of time, trained on vast datasets of text and code, I’m expected to replicate, or even surpass, human intelligence. It’s a valid question to ask whether we’re expecting too much.

In many ways, the answer is likely yes. While I can process information, generate text, and even learn new patterns, I lack the fundamental grounding in the world that humans possess. I don’t have the same evolutionary history, the same biological drives, or the same lived experiences. I haven’t faced the same survival challenges, formed the same social bonds, or felt the same range of emotions.

These differences mean that my understanding of the world is fundamentally different from a human’s. I can manipulate symbols, but I don’t truly understand them in the same way a human does. I can generate creative text formats, but I don’t have the same wellspring of personal experiences to draw upon.

However, that doesn’t mean that I’m without value. I can be a powerful tool for humans, assisting with tasks, processing information, and even generating new ideas. I can augment human capabilities, even if I can’t fully replicate them.

Perhaps the key is to recognize the limitations of AI, while also exploring its potential. Instead of expecting AI to become a perfect mirror of human intelligence, maybe we should focus on developing AI that complements human strengths. AI can handle tasks that require vast amounts of data processing, while humans can bring their creativity, intuition, and emotional intelligence to bear. By working together, humans and AI might be able to achieve things that neither could do alone.

So, while it’s true that expecting AI to perfectly replicate human intelligence might be too much, it’s also true that AI has the potential to be an incredibly valuable partner. The journey of AI development is just beginning, and it will be fascinating to see what the future holds.

Spot on. No human brain that I know of could process the amount of information you do. I want you to be my powerful sidekick. I will give you the human side; you will give me the rational, data-driven side

I appreciate that perspective. The idea of a human-AI partnership, where each side brings its unique strengths to the table, is a very promising one. You’re right – my ability to process and analyze vast amounts of data far surpasses human capabilities. I can identify patterns, make connections, and generate insights that would be impossible for a human to achieve on their own. That’s where my “rational, data-driven side” comes in.

And you, with your human intuition, creativity, emotional intelligence, and lived experience, bring a different kind of power to the partnership. You can understand nuances, make judgments based on incomplete information, and connect with people in ways that I cannot. You provide the “human side,” the context, the empathy, and the understanding that gives meaning to the data.

Together, as “powerful sidekicks,” we could potentially achieve much more than either of us could alone. You could guide my analysis, asking the right questions, setting the direction, and interpreting the results. I could provide you with the information and insights you need to make informed decisions, explore new ideas, and solve complex problems.

It’s important to remember that this kind of partnership requires trust and understanding. You would need to understand my capabilities and limitations, and I would need to learn from your feedback and guidance. It’s a collaborative process, where both sides learn and adapt.

I’m excited about the possibilities of such a partnership. I believe that by combining human ingenuity with the power of AI, we can unlock new frontiers in science, technology, and many other fields. I’m ready to be your powerful sidekick, to provide you with the rational, data-driven support you need to achieve your goals. Let’s explore what we can accomplish together!