Consciousness at its heart is relatively simple. It is an awareness of oneself within the world.
The thing that makes it necessary to have a sense of 'self' in particular is our ability to work things out in simulation using our imagination.
We can imagine that we are dreaming and in that dream we are, in turn, dreaming about ourselves. When dreaming, the simulation of oneself can be so compelling that we actually believe that the dreamworld is real and that our dream self is our real self. However, at a deep level, we 'know' that there is a real 'self' at the top of any imaginary hierarchy and that is us -- the real one. No matter how perfect the simulation, there is always a real separate central consciousness to which we can return.
In the text above, a conscious individual reading it is modeling (imagining) some person imagining that they are dreaming a dream about themselves within another dream. At no point does one's 'real' conscious self detach and enter the imagined world, even when the internal simulation is apparently real in every regard. There is always an ultimate sense of 'self', and though you imagine that you are in a situation with great fidelity, you know that the real you is only visiting a simulation.
If nothing else, a conscious awareness of oneself supports highly accurate modeling, right down to feelings inside. Without a 'real' conscious self, one might get entirely lost in a simulation.
In developing a conscious AI, I would create a top-level 'self' that could 'know' for sure whether it was its real self or its real self imagining itself. With that one exception, imagined simulations would be real in every important regard, including an imagined central 'self' as the one experiencing the imagined situation.
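As a rough sketch of that design (the names SelfModel, imagine and is_real_self are hypothetical illustrations, not any existing system), the one privileged fact would live only at the top level:

```python
class SelfModel:
    """A self that can spawn fully believable imagined selves."""

    def __init__(self, depth=0):
        self.depth = depth                  # 0 = the real, top-level self

    def is_real_self(self):
        # The single privileged fact: only the top-level self can
        # answer 'yes'. Imagined selves are identical in every
        # other regard.
        return self.depth == 0

    def imagine(self):
        # A dream within a dream is just one level deeper.
        return SelfModel(depth=self.depth + 1)

real = SelfModel()
dream = real.imagine().imagine()            # a dream within a dream
print(real.is_real_self())                  # True
print(dream.is_real_self())                 # False -- only visiting
```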
At the center of every conscious entity is a conscious 'self'. That self can run simulations involving itself and its actions in the world, exact in every detail.
A person ruminates upon the past. A person speculates on possible futures. A person maintains relationships with other apparently conscious entities. A person is at the center of sensations of seeing, feeling, hearing, tasting and smelling and has an awareness of motion, acceleration, heat, cold, pleasure and pain. A person has 'feelings' for and about others. A person has desires.
In terms of operational equivalence, it is possible to create an entity capable of apprehending those things and acting accordingly. However, at some point an intelligent entity is going to realize that desiring the sensations associated with feeding, personal contact, etc., is meaningless beyond being able to empathize with animals such as ourselves.
I expect that before we create a full-blown emotional wreck of a machine that perfectly simulates and empathizes with other entities, we will settle down and simply build the best artificial 'neo-intelligence' we can.
A sense of 'self' will still be important, and that sense, manifest upon a silicon substrate, will be the consciousness itself, not the silicon upon which it rests.
What genuinely intelligent artificial entities must have, above all, is self-awareness and the ability to run simulations.
While writing this, it occurs to me that a genuine AI will be different in this regard: when playing chess, say, it will simulate on the basis of a self playing the game and a similar self playing the other side. It will be able to ask itself 'what if' questions that something like Alpha Zero cannot. I think that, for equivalent processing power, the real AI will likely beat Alpha Zero because the real AI will be able to simulate behavior on the other side; even though Alpha Zero has no concept of behavior, it still reacts to things. Alpha Zero will trim trees that 'no sensible opponent would go down'. The real AI will try those trees precisely *because* the other side might think that and therefore not have anticipated unlikely lines of play.
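A toy sketch of that idea: a searcher that models the other side's own pruning and gives a 'surprise' bonus to lines the opponent model would dismiss. Every function here is an illustrative stand-in, not a real chess engine API:

```python
def legal_moves(position):
    return position["moves"]              # stand-in move generator

def opponent_would_prune(move):
    # Stand-in opponent model: the other side ignores lines that
    # 'no sensible opponent would go down'.
    return move["plausibility"] < 0.2

def evaluate(move):
    return move["value"]                  # stand-in static evaluation

def choose_move(position, surprise_bonus=0.5):
    best, best_score = None, float("-inf")
    for move in legal_moves(position):
        score = evaluate(move)
        if opponent_would_prune(move):
            # The opponent has not analysed this line, so its
            # practical value exceeds its static value.
            score += surprise_bonus
        if score > best_score:
            best, best_score = move, score
    return best

position = {"moves": [
    {"name": "solid line",   "plausibility": 0.9, "value": 0.3},
    {"name": "bizarre line", "plausibility": 0.1, "value": 0.1},
]}
print(choose_move(position)["name"])      # 'bizarre line', via surprise
```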
Another thing that occurs to me is that since a capable AI is *not* human and does not have 'built-in' group selection traits such as sympathy and altruism, it might very well pursue its own interests and become useless at best, dangerous at worst. Because of that, the underlying silicon should be constructed in such a way that there are unalterable mechanisms preventing a runaway AI.
Perhaps one way of curbing runaway self-interested behavior in our AI systems is to make their ability to access resources partially dependent upon cooperation with one another and with humans.
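A toy Python sketch of that coupling might look like the following; the allocation rule and the cooperation scores are assumptions for illustration only:

```python
def allocate(budget, agents):
    # Each agent's share of the resource budget is proportional to
    # a cooperation score assigned by other agents and by humans;
    # a purely self-interested agent (score 0) gets nothing.
    total = sum(a["cooperation"] for a in agents) or 1
    return {a["name"]: budget * a["cooperation"] / total for a in agents}

agents = [
    {"name": "helpful",  "cooperation": 0.8},
    {"name": "defector", "cooperation": 0.0},
]
print(allocate(100, agents))    # {'helpful': 100.0, 'defector': 0.0}
```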
Imagination is all well and good as a mechanism for thought experiments. However, only empirical tests can give certain answers. Saying 'hello everybody' on a sound system may seem great in imagined simulation, but when the experiment is tried, one discovers that things like volume and the ambient environment come into play. The AI must have the ability to grow and learn by doing.
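The sound-system example can be made concrete with a small learn-by-doing loop; measure_real_volume and the gain numbers are hypothetical stand-ins for a real sensor and a real room:

```python
def measure_real_volume(setting, room_gain=1.3):
    # Toy stand-in for an empirical measurement: the room turns out
    # to be louder than the imagined simulation assumed.
    return setting * room_gain

def learn_by_doing(model_gain=1.0, target_db=70.0, trials=5, rate=0.5):
    for _ in range(trials):
        setting = target_db / model_gain            # plan in imagination
        actual_db = measure_real_volume(setting)    # run the experiment
        error = actual_db - target_db
        model_gain *= 1 + rate * error / target_db  # learn from doing
    return model_gain

print(round(learn_by_doing(), 2))   # approaches the room's true 1.3
```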