Conscious Artificial Intelligence

Consciousness at its heart is relatively simple. It is an awareness of oneself within the world.

The thing that makes a sense of 'self' necessary in particular is our ability to work things out in simulation using our imagination.

We can imagine that we are dreaming, and in that dream we are, in turn, dreaming about ourselves. When dreaming, the simulation of oneself can be so compelling that we actually believe that the dreamworld is real and that our dream self is our real self. However, at a deep level, we 'know' that there is a real 'self' at the top of any imaginary hierarchy and that is us -- the real one. No matter how perfect the simulation, there is always a real, separate, central consciousness to which we can return.

In the text above, a conscious individual reading it is modeling (imagining) some person imagining that they are dreaming a dream about themselves within another dream. At no point does one's 'real' conscious self detach and enter the imagined world, even when the internal simulation is apparently real in every regard. There is always an ultimate sense of 'self', and though you imagine that you are in a situation with great fidelity, you know that the real you is only visiting a simulation.

If nothing else, a conscious awareness of oneself supports highly accurate modeling, right down to feelings inside. Without a 'real' conscious self, one might get entirely lost in a simulation.

In developing a conscious AI, I would create a top-level 'self' that could 'know' for sure whether it was the real self or the real self imagining itself. With that one exception, imagined simulations would be real in every important regard, including an imagined central 'self' as the one experiencing the imagined situation.

At the center of every conscious entity is a conscious 'self'. That self can run simulations involving itself and its actions in the world, exact in every detail.
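As a minimal sketch of that design, assuming nothing about how a real system would represent a mind, the hierarchy might look like the following, where every name (SelfModel, imagine, wake) is hypothetical: each imagined self is complete in every regard except one -- only the root knows it is real.

```python
# Hypothetical sketch: a hierarchy of selves in which only the top level
# is real. Every nested, imagined self behaves like a full self, but the
# knowledge of being real lives only at the root.

class SelfModel:
    def __init__(self, parent=None):
        self.parent = parent  # None marks the one real self

    @property
    def is_real(self):
        # The single exception granted to the top-level self: it alone
        # can 'know' for sure that it is not itself being imagined.
        return self.parent is None

    def imagine(self):
        # Spawn a simulated self -- a dream, or a dream within a dream.
        return SelfModel(parent=self)

    def wake(self):
        # However deep the nesting, there is always a real self at the
        # top of the hierarchy to return to.
        return self if self.is_real else self.parent.wake()

real = SelfModel()
dream = real.imagine()
dream_in_dream = dream.imagine()
print(dream_in_dream.is_real)        # False
print(dream_in_dream.wake() is real) # True
```

The one structural guarantee is that wake() always terminates at the same real self, which is exactly the property the dream example above relies on.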

A person ruminates upon the past. A person speculates on possible futures. A person maintains relationships with other apparently conscious entities. A person is at the center of sensations of seeing, feeling, hearing, tasting and smelling and has an awareness of motion, acceleration, heat, cold, pleasure and pain. A person has 'feelings' for and about others. A person has desires.

In terms of operational equivalence, it is possible to create an entity capable of apprehending those things and acting accordingly. However, at some point an intelligent entity is going to realize that desiring the sensations associated with feeding, personal contact, etc., is meaningless beyond being able to empathize with animals such as ourselves.

I expect that before we create a full-blown emotional wreck of a machine that perfectly simulates and empathizes with other entities, we will settle down and simply build the best artificial 'neo-intelligence' we can.

A sense of 'self' will still be important, and that sense, manifest upon a silicon substrate, will be the consciousness itself, not the silicon upon which it rests.

The defining things that genuinely intelligent artificial entities have are self-awareness and the ability to run simulations.

While writing this, it occurs to me that a genuine AI will be different in this regard: when playing chess, say, it will simulate on the basis of a self playing the game and a similar self playing the other side. It will be able to ask itself 'what if' questions that something like Alpha Zero cannot. I think that, for equivalent processing power, the real AI will likely beat Alpha Zero, because the real AI will be able to simulate behavior on the other side, and even though Alpha Zero has no concept of behavior, it still reacts to things. Alpha Zero will prune lines that 'no sensible opponent would go down'. The real AI will try those lines precisely *because* the other side might think that, and therefore will not have anticipated unlikely lines of play.
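A toy sketch of that idea follows, with the game and every name invented for illustration (this is not how Alpha Zero or any real engine works): a take-1-to-3-tokens game in which the modeled opponent never even analyses taking 3. Searching against that opponent model finds wins in positions that perfect-play analysis would write off as already lost.

```python
# Hypothetical illustration: exploiting an opponent model's pruning.
# Toy game: a pile of tokens, each player removes 1-3, taking the last
# token wins. The modeled opponent is assumed never to consider taking 3.

MOVES = (1, 2, 3)

def opponent_considers(pile):
    """The opponent model's (deliberately narrow) candidate moves."""
    return [m for m in (1, 2) if m <= pile]

def value(pile, self_to_move):
    """+1 if 'self' wins against the *modeled* opponent, else -1."""
    if pile == 0:
        # Whoever just moved took the last token and won.
        return -1 if self_to_move else 1
    if self_to_move:
        # Self considers every legal move, including 'implausible' ones.
        return max(value(pile - m, False) for m in MOVES if m <= pile)
    # Opponent nodes expand only what the modeled opponent would analyse.
    return min(value(pile - m, True) for m in opponent_considers(pile))

def best_move(pile):
    return max((m for m in MOVES if m <= pile),
               key=lambda m: value(pile - m, False))

# Under perfect play, a pile of 4 is lost for the side to move. Against
# this modeled opponent, taking 1 wins -- a line perfect-play analysis
# would discard, tried precisely because the opponent won't expect it.
print(best_move(4), value(4, True))  # 1 1
```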

Another thing that occurs to me is that, since a capable AI is *not* human and does not have 'built-in' group-selection traits such as sympathy and altruism, it might very well pursue its own interests and become useless at best, dangerous at worst. Because of that, the underlying silicon should be constructed in such a way that there are unalterable mechanisms preventing a runaway AI.

Perhaps one way of curbing runaway self-interested behavior in our AI systems is to make their ability to access resources partially dependent upon cooperation with one another and with humans.
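Sketched in code, under the assumption that cooperation can be attested by other parties (every name here -- Agent, ResourceBroker, the budget formula -- is hypothetical), the mechanism might look like this:

```python
# Hypothetical sketch: an agent's resource budget grows only through
# endorsements from other agents and humans, never through its own claims.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    cooperation_score: float = 0.0  # raised only by others' endorsements

class ResourceBroker:
    BASE_BUDGET = 10.0  # units of compute granted unconditionally

    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def endorse(self, endorser, endorsee, weight=1.0):
        # Cooperation is attested by *other* parties, not self-reported.
        if endorser != endorsee:
            self.agents[endorsee].cooperation_score += weight

    def budget(self, name):
        # A solitary agent stays capped at the base allowance; anything
        # beyond it is earned through cooperation.
        agent = self.agents[name]
        return self.BASE_BUDGET * (1.0 + agent.cooperation_score)

broker = ResourceBroker()
broker.register(Agent("ai_one"))
broker.register(Agent("ai_two"))
broker.endorse("ai_two", "ai_one", weight=2.0)
print(broker.budget("ai_one"))  # 30.0 -- earned through cooperation
print(broker.budget("ai_two"))  # 10.0 -- base allowance only
```

The design choice doing the work is that the score is written only by endorse(), which an agent cannot invoke on itself.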

Imagination is all well and good as a mechanism for thought experiments. However, only empirical tests can give certain answers. Saying 'hello everybody' on a sound system may be great in imagined simulation, but when the experiment is tried one discovers that things like volume and the ambient environment come into play. The AI must have the ability to grow and learn by doing.
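The 'hello everybody' example reduces to a predict, act, measure, correct loop. A minimal sketch, with the acoustics and all names invented for illustration: the imagined room is wrong at first, and only running the real experiment pulls the internal model toward reality.

```python
# Hypothetical sketch: learning by doing. The agent predicts loudness
# from its internal model, performs the real experiment, measures the
# outcome, and corrects the model with the difference.

class VolumeModel:
    def __init__(self):
        self.room_gain = 1.0  # the imagined acoustics; starts naive

    def predict_loudness(self, volume_setting):
        return volume_setting * self.room_gain

    def update(self, volume_setting, measured_loudness, rate=0.5):
        # Pull the model's gain toward what the real room demonstrated.
        observed_gain = measured_loudness / volume_setting
        self.room_gain += rate * (observed_gain - self.room_gain)

def real_room(volume_setting):
    # Stand-in for the physical experiment: the real environment applies
    # a gain the simulation did not anticipate.
    return volume_setting * 2.7

model = VolumeModel()
for trial in range(5):
    predicted = model.predict_loudness(5.0)
    measured = real_room(5.0)
    model.update(5.0, measured)
    print(f"trial {trial}: predicted {predicted:.2f}, measured {measured:.2f}")
```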





