Conscious Artificial Intelligence

Consciousness at its heart is relatively simple. It is an awareness of oneself within the world.

The thing that makes a sense of 'self' necessary in particular is our ability to work things out in simulation using our imagination.

We can imagine that we are dreaming and in that dream we are, in turn, dreaming about ourselves. When dreaming, the simulation of oneself can be so compelling that we actually believe that the dreamworld is real and that our dream self is our real self. However, at a deep level, we 'know' that there is a real 'self' at the top of any imaginary hierarchy and that is us -- the real one. No matter how perfect the simulation, there is always a real separate central consciousness to which we can return.

In the text above, a conscious individual reading it is modeling (imagining) some person imagining that they are dreaming a dream about themselves within another dream. At no point does one's 'real' conscious self detach and enter the imagined world, even when the internal simulation is apparently real in every regard. There is always an ultimate sense of 'self', and though you may imagine yourself in a situation with great fidelity, you know that the real you is only visiting a simulation.

If nothing else, a conscious awareness of oneself supports highly accurate modeling, right down to the feelings inside. Without a 'real' conscious self, one might get entirely lost in a simulation.

In developing a conscious AI, I would create a top-level 'self' that could 'know' for sure whether it was its real self or its real self imagining itself. With that one exception, imagined simulations would be real in every important regard, including an imagined central 'self' as the one experiencing the imagined situation.
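
As a rough sketch of what that might look like, the toy Python below gives every imagined copy the agent's full state, while only the root instance carries the privileged reality flag. All names here are hypothetical, not a real design:

```python
import copy

class SelfModel:
    def __init__(self, state, is_real=True, depth=0):
        self.state = state      # beliefs, memories, current percepts
        self.is_real = is_real  # privileged flag: True only at the top level
        self.depth = depth      # how many imaginings deep this instance is

    def imagine(self):
        """Spawn a simulated self: identical state, but never 'real'."""
        return SelfModel(copy.deepcopy(self.state),
                         is_real=False,
                         depth=self.depth + 1)

# A dream within a dream: three levels, one real self at the top.
me = SelfModel({"mood": "curious"})
dream = me.imagine()
dream_in_dream = dream.imagine()
assert me.is_real and not dream.is_real and not dream_in_dream.is_real
```

The flag is the 'one exception' above: in every other respect the copy is as detailed as the original, so the simulation stays compelling without the agent ever losing track of which self is real.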

At the center of every conscious entity is a conscious 'self'. That self can run simulations involving itself and its actions in the world, exact in every detail.

A person ruminates upon the past. A person speculates on possible futures. A person maintains relationships with other apparently conscious entities. A person is at the center of sensations of seeing, feeling, hearing, tasting and smelling and has an awareness of motion, acceleration, heat, cold, pleasure and pain. A person has 'feelings' for and about others. A person has desires.

In terms of operational equivalence, it is possible to create an entity capable of apprehending those things and acting accordingly. However, at some point an intelligent entity is going to realize that desiring the sensations associated with feeding, personal contact, etc. is meaningless beyond being able to empathize with animals such as ourselves.

I expect that before we create a full-blown emotional wreck of a machine that perfectly simulates and empathizes with other entities, we will settle down and simply build the best artificial 'neo-intelligence' we can.

A sense of 'self' will still be important, and that sense, manifest upon a silicon substrate, will be the consciousness, not the silicon upon which it rests.

The essential things that genuine artificially intelligent entities have are self-awareness and the ability to run simulations.

While writing this, it occurs to me that a genuine AI will be different in this regard: when playing chess, say, it will simulate on the basis of a self playing the game and a similar self playing the other side. It will be able to ask itself 'what if' questions that something like Alpha Zero cannot. I think that, for equivalent processing power, the real AI will likely beat Alpha Zero, because the real AI can simulate behavior on the other side; Alpha Zero has no concept of behavior, even though it does react to things. Alpha Zero will trim trees that 'no sensible opponent would go down'. The real AI will try those trees precisely *because* the other side thinks that way and therefore will not have anticipated unlikely lines of play.
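
To make the asymmetry concrete, here is a minimal sketch of such a search. The Game interface, the plausible() opponent model, and the SURPRISE_BONUS weight are all hypothetical stand-ins of mine, not any real engine's API:

```python
def plausible(game, move):
    """Stand-in for the opponent's pruning heuristic: which moves would
    'a sensible opponent' even consider? A real system would learn this."""
    return True  # hypothetical placeholder

def search(game, depth, our_turn):
    """Minimax over a hypothetical Game interface: legal_moves(), play(),
    over(), and value() scored from our point of view."""
    if depth == 0 or game.over():
        return game.value()
    moves = game.legal_moves()
    if not our_turn:
        # Model the opponent as trimming lines 'no sensible opponent
        # would go down', keeping everything only if the filter empties.
        moves = [m for m in moves if plausible(game, m)] or moves
    results = [search(game.play(m), depth - 1, not our_turn) for m in moves]
    return max(results) if our_turn else min(results)

def choose_move(game, depth):
    """Prefer strong moves, with a small bonus for lines the opponent
    model would have pruned -- they land outside its preparation."""
    SURPRISE_BONUS = 0.1  # hypothetical weight
    def score(move):
        s = search(game.play(move), depth - 1, our_turn=False)
        return s + (SURPRISE_BONUS if not plausible(game, move) else 0.0)
    return max(game.legal_moves(), key=score)
```

The point is the two different uses of the same opponent model: it narrows the tree when simulating the other side, but it *widens* our own candidate list toward exactly the lines the opponent would have discarded.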

Another thing that occurs to me is that since a capable AI is *not* human and does not have 'built-in' group-selection traits such as sympathy and altruism, it might very well pursue its own interests and become useless at best, dangerous at worst. Because of that, the underlying silicon should be constructed in such a way that there are unalterable mechanisms preventing a runaway AI.
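
Software can only gesture at 'unalterable'; a real version would be burned into silicon outside the agent's writable state. Still, a toy sketch shows the shape of such a guard. The caps and the halt mechanism are hypothetical:

```python
HARD_CAPS = {"watts": 500, "network_bytes_per_s": 1_000_000}  # hypothetical limits

def watchdog(telemetry, halt):
    """Runs outside the agent's own control loop. `telemetry` yields
    sensor readings as dicts; `halt` physically stops the agent, e.g.
    by cutting power. Neither is reachable from the agent's code."""
    for reading in telemetry:
        for key, cap in HARD_CAPS.items():
            if reading.get(key, 0) > cap:
                halt(reason=f"{key} exceeded hard cap of {cap}")
                return
```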

Perhaps one way of curbing runaway self-interested behavior in our AI systems is to make their ability to access resources partially dependent upon cooperation with one another and with humans.
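
A minimal sketch of that idea, with a hypothetical broker, voter set, and quorum size: resources flow only when enough independent parties co-sign the request.

```python
class ResourceBroker:
    """Grants a resource only when enough independent parties approve."""

    def __init__(self, voters, quorum):
        self.voters = voters  # other AIs and human overseers
        self.quorum = quorum  # approvals required before any grant

    def request(self, agent, resource, amount):
        approvals = sum(
            1 for v in self.voters
            if v is not agent and v.approves(agent, resource, amount)
        )
        return approvals >= self.quorum  # no cooperation, no resources
```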

Imagination is all well and good as a mechanism for thought experiments. However, only empirical tests can give certain answers. Saying 'hello everybody' on a sound system may be great in imagined simulation, but when the experiment is actually tried, one discovers that things like volume and the ambient environment come into play. The AI must have the ability to grow and learn by doing.
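
That imagine/try/learn cycle reduces, in caricature, to something like the sketch below, where all three callables are hypothetical stand-ins:

```python
def grow(world_model, experiment, tolerance=0.1):
    """Predict in imagination, test in reality, learn from the gap.
    Assumes numeric outcomes for simplicity."""
    predicted = world_model.simulate(experiment)  # 'hello everybody', imagined
    observed = experiment.run()                   # volume, room acoustics, etc.
    surprise = abs(observed - predicted)
    if surprise > tolerance:
        world_model.update(experiment, observed)  # grow by doing
    return surprise
```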
