The Difference between Life and Machine

This was a response to a thread started by “Scott” on LinkedIn’s Artificial Intelligence Researchers, Faculty + Professionals Group.  My response exceeded the character limit, so I have placed it here.

Clearly, human intelligence does not capture or represent all forms of intelligence in the universe.  With that said, with regard to a software AI attempting to demonstrate human-level intelligence:

We often commit categorical errors from the inception of a project or thought experiment when we try to compare the intelligence of a biological life form with the intelligence demonstrated by a non-biological entity.  An AI cannot experience biological qualia, for it is not biological.  Its camera eyes may see images and analyze them.  Its microphone ears may process sound and do a fantastic job of parsing it.  Yet the true aspect of biological ‘qualia’ is missing.  A non-biological experience is full of non-biological qualia, which we cannot understand since we are not non-biological!

One thing is certain to me: biological life has needs.  Its primary directive is to continue living.  Some life forms are more complex than others, so their primary directive of ‘continue living’ is expressed differently.  Yet the overall theme remains the same: things that are alive tend to try to stay ‘alive’.  Things that are not alive tend to stay ‘not alive’.

Consider the following:
I worked on a private project that emulated human emotion in an artificial (electronic) ‘brain’.  We had a demo in a small 3D world.  In this world the human user could drop a red box or a green box from the sky on demand.  A green box would cause a beautiful light show that brought joy and happiness to the AI.  The facial muscles in the 3D model were wired to express the proper variations and combinations of emotions based on FACS (the Facial Action Coding System).  This was bidirectional: smiling caused feelings of joy, and feelings of joy caused one to smile.  When the red box dropped, it would settle and then explode with a loud boom.  The noise would frighten the AI, and the pieces would hit his face and cause pain.  In this demo, the user could drop whatever boxes they chose, whenever they chose, in whatever order they chose.  If you had already dropped a red box and trained him with that experience, he would learn to expect bad things, and this fear would be reflected in his emotions (fear, negative expectation, etc.).  He learned, in context, through experience, and stored this learning for future use and for future expectations about associated things.
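To make the two ideas in that description concrete, here is a minimal sketch, not the original project’s code, of a bidirectional coupling between felt emotion and a FACS-style expression, plus a learned expectation that colors the agent’s emotional state before an event actually resolves.  Every class, method, and number here is a hypothetical illustration.

```python
# Minimal sketch of an "emotional agent": emotion <-> expression coupling
# and learned expectations from experience. All names are hypothetical.

class EmotionalAgent:
    def __init__(self):
        # Current emotional state, each value in [0.0, 1.0]
        self.emotions = {"joy": 0.0, "fear": 0.0, "pain": 0.0}
        # Learned valence associated with stimuli, e.g. "red_box" -> -0.45
        self.associations = {}

    def express(self):
        """Map emotion to a coarse FACS-like expression (emotion -> face)."""
        if self.emotions["joy"] > max(self.emotions["fear"], self.emotions["pain"]):
            return "AU6+AU12"      # cheek raiser + lip corner puller ("smile")
        if self.emotions["fear"] > self.emotions["pain"]:
            return "AU1+AU2+AU5"   # brow raisers + upper-lid raiser ("fear face")
        return "AU4+AU7"           # brow lowerer + lid tightener ("distress")

    def pose_expression(self, expression):
        """Posing an expression feeds back into emotion (face -> emotion)."""
        if expression == "AU6+AU12":
            self.emotions["joy"] = min(1.0, self.emotions["joy"] + 0.1)

    def anticipate(self, stimulus):
        """Before the event resolves, learned valence drives expectation."""
        valence = self.associations.get(stimulus, 0.0)
        if valence < 0:
            self.emotions["fear"] = min(1.0, -valence)

    def experience(self, stimulus, outcome_valence, learning_rate=0.5):
        """Feel the outcome, then update the association for next time."""
        if outcome_valence < 0:
            # 'Pain' here is just a number being written into a variable --
            # the hard-coded qualia discussed in the next paragraph.
            self.emotions["pain"] = min(1.0, -outcome_valence)
        else:
            self.emotions["joy"] = min(1.0, outcome_valence)
        old = self.associations.get(stimulus, 0.0)
        self.associations[stimulus] = old + learning_rate * (outcome_valence - old)


agent = EmotionalAgent()
agent.anticipate("red_box")        # no fear yet: nothing has been learned
agent.experience("red_box", -0.9)  # explosion: 'pain' plus a negative memory
agent.anticipate("red_box")        # now the falling red box triggers fear
print(agent.emotions, agent.express())
```

Even in this toy version, the structure of the criticism that follows is visible: the learning and the expression machinery are genuine, but the “feeling” itself is only an assignment to a variable.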

The problem with our demonstration was that the true ‘qualia’ was hard-coded.  Yes, the AI learned.  Yes, the experiential learning caused associated emotional stamps that wired and re-wired this AI’s electronic brain.  This was very exciting for us as a team.  Yet this AI never truly ‘experienced’ the qualia of pain.  It was instructed through code that the neuron representing pain was firing.  This was not some sadistic experiment: no true ‘biological feeling’ of pain, joy, or excitement was ever actually experienced by the software.

So with this in mind consider another example:
An autistic boy impervious to pain touches a hot stove.  He does not move his hand, and it burns.  Over time, his mother trains him to understand that this ‘hurts’.  He learns that it is not good or acceptable behavior, and so, over time, he learns not to do these types of things.

This example is akin to an AI that has been programmed to “feel” pain.   Neither the autistic boy nor the AI program actually feels pain, yet both can emulate its effects and act accordingly.   In essence, the autistic boy is a walking Chinese Room Experiment.

If we can come to terms with this and accept it to be the case for our software AIs, we will avoid many of the pitfalls that trap us.  We can create a very advanced human emulator.  It won’t be able to do some of the ‘human-like thinking things’ we might hope for, because it won’t actually be ‘feeling’ the qualia of human experience.  Yet that would be fine for many of its uses; a really good emulator would serve many good purposes.  We don’t care that our calculators don’t understand what they are doing.  They are not conscious entities aware of their number crunching.  While they do not emulate humans, I use the example to show their helpful purpose: we use them as tools designed to perform a function.  In that same light, non-biological AIs could be fashioned and used for greater purposes, even in light of the inherent limitations they are up against.