Wednesday, November 22, 2023

It ain't AI if it don't learn.

You read that right: you can't call it Artificial Intelligence if it doesn't fully imitate Human Intelligence.

I used to teach this stuff at a local college in their 400-level computer science curriculum.  The book was Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig.  It's considered the standard text for first courses in artificial intelligence, and it's well written and very understandable.

HI (Human Intelligence) begins developing even before a child is born.  The child can recognize sounds in the womb and can (and does) react when certain sounds are heard.  After birth, the infant hears those same sounds clearly and shows many of the same reactions.

Ever notice how an infant will all of a sudden stop and listen to a sound?  The infant hears the sound and checks its memory - its "knowledge base" - for the sound and reacts accordingly.  Pleasing sounds bring a smile; unpleasing sounds cause a different reaction.

As the child grows, it "learns" by adding additional sounds to its "knowledge base".  It experiments with motor actuation by first moving limbs, then discovering how to roll over, then crawl, then stand.  All the while, the infant is also hearing and repeating sounds and noticing the reactions to those sounds.  Eventually, the infant "learns" that certain sounds always cause the same reaction - or a reaction that is reasonably similar.  The infant adds both the sounds and their reactions to its "knowledge base", and eventually figures out how to manipulate those sounds in ways far different from the original sounds.  Adults think this is "so clever", but it's actually the child's learning process being demonstrated.
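
To make that model concrete, here is a minimal sketch in Python of the loop I'm describing.  Everything in it - the KnowledgeBase class, the hear method, the sample sounds - is invented for illustration; it's a toy, not a claim about how any real system works.

```python
class KnowledgeBase:
    """Maps stimuli the 'infant' has encountered to learned reactions."""

    def __init__(self):
        self._memory: dict[str, str] = {}

    def hear(self, sound: str, observed_reaction: str | None = None) -> str:
        # Known sound: react from memory, like the infant recognizing
        # a voice first heard in the womb.
        if sound in self._memory:
            return self._memory[sound]
        # Unknown sound heard alongside a reaction: store the pair.
        # This write-back is the "learning" step described above.
        if observed_reaction is not None:
            self._memory[sound] = observed_reaction
            return observed_reaction
        return "pause and listen"  # novel stimulus, nothing learned yet


kb = KnowledgeBase()
print(kb.hear("mother's voice"))                      # pause and listen
kb.hear("mother's voice", observed_reaction="smile")  # learn the pairing
print(kb.hear("mother's voice"))                      # smile
```

The detail that matters is the write-back: the knowledge base grows as a side effect of experience, without anyone designing the new entries in ahead of time.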

Companies developing AI in today's market are not generating true AI.  They are populating the AI system's knowledge base - its memory - with predetermined stimuli and response algorithms. Those algorithms are designed for specific purposes: modifying preexisting data to achieve new views of that data (pictures, videos, and sound), using artificial vision to drive specific motor control (driving, assembly-line robotics), and manipulating other preexisting data to achieve a desired output.
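
For contrast, here is the same idea with the write-back removed - a sketch of the populate-then-freeze approach I'm describing.  FROZEN_RESPONSES and respond are hypothetical names for illustration; the point is only that nothing new is ever added at run time.

```python
# The knowledge base is filled at design time and then frozen.
FROZEN_RESPONSES = {
    "stop sign": "apply brakes",
    "lane marking": "steer toward center",
}

def respond(stimulus: str) -> str:
    # Lookup only - there is no path that writes new pairs back,
    # so previously unconsidered data is never incorporated.
    return FROZEN_RESPONSES.get(stimulus, "no predetermined response")

print(respond("stop sign"))           # apply brakes
print(respond("swaying light pole"))  # no predetermined response
```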

But none of these systems demonstrates the primary purpose of intelligence: to learn.  These systems do not take previously unconsidered data and incorporate it into their knowledge bases.  They can't - and the designers are wise not to try - because these AI systems are incapable of doing the one thing that humans do, the thing that separates the human mind from all other animal minds.

The ability to reason.

And, by reason, I don't mean political or moral.  I mean the ability to understand whether a new piece of knowledge should be incorporated into a knowledge base.  Whether a strangely designed light pole moving in a strong wind is a human or an object.  Whether a series of words pasted together makes sense even if it is grammatically correct.  Whether those generated sounds claimed to be "music" are pleasing or atonal.  And whether the picture generated is ugly or beautiful.
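
One way to picture that missing step is a gate that weighs evidence before anything is committed to memory.  The sketch below is deliberately naive - judge_candidate and its 0.8 threshold are inventions for illustration, not a real algorithm - but it shows the shape of the decision: incorporate, or keep watching.

```python
def judge_candidate(claim: str,
                    supporting_observations: int,
                    contradicting_observations: int) -> bool:
    """Decide whether a candidate fact has earned a place in memory."""
    total = supporting_observations + contradicting_observations
    if total == 0:
        return False  # no evidence yet - keep watching, store nothing
    # Incorporate only claims the evidence consistently supports.
    return supporting_observations / total > 0.8

# The swaying light pole from above: early, conflicting sightings
# should not be committed to memory as "a human".
print(judge_candidate("that pole is a human", 1, 4))   # False
print(judge_candidate("that sound means food", 9, 1))  # True
```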

The current demonstrations of AI are not meant to show AI capabilities but to confuse the observer about whether the generated picture, sound, or object is human-made.  In almost every case, close examination reveals the difference - but that raises the question: why try to confuse reality with a computer-generated fantasy?

But one factor remains: these systems are not generating new ideas or new connections between ideas.  They are merely exercising algorithms developed for a specific purpose. And for now, that purpose seems to be to demonstrate the capabilities of computer systems.

Alan Turing created a test to determine whether a machine could demonstrate "artificially intelligent" behavior indistinguishable from human behavior.  His test has been criticized as "not realistic", but the critics miss the entire point.  If a computer system cannot demonstrate "artificially intelligent" behavior, then it isn't intelligent: it is merely exercising highly advanced algorithms.

The proof remains the non-human system's ability to reason.  And, as yet, none of the so-called AI systems has demonstrated this ability, best described as deciding whether, or not, to do something beyond the information stored in its knowledge base.  And if so... what, then, to do.
