“Fear is debilitating and causes irrational thinking.  We need to back up a couple of steps and look at things in a better light before we throw ourselves from the building.” – J. Durant

For quite some time we have been bombarded with articles and news about the coming of artificial intelligence (AI), robotics and all sorts of other ‘human replacement’ tools.  As I am sure some of you are aware, there are just some things that these tools can’t do, or at least that we would not permit them to do, even if they might be a bit more reliable, efficient and consistent.   Then again, we have walked the border with controlled experimentation such as cloning and various other forms of neuro-response solutions, so who knows what kind of mad scientist may be lurking, ready to set forth their madness in an intellect-driven machine.

So let’s take a moment and accept a few things…

  1. Some form of intellect-driven solution will be put into play, and it will replace legacy solutions that involve a human component.
  2. Most likely, intelligent solutions will embrace a consumer-to-business (C2B) paradigm.  This will ultimately reduce costs and expedite the formation of relationships.
  3. The care and oversight of such solutions will require humans.  However, rather than relying upon casual oversight, they will be nurtured by intelligent analytics, whether predictive or preemptive in form.   And,
  4. Robots are apt to be involved in order to serve as a service conduit into the interaction with the AI environment.

Now for the real questions that never seem to get mentioned by those expounding on the AI exploits of leading companies like Google, Salesforce, DeepMind, Facebook, OpenAI, Baidu, Microsoft Research, Apple, IBM and Remark.   In delving into these more deeply, it is clear that there are a few elements to recognize.  Some offerings are billed as AI based purely on name, with perhaps a slight bit of tinkering with the concepts and technologies.  Some are heavily focused on the AI mechanisms that would be used to drive AI-like behavior.  Finally, some have put an AI wrapper around an intelligent process, possibly an advanced analytic element, and labeled it AI.   You are apt to see a similar situation with terms such as machine learning and robotics (especially the non-mechanical kind) as well.

Classical Transitioning Concerns

An all too common condition in transitioning is having a plan that is doomed before any steps are taken, as a direct result of existing issues.  Thus far we have not seen these points raised by the AI and robotics enthusiasts, or by those who have expressed guarded reluctance about journeying toward their utilization.   Some of these existing conditions include:

  • Green-field conceptualization of the AI model and the metamorphic conditions one can anticipate.
  • Interface negotiations from senders as well as receivers.  Stakeholders in receiving solutions may be a bit reluctant to accept AI-driven infeeds.
  • Verification and Validation (V&V) readiness.  Most recently, British Airways had a system-wide shutdown that crippled its operations.  If we are experiencing these conditions in complex, networked but traditional systems, what is it going to be like with fly-by-automation systems like AI?
  • What mechanisms will be used to fuel the AI solution?  Will those mechanisms be ready to provide reliable feed information, and in doing so be expedient enough to fuel the AI application?
  • Have boundary reach parameters been set?
  • Has consideration been given to security, validation, performance, real-time conflict management, in-flight updating, and some of the more technical elements of AI?
  • Has thinking been directed toward the ‘right solution’ rather than remaining focused on the existing solution?   Again, more green-field/blue-sky thinking.
  • Formulating a growth-based design that will engage elements of robotics and analytics.
  • Understanding that the AI solution may be more than just an event-driven paradigm, and will demand the inclusion of event-based stimulation, deductive modeling that builds upon (or adjusts) a rule frame repository, and the concepts of prediction, authentication and progressive simulation (apart from the live environment).  A sketch of such a rule frame repository follows this list.
  • Destination-driven repository containers that are distributed but interconnected globally, as opposed to single-destination service.  This also raises the question of non-stop up-time.
  • Extent of human or non-human intervention schema.

And there are a lot more that are required in order to insulate against failure and elevate the opportunity for success.
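One of the bullets above calls for deductive modeling over a rule frame repository.  Here is a minimal sketch of that idea in Python; the Rule class, the confidence adjustment, and the thresholds are all hypothetical illustrations of the concept, not any particular product’s API.

```python
# A minimal sketch of a 'rule frame repository': rules fire on incoming
# events, and observed outcomes are fed back to adjust rule confidence.
# All names and values here are hypothetical illustrations.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Rule:
    name: str
    condition: Callable[[Dict], bool]  # fires when an incoming event matches
    action: str                        # the suggested outcome
    confidence: float = 0.5            # adjusted by feedback over time

class RuleFrameRepository:
    """Hypothetical rule store: events stimulate it, outcomes reshape it."""

    def __init__(self):
        self.rules = []

    def add(self, rule):
        self.rules.append(rule)

    def evaluate(self, event):
        """Return the highest-confidence rule whose condition matches."""
        matches = [r for r in self.rules if r.condition(event)]
        return max(matches, key=lambda r: r.confidence) if matches else None

    def feedback(self, rule, success, step=0.1):
        """Deductive adjustment: outcomes raise or lower rule confidence."""
        delta = step if success else -step
        rule.confidence = min(1.0, max(0.0, rule.confidence + delta))

# Usage: an event stimulates the repository; the outcome is then fed back.
repo = RuleFrameRepository()
repo.add(Rule("high_load", lambda e: e.get("cpu", 0) > 0.9, "scale_out"))
chosen = repo.evaluate({"cpu": 0.95})
if chosen:
    print(chosen.action)                 # -> scale_out
    repo.feedback(chosen, success=True)  # the outcome confirmed the rule
```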

Circle Condition

Unbeknownst to consumers/recipients of change, there exists some form of exploratory cycle.  It may be as simple as a survey and an alpha test of the market, or as formal as experimental research.   I reread an article (actually from a different source) on TensorFlow Playground, a working example of neural technology.  It is an impressive, stimulating, well-illustrated form of scientific/mathematical application that draws deductive, suggestive outcomes with a high probability of accuracy (but not 100%).  Then I got to thinking about whether 100% is attainable from humans either; after all, we are prone to mistakes, whether through lapses of attention or as the result of circumstantial conditions.  Clearly the purpose is to build a sense of trust and understanding, as a commercial effort for the marketplace.  It also illustrates that the technology was applied to the known science of math to legitimize its ability.  What we have seen, however, is that the line between research and usable solution is often fuzzy.  Jumping from concept to application overlooks some required grooming, especially in this case, where the science has an element of runaway evolvement based on conditional stimulation and seed data.
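For anyone who wants to poke at the same idea outside the browser, here is a minimal sketch, assuming TensorFlow’s Keras API and scikit-learn are installed, of the kind of tiny network Playground demonstrates on its ‘circle’ dataset.  It typically scores high but, as noted above, not 100%.

```python
# A toy two-layer network on a noisy two-circles dataset, in the spirit of
# TensorFlow Playground. The noise keeps accuracy high but below 100%,
# which is the point made above. Requires tensorflow and scikit-learn.

import tensorflow as tf
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split

# Noisy, non-linearly separable data, similar to Playground's circle set.
X, y = make_circles(n_samples=1000, noise=0.1, factor=0.4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="tanh"),
    tf.keras.layers.Dense(8, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, verbose=0)

loss, acc = model.evaluate(X_test, y_test, verbose=0)
print(f"test accuracy: {acc:.2%}")  # high, but noise keeps it under 100%
```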

In some respects the concepts and principles of AI follow a path similar to that of compilers.  There is a finite set of conditional parameters that can be involved, based on formalized criteria set by the institution, to produce an outcome.  What creates the circle is that the outcome is then fed back into the process, whereby some events may be repeated and others take a totally different path.  The fear isn’t in the use of the technology; it’s all of the possible things that can go wrong, and the need to understand their potential and to determine the appropriate level of care that must be exercised.   Nor is this a path without precedent for such debates: space programs, nuclear reactors and fly-by-wire systems have all had their moments of glory, and their moments when intervention (and often spot creativity) had to be exercised.
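As a toy illustration of that circle, the loop below (the states and thresholds are invented for the example) feeds each pass’s outcome back in as the next input, so some branches repeat while others diverge:

```python
# A minimal sketch of the 'circle condition': a finite set of conditional
# parameters produces an outcome, and that outcome is fed back as the next
# input, so some steps repeat and others branch onto a different path.

def process(state: float) -> float:
    """One pass through the institution's formalized criteria."""
    if state < 0.5:
        return state + 0.2  # repeat this branch while below the threshold
    return state * 0.9      # a different path once the threshold is crossed

state = 0.1
for cycle in range(10):
    state = process(state)  # the outcome becomes the next input
    print(f"cycle {cycle}: state = {state:.2f}")
```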

So Where Are We Now?

We are in some interesting times.  The degree and speed with which AI will advance remain uncertain.  My suspicion is that those already poised with intellect-driven tools, whether predictive analytics (ready for preemptive forms), robotic clusters looking to advance beyond rule-based paradigms, or semi-thinking information technology solutions looking to merge trends with behavior change, will definitely have a leg up.  For the rest it will become a decision whether to wait or to start taking now the formative steps that already exist for the organizations that are poised.   Past failed attempts at AI were the result of narrow institutional support (left mostly to universities and the Department of Defense, with Ada).  Today respected institutions, like Google, provide a groundswell of interest and support by association.  Whether that is rightly so is not up for debate, but rather to be acknowledged as fact.  It is not without risks, but as long as we humans have control we can do what is needed to ensure that our AI will succeed in a controlled and appropriate fashion.
