The maturity of a game-changing technology is reflected by the non-academic clarity of how it is characterized and transitioned to the society to which it applies. – J. Durant

I consider myself very lucky that I was given the chance to work with computers at a very young age (1967), at a time when the concept of automation was more a novelty than a pervasive reality.  It was also quite by coincidence that I was working on a project (Ford Grant Project/BASIC compiler construction) at an institution (Dartmouth) a short distance from my hometown in New Hampshire.  At the time I was fascinated by what could be done with automation and how it could be used as a tool and not as a threat to society.  But I think back to some of the words that came from my mouth, as a babe of technology, and wonder whether they have had some play in all of this.  One example: while working with a bit of machine code I grew frustrated by the never-ending barrage of diagnostic error conditions.  I said to my mentors (Drs. Kemeny and Kurtz), “If the machine is smart enough to tell me I have a problem, why isn’t it smart enough to correct itself?”  While naive, that is certainly within the realm of what we envision AI doing, without requiring our time or intervention to resolve.

Today, some 40 years later, I am no less fascinated by the potential of technology.  At the same time I am utterly disappointed by the lack of consideration given to transitioning society toward its acceptance.  Over the course of my life I have learned that the best ideas often fail as a result of an inept ability to transition markets (also known to some as market conversion).  Even the humble personal computer had its moments until manufacturers demystified the technology and produced an affordable and simple paradigm.

A Bit of History

Artificial intelligence (AI) had some early roots at Dartmouth in 1956.  Academic research surrounding the use of computers to perform tasks led scientists on a pursuit of assimilating thinking (or what we now refer to as learning machines).  Since that time there has been an ebb and flow of AI exploration.  My first exploits (1970s) were with a simple personal-computer package produced by VisiCorp called ‘VisiExpert’.  It was a rule-based solution that allowed you to create lists of conditions and correlate them to one or more rule bases (in this particular case, a simple set of tables).  The example that came with the package was the pairing of wines with cheeses.  I’m sure we could have done something similar with microbreweries and draft choices, or with employment roles and applicants.  The product never got much acclaim, not because of its merits but because society was still trying to get its grasp around spreadsheets and word processing solutions (databases would come later, as they contributed to mailing and label list processing and to table feeds for spreadsheet analysis).  We saw a reappearance of AI in the form of Ada, a high-level Pascal derivative that introduced the concept of ‘object oriented’ programming.  For some this provided a stepping-off point from linear programming paradigms and created the potential for referenceable containers (or objects, in 1980s nomenclature).  But was this really AI, or was it that the referenceable elements created the impression that conditions could invoke established constructs?
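To make that mechanism concrete, here is a minimal sketch (in Python) of what a VisiExpert-style rule table might look like today.  The attributes, rules and pairings below are invented for illustration; only the lists-of-conditions-matched-against-a-table mechanism reflects the package described above.

    # Hypothetical sketch of a VisiExpert-style rule base: lists of conditions
    # correlated to a simple table, demonstrated (as the package was) with
    # wines and cheeses. All rules here are invented for illustration.

    PAIRING_RULES = [
        # (conditions that must all hold for the cheese, recommended wine)
        ({"milk": "goat", "texture": "soft"}, "Sauvignon Blanc"),
        ({"milk": "cow", "texture": "hard"}, "Cabernet Sauvignon"),
        ({"milk": "sheep", "texture": "hard"}, "Rioja"),
        ({"milk": "cow", "texture": "soft"}, "Champagne"),
    ]

    def pair_wine(cheese):
        """Return the first wine whose rule conditions all match the cheese."""
        for conditions, wine in PAIRING_RULES:
            if all(cheese.get(key) == value for key, value in conditions.items()):
                return wine
        return "House red"  # default when no rule fires

    print(pair_wine({"milk": "cow", "texture": "soft"}))  # -> Champagne

A real expert system would add rule chaining and explanation facilities, but the condition-to-table lookup at its core is exactly this simple, which is part of why the question “was this really AI?” keeps recurring.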

Since that time we have seen several of what have been referred to as ‘AI winters’.  In simple terms these were cooling-off periods caused by economics, introduction failures and the reduction of academic and industrial research.  In the corners, however, there have been those individuals who stayed the course and continued to chip away at the rough-hewn work of their predecessors.

Why Now?

There have been a lot of topical provocations that have brought AI back to the forefront: the utilization of big data, the advancement of robotics from the mechanical to the intellectual, and demand caused by shortfalls in talent resources.  So while society laments the concerns over loss of jobs and potential infringement on confidentiality, the real culprits are in fact self-inflicted.  As I quoted at the onset, the conceptual framework that reflects the interrelationships of these technologies (robotics, AI, analytics/data) remains loosely defined and fraught with prejudice based on personal opinion.  It is no coincidence that the few models that exist have yet to be proven in either an industrial or an academic research setting.  So how far would you extend trust unless you were also in an exploratory mode?

Let me get to the point on the two concerns voiced by the person on the street.  Concern one – job loss: YES, but these jobs will be replaced not just by people who care for and feed AI but also by those engaged in different, yet-to-be-defined jobs.  Concern two – confidentiality: YES, but fear not; it is not a matter of indiscriminate sharing by others but simply sharing based on the need to know.  Now this sounds a bit like you need to grant permission.  But in today’s world the act of engagement is an act of implied permission.  We must cast away 20th-century thinking if we wish to exploit 21st-century services.  I recently read a comedic piece posted by Phil Fersht, CEO of HfS Research, involving a phone order being placed for pizza delivery.  It goes as follows:

– Hello! Gordon’s pizza?
– No sir, it’s Google’s pizza.
– So it’s a wrong number?
– No sir, Google bought it.
– OK. Take my order please.
– Well sir, you want the usual?
– The usual? You know me?
– According to our caller ID, the last 12 times you ordered pizza with cheese, sausage and thick crust.
– OK! That’s it.
– May I suggest this time ricotta and arugula with dried tomato?
– No, I hate vegetables.
– But your cholesterol is not good.
– How do you know?
– Through the subscribers’ guide. We have the results of your blood tests for the last 7 years.
– Okay, but I do not want that pizza; I already take medicine.
– You have not taken the medicine regularly. Four months ago you purchased only one box of 30 tablets at Drugsale Network.
– I bought more from another drugstore.
– It’s not showing on your credit card.
– I paid in cash.
– But you did not withdraw that much cash according to your bank statement.
– I have another source of cash.
– That is not showing on your last tax form, unless you got it from an undeclared income source.
– WHAT THE HELL? Enough! I’m sick of Google, Facebook, Twitter, WhatsApp. I’m going to an island without internet, where there is no cell phone line and no one to spy on me.
– I understand sir, but you need to renew your passport; it expired 5 weeks ago.

Although totally comedic, it remains plausible.  But again my question repeats itself: is this really AI, or is it simply a form of what is also being touted under the label of DevOps (Development Operations)?

More Work Required

Aside from the need for framework clarity, there is also the question of transitioning society to embrace the emergent AI paradigm.  People aren’t looking to be sold on the merits or wowed by simple examples.  What societies need is certainty about the clarity of the vision and about how control can be maintained.  Images of runaway robots, inaccuracies, false actions and other elements of mistrust abound.  These represent apprehensions created by shortcomings that exist today without AI, and that can only be elevated by the deployment of a data-driven, rule-based solution.  Certainty must be earned and shown.

I also believe, among the many, many things that need to be done to promote and empower AI, that we need to be looking at things very differently.  If something is done in a certain way, do we still need to do it that way?  Do we need to ask the question if we already have the information, or do we need to do it differently, or do we need to do it at all?  This isn’t just pertinent to AI; it can equally be asked of robotics or even advanced analytical applications.  Do we need to perform a six-loop analytical calculation, or is it sufficient for us simply to be alerted and observe a present condition in anticipation of a potential outcome (see the sketch below)?  It’s these sorts of questions, but more importantly the thinking mindset behind them, that will put advanced technologies like AI into the realm of the usable and plausible.
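As a concrete, entirely hypothetical illustration of that trade-off, the sketch below contrasts re-running an expensive analysis on every reading with simply watching for a condition and raising an alert.  The function names, readings and threshold are invented for the example.

    # Hypothetical sketch: full recalculation versus condition-based alerting.
    # All names and the threshold value are invented for illustration.

    from statistics import mean

    def full_analysis(history):
        """The heavyweight route: re-run the whole calculation every cycle
        (a stand-in for a multi-pass, 'six-loop' analytical computation)."""
        return mean(history)

    def alert_on_condition(reading, threshold=100.0):
        """The lightweight route: just observe whether a condition holds."""
        return reading > threshold

    history = []
    for reading in [92.0, 97.5, 103.2]:
        history.append(reading)
        # Instead of calling full_analysis(history) on every cycle,
        # we merely watch for the condition and act when it appears:
        if alert_on_condition(reading):
            print(f"Alert: reading {reading} exceeds threshold; outcome may follow.")

The point is not the code but the mindset: sometimes the question is not how to compute the answer faster, but whether the computation is needed at all.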

Next?

Of course, until finality.  I believe we need AI, not because I’m a technologist fascinated by potential but because I am concerned.  I’m not concerned about job losses or confidentiality; they have existed, will continue to exist, and that is just the way it is.  If you think you can escape job loss by being self-employed, you can lose that too with the stroke of a governmental pen or a lack of personal attention to the care and feeding of an enterprise.  You can’t avoid the loss of confidentiality by believing you can lock it away; life in the 21st century is a transparent box whether you want to believe it or not.  If you want total confidentiality then become a hermit, but even the hermit has someone who holds his fingerprint.  AI, like robotics and analytics, has a place in creating an opportunity for thinking.  Whether that is focused upon innovating, optimizing, creating or simply recreating, the ability to put the routine parts of our life to task should be embraced.  Those who choose not to embrace it will be destined to an emerging-nation paradox, where brute force in numbers overlooks the efficiency of work augmentation through mechanization.  You can choose to break up concrete with a workforce of thousands or employ a machine that will convert it all to rubble in a matter of hours.  The decision is ours and ours alone.

Yes, there will be a next.  ‘The next’ will explore the clear thinking necessary to create an outcome-based AI, not a crafted emulation of what we think are human thought and logic processes.  It will be a look into the rough constructs necessary to transition from legacy intellect (automated and non-automated systems) to the next forms of creating a durable AI learning-machine paradigm.

Till then keep thinking and dreaming (unconventionally).
