“A mind is a terrible thing to waste and I dread to think that it will become more so with the assistance of a collective of machines.” – J. Durant

No, my age has absolutely nothing to do with my concerns and apprehensions.  If anything they are the result of applied human intelligence and the creative processes that spin off from it.   It is fair to say that today we are in an experimentation cycle and are using research as a means to see just how far we can take the science of artificial intelligence.  There will be some, like Alphabet, who are ahead of many.  Let me be perfectly clear that this is a normal learning curve, one that benefits not just the innovators but later adopters as well.

I do have concern about the notion of intellectual implosion.  Intellectual implosion is where the deployment of AI becomes centered on a closed and narrow application universe.  A simple example would be the use of an AI framework that provides ‘what if’ guidance for business decisions.  Its sources of input become limited to a single domain: the company.   Now you might respond by saying that we then need to entertain other external sources in order to further elaborate the possibilities, both arithmetically and operationally.   But here is where we introduce several factors of concern involving these external sources.  Some of the concerns include:

  • Accessibility and negotiated access
  • Timely availability (at least as expedient as internal sources)
  • Reliability
  • Balance of understanding (simple definitions)
  • Units of Measure
  • Raw and Deductive Elements

It then becomes a question of need and of viewing the concerns with open eyes.  While traditional systems can contain damage, an AI system can propagate even the slightest fault extensively.   It’s not a ‘do not go there’ condition; it means that more rigor must be deployed to check, authenticate, isolate, repair, respond, or release in the course of use.
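The check-authenticate-isolate-repair-respond-release regimen can be sketched as a simple guarded pipeline. This is a minimal illustration under assumed names; the `Record` shape, the stage functions and the accepted sources are all hypothetical, not any particular product's API.

```python
# Minimal sketch of a containment regimen for AI inputs: every record must
# pass each gate before it may influence the system; failures are isolated
# rather than propagated. All names are illustrative assumptions.

def check(record):         # structural sanity: required fields present?
    return "value" in record and "source" in record

def authenticate(record):  # is the source one we have negotiated access with?
    return record["source"] in {"internal", "partner"}

def repair(record):        # normalize a fixable defect instead of discarding
    record["value"] = max(record["value"], 0)
    return record

def guarded_release(records):
    released, isolated = [], []
    for r in records:
        if not check(r) or not authenticate(r):
            isolated.append(r)          # quarantined for human review
        else:
            released.append(repair(r))  # repaired, then released for use
    return released, isolated

released, isolated = guarded_release([
    {"source": "internal", "value": -3},
    {"source": "unknown", "value": 7},
])
```

The point of the sketch is that containment is explicit and ordered: a record from an unauthenticated source never reaches the repair or release stages.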

Beyond Implosion

Let me reiterate that my concerns are not hesitation to progress.  In fact I am a strong proponent of most things that can make our lives more enriched and productive.  But with that said, it places an immense burden on the shoulders of engineers, architects, administrators and management to see to it that we act responsibly.  Our concerns are not just something downstream; they start with our present conditions.  Hardly a day goes by when we don’t hear of some technological mishap.

  • Compromises
  • Attacks
  • Failures
  • Denial of Service
  • Efficiency Losses
  • Unreliability
  • Usability Challenges
  • Technological Excesses

are but a few of the things we face today.  How will these challenge what we do tomorrow if we are to advance in the direction of AI?  Overlooking these does not mean AI will solve them; they will be magnified and even acted upon.  The old mnemonic GIGO (Garbage In, Garbage Out) takes on a whole new depth of meaning.  Where humans would once act, a rule-based action now occurs in a mostly non-visible fashion.  To emphasize this point, I recently read about experimentation being done at Alphabet where the AI platform had adapted itself to conditions that weren’t set in the rules (it came up with its own auto response/reaction).  This for some might be a bit disconcerting, but it shows not only the depth of capability but also the extensive level of human consideration that must be exercised with each and every element.  Catch states, rule limitations, plausible creation elements and redo back-checking are but a few of the safety nets that can be considered.
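One of those safety nets, the catch state, can be sketched in a few lines: when input falls outside the rule frame, the system escalates to a visible holding state instead of improvising its own response. The rules, condition names and actions below are arbitrary examples, assumed only for illustration.

```python
# Sketch of a "catch state" safety net with an auditable back-check log.
# An unknown condition is never acted on silently; it is logged and held
# for human review. RULES and the condition names are hypothetical.

RULES = {
    "temperature_high": "open_vent",
    "temperature_low": "close_vent",
}

def decide(condition, log):
    action = RULES.get(condition)
    if action is None:
        log.append(("CATCH_STATE", condition))   # outside the rule frame: escalate
        return "hold_for_human"
    log.append(("RULE", condition, action))      # visible trail for back-checking
    return action

log = []
known = decide("temperature_high", log)   # matches a rule
unknown = decide("humidity_spike", log)   # no rule: falls to the catch state
```

The log is the back-check: every decision, including the escalations, is recorded where a human steward can review it.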

Human Intellectual Depth

As suggested in the previous section, there is an elevated level of human thinking that is necessary in AI.  It’s not simply a language form, some rules, data inflow/outflow and a permissive deductive element… it requires real thought, collaboration, postulation, experimentation and a mindset of value generation and maintenance.  Present experimentation aside, we need to think abnormally.  I think of this in the sense that mere replication of present habits, conditions and outcomes may or may not be the way we need to go.  Why create a robotic arm that emulates the limitations of a human arm, or for that matter why create an arm at all when some other form of fetch-release-retain-manipulate mechanism might be better engineered?  In the same context, why think of needs or solutions utilizing AI in the same way as we would today?  While it may be comforting that we ‘can do it’, the focus is upon outcomes and growing possibilities.   Even these have a strong potential for change.  The fluidity of AI will change us from thinking in a steady-state sort of way to one in which we are driven by rapid adaptation.  The bigger limitation for mankind is the ability of adoption, and possibly whether some of the adoption will have to remain vested with the technology.  It remains quite possible that some adoption will remain out of human hands because of the frequency and extent to which it is taking place.

Big Questions

Thinking about the topic should excite us, but it is also apt to raise a multitude of questions, concerns and elements for investigation.  Listed below are a few of the ones that I have been pondering, and I hope they can be used as a basis for your further inquisitiveness.

  1. How will AI-AI or even Global AI be negotiated?
  2. Will AI-AI/Global AI represent a definable limited and restricted access point(s)?
  3. How do Smart Cities play into AI applications?
  4. To what extent will non-AI institutions hinder progress?  Who will be hindered: AI, the non-AI player, or both?
  5. What happens when AI actions cross over into territory that is either wrong or out of control?
  6. Will risk become normality and normality become risk (in the present context)?
  7. What rogue AI threats and issues will emerge?
  8. What other present-day technologies and practices will be put under strain?
  9. How much computing power will be required, and how much importance will be levied on comprehensive network connectivity?
  10. How does it affect society and human collateral?
  11. Will the divergence from task to intellectual focus enable or disable societies and companies?
  12. Pervasive and responsible constraint becomes a matter of philosophy.  Should it be regulated, mandated and reshaped?
  13. Convergent roles require collaborative AI interaction (e.g. elements of software engineering such as DevOps, verification & validation (V&V), and analysis).  Considered, overlooked or simply ignored?
  14. What extent of human intrusion, and at what level of intrusion?
  15. Is there a safe state for change, or is it invoked in real time (and should it be)?
  16. What paradigms will change, become obsolete or need to be totally created that involve not just AI but also its close partnership with Robotic Process Automation (RPA) and advanced analytics?
  17. Speed and quality have plagued businesses, will diametrically different levels of speed give way to quality issues (resolve, mask or create)?
  18. … Others… over time there are, I’m sure, more.  What will your additions be?

“For every complex problem there is an answer that is clear, simple, and wrong.” – H. L. Mencken.  And as I have often said, “A complex problem doesn’t necessarily require a complex solution.” – J. Durant

Do we move forward with AI…. YES.  But moving forward is not done with reckless abandon; it still follows sound business and engineering tenets.




“Fear is debilitating and causes irrational thinking.  We need to back up a couple of steps and look at things in a better light before we throw ourselves from the building.” – J. Durant

For quite some time we have been bombarded with articles and news about the coming of artificial intelligence (AI), robotics and all sorts of other ‘human replacement’ tools.  As I am sure some of you are aware, there are just some things that these tools can’t do, or at least that we would not permit them to do, even though they might be a bit more reliable, efficient and consistent.   Then again, we have walked the border with such controlled experimentation as cloning and various other forms of neuro-response solutions, so who knows just what kind of mad scientist may be lurking, ready to set forth their madness in an intellect-driven machine.

So let’s take a moment and accept a few things…..

  1. Some form of intellect-driven solutions will be put into play, and these will replace legacy solutions that involve a human component.
  2. Most likely, intelligent solutions will embrace a consumer-to-business (C2B) paradigm.  This will ultimately reduce costs and expedite the formation of relationships.
  3. The care and oversight for such solutions will require humans.  However, rather than relying upon casual oversight, they will be nurtured by intelligent analytics in either predictive or preemptive forms.   And,
  4. Robots are apt to be involved in order to serve as a service conduit into the interaction with the AI environment.

Now for the real questions that never seem to get mentioned by those expounding on the AI exploits of leading companies like Google, SalesForce, DeepMind, Facebook, OpenAI, Baidu, Microsoft Research, Apple, IBM and Remark.   In delving into these deeper, it is clear that there are a few elements to recognize.  Some of these are noted as AI involved based purely on name and maybe a slight bit of tinkering with the concepts and technologies.  There are some that are heavily focused on the AI mechanism that would be used to drive an AI-like behavior.  Finally there are some that have wrapped the AI wrapper around an intelligent process, possibly an advanced analytic element, and labeled it as AI.   You are also apt to see a similar situation with terms such as learning machine and robotics (especially those that are non-mechanical) as well.

Classical Transitioning Concern

An all too common condition in transitioning is having a plan that is doomed before steps are taken, directly as a result of existing issues.  Thus far we have not seen any of these points raised by the AI and robotics enthusiasts, or by those who have expressed guarded reluctance to journey toward their utilization.   Some of these existing conditions include:

  • Green field conceptualization of the AI model and the metamorphic conditions one can anticipate.
  • Interface negotiations from senders as well as receivers.  Stakeholders in receiving solutions may be a bit reluctant to accept AI-driven infeeds.
  • Verification and Validation (V&V) readiness.  Most recently, British Airways had a system-wide shutdown that crippled their operations.  If we are experiencing these conditions in complex, networked but traditional systems, what is it going to be like with ‘fly-by-automation’ systems like AI?
  • What mechanisms will be used to fuel the AI solution?  Will those mechanisms be ready to provide reliable feed information but in doing so be expedient enough to fuel the AI application?
  • Have boundary reach parameters been set?
  • Consideration for security, validation, performance, real-time conflict management, in-flight updating, and some of the more technical elements of AI?
  • Has thinking been directed toward the ‘right solution’ rather than remaining focused on the existing solution?   Again, more green-field/blue-sky thinking.
  • Formulating a growth based design that will engage elements of robotics and analytics.
  • Understanding that the AI solution may be more than just an event-driven paradigm and will demand the inclusion of event-based stimulation, deductive modeling that builds upon (or adjusts) a rule-frame repository, and the concepts of prediction, authentication and progressive simulation (apart from the live environment).
  • Destination-driven repository containers that are distributed but interconnected globally, as opposed to single-destination service.  This also brings up the question of non-stop up-time.
  • Extent of human or non-human intervention schema.

And there are a lot more that are required in order to insulate against failure and elevate the opportunity for success.

Circle Condition

Unbeknownst to the consumers/recipients of change, there exists some form of exploratory cycle.  It may be as simple as a survey and an alpha test of the market, or as formal as experimental research.   I reread an article (actually from a different source) on TensorFlow Playground, a working example of neural network technology.  Impressive and stimulating, a well-illustrated form of scientific/mathematical application used to draw deductive, suggestive outcomes with a high probability of accuracy (but not at 100%).  Then I got to thinking about whether 100% is attainable by humans either; after all, we are prone to mistakes, whether through random inattention or as the result of circumstantial conditions.  Clearly the purpose is to build a sense of trust and understanding, as a commercial effort for the marketplace.  It also illustrates that the technology was being applied to the known science of math to legitimize its ability.  What we have seen, however, is that the line between research and usable solution is often fuzzy.  Jumping from concept to application overlooks some required grooming, especially in this case, for a science that has an element of runaway evolvement based on conditional stimulation and seed data.

In some respects the concepts and principles of AI follow a similar path to compilers.  There is a finite set of conditional parameters that can be involved, based on formalized criteria set by the institution, to produce an outcome.  What creates the circle is that the outcome is then fed back into the process, whereby some events may be repeated and others take a totally different path.  The fear isn’t in the use of the technology; it’s in all of the possible things that can go wrong.  We must understand their potential and determine the appropriate level of care that must be exercised.   This is not a path without precedent for such debates.  Space programs, nuclear reactors and fly-by-wire systems have all had their moments of glory, and those times when intervention (and often spot creativity) had to be exercised.
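The circle condition can be sketched as a bounded feedback loop: each outcome is fed back in as the next input, and without guards (a cycle limit, a stability check) the loop could run away. The update rule below is an arbitrary illustrative example, not a real AI mechanism.

```python
# Sketch of the "circle condition": the outcome of each pass becomes the
# input to the next. Two guards bound the loop: a maximum cycle count and
# a stability (settling) check. The 0.5*x + 1 update rule stands in for
# the institution-set, finite conditional parameters.

def cycle(value, max_cycles=10, tolerance=1e-6):
    history = [value]
    for _ in range(max_cycles):
        outcome = 0.5 * value + 1.0       # formalized criteria produce an outcome
        if abs(outcome - value) < tolerance:
            break                          # stable: the circle has settled
        value = outcome                    # feed the outcome back in
        history.append(value)
    return value, history

final, history = cycle(0.0)
```

Here the feedback happens to converge (toward 2.0); the guards exist precisely because, with a different update rule, it need not.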

So Where Are We Now?

We are in some interesting times.  The degree and speed at which AI will advance remains uncertain.  My suspicion is that those already poised with intellect-driven tools, whether predictive analytics (ready for preemptive forms), robotic clusters looking to advance beyond rule-based paradigms, or semi-thinking information technology solutions looking to merge trends with behavior change, will definitely have a leg up.  For the rest it will become a decision whether to wait or to start taking now some of the formative steps that already exist for the organizations that are poised.   Looking at past failed attempts at AI, failure was largely the result of limited institutional support (left mostly to universities and the Department of Defense with Ada).  Today respected institutions, like Google, provide a groundswell of interest and support by association.  Whether it is rightly so is not up for debate but rather to be acknowledged as a fact.  It is not without risks, but as long as we humans have control we can do what is needed to ensure that our AI will succeed in a controlled and appropriate fashion.

From that single organic nodule of packaged life an offspring is produced, or not.  This week has been particularly enriched with insight and wisdom, some unsolicited and some remaining a bit of a quandary.

This week Hubert Dreyfus passed away.  A professor and a human being, a philosopher who challenged us to consider the practical limits of computers.  Aside from his academic acclaim and intense experience as a human being, his message was far deeper than the anecdotal point on message.  Yes, he asked us to consider the depth of use and application of technology in our personal lives, business and society.  He appears to have known the extent of human temptation and addiction.  Today we are drawn to the light like a moth to a candle, embracing the new and forsaking the tried and true.  Is it because we are afraid of being left behind, or are we considering the real vs. illusionary value of it all?  Or is it that we are sitting on the edge of restrained obsolescence, and the jump seems right even if we might be stepping out into the darkness with hardly a basis of comfort?  Although I never had the chance to meet him in person, I enjoyed some brief ‘technological’ interchanges to better understand his stance on artificial intelligence.  During my advanced studies he provided invaluable opinions about the differences between machine learning and the need for social contact and interchange, along with some quite private discussions about risk.   It was a bit unnerving to realize what was always in front of my eyes: that success or destruction is not in the device but in the enablement provided by the human steward.   To me it wasn’t just a learned opinion, although he was gifted with experience at MIT and Rand, but deep and profound thought given to the subject.

The Junk Drawer

Few people do not have a junk drawer.  Whether it be made up of household repair items or kitchen gadgets, we all manage to eke out a space to stash away a much-anticipated device of salvation.  Likewise we see the emergence of the same for abandoned technology.  Cables, cell phones, chargers and various expired ancillary devices find their way to ‘the drawer’.  We pretty much know that what goes in is unlikely to come out and be used; time is not on the side of technology and the obsolescence that occurs.  But our frugal nature suggests that we, or someone, might find a need or use for these cast-aside items.   I mention the junk drawer in the broader context that we have lots of technologies that have come and gone, often replaced by what appeared to be superior solutions that later proved less superior than even more future ones.

I think back to my very first exposure to artificial intelligence in the 1970s.  It was with a very simple but quite illustrative product called VisiEXPERT, produced by the now defunct Visi Corp.  The product was a very rudimentary rule-based artificial intelligence (AI) solution.  Its operational example used the pairing of cheese with wine and allowed for the addition of new elements and relationships.  At that time it was not robust enough to learn or be driven by inference; it required aggressive assertions in order to advance outcome delivery.  Later, in the 1980s, we saw Ada and LISP develop as service languages supporting data-driven behavioral modification processes.  In both cases their emergence was not months but decades in the making, and although solidly formed they struggled to produce a groundswell of disciples.

So our junk drawer continues to grow, and with this we see a rekindling of interest.  For those in the AI community the drive is not so much from the technology as it is the promotional support of the business community.  AI is talked about in a single breath with learning machines, but how does this all fit together with life?

Embryonic Appearance

This past week (May 5, 2017) I read a piece that Scott Ambler, a legendary agile disciple, wrote about the “Dearth of Qualified Agile Coaches”.  The points reflected a condition in which labels get applied but the lack of substantive value creates an abundance of non-value.  Even though the focus was on promoting professional qualifications, there remains a quite similar condition as it pertains to AI.  We see a plethora of AI-involved entities who for all intents and purposes are new market entrants.  But let us not be fooled by capabilities driven by attending a course; as Scott pointed out, it takes intense and purposeful experience to fulfill the obligations of an expert in a given field.  Even in my case, with over 40 years in the information technology field and intensely active engagement in AI-related activities, I still feel I have lots to learn.  It’s from this vantage point that I wish to ask the question about capabilities.

When I think in quite simplistic terms as they pertain to AI, I think of ‘the seed’: that kernel catalyst that will drive the growth of technology-based learning.  I also think about risks and what level of permissiveness we should allow the AI model to undertake.  Embedded in that kernel is data, and we need to be astutely aware that data is not always clean, controlled and ready for use.  It necessitates sterilization to make it ready, and all must be done in as near to real time as possible.  Momentary lapses in time or hesitation in the commitment to cleanliness jeopardize the AI value proposition.
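The sterilization of seed data can be sketched very simply: incomplete or unparseable records never reach the model. The field names and the "drop rather than guess" policy below are illustrative assumptions, not a prescription.

```python
# Sketch of "sterilizing" seed data before it feeds an AI model: records
# with missing or unparseable values are dropped rather than guessed at,
# and surviving values are normalized. Field names are assumed examples.

def sterilize(records):
    clean = []
    for r in records:
        if r.get("reading") is None:       # incomplete: never feed the model
            continue
        try:
            value = float(r["reading"])    # normalize to a numeric value
        except (TypeError, ValueError):
            continue                        # unparseable: quarantine by omission
        clean.append({"reading": value, "unit": r.get("unit", "unknown")})
    return clean

seed = sterilize([
    {"reading": "42.5", "unit": "C"},
    {"reading": None},
    {"reading": "n/a"},
])
```

In a real pipeline this filtering would run continuously, since (as the paragraph notes) a momentary lapse in cleanliness is enough to jeopardize the value proposition.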

To further emphasize this point I will refer you back to some earlier writing I did on advanced analytics.  I find that while analytics also makes use of data, it has the potential to become a close partner with AI and the learning machine.  As stated in that earlier article (10/15/2014), the real next generation is not in perceptive or predictive analytics; it is in the realm of preemptive analytics, which takes action versus alerting us or indicating that a condition has the potential to occur.  In short, the model reflected the raw basis of the learning machine.  While not centered on growth of knowledge and more centered on action, the elements exist for feeding the AI model through preemptive analytics.  I also contend that anything short of soundly grounded preemptive logic, predictions included, is really shaky ground for AI.  The basis of this opinion is the potential for runaway illogical reaction by the AI model paradigm.  In the simplest of examples, I think of how I look at a situation and react, only to later discover it was not exactly as I saw it.  If this had been applied to an AI scenario, who knows whether reversion would have occurred, and if it did, was it through intervention or a separate set of rules to deal with error management?
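The predictive/preemptive distinction, plus the separate error-management rule the paragraph asks about, can be sketched in a few lines. The threshold, the action names and the error flag are all arbitrary assumptions for illustration.

```python
# Sketch contrasting predictive analytics (alert only) with preemptive
# analytics (take the action), plus a separate reversion rule for error
# management. The 0.7 risk threshold and action names are hypothetical.

def predictive(risk):
    # Predictive: merely indicates that a condition may occur.
    return "ALERT" if risk > 0.7 else "OK"

def preemptive(risk, acted_on_error=False):
    # Preemptive: acts on the prediction itself. A wrong action is
    # handled by a distinct rule (reversion), not by the main path.
    if acted_on_error:
        return "REVERT"                    # separate error-management rule
    return "THROTTLE" if risk > 0.7 else "PROCEED"

alert = predictive(0.9)                        # human must still act
action = preemptive(0.9)                       # system acts on its own
rollback = preemptive(0.9, acted_on_error=True)  # the situation was misread
```

The last call is the "it was not exactly as I saw it" case: reversion here comes from an explicit rule rather than ad hoc intervention.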

Conclusion… Just a Wee Bit of Fertilizer

No planting would be complete without a bit of care and accelerated nurturing.  AI is no exception.  In this context our growth-enhancement hormone is a combination of pragmatic engineering, anticipatory examination and a purposeful examination of our present state of intellectual discourse.  Most would agree that humans make errors, and thus anything we do is prone not only to error creation but also to error propagation caused by what we presently do.  One cannot view human-emulated thought processes as simplistic.  Even the most rudimentary movements, considerations, evaluations and commitments entail literally thousands of possible paths and choices.  While technology can handle volume and responsiveness, it remains the dutiful obligation of humans to craft the paths, the gates of decisions, the correlation of relationships and the discernment of probable paths, with rational and commonplace or dissenting opposition.  It’s for these reasons that the personnel engaged with AI cannot be casual technological Spartans.  Technical proficiency will remain important, but the management of the operating intellectual paradigm will remain critically essential.  This involves everything from raw data in-sourcing, progressive analytics, and paradigm development and deployment to error-corrective models as the minimum for sound control.   Even then, understanding is bounded by experience, and therefore the secondary abilities to communicate, examine and model will help bolster the skill set needed for the application of AI.

To the consumer there may be some worry or dissension, much as was the case with the use of voice response systems.  People are hesitant to embrace what is uncomfortable or viewed as inadequate by comparison to what they have grown accustomed to.  So while AI proponents dabble in the science, there remains a great degree of need for transitioning people to a new world order.  Some may enjoy a more hidden front to consumers, where others will be challenged regularly by real-time consumerism exchange.  Simply remember that all things are solvable as long as we understand the nature of the beast, the human condition.

Each and every day we face change.  Our autonomic response isn’t always logical, planned or successful.  Even when facing successful outcomes, one has to understand whether it is the result of fate or the result of experience.  In more pragmatic endeavors we cannot rely on these two elements completely and need to adopt a framework that guides these efforts.

Daily Life

In 2010 I made the decision to move to Asia.  My decision was driven by circumstances and opportunity.  Relying on my many years of work in the region, and supported by solid pragmatic skills, I ventured forward.   Was I confident… YES.  Was I committed and determined… YES.  Was I confident I could address the unforeseen… YES.  But did I really understand the full extent of the endeavor and all of the things that would come my way?  I can honestly say that I was too confident to see the barriers, even after decades of living life.  This situation is akin to many projects and transitions facing business, whether it be the disruption created by technologies or the advent of new leadership in the company.  The transition is seldom without issues and excesses that could have been avoided.  So why don’t we avoid these repetitions?

Part of our challenge is to avoid looking at the end point (goal/objective) and then reverse engineering a plan to fit the commitment of scope-schedule-cost, also known as the triple constraint.  Instead we need to better understand and craft a consistent means of achieving sound transitioning.  Yes, we still need to have goals/objectives.  Yes, we still need to understand the triple constraint.  But we also need to have a means by which transitioning will occur in a lean, consistent, risk reduced fashion.


As with many things in life, it moves on whether we like it or not.  We can hold onto the car with both hands, or we can grasp the steering wheel and guide it to its objective without major concern for a calamity.   I have also had a fond love for, and adopted, the principle of Newton’s Third Law of Motion, “for every action there is an equal and opposite reaction” (thus the formation of 3rdLAW).  The concept of Transitional Sciences is that every transition will create equal and opposing reactions in so far as achieving success points.  Out of concern for making the topic too complex, let it suffice to say that these embarkations of transitioning produce outcomes that can have equal consequences.  There is no such thing as too small to address this concern, since a wasted moment involves time, costs, diversion of focus, and lost/gained confidence.  I was recently challenged by a colleague on the mere term “transitional sciences”, and in response I chose to break it down into pieces.  “Transition”: the means of movement from our present to our next state.  “Science”: a body of facts or truths supported by research and experience.  Thus it seems quite appropriate and proper to state that “Transitional Sciences” is properly titled.  Furthermore, while it involves change, it commands the need for sound management, thus the concept of “Transitional” control.

While We Spin

We can’t stop the world from spinning, nor should we even suggest that it should, lest we fall off this planet.  The same can hold true as it relates to inhibiting the advancement of technologies.  I’m sure there are others who feel a bit like I do: that overnight we have experts in an emerging topic who previously were sitting in some other area of expertise.  We see data scientists, big data managers, cloud professionals, artificial intelligence gurus (have you heard of LISP or Ada?), and so on.  Maybe it’s from an acquired educational foundation, as we see with statisticians owning analytics, or artificial intelligence (AI) owned by those who had some experience with preemptive logic.  However, with all due respect, a point of reference does not make you an expert; you need to understand the broad context of cause and effect.

So why am I (and my team) qualified in the area of Transitional Sciences?  When I asked this question, the response I got was less than reassuring.  Upon closer examination I discovered that it wasn’t the topic but the question, so I re-framed it.  “What kinds of work engagements have you had that encountered challenges?” was my next question.  I got loads of examples, and upon closer examination two elements were revealed.  The first was that they all involved change, and nearly all of them had some sort of project plan to guide change.  So what went wrong?   It was then we discovered, in somewhat of a bolt of revelation, that the plans failed to address the transition from where we are to where we want to reach.  Further, they overlooked the impact of interim events and the human factor, and were too heavily oriented towards tasks.  It was at that moment we realized that transitioning wasn’t being considered.

Our next step was to investigate whether others had been involved in transitioning.  A simple search revealed that the topic received virtually no treatment in the worlds of business, technology, innovation, startups and even management disciplines.  We were particularly concerned not simply by this void but by the plaguing question of why: had the issue already been addressed, or had we stumbled upon the holy grail of needs?  Deeper examination showed that it wasn’t something overlooked; it had become buried in deeper programmatic processes of plans and goals.  In other words, instead of removing the paint of past behavior, the direction followed was a skim coat over the top of what had been taking place.

The final step was to question whether this approach was OK.  We relied heavily on an examination of the changes in project management during the last four decades.  The range involved ad hoc process, the craftsman paradigm, simplistic waterfall, permutations of waterfall, and present-day agility and all of its variations.  Resoundingly, the evidence showed two things.

  • the transitioning from each and every project management paradigm was a significant problem (and created a lot of consulting opportunities to help with… “transitioning”), and
  • project management has shifted from ‘the plan’ to ‘the means by which an outcome can be achieved’.

Therefore we have come to realize that to save time and money we have to be equipped to transition efficiently.  Whether you are a customer receiving new technology solutions, a company producing product, or a company changing leadership or business direction, transitioning plays an essential role.

Our Transition

Looking forward is exciting, but it can also be viewed, by some, with trepidation.  The key in transitioning new discoveries is market conversion, or the ability to transition traditional thinking into productive gains.  Often this relates to the importance of saving time and money while doing so with minimal disruption and risk.  Our confidence resides in the reality that these aspects are in fact ‘transitional’, and to prove the model sound we must be successful in doing what we are promoting (hope that makes sense… if not, read it slowly again and maybe you will understand).

Transitioning is not only exciting for me, it is also essential in successfully producing forward-moving outcomes.  The transitional framework has to be lean, efficient, effective and understandable without exhaustive workshops, skill development and customization.  Much of this will be illustrated by some upcoming papers, projects and presentations over the next several months.

If you have any interest, or have ideas and suggestions, please forward them to me at


We hear a lot about design…..

  • Simplicity of the design,
  • Durability of design,
  • Beautiful design,
  • Failure in design, and
  • Blah-Blah-Blah design.

What the heck is all of this stuff about design?  Thousands if not tens of thousands of books have discussed, formulated and promoted design as the key to outcomes.  But as we can see, the buck often stops at the talk and rarely goes anywhere beyond that stage.  Why is this happening?


Myth #1 – Design Is a Title

Sure, you may have the word in your title, but design is not reserved to one person or one group.  Everyone does some sort of design.  Design is not a task but an interconnected set of events that compositely give us a sense of doability, complexity and the direction in which to advance the development of a product or solution.  I’m sure that most of us have sat in meetings in which business units sketch out what they are doing and what they would like to have happen… this is design.  Albeit possibly not to the formal liking of the purists, it is nonetheless design, so don’t make fun of it!  Even the administrative assistant who is arranging that all-important luncheon meeting is doing so by design.  Some design from experience and habit, and some by a sketch of things to do, organized into some paradigm that they (or others) can utilize.  So design is not reserved just for those with the title; everyone is a designer in their own right.  If we think back to our childhood, the moment we observed things and later picked up a crayon, pencil or pen, our goal was to design something.  Maybe it was a big red barn, a portrait of our family (in stick-figure form, along with Fido) or maybe it was a more ambitious project to form shapes that led us to the creation of letters.  All of this is design.


Myth #2 – Design is Necessary

I prefer to think of design not as being necessary but merely as the natural by-product of thinking.  The human mind cannot process chaos; it strives for order and seeks out some place where past experiences have had similarities.  Freeing the process to seek a base of information is essential, but in doing so we need something that gives us certainty about so many questions.  Some look for solutions, others question the importance, and some even wonder whether we have reached some reasonable level of completeness.  To answer these and many other questions we look for help, and to get it we turn to formed methods and techniques.  Experienced individuals will have their own preferences, and even some custom-made methods that they rely upon to acquire design awareness.  I have relied on tools such as the Unified Modeling Language (UML) (structural, behavioral and interactive models) to exercise the multifaceted features of an application project, but my design toolbox isn’t limited to this one approach.  Such techniques as high-impact inspections, test-driven development (TDD) and rapid story creation have been equally valuable and gratifying.  These are all beginnings leading to a design vision.  It very well can be, and often is, more than one, and this is okay.  Multiple options give us flexibility and alternatives, which we are most often going to need during the course of a project.  In the case of Agile projects, if we look at the arrangement of stories into families, usually surrounding the notion of work to be performed, we have a design that can be lifted from it.  In the more classical context of waterfall or V-model methods, we are more apt to conceive design as a by-product of understanding and embracing requirements.  Both are the same, yet achieved in different ways (and possibly, with prejudice, a different expected value to be achieved from each of the approaches).
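Since TDD is one of the techniques mentioned above, here is a minimal sketch of the test-first rhythm.  The names (`shipping_cost` and its tests) are entirely hypothetical, invented only for illustration; the point is that the tests come first and the design of the function falls out of them.

```python
# Test-driven development (TDD) in miniature: the tests below were
# conceived first ("red"), and the function was then written just to
# make them pass ("green").  Nothing exists that a test did not demand.

def shipping_cost(weight_kg: float, rate_per_kg: float = 4.50) -> float:
    """Flat-rate shipping cost, rounded to cents."""
    if weight_kg < 0:
        raise ValueError("weight cannot be negative")
    return round(weight_kg * rate_per_kg, 2)

def test_basic_cost():
    assert shipping_cost(2.0) == 9.0

def test_zero_weight_costs_nothing():
    assert shipping_cost(0.0) == 0.0

if __name__ == "__main__":
    test_basic_cost()
    test_zero_weight_costs_nothing()
    print("all tests pass")
```

The design insight TDD yields is exactly the kind of by-product described above: the shape of the function emerged from the questions the tests asked, not from an up-front blueprint.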


Myth #3 – Design Is Magical

This is where it gets a bit dicey… is design something you must do regardless of whether sound engineering principles are being applied?  Proponents of building and exploring (which is by definition research) really do work without a design.  Is that acceptable?  From practical experience, I would say yes.  It’s not that design isn’t important or necessary; it’s that the process being used is focused on doability and the possibilities that can be exploited, nothing more.  For this reason this sort of work is intended to lead to knowledge, which in classical terms is a defined requirements vision, against which the formalization for completeness can be exercised.  Unless we intend never to do anything more with the result, including repair work, design is not needed.  However, most will reverse engineer a design from what is produced, not only to create a positive maintenance atmosphere but also to definitively answer the question of whether the endless possibilities of the research have been reached.

For others, design artifacts of any kind are used as guides to form work and task lists of things to be constructed to achieve the desired outcome.  This includes a definitive understanding of how the piece parts will work in harmony or contention, based on the desired design outcome (and requirements vision).  The creative aspect of design takes place as the result of formation and the convergence with specific technologies, be it hardware, software or idiom (SaaS, Cloud, Big Data, etc.).  It’s in this exercise that we start to see balancing, or the absence of it, occurring.  We use the term dependencies to reflect the level of cohesion and interdependency that exists in a design.  There are reasons for tight coupling, often centered around constraints (memory, bandwidth…), but sometimes it is caused by gross negligence and poor practices.  Design management cannot be taken lightly; it requires purposeful and dedicated attention to ensure that all is made right.


Myth #4 – Design Is a One-Size-Fits-All Proposition

While the concept is rational, at least insofar as the initial deployment goes, subsequent activities that involve changes and modifications result in an erosion of the design.  This produces some interesting challenges for such paradigms as the software reuse factory, refactoring and agile-driven modularity.  So how can this be overcome, if at all?  Since it is a foregone conclusion that our lives will not remain static, we have to assume change.  We might even have a pretty good idea as to where it will take place, when it is likely to occur and to what extent it can be expected.  If we have even the slightest idea about any of these things, we can insulate our general design in two possible ways.  The first is by utilizing a plug-n-play approach where pieces can be exchanged in and out of the solution set.  The second, more difficult approach involves design retrofitting, the act of redesign taking place as changes occur.  In order to sustain a durable and reliable design one must have software support to provide real-time, dynamic visibility into the solution set.  Using this information is like having a dashboard that permits us to assess the impact of a change against the target design it is being made to.  Not all redesign is bad.  In fact it can be a welcome relief for legacy applications and those in which design was never really a strong point from inception (e.g. caused by rapid-response-driven deployment or first-time-out solutions).
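As a minimal sketch of the plug-n-play idea (the names `PricingStrategy`, `StandardPricing` and `DiscountPricing` are hypothetical, invented purely for illustration), the trick is to design against a narrow, stable interface so that pieces can be exchanged in and out of the solution set without disturbing their surroundings:

```python
from typing import Protocol

class PricingStrategy(Protocol):
    """The stable 'socket' in the design; any plug need only honor this."""
    def price(self, base: float) -> float: ...

class StandardPricing:
    def price(self, base: float) -> float:
        return base

class DiscountPricing:
    def __init__(self, pct: float) -> None:
        self.pct = pct
    def price(self, base: float) -> float:
        return base * (1.0 - self.pct)

def quote(strategy: PricingStrategy, base: float) -> float:
    # The surrounding solution never changes when a new plug is swapped in.
    return strategy.price(base)

print(quote(StandardPricing(), 100.0))
print(quote(DiscountPricing(0.2), 100.0))
```

Retrofitting, by contrast, would mean reshaping `quote` itself each time pricing rules change, which is why the interface-first route is usually the cheaper form of insulation.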

Myth #5 – Design Mastery Can Be Taught

There are some things that you simply can’t teach; they must be experienced.  It takes practice, failure, listening, observing and mentored guidance to become not just the ‘titled’ individual but one who clearly understands and has mastered design.  Disciplines such as structural and civil engineering mandate not just schooling but also board examinations and a period of internship.  Yet in information technology, certifications are a matter of choice and not of mandate.  Some might argue that the risks are different, but are they?  Maybe the most compelling argument for experience is that it converts intellectual visualization into a habitual command of the means of achieving designs that work.  We sometimes make the mistake of thinking about the end and not about the means of achieving it.  In the immortal words of Steve Jobs, “Design is not just what it looks like and feels like.  Design is how it works.”  Maybe this is why Steve Jobs worked so well with his longtime friend Steve Wozniak.