The Converge exco attended the SAP NOW event in Sandton. A highlight of the conference was the appearance of Sophia (https://en.wikipedia.org/wiki/Sophia_(robot)). It is impressive, and at the same time quite disturbing, to consider the practical consequences of technology that has evolved to the point where we are starting to simulate human behaviour and thinking. Neural networks and machine learning are leaving us with questions that are, for now, rhetorical, but nonetheless moral.
It was after this event that our CEO felt inspired to write his monthly newsletter, shared below.
I recently had the privilege of witnessing a live interview with Sophia. It gave me goosebumps to see how, in real time, an artificially intelligent construct was able to respond to questions in natural language, not only coherently, but with what I dare call “innovative thinking”, and even humour.
A week later, someone showed me part of a TV show in which parents must predict what their children will do in unsupervised environments. At some point during the show, the children were confronted with the topic of temptation. Strangely, the temptation came in the form of a little robot that would try to convince the children to take something without asking. In several cases, the children at first responded in the negative, knowing that it would be wrong. A minute or two later, they proceeded to indulge in the “forbidden” cupcakes. When asked later why they did it, their responses were all the same: “The robot said I should.”
This fuelled the discomfort that was still lodged somewhere between awe and concern after witnessing Sophia dance through the interview with unintimidated ease and grace.
For centuries, we have perfected the art of judging one another, which has time and again led to genocide, even though, in truth, we were called to love, not judge. Now that we seem to have perfected this skill, it makes sense that the next logical venture on the “god-curve” (a term I heard someone use recently) would be to create something “in our own image”.
I find myself in two minds. On the one hand, I marvel at the limitless imagination of the human mind and our ability to create things that defy logic. On the other hand, I am confronted with a deep and profound sadness, and fear, at our ability to consciously sidestep the moral consequences of our actions. I also question whether we have thoroughly considered the unintended consequences of our progressive pioneering. In a world where we already suffer from “human anorexia” – where we are seldom fully present in the moment, due to our digital omnipresence – it does not seem otherworldly that in a not-too-distant future, people would have the ability to choose an AI partner over a human partner; one that can be programmed and re-programmed to behave exactly in accordance with their every wish. The dawn of a 21st-century slave trade…only I am not 100% sure who is the master and who is the slave.
I am left with questions that may not have immediate answers, but that certainly warrant thought. What is the opportunity cost of our progress? When is it no longer we who are making the decisions? Do we trust that through a series of “if..then..else” statements, we are able to mimic human morals? Who determines what those morals are? Am I satisfied that my child can indemnify themselves on the back of an instruction from an AI construct?
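To make the “if..then..else” question concrete: the toy sketch below (entirely hypothetical, with invented names) encodes a simple “moral rule” as branching logic. The rules look complete on paper, yet they have no concept of persuasion, trust, or responsibility, which is exactly the gap the cupcake episode exposed.

```python
def may_take(item: str, permission_given: bool) -> bool:
    """A toy 'moral rule' expressed as branching logic (illustrative only)."""
    # Rule 1: taking something with permission is fine.
    if permission_given:
        return True
    # Rule 2: never take what belongs to someone else without asking.
    return False

# The function answers only the question it was built to answer. It cannot
# weigh a persuasive robot saying "go ahead", peer pressure, or temptation;
# every situation the programmer did not anticipate falls through the rules.
print(may_take("cupcake", permission_given=False))  # prints False
```

The sketch is not an argument that such systems are impossible; it simply shows that each branch is a judgement someone, somewhere, had to hard-code, which is precisely the “who determines those morals” question.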
I am all for progress and the adoption of technology. I believe without question that in business, as in life, the adoption of technology is pivotal and unavoidable. We have, however, witnessed how human innovation (such as the splitting of the atom) can be abused. I believe, therefore, that in every such radical venture, it would be irresponsible not to simultaneously challenge ourselves to explore the unintended and moral consequences; not to stop progress, but to progress mindfully.