
Does AI have the potential to lead to decidophobia?

Recently, I was asked to comment on the future of artificial intelligence (AI) and bots for a magazine article. The public interest in AI is no surprise to me, given that its growth seems to have no limits, yet this growth also instils in many people a reactionary fear of robots taking over the world. The exponential growth of AI can be quantified by referencing Forbes research reporting that the number of active AI start-ups has increased 14x since 2000, while venture capital investment into AI start-ups has increased 6x over the same period. Incredible growth that is set to accelerate, I’m sure you will agree.


Earlier in the year, at RattleHub, we went on a hiring drive to recruit data scientists and senior developers, and I found it interesting to note that the share of jobs requiring AI skills has grown 4.5x since 2013, with machine learning and natural language processing the most in-demand skills today. As a software developer, I find myself wondering about people’s fear of AI and how industry leaders should best manage it. For a fascinating insight into AI, I recommend you read Stanford University’s inaugural AI Index report here: https://ai100.stanford.edu/sites/default/files/ai100report10032016fnl_singles.pdf. Stanford has undertaken a One Hundred Year Study on Artificial Intelligence (AI100) looking at the effects of AI on people’s lives, basing its inaugural report and index on the initial findings. While reading the report, the following sentence jumped out at me: "Society is now at a crucial juncture in determining how to deploy AI-based technologies in ways that promote rather than hinder democratic values such as freedom, equality, and transparency." The AI Index is explicitly focused on tracking the activity and progress of AI initiatives, and on facilitating informed conversations grounded in reliable, verifiable data. This year’s report highlights just how vibrant AI is becoming as a field, based on academic, public-interest, and business metrics.


So there is little doubt in my mind that the rise of AI is often met with reactionary fears of robots taking over. Yet I feel that it’s what’s left out of this conversation that is a far more practical threat. As developers (and as humanity in general), we should instead be concerned that AI could potentially be hijacked. I don’t believe the promise of AI will be hijacked by rogue computers out to destroy humanity (though that certainly makes a good movie script), but rather by people with ulterior motives. Allow me to elaborate by highlighting that today, I have five different apps, and even more steps to follow, to order a pizza for tonight. The rest of our lives are just as complicated, if not more so. We all probably have a vast collection of apps designed to help us with some small part of our lives, yet in the pursuit of simplicity, we’ve ended up making our lives far more complex. The Stanford study highlights that we’re mostly “flying blind” in our conversations and decision-making related to artificial intelligence.


Firstly, let me state that AI is not a future technology. A basic form of AI is already here, and it is called decision support. It helps us make decisions based on our behaviour: recommendation engines already suggest products for us to buy, and navigation systems tell us the best way to drive home, taking into consideration factors like real-time traffic flow, accidents, and so on. As AI gets more embedded into our existing social fabric, it will undoubtedly play a more significant role in shaping how we perceive the world and how we share our data. So the first question I think we should really be asking is: "What happens when our computer-powered assistance is so commonplace that we become totally dependent on it?" We have all probably heard of FOMO (Fear Of Missing Out). FOMO is the result of social media warping our human trait of seeking recognition from our peers, by allowing us to create a landscape in which we present only the best versions of ourselves to the world. To the outside world at large, our lives look like one colossal fun party, and we are left to feel that if we don’t keep up with others, we are missing all the fun.
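To make "decision support" concrete, here is a minimal sketch of the kind of recommendation engine I mean, suggesting products based on what similar users bought. Everything in it, from the product names to the co-occurrence scoring, is invented for illustration and is not how any particular real system works.

```python
# A toy illustration of decision support: recommend products by
# co-occurrence with items a user has already bought. All data and
# names here are invented for illustration, not any real system.
from collections import Counter

# Purchase histories for a handful of hypothetical users.
histories = [
    {"pizza cutter", "oven gloves", "flour"},
    {"flour", "yeast", "oven gloves"},
    {"pizza cutter", "yeast"},
]

def recommend(user_items: set[str], top_n: int = 3) -> list[str]:
    """Suggest items that co-occur with what the user already owns."""
    scores = Counter()
    for basket in histories:
        if basket & user_items:           # basket overlaps with the user's items
            for item in basket - user_items:
                scores[item] += 1         # count co-occurrences as a crude score
    return [item for item, _ in scores.most_common(top_n)]

print(recommend({"pizza cutter"}))        # e.g. ['oven gloves', 'flour', 'yeast']
```

The point is not the algorithm itself but the pattern: software quietly nudging a choice we used to make unaided.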


So if AI continues along the path it’s going down today, I think it’s safe to say that the “fear of making decisions alone”, or FOMDA, will become a reality. Don’t bother looking up the term, as I just made it up. Admittedly, there is already a term that covers FOMDA, namely “decidophobia”, but that isn’t really going to stick in my mind, so someone, please come up with a better one. FOMDA will, therefore, feed off the very same human trait exhibited in FOMO; only, in this case, we will look to machines, not each other, for validation. In my mind, our growing dependence on decision support is where artificial intelligence is at its most dangerous. Behind every computer algorithm there is a programmer, and behind every programmer is someone setting a strategy with a specific business or political motive. It would therefore not be a leap too far to envision developers of AI systems, motivated by self-interest, training computers to manipulate people’s lives in subtle ways, essentially lying to us all through the very same algorithms that guide our thinking.


Worst of all, FOMDA would then ensure that, because we’re so terrified of making our own decisions, we would probably go along with it anyway. The coming tidal wave of decision-support-based AI threatens to give a few people a disproportionately large amount of suggestive power over many people. This suggestive power over others is the kind of power that is hard to trace and almost impossible to stop. Facebook has seen this all along, and has understood and sold this ability to advertisers for years.


The term “Butterfly Effect”, coined by Edward Lorenz, jumps to mind. In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in significant differences in a later state. FOMDA is a perfect example of the butterfly effect’s potential, where a tiny change in the world can cascade into much more significant changes over time. In the case of AI, it can play out through very subtly modified software algorithms. For example, a computer programmer could make a small tweak to a search algorithm that directs people towards more radical content and away from more moderate content; a subtle, almost undetectable change in one system could thereby alter the outcome for billions of people. People’s perceptions would then shift as the more radical material comes to be seen as the norm. Such power is invaluable to motivated politicians or businesses, and it is, therefore, in my humble opinion, a far more pressing challenge that we face in a world in which computers make many more decisions for us.
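To make that "small tweak" concrete, here is a deliberately simplified sketch of a ranking function in which a single line quietly biases results. The relevance scores, the 5% weight, and the "radical" flag are all invented for illustration; real search ranking is vastly more complex, but the principle of a tiny, hard-to-detect nudge is the same.

```python
# A deliberately simplified ranking function showing how one quiet
# tweak can bias what people see. Scores, weights, and the "radical"
# flag are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    relevance: float   # how well the page matches the query (0..1)
    radical: bool      # hypothetical flag on the content's slant

def rank(results: list[Result]) -> list[Result]:
    def score(r: Result) -> float:
        s = r.relevance
        # The "small tweak": a barely visible nudge toward one kind
        # of content. No single user would ever notice 5%.
        if r.radical:
            s *= 1.05
        return s
    return sorted(results, key=score, reverse=True)

results = [
    Result("Moderate explainer", relevance=0.80, radical=False),
    Result("Radical hot take", relevance=0.78, radical=True),
]
print([r.title for r in rank(results)])
# The radical item (0.78 * 1.05 = 0.819) now outranks the
# moderate one (0.80), even though it was less relevant.
```

No individual user would ever detect a nudge that small, yet applied across billions of queries it systematically shifts what the world sees first.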


It's not all doom and gloom though, and I believe there is hope. AI is definitely not something we should fear. Our daily lives are full of situations in which we are trained to react with our most basic instincts: political views, financial decisions, attitudes toward social justice and so on are all examples of big decisions often fuelled by poor logic and misinformation. In the best circumstances, and with the right moral compass, AI can even be seen as a tool to save us from ourselves, simply by helping us understand each other better, see the world more clearly, and collectively make better decisions. As developers, we will have to be very careful though, and I feel the onus will be, in part, on software developers to build human-centred solutions and resist the urge to manipulate the algorithms we deploy. As we most certainly do care about the world we live in, we should think long and hard about the interfaces, rules, and policies that will govern AI in the future, and even about how we will make moral decisions as we empower our users to embark on this new way of life.


