I spend my time helping companies prepare for change.  This involves developing leaders and teams to be more digital: to sense the technology world, build prototypes, and become more agile and more questioning.  It involves new ways of working and new forms of leadership.  In March 2023 it also means discussing AI.

There are a few people who just see AI as the next over-hyped technology following The Metaverse, crypto and NFTs.  I’m with those who see this as more of an iPhone moment, when a technology genuinely changes what we can do and how we can do it.  We cannot predict the full effects of GPT-4, the new Bing, Meta’s Llama and many others, but we should be exploring them now and considering our response as individuals and as companies.

I have put together a draft set of ideas to think about that I hope will become part of my toolkit for helping others explore the impact of this technology:

My starting point is the same as with any other technology.  It consists of two components: sensing and customers.  It is vital that everyone pays attention while trying to make sense of what might be important and what can be ignored.  Sensing is about listening, spotting patterns, asking questions, projecting ideas into the future and starting to experiment.  Most of us will have already done that for ChatGPT and image generation software such as Midjourney.

Customers should be our launch pad for any transformation.  Who are they and in this new world how can we meet their needs in new ways?  How can we use AI to do what we do for them better and what new services can we deliver?

These two ideas, sensing and customers, do however start to reveal ways in which this technology might be different.  We are dealing with humans, and in understanding their needs we also need to recognise their fears and the possible side effects of AI technology.  Everyone is rushing towards powering their businesses with AI; instead they should be focusing on empowering their consumers.  The ends might look the same, but a human-centred approach to AI deployment will be more resilient and better not just for the company but also for the societies in which it operates.

There are real dangers ahead with AI and we don’t know which of these are silly science-fiction inspired nightmares and which are genuinely likely to pose threats.  If AI puts vast numbers of people out of work, increases polarisation and inequality, and provides new attack surfaces to bad actors, then we need to make sure that we are not part of the problem and are part of thinking through solutions.  I will suggest that every organisation needs to define where it stands, morally, on how it will treat AI.  This may require setting up advisory boards and industry councils to approve our decisions and will certainly involve explaining our actions to our customers, our staff and our communities. We need to build society as well as business.

We need to enrich our businesses to provide more for our consumers anyway.  Imagine a world in which every service is smoothly automated: devoid of human attention, efficient and unaccountable.  Instead we need to build brands and experiences that are fun, provide learning, are gamified and human, even if they have AI behind the scenes.

Analyse the real risks

We can already see some of the risks that will accompany the widespread use of tools such as ChatGPT.  These include the general homogenisation of knowledge as it becomes harder and harder to discern individual opinion and expertise.  We need to celebrate individual human talent and creativity.

We need to recognise that bad actors (and potentially rogue states) will use these technologies to cheat, confuse, steal and destroy.  This will require government and civic regulation and protection.  But our leaders don’t have a good picture of the risks, and individuals and companies need to help them see the dangers.

Everything that we can do online (shopping, gambling, organising, messaging, flirting, trading, form filling, fighting, spying, impersonating, manipulating and so on) can already be done by AI.  That covers any computing task, and any physical action that can be initiated or controlled by someone over the Internet.  All it requires is that ChatGPT or Bing is connected via an API so that it can send messages out to other services, as the short sketch below illustrates.
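To make that wiring concrete, here is a minimal sketch in Python of a model’s reply being parsed and forwarded to another web service.  It assumes the openai package as it stood in early 2023 (with an API key in the OPENAI_API_KEY environment variable); the model name, the prompt and the downstream URL are illustrative assumptions, not a real integration.

```python
# Minimal sketch: turn a chat model's reply into an action on another service.
# The model name, prompt and downstream URL are illustrative assumptions.
import json

import openai    # assumes openai ~0.27 (early 2023) and OPENAI_API_KEY set
import requests


def ask_model(instruction: str) -> str:
    """Ask the chat model to express an action as a small JSON object."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": instruction}],
    )
    return response["choices"][0]["message"]["content"]


def dispatch(action_json: str) -> None:
    """Forward the model's chosen action to a (hypothetical) downstream API."""
    action = json.loads(action_json)
    requests.post("https://example.com/api/orders", json=action, timeout=10)


if __name__ == "__main__":
    reply = ask_model(
        'Reply with only a JSON object of the form {"item": ..., "quantity": ...} '
        "describing an order for three notebooks."
    )
    dispatch(reply)
```

The point is how little glue is needed: once a model can emit structured text and something forwards it, any service reachable over the Internet is within its reach.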

Every job in your business that can be done remotely, where tasks or instructions are sent to someone else to perform, will be replaceable with AI.  This is the scale of the challenge facing employees, and while plenty of new jobs will emerge, they may not arrive fast enough to deal with the displacement of labour.  Remember that this will happen in every industry, that all of those displaced staff are also your customers, and that without work the economic consequences could be catastrophic.

Any AI you use in your business will be biased, simply because it has been trained on data that is itself biased.  This means that we must demand ways of finding and addressing these biases.  ChatGPT shows that AI can be held accountable: challenge every response with questions about the underlying models, risks and dangers and it will bring those to the fore.

One final risk, for now, is the dark fear that ultimately we are harnessing an alien presence without knowing whether it will be benign or hostile.  This feels overblown as I write it, but simply ignoring the tiny possibility does not make it go away.  We understand that a laboratory studying viruses has to take precautions to ensure that the virus does not escape.  Why, then, would we not take similar measures to ensure that mind viruses (created by AI) do not have similar consequences?

Some of these risks are at a human scale and some should affect our individual decisions and business choices. I believe it is vital to have inclusive discussions and to plan our responses informed by experts and human fears.  

Back to strategy

The positive business opportunities are huge and include addressing many of the systemic problems facing us including demographics, the climate crisis and healthcare.  We need to identify these opportunities and ask ourselves how our businesses and our staff can contribute.  We need to take our staff with us, upskilling and retraining them for the new (editorial) roles that will be possible and we need to think about new ways of working such as 3-4 day weeks and flexible part-time contracts.

The priority for now is to start to experiment and talk and listen.  This requires a change to the project-led, project-managed, KPI-driven work that many companies have developed over the last 20 years.  There are huge efficiencies that can be driven by AI, but we should use them to create breathing space rather than simply hike profits and drive growth.  If we don’t, we may be fit and ready for the next 12 months but not for the next 10 years.

If you want to discuss how I can help your company to build an AI Strategy or want to comment on any of the ideas here, please get in touch.


1 Comment

Sachin Rai · 31/03/2023 at 15:05

As mentioned: while thinking about consumer needs and how they can be addressed using AI, we should also be considerate of consumers’ fears and the possible side effects of the new tech.  This is my key takeaway, and I believe this is something that will truly differentiate whether we as an organisation truly care about our consumers, society and business, or are simply in it for the business.
