Gen AI is everywhere today. It’s going to change everything beyond recognition. Some say for bad: mass unemployment, engineered bioweapons, democracy destroyed by deep fakes. Some for good: cancer cured, peace on earth, mass unemployment. Big corporates are busy ‘AI washing’ their products and everyone is suffering from FOMO. But is it real this time? After all, AI has been around since the mid ‘50s and has gone through a series of waves of optimism – ‘summers’ – and busts – ‘winters’ – since then.
Back in 1970, the MIT computer scientist Marvin Minsky promised: “In from three to eight years we will have a machine with the general intelligence of an average human being.” Three years later AI’s first winter set in as the UK government pulled funding. Our current summer dates from huge increases in funding in 2012 (deep learning, pattern recognition) and 2017 (transformer architecture, attention mechanism). But already some of the wilder predictions are being dialed down. In 2016 a still widely quoted PwC report forecast global GDP being $15.7 trillion higher in 2030 because of AI. More than the output of China and India combined! By 2020 Forrester was quoting just $17 billion. Others are even more cautious now.
As a possible corrective, it’s worth recalling that it’s only a year ago that $100 billion was wiped off Google’s market value when its Bard chatbot claimed the new James Webb Space Telescope had taken the first pictures of an exoplanet, a planet outside our solar system. In fact, the first such image was captured back in 2004 by the European Southern Observatory. This year Google’s AI-powered search recommended eating one small rock a day and claimed that cats have been to the moon. Ironically enough, the fastest growing sector in AI is the regulation and legislation designed to keep it in check, as the UK bids to become the world leader in AI safety. Such is the degree to which we have over-invested both our hopes and our fears in the promise of AI.
There’s no more intelligence in today’s AI than there was in the Deep Blue chess program that beat Kasparov in 1997 thanks to a doubling in processing power. The Large Language Models (LLMs) behind the likes of ChatGPT impress with their conversational abilities, but they are powered by statistical models ranging over massive amounts of data (data of increasingly questionable quality as the models start to consume their own output). They produce natural language outputs, but they don’t understand what they are saying: they grasp neither syntax nor semantics. The attention mechanism weights the words in a sentence, with an attention network identifying which words are most strongly correlated with each other. This allows the model to make predictions without explicit programming: it identifies underlying patterns in the data as a probability distribution and, when given a prompt, generates similar patterns as output. In other words, Gen AI is guessing what the next word in its output should be based on statistical probability. Grounded in statistics rather than reality, it’s no surprise that it suffers from hallucinations, happily serving up what is plausible rather than what is true.
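To make the point concrete, here is a minimal, purely illustrative sketch of scaled dot-product attention in Python, using made-up vectors rather than anything from a real model: the “attention” each word pays to the others is nothing more than a softmax over dot products.

```python
# Toy sketch of scaled dot-product attention (illustrative only, made-up numbers).
# Each word vector "attends" to the others purely through dot products and a
# softmax - statistics over patterns, with no understanding involved.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Correlation scores between every pair of word vectors,
    # scaled and normalised into a probability distribution.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

# Four "words", each represented by a 3-dimensional vector of random numbers.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
output, weights = attention(X, X, X)
print(weights.round(2))  # each row sums to 1: how much each word attends to the others
```

Each row of the printed weight matrix is a probability distribution over the other words, which is all the model has to go on when it picks its next word.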
All this said, there’s benefit to be had from Gen AI’s ability to generate text, images, and data. Just start small and build from there. Many companies have reacted by banning ChatGPT and the like, fearing internal data making its way onto the internet. So why not sort out your internal data and deploy a custom model in Azure or AWS with a simple ChatGPT clone as the web front end? It can easily be done for around £20-30k. Staff can use it to summarise documents and email, write outlines, or simply speed up everyday tasks that might otherwise take longer with Google searches or calculations in Excel. Test data can be generated, e.g. “create sample data for the following input fields marked with <>: <housenumber>, <streetname>, <firstname>, <lastname>” (see the sketch after this paragraph). PDFs can be uploaded and the AI can extract their data into a table for analysis in Excel. It’s no substitute for digitising data, integrating systems, and automating processes, but it helps. Next, deploy an in-house chatbot or “How do I do that?” knowledge assistant trained on your intranet, user manuals, and process guides. All of this is highly scalable and provides a natural language layer over your data. Just remember that no number of AI models will let you dodge the hard work of having well-structured, good quality data in the first place.
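As an illustration of the test-data example above, here is a rough sketch using the OpenAI Python SDK pointed at a privately hosted deployment. The endpoint URL, API key, and model name are placeholders, and your Azure or AWS setup will expose its own equivalents.

```python
# Minimal sketch: generating sample test data from a prompt via an
# OpenAI-compatible endpoint. Endpoint, key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-private-endpoint.example.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

prompt = (
    "Create 10 rows of sample data for the following input fields marked with <>: "
    "<housenumber>, <streetname>, <firstname>, <lastname>. "
    "Return the result as a CSV table with a header row."
)

response = client.chat.completions.create(
    model="your-deployed-model",  # placeholder deployment name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same pattern (one client, one prompt, one response) covers the document-summary and outline-writing uses as well; only the prompt changes.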
You can also deploy robots cheaply and effectively using Robotic Process Automation tools like UiPath, which are themselves increasingly powered by AI and able to read in and curate a wide range of data formats. Or use Gen AI to help you spot the output of Gen AI: claims departments need to be able to tell whether pictures of damage are real or generated. Upload the image and ask whether it is real or not.
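By way of illustration only, here is one way that check might look, assuming an OpenAI-compatible, vision-capable model sits behind your own deployment. The endpoint, model name, and file path are placeholders, and the model’s verdict should be treated as a signal rather than proof.

```python
# Rough sketch: asking a vision-capable model whether a claims photo looks
# genuine or AI-generated. Endpoint, model name, and file path are placeholders.
import base64
from openai import OpenAI

client = OpenAI(
    base_url="https://your-private-endpoint.example.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

with open("claim_photo.jpg", "rb") as f:  # placeholder image file
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="your-vision-model",  # placeholder deployment name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Does this photo of vehicle damage look like a genuine photograph "
                     "or an AI-generated image? Explain your reasoning briefly."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```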
Over the next 3-5 years there will be more that can be done. In the case of insurance, the generation of an output (a premium) from an input (a description of something at risk of a peril) on the basis of a statistical model seems to lend itself very well to large language models and Gen AI. The better the prompt (the description of the item to be insured and of the peril), the better the output. Or maybe in the London Market, Gen AI could play the role of matchmaker between insurance clients with risks and carriers with appetites for them, leading to better allocation of capacity and pricing? It remains to be seen, but for now don’t be swept away by the hype and see Gen AI for what it is. It will be more useful that way.
Build IT Now
Want to learn more about what we can do for your business? And how quickly? Go to the Systems iO services page.
Enjoyed this post? Stay updated with our latest insights, industry news, and exclusive content by following us on LinkedIn. Join our growing community of professionals, be part of the conversation, and never miss an update!
If you would like to receive our newsletter direct to your inbox, simply sign up at the bottom of this page.