IT Pro Panel: The practical CIO’s guide to AI
Smart machines are all the rage, but what are they actually doing?
Artificial intelligence (AI) has been a hot-button topic in business IT for the last several years. Advancements in fields like deep learning and neural networks have made it easier than ever to commercialise AI without needing three years and a team of MIT lab scientists to train a machine learning model, and analysts have been busily predicting the dawn of a new AI-enabled age.
We’ve extensively covered the possible applications of AI technology within business, including speeding up security teams, pulling insights out of operational data and automating customer service tasks. Vendors are also keen to tout the flexibility of AI systems, boasting of the across-the-board efficiency gains their products can deliver.
But while it may be more technically viable than it’s ever been, how practical is it to actually deploy AI tools in a business environment? In this month’s IT Pro Panel discussion, we gathered some of our expert IT leaders to find out how they’re using AI in the real world.
Don’t cross the streams
When we asked our panellists about their use of AI within their own organisations, one thing became immediately apparent: There’s a big difference between rolling your own AI and integrating off-the-shelf AI components into a pre-existing app. For organisations that need to deliver value to the business quickly, it’s often much more expedient to simply integrate functionality developed by others within the community than to recreate it from scratch.
“From my perspective, there are two ‘streams’,” says Studio Graphene founder, Ritam Gandhi, “the first one being developing AI algorithms and the second one being applying them. My personal experience and passion is more focused on the latter.
“For instance, we have used Google Vision AI on a number of projects that we have worked on and I find it fascinating how it can be applied to so many different use cases. In terms of creating the underlying AI capability, this is the work of folks who are passionate about creating artificial intelligence algorithms, for people to leverage in the real world and create use cases around them.”
While building a machine learning algorithm from the ground up requires large amounts of time, data, and compute resources, many of the companies and organisations that specialise in machine learning development – such as the aforementioned Google, as well as Nvidia, AWS and others – have gone to great lengths to make it easy for developers to integrate their AI capabilities into their own apps.
The most common way of doing this is by connecting to a cloud service and using API calls to consume the required functionality as a service – something which Tempcover CTO Marc Pell recommends for any organisation considering making use of AI.
“I would definitely say that it’s well worth writing your own software that utilises these features. Some of the ‘off the shelf’ solutions offer decent functionality but come at a very high cost when compared to the consumption model of the API calls to the cloud providers, even when taking into account the internal build cost,” he says.
“It’s relatively straightforward to roll your own software that consumes these services. The consumption of third party APIs is bread and butter for developers and this is no different. Depending on your use case, the post-processing of the results can be a little challenging but not to a level that should put anyone off. The challenge is normally removing the white noise and leaving yourself with the valuable data.”
“I 100% agree,” Gandhi adds. “The reality is that based on the specific use case and the evolutionary stage of the product you are working on, there are differing levels of customisation requirements, but using API endpoints to build a custom layer on top makes a lot of sense. You get the power of technology that has cost millions to develop on a pay-per-use basis – which can sometimes even be free in the initial stages as part of a trial account.”
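To make that consumption model concrete, the pattern Pell and Gandhi describe boils down to a single HTTPS call. The snippet below is a minimal sketch – not either company’s actual code – that sends an image to Google’s Cloud Vision REST API and prints the labels it returns; the API key and image file are placeholders, and error handling is left out.

```python
# A minimal sketch of consuming a cloud AI service on a pay-per-use basis:
# one REST call to Google's Cloud Vision API asking for image labels.
# The API key and image file are placeholders.
import base64
import requests

API_KEY = "your-api-key"  # placeholder -- each annotate call is billed per use
ENDPOINT = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

# Read the image and base64-encode it, as the REST API expects.
with open("sofa.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "requests": [{
        "image": {"content": image_b64},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

response = requests.post(ENDPOINT, json=payload, timeout=30)
response.raise_for_status()

# Print the labels and confidence scores the service returns.
for label in response.json()["responses"][0].get("labelAnnotations", []):
    print(label["description"], round(label["score"], 2))
```

Each request is billed individually, which is what makes the pay-per-use (and free-trial-tier) economics Gandhi mentions possible.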
Integrating third-party AI components into custom apps, then, seems to be the most popular option, and one useful component that our panellists highlighted is image recognition and machine vision. Studio Graphene, for example, used Google’s machine vision API to build DesignLife, an app that allows users to take a photograph of a piece of furniture and automatically find the nearest match.
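Neither panellist goes into how DesignLife actually ranks its matches, so the following is a deliberately naive illustration of the “custom layer on top” Gandhi describes: score each item in a hypothetical catalogue by how many tags it shares with the labels the vision service returned.

```python
# A deliberately naive "custom layer on top": match a photo's labels against a
# hypothetical furniture catalogue. The catalogue, tags and scoring are all
# invented for illustration -- the article doesn't describe DesignLife's approach.
def nearest_match(photo_labels, catalogue):
    """Return the catalogue item whose tags overlap most with the photo's labels."""
    labels = {label.lower() for label in photo_labels}
    return max(catalogue, key=lambda item: len(labels & item["tags"]))

catalogue = [
    {"name": "Two-seat fabric sofa", "tags": {"sofa", "couch", "fabric", "living room"}},
    {"name": "Oak dining table", "tags": {"table", "wood", "dining room"}},
    {"name": "Leather armchair", "tags": {"armchair", "chair", "leather"}},
]

# Labels of the kind a vision API might return for a photo of a couch.
print(nearest_match(["Couch", "Furniture", "Living room"], catalogue)["name"])
```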
Pell has also overseen a machine vision pilot at Tempcover, where the technology has been used to read the data from customers’ driving licences for faster onboarding. The initial proof-of-concept, which has been running for around a year and a half, started out using Microsoft Azure’s computer vision service, but after unsatisfactory results, the team switched to Google’s offering. Pell notes, however, that each provider had different strengths and weaknesses depending on the situation, such as low lighting or different photocard styles.
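This is also where Pell’s earlier point about post-processing comes in: stripping the “white noise” out of the raw OCR output and keeping the valuable data. The sketch below assumes the text has already come back from a computer vision service, and uses a rough, illustrative approximation of the UK driving licence number format rather than production-grade validation.

```python
# A minimal sketch of the post-processing step: pulling useful fields out of noisy
# OCR output. The sample text and the licence-number pattern are illustrative
# approximations, not production-grade validation.
import re

# Hypothetical raw text returned by a computer vision / OCR service.
raw_ocr_text = """
DRIVING LICENCE
1. SMITH
2. JANE ANN
3. 11.03.1985 UNITED KINGDOM
5. SMITH853115JA9XY   4b. 01.02.2028
"""

# Rough approximation of the 16-character UK driving licence number format.
LICENCE_NO = re.compile(r"[A-Z9]{5}\d{6}[A-Z9]{2}\d[A-Z]{2}")
DATE = re.compile(r"\d{2}\.\d{2}\.\d{4}")

licence = LICENCE_NO.search(raw_ocr_text)
dates = DATE.findall(raw_ocr_text)

print("Licence number:", licence.group() if licence else "not found")
print("Dates found:", dates)  # e.g. date of birth and expiry, still to be disambiguated
```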
“Interestingly, we embarked on the project in Q1 2019 and had something live to customers (via an A/B test and a much larger UX transformation) by mid-Q2 2019,” Pell says, “and found during the research phase that the feature appealed to early adopters, but also to those who sought an improvement in accessibility.”
According to Pell, getting the technology itself working proved to be less difficult than finding a balance between giving people the option to use it and not putting off those who preferred a more traditional interaction model; “the human element of UX design turned out to be more of a challenge than the AI/computer vision implementation.”
Hit me with your best bot
AI works particularly well with numerical analysis, forecasting and pattern recognition (most of the time), so it makes sense that financial services is an area which has seen particularly high demand for the technology. Newcastle Building Society CIO Manila McLean, however, has been focusing on a slightly different approach.
“Within our organisation, we build robotic process automation capabilities, rather than AI, to automate existing manual processes,” she says. “There will be AI capabilities within some of the SaaS products we utilise, such as credit decisioning and anti-money laundering detection, but I would say it's an area of interest for us, rather than an area of active development at this point.”
“For us it’s all internal for anti-money laundering, credit rating – that type of thing,” she adds. “In my last company, we also used it for pricing insurance by predicting claims, though.”
Unsurprisingly, claims prediction is something that Pell is also interested in, and the two discussed this subject in depth. McLean reveals that, in contrast to the other panellists, her team built the function internally, rather than using a third-party tool.
“We had started exploring the area ourselves in the data science team,” she says. “We ran a short hackathon between data science, dev and customer insights to build a simple model.”
“Did you find that it opened a can of worms, or did you manage to ‘hack’ your way to something useful quickly?” Pell asks, to which McLean replies that it “led to more questions, and the acknowledgement that it would require significant data and maturing before we’d use it in real pricing decisions”.
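Neither company has shared its model, but the kind of “simple model” a short hackathon might produce could look something like the sketch below: a logistic regression estimating the probability that a policy leads to a claim, trained on a hypothetical dataset with invented column names.

```python
# A minimal sketch of a "simple" claims-prediction model: logistic regression on a
# hypothetical historical policy dataset. Column names and the CSV file are invented
# for illustration -- the panellists don't describe their actual features or data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

policies = pd.read_csv("policies.csv")  # hypothetical historical policy data
features = policies[["driver_age", "vehicle_value", "policy_duration_hours"]]
target = policies["made_claim"]  # 1 if the policy led to a claim, else 0

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# AUC gives a rough sense of whether the model ranks riskier policies above safer ones.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

As McLean suggests, the gap between a sketch like this and something you’d trust in real pricing decisions is mostly a matter of data volume and maturity.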
The pair also compared notes on chatbots; Pell has run an internal trial with Azure’s chatbot builder. Like McLean’s claims prediction model, it began life as an informal experiment – in this case, a home project that one of Tempcover’s developers was working on in their spare time – but while Pell notes that Azure’s tool was quite effective out of the box, the project has yet to be rolled out to any customers.
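The out-of-the-box experience Pell describes most likely starts from something like the echo-bot template in Azure’s Bot Framework SDK. The sketch below assumes the Python SDK (botbuilder-core); the class name and greeting are invented for illustration, and the web hosting wiring is omitted.

```python
# A minimal sketch of an Azure Bot Framework handler (Python SDK, botbuilder-core),
# similar to the echo-bot template it provides out of the box. The class name and
# greeting are invented; hosting it behind a web endpoint is omitted here.
from botbuilder.core import ActivityHandler, TurnContext


class QuoteBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        # Echo the customer's message back -- a real bot would route this to an
        # intent model or FAQ lookup instead.
        await turn_context.send_activity(f"You said: {turn_context.activity.text}")

    async def on_members_added_activity(self, members_added, turn_context: TurnContext):
        # Greet anyone who joins the conversation, except the bot itself.
        for member in members_added:
            if member.id != turn_context.activity.recipient.id:
                await turn_context.send_activity("Hi! Ask me about a quote.")
```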
“We're not convinced by the ROI yet and have more pressing priorities,” he explains. “I'm sure we'll run a little A/B test of something similar at some point in the future though, assuming we are confident it has a good chance of improving the customer experience and doesn’t just use AI for the sake of it.”
McLean remains similarly lukewarm on the idea of putting resources into developing a chatbot; while chatbots are an interesting technical demonstration of how far areas like natural language processing and speech generation have come, our panellists’ view was that they lack a firm value proposition.
“We’re considering it, but we’re unconvinced on the business case,” McLean says. “We would rather put the resources into enabling greater capability through the app and online portal for self service. Anything that drives a call is usually as a result of ‘I can’t find X or I have a problem with X on the website’, so we want to solve the root cause, rather than enable a more efficient way of contact – if it really is more efficient.”
“There's always a debate of cost and time vs reward,” Pell adds; “as a scale-up business, we're focused on delivering growth for the business and value for our customers by building according to our expertise, whilst leaning on the industry's AI tooling where it can add value.”
Adam Shepherd has been a technology journalist since 2015, covering everything from cloud storage and security, to smartphones and servers. Over the course of his career, he’s seen the spread of 5G, the growing ubiquity of wireless devices, and the start of the connected revolution. He’s also been to more trade shows and technology conferences than he cares to count.
Adam is an avid follower of the latest hardware innovations, and he is never happier than when tinkering with complex network configurations, or exploring a new Linux distro. He was also previously a co-host on the ITPro Podcast, where he was often found ranting about his love of strange gadgets, his disdain for Windows Mobile, and everything in between.
You can find Adam tweeting about enterprise technology (or more often bad jokes) @AdamShepherUK.