Jim Webber, Chief Scientist at Neo4j, believes that to get reliable answers, it is time to bolster ChatGPT with smart data.
Generative AI in the form of ChatGPT has undoubtedly captured our attention, spotlighting how AI is ready at last for business prime time. In a few short months, we’ve been dazzled as it has written code, composed music, and rewritten text in the style of Shakespeare. But it isn’t ‘intelligent’—and it still has many limitations.
Although ChatGPT has practical applications, we should take a cautious approach and harness its power safely. The language model can generate answers in response to human inputs, but the accuracy of those answers cannot always be trusted. That's because the technology underpinning ChatGPT-like tools, Large Language Models (LLMs), is trained on huge amounts of typically public data. An LLM has basic knowledge of a broad range of topics, but is poor at explaining how it arrived at an answer.
There is also the allied challenge of ChatGPT's now-notorious 'hallucinations', where it produces answers that sound very plausible but are either factually incorrect or untethered from context.
Generative AI is nonetheless already making its mark in areas such as genetics research in pharmaceuticals and visual effects in gaming. But what is the best way forward for CIOs in the customer service industry who want to exploit LLMs without running into accuracy and transparency issues?
The good news is that a number of strategies can help address accuracy and transparency issues. One approach is to train the LLM exclusively with a domain-specific knowledge graph. Doing so means your LLM isn't trying to be an all-encompassing genius; instead, it can correctly and helpfully answer queries from a smaller, carefully defined domain.
But how should we go about building such knowledge graphs? Graph technology is particularly well-suited for applications where relationships between data elements are as important as the data itself. As a result, graphs are increasingly being deployed with LLMs to make us more confident of AI predictions.
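The grounding pattern described above can be sketched in a few lines. This is a minimal illustration, not a production recipe: a plain Python list of (subject, relation, object) triples stands in for a real graph database such as Neo4j, and the entity, tariff, and fact names are hypothetical example data. The idea is that facts retrieved from the graph are placed in the prompt, constraining the model to answer from curated knowledge rather than from whatever its training data happened to contain.

```python
# A minimal sketch of grounding an LLM prompt with knowledge-graph facts.
# A real deployment would query a graph database such as Neo4j via Cypher;
# here a list of (subject, relation, object) triples stands in for it.
# All entity and tariff names below are hypothetical example data.

KNOWLEDGE_GRAPH = [
    ("Tariff-X", "OFFERED_BY", "Acme Energy"),
    ("Tariff-X", "HAS_EXIT_FEE", "£30"),
    ("Tariff-X", "CONTRACT_LENGTH", "12 months"),
    ("Acme Energy", "SUPPORT_CHANNEL", "live chat"),
]

def retrieve_facts(entity: str) -> list[str]:
    """Pull every triple mentioning the entity, rendered as plain-text facts."""
    return [
        f"{s} {r.replace('_', ' ').lower()} {o}"
        for (s, r, o) in KNOWLEDGE_GRAPH
        if entity in (s, o)
    ]

def build_grounded_prompt(question: str, entity: str) -> str:
    """Constrain the model to answer only from the retrieved facts."""
    facts = "\n".join(f"- {f}" for f in retrieve_facts(entity))
    return (
        "Answer the customer's question using ONLY the facts below.\n"
        "If the facts do not contain the answer, say you don't know.\n\n"
        f"Facts:\n{facts}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("Is there an exit fee on Tariff-X?", "Tariff-X"))
```

Because the prompt instructs the model to refuse when the facts are silent, hallucinated answers are much easier to suppress than with an unconstrained chatbot.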
Are smaller ‘large’ language models the way forward?
It turns out that if you train an LLM on curated, high-quality, structured data via a knowledge graph, you significantly enhance its capabilities. That’s partly because the range of responses will be more limited, as it has been trained on smaller data sets. As a result, the answers will be more conclusive.
Such “small” language models (SLMs) present a very promising route for multinationals seeking to enhance customer service and sharpen their competitive edge.
ChatGPT as part of customer service transformation
When it comes to answering customer queries in contact centres, for example, ChatGPT will undoubtedly be a valuable tool, providing 24/7 availability. Accepting the limitations outlined above, ChatGPT can understand and generate human-like conversation, act as a virtual assistant, and let customers interact with it and receive personalised answers.
In addition, the technology has the advantage of being able to manage multiple conversations and queries in real time. As service agents engage with customers via various channels such as social media, email and websites, use of Generative AI can help them rapidly develop responses to grievances, for example. Even better, it can also provide multilingual support.
Automating mundane processes streamlines operations and will leave agents free to deal with more complex issues and value-added tasks. We’re also seeing how ChatGPT can support intelligent routing, sending customer queries to the most appropriate departments.
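The intelligent-routing idea above can be made concrete with a toy sketch. A production system would use an LLM or a trained classifier to decide where a query belongs; here, simple keyword matching stands in for the model, and the department names and keyword lists are hypothetical examples.

```python
# A toy sketch of intelligent routing: classify an incoming customer query
# and send it to the most appropriate department. A production system would
# use an LLM or a trained classifier; keyword matching stands in for one here.
# Department names and keyword cues below are hypothetical examples.

ROUTING_RULES = {
    "billing":   {"invoice", "refund", "payment", "charge"},
    "technical": {"error", "outage", "login", "crash"},
    "sales":     {"upgrade", "pricing", "quote", "plan"},
}

def route_query(query: str) -> str:
    """Return the department whose keyword cues best match the query."""
    words = set(query.lower().split())
    scores = {
        dept: len(words & keywords)
        for dept, keywords in ROUTING_RULES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

print(route_query("I was charged twice, please refund my payment"))  # billing
```

Swapping the keyword sets for an LLM classification call changes only `route_query`; the surrounding routing logic stays the same.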
Real-world GenAI use cases
Using Generative AI can help knowledge workers and specialists carry out natural language queries without having to understand a query language or create multi-layered APIs. This can increase operational efficiencies, improve customer service and focus human resources on value-added tasks. For example, UK energy provider Octopus Energy has said that 44% of its customer service emails are now being answered by AI.
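The natural-language-query pattern described above usually works by prompting an LLM with the graph's schema and asking it to emit a query in the graph's own language, which is then executed on the user's behalf. The sketch below illustrates the shape of that pipeline; `call_llm` is a stub standing in for a real model API, and the schema and Cypher query shown are hypothetical examples, not output from any actual system.

```python
# A sketch of letting non-specialists query a graph in natural language:
# an LLM is prompted with the graph schema and asked to emit a Cypher query,
# which would then be run against the database. call_llm below is a stub;
# a real system would call an actual model API and a Neo4j driver.

GRAPH_SCHEMA = "(:Customer {name})-[:RAISED]->(:Ticket {status})"  # hypothetical

def cypher_generation_prompt(question: str) -> str:
    """Ask the model to translate a plain-English question into Cypher."""
    return (
        f"Graph schema: {GRAPH_SCHEMA}\n"
        "Translate the question into a single Cypher query.\n"
        "Return only the query, nothing else.\n"
        f"Question: {question}"
    )

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    return "MATCH (c:Customer)-[:RAISED]->(t:Ticket {status: 'open'}) RETURN c.name"

def answer_question(question: str) -> str:
    cypher = call_llm(cypher_generation_prompt(question))
    # In production: results = session.run(cypher); here we return the query.
    return cypher

print(answer_question("Which customers have open tickets?"))
```

The user never sees Cypher at all; they ask a question in English, and the generated query does the work against the curated graph.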
And energy is far from the only sector using ChatGPT to help customer service. A leading healthcare staffing company for hospitals and health systems, for example, is using an LLM in combination with knowledge graphs to better recruit, hire, and match specialised surgeons to job descriptions across its broad customer base. Surgeons tend to have multiple job titles (Head of Critical Care, Pulmonologist, etc.). AI tools help to reduce such sourcing barriers and deliver the best possible care, efficiently and where it has most impact.
Finally, a large pharmaceutical company we’re working with has built a GenAI-powered chatbot that enables access to its supply chain digital twin in natural language, drawing on supply chain information loaded into the graph for risk assessment.
Enhancing customer interactions
Businesses are generating more and more data, but find it a growing challenge to leverage it for valuable insight. That's a problem, because when used effectively and compliantly, that data can improve operations, increase productivity, and open up new opportunities.
LLMs, if bolstered by knowledge graphs, can really help with the data problem. By training an LLM on a knowledge graph’s curated, high-quality, structured data, the drawbacks associated with using ChatGPT for internal and external customer support are radically reduced.
About the Author
Jim Webber is Chief Scientist at graph database and analytics leader Neo4j, and co-author of Graph Databases (1st and 2nd editions, O’Reilly) and Graph Databases for Dummies (Wiley). More detail on this discussion can be found in the just-published (August) Building Knowledge Graphs (O’Reilly).