Last month, we met as the Signal Working Group for the second time. Our first meeting was so well received that we decided to make it a regular event. Here are some of the insights from our time together:
Austin Brender, subject matter expert and Senior Analytics Services Manager at Invoca, started off the session by discussing the importance of Signals in training AI models. He explained that Signals give machines a way to learn which words or phrases should trigger an AI response or action. He also discussed the importance of data set size and quality when training a model, explaining that more and better data generally means more accurate predictions from the model.
Cass Benson from SCI Shared Resources kicked off the discussion by sharing how their use of Signals revolves around measuring the call experience, via AI-fueled scorecards and phrase-spotting Signals. Cass explained how they are using Signals to track how consumers are being treated when they reach out to the funeral homes in their network, as well as to measure how each location is performing. They have a monthly check-in with the leadership teams of their four key markets to go over scores and flag where improvement might be needed. Over time, they would like to score individual agents in addition to each location as a whole. Austin shared that Invoca just released Agent Voice ID, a voice recognition product that can identify individual agents based on voice printing.
Aaron Weinberger asked about how to develop scoring and rating for their client service team or receptionist, and Austin and Cass suggested using AI and phrase spotting Signals, depending on the specific use case.
Austin suggested Aaron consider building out scoring and rating for his client service team or receptionist, using AI to listen for short phrases. Cass explained that they have trained the AI to listen for when someone needs to give condolences versus when they don't, giving examples of scenarios in which this distinction matters. They also discussed two ways to score calls: spotting specific spoken phrases, or having AI listen to the entire conversation.
John Barnes with Omni Fiber LLC discussed his experience using keyword spotting for conversions and competitive keywords, and Kathleen, who also works in telecommunications, shared her experience using IVR API signals to track sales and other metrics. Additionally, Cass noted the importance of understanding customer sentiment while on the phone in order to best serve their needs.
Austin Brender noted the various ways data can be collected in order to gain a better understanding of customer engagement and ultimately close sales. This includes using real-time API signals or other methods such as uploading lists of customers who have made appointments over the phone.
Austin shared that Invoca currently has tools such as word maps to help analyze the content of a call but is working towards expanding capabilities such as sentiment analysis in order to better understand how customers feel during their call. He also suggested that paid search optimization may be an effective way to further engage with customers after a successful sale.
Lynn Duffy explained that the technology had been used for over two decades in both marketing and operational contexts. Dena Read from Senior Resource Group shared her own experience of using call tracking to differentiate qualified versus unqualified calls, as well as introducing an IVR system that routed incoming calls according to need. Cass Benson went into further detail about an eye-opening statistic: nearly 50% of calls went unanswered. They discovered that many of those unanswered calls were simply robo or junk calls rather than people actually trying to reach the company.
Attendees then discussed the importance of breaking up phrases when training an AI model. To get better results from call transcriptions, they suggested breaking long phrases down into shorter ones, as this can help reduce false positives. They also suggested listening to hundreds of calls and manually tracking them to identify mistakes and calibrate the AI Signal. It is important to have a good confidence level (85-100%) before deploying an AI model, as it needs to match what a human would do. Lastly, they mentioned that it currently takes at least 300 calls of data to train an AI model, but new methods are being developed that could reduce this number significantly.
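To make the phrase-breaking advice concrete, here is a minimal sketch in Python of why shorter phrases tend to be more robust against transcription drift. The phrase lists and transcript are invented for illustration; this is not Invoca's Signal engine, just simple substring matching standing in for phrase spotting.

```python
# Hypothetical phrase lists for illustration only. A long phrase fails to
# match if the transcription drops or changes even one word; short, distinct
# phrases are more likely to survive transcription intact.
LONG_PHRASE = ["i would like to schedule an appointment for a consultation"]
SHORT_PHRASES = ["schedule an appointment", "book a consultation"]

def spot_phrases(transcript: str, phrases: list[str]) -> list[str]:
    """Return the phrases found in a call transcript (case-insensitive)."""
    text = transcript.lower()
    return [p for p in phrases if p in text]

transcript = "Sure, I can schedule an appointment for you next Tuesday."
print(spot_phrases(transcript, LONG_PHRASE))    # [] — the long phrase misses
print(spot_phrases(transcript, SHORT_PHRASES))  # ['schedule an appointment']
```

The same caller intent is caught by the short phrase but missed by the long one, which is the false-negative/false-positive trade-off the group described.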
Lynn Duffy with Butler/Till Media Services, Inc. continued the conversation by speaking about her experience working with clients in different industries to train their AI models using Signals. She mentioned that it is important to start with a good foundation of data, and that tweaking the AI as you go can help improve accuracy over time. Lynn also noted that manually listening to calls can help teams better understand customer service and experiences, so improvements can be made where needed.
Cass Benson then spoke about how she was currently going through all their calls manually, checking whether their predictive model had gotten each one right or wrong. She asked the group whether it was possible to overtrain the AI. Lynn noted that while a confidence level of 85% or more is ideal, it takes some time and effort to get there. Austin added that the minimum data set size for training is currently 300 calls, but that they are working on reducing this number to make the process simpler and easier.
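The manual-review loop Cass described can be sketched as a simple agreement check: compare the model's prediction on each call against a human reviewer's label, and hold off on deployment until agreement clears the 85% bar Lynn mentioned. The data shapes below are assumptions for illustration, not an Invoca API.

```python
# Each record pairs the model's prediction with a human reviewer's label
# from manually listening to the call (hypothetical sample data).
reviewed_calls = [
    {"call_id": 1, "model": True,  "human": True},
    {"call_id": 2, "model": True,  "human": False},  # a false positive
    {"call_id": 3, "model": False, "human": False},
    {"call_id": 4, "model": True,  "human": True},
]

def agreement_rate(calls: list[dict]) -> float:
    """Fraction of calls where the model matched the human reviewer."""
    matches = sum(1 for c in calls if c["model"] == c["human"])
    return matches / len(calls)

rate = agreement_rate(reviewed_calls)
print(f"Agreement: {rate:.0%}")  # prints "Agreement: 75%" for this sample
if rate < 0.85:
    print("Below the 85% bar: keep calibrating before deploying the Signal.")
```

In practice this is the "does it match what a human would do" test from the discussion above, made into a repeatable gate rather than a one-off spot check.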