
WO2024259486A1 - Scam call system - Google Patents


Info

Publication number
WO2024259486A1
WO2024259486A1 (PCT/AU2024/050645)
Authority
WO
WIPO (PCT)
Prior art keywords
scam
call
response
bot
speech
Application number
PCT/AU2024/050645
Other languages
French (fr)
Inventor
Dali KAAFAR
Ian Wood
Michal Kepkowski
Conor James ATKINS
Original Assignee
Macquarie University
Priority claimed from AU2023901937A0
Application filed by Macquarie University
Publication of WO2024259486A1

Classifications

    • G10L17/22 Interactive procedures; man-machine interfaces (speaker identification or verification)
    • G06F21/552 Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
    • G06F40/30 Semantic analysis
    • G06F40/35 Discourse or dialogue representation
    • G06F40/56 Natural language generation
    • G06N3/006 Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N3/0475 Generative networks
    • G06N3/09 Supervised learning
    • G06Q10/0635 Risk analysis of enterprise or organisation activities
    • G06Q30/015 Providing customer assistance, e.g. assisting a customer within a business location or via helpdesk
    • G06Q30/0185 Product, service or business identity fraud
    • G06Q30/0225 Avoiding frauds (discounts or incentives)
    • G06Q30/0248 Avoiding fraud (advertisements)
    • G06Q30/0609 Qualifying participants for shopping transactions
    • G06Q50/265 Personal security, identity or safety
    • G06Q50/50 Business processes related to the communications industry
    • G10L13/027 Concept to speech synthesisers; generation of natural phrases from machine-based concepts
    • G10L15/26 Speech to text systems
    • H04L51/02 User-to-user messaging using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
    • H04L51/212 Monitoring or handling of messages using filtering or selective blocking
    • H04L65/1076 Screening of IP real-time communications, e.g. spam over Internet telephony [SPIT]
    • H04L65/1104 Session initiation protocol [SIP]
    • H04M3/436 Arrangements for screening incoming calls, i.e. evaluating the characteristics of a call before deciding whether to answer it
    • H04M7/0078 Security; fraud detection; fraud prevention (VoIP networks)
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06Q40/06 Asset management; financial planning or analysis
    • G10L13/033 Voice editing, e.g. manipulating the voice of the synthesiser
    • G10L17/26 Recognition of special voice characteristics, e.g. for use in lie detectors; recognition of animal voices
    • G10L25/30 Speech or voice analysis characterised by the use of neural networks
    • G10L25/63 Speech or voice analysis for estimating an emotional state
    • G10L25/84 Detection of presence or absence of voice signals for discriminating voice from noise
    • H04M2201/14 Delay circuits; timers
    • H04M2203/2027 Live party detection
    • H04M3/42008 Systems for anonymous communication between parties, e.g. by use of disposal contact identifiers
    • H04M3/42059 Making use of the calling party identifier
    • H04M3/543 Call deflection

Definitions

  • the present disclosure broadly relates to scam call prevention and, more particularly, to a system for, and a method of, using conversational artificial intelligence to interact with a scam call and/or to obtain scam parameters from a scam call.
  • a scam call is a voice telephony call generated for the purpose of dishonestly obtaining a benefit, or causing a loss, by deception or other means.
  • Phone calls are the most common way that scammers target victims and have the most financial impact compared to other scam contact methods (such as emails or social networks).
  • Scams include fraud against phone company customers by third parties, for example in the form of telemarketing fraud or caller ID spoofing used for vishing (i.e., voice phishing).
  • scams might include various forms of security assistance, e-commerce platform follow-ups, impersonation of government agency requests, etc.
  • scam call detection systems make use of a bot (also called a chatbot), i.e. an autonomous program that interacts with the caller.
  • an unsolicited phone call is detected based on an analysis of a conversation between a caller who initiated the call and a bot that uses a voice recording impersonating a scam target individual, and the call is then blocked.
  • a scam target is an organisation that the scammers pretend to be representatives of.
  • the extended calls provide opportunities for gathering intelligence about the scammers and about scam calls.
  • a method comprising: receiving a rerouted phone call identified as a scam call; processing received caller speech from the rerouted phone call to determine a response; interacting with a caller using the determined response, wherein the response is determined in order to extend a duration of a call conversation.
  • the processing may comprise identifying features in the received call speech associated with ending and/or extending a call.
  • the identifying may comprise identifying one or more of: negative emotions in the caller speech, and threats in the caller speech.
  • the response may be determined in order to maximise the duration of the phone call.
  • the processing of the received caller speech may comprise utilising a conversational artificial intelligence bot trained with a reinforcement learning training objective with a small positive reward for each utterance and a large negative reward when the rerouted phone call ends.
  • the method may comprise recording and storing at least a part of the call conversation.
  • the method may comprise processing the stored part and/or processing a real-time part of the call conversation to determine one or more scam parameters.
  • the scam parameters may comprise one or more of the following: a scam target, a scam structure, a scam technique, a financial instrument, scammer phone number, scammer voice prints, classification of background noise during scam, scam statistics, agglomerated scam statistics, statistics of determined or observed scam parameters, and/or a scam classification.
  • one or more scam parameters may be used for early detection of a scam campaign.
  • the method may further comprise identifying actionable scam intelligence comprising a scammer's financial instrument and/or phone number.
  • a method comprising: detecting a received scam call; and rerouting the detected scam call to a scam call bot, wherein the scam call bot is configured to extend a duration of the rerouted call.
  • the scam call bot may be configured to extend the duration of the rerouted call by interacting with a caller of the scam call via responses determined by the scam call bot.
  • the responses may be determined based on identified features in the caller’s speech associated with ending and/or extending a call.
  • the duration of the call may be extended by intentionally generating and responding with a response imperfection selected from a group comprising: backchannelling utterances, time-wasting phrases, and conversation repair phrases.
  • a system comprising: a telephony endpoint for receiving a rerouted scam call; a speech-to-text module configured to convert caller speech from the received scam call to text; a conversational artificial intelligence (AI) bot configured to receive the text from the speech-to-text module, process the received text, determine a response so as to extend a duration of the scam call, and output the determined response; and a text-to-speech module configured to receive the determined response in text form from the bot, convert the text to a voice response, and output the voice response to the caller via the telephony endpoint.
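  • By way of illustration only, this pipeline might be orchestrated along the lines of the following minimal Python sketch. The component classes are placeholders standing in for a real SIP endpoint, a streaming STT service, the conversational model and a TTS engine; none of their internals are specified by the claim itself.

```python
# Minimal sketch of the claimed pipeline (illustrative only):
# telephony endpoint -> speech-to-text -> conversational AI bot -> text-to-speech.
class SpeechToText:
    def transcribe(self, audio: bytes) -> str:
        raise NotImplementedError  # stand-in for a streaming STT service

class TextToSpeech:
    def synthesise(self, text: str) -> bytes:
        raise NotImplementedError  # stand-in for a (voice-cloning) TTS engine

class ConversationalBot:
    def __init__(self) -> None:
        self.history: list[str] = []

    def respond(self, caller_text: str) -> str:
        """Placeholder policy: a real bot would pick the reply predicted
        to extend the call the most."""
        self.history.append(f"Scammer: {caller_text}")
        reply = "Sorry, could you go over that one more time?"
        self.history.append(f"Victim: {reply}")
        return reply

def handle_call(audio_frames, stt, bot, tts, send_audio) -> None:
    """Loop caller audio through STT -> bot -> TTS for one rerouted call."""
    for frame in audio_frames:
        text = stt.transcribe(frame)
        if text:  # only respond to frames that produced a transcript
            send_audio(tts.synthesise(bot.respond(text)))
```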
  • the text-to-speech module may be configured for voice cloning.
  • the conversational AI bot may process the received text by identifying features in the received call speech associated with ending and/or extending a call.
  • the bot may be configured to identify the features by identifying one or more of: negative emotions in the caller speech, and threats in the caller speech.
  • the bot may be configured to determine the response in order to maximise the duration of the scam call.
  • the bot may be trained with a reinforcement learning training objective with a small positive reward for each utterance and a large negative reward when the rerouted scam call ends.
  • the system may further comprise an audio processing module connecting the text-to-speech module and the telephony endpoint, and configured to process the voice response by mixing the voice response with an environment signal.
  • This signal may be an audio signal, mimicking environmental and/or background sounds.
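  • A minimal sketch of such mixing, assuming float audio arrays at a common sample rate (the gain value is an illustrative assumption, not taken from the disclosure):

```python
import numpy as np

def mix_with_environment(voice: np.ndarray, ambience: np.ndarray,
                         ambience_gain: float = 0.15) -> np.ndarray:
    """Overlay a looping background-ambience track on a synthesised voice
    response so the call sounds like it comes from a real acoustic scene."""
    if len(ambience) < len(voice):
        # loop the ambience until it covers the whole response
        ambience = np.tile(ambience, int(np.ceil(len(voice) / len(ambience))))
    mixed = voice + ambience_gain * ambience[: len(voice)]
    return np.clip(mixed, -1.0, 1.0)  # keep samples in the valid float range
```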
  • the conversational AI bot may further comprise a conversation controller adapted to manage a conversation flow by adding utterances to the response that extend the duration of the scam call, wherein the added utterances comprise one or more of: a time-wasting phrase, a conversation repair phrase, a backchannelling phrase, and an interrupting phrase.
  • the conversational AI bot may further comprise a response controller configured to: discard a response utterance in response to a scammer utterance occurring during response utterance processing, and remove said discarded response from a conversation history of the AI bot.
  • Figure 1 is a schematic representation of a communication network.
  • Figure 2 is a schematic representation of a system used to implement a conversational artificial intelligence bot.
  • Figure 3 is a schematic representation of a method of predicting features as a side task using a K-Adapter.
  • Figure 4 is a schematic representation of a method of predicting input features.
  • Figure 5 is a schematic representation of a sequence-to-sequence transformer model.
  • Figure 6 illustrates an embodiment of a method of rerouting a detected scam call to a conversational artificial intelligence bot.
  • Figure 7 illustrates an embodiment of an on-phone scam detection and rerouting method.
  • Figure 8A illustrates an embodiment of a method of interacting with a scam call using a conversational artificial intelligence bot.
  • Figure 8B is a schematic diagram of an embodiment of data analysis performed in the method of Figure 8A.
  • Figure 9 is a schematic diagram of an exemplary embodiment of a call processing system.
  • Figure 10 is a schematic diagram of a configuration server that forms part of the call processing system of Figure 9.
  • Figure 11 is a schematic diagram of an audio processing module that forms part of the call processing system of Figure 9.
  • Figure 12 is a schematic representation of an overtalk module that forms part of the call processing system of Figure 9.
  • Figure 13 is a schematic representation of an outbound call module that forms part of the call processing system of Figure 9.
  • Figure 14 is a schematic diagram of another exemplary embodiment of a call processing system.
  • Figure 15 is a schematic diagram of an exemplary embodiment of a docker deployment of the call processing system of Figure 14.
  • Figure 16 is a schematic representation of a load balancing module that forms part of the system of Figure 14.
  • FIG. 1 of the drawings illustrates a communication network 100 that supports both data and telephony.
  • the network operator 108 provides telecommunications services to its users via the network 100.
  • a user can make or receive phone calls via a user device 102 (for example a mobile phone, a smartphone, a landline phone, a Voice over IP (VoIP) device or the like).
  • VoIP Voice over IP
  • An incoming call from an originating device 104 is managed by the network operator 108, and switched to the user device 102 via the network 100.
  • a server 110 is in communication with the network operator 108 and/or the user device 102 via the network 100.
  • FIG. 2 is a high-level schematic representation of a system 210 provided by the server 110 that is used to implement a conversational artificial intelligence (AI) bot 206.
  • the system 210 and its building blocks may be configured to accommodate one or more languages.
  • the system 210 includes a telephony endpoint 202 for receiving a rerouted scam call or initiating calls to known scam phone numbers, and a speech-to-text (STT) module 204 configured to convert caller speech from the received scam call to text.
  • STT speech-to-text
  • the system 210 has a conversational AI bot 206 configured to receive the text from the speech-to-text module 204, process the received text, determine a response so as to extend a duration of the scam call, and output the determined response.
  • the system 210 includes a text-to-speech (TTS) module 208 configured to receive the determined response in text form from the bot 206, convert the text to a voice response, and output the voice response to the caller via the telephony endpoint 202.
  • the system 210 optionally includes an audio processing module 209 between the TTS module 208 and the telephony endpoint 202.
  • the audio processing module 209 applies audio processing to mimic the background acoustic (i.e. sound) environment of a phone call and enhance voice believability and outputs the processed voice response to the caller via the telephony endpoint 202.
  • the TTS module 208 includes voice cloning capabilities.
  • the telephony endpoint 202 may be, for example, an Asterisk server.
  • the system 210 includes a telephony endpoint 202 for receiving a rerouted scam call.
  • the telephone endpoint 202 may be separate from the system 210, interfacing with the system via the STT and TTS modules.
  • the telephony endpoint 202 is capable of receiving Session Initiation Protocol (SIP) calls.
  • SIP Session Initiation Protocol
  • the telephony endpoint 202 communicates with the speech-to-text module 204 and the text-to-speech module 208 (the latter via the audio processing module 209, if present), which in turn communicate with the conversational AI bot 206.
  • the telephony endpoint 202 processes the audio signals of the call, passing them to the speech-to-text module 204 and receiving them from the text-to-speech module 208 via raw audio WebSockets; these modules in turn communicate with the bot 206 over WebSockets in plain text.
  • the speech-to-text module 204 may be implemented using, for example, Google STT.
  • the architecture of the system 210 described with reference to Figure 2 is highly scalable. Multiple phone numbers and VoIP initiators can be assigned to the same SIP trunk and the telephony endpoint can be replicated and load balanced to withstand many simultaneous calls.
  • the bot 206 is a text-based conversational AI bot, and in some embodiments, open source pre-trained bots such as the ParlAI "BlenderBot" may be adapted to implement the bot 206.
  • the bot 206 is configured to process the received text by identifying features in the received call speech associated with ending and/or extending a call. In some embodiments, the features are associated with negative emotions and/or threats detected in the caller speech.
  • the method comprises processing text based and/or audio based features found to be associated with ending the call.
  • Text based features may include text transcripts from conversations between the scammer and the bot.
  • Audio based features may include emotion, audio ML model outputs, paralinguistic features such as pitch, tempo, loudness, timbre, intonation range, syllabic duration, and/or rhythm.
  • the AI bot is configured to identify features in the text of the transcripts (such as phrases or word patterns identified by machine learning models trained to extract scam stages, e.g. word length, number of words per utterance, uniqueness of words, vocabulary richness, etc.) that may be considered indicators that a call is moving towards its end.
  • the features may be determined by predictions of machine learning (ML) models trained on an objective statistically associated with ending calls, and/or the features may be identified by unsupervised ML models statistically associated with ending calls. Based on these identified features, the AI bot may be configured to avoid one or more of these features in order to avoid ending a call.
  • the processing of the received call speech may comprise utilising a conversational artificial intelligence bot trained to mimic victim utterances in scammer-victim phone conversations, e.g. in long scammer-victim phone conversations.
  • the processing of received caller speech from the rerouted phone call to determine a response may include the addition of heuristic features to responses determined to increase conversation length.
  • Heuristic features may include predetermined initial responses, addition of speech disfluencies, conforming to a predetermined persona, and/or restriction to a maximum or minimum sentence length.
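  • As an illustration, such heuristic post-processing might look like the following sketch (the filler phrases, probability and length cap are invented for the example):

```python
import random

FILLERS = ["um,", "uh,", "well,", "hang on a second,"]  # illustrative only

def apply_response_heuristics(reply: str, max_sentences: int = 2,
                              disfluency_prob: float = 0.5) -> str:
    """Cap the reply length and sometimes prepend a speech disfluency,
    two of the heuristic features listed above."""
    sentences = [s.strip() for s in reply.split(".") if s.strip()]
    reply = ". ".join(sentences[:max_sentences]) + "."
    if random.random() < disfluency_prob:
        reply = f"{random.choice(FILLERS)} {reply}"
    return reply
```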
  • the processing of the received caller speech may comprise utilising a conversational artificial intelligence bot trained or fine-tuned on labelled real phone scam transcripts, for example manually labelled real phone scam transcripts.
  • Sources of scam transcripts may include labelled transcripts from publicly available “scam baiter” videos in which concerned individuals (“scam baiters”) converse with real scammers knowing that the call is a scam.
  • the bot 206 is further configured to determine the response. In some embodiments, the bot 206 is configured to determine the response in order to maximise the duration of the scam call.
  • In some embodiments the bot 206 is configured to mimic scam victims. This may, for example, be done through the addition of short-term memory, empathy, and personas that allow the bot 206 to maintain consistent knowledge of personal facts such as a name, an address and aspects of a fictitious personal life. The personas include features that enable a sufficiently convincing mimic of a vulnerable human scam victim.
  • the bot 206 may include heuristic text generation designed to prolong conversations with scammers and/or produce better quality conversations with scammers. These heuristics may include fixed initial bot utterances, injection of disfluencies into bot utterances, truncation of bot utterance sentence length or exclusion of long sentences, and heuristics to prevent the bot talking over the scammer.
  • recordings and transcripts from conversations between scammers and AI bots may be analysed to determine threat intelligence information.
  • Threat intelligence information may include: the target organisation that the scammer is pretending to be, the social engineering techniques used by the scammer, the topic of the scammer's script, and/or the structure and/or stages in scripts used by the scammer.
  • Threat intelligence from recordings and transcripts of conversations between scammers and AI bots may be utilised (e.g., by sale to a third party) as additional data used by AI bots to effectively prolong calls with scammers.
  • the threat intelligence data may be used to identify and educate potential future scam victims so as to reduce the success rate of scams, or to allow concerned organisations to warn their customers of existing scam campaigns impersonating the organisations' processes or personnel.
  • the bot 206 implements AI models built around large pre-trained sequence-to-sequence models such as BART, T5, and GPT. These models achieve very good fluency.
  • the models are fine-tuned on conversation data such as scam call transcripts for domain adaptation of pre-trained conversational AI models.
  • Blenderbot is fine-tuned on "scam baiter" conversations with real scammers obtained, for example, from YouTube, or on synthetic scammer-victim conversations crafted for conversation diversity and/or for specific conversational patterns. Conversation data for training may be enhanced with the application of text generation heuristics found to be associated with longer scam call conversations.
  • the conversational AI bot described herein presents novel challenges to fine-tuning due to long call durations (pilot data averaged 86 utterances) and the adversarial nature of the task (the aim is not a high-quality conversation, but prolonging the conversation irrespective of conversational quality).
  • “Wild” data from calls with real scammers enables an additional form of training.
  • the primary goal is for the bot to achieve long call durations with real scammers.
  • the duration of a “wild” call (one with a real scammer) is used as a reinforcement learning (RL) training objective with a small positive reward for each utterance and a large negative reward when the scammer hangs up.
  • Identified conversation features that relate directly to longer call durations may also be used as RL training objectives. For example, features associated with scammer script steps and those expected or found to be associated with ending or extending a call such as scammers’ negative emotion and threats.
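  • A minimal sketch of the reward scheme described above (the reward magnitudes are illustrative assumptions):

```python
def call_rewards(n_bot_utterances: int, scammer_hung_up: bool,
                 step_reward: float = 0.1,
                 hangup_penalty: float = -5.0) -> list[float]:
    """Small positive reward for every bot utterance the scammer stayed on
    the line for; large negative reward on the utterance before a hang-up."""
    rewards = [step_reward] * n_bot_utterances
    if scammer_hung_up and rewards:
        rewards[-1] += hangup_penalty
    return rewards

# e.g. a 10-utterance call ending in a hang-up: nine rewards of 0.1, then -4.9
```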
  • Features taken into consideration for the purpose of extending the duration of a scam call may include one or more of: the subject of the call, emotions, topics, and keywords. Relevant features may be determined through analysis of available scam call transcripts and based on existing research and understanding of persuasion, social engineering and psychology.
  • Available scam call transcripts will include previously existing public records of scam calls in addition to records of scammer conversations with the bots used to engage with rerouted scam calls. These features are incorporated into training as side tasks in addition to the main fine-tuning task.
  • a model that is able to distil the knowledge necessary to predict call features associated with longer “wild” call durations is equipped to recognise model updates that are effective for achieving longer calls. In this way the duration of a scam conversation can be extended or maximised by increasing engagement of scammers and improving believability of the bot used for the rerouted call.
  • the main fine-tuning task consists of further training of a pre-trained model using task specific data (e.g., scam transcripts).
  • the bot is used to attempt to predict words in scammer utterances given previous utterances in scam transcripts from the training data.
  • the training is considered "fine-tuning" as the quantity of data used in these pre-trained models is orders of magnitude larger than the data used for fine-tuning.
  • the training causes the model to iteratively give higher likelihood to generating the actual words spoken by victims in the training data (the scammer words are treated as model inputs). In this way the end-to-end conversational AI model adapts to new contexts.
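  • A minimal fine-tuning sketch of this scheme, assuming a Hugging Face style seq2seq model (BART is named in the disclosure; the training loop itself is a generic illustration, not the patented method):

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
optimiser = torch.optim.AdamW(model.parameters(), lr=5e-5)

def training_step(scammer_context: str, victim_reply: str) -> float:
    """One step: scammer words are the model input, victim words are the
    labels, so the loss pushes up the likelihood of the victim's reply."""
    inputs = tok(scammer_context, return_tensors="pt", truncation=True)
    labels = tok(victim_reply, return_tensors="pt", truncation=True).input_ids
    loss = model(**inputs, labels=labels).loss  # token-level cross-entropy
    loss.backward()
    optimiser.step()
    optimiser.zero_grad()
    return loss.item()
```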
  • side tasks are implemented, such as predicting from the last hidden layers of the underlying transformer, predicting from the RL action space, or predicting from the adapter framework.
  • Side tasks for embodiments based on a pre-trained transformer natural language processing (NLP) model may, for example, be implemented by predicting from hidden layers of the underlying transformer model or through the adapter framework.
  • Predicting from the RL action space may be understood with reference to T.
  • two types of conversation features may be used as side tasks: features of scammer utterances and/or of victim (bot) utterances.
  • Types of side tasks may include the following:
  • K-Adapter style side tasks, i.e. parallel stacked transformers fed with encoder representations at each layer, as described in Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2021. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1405-1418, Online. Association for Computational Linguistics, incorporated herein by reference.
  • Figure 3 is a schematic representation of a method 300 of predicting features as a side task using a K-Adapter 302.
  • the K-Adapter 302 is a stacked transformer with layer wise inputs of signals from between Encoder layers 304 and from the final Encoder layer.
  • the K-Adapter includes a predictor 308 (typically a fully connected network) with softmax or sigmoid to provide probabilities for predicting features.
  • Figure 4 is a schematic representation of a method 400 of predicting input features from the output of the encoder.
  • the output of the encoder (which is also fed to the decoder/text generator 406 as well as to a memory module, etc.) is fed into an NN model 408 (a one or two layer transformer with a classifier layer, or a one or two layer fully connected network) whose output predicts the feature.
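  • One way such a side-task head could be realised is sketched below (PyTorch; the layer sizes and mean-pooling are assumptions, not specified in the disclosure):

```python
import torch
import torch.nn as nn

class FeaturePredictor(nn.Module):
    """Small side-task head: predicts conversation features (e.g. 'scammer
    threat present') from the encoder's final hidden states."""
    def __init__(self, hidden_size: int, n_features: int) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, hidden_size // 2),
            nn.ReLU(),
            nn.Linear(hidden_size // 2, n_features),
        )

    def forward(self, encoder_states: torch.Tensor) -> torch.Tensor:
        pooled = encoder_states.mean(dim=1)     # pool over the token axis
        return torch.sigmoid(self.net(pooled))  # per-feature probabilities
```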
  • FIG. 5 is a schematic representation of a sequence-to-sequence transformer model 500.
  • the model 500 generates text one word at a time, with each subsequent word 502 predicted based on the previous words.
  • the model is trained by predicting each word in a training utterance given the previous words.
  • An error is determined from the probability the model gives to the word and is “propagated back through the network” (at 504), providing updates to the model that result in the word having higher probability.
  • When RL is applied, it adds another component to this measured error.
  • the features that relate to conversation length may be exhibited by a whole utterance or by one or more of its words, depending on how the feature is detected.
  • an emotion detector may not indicate which words signified the emotion (in which case the feature is associated with a whole utterance) or may provide some indication of which words contributed to the measured emotion (so the feature is associated with individual words).
  • the RL reward is applied equally to each word.
  • the RL reward is applied to those words that exhibit the feature.
  • the reward is positive for features that are associated with longer conversations, and the reward is negative when associated with shorter conversations.
  • the small positive reward for each new scammer utterance would work the same way as features associated with the (previous) whole generated utterance.
  • the negative reward is applied to all utterances with exponentially decreasing magnitude from the last one (e.g., the full negative reward is applied to the last generated utterance, half of it to the second last, a quarter to the third last, an eighth to the fourth last, etc.).
  • the negative reward is applied using a model to estimate which utterances (or even which words in which utterances) contributed to ending the conversation and by how much, and then the negative reward is applied proportionally to that contribution.
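  • The exponentially decaying allocation works out as in this short sketch (the penalty magnitude is an illustrative assumption):

```python
def distribute_hangup_penalty(n_utterances: int,
                              penalty: float = -5.0) -> list[float]:
    """Full penalty on the last utterance, half on the second last,
    a quarter on the third last, and so on backwards."""
    return [penalty / 2 ** (n_utterances - 1 - i) for i in range(n_utterances)]

# distribute_hangup_penalty(4) -> [-0.625, -1.25, -2.5, -5.0]
```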
  • a K-Adapter style side task may be used for the decoder.
  • features are predicted via separate predictors fed with intermediate layers of the decoder transformer. If the transformer has 12 layers, the model includes 12 (simple) NN predictors, and the errors in their predictions are back-propagated into the transformer (for example with at least a 12 times smaller learning rate than the learning rate for predicting words in training utterances so that these predictors do not dominate training).
  • further training targets may be obtained by integrating background knowledge of scammer methodologies, social engineering and the psychology of persuasion. Further knowledge of scammer methodologies and social engineering techniques to be used as training targets can be obtained by analysis of scam calls, including those available in the public domain and calls between AI bots and scammers.
  • further training targets may be obtained through the discovery of text generation heuristics, acoustic (i.e., sound) processing and voice characteristics found to be effective for longer bot-scammer conversations.
  • the AI bot may be implemented using one or more instances of Blenderbot/2/3, GPT, and/or other Large Language Models (LLMs), fine-tuned on transcripts of videos or voice recordings made of scam baiters.
  • Such transcripts may be manually edited and annotated to remove sections that are not parts of conversations with scammers, and/or to label utterances as either Scammer or Victim (the scam baiter is considered the victim).
  • the AI bot may be trained to utilise structured information about a current stage of the scam call, providing more contextualised responses and allowing tailored responses that depend on the context. This may be done, e.g., using an implementation as described in Meta Fundamental AI Research Diplomacy Team (FAIR) et al., Human-level play in the game of Diplomacy by combining language models with strategic reasoning, Science 378, 1067-1074 (2022), incorporated herein by reference.
  • the text-to-speech module 208 may include one or more voices.
  • Voice cloning is a type of “deep fake” consisting of deep learning Al models that generate speech audio that sounds like a given person from text inputs.
  • the person whose voice is being cloned provides recordings of their voice which are used to train the Al model. Once sufficiently trained, arbitrary text can be provided to the model, and it will “speak” the text in the person’s voice. It is further possible to make variations on the voice to change, for example, the apparent age and gender of the generated voice and modulate expressed emotion.
  • the text-to-speech module 208 is configured to interpolate between “voice personas” and to adapt the “voice personas” along specific characteristics such as age and gender. This is achieved by combining similar technology for adapting images such as faces to the voice cloning function of the TTS module 208.
  • Recent voice cloning models such as YourTTS (described in E. Casanova et al., "YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for everyone", arXiv:2112.02418 [cs, eess], Dec. 2021, http://arxiv.org/abs/2112.02418, incorporated herein by reference in its entirety) use a single vector to represent a voice. Techniques in style representation and transfer such as normalising flows (as described in D. J. Rezende and S.
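  • Given a TTS model conditioned on a single voice vector (as YourTTS is described to be), interpolating between two "voice personas" could be sketched as follows (the unit-norm step is an assumption about the embedding space):

```python
import numpy as np

def interpolate_voices(voice_a: np.ndarray, voice_b: np.ndarray,
                       t: float) -> np.ndarray:
    """Blend two speaker-embedding vectors; t=0 gives voice A, t=1 voice B."""
    blended = (1.0 - t) * voice_a + t * voice_b
    return blended / np.linalg.norm(blended)  # assume unit-norm embeddings
```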
  • FIG. 6 of the drawings shows a flow diagram of a method of rerouting a detected scam call to a conversational artificial intelligence bot.
  • the method 600 comprises detecting (at 602) a received scam call, and rerouting (at 604) the detected scam call to a scam call bot.
  • the scam call bot is configured to prolong the rerouted call.
  • the scam call hot is configured to extend the duration of the rerouted call by interacting with a caller of the scam call via responses determined by the scam call bot.
  • the responses may be determined, for example, based on identified features in the caller’s speech associated with ending and/or extending a call.
  • conversational artificial intelligence is used to interact with a scam call and to derive insights into current scams from such interactions.
  • the scam calls may be detected and rerouted to initiate a SIP call with the bot in several ways:
  • VoIP providers may include dedicated VoIP services or larger telecommunications companies (e.g., OPTUS or TELSTRA in Australia), which typically also have VoIP capabilities.
  • Third party services and individuals forward calls to the bot either through SIP or via assigned VoIP phone numbers.
  • Scam detection may be performed by the phone company, for example using one or more of the methods in Table 1:
  • Network operator scam call detection techniques are not foolproof. Challenges include incoming calls lacking verifiable identity information, calls that travel through multiple carriers lacking metadata, and simple detection heuristics that do not evolve as rapidly as the scam techniques.
  • the call is routed to a user phone.
  • On-phone functionality may be provided to identify a scam call, for example one or more of the methods described in Table 2:
  • If the on-phone scam call detection software does not detect a scam call, the call is put through to the user. If the user identifies the incoming call as a scam call, the user is able to forward the call to the bot system via an on-phone rerouting app.
  • a mobile phone app may be used to redirect scam calls to the bot system.
  • the mobile phone app automatically detects and reroutes received scam calls.
  • FIG. 7 illustrates an embodiment of an on-phone scam detection and rerouting method 700.
  • Scam calls are detected by monitoring received calls (at 702), and comparing (at 704) caller speech patterns with one or more feature databases 706.
  • the feature databases 706 describe, for example, scam patterns or scammer strategies identified from real and/or modelled conversations between scammers and victims. If a received call is identified as a scam call (at 708), the user is notified and the call is rerouted (at 710) to the bot system 210.
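  • A deliberately naive sketch of the comparison step at 704 (the phrase database and threshold are invented for the example; a production detector would use much richer speech-pattern features):

```python
SCAM_PHRASES = {  # illustrative feature-database entries only
    "gift card", "remote access", "warrant for your arrest",
    "your account has been compromised", "verify your identity",
}

def looks_like_scam(transcript: str, threshold: int = 2) -> bool:
    """Flag a call when enough known scam phrases appear in the caller's
    transcribed speech, triggering the reroute at step 710."""
    text = transcript.lower()
    return sum(phrase in text for phrase in SCAM_PHRASES) >= threshold
```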
  • the mobile phone app includes functionality to listen in on the bot-scammer conversation and/or record the conversation.
  • the mobile phone app includes functionality to “scam bait” the scammer, i.e., the phone owner pretends to be a victim while the app records the conversation and sends it to the server’s data storage.
  • If the mobile phone app is unable to detect a scam call and the call continues with the user, the user may realise that the call is a scam call.
  • the mobile phone app also includes functionality allowing the user to identify the call as a scam call (at 712) and to forward the call (at 710) to the server 110 and the bot system 210.
  • FIG. 8A illustrates an embodiment of a method of interacting with a scam call using a conversational artificial intelligence bot to determine scam parameters.
  • the method 800 comprises receiving (at 802) a rerouted phone call identified as a scam call, the call being rerouted e.g., from a honeypot, or from third party scam detection systems.
  • a “telephony honeypot” is a collection of phone numbers made available to calls from the public with the intention of attracting calls from scammers. The numbers may be “dirtied” in some way such as including them on unreliable or dishonest websites that collect personal information.
  • the method then processes (at 804) received caller speech from the rerouted phone call to determine a response.
  • the method may also incorporate scammer audio processing, such as voice printing and/or scam background noise detection and processing. This type of scammer context identification facilitates identification of specific scam operations.
  • the method 800 further includes interacting (at 806) with a caller, using the determined response to hold a conversation. The response is determined in order to extend a duration of the call conversation.
  • the processing may include identifying features in the received call speech associated with ending and/or extending a call, for example by identifying negative emotions and/or threats in the caller speech.
  • the response is determined in order to maximise the duration of the phone call.
  • the processing of the received caller speech may comprise utilising a conversational artificial intelligence bot trained with a reinforcement learning training objective with a small positive reward for each utterance and a large negative reward when the rerouted phone call ends.
  • the call conversation, or one or more parts of the conversation are recorded and stored at 808.
  • an analysis of the data artifacts and/or one or more parts of the call conversation is performed to determine one or more scam parameters 812 that relate to, for example, a scam target, a scam structure, a scam technique, a financial instrument, scammer phone number, scammer voice prints, classification of background noise during scam and/or a scam classification.
  • the method includes generating statistics from the call(s) (these may be aggregated statistics), and then determining one or more scam parameters based on the statistics.
  • the statistics, aggregated statistics, and/or determined parameters may be stored.
  • the method 800 obtains timely information about current and new phone scams through analysis of conversations between the AI bot and scammers. In some embodiments near real-time information and/or alerts regarding new and emerging scam campaigns are generated. In this way, the method enables early identification of emerging scam campaigns. To achieve effective real-time information, a representative sample of scams may be fielded. To ensure a representative sample, a large number of scam calls may be fielded (for example dozens, hundreds, or thousands of calls). In some embodiments existing scam research techniques may be used, such as crowd sourcing (i.e., reports from the public) and/or monitoring social media channels.
  • the scam calls may include inbound calls from scammers and/or outbound calls to known scammers.
  • actively maintained telephony honeypots may be used to alert scammers of the inbound number(s).
  • the determined scam parameters may facilitate identifying a “target” of the scam, i.e. the organisation that the scammer is pretending to represent. Given this information, the organisation can be alerted and can in turn alert their customers of the existence of the scam and enact a pro-active defence against the identified scam and scam strategy.
  • the determined scam parameters may facilitate identifying the structure of the scam, for example the script or sequence of steps the scammer leads victims through.
  • the AI bot may, for example, employ a combination of state of the art "topic modelling" (an NLP technique) and Hidden Markov Model (HMM) sequence modelling to do this.
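  • As a simplified illustration of the sequence-modelling half of that idea, a first-order Markov approximation can expose the scripted stage structure (per-utterance topic labels are assumed to come from a separate topic model):

```python
import numpy as np

def topic_transition_matrix(topic_sequences: list[list[int]],
                            n_topics: int) -> np.ndarray:
    """Estimate transition probabilities between per-utterance topic labels;
    dominant rows reveal the sequence of stages in a scam script."""
    counts = np.ones((n_topics, n_topics))  # Laplace smoothing
    for seq in topic_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)
```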
  • the determined scam parameters may facilitate identifying information pertaining to social engineering techniques applied in the scams and how those techniques are concretely instantiated.
  • Determined scammer financial instruments may be used by financial institutions to disable those instruments thus stopping scammers from receiving funds.
  • the determined scam parameters may facilitate identifying information about and classification of types of scams.
  • the determined scam parameters may be utilised to inform effective public education countermeasures for scams.
  • the determined scam parameters may also be used to inform AI bot training and configuration.
  • one AI bot may be used to implement the functionality described herein.
  • separate AI bots may be fine-tuned for each scam type and/or for specific scams and/or scripts. In some embodiments this can be done by training on the transcripts themselves, by addition of side tasks informed by scam classes and types, social engineering techniques, and/or training on scam structures. In some embodiments this can be done by training on crafted synthetic transcripts with content informed by a combination of real transcripts, scam parameters and/or persuasive techniques found to be effective against scammers.
  • the scam type is automatically detected through a separate model trained for this purpose that runs alongside the conversational AI bot during a call. An appropriate AI bot selection may be based on said automatic detection.
  • AI chatbots can effectively incorporate structured state information to better generate text appropriate to the specific context (see the description herein referring to Figure 17, for example).
  • this kind of training is used in conjunction with determined scam parameters such as information about which scam stage an ongoing conversation is at.
  • the systems described herein are configured to extract insights from bot-scammer conversations, and to extract actionable insights such as scammer financial instruments and phone numbers. Extracted financial instrument information can be used by banks to block scammer finances.
  • Figure 8B is a schematic diagram of an embodiment of the data analysis 810.
  • Data Artifacts include one or more of the following: a. Individual scammer words (audio and/or transcribed), b. Scammer utterances (audio and/or transcriptions), c. Audio stream(s), d. Whole phone call transcripts and/or audio.
  • the analytics processing pipeline comprises one or more analytics modules 822.
  • the data artifacts are provided to the analytics modules, in real-time or near realtime.
  • Analytics modules may access data already in the database, allowing for analysis and statistics of accumulated data.
  • the artifact is passed to multiple analytics modules each designed to extract specific insights and information.
  • Overall modules may include: a. LLM (large language model) based classification and scam parameter recognition models (example LLMs are ChatGPT, LLAMA or Mixtral), b. audio processing, e.g. to classify background noise and/or create voice prints, c. text based machine learning models that recognise scam parameters, d. modules designed specifically to recognise scammer financial instruments and/or phone numbers provided to victims by scammers, e. calculation of statistics of identified scam parameters over single calls and/or over multiple calls.
  • Results are passed to a handling process 824.
  • the handling process may create real-time intelligence alerts 825.
  • the handling process stores results in a database 826.
  • Some embodiments of analytics modules 822 include LLM (large language model) based classification and parameter recognition models.
  • a prompt for an LLM is generated consisting of a set of natural language instructions followed by data to be analysed, presented in a specific format.
  • the constructed prompt is passed to the LLM, and the response from the LLM contains the desired scam parameters.
  • LLM prompts are configured to identify scam-specific parameters such as scam targets, scam structures, scam techniques, financial instruments, scammer phone numbers and/or a scam classification. LLM prompts are configured to produce machine friendly outputs (such as JSON with a specific format) and to extract multiple scam parameters with a single prompt.
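  • A sketch of that prompt-and-parse flow (the key names and the llm_complete callable are illustrative assumptions, not taken from the disclosure):

```python
import json

PROMPT_TEMPLATE = """You are analysing a transcript of a suspected scam call.
Return ONLY a JSON object with the keys "scam_target", "scam_technique",
"financial_instrument", "scammer_phone_number" and "scam_classification".
Use null for anything not present in the transcript.

Transcript:
{transcript}
"""

def extract_scam_parameters(transcript: str, llm_complete) -> dict:
    """Build the analysis prompt, call a caller-supplied LLM completion
    function, and parse the machine-friendly JSON reply."""
    reply = llm_complete(PROMPT_TEMPLATE.format(transcript=transcript))
    return json.loads(reply)
```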
  • Some embodiments of analytics modules 822 are configured to extract specific actionable intelligence such as scammer financial instruments (e.g. Bank account details), phone numbers, or the like.
  • the methods described herein may include one or more ways to access and/or distribute analytics results (i.e., scam intelligence), such as: a. Generate real-time or near-real-time alerts 825, for example sent by email, SMS or push notifications. This is triggered immediately on recognition of specific analytics outputs such as recognition of new scam campaigns impacting a client. b. Through a web API. Clients access these APIs with their own data analytics platforms. c. Via data dashboards. Clients access web dashboards to view the data. This includes a display of statistics, progressions and projections of extracted scam insights and/or intelligence tailored to specific clients. d. Report generation. Reports are generated and sent to clients, either on a regular basis or as requested by clients.
  • Clients are provided with authentication details that allow access to intelligence data as determined by their contract or arrangement with the data supplier.
  • Bots are trained to enable financial instrument interventions. Bots may be configured to encourage scammers to continue through their scam and arrive at the point where a financial transaction is requested from the bot acting as a scam victim. Where scammers ask for credit card details, such bots are configured to provide those details in a believable manner (e.g. through indicating that a social engineering technique is working, such as showing fear about a threat, asking for help from a scammer posing as an authority figure, etc.). This differs from merely wasting time and requires the bot to be agreeable, to provide appropriate responses to the scammer such that the scammer is satisfied and moves ahead to subsequent stages in their scam, and may require the bot to recognise social engineering techniques and show that the techniques are working on the bot to enhance scammer confidence.
  • the techniques may require the bot to be provided with information on how the scam is proceeding to better facilitate generating appropriate responses, and may require recognition of the type of scam and its procedure in order to provide that information to the bot.
  • the techniques may also require social engineering techniques against the scammer such as indicating that the bot has a lot of money and that the bot believes the scam.
  • This type of bot is realised through engineered LLM prompts instructing the LLM to be agreeable, providing contextual information on scam progression in the prompts, managing scammer engagement through e.g. instructions to display doubt, instructions to provide specific requested information, instructions to express fear and/or submission.
  • the bot operates two streams of processing.
  • the first is the conversational AI bot: a. A decision is made to emit a bot utterance. b. LLM prompts and conversation context are provided to an LLM, which generates an utterance that is then passed to the scammer. c. The LLM utterance is added to the conversation context. d. If the scammer responds, this response is added to the conversation context.
  • the second processing stream triggers each time the conversation context is updated: a. The updated conversation context is processed to identify e.g. scammer engagement, scammer emotions such as anger and frustration, progression through scam stages, and specific information requested by the scammer. b. The identified scammer state and scam stage are utilised to build an LLM prompt or to select from previously engineered LLM prompts optimised to encourage the scammer to continue with the scam. E.g.: if the scammer is frustrated, the LLM is instructed to a) provide any requested information and/or b) act submissively, following any instructions from the scammer.
  • the LLM may be instructed to express some doubt about providing the information.
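A minimal sketch of this state-conditioned prompt selection is given below; the state labels, stage names and prompt texts are purely illustrative and are not taken from the specification.

```python
# Map (scammer state, scam stage) pairs to previously engineered LLM prompts.
ENGINEERED_PROMPTS = {
    ("frustrated", "payment"): (
        "Act submissively and follow the caller's instructions. Provide any "
        "requested information, expressing mild doubt before complying."
    ),
    ("engaged", "setup"): (
        "Be agreeable and show that the caller's persuasion is working, e.g. "
        "express worry about the threat and ask the caller for help."
    ),
}

def select_prompt(scammer_state: str, scam_stage: str) -> str:
    # Fall back to a generic agreeable persona when no specific prompt exists.
    default = "Be agreeable and keep the conversation going."
    return ENGINEERED_PROMPTS.get((scammer_state, scam_stage), default)
```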
  • financial institutions may receive intelligence about scammer financial instruments and can use this information to render unusable those instruments, thus preventing scammers from receiving funds from their victims.
  • the bank may create a working credit card and supply details to the victim bot, and the bank then observes the associated credit card account for any transactions.
  • the victim bot may then reveal card details to scammers at an appropriate moment (e.g. when requested, perhaps with some resistance).
  • the bank is then able to observe transactions and obtain the scammers’ credit card merchant details.
  • the bank may perform due diligence to ensure the transaction was indeed from a scammer.
  • the bank may then also contact a partner credit card vendor and block the merchant account. In some cases, recent credit card transactions by that merchant account may be reversed, returning victim funds.
  • FIG. 9 is a schematic diagram of an exemplary embodiment of a call processing system 900.
  • the system 900 includes a configuration server 1000.
  • audio is passed into the system 900 from the telephony endpoint 202 through an audio socket.
  • a pipeline and bot configuration is selected by the configuration server 1000.
  • audio is processed by the Speech To Text (STT) module 204, rendering it as text utterances.
  • the specific STT module and configuration for that module are specified in the configuration managed by the configuration server 1000, and may include, for example, Azure or Google STT functionality.
  • the STT module 204 also includes a speech detector 930.
  • if speech is detected while the AI bot 206 is processing, the subsequent utterance is stored and sent (along with any other utterances that appear during processing) to the AI bot 206 as a single text when the bot has completed processing.
  • This may be enabled or disabled as specified in the configuration managed by the configuration server 1000. This is referred to as “Overtalk Prevention” and is described below with reference to Figure 12.
  • the conversation controller 990 (described in more detail elsewhere herein) is responsible for controlling the flow of the conversation by managing turn-taking, triggering the pipeline to respond to the scammer's utterances, triggering time-wasting phrases, conversation repair phrases, and/or backchanneling (e.g. “uh-huh”, “yeah”, “okay”, etc.), interrupting the scammer, and/or barge-in detection (i.e., when the scammer interrupts the bot).
  • the AI bot 206 selects and/or generates a response utterance.
  • one or more hard coded phrases may be injected into the bot’s speech, bypassing the AI bot 206 processing at 905b.
  • this may be done at the beginning of the call in the form of a sequence of initial phrases (e.g. “Hello”).
  • Phrases, phrase generation, and/or phrase selection may be specified in the configuration received from the configuration server 1000.
  • phrases may be randomly selected from a pre-set list.
  • this may be done during the call on a random basis in the form of time wasting phrases such as “I’m sorry, I didn’t catch that?”.
  • the rate at which time wasting phrases are injected may be specified in the configuration, and may be altered in a random or pseudo-random way during the call.
  • Scammer utterances and injected phrases are added to the conversation history of the AI bot 206. This may be achieved by collecting utterances (for example based on an expected pattern of: <scammer-utterance>, <bot-utterance>, <scammer-utterance>, etc.) and then passing the list of utterances to the AI bot 206, which interprets them in the same way as utterances generated by the AI bot 206.
  • scammer utterances (together with any injected AI bot utterances) are passed to the AI bot 206 and the AI bot 206 generates a response utterance.
  • the specific AI bot model used for this may be specified in the configuration.
  • a limit may be configured for the maximum number of long sentences the AI bot 206 can say in a response.
  • the AI bot 206 removes sentences from the end of the response when this limit is exceeded.
  • the maximum number of long sentences and the minimum number of words for a sentence to be considered long are specified in the configuration.
  • disfluencies such as “um…” may be injected into the response utterances. Disfluencies may be randomly selected from a pre-set list and injected at a specified rate.
  • the configuration specifies (a) whether this is enabled as well as (b) the frequency, i.e., how the specified rate is determined (e.g. using a pseudorandom timer or according to a pre-set selection).
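A minimal sketch of such disfluency injection, assuming a per-word injection probability taken from the configuration (the phrase list and rate are illustrative):

```python
import random

DISFLUENCIES = ["um…", "uh…", "hmm,", "well…"]  # pre-set list (illustrative)

def inject_disfluencies(utterance: str, rate: float, rng: random.Random) -> str:
    """Randomly insert disfluencies before words at the configured rate."""
    out = []
    for word in utterance.split():
        if rng.random() < rate:
            out.append(rng.choice(DISFLUENCIES))
        out.append(word)
    return " ".join(out)

# Example: inject_disfluencies("I need to find my reading glasses", 0.15,
#                              random.Random(7))
```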
  • a response utterance is processed by the Text To Speech (TTS) module 208, which returns speech audio.
  • the response utterance is processed to include Speech Synthesis Markup Language (SSML) which allows specifications for the speed, pitch, volume and speech styles (e.g. emotions) of the voice to be generated.
  • the specific TTS module may include one or more functional blocks, provided by, e.g., Azure, Google, and/or custom voice cloning TTS software.
  • Configuration for the TTS module 208 may be specified in the configuration, for example, the configuration may specify which voice to use, SSML, etc.
  • discarding the response utterance is managed by the Response Controller 909 that also edits the AI bot conversation history to remove the discarded response and reflect any changes made at 906 and/or 907. This may be enabled or disabled as specified in the configuration.
  • speech audio may be merged with background audio.
  • Background audio may be selected from a pre-set collection of audio files.
  • specific background audio stream and relative volume of speech and background audio are specified in the configuration.
  • audio effects may be applied to the merged voice and background audio, and the resulting audio stream is passed to the telephony endpoint.
  • the specific audio effects and their parameters may be specified in the configuration.
  • the conversation controller 990 receives the following inputs: STT partial utterances, STT final utterances, and Voice Activity Detection (VAD) results.
  • the conversation controller 990 may additionally or alternatively receive call audio and/or use multimodal AI to help with turn-taking.
  • the conversation controller 990 has the ability to know the bot’s transcript and state for interrupting the bot, and/or the ability to inform when the bot is talking and the scammer should be listening, and/or when there is barge-in from the scammer.
  • the conversation controller 990 triggers one or more of the following actions: time-wasting phrases, conversation repair phrases, backchannelling phrases, the AI bot generating a response, and interrupting the bot.
  • Backchannelling is a way for a listener to show that they are listening to the speaker. It involves small phrases like “uh-huh”, “yeah”, “okay”, etc. It can also be used to fill silence in a conversation, and before taking a turn in the conversation to show that the listener is thinking.
  • Backchanneling makes the task of barge-in detection more difficult as backchanneling can be detected as speech, but does not signal an intent to interrupt.
  • Time-wasting phrases are phrases that are said by the bot to waste the scammer’s time. They are pre-defined phrases such as “I’m sorry, I didn’t quite catch that. Can you repeat that?” or “I need to sit down, can you wait a moment please”. These are injected randomly into the conversation whenever the bot has its turn to speak. The same phrase is preferably not repeated more than once per call.
  • Conversation Repair Phrases are injected into the conversation and said by the bot. However, these are not randomly injected: they are used when the Speech-To-Text is slow or broken, and include phrases such as “What was that? I didn’t quite catch it?” or “Sorry, it’s noisy here, can you repeat that?”. These phrases are exclusively phrases that show the bot not hearing the scammer and asking the scammer to repeat. Time-wasting phrases can include phrases like this but are not limited to them.
  • the Inter-Pausal Unit (IPU) is a unit of speech delimited by a pause of a certain minimum duration, for example 200ms.
  • the IPU threshold is used to break speech into portions, while a ‘No Words’ threshold and a ‘No Final’ threshold are used to detect when the STT is not functioning correctly in order to cause the bot to respond to partial STT utterances, or to use conversation repair phrases.
  • the IPU can be used to break up the speech into portions to perform user-state detection or other tasks and can be the first trigger for backchannel responses.
  • the No Words threshold and the No Final threshold are used to detect when the VAD and the STT do not agree on the end of an utterance (or when the STT is not functioning correctly).
  • the No Words threshold is the first threshold to be reached. It may require, for example, between 2 and 6 seconds of silence (e.g. about 3.5s) from the VAD with the STT having no partial response. This is an indication that the STT is not functioning correctly, as it would normally have at least a partial response after such a time interval. This triggers a conversation repair phrase to be said by the bot to the scammer.
  • the No Final threshold requires a longer time interval of 3-8 seconds (for example 5s) of silence from the VAD to be activated. This is an indication that the STT is functioning slowly or incorrectly. If the STT has no words yet, then the No Words threshold would have been triggered and conversational repair would have started. If there are words from the STT in the form of a partial response, then this threshold triggers the pipeline to respond using the partial text.
  • a turn-switching time interval of 1-5 seconds, for example 3s, may be used to force turn-switching even when the user-state detection shows that the speaker wants to keep their turn.
  • a default 5s interval may be used because the partial responses can take up to 3s to return from the STT.
  • the goal of a 5s delay is to allow the STT to finish if it is working. This can be configured with the configuration server.
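A simplified sketch of this threshold logic is given below; the numeric defaults mirror the example values above, while the function and field names are illustrative (a deployment would read these values from the configuration server).

```python
from dataclasses import dataclass

@dataclass
class Thresholds:
    no_words_s: float = 3.5     # silence with no STT partial -> repair phrase
    no_final_s: float = 5.0     # silence with a partial but no final -> use partial
    turn_switch_s: float = 3.0  # force turn-switching despite user-state detection

def decide_action(silence_s: float, partial_text: str, t: Thresholds) -> str:
    """Decide how to handle VAD/STT disagreement on the end of an utterance."""
    if silence_s >= t.no_final_s and partial_text:
        return "respond_to_partial"   # STT slow: respond using the partial text
    if silence_s >= t.no_words_s and not partial_text:
        return "conversation_repair"  # STT broken: ask the scammer to repeat
    return "wait"
```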
  • State Transitions 1700 of the conversation controller 990 may be understood with reference to Figure 17 of the drawings and Table 3 below.
  • Conversation states correspond to who is talking and who is listening.
  • “Bot Responding” refers to the AI bot generating a response to the scammer’s utterance.
  • once the response is finished generating, it is sent to the TTS module.
  • once the TTS finishes generating the speech audio, the duration of this speech is calculated and the state is set to “Bot Responding” for this duration.
  • the audio is then sent to the audio-mixer to be mixed and sent over the phone line.
  • the turn taking algorithm may include one or more goals, such as minimising silence and/or minimising talking over one another.
  • the states labelled in Figure 17 as “BAD” 1702, 1704 are times when the bot and scammer are talking at the same time. This is an indication of a false positive turn-taking detection, or the scammer interrupting the bot.
  • the AI bot does not take the scammer interrupting the bot into consideration, and in most embodiments the bot will not intentionally talk over the scammer, hence the state being labelled as “BAD”, i.e., not intended.
  • these states may trigger the AI bot stopping in the middle of an utterance in response to barge-in from the scammer.
  • the AI bot is configured to do one or more of: (1) detect barge-in, (2) assess if the scammer intends to interrupt or not, (3) based on (1) and/or (2) stop the AI bot talking in mid-utterance based on scammer barge-in, and (4) update the AI bot conversation history with the barge-in interruption information.
  • the AI bot may be configured to detect a scammer’s intention to interrupt in various ways, and this functionality may include one or more of the following: (i) using the state model shown in Figure 17 and described in Table 3, (ii) ignoring false positives (e.g. a back channel or background noise from the scammer would not be an intent to interrupt the AI bot), (iii) via use of a language model (i.e., another AI model which may be incorporated in or separate to the bot) that takes the conversation history and parameters associated with a current utterance of the scammer and detects whether the intent is to interrupt the bot or not.
  • States are logged in a transcripter.py object using the log_state(state) function.
  • the states are logged with the current timestamp. The duration of these states can be inferred by looking at the timestamps of the next state.
  • the transition 1720 from Silence to Bot Talking occurs once per call.
  • the transitions 1722 from silence to scammer talking, from bot responding to scammer overtalking, and from bot talking to scammer overtalking are caused by the scammer talking.
  • the transitions 1724 from the scammer talking to the bot responding, and from the scammer overtalking the bot’s response to the scammer overtalking the bot talking, both skip a phase.
  • the transition 1726 from the scammer overtalking a bot response to the bot talking is an unlikely transition.
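For illustration, the states and transitions of Figure 17 might be encoded as below; the state and event names are paraphrases of the description rather than identifiers from the specification.

```python
from enum import Enum, auto

class State(Enum):
    SILENCE = auto()
    SCAMMER_TALKING = auto()
    BOT_RESPONDING = auto()        # response/TTS audio being generated
    BOT_TALKING = auto()
    OVERTALK_RESPONDING = auto()   # "BAD": scammer talks during generation
    OVERTALK_TALKING = auto()      # "BAD": scammer talks over bot audio

# A partial transition table mirroring Figure 17 (events simplified).
TRANSITIONS = {
    (State.SILENCE, "bot_greets"): State.BOT_TALKING,        # once per call
    (State.SILENCE, "scammer_speaks"): State.SCAMMER_TALKING,
    (State.SCAMMER_TALKING, "utterance_final"): State.BOT_RESPONDING,
    (State.BOT_RESPONDING, "tts_audio_ready"): State.BOT_TALKING,
    (State.BOT_RESPONDING, "scammer_speaks"): State.OVERTALK_RESPONDING,
    (State.BOT_TALKING, "scammer_speaks"): State.OVERTALK_TALKING,
}

def next_state(state: State, event: str) -> State:
    return TRANSITIONS.get((state, event), state)  # stay put on unknown events
```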
  • Figure 10 is a flow diagram illustrating the operation of the configuration server 1000 that forms part of the call processing system 900.
  • the configuration server 1000 maintains a looped list 1002 of pipeline configurations. Configurations can be added or removed from the list via a REST interface 1004. A web front end may be provided to view and edit the configuration queue, communicating with the configuration server via the REST interface 1004.
  • Configurations can specify random selection, pseudo-random selection, or specific configuration patterns selected from pre-set lists of values for some features, such as TTS voice and background audio.
  • a pipeline configuration is requested from the configuration server.
  • a configuration is selected by checking the configuration list 1008 at 1010. If it is determined at 1012 that there are configurations in the list, then at 1014 the next configuration is selected. Selection continues from the beginning of the list after the last configuration in the list has been selected. Alternatively, if at 1012 it is determined that the list is empty, a default configuration is selected at 1016.
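A minimal sketch of this looped selection, assuming an in-memory configuration list, is shown below; the class and method names are illustrative.

```python
class ConfigurationQueue:
    """Round-robin selection over a looped list of pipeline configurations."""

    def __init__(self, configurations: list, default: dict):
        self._configs = configurations
        self._default = default
        self._index = 0

    def next_configuration(self) -> dict:
        if not self._configs:              # empty list -> default configuration
            return self._default
        config = self._configs[self._index % len(self._configs)]
        self._index += 1                   # wrap to the start after the last entry
        return config
```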
  • configuration elements are propagated to the respective pipeline modules including the TTS 208, the STT 204, the mixer in the audio module 209, and the AI bot 206.
  • call processing with the pipeline is initiated.
  • FIG. 11 illustrates an exemplary embodiment of the audio processing module 209.
  • the audio processing module performs steps 910 and 911 in Figure 9.
  • the module 209 is configured to merge audio from multiple sources 1100 including voice audio 1102 from the TTS module 208, one or more background audio sources 1104, and/or any other audio sources 1106 specified in the configuration.
  • the audio processing module 209 continuously merges audio from one or more of the audio sources.
  • the TTS source 208 intermittently produces voice audio, and in some embodiments when this intermittent voice audio is received the amplitude (volume) of the voice audio may be adjusted at 1110, with the adjustment level specified in the configuration. When no speech audio is available, the audio merger 1112 forwards only the background audio.
  • the merged audio may be passed through one or more audio filters 1114 (such as low-pass, high-pass, and/or band-pass filters), typically applied in sequence.
  • the filter parameters are specified in the configuration.
  • Processed audio is then passed to the next module, i.e., the telephony endpoint.
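A toy sketch of the merging and filtering steps is given below, assuming frames of floating-point samples; the gain handling and the single-pole low-pass filter are illustrative stand-ins for the configured effects.

```python
import numpy as np

def mix_frame(background: np.ndarray, voice, voice_gain: float) -> np.ndarray:
    """Merge one frame of background audio with optional TTS voice audio.

    When no speech audio is available (voice is None), only the background
    audio is forwarded; the gain would come from the pipeline configuration.
    """
    if voice is None:
        return background
    mixed = background + voice_gain * voice
    return np.clip(mixed, -1.0, 1.0)       # avoid clipping after summation

def low_pass(frame: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """A toy single-pole low-pass filter applied to the merged audio."""
    out = np.empty_like(frame)
    acc = 0.0
    for i, sample in enumerate(frame):
        acc += alpha * (sample - acc)
        out[i] = acc
    return out
```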
  • Figure 12 illustrates an overtalk module 1200 used to gather utterances to prevent interruption at 904 in Figure 9.
  • the speech audio is converted to text in the STT module 204, producing an utterance 1202.
  • Overtalk prevention 904 is then executed by the overtalk module 1200 based on a processing lock status retrieved from the configuration provided by the configuration server 1000.
  • if the processing lock is activated, then the utterance 1202 is added to an utterance queue. If the processing lock is not activated when an utterance 1202 is received, then the processing lock is activated at 1206 and the utterance 1202 is merged with the utterance queue and, at 1210, passed to the AI bot for utterance generation.
  • once utterance generation is complete, the processing lock is de-activated at 1208. If the utterance queue is not empty, then the utterances in the queue are merged and passed on to the AI bot for utterance generation.
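A sketch of this lock-and-queue behaviour might look as follows; the class name and the response callback are assumptions.

```python
import queue
import threading

class OvertalkPreventer:
    """Gather utterances arriving while the AI bot is busy (cf. 904/1200)."""

    def __init__(self, generate_response):
        self._lock_active = False
        self._pending: queue.Queue = queue.Queue()
        self._mutex = threading.Lock()
        self._generate_response = generate_response  # callback into the AI bot

    def on_utterance(self, utterance: str) -> None:
        with self._mutex:
            if self._lock_active:
                self._pending.put(utterance)   # bot busy: queue the utterance
                return
            self._lock_active = True           # activate the processing lock
        self._process(utterance)

    def _process(self, utterance: str) -> None:
        while True:
            self._generate_response(utterance)
            with self._mutex:
                if self._pending.empty():
                    self._lock_active = False  # de-activate the processing lock
                    return
                merged = []
                while not self._pending.empty():
                    merged.append(self._pending.get())
            utterance = " ".join(merged)       # merge queued utterances
```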
  • Some embodiments support the generation of outbound calls to scammers.
  • FIG. 13 illustrates the process for outbound call generation. Outbound calls are initiated from the Asterisk Command Line Interface (CLI).
  • the asterisk server has a dial plan 1302 with two phone extensions 1304, 1306 used for outbound calls which are configured as follows:
  • Extension “12345” creates an audio socket to connect to the call processing pipeline (e.g. the pipeline as illustrated in Figure 9).
  • a channel 1310 is created between these two extensions which results in a call between the pipeline 1320 of the call processing system 900, and the phone of the scammer 1322.
  • outbound calls to known scam phone numbers from “callback” scams, phishing websites or other sources are possible.
  • FIG. 14 shows another exemplary embodiment of a call processing system 1400.
  • deployment of the system is accomplished with two virtual machines (VM) 1402, 1404 and two cloud services 1406, 1408.
  • cloud services 1406, 1408 may include, for example, one or more of:
  • AWS (Amazon Web Services),
  • RONIN, which is an example of a managed AWS environment that can be used for the pipeline and bot VMs.
  • RONIN VMs are accessible from the wider Internet via SSH connections, hence an SSH tunnel may be used for audio socket connections between the asterisk server and pipeline VM.
  • SSH is not required for connections between the pipeline and Bot VMs as both are inside of RONIN.
  • the asterisk server 1410 (providing the telephony endpoint) runs on an AWS VM.
  • a phone call is forwarded to the asterisk server via SIP (Session Initiation Protocol).
  • the asterisk server then creates a unique ID (UUID) for the call, which is propagated to the pipeline VM via the ID of the audio socket and used as a label on all files and metadata associated with the call.
  • the asterisk server creates an audio socket 1412 to the pipeline VM, and then forwards call audio to the socket and stores call audio in a file on the VM.
  • the audio socket may pass through an SSH tunnel 1414 to the pipeline VM.
  • Pipeline VM processing may be understood with reference to the steps indicated in Figure 14 of the drawings.
  • the audio socket is attached to an asterisk client docker container 1420 in the pipeline VM and audio streaming commences.
  • a request is sent to the configuration server 1000 and the returned configuration is propagated to all elements and stored in a database 1422 (such as MongoDB).
  • the asterisk client 1420 passes the audio stream to the STT service (via the STT module, not shown in Figure 14).
  • the STT service returns transcribed speech as text portions that are partial or complete sentences.
  • Some embodiments may use Azure STT, which returns a flag stating that the scammer has finished speaking to indicate the end of an utterance. Text is gathered from the STT service until this flag is observed, at which time the gathered text is passed back to the pipeline.
  • accumulated text portions are passed to the AI bot, which determines a response utterance (also in text). Text accumulation processes are described elsewhere herein with respect to process 904 depicted in Figure 9 of the drawings. In some instances, the AI bot may be bypassed (for example when one or more hard coded phrases are injected into the bot’s speech as described with reference to 905a in Figure 9).
  • the accumulated text portions are considered a scammer utterance and are stored to a call transcript log taking the form of a text file.
  • the response utterance is passed to the TTS service (via the TTS module, not shown in Figure 14).
  • Post-processing of utterances such as injecting disfluencies, sentence truncation and/or SSML may also be performed before passing the utterances to the TTS module.
  • the final response utterance after post-processing is stored in the call transcript log as an AI bot utterance.
  • a version of the AI bot utterance including any SSML markup is also stored.
  • the speech audio is passed to the audio mixer 1424 which optionally (a) combines it with, for example, background audio 1426, and/or (b) applies audio filters or other effects 1428.
  • additional data is entered into the database 1422 (in this embodiment provided by a MongoDB instance).
  • a process on the asterisk VM monitors call recordings and passes call metadata 1430 of any new call recordings that appear (which happens with every call) to the database 1422 through the SSH tunnel 1414. This process may also store the call audio itself, for example in an AWS S3 storage facility.
  • configuration metadata alongside call ID and time may also be stored in the database 1422.
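For illustration, storing this metadata with pymongo might look as follows; the connection string, database, collection and field names are hypothetical.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical URI
calls = client["pipeline"]["call_metadata"]        # hypothetical names

def store_call_metadata(call_uuid: str, configuration: dict) -> None:
    """Record configuration metadata alongside the call ID and time."""
    calls.insert_one({
        "call_id": call_uuid,                     # UUID created by the Asterisk server
        "configuration": configuration,           # configuration used for this call
        "timestamp": datetime.now(timezone.utc),  # call time
    })
```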
  • Figure 15 shows a docker deployment of a call processing system, for example like the system described with reference to Figure 14.
  • An Asterisk server 1501 runs on an AWS VM 1504, and is responsible for recording audio at 1506 and providing a call audio socket at 1508.
  • the AI bot 206 is deployed in a separate GPU equipped VM 1510.
  • a relatively simple deployment may be achieved using a single GPU equipped VM for each bot and pipeline on which all docker containers are housed.
  • this type of implementation simplifies the automated deployment of load balancing (as described elsewhere herein).
  • the custom docker containers in the Pipeline VM 1520 may be understood with reference to Figure 15 of the drawings.
  • An Asterisk client container 1502, a configuration server container 1530, an STT container 1532, and an audio mixer container 1534 are provided.
  • a database or MongoDB docker container 1536 is provided.
  • a docker volume 1538 provides a folder on the VM accessible to the asterisk-client docker container 1502 for storing logs and call transcripts.
  • a docker volume 1540 contains audio files, for example used as background audio.
  • the Asterisk client container 1502 contains various modules and supports various capabilities.
  • the Asterisk Client 1502 controls the flow of the call, connecting to the audio socket 1508 from the telecommunications endpoint, passing data to the input of each module, and then passing the returned data to the input of the next module in the pipeline before finally passing the processed audio data back through the audio socket 1508.
  • the Asterisk Client 1502 manages retrieval of configuration data from the configuration server 1530 and distributes it to other pipeline modules.
  • the STT container 1532 houses the STT module that connects to or implements an STT service.
  • the configuration server container 1530 houses the configuration module, and implements a configuration queue and/or a default configuration when a configuration is not set.
  • the configuration server container 1530 provides a web interface for viewing and editing the configuration queue.
  • the Audio Mixer container 1534 houses the audio processing module and continuously produces background audio, merges Al bot response voice audio when available, and/or applies audio filters.
  • the database (MongoDB) container houses a database (e.g. MongoDB) instance that stores configurations of past calls, asterisk derived metadata on past calls, metadata on available background audio, metadata on available TTS voices, and/or logs of exceptions that occur during pipeline operation.
  • the AI bot container (bot-parlai-gpu) 1560 houses the AI bot which converses with the scammer using text input and output. This is situated on a VM 1510 equipped with a fast GPU (graphics processing unit) or other machine learning acceleration hardware.
  • Figure 16 is a schematic representation of a load balancing module 1600 that forms part of the system of Figure 14. This figure depicts the plan for automated deployment, where new pipeline VMs are created when multiple simultaneous calls are received, and VMs are destroyed when demand again drops.
  • the pipeline and bot may occupy a single VM, though similar deployment would be possible with separate VMs.
  • the load balancing implementation of this embodiment maintains a small number of idle VMs at all times so as to be able to accept new calls without delays. Details of metadata and call transcript recording as well as logging use a centralised database (e.g., MongoDB) with which all pipelines are able to communicate.
  • the Asterisk server 1602 is configured to receive calls from a telecommunications provider, and to connect audio sockets to asterisk clients running on pipeline VMs as directed by the Load Balancer 1604 (in this exemplary embodiment implemented using nginx LB).
  • the described load balancing configuration uses a scaled infrastructure to handle large call volumes.
  • the Load Balancer 1604 selects an available pipeline instance to which to connect new incoming calls, and maintains a list of pipeline instances and/or a list of active calls.
  • the Load Balancer 1604 is equipped with an automated healthcheck and/or a status web page.
  • the monitor 1606 checks the status of the system (continuously, or repeatedly on a preset, selected, and/or variable basis), querying the Load Balancer 1604 about the number of idle and/or busy pipeline instances.
  • the monitor 1606 manages the creation and/or destruction of VM instances on the AWS 1608, synchronising the current list of instances with the Load Balancer 1604.
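A sketch of such a monitor loop is given below; `load_balancer` and `cloud` stand in for the nginx load balancer's status interface and the AWS instance API, and all method names are hypothetical.

```python
import time

IDLE_TARGET = 2  # keep a small pool of idle VMs so new calls connect without delay

def monitor_loop(load_balancer, cloud, poll_interval_s: float = 30.0) -> None:
    """Continually reconcile idle pipeline capacity with the idle target."""
    while True:
        idle = load_balancer.count_idle_pipelines()
        if idle < IDLE_TARGET:
            instance = cloud.create_pipeline_vm()         # scale up
            load_balancer.register(instance)              # sync instance list
        elif idle > IDLE_TARGET:
            instance = load_balancer.pop_idle_pipeline()  # scale down
            cloud.destroy_vm(instance)
        time.sleep(poll_interval_s)
```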
  • the methods may be used to reduce the occurrence of vishing and other phone-based scams, may be used as a source of information on the scam landscape, and are readily complementary to existing approaches to scam detection.
  • the methods described herein present a novel approach to gathering threat intelligence on current impersonation phone scams through engaging with phone scammers via conversational AI bots developed to present as a convincing, potentially viable scam victim.
  • traces of conversations between bots and phone scammers provide accurate and timely information on current scammer strategies, objectives and imitation targets that is otherwise unknown in the case of new campaigns, inaccurate or incomplete if reported by humans, or altogether very expensive to obtain.

Abstract

Described is a method comprising receiving a rerouted phone call identified as a scam call, processing received caller speech from the rerouted phone call to determine a response, interacting with a caller using the determined response, and processing at least a part of the call conversation to determine one or more scam parameters. The response is determined in order to extend a duration of a conversation. The scam parameters may include one or more of the following: a scam target, a scam structure, a scam technique, a financial instrument, a scammer phone number, scammer voice prints, a classification of background noise during the scam, and/or a scam classification. The processing may further comprise identifying features in the received call speech associated with ending and/or extending a call.

Description

Scam Call System
Cross-Reference to Related Applications
[0001] The present application claims priority from Australian Provisional Patent Application No 2023901890 filed on 15 June 2023, the content of which is incorporated herein by reference.
Technical Field
[0002] The present disclosure broadly relates to scam call prevention and, more particularly, to a system for, and a method of, using conversational artificial intelligence to interact with a scam call and/or to obtain scam parameters from a scam call.
Background
[0003] A scam call is a voice telephony call generated for the purpose of dishonestly obtaining a benefit, or causing a loss, by deception or other means. Phone calls are the most common way that scammers target victims and have the most financial impact compared to other scam contact methods (such as emails or social networks). Scams include fraud against phone company customers by third parties, for example in the form of telemarketing fraud or caller ID spoofing used for vishing (i.e., voice phishing). Scams might also involve purported security assistance, follow-ups from e-commerce platforms, impersonation of government agency requests, etc.
[0004] Phone companies and governments are actively involved in curbing scam calls, and in some countries governments enforce legislation obliging phone companies to detect, trace and block scam calls. The Communications Alliance is an example of an organisation formed in Australia for the Australian communications industry in order to work towards reducing SMS and telephone scams as outlined in their “Industry Code”. One example of a method used by phone companies addresses caller ID spoofing, where a fake caller ID is displayed when a call is made: phone companies apply scam detection technology that identifies such calls, and the calls are then blocked. These methods are not 100% effective and scam calls still get through and cause harm.
[0005] Some scam call detection systems make use of a bot (also called a chatbot), i.e. an autonomous program that interacts with the caller. In one example, an unsolicited phone call is detected based on an analysis of a conversation between a caller who initiated the call and a bot that uses a voice recording impersonating a scam target individual, and the call is then blocked. A scam target is an organisation that the scammers pretend to represent.
[0006] These types of bots use conversational artificial intelligence (AI) to talk to the caller, i.e. the perpetrator of the scam call. Conversational AI uses machine learning and natural language processing to imitate human interactions by recognising speech and then responding with appropriate phrases, for example providing answers to questions according to a database and/or algorithm.
[0007] Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each claim of this application.
Summary
[0008] Conventional scam call detection systems aim to block and/or terminate a scam call as soon as possible once a scam call has been identified. This conventional strategy, however, not only immediately frees up the resources of the scammer to initiate a new scam call, but also provides limited opportunities to gather intelligence about scam callers and scam call behaviour. The systems and methods described herein aim to do the opposite: once a scam call has been detected, the call is rerouted to connect with a conversational artificial intelligence bot configured to present a convincing scam victim in order to maintain the call for as long as possible. Conversational AI bots engage with scammers, waste their time, and/or make available insights into the scams they perpetrate. In this way, scammer resources remain occupied with the bot and cannot be used to target new scam victims for the duration of the redirected call, and/or said insights are available for scam prevention tasks such as warning and educating potential victims. Advantageously, the extended calls provide opportunities for gathering intelligence about the scammers and about scam calls.
[0009] In one aspect there is provided a method comprising: receiving a rerouted phone call identified as a scam call; processing received caller speech from the rerouted phone call to determine a response; interacting with a caller using the determined response, wherein the response is determined in order to extend a duration of a call conversation.
[0010] The processing may comprise identifying features in the received call speech associated with ending and/or extending a call. The identifying may comprise identifying one or more of: negative emotions in the caller speech, and threats in the caller speech. The advantage provided by these features is that they enable maximising the duration of a scam conversation by maximising engagement of scammers and maximising believability of the bot used for the rerouted call.
[0011] The response may be determined in order to maximise the duration of the phone call.
[0012] The processing of the received caller speech may comprise utilising a conversational artificial intelligence bot trained with a reinforcement learning training objective with a small positive reward for each utterance and a large negative reward when the rerouted phone call ends.
[0013] The method may comprise recording and storing at least a part of the call conversation. The method may comprise processing the stored part and/or processing a real-time part of the call conversation to determine one or more scam parameters. The scam parameters may comprise one or more of the following: a scam target, a scam structure, a scam technique, a financial instrument, scammer phone number, scammer voice prints, classification of background noise during scam, scam statistics, agglomerated scam statistics, statistics of determined or observed scam parameters, and/or a scam classification. In some embodiments one or more scam parameters may be used for early detection of a scam campaign.
[0014] The method may further comprise identifying actionable scam intelligence comprising a scammer's financial instrument and/or phone number.
[0015] In another aspect there is provided a method comprising: detecting a received scam call; and rerouting the detected scam call to a scam call bot, wherein the scam call bot is configured to extend a duration of the rerouted call.
[0016] The scam call bot may be configured to extend the duration of the rerouted call by interacting with a caller of the scam call via responses determined by the scam call bot.
[0017] The responses may be determined based on identified features in the caller’s speech associated with ending and/or extending a call.
[0018] The duration of the call may be extended by intentionally generating and responding with a response imperfection selected from a group comprising: backchannelling utterances, time-wasting phrases, and conversation repair phrases.
[0019] In another aspect there is provided a system comprising: a telephony endpoint for receiving a rerouted scam call; a speech-to-text module configured to convert caller speech from the received scam call to text; a conversational artificial intelligence (AI) bot configured to receive the text from the speech-to-text module, process the received text, determine a response so as to extend a duration of the scam call, and output the determined response; and a text-to-speech module configured to receive the determined response in text form from the bot, convert the text to a voice response, and output the voice response to the caller via the telephony endpoint.
[0020] The text-to-speech module may be configured for voice cloning.
[0021] The conversational AI bot may process the received text by identifying features in the received call speech associated with ending and/or extending a call. The bot may be configured to identify the features by identifying one or more of: negative emotions in the caller speech, and threats in the caller speech.
[0022] The bot may be configured to determine the response in order to maximise the duration of the scam call.
[0023] The bot may be trained with a reinforcement learning training objective with a small positive reward for each utterance and a large negative reward when the rerouted scam call ends.
[0024] The system may further comprise an audio processing module connecting the text-to-speech module and the telephony endpoint, and configured to process the voice response by mixing the voice response with an environment signal. This signal may be an audio signal, mimicking environmental and/or background sounds.
[0025] The conversational AI bot may further comprise a conversation controller adapted to manage a conversation flow by adding utterances to the response that extend the duration of the scam call, wherein the added utterances comprise one or more of: a time-wasting phrase, a conversation repair phrase, a backchannelling phrase, and an interrupting phrase.
[0026] The conversational AI bot may further comprise a response controller configured to: discard a response utterance in response to a scammer utterance occurring during a response utterance processing, and remove said discarded response from a conversation history of the AI bot.
[0027] Throughout this specification the word “comprise” or variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
Brief Description of Drawings
[0028] Embodiments of the disclosure are now described by way of example with reference to the accompanying drawings in which:
[0029] Figure 1 is a schematic representation of a communication network.
[0030] Figure 2 is a schematic representation of a system used to implement a conversational artificial intelligence bot.
[0031] Figure 3 is a schematic representation of a method of predicting features as a side task using a K-Adapter.
[0032] Figure 4 is a schematic representation of a method of predicting input features.
[0033] Figure 5 is a schematic representation of a sequence-to-sequence transformer model.
[0034] Figure 6 illustrates an embodiment of a method of rerouting a detected scam call to a conversational artificial intelligence bot.
[0035] Figure 7 illustrates an embodiment of an on-phone scam detection and rerouting method.
[0036] Figure 8A illustrates an embodiment of a method of interacting with a scam call using a conversational artificial intelligence bot.
[0037] Figure 8B is a schematic diagram of an embodiment of data analysis performed in the method of Figure 8A.
[0038] Figure 9 is a schematic diagram of an exemplary embodiment of a call processing system.
[0039] Figure 10 is a schematic diagram of a configuration server that forms part of the call processing system of Figure 9.
[0040] Figure 11 is a schematic diagram of an audio processing module that forms part of the call processing system of Figure 9.
[0041] Figure 12 is a schematic representation of an overtalk module that forms part of the call processing system of Figure 9.
[0042] Figure 13 is a schematic representation of an outbound call module that forms part of the call processing system of Figure 9.
[0043] Figure 14 is a schematic diagram of another exemplary embodiment of a call processing system.
[0044] Figure 15 is a schematic diagram of an exemplary embodiment of a docker deployment of the call processing system of Figure 14.
[0045] Figure 16 is a schematic representation of a load balancing module that forms part of the system of Figure 14.
[0046] In the drawings, like reference numerals designate similar parts.
Detailed Description
System Overview
[0047] Figure 1 of the drawings illustrates a communication network 100 that supports both data and telephony. The network operator 108 provides telecommunications services to its users via the network 100. A user can make or receive phone calls via a user device 102 (for example a mobile phone, a smartphone, a landline phone, a Voice over IP (VoIP) device or the like). An incoming call from an originating device 104 is managed by the network operator 108, and switched to the user device 102 via the network 100. A server 110 is in communication with the network operator 108 and/or the user device 102 via the network 100.
[0048] Figure 2 is a high level schematic representation of a system 210 provided by the server 110 that is used to implement a conversational artificial intelligence (AI) bot 206. The system 210 and its building blocks (such as the AI bot 206) may be configured to accommodate one or more languages. The system 210 includes a telephony endpoint 202 for receiving a rerouted scam call or initiating calls to known scam phone numbers, and a speech-to-text (STT) module 204 configured to convert caller speech from the received scam call to text. The system 210 has a conversational AI bot 206 configured to receive the text from the speech-to-text module 204, process the received text, determine a response so as to extend a duration of the scam call, and output the determined response. The system 210 includes a text-to-speech (TTS) module 208 configured to receive the determined response in text form from the bot 206, convert the text to a voice response, and output the voice response to the caller via the telephony endpoint 202. The system 210 optionally includes an audio processing module 209 between the TTS module 208 and the telephony endpoint 202. The audio processing module 209 applies audio processing to mimic the background acoustic (i.e. sound) environment of a phone call and enhance voice believability, and outputs the processed voice response to the caller via the telephony endpoint 202. In some embodiments, the TTS module 208 includes voice cloning capabilities.
[0049] The telephony endpoint 202 may be, for example, an Asterisk server. In this embodiment, the system 210 includes a telephony endpoint 202 for receiving a rerouted scam call. In other embodiments the telephony endpoint 202 may be separate from the system 210, interfacing with the system via the STT and TTS modules. The telephony endpoint 202 is capable of receiving Session Initiation Protocol (SIP) calls. SIP is the communication protocol used for VoIP calls. The telephony endpoint 202 communicates with the speech-to-text module 204 and the text-to-speech module 208 (the latter via the audio processing module 209, if present), which in turn communicate with the conversational AI bot 206. The telephony endpoint 202 processes the audio signals of the call and passes them to the speech-to-text module 204 and from the text-to-speech module 208 via raw audio Web Sockets, which in turn communicate with the bot 206 over Web Sockets in plain text. The speech-to-text module 204 may be implemented using, for example, Google STT.
[0050] The architecture of the system 210 described with reference to Figure 2 is highly scalable. Multiple phone numbers and VoIP initiators can be assigned to the same SIP trunk and the telephony endpoint can be replicated and load balanced to withstand many simultaneous calls.
[0051] What the AI bot does:
[0052] The bot 206 is a text-based conversational AI bot, and in some embodiments, open source pre-trained bots such as the ParlAI “BlenderBot” may be adapted to implement the bot 206. The bot 206 is configured to process the received text by identifying features in the received call speech associated with ending and/or extending a call. In some embodiments, the features are associated with negative emotions and/or threats detected in the caller speech.
[0053] In some embodiments, the method comprises processing text based and/or audio based features found to be associated with ending the call. Text based features may include text transcripts from conversations between the scammer and the bot. Audio based features may include emotion, audio ML model outputs, paralinguistic features such as pitch, tempo, loudness, timbre, intonation range, syllabic duration, and/or rhythm. The AI bot is configured to identify features in the text of the transcripts (such as phrases or word patterns identified by machine learning models trained to extract scam stages, e.g. word length, number of words per utterance, uniqueness of words, vocabulary richness, etc.) that may be considered indicators that a call is moving towards its end. The features may be determined by predictions of machine learning (ML) models trained on an objective statistically associated with ending calls, and/or the features may be identified by unsupervised ML models statistically associated with ending calls. Based on these identified features, the AI bot may be configured to avoid one or more of these features in order to avoid ending a call.
[0054] The processing of the received call speech may comprise utilising a conversational artificial intelligence bot trained to mimic victim utterances in scammer-victim phone conversations, e.g. in long scammer-victim phone conversations.
[0055] The processing of received caller speech from the rerouted phone call to determine a response may include the addition of heuristic features to responses determined to increase conversation length. Heuristic features may include predetermined initial responses, addition of speech disfluencies, conforming to a predetermined persona, and/or restriction to a maximum or minimum sentence length.
[0056] The processing of the received caller speech may comprise utilising a conversational artificial intelligence bot trained or fine-tuned on labelled real phone scam transcripts, for example manually labelled real phone scam transcripts. Sources of scam transcripts may include labelled transcripts from publicly available “scam baiter” videos in which concerned individuals (“scam baiters”) converse with real scammers knowing that the call is a scam.
[0057] The bot 206 is further configured to determine the response. In some embodiments, the bot 206 is configured to determine the response in order to maximise the duration of the scam call.
[0058] In some embodiments the bot 206 is configured to mimic scam victims. This may, for example, be done through the addition of short term memory, empathy, and personas that allow the bot 206 to maintain consistent knowledge of personal facts such as a name, address and aspects of a fictitious personal life. The personas include features that enable a sufficiently convincing mimic of a vulnerable human scam victim.
[0059] In some embodiments the bot 206 may include heuristic text generation designed to prolong conversations with scammers and/or produce better quality conversations with scammers. These heuristics may include fixed initial bot utterances, injection of disfluencies into bot utterances, bot utterance sentence length truncation or exclusion of long sentences, heuristics to prevent the bot talking over the scammer.
[0060] In some embodiments, recordings and transcripts from conversations between scammers and AI bots may be analysed to determine threat intelligence information. Threat intelligence information may include: the target organisation that the scammer is pretending to be, the social engineering techniques used by the scammer, the topic of the scammer's script, and/or the structure and/or stages in scripts used by the scammer.
[0061] Threat intelligence from recordings and transcripts of conversations between scammers and AI bots may be utilised (e.g., by sale to a third party) as additional data used by AI bots to effectively prolong calls with scammers. The threat intelligence data may be used to identify and educate potential future scam victims so as to reduce the success rate of scams, or by concerned organisations to warn their customers of existing scam campaigns impersonating the organisations' processes or personnel.
[0062] How the AI bot is trained:
[0063] In some embodiments the bot 206 implements AI models built around large pre-trained sequence-to-sequence models such as BART, T5, and GPT. These models achieve very good fluency. The models are fine-tuned on conversation data such as scam call transcripts for domain adaptation of pre-trained conversational AI models. In one embodiment Blenderbot is fine-tuned on “scam baiter” conversations with real scammers obtained, for example, from YouTube, or from synthetic scammer-victim conversations crafted for conversation diversity and/or for specific conversational patterns. Conversation data for training may be enhanced with the application of text generation heuristics found to be associated with longer scam call conversations.
[0064] The conversational AI bot described herein presents novel challenges to fine-tuning due to long call durations (pilot data averaged 86 utterances) and the adversarial nature of the task (the aim is not high-quality, effective conversation, but to prolong the conversation irrespective of conversational quality).
[0065] “Wild” data from calls with real scammers enables an additional form of training. The primary goal is for the bot to achieve long call durations with real scammers. In some embodiments the duration of a “wild” call (one with a real scammer) is used as a reinforcement learning (RL) training objective with a small positive reward for each utterance and a large negative reward when the scammer hangs up. In this way the AI is optimised for longer conversations via reinforcement learning on call length and dialogue self-play.
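As an illustration of this reward structure (the numeric values are assumptions, not those of the specification):

```python
def call_reward(utterance_count: int, scammer_hung_up: bool,
                step_reward: float = 0.1, hangup_penalty: float = -10.0) -> float:
    """Reward shaping for the call-length RL objective.

    Each bot utterance earns a small positive reward, and the episode ends
    with a large negative reward when the scammer hangs up, so maximising
    return corresponds to maximising call duration.
    """
    reward = step_reward * utterance_count
    if scammer_hung_up:
        reward += hangup_penalty
    return reward
```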
[0066] Identified conversation features that relate directly to longer call durations may also be used as RL training objectives. For example, features associated with scammer script steps and those expected or found to be associated with ending or extending a call, such as scammers' negative emotion and threats. Features taken into consideration for the purpose of extending the duration of a scam call may include one or more of: the subject of the call, emotions, topics, and keywords. Relevant features may be determined through analysis of available scam call transcripts and based on existing research and understanding of persuasion, social engineering and psychology.
Available scam call transcripts will include previously existing public records of scam calls in addition to records of scammer conversations with the bots used to engage with rerouted scam calls. These features are incorporated into training as side tasks in addition to the main fine-tuning task. A model that is able to distil the knowledge necessary to predict call features associated with longer “wild” call durations is equipped to recognise model updates that are effective for achieving longer calls. In this way the duration of a scam conversation can be extended or maximised by increasing engagement of scammers and improving believability of the bot used for the rerouted call.
[0067] The main fine-tuning task consists of further training of a pre-trained model using task specific data (e.g., scam transcripts). The bot is used to attempt to predict words in victim utterances given previous utterances in scam transcripts from the training data. The training is considered “fine-tuning” as the quantity of data used in these pre-trained models is orders of magnitude larger than the data used for fine-tuning. The training causes the model to iteratively give higher likelihood to generating the actual words spoken by victims in the training data (the scammer words are treated as model inputs). In this way the end-to-end conversational AI model adapts to new contexts.
[0068] Transfer learning through training on multiple related tasks may result in an improved model. Therefore, in some embodiments, side tasks are implemented, such as predicting from the last hidden layers of the underlying transformer, predicting from the RL action space, or predicting from the adapter framework. Side tasks for embodiments based on a pre-trained transformer natural language processing (NLP) model may, for example, be implemented by predicting from hidden layers of the underlying transformer model or through the adapter framework.
[0069] Predicting from the RL action space may be understood with reference to T. Zhao, K. Xie, and M. Eskenazi, ‘Rethinking Action Spaces for Reinforcement Learning in End-to-end Dialog Agents with Latent Variable Models’, in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, Jun. 2019, pp. 1208-1218, doi: 10.18653/v1/N19-1123, incorporated herein by reference in its entirety. The method is similar to the method illustrated in Figure 4 of the drawings, with the action space derived from encoder outputs. The encoder output is transformed with a small feed-forward network and passed through a parameterisation function to provide a distribution over a discrete or continuous action space. It is the parameters of this distribution that can be used (again via a small feed-forward network) to predict the features, and thus encourage alignment of actions with the features. The distribution over the action space is then passed to the decoder transformer network.
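By way of illustration only, such a latent action head might be sketched as follows; the layer sizes, the Gumbel-softmax parameterisation and the module interface are assumptions for illustration, not the claimed implementation:

    import torch
    import torch.nn as nn

    class LatentActionHead(nn.Module):
        """Sketch of a discrete latent action space over encoder outputs.
        Sizes and the Gumbel-softmax parameterisation are illustrative
        assumptions."""

        def __init__(self, d_model: int = 512, n_actions: int = 64, n_features: int = 8):
            super().__init__()
            # Small feed-forward network producing distribution parameters.
            self.to_logits = nn.Sequential(
                nn.Linear(d_model, 128), nn.ReLU(), nn.Linear(128, n_actions)
            )
            # Side-task predictor: conversation features from the distribution
            # parameters, encouraging alignment of actions with the features.
            self.feature_head = nn.Sequential(
                nn.Linear(n_actions, 64), nn.ReLU(), nn.Linear(64, n_features)
            )

        def forward(self, encoder_out: torch.Tensor):
            pooled = encoder_out.mean(dim=1)      # (batch, d_model)
            logits = self.to_logits(pooled)       # distribution parameters
            action = nn.functional.gumbel_softmax(logits, hard=False)
            features = self.feature_head(logits)  # side-task prediction
            return action, features               # the action feeds the decoder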
[0070] In some embodiments, two types of conversation features may be used as side tasks: features of scammer utterances and/or of victim (bot) utterances. Types of side tasks may include the following:
[0071] (1) Recognising features of scammer (input) utterances, with side tasks run alongside the bot’s utterance encoder.
[0072] (2) K-Adapter style side tasks, i.e. parallel stacked transformers fed with encoder representations at each layer as described in Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou, 2021, ‘K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters’, in Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1405-1418, Online, Association for Computational Linguistics, incorporated herein by reference.
[0073] Figure 3 is a schematic representation of a method 300 of predicting features as a side task using a K-Adapter 302. The K-Adapter 302 is a stacked transformer with layer-wise inputs of signals from between Encoder layers 304 and from the final Encoder layer. The K-Adapter includes a predictor 308 (typically a fully connected network) with softmax or sigmoid to provide probabilities for predicting features.
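A minimal sketch of such a K-Adapter-style probe, assuming six encoder layers and hypothetical dimensions (illustration only, not the claimed implementation):

    import torch
    import torch.nn as nn

    class KAdapterProbe(nn.Module):
        """Sketch of the Figure 3 side task: a parallel stacked transformer
        fed with the hidden states of each encoder layer, topped with a
        sigmoid predictor. All sizes are illustrative assumptions."""

        def __init__(self, d_model: int = 512, n_layers: int = 6, n_features: int = 8):
            super().__init__()
            self.adapters = nn.ModuleList(
                nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
                for _ in range(n_layers)
            )
            self.predictor = nn.Sequential(nn.Linear(d_model, n_features), nn.Sigmoid())

        def forward(self, layer_states: list[torch.Tensor]):
            x = torch.zeros_like(layer_states[0])
            for adapter, hidden in zip(self.adapters, layer_states):
                x = adapter(x + hidden)  # layer-wise injection of encoder signals
            return self.predictor(x.mean(dim=1))  # per-feature probabilities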
[0074] (3) Figure 4 is a schematic representation of a method 400 of predicting input features from the output of the encoder. The output of the encoder (which is also fed to the decoder/text generator 406, as well as to a memory module, etc.) is fed into an NN model 408 (a one- or two-layer transformer with a classifier layer, or a one- or two-layer fully connected network) whose output predicts the feature.
[0075] (4) A side task based on desirable/undesirable features of victim (bot) utterances.
[0076] (5) A side task that predicts features from the decoder output and/or transformer layer outputs K-adapter-style (as described above).
[0077] (6) RL training with rewards based on (or at least partially based on) completed utterances. Figure 5 is a schematic representation of a sequence-to-sequence transformer model 500. The model 500 generates text one word at a time, with each subsequent word 502 predicted based on the previous words. The model is trained by predicting each word in a training utterance given the previous words. An error is determined from the probability the model gives to the word and is “propagated back through the network” (at 504), providing updates to the model that result in the word having higher probability. When RL is applied, it adds another component to this measured error. The features that relate to conversation length may be exhibited by a whole utterance or by one or more of its words, depending on how the feature is detected. For example, an emotion detector may not indicate which words signified the emotion (in which case the feature is associated with a whole utterance) or may provide some indication of which words contributed to the measured emotion (so the feature is associated with individual words). For whole-utterance features, the RL reward is applied equally to each word. For individual-word features, the RL reward is applied to those words that exhibit the feature. The reward is positive for features that are associated with longer conversations, and negative for features associated with shorter conversations. The small positive reward for each new scammer utterance works the same way as features associated with the (previous) whole generated utterance.

[0078] For the large negative reward for ending the conversation (a minimal sketch of the decay scheme follows below): a. The negative reward is applied to all utterances with exponentially decreasing magnitude from the last one (e.g., the full negative reward is applied to the last generated utterance, half of it to the second last, a quarter to the third last, an eighth to the fourth last, etc.). b. In some embodiments, the negative reward is applied using a model to estimate which utterances (or even which words in which utterances) contributed to ending the conversation and by how much, and the negative reward is then applied proportionally to that contribution.
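The exponential decay of option (a) might be computed as follows; the full penalty value is an illustrative assumption:

    def hangup_penalties(n_utterances: int, full_penalty: float = -10.0) -> list[float]:
        """Full penalty on the last generated utterance, half on the second
        last, a quarter on the third last, and so on (option (a) above)."""
        return [full_penalty / 2 ** (n_utterances - 1 - i) for i in range(n_utterances)]

    # Example: hangup_penalties(4) -> [-1.25, -2.5, -5.0, -10.0]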
[0079] Alternatively or in addition to the positive and negative reward method, a K-Adapter style side task may be used for the decoder. In some embodiments, features are predicted via separate predictors fed with intermediate layers of the decoder transformer. If the transformer has 12 layers, the model includes 12 (simple) NN predictors, and the errors in their predictions are back-propagated into the transformer (for example with a learning rate at least 12 times smaller than the learning rate for predicting words in training utterances, so that these predictors do not dominate training).
[0080] In some embodiments, further training targets may be obtained by integrating background knowledge of scammer methodologies, social engineering and the psychology of persuasion. Further knowledge of scammer methodologies and social engineering techniques to be used as training targets can be obtained by analysis of scam calls, including those available in the public domain and calls between AI bots and scammers.
[0081] In some embodiments, further training targets may be obtained through the discovery of text generation heuristics, acoustic (i.e., sound) processing and voice characteristics found to be effective for longer bot-scammer conversations.

[0082] In some embodiments the AI bot may be implemented using one or more instances of Blenderbot/2/3, GPT, and/or other Large Language Models (LLMs), fine-tuned on transcripts of videos or voice recordings made of scam baiters. In some embodiments such transcripts may be manually edited and annotated to remove sections that are not parts of conversations with scammers, and/or to label utterances as either Scammer or Victim (the scam baiter is considered the victim).
[0083] Some embodiments may include functionality to automatically recognise stages in scam calls by identifying types of scams and sequences of scam stages for each type from analysis of past call transcripts, and/or by recognising the type of scam and current scam stage during live scam calls. In some of these embodiments the AI bot may be trained to utilise structured information about a current stage of the scam call, providing more contextualised responses and allowing tailored responses that depend on the context. This may be done, e.g., using an implementation as described in Meta Fundamental AI Research Diplomacy Team (FAIR) et al., ‘Human-level play in the game of Diplomacy by combining language models with strategic reasoning’, Science 378, 1067-1074 (2022), incorporated herein by reference.
[0084] Voice Cloning
[0085] For text to speech, recent advances in the field have enabled convincing speech generation that is difficult to distinguish from human speech. The text-to-speech module 208 may include one or more voices.
[0086] Voice cloning is a type of “deep fake” consisting of deep learning AI models that generate speech audio that sounds like a given person from text inputs. The person whose voice is being cloned provides recordings of their voice which are used to train the AI model. Once sufficiently trained, arbitrary text can be provided to the model, and it will “speak” the text in the person’s voice. It is further possible to make variations on the voice to change, for example, the apparent age and gender of the generated voice and to modulate expressed emotion.

[0087] In some embodiments the text-to-speech module 208 is configured to interpolate between “voice personas” and to adapt the “voice personas” along specific characteristics such as age and gender. This is achieved by combining technology similar to that used for adapting images such as faces with the voice cloning function of the TTS module 208.
[0088] Recent voice cloning models such as YourTTS (described in E. Casanova et al., ‘YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for everyone’, arXiv:2112.02418 [cs, eess], Dec. 2021, http://arxiv.org/abs/2112.02418, incorporated herein by reference in its entirety) use a single vector to represent a voice. Techniques in style representation and transfer such as normalising flows (as described in D. J. Rezende and S. Mohamed, ‘Variational Inference with Normalizing Flows’, arXiv:1505.05770 [cs, stat], May 2015, http://arxiv.org/abs/1505.05770, incorporated herein by reference in its entirety) and as used in StyleGAN (described in T. Karras et al., ‘A Style-Based Generator Architecture for Generative Adversarial Networks’, arXiv, Mar. 29, 2019, http://arxiv.org/abs/1812.04948, incorporated herein by reference in its entirety) can be applied to this representation to enable effective interpolation of voice qualities. With this type of technology, a wide variety of realistic artificial voices can be obtained that smoothly transition between multiple specific voices, and characteristics such as gender and age can be smoothly varied. This is useful for victim bots to be deployable at scale as it provides a large variety of “voice personas”, so that scammers will have difficulty recognising the bots by voice alone. These features provide the advantage of voice generation that produces a believable bot voice for the rerouted call in order to convince scammers that they are talking to a real person. For this, the generated voice needs to be human-like, with convincibility increasing by applying features such as tone modulation, pauses, disfluencies and emotions.
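For illustration only, a naive blend between two single-vector voice personas could be sketched as below; linear interpolation is an assumption for simplicity, whereas techniques such as normalising flows allow more principled traversals of the voice space:

    import numpy as np

    def interpolate_voice(persona_a: np.ndarray, persona_b: np.ndarray, t: float) -> np.ndarray:
        """Blend two speaker-embedding 'voice personas' (such as the
        single-vector voice representations used by YourTTS-style models).
        t = 0.0 gives persona A, t = 1.0 gives persona B; intermediate
        values yield a continuum of distinct artificial voices."""
        return (1.0 - t) * persona_a + t * persona_b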
[0089] The combination of the conversational AI bot 206 together with the voice cloning capability of the text-to-speech module 208 produces “victim bots” almost indistinguishable from actual scam victims.

Method Overview
[0090] Figure 6 of the drawings shows a flow diagram of a method of rerouting a detected scam call to a conversational artificial intelligence bot. The method 600 comprises detecting (at 602) a received scam call, and rerouting (at 604) the detected scam call to a scam call bot. The scam call bot is configured to prolong the rerouted call, extending its duration by interacting with the caller of the scam call via responses determined by the scam call bot. The responses may be determined, for example, based on identified features in the caller’s speech associated with ending and/or extending a call. In some embodiments, conversational artificial intelligence is used to interact with a scam call and derive insights into current scams from such interactions.
[0091] Rerouting scam calls
[0092] The scam calls may be detected and rerouted to initiate a SIP call with the bot in several ways:
1. From a network operator that forwards calls determined to be scams.
2. From a VoIP provider of leased telephone numbers (i.e., a telephony honeypot). VoIP providers may include dedicated VoIP services or larger telecommunications companies (e.g., OPTUS or TELSTRA in Australia), which typically also have VoIP capabilities.
3. From a smartphone app that allows users to forward scam calls to the bot.
4. From third party services or individuals that reroute scam calls to the bot. Third party services and individuals forward calls to the bot either through SIP or via assigned VoIP phone numbers.
[0093] Scam detection may be performed by the phone company, for example using one or more of the methods in Table 1:
[Table 1 is rendered as an image in the source document and is not reproduced here.]
Table 1 Scam call detection techniques by the network operator
[0094] Network operator scam call detection techniques are not foolproof. Challenges include incoming calls lacking verifiable identity information, calls that travel through multiple carriers lacking metadata, and simple scam detection heuristics that do not evolve as rapidly as the scam techniques.
[0095] When the network operator does not detect a scam call, the call is routed to a user phone. On-phone functionality may be provided to identify a scam call, for example one or more of the methods described in Table 2:
[Table 2 is rendered as an image in the source document and is not reproduced here.]
[0096] On-phone scam call detection techniques are not foolproof. Challenges include that the required analysis must process the incoming call in real-time to notify a user, that scammers modify their methods to remain undetected, and that scam detection software should not interrupt non-scam calls.
[0097] When the on-phone scam call detection software does not detect a scam call, the call is put through to the user. If the user identifies the incoming call as a scam call, the user is able to forward the call to the bot system via an on-phone rerouting app.
[0098] Scam alert rerouting app
[0099] A mobile phone app may be used to redirect scam calls to the bot system. In some embodiments, the mobile phone app automatically detects and reroutes received scam calls.
[0100] Figure 7 illustrates an embodiment of an on-phone scam detection and rerouting method 700. Scam calls are detected by monitoring received calls (at 702), and comparing (at 704) caller speech patterns with one or more feature databases 706. The feature databases 706 describe, for example, scam patterns or scammer strategies identified from real and/or modelled conversations between scammers and victims. If a received call is identified as a scam call (at 708), the user is notified and the call is rerouted (at 710) to the bot system 210.
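A minimal sketch of this comparison step, assuming a hypothetical keyword-style feature database and a hypothetical match threshold (neither is specified in the document):

    # Sketch of the on-phone detection step in method 700: compare caller
    # speech against a feature database and reroute on a match.

    SCAM_FEATURES = {"gift card", "arrest warrant", "remote access", "verify your account"}

    def is_scam(transcribed_speech: str, threshold: int = 2) -> bool:
        """Compare caller speech against the feature database (step 704)."""
        hits = sum(1 for feature in SCAM_FEATURES if feature in transcribed_speech.lower())
        return hits >= threshold

    def handle_call(transcribed_speech: str) -> str:
        # Steps 708/710: notify the user and reroute to the bot system 210
        # on detection; otherwise the call continues with the user.
        return "reroute-to-bot" if is_scam(transcribed_speech) else "pass-to-user"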
[0101] In some embodiments, the mobile phone app includes functionality to listen in on the bot-scammer conversation and/or record the conversation. In some embodiments, the mobile phone app includes functionality to “scam bait” the scammer, i.e., the phone owner pretends to be a victim while the app records the conversation and sends it to the server’s data storage.
[0102] If the mobile phone app is unable to detect a scam call, and the call continues with the user, the user may realise that the call is a scam call. The mobile phone app also includes functionality allowing the user to identify the call as a scam call (at 712) and to forward the call (at 710) to the server 110 and the bot system 210.
[0103] Figure 8A illustrates an embodiment of a method of interacting with a scam call using a conversational artificial intelligence bot to determine scam parameters. The method 800 comprises receiving (at 802) a rerouted phone call identified as a scam call, the call being rerouted e.g., from a honeypot, or from third party scam detection systems. A “telephony honeypot” is a collection of phone numbers made available to calls from the public with the intention of attracting calls from scammers. The numbers may be “dirtied” in some way such as including them on unreliable or dishonest websites that collect personal information.
[0104] The method then processes (at 804) received caller speech from the rerouted phone call to determine a response. In some embodiments the method may also incorporate scammer audio processing, such as voice printing and/or scam background noise detection and processing. This type of scammer context identification facilitates identification of specific scam operations.

[0105] The method 800 further includes interacting (at 806) with a caller, using the determined response to hold a conversation. The response is determined in order to extend a duration of the call conversation.
[0106] Intelligence gained from the analysis of bot-scammer conversations can be “active” in the sense that it can be used in pro-active scam defence efforts such as blocking phone numbers used by scammers and disabling scammers’ financial instruments. Some embodiments include bots configured to extract information from the scammer, such as financial instruments (e.g., bank details or credit card usage). Some embodiments incorporate functionality for scammer disruption by closing and/or rendering unusable the scammers’ financial instruments. In the case of credit cards, for example, banks are able to provide real credit card details and identify scammer commercial credit card accounts. This allows banks to then block the use of the relevant credit card account.
[0107] The processing may include identifying features in the received call speech associated with ending and/or extending a call, for example by identifying negative emotions and/or threats in the caller speech. In some embodiments the response is determined in order to maximise the duration of the phone call.
[0108] The processing of the received caller speech may comprise utilising a conversational artificial intelligence bot trained with a reinforcement learning training objective with a small positive reward for each utterance and a large negative reward when the rerouted phone call ends.
[0109] Optionally, the call conversation, or one or more parts of the conversation (referred to herein as data artifacts, e.g., audio and/or text), are recorded and stored at 808. At 810 an analysis of the data artifacts and/or one or more parts of the call conversation is performed to determine one or more scam parameters 812 that relate to, for example, a scam target, a scam structure, a scam technique, a financial instrument, a scammer phone number, scammer voice prints, classification of background noise during the scam and/or a scam classification. In some embodiments, the method includes generating statistics from the call(s) (these may be aggregated statistics), and then determining one or more scam parameters based on the statistics. Optionally, the statistics, aggregated statistics, and/or determined parameters may be stored.
[0110] The method 800 obtains timely information about current and new phone scams through analysis of conversations between the AI bot and scammers. In some embodiments near real-time information and/or alerts regarding new and emerging scam campaigns are generated. In this way, the method enables early identification of emerging scam campaigns. To achieve effective real-time information, a representative sample of scams may be fielded. To ensure a representative sample, a large number of scam calls may be fielded (for example dozens, hundreds, or thousands of calls). In some embodiments existing scam research techniques may be used, such as crowd sourcing (i.e., reports from the public) and/or monitoring social media channels.
[0111] The scam calls may include inbound calls from scammers and/or outbound calls to known scammers. In some embodiments, actively maintained telephony honeypots may be used to alert scammers of the inbound number(s).
[0112] The determined scam parameters may facilitate identifying a “target” of the scam, i.e. the organisation that the scammer is pretending to represent. Given this information, the organisation can be alerted and can in turn alert their customers of the existence of the scam and enact a pro-active defence against the identified scam and scam strategy.
[0113] The determined scam parameters may facilitate identifying the structure of the scam, for example the script or sequence of steps the scammer leads victims through. The AI bot may, for example, employ a combination of state-of-the-art “topic modelling” (an NLP technique) and Hidden Markov Model (HMM) sequence modelling to do this.

[0114] The determined scam parameters may facilitate identifying information pertaining to social engineering techniques applied in the scams and how those techniques are concretely instantiated.
[0115] Determined scammer financial instruments may be used by financial institutions to disable those instruments thus stopping scammers from receiving funds.
[0116] The determined scam parameters may facilitate identifying information about and classification of types of scams.
[0117] The determined scam parameters may be utilised to inform effective public education counter measures for scams.
[0118] The determined scam parameters may also be used to inform Al bot training and configuration.
[0119] In some embodiments, one AI bot may be used to implement the functionality described herein. In other embodiments, separate AI bots may be fine-tuned for each scam type and/or for specific scams and/or scripts. In some embodiments this can be done by training on the transcripts themselves, by addition of side tasks informed by scam classes and types, social engineering techniques, and/or training on scam structures. In some embodiments this can be done by training on crafted synthetic transcripts with content informed by a combination of real transcripts, scam parameters and/or persuasive techniques found to be effective against scammers. In some embodiments, the scam type is automatically detected through a separate model trained for this purpose that runs alongside the conversational AI bot during a call. An appropriate AI bot selection may be based on said automatic detection.
[0120] AI chat bots can effectively incorporate structured state information to better generate text appropriate to the specific context (see the description herein referring to Figure 17, for example). In some embodiments, this kind of training is used in conjunction with determined scam parameters such as information about which scam stage an ongoing conversation is at.
[0121] The systems described herein are configured to extract insights from bot- scammer conversations, and to extract actionable insights such as scammer financial instruments and phone numbers. Extracted financial instrument information can be used by banks to block scammer finances.
[0122] Figure 8B is a schematic diagram of an embodiment of the data analysis 810.
[0123] While the scam victim bots conduct a conversation with a scammer, data analytics is performed as follows:
[0124] As the conversation progresses, data is added to a queue for analysis at 820. “Data Artifacts” include one or more of the following:
a. Individual scammer words (audio and/or transcribed),
b. Scammer utterances (audio and/or transcriptions),
c. Audio stream(s),
d. Whole phone call transcripts and/or audio.
[0125] The analytics processing pipeline comprises one or more analytics modules 822. The data artifacts are provided to the analytics modules, in real-time or near realtime.
[0126] Analytics modules may access data already in the database, allowing for analysis and statistics of accumulated data.

[0127] The artifact is passed to multiple analytics modules, each designed to extract specific insights and information. Overall modules may include:
a. LLM (large language model) based classification and scam parameter recognition models. Example LLMs are ChatGPT, LLAMA or Mixtral,
b. audio processing, e.g. to classify background noise and/or create voice prints,
c. text based machine learning models that recognise scam parameters,
d. modules designed specifically to recognise scammer financial instruments and/or phone numbers provided to victims by scammers,
e. calculation of statistics of identified scam parameters over single calls and/or over multiple calls.
[0128] Results are passed to a handling process 824. The handling process may create real-time intelligence alerts 825. The handling process stores results in a database 826.
[0129] Some embodiments of analytics modules 822 include LLM (large language model) based classification and parameter recognition models. In these models, a prompt for an LLM is generated consisting of a set of natural language instructions followed by data to be analysed, presented in a specific format. The constructed prompt is passed to the LLM, and the response from the LLM contains the desired scam parameters.
[0130] LLM prompts are configured to identify scam-specific parameters such as scam targets, scam structures, scam techniques, financial instruments, scammer phone numbers and/or scam classification. LLM prompts are configured to produce machine-friendly outputs (such as JSON with a specific format) and to extract multiple scam parameters with a single prompt.

[0131] Some embodiments of analytics modules 822 are configured to extract specific actionable intelligence such as scammer financial instruments (e.g. bank account details), phone numbers, or the like.
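A minimal sketch of constructing such a prompt and parsing the machine-friendly response; the prompt wording, the JSON keys and the call_llm() helper are illustrative assumptions rather than the engineered prompts of the system:

    import json

    PROMPT_TEMPLATE = """You are analysing a transcript of a suspected scam call.
    Extract the following parameters and reply with JSON only, using the keys
    "scam_target", "scam_classification", "phone_numbers" and "financial_instruments".

    Transcript:
    {transcript}
    """

    def extract_scam_parameters(transcript: str, call_llm) -> dict:
        """Build the prompt, query the LLM and parse the JSON response,
        yielding multiple scam parameters from a single prompt."""
        response = call_llm(PROMPT_TEMPLATE.format(transcript=transcript))
        return json.loads(response)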
[0132] The methods described herein may include one or more ways to access and/or distribute analytics results (i.e., scam intelligence), such as:
a. Generating real-time or near-real-time alerts 825, for example sent by email, SMS or push notifications. This is triggered immediately on recognition of specific analytics outputs such as recognition of new scam campaigns impacting a client.
b. Through a web API. Clients access these APIs with their own data analytics platforms.
c. Via data dashboards. Clients access web dashboards to view the data. This includes a display of statistics, progressions and projections of extracted scam insights and/or intelligence tailored to specific clients.
d. Report generation. Reports are generated and sent to clients, either on a regular basis or as requested by clients.
[0133] Clients are provided with authentication details that allow access to intelligence data as determined by their contract or arrangement with the data supplier.
[0134] Bots are trained to enable financial instrument interventions. Bots may be configured to encourage scammers to continue through their scam and arrive at the point where a financial transaction is requested from the bot acting as a scam victim. Where scammers ask for credit card details, such bots are configured to provide those details in a believable manner (e.g. through indicating that a social engineering technique is working, such as showing fear about a threat, asking for help from a scammer posing as an authority figure, etc.).

[0135] This differs from merely wasting time: it requires the bot to be agreeable and to provide appropriate responses to the scammer such that the scammer is satisfied and moves ahead to subsequent stages in their scam, and may require the bot to recognise social engineering techniques and show that the techniques are working on the bot to enhance scammer confidence. The techniques may require the bot to be provided with information on how the scam is proceeding to better facilitate generating appropriate responses, and may require recognition of the type of scam and its procedure in order to provide that information to the bot. The techniques may also require social engineering techniques against the scammer, such as indicating that the bot has a lot of money and that the bot believes the scam.
[0136] This type of bot is realised through engineered LLM prompts instructing the LLM to be agreeable, providing contextual information on scam progression in the prompts, managing scammer engagement through e.g. instructions to display doubt, instructions to provide specific requested information, instructions to express fear and/or submission.
[0137] The bot operates two streams of processing. The first is the conversational AI bot:
a. A decision is made to emit a bot utterance.
b. LLM prompts and conversation context are provided to an LLM, which generates an utterance that is then passed to the scammer.
c. The LLM utterance is added to the conversation context.
d. If the scammer responds, this response is added to the conversation context.

[0138] The second processing stream triggers each time the conversation context is updated (a sketch of this prompt-selection step follows below):
a. The updated conversation context is processed to identify, e.g., scammer engagement, scammer emotions such as anger and frustration, progression through scam stages, and specific information requested by the scammer.
b. The identified scammer state and scam stage are utilised to build an LLM prompt or to select from previously engineered LLM prompts optimised to encourage the scammer to continue with the scam. For example: if the scammer is frustrated, the LLM is instructed to (a) provide any requested information and/or (b) act submissively, following any instructions from the scammer. If PII (personally identifiable information) is requested and the scammer is not frustrated, the LLM may be instructed to express some doubt about providing the information. Other behaviours noted in the paragraphs above may likewise be invoked at strategic moments in the conversation.
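A minimal sketch of that prompt-selection logic, assuming boolean state signals from the context-analysis step; the instruction strings are illustrative assumptions:

    def select_prompt(scammer_frustrated: bool, pii_requested: bool) -> str:
        """Choose an engineered LLM instruction from the detected scammer state."""
        if scammer_frustrated:
            return ("Act submissively. Provide any information the caller "
                    "requests and follow their instructions.")
        if pii_requested:
            return ("Express mild doubt before eventually providing the "
                    "requested personal information.")
        return "Be agreeable and keep the conversation moving forward."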
[0139] In some embodiments, financial institutions may receive intelligence about scammer financial instruments and can use this information to render unusable those instruments, thus preventing scammers from receiving funds from their victims.
[0140] There are two example processes. First, when scammers reveal bank account details to their victims (and thus to scam victim bots):
a. The bank receives scammer bank account details from bots.
b. The bank reviews associated metadata (e.g. call transcripts, details of why the call was detected as a scam call, etc.) to perform due diligence that the call was indeed a scam call and that the bank account details are correct.
c. The bank contacts partner banks with the bank account details and due diligence report, and the partner bank stops all transactions on that account. If the account is with the participating bank, this is done directly. Analysis of past transactions on the scammer account is made, and where appropriate, funds may be returned to recently scammed individuals.
[0141] For credit card financial instruments the bank may create a working credit card and supply details to the victim bot, and the bank then observes the associated credit card account for any transactions. The victim bot may then reveal card details to scammers at an appropriate moment (e.g. when requested, perhaps with some resistance). When the scammer performs a credit card transaction, the bank is then able to observe transactions and obtain the scammers’ credit card merchant details. The bank may perform due diligence to ensure the transaction was indeed from a scammer. The bank may then also contact a partner credit card vendor and block the merchant account. In some cases, recent credit card transactions by that merchant account may be reversed, returning victim funds.
Exemplary embodiment
[0142] Figure 9 is a schematic diagram of an exemplary embodiment of a call processing system 900. In addition to a telephony endpoint 202, a speech-to-text (STT) module 204, a conversational AI bot 206, a text-to-speech (TTS) module 208, and an audio processing module 209, the system 900 includes a configuration server 1000.
[0143] At 901 audio is passed into the system 900 from the telephony endpoint 202 through an audio socket. When the audio socket is initiated, a pipeline and bot configuration is selected by the configuration server 1000.

[0144] At 903 audio is processed by the Speech To Text (STT) module 204, rendering it as text utterances. Optionally, the specific STT module and configuration for that module (e.g., which voice type to use) are specified in the configuration managed by the configuration server 1000, and may include, for example, Azure or Google STT functionality. The STT module 204 also includes a speech detector 930.
[0145] Optionally, at 904, if an utterance is received by the AI bot 206 from the STT 204 while the AI bot 206 is processing a preceding utterance, the subsequent utterance is stored by the AI bot 206 and sent (along with any other utterances that appear during processing) to the AI bot 206 as a single text when the bot has completed processing. This may be enabled or disabled as specified in the configuration managed by the configuration server 1000. This is referred to as “Overtalk Prevention” and is described below with reference to Figure 12.
[0146] The conversation controller 990 (described in more detail elsewhere herein) is responsible for controlling the flow of the conversation by managing turn-taking, triggering the pipeline to respond to the scammer’s utterances, triggering time-wasting phrases, conversation repair phrases, and/or backchanneling (e.g. “uh-huh”, “yeah”, “okay”, etc.), interrupting the scammer, and/or barge-in detection (i.e., when the scammer interrupts the bot).
[0147] At 905 the AI bot 206 selects and/or generates a response utterance. In some embodiments, at 905a one or more hard-coded phrases may be injected into the bot’s speech, bypassing the AI bot 206 processing at 905b. Optionally, this may be done at the beginning of the call in the form of a sequence of initial phrases (e.g. “Hello”). Phrases, phrase generation, and/or phrase selection may be specified in the configuration received from the configuration server 1000. For example, phrases may be randomly selected from a pre-set list. Optionally, this may be done during the call on a random basis in the form of time wasting phrases such as “I’m sorry, I didn’t catch that?”. The rate at which time wasting phrases are injected may be specified in the configuration, and may be altered in a random or pseudo-random way during the call.

[0148] Scammer utterances and injected phrases are added to the conversation history of the AI bot 206. This may be achieved by collecting utterances (for example based on an expected pattern of: <scammer-utterance>, <bot-utterance>, <scammer-utterance>, etc.) and then passing the list of utterances to the AI bot 206, which interprets them in the same way as utterances generated by the AI bot 206.
[0149] Additionally or alternatively, at 905b scammer utterances (together with any injected AI bot utterances) are passed to the AI bot 206 and the AI bot 206 generates a response utterance. The specific AI bot model used for this may be specified in the configuration.
[0150] In some embodiments, as indicated at 906, a limit may be configured for the maximum number of long sentences the AI bot 206 can say in a response. The AI bot 206 removes sentences from the end of the response when this limit is exceeded. The maximum number of long sentences and the minimum number of words for a sentence to be considered long are specified in the configuration.
[0151] Optionally, at 907, disfluencies such as “um…” may be injected into the response utterances. Disfluencies may be randomly selected from a pre-set list and injected at a specified rate. The configuration specifies (a) whether this is enabled as well as (b) the frequency, i.e., how the specified rate is determined (e.g. using a pseudo-random timer or according to a pre-set selection).
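A minimal sketch of this injection step; the disfluency list and per-word injection rate are illustrative assumptions that would in practice come from the configuration:

    import random

    DISFLUENCIES = ["um...", "uh...", "hmm...", "let me think..."]

    def inject_disfluencies(utterance: str, rate: float = 0.15) -> str:
        """Randomly insert disfluencies before words at the given rate."""
        out = []
        for word in utterance.split():
            if random.random() < rate:
                out.append(random.choice(DISFLUENCIES))
            out.append(word)
        return " ".join(out)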
[0152] At 908 a response utterance is processed by the Text To Speech (TTS) module 208, which returns speech audio. The response utterance is processed to include Speech Synthesis Markup Language (SSML), which allows specifications for the speed, pitch, volume and speech styles (e.g. emotions) of the voice to be generated (an illustrative SSML wrapper is sketched below). The specific TTS module may include one or more functional blocks, provided by, e.g., Azure, Google, and/or custom voice cloning TTS software. Configuration for the TTS module 208 may be specified in the configuration; for example, the configuration may specify which voice to use, SSML, etc.

[0153] In the case that a new scammer utterance occurs during response utterance processing, discarding the response utterance is managed by the Response Controller 909, which also edits the AI bot conversation history to remove the discarded response and reflect any changes made at 906 and/or 907. This may be enabled or disabled as specified in the configuration.
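For illustration, a response utterance might be wrapped in SSML as follows; the voice name and prosody values are assumptions, and the exact SSML vocabulary depends on the TTS provider:

    def to_ssml(text: str, voice: str = "en-AU-NatashaNeural",
                rate: str = "-5%", pitch: str = "+2%") -> str:
        """Wrap a response utterance in SSML controlling speed and pitch."""
        return (
            f'<speak version="1.0" xml:lang="en-AU">'
            f'<voice name="{voice}">'
            f'<prosody rate="{rate}" pitch="{pitch}">{text}</prosody>'
            f'</voice></speak>'
        )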
[0154] Optionally, at 910, speech audio may be merged with background audio. Background audio may be selected from a pre-set collection of audio files. In some embodiments, the specific background audio stream and relative volume of speech and background audio are specified in the configuration.
[0155] In some embodiments, at 911 audio effects may be applied to the merged voice and background audio and resulting audio stream passed to the telephony endpoint. The specific audio effects and their parameters may be specified in the configuration.
[0156] Conversation Controller 990
[0157] One exemplary embodiment of a conversation controller 990, configured to control the flow of the conversation, is described now.
[0158] The conversation controller 990 receives the following inputs: STT partial utterances, STT final utterances, and Voice Activity Detection (VAD) results. In some embodiments, the conversation controller 990 may additionally or alternatively receive call audio and/or use multimodal AI to help with turn-taking. In some embodiments, the conversation controller 990 has the ability to know the bot’s transcript and state for interrupting the bot, and/or the ability to inform when the bot is talking and the scammer should be listening, and/or when there is barge-in from the scammer.
[0159] The conversation controller 990 triggers one or more of the following actions: time-wasting phrases, conversation repair phrases, backchannelling phrases, the AI bot generating a response, and interrupting the bot.

[0160] Backchannelling is a way to show that you are listening to the speaker. It involves small phrases like “uh-huh”, “yeah”, “okay”, etc. This can also be used to fill silence in a conversation, and used before starting your turn in the conversation to show that you are thinking. Backchannelling makes the task of barge-in detection more difficult as backchannelling can be detected as speech, but does not signal an intent to interrupt.
[0161] Time-wasting phrases are phrases that are said by the bot to waste the scammer’s time. They are pre-defined phrases such as “I’m sorry, I didn’t quite catch that. Could you repeat that?” or “I need to sit down, can you wait a moment please”. These are injected randomly into the conversation whenever the bot has its turn to speak. The same phrase is preferably not repeated more than once per call.
[0162] Similar to time-wasting phrases, Conversation Repair Phrases are injected into the conversation and said by the bot. However, these are not randomly injected.
Instead, they are added when Speech-To-Text is slow or broken, and include phrases such as “What was that? I didn’t quite catch it?” or “Sorry, it’s noisy here, can you repeat that?”. These phrases are exclusively phrases that show the bot not hearing the scammer and asking the scammer to repeat. Time-wasting phrases can include phrases like this but are not limited to them.
[0163] The Inter-Pausal Unit (IPU) is a unit of speech which is delimited by a certain time reference, for example 200ms. The IPU threshold is used to break speech into portions, while a ‘No Words’ threshold and a ‘No Final’ threshold are used to detect when the STT is not functioning correctly in order to cause the bot to respond to partial STT utterances, or to use conversation repair phrases.
[0164] The IPU can be used to break up the speech into portions to perform user-state detection or other tasks and can be the first trigger for backchannel responses.
[0165] The No Words threshold and the No Final threshold are used to detect when the VAD and the STT do not agree on the end of an utterance (or when the STT is not functioning correctly). The No Words threshold is the first threshold to be reached. It may require between 2 and 6 seconds of silence, for example about 3.5 s of silence from the VAD, with no partial response from the STT. This is an indication that the STT is not functioning correctly, as it would normally have at least a partial response after such a time interval. This triggers a conversation repair phrase to be said by the bot to the scammer.
[0166] The No Final threshold requires a time interval of 3-8 seconds, for example 5 s, of silence from the VAD and the STT to be activated. This is an indication that the STT is functioning slowly or incorrectly. If the STT has no words yet, then the No Words threshold would have been triggered and conversational repair would have started. If there are words from the STT in the form of a partial response, then this threshold triggers the pipeline to respond using the partial text.
[0167] A turn-switching time interval of 1-5 seconds, for example 3 s, may be used to force turn-switching even when the user-state detection shows that the speaker wants to keep their turn. In some embodiments a default 5s interval may be used because the partial responses can take up to 3s to return from the STT. The goal of a 5s delay is to allow the STT to finish if it is working. This can be configured with the configuration server.
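The interaction of these thresholds might be sketched as follows, using the example values given above; the controller interface is an assumption:

    NO_WORDS_S = 3.5   # silence with no partial STT text -> conversation repair
    NO_FINAL_S = 5.0   # silence with partial text only -> respond to the partial

    def check_stt_thresholds(silence_seconds: float, partial_text: str) -> str | None:
        """Return the action implied by the No Words / No Final thresholds."""
        if silence_seconds >= NO_FINAL_S and partial_text:
            return "respond-to-partial"   # No Final threshold reached
        if silence_seconds >= NO_WORDS_S and not partial_text:
            return "conversation-repair"  # No Words threshold reached
        return None                       # keep waiting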
[0168] State Transitions 1700 of the conversation controller 990 may be understood with reference to Figure 17 of the drawings and Table 3 below. Conversation states correspond to who is talking and who is listening. “Bot Responding” refers to the AI bot generating a response to the scammer’s utterance. When the response is finished generating, it is sent to the TTS module. When the TTS finishes generating the speech audio, the duration of this speech is calculated and the state is set to “Bot Responding” for this duration. The audio is then sent to the audio-mixer to be mixed and sent over the phone line.
[0169] The turn taking algorithm may include one or more goals, such as minimising silence and/or minimising talking over one another.

[0170] The states labelled in Figure 17 as “BAD” 1702, 1704 are times when the bot and scammer are talking at the same time. This is an indication of a false positive turn taking detection, or of the scammer interrupting the bot. In some embodiments, the AI bot does not take the scammer interrupting the bot into consideration, and in most embodiments the bot will not intentionally talk over the scammer, hence the state being labelled as “BAD”, i.e., not intended. In some embodiments, these states may trigger the AI bot stopping in the middle of an utterance in response to barge-in from the scammer. The AI bot is configured to do one or more of: (1) detect barge-in, (2) assess if the scammer intends to interrupt or not, (3) based on (1) and/or (2) stop the AI bot talking mid-utterance based on scammer barge-in, and (4) update the AI bot conversation history with the barge-in interruption information.
[0171] The AI bot may be configured to detect a scammer’s intention to interrupt in various ways, and this functionality may include one or more of the following: (i) using the state model shown in Figure 17 and described in Table 3, (ii) ignoring false positives (e.g. a backchannel or background noise from the scammer would not be an intent to interrupt the AI bot), (iii) via use of a language model (i.e., another AI model which may be incorporated in or separate to the bot) that takes the conversation history and parameters associated with a current utterance of the scammer and detects whether the intent is to interrupt the bot or not.
[0172] States are logged in a transcripter.py object using the log_state(state) function. The states are logged with the current timestamp. The duration of these states can be inferred by looking at the timestamps of the next state.
[Table 3 is rendered as an image in the source document and is not reproduced here.]
[0173] Table 3 Conversation State Table
(N/A: the bot cannot respond and talk at the same time)
[0174] The transition 1720 from Silence to Bot Talking occurs once per call. The transitions 1722 from Silence to Scammer Talking, from Bot Responding to Scammer Overtalking, and from Bot Talking to Scammer Overtalking are caused by the scammer talking. The transitions 1724 from Scammer Talking to Bot Responding, and from the scammer overtalking the bot’s response to the scammer overtalking the bot talking, both skip a phase. The transition 1726 from the scammer overtalking a bot response to Bot Talking is an unlikely transition.
[0175] Figure 10 is a flow diagram illustrating the operation of the configuration server 1000 that forms part of the call processing system 900.
[0176] The configuration server 1000 maintains a looped list 1002 of pipeline configurations. Configurations can be added or removed from the list via a REST interface 1004. A web front end may be provided to view and edit the configuration queue, communicating with the configuration server via the REST interface 1004.
[0177] Configurations can specify random selection, pseudo-random selection, or specific configuration patterns selected from pre-set lists of values for some features, such as TTS voice and background audio.
[0178] As illustrated at 902 in Figure 9, when a new call is received, a pipeline configuration is requested from the configuration server. As illustrated in Figure 10, after a configuration request is received at 1006, a configuration is selected by checking the configuration list 1008 at 1010. If it is determined at 1012 that there are configurations in the list, then at 1014 the next configuration is selected. Selection continues from the beginning of the list after the last configuration in the list has been selected. Alternatively, if at 1012 it is determined that the list is empty, a default configuration is selected at 1016. At 1018 configuration elements are propagated to the respective pipeline modules including the TTS 208, the STT 204, the mixer in the audio module 209, and the AI bot 206. At 1020 call processing with the pipeline is initiated.
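A minimal sketch of the looped selection just described; the class shape and data structures are assumptions for illustration:

    class ConfigurationServer:
        """Round-robin over the configuration list, falling back to a
        default configuration when the list is empty (steps 1010-1016)."""

        def __init__(self, configurations: list[dict], default: dict):
            self.configurations = configurations
            self.default = default
            self.index = 0

        def next_configuration(self) -> dict:
            if not self.configurations:
                return self.default  # step 1016: empty list -> default
            config = self.configurations[self.index % len(self.configurations)]
            self.index += 1          # loop back after the last entry
            return config            # step 1014: next configuration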
[0179] Figure 11 illustrates an exemplary embodiment of the audio processing module 209. The audio processing module performs steps 910 and 911 in Figure 9. The module 209 is configured to merge audio from multiple sources 1100 including voice audio 1102 from the TTS module 208, one or more background audio sources 1104, and/or any other audio sources 1106 specified in the configuration.
[0180] The audio processing module 209 continuously merges audio from one or more of the audio sources. The TTS source 208 intermittently produces voice audio, and in some embodiments when this intermittent voice audio is received the amplitude (volume) of the voice audio may be adjusted at 1110, with the adjustment level specified in the configuration. When no speech audio is available, the audio merger 1112 forwards only the background audio.
[0181] In some embodiments the merged audio may be passed through one or more audio filters 1114 (such as low-pass, high-pass, and/or band-pass filters), typically applied in sequence. The filter parameters are specified in the configuration. Processed audio is then passed to the next module, i.e., the telephony endpoint.
[0182] Figure 12 illustrates an overtalk module 1200 used to gather utterances to prevent interruption at 904 in Figure 9. After receiving speech audio from the telephony endpoint, the speech audio is converted to text in the STT module 204, producing an utterance 1202. Overtalk prevention 904 is then executed by the overtalk module 1200 based on a processing lock status retrieved from the configuration provided by the configuration server 1000.

[0183] If the processing lock is activated, then the utterance 1202 is added to an utterance queue. If the processing lock is not activated when an utterance 1202 is received, then the processing lock is activated at 1206 and the utterance 1202 is merged with the utterance queue and at 1210 passed to the AI bot for utterance generation.
[0184] On completion of audio processing of the response speech audio at 1212, the processing lock is de-activated at 1208. If the utterance queue is not empty, then the utterances in the queue are merged and passed on to the AI bot for utterance generation.
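A minimal sketch of this lock-and-queue behaviour; the class shape and the send_to_bot callback are assumptions for illustration:

    import threading

    class OvertalkPrevention:
        """Queue utterances that arrive while the bot is busy, then merge
        and forward them once processing completes ([0183]-[0184])."""

        def __init__(self, send_to_bot):
            self.lock = threading.Lock()
            self.queue: list[str] = []
            self.send_to_bot = send_to_bot

        def on_utterance(self, utterance: str) -> None:
            if self.lock.locked():
                self.queue.append(utterance)   # bot busy: queue the utterance
            else:
                self.lock.acquire()
                self.send_to_bot(utterance)

        def on_response_complete(self) -> None:
            self.lock.release()
            if self.queue:
                merged = " ".join(self.queue)  # merge queued utterances
                self.queue.clear()
                self.on_utterance(merged)      # single text to the AI bot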
[0185] Some embodiments support the generation of outbound calls to scammers.
[0186] Figure 13 illustrates the process for outbound call generation. Outbound calls are initiated from the Asterisk Command Line Interface (CLI). The asterisk server has a dial plan 1302 with two phone extensions 1304, 1306 used for outbound calls which are configured as follows:
Extension “12345” (1304) creates an Audio Socket to connect to the call processing pipeline (e.g. the pipeline as illustrated in Figure 9).
Extension “5402” (1306) sets the Caller ID, and then dials a scammer’s number.
To initiate an outbound call, a channel 1310 is created between these two extensions which results in a call between the pipeline 1320 of the call processing system 900, and the phone of the scammer 1322. In this way, outbound calls to known scam phone numbers (from “callback” scams, phishing websites or other sources) are possible.
Exemplary embodiment
[0187] Figure 14 shows another exemplary embodiment of a call processing system 1400. In this embodiment, deployment of the system is accomplished with two virtual machines (VM) 1402, 1404 and two cloud services 1406, 1408. These may include, for example, one or more of:
- Amazon Web Services (AWS), which can be used for a public facing VM running the asterisk server (supporting a telephony endpoint). - RONIN which is an example of a managed AWS environment which can be used for the pipeline and hot VMs. RONIN VMs are accessible from the wider Internet via SSH connections, hence an SSH tunnel may be used for audio socket connections between the asterisk server and pipeline VM. SSH is not required for connections between the pipeline and Bot VMs as both are inside of RONIN.
- Azure cloud services, which can be used for TTS and STT functionalities.
[0188] In the example embodiment, the asterisk server 1410 (providing the telephony endpoint) runs on an AWS VM. A phone call is forwarded to the asterisk server via SIP (Session Initiation Protocol). The asterisk server then creates a unique ID (UUID) for the call, which is propagated to the pipeline VM via the ID of the audio socket and used as a label on all files and metadata associated with the call. The asterisk server creates an audio socket 1412 to the pipeline VM, and then forwards call audio to the socket and stores call audio in a file on the VM. In some embodiments, the audio socket may pass through an SSH tunnel 1414 to the pipeline VM.
[0189] Pipeline VM processing may be understood with reference to the steps indicated in Figure 14 of the drawings. At 1441 the audio socket is attached to an asterisk client docker container 1420 in the pipeline VM and audio streaming commences. At this point, a request is sent to the configuration server 1000 and the returned configuration is propagated to all elements and stored in a database 1422 (such as MongoDB).
[0190] At 1442 the asterisk client 1420 passes the audio stream to the STT service (via the STT module, not shown in Figure 14). The STT service returns transcribed speech as text portions that are partial or complete sentences. Some embodiments may use Azure STT, which returns a flag stating that the scammer has finished speaking to indicate the end of an utterance. Text is gathered from the STT service until this flag is observed, at which time the gathered text is passed back to the pipeline.
[0191] At 1443 accumulated text portions are passed to the AI bot, which determines a response utterance (also in text). Text accumulation processes are described elsewhere herein with respect to process 904 depicted in Figure 9 of the drawings. In some instances, the AI bot may be bypassed (for example when one or more hard-coded phrases are injected into the bot’s speech as described with reference to 905a in Figure 9). The accumulated text portions are considered a scammer utterance and are stored to a call transcript log taking the form of a text file.
[0192] At 1444, the response utterance is passed to the TTS service (via the TTS module, not shown in Figure 14). Post-processing of utterances, such as injecting disfluencies, sentence truncation and/or SSML, may also be performed before passing the utterances to the TTS module. The final response utterance after post-processing is stored in the call transcript log as an AI bot utterance. A version of the AI bot utterance including any SSML markup is also stored.
[0193] At 1445 the speech audio is passed to the audio mixer 1424, which optionally (a) combines it with, for example, background audio 1426, and/or (b) applies audio filters or other effects 1428.
[0194] At 1446 the merged audio is then passed back through the audio socket to the asterisk server 1410.
[0195] In addition to the labelled steps described above, in some embodiments additional data is entered into the database 1422 (in this embodiment provided by a MongoDB instance). A process on the asterisk VM monitors call recordings and passes call metadata 1430 of any new call recordings that appear (which happens with every call) to the database 1422 through the SSH tunnel 1414. This process may also store the call audio itself, for example in an AWS S3 storage facility. When a call is initiated, configuration metadata alongside call ID and time may also be stored in the database 1422.
[0196] Figure 15 shows a docker deployment of a call processing system, for example like the system described with reference to Figure 14. An Asterisk server 1501 runs on an AWS VM 1504, and is responsible for recording audio at 1506 and providing a call audio socket at 1508. In this embodiment the AI bot 206 is deployed in a separate GPU-equipped VM 1510. In some embodiments a relatively simple deployment may be achieved using a single GPU-equipped VM for each bot and pipeline on which all docker containers are housed. In some embodiments this type of implementation simplifies the automated deployment of load balancing (as described elsewhere herein).
[0197] The custom docker containers in the Pipeline VM 1520 may be understood with reference to Figure 15 of the drawings. An Asterisk client container 1502, a configuration server container 1530, an STT container 1532, and an audio mixer container 1534 are provided. A database or MongoDB docker container 1536 is provided. In addition, a docker volume 1538 provides a folder on the VM accessible to the asterisk-client docker container 1502 for storing logs and call transcripts. A docker volume 1540 contains audio files, for example used as background audio.
[0198] The Asterisk client container 1502 contains various modules and supports various capabilities. The Asterisk Client 1502 controls the flow of the call, connecting to the audio socket 1508 from the telecommunications endpoint, passing data to the input of each module, and then passing the returned data to the input of the next module in the pipeline before finally passing the processed audio data back through the audio socket 1508. The Asterisk Client 1502 manages retrieval of configuration data from the configuration server 1530 and distributes it to other pipeline modules. The Asterisk Client 1502 is responsible for a number of functions including one or more of: Overtalk Prevention (see 904 in Figure 9), the “AutoResponse” 1550 functionality that enables inserting initial phrases and time wasting phrases into the conversation (see 905b in Figure 9), and the Text To Speech (TTS) 1552 that receives text from the pipeline and passes it to an external TTS server before passing on the returned audio data to the audio processing module.
[0199] The STT container 1532 houses the STT module that connects to or implements an STT service.

[0200] The configuration server container 1530 houses the configuration module, and implements a configuration queue and/or a default configuration when a configuration is not set. The configuration server container 1530 provides a web interface for viewing and editing the configuration queue.
[0201] The Audio Mixer container 1534 houses the audio processing module, which continuously produces background audio, merges AI bot response voice audio when available, and/or applies audio filters.
[0202] The database (MongoDB) container 1536 houses a database (e.g. MongoDB) instance that stores configurations of past calls, Asterisk-derived metadata on past calls, metadata on available background audio, metadata on available TTS voices, and/or logs of exceptions that occur during pipeline operation.
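Purely for illustration, such records might be written as follows; the collection and field names are assumptions, not part of a described schema:

    from pymongo import MongoClient

    db = MongoClient("mongodb://mongodb:27017").pipeline  # container hostname assumed

    db.call_configs.insert_one(
        {"call_id": "abc123", "voice": "voice-01", "background": "cafe"})
    db.call_metadata.insert_one(
        {"call_id": "abc123", "duration_s": 612, "ended_by": "caller"})
    db.tts_voices.insert_one(
        {"voice_id": "voice-01", "engine": "external-tts"})
    db.pipeline_exceptions.insert_one(
        {"call_id": "abc123", "module": "stt", "error": "timeout"})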
[0203] The AI bot container (bot-parlai-gpu) 1560 houses the AI bot, which converses with the scammer using text input and output. This container is situated on a VM 1510 equipped with a fast GPU (graphics processing unit) or other machine-learning acceleration hardware.
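The container name bot-parlai-gpu suggests a ParlAI-based agent; as an illustrative stand-in only, a generic conversational language model could be driven on the GPU as follows (the model choice is an assumption):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Any conversational causal LM serves for illustration; the embodiment's
    # bot is ParlAI-based and trained to extend scam conversations.
    tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
    model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium").to("cuda")

    def respond(history: str, scammer_text: str) -> str:
        prompt = history + scammer_text + tok.eos_token
        ids = tok(prompt, return_tensors="pt").input_ids.to("cuda")
        out = model.generate(ids, max_new_tokens=60, pad_token_id=tok.eos_token_id)
        # Return only the newly generated reply, not the echoed prompt.
        return tok.decode(out[0, ids.shape[-1]:], skip_special_tokens=True)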
[0204] Figure 16 is a schematic representation of a load balancing module 1600 that forms part of the system of Figure 14. The figure depicts an automated deployment scheme in which new pipeline VMs are created when multiple simultaneous calls are received, and VMs are destroyed when demand drops again.
[0205] In some embodiments the pipeline and bot may occupy a single VM, though a similar deployment would be possible with separate VMs. The load balancing implementation of this embodiment maintains a small number of idle VMs at all times so as to be able to accept new calls without delay. Metadata and call transcript recording, as well as logging, use a centralised database (e.g., MongoDB) with which all pipelines are able to communicate.

[0206] The Asterisk server 1602 is configured to receive calls from a telecommunications provider, and to connect audio sockets to Asterisk clients running on pipeline VMs as directed by the Load Balancer 1604 (in this exemplary embodiment implemented using nginx LB). In some embodiments, the described load balancing configuration uses a scaled infrastructure to handle large call volumes. The Load Balancer 1604 selects an available pipeline instance to which to connect new incoming calls, and maintains a list of pipeline instances and/or a list of active calls. The Load Balancer 1604 is equipped with an automated healthcheck and/or a status web page.
The monitor 1606 checks the status of the system (either continuously, or periodically on a preset, selected, and/or variable schedule), querying the Load Balancer 1604 for the number of idle and/or busy pipeline instances. The monitor 1606 manages the creation and/or destruction of VM instances on AWS 1608, synchronising the current list of instances with the Load Balancer 1604.
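A sketch of such a monitor loop, assuming the Load Balancer 1604 exposes a JSON status endpoint and that pipeline VMs are EC2 instances launched from a launch template; the endpoint, template name, field names, and thresholds are hypothetical:

    import time

    import boto3
    import requests

    MIN_IDLE = 2  # idle pipelines kept warm so new calls connect without delay
    ec2 = boto3.client("ec2")

    while True:
        # The status endpoint and its JSON fields are hypothetical.
        status = requests.get("http://load-balancer/status").json()
        idle = status["idle"]

        if idle < MIN_IDLE:
            # Scale up: launch pipeline VMs from a prepared launch template.
            ec2.run_instances(
                MinCount=1, MaxCount=MIN_IDLE - idle,
                LaunchTemplate={"LaunchTemplateName": "pipeline-vm"})
        elif idle > 2 * MIN_IDLE:
            # Scale down: terminate surplus idle instances reported by the LB.
            ec2.terminate_instances(
                InstanceIds=status["idle_instance_ids"][: idle - MIN_IDLE])

        time.sleep(30)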
[0207] The methods and systems described herein present a novel approach to defeating phone scam operators by breaking their business model and making their operations unprofitable. This is achieved through the implementation of conversational AI bots that present as convincing, potentially viable scam victims. The bots, deployed at scale, take up a substantial proportion of scammers' time and significantly reduce their profit margins. Further, traces from these conversations provide valuable information on scam targets (the organisations the scammers pretend to represent), on the scammers themselves, and on current scammer strategies, information that is otherwise very difficult to obtain.
[0208] Advantageously, the methods may be used to reduce the occurrence of vishing and other phone-based scams, may serve as a source of information on the scam landscape, and are readily complementary to existing approaches to scam detection.
[0209] Advantageously, the methods described herein present a novel approach to gathering threat intelligence on current impersonation phone scams by engaging phone scammers via conversational AI bots developed to present as convincing, potentially viable scam victims. When deployed at scale, traces of conversations between bots and phone scammers provide accurate and timely information on current scammer strategies, objectives and imitation targets; such information is otherwise unknown in the case of new campaigns, inaccurate or incomplete when reported by humans, or altogether very expensive to obtain.
[0210] Timely intelligence on current phone scams and their imitation targets is useful to telecommunications providers, governments and large organisations that find themselves subjected to impersonation in phone scams. This intelligence, obtained by determining scam parameters, can be transmitted to impersonation targets to inform them on a quasi-real-time basis that they are being targeted. The data may also be used by those targets to inform their customers and to develop appropriate defensive and scam prevention strategies that reduce or remove the impact of the scams. The intelligence data can also be used to collect more accurate and complete information about scam campaigns and their characteristics, informing authorities of unfolding risks in real time.
[0211] The determined scam parameters provide useful information about scam threat characteristics, and may be used to identify the impersonated organisation, the targeted scam plot, scam campaign times, and/or the origin of a scam.
[0212] It will be understood to persons skilled in the art of the invention that many modifications may be made without departing from the spirit and scope of the invention.

Claims

CLAIMS:
1. A method comprising: receiving a rerouted phone call identified as a scam call; processing received caller speech from the rerouted phone call to determine a response; interacting with a caller using the determined response, wherein the response is determined in order to extend a duration of a conversation; and processing at least a part of the call conversation to determine one or more scam parameters.
2. The method of claim 1, wherein the processing comprises identifying features in the received call speech associated with ending and/or extending a call.
3. The method of claim 2, wherein the identifying comprises identifying one or more of negative emotions in the caller speech, and threats in the caller speech.
4. The method of any one of the preceding claims, wherein the response is determined in order to maximise the duration of the phone call.
5. The method of any one of the preceding claims, wherein the processing of the received caller speech comprises utilising a conversational artificial intelligence bot trained with a reinforcement learning training objective with a small positive reward for each utterance and a large negative reward when the rerouted phone call ends.
6. The method of any one of the preceding claims, further comprising recording and storing at least a part of the call conversation, and wherein processing at least a part of the call conversation comprises processing the stored part.
7. A method comprising: detecting a received scam call; rerouting the detected scam call to a scam call bot, wherein the scam call bot is configured to extend a duration of the rerouted call; and processing at least a part of a call conversation to determine one or more scam parameters.
8. The method of claim 7, wherein the scam call bot is configured to extend the duration of the rerouted call by interacting with a caller of the scam call via responses determined by the scam call bot.
9. The method of claim 8, wherein the responses are determined based on identified features in the caller’s speech associated with ending and/or extending a call.
10. The method of any one of the preceding claims, wherein the duration of the call is extended by intentionally generating and responding with a response imperfection selected from a group comprising: backchannelling utterances, timewasting phrases, and conversation repair phrases.
11. The method of any one of the preceding claims, further comprising identifying actionable scam intelligence comprising a scammer's financial instrument and/or phone number.
12. The method of any one of the preceding claims, wherein the scam parameters comprise: a scam target, a scam structure, a scam technique, a financial instrument, a scammer phone number, scammer voice prints, a classification of background noise during the scam, and/or a scam classification.
13. A system comprising: a telephony endpoint for receiving a rerouted scam call; a speech-to-text module configured to convert caller speech from the received scam call to text; a conversational artificial intelligence (AI) bot configured to receive the text from the speech-to-text module, process the received text, determine a response so as to extend a duration of the scam call, and output the determined response; and a text-to-speech module configured to receive the determined response in text form from the bot, convert the text to a voice response, and output the voice response to the caller via the telephony endpoint, wherein the AI bot is further configured to process at least a part of the call conversation to determine one or more scam parameters comprising: a scam target, a scam structure, a scam technique, and/or a scam classification.
14. The system of claim 13, wherein the text-to-speech module is configured for voice cloning.
15. The system of claim 13 or claim 14, wherein the conversational AI bot processes the received text by identifying features in the received call speech associated with ending and/or extending a call.
16. The system of claim 15, wherein the bot is configured to identify the features by identifying one or more of: negative emotions in the caller speech, and threats in the caller speech.
17. The system of any one of claims 13 to 16, wherein the bot is configured to determine the response in order to maximise the duration of the scam call.
18. The system of any one of claims 13 to 17, wherein the bot is trained with a reinforcement learning training objective with a small positive reward for each utterance and a large negative reward when the rerouted scam call ends.
19. The system of any one of claims 13 to 18 further comprising an audio processing module connecting the text-to-speech module and the telephony endpoint, and configured to process the voice response by mixing the voice response with an environment signal.
20. The system of any one of claims 13 to 19, wherein the conversational AI bot further comprises a conversation controller adapted to manage a conversation flow by adding utterances to the response that extend the duration of the scam call, wherein the added utterances comprise one or more of: a time wasting phrase, a conversation repair phrase, a backchannelling phrase, and an interrupting phrase.
21. The system of any one of claims 13 to 20, wherein the conversational AI bot further comprises a response controller configured to: discard a response utterance in response to a scammer utterance occurring during processing of the response utterance, and remove said discarded response from a conversation history of the AI bot.
PCT/AU2024/050645 2023-06-19 2024-06-19 Scam call system WO2024259486A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2023901937A AU2023901937A0 (en) 2023-06-19 Scam call system
AU2023901937 2023-06-19

Publications (1)

Publication Number Publication Date
WO2024259486A1 true WO2024259486A1 (en) 2024-12-26

Family

ID=93934793

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2024/050645 WO2024259486A1 (en) 2023-06-19 2024-06-19 Scam call system

Country Status (1)

Country Link
WO (1) WO2024259486A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130218566A1 (en) * 2012-02-17 2013-08-22 Microsoft Corporation Audio human interactive proof based on text-to-speech and semantics
US20160119377A1 (en) * 2014-10-22 2016-04-28 International Business Machines Corporation Cognitive Honeypot
US20180240473A1 (en) * 2017-02-17 2018-08-23 International Business Machines Corporation Bot-based honeypot poison resilient data collection
US10110741B1 (en) * 2017-07-25 2018-10-23 Teltech Systems, Inc. Determining and denying call completion based on detection of robocall or telemarketing call
US20190149575A1 (en) * 2017-11-13 2019-05-16 International Business Machines Corporation System to prevent scams
WO2022107242A1 (en) * 2020-11-18 2022-05-27 Nippon Telegraph and Telephone Corporation Processing device, processing method, and program
WO2023064272A1 (en) * 2021-10-11 2023-04-20 Georgia Tech Research Corporation Robocall blocking method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KHOUZAIMI, H. ET AL.: "Reinforcement Learning for Turn-Taking Management in Incremental Spoken Dialogue Systems", PROCEEDINGS OF THE TWENTY-FIFTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, pages 2831 - 2837, XP055509147 *
SAHIN, M.; RELIEU, M.: "Using chatbots against voice spam: Analyzing Lenny's effectiveness", USENIX, THE ADVANCED COMPUTING SYSTEMS ASSOCIATION, 12 July 2017, pages 324 - 342, XP061025299 *
PANDIT, S.; LIU, J.; PERDISCI, R.; AHAMAD, M.: "Applying Deep Learning to Combat Mass Robocalls", 2021 IEEE SECURITY AND PRIVACY WORKSHOPS (SPW), IEEE, 27 May 2021, pages 63 - 70, XP033939252, DOI: 10.1109/SPW53761.2021.00018 *

Similar Documents

Publication Publication Date Title
US10057419B2 (en) Intelligent call screening
US11210461B2 (en) Real-time privacy filter
US10091355B2 (en) Virtual voice response agent individually configured for a user
US10743104B1 (en) Cognitive volume and speech frequency levels adjustment
CN107818798A (en) Customer service quality evaluating method, device, equipment and storage medium
WO2014069122A1 (en) Expression classification device, expression classification method, dissatisfaction detection device, and dissatisfaction detection method
KR102241532B1 (en) Intelligent callbot server and unmanned counsel systeim using thereof
US10896664B1 (en) Providing adversarial protection of speech in audio signals
US10659605B1 (en) Automatically unsubscribing from automated calls based on call audio patterns
US20130246064A1 (en) System and method for real-time speaker segmentation of audio interactions
EP4016355B1 (en) Anonymized sensitive data analysis
US20220303391A1 (en) Systems and methods for prioritizing emergency calls
KR20190117840A (en) Method and computer readable recording medium for, during a customer consulting by a conversation understanding ai system, passing responsibility of proceeding with subsequent customer consulting to a human consultant
US20240363099A1 (en) Deepfake detection
CN113630309B (en) Robot conversation system, method, device, computer equipment and storage medium
US20240363103A1 (en) Deepfake detection
US20240363125A1 (en) Active voice liveness detection system
US11606461B2 (en) Method for training a spoofing detection model using biometric clustering
CN109545203A (en) Audio recognition method, device, equipment and storage medium
WO2023245231A1 (en) Scam call prevention
US11967307B2 (en) Voice communication analysis system
US20240355336A1 (en) Deepfake detection
WO2024259486A1 (en) Scam call system
Oļeiņiks et al. Real-Time Phone Fraud Detection and Prevention Based on Artificial Intelligence Tools
CN112784038A (en) Information identification method, system, computing device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24824804

Country of ref document: EP

Kind code of ref document: A1