WO2005041057A1 - System and method for identification, detection and investigation of maleficent acts - Google Patents
System and method for identification, detection and investigation of maleficent acts
- Publication number
- WO2005041057A1 (PCT/US2003/031237)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- category
- transaction
- datasets
- function
- classification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
Definitions
- the invention relates generally to means for transforming large amounts of stored data into knowledge useful for performing functions necessary to achieve goals of an enterprise. More particularly, the invention relates to enterprise software for enabling organizations to apply business intelligence in an operational environment by integrating real-time transactional data with remote and disparate historical data to achieve true enterprise intelligence for maleficent activity detection, such as fraud and threat detection. It provides an enterprise level solution to large-scale workflow processes for searching, analyzing and operating on transactional and historical data in remote and disparate databases in real time to uncover obvious and nonobvious relationships, apply analytical results in an operational environment, and interoperate with other enterprise applications.
- the invention enables a workflow process comprising classification of data in remote disparate databases for identity verification, arbitration of fuzzy differences between data sets for detection of maleficent acts, and investigation of relationships between people, places and events using refined search techniques, analytical tools and visualization methods.
- although the invention finds wide application in diverse environments for detection of maleficent activities, it is described in terms of application to threat and fraud detection through identification, detection and investigation based on information contained in disparate databases.
- Applications according to the present invention include insurance claims evaluation for detection and prevention of insurance fraud, transaction risk detection, identification verification for use in credit card processing and airline passenger screening, records keeping verification, and government list comparisons.
- Standard plug-in applications for use with the invention include similarity search agents for searching disparate databases and reporting search results, a classification engine for classifying transactions or search results, an analytic engine for analyzing results such as biometrics and providing inputs to a decision engine, a rules engine, a report engine, a link analysis engine for nonobvious relationship analysis, a method for arbitrating fuzzy differences between data, and a transformation server for virtualization of multiple disparate data sources.
- Customer applications, which include cultural names data, report engines, rules engines, neural networks, decision trees and cluster engines, may be readily integrated into the workflow process. When attempting to identify, detect, or investigate maleficent acts such as potential security threats or fraudulent claims activities, businesses and governmental entities face a number of problems.
- a process to accomplish these objectives must combine the efficiency of automated processes in the front-end with the judgment of trained investigators in a hybrid classification workflow.
- the process must provide a fast and automated methodology for detecting and identifying maleficent activities such as threats or fraudulent behavior prior to an event occurring. It must also streamline an otherwise labor intensive, manual process.
- Key requirements for such a process include an ability to uniquely solve the previously stated problems, as well as an ability to identify and detect maleficent activities such as threats and fraud before they occur rather than afterwards, so that remediation and investigation activities can take place to prevent fraud and/or threats at an early stage. Also required is an ability to perform these functions in real-time, near-real-time, and batch processing for significantly large transaction sets by analyzing and dynamically filtering these transaction sets.
- the present software system and method is a process that integrates technologies drawn from the areas of transaction processing, document management, workflow management, and similarity search.
- the system and method comprises three stages, including identity verification, detection and investigation.
- the first stage, identity verification, is an automated batch process for data gathering and decision making, carried out with the goal of resolving as many cases as possible without human intervention.
- the classification stage begins with the arrival of an input document represented by a dataset.
- human judgment may be employed to resolve ambiguities, obtain additional data, and arbitrate borderline classification decisions.
- the goal is to classify the cases that the previous stage was unable to resolve and to select the high-risk cases to forward to the third stage for a more thorough and time-consuming investigation by highly skilled investigators.
- the present invention is enterprise software that provides a configurable, plug-and-play solution that will search, analyze, and operate on transactional and historical data in real-time across remote and disparate databases.
- the software has the unique ability to discover similarities and non-obvious relationships in data in real-time and apply the results of data analysis to an operational environment. It has a flexible framework for building a variety of applications that can be configured based on application requirements. Using an open API, the framework enables organizations to easily incorporate multiple technologies, analytics, software components, and both internal and external data sources.
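- As a rough illustration of how such an open, plug-in framework might be expressed in code, the following Python sketch defines a hypothetical analytic plug-in interface and registry; the names (AnalyticPlugin, PluginRegistry, score) are assumptions for this example, not an API published in the patent.

```python
# Hypothetical sketch of a plug-in analytic interface; names are illustrative,
# not the open API defined by the patent.
from abc import ABC, abstractmethod
from typing import Dict


class AnalyticPlugin(ABC):
    """A pluggable analytic that scores a transaction dataset."""

    @abstractmethod
    def score(self, transaction: Dict) -> float:
        """Return a normalized score in [0.0, 1.0] for the transaction."""


class PluginRegistry:
    """Holds the analytics configured for a given application."""

    def __init__(self) -> None:
        self._plugins: Dict[str, AnalyticPlugin] = {}

    def register(self, name: str, plugin: AnalyticPlugin) -> None:
        self._plugins[name] = plugin

    def score_all(self, transaction: Dict) -> Dict[str, float]:
        # Run every registered analytic and collect its score by name.
        return {name: p.score(transaction) for name, p in self._plugins.items()}
```

A customer analytic such as a cultural-names module or a third-party rules engine could then be registered and scored alongside the standard plug-ins.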
- the system performs tasks such as decision automation, transaction processing, and extraction of knowledge from data sources. It can provide the following capabilities: search, analyze, and operate on both transactional and historical data in remote, disparate databases; uncover non-obvious relationships; find similarities as well as exact matches; apply analytical results in an operational environment; easily interoperate with other enterprise applications; combine the results from several different analytics to produce one comprehensive score; search and process large amounts of data in real-time; protect data ownership by using remote search; ensure technology investment due to the ability to easily update and expand the system; operate in serial and parallel environments; protect privacy by returning scores instead of actual data; operate on data with different data types, platforms, and formats; produce a complete audit trail for all search and analytical results; and quickly and easily incorporate multiple analytics, software components, and internal and external data sources.
- the invention enables more accurate and informed decisions; streamlines operational processes; increases efficiencies and reduces operational costs; transforms data in real-time into useful and useable information; improves customer service and customer interaction; and drives more profitable relationships with customers. It may be used in business-critical applications including employee background checks, risk assessment, fraud detection, data mining, alias identification, market analysis, and customer identification. Modular software components provide unique analytical capabilities such as link analysis, fuzzy search, similarity scoring and classifications, and rules processing as well as a complete decision audit trail. The invention also accepts and integrates third party analytics and software components.
- An embodiment of the present invention is a method for identification, detection and investigation of maleficent acts, comprising the steps of receiving one or more transaction datasets, verifying each transaction dataset identity and classifying each transaction dataset into a first category, a second category and a third category, detecting and arbitrating ambiguities in each transaction dataset in the second category for reclassifying into the first category and the third category, investigating each transaction dataset in the third category for affirming the third category classification of a first group of investigated datasets and reclassifying the third category classification of a remaining second group of investigated datasets into the first category classification, enabling transaction datasets in the first category, and disabling transaction datasets in the third category.
- the step of receiving one or more transaction datasets may further comprise receiving one or more transaction datasets selected from the group consisting of airline reservations, cargo transactions, border crossings, Patriot Act transactions, insurance claims, underwriting insurance transactions, and credit applications.
- the step of verifying and classifying may further comprise verifying each transaction dataset identity by assigning a composite score to each transaction dataset and classifying each transaction dataset by assigning each dataset to the predetermined categories according to each dataset's composite score.
- the composite score assigned to each transaction dataset may be determined by combining one or more analytical scores based on a comparison between each transaction dataset and one or more similar datasets located in disparate databases.
- the means for determining the one or more analytical scores may be selected from the group consisting of a similarity search engine, a biometric analytic, a rules engine, and a neural net.
- the method may further comprise the step of assigning a composite score to each transaction dataset according to a schema defined by a user.
- the method may further comprise designating an analytic function in the schema selected from the group consisting of a similarity search function, a biometric function, a rules function, and a neural net function.
- the step of classifying datasets into categories may be determined by preset classes, business rules and associations determined by a user to meet specific business needs.
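- A minimal sketch of the scoring and classification just described, assuming purely for illustration that each analytic returns a score in [0, 1], that the user-defined schema supplies per-analytic weights, and that two thresholds split the composite score into the first (low risk), second (medium risk) and third (high risk) categories:

```python
# Illustrative only: the weighting scheme, thresholds and category names are
# assumptions, not values taken from the patent.
from typing import Dict


def composite_score(analytic_scores: Dict[str, float],
                    schema_weights: Dict[str, float]) -> float:
    """Combine individual analytic scores into one composite score per a user schema."""
    total_weight = sum(schema_weights.get(name, 0.0) for name in analytic_scores)
    if total_weight == 0.0:
        return 0.0
    return sum(score * schema_weights.get(name, 0.0)
               for name, score in analytic_scores.items()) / total_weight


def classify(composite: float,
             low_threshold: float = 0.3,
             high_threshold: float = 0.7) -> str:
    """Map a composite score to the first, second or third category."""
    if composite < low_threshold:
        return "first_category"    # low risk, approved
    if composite < high_threshold:
        return "second_category"   # ambiguous, sent to detection/arbitration
    return "third_category"        # high risk, sent to investigation
```

For example, scores from a similarity search function and a rules function could be combined with user-chosen weights and the resulting composite routed through classify().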
- the method may further comprise the step of controlling and monitoring a workflow process comprising the steps of receiving, verifying and classifying, detecting and arbitrating, investigating, enabling and disabling.
- the step of detecting and arbitrating ambiguities may comprise the steps of receiving transaction datasets classified into the second category in the verifying step, enabling an arbitrator to view a summary list screen showing transaction dataset identification, classification, status, justification, and links to a transaction dataset detail screen, a search form screen, and a search queue screen, enabling the arbitrator to view a task detail screen for comparing analytical scores between selected transaction datasets and datasets contained in disparate databases, and enabling the arbitrator to change the classification of transaction datasets from the second category into a category selected from the group consisting of the first category and the third category.
- the method may further comprise enabling the arbitrator to select an analytic function for determining a comparative analytical score of a selected transaction dataset, the analytic function selected from the group consisting of a similarity search function, a biometric function, a rules function, a neural net function, a model engine and a decision tree.
- the method may further comprise enabling the arbitrator to update a classification and status of selected transaction datasets.
- the step of investigating each transaction dataset in the third category may comprise the steps of receiving transaction datasets classified into the third category in the steps of verifying and detecting, enabling an investigator to view a summary list screen showing transaction datasets containing links to a task detail screen, a search form screen, and a search queue screen, enabling the investigator to view a task detail screen for comparing elements of a selected transaction dataset to elements from comparison datasets contained in disparate databases, and enabling the investigator to either affirm the third category classification of transaction datasets or change their classification from the third category into the first category.
- the method may further comprise enabling the investigator to select an analytic function for determining a comparative analytical score of a selected transaction dataset, the analytic function selected from the group consisting of a similarity search function, a biometric function, a rules engine, a neural net, a model engine, an auto link analysis, a decision tree, and a report engine.
- the method may further comprise activating remote similarity search agents in disparate databases to be searched by a similarity search function, the remote similarity search agents returning similarity scores and results to the similarity search function without a requirement for relocating the searched information from the disparate databases.
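- The remote-agent idea can be pictured with the sketch below: the agent scores its local records against a query and returns only identifiers and similarity scores, so the underlying records never leave the disparate database. The similarity measure (difflib string matching) and the field handling are stand-ins, not the patent's similarity search algorithm.

```python
# Sketch of a remote similarity search agent that returns only scores and
# document identifiers, never the underlying records.
from difflib import SequenceMatcher
from typing import Dict, List, Tuple


def field_similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1], used here only as a stand-in measure."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def remote_agent_search(query: Dict[str, str],
                        local_records: List[Dict[str, str]],
                        top_n: int = 5) -> List[Tuple[str, float]]:
    """Score local records against the query and return (record_id, score) pairs.

    The records themselves stay in the remote database; only scores travel back.
    """
    results = []
    for record in local_records:
        shared = [f for f in query if f in record and f != "id"]
        if not shared:
            continue
        score = sum(field_similarity(query[f], record[f]) for f in shared) / len(shared)
        results.append((record["id"], score))
    results.sort(key=lambda pair: pair[1], reverse=True)
    return results[:top_n]
```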
- An embodiment is a computer-readable medium containing instructions for controlling a computer system according to the method described above.
- Another embodiment is a system for identification, detection and investigation of maleficent acts, comprising a means for receiving one or more transaction datasets, a means for verifying each transaction dataset identity and classifying each transaction dataset into a first category, a second category and a third category, a means for detecting and arbitrating ambiguities in each transaction dataset in the second category for reclassifying into the first category and the third category, a means for investigating each transaction dataset in the third category for affirming the third category classification of a first group of investigated datasets and reclassifying the third category classification of a remaining second group of investigated datasets into the first category classification, a means for enabling transaction datasets in the first category, and a means for disabling transaction datasets in the third category.
- the means for receiving and the means for verifying and classifying may comprise a classification engine, the means for detecting and arbitrating may comprise an arbitration function, and the means for investigating may comprise an investigation function.
- the system may further comprise a workflow manager for controlling and monitoring a workflow process comprising the means for receiving, verifying and classifying, detecting and arbitrating, investigating, enabling and disabling.
- the classification engine, the arbitration function and the investigation function may have access to disparate databases through analytic functions.
- the disparate databases of the system may comprise an alias identification database, an expert rules database, a government threat database, public databases, and known threat databases.
- the disparate databases may contain remote similarity search agents for returning similarity scores and results to the similarity search engine without a requirement for relocating the searched information from the disparate databases.
- the analytic functions may comprise a similarity search function, a biometric function, a rules engine, a neural net, a model engine, an auto link analysis, a decision tree, and a report engine.
- the arbitration function may include a user interface for enabling a user to arbitrate the second category classification decisions made by the classification engine into the first and third category classification.
- the investigation function may include a user interface for enabling a user to investigate the third category classification decisions made by the classification engine and the arbitration function and to reassign them to the first and the third category classification.
- Yet another embodiment of the present invention is a method for identification, detection and investigation of maleficent acts, comprising the steps of controlling a workflow process for classifying transaction datasets into a high risk category and a low risk category, including the steps of verifying and classifying transaction datasets, detecting and arbitrating transaction dataset ambiguities, investigating high risk transaction datasets for ensuring correct classification, initiating analytic functions comprising a similarity search function, a biometric function, a rules engine, a neural net, a model engine, an auto link analysis, a decision tree, and a report engine, and accessing disparate databases including an alias identification database, an expert rules database, a government threat database, public databases, and known threat databases.
- Figure 1 shows overlapping multiple reasoning methodologies according to the present invention
- Figure 2 shows a functional flow diagram depicting the process according to the present invention
- Figure 3 shows a system block diagram depicting the technology for supporting the present invention
- Figure 4 shows a system configuration of an embodiment of the present invention
- Figure 5A shows a process used historically for airline security activities
- Figure 5B shows a process according to the present invention for airline security activities
- Figure 6A shows a process used historically for insurance claim processing for fraud detection
- Figure 6B shows a process according to the present invention for insurance claim processing for fraud detection
- Figure 7 shows a screenshot summarizing the detection arbitration step
- Figure 8 shows a screen shot that provides a user with a summary view of a result from a specific identity verification and classification
- Figure 9 shows a screen shot of a link analysis tool used in the investigative step of the present invention
- Figure 1 shows overlapping multiple reasoning methodologies 100 according to the present invention.
- No single reasoning model can detect all possible maleficent activities, and no reasoning model is inherently superior to the others. Since each reasoning model is able to provide unique characteristics of maleficent acts, an ideal system for identification, detection and investigation of maleficent acts would be multimodal, with capability for detecting threats along each of three dimensions: instance based 110, rules based 120 and pattern based 130 detection capabilities.
- the overlapping methodologies shown in Figure 1 illustrate the fact that each methodology incorporates some of the capabilities of other methodologies.
- rules may be embedded as hypothetical instances in a database, such as embedding in a database an example bad record representing the purchase of a one-way ticket within 30 minutes prior to an aircraft departure. Pattern instances embedded within data may also be identified. For more complex rules and pattern applications, other methodologies must be incorporated into a solution.
- the present invention provides a framework and tools necessary to support three reasoning methodologies for identification, detection and investigation of maleficent activities, including instance based reasoning 110, rules based reasoning 120 and pattern based reasoning 130. It uses a "federation" based architecture for ease of adding new databases, new reasoning engines, workflow components, etc. through well documented APIs and interfaces that make use of XML and synchronous message queues.
- Rule based reasoning 120 provides an ability to set rules and detect violation of rules. For example, in a given situation, only a given set of actions may be appropriate. These systems are generally equipped with a rules-based reasoning engine, and are trained through a period of testing and qualification of the rules base. Pattern based reasoning 130 provides the ability to detect explicit and implicit patterns involving inputs that may predict maleficent activities. Explicit pattern based systems are characterized by model-based reasoning and probabilistic reasoning systems such as Bayesian networks. Implicit pattern-based systems typically fall into the domain of neural networks that are capable of discovering predictability patterns that a human might not be able to perceive. These systems require extensive training through introduction of numerous previously classified results and input patterns, and are generally not strong at detecting instance-based threats.
- Instance based reasoning 110 provides an ability to detect attribute values that have been seen previously, either in known good or known bad situations. For example, a travel reservation may contain a name, address or telephone number that is the same as or similar to that of a known good or known bad person who is listed in a database. Data instances used for classification are not necessarily directly associated with the primary transaction, because instance based systems employ schematic and semantic mapping tools that enable searching of disparate data sources.
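- A toy illustration of combining the three overlapping methodologies is sketched below; the example rule (a one-way ticket purchased within 30 minutes of departure), the known-instance list, and the max-combination are invented for the sketch and are not prescribed by the invention.

```python
# Toy multimodal risk check combining instance, rules and pattern reasoning;
# all values and field names are assumptions made for this illustration.
from typing import Dict, Set

KNOWN_BAD_PHONE_NUMBERS: Set[str] = {"+1-555-0100"}  # instance based: previously seen values


def instance_score(record: Dict) -> float:
    return 1.0 if record.get("phone") in KNOWN_BAD_PHONE_NUMBERS else 0.0


def rule_score(record: Dict) -> float:
    # rules based: e.g. a one-way ticket bought shortly before departure
    one_way = record.get("one_way", False)
    minutes_to_departure = record.get("minutes_to_departure", 1e9)
    return 1.0 if one_way and minutes_to_departure < 30 else 0.0


def pattern_score(record: Dict) -> float:
    # pattern based: stand-in for a trained model (neural net, Bayesian network, ...)
    return record.get("model_probability", 0.0)


def multimodal_risk(record: Dict) -> float:
    """Take the strongest signal from the three overlapping methodologies."""
    return max(instance_score(record), rule_score(record), pattern_score(record))
```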
- FIG. 2 shows a functional flow diagram depicting the process 200 according to the present invention.
- the process 200 comprises three stages: identity verification and classification 220, maleficent act detection 230 and maleficent act investigation 240. Multiple procedures may be involved in each stage and the automated classification technologies may vary with the application.
- the process starts with an automated classification process 220 where identifying information is extracted from input documents or transaction datasets 210 and used to search databases containing the records of individuals.
- Identity data can consist of biometric data and/or standard identification such as name, address, phone number, etc. This data can then be matched against a variety of databases, such as biometric, public records, etc., to confirm whether the identity exists and if the person is, in fact, who he/she claims to be.
- identification analytics can be employed to look for inconsistent representations of identity where, for example, a person claims to be 42 years old when the identification data indicates an age of 10 years. Analytics may also detect fraudulently manufactured identification, for example, a created false identification or assumption of another person's identification.
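- A minimal sketch of such a consistency analytic, assuming hypothetical field names and a one-year tolerance, might compare the claimed age against the age implied by the date of birth on the identification document:

```python
# Illustrative identity-consistency check (claimed age versus age implied by
# the identification document); the tolerance and fields are assumptions.
from datetime import date
from typing import Optional


def age_from_dob(dob: date, today: Optional[date] = None) -> int:
    """Compute whole years of age from a date of birth."""
    today = today or date.today()
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))


def age_is_consistent(claimed_age: int, dob_on_id: date, tolerance: int = 1) -> bool:
    """Flag identities whose claimed age contradicts the document, e.g. 42 vs 10."""
    return abs(claimed_age - age_from_dob(dob_on_id)) <= tolerance
```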
- in the identity verification and classification stage 220, transaction datasets 210 are received from various sources.
- the transaction datasets 210 may be airline reservations, way bills for cargo, border crossings, Patriot Act transactions, insurance claims, underwriting insurance documents, credit applications, etc.
- the identity verification and classification stage 220 classifies individuals identified in the transaction datasets, sending datasets associated with high risk individuals 270 to the investigation stage 240, sending datasets associated with medium risk individuals 272 to the detection stage 230, and categorizing as approved 260 those datasets associated with low risk individuals 274.
- the second stage 230 of the process 200 enables an organization to quickly sort out the high and low risk individuals from the medium risk individuals determined by the first stage 220 by sending datasets associated with the high risk individuals 270 to the investigation stage 240 and categorizing as approved 260 those datasets associated with low risk individuals 274.
- the process facilitates a workflow according to level of risk.
- High-risk individuals 270 may be work-flowed directly to the investigation stage 240.
- Medium risk individuals 272 can be work-flowed to the detection stage 230 in order to apply the additional human insight and judgment needed to resolve them.
- Low risk individuals 274 can be cleared immediately as approved.
- external analytics can examine an individual's history and demographics for indicators of maleficent activities, returning scores that enable another percentage of the inputs to be classified for approval or further examination.
- the detection stage 230 brings human judgment and additional data resources into the process in order to resolve another percentage of the cases. However, the knowledge and experience of the personnel involved in this stage do not need to be on a par with those involved in the investigation stage 240. Their function in the detection stage 230 is to review the data for unresolved cases and to determine which can be approved immediately, which need to go on the investigation stage 240, and which need additional data.
- a web-based tool provides the arbitrators involved in the detection stage 230 with data used in the classification stage 220, the rationale for the classification, and access to additional data that may be needed to make a clear identification or classification.
- Information such as references to stolen credit cards and/or similarity matches with known threats is provided for a more in-depth analysis of the nature of the threat or fraud. Additional information and external databases may be accessed to perform this task.
- case datasets that have been found to have the highest probability of maleficent activities are work-flowed to specialists for further investigation. These specialists are equipped with strong tools for similarity searching, link charting, time lining, pattern matching, and other investigative techniques.
- the first two stages of the process 200 guarantee that when a case dataset reaches the investigative stage 240, the data involved in the classification has been gathered and reviewed, and every effort has been made to resolve it based on all information available.
- FIG. 3 shows a system block diagram 300 depicting the technology for supporting the present invention.
- the overall workflow process is controlled by a workflow manager 310 that is disclosed in U.S. patent application number 10/653,457, filed on September 2, 2003, and incorporated herein by reference.
- the classification engine 330 is disclosed in U.S. patent application number 10/653,432, filed on September 2, 2003, and incorporated herein by reference.
- the arbitration function 340 is disclosed in U.S. patent application 10/653,689, filed on September 2, 2003, and incorporated herein by reference.
- the similarity search engine 370 is disclosed in U.S. patent application number 10/653,690, filed on September 2, 2003, and incorporated herein by reference.
- the classification engine 330, the arbitration function 340 and the investigation function 350 all are able to access multiple analytics, including similarity search engines 370, biometric analytics 372, rules engines 374, neural nets 376, model engines 378, auto link analysis 380, decision tree analysis 382 and report engines 384.
- These analytic functions provide access to numerous disparate databases, including alias identification databases 390, expert rule databases 392, government threat databases 394, public databases 396, and known threat databases 398.
- Remote similarity search agents 388 are located in each database to facilitate similarity searching.
- the classification engine 330 automatically gathers transaction datasets 320 and processes decisions to resolve as many cases as possible without human intervention.
- the classification engine 330 has an ability to verify identity, score risk and classify the results employing multiple analytics and multiple target datasets in near real-time, and to notify or alert the relevant authorities on high risk individuals and provide an audit trail or justification of results. It provides an ability to adjust threat tolerance levels, to analyze risk on a higher level and relative to historical patterns for providing trends and patterns, and to maintain relevant data for intelligence purposes.
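- As an illustration of the audit-trail and alerting behavior described above, the sketch below records each classification result with its justification and raises a warning for third-category (high risk) cases; the data structure and the logging call are assumptions made for the example, not the engine's actual implementation.

```python
# Illustrative audit-trail record and high-risk alert; not the patent's design.
import logging
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Dict, List

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("classification_engine")


@dataclass
class ClassificationResult:
    dataset_id: str
    category: str                      # "first", "second" or "third"
    composite_score: float
    justification: List[str] = field(default_factory=list)  # audit trail entries
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


def record_and_alert(result: ClassificationResult, audit_log: List[Dict]) -> None:
    """Append the result to an audit log and alert on third-category (high risk) cases."""
    audit_log.append(asdict(result))
    if result.category == "third":
        log.warning("High-risk dataset %s (score %.2f): %s",
                    result.dataset_id, result.composite_score,
                    "; ".join(result.justification))
```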
- in the arbitration function 340, human judgment is employed to resolve ambiguities, obtain additional data, and arbitrate borderline classification decisions.
- the goal of the arbitration function 340 is to classify the cases that the classification engine 330 was unable to resolve and to select the high-risk cases to forward to the investigation function 350 for further investigation.
- in the investigation function 350, trained investigators pursue high-risk individuals identified in the previous stages. Most of their cases come as referrals from the arbitration function 340, but some may arrive directly from the classification engine 330.
- Figure 4 shows a system configuration of an embodiment of the present invention connected in a flexible services network configuration 400. Performance and scalability are achieved by using a flexible, dynamic services network 400, also referred to as a "Federation", as depicted in Figure 4.
- a workflow manager 410 is connected to the network and invokes services needed via the network.
- instead of calling an explicit computer or node, the workflow manager 410 makes a request to application nodes such as an investigator 450, an arbitrator 440 or a classification engine 430 using a queue 455 and other resources in the network 400.
- the request is routed to an available application node 430, 440, 450, which performs the request and returns the result to the workflow manager 410.
- the workflow manager 410 defines and uses the application nodes 430, 440, 450 as virtual services, and a network controller determines actual available services dynamically.
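- The queue-based dispatch can be pictured with the simplified, in-process Python sketch below; a production "Federation" would use a distributed message queue across machines, and all names here are illustrative rather than taken from the patent.

```python
# Simplified analogue of queue-based dispatch to whichever node is available.
import queue
import threading
from typing import Dict

request_queue = queue.Queue()      # requests from the workflow manager
results: Dict[str, str] = {}


def application_node(node_name: str) -> None:
    """A worker (classification, arbitration or investigation node) that pulls
    whatever request is next on the shared queue."""
    while True:
        request = request_queue.get()
        if request is None:                    # one sentinel per node to shut down
            request_queue.task_done()
            return
        # Stand-in for running the requested service (classify, arbitrate, ...).
        results[request["id"]] = f"{request['service']} handled by {node_name}"
        request_queue.task_done()


# Workflow manager side: start two virtual service nodes and enqueue a request.
nodes = [threading.Thread(target=application_node, args=(f"node-{i}",)) for i in (1, 2)]
for node in nodes:
    node.start()
request_queue.put({"id": "txn-001", "service": "classification"})
request_queue.join()                           # wait until the request is handled
for _ in nodes:                                # send one shutdown sentinel per node
    request_queue.put(None)
for node in nodes:
    node.join()
```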
- the workflow manager 410 shares a common repository 435 with the application nodes 430, 440, 450 and various analytic functions, including a similarity search server 470, biometric modules 472, rules engine 474, neural network 476, model engine 478, auto link analysis 480, decision tree 482, and report engine 484.
- a data warehouse 445 also provides a data source for use by the system.
- the similarity search server 470 uses remote search agents 488 to access and search remote databases 490-498.
- the flexible services network 400 may easily accommodate various user applications through use of standard APIs.
- the workflow manager 410 controls the overall workflow process.
- the workflow manager 410 controls a transaction dataset through the process model, keeping track of progress along the way.
- the workflow manager 410 functions as a command server.
- the workflow manager 410 is a workflow service node, communicating to other components through the network "Federation".
- workflow utility applications are included in the workflow manager 410 for various utilitarian functions. Utilities include workflow scheduling, data cleanup, data importing and exporting, batch processing, and various process and data management tools.
- Another utility is a workflow monitor that monitors specific events, actions, or data from a node, indicating completion or status. Workflow user applications may be connected to the network 400 to make use of workflow results.
- the workflow manager 410 is controlled by a user-defined task definition, which presents a list of transaction datasets for the system to process.
- the workflow manager 410 creates and manages these tasks per process model definitions.
- the workflow manager 410 also enables users to interface with workflow artifacts via the arbitrator 440 and investigator 450.
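- The shape of a user-defined task definition might resemble the following sketch; the keys, stage names and schedule syntax are assumptions chosen to mirror the process model described above, not a format specified by the patent.

```python
# Hypothetical task definition consumed by the workflow manager.
task_definition = {
    "task_id": "nightly-claims-batch",
    "datasets": ["claim-1001", "claim-1002", "claim-1003"],
    "process_model": [
        {"stage": "identity_verification_and_classification", "node": "classification_engine"},
        {"stage": "detection", "node": "arbitrator"},
        {"stage": "investigation", "node": "investigator"},
    ],
    "schedule": "daily@02:00",   # handled by the workflow scheduling utility
}
```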
- Figure 5A shows a process used historically for airline security activities according to a timeline 545. After an airline reservation 510 was made, there were no further checks made to ascertain security threats until physical security 540 was conducted when an individual checked in 520 at an airline counter and a flight departed the airport 530.
- Figure 5B shows a process according to the present invention for airline security activities according to a timeline 595. Threat detection 585 is initiated when an airline reservation 550 is made and processed through time of airline check-in 560 and flight departure 570.
- FIG. 6A shows a process used historically for insurance claim processing for fraud detection.
- after a claim is made to an insurance company 610, it is processed 615 and the claim is paid 620. Only after the claim has been paid 620 is an investigation conducted 625 to discover whether there has been any fraudulent activity 630. At this point, it may be impossible to recover any claim fees paid.
- Figure 6B shows a process according to the present invention for insurance claim processing for fraud detection.
- after a claim is made to an insurance company 650, it is processed according to the present invention through the steps of identification, detection and investigation 660.
- Claims determined to be low risk 662 are processed 670 and paid 675.
- Claims determined to be high risk 664 are further processed 680 and refused 685.
- Figure 7 shows a screenshot 700 summarizing the detection arbitration step.
- Figure 7 depicts a partial list of transaction datasets being arbitrated in the detection stage of the present invention.
- the screenshot 700 lists an identification 710 of each transaction dataset displayed.
- a classification status 720 is indicated as red, yellow or green, indicating high risk, medium risk or low risk, respectively.
- FIG. 8 shows a screen shot 800 that provides a user with a summary view of a result from a specific identity verification and classification.
- the screenshot 800 displays five main sections: the original criteria for the search 810 and a summary table of search results 820, 822, 824, 826 from each query that was executed by an arbitrator.
- the query document section 810 displays the field name and criteria value for each populated field in the original query document. Unpopulated fields may be hidden.
- the results for each query are displayed in a separate summary table section 820, 822, 824, 826 having a header label indicating the target schema and mapping used to generate the query.
- Each table section 820, 822, 824, 826 displays the following links in its header: "Side by Side" 830 opens the selected documents in a side-by-side view with the translated criteria used to generate that result set; "FI" 840 opens an entire result set in the investigative analysis application; "Check All" 850 selects all documents in the result set; and "Clear All" 860 deselects all documents in the result set.
- the table sections of search results 820, 822, 824, 826 display a row for each returned document in the result set. Each row displays the following items in columnar fashion: a checkbox 870 to select the given document, the document Identification 880 linked to the document detail view for this document, and the document's score 890.
- the summary table sections of search results 820, 822, 824, 826 are not paginated. The user is able to select one or more documents from the results table sections and navigate to a side-by-side view by pressing the "Side-by-Side" button 830 next to selected documents.
- Figure 9 shows a screen shot 900 of a link analysis tool used in the investigative step of the present invention.
- the link analysis tool 900 may access the results from the classification or detection process.
- the link analysis tool 900 may illustrate an identification section 910 for identifying an individual and an additional list of possible associations related to the individual.
- a second graphic section 920 provides a graphical depiction of the individual with links to other related information.
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Development Economics (AREA)
- Strategic Management (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computer Security & Cryptography (AREA)
- Accounting & Taxation (AREA)
- Finance (AREA)
- Entrepreneurship & Innovation (AREA)
- Computer Hardware Design (AREA)
- Game Theory and Decision Science (AREA)
- General Engineering & Computer Science (AREA)
- Economics (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB0513372A GB2414317A (en) | 2002-09-30 | 2003-09-30 | System and method for identification, detection and investigation of maleficent acts |
| CA002543869A CA2543869A1 (en) | 2003-09-29 | 2003-09-30 | System and method for identification, detection and investigation of maleficent acts |
| AU2003282911A AU2003282911A1 (en) | 2003-09-29 | 2003-09-30 | System and method for identification, detection and investigation of maleficent acts |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US10/673,911 US20050043961A1 (en) | 2002-09-30 | 2003-09-29 | System and method for identification, detection and investigation of maleficent acts |
| US10/673,911 | 2003-09-29 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2005041057A1 true WO2005041057A1 (en) | 2005-05-06 |
Family
ID=34520480
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2003/031237 Ceased WO2005041057A1 (en) | 2002-09-30 | 2003-09-30 | System and method for identification, detection and investigation of maleficent acts |
Country Status (3)
| Country | Link |
|---|---|
| AU (1) | AU2003282911A1 (en) |
| CA (1) | CA2543869A1 (en) |
| WO (1) | WO2005041057A1 (en) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10771239B2 (en) | 2018-04-18 | 2020-09-08 | International Business Machines Corporation | Biometric threat intelligence processing for blockchains |
| CN111859922A (en) * | 2020-07-31 | 2020-10-30 | 上海银行股份有限公司 | Application method of entity relation extraction technology in bank wind control |
| US10970792B1 (en) * | 2019-12-04 | 2021-04-06 | Capital One Services, Llc | Life event bank ledger |
| CN114374560A (en) * | 2018-02-07 | 2022-04-19 | 阿里巴巴集团控股有限公司 | Data processing method, device and storage medium |
| CN114818846A (en) * | 2022-02-17 | 2022-07-29 | 恒安嘉新(北京)科技股份公司 | Victim prediction method, device, equipment and storage medium |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5557773A (en) * | 1991-06-12 | 1996-09-17 | Wang; Cheh C. | Computational automation for global objectives |
-
2003
- 2003-09-30 WO PCT/US2003/031237 patent/WO2005041057A1/en not_active Ceased
- 2003-09-30 CA CA002543869A patent/CA2543869A1/en not_active Abandoned
- 2003-09-30 AU AU2003282911A patent/AU2003282911A1/en not_active Abandoned
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5557773A (en) * | 1991-06-12 | 1996-09-17 | Wang; Cheh C. | Computational automation for global objectives |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114374560A (en) * | 2018-02-07 | 2022-04-19 | 阿里巴巴集团控股有限公司 | Data processing method, device and storage medium |
| US10771239B2 (en) | 2018-04-18 | 2020-09-08 | International Business Machines Corporation | Biometric threat intelligence processing for blockchains |
| US10970792B1 (en) * | 2019-12-04 | 2021-04-06 | Capital One Services, Llc | Life event bank ledger |
| US11538116B2 (en) | 2019-12-04 | 2022-12-27 | Capital One Services, Llc | Life event bank ledger |
| CN111859922A (en) * | 2020-07-31 | 2020-10-30 | 上海银行股份有限公司 | Application method of entity relation extraction technology in bank wind control |
| CN111859922B (en) * | 2020-07-31 | 2023-12-01 | 上海银行股份有限公司 | Application method of entity relation extraction technology in bank wind control |
| CN114818846A (en) * | 2022-02-17 | 2022-07-29 | 恒安嘉新(北京)科技股份公司 | Victim prediction method, device, equipment and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CA2543869A1 (en) | 2005-05-06 |
| AU2003282911A1 (en) | 2005-05-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20050043961A1 (en) | System and method for identification, detection and investigation of maleficent acts | |
| CN110188198B (en) | Anti-fraud method and device based on knowledge graph | |
| US7827045B2 (en) | Systems and methods for assessing the potential for fraud in business transactions | |
| Kamiran et al. | Quantifying explainable discrimination and removing illegal discrimination in automated decision making | |
| Seifert | Data mining: An overview | |
| US9916584B2 (en) | Method and system for automatic assignment of sales opportunities to human agents | |
| US20050097051A1 (en) | Fraud potential indicator graphical interface | |
| CN112668859A (en) | Big data based customer risk rating method, device, equipment and storage medium | |
| Seifert | Data mining and homeland security: An overview | |
| US20140058763A1 (en) | Fraud detection methods and systems | |
| US20140081652A1 (en) | Automated Healthcare Risk Management System Utilizing Real-time Predictive Models, Risk Adjusted Provider Cost Index, Edit Analytics, Strategy Management, Managed Learning Environment, Contact Management, Forensic GUI, Case Management And Reporting System For Preventing And Detecting Healthcare Fraud, Abuse, Waste And Errors | |
| US20050273453A1 (en) | Systems, apparatus and methods for performing criminal background investigations | |
| US20150279372A1 (en) | Systems and Methods for Detecting Fraud in Spoken Tests Using Voice Biometrics | |
| Jans et al. | A framework for internal fraud risk reduction at it integrating business processes | |
| US20040098405A1 (en) | System and Method for Automated Link Analysis | |
| Pal | Usefulness and applications of data mining in extracting information from different perspectives | |
| CN112598499A (en) | Method and device for determining credit limit | |
| Sparrow | Network vulnerabilities and strategic intelligence in law enforcement | |
| WO2005041057A1 (en) | System and method for identification, detection and investigation of maleficent acts | |
| Elhadad | Insurance Business Enterprises' Intelligence in View of Big Data Analytics | |
| EP3764287A1 (en) | Transaction policy audit | |
| CN119693109B (en) | Business data processing method, device, equipment, medium and program product | |
| Ardhaninggar et al. | A Review of Cybersecurity Framework Implementation for Retail Industry-Challenges and Recommendation | |
| Constantin et al. | Internal Managerial Control-Perspectives on Some Modern Methods of Reducing the Risk of Fraud in Public Administration. | |
| Grossman | Alert management systems: A quick introduction |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG UZ VC VN YU ZA ZM ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| ENP | Entry into the national phase |
Ref document number: 0513372 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20030930 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 0513372.3 Country of ref document: GB |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2543869 Country of ref document: CA |
|
| 122 | Ep: pct application non-entry in european phase | ||
| NENP | Non-entry into the national phase |
Ref country code: JP |