
US20170085584A1 - Detecting and thwarting spear phishing attacks in electronic messages - Google Patents


Info

Publication number
US20170085584A1
US20170085584A1 (U.S. application Ser. No. 14/861,846)
Authority
US
United States
Prior art keywords
electronic message
senders
sender
database
purported
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/861,846
Inventor
Sebastien GOUTAL
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vade USA Inc
Original Assignee
Vade Retro Technology Inc
Vade Secure Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Vade Retro Technology Inc and Vade Secure Inc
Assigned to Vade Retro Technology Inc. reassignment Vade Retro Technology Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GOUTAL, SEBASTIEN
Publication of US20170085584A1 publication Critical patent/US20170085584A1/en
Assigned to VADE SECURE, INCORPORATED reassignment VADE SECURE, INCORPORATED CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: VADE RETRO TECHNOLOGY, INCORPORATED
Assigned to TIKEHAU ACE CAPITAL reassignment TIKEHAU ACE CAPITAL SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VADE USA INCORPORATED
Assigned to VADE USA INCORPORATED reassignment VADE USA INCORPORATED TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS RECORDED AT REEL 059510, FRAME 0419 Assignors: TIKEHAU ACE CAPITAL

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1425Traffic logging, e.g. anomaly detection
    • G06F17/30339
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/21Monitoring or handling of messages
    • H04L51/212Monitoring or handling of messages using filtering or selective blocking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441Countermeasures against malicious traffic
    • H04L63/1458Denial of Service
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441Countermeasures against malicious traffic
    • H04L63/1483Countermeasures against malicious traffic service impersonation, e.g. phishing, pharming or web spoofing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/02Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]

Definitions

  • a check may be carried out to determine whether the sender's email address is known.
  • the KNOWN_CONTACTS list may be consulted for this purpose. If the email address is not known (e.g., is not present in the KNOWN_CONTACTS list), a determination may be carried out, according to one embodiment, to determine whether the email address looks like or is otherwise similar to a known address.
  • An email address is made up of a local part, the @ symbol and a domain part (e.g., in john.smith@gmail.com, “john.smith” is the local part and “gmail.com” is the domain part).
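As a brief sketch (the function name is illustrative, not from the patent), the two parts can be isolated by splitting on the last @ character:

```python
def split_address(address: str) -> tuple:
    """Split an email address into its local part and domain part.

    Splitting on the last "@" is used because, strictly speaking, a
    quoted local part may itself contain "@" characters.
    """
    local, _, domain = address.rpartition("@")
    return local, domain
```

For example, `split_address("john.smith@gmail.com")` returns `("john.smith", "gmail.com")`.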
  • an email may be considered to be suspect or potentially illegitimate if both of the following conditions are met: the sender's email address is not in the KNOWN_CONTACTS list, and the local part of that email address is equal or close to the local part of an email address in KNOWN_CONTACTS.
  • a detection process may be carried out to determine whether the local part of the received email address has been spoofed, to appear to resemble the local part of an email address in the KNOWN_CONTACTS list.
  • such a detection process may utilize a string metric to compare the local part of an email address in the KNOWN_CONTACTS with the local part of the received email address.
  • a string metric (also known as a string similarity metric or string distance function) is a metric that measures distance (“inverse similarity”) between two text strings, for approximate string matching or comparison and in fuzzy string searching. A string metric provides a number that is an indication of the distance or similarity between two (e.g., alpha or alphanumeric) strings.
  • One embodiment utilizes the Levenshtein Distance (also known as Edit Distance).
  • the Levenshtein Distance operates between two input strings, and returns a number equivalent to the number of insertions, deletions and substitutions needed in order to transform one input string (e.g., the local part of the received email address) into another (e.g., the local part of an email address in the KNOWN_CONTACTS list).
  • One embodiment therefore, computes a string metric such as the Levenshtein distance to detect if there has been a likely spoofing of the local part of the received email address.
  • the Levenshtein distance between two sequences of characters is the minimum number of single-character edits (i.e. insertions, deletions or substitutions) required to change one sequence of characters into the other.
  • Other string metrics that may be used in this context include, for example, the Damerau-Levenshtein distance. Others may be used to good benefit as well.
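As a concrete sketch (the patent does not prescribe any particular implementation), the Levenshtein distance can be computed with the classic two-row dynamic-programming routine:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions or
    substitutions needed to transform string a into string b."""
    prev = list(range(len(b) + 1))  # distances from a[:0] to each prefix of b
    for i, ca in enumerate(a, start=1):
        cur = [i]  # distance from a[:i] to the empty prefix of b
        for j, cb in enumerate(b, start=1):
            cur.append(min(
                prev[j] + 1,                   # deletion of ca
                cur[j - 1] + 1,                # insertion of cb
                prev[j - 1] + (ca != cb),      # substitution (free if equal)
            ))
        prev = cur
    return prev[-1]
```

Applied to the local parts discussed in connection with FIG. 2, `levenshtein("john.smith", "john_smith")`, `levenshtein("john.smith", "johnsmith")` and `levenshtein("john.smith", "john.smitth")` each return 1, while `levenshtein("john.smith", "johnsmilh")` returns 2.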
  • FIG. 2 is a table showing a legitimate email address, spoofed email addresses and a calculated string metric (e.g., a Levenshtein distance) between the two, according to one embodiment.
  • As FIG. 2 shows, the Levenshtein Distance between the legitimate email address and the address in the Spoofed email address column may be zero, meaning that the two local parts are identical and that no insertions, deletions or substitutions have been made to the local part. The spoofed email address's domain, however, is yahoo.com, whereas the legitimate address's domain is gmail.com. The spoofed email address, therefore, would not be present in KNOWN_CONTACTS, even though the Levenshtein Distance between the local part of the legitimate email and the local part of the spoofed email is zero. Because the email address is not in KNOWN_CONTACTS and the local part of the email address is equal or close to the local part of an email address in KNOWN_CONTACTS, the received john.smith@yahoo.com email would be considered suspect or at least likely illegitimate.
  • the third row of the table in FIG. 2 shows that the Levenshtein Distance between the legitimate email address and the spoofed email address is 1.
  • the difference between the two local parts of the legitimate and spoofed email addresses is a single substitution of an underscore for a period.
  • the fourth row of the table in FIG. 2 shows that the Levenshtein Distance between the legitimate email address and the spoofed email address is 1.
  • the difference between the two local parts of the legitimate and spoofed email addresses is a single deletion of a period in the local part of the received email address.
  • the fifth row of the table in FIG. 2 shows that the Levenshtein Distance between the legitimate email address and the spoofed email address is 1 as well. In this case, however, the difference between the two local parts of the legitimate and spoofed email addresses is a single insertion of an extra letter “t” in the local part.
  • the sixth row of the table in FIG. 2 shows that the Levenshtein Distance between the legitimate email address and the spoofed email address is 2. Indeed, the difference between the two local parts of the legitimate and spoofed email addresses is a single deletion and a single substitution, as the period has been deleted and an “l” has been substituted for the “t” in the local part.
  • an email address is considered as suspect if the string metric (the Levenshtein Distance in one implementation) d between the local part of the email address and the local part of an email address of KNOWN_CONTACTS is such that 0 ≤ d ≤ STRING_METRIC_DISTANCE_THRESHOLD.
  • One implementation may include the following functionality:
  • the minimum length for the local part of the email address has been set at 6 characters and the STRING_METRIC_DISTANCE_THRESHOLD has been set at 2.
  • other values may be substituted for these values.
  • the parameters STRING_METRIC_DISTANCE_THRESHOLD and localpart_min_length may be readily configured according to operational conditions and according to the security policies of the deploying organization.
  • If STRING_METRIC_DISTANCE_THRESHOLD is increased, a greater number of spoofing attempts may be detected, but a greater number of false positives (email addresses that are legitimate but are flagged as potentially illegitimate) may be generated. A greater number of false positives may erode the user experience, degrade the confidence of the protected user in the system and lead the user to disregard flagged emails.
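A minimal sketch of such a suspect-address test, using the default values stated above, might look as follows (function and variable names are illustrative, not taken from the patent's own listing; the Levenshtein helper is the standard two-row routine):

```python
STRING_METRIC_DISTANCE_THRESHOLD = 2  # default value, per the text above
LOCALPART_MIN_LENGTH = 6              # default value, per the text above

def levenshtein(a: str, b: str) -> int:
    """Standard two-row Wagner-Fischer edit-distance computation."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def is_suspect(address: str, known_contacts: set) -> bool:
    """Flag an address that is absent from KNOWN_CONTACTS but whose local
    part is identical or close to that of a known contact."""
    address = address.lower()          # the lists store lowercase addresses
    if address in known_contacts:
        return False                   # exact match: known, trusted sender
    local = address.rsplit("@", 1)[0]
    if len(local) < LOCALPART_MIN_LENGTH:
        return False                   # too short for a reliable comparison
    for contact in known_contacts:
        known_local = contact.rsplit("@", 1)[0]
        if levenshtein(local, known_local) <= STRING_METRIC_DISTANCE_THRESHOLD:
            return True                # looks like a known contact, but isn't
    return False
```

With `{"john.smith@gmail.com"}` as the known contacts, `is_suspect("john.smith@yahoo.com", ...)` returns True (identical local part, different domain), while `is_suspect("john.smith@gmail.com", ...)` returns False.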
  • a visual (for example) cue (such as a message) may be generated to warn the protected user.
  • the protected user may then be called upon to make a decision to confirm that the flagged email is indeed suspect, or to deny that it is suspect.
  • One implementation may include the following functionality:
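One way this warn-and-decide step could be wired up is sketched below. This is a hedged sketch: `prompt_user`, `deliver` and `drop` are hypothetical callbacks, and adding a confirmed spoofer to BLACKLIST, or a denied-suspect sender to KNOWN_CONTACTS, is one plausible policy consistent with the list-management behavior described elsewhere in this document.

```python
def handle_flagged_email(sender, message, known_contacts, blacklist,
                         prompt_user, deliver, drop):
    """Warn the protected user about a flagged email and act on the
    user's decision to confirm or deny that the message is suspect."""
    sender = sender.lower()  # the contact lists store lowercase addresses
    confirmed = prompt_user(
        f"Warning: {sender} resembles, but does not match, one of your "
        "known contacts. Treat this message as suspect?")
    if confirmed:
        blacklist.add(sender)       # assumption: blacklist confirmed spoofers
        drop(message)               # suspect: do not deliver
    else:
        known_contacts.add(sender)  # assumption: the user vouches for the sender
        deliver(message)            # deliver to the intended recipient
```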
  • FIG. 3 is a flow chart of a method according to one embodiment.
  • block B31 calls for receiving an electronic message (an email, for example) from a purported known sender over a computer network.
  • a database configured to store a plurality of known senders of electronic messages (including, for example, the KNOWN_CONTACTS list discussed above) may be accessed (either locally or over a LAN or WAN) and it may be determined whether the purported known sender of the electronic message matches one of the plurality of known senders of electronic messages in the database of known senders.
  • the degree of similarity of the purported known sender of the electronic message to one or more of the plurality of known senders of electronic messages stored in the database may be quantified.
  • it may be determined whether the purported known sender matches one of the plurality of known senders in the database of known senders. If so (Yes branch of B34), the electronic message originates from a legitimate sender, as shown at B35, and the message may be safely delivered to its intended recipient.
  • If the purported known sender does not match one of the plurality of known senders in the database of known senders (No branch of B34), it may be determined, as shown at B36, whether the quantified degree of similarity of the purported known sender of the electronic message to one of the plurality of known senders of electronic messages is greater than a threshold value (such as, for example, the value of the STRING_METRIC_DISTANCE_THRESHOLD variable, as discussed above). If not, the electronic message may be legitimate, as suggested at B37, or no information may be determined (at least, the electronic message may be determined to be an unlikely candidate for a spear phishing attack).
  • the received electronic message may be flagged as being suspect. Thereafter, a visual and/or other perceptible cue, warning message, dialog box and the like may be generated when the received electronic message has been flagged as being suspect, to alert the recipient thereof that the flagged electronic message is likely illegitimate.
  • the electronic message may be, or may comprise, an email.
  • the quantifying may comprise calculating a string metric of the difference between the purported sender and one of the plurality of known senders in the database of known senders.
  • the string metric may comprise a Levenshtein distance between the purported sender and one of the plurality of known senders in the database of known senders.
  • a prompt may be generated, to solicit a decision confirming the flagged electronic message as being suspect or a decision denying that the flagged electronic message is suspect. Thereafter, the electronic message flagged as suspect may be dropped when the prompted decision is to confirm that the flagged electronic message is suspect and the flagged electronic message may be delivered to its intended recipient when the prompted decision is to deny that the flagged electronic message is suspect.
  • FIG. 4 is a block diagram of a system configured for phishing detection, according to one embodiment.
  • a spear phishing email server or workstation 402 (not part of the present spear phishing detection system, per se; spear phishing attacks tend to be somewhat more artisanal than the comparatively less sophisticated phishing attacks) may be coupled to a network 404 (including, for example, a LAN or a WAN such as the Internet), and to the email server 408 of a client computing device 412.
  • the email server 408 may be configured to receive the email on behalf of the client computing device 412 and provide access thereto.
  • a database 406 of known and trusted senders may also be coupled to the network 404 .
  • a Blacklist database 414 may also be coupled to the network 404 .
  • a phishing detection engine 410 may be coupled to or incorporated within, the email server 408 .
  • some or all of the functionality of the spear phishing detection engine 410 may be coupled to or incorporated within the client computing device 412 .
  • the functionality of the spear phishing detection engine 410 may be distributed across both client computing device 412 and the email server 408 .
  • the spear phishing detection engine may be configured to carry out the functionality described herein above and, in particular, with reference to FIG. 3 .
  • the databases 406 , 414 may be merged into one database and/or may be co-located with the email server 408 and/or the spear phishing detection engine 410 .
  • Any reference to an engine in the present specification refers, generally, to a program (or group of programs) that perform a particular function or series of functions that may be related to functions executed by other programs (e.g., the engine may perform a particular function in response to another program or may cause another program to execute its own function).
  • Engines may be implemented in software, or in hardware in the context of an appropriate hardware device, such as an algorithm embedded in a processor or an application-specific integrated circuit.
  • FIG. 5 illustrates a block diagram of a computing device, such as client computing device 412, email server 408 and/or spear phishing detection engine 410, upon and with which embodiments may be implemented.
  • Computing device 412 , 408 , 410 may include a bus 501 or other communication mechanism for communicating information, and one or more processors 502 coupled with bus 501 for processing information.
  • Computing device 412 , 408 , 410 may further comprise a random access memory (RAM) or other dynamic storage device 504 (referred to as main memory), coupled to bus 501 for storing information and instructions to be executed by processor(s) 502 .
  • Main memory (tangible and non-transitory) 504 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 502 .
  • Computing device 412 , 408 , 410 may also include a read only memory (ROM) and/or other static storage device 506 coupled to bus 501 for storing static information and instructions for processor(s) 502 .
  • a data storage device 507 such as a magnetic disk and/or solid state data storage device may be coupled to bus 501 for storing information and instructions—such as would be required to carry out the functionality shown and disclosed relative to FIG. 3 .
  • the computing device 412 , 408 , 410 may also be coupled via the bus 501 to a display device 521 for displaying information to a computer user.
  • An alphanumeric input device 522 including alphanumeric and other keys, may be coupled to bus 501 for communicating information and command selections to processor(s) 502 .
  • Another type of user input device is cursor control 523 , such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor(s) 502 and for controlling cursor movement on display 521 .
  • the computing device 412 , 408 , 410 may be coupled, via a communication device (e.g., modem, network interface card or NIC) to a network 404 .
  • Embodiments of the present invention are related to the use of computing device 412 , 408 , 410 to detect and compute a probability that received email may be or may include a spear phishing attack.
  • the methods and systems described herein may be provided by one or more computing devices 412 , 408 , 410 in response to processor(s) 502 executing sequences of instructions contained in memory 504 .
  • Such instructions may be read into memory 504 from another computer-readable medium, such as data storage device 507 .
  • Execution of the sequences of instructions contained in memory 504 causes processor(s) 502 to perform the steps and have the functionality described herein.
  • hard-wired circuitry may be used in place of or in combination with software instructions to implement the described embodiments.
  • the computing devices may include one or a plurality of microprocessors working to perform the desired functions.
  • the instructions executed by the microprocessor or microprocessors are operable to cause the microprocessor(s) to perform the steps described herein.
  • the instructions may be stored in any computer-readable medium. In one embodiment, they may be stored on a non-volatile semiconductor memory external to the microprocessor, or integrated with the microprocessor. In another embodiment, the instructions may be stored on a disk and read into a volatile semiconductor memory before execution by the microprocessor.

Abstract

A computer-implemented method may comprise receiving an electronic message from a purported known sender; accessing a database of known senders and determining whether the sender matches one of the known senders. The degree of similarity of the sender to at least one of the known senders may then be quantified. The received message may then be determined to be legitimate when the purported known sender is determined to match one of the known senders. The received electronic message may be flagged as being suspect when the purported known sender does not match one of the plurality of known senders and the quantified degree of similarity of the purported known sender to one of the known senders is greater than a threshold value. A perceptible cue may then be generated when the received message has been flagged as being suspect, to alert the recipient that the flagged message is likely illegitimate.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is related in subject matter to commonly-owned and co-pending U.S. patent application Ser. No. 14/542,939 filed on Nov. 17, 2014 entitled “Methods and Systems for Phishing Detection”, which is incorporated herein by reference in its entirety.
  • BACKGROUND
  • A spear phishing email is an email that appears to be from a known person or entity. But it is not. The spear phisher often knows the recipient victim's name, address, job title and professional network. The spear phisher knows a lot about his intended victim, thanks to the quantity and rich variety of information available publicly through online sources, the media and social networks.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a table showing examples of a legitimate email address and spoofed email addresses.
  • FIG. 2 is a table showing a legitimate email address, spoofed email addresses and a calculated string metric (e.g., a Levenshtein distance) between the two, according to one embodiment.
  • FIG. 3 is a flow chart of a method according to one embodiment.
  • FIG. 4 is a system configured according to one embodiment.
  • FIG. 5 is a block diagram of a computing device configured according to one embodiment.
  • DETAILED DESCRIPTION
  • Spear phishing is a growing threat. Spear phishing is, however, very different from a phishing attack. The differences between a phishing attack and a spear phishing attack may include the following:
      • The target of a spear phishing attack is usually a member of the corporate market, and especially people who have access to sensitive resources of the company. Typical targets are accountants, lawyers and top management executives. In contrast, phishing attacks tend to target all end users more indiscriminately.
      • Most often, a spear phishing attack is initiated only after a thorough analysis of the target victim. This analysis is aided by the great amount of personal and professional information available on social networks (including, for example, Facebook, Twitter, LinkedIn and the like), company website and other media. Consequently, a spear phishing attack is often crafted to be unique to the targeted individual. Phishing attacks, on the other hand, tend to be somewhat more indiscriminate, typically targeting thousands of people.
      • In the first phase of a spear phishing attack, the email purports to originate from a well-known (to the targeted victim) and trusted individual, such as a coworker. In contrast, phishing emails typically appear to originate from a trusted company (PayPal, Dropbox, Apple, Google, etc.).
      • The second phase of spear phishing attack has a different modus operandi: a malicious attachment or a malicious Uniform Resource Locator (URL) that leads the victim to install malware that will perform malicious operations (e.g., theft of data). Alternatively, the spear phishing email may contain text in the body of the email that induces or dupes the victim to perform a predetermined action (e.g., send a wire transfer, disclose sensitive information or the like). Instead, phishing attacks typically rely on the inclusion of a malicious URL only.
  • According to one embodiment, to protect a user from a spear phishing attack, a protection layer may be applied for each phase of the spear phishing attack. That is, during the first phase of the spear phishing attack, one embodiment detects whether an impersonation of a known sender is likely. During the second phase of the spear phishing attack, a detection procedure may be carried out to determine whether the suspicious email contains a malicious attachment, a malicious URL or suspect text in the body of the email.
  • According to one embodiment, to detect whether an email constitutes a potential spear phishing attack, the “From” email address (the sender's email address) may be scrutinized to detect whether the sender is a legitimate, known and trusted entity or is potentially an impersonation of the same. According to one embodiment, if a user receives an email from an unknown sender, a check may be carried out to determine if the sender's email address is a known contact of the email recipient. If the sender's email address looks like but is in any way different from a known contact of the recipient, the email recipient may be warned (through the generation of a visual and/or audio cue, for example) that the email is at least potentially illegitimate, as impersonating a known contact—the essence of a spear phishing attack.
  • One embodiment is configured to protect the user (e.g., an email recipient) by carrying out activities including:
      • 1. Managing, for the protected user, a list of his or known email contacts called KNOWN_CONTACTS;
      • 2. Managing, for the protected user, a list of blacklisted email contacts called BLACKLIST;
      • 3. Checking each incoming email to determine whether the sender email address looks like the email address of a known email contact; and
      • 4. Warning the end user if an incoming email is determined to be potentially illegitimate.
  • FIG. 1 is a table showing examples of a legitimate email address and spoofed email addresses, illustrating email address impersonation. As shown, the legitimate email address is john.smith@gmail.com. In the first row, the legitimate john.smith@gmail.com has been spoofed by replacing the domain “gmail.com” with “mail.com”. In the second row, “gmail.com” has been replaced with another legitimate domain; namely, “yahoo.com”. Indeed, the user may not remember whether John Smith's email is with gmail.com, mail.com or yahoo.com, which may lead the user to believe that the email is genuine when, in fact, it is not. In the third row, the period between “john” and “smith” has been replaced by an underscore, which may appear, to the user, to be a wholly legitimate email address. The fourth row shows another variation, in which the period between “john” and “smith” has been removed, which change may not be immediately apparent to the user, who may open the email believing it originated from a trusted source (in this case, john.smith@gmail.com). In the fifth row, an extra “t” has been added to “smith” such that the email address is john.smitth@gmail.com, which small change may not be noticed by the user. Lastly, the sixth row exploits the fact that some letters look similar, such as a “t” and an “l”, which allows an illegitimate email address of johnsmilh@gmail.com to appear legitimate to the casual eye.
  • Managing List of Known Email Contacts
  • According to one embodiment, a list of the protected user's known email contacts called KNOWN_CONTACTS may be created and maintained. All email addresses in this list may be stored in lowercase. According to one embodiment, the KNOWN_CONTACTS list may be initially seeded by the protected user's address book. According to one embodiment, the protected user's address book, for performance and accuracy reasons, may not be used if it exceeds a predetermined (say 1,000, for example) maximum number of entries. This predetermined maximum number of entries may be represented by an ADDRESS_BOOK_MAX_SIZE variable (whose default value may be set at 1,000). Very large address books may, for example, be associated with very large companies that share the whole company address book with all employees.
  • Another source of legitimate email addresses to populate the KNOWN_CONTACTS list is the set of sender addresses of emails received by the end user, with the exception of automated emails such as email alerts, newsletters, advertisements or any email that has been sent by an automated process. The email addresses of people to whom the end user has sent an email are yet another source of legitimate email addresses. According to one embodiment, KNOWN_CONTACTS may be updated in one or more of the following cases:
      • When the address book is updated;
      • When the protected user receives an email from a non-suspect new contact, with the exception of automated emails such as email alerts, newsletters, advertisements or any email that has been sent by an automated process; and/or
      • When the end user sends an email to a new contact.
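  • The seeding and update rules described above can be sketched as follows. This is an illustrative sketch only: the function names, the is_automated flag and the use of a Python set are assumptions for the example, not details prescribed by the embodiment.

```python
ADDRESS_BOOK_MAX_SIZE = 1000  # default maximum usable address-book size

def seed_known_contacts(address_book):
    # Seed KNOWN_CONTACTS from the protected user's address book, unless it
    # is too large (e.g., a whole-company directory shared with everyone).
    if len(address_book) > ADDRESS_BOOK_MAX_SIZE:
        return set()
    return {addr.lower() for addr in address_book}

def on_email_received(known_contacts, sender_address, is_automated):
    # Add the sender of a non-automated email (not an alert, newsletter,
    # advertisement or other machine-generated message) to KNOWN_CONTACTS.
    if not is_automated:
        known_contacts.add(sender_address.lower())

def on_email_sent(known_contacts, recipient_address):
    # Any address the protected user writes to is treated as known.
    known_contacts.add(recipient_address.lower())
```

All addresses are lowercased on entry, mirroring the requirement that every email address in the list be stored in lowercase.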
  • Managing List of Blacklisted Contacts
  • According to one embodiment, a list of blacklisted email contacts called BLACKLIST may also be established and managed. All email addresses in this list are stored in lowercase. According to one embodiment, if an email is sent by a sender whose email address belongs to BLACKLIST, then that email will be dropped and will not be delivered to the protected user.
  • Detecting a Potentially Suspect or Illegitimate Email Address
  • When a protected user receives an email, a check may be carried out to determine whether the sender's email address is known. The KNOWN_CONTACTS list may be consulted for this purpose. If the email address is not known (e.g., is not present in the KNOWN_CONTACTS list), a determination may be carried out, according to one embodiment, to determine whether the email address looks like or is otherwise similar to a known address. An email address is made up of a local part, the @ symbol and a domain part:
      • The local part is the left side of the email address, before the @ symbol. For example, john.smith is the local part of john.smith@gmail.com.
      • The domain is the right side of the email address, after the @ symbol. For example, gmail.com is the domain of john.smith@gmail.com.
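  • As a concrete illustration, the split can be performed by breaking the address at the last “@” (a minimal sketch; the helper name is ours, not the embodiment's):

```python
def split_address(address):
    # An email address is local-part + "@" + domain; splitting on the
    # last "@" keeps quoted local parts that contain "@" intact.
    local, domain = address.rsplit("@", 1)
    return local, domain

print(split_address("john.smith@gmail.com"))  # ('john.smith', 'gmail.com')
```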
  • According to one embodiment, an email may be considered to be suspect or potentially illegitimate if both of the following conditions are met:
      • The email address is not in KNOWN_CONTACTS, and
      • The local part of the email address is equal or close to the local part of an email address of KNOWN_CONTACTS.
  • According to one embodiment, a detection process may be carried out to determine whether the local part of the received email address has been spoofed, to appear to resemble the local part of an email address in the KNOWN_CONTACTS list. According to one embodiment, such a detection process may utilize a string metric to compare the local part of an email address in the KNOWN_CONTACTS with the local part of the received email address. A string metric (also known as a string similarity metric or string distance function) is a metric that measures distance (“inverse similarity”) between two text strings for approximate string matching or comparison and in fuzzy string searching. A string metric may provide a number that is an indication of the distance or similarity between two (e.g., alpha or alphanumeric) strings.
  • One embodiment utilizes the Levenshtein Distance (also known as the Edit Distance). The Levenshtein distance between two sequences of characters is the minimum number of single-character edits (i.e., insertions, deletions or substitutions) required to change one sequence of characters into the other. For example, it returns the number of edits needed to transform one input string (e.g., the local part of the received email address) into another (e.g., the local part of an email address in the KNOWN_CONTACTS list). One embodiment, therefore, computes a string metric such as the Levenshtein distance to detect whether there has been a likely spoofing of the local part of the received email address. Other string metrics that may be used in this context include, for example, the Damerau-Levenshtein distance. Others may be used to good benefit as well.
  • FIG. 2 is a table showing a legitimate email address, a spoofed email address and a calculated string metric (e.g., a Levenshtein distance) between the two, according to one embodiment. In the first row of the table of FIG. 2, the Levenshtein Distance between the local parts of the legitimate and spoofed email addresses is zero, meaning that they are the same and that no insertions, deletions or substitutions have been made to the local part. In the second row, the spoofed email address' domain is yahoo.com, whereas the legitimate address' domain is gmail.com. The spoofed email address, therefore, would not be present in KNOWN_CONTACTS, even though the Levenshtein Distance between the local part of the legitimate email and the local part of the spoofed email is zero, meaning that they are identical. As both conditions are met (the email address is not in KNOWN_CONTACTS and the local part of the email address is equal or close to the local part of an email address of KNOWN_CONTACTS), the received john.smith@yahoo.com email would be considered to be suspect or at least likely illegitimate. The third row of the table in FIG. 2 shows that the Levenshtein Distance between the legitimate email address and the spoofed email address is 1. In this case, the difference between the two local parts of the legitimate and spoofed email addresses is a single substitution of an underscore for a period. Similarly, the fourth row of the table in FIG. 2 shows that the Levenshtein Distance between the legitimate email address and the spoofed email address is 1. In this case, the difference between the two local parts of the legitimate and spoofed email addresses is a single deletion of the period in the local part of the received email address. The fifth row of the table in FIG. 2 shows that the Levenshtein Distance between the legitimate email address and the spoofed email address is 1 as well. In this case, however, the difference between the two local parts of the legitimate and spoofed email addresses is a single insertion of an extra letter “t” in the local part. Lastly, the sixth row of the table in FIG. 2 shows that the Levenshtein Distance between the legitimate email address and the spoofed email address is 2. Indeed, the difference between the two local parts of the legitimate and spoofed email addresses is a single deletion and a single substitution, as the period has been deleted and an “l” has been substituted for the “t” in the local part.
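  • The distances shown in FIG. 2 can be reproduced with a standard dynamic-programming implementation of the Levenshtein distance (a sketch; the embodiment does not prescribe a particular implementation):

```python
def levenshtein_distance(a, b):
    # Classic dynamic-programming edit distance: the minimum number of
    # single-character insertions, deletions or substitutions needed to
    # turn string a into string b. Keeps only one previous row in memory.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Local parts drawn from the rows of FIG. 2
legitimate = "john.smith"
for spoofed in ["john.smith", "john_smith", "johnsmith",
                "john.smitth", "johnsmilh"]:
    print(spoofed, levenshtein_distance(legitimate, spoofed))
```

Run over these local parts, this yields distances of 0, 1, 1, 1 and 2, matching the table (the first two rows of FIG. 2 share the same local part and differ only in the domain).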
  • According to one embodiment, an email address is considered suspect if the string metric (the Levenshtein Distance, in one implementation) d between the local part of the email address and the local part of an email address of KNOWN_CONTACTS is such that

  • d ≤ STRING_METRIC_DISTANCE_THRESHOLD
  • One implementation may include the following functionality:
  •   function : is_address_suspicious
      input :
    •  address : address to test. lowercase string.
    •  known_contacts : list of known contacts. Each contact is a
       lowercase string.
      output :
    •  true if suspect, false otherwise
      # these parameters can be configured according to the operational
      # conditions and security policy
      levenshtein_distance_threshold = 2
      localpart_min_length = 6
      # if the localpart is too short, it is not relevant
      if address.localpart.length < localpart_min_length :
        return false
      # if the address is already known, it is not suspect
      if address in known_contacts :
        return false
      # otherwise we check each contact of known_contacts
      for each known_contact in known_contacts :
        d = levenshtein_distance(address.localpart, known_contact.localpart)
        if d >= 0 and d <= levenshtein_distance_threshold :
         return true
      # email address is not suspect
      return false
  • Above, the minimum length for the local part of the email address has been set at 6 characters and the STRING_METRIC_DISTANCE_THRESHOLD has been set at 2. Of course, other values may be substituted for these values. Indeed, the parameters STRING_METRIC_DISTANCE_THRESHOLD and localpart_min_length may be readily configured according to operational conditions and according to the security policies of the deploying organization.
  • For example, if the STRING_METRIC_DISTANCE_THRESHOLD is increased, a greater number of spoofing attempts may be detected, but a greater number of false positives (email addresses that are legitimate but are flagged as potentially illegitimate) may be generated. A greater number of false positives may erode the user experience and degrade the confidence of the protected user in the system and may lead the user to disregard flagged emails.
  • Flagging an Email as Potentially Illegitimate/Generating Warning Cue
  • If the email address is suspect, a visual (for example) cue (such as a message) may be generated to warn the protected user. According to one embodiment, the protected user may then be called upon to make a decision to:
      • confirm that the email address is suspect—the email address is then added to BLACKLIST and the email is dropped; or
      • deny that the email address is suspect—the email address is then added to KNOWN_CONTACTS and the email is delivered to the protected user.
    IMPLEMENTATION EXAMPLE
  • One implementation may include the following functionality:
  •   function : process_email
      input :
    • email : email received.
    • known_contacts : list of known contacts. Each contact is a lowercase
      string.
    • blacklist : list of blacklisted contacts. Each contact is a
      lowercase string.
      output :
    • true if email has to be dropped, false otherwise
      # extract address from From header [1]
      address = email.from_header.address
      address = lowercase (address)
      # if address is blacklisted, drop email
      if address in blacklist :
        return true
      # if address is suspicious, warn user
      if is_address_suspicious(address, known_contacts) :
        # decision is confirmed or denied
        decision = warn_end_user(address)
        if decision is confirmed :
         blacklist.append(address)
         return true
        else if decision is denied :
         known_contacts.append(address)
         return false
      # otherwise add address to known_contacts
      else :
        known_contacts.append(address)
        return false
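  • The two pseudocode routines above translate into runnable Python roughly as follows. This is a sketch: the warn_end_user interaction is modeled as a callback returning “confirmed” or “denied”, the contact lists as Python sets, and the Levenshtein implementation is a standard dynamic-programming one; none of these details is mandated by the embodiment.

```python
def levenshtein_distance(a, b):
    # Standard dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def local_part(address):
    # Everything before the last "@" of the email address.
    return address.rsplit("@", 1)[0]

def is_address_suspicious(address, known_contacts,
                          distance_threshold=2, localpart_min_length=6):
    # Thresholds are configurable per operational conditions and policy.
    if len(local_part(address)) < localpart_min_length:
        return False  # local part too short to be relevant
    if address in known_contacts:
        return False  # already known, not suspect
    for contact in known_contacts:
        d = levenshtein_distance(local_part(address), local_part(contact))
        if d <= distance_threshold:
            return True  # looks like a known contact's local part
    return False

def process_email(sender, known_contacts, blacklist, warn_end_user):
    # Returns True if the email must be dropped, False otherwise.
    address = sender.lower()
    if address in blacklist:
        return True
    if is_address_suspicious(address, known_contacts):
        if warn_end_user(address) == "confirmed":
            blacklist.add(address)   # user confirmed: blacklist and drop
            return True
        known_contacts.add(address)  # user denied: trust and deliver
        return False
    known_contacts.add(address)      # not suspect: remember and deliver
    return False
```

For example, with KNOWN_CONTACTS containing john.smith@gmail.com, an incoming john_smith@gmail.com is flagged; if the user confirms the suspicion, the address is blacklisted and the email dropped.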
  • FIG. 3 is a flow chart of a method according to one embodiment. As shown, block B31 calls for receiving an electronic message (an email, for example) from a purported known sender over a computer network. In block B32, a database configured to store a plurality of known senders of electronic messages (including, for example, the KNOWN_CONTACTS list discussed above) may be accessed (either locally or over a LAN or WAN) and it may be determined whether the purported known sender of the electronic message matches one of the plurality of known senders of electronic messages in the database of known senders. As shown at B33, the degree of similarity of the purported known sender of the electronic message to one or more of the plurality of known senders of electronic messages stored in the database may be quantified. At B34, it may be determined whether the purported known sender matches one of the plurality of known senders in the database of known senders. If so (Yes branch of B34), the electronic message originates from a legitimate sender, as shown at B35, and the message may be safely delivered to its intended recipient. If the purported known sender does not match one of the plurality of known senders in the database of known senders (No branch of B34), it may be determined, as shown at B36, whether the quantified degree of similarity of the purported known sender of the electronic message to one of the plurality of known senders of electronic messages is greater than a threshold value (such as, for example, the value of the STRING_METRIC_DISTANCE_THRESHOLD variable, as discussed above). If not, the electronic message may be legitimate, as suggested at B37, or no information may be determined (at least, the electronic message may be determined to be an unlikely candidate for a spear phishing attack).
  • As shown at B38, if the purported known sender does not match one of the plurality of known senders in the database of known senders and the quantified degree of similarity of the purported known sender of the electronic message to one of the plurality of known senders of electronic messages is indeed greater than the threshold value, the received electronic message may be flagged as being suspect. Thereafter, a visual and/or other perceptible cue, warning message, dialog box and the like may be generated when the received electronic message has been flagged as being suspect, to alert the recipient thereof that the flagged electronic message is likely illegitimate.
  • According to one embodiment, the electronic message may be or may comprise an email. In block B33, the quantifying may comprise calculating a string metric of the difference between the purported sender and one of the plurality of known senders in the database of known senders. In one embodiment, the string metric may comprise a Levenshtein distance between the purported sender and one of the plurality of known senders in the database of known senders.
  • After block B39, a prompt may be generated, to solicit a decision confirming the flagged electronic message as being suspect or a decision denying that the flagged electronic message is suspect. Thereafter, the electronic message flagged as suspect may be dropped when the prompted decision is to confirm that the flagged electronic message is suspect and the flagged electronic message may be delivered to its intended recipient when the prompted decision is to deny that the flagged electronic message is suspect.
  • FIG. 4 is a block diagram of a system configured for spear phishing detection, according to one embodiment. As shown therein, a spear phishing email server or workstation 402 (not part of the present spear phishing detection system, per se; spear phishing attacks tend to be somewhat more artisanal than the comparatively less sophisticated phishing attacks) may be coupled to a network 404 (including, for example, a LAN or a WAN including the Internet), and thereby to an email server 408 serving a client computing device 412. The email server 408 may be configured to receive email on behalf of the client computing device 412 and provide access thereto. A database 406 of known and trusted senders may also be coupled to the network 404, as may a blacklist database 414. A spear phishing detection engine 410 may be coupled to, or incorporated within, the email server 408. Alternatively, some or all of the functionality of the spear phishing detection engine 410 may be coupled to or incorporated within the client computing device 412. Alternatively still, the functionality of the spear phishing detection engine 410 may be distributed across both the client computing device 412 and the email server 408. According to one embodiment, the spear phishing detection engine may be configured to carry out the functionality described herein above and, in particular, with reference to FIG. 3. The databases 406, 414 may be merged into one database and/or may be co-located with the email server 408 and/or the spear phishing detection engine 410.
  • Any reference to an engine in the present specification refers, generally, to a program (or group of programs) that perform a particular function or series of functions that may be related to functions executed by other programs (e.g., the engine may perform a particular function in response to another program or may cause another program to execute its own function). Engines may be implemented in software or hardware as in the context of an appropriate hardware device such as an algorithm embedded in a processor or application-specific integrated circuit.
  • FIG. 5 illustrates a block diagram of a computing device such as the client computing device 412, email server 408 or spear phishing detection engine 410 upon and with which embodiments may be implemented. Computing device 412, 408, 410 may include a bus 501 or other communication mechanism for communicating information, and one or more processors 502 coupled with bus 501 for processing information. Computing device 412, 408, 410 may further comprise a random access memory (RAM) or other dynamic storage device 504 (referred to as main memory), coupled to bus 501 for storing information and instructions to be executed by processor(s) 502. Main memory (tangible and non-transitory) 504 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor(s) 502. Computing device 412, 408, 410 may also include a read only memory (ROM) and/or other static storage device 506 coupled to bus 501 for storing static information and instructions for processor(s) 502. A data storage device 507, such as a magnetic disk and/or solid state data storage device, may be coupled to bus 501 for storing information and instructions, such as would be required to carry out the functionality shown and disclosed relative to FIG. 3. The computing device 412, 408, 410 may also be coupled via the bus 501 to a display device 521 for displaying information to a computer user. An alphanumeric input device 522, including alphanumeric and other keys, may be coupled to bus 501 for communicating information and command selections to processor(s) 502. Another type of user input device is cursor control 523, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor(s) 502 and for controlling cursor movement on display 521. The computing device 412, 408, 410 may be coupled, via a communication device (e.g., modem, network interface card or NIC), to the network 404.
  • Embodiments of the present invention are related to the use of computing device 412, 408, 410 to detect and compute a probability that received email may be or may include a spear phishing attack. According to one embodiment, the methods and systems described herein may be provided by one or more computing devices 412, 408, 410 in response to processor(s) 502 executing sequences of instructions contained in memory 504. Such instructions may be read into memory 504 from another computer-readable medium, such as data storage device 507. Execution of the sequences of instructions contained in memory 504 causes processor(s) 502 to perform the steps and have the functionality described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the described embodiments. Thus, embodiments are not limited to any specific combination of hardware circuitry and software. Indeed, it should be understood by those skilled in the art that any suitable computer system may implement the functionality described herein. The computing devices may include one or a plurality of microprocessors working to perform the desired functions. In one embodiment, the instructions executed by the microprocessor or microprocessors are operable to cause the microprocessor(s) to perform the steps described herein. The instructions may be stored in any computer-readable medium. In one embodiment, they may be stored on a non-volatile semiconductor memory external to the microprocessor, or integrated with the microprocessor. In another embodiment, the instructions may be stored on a disk and read into a volatile semiconductor memory before execution by the microprocessor.
  • While certain example embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the embodiments disclosed herein. Thus, nothing in the foregoing description is intended to imply that any particular feature, characteristic, step, module, or block is necessary or indispensable. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the embodiments disclosed herein.

Claims (21)

1. A computer-implemented method, comprising:
receiving an electronic message from a purported known sender over a computer network;
accessing a database configured to store a plurality of known senders of electronic messages and determining whether the purported known sender of the electronic message matches one of the plurality of known senders of electronic messages in the database of known senders;
quantifying a degree of similarity of the purported known sender of the electronic message to at least one of the plurality of known senders of electronic messages stored in the database;
determining the received electronic message to be legitimate when the purported known sender is determined to match one of the plurality of known senders in the database of known senders;
flagging the received electronic message as being suspect when:
the purported known sender does not match one of the plurality of known senders in the database of known senders; and
the quantified degree of similarity of the purported known sender of the electronic message to one of the plurality of known senders of electronic messages is greater than a threshold value; and
generating at least a visual cue when the received electronic message has been flagged as being suspect, to alert a recipient thereof that the flagged electronic message is likely illegitimate.
2. The computer-implemented method of claim 1, wherein the electronic message comprises an email.
3. The computer-implemented method of claim 1, wherein quantifying comprises calculating a string metric of a difference between the purported sender and one of the plurality of known senders in the database of known senders.
4. The computer-implemented method of claim 1, wherein quantifying comprises calculating a Levenshtein distance between the purported sender and one of the plurality of known senders in the database of known senders.
5. The computer-implemented method of claim 1, further comprising prompting for a decision confirming the flagged electronic message is suspect or a decision denying that the flagged electronic message is suspect.
6. The computer-implemented method of claim 5, further comprising dropping the flagged electronic message when the prompted decision is to confirm that the flagged electronic message is suspect and delivering the flagged electronic message when the prompted decision is to deny that the flagged electronic message is suspect.
7. The computer-implemented method of claim 1, wherein accessing also accesses a database of blacklisted senders of electronic messages and dropping the received electronic message if a sender of the received electronic message matches an entry in the database of blacklisted senders of electronic messages.
8. A computing device configured to determine whether a received electronic message comprises a spear phishing attack, comprising:
at least one processor;
at least one data storage device coupled to the at least one processor;
a plurality of processes spawned by said at least one processor, the processes including processing logic for:
receiving an electronic message from a purported known sender over a computer network;
accessing a database configured to store a plurality of known senders of electronic messages and determining whether the purported known sender of the electronic message matches one of the plurality of known senders of electronic messages in the database of known senders;
quantifying a degree of similarity of the purported known sender of the electronic message to at least one of the plurality of known senders of electronic messages stored in the database;
determining the received electronic message to be legitimate when the purported known sender is determined to match one of the plurality of known senders in the database of known senders;
flagging the received electronic message as being suspect when:
the purported known sender does not match one of the plurality of known senders in the database of known senders; and
the quantified degree of similarity of the purported known sender of the electronic message to one of the plurality of known senders of electronic messages is greater than a threshold value; and
generating at least a visual cue when the received electronic message has been flagged as being suspect, to alert a recipient thereof that the flagged electronic message is likely illegitimate.
9. The computing device of claim 8, wherein the electronic message comprises an email.
10. The computing device of claim 8, wherein quantifying comprises calculating a string metric of a difference between the purported sender and one of the plurality of known senders in the database of known senders.
11. The computing device of claim 8, wherein quantifying comprises calculating a Levenshtein distance between the purported sender and one of the plurality of known senders in the database of known senders.
12. The computing device of claim 8, wherein the processes further comprise processing logic for prompting for a decision confirming the flagged electronic message is suspect or a decision denying that the flagged electronic message is suspect.
13. The computing device of claim 12, wherein the processes further comprise processing logic for dropping the flagged electronic message when the prompted decision is to confirm that the flagged electronic message is suspect and for delivering the flagged electronic message when the prompted decision is to deny that the flagged electronic message is suspect.
14. The computing device of claim 8, wherein the processes further comprise processing logic for accessing a database of blacklisted senders of electronic messages and dropping the received electronic message if a sender of the received electronic message matches an entry in the database of blacklisted senders of electronic messages.
15. A tangible, non-transitory machine-readable data storage device having data stored thereon representing sequences of instructions which, when executed by a computing device, cause the computing device to:
receive an electronic message from a purported known sender over a computer network;
access a database configured to store a plurality of known senders of electronic messages and determine whether the purported known sender of the electronic message matches one of the plurality of known senders of electronic messages in the database of known senders;
quantify a degree of similarity of the purported known sender of the electronic message to at least one of the plurality of known senders of electronic messages stored in the database;
determine the received electronic message to be legitimate when the purported known sender is determined to match one of the plurality of known senders in the database of known senders;
flag the received electronic message as being suspect when:
the purported known sender does not match one of the plurality of known senders in the database of known senders; and
the quantified degree of similarity of the purported known sender of the electronic message to one of the plurality of known senders of electronic messages is greater than a threshold value; and
generate at least a visual cue when the received electronic message has been flagged as being suspect, to alert a recipient thereof that the flagged electronic message is likely illegitimate.
16. The tangible, non-transitory machine-readable data storage device of claim 15, wherein the electronic message comprises an email.
17. The tangible, non-transitory machine-readable data storage device of claim 15, wherein quantifying comprises calculating a string metric of a difference between the purported sender and one of the plurality of known senders in the database of known senders.
18. The tangible, non-transitory machine-readable data storage device of claim 15, wherein quantifying comprises calculating a Levenshtein distance between the purported sender and one of the plurality of known senders in the database of known senders.
19. The tangible, non-transitory machine-readable data storage device of claim 15, wherein the stored sequences of instructions further comprise prompting for a decision confirming the flagged electronic message is suspect or a decision denying that the flagged electronic message is suspect.
20. The tangible, non-transitory machine-readable data storage device of claim 15, wherein the stored sequences of instructions further comprise dropping the flagged electronic message when the prompted decision is to confirm that the flagged electronic message is suspect and delivering the flagged electronic message when the prompted decision is to deny that the flagged electronic message is suspect.
21. The tangible, non-transitory machine-readable data storage device of claim 15, wherein the stored sequences of instructions further comprise accessing a database of blacklisted senders of electronic messages and dropping the received electronic message if a sender of the received electronic message matches an entry in the database of blacklisted senders of electronic messages.
US14/861,846 2014-11-17 2015-09-22 Detecting and thwarting spear phishing attacks in electronic messages Abandoned US20170085584A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/542,939 US9398047B2 (en) 2014-11-17 2014-11-17 Methods and systems for phishing detection

Publications (1)

Publication Number Publication Date
US20170085584A1 true US20170085584A1 (en) 2017-03-23

Family

ID=55962778

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/542,939 Active 2035-01-31 US9398047B2 (en) 2014-11-17 2014-11-17 Methods and systems for phishing detection
US14/861,846 Abandoned US20170085584A1 (en) 2014-11-17 2015-09-22 Detecting and thwarting spear phishing attacks in electronic messages
US15/165,503 Active US10021134B2 (en) 2014-11-17 2016-05-26 Methods and systems for phishing detection


Country Status (1)

Country Link
US (3) US9398047B2 (en)

US11019076B1 (en) 2017-04-26 2021-05-25 Agari Data, Inc. Message security assessment using sender identity profiles
US11044267B2 (en) 2016-11-30 2021-06-22 Agari Data, Inc. Using a measure of influence of sender in determining a security risk associated with an electronic message
US11102244B1 (en) 2017-06-07 2021-08-24 Agari Data, Inc. Automated intelligence gathering
US11171973B2 (en) * 2016-12-23 2021-11-09 Microsoft Technology Licensing, Llc Threat protection in documents
US11722513B2 (en) 2016-11-30 2023-08-08 Agari Data, Inc. Using a measure of influence of sender in determining a security risk associated with an electronic message
US11757914B1 (en) 2017-06-07 2023-09-12 Agari Data, Inc. Automated responsive message to determine a security risk of a message sender
US20240056477A1 (en) * 2022-08-10 2024-02-15 Capital One Services, Llc Methods and systems for detecting malicious messages
US11936604B2 (en) 2016-09-26 2024-03-19 Agari Data, Inc. Multi-level security analysis and intermediate delivery of an electronic message
US12506747B1 (en) 2019-03-29 2025-12-23 Agari Data, Inc. Message campaign and malicious threat detection

Families Citing this family (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9596265B2 (en) * 2015-05-13 2017-03-14 Google Inc. Identifying phishing communications using templates
US11675795B2 (en) * 2015-05-15 2023-06-13 Yahoo Assets Llc Method and system for ranking search content
US9781132B2 (en) * 2015-10-13 2017-10-03 Yahoo Holdings, Inc. Fraud prevention
US10893009B2 (en) * 2017-02-16 2021-01-12 eTorch Inc. Email fraud prevention
US10021126B2 (en) 2016-02-26 2018-07-10 KnowBe4, Inc. Systems and methods for creating and running heterogeneous phishing attack campaigns
US9800613B1 (en) 2016-06-28 2017-10-24 KnowBe4, Inc. Systems and methods for performing a simulated phishing attack
US10855714B2 (en) 2016-10-31 2020-12-01 KnowBe4, Inc. Systems and methods for an artificial intelligence driven agent
CN106713335B (en) * 2016-12-30 2020-10-30 山石网科通信技术股份有限公司 Malicious software identification method and device
US9749360B1 (en) 2017-01-05 2017-08-29 KnowBe4, Inc. Systems and methods for performing simulated phishing attacks using social engineering indicators
US20180307844A1 (en) 2017-04-21 2018-10-25 KnowBe4, Inc. Using smart groups for simulated phishing training and phishing campaigns
US10334015B2 (en) 2017-04-28 2019-06-25 Bank Of America Corporation Apparatus and methods for shortening user exposure to malicious websites
US10362047B2 (en) 2017-05-08 2019-07-23 KnowBe4, Inc. Systems and methods for providing user interfaces based on actions associated with untrusted emails
US11599838B2 (en) 2017-06-20 2023-03-07 KnowBe4, Inc. Systems and methods for creating and commissioning a security awareness program
US11343276B2 (en) 2017-07-13 2022-05-24 KnowBe4, Inc. Systems and methods for discovering and alerting users of potentially hazardous messages
WO2019027837A1 (en) 2017-07-31 2019-02-07 KnowBe4, Inc. Systems and methods for using attribute data for system protection and security awareness training
US11295010B2 (en) 2017-07-31 2022-04-05 KnowBe4, Inc. Systems and methods for using attribute data for system protection and security awareness training
US10601866B2 (en) 2017-08-23 2020-03-24 International Business Machines Corporation Discovering website phishing attacks
US10616274B1 (en) * 2017-11-30 2020-04-07 Facebook, Inc. Detecting cloaking of websites using model for analyzing URL redirects
US11777986B2 (en) 2017-12-01 2023-10-03 KnowBe4, Inc. Systems and methods for AIDA based exploit selection
US10581910B2 (en) 2017-12-01 2020-03-03 KnowBe4, Inc. Systems and methods for AIDA based A/B testing
US10348761B2 (en) 2017-12-01 2019-07-09 KnowBe4, Inc. Systems and methods for situational localization of AIDA
US10715549B2 (en) 2017-12-01 2020-07-14 KnowBe4, Inc. Systems and methods for AIDA based role models
US10348762B2 (en) * 2017-12-01 2019-07-09 KnowBe4, Inc. Systems and methods for serving module
US10679164B2 (en) 2017-12-01 2020-06-09 KnowBe4, Inc. Systems and methods for using artificial intelligence driven agent to automate assessment of organizational vulnerabilities
US10673895B2 (en) 2017-12-01 2020-06-02 KnowBe4, Inc. Systems and methods for AIDA based grouping
US10812527B2 (en) 2017-12-01 2020-10-20 KnowBe4, Inc. Systems and methods for aida based second chance
US10839083B2 (en) 2017-12-01 2020-11-17 KnowBe4, Inc. Systems and methods for AIDA campaign controller intelligent records
US10313387B1 (en) 2017-12-01 2019-06-04 KnowBe4, Inc. Time based triggering of dynamic templates
US10009375B1 (en) 2017-12-01 2018-06-26 KnowBe4, Inc. Systems and methods for artificial model building techniques
US10257225B1 (en) 2017-12-01 2019-04-09 KnowBe4, Inc. Systems and methods for artificial intelligence driven agent campaign controller
US10616255B1 (en) 2018-02-20 2020-04-07 Facebook, Inc. Detecting cloaking of websites using content model executing on a mobile device
US10237302B1 (en) 2018-03-20 2019-03-19 KnowBe4, Inc. System and methods for reverse vishing and point of failure remedial training
US20190319905A1 (en) * 2018-04-13 2019-10-17 Inky Technology Corporation Mail protection system
US10673876B2 (en) 2018-05-16 2020-06-02 KnowBe4, Inc. Systems and methods for determining individual and group risk scores
US10664656B2 (en) * 2018-06-20 2020-05-26 Vade Secure Inc. Methods, devices and systems for data augmentation to improve fraud detection
US11323464B2 (en) 2018-08-08 2022-05-03 Rightquestion, Llc Artifact modification and associated abuse detection
US11089053B2 (en) * 2018-09-17 2021-08-10 Servicenow, Inc. Phishing attempt search interface
US10540493B1 (en) 2018-09-19 2020-01-21 KnowBe4, Inc. System and methods for minimizing organization risk from users associated with a password breach
US10673894B2 (en) 2018-09-26 2020-06-02 KnowBe4, Inc. System and methods for spoofed domain identification and user training
US10979448B2 (en) 2018-11-02 2021-04-13 KnowBe4, Inc. Systems and methods of cybersecurity attack simulation for incident response training and awareness
US10812507B2 (en) 2018-12-15 2020-10-20 KnowBe4, Inc. System and methods for efficient combining of malware detection rules
US11108821B2 (en) 2019-05-01 2021-08-31 KnowBe4, Inc. Systems and methods for use of address fields in a simulated phishing attack
US11334771B2 (en) 2019-12-12 2022-05-17 Vade Usa, Incorporated Methods, devices and systems for combining object detection models
US11528297B1 (en) 2019-12-12 2022-12-13 Zimperium, Inc. Mobile device security application for malicious website detection based on representative image
US11863566B2 (en) * 2019-12-12 2024-01-02 Proofpoint, Inc. Dynamic message analysis platform for enhanced enterprise security
US10735436B1 (en) * 2020-02-05 2020-08-04 Cyberark Software Ltd. Dynamic display capture to verify encoded visual codes and network address information
US11184393B1 (en) 2020-10-01 2021-11-23 Vade Secure Inc. Automated collection of branded training data for security awareness training
TWI736457B (en) * 2020-10-27 2021-08-11 財團法人資訊工業策進會 Dynamic network feature processing device and dynamic network feature processing method
US12021861B2 (en) * 2021-01-04 2024-06-25 Bank Of America Corporation Identity verification through multisystem cooperation
US12192234B2 (en) 2021-07-30 2025-01-07 Bank Of America Corporation Information security system and method for phishing website classification based on image hashing
CN115051817B (en) * 2022-01-05 2023-11-24 中国互联网络信息中心 A phishing detection method and system based on multi-modal fusion features
US12541591B2 (en) 2022-04-25 2026-02-03 Palo Alto Networks, Inc. Malware detection for documents using knowledge distillation assisted learning
US12348560B2 (en) * 2022-04-25 2025-07-01 Palo Alto Networks, Inc. Detecting phishing PDFs with an image-based deep learning approach
US12519830B2 (en) * 2023-10-10 2026-01-06 Acronis International Gmbh Systems and methods for detection of phishing webpages using machine learning
US12542814B2 (en) 2023-12-15 2026-02-03 Vade Secure SASU Detection of barcode URL in an organization inbound email traffic

Family Cites Families (76)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5890171A (en) 1996-08-06 1999-03-30 Microsoft Corporation Computer system and computer-implemented method for interpreting hypertext links in a document when including the document within another document
WO2001018716A1 (en) 1999-09-10 2001-03-15 Jackson Brandenburg System and method for facilitating access by sellers to certificate-related and other services
US7562387B2 (en) 2001-09-07 2009-07-14 International Business Machines Corporation Method and apparatus for selective disabling of tracking of click stream data
US7412539B2 (en) 2002-12-18 2008-08-12 Sonicwall, Inc. Method and apparatus for resource locator identifier rewrite
US7620690B1 (en) 2003-11-20 2009-11-17 Lashback, LLC Privacy control system for electronic communication
US7640322B2 (en) 2004-02-26 2009-12-29 Truefire, Inc. Systems and methods for producing, managing, delivering, retrieving, and/or tracking permission based communications
US7487213B2 (en) 2004-09-07 2009-02-03 Iconix, Inc. Techniques for authenticating email
US7422115B2 (en) 2004-09-07 2008-09-09 Iconix, Inc. Techniques for to defeat phishing
US7413085B2 (en) 2004-09-07 2008-08-19 Iconix, Inc. Techniques for displaying emails listed in an email inbox
JP2006092116A (en) 2004-09-22 2006-04-06 Canon Inc Web server and control method thereof
US20060080735A1 (en) 2004-09-30 2006-04-13 Usa Revco, Llc Methods and systems for phishing detection and notification
US20070094500A1 (en) 2005-10-20 2007-04-26 Marvin Shannon System and Method for Investigating Phishing Web Sites
US7873707B1 (en) 2004-10-27 2011-01-18 Oracle America, Inc. Client-side URL rewriter
US20060095955A1 (en) 2004-11-01 2006-05-04 Vong Jeffrey C V Jurisdiction-wide anti-phishing network service
US8032594B2 (en) 2004-11-10 2011-10-04 Digital Envoy, Inc. Email anti-phishing inspector
US20060168066A1 (en) 2004-11-10 2006-07-27 David Helsper Email anti-phishing inspector
US20060117307A1 (en) * 2004-11-24 2006-06-01 Ramot At Tel-Aviv University Ltd. XML parser
US8291065B2 (en) 2004-12-02 2012-10-16 Microsoft Corporation Phishing detection, prevention, and notification
US7634810B2 (en) 2004-12-02 2009-12-15 Microsoft Corporation Phishing detection, prevention, and notification
US20050086161A1 (en) 2005-01-06 2005-04-21 Gallant Stephen I. Deterrence of phishing and other identity theft frauds
ES2382361T3 (en) * 2005-01-14 2012-06-07 Bae Systems Plc Network based security system
US8336092B2 (en) 2005-02-18 2012-12-18 Duaxes Corporation Communication control device and communication control system
US8079087B1 (en) * 2005-05-03 2011-12-13 Voltage Security, Inc. Universal resource locator verification service with cross-branding detection
WO2006119506A2 (en) * 2005-05-05 2006-11-09 Ironport Systems, Inc. Method of validating requests for sender reputation information
US8874658B1 (en) 2005-05-11 2014-10-28 Symantec Corporation Method and apparatus for simulating end user responses to spam email messages
US8799515B1 (en) 2005-06-27 2014-08-05 Juniper Networks, Inc. Rewriting of client-side executed scripts in the operation of an SSL VPN
US8015598B2 (en) 2007-11-16 2011-09-06 Arcot Systems, Inc. Two-factor anti-phishing authentication systems and methods
US20070136806A1 (en) 2005-12-14 2007-06-14 Aladdin Knowledge Systems Ltd. Method and system for blocking phishing scams
US20070162366A1 (en) 2005-12-30 2007-07-12 Ebay Inc. Anti-phishing communication system
US8839418B2 (en) 2006-01-18 2014-09-16 Microsoft Corporation Finding phishing sites
US8141150B1 (en) 2006-02-17 2012-03-20 At&T Intellectual Property Ii, L.P. Method and apparatus for automatic identification of phishing sites from low-level network traffic
US7668921B2 (en) 2006-05-30 2010-02-23 Xerox Corporation Method and system for phishing detection
US8095967B2 (en) 2006-07-27 2012-01-10 White Sky, Inc. Secure web site authentication using web site characteristics, secure user credentials and private browser
US7831707B2 (en) 2006-08-02 2010-11-09 Scenera Technologies, Llc Methods, systems, and computer program products for managing electronic subscriptions
US8220047B1 (en) 2006-08-09 2012-07-10 Google Inc. Anti-phishing system and method
US8209381B2 (en) 2007-01-19 2012-06-26 Yahoo! Inc. Dynamic combatting of SPAM and phishing attacks
US20080244715A1 (en) 2007-03-27 2008-10-02 Tim Pedone Method and apparatus for detecting and reporting phishing attempts
NZ583300A (en) 2007-08-06 2012-09-28 Stephane Moreau System for authentication of server and communications and protection against phishing
US8122251B2 (en) 2007-09-19 2012-02-21 Alcatel Lucent Method and apparatus for preventing phishing attacks
US20090089859A1 (en) 2007-09-28 2009-04-02 Cook Debra L Method and apparatus for detecting phishing attempts solicited by electronic mail
US7958555B1 (en) 2007-09-28 2011-06-07 Trend Micro Incorporated Protecting computer users from online frauds
US8646067B2 (en) 2008-01-26 2014-02-04 Citrix Systems, Inc. Policy driven fine grain URL encoding mechanism for SSL VPN clientless access
WO2009094654A1 (en) 2008-01-26 2009-07-30 Citrix Systems, Inc. Systems and methods for configuration and fine grain policy driven web content detection and rewrite
US20090216795A1 (en) 2008-02-21 2009-08-27 Ram Cohen System and method for detecting and blocking phishing attacks
US8601586B1 (en) 2008-03-24 2013-12-03 Google Inc. Method and system for detecting web application vulnerabilities
WO2009131469A1 (en) * 2008-04-21 2009-10-29 Sentrybay Limited Fraudulent page detection
US8307431B2 (en) 2008-05-30 2012-11-06 At&T Intellectual Property I, L.P. Method and apparatus for identifying phishing websites in network traffic using generated regular expressions
US20090328208A1 (en) 2008-06-30 2009-12-31 International Business Machines Method and apparatus for preventing phishing attacks
US20100042687A1 (en) 2008-08-12 2010-02-18 Yahoo! Inc. System and method for combating phishing
US8701185B2 (en) 2008-10-14 2014-04-15 At&T Intellectual Property I, L.P. Method for locating fraudulent replicas of web sites
US8073829B2 (en) 2008-11-24 2011-12-06 Microsoft Corporation HTTP cache with URL rewriting
US8495735B1 (en) * 2008-12-30 2013-07-23 Uab Research Foundation System and method for conducting a non-exact matching analysis on a phishing website
US8468597B1 (en) * 2008-12-30 2013-06-18 Uab Research Foundation System and method for identifying a phishing website
US8381292B1 (en) * 2008-12-30 2013-02-19 The Uab Research Foundation System and method for branding a phishing website using advanced pattern matching
US8448245B2 (en) 2009-01-17 2013-05-21 Stopthehacker.com, Jaal LLC Automated identification of phishing, phony and malicious web sites
CN101504673B (en) * 2009-03-24 2011-09-07 阿里巴巴集团控股有限公司 Method and system for recognizing doubtful fake website
US8621614B2 (en) 2009-05-26 2013-12-31 Microsoft Corporation Managing potentially phishing messages in a non-web mail client context
US8438642B2 (en) 2009-06-05 2013-05-07 At&T Intellectual Property I, L.P. Method of detecting potential phishing by analyzing universal resource locators
EP2282433A1 (en) 2009-08-04 2011-02-09 Deutsches Zentrum für Luft- und Raumfahrt e.V. Method for recovery of lost and/ or corrupted data
US20110035317A1 (en) 2009-08-07 2011-02-10 Mark Carlson Seedless anti phishing authentication using transaction history
US8429101B2 (en) * 2010-12-07 2013-04-23 Mitsubishi Electric Research Laboratories, Inc. Method for selecting features used in continuous-valued regression analysis
US8521667B2 (en) * 2010-12-15 2013-08-27 Microsoft Corporation Detection and categorization of malicious URLs
CN102082792A (en) * 2010-12-31 2011-06-01 成都市华为赛门铁克科技有限公司 Phishing webpage detection method and device
US8838973B1 (en) 2011-02-28 2014-09-16 Google Inc. User authentication method
US9083733B2 (en) 2011-08-01 2015-07-14 Visicom Media Inc. Anti-phishing domain advisor and method thereof
TWI459232B (en) 2011-12-02 2014-11-01 Inst Information Industry Phishing site processing method, system and computer readable storage medium storing the method
US8935342B2 (en) 2012-03-09 2015-01-13 Henal Patel Method for detecting and unsubscribing an address from a series of subscriptions
US20150200962A1 (en) * 2012-06-04 2015-07-16 The Board Of Regents Of The University Of Texas System Method and system for resilient and adaptive detection of malicious websites
CN103546446B (en) 2012-07-17 2015-03-25 腾讯科技(深圳)有限公司 Phishing website detection method, device and terminal
EP2877956B1 (en) * 2012-07-24 2019-07-17 Webroot Inc. System and method to provide automatic classification of phishing sites
US9027126B2 (en) 2012-08-01 2015-05-05 Bank Of America Corporation Method and apparatus for baiting phishing websites
CN103685174B (en) 2012-09-07 2016-12-21 中国科学院计算机网络信息中心 A kind of detection method for phishing site of independent of sample
US20140082521A1 (en) 2012-09-20 2014-03-20 Handle, Inc. Email and task management services and user interface
US8566938B1 (en) 2012-11-05 2013-10-22 Astra Identity, Inc. System and method for electronic message analysis for phishing detection
US8839369B1 (en) 2012-11-09 2014-09-16 Trend Micro Incorporated Methods and systems for detecting email phishing attacks
US9178901B2 (en) * 2013-03-26 2015-11-03 Microsoft Technology Licensing, Llc Malicious uniform resource locator detection

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10243900B2 (en) * 2013-08-20 2019-03-26 Longsand Limited Using private tokens in electronic messages associated with a subscription-based messaging service
US10694029B1 (en) 2013-11-07 2020-06-23 Rightquestion, Llc Validating automatic number identification data
US12238243B2 (en) 2013-11-07 2025-02-25 Rightquestion, Llc Validating automatic number identification data
US11856132B2 (en) 2013-11-07 2023-12-26 Rightquestion, Llc Validating automatic number identification data
US10674009B1 (en) 2013-11-07 2020-06-02 Rightquestion, Llc Validating automatic number identification data
US11005989B1 (en) 2013-11-07 2021-05-11 Rightquestion, Llc Validating automatic number identification data
US9781149B1 (en) 2016-08-17 2017-10-03 Wombat Security Technologies, Inc. Method and system for reducing reporting of non-malicious electronic messages in a cybersecurity system
US9912687B1 (en) 2016-08-17 2018-03-06 Wombat Security Technologies, Inc. Advanced processing of electronic messages with attachments in a cybersecurity system
US10027701B1 (en) 2016-08-17 2018-07-17 Wombat Security Technologies, Inc. Method and system for reducing reporting of non-malicious electronic messages in a cybersecurity system
US10063584B1 (en) 2016-08-17 2018-08-28 Wombat Security Technologies, Inc. Advanced processing of electronic messages with attachments in a cybersecurity system
US9774626B1 (en) * 2016-08-17 2017-09-26 Wombat Security Technologies, Inc. Method and system for assessing and classifying reported potentially malicious messages in a cybersecurity system
US10992645B2 (en) 2016-09-26 2021-04-27 Agari Data, Inc. Mitigating communication risk by detecting similarity to a trusted message contact
US10805270B2 (en) 2016-09-26 2020-10-13 Agari Data, Inc. Mitigating communication risk by verifying a sender of a message
EP3516821A4 (en) * 2016-09-26 2020-07-22 Agari Data, Inc REDUCING COMMUNICATION RISK BY DETECTING SIMILARITY WITH A TRUSTED NEWS CONTACT
US12316591B2 (en) 2016-09-26 2025-05-27 Agari Data, Inc. Multi-level security analysis and intermediate delivery of an electronic message
US20230208813A1 (en) * 2016-09-26 2023-06-29 Agari Data, Inc. Mitigating communication risk by detecting similarity to a trusted message contact
US12074850B2 (en) 2016-09-26 2024-08-27 Agari Data, Inc. Mitigating communication risk by verifying a sender of a message
US11936604B2 (en) 2016-09-26 2024-03-19 Agari Data, Inc. Multi-level security analysis and intermediate delivery of an electronic message
US10880322B1 (en) * 2016-09-26 2020-12-29 Agari Data, Inc. Automated tracking of interaction with a resource of a message
US11595354B2 (en) 2016-09-26 2023-02-28 Agari Data, Inc. Mitigating communication risk by detecting similarity to a trusted message contact
US10715543B2 (en) 2016-11-30 2020-07-14 Agari Data, Inc. Detecting computer security risk based on previously observed communications
US11044267B2 (en) 2016-11-30 2021-06-22 Agari Data, Inc. Using a measure of influence of sender in determining a security risk associated with an electronic message
US11722513B2 (en) 2016-11-30 2023-08-08 Agari Data, Inc. Using a measure of influence of sender in determining a security risk associated with an electronic message
US11785027B2 (en) 2016-12-23 2023-10-10 Microsoft Technology Licensing, Llc Threat protection in documents
US11171973B2 (en) * 2016-12-23 2021-11-09 Microsoft Technology Licensing, Llc Threat protection in documents
US11722497B2 (en) 2017-04-26 2023-08-08 Agari Data, Inc. Message security assessment using sender identity profiles
US11019076B1 (en) 2017-04-26 2021-05-25 Agari Data, Inc. Message security assessment using sender identity profiles
US12184662B2 (en) 2017-04-26 2024-12-31 Agari Data, Inc. Message security assessment using sender identity profiles
US10805314B2 (en) 2017-05-19 2020-10-13 Agari Data, Inc. Using message context to evaluate security of requested data
US10243904B1 (en) 2017-05-26 2019-03-26 Wombat Security Technologies, Inc. Determining authenticity of reported user action in cybersecurity risk assessment
US10778626B2 (en) 2017-05-26 2020-09-15 Proofpoint, Inc. Determining authenticity of reported user action in cybersecurity risk assessment
US12081503B2 (en) 2017-05-26 2024-09-03 Proofpoint, Inc. Determining authenticity of reported user action in cybersecurity risk assessment
US11102244B1 (en) 2017-06-07 2021-08-24 Agari Data, Inc. Automated intelligence gathering
US11757914B1 (en) 2017-06-07 2023-09-12 Agari Data, Inc. Automated responsive message to determine a security risk of a message sender
US10778689B2 (en) 2018-09-06 2020-09-15 International Business Machines Corporation Suspicious activity detection in computer networks
CN109450929A (en) * 2018-12-13 2019-03-08 成都亚信网络安全产业技术研究院有限公司 A kind of safety detection method and device
US10686826B1 (en) 2019-03-28 2020-06-16 Vade Secure Inc. Optical scanning parameters computation methods, devices and systems for malicious URL detection
WO2020197570A1 (en) * 2019-03-28 2020-10-01 Vade Secure, Inc. Optimal scanning parameters computation methods, devices and systems for malicious url detection
US12506747B1 (en) 2019-03-29 2025-12-23 Agari Data, Inc. Message campaign and malicious threat detection
WO2020205071A1 (en) 2019-04-05 2020-10-08 Stellarite, Inc. Defanging malicious electronic files based on trusted user reporting
US11856007B2 (en) 2019-04-05 2023-12-26 Material Security Inc. Defanging malicious electronic files based on trusted user reporting
EP3948609A4 (en) * 2019-04-05 2022-12-07 Material Security Inc. REMOVAL OF OFFENSIVE NATURE OF MALICIOUS E-FILES BASED ON A TRUSTED USER REPORT
US12537833B2 (en) 2019-04-05 2026-01-27 Material Security Inc. Defanging malicious electronic files based on trusted user reporting
CN111224953A (en) * 2019-12-25 2020-06-02 哈尔滨安天科技集团股份有限公司 Method, device and storage medium for discovering threat organization attack based on abnormal point
CN111614543A (en) * 2020-04-10 2020-09-01 中国科学院信息工程研究所 A URL-based spear phishing email detection method and system
US20240056477A1 (en) * 2022-08-10 2024-02-15 Capital One Services, Llc Methods and systems for detecting malicious messages
US12438910B2 (en) * 2022-08-10 2025-10-07 Capital One Services, Llc Methods and systems for detecting malicious messages

Also Published As

Publication number Publication date
US9398047B2 (en) 2016-07-19
US10021134B2 (en) 2018-07-10
US20160352777A1 (en) 2016-12-01
US20160142439A1 (en) 2016-05-19

Similar Documents

Publication Publication Date Title
US20170085584A1 (en) Detecting and thwarting spear phishing attacks in electronic messages
US12316591B2 (en) Multi-level security analysis and intermediate delivery of an electronic message
US12184662B2 (en) Message security assessment using sender identity profiles
US11595354B2 (en) Mitigating communication risk by detecting similarity to a trusted message contact
US12261883B2 (en) Detecting phishing attempts
US11044267B2 (en) Using a measure of influence of sender in determining a security risk associated with an electronic message
US10715543B2 (en) Detecting computer security risk based on previously observed communications
US10425444B2 (en) Social engineering attack prevention
US11019079B2 (en) Detection of email spoofing and spear phishing attacks
US20190319905A1 (en) Mail protection system
US11665195B2 (en) System and method for email account takeover detection and remediation utilizing anonymized datasets
US20170257395A1 (en) Methods and devices to thwart email display name impersonation
GB2550657A (en) A method of protecting a user from messages with links to malicious websites
WO2018081016A1 (en) Multi-level security analysis and intermediate delivery of an electronic message
Juneja et al. A survey on email spam types and spam filtering techniques
US12335254B2 (en) Malicious universal resource locator and file detector and response action engine
US20230074369A1 (en) Electronic mail connectedness indicator

Legal Events

Date Code Title Description
AS Assignment

Owner name: VADE RETRO TECHNOLOGY INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GOUTAL, SEBASTIEN;REEL/FRAME:036626/0414

Effective date: 20150922

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: VADE SECURE, INCORPORATED, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:VADE RETRO TECHNOLOGY, INCORPORATED;REEL/FRAME:047196/0317

Effective date: 20161222

AS Assignment

Owner name: TIKEHAU ACE CAPITAL, FRANCE

Free format text: SECURITY INTEREST;ASSIGNOR:VADE USA INCORPORATED;REEL/FRAME:059610/0419

Effective date: 20220311

AS Assignment

Owner name: VADE USA INCORPORATED, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS RECORDED AT REEL 059510, FRAME 0419;ASSIGNOR:TIKEHAU ACE CAPITAL;REEL/FRAME:066647/0152

Effective date: 20240222