US20210273951A1 - Risk assessment for network access control through data analytics - Google Patents

Risk assessment for network access control through data analytics

Info

Publication number
US20210273951A1
US20210273951A1
Authority
US
United States
Prior art keywords
event
identity
login
risk
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US17/242,707
Other versions
US12047392B2 (en)
Inventor
Yanlin Wang
Weizhi Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cyberark Software Ltd
Original Assignee
Cyberark Software Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cyberark Software Ltd filed Critical Cyberark Software Ltd
Priority to US17/242,707
Publication of US20210273951A1
Assigned to CYBERARK SOFTWARE, INC.: MERGER (SEE DOCUMENT FOR DETAILS). Assignors: IDAPTIVE, LLC
Assigned to GOLUB CAPITAL LLC, AS AGENT: INTELLECTUAL PROPERTY SECURITY AGREEMENT. Assignors: CENTRIFY CORPORATION
Assigned to CYBERARK SOFTWARE LTD.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CYBERARK SOFTWARE, INC.
Assigned to IDAPTIVE, LLC: CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: APPS & ENDPOINT COMPANY, LLC
Assigned to CENTRIFY CORPORATION: RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: GOLUB CAPITAL LLC
Assigned to APPS & ENDPOINT COMPANY, LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CENTRIFY CORPORATION
Assigned to IDAPTIVE, LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CENTRIFY CORPORATION
Assigned to CENTRIFY CORPORATION: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LI, WEIZHI, WANG, YANLIN
Priority to US18/749,324
Publication of US12047392B2
Application granted
Legal status: Active (current)
Adjusted expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/08: Network architectures or network communication protocols for network security for authentication of entities
    • H04L 63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L 63/1408: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L 63/1416: Event detection, e.g. attack signature detection
    • H04L 63/1425: Traffic logging, e.g. anomaly detection
    • H04L 63/1433: Vulnerability analysis
    • H04L 63/20: Network architectures or network communication protocols for network security for managing network security; network security policies in general
    • H04L 63/205: Network architectures or network communication protocols for network security for managing network security; network security policies in general, involving negotiation or determination of the one or more network security mechanisms to be used, e.g. by negotiation between the client and the server or between peers or by selection according to the capabilities of the entities involved
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning


Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Methods and systems of risk assessment for network access control through data analytics. An embodiment of the invention employs well-known machine-learning clustering methods to learn normal entity behavior by looking for patterns in the events that stream in continuously. In an embodiment of the invention, normal entity behaviors are represented as clusters of event vectors. An embodiment of the invention evaluates the risk level for a new event of an entity by comparing the event with the entity's profile, represented as clusters of event vectors. In an embodiment of the invention, the risk level is associated with a confidence level, which indicates how well the system knows the entity. Embodiments of the invention do not need human administration in the process of building an entity profile and assessing the risk level of events associated with an entity.

Description

    FIELD OF THE DISCLOSURE
  • This disclosure relates generally to Internet security and, more particularly, to methods and systems of risk assessment for network access control through data analytics.
  • BACKGROUND
  • Authentication and authorization are security means to protect a computer network from unauthorized access to its resources, such as computer servers, software applications, and services. Authentication verifies the identity of an entity (person, user, process, or device) that wants to access a computer network's resources. In the rest of this disclosure, the terms entity, person, process, user, and device are used interchangeably. Common authentication methods include username/password combinations, fingerprint readers, retinal scans, etc. Authorization, on the other hand, determines what privileges an authenticated entity has during the entity's session, from log-on until log-off. The privileges assigned to an entity define the entity's access rights to network resources. For example, an entity may be able to read documents but not be allowed to edit them.
  • Multifactor authentication (MFA) enhances identity authentication and increases security by requiring two or more different authentication methods, such as a username/password combination followed by an SMS request to the user's cell phone to confirm the user's identity. However, MFA increases authentication security at the cost of a more complex network login process: a user has to perform multiple authentications, sometimes on different devices, to get authenticated.
  • As a result, adaptive MFA has been developed to ease the use of MFA. A network with adaptive MFA can change its authentication requirements depending on conditions detected at log-in. Adaptive MFA is rule-based, though, which limits its effectiveness because those rules are static. In addition, adaptive MFA acts only on the conditions at the time of a user's login, without considering the user's past network access and usage history. Therefore, adaptive MFA cannot determine whether the user's current login activity is normal or abnormal.
  • SUMMARY OF THE INVENTION
  • Embodiments of the invention build an entity profile by collecting and analyzing the entity's events in real time using well-known machine-learning methods. Each event of an entity that is collected and analyzed by an embodiment of the invention includes event attributes such as entity ID, login location, login date, login time, device used at login, IP address used at login, application launched after login, and so on.
  • An embodiment of the invention employs well-known machine-learning clustering methods to learn normal entity behavior by looking for patterns in the events that stream in continuously. In an embodiment of the invention, normal entity behaviors are represented as clusters of event vectors. An embodiment of the invention evaluates the risk level for a new event of an entity by comparing the event with the entity's profile, represented as clusters of event vectors. In an embodiment of the invention, the risk level is associated with a confidence level, which indicates how well the system knows the entity. This confidence level is initially low and increases over time as more events of the entity are collected and analyzed.
  • Embodiments of the invention do not need human administration in the process of building an entity profile and assessing the risk level of events associated with an entity. An entity's profile, in the form of clusters of event vectors, evolves autonomously while events are continuously received and clustered by an embodiment of the invention. In an embodiment of the invention, the rules for triggering risk assessment of an event associated with an entity are automatically updated. The update is based on the events that result from the risk assessment of prior events associated with the entity. Therefore, embodiments of the invention are much easier to operate than prior art, which requires human administration.
  • BRIEF DESCRIPTION OF DRAWINGS
  • Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. Note that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean “at least one.”
  • FIG. 1 is a block diagram that shows the components of an embodiment of the invention as they exist in a web portal within a computer network, or other computing environment that requires authentication and authorization to use the environment's resources. The diagram shows possible event flows through the invention with thick arrows, and shows possible communication among components via API calls with thin arrows.
  • FIG. 2 shows an event in the form of a three-tuple vector in a three-dimensional entity profile vector space, where the X axis is the event's login time of day, the Y axis is the event's login city, and the Z axis is the event's login device type.
  • FIG. 3 is a diagram of an entity profile in the form of an event cluster, an anomaly event in the form of an event vector, and a normal event in the form of an event vector that is the first event vector of a new event cluster.
  • FIG. 4 is a diagram of an entity profile in the form of two event clusters, and an anomaly event in the form of an event vector.
  • FIG. 5 is a diagram of an entity profile in the form of two event clusters; one is a cluster with long-term memory while the other is a cluster with short-term memory. Clusters with short-term memory decay more quickly than clusters with long-term memory.
  • DETAILED DESCRIPTION OF THE INVENTION
  • FIG. 1 shows the components of an embodiment of the invention as they exist in a web portal within a computer network, or other computing environment that requires authentication and authorization to use the environment's resources.
  • An event reporting agent 1-2 within the environment detects entity behavior and reports it to an embodiment of the invention as events, each with a set of attributes. Events can include:
  • Login events, which can include parameters such as the IP address of the device used, the type of device used, physical location, number of login attempts, date and time, and more.
  • Application access events, which can specify what application is used, application type, date and time of use, and more.
  • Privileged resource events such as launching a Secure Shell (SSH) session or a Remote Desktop Protocol (RDP) session as an administrator.
  • Mobile device management events such as enrolling or un-enrolling a mobile device with an identity management service.
  • CLI command-use events such as UNIX commands or MS-DOS commands, which can specify the commands used, date and time of use, and more.
  • Authorization escalation events, such as logging in as a super-user in a UNIX environment, which can specify login parameters listed above.
  • Risk feedback events, which report an embodiment of the invention's evaluations of the entity. For example, when the access control service 1-3 requests a risk evaluation from an embodiment of the invention at entity login, the action generates an event that contains the resulting evaluation and any resulting action based on the evaluation.
  • An access control service 1-3 authenticates entities and can change authentication factor requirements at login and at other authentication events.
  • A directory service 1-4 such as Active Directory defines authentication requirements and authorization for each entity.
  • An admin web browser 1-5 that an administrator can use to control an embodiment of the invention.
  • An event ingestion service 1-10 accepts event data from the event reporting agent, filters out events that are malformed or irrelevant, extracts necessary attributes from event data, and converts event data into values that a risk assessment engine 1-11 can use.
  • The risk assessment engine 1-11 accepts entity events from the event ingestion service 1-10 and uses them to build an entity profile for each entity. Whenever requested, the risk assessment engine 1-11 can compare an event or attempted event to the entity's profile to determine a threat level for the event.
  • A streaming threat remediation engine 1-9 accepts a steady stream of events from the risk assessment engine 1-11. The streaming threat remediation engine 1-9 stores a rule queue. Each rule in the queue tests an incoming event and may take action if the rule detects certain conditions in the event. A rule may, for example, check the event type, and contact the risk assessment engine to determine risk for the event.
  • A risk assessment service 1-8 is a front end for the risk assessment engine 1-11. The service 1-8 allows components outside the invention's core to make authenticated connections and then request service from the risk assessment engine 1-11. The requested service is typically something such as assessing risk for a provided event or for an attempted event such as a login.
  • An on-demand threat remediation engine 1-7 is very similar to the streaming threat remediation engine 1-9. It contains a rule queue. The rules here, though, test attempted events such as log-in requests or authorization changes that may require threat assessment before the requests are granted and the event takes place. An outside component such as the access control service 1-3 may contact the on-demand threat remediation engine 1-7 with an attempted event. The on-demand threat remediation engine 1-7 can request risk assessment from an embodiment of the invention through the risk assessment service 1-8.
  • In an embodiment, a user attempts to log into an application at a web portal. The Event Reporting Agent 1-2 captures the user activity and records it as an event consisting of event attributes such as user login time, location latitude, location longitude, etc. The Event Reporting Agent 1-2 forwards the event to the Event Ingestion Service 1-10. The Event Ingestion Service 1-10 filters out some of the event attributes before converting the remaining event attributes to numeric values, so that each event is represented as an n-tuple vector, where n is the number of event attributes. In other words, each event attribute is encoded as a single value. In an embodiment, an event attribute may instead be encoded as a multi-dimensional vector. The Event Ingestion Service 1-10 then forwards the formatted event vector to the Risk Assessment Engine 1-11.
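  • The following is a minimal sketch of such an encoding step. It assumes hypothetical attribute names and lookup tables (none of which are prescribed by this disclosure) and produces the 3-tuple of FIG. 2: login time of day, login city, and login device type.

```python
# Hypothetical event-attribute encoding (attribute names and lookup tables are
# illustrative assumptions, not part of the disclosure).
from datetime import datetime

CITY_CODES = {"city A": 0.0, "city B": 1.0}        # assumed categorical lookup
DEVICE_CODES = {"iPhone": 0.0, "Android": 1.0}     # assumed categorical lookup

def encode_event(raw_event: dict) -> list[float]:
    """Convert a raw event into an n-tuple numeric vector.

    Here each attribute becomes a single value; an attribute could instead be
    expanded into a multi-dimensional sub-vector (e.g., one-hot encoded).
    """
    login_dt = datetime.fromisoformat(raw_event["login_time"])
    minutes_of_day = login_dt.hour * 60 + login_dt.minute      # time of day
    city = CITY_CODES.get(raw_event["login_city"], -1.0)       # unseen value -> -1
    device = DEVICE_CODES.get(raw_event["device_type"], -1.0)
    return [float(minutes_of_day), city, device]

# Example: the (time of day, city, device type) 3-tuple of FIG. 2
vector = encode_event({"login_time": "2021-04-27T08:59:00",
                       "login_city": "city A",
                       "device_type": "iPhone"})   # -> [539.0, 0.0, 0.0]
```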
  • The Risk Assessment Engine 1-11 uses well-known machine learning clustering algorithms, e.g., K-Means, to determine in real time if the event is part of any existing event cluster or user profile cluster. The user profile cluster is updated by adding the event vector to the cluster determined by the well-known clustering algorithm. The Risk Assessment Engine 1-11 then forwards the event to the Streaming Threat Remediation Engine 1-9.
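  • The disclosure cites K-Means as one example of a well-known clustering algorithm. The sketch below is a simplified online nearest-centroid variant (an assumption, not the prescribed algorithm) that illustrates how an incoming event vector is either merged into the nearest cluster, updating its running-mean center, or flagged as an anomaly.

```python
import math

class Cluster:
    """Event cluster with a running-mean center (simplified illustration)."""
    def __init__(self, first_vector):
        self.center = list(first_vector)
        self.count = 1

    def distance(self, vector):
        return math.dist(self.center, vector)      # Euclidean distance

    def add(self, vector):
        # Incrementally shift the center toward the new event vector.
        self.count += 1
        self.center = [c + (x - c) / self.count
                       for c, x in zip(self.center, vector)]

def assign_event(clusters, vector, max_distance):
    """Return (nearest_cluster, is_anomaly).

    Anomalous vectors are not merged immediately; as described later, such a
    vector may seed a new cluster if confirmed (e.g., by additional authentication).
    """
    if not clusters:
        return None, True
    nearest = min(clusters, key=lambda c: c.distance(vector))
    if nearest.distance(vector) <= max_distance:
        nearest.add(vector)
        return nearest, False
    return nearest, True
```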
  • In an embodiment, the Risk Assessment Engine 1-11 applies configurable machine learning rules to run the well-known machine learning clustering algorithms, e.g., K-Means, to determine if the event is part of any existing event cluster or user profile cluster. In an embodiment, machine learning rules guide the machine learning process within the Risk Assessment Engine 1-11, e.g., how to select the dimensions of an event vector to be fed into the well-known machine learning algorithm, whether and how to transform the selected dimensions based on event type, how to set the weight of each selected dimension in an event vector, which machine learning algorithm to run, etc.
  • In an embodiment, machine learning rules can be inherited and overwritten. The Risk Assessment Engine 1-11 has default system-level machine learning rules, which can be inherited by tenant companies and individual users. On the other hand, different tenant companies can customize their own company-level machine learning rules, which overwrite the default system-level machine learning rules. Similarly, different users can have different individual machine learning rules, which override company-level machine learning rules.
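  • A minimal sketch of this inheritance chain is shown below, assuming rules are represented as key/value settings (the rule names are hypothetical): user-level rules override company-level rules, which override the system defaults.

```python
# Hypothetical rule names; the merge order implements system -> company -> user.
SYSTEM_RULES = {"algorithm": "kmeans", "time_weight": 1.0, "city_weight": 1.0}

def effective_rules(system: dict, company: dict | None, user: dict | None) -> dict:
    rules = dict(system)          # start from system-level defaults
    rules.update(company or {})   # company-level rules overwrite system defaults
    rules.update(user or {})      # individual rules override company rules
    return rules

rules = effective_rules(SYSTEM_RULES,
                        company={"time_weight": 2.0},
                        user={"algorithm": "dbscan"})
# -> {'algorithm': 'dbscan', 'time_weight': 2.0, 'city_weight': 1.0}
```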
  • The risk assessment engine 1-11 may use the risk and confidence scores to assign one of five fraud risk levels to the assessed event (see the sketch after this list):
  • Unknown: there are not enough events in the entity profile over a long enough period of time to successfully determine fraud risk.
  • Normal: the event looks legitimate.
  • Low Risk: some aspects of the event are abnormal, but not many.
  • Medium Risk: some important aspects of the event are abnormal while some are not.
  • High Risk: many key aspects of the event are abnormal.
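  • A minimal sketch of such a mapping is shown below. The numeric thresholds are illustrative assumptions; the disclosure does not prescribe specific cutoff values.

```python
def fraud_risk_level(risk_score: float, confidence_score: float,
                     min_confidence: float = 30.0) -> str:
    """Map (risk score, confidence score) to one of the five fraud risk levels.

    Thresholds are assumed for illustration only.
    """
    if confidence_score < min_confidence:
        return "Unknown"          # profile not mature enough to judge
    if risk_score < 25:
        return "Normal"
    if risk_score < 50:
        return "Low Risk"
    if risk_score < 75:
        return "Medium Risk"
    return "High Risk"
```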
  • In an embodiment, the Risk Assessment Engine 1-11 computes a risk score of the event based on the vector distance between the event vector and the cluster center vector in an n-dimension vector space, where n is the number of event attributes. In other words, each event attribute is encoded as a single value. In an embodiment, an event attribute may be encoded as a multi-dimension vector. Risk Score indicates how distinct the requested identity activity in the form of an event is from the user's normal behavior in the form of the user profile cluster. In an embodiment, the range of Risk Score is (0, 100], where 100 denotes the highest risk score, and 0 denotes the lowest risk score.
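  • As an illustration, a distance-based score of this kind could be computed as in the sketch below. The per-dimension weights and the normalization scale are assumptions; the disclosure only specifies that the score is derived from the distance between the event vector and the cluster center.

```python
import math

def risk_score(event_vector, cluster_center, weights, scale) -> float:
    """Weighted Euclidean distance mapped onto a 0-100 score scale."""
    d = math.sqrt(sum(w * (e - c) ** 2
                      for e, c, w in zip(event_vector, cluster_center, weights)))
    return min(100.0, 100.0 * d / scale)

# Time of day in minutes, city code, device code (weights and scale are assumed).
score = risk_score([539.0, 0.0, 0.0], [510.0, 0.0, 0.0],
                   weights=[1.0, 50.0, 50.0], scale=60.0)   # ~48.3
```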
  • In an embodiment, the Risk Assessment Engine 1-11 applies configurable risk assessment rules to compute risk scores. In an embodiment, risk assessment rules can be inherited and overwritten. The Risk Assessment Engine 1-11 has default system-level risk assessment rules, which can be inherited by tenant companies and individual users. On the other hand, different tenant companies can configure their own company-level risk assessment rules, which overwrite the default system-level risk assessment rules. Similarly, different users can be configured with different individual risk assessment rules, which override company-level risk assessment rules.
  • Associated with a risk score, the Risk Assessment Engine 1-11 also computes a confidence score. Confidence Score indicates how well the system knows about the user. This score is initially low and increases over time as the Risk Assessment Engine 1-11 receives and learns more event data of the user.
  • In an embodiment, the Confidence Score is calculated by a customized sigmoid function based on the number of data points and the period of time (e.g., in days) learned by the Risk Assessment Engine 1-11. In an embodiment, the range of Confidence Score is (0, 100], where 100 denotes the highest confidence score, and 0 denotes the lowest confidence score.
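  • One possible form of such a function is sketched below. The midpoints and steepness are assumed parameters; the disclosure only states that confidence grows with the number of data points and with the length of the learning period.

```python
import math

def confidence_score(num_events: int, days_observed: int,
                     events_mid: float = 50.0, days_mid: float = 14.0,
                     steepness: float = 5.0) -> float:
    """Customized sigmoid over the limiting factor (events seen or elapsed days)."""
    x = min(num_events / events_mid, days_observed / days_mid)
    return 100.0 / (1.0 + math.exp(-steepness * (x - 1.0)))

confidence_score(5, 2)      # early in training: close to 0
confidence_score(200, 60)   # mature profile: close to 100
```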
  • Before the Risk Assessment Engine 1-11 is able to compute a risk score with a certain confidence for an event related to a user, a training period is needed, during which the Risk Assessment Engine 1-11 collects events and constructs the user profile, i.e., the event clusters, based on the events received during this period.
  • The Risk Assessment Engine 1-11 runs pre-configured rules against the event and determines if the event requires any risk assessment. The rules are a set of conditions, e.g., condition 1: the user tries to log into a critical Human Resources application that can view all employees' personal information; condition 2: the user's device type has changed since the last successful login; etc.
  • In an embodiment, the Streaming Threat Remediation Engine 1-9 determines the risk level of a network access event based on the received risk score and confidence score as well as the current risk thresholds and confidence thresholds. The event vector and the determined risk level information are stored together as a user profile record in a model repository by the Streaming Threat Remediation Engine 1-9. In an embodiment, the user profile record stored in the model repository is used by the system to trigger alerts based on event risk levels. In an embodiment, if the event is assessed at a high fraud risk level, an alert email or SMS text message is automatically generated to notify the user. In case of network fraud, the user can take actions such as contacting customer service to shut down the unauthorized network access. In an embodiment, if the event is assessed at a high fraud risk level, system administrators receive an alert message and take actions such as contacting the user for network access verification.
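  • A minimal sketch of this decision flow is shown below, using an in-memory list as the model repository and assumed threshold values and function names.

```python
from dataclasses import dataclass

@dataclass
class Thresholds:                 # assumed representation of current thresholds
    risk: float = 75.0
    confidence: float = 30.0

@dataclass
class ProfileRecord:              # user profile record stored in the repository
    event_vector: list
    risk_level: str

def handle_streaming_event(event_vector, risk, confidence,
                           thresholds: Thresholds, repository: list, notify):
    """Determine the risk level, persist the record, and alert on high risk."""
    if confidence < thresholds.confidence:
        level = "Unknown"
    elif risk >= thresholds.risk:
        level = "High Risk"
        notify(f"High-risk network access event detected (risk={risk:.0f})")
    else:
        level = "Normal"
    repository.append(ProfileRecord(event_vector, level))
    return level

repository: list = []
handle_streaming_event([539.0, 0.0, 0.0], risk=82.0, confidence=65.0,
                       thresholds=Thresholds(), repository=repository,
                       notify=print)   # prints an alert, returns "High Risk"
```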
  • In an embodiment, the Streaming Threat Remediation Engine 1-9 applies configurable risk assessment rules to compute risk level. In an embodiment, risk assessment rules can be inherited and overwritten. The Streaming Threat Remediation Engine 1-9 has default system-level risk assessment rules, which can be inherited by tenant companies and individual users. On the other hand, different tenant companies can configure their own company-level risk assessment rules, which overwrite the default system-level risk assessment rules. Similarly, different users can configure different individual risk assessment rules, which override company-level risk assessment rules.
  • In an embodiment, the on-demand threat remediation engine 1-7 adjusts risk thresholds and confidence thresholds based on risk feedback events, which result from the risk level assessment of prior events. The on-demand threat remediation engine 1-7 determines the risk level of a network access event or attempt based on the received risk score and confidence score as well as the current risk thresholds and confidence thresholds. If the access event or attempt is assessed with a high fraud risk, the on-demand threat remediation engine 1-7 sends an instruction to the Access Control Service 1-3 to request additional authentication from the user with an alternative authentication method or, alternatively, to block the access, depending on the policy set by a security admin on the Access Control Service 1-3. The instruction from the on-demand threat remediation engine 1-7 to the Access Control Service 1-3 generates a risk feedback event that contains the risk level evaluation by the on-demand threat remediation engine 1-7 and any resulting action triggered by the risk level evaluation, such as the authentication of the user's additional login attempt using the alternative authentication method. The authentication results contained in such risk feedback events are fed back from the Event Reporting Agent 1-2 to the on-demand threat remediation engine 1-7 via the Event Ingestion Service 1-10, the Risk Assessment Engine 1-11, and the Risk Assessment Service 1-8. The on-demand threat remediation engine 1-7 analyzes the received authentication results contained in risk feedback events and determines if the risk thresholds and confidence thresholds need to be adjusted. For example, if all of the authentication results are positive, i.e., all users are authenticated successfully using the alternative authentication method, the risk thresholds and/or confidence thresholds may need to be set higher to prevent unnecessary additional authentication requests.
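  • The feedback loop described above can be sketched as follows. The step size and ceiling are assumed values; the trigger condition (all step-up authentications succeeded) follows the example in the text.

```python
def adjust_thresholds(thresholds: dict, stepup_successes: list[bool],
                      step: float = 5.0, ceiling: float = 95.0) -> dict:
    """Raise the risk threshold when every challenged user authenticated
    successfully, to reduce unnecessary additional authentication requests."""
    if stepup_successes and all(stepup_successes):
        thresholds["risk"] = min(ceiling, thresholds["risk"] + step)
    return thresholds

thresholds = {"risk": 75.0, "confidence": 30.0}
adjust_thresholds(thresholds, [True, True, True])   # risk threshold -> 80.0
```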
  • In an embodiment, the on-demand threat remediation engine 1-7 applies configurable risk assessment rules to compute risk level. In an embodiment, risk assessment rules can be inherited and overwritten. The on-demand threat remediation engine 1-7 has default system-level risk assessment rules, which can be inherited by tenant companies and individual users. On the other hand, different tenant companies can customize their own company-level risk assessment rules, which overwrite the default system-level risk assessment rules. Similarly, different users can be configured with different individual risk assessment rules, which override company-level risk assessment rules.
  • FIG. 2 shows an event represented as a 3-tuple vector 2-3 in a three-dimensional entity profile vector space 2-8, where the X axis 2-6 is the event's login time of day 2-4, the Y axis 2-1 is the event's login city 2-2, and the Z axis 2-7 is the event's login device type 2-5.
  • As more and more events are collected, the event cluster grows and expands. FIG. 3 is a diagram of an entity profile in the form of an event cluster 4-3 and an anomaly event in the form of an event vector 4-9. The well-known machine learning clustering algorithm keeps updating the cluster as new event vectors are received and added to the entity profile vector space.
  • In an embodiment of the invention, as the center of a user's event cluster is dynamically updated, the risk score of a new event is also dynamically adjusted. For example, suppose the previous cluster center is represented as (8 AM, city A, iPhone) and the new cluster center is represented as (8:30 AM, city A, iPhone). In terms of login time of day, if a new event is not within a 30-minute distance of the cluster center, the event is considered high risk, i.e., it is assigned a high risk score. A new event (8:59 AM, city A, iPhone) receives a low risk score against the new cluster center because it is within 30 minutes of that center. However, the same event would receive a high risk score against the previous cluster center, since its distance from the previous cluster center is not within 30 minutes. This is one of the advantages of the embodiment of the invention: the risk score is adaptively updated as the user profile cluster is updated. In prior art, this requires manual adjustment of the risk score calculation criteria, for example updating the low-risk login window from (7:30 AM to 8:30 AM) to (8:00 AM to 9:00 AM). Without adjusting the risk score criteria, the new event may be treated as an anomaly in prior art.
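  • The example above can be made concrete with a few lines of code (times expressed in minutes of the day; only the time dimension is shown, since city and device are unchanged).

```python
def time_risk(event_minutes: int, center_minutes: int, tolerance: int = 30) -> str:
    """'high' if the event's login time is outside the tolerance window."""
    return "high" if abs(event_minutes - center_minutes) > tolerance else "low"

event = 8 * 60 + 59              # 8:59 AM
previous_center = 8 * 60         # 8:00 AM cluster center
new_center = 8 * 60 + 30         # 8:30 AM cluster center after the update

time_risk(event, previous_center)   # 'high' (59 minutes from the old center)
time_risk(event, new_center)        # 'low'  (29 minutes from the new center)
```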
  • FIG. 4 shows an entity profile evolving from one cluster 5-3 into two clusters. A new cluster 5-11 starts as an event vector 5-10, which is detected as an anomaly and not part of the existing event cluster 5-3 by the well-known clustering algorithm. Therefore, an additional authentication factor is triggered for this entity. In general, using one factor for authentication is considered a weak authentication method, while using two or more factors is considered a strong authentication method. In addition, for single-factor authentication, different types of factors have different levels of authentication strength. In an embodiment of the invention, authentication using security questions (SQ) is considered very weak; authentication using a password is considered weak to medium, depending on the password rules enforced; authentication using email, SMS, or a phone call is considered medium; and a one-time password (OTP), an authenticator, or a third-party RADIUS service (e.g., RSA) is considered strong.
  • In an embodiment of the invention, the additional authentication factor is a stronger authentication factor than the default factor. When the additional authentication is successful, the success is recorded as a new event and fed back into the Risk Assessment Engine 1-11, and the event vector 5-10 is marked as the first event vector of the new cluster 5-11. This type of event cluster evolution typically happens when a user maintains more than one set of access patterns. For example, a user may regularly travel to another city for a week once a quarter. From the event cluster perspective, the user has at least two clusters, one centered at the home location and the other centered at the visited location. The event cluster centered at the visited location grows during the week when the user is traveling. When the user returns home, the event cluster centered at the visited location stops growing and eventually decays as the event data becomes outdated. In an embodiment of the invention, event data that is stored longer than a certain duration may be purged from the event cluster. When the user travels again, since the event cluster at the visited location is already established, the process of computing a risk assessment with a sufficient level of confidence is accelerated.
  • FIG. 5 shows a diagram of an entity profile in the form of two event clusters 6-3 and 6-4. Cluster 6-3 is a cluster with long-term memory while cluster 6-8 is a cluster with short-term memory. The event cluster with long-term memory represents the entity's normal or routine behavior, which does not change, or changes only gradually, over a long period. For example, a user may check work emails from his/her smartphone around 7 AM every morning at home for years. The event vectors of a long-term memory cluster are a useful reference for the user's future routine behavior. Therefore, event vectors that belong to the event cluster with long-term memory are kept as part of the event cluster for a relatively long period, e.g., several months or years. In an embodiment of the invention, the event cluster with long-term memory is formed by well-known machine-learning clustering methods. On the other hand, the event cluster with short-term memory represents the entity's temporary behavior, which tends to change and lasts only for a short period. For example, a user may regularly travel for business away from his/her home for a week once a month. During the week of travelling, the user's network access behavior, such as network login location and login time, is likely different from the behavior in past or future months, and the user maintains such network access behavior only during the week of travelling. The event vectors collected in the current travelling week may not be the right reference for the user's future behavior. Therefore, the event vectors of a short-term memory cluster are kept as part of the event cluster only for a relatively short period, e.g., several days. As a result, in an embodiment of the invention, an event vector cluster with short-term memory decays more quickly than an event vector cluster with long-term memory. In an embodiment of the invention, the event cluster with short-term memory is formed by rules, such as multifactor authentication with strong authentication factors. In an embodiment of the invention, the rules are configurable by users, which results in customized event clusters with short-term memory. FIG. 5 shows an example in which the cluster 6-8 with short-term memory decays more quickly than the long-term memory cluster 6-3.
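  • A minimal sketch of this decay behavior is given below, using illustrative retention periods ("several days" for short-term memory and "several months" for long-term memory, per the description above).

```python
import datetime as dt

RETENTION = {                          # assumed retention periods
    "short": dt.timedelta(days=7),     # short-term memory: several days
    "long": dt.timedelta(days=180),    # long-term memory: several months
}

def decay_cluster(events, memory_type: str, now: dt.datetime):
    """events: list of (timestamp, event_vector) pairs.

    Returns only the event vectors still within the cluster's retention
    period; older vectors are purged, so short-term clusters decay more
    quickly than long-term clusters.
    """
    cutoff = now - RETENTION[memory_type]
    return [(ts, vec) for ts, vec in events if ts >= cutoff]
```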

Claims (21)

1-22. (canceled)
23. A non-transitory computer readable medium including instructions that, when executed by at least one processor, cause the at least one processor to perform operations for automatically evaluating and responding to network security risks, comprising:
building a customized behavioral profile for an identity using a machine-learning process, wherein inputs to the machine-learning process include at least two of:
a login location,
a login time,
a number of login attempts,
an identification of a login device,
an IP address used for login, and
an application used for login;
identifying a new event associated with the identity;
determining a risk level for the new event based on the customized behavioral profile for the identity;
accessing a set of security rules; and
performing a security action based on the risk level and the set of security rules, the security action comprising at least one of:
generating a prompt for authentication of the identity,
generating an alert, or
denying access by the identity to an access-restricted network resource.
24. The non-transitory computer-readable medium of claim 23, wherein the customized behavioral profile for the identity is represented by a plurality of clusters of event vectors.
25. The non-transitory computer-readable medium of claim 24, wherein the plurality of clusters of event vectors each have at least three dimensions.
26. The non-transitory computer-readable medium of claim 23, wherein the operations further comprise determining a confidence level associated with the risk level.
27. The non-transitory computer-readable medium of claim 26, wherein the confidence level is a function of a training maturity of the machine-learning process.
28. The non-transitory computer-readable medium of claim 23, wherein the inputs to the machine-learning process further include command-line interface events.
29. The non-transitory computer-readable medium of claim 23, wherein the inputs to the machine-learning process further include authorization escalation events.
30. The non-transitory computer-readable medium of claim 23, wherein the inputs to the machine-learning process further include risk feedback events.
31. The non-transitory computer-readable medium of claim 23, wherein the operations further comprise receiving updates on activity of the identity and automatically updating the customized behavioral profile for the identity based on the updates.
32. The non-transitory computer-readable medium of claim 23, wherein the inputs to the machine-learning process further include enrolling or un-enrolling a client device with an identity management service.
33. A computer-implemented method for automatically evaluating and responding to network security risks, comprising:
building a customized behavioral profile for an identity using a machine-learning process, wherein inputs to the machine-learning process include at least two of:
a login location,
a login time,
a number of login attempts,
an identification of a login device,
an IP address used for login, and
an application used for login;
identifying a new event associated with the identity;
determining a risk level for the new event based on the customized behavioral profile for the identity;
accessing a set of security rules; and
performing a security action based on the risk level and the set of security rules, the security action comprising at least one of:
generating a prompt for authentication of the identity,
generating an alert, or
denying access by the identity to an access-restricted network resource.
34. The computer-implemented method of claim 33, wherein determining the risk level includes determining whether the new event is part of a cluster of event vectors.
35. The computer-implemented method of claim 34, wherein determining the risk level is based on a distance between a vector associated with the new event and one or more of the cluster of event vectors.
36. The computer-implemented method of claim 33, further comprising applying a filter to exclude malformed or irrelevant inputs to the machine-learning process.
37. The computer-implemented method of claim 33, further comprising converting a format of ingested data to form the inputs to the machine-learning process.
38. The computer-implemented method of claim 33, wherein the set of security rules are default rules.
39. The computer-implemented method of claim 33, wherein the set of security rules are customized rules.
40. The computer-implemented method of claim 33, further comprising determining a confidence level associated with the risk level.
41. The computer-implemented method of claim 40, further comprising assigning a fraud risk level to the new event based on the risk level and the confidence level.
42. The computer-implemented method of claim 33, further comprising generating a display of the risk level.
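
For illustration, the following Python sketch outlines the general kind of flow the claims recite: build a behavioral profile from past login events, score a new event by its distance to the nearest cluster, and select a security action from a rule table. It uses an off-the-shelf k-means clusterer as a stand-in for the machine-learning process; the feature layout, distance thresholds, rule values, and helper names (build_profile, risk_level, respond) are assumptions of this sketch, not the claimed design.

import numpy as np
from sklearn.cluster import KMeans

def build_profile(event_matrix, n_clusters=2):
    """Fit clusters over past event vectors; each row is an event such as
    [login_hour, login_location_id, login_attempts, device_id]."""
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    model.fit(event_matrix)
    return model

def risk_level(model, event_vector):
    """Bucket the distance from the new event to its nearest cluster center
    into low / medium / high (the thresholds here are assumptions)."""
    distances = model.transform(np.array([event_vector], dtype=float))[0]
    d = distances.min()
    if d < 1.0:
        return "low"
    if d < 3.0:
        return "medium"
    return "high"

SECURITY_RULES = {                      # assumed default rule set
    "low": "allow",
    "medium": "prompt_additional_authentication",
    "high": "deny_access_and_generate_alert",
}

def respond(model, event_vector):
    return SECURITY_RULES[risk_level(model, event_vector)]

# Toy usage: routine 7 AM home logins plus 7 PM office logins form the profile.
history = np.array([
    [7, 0, 1, 0], [7, 0, 1, 0], [8, 0, 2, 0],   # morning, home, usual device
    [19, 1, 1, 1], [19, 1, 1, 1],               # evening, office, second device
], dtype=float)
profile = build_profile(history)
print(respond(profile, [7, 0, 1, 0]))    # likely "allow"
print(respond(profile, [3, 5, 6, 9]))    # likely "deny_access_and_generate_alert"

Any clustering method could play the same role here; k-means is used only because its transform method conveniently exposes distances to the cluster centers, which serve as the risk signal in this sketch.
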
US17/242,707 2017-10-17 2021-04-28 Risk assessment for network access control through data analytics Active 2038-03-05 US12047392B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/242,707 US12047392B2 (en) 2017-10-17 2021-04-28 Risk assessment for network access control through data analytics
US18/749,324 US20240422177A1 (en) 2017-10-17 2024-06-20 Risk assessment for network access control through data analytics

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/785,430 US20190116193A1 (en) 2017-10-17 2017-10-17 Risk assessment for network access control through data analytics
US17/242,707 US12047392B2 (en) 2017-10-17 2021-04-28 Risk assessment for network access control through data analytics

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/785,430 Continuation US20190116193A1 (en) 2017-10-17 2017-10-17 Risk assessment for network access control through data analytics

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/749,324 Continuation US20240422177A1 (en) 2017-10-17 2024-06-20 Risk assessment for network access control through data analytics

Publications (2)

Publication Number Publication Date
US20210273951A1 true US20210273951A1 (en) 2021-09-02
US12047392B2 US12047392B2 (en) 2024-07-23

Family

ID=66096668

Family Applications (3)

Application Number Title Priority Date Filing Date
US15/785,430 Abandoned US20190116193A1 (en) 2017-10-17 2017-10-17 Risk assessment for network access control through data analytics
US17/242,707 Active 2038-03-05 US12047392B2 (en) 2017-10-17 2021-04-28 Risk assessment for network access control through data analytics
US18/749,324 Pending US20240422177A1 (en) 2017-10-17 2024-06-20 Risk assessment for network access control through data analytics

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/785,430 Abandoned US20190116193A1 (en) 2017-10-17 2017-10-17 Risk assessment for network access control through data analytics

Family Applications After (1)

Application Number Title Priority Date Filing Date
US18/749,324 Pending US20240422177A1 (en) 2017-10-17 2024-06-20 Risk assessment for network access control through data analytics

Country Status (1)

Country Link
US (3) US20190116193A1 (en)

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190116193A1 (en) * 2017-10-17 2019-04-18 Yanlin Wang Risk assessment for network access control through data analytics
US10728256B2 (en) * 2017-10-30 2020-07-28 Bank Of America Corporation Cross channel authentication elevation via logic repository
US10621341B2 (en) 2017-10-30 2020-04-14 Bank Of America Corporation Cross platform user event record aggregation system
US10721246B2 (en) 2017-10-30 2020-07-21 Bank Of America Corporation System for across rail silo system integration and logic repository
FR3079380A1 (en) * 2018-03-26 2019-09-27 Orange SECURITY MANAGEMENT OF A LOCAL COMMUNICATION NETWORK INCLUDING AT LEAST ONE COMMUNICABLE OBJECT.
US10984122B2 (en) * 2018-04-13 2021-04-20 Sophos Limited Enterprise document classification
CN109861953B (en) * 2018-05-14 2020-08-21 新华三信息安全技术有限公司 Abnormal user identification method and device
US11017100B2 (en) * 2018-08-03 2021-05-25 Verizon Patent And Licensing Inc. Identity fraud risk engine platform
US11017088B2 (en) * 2018-09-17 2021-05-25 Microsoft Technology Licensing, Llc Crowdsourced, self-learning security system through smart feedback loops
US11012433B2 (en) * 2019-03-24 2021-05-18 Zero Networks Ltd. Method and system for modifying network connection access rules using multi-factor authentication (MFA)
US11743265B2 (en) 2019-03-24 2023-08-29 Zero Networks Ltd. Method and system for delegating control in network connection access rules using multi-factor authentication (MFA)
US11023863B2 (en) * 2019-04-30 2021-06-01 EMC IP Holding Company LLC Machine learning risk assessment utilizing calendar data
WO2020227335A1 (en) * 2019-05-08 2020-11-12 SAIX Inc. Identity risk and cyber access risk engine
US11169506B2 (en) 2019-06-26 2021-11-09 Cisco Technology, Inc. Predictive data capture with adaptive control
US11218494B2 (en) * 2019-07-26 2022-01-04 Raise Marketplace, Llc Predictive fraud analysis system for data transactions
US12341807B2 (en) * 2019-12-17 2025-06-24 Imperva, Inc. Packet fingerprinting for enhanced distributed denial of service protection
US11593477B1 (en) 2020-01-31 2023-02-28 Splunk Inc. Expediting processing of selected events on a time-limited basis
GB2597909B (en) * 2020-07-17 2022-09-07 British Telecomm Computer-implemented security methods and systems
US20220159029A1 (en) * 2020-11-13 2022-05-19 Cyberark Software Ltd. Detection of security risks based on secretless connection data
US11665047B2 (en) * 2020-11-18 2023-05-30 Vmware, Inc. Efficient event-type-based log/event-message processing in a distributed log-analytics system
CN112367340B (en) * 2020-11-30 2022-07-05 杭州安恒信息技术股份有限公司 Intranet asset risk assessment method, device, equipment and medium
CN112633763B (en) * 2020-12-31 2024-04-12 上海三零卫士信息安全有限公司 Grade protection risk studying and judging method based on artificial neural network ANNs
CN112685711A (en) * 2021-02-02 2021-04-20 杭州宁达科技有限公司 New information security access control system and method based on user risk assessment
US12225027B2 (en) * 2021-03-29 2025-02-11 Armis Security Ltd. System and method for detection of abnormal device traffic behavior
US11711396B1 (en) 2021-06-24 2023-07-25 Airgap Networks Inc. Extended enterprise browser blocking spread of ransomware from alternate browsers in a system providing agentless lateral movement protection from ransomware for endpoints deployed under a default gateway with point to point links
US12074906B1 (en) 2021-06-24 2024-08-27 Airgap Networks Inc. System and method for ransomware early detection using a security appliance as default gateway with point-to-point links between endpoints
US11695799B1 (en) 2021-06-24 2023-07-04 Airgap Networks Inc. System and method for secure user access and agentless lateral movement protection from ransomware for endpoints deployed under a default gateway with point to point links
US11916957B1 (en) 2021-06-24 2024-02-27 Airgap Networks Inc. System and method for utilizing DHCP relay to police DHCP address assignment in ransomware protected network
US11722519B1 (en) 2021-06-24 2023-08-08 Airgap Networks Inc. System and method for dynamically avoiding double encryption of already encrypted traffic over point-to-point virtual private networks for lateral movement protection from ransomware
US12058171B1 (en) 2021-06-24 2024-08-06 Airgap Networks, Inc. System and method to create disposable jump boxes to securely access private applications
US12057969B1 (en) 2021-06-24 2024-08-06 Airgap Networks, Inc. System and method for load balancing endpoint traffic to multiple security appliances acting as default gateways with point-to-point links between endpoints
US11736520B1 (en) * 2021-06-24 2023-08-22 Airgap Networks Inc. Rapid incidence agentless lateral movement protection from ransomware for endpoints deployed under a default gateway with point to point links
US11757933B1 (en) 2021-06-24 2023-09-12 Airgap Networks Inc. System and method for agentless lateral movement protection from ransomware for endpoints deployed under a default gateway with point to point links
US11757934B1 (en) 2021-06-24 2023-09-12 Airgap Networks Inc. Extended browser monitoring inbound connection requests for agentless lateral movement protection from ransomware for endpoints deployed under a default gateway with point to point links
US11818219B2 (en) * 2021-09-02 2023-11-14 Paypal, Inc. Session management system
US12316676B2 (en) * 2022-07-22 2025-05-27 Cisco Technology, Inc. Threat analytics and dynamic compliance in security policies
US11743280B1 (en) * 2022-07-29 2023-08-29 Intuit Inc. Identifying clusters with anomaly detection
CN115408673B (en) * 2022-11-02 2023-10-27 杭州优百顺科技有限公司 Software validity period access control management system and method
WO2025017375A1 (en) * 2023-07-19 2025-01-23 Telefonaktiebolaget Lm Ericsson (Publ) Trust management for access to a service provided by a network function producer in a network
US20250086551A1 (en) * 2023-09-13 2025-03-13 The Toronto-Dominion Bank System and Method for Assessing Actions of Authenticated Entities Within an Enterprise System
CN117978836B (en) * 2024-03-05 2024-08-20 安徽中杰信息科技有限公司 Large-screen situation awareness system applied to cloud monitoring service platform

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5699403A (en) * 1995-04-12 1997-12-16 Lucent Technologies Inc. Network vulnerability management apparatus and method
US6298445B1 (en) * 1998-04-30 2001-10-02 Netect, Ltd. Computer security
US6282546B1 (en) * 1998-06-30 2001-08-28 Cisco Technology, Inc. System and method for real-time insertion of data into a multi-dimensional database for network intrusion detection and vulnerability assessment
US7380270B2 (en) * 2000-08-09 2008-05-27 Telos Corporation Enhanced system, method and medium for certifying and accrediting requirements compliance
US20020147803A1 (en) * 2001-01-31 2002-10-10 Dodd Timothy David Method and system for calculating risk in association with a security audit of a computer network
US7325252B2 (en) * 2001-05-18 2008-01-29 Achilles Guard Inc. Network security testing
US20030037063A1 (en) * 2001-08-10 2003-02-20 Qlinx Method and system for dynamic risk assessment, risk monitoring, and caseload management
US6907430B2 (en) * 2001-10-04 2005-06-14 Booz-Allen Hamilton, Inc. Method and system for assessing attacks on computer networks using Bayesian networks
US7480715B1 (en) * 2002-01-25 2009-01-20 Vig Acquisitions Ltd., L.L.C. System and method for performing a predictive threat assessment based on risk factors
US7058796B2 (en) * 2002-05-20 2006-06-06 Airdefense, Inc. Method and system for actively defending a wireless LAN against attacks
US7322044B2 (en) * 2002-06-03 2008-01-22 Airdefense, Inc. Systems and methods for automated network policy exception detection and correction
US7472421B2 (en) 2002-09-30 2008-12-30 Electronic Data Systems Corporation Computer model of security risks
US7409721B2 (en) * 2003-01-21 2008-08-05 Symantac Corporation Network risk analysis
US7519996B2 (en) 2003-08-25 2009-04-14 Hewlett-Packard Development Company, L.P. Security intrusion mitigation system and method
US7526806B2 (en) 2003-11-05 2009-04-28 Cisco Technology, Inc. Method and system for addressing intrusion attacks on a computer system
US7764641B2 (en) 2005-02-05 2010-07-27 Cisco Technology, Inc. Techniques for determining communication state using accelerometer data
US8806591B2 (en) * 2011-01-07 2014-08-12 Verizon Patent And Licensing Inc. Authentication risk evaluation
WO2013086048A1 (en) * 2011-12-05 2013-06-13 Visa International Service Association Dynamic network analytic system
US8789190B2 (en) * 2011-12-23 2014-07-22 Mcafee, Inc. System and method for scanning for computer vulnerabilities in a network environment
US9680861B2 (en) * 2012-08-31 2017-06-13 Damballa, Inc. Historical analysis to identify malicious activity
US9811830B2 (en) * 2013-07-03 2017-11-07 Google Inc. Method, medium, and system for online fraud prevention based on user physical location data
US9286453B2 (en) * 2014-05-06 2016-03-15 International Business Machines Corporation Dynamic adjustment of authentication policy
US9501647B2 (en) * 2014-12-13 2016-11-22 Security Scorecard, Inc. Calculating and benchmarking an entity's cybersecurity risk score
US10284588B2 (en) * 2016-09-27 2019-05-07 Cisco Technology, Inc. Dynamic selection of security posture for devices in a network using risk scoring
US12197510B2 (en) 2016-10-20 2025-01-14 Micron Technology, Inc. Traversal of S portion of a graph problem to be solved using automata processor
US10715543B2 (en) 2016-11-30 2020-07-14 Agari Data, Inc. Detecting computer security risk based on previously observed communications
US10404735B2 (en) * 2017-02-02 2019-09-03 Aetna Inc. Individualized cybersecurity risk detection using multiple attributes
US10601800B2 (en) * 2017-02-24 2020-03-24 Fmr Llc Systems and methods for user authentication using pattern-based risk assessment and adjustment
US20180246762A1 (en) 2017-02-28 2018-08-30 Intel Corporation Runtime processor optimization

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020143938A1 (en) * 2000-09-28 2002-10-03 Bruce Alexander System and method for providing configurable security monitoring utilizing an integrated information system
US20030154393A1 (en) * 2002-02-12 2003-08-14 Carl Young Automated security management
US20060085854A1 (en) * 2004-10-19 2006-04-20 Agrawal Subhash C Method and system for detecting intrusive anomalous use of a software system using multiple detection algorithms
US20060282660A1 (en) * 2005-04-29 2006-12-14 Varghese Thomas E System and method for fraud monitoring, detection, and tiered user authentication
US20090089869A1 (en) * 2006-04-28 2009-04-02 Oracle International Corporation Techniques for fraud monitoring and detection using application fingerprinting
US20090265777A1 (en) * 2008-04-21 2009-10-22 Zytron Corp. Collaborative and proactive defense of networks and information systems
US20160156655A1 (en) * 2010-07-21 2016-06-02 Seculert Ltd. System and methods for malware detection using log analytics for channels and super channels
US20140041028A1 (en) * 2011-02-07 2014-02-06 Dell Products, Lp System and Method for Assessing Whether a Communication Contains an Attack
US8418249B1 (en) * 2011-11-10 2013-04-09 Narus, Inc. Class discovery for automated discovery, attribution, analysis, and risk assessment of security threats
US20130298244A1 (en) * 2012-05-01 2013-11-07 Taasera, Inc. Systems and methods for threat identification and remediation
US20150373043A1 (en) * 2014-06-23 2015-12-24 Niara, Inc. Collaborative and Adaptive Threat Intelligence for Computer Security
US20160065604A1 (en) * 2014-08-29 2016-03-03 Linkedln Corporation Anomalous event detection based on metrics pertaining to a production system
US20160226901A1 (en) * 2015-01-30 2016-08-04 Securonix, Inc. Anomaly Detection Using Adaptive Behavioral Profiles
US20160277424A1 (en) * 2015-03-20 2016-09-22 Ashif Mawji Systems and Methods for Calculating a Trust Score
US20160323243A1 (en) * 2015-05-01 2016-11-03 Cirius Messaging Inc. Data leak protection system and processing methods thereof
US20160352765A1 (en) * 2015-05-27 2016-12-01 Cisco Technology, Inc. Fingerprint merging and risk level evaluation for network anomaly detection
US10594710B2 (en) * 2015-11-20 2020-03-17 Webroot Inc. Statistical analysis of network behavior using event vectors to identify behavioral anomalies using a composite score
US10673880B1 (en) * 2016-09-26 2020-06-02 Splunk Inc. Anomaly detection to identify security threats
US20180198812A1 (en) * 2017-01-11 2018-07-12 Qualcomm Incorporated Context-Based Detection of Anomalous Behavior in Network Traffic Patterns
US20190081968A1 (en) * 2017-09-13 2019-03-14 Centrify Corporation Method and Apparatus for Network Fraud Detection and Remediation Through Analytics
US20190116193A1 (en) * 2017-10-17 2019-04-18 Yanlin Wang Risk assessment for network access control through data analytics
US20190306170A1 (en) * 2018-03-30 2019-10-03 Yanlin Wang Systems and methods for adaptive data collection using analytics agents

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220116783A1 (en) * 2020-10-08 2022-04-14 Surendra Goel System that provides cybersecurity in a home or office by interacting with Internet of Things devices and other devices
US11606694B2 (en) * 2020-10-08 2023-03-14 Surendra Goel System that provides cybersecurity in a home or office by interacting with internet of things devices and other devices
US20240179189A1 (en) * 2021-06-18 2024-05-30 Capital One Services, Llc Systems and methods for network security
US12301632B2 (en) * 2021-06-18 2025-05-13 Capital One Services, Llc Systems and methods for network security
US20250023900A1 (en) * 2023-07-14 2025-01-16 Omnissa, Llc Zero trust continuous risk feedback

Also Published As

Publication number Publication date
US20240422177A1 (en) 2024-12-19
US12047392B2 (en) 2024-07-23
US20190116193A1 (en) 2019-04-18

Similar Documents

Publication Publication Date Title
US12047392B2 (en) Risk assessment for network access control through data analytics
US12248552B2 (en) Biometric identification platform
US10581919B2 (en) Access control monitoring through policy management
US10911425B1 (en) Determining authentication assurance from user-level and account-level indicators
US11190527B2 (en) Identity verification and login methods, apparatuses, and computer devices
US11902307B2 (en) Method and apparatus for network fraud detection and remediation through analytics
US20210152555A1 (en) System and method for unauthorized activity detection
US20210037000A1 (en) Securing a group-based communication system via identity verification
KR101721032B1 (en) Security challenge assisted password proxy
US10110629B1 (en) Managed honeypot intrusion detection system
US10102570B1 (en) Account vulnerability alerts
EP2545680B1 (en) Behavior-based security system
CA2751490C (en) Using social information for authenticating a user session
US20210157945A1 (en) Machine learning for identity access management
US12125050B2 (en) Security policy enforcement
US20140331278A1 (en) Systems and methods for verifying identities
KR20160111940A (en) System and method for biometric protocol standards
US9092599B1 (en) Managing knowledge-based authentication systems
US9754209B1 (en) Managing knowledge-based authentication systems
US12328391B2 (en) Managing secret values using a secrets manager
US11568038B1 (en) Threshold-based authentication
US20230421562A1 (en) Method and system for protection of cloud-based infrastructure
US8776228B2 (en) Transaction-based intrusion detection
Buttar et al. Conversational AI: Security Features, Applications, and Future Scope at Cloud Platform
KR20210026710A (en) Trust-Aware Role-based System in Public Internet-of-Things

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: GOLUB CAPITAL LLC, AS AGENT, ILLINOIS

Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:CENTRIFY CORPORATION;REEL/FRAME:059911/0773

Effective date: 20180504

Owner name: CYBERARK SOFTWARE LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CYBERARK SOFTWARE, INC.;REEL/FRAME:059875/0621

Effective date: 20201109

Owner name: CYBERARK SOFTWARE, INC., MASSACHUSETTS

Free format text: MERGER;ASSIGNOR:IDAPTIVE, LLC;REEL/FRAME:059875/0610

Effective date: 20200731

Owner name: IDAPTIVE, LLC, DELAWARE

Free format text: CHANGE OF NAME;ASSIGNOR:APPS & ENDPOINT COMPANY, LLC;REEL/FRAME:059875/0578

Effective date: 20180913

Owner name: CENTRIFY CORPORATION, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:GOLUB CAPITAL LLC;REEL/FRAME:059875/0554

Effective date: 20180815

Owner name: APPS & ENDPOINT COMPANY, LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CENTRIFY CORPORATION;REEL/FRAME:059875/0518

Effective date: 20180815

Owner name: IDAPTIVE, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CENTRIFY CORPORATION;REEL/FRAME:059875/0497

Effective date: 20180815

Owner name: CENTRIFY CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, YANLIN;LI, WEIZHI;REEL/FRAME:059875/0454

Effective date: 20171019

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE