WO2025231377A1 - Methods and systems for generating, implementing, and verifying computer security programs - Google Patents
Info
- Publication number
- WO2025231377A1 (PCT/US2025/027520)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- program
- security
- organization
- security program
- procedures
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/57—Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
- G06F21/577—Assessing vulnerabilities and evaluating computer system security
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0866—Checking the configuration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0894—Policy-based network configuration management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/40—Network security protocols
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
Definitions
- Small and medium-sized organizations face unique challenges in implementing effective computer security programs.
- small and medium-sized organizations often operate with limited financial and human resources. This can make it difficult to invest in advanced security infrastructure or hire dedicated IT security personnel.
- Security programs can include security procedures, security policies, security evidence, and other security systems, frameworks, and processes.
- a method for generating, implementing, and verifying a security program for an organization includes receiving inputs providing information about the organization, generating the security program by processing the inputs describing the information about the organization, the security program including assessment objectives for the security program, and providing the information about the organization and the security program as model inputs to a security management model to generate output including implementation procedures defining steps for implementing the security program and verification procedures defining steps for verifying the assessment objectives of the security program.
- information provided about the organization is inputted to the security management model to generate output, including procedures for implementation, rules for policy, evidence for assessors, and automation to monitor compliance.
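The receive-generate-output flow described above can be sketched as a minimal pipeline. The class and function names below are hypothetical stand-ins, and the security management model is represented by a stub rather than an actual learned or structured model:

```python
from dataclasses import dataclass, field

@dataclass
class OrganizationInfo:
    """High-level operational inputs supplied by the organization."""
    name: str
    size: int
    handles_confidential_data: bool

@dataclass
class SecurityProgram:
    """Generated program with its assessment objectives."""
    assessment_objectives: list = field(default_factory=list)

def generate_security_program(org: OrganizationInfo) -> SecurityProgram:
    # Stub: derive assessment objectives from operational decisions.
    objectives = ["maintain asset inventory"]
    if org.handles_confidential_data:
        objectives.append("block portable storage devices on endpoints")
    return SecurityProgram(assessment_objectives=objectives)

def security_management_model(org: OrganizationInfo, program: SecurityProgram) -> dict:
    # Stub: produce implementation and verification procedures per objective.
    return {
        "implementation_procedures": [f"implement: {o}" for o in program.assessment_objectives],
        "verification_procedures": [f"verify: {o}" for o in program.assessment_objectives],
    }

org = OrganizationInfo(name="Acme", size=40, handles_confidential_data=True)
program = generate_security_program(org)
output = security_management_model(org, program)
```

The point of the sketch is the shape of the interfaces: organization inputs flow into program generation, and both flow into the model to yield paired implementation and verification procedures.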
- FIG. 1 illustrates an example environment for generating, implementing, and verifying a security program for an entity.
- FIG. 2 illustrates an example method for generating, implementing, and verifying a security program for an entity.
- FIG. 3 illustrates types of output generated by the security management model.
- FIG. 4 illustrates an example template for a program procedure for each of the types of output for a major activity of a security program.
- FIG. 5 illustrates an example step for an example program procedure.
- FIG. 6 illustrates another example step for an example program procedure.
- FIG. 7 illustrates an example variant for an example step for an example program procedure.
- FIG. 8 illustrates options for example variants for a security program.
- FIG. 9 illustrates a template for an authoritative negative variant for a security program.
- FIG. 10 illustrates an example for an authoritative negative variant for a security program.
- FIG. 11 illustrates an example product agnostic step for an example program procedure.
- FIG. 12 illustrates example product specific procedures for the sub-activity type.
- FIG. 13 illustrates example product specific procedures for the evidence presentation type.
- FIG. 14 illustrates an example user interface for receiving information used to generate a security program.
- FIG. 15 illustrates another example user interface for selecting a security program.
- FIG. 16 illustrates an example user interface that is presented when an organization is recommended to upgrade their service level for a security platform.
- FIG. 17 illustrates an example user interface presenting a recommendation to use an on-site support team to complete one or more steps in a security activity and/or procedure.
- FIG. 18 illustrates an example user interface to confirm selections for a security program.
- FIG. 19 illustrates example generated output from the computer security management system.
- FIG. 20 illustrates example generated output from the computer security management system.
- FIG. 21 illustrates a list of example major activities for an organization’s security program.
- FIGs 22-42 illustrate features for example implementations disclosed herein.
- FIG. 43 is a schematic diagram that shows an example of a computing device and a mobile computing device.
- FIG. 44 illustrates an example group of program procedures.
- FIG. 45 illustrates an example user interface.
- FIG. 46 illustrates another example user interface.
- FIG. 47 illustrates yet another example user interface.
- FIG. 48 illustrates an example map of major activities, or groups of program procedures.
- FIG. 49 displays the architecture of the computer security management system.
- FIG. 50 illustrates an example relationship of program procedures and organizations.
- FIG. 51 illustrates an example major activity.
- FIG. 52 illustrates an example media protection program procedure.
- FIG. 53 illustrates an example group of media protection program procedures.
- FIGs 54A-54D illustrate example flow diagrams in accordance with the examples described in an example implementation.
- FIG. 55 illustrates the inner workings of the cybersecurity program generator, including gathering inputs and generating artifacts.
- FIG. 56 illustrates the cybersecurity program generator deploying configurations to systems.
- the present disclosure describes systems and methods for generating, implementing, and verifying a security program for computer systems, improving the technology of computer security and improving the functioning of the computers in a network.
- Computer security has become a paramount concern for entities of all sizes. For each entity there are multiple stakeholders with different perspectives and levels of expertise. For example, in the computer security space there are stakeholders from the computer security program perspective, governance (e.g., users responsible for ensuring the entity is compliant with standards and the law), implementers (e.g., system admins), cyber operations, insurance, etc.
- the computer security management system disclosed herein interfaces with each of the possible different roles, including by automatically converting sets of requirements into a set of documents optimized for a perspective of a user. In some examples, this includes automatically converting a set of operational requirements into a set of computer security requirements.
- a set of operational requirements are converted into a procedure for a user to follow, a policy the organization implements, an automation that monitors the status of a procedure, a calendar reminder to complete a task, and/or evidence to prove to a third party the task was completed.
- users enter information about their entity and a security program is generated based on the received information. For example, by automatically converting operational decisions and requirements into specific cybersecurity decisions and requirements that define a security program.
- a security management model is leveraged to implement specific details for a computer security program based on high level operational decisions. For example, an organization may decide that it holds extremely confidential information and that the highest level of computer security is required. This decision is then translated into a set of requirements for the security program. For example, the program may define that no computing equipment can enter or leave a building, that wireless access is not available, and that portable storage devices are not allowed to connect to computing devices based on the high level operational decision. In such an example, the operational decision flows down from a high level policy to specifically locking the devices associated with the organization.
- the policy requires that all devices be highly secured regardless of the type of endpoint, and the acceptable use document will include corresponding requirements for the employees (e.g., describing the policy forbidding the use of a removable storage device).
- the security management model automatically configures systems to disable wireless access, and disable portable storage media.
- the system routinely monitors the devices to ensure those configurations are still in place. The system can provide a report of the configuration status as evidence to a third party that the configurations are complete. The system can alert the organization if there is an issue with configuring a device. In some examples, the system alerts the organization if there is a discrepancy between the devices in the inventory and the devices it detects. The system can prepare an article so the organization understands the purpose of the restrictions.
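The routine monitoring and inventory-discrepancy checks described here might look like the following sketch. The data shapes are assumptions; device records and expected settings are purely illustrative:

```python
def check_configurations(devices: dict, expected: dict) -> list:
    """Return alerts for devices whose configuration drifted from the expected state."""
    alerts = []
    for device_id, config in devices.items():
        for setting, required_value in expected.items():
            if config.get(setting) != required_value:
                alerts.append(
                    f"{device_id}: {setting} is {config.get(setting)!r}, "
                    f"expected {required_value!r}"
                )
    return alerts

def check_inventory(inventory: set, detected: set) -> list:
    """Alert on discrepancies between the device inventory and detected devices."""
    alerts = [f"unknown device detected: {d}" for d in detected - inventory]
    alerts += [f"inventoried device missing: {d}" for d in inventory - detected]
    return alerts

expected = {"wireless_enabled": False, "usb_storage_enabled": False}
devices = {
    "laptop-1": {"wireless_enabled": False, "usb_storage_enabled": False},
    "laptop-2": {"wireless_enabled": True, "usb_storage_enabled": False},
}
config_alerts = check_configurations(devices, expected)
inventory_alerts = check_inventory({"laptop-1", "laptop-2"},
                                   {"laptop-1", "laptop-2", "laptop-3"})
```

A passing run (empty alert lists) could serve directly as the configuration-status evidence the passage mentions, while any non-empty list triggers the alerting path.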
- the security management model is used to organize all the information about the operational decisions in one location.
- the security management model then allows for the organization to specify the technology that they have in place to allow aggregators to translate the operational decisions into a computer security program, including technical decisions for implementation and specific technical details that apply to the kinds of technology belonging to the organization. Aggregators can check the types of end points and automatically apply instructions for configuring the end points belonging to the organization.
- the security management model is used to organize potential solutions an organization can choose (or be assigned) to implement into program procedures based on organizational requirements. The model allows the organization to specify its requirements, and then assigns the program procedures that the organization needs.
- Program procedures can contain different types of information, that are combined into artifacts (e.g., documents, configurations, code, or other types of information).
- An aggregation process is performed by the computer security management system by combining portions of program procedures into artifacts.
- One example artifact includes instructions on how to configure a system to meet a solution.
- Another example artifact is code that configures the system to meet the solution automatically.
- a computer security management system tracks a baseline (or a catalog of baselines) for organizations and provides extra attention to fields that are different from the baseline. This allows an organization to detect potential indicators of compromised security (e.g., where there is a change that differs from a standard baseline of the organization).
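Baseline tracking reduces to a field-by-field comparison. A minimal sketch, assuming baselines and observed configurations are flat key-value records:

```python
def diff_from_baseline(baseline: dict, observed: dict) -> dict:
    """Return fields whose observed value differs from the organization's
    baseline, mapped to (baseline_value, observed_value) pairs."""
    return {
        field: (baseline.get(field), observed.get(field))
        for field in set(baseline) | set(observed)
        if baseline.get(field) != observed.get(field)
    }

baseline = {"firewall": "on", "remote_login": "off", "disk_encryption": "on"}
observed = {"firewall": "on", "remote_login": "on", "disk_encryption": "on"}
drift = diff_from_baseline(baseline, observed)
```

Each entry in `drift` is a candidate indicator of compromise that would receive the "extra attention" described above.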
- FIG. 1 illustrates an example computing environment 100 for generating, implementing, and verifying a security program for an entity.
- the environment 100 includes a computer security management system 102, an organization security platform 104, a third party policy system 106, an auditor system 108, and a network 120.
- a plurality of users 110 associated with the organization security platform 104, who implement the security program, are also shown in FIG. 1.
- the computer security management system 102 is configured to generate, implement, verify, and otherwise manage security policies for a plurality of entities including an entity associated with the organization security platform 104.
- the computer security management system 102 is configured to convert an operational model and architecture to a cyber security configuration for an organization. Leveraging the security management model 130 and the aggregators, a security program is generated, implemented, verified, and managed at the organization security platform 104 as described in more detail below.
- the computer security management system 102 operates using a subscription model to a plurality of organizations including the organization associated with the organization security platform 104. In some examples, multiple tiers of service are offered via the subscription model.
- the organization security platform 104 is a platform that manages the computer security for an organization.
- the organization is a small or mid-sized company which leverages the computer security management system 102 to meet the organization’s security requirements more effectively (e.g., by meeting requirements that would otherwise not be met, reducing costs to implement a security program, implementing programs which improve employee understanding of the security program, etc.).
- the organization can be of any size and can include non-business entities (e.g., Government Organizations, Non-profits, etc.).
- the organization security platform keeps an inventory of all users and equipment in the organization/system catalog, along with configurations and business information, and provides this information to the computer security management system 102.
- the plurality of users 110 associated with the organization security platform 104 include users of various roles that interact with the entity's security program in different ways.
- FIG. 39 shows examples of the types of users of the organization security platform 104 and /or the computer security management system 102.
- the users interact with multiple computing devices (sometimes referred to herein as endpoints) that are managed by the organization. In some examples, these endpoints are automatically (and in some cases, alongside some manual steps) configured and managed using the systems and methods disclosed herein.
- the security management model 130 includes associations between operational requirements, computer security requirements, computer security standards and rules, implementation details, compliance rules and processes, etc.
- the implementation details include technical steps for implementing and verifying a security requirement, documentation explaining the implementation (or providing guidance) to stakeholders with different levels of expertise, technical details for monitoring the requirement, etc.
- the data model is organized to match high level operational decisions with documentation and implementation details for a variety of specific elements related to a security program.
- security management model 130 defines relationships between rules and regulations and instructions.
- the rules are integrated within instructions.
- the output is provided to a generative AI model which can generate code to automatically implement the elements of the computer security program.
- outputs from the security management model 130 include automations for monitoring the security program, evidence that is generated to prove a procedure was completed, and/or an automation that configures the organization security platform as specified by the security management model.
- the security management model 130 allows for the aggregation of large amounts of documentation optimized for different audiences.
- the aggregators 132 assemble the information for the different users.
- the information can include human readable instruction and executable code (which may be executed by the appropriate systems automatically).
- a dashboard displays all of this evidence for the organization. In this manner, policies and advisories are documents targeted for human consumption, while implementation and verification instructions are automated.
- the aggregators 132 receive a set of requirements and process the security management model 130 to generate output (e.g., artifacts).
- the output can include executive guidance, operational guidance, responsibility matrix, technical steps to implement the requirements, steps to produce a proof that the requirement is implemented correctly, code configured to automatically implement the requirement across the entity and/or to automatically verify and monitor that the requirement is met.
- the aggregators 132 process the security management model 130 to output information presented in a way that is most effective for the role of a user accessing the documentation.
- the aggregators can process security management model 130 to produce output that conveys the information to a variety of stakeholders with different perspectives and levels of expertise.
- the aggregators 132 are computational modules that are configured to generate certain types of output including different types of output by processing the security management model 130.
- the third party security policy system 106 is a third party publicly available system which stores computer security standards, legal requirements, etc.
- the computer security management system 102 interfaces with the third party security policy system to update the security management model 130. For example, if a standard or technology is updated by the third party security policy system 106 the computer security management system 102 will process these changes and automatically (or in some cases manually) update the security management model 130.
- the third party security policy system 106 includes a framework developed by a government run organization (e.g., the Department of Defense) that develops a set of cybersecurity standards and regulations (e.g., the Cybersecurity Maturity Model Certification - "CMMC").
- when regulatory bodies publish documents outlining rules (e.g., via a PDF stored on a server), the computer security management system can process these documents to update the security management model.
- the auditor system 108 interfaces with the organization’s security platform 104 to ensure the organization is meeting requirements.
- the auditor system 108 may interface with the computer security management system 102 to receive instructions on how to verify that the organization is meeting its requirements (without providing access to confidential information about the organization or the security platform).
- the auditor is a third party (e.g., hired by the organization to verify it is meeting its requirements) or government organization.
- an auditor may access the portal to view documentation.
- the network 120 can include one or more private and/or public networks. For example, parts of the network 120 may be included with a private network belonging to the organization and other parts of the network 120 may include a public network, such as the internet.
- the network allows for digital communication between various combinations of the computing systems illustrated in FIG. 1. Although only one network is shown in FIG. 1, many examples can include multiple networks, including public networks (e.g., the internet) and private networks (e.g., internal networks of an organization, cloud-based networks, or other private networks).
- FIG. 2 illustrates an example method 200 for generating, implementing, and verifying a security program for an entity.
- the method 200 is executed across the various systems illustrated in the environment 100 in FIG. 1.
- the method 200 includes the operations 202, 204, 206, 208, 210, and 212.
- the operation 202 is performed by an organization computing device 201
- the operations 204, 206, and 210 are performed by the computer security management system 102
- the operations 208 and 212 are performed by the organization security platform 104.
- the steps may be performed in different orders and/or by a different system or combination of systems.
- the organization computing device 201 receives inputs providing information about the organization.
- the information about the organization includes high level operational decisions.
- the organization computing device 201 displays the user interfaces illustrated in FIGs. 14-18 to a user and receives the information about the organization as user input.
- the computer security management system 102 generates a security program.
- the computer security management system 102 can generate the security program by processing the inputs describing the information about the organization.
- the security management model 130 (shown in FIG. 1) is used to convert high level operational decisions into the security program with specific assessment objectives.
- the security program includes policies related to assessment objectives, where the security program can include multiple assessment objectives.
- users provide information about their organization to an interface hosted via a web application (e.g., at the operation 202).
- the organization information may include details about the size of the organization, number of employees, types of devices, types of work being done, etc. This information is processed to determine a recommended set of policies which are defined by multiple assessment objectives.
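The mapping from organization details to a recommended policy set could be sketched as a simple rule table. The rules and policy names below are illustrative assumptions, not the system's actual logic:

```python
def recommend_policies(org_details: dict) -> list:
    """Map organization details to a recommended policy set (illustrative rules only)."""
    policies = ["acceptable use policy"]  # baseline recommendation for any organization
    if org_details.get("employees", 0) > 25:
        policies.append("role-based access control policy")
    if "mobile" in org_details.get("device_types", []):
        policies.append("mobile device management policy")
    if org_details.get("handles_cui"):
        policies.append("controlled unclassified information handling policy")
    return policies

recommended = recommend_policies(
    {"employees": 40, "device_types": ["laptop", "mobile"], "handles_cui": True}
)
```

Each recommended policy would in turn be defined by multiple assessment objectives, as the passage describes.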
- the computer security management system 102 generates implementation procedures.
- the implementation procedures define steps for implementing the security program.
- the security management model receives model inputs that include the information about the organization and the security program to generate outputs that include the implementation procedures.
- the computer security management system 102 determines one or more major activities to achieve the assessment objectives of the security program.
- the assessment objectives may be grouped by major activities.
- the objectives are grouped by a party or team responsible for each of the major activities.
- for each major activity, the computer security management system generates output to implement the major activity.
- the major activities include sets of program procedures which are generated via templates by leveraging the security management model 130, where the program procedure includes steps for implementing the major activity.
- the organization security platform 104 receives and executes the implementation procedures. In some examples, at least a portion of the steps for implementing the security program are performed automatically by computing systems of the organization security platform 104.
- the computer security management system 102 generates verification procedures defining steps for verifying assessment objectives of the security program.
- the security management model 130 receives model inputs that include the information about the organization and the security program to generate outputs that include the verification procedures defining the steps for verifying the assessment objectives of the security program.
- the computer security management system generates output to verify that the assessment objectives are met.
- the major activities include sets of program procedures which are generated via templates by leveraging the security management model 130, where the program procedures include steps for verifying that the assessment objectives are met.
- the organization security platform 104 receives and executes the verification program procedures. In some examples, at least a portion of the steps for verifying the assessment objectives of the security program are performed automatically by computing systems of the organization security platform 104. In some examples, the organization security platform 104 sends data (e.g., reports) to the computer security management system 102 and the computer security management system 102 conducts the verification checks.
- FIGs. 3-13 illustrate features of an example system for generating, implementing, verifying, and managing a security program of an entity according to the systems and method disclosed herein.
- the security program comprises a series of “major activities” that include groups of similar assessment objectives that can be implemented and/or verified together.
- Example major activities are shown in FIG. 21.
- Each major activity includes a program procedure comprising several steps that are required to meet the assessment objective or to move the entity closer to meeting the assessment objective.
- FIG. 3 illustrates example aggregators that can generate different types of output (sometimes referred to as “snippets”) generated by the security management model.
- the example shown includes 4 types of output (e.g., the outputs of the aggregators, sometimes referred to as artifacts).
- the types of output include a sub-activity type 302, an evidence type 304, a policy type 306, and an advisory type 308.
- each of the types are designed for a different system and/or audience.
- the output includes generated documentation and diagrams and figures illustrating the major activity.
- Some of the output is computer code (e.g., the technical language output can be code).
- the sub-activity type 302 includes output for implementing the major activity. In some examples, this includes technical instructions on how to implement the activity. In some embodiments, the output is provided to an implementer that uses the instructions to implement the major activity. In some examples, the implementer includes a model which processes the output to generate a program that automatically implements the major activity. In some embodiments, the sub-activity type 302 is output for users implementing the security program. The sub-activity type 302 includes technical instructions on how to implement the major activity.
- the evidence type 304 includes output for assessing the implementation of the policy. In some embodiments, output is provided to an assessor to verify that the implementation meets the objective. In some embodiments, the assessor includes a model that is designed to verify that the implementation meets the objective. In some examples, the evidence type 304 provides instructions on how to verify that the major activity is implemented correctly. In some examples, the evidence type 304 includes outputs that are required to show and verify that the implementation of the major activity meets the advisory guidelines or requirements. In some examples, the evidence type 304 includes output that is automatically provided to an auditor and/or advisory agency.
- the policy type 306 includes documentation defining the one or more policies associated with the major activity.
- the output from the policy guideline is automatically integrated within an entity policy, in a manner that it becomes accessible by stakeholders in the entity (e.g., employees) once the major activity is implemented.
- the policy guideline defines the rules for the users and the organization to follow.
- the policy type 306 outputs an acceptable use policy.
- the advisory type 308 includes information about the security program for the organization.
- the advisory type 308 serves as a catalog of information about the security program for the organization.
- FIG. 4 illustrates a chart for example aggregators for different audiences and the output generated by each of the example aggregators.
- this includes example program procedures for each of the types of output for a major activity of a security program.
- Example procedures include "actions" that are correlated across different aggregators.
- the example shown includes an example program procedures for each of the output types illustrated in FIG. 3.
- the program procedures are designed to provide structured output for each type of output.
- the program procedure defines steps that may be manually completed and/or automated by a corresponding system.
- the aggregators are designed for each of the major activities and generate a program procedure (comprising one or more steps) that are required to be performed to complete the major activity.
- program procedures are retrieved from storage to generate artifacts.
- the template provides instructions for automatically formatting steps that are defined by the program procedure.
- the solution includes an aggregator for each type of output, where the output is referred to as artifacts.
- the artifacts include generated code and/or documentation.
- the program procedure is a set of steps that are output from each of the output types and an individual element in a program procedure is referred to as an aggregator element.
- a program procedure has aggregator elements. These elements have different types. Aggregators identify a number of program procedures, and take out all the aggregator elements of their type. These elements are used to build an artifact of the corresponding type.
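The aggregation mechanism described above can be sketched directly. The procedure names, element types, and contents below are hypothetical examples:

```python
# Hypothetical program procedures: each is a list of typed "aggregator elements".
program_procedures = {
    "media-protection": [
        {"type": "sub_activity", "content": "Disable USB storage on all endpoints."},
        {"type": "evidence", "content": "Dashboard indicator: USB storage disabled."},
        {"type": "policy", "content": "Employees may not use portable storage devices."},
        {"type": "advisory", "content": "Explains why portable storage is restricted."},
    ],
    "access-control": [
        {"type": "sub_activity", "content": "Enforce MFA for all accounts."},
        {"type": "policy", "content": "All accounts must use MFA."},
    ],
}

def aggregate(procedures: dict, element_type: str) -> list:
    """An aggregator pulls every element of its type across the identified
    procedures and combines them into an artifact of that type."""
    artifact = []
    for elements in procedures.values():
        artifact += [e["content"] for e in elements if e["type"] == element_type]
    return artifact

policy_artifact = aggregate(program_procedures, "policy")
```

Running one aggregator per type over the same set of program procedures yields the four artifact types of FIG. 3 from a single underlying data model.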
- FIG. 5 illustrates an example step for an example program procedure.
- each output type has an example output related to a requirement to meet a compliance standard for blocking portable storage devices on end points. For example, restricting a machine from connecting with a flash drive.
- the sub activity type includes a step for accessing a product specific guide.
- code may be generated to automatically configure the machines to restrict connections from portable storage devices.
- the evidence type output provides a statement to an assessor showing that the entity has disabled portable storage devices, and where the assessor can go to verify the procedure.
- the evidence type output includes a dashboard with an indicator that a requirement is complete.
- the policy type provides instructions to the organization defining the requirement that employees are not allowed to use portable storage devices, and the acceptable use policy is updated to reflect the requirement.
- the advisory type defines the process and how the organization is addressing the requirement with the process.
- the sub activity type includes a block of code (e.g., JSON) that when executed by an endpoint is configured to block portable storage devices at that endpoint.
- the code depends on the endpoint (e.g., different code will be generated based on whether the endpoint runs a macOS, Windows, or Linux operating system).
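A minimal sketch of this per-endpoint generation step follows. It selects a configuration payload by operating system and emits it as JSON, as the sub-activity output above describes. The specific keys and values shown are illustrative approximations, not vendor-verified settings.

```python
import json

# Hedged sketch: generate an endpoint-specific configuration block (as JSON)
# that blocks portable storage, selecting the payload by operating system.
# The setting names below are illustrative, not vendor-verified values.

def generate_block_usb_config(os_name):
    if os_name == "windows":
        # Illustrative stand-in for a removable-storage deny policy.
        payload = {"policy": "RemovableStorageDevices", "Deny_All": 1}
    elif os_name == "macos":
        # Illustrative stand-in for a configuration-profile restriction.
        payload = {"profile": "media-mount-restrictions",
                   "mount-controls": {"harddisk-external": ["deny"]}}
    elif os_name == "linux":
        # Illustrative stand-in for blacklisting the USB storage driver.
        payload = {"modprobe_blacklist": ["usb_storage"]}
    else:
        raise ValueError(f"unsupported endpoint OS: {os_name}")
    return json.dumps(payload)

windows_config = generate_block_usb_config("windows")
```

In a real deployment the payload would be pushed through the endpoint management tooling in use; the point here is only the branching-by-endpoint-type behavior the specification describes.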
- FIG. 6 illustrates another example step for an example program procedure.
- the step addresses a requirement that a user of a specific role signs a controlled unclassified information (“CUI”) flow diagram.
- the advisory type defines the diagram, the requirement, to provide context to the user that is required to sign the document.
- the evidence type shows where an assessor can access the document signed by the user, the policy type defines the requirement to sign the document and the sub activity type defines a step to sign the document.
- FIG. 7 illustrates an example variant (e.g., an example of a decision that is tied to a set of related “snippets”) for an example step for an example program procedure.
- Variants allow program procedures as a whole to be swapped in and out of an organization’s program.
- Variants allow a user to modify and/or customize program procedures in the security program based on their requirements.
- individual program procedures may be swapped based on the security program for the organization. For example, organization decisions such as whether to allow portable storage, whether to allow employees to bring their own devices, and whether to allow remote work are determined, and the corresponding program procedure is swapped with a variant based on these decisions and accounted for within a program procedure.
- the evidence type output will indicate to an assessor that portable storage devices are allowed with required encryption to meet the corresponding standard (e.g., in this case by limiting the use of portable storage devices by requiring encryption).
- the other output types will also change accordingly, as shown in FIG. 7.
- the policy output also changes, e.g., including the acceptable use policy.
- advisory output is updated to inform the user on how this policy is configured and what team is responsible for managing and implementing the procedure.
- the parts of the program procedure are swapped automatically to update the documentation and implementation within the organization.
- program procedures are swapped out of a security program as a whole.
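The variant-swapping behavior above can be sketched as a lookup from an organization decision to the program procedure that answers it. The decision names, values, and procedure titles below are invented for illustration; the specification does not prescribe this data structure.

```python
# Illustrative sketch of swapping a program procedure for a variant based on
# an organization decision (e.g., whether portable storage is allowed).
# The (decision, value) -> procedure mapping is hypothetical.

def select_procedure(variants, decisions):
    """Pick the variant whose condition matches the organization's decisions."""
    for (decision, value), procedure in variants.items():
        if decisions.get(decision) == value:
            return procedure
    raise LookupError("no variant matches the organization's decisions")

variants = {
    ("portable_storage", "banned"): "Disable removable media on all endpoints",
    ("portable_storage", "allowed_encrypted"): "Require encryption on removable media",
}

# An organization that allows encrypted portable storage gets the
# encryption-requirement variant swapped in.
procedure = select_procedure(variants, {"portable_storage": "allowed_encrypted"})
```

Because the procedure carries all of its output types with it (policy, evidence, advisory, sub-activity), swapping the variant swaps the whole documentation and implementation set at once, as described above.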
- FIG. 8 illustrates options for example variants for a security program (e.g., an example of “snippet” groups).
- a user provides information about their organization and different variants are selected based on the information about the organization. For example, a user may request that employees can work from home and various variants are automatically selected to allow for employees to securely work from home.
- FIG. 8 also illustrates multiple program procedures interfacing with each other.
- FIG. 9 illustrates an example template for an authoritative negative variant for a security program. This example shows one way of meeting a contractual or regulatory requirement and adapting the security program to that requirement. For example, an authoritative negative is when there is an assessment objective that does not apply to a particular user/organization.
- FIG. 11 illustrates an example product agnostic step for an example program procedure.
- This is an example of a design decision for the security program that demonstrates a way to group decisions dependent on a type of product or technology. For example, different firewall devices require different types of support.
- the program procedure is product agnostic and links are provided to product standard operating procedures (SOPs).
- An example sub activity type output defining a procedure for disabling portable storage devices in different endpoint products is illustrated in FIG. 12.
- An example evidence type output defining a procedure for verifying that the endpoints have disabled portable storage devices is illustrated in FIG. 13.
- FIG. 12 illustrates example product specific procedures (e.g., instructions) for the sub-activity type. This example illustrates different procedures for disabling portable storage media on Windows and Mac devices.
- FIG. 13 illustrates example product specific procedures for the evidence presentation type. This example illustrates steps for verifying whether portable storage media are disabled on Windows and Mac devices.
- FIGs. 14-18 illustrate user interfaces for selecting a security program.
- high level and non-technical questions are provided to a user who can enter information about their organization. The answers to these questions may trigger further related questions based on what is selected.
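The branching questionnaire described above can be sketched as a small question graph, where certain answers enqueue follow-up questions. The question identifiers, wording, and trigger structure below are invented for illustration.

```python
# Minimal sketch of the question flow: an answer can trigger related
# follow-up questions. Questions and triggers are hypothetical.

QUESTIONS = {
    "remote_work": {"text": "Do employees work from home?",
                    "follow_ups": {"yes": ["vpn_in_use"]}},
    "vpn_in_use": {"text": "Is a VPN already in use?", "follow_ups": {}},
}

def ask_all(answers, start="remote_work"):
    """Return the ordered list of question ids asked, given pre-set answers."""
    asked, queue = [], [start]
    while queue:
        qid = queue.pop(0)
        asked.append(qid)
        answer = answers.get(qid)
        # Only answers with registered follow-ups enqueue further questions.
        queue.extend(QUESTIONS[qid]["follow_ups"].get(answer, []))
    return asked

asked = ask_all({"remote_work": "yes", "vpn_in_use": "no"})
```

Answering "yes" to remote work pulls in the VPN question; answering "no" ends the flow after one question, mirroring the selection-dependent questioning described above.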
- a security program with assessment objectives is generated.
- selections may trigger a recommendation to consult an expert or an on-site service team. For example, based on a potential level of complexity that requires a more personalized level of service.
- the recommendation includes a suggestion to subscribe to a higher membership tier to access personalized services.
- FIG. 14 illustrates an example user interface for receiving information used to generate a security program.
- a user selects descriptions that apply to their organization.
- additional options shown in FIG. 15
- This example user interface is presented to a user to provide decision support while collecting organization information (e.g., as an input for the inputs about an organization 5504 shown in FIG. 55).
- FIG. 15 illustrates another example user interface for receiving information used to generate a security program.
- in this user interface, a user selects elements that are present in the office building. In some examples, this figure is presented based on the selections in the user interface shown in FIG. 14.
- a user provides details about their organization and the computer security management system proposes variants, which a user can select. Once the selections are received the computer security management system generates the security program.
- FIG. 16 illustrates an example user interface that is presented when a user is recommended to upgrade their service level for a security platform.
- having a virtual machine requires a user/organization to upgrade to a higher service level.
- This is another example user interface that is presented to provide decision support to a user.
- FIG. 17 illustrates an example user interface presenting a recommendation to use an on-site support team to complete one or more steps in a security activity and/or procedure.
- in this user interface, the user/organization is recommended to use an on-site support team to complete specific tasks.
- This user interface presents information related to regulatory support (e.g., to highlight to the user that the current program is missing actions required to meet a regulatory/contractual obligation).
- FIG. 18 illustrates an example user interface to confirm selections for a security program.
- FIG. 19 illustrates example generated output (e.g., diagrams built by the computer security management system 102 shown in FIG. 1) from the computer security management system.
- the aggregators illustrated in FIG. 1 generate the output providing visualizations of different aspects of a computer security program.
- the output is referred to as artifacts.
- FIG. 20 illustrates example generated output (an aggregated advisory on how to draw a diagram) from the computer security management system.
- FIG. 20 provides instructions to an implementer for generating a network diagram.
- an algorithm is used to automate the completion of some or all of the steps for creating a network diagram.
- the example generated output (e.g., artifact) can be generated using an aggregator and can contain steps of or from multiple program procedures.
- FIG. 21 is an example database of assessment objectives gathered from a governing body (CMMC).
- the security program includes multiple assessment objectives that are assigned to a major activity.
- the major activities include groups of assessment objectives that require similar implementation steps.
- the steps are referred to as program procedures.
- FIG. 21 illustrates an example of a data structure to link requirements to an aggregator.
- the right hand column includes a mapping of assessment objectives (e.g., rules) to the major activities for which procedures are written to address the assessment objectives.
- the steps illustrated are procedures, which are an element of program procedures.
- a map of major activities is illustrated in FIG. 32.
- the artificial intelligence can include a directed graph that uses reinforcement learning and clustering coefficients to broadly link system configurations to policy decisions to make recommendations for configurations, generate documentation, and optimize cyber programs.
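One of the graph signals mentioned above, the clustering coefficient, can be computed locally per node: how densely a node's neighbors link to each other. The sketch below uses a tiny directed adjacency map whose node names (configurations and policies) are invented for illustration; the specification does not define the graph's contents.

```python
# Sketch of one signal mentioned above: the local clustering coefficient
# of a node in a directed graph (linked neighbor pairs / possible pairs).
# The adjacency data linking configurations to policies is hypothetical.

def clustering_coefficient(graph, node):
    """Directed local clustering: count ordered neighbor pairs (a, b)
    with an edge a -> b, normalized by k * (k - 1)."""
    neighbors = graph[node]
    k = len(neighbors)
    if k < 2:
        return 0.0
    links = sum(1 for a in neighbors for b in neighbors
                if a != b and b in graph.get(a, set()))
    return links / (k * (k - 1))

graph = {
    "usb_block_config": {"acceptable_use_policy", "media_policy"},
    "acceptable_use_policy": {"media_policy"},
    "media_policy": set(),
}

# One of the two ordered neighbor pairs of usb_block_config is linked.
score = clustering_coefficient(graph, "usb_block_config")
```

A higher score suggests a configuration sits in a tightly interrelated cluster of policy decisions, which is one plausible way such a coefficient could feed recommendations.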
- the instruction generator uses a template to generate the instructions and/or documentation.
- the program procedure contains a variant option.
- a variant is a different operational decision that could apply to a different organization.
- the variant option indicates which choice the program procedure answers.
- a different program procedure would describe the step when a different choice is selected. That other program procedure would be assigned the other option, and swapped out with this one, depending on the choice.
- the program procedure also contains snippets of text to complete different documents that are needed for the program. In this instance, it contains a snippet for an “advisory article,” a document that explains to organization stakeholders what is happening in this activity.
- the program procedure can include sub-activity steps.
- the sub activity step is a link or links to a step or series of steps that describe in technical detail what needs to occur.
- users will not see the sub-activity step (e.g., instead the steps are automated and/or implemented by an implementation team).
- the user can see the sub-activity steps (e.g., by purchasing a premium level subscription service).
- the program procedure can also include evidence steps.
- the evidence steps include a link to an evidence record stored in memory, and a snippet that describes how to gather that evidence, whether it is located in the system or if it is a file saved somewhere, as well as a short description of how that evidence meets or helps meet the assessment objective.
- the program procedure also includes an associated output describing policies that apply to the organization if the program procedure is selected.
- the policies contain both organizational policies, and user facing or acceptable use policies that individual employees can be required to sign.
- Organizational policies can be technical and cover procedures that may be required to be followed.
- the acceptable use policies apply to various users, and focus more on rules that apply to the users.
- the program procedures link to supplemental guidance; for example, if a program procedure discusses destroying media, a link to NIST 800-88, which describes guidelines for destroying media, is provided.
- each program procedure covers one specific activity, but tracks all the related information that applies to that specific activity. This related information can be gathered in different ways, so if a user needs to see the entire organization’s policy, the system grabs the relevant policies off every program procedure.
- the program procedures can be added or removed based on the specifics of the organization. So, if something does not apply, the system can remove the program procedure, along with policies that do not apply, evidence that does not apply, and/or procedures that do not apply. In some examples, a user provides selections of one or more program procedures to apply.
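The trimming-and-gathering behavior above can be sketched together: procedures that do not apply to the organization are dropped, and the organization-wide policy is assembled from the procedures that remain. The applicability flags and policy text below are hypothetical.

```python
# Sketch of tailoring a program to the organization: non-applicable
# procedures are removed, and the organization policy is gathered off
# every remaining program procedure. Field names are hypothetical.

def build_program(procedures, organization):
    applicable = [p for p in procedures
                  if p.get("applies_if") is None
                  or organization.get(p["applies_if"], False)]
    policy = [p["policy"] for p in applicable if "policy" in p]
    return applicable, policy

procedures = [
    {"name": "Secure home offices", "applies_if": "remote_work",
     "policy": "Home routers must use WPA2 or better."},
    {"name": "Lock server room", "applies_if": None,
     "policy": "Server room doors stay locked."},
]

# An organization with no remote work drops the home-office procedure
# and its policy together.
applicable, policy = build_program(procedures, {"remote_work": False})
```

Because the policy snippet travels with its procedure, removing a procedure automatically removes the policy text that no longer applies, matching the behavior described above.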
- assessment objectives are visualized by who is responsible (e.g., organization, third party consultant, specific users, etc.). These assessment objectives can also be grouped differently depending on how an organization wants them visualized.
- the “domain” view shows every assessment objective by its original domain that the Cybersecurity Maturity Model Certification (CMMC) program organized them by.
- the assessment objectives can also be sorted by “outcome” or “capability.” In some examples, different levels of service provided by the computer security management system include more or less rules.
- An example program activity flow diagram is illustrated in FIG. 32. The example shown maps the major activities in the order in which they should occur. As discussed herein, a major activity contains multiple related program procedures.
- the system tracks the folder structure that the evidence will be stored in, so that evidence files are consistent and locatable. This also means that when the evidence is handed to an assessor, it is all in one place, so each file can be hashed at the time of the assessment to prove that nothing has been modified.
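The hash-at-assessment-time step above can be sketched with the standard library: walk the evidence folder and record a SHA-256 digest per file, which fixes the evidence state for later verification. The folder contents here are throwaway demonstration data.

```python
import hashlib
import os
import tempfile

# Sketch of fixing evidence at assessment time: hash each file in the
# evidence folder so an assessor can later prove nothing was modified.

def hash_evidence_folder(folder):
    """Return {filename: SHA-256 hex digest} for every file in `folder`."""
    digests = {}
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if os.path.isfile(path):
            with open(path, "rb") as fh:
                digests[name] = hashlib.sha256(fh.read()).hexdigest()
    return digests

# Demonstration with a throwaway folder and a single evidence file.
with tempfile.TemporaryDirectory() as folder:
    with open(os.path.join(folder, "endpoint_config.txt"), "w") as fh:
        fh.write("portable storage: disabled")
    digests = hash_evidence_folder(folder)
```

Re-running the same hashing at a later date and comparing digests would reveal any modification, which is the integrity property the consistent folder structure is meant to support.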
- some implementations include a program builder. For each program, there is a feature to select variants assigned to that program, and a way to publish program procedures and version them, so existing organizations can get new program procedures as they are developed.
- some implementations include a variant selector for a specific program.
- FIG. 36 shows part of a CUI flow diagram.
- the CUI Flow diagram was designed in a way that depending on certain variants, portions of the diagram (slides) can be removed that do not apply to a specific program. For example, if a program did not have any local networks, and everyone worked remotely, this slide would be deleted. In some examples, the CUI flow diagram is generated automatically.
- A table illustrating a portion of the user, process, and asset inventory is shown in FIG. 37. This table can be tracked in a database, tracks attributes regarding items in a program, and those attributes line up with assessment objectives required by CMMC (see top row). It tracks all sorts of items, like people, devices, locks and keys, and buildings (sites).
- FIG. 38 shows a rendition of a system catalog.
- a system catalog would contain different system baselines (like Windows, Mac, SonicWall, etc.) and figure out which question sets apply based on what mechanism they are (endpoint, boundary protection device, etc.). In some examples, the question sets assist with meeting the assessment objectives of the security program.
- FIG. 39 shows types of users of the system.
- the plurality of users 110 illustrated and described in reference to FIG. 1.
- FIG. 40 shows an example list of variants.
- a first user selects high level operational decisions and then an expert user selects variants and subvariants for the various achievement objectives generated based on the high level operational decisions.
- the application includes a list of pointers to the appropriate procedures in the library, where the aggregators will run for that organization.
- the aggregators determine the program procedures that will be used to implement the security program.
- program procedures are aggregated that point to different product types in the list of products.
- FIG. 41 is a diagram illustrating processes and features disclosed herein.
- FIG. 42 is a diagram illustrating a database schema used by the aggregators.
- FIG. 43 shows an example of a computing device 4000 and an example of a mobile computing device that can be used to implement the techniques described here.
- the computing device 4000 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.
- the mobile computing device is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart-phones, and other similar computing devices.
- the components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
- the computing device 4000 includes a processor 4002, a memory 4004, a storage device 4006, a high-speed interface 4008 connecting to the memory 4004 and multiple high-speed expansion ports 4010, and a low-speed interface 4012 connecting to a low-speed expansion port 4014 and the storage device 4006.
- Each of the processor 4002, the memory 4004, the storage device 4006, the high-speed interface 4008, the high-speed expansion ports 4010, and the low-speed interface 4012 are interconnected using various busses, and can be mounted on a common motherboard or in other manners as appropriate.
- the processor 4002 can process instructions for execution within the computing device 4000, including instructions stored in the memory 4004 or on the storage device 4006 to display graphical information for a GUI on an external input/output device, such as a display 4016 coupled to the high-speed interface 4008.
- multiple processors and/or multiple buses can be used, as appropriate, along with multiple memories and types of memory.
- multiple computing devices can be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
- the memory 4004 stores information within the computing device 4000.
- the memory 4004 is a volatile memory unit or units.
- the memory 4004 is a non-volatile memory unit or units.
- the memory 4004 can also be another form of computer-readable medium, such as a magnetic or optical disk.
- the storage device 4006 is capable of providing mass storage for the computing device 4000.
- the storage device 4006 can be or contain a computer- readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations.
- a computer program product can be tangibly embodied in an information carrier.
- the computer program product can also contain instructions that, when executed, perform one or more methods, such as those described above.
- the computer program product can also be tangibly embodied in a computer- or machine-readable medium, such as the memory 4004, the storage device 4006, or memory on the processor 4002.
- the high-speed interface 4008 manages bandwidth-intensive operations for the computing device 4000, while the low-speed interface 4012 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only.
- the high-speed interface 4008 is coupled to the memory 4004, the display 4016 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 4010, which can accept various expansion cards (not shown).
- the low-speed interface 4012 is coupled to the storage device 4006 and the low-speed expansion port 4014.
- the low-speed expansion port 4014 which can include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) can be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
- the computing device 4000 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a standard server 4020, or multiple times in a group of such servers. In addition, it can be implemented in a personal computer such as a laptop computer 4022. It can also be implemented as part of a rack server system 4024. Alternatively, components from the computing device 4000 can be combined with other components in a mobile device (not shown), such as a mobile computing device 4050. Each of such devices can contain one or more of the computing device 4000 and the mobile computing device 4050, and an entire system can be made up of multiple computing devices communicating with each other.
- the processor 4052 can execute instructions within the mobile computing device 4050, including instructions stored in the memory 4064.
- the processor 4052 can be implemented as a chipset of chips that include separate and multiple analog and digital processors.
- the processor 4052 can provide, for example, for coordination of the other components of the mobile computing device 4050, such as control of user interfaces, applications run by the mobile computing device 4050, and wireless communication by the mobile computing device 4050.
- the external interface 4062 can provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces can also be used.
- the memory 4064 stores information within the mobile computing device 4050.
- the memory 4064 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units.
- An expansion memory 4074 can also be provided and connected to the mobile computing device 4050 through an expansion interface 4072, which can include, for example, a SIMM (Single In Line Memory Module) card interface.
- the expansion memory 4074 can provide extra storage space for the mobile computing device 4050, or can also store applications or other information for the mobile computing device 4050.
- the expansion memory 4074 can include instructions to carry out or supplement the processes described above, and can include secure information also.
- the expansion memory 4074 can be provided as a security module for the mobile computing device 4050, and can be programmed with instructions that permit secure use of the mobile computing device 4050.
- secure applications can be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
- the memory can include, for example, flash memory and/or NVRAM memory (non-volatile random access memory), as discussed below.
- a computer program product is tangibly embodied in an information carrier.
- the computer program product contains instructions that, when executed, perform one or more methods, such as those described above.
- the computer program product can be a computer- or machine-readable medium, such as the memory 4064, the expansion memory 4074, or memory on the processor 4052.
- the computer program product can be received in a propagated signal, for example, over the transceiver 4068 or the external interface 4062.
- the mobile computing device 4050 can communicate wirelessly through the communication interface 4066, which can include digital signal processing circuitry where necessary.
- the communication interface 4066 can provide for communications under various modes or protocols, such as GSM voice calls (Global System for Mobile communications), SMS (Short Message Service), EMS (Enhanced Messaging Service), or MMS messaging (Multimedia Messaging Service), CDMA (code division multiple access), TDMA (time division multiple access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), among others.
- a GPS (Global Positioning System) receiver module 4070 can provide additional navigation- and location- related wireless data to the mobile computing device 4050, which can be used as appropriate by applications running on the mobile computing device 4050.
- the mobile computing device 4050 can also communicate audibly using an audio codec 4060, which can receive spoken information from a user and convert it to usable digital information.
- the audio codec 4060 can likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device 4050.
- Such sound can include sound from voice telephone calls, can include recorded sound (e.g., voice messages, music files, etc.) and can also include sound generated by applications operating on the mobile computing device 4050.
- the mobile computing device 4050 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a cellular telephone 4080. It can also be implemented as part of a smart-phone 4082, personal digital assistant, or other similar mobile device.
- Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
- These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
- machine-readable medium and computer-readable medium refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine- readable medium that receives machine instructions as a machine-readable signal.
- machine-readable signal refers to any signal used to provide machine instructions and/or data to a programmable processor.
- the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer.
- Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
- the systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components.
- the components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
- the computing system can include clients and servers.
- a client and server are generally remote from each other and typically interact through a communication network.
- the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
- FIGs. 44-54 illustrate an example implementation for systems and methods that aid in the creation and operation of a cybersecurity program.
- Example implementation #1 includes methods and systems for creating and operating procedure, policy, and evidence for cyber security programs. Some examples include systems and methods for creating and operating a cybersecurity program to include documenting procedure, policy, and evidence among other artifacts.
- a method for creating and operating a cybersecurity program is disclosed. The method includes receiving inputs providing information about the organization and creating steps to operate the program (procedures), rules the organization must abide by (policies), and verification the steps were completed properly (evidence) among other artifacts.
- FIG. 44 illustrates a group of related program procedures.
- the group is referred to as an “activity”
- Each row contains the contents of one procedure, and each column shows the corresponding aggregator.
- FIG. 45 illustrates a user interface that allows a certain user (in this case, the “facility security officer”) to see which portions of the activity are their responsibility, along with detailed descriptions of what it is they need to complete.
- FIG. 46 illustrates a user interface that allows a certain user (in this case, the “Govern Team” member) to see which portions of the activity are their responsibility, along with detailed descriptions of what it is they need to complete.
- FIG. 46 only shows the procedures associated with the variant the organization selected.
- FIG. 47 provides another example of the interface shown in FIG. 3, where the organization has selected to ban portable storage. In some examples, the instructions provided to the Govern Team member have changed as a result of that selection by the organization.
- FIG. 48 shows a map of major activities, or groups of program procedures by related implementation
- example implementation 1 describes systems and methods that aid in the creation and operation of a cybersecurity program.
- the system maintains a large repository of program procedures that allows an entity to access the system, input information about the entity, and receive a set of program procedures that is curated to the organization.
- the computer security management system creates a set of views that is intended to be helpful for different audiences and systems. Relevant procedures are aggregated, and displayed with steps to be performed (in some examples, automatically) in order to configure and operate the program, as well as instructions on how to generate the evidence.
- the policies are aggregated, and displayed as an organization policy (for example, so they can be signed by members of the team).
- the evidence is gathered and displayed in a manner that allows an auditor system to quickly run through and assess each procedure individually. In example implementation 1, the gathering of these items is sometimes referred to as “aggregation,” and the items are grouped by aggregators (e.g., procedures, policy, evidence).
- An aggregator can be any collection of information for a set of procedures. Examples include: variant (e.g., maps the procedure to a group of similar procedures that would apply to a specific entity scenario), advisory (e.g., a collection of information that provides a high-level description of the purpose of the procedure), auditing (e.g., a collection of questions that provide quality control against procedures, and gathers the answers to those questions automatically), inventory (e.g., maps the procedure to a characteristic of an item that needs to be inventoried), system catalog (e.g., maps the procedure to a characteristic of a system that needs to be configured), schedule (e.g., identifies how often the procedure needs to happen, and schedules it automatically), responsibility (e.g., identifies who is responsible for completing the procedure), and rules (e.g., maps the procedure to a restriction or requirement that a regulation has).
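The aggregator views described above can be modeled as a lookup across a shared set of procedures. The following is a minimal sketch in Python; the `Procedure` class, field names, and example data are hypothetical illustrations, not taken from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Procedure:
    """One program procedure with per-aggregator information attached."""
    proc_id: str
    aggregators: dict = field(default_factory=dict)  # aggregator name -> info

def aggregate(procedures, aggregator_name):
    """Collect one aggregator's information across a set of procedures."""
    return {p.proc_id: p.aggregators[aggregator_name]
            for p in procedures
            if aggregator_name in p.aggregators}

# Hypothetical example data: two media-protection procedures.
procs = [
    Procedure("MP-01", {"responsibility": "Facility Security Officer",
                        "schedule": "quarterly"}),
    Procedure("MP-02", {"responsibility": "Govern Team",
                        "rules": ["ban portable storage"]}),
]
by_owner = aggregate(procs, "responsibility")
```

Each aggregator name then yields a different view over the same procedure set, which is the behavior the responsibility and schedule examples describe.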
- Program procedures can be organized in different ways, depending on the requirements of the system. For example, if the user or system is an implementer, program procedures can be grouped by “major activity.” This groups program procedures so that similar tasks can be completed at the same time. In another example, if the user or system is an assessor, program procedures can be grouped by rules to expedite the assessment process.
- [0149] In some examples, the assessment process includes a step to map procedures, where the computer security management system maps the organization’s procedures and correlates them to rules (in some examples using machine learning to automatically detect relationships between procedures and rules). Because the system maintains a relationship between procedures and rules, this task is already completed before the assessment, allowing the organization to see that the computer security management system and/or Organization Security Platform have implemented all of the tasks needed for compliance before an assessment happens.
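The pre-assessment mapping described above lends itself to a simple coverage check: since the procedure-to-rules relationship is maintained, the system can report covered and missing rules before an assessor arrives. A hedged sketch follows; the rule identifiers and function names are hypothetical, and the disclosure's machine-learning mapping is not modeled:

```python
def compliance_coverage(procedure_to_rules, required_rules):
    """Given a maintained procedure -> rules mapping and the rules a
    regulation requires, report which rules are covered and which are not."""
    covered = set()
    for rules in procedure_to_rules.values():
        covered.update(rules)
    required = set(required_rules)
    return covered & required, required - covered

# Hypothetical mapping maintained before the assessment.
mapping = {"MP-01": {"3.8.1"}, "MP-02": {"3.8.1", "3.8.7"}}
covered, missing = compliance_coverage(mapping, {"3.8.1", "3.8.7", "3.8.8"})
# any missing rules reveal gaps before the assessment happens
```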
- the computer security management system is configured to be continuously updated, for example by removing outdated program procedures and replacing them with new program procedures. As discussed above, program procedures have all of their dependencies associated with them. This allows the computer security management system to change by removing and adding procedures without affecting the others or overlooking certain portions during updates.
- Examples of sources of change that may call for the replacement, addition, or removal of a program procedure include: (1) platform-level changes to the computer security management system (e.g., based on new best practices, more efficient procedures, and/or to add support for more scenarios); (2) user-initiated changes (e.g., a customer may submit a request for a change, such as to customize a process or add a new procedure); (3) changes in technology (e.g., a transition to cloud computing); (4) newly identified vulnerabilities (e.g., as vulnerabilities are discovered in existing and new systems, procedures need to change to address them; for example, the system examines inventories to check for devices with newly discovered hardware vulnerabilities and updates procedures for those devices); and (5) newly identified threats (for example, updating procedures to block sign-ins from certain IP address spaces better protects against identified threats).
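The dependency-local update described above can be sketched as a repository operation: because each procedure carries its own dependencies, replacing one entry leaves every other entry untouched. All names below are illustrative assumptions:

```python
def replace_procedure(repository, old_id, new_procedure):
    """Swap one procedure for a replacement. Because every procedure
    carries its own dependencies, no other entry needs to change."""
    if old_id not in repository:
        raise KeyError(old_id)
    del repository[old_id]
    repository[new_procedure["id"]] = new_procedure

# Hypothetical repository with two independent procedures.
repo = {
    "MP-02": {"id": "MP-02", "deps": ["usb-lockdown-guide"], "rev": 1},
    "AC-01": {"id": "AC-01", "deps": ["mfa-setup-guide"], "rev": 3},
}
replace_procedure(repo, "MP-02",
                  {"id": "MP-02", "deps": ["usb-lockdown-guide-v2"], "rev": 2})
# AC-01 is untouched; MP-02 and everything tied to it is swapped wholesale
```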
- the computer security management system can maintain a database of procedures and a database of industry products.
- the entries in the database can contain a security impact analysis (SIA) and implementation guidance.
- the security impact analysis is an analysis of a product to determine the potential risks associated with its use.
- Examples of the things that can be analyzed about a product include infiltrations of that product (e.g., when was the attack first noticed, was the attack noticed internally or externally, how long after the attack was a publication put out, how was the attack handled, how much transparency does the organization have about the attack), methods used to protect that product (e.g., when signing in, does the product use Multi-Factor Authentication, does the product use encryption, are security patches released frequently for the product), and/or transparency around the product (e.g., is the product open source, has the product gone through any third-party audits, inspections, or assessments).
- After the analysis, the computer security management system provides an interpreted risk level for use of a product. This allows organization computing systems to interface with the computer security management system to determine information about a particular product.
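One way the interpreted risk level could be derived from the analyzed SIA factors is a simple weighted score. The factors, weights, and thresholds below are purely illustrative assumptions, not the disclosed method:

```python
def interpreted_risk_level(sia):
    """Map security impact analysis findings to a coarse risk level.
    Factors and thresholds are illustrative only."""
    score = 0
    score += 2 * sia.get("known_infiltrations", 0)  # past attacks weigh heavily
    if not sia.get("uses_mfa", False):
        score += 2
    if not sia.get("uses_encryption", False):
        score += 2
    if not sia.get("frequent_patches", False):
        score += 1
    if not sia.get("third_party_audited", False):
        score += 1
    if score >= 5:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

# Hypothetical product with strong protections and no known infiltrations.
level = interpreted_risk_level({"uses_mfa": True, "uses_encryption": True,
                                "frequent_patches": True,
                                "third_party_audited": True})
```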
- the computer security management system provides guides on how to use products to implement the procedures described in the program procedures. For example, a procedure regarding “disabling portable storage media on endpoints” corresponds with guides on how to do so for multiple providers of endpoints.
- Major activities are mapped in FIG. 48. They can be organized in a certain order for an organization to perform during setup, and later perform routinely.
- FIG. 49 displays an example architecture of the computer security management system.
- the system communicates securely with organization cloud systems to facilitate implementing procedures, and to verify that the implemented procedures are functioning properly.
- the computer security management system contains automation to conduct quality control checks on organization systems.
- the computer security management system provides a user interface for organization members to sign in and input or receive information.
- the assessor communicates via internet video call to conduct assessments, or conducts assessments in person.
- the assessor can also sign in to the portal to view evidence and procedures easily.
- FIG. 50 shows how program procedures relate to organizations. Organizations select the specific program procedures that apply to them, and those program procedures are aggregated into documentation for that organization. Different organizations use different program procedures depending on their business models, requirements, and preferences. This results in tailored documentation for different organizations that is easy to modify as organizations evolve, since program procedures can be added or removed using the computer security management system.
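The selection of organization-specific procedures from a shared catalog can be sketched as a profile match. The `applies_if` field and example data are hypothetical, used only to illustrate the tailoring behavior described above:

```python
def tailor_program(catalog, org_profile):
    """Select the procedures whose applicability conditions match the
    organization's profile (business model, requirements, preferences)."""
    return [p for p in catalog
            if all(org_profile.get(k) == v
                   for k, v in p["applies_if"].items())]

# Hypothetical shared catalog; an empty condition applies to everyone.
catalog = [
    {"id": "MP-02", "applies_if": {"portable_storage": "banned"}},
    {"id": "MP-03", "applies_if": {"portable_storage": "allowed"}},
    {"id": "AC-01", "applies_if": {}},
]
selected = tailor_program(catalog, {"portable_storage": "banned"})
```

Two organizations with different profiles receive different procedure sets from the same catalog, which is the relationship FIG. 50 depicts.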
- systems and methods for creating and maintaining security procedures for organizations are disclosed.
- methods for creating and maintaining security procedures are disclosed. The method includes receiving inputs that provide information about an organization, including the regulations the organization must abide by, and outputting procedures that meet the regulations and are tailored to the organization. Additionally, the methods output evidence to prove to regulators that regulations are being met. The methods accomplish this by bundling snippets of procedure, evidence, and company policy and providing only certain snippets to a requesting system.
- FIG. 51 illustrates an example major activity.
- FIG. 52 illustrates an example media protection program procedure.
- FIG. 53 illustrates an example group of media protection program procedures.
- FIG. 54 illustrates example flow diagrams in accordance with the examples described in example implementation 1.
- Example Implementation 2 describes systems and methods for generating, implementing, verifying, operating and updating a security program for organizations.
- Security programs can include security procedures, security policies, security evidence, and other security systems, frameworks, and processes.
- a method for generating, implementing, and verifying a security program for an organization includes gathering inputs about best practices, organization, technologies, regulations, vulnerabilities, and threats, and generating the security program by processing the inputs.
- the outputs of the cybersecurity program generator include policies, advisories, instructions, actions, logs, audits, inventories, evidences, and reports.
- FIG. 55 illustrates an example of the cybersecurity program generation environment 5500 for generating, implementing, and verifying a security program.
- the environment 5500 includes gathering inputs about best practices 5502, the organization 5504, technologies 5506, regulations 5508, vulnerabilities 5510, and threats 5512.
- FIG. 55 also illustrates the Cybersecurity Program Generator 5514, as well as its internal workings: develop actions 5516, establish policies 5518, communicate advisories 5520, distribute instructions 5522, execute actions 5524, maintain inventories 5526, collect logs 5528, document evidences 5530, conduct audits 5532, and build reports 5534.
- the Cybersecurity Program Generator 5514 is configured to generate, implement, and verify a cybersecurity program for an organization.
- the cybersecurity program generator 5514 is configured to gather inputs 5502-5512 and interpret them to develop actions 5516. Once the actions are developed, they are then used to establish policies 5518 for the organization, communicate advisories 5520 to the organization, and distribute instructions 5522 to members of the organization that need to complete actions. This leads to execute actions 5524, where actions are taken either by the generator or by individuals. Actions describe any activity performed in the pursuit of managing or supporting a cybersecurity program. After actions are taken, they are logged, and those logs are collected at 5528 so that the generator may conduct audits 5532 to ensure the program is running as it is supposed to.
- Actions may necessitate maintaining inventories 5526, which the system may update automatically depending on the action taken. Inventories and audits, among other things, are then documented as evidences 5530. Audits are also used to build reports 5534 for the organization and for the generator so that further required actions are taken, either automatically by the system or by individuals, and discrepancies are corrected.
- [0173] In the example shown, there are six types of inputs grouped by a source of change.
- the best practices 5502 inputs include inputs defining cyber security options.
- the organization 5504 inputs include inputs defining a change in organizational practices and organizational requirements (e.g., a requirement to move large files between non-networked devices).
- the technologies 5506 inputs define options for technologies (e.g., as may be updated as new technologies/devices are integrated).
- the regulations 5508 inputs include regulatory requirements (e.g., a contractual requirement to protect data on flash drives). In some examples, the regulations 5508 include contractual requirements.
- Vulnerabilities 5510 and threats 5512 include inputs defining or describing external risks to a cyber security program.
- [0174] In some examples, inputs 5502 and 5508 influence what actions are picked (e.g., influence a list of solutions). In some examples, the inputs 5502 and 5506 influence what choices are available for the security program (e.g., influence a list of requirements). In some examples, the inputs 5510 and 5512 determine what the potential risks are for the organization (e.g., adversarial externalities, and/or a list of risks or problems with the solutions). The inputs combine into a set of decisions at 5516.
- a business may have a requirement to move large files between non-networked devices and a contractual requirement to protect data on flash drives.
- the actions developed at 5516 will consider using flash drives to move files between non-networked devices but will need to consider the contractual requirement (e.g., by developing encryption and protocols for managing the flash drives).
- Several actions can be identified to mitigate risk when using the flash drives.
- In some examples, the business does not have a requirement to use flash drives, and the actions identified would include actions for not allowing the use of flash drives.
- the policies (5518) can include documents given to users for the user to acknowledge (e.g., acknowledging the policy not to use flash drives) and can include required formats to communicate decisions (e.g., a system security plan, “SSP”).
- Communicate advisories 5520 include decision support for users and systems. This can include documentation for organization members and/or a data structure correlating actions to the impacts of the actions, allowing the AI cybersecurity program generator 5514 to consider the impacts when selecting actions (e.g., conflicts, compatibility of actions, costs, etc.). In some examples, the communicate advisories 5520 determines the most important decisions and flags these decisions for human review for a set of decisions.
- Some examples include thousands or more decisions that may or may not be compatible, may or may not reinforce each other, and that have different costs.
- [0177] Some examples include a database of requirements, a database of solutions, and a database of adversarial/inherent risk. In some examples, the database (e.g., the actions database 5517) stores a set of actions, a set of decision criteria, and other instructions.
- the instructions and policies distributed at 5522 include snippets of a security program (e.g., lines of code, implementation steps, definitions, required data to store, tracking components of the system, compliance documentation (e.g., content to include in artifacts, etc.), content to include in a change log, risk data, content for a support ticket system (e.g., to automatically generate a support ticket), code to implement an action, and elements (e.g., at 5530) that describe ways to test and monitor the implementation of actions).
- the actions database 5517 receives an action request queries with any combination of the inputs 5502-5512 and identifies corresponding actions that are used to identify the actions at 5516.
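An action-request query against the actions database could be approximated as a trigger-subset match over any combination of supplied inputs. The trigger representation and example data below are assumptions for illustration, not the disclosed database schema:

```python
def query_actions(actions_db, **inputs):
    """Return the actions whose trigger conditions are all satisfied by
    the supplied inputs (best practices, organization, regulations, ...)."""
    return [action["name"] for action in actions_db
            if action["trigger"].items() <= inputs.items()]

# Hypothetical actions database entries.
db = [
    {"name": "encrypt-flash-drives",
     "trigger": {"needs_offline_transfer": "yes",
                 "data_on_media": "regulated"}},
    {"name": "ban-flash-drives",
     "trigger": {"needs_offline_transfer": "no"}},
]
actions = query_actions(db, needs_offline_transfer="yes",
                        data_on_media="regulated", org_size="small")
```

Extra inputs that no trigger mentions (like `org_size` here) are simply ignored, so the same database serves queries built from any mix of the inputs 5502-5512.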
- artificial intelligence is used to identify actions in the actions database 5517 to generate the actions. The actions are then provided to generate documentation at the establish policies 5518 and implementation instructions at the distribute instructions 5522.
- the inputs 5502-5512, the develop actions 5516, and the communicate advisories 5520 are referred to as a decision engine.
- the decision engine is configured to identify actions that are used to establish policies 5518 and distribute instructions 5522.
- the policies include documentation for consumption (e.g., by a system or human) and the instructions include steps to implement actions (e.g., computer code or processes for implementing an instruction).
- the policies and instructions are referred to as artifacts.
- the artifacts are stored in a database (e.g., the artifacts database 5535).
- the decision engine also establishes high level questions to raise to a user, where the answers to the high level questions determine a list of specific actions that correspond to the answers given. For example, a high level question may refer to what types of files are transferred in an organization, which will generate specific actions related to the use of flash drives.
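The expansion from high level answers to specific actions can be sketched as a lookup table. The question text and action names are hypothetical illustrations of the flash-drive example above:

```python
# Hypothetical question catalog: for each question, answer -> actions.
QUESTIONS = {
    "How are large files moved between non-networked devices?": {
        "flash drives": ["encrypt-flash-drives", "inventory-flash-drives"],
        "not at all": ["disable-usb-storage"],
    },
}

def actions_from_answers(answers):
    """Expand high level answers into the specific actions they imply."""
    actions = []
    for question, answer in answers.items():
        actions.extend(QUESTIONS.get(question, {}).get(answer, []))
    return actions

chosen = actions_from_answers(
    {"How are large files moved between non-networked devices?": "flash drives"})
```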
- the policies and instructions are implemented at the execute actions 5524.
- the execute actions 5524 can include computer-readable code for systems and processes for collecting logs (at 5528), maintaining inventories (at 5526), conducting audits (at 5532), documenting evidences (at 5530), building reports (at 5534), or combinations thereof.
- the collect logs 5528, maintain inventories 5526, conduct audits 5532, and the document evidence 5530 are referred to as aggregators.
- the build reports 5534 may determine that an action is not working and will provide feedback to the decision engine (e.g., as another input at develop actions 5516).
- the feedback is used to automatically update the security program (e.g., by updating the actions selected).
- the feedback is provided to update a data model mapping the inputs to actions to improve the selection of those actions.
- the output of the reports is stored in the artifacts database 5535.
- the artifacts database 5535 can be analyzed (e.g., automatically using AI) to update the actions database 5517.
- the associations between requirements and actions can be updated when it is determined from the reports that the previously identified actions did not meet the requirements (e.g., business requirements or regulatory/contractual requirements).
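Updating the requirement-to-action associations from report feedback could look like the following sketch; the report shape and association structure are assumptions made for illustration:

```python
def apply_report_feedback(associations, report):
    """Drop an action from a requirement's association list when a report
    shows the action did not meet that requirement."""
    if not report["met"]:
        actions = associations.get(report["requirement"], [])
        if report["action"] in actions:
            actions.remove(report["action"])
    return associations

# Hypothetical associations and a report flagging one failed action.
links = {"protect-flash-drive-data": ["encrypt-flash-drives",
                                      "log-flash-drive-use"]}
apply_report_feedback(links, {"requirement": "protect-flash-drive-data",
                              "action": "log-flash-drive-use",
                              "met": False})
```

Subsequent queries for the same requirement then no longer select the failed action, closing the feedback loop described above.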
- FIG. 56 describes the configuration deployment environment 5600.
- the cybersecurity program generator 5514 is configured to implement baselines on organization systems 5602.
- the cybersecurity program generator 5514 develops actions 5516 to include the deployment of configurations to systems.
- the cybersecurity program generator then executes actions 5524, including deployments of configurations to systems.
- the configurations are deployed over a network 120.
- the system applies configurations 5604 to organization systems 5602.
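The deploy-then-verify loop of FIG. 56 can be sketched with injected apply/read functions standing in for real configuration APIs. Everything below is an in-memory simulation under assumed names, not actual device management code:

```python
def deploy_and_verify(systems, baseline, apply_fn, read_fn):
    """Push a configuration baseline to each system, then read the live
    settings back to confirm the deployment took effect."""
    results = {}
    for system in systems:
        apply_fn(system, baseline)
        results[system] = (read_fn(system) == baseline)
    return results

# In-memory stand-ins for real configuration management calls.
store = {}

def apply_fn(system, config):
    store[system] = dict(config)

def read_fn(system):
    return store.get(system, {})

baseline = {"usb_storage": "disabled", "wifi": "disabled"}
status = deploy_and_verify(["laptop-1", "laptop-2"], baseline,
                           apply_fn, read_fn)

store["laptop-2"]["usb_storage"] = "enabled"  # simulate later drift
recheck = {s: read_fn(s) == baseline for s in ["laptop-1", "laptop-2"]}
```

The second check models the routine re-verification the disclosure describes: a system whose configuration drifts from the baseline fails verification and can be flagged.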
Abstract
The present disclosure describes systems and methods for generating, implementing, and verifying a security program for organizations. In one aspect, a method for generating, implementing, and verifying a security program for an organization is disclosed. The method includes receiving inputs providing information about the organization, generating the security program by processing the inputs describing the information about the organization, the security program including assessment objectives for the security program, and providing the information about the organization and the security program as model inputs to a security management model to generate output including implementation procedures defining steps for implementing the security program and verification procedures defining steps for verifying the assessment objectives of the security program.
Description
METHODS AND SYSTEMS FOR GENERATING, IMPLEMENTING, AND VERIFYING
COMPUTER SECURITY PROGRAMS
BACKGROUND
[0001] Computer security has become a paramount concern for businesses of all sizes. Implementing a comprehensive computer security policy can be a complex task. It requires a deep understanding of various security principles, business principles, technologies, and best practices. The fast-paced nature of technological advancement means that security threats are constantly evolving. Keeping up with these changes and ensuring that security measures are up-to-date can be a significant challenge. In many industries, organizations may be required to comply with various regulations related to data security and privacy. Navigating these regulations and ensuring compliance can be a daunting task for many organizations.
[0002] Small and medium-sized organizations, in particular, face unique challenges in implementing effective computer security programs. For example, small and medium-sized organizations often operate with limited financial and human resources. This can make it difficult to invest in advanced security infrastructure or hire dedicated IT security personnel.
SUMMARY
[0003] The present disclosure describes systems and methods for generating, implementing, and verifying a security program for organizations. Security programs can include security procedures, security policies, security evidence, and other security systems, frameworks, and processes.
[0004] In one aspect, a method for generating, implementing, and verifying a security program for an organization is disclosed. The method includes receiving inputs providing information about the organization, generating the security program by processing the inputs describing the information about the organization, the security program including assessment objectives for the security program, and providing the information about the organization and the security program as model inputs to a security management model to generate output including implementation procedures defining steps for implementing the security program and verification procedures defining steps for verifying the assessment objectives of the security program.
[0005] In another aspect, information provided about the organization is inputted to the security management model to generate output, including procedures for implementation, rules for policy, evidence for assessors, and automation to monitor compliance.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] FIG. 1 illustrates an example environment for generating, implementing, and verifying a security program for an entity.
[0007] FIG. 2 illustrates an example method for generating, implementing, and verifying a security program for an entity.
[0008] FIG. 3 illustrates types of output generated by the security management model.
[0009] FIG. 4 illustrates an example template for a program procedure for each of the types of output for a major activity of a security program.
[0010] FIG. 5 illustrates an example step for an example program procedure.
[0011] FIG. 6 illustrates another example step for an example program procedure.
[0012] FIG. 7 illustrates an example variant for an example step for an example program procedure.
[0013] FIG. 8 illustrates options for example variants for a security program.
[0014] FIG. 9 illustrates a template for an authoritative negative variant for a security program.
[0015] FIG. 10 illustrates an example for an authoritative negative variant for a security program.
[0016] FIG. 11 illustrates an example product agnostic step for an example program procedure.
[0017] FIG. 12 illustrates example product specific procedures for the sub-activity type.
[0018] FIG. 13 illustrates example product specific procedures for the evidence presentation type.
[0019] FIG. 14 illustrates an example user interface for receiving information used to generate a security program.
[0020] FIG. 15 illustrates another example user interface for selecting a security program.
[0021] FIG. 16 illustrates an example user interface that is presented when an organization is recommended to upgrade their service level for a security platform.
[0022] FIG. 17 illustrates an example user interface presenting a recommendation to use an on-site support team to complete one or more steps in a security activity and/or procedure.
[0023] FIG. 18 illustrates an example user interface to confirm selections for a security program.
[0024] FIG. 19 illustrates example generated output from the computer security management system.
[0025] FIG. 20 illustrates example generated output from the computer security management system.
[0026] FIG. 21 illustrates a list of example major activities for an organization’s security program.
[0027] FIGs 22-42 illustrate features for example implementations disclosed herein.
[0028] FIG. 43 is a schematic diagram that shows an example of a computing device and a mobile computing device.
[0029] FIG. 44 illustrates an example group of program procedures.
[0030] FIG. 45 illustrates an example user interface.
[0031] FIG. 46 illustrates another example user interface.
[0032] FIG. 47 illustrates yet another example user interface.
[0033] FIG. 48 illustrates an example map of major activities, or groups of program procedures.
[0034] FIG. 49 displays the architecture of the computer security management system.
[0035] FIG. 50 illustrates an example relationship of program procedures and organizations.
[0036] FIG. 51 illustrates an example major activity.
[0037] FIG. 52 illustrates an example media protection program procedure.
[0038] FIG. 53 illustrates an example group of media protection program procedures.
- [0039] FIGs 54A-54D illustrate example flow diagrams in accordance with the examples described in an example implementation.
[0040] FIG. 55 illustrates the inner workings of the cybersecurity program generator, including gathering inputs and generating artifacts.
- [0041] FIG. 56 illustrates the cybersecurity program generator deploying configurations to systems.
- [0042] Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION
[0043] The present disclosure describes systems and methods for generating, implementing, and verifying a security program for computer systems, improving the technology of computer security and improving the functioning of the computers in a network.
[0044] Computer security has become a paramount concern for entities of all sizes. For each entity there are multiple stakeholders with different perspectives and levels of expertise.
For example, in the computer security space there are stakeholders from the computer security program perspective, governance (e.g., users responsible for ensuring the entity is compliant with standards and the law), implementers (e.g., system admins), cyber operations, insurance, etc. In some examples, the computer security management system disclosed herein interfaces with each of the possible different roles, including by automatically converting sets of requirements into a set of documents optimized for the perspective of a user. In some examples, this includes automatically converting a set of operational requirements into a set of computer security requirements. In some examples, a set of operational requirements is converted into a procedure for a user to follow, a policy the organization implements, an automation that monitors the status of a procedure, a calendar reminder to complete a task, and/or evidence to prove to a third party the task was completed.
[0045] In some examples, users enter information about their entity and a security program is generated based on the received information. For example, by automatically converting operational decisions and requirements into specific cybersecurity decisions and requirements that define a security program.
[0046] In some examples, a security management model is leveraged to implement specific details for a computer security program based on high level operational decisions. For example, some organizations may decide that the organization includes extremely confidential information, and the highest level of computer security is required. This decision is then translated into a set of requirements for the security program. For example, the program may define that no computing equipment can enter or leave a building, that wireless access is not available, and that portable storage devices are not allowed to connect to computing devices based on the high level operational decision. In such an example, the operational decision flows down from a high level policy to specifically locking the devices associated with the organization. In these examples, the policy requires that all devices be highly secured regardless of the type of endpoint the device is, and the acceptable use document will include corresponding requirements for the employees (e.g., describing the policy forbidding the use of a removable storage device). In some examples, after the policy is implemented, the security management model automatically configures systems to disable wireless access and disable portable storage media. In some examples, the system routinely monitors the devices to ensure those configurations are still in place. The system can provide a report of the configuration status as evidence to a third party that the configurations were completed. The system can alert the organization if there is an issue with configuring the device. In some examples, the system alerts the organization if there is a discrepancy between
the devices in the inventory and the devices it detects. The system can prepare an article so the organization understands the purpose of the restrictions.
[0047] The security management model is used to organize all the information about the operational decisions in one location. The security management model then allows the organization to specify the technology that it has in place, allowing aggregators to translate the operational decisions into a computer security program, including technical decisions for implementation and specific technical details that apply to the kinds of technology belonging to the organization. Aggregators can check the types of endpoints and automatically apply instructions for configuring the endpoints belonging to the organization.
[0048] In some examples, the security management model is used to organize potential solutions an organization can choose (or be assigned) to implement into program procedures based on organizational requirements. The model allows the organization to specify its requirements, and then assigns the program procedures that the organization needs. Program procedures can contain different types of information that are combined into artifacts (e.g., documents, configurations, code, or other types of information). An aggregation process is performed by the computer security management system by combining portions of program procedures into artifacts. One example artifact includes instructions on how to configure a system to meet a solution. Another example artifact is code that configures the system to meet the solution automatically.
[0049] In some examples, a computer security management system tracks a baseline (or a catalog of baselines) for organizations and provides extra attention to fields that are different from the baseline. This allows an organization to detect potential indicators of compromised security (e.g., where there is a change that is different from a standard baseline of the organization).
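The baseline comparison in the paragraph above can be sketched as a field-by-field diff: fields that deviate from the baseline are flagged for extra attention. The field names are illustrative assumptions:

```python
def detect_drift(baseline, observed):
    """Return the fields whose observed value differs from the baseline,
    mapped to (expected, actual) pairs; potential compromise indicators."""
    return {k: (baseline.get(k), observed.get(k))
            for k in set(baseline) | set(observed)
            if baseline.get(k) != observed.get(k)}

# Hypothetical baseline vs. the state read from one endpoint.
drift = detect_drift({"usb_storage": "disabled", "wifi": "disabled"},
                     {"usb_storage": "disabled", "wifi": "enabled"})
```

An empty result means the system matches the baseline; any non-empty result is the kind of deviation the system would surface for review.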
[0050] FIG. 1 illustrates an example computing environment 100 for generating, implementing, and verifying a security program for an entity. The environment 100 includes a computer security management system 102, an organization security platform 104, a third party policy system 106, an auditor system 108, and a network 120. A plurality of users 110 associated with the organization security platform 104 to implement the security program are also shown in FIG. 1.
[0051] The computer security management system 102 is configured to generate, implement, verify, and otherwise manage security policies for a plurality of entities including an entity associated with the organization security platform 104. In some examples, the computer security management system 102 is configured to convert an operational model and
architecture to a cyber security configuration for an organization. Leveraging the security management model 130 and the aggregators, a security program is generated, implemented, verified, and managed at the organization security platform 104 as described in more detail below. In some embodiments, the computer security management system 102 operates under a subscription model serving a plurality of organizations, including the organization associated with the organization security platform 104. In some examples, multiple tiers of service are offered via the subscription model.
[0052] The organization security platform 104 is a platform that manages the computer security for an organization. In some examples, the organization is a small or mid-sized company which leverages the computer security management system 102 to meet the organization’s security requirements more effectively (e.g., by meeting requirements that would otherwise not be met, reducing costs to implement a security program, implementing programs which improve employee understanding of the security program, etc.). However, the organization can be of any size and can include non-business entities (e.g., Government Organizations, Non-profits, etc.). The organization security platform 104 keeps an inventory of all users and equipment (e.g., in an organization/system catalog), along with configurations and business information, and provides this information to the computer security management system 102.
[0053] The plurality of users 110 associated with the organization security platform 104 include users of various roles that interact with the entity’s security program in different ways. FIG. 39 shows examples of the types of users of the organization security platform 104 and/or the computer security management system 102. In many examples, the users interact with multiple computing devices (sometimes referred to herein as endpoints) that are managed by the organization. In some examples, these endpoints are automatically (and in some cases, alongside some manual steps) configured and managed using the systems and methods disclosed herein.
[0054] The security management model 130 includes associations between operational requirements, computer security requirements, computer security standards and rules, implementation details, compliance rules and processes, etc. In some examples, the implementation details include technical steps for implementing and verifying a security requirement, documentation explaining the implementation (or providing guidance) to stakeholders with different levels of expertise, technical details for monitoring the requirement, etc. The data model is organized to match high level operational decisions with documentation and implementation details for a variety of specific elements related to a security program. In some examples, security management model 130 defines relationships
between rules and regulations and instructions. In alternative examples, the rules are integrated within instructions. In some examples, the output is provided to a generative AI model which can generate code to automatically implement the elements of the computer security program. In some examples, generative AI technology is used to produce output that is optimized for different sets of users with different perspectives, roles, and technical expertise. Further examples of the security management model 130 are described herein.
[0055] In some examples, outputs from the security management model 130 include automations for monitoring the security program, evidence that is generated to prove a procedure was completed, and/or an automation that configures the organization security platform as specified by the security management model.
[0056] In some examples, the security management model 130 allows for the aggregation of large amounts of documentation optimized for different audiences. The aggregators 132 assemble the information for the different users. The information can include human-readable instructions and executable code (which may be executed by the appropriate systems automatically). In some examples, a dashboard displays all of this evidence for the organization. In this manner, policies and advisories are documents that are targeted for human consumption, while implementation and verification instructions are automated.
[0057] The aggregators 132 receive a set of requirements and process the security management model 130 to generate output (e.g., artifacts). For example, the output can include executive guidance, operational guidance, a responsibility matrix, technical steps to implement the requirements, steps to produce a proof that the requirement is implemented correctly, and code configured to automatically implement the requirement across the entity and/or to automatically verify and monitor that the requirement is met. In some examples, the aggregators 132 process the security management model 130 to output information presented in a way that is most effective for the role of a user accessing the documentation. For example, the aggregators can process the security management model 130 to produce output that conveys the information to a variety of stakeholders with different perspectives and levels of expertise. In some examples, the aggregators 132 are computational modules that are configured to generate certain types of output, including different types of output, by processing the security management model 130.
[0058] The third party security policy system 106 is a third party publicly available system which stores computer security standards, legal requirements, etc. In some embodiments, the computer security management system 102 interfaces with the third party security policy system to update the security management model 130. For example, if a standard or
technology is updated by the third party security policy system 106, the computer security management system 102 processes these changes and automatically (or in some cases manually) updates the security management model 130. In one example, the third party security policy system 106 includes a framework developed by a government-run organization (e.g., the Department of Defense) that develops a set of cybersecurity standards and regulations (e.g., the Cybersecurity Maturity Model Certification - “CMMC”). In some examples, regulatory bodies publish documents outlining rules (e.g., via a PDF stored on a server), and the computer security management system can process these documents to update the security management model.
[0059] The auditor system 108 interfaces with the organization security platform 104 to ensure the organization is meeting requirements. In some examples, the auditor system 108 may interface with the computer security management system 102 to receive instructions on how to verify that the organization is meeting its requirements (without providing access to confidential information about the organization or the security platform). In some embodiments, the auditor is a third party (e.g., hired by the organization to verify it is meeting its requirements) or a government organization. In some examples, an auditor may access the portal to view documentation.
[0060] The network 120 can include one or more private and/or public networks. For example, parts of the network 120 may be part of a private network belonging to the organization and other parts of the network 120 may include a public network, such as the internet. The network allows for digital communication between various combinations of the computing systems illustrated in FIG. 1. Although only one network is shown in FIG. 1, many examples can include multiple networks, including public networks (e.g., the internet) and private networks (e.g., internal networks of an organization, cloud-based networks, and other private networks).
[0061] FIG. 2 illustrates an example method 200 for generating, implementing, and verifying a security program for an entity. In some examples, the method 200 is executed across the various systems illustrated in the environment 100 in FIG. 1. The method 200 includes the operations 202, 204, 206, 208, 210, and 212. In the example shown the operation 202 is performed by an organization computing device 201, the operations 204, 206, and 210 are performed by the computer security management system 102, and the operations 208 and 212 are performed by the organization security platform 104. In some implementations, the steps may be performed in different orders and/or by a different system or combination of systems.
[0062] At the operation 202, the organization computing device 201 receives inputs providing information about the organization. In some examples, the information about the organization includes high level operational decisions. In some examples, the organization computing device 201 displays the user interfaces illustrated in FIGs. 14-18 to a user and receives the information about the organization as user input.
[0063] At the operation 204, the computer security management system 102 generates a security program. For example, CSMS 102 can generate the security program by processing the inputs describing the information about the organization. In some examples, the security management model 130 (shown in FIG. 1) is used to convert high level operational decisions into the security program with specific assessment objectives. The security program includes policies related to assessment objectives, where the security program can include multiple assessment objectives. In some examples, users provide information about their organization to an interface hosted via a web application (e.g., at the operation 202). The organization information may include details about the size of the organization, number of employees, types of devices, types of work being done, etc. This information is processed to determine a recommended set of policies which are defined by multiple assessment objectives.
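The conversion at the operation 204 from high level organization information to a recommended set of policies can be sketched as a set of rules. This is purely illustrative; the policy names and decision rules below are hypothetical, not taken from the disclosure.

```python
# Illustrative sketch of operation 204: converting answers about an
# organization into a recommended set of policies (each policy would in
# turn carry multiple assessment objectives). All rules are hypothetical.
def recommend_policies(org_info: dict) -> list:
    policies = ["acceptable_use"]          # base policy for every program
    if org_info.get("remote_work"):
        policies.append("remote_access")   # e.g., VPN and MFA objectives
    if org_info.get("handles_cui"):
        policies.append("cui_handling")    # e.g., CUI flow, media protection
    if org_info.get("byod"):
        policies.append("mobile_device")
    return policies

print(recommend_policies({"remote_work": True, "handles_cui": True}))
# → ['acceptable_use', 'remote_access', 'cui_handling']
```

A production system would derive such recommendations from the security management model rather than hard-coded rules, but the shape of the mapping is the same: organization inputs in, recommended policies with assessment objectives out.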
[0064] At the operation 206, the computer security management system 102 generates implementation procedures. The implementation procedures define steps for implementing the security program. In some examples, the security management model receives model inputs that include the information about the organization and the security program to generate outputs that include the implementation procedures defining the steps for implementing the security program.
[0065] In some examples, the computer security management system 102 determines one or more major activities to achieve the assessment objectives of the security program. The assessment objectives may be grouped by major activities. In some examples, the objectives are grouped by a party or team responsible for each of the major activities. For each major activity, the computer security management system generates output to implement the major activity. In some examples, the major activities include sets of program procedures which are generated via templates by leveraging the security management model 130, where the program procedure includes steps for implementing the major activity.
[0066] At the operation 208, the organization security platform 104 receives and executes the implementation procedures. In some examples, at least a portion of the steps for implementing the security program are performed automatically by computing systems of the organization security platform 104.
[0067] At the operation 210, the computer security management system 102 generates verification procedures defining steps for verifying assessment objectives of the security program. In some examples, the security management model 130 receives model inputs that include the information about the organization and the security program to generate outputs that include the verification procedures defining the steps for verifying the assessment objectives of the security program. In some examples, for each major activity, the computer security management system generates output to verify that the assessment objectives are met. In some examples, the major activities include sets of program procedures which are generated via templates by leveraging the security management model 130, where the program procedures include steps for verifying that the assessment objectives are met.
[0068] At the operation 212, the organization security platform 104 receives and executes the verification program procedures. In some examples, at least a portion of the steps for verifying the assessment objectives of the security program are performed automatically by computing systems of the organization security platform 104. In some examples, the organization security platform 104 sends data (e.g., reports) to the computer security management system 102, which conducts the verification checks.
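The generate–implement–verify flow of method 200 can be summarized in code. The sketch below is a minimal, hypothetical rendering of the operations 204 through 212; the function names and the single objective used are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch of method 200: the management system generates a
# security program (204), implementation procedures (206), and
# verification procedures (210); the organization platform executes the
# implementation (208) and verification (212) steps.
def generate_security_program(org_info: dict) -> dict:
    # Operation 204: convert organization inputs into assessment objectives.
    return {"objectives": ["block_portable_storage"], "org": org_info}

def generate_implementation_procedures(program: dict) -> list:
    # Operation 206: derive concrete implementation steps per objective.
    return [f"implement:{obj}" for obj in program["objectives"]]

def generate_verification_procedures(program: dict) -> list:
    # Operation 210: derive checks proving each objective is met.
    return [f"verify:{obj}" for obj in program["objectives"]]

def run(org_info: dict) -> list:
    program = generate_security_program(org_info)              # 204
    executed = []
    executed += generate_implementation_procedures(program)    # 206/208
    executed += generate_verification_procedures(program)      # 210/212
    return executed

print(run({"name": "example org"}))
# → ['implement:block_portable_storage', 'verify:block_portable_storage']
```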
[0069] FIGs. 3-13 illustrate features of an example system for generating, implementing, verifying, and managing a security program of an entity according to the systems and methods disclosed herein. The security program comprises a series of “major activities” that include groups of similar assessment objectives that can be implemented and/or verified together. Example major activities are shown in FIG. 21. Each major activity includes a program procedure comprising several steps that are required to meet the assessment objective or to move the entity closer to meeting the assessment objective.
[0070] FIG. 3 illustrates example aggregators that can generate different types of output (sometimes referred to as “snippets”) generated by the security management model. The example shown includes four types of output (e.g., the outputs of the aggregators, sometimes referred to as artifacts). The types of output include a sub-activity type 302, an evidence type 304, a policy type 306, and an advisory type 308. In some examples, each of the types are designed for a different system and/or audience. In some examples, the output includes generated documentation and diagrams and figures illustrating the major activity. Some of the output is computer code (e.g., the technical language output can be code).
[0071] The sub-activity type 302 includes output for implementing the major activity. In some examples, this includes technical instructions on how to implement the activity. In some embodiments, the output is provided to an implementer that uses the instructions to implement the major activity. In some examples, the implementer includes a model which processes the output to generate a program that automatically implements the major activity. In some embodiments, the sub-activity type 302 is output for users implementing the security program. The sub-activity type 302 includes technical instructions on how to implement the major activity.
[0072] The evidence type 304 includes output for assessing the implementation of the policy. In some embodiments, output is provided to an assessor to verify that the implementation meets the objective. In some embodiments, the assessor includes a model that is designed to verify that the implementation meets the objective. In some examples, the evidence type 304 provides instructions on how to verify that the major activity is implemented correctly. In some examples, the evidence type 304 includes outputs that are required to show and verify that the implementation of the major activity meets the advisory guidelines or requirements. In some examples, the evidence type 304 includes output that is automatically provided to an auditor and/or advisory agency.
[0073] The policy type 306 includes documentation defining the one or more policies associated with the major activity. In some examples, the output from the policy guideline is automatically integrated within an entity policy, in a manner that it becomes accessible by stakeholders in the entity (e.g., employees) once the major activity is implemented. In some examples, the policy guideline defines the rules for the users and the organization to follow. In one example, the policy type 306 outputs an acceptable use policy.
[0074] The advisory type 308 includes information about the security program for the organization. For example, the advisory type 308 serves as a catalog of information about the security program for the organization.
[0075] FIG. 4 illustrates a chart for example aggregators for different audiences and the output generated by each of the example aggregators. In some examples, this includes example program procedures for each of the types of output for a major activity of a security program. Example procedures (e.g., including “actions” that are correlated across different aggregators) can be stored, for example, as data objects in computer memory described later. The example shown includes an example program procedure for each of the output types illustrated in FIG. 3. The program procedures are designed to provide structured output for each type of output. In some embodiments, the program procedure defines steps that may be manually completed and/or automated by a corresponding system. In some examples, the aggregators are designed for each of the major activities and generate a program procedure (comprising one or more steps) that is required to be performed to complete the major activity. In some examples, program procedures are retrieved from storage to generate artifacts. In these examples, the template provides instructions for automatically formatting steps that are defined by the program procedure.
[0076] In some examples, the solution includes an aggregator for each type of output, where the output is referred to as artifacts. In some examples, the artifacts include generated code and/or documentation. The program procedure is a set of steps that produce output for each of the output types, and an individual element in a program procedure is referred to as an aggregator element.
[0077] In some examples, a program procedure has aggregator elements. These elements have different types. Aggregators identify a number of program procedures, and extract all the aggregator elements of their type. These elements are used to build an artifact of the corresponding type.
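The extraction described in paragraph [0077] can be sketched with simple data structures. This is an illustrative assumption about how procedures and elements might be represented; the element texts and the `aggregate` function are hypothetical.

```python
# Hypothetical sketch: each program procedure holds typed aggregator
# elements, and an aggregator collects every element of its own type
# across procedures to build one artifact of that type.
procedures = [
    {"name": "portable_storage", "elements": [
        {"type": "policy",   "text": "Portable storage devices are prohibited."},
        {"type": "evidence", "text": "Screenshot of endpoint USB settings."},
    ]},
    {"name": "remote_work", "elements": [
        {"type": "policy",   "text": "Remote access requires MFA."},
    ]},
]

def aggregate(element_type: str, procedures: list) -> list:
    """Build an artifact by collecting every element of one type."""
    return [e["text"]
            for proc in procedures
            for e in proc["elements"]
            if e["type"] == element_type]

policy_artifact = aggregate("policy", procedures)
# → ['Portable storage devices are prohibited.', 'Remote access requires MFA.']
```

Running a different aggregator (e.g., `aggregate("evidence", procedures)`) over the same procedures yields the artifact for that audience instead.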
[0078] FIG. 5 illustrates an example step for an example program procedure. In the example shown, each output type has an example output related to a requirement to meet a compliance standard for blocking portable storage devices on endpoints, for example, restricting a machine from connecting with a flash drive. In the example shown, the sub-activity type includes a step for accessing a product specific guide. In some examples, code may be generated to automatically configure the machines to restrict connections from portable storage devices. The evidence type output provides a statement to an assessor showing that the entity has disabled portable storage devices, and where the assessor can go to verify the procedure. In some examples, the evidence type output includes a dashboard with an indicator that a requirement is complete. The policy type provides instructions to the organization defining the requirement that employees are not allowed to use portable storage devices, and the acceptable use policy will update to reflect the requirement. The advisory type defines the process and how the organization is addressing the requirement with the process. In some examples, the sub-activity type includes a block of code (e.g., JSON) that when executed by an endpoint is configured to block portable storage devices at that endpoint. In these examples, the code depends on the endpoint (e.g., different code will be generated based on whether the endpoint uses macOS, Windows, or Linux operating systems).
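The endpoint-dependent code generation mentioned above can be sketched as selecting an OS-specific configuration block. The setting names below are hypothetical placeholders (not actual Windows, macOS, or Linux policy keys), used only to show the shape of per-OS JSON generation.

```python
# Illustrative only: select a JSON configuration block for blocking
# portable storage based on the endpoint's operating system.
# The setting names are hypothetical, not real platform policy keys.
import json

BLOCK_PORTABLE_STORAGE = {
    "windows": {"policy": "RemovableStorageDevices", "deny_all": True},
    "macos":   {"payload": "com.example.media", "mount_usb": False},
    "linux":   {"modprobe_blacklist": ["usb_storage"]},
}

def config_for(endpoint_os: str) -> str:
    """Return the JSON configuration block for the endpoint's OS."""
    return json.dumps(BLOCK_PORTABLE_STORAGE[endpoint_os.lower()])

print(config_for("windows"))
# → {"policy": "RemovableStorageDevices", "deny_all": true}
```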
[0079] FIG. 6 illustrates another example step for an example program procedure. In this example, the step addresses a requirement that a user of a specific role signs a controlled unclassified information (“CUI”) flow diagram. In this example, the advisory type defines the diagram and the requirement, to provide context to the user who is required to sign the document. The evidence type shows where an assessor can access the document signed by the user, the policy type defines the requirement to sign the document, and the sub-activity type defines a step to sign the document.
[0080] FIG. 7 illustrates an example variant (e.g., an example of a decision that is tied to a set of related “snippets”) for an example step for an example program procedure. Variants allow program procedures as a whole to be swapped in and out of an organization’s program. Variants allow a user to modify and/or customize program procedures in the security program based on their requirements. In some examples, individual program procedures may be swapped based on the security program for the organization. For example, organizational decisions such as whether to allow portable storage, whether employees may bring their own devices, or whether to allow remote work are determined, and the corresponding program procedure is swapped with a variant based on these decisions and accounted for within a program procedure.
[0081] In the first example, portable storage devices are blocked and each of the different output types will output information and documentation according to this decision. However, a user may select a variant of this procedure and swap it with a procedure that allows portable storage devices and requires encryption. In this example, the evidence type output will indicate to an assessor that portable storage devices are allowed with required encryption to meet the corresponding standard (e.g., in this case by limiting the use of portable storage devices by requiring encryption). The other output types will also change accordingly, as shown in FIG. 7. For example, the policy output (e.g., including the acceptable use policy) is automatically updated to reflect that portable storage devices are allowed with encryption, and the advisory output is updated to inform the user on how this policy is configured and what team is responsible for managing and implementing the procedure. In some examples, depending on an operational decision, the parts of the program procedure are swapped automatically to update the documentation and implementation within the organization. In some examples, program procedures are swapped out of a security program as a whole.
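The variant swap described above can be sketched as replacing one procedure body with another tied to the same operational decision. The variant keys and policy texts below are hypothetical illustrations of the portable-storage example.

```python
# Hypothetical sketch: swapping a program procedure for one of its
# variants when an operational decision changes.
VARIANTS = {
    "portable_storage": {
        "blocked":           "Portable storage devices are prohibited.",
        "allowed_encrypted": "Portable storage devices are allowed only when encrypted.",
    },
}

def swap_variant(program: dict, decision: str, choice: str) -> dict:
    """Replace the procedure tied to `decision` with the chosen variant."""
    updated = dict(program)
    updated[decision] = VARIANTS[decision][choice]
    return updated

# Start with the "blocked" variant, then swap in "allowed with encryption":
program = {"portable_storage": VARIANTS["portable_storage"]["blocked"]}
program = swap_variant(program, "portable_storage", "allowed_encrypted")
print(program["portable_storage"])
# → Portable storage devices are allowed only when encrypted.
```

In a full system, the swap would also replace the associated evidence, policy, and advisory snippets so that every artifact regenerated from the procedure reflects the new decision.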
[0082] FIG. 8 illustrates options for example variants for a security program (e.g., an example of “snippet” groups). In some examples, a user provides information about their organization and different variants are selected based on the information about the organization. For example, a user may request that employees can work from home and various variants are automatically selected to allow for employees to securely work from home. FIG. 8 also illustrates multiple program procedures interfacing with each other.
[0083] FIG. 9 illustrates an example template for an authoritative negative variant for a security program. This example shows one example for meeting a contractual or regulatory requirement and adapting the security program to the regulatory requirement. For example,
an authoritative negative is when there is an assessment objective that does not apply to a particular user/organization. FIG. 10 illustrates an example for an authoritative negative variant for a security program. In some examples, authoritative negatives are used because regulatory bodies may require that some or all of the assessment objectives (rules) be addressed. For example, even if there is a rule that is not pertinent to an organization, the organization needs to address how the organization meets the rule regardless. The computer security management system can handle the generation and presentation of this information.
[0084] FIG. 11 illustrates an example product agnostic step for an example program procedure. This is an example of a design decision for the security program to demonstrate a way to group decisions dependent on a type of product or technology. For example, different firewall devices require different types of support. In some examples, the program procedure is product agnostic and links are provided to product standard operating procedures (SOPs). An example sub-activity type output defining a procedure for disabling portable storage devices in different endpoint products is illustrated in FIG. 12. An example evidence type output defining a procedure for verifying that the endpoints have disabled portable storage devices is illustrated in FIG. 13.
[0085] FIG. 12 illustrates example product specific procedures (e.g., instructions) for the sub-activity type. This example illustrates different procedures for disabling portable storage media on Windows and Mac devices.
[0086] FIG. 13 illustrates example product specific procedures for the evidence presentation type. This example illustrates steps for verifying whether portable storage media are disabled on Windows and Mac devices.
[0087] FIGs. 14-18 illustrate user interfaces for selecting a security program. In the example shown, high level and non-technical questions are provided to a user who can enter information about their organization. The answer to these questions may trigger further related questions based on what is selected. Based on the selections, a security program with assessment objectives is generated. In some examples, selections may trigger a recommendation to consult an expert or an on-site service team, for example, based on a potential level of complexity that requires a more personalized level of service. In some examples, the recommendation includes a suggestion to subscribe to a higher membership tier to access personalized services.
[0088] FIG. 14 illustrates an example user interface for receiving information used to generate a security program. In this example, a user selects descriptions that apply to their organization. In this example, because the user selected an office building additional options
(shown in FIG. 15) are presented to the user. This example user interface is presented to a user to provide decision support while collecting organization information (e.g., as an input for the inputs about an organization 5504 shown in FIG. 55).
[0089] FIG. 15 illustrates another example user interface for receiving information used to generate a security program. In this user interface a user selects elements that are present in the office building. In some examples, this figure is presented based on the selections in the user interface shown in FIG. 14.
[0090] In some example embodiments, a user provides details about their organization and the computer security management system proposes variants, which a user can select. Once the selections are received the computer security management system generates the security program.
[0091] FIG. 16 illustrates an example user interface that is presented when a user is recommended to upgrade their service level for a security platform. In this example, having a virtual machine requires a user/organization to upgrade to a higher service level. This is another example user interface that is presented to provide decision support to a user.
[0092] FIG. 17 illustrates an example user interface presenting a recommendation to use an on-site support team to complete one or more steps in a security activity and/or procedure. In this user interface the user/organization is recommended to use an on-site support team to complete a specific task. This user interface presents information related to regulatory support (e.g., to highlight to the user that the current program is missing actions required to meet a regulatory/contractual obligation).
[0093] FIG. 18 illustrates an example user interface to confirm selections for a security program.
[0094] FIG. 19 illustrates example generated output (e.g., diagrams built by the computer security management system 102 shown in FIG. 1) from the computer security management system. In some examples, the aggregators illustrated in FIG. 1 generate the output providing visualizations of different aspects of a computer security program. In some examples, the output is referred to as artifacts.
[0095] FIG. 20 illustrates example generated output (an aggregation on advisory on how to draw a diagram) from the computer security management system. FIG. 20 provides instructions to an implementer for generating a network diagram. In some examples, an algorithm is used to automate the completion of some or all of the steps for creating a network diagram. The example generated output (e.g., artifact) can be generated using an aggregator and can contain steps of or from multiple program procedures.
[0096] FIG. 21 is an example database of assessment objectives gathered from a governing body (CMMC). In one example implementation, the security program includes multiple assessment objectives that are assigned to a major activity. The major activities include groups of assessment objectives that require similar implementation steps. The steps are referred to as program procedures. An example list of major activities is illustrated in FIG. 21. FIG. 21 illustrates an example of a data structure to link requirements to an aggregator.
[0097] The right hand column includes a mapping of assessment objectives (e.g., rules) to the major activities for which procedures are written to address the assessment objectives. The steps illustrated are procedures, which are an element of program procedures. A map of major activities is illustrated in FIG. 32.
[0098] Referring to FIG. 22, the program procedures contain many elements, including a frequency (how often one or more of the steps need to happen), a responsible party (who needs to perform the step), an activity (what activity it belongs to), a mechanism (a tool used to conduct the step), evidence (how the assessment objective is proven to be complete), and a specification (the policy that states the rules for how the step must be conducted). In some examples, the system includes an instruction generator. The instruction generator takes an outline of a program feature, including the elements listed above, and creates instructions or other documentation based on these elements. In some embodiments, the instruction generator uses artificial intelligence. In some embodiments, the artificial intelligence can include a directed graph that uses reinforcement learning and clustering coefficients to broadly link system configurations to policy decisions to make recommendations for configurations, generate documentation, and optimize cyber programs. In some embodiments, the instruction generator uses a template to generate the instructions and/or documentation.
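The elements listed in paragraph [0098] and the template-based form of the instruction generator can be sketched as follows. The field values and the template wording are hypothetical illustrations; the field names mirror the description above.

```python
# Illustrative sketch: a program procedure's elements rendered into
# human-readable instructions via a template. All values are hypothetical.
procedure = {
    "frequency":     "quarterly",
    "responsible":   "IT administrator",
    "activity":      "media protection",
    "mechanism":     "endpoint management console",
    "evidence":      "exported device-control report",
    "specification": "Acceptable Use Policy, section 4",
}

TEMPLATE = ("Every {frequency}, the {responsible} uses the {mechanism} "
            "to complete this {activity} step, retains the {evidence}, "
            "and follows the rules in {specification}.")

def generate_instructions(proc: dict, template: str = TEMPLATE) -> str:
    """Render documentation for a procedure from its elements."""
    return template.format(**proc)

print(generate_instructions(procedure))
```

An AI-based instruction generator would replace the fixed template with generated text, but would consume the same set of elements.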
[0099] Referring to FIG. 23, in some examples, the program procedure contains a variant option. A variant is a different operational decision that could apply to a different organization. The variant option means the program procedure answers a certain choice. A different program procedure would describe the step when a different choice is selected. That other program procedure would be assigned the other option, and swapped out with this one, depending on the choice. In some examples, the program procedure also contains snippets of text to complete different documents that are needed for the program. In this instance, it contains a snippet for an “advisory article,” a document that explains to organization stakeholders what is happening in this activity.
[0100] Referring to FIG. 24, the program procedure can include sub-activity steps. The sub-activity step is a link or links to a step or series of steps that describe in technical detail what needs to occur. In some implementations, users will not see the sub-activity step (e.g., instead the steps are automated and/or implemented by an implementation team). In other examples, the user can see the sub-activity steps (e.g., by purchasing a premium level subscription service).
[0101] Referring to FIG. 25, the program procedure can also include evidence steps. In some examples, the evidence steps include a link to an evidence record stored in memory, a snippet that describes how to gather that evidence (whether it is located in the system or is a file saved elsewhere), and a short description of how that evidence meets or helps meet the assessment objective.
[0102] Referring to FIG. 26, the program procedure also includes an associated output describing policies that apply to the organization if the program procedure is selected. The policies contain both organizational policies and user-facing or acceptable use policies that individual employees can be required to sign. Organizational policies can be technical and cover procedures that may be required to be followed. The acceptable use policies apply to various users and focus more on rules that apply to the users.
[0103] In some examples, the program procedures link to supplemental guidance. For example, if a program procedure discusses destroying media, the system links to NIST 800-88, which describes guidelines for destroying media.
[0104] In some implementations, each program procedure covers one specific activity, but tracks all the related information that applies to that specific activity. This related information can be gathered in different ways; for example, if a user needs to see the entire organization’s policy, the system gathers the relevant policies from every program procedure.
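Gathering the relevant policies from every program procedure can be sketched as follows. The procedure records and policy text are hypothetical and exist only to illustrate the aggregation step.

```python
# Hypothetical program procedure records, each carrying an optional policy snippet.
program_procedures = [
    {"activity": "Media Protection", "policy": "Portable storage is banned."},
    {"activity": "Access Control", "policy": "MFA is required for all sign-ins."},
    {"activity": "Awareness Training", "policy": None},  # no policy attached
]

def aggregate_policies(procedures):
    """Collect the policy snippet from each procedure that has one."""
    return [p["policy"] for p in procedures if p.get("policy")]

# The organization-wide policy is the collection of per-procedure snippets.
org_policy = aggregate_policies(program_procedures)
print(org_policy)  # two snippets; the procedure without a policy is skipped
```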
[0105] In some implementations, the program procedures can be added or removed based on the specifics of the organization. If something does not apply, the system can remove the program procedure, along with any policies, evidence, and/or procedures that do not apply. In some examples, a user provides selections of one or more program procedures to apply.
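Removing procedures that do not apply can be sketched as filtering the repository against the organization's attributes. The applicability tags below are assumptions made for illustration.

```python
# Hypothetical procedures tagged with the organization scenarios they apply to.
procedures = [
    {"name": "Secure local network", "applies_to": {"on_premises"}},
    {"name": "Harden cloud tenant", "applies_to": {"cloud"}},
    {"name": "Train all employees", "applies_to": {"on_premises", "cloud"}},
]

def tailor(procedures, org_attributes):
    """Keep only procedures whose applicability overlaps the organization."""
    return [p for p in procedures if p["applies_to"] & org_attributes]

# A fully remote organization: the local-network procedure is removed,
# along with (in a full system) its attached policies and evidence.
tailored = tailor(procedures, {"cloud"})
print([p["name"] for p in tailored])  # → ['Harden cloud tenant', 'Train all employees']
```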
[0106] Referring to FIGs. 27-30 generally, in some examples, assessment objectives are visualized by who is responsible (e.g., organization, third party consultant, specific users, etc.). These assessment objectives can also be grouped differently depending on how an organization wants them visualized. The “domain” view shows every assessment objective by the original domain under which the Cybersecurity Maturity Model Certification (CMMC) program organized it. The assessment objectives can also be sorted by “outcome” or “capability.” In some examples, different levels of service provided by the computer security management system include more or fewer rules.
[0107] Referring to FIG. 31, in some examples, there is an order for outcomes, and what style of activity (govern, harden, defend) they relate closest to.
[0108] An example program activity flow diagram is illustrated in FIG. 32. The example shown maps the major activities in the order in which they should occur. As discussed herein, a major activity contains multiple related program procedures.
[0109] Referring to FIG. 33, the system tracks the folder structure in which the evidence will be stored, so that the evidence is consistent and locatable. This also means that when the evidence is handed to an assessor, it is all in one place, so the files can be hashed at the time of the assessment to prove that nothing has been modified.
[0110] Referring to FIG. 34, some implementations include a program builder. For each program, there is a feature to select variants assigned to that program, and a way to publish program procedures and version them, so existing organizations can get new program procedures as they are developed.
[0111] Referring to FIG. 35, some implementations include a variant selector for a specific program.
[0112] FIG. 36 shows part of a CUI flow diagram. The CUI Flow diagram was designed in a way that depending on certain variants, portions of the diagram (slides) can be removed that do not apply to a specific program. For example, if a program did not have any local networks, and everyone worked remotely, this slide would be deleted. In some examples, the CUI flow diagram is generated automatically.
[0113] A table illustrating a portion of the user, process, and asset inventory is shown in FIG. 37. This table can be tracked in a database, tracks attributes regarding items in a program, and those attributes line up with assessment objectives required by CMMC (see the top row). It tracks many kinds of items, such as people, devices, locks and keys, buildings (sites), etc. [0114] FIG. 38 shows a rendition of a system catalog. A system catalog would contain different system baselines (like Windows, Mac, SonicWall, etc.) and determine which question sets apply based on what mechanism they are (endpoint, boundary protection device, etc.). In some examples, the question sets assist with meeting the assessment objectives of the security program.
[0115] FIG. 39 shows types of users of the system. For example, the plurality of users 110 illustrated and described in reference to FIG. 1.
[0116] FIG. 40 shows an example list of variants.
[0117] In some examples, a first user selects high-level operational decisions and then an expert user selects variants and subvariants for the various assessment objectives generated based on the high-level operational decisions. In some examples, the application includes a list of pointers to the appropriate procedures in the library, where the aggregators will run for that organization. In some examples, there is a list of products, and the aggregators run the aggregation for that program based on the list of products and the variants associated with those products. Based on the selected variants and list of devices, the aggregators determine the program procedures that will be used to implement the security program. In some examples, program procedures are aggregated that point to different product types in the list of products.
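The resolution step described above can be sketched as filtering a procedure library by the selected variants and the product types in the product list. The library entries, variant names, and product types below are hypothetical.

```python
# Hypothetical procedure library; variant None means the procedure always applies.
LIBRARY = [
    {"id": "MP-01", "variant": "portable_storage_banned", "product_type": "endpoint"},
    {"id": "MP-02", "variant": "portable_storage_allowed", "product_type": "endpoint"},
    {"id": "SC-01", "variant": None, "product_type": "boundary_protection"},
]

def aggregate(selected_variants, products):
    """Return procedure ids matching the selected variants and product types."""
    types = {p["type"] for p in products}
    return [
        proc["id"]
        for proc in LIBRARY
        if proc["product_type"] in types
        and (proc["variant"] is None or proc["variant"] in selected_variants)
    ]

products = [{"name": "Laptop fleet", "type": "endpoint"},
            {"name": "SonicWall firewall", "type": "boundary_protection"}]
print(aggregate({"portable_storage_banned"}, products))  # → ['MP-01', 'SC-01']
```

Note how MP-02 is excluded because its variant was not selected, while SC-01 applies regardless of variant because a matching product type is present.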
[0118] In some examples, organizations select the products they use from a repository hosted by a computer security management system. When the aggregators create the artifacts based on variants, they will link relationships to those products via a product agnostic link. [0119] FIG. 41 is a diagram illustrating processes and features disclosed herein.
[0120] FIG. 42 is a diagram illustrating a database schema used by the aggregators.
[0121] FIG. 43 shows an example of a computing device 4000 and an example of a mobile computing device that can be used to implement the techniques described here. The computing device 4000 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The mobile computing device is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart-phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
[0122] The computing device 4000 includes a processor 4002, a memory 4004, a storage device 4006, a high-speed interface 4008 connecting to the memory 4004 and multiple high-speed expansion ports 4010, and a low-speed interface 4012 connecting to a low-speed expansion port 4014 and the storage device 4006. Each of the processor 4002, the memory 4004, the storage device 4006, the high-speed interface 4008, the high-speed expansion ports 4010, and the low-speed interface 4012, are interconnected using various busses, and can be mounted on a common motherboard or in other manners as appropriate. The processor 4002 can process instructions for execution within the computing device 4000, including instructions stored in the memory 4004 or on the storage device 4006 to display graphical
information for a GUI on an external input/output device, such as a display 4016 coupled to the high-speed interface 4008. In other implementations, multiple processors and/or multiple buses can be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices can be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
[0123] The memory 4004 stores information within the computing device 4000. In some implementations, the memory 4004 is a volatile memory unit or units. In some implementations, the memory 4004 is a non-volatile memory unit or units. The memory 4004 can also be another form of computer-readable medium, such as a magnetic or optical disk. [0124] The storage device 4006 is capable of providing mass storage for the computing device 4000. In some implementations, the storage device 4006 can be or contain a computer- readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product can also contain instructions that, when executed, perform one or more methods, such as those described above. The computer program product can also be tangibly embodied in a computer- or machine-readable medium, such as the memory 4004, the storage device 4006, or memory on the processor 4002.
[0125] The high-speed interface 4008 manages bandwidth-intensive operations for the computing device 4000, while the low-speed interface 4012 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In some implementations, the high-speed interface 4008 is coupled to the memory 4004, the display 4016 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 4010, which can accept various expansion cards (not shown). In the implementation, the low-speed interface 4012 is coupled to the storage device 4006 and the low-speed expansion port 4014. The low-speed expansion port 4014, which can include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) can be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
[0126] The computing device 4000 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a standard server 4020, or multiple times in a group of such servers. In addition, it can be implemented in a personal
computer such as a laptop computer 4022. It can also be implemented as part of a rack server system 4024. Alternatively, components from the computing device 4000 can be combined with other components in a mobile device (not shown), such as a mobile computing device 4050. Each of such devices can contain one or more of the computing device 4000 and the mobile computing device 4050, and an entire system can be made up of multiple computing devices communicating with each other.
[0127] The mobile computing device 4050 includes a processor 4052, a memory 4064, an input/output device such as a display 4054, a communication interface 4066, and a transceiver 4068, among other components. The mobile computing device 4050 can also be provided with a storage device, such as a micro-drive or other device, to provide additional storage. Each of the processor 4052, the memory 4064, the display 4054, the communication interface 4066, and the transceiver 4068, are interconnected using various buses, and several of the components can be mounted on a common motherboard or in other manners as appropriate.
[0128] The processor 4052 can execute instructions within the mobile computing device 4050, including instructions stored in the memory 4064. The processor 4052 can be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor 4052 can provide, for example, for coordination of the other components of the mobile computing device 4050, such as control of user interfaces, applications run by the mobile computing device 4050, and wireless communication by the mobile computing device 4050.
[0129] The processor 4052 can communicate with a user through a control interface 4058 and a display interface 4056 coupled to the display 4054. The display 4054 can be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 4056 can comprise appropriate circuitry for driving the display 4054 to present graphical and other information to a user. The control interface 4058 can receive commands from a user and convert them for submission to the processor 4052. In addition, an external interface 4062 can provide communication with the processor 4052, so as to enable near area communication of the mobile computing device 4050 with other devices. The external interface 4062 can provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces can also be used.
[0130] The memory 4064 stores information within the mobile computing device 4050. The memory 4064 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. An expansion memory 4074 can also be provided and connected to the mobile computing device 4050 through an expansion interface 4072, which can include, for example, a SIMM (Single In Line Memory Module) card interface. The expansion memory 4074 can provide extra storage space for the mobile computing device 4050, or can also store applications or other information for the mobile computing device 4050. Specifically, the expansion memory 4074 can include instructions to carry out or supplement the processes described above, and can include secure information also. Thus, for example, the expansion memory 4074 can be provided as a security module for the mobile computing device 4050, and can be programmed with instructions that permit secure use of the mobile computing device 4050. In addition, secure applications can be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
[0131] The memory can include, for example, flash memory and/or NVRAM memory (non-volatile random access memory), as discussed below. In some implementations, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The computer program product can be a computer- or machine-readable medium, such as the memory 4064, the expansion memory 4074, or memory on the processor 4052. In some implementations, the computer program product can be received in a propagated signal, for example, over the transceiver 4068 or the external interface 4062.
[0132] The mobile computing device 4050 can communicate wirelessly through the communication interface 4066, which can include digital signal processing circuitry where necessary. The communication interface 4066 can provide for communications under various modes or protocols, such as GSM voice calls (Global System for Mobile communications), SMS (Short Message Service), EMS (Enhanced Messaging Service), or MMS messaging (Multimedia Messaging Service), CDMA (code division multiple access), TDMA (time division multiple access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), among others. Such communication can occur, for example, through the transceiver 4068 using a radio-frequency. In addition, short-range communication can occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, a GPS (Global
Positioning System) receiver module 4070 can provide additional navigation- and location- related wireless data to the mobile computing device 4050, which can be used as appropriate by applications running on the mobile computing device 4050.
[0133] The mobile computing device 4050 can also communicate audibly using an audio codec 4060, which can receive spoken information from a user and convert it to usable digital information. The audio codec 4060 can likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device 4050. Such sound can include sound from voice telephone calls, can include recorded sound (e.g., voice messages, music files, etc.) and can also include sound generated by applications operating on the mobile computing device 4050.
[0134] The mobile computing device 4050 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a cellular telephone 4080. It can also be implemented as part of a smart-phone 4082, personal digital assistant, or other similar mobile device.
[0135] Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
[0136] These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms machine-readable medium and computer-readable medium refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine- readable medium that receives machine instructions as a machine-readable signal. The term machine-readable signal refers to any signal used to provide machine instructions and/or data to a programmable processor.
[0137] To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or
LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
[0138] The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
[0139] The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Example Implementation 1:
[0140] FIGs. 44-54 illustrate an example implementation for systems and methods that aid in the creation and operation of a cybersecurity program. Example implementation 1 includes methods and systems for creating and operating procedure, policy, and evidence for cyber security programs. Some examples include systems and methods for creating and operating a cybersecurity program to include documenting procedure, policy, and evidence among other artifacts. In one aspect, a method for creating and operating a cybersecurity program is disclosed. The method includes receiving inputs providing information about the organization and creating steps to operate the program (procedures), rules the organization must abide by (policies), and verification that the steps were completed properly (evidence), among other artifacts.
[0141] FIG. 44 illustrates a group of related program procedures. The group is referred to as an “activity.” Each row contains the contents of one procedure, and each column shows the corresponding aggregator. FIG. 45 illustrates a user interface that allows a certain user (in this case, the “facility security officer”) to see which portions of the activity are their responsibility, along with detailed descriptions of what they need to complete. FIG. 46 illustrates a user interface that allows a certain user (in this case, the “Govern Team” member) to see which portions of the activity are their responsibility, along with detailed descriptions of what they need to complete. FIG. 46 only shows the procedures associated with the variant the organization selected. FIG. 47 provides another example of the interface shown in FIG. 46, where the organization has selected to ban portable storage. In some examples, the instructions provided to the Govern Team member have changed as a result of that selection by the organization. FIG. 48 shows a map of major activities, or groups of program procedures grouped by related implementation.
[0142] As discussed above, example implementation 1 describes systems and methods that aid in the creation and operation of a cybersecurity program.
[0143] Various entities have cybersecurity programs. For each entity there are multiple stakeholders and systems with different perspectives, priorities, and experience. For example, in the cybersecurity space, stakeholders and systems take on general specializations. Individuals and systems specializing in governance are concerned with ensuring the entity is compliant with standards and the law. Individuals and systems that specialize in technology are concerned with implementing systems for the business. Individuals and systems that specialize in defending from and responding to threats are concerned with locking systems down and protecting them.
[0144] For each procedure, a corresponding policy and evidence are attached to the procedure. When the cybersecurity program is modified, a rule is put in place for enforcing the procedure, and a proof is generated to verify the procedure happened. This collection of procedure, policy, and evidence (among other artifacts) is collectively referred to in example implementation 1 as “program procedures.”
[0145] Different entities can have different applicable procedures. The system maintains a large repository of program procedures that allows an entity to access the system, input information about the entity, and receive a set of program procedures that is curated to the organization.
[0146] After an organization is given a set of program procedures, the computer security management system creates a set of views that is intended to be helpful for different audiences and systems. Relevant procedures are aggregated, and displayed with steps to be performed (in some examples, automatically) in order to configure and operate the program, as well as instructions on how to generate the evidence. The policies are aggregated, and
displayed as an organization policy (for example, so they can be signed by members of the team). The evidence is gathered and displayed in a manner that allows an auditor system to quickly run through and assess each procedure individually. In example implementation 1, the gathering of these items is sometimes referred to as “aggregation,” and the gathered items are grouped by aggregators (e.g., procedures, policy, evidence).
[0147] There can be more aggregators than procedures, policy, and evidence. An aggregator can be any collection of information for a set of procedures. Examples include: variant (e.g., maps the procedure to a group of similar procedures that would apply to a specific entity scenario), advisory (e.g., a collection of information that provides a high-level description of the purpose of the procedure), auditing (e.g., a collection of questions that provide quality control against procedures, and gathers the answers to those questions automatically), inventory (e.g., maps a procedure to a characteristic of an item that needs to be inventoried), system catalog (e.g., maps a procedure to a characteristic of a system that needs to be configured), schedule (e.g., identifies how often the procedure needs to happen, and schedules it automatically), responsibility (e.g., identifies who is responsible for completing the procedure), and rules (e.g., maps the procedure to a restriction or requirement of a regulation).
[0148] Program procedures can be organized in different ways, depending on the requirements of the system. For example, if the user or system is an implementer, program procedures can be grouped by “major activity.” This groups program procedures so that similar tasks can be completed at the same time. In another example, if the user or system is an assessor, program procedures can be grouped by rules to expedite the assessment process. [0149] In some examples, the assessment process includes a step to map procedures where the computer security management system maps the organization’s procedures and correlates them to rules (in some examples using machine learning to automatically detect relationships between procedures and rules). Because the system maintains a relationship between procedures and rules, this task is already completed before the assessment, and allows the organization to see that the computer security management system and/or Organization Security Platform have implemented all of the tasks needed for compliance before an assessment happens.
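The two groupings described above, by “major activity” for implementers and by rules for assessors, can be sketched over the same underlying records. The procedure records and rule identifiers below are hypothetical examples.

```python
from collections import defaultdict

# Hypothetical records: each procedure carries a major activity and the rules
# it helps satisfy.
procedures = [
    {"id": "MP-01", "major_activity": "Media Protection", "rules": ["3.8.7"]},
    {"id": "MP-02", "major_activity": "Media Protection", "rules": ["3.8.8"]},
    {"id": "AC-01", "major_activity": "Access Control", "rules": ["3.1.1", "3.1.2"]},
]

def group_by(procedures, view):
    """Group procedure ids under each value of the requested view."""
    grouped = defaultdict(list)
    for proc in procedures:
        keys = proc[view] if isinstance(proc[view], list) else [proc[view]]
        for key in keys:
            grouped[key].append(proc["id"])
    return dict(grouped)

print(group_by(procedures, "major_activity"))  # implementer view
print(group_by(procedures, "rules"))           # assessor view
```

Because the procedure-to-rule relationships are maintained in the records themselves, the assessor view is already complete before an assessment begins.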
[0150] The computer security management system is configured to be continuously updated. For example, by removing outdated program procedures and replacing them with new program procedures. As discussed above, program procedures have all of their dependencies related to them. This allows the computer security management system to
change by removing and adding procedures without affecting the other ones, or forgetting about certain portions during updates. Examples of sources of change that may call for the replacement, addition, or removal of a program procedure include: (1) platform-level changes to the computer security management system (e.g., based on new best practices, more efficient procedures, and/or to add support for more scenarios); (2) user-initiated changes (e.g., a customer may submit a request for a change, such as to customize a process or add a new procedure); (3) changes in technology (e.g., a transition to cloud computing); (4) newly identified vulnerabilities (e.g., as vulnerabilities are discovered in existing and new systems, procedures need to change to address those vulnerabilities; in one example, the system examines inventories to check for devices with hardware vulnerabilities that have been discovered and updates procedures for these devices); and (5) newly identified threats (for example, updating procedures to block sign-ins from certain IP address spaces better protects against identified threats).
[0151] The computer security management system can maintain a database of procedures and a database of industry products. The entries in the database can contain a security impact analysis (SIA) and implementation guidance. The security impact analysis is an analysis of a product to determine the potential risks associated with its use. Examples of the things that can be analyzed about a product include infiltrations of that product (e.g., when was the attack first noticed, was the attack noticed internally or externally, how long after the attack was a publication put out, how was the attack handled, how much transparency does the organization have about the attack), methods used to protect that product (e.g., when signing in, does the product use Multi-Factor Authentication, does the product use encryption, are security patches released frequently for the product), and/or transparency around the product (e.g., is the product open source, has the product gone through any third-party audits, inspections, or assessments).
[0152] After the analysis, the computer security management system provides an interpreted risk level for use of a product. This allows organization computing systems to interface with the computer security management system to determine information about a particular product.
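Turning security impact analysis answers into an interpreted risk level can be sketched as a simple scoring function. The factors, weights, and thresholds below are assumptions for illustration; the disclosure does not specify a scoring scheme.

```python
# Hypothetical scoring: each unmet protection adds to the risk score, as does
# each known infiltration of the product.
def interpret_risk(sia: dict) -> str:
    """Map security impact analysis answers to a coarse risk level."""
    score = 0
    score += 0 if sia["uses_mfa"] else 2
    score += 0 if sia["uses_encryption"] else 2
    score += 0 if sia["third_party_audited"] else 1
    score += sia["known_infiltrations"]
    if score <= 1:
        return "low"
    if score <= 3:
        return "moderate"
    return "high"

print(interpret_risk({"uses_mfa": True, "uses_encryption": True,
                      "third_party_audited": False,
                      "known_infiltrations": 0}))  # → low
```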
[0153] After the security impact analysis, the computer security management system provides guides on how to use products to implement the procedures described in the program procedures. For example, a procedure regarding “disabling portable storage media on endpoints” corresponds with guides on how to do so for multiple providers of endpoints.
[0154] Major activities are mapped in FIG. 48. They can be organized in a certain order for an organization to perform during setup, and later perform routinely.
[0155] FIG. 49 displays an example architecture of the computer security management system. The system communicates securely with organization cloud systems to facilitate implementing procedures, and to verify that the implemented procedures are functioning properly. The computer security management system contains automation to conduct quality control checks on organization systems. The computer security management system provides a user interface for organization members to sign in and input or receive information. The assessor communicates via internet video call to conduct assessments, or conducts assessments in person. The assessor can also sign in to the portal to view evidence and procedures easily. [0156] FIG. 50 shows how program procedures relate to organizations. Organizations obtain the specific program procedures that apply to them, and then those program procedures are aggregated into documentation for that organization. Different organizations use different program procedures depending on their business models, requirements, and preferences. This results in tailored documentation for different organizations that is easy to modify as organizations evolve, as program procedures can be added or removed using the computer security management system.
[0157] In some aspects, systems and methods for creating and maintaining security procedures for organizations are disclosed. In one aspect, methods for creating and maintaining security procedures are disclosed. The method includes receiving inputs that provide information about an organization, including the regulations the organization must abide by, and outputting procedures that meet the regulations and are tailored to the organization. Additionally, the methods output evidence to prove to regulators that regulations are being met. The method accomplishes this by bundling snippets of procedure, evidence, and company policy and providing only certain snippets to a requesting system.
[0158] FIG. 51 illustrates an example major activity.
[0159] FIG. 52 illustrates an example media protection program procedure.
[0160] FIG. 53 illustrates an example group of media protection program procedures.

[0161] FIG. 54 illustrates example flow diagrams in accordance with the examples described in example implementation 1.
Example Implementation 2:
[0162] As discussed above, computer security has become a paramount concern for businesses of all sizes. Implementing a comprehensive computer security policy is a complex
task that requires a deep understanding of various security principles, business principles, technologies, and best practices. The fast-paced nature of technological advancement means that security threats are constantly evolving. Keeping up with these changes and ensuring that security measures are up-to-date is a significant challenge.
[0163] In some example industries, organizations may be required to comply with various regulations related to data security and privacy. Navigating these regulations and ensuring compliance is a resource-intensive task for many organizations.
[0164] Compounding these challenges is the fact that there is interplay between each of the elements of a security program. The organizational policies drive the technical decisions, the technical decisions influence the use of various computing systems, the use of the computing systems create security requirements, and the means available to meet the security requirements also shape system use. Additional security requirements are driven by regulatory or contractual obligations. Designing a functional program that meets business, regulatory and technical needs involves the interplay of hundreds of factors.
[0165] Operating the security program, once designed, includes several challenges, as the procedures designed to improve security and privacy add complexity and cost; for example, in many cases an organization must keep detailed records of current state, or “baseline”, so that deviations from the baseline can be investigated as possible security concerns.
[0166] Once a security program is in place, many regulatory and contractual structures require organizations to demonstrate that the program is working as intended, adding the generation of evidence and auditing information to the already large task of running the program.
[0167] In addition to demonstrating the value of the current security program, requirements change, driving the need to update the program for it to retain its value. These changes can come from multiple dimensions, including (but not limited to) changes in the organization’s regulatory or contractual obligations, changes in the organization's business model, and changes in the underlying technology the program relies on (e.g., Microsoft or Amazon releases new capabilities in their cloud). Advances in adversary techniques mean that new threats force changes, and the discovery of flaws in existing systems also forces changes.
[0168] Small and medium-sized organizations face particularly unique challenges in implementing effective computer security programs. For example, small and medium-sized organizations often operate with limited financial and human resources. This can make it difficult to invest in advanced security infrastructure or hire dedicated IT security personnel.
[0169] Example Implementation 2 describes systems and methods for generating, implementing, verifying, operating and updating a security program for organizations. Security programs can include security procedures, security policies, security evidence, and other security systems, frameworks, and processes.
[0170] A method for generating, implementing, and verifying a security program for an organization is disclosed. The method includes gathering inputs about best practices, organization, technologies, regulations, vulnerabilities, and threats, and generating the security program by processing the inputs. The outputs of the cybersecurity program generator include policies, advisories, instructions, actions, logs, audits, inventories, evidences, and reports.
[0171] FIG. 55 illustrates an example of the cybersecurity program generation environment 5500 for generating, implementing, and verifying a security program. The environment 5500 includes gathering inputs about best practices 5502, the organization 5504, technologies 5506, regulations 5508, vulnerabilities 5510, and threats 5512. FIG. 55 also illustrates the Cybersecurity Program Generator 5514, as well as its internal workings: develop actions 5516, establish policies 5518, communicate advisories 5520, distribute instructions 5522, execute actions 5524, maintain inventories 5526, collect logs 5528, document evidences 5530, conduct audits 5532, and build reports 5534.
[0172] The Cybersecurity Program Generator 5514 is configured to generate, implement, and verify a cybersecurity program for an organization. In some examples, the cybersecurity program generator 5514 is configured to gather inputs 5502-5512 and interpret them to develop actions 5516. Once the actions are developed, they are then used to establish policies 5518 for the organization, communicate advisories 5520 to the organization, and distribute instructions 5522 to members of the organization that need to complete actions. This leads to execute actions 5524, where actions are taken either by the generator or by individuals. Actions describe any activity performed in the pursuit of managing or supporting a cybersecurity program. After actions are taken, they are logged, and those logs are collected 5528 so that the generator may conduct audits 5532 to ensure the program is running as it is supposed to. Actions may necessitate maintaining inventories 5526, which the system may update automatically depending on the action taken. Inventories and audits, among other things, are then documented as evidences 5530. Audits are also used to build reports 5534 for the organization and for the generator so that further required actions are taken, either automatically by the system or by individuals, and discrepancies are corrected.
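The stage-by-stage flow of FIG. 55 can be sketched, under heavy simplification, as a pipeline: inputs are interpreted into actions, actions yield policies, execution is logged, and logs feed an audit that produces a report. The function names and decomposition below are illustrative assumptions, not the patented implementation:

```python
# Simplified sketch of the generator loop (reference numerals from FIG. 55 in comments).
def run_generator(inputs):
    actions = develop_actions(inputs)        # 5516: interpret inputs into actions
    policies = establish_policies(actions)   # 5518: one policy per action
    logs = [execute(a) for a in actions]     # 5524/5528: execute and log
    audit = conduct_audit(logs, policies)    # 5532: check logs against policies
    return build_report(audit)               # 5534: report for the organization

def develop_actions(inputs):
    return [f"mitigate:{risk}" for risk in inputs["threats"]]

def establish_policies(actions):
    return {a: f"policy for {a}" for a in actions}

def execute(action):
    return {"action": action, "status": "done"}

def conduct_audit(logs, policies):
    # Every logged action must have completed and have a corresponding policy.
    return all(log["status"] == "done" and log["action"] in policies for log in logs)

def build_report(audit_passed):
    return {"compliant": audit_passed}

report = run_generator({"threats": ["phishing", "usb-malware"]})
```

The point of the sketch is the closed loop: audit output feeds the report, which in a fuller system would flow back into develop actions 5516, as the specification describes for build reports 5534.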
[0173] In the example shown, there are six types of inputs grouped by source of change. The best practices 5502 inputs include inputs that define cyber security options. The organization 5504 inputs include inputs defining changes in organizational practices and organizational requirements (e.g., a requirement to move large files between non-networked devices). The technologies 5506 inputs define options for technologies (e.g., as may be updated as new technologies/devices are integrated). The regulations 5508 inputs include regulatory requirements (e.g., a contractual requirement to protect data on flash drives). In some examples, the regulations 5508 include contractual requirements. The vulnerabilities 5510 and threats 5512 inputs include inputs defining or describing external risks to a cyber security program.

[0174] In some examples, the inputs 5502 and 5508 influence what actions are picked (e.g., influence a list of solutions). In some examples, the inputs 5502 and 5506 influence what choices are available for the security program (e.g., influence a list of requirements). In some examples, the inputs 5510 and 5512 determine what the potential risks are for the organization (e.g., adversarial externalities and/or a list of risks or problems with the solutions). The inputs combine into a set of decisions at 5516.
[0175] In one example, a business may have a requirement to move large files between non-networked devices and a contractual requirement to protect data on flash drives. The actions developed at 5516 will consider using flash drives to move files between non-networked devices but will need to consider the contractual requirement (e.g., by developing encryption and protocols for managing the flash drives). Several actions can be identified to mitigate risk when using the flash drives. In an alternative example, the business does not have a requirement to use flash drives, and the actions identified would include actions for not allowing the use of flash drives.
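The flash-drive example above reduces to a small piece of selection logic: a business requirement for offline transfer admits flash drives as a solution, a contractual obligation layers protection actions on top, and absent the requirement the drives are prohibited instead. This is an illustrative sketch of that reasoning, not the patented decision engine:

```python
def develop_actions(needs_offline_transfer, must_protect_flash_data):
    """Pick actions for the flash-drive scenario from two boolean inputs:
    an organizational requirement (5504) and a contractual obligation (5508)."""
    if not needs_offline_transfer:
        # No business need for flash drives: disallow them entirely.
        return ["prohibit flash drives"]
    actions = ["approve flash drives for offline transfer"]
    if must_protect_flash_data:
        # The contractual requirement adds protective actions on top.
        actions += ["encrypt flash drives", "track flash drive custody"]
    return actions
```

The real system weighs hundreds of such interacting factors; the sketch shows only how two inputs can interact to produce conflicting or layered action sets.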
[0176] After the actions are identified (e.g., at 5516 based on the inputs 5502-5512), the policies (5518) can include documents given to users for the user to acknowledge (e.g., acknowledge the policy) and can include required formats for communicating decisions (e.g., a system security plan, “SSP”). Communicate advisories 5520 includes decision support for users and systems. This can include documentation for organization members and/or a data structure correlating actions to the impacts of those actions, allowing the AI cybersecurity program generator 5514 to consider the impacts when selecting actions (e.g., conflicts, compatibility of actions, costs, etc.). In some examples, the communicate advisories 5520 determines the most important decisions and flags these decisions for human review for a set of decisions. Some examples include thousands or more decisions that may or may not be compatible, may or may not reinforce each other, and that have different costs.
[0177] Some examples include a database of requirements, a database of solutions, and a database of adversarial/inherent risk. In some examples, the database (e.g., the actions database 5517) stores a set of actions, a set of decision criteria, and other instructions. In some examples, the instructions and policies distributed at 5522 include snippets of a security program (e.g., lines of code, implementation steps, definitions, required data to store, tracking components of the system, compliance documentation (e.g., content to include in artifacts, etc.), content to include in a change log, risk data, content for a support ticket system (e.g., to automatically generate a support ticket), code to implement an action, and elements (e.g., at 5530) that describe ways to test and monitor the implementation of actions).
[0178] In some examples, the actions database 5517 receives action request queries with any combination of the inputs 5502-5512 and identifies corresponding actions that are used to identify the actions at 5516. In some examples, artificial intelligence is used to identify actions in the actions database 5517 to generate the actions. The actions are then provided to generate documentation at the establish policies 5518 and implementation instructions at the distribute instructions 5522.
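One possible shape for such an action-request query is shown below: each stored action lists the input conditions it addresses, and a query returns every action whose conditions intersect the supplied inputs. The schema, action names, and condition labels are assumptions for illustration only:

```python
# Illustrative actions database (5517): each row maps an action to the
# input conditions (threats, regulations, vulnerabilities, ...) it addresses.
ACTIONS_DB = [
    {"action": "deploy disk encryption", "addresses": {"regulation:data-at-rest"}},
    {"action": "patch VPN appliance", "addresses": {"vulnerability:vpn-flaw"}},
    {"action": "enable MFA", "addresses": {"threat:credential-theft",
                                           "regulation:access-control"}},
]

def query_actions(inputs):
    """Return actions whose addressed conditions intersect the query inputs."""
    return [row["action"] for row in ACTIONS_DB if row["addresses"] & inputs]

# A query combining a threat input (5512) and a regulation input (5508).
selected = query_actions({"threat:credential-theft", "regulation:data-at-rest"})
```

A machine-learning layer, as the specification contemplates, would rank or filter these candidates rather than returning every match; the set intersection stands in for that retrieval step.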
[0179] In some examples, the inputs 5502-5512, the develop actions 5516, and the communicate advisories 5520 are referred to as a decision engine. The decision engine is configured to identify actions that are used to establish policies 5518 and distribute instructions 5522.
[0180] The policies include documentation for consumption (e.g., by a system or human) and the instructions include steps to implement actions (e.g., computer code or processes for implementing an instruction). In some examples, the policies and instructions are referred to as artifacts. In some examples, the artifacts are stored in a database (e.g., the artifacts database 5535). In some examples, the decision engine also establishes high level questions to raise to a user, where the answers to the high level questions determine a list of specific actions that correspond to the answers given. For example, a high level question may ask what types of files are transferred in an organization, which will generate specific actions related to the use of flash drives.
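The high-level-question mechanism just described can be sketched as a simple expansion table: the answer to one broad question about the organization expands into the specific actions that correspond to it. The question text and actions below are hypothetical:

```python
# One high-level question and its answer-to-actions expansion (illustrative).
QUESTION = "Does the organization transfer files between non-networked systems?"

ANSWER_TO_ACTIONS = {
    "yes": ["approve encrypted flash drives", "log flash drive transfers"],
    "no": ["disable USB mass storage on endpoints"],
}

def actions_for(answer):
    """Expand a user's answer into the specific actions it implies."""
    return ANSWER_TO_ACTIONS[answer.strip().lower()]
```

A handful of such questions lets a non-expert drive the selection of many specific actions without confronting each decision individually, which is the point of the decision-engine feature described above.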
[0181] The policies and instructions are implemented at the execute actions 5524. The execute actions 5524 can include computer-readable code for systems and processes for collecting logs (at 5528), maintaining inventories (at 5526), conducting audits (at 5532), documenting evidences (at 5530), building reports (at 5534), or combinations thereof. In some examples, the collect logs 5528, maintain inventories 5526, conduct audits 5532, and the document evidence 5530 are referred to as aggregators.
[0182] In some examples, the build reports 5534 may determine that an action is not working and will provide feedback to the decision engine (e.g., as another input at develop actions 5516). In some examples, the feedback is used to automatically update the security program (e.g., by updating the actions selected). In some examples, the feedback is provided to update a data model mapping the inputs to actions to improve the selection of those actions. In some examples, the outputs of the reports are stored in the artifacts database 5535. The artifacts database 5535 can be analyzed (e.g., automatically using AI) to update the actions database 5517. For example, the associations between requirements and actions can be updated when it is determined from the reports that the previously identified actions did not meet the requirements (e.g., business requirements or regulatory/contractual requirements).

[0183] FIG. 56 describes the configuration deployment environment 5600. The cybersecurity program generator 5514 is configured to implement baselines on organization systems 5602. In some examples, the cybersecurity program generator 5514 develops actions 5516 to include the deployment of configurations to systems. The cybersecurity program generator then executes actions 5524, including deployments of configurations to systems. The configurations are deployed over a network 120. The system applies configurations 5604 to organization systems 5602.
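The deployment step of FIG. 56 can be sketched as pushing a baseline configuration to each organization system and recording the result. Here the "apply" step is just a dictionary merge; a real deployment would call a configuration-management API, and all names below are illustrative assumptions:

```python
def deploy_baseline(baseline, systems):
    """Apply a baseline configuration to every organization system (5602)
    and return a per-system deployment status."""
    results = {}
    for name, config in systems.items():
        config.update(baseline)  # apply configurations (5604); mutates in place
        results[name] = "configured"
    return results

# Two organization systems with differing starting states (illustrative).
systems = {"laptop-1": {"av": "off"}, "server-1": {}}
status = deploy_baseline({"av": "on", "firewall": "on"}, systems)
```

After deployment, the systems' recorded state matches the baseline, which is what later baseline-deviation checks would verify against.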
[0184] While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially be claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
[0185] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all
embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
[0186] Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.
Claims
1. A method for generating a security program for an organization, the method comprising: receiving, by a computer system comprising at least one processor and memory, inputs providing information about the organization; generating, by the computer system, the security program by processing the inputs describing the information about the organization, the security program including assessment objectives for the security program; and executing a security management model, by the computer system, using i) the information about the organization and ii) the security program as model inputs to the security management model to generate output including i) implementation procedures defining steps for implementing the security program and ii) verification procedures defining steps for verifying the assessment objectives of the security program.
2. The method of claim 1, wherein the information about the organization includes high level operational decisions.
3. The method of claim 2, wherein the security management model is used to convert the high level operational decisions into the security program with specific assessment objectives.
4. The method of claim 1, the method further comprising: executing, by the computer system, first computer-readable instructions of the security program, wherein at least a portion of the first computer-readable instructions defined by the implementation procedures are performed automatically.
5. The method of claim 1, the method further comprising: executing, by the computer system, second computer-readable instructions of the security program, wherein at least a portion of the second computer-readable instructions defined by the verification procedures are performed automatically.
6. The method of claim 1, wherein the computer system comprises a server communicably coupled to one or more client devices over a data network.
7. The method of claim 6, wherein the method further comprises determining, by a second server operating as an auditor system, that the computer system is performing i) the implementation procedures and ii) the verification procedures.
8. The method of claim 1, wherein the output further includes at least one selected from a group comprising: an evidence program procedure; a policy program procedure; a responsibility assignment; a frequency setting; an organization inventory; a system configuration baseline; a mapping of program procedures to rules; and a mapping of program procedures to the assessment objectives.
9. A cyber security program generation method comprising: obtaining requirement data defining operational requirements of an organization (5504) and operational limitations of the organization (5508); obtaining cyber security solution data defining security operations (5502) and security technologies (5506) representing possible elements of a cyber security program for the organization; obtaining, based on the requirement data and the cyber security solution data, risk data defining potential cyber security vulnerabilities (5510) and cyber security threats to the cyber security program (5512); selecting, based on a) the requirement data, b) the cyber security solution data, and c) the risk data, a set of actions from a database of cyber security actions, wherein the cyber security actions each comprise executable instructions for implementing a portion of the cyber security program, and wherein selecting the set of actions, optionally, comprises providing a) the requirement data, b) the cyber security solution data, and c) the risk data as an input vector to a machine learning model trained to determine action sets for
implementation of cyber security programs that balance conflicts between the requirement data, the cyber security solution data, and the risk data; and configuring the cyber security program by executing the actions of the set of actions, the configuring comprising: generating one or more cyber security policies for the cyber security program based on crawling a policy database to identify a plurality of policy snippets relevant to executing the set of actions, and aggregating the plurality of policy snippets into the cyber security policies; and generating one or more cyber security instructions for the cyber security program based on crawling an instruction database to identify a plurality of machine executable instruction snippets relevant to executing the set of actions, and aggregating the plurality of machine executable instruction snippets into the cyber security instructions.
10. The method of claim 9, further comprising: monitoring execution of the cyber security policies and the cyber security instructions to verify implementation of the cyber security program to obtain monitoring data.
11. The method of claim 10 further comprising archiving the monitoring data in an auditable database to verify that the cyber security program is executing properly.
12. The method of claim 10, further comprising: determining, based on the monitoring data, that at least one action of the set of actions is no longer effective for the cyber security program, and in response, selecting a new action to replace or modify the at least one action; and reconfiguring the cyber security program by executing the new action.
13. A system for automatic generation of computer-readable code, the system comprising: one or more processors; memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform actions comprising: receiving, by a computer system comprising at least one processor and memory, inputs providing information about the organization;
generating, by the computer system, the security program by processing the inputs describing the information about the organization, the security program including assessment objectives for the security program; and executing a security management model, by the computer system, using i) the information about the organization and ii) the security program as model inputs to the security management model to generate output including i) implementation procedures defining steps for implementing the security program and ii) verification procedures defining steps for verifying the assessment objectives of the security program.
14. The system of claim 13, the system further comprising a server that comprises the one or more processors and the memory.
15. The system of claim 14, the system further comprising a second server operating as an auditor system, the auditor system configured to determine if the server is performing i) the implementation procedures and ii) the verification procedures.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463641787P | 2024-05-02 | 2024-05-02 | |
| US63/641,787 | 2024-05-02 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025231377A1 true WO2025231377A1 (en) | 2025-11-06 |
Family
ID=97562321
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2025/027520 Pending WO2025231377A1 (en) | 2024-05-02 | 2025-05-02 | Methods and systems for generating, implementing, and verifying computer security programs |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025231377A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150288722A1 (en) * | 2011-08-09 | 2015-10-08 | CloudPassage, Inc. | Systems and methods for implementing security |
| US20190207968A1 (en) * | 2018-01-02 | 2019-07-04 | Criterion Systems, Inc. | Methods and Systems for Providing an Integrated Assessment of Risk Management and Maturity for an Organizational Cybersecurity/Privacy Program |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25798814; Country of ref document: EP; Kind code of ref document: A1 |