US20170308452A1 - Method and system for providing information from third party applications to devices - Google Patents
- Publication number
- US20170308452A1 (U.S. application Ser. No. 15/499,616)
- Authority
- United States
- Prior art keywords
- alert
- log
- rules
- new log
- alerts
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3065—Monitoring arrangements determined by the means or processing involved in reporting the monitored data
- G06F11/3072—Monitoring arrangements determined by the means or processing involved in reporting the monitored data where the reporting involves data filtering, e.g. pattern matching, time or event triggered, adaptive or policy-based reporting
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/56—Computer malware detection or handling, e.g. anti-virus arrangements
- G06F21/566—Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/02—Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
- H04L63/0227—Filtering policies
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1425—Traffic logging, e.g. anomaly detection
Definitions
- FIG. 1A illustrates an example system 100 for providing information from third party applications to devices, according to one embodiment.
- FIG. 1B illustrates details of an example alert system 103 , according to one embodiment.
- FIG. 2 illustrates an example method 200 for providing information from other applications to electronic devices, according to one embodiment.
- FIG. 3 illustrates an example process 205 for checking third party log files, according to one embodiment.
- FIG. 4 illustrates an example communication process 215 for determining whether any rules have been violated, according to one embodiment.
- FIG. 5 illustrates an example alert creation/addition process 220 for creating/adding alerts to an alert queue if rules have been violated, according to one embodiment.
- FIG. 6 illustrates an example process 225 for an alert processor to check the alert queue for available alerts and send the alerts, according to one embodiment.
- FIG. 7 illustrates an example table diagram with relationships of the system SQL database 130 , according to one embodiment.
- FIGS. 8-11 are example screen shots which may be utilized in one embodiment of the invention.
- FIG. 12 illustrates a blocking mechanism for blocking source IP addresses, according to one embodiment.
- FIG. 13 illustrates an example blocking mechanism, according to an embodiment.
- FIG. 14 illustrates an example mechanism that uses an alert to perform a function(s), according to an embodiment.
- FIG. 1A illustrates a system 100 for providing information from third party applications to devices, according to one embodiment.
- system 100 may include, but is not limited to a device 101 , a network 102 , and an alert system 103 .
- the device 101 may comprise, though is not limited to, any mobile device (e.g., pager, personal digital assistant, phone, i-phone, etc.) and/or any non-mobile device (e.g., personal computer, lap-top computer, etc.).
- the electronic device 101 may utilize a user interface that displays information received from the alert system 103 .
- the network 102 may include, but is not limited to the Internet and/or an intranet.
- FIG. 1B illustrates details of the alert system 103 , according to one embodiment.
- the alert system 103 may include, though is not limited to a service 110 , setup/maintenance screens 135 , and/or a database 130 .
- the alert system 103 may access a third party log file 105 , which may be a record of all system exceptions, anomalies, events, etc., tracked by a third-party application. In certain embodiments, this information is recorded chronologically.
- the service 110 may be a platform or component that includes, though is not limited to, a log monitor 113 , a rules engine 115 , an alert engine 120 , and/or an alert processor 125 .
- the log monitor 113 is configured to monitor the third party log files 105 from third party applications.
- the third party applications may include any application that runs on a computer, including, but not limited to, a web server application firewall (e.g., DOTDEFENDER), a personal computer firewall (e.g., MCAFEE, TREND MICRO), a computer operating system (e.g., MS WINDOWS), parental control software (e.g., WEBWATCHER, CONTENT PROTECT), an automated teller machine (e.g., NCR, Triton), a server system (e.g., MS IIS, MS SQL), or any combination(s) thereof.
- the log monitor 113 may also be programmed to check the third party log files 105 for one or more new log entries every predetermined time unit (e.g., a predetermined time interval X, such as one second, thirty seconds, one hour, one day, one week).
- the time unit may be configured by the user and/or by a computer. If any new log entries are found by the log monitor 113 when checking the third party log file 105 , the new log entry information may then be sent to the rules engine 115 for processing. If no new log entries are found, the log monitor 113 may wait for the next time unit to check again for new log entries.
- the rules engine 115 remains in a sleep state until a new log entry is passed to it.
- the rules engine 115 may be configured to check the new log entry against each active rule in the database 130 that includes filtration criteria.
- An active rule may be defined as a record in a rule table of the database 130 that contains at least one of the filtration criteria.
- the filtration criteria may include, though are not limited to, one or more of the following: type of event, severity of event, velocity of event, source of event, or any combination thereof. If information regarding any new log entry meets the predefined filtration criteria, the information for that new log entry may be passed to the alert engine 120 .
- the alert engine 120 may be configured to create alerts and add alerts to an alert queue.
- an alert processing component 125 may be configured to check the alert queue for available alerts and process any available alerts.
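To make the division of labor among these components concrete, here is a minimal Python sketch of how a service like 110 could be wired together. The class names, the dictionary-shaped log entries, and the predicate-style rules are illustrative assumptions, not details taken from the patent.

```python
import queue
import time

class LogMonitor:
    """Checks the third-party log (a list of dicts here) for entries newer than the last one seen."""
    def __init__(self, rules_engine):
        self.rules_engine = rules_engine
        self.last_seen = None

    def check(self, log_entries):
        for entry in log_entries:
            if self.last_seen is None or entry["timestamp"] > self.last_seen:
                self.last_seen = entry["timestamp"]
                self.rules_engine.process(entry)

class RulesEngine:
    """Passes entries that meet any active rule's filtration criteria to the alert engine."""
    def __init__(self, active_rules, alert_engine):
        self.active_rules = active_rules
        self.alert_engine = alert_engine

    def process(self, entry):
        for rule in self.active_rules:
            if rule(entry):                       # each rule is modeled as a simple predicate
                self.alert_engine.create_alert(entry, rule)

class AlertEngine:
    """Creates alerts and places them on the shared alert queue."""
    def __init__(self, alert_queue):
        self.alert_queue = alert_queue

    def create_alert(self, entry, rule):
        self.alert_queue.put({"entry": entry, "rule": rule.__name__})

class AlertProcessor:
    """Drains the alert queue and delivers each available alert."""
    def __init__(self, alert_queue, send):
        self.alert_queue = alert_queue
        self.send = send

    def run_once(self):
        while not self.alert_queue.empty():
            self.send(self.alert_queue.get())

# Wire the components together, mirroring the service 110 of FIG. 1B.
def sql_injection_rule(entry):                    # an illustrative "active rule"
    return entry["type"] == "sql_injection"

alerts = queue.Queue()
processor = AlertProcessor(alerts, send=print)
monitor = LogMonitor(RulesEngine([sql_injection_rule], AlertEngine(alerts)))

monitor.check([{"timestamp": time.time(), "type": "sql_injection", "source": "203.0.113.9"}])
processor.run_once()
```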
- the alert system 103 of FIG. 1B may also include setup/maintenance screens 135, which may be used to set up user profiles, rules, mail server information, roles, and other configuration information.
- FIGS. 8-11 are example setup/maintenance screens 135.
- FIG. 8 is an example mail server setup screen that may be used to set up the mail server that alert system 103 may use to send alerts. The user may set mail server settings before any alerts can be sent. Any or all of the mail server name, mail server IP address, mail server username, mail server password, and email address may be entered.
- FIG. 9 is an example user setup screen that can be used to set up new users in the system. A user may be any contact who will receive alerts. Any or all of the user ID, username, first name, last name, cell phone number, and email address may be entered.
- FIG. 10 is an example profile setup screen that may be used to set up profiles.
- a profile may be a list of users. The profiles may allow alerts to easily be configured to be sent to multiple users in a group.
- in order for a user to be added to a profile, the user may first be set up using the user setup screen. Once the user is set up, the user can be added to one or more profiles. Alerts may then be configured to go to certain profiles and/or individual users.
- FIG. 11 is an example service control screen, which may be used to manually start and stop the alert system 103 . If the button displays “start monitoring”, then the alert system 103 is in a stopped state. Clicking the button will start the service. If the button displays “stop monitoring”, then the alert system 103 is in a running state. Clicking the button will then stop the service.
- one or more databases 130 present in or associated with the alert system environment may include multiple types of data utilized by the service 110 and the setup/maintenance screens 135. FIG. 7 is a block diagram showing various illustrative tables and associated relationships within the database 130, according to one embodiment.
- a table of user profiles (tblUserProfiles) 705 may be used to store the users and their contact information, which may include, though is not limited to: a username, a user first name, a user last name, a user phone number, or a user email, or any combination thereof.
- a frequency table (tblFrequency) 710 may be used to store an increment counter variable (frequcondstring), discussed in more detail below, for example, with respect to 430 of FIG. 4 .
- This increment counter variable may store, for example, information about when to check which of multiple criteria each new log entry meets.
- the increment counter variable information may include, for example, information about the frequency of occurrence (e.g., how often to check).
- a rules table (tblRules) 715 may be used to store information about various rules, as discussed above with respect to the rules engine 115 .
- This information may include, for example, the profile ID of the rule, a frequency ID of the rule (e.g., which may be a unique number assigned to identify a particular frequency), and a description of the rule.
- a user profile table (tblProfileUsers) 720 may also be included to build groups of users to be notified by the alert engine 120.
- the groups of users may include, for example, information on the profile ID of the users and the user IDs of the users.
- another profiles table (tblProfiles) 730 may be used to give the groups of users (e.g., those created and stored in tblProfileUsers 720 ) a specific descriptive name and an ID.
- an alert log table (tblAlertLog) 745 may be used to store alerts, and may include a profile ID and rule ID for each alert, as well as information on when each alert was sent.
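The table names above suggest a straightforward relational layout. The following sketch creates an illustrative version of the FIG. 7 schema in SQLite from Python; column names beyond those explicitly mentioned in the description (IDs, contact fields, the frequency condition string) are assumptions, and the real database 130 could use any SQL engine.

```python
import sqlite3

# Illustrative-only version of the FIG. 7 tables; columns not spelled out in the
# description are assumptions about one reasonable layout.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblProfiles (
    ProfileID   INTEGER PRIMARY KEY,
    ProfileName TEXT                              -- descriptive name for a group of users
);
CREATE TABLE tblUserProfiles (
    UserID    INTEGER PRIMARY KEY,
    UserName  TEXT,
    FirstName TEXT,
    LastName  TEXT,
    Phone     TEXT,
    Email     TEXT
);
CREATE TABLE tblProfileUsers (                    -- builds groups of users to be notified
    ProfileID INTEGER REFERENCES tblProfiles(ProfileID),
    UserID    INTEGER REFERENCES tblUserProfiles(UserID)
);
CREATE TABLE tblFrequency (
    FrequencyID     INTEGER PRIMARY KEY,
    FreqOccurrence  TEXT,                         -- how often to check
    frequcondstring TEXT                          -- the increment counter variable (see 430)
);
CREATE TABLE tblRules (
    RuleID      INTEGER PRIMARY KEY,
    ProfileID   INTEGER REFERENCES tblProfiles(ProfileID),
    FrequencyID INTEGER REFERENCES tblFrequency(FrequencyID),
    Description TEXT
);
CREATE TABLE tblAlertLog (
    AlertLogID INTEGER PRIMARY KEY,
    ProfileID  INTEGER REFERENCES tblProfiles(ProfileID),
    RuleID     INTEGER REFERENCES tblRules(RuleID),
    SentAt     TEXT                               -- when the alert was sent
);
""")
conn.execute("INSERT INTO tblUserProfiles VALUES (1, 'jsmith', 'John', 'Smith', '555-0100', 'jsmith@example.com')")
print(conn.execute("SELECT Email FROM tblUserProfiles").fetchall())
```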
- FIG. 2 illustrates an example method 200 for providing information from other applications to electronic devices, according to one embodiment.
- in a checking process 205, the log monitor 113 checks the third party log files (e.g., from the third party applications) for any new log entries.
- in a communication process 210, if there are any new log entries, information about these new log entries may then be sent to the rules engine 115.
- in 215, a process of determining whether or not any of the new log entries violate any rules in the rules engine 115 may be performed.
- in 220, the alert engine 120 may create alerts.
- in 225, the alerts can be added to an alert queue and sent.
- FIG. 3 illustrates an example process 205 for checking third party log files, according to one embodiment.
- a monitoring process 305 may be performed, wherein the log monitor 113 may periodically monitor the third party log file 105 to see if any new log entries have appeared.
- next, at an appointed time period, interval, or other trigger, a determination process 310 may be performed, wherein the log monitor 113 determines if any new log entries have been found.
- the log monitor 113 may be configured to scan the most recent entry in the third party log file 105 and compare the date and time of that entry against the date and time of the last entry found, which may be stored in memory.
- if no date and time is found in memory, the date and time of the log entry may be written to memory and the entry may then be passed to the rules engine 115. If the date and time of the most recent entry is after the date and time of the last entry found, this indicates that a new log entry has been found, and that date and time may then be written to memory over the previous date and time.
- if, at 310, the log monitor confirms that a new log entry has been found, the new log entry information may then be sent to the rules engine 115 for processing. If, at 310, the log monitor confirms that no new log entry has been found, the process may return to 305, where the log monitor 113 may wait for the next time unit or other triggering event to check again for any new log entries.
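A hedged sketch of this date/time comparison is shown below. It assumes a plain-text log whose lines begin with an ISO-8601 timestamp and uses a small file as the "memory" that holds the last entry's date and time; real third-party log formats and the actual storage of the last-seen timestamp will differ.

```python
from datetime import datetime

LAST_SEEN_FILE = "last_seen.txt"   # stands in for the memory holding the last entry's date/time

def parse_timestamp(line):
    # Assumes each log line begins with an ISO-8601 timestamp, e.g.
    # "2017-04-27T10:15:30 blocked SQL injection from 203.0.113.9"
    return datetime.fromisoformat(line.split(" ", 1)[0])

def check_for_new_entries(log_path):
    """One pass of 305/310: return log lines newer than the last date/time seen."""
    try:
        with open(LAST_SEEN_FILE) as f:
            last_seen = datetime.fromisoformat(f.read().strip())
    except (FileNotFoundError, ValueError):
        last_seen = None                          # no date/time in memory yet

    new_entries = []
    with open(log_path) as log:
        for line in log:
            ts = parse_timestamp(line)
            if last_seen is None or ts > last_seen:
                new_entries.append(line.rstrip("\n"))
                last_seen = ts                    # overwrite the previous date/time

    if new_entries:
        with open(LAST_SEEN_FILE, "w") as f:
            f.write(last_seen.isoformat())        # write the new date/time back to memory
    return new_entries                            # each of these would go to the rules engine
```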
- FIG. 4 illustrates details relating to process 215 , to determine if any rules have been violated, according to one embodiment.
- the rules engine 115 may process an active rule on each new log entry in 400 to see if it violates the active rule (e.g., beginning with the first rule in a rules queue).
- the processing of the rules may also take into account filtration criteria.
- the filtration criteria may include, but are not limited to: type of event, severity of event, velocity of event, source of event, or any combination thereof. Additional features of these exemplary filtration criteria are discussed with respect to actions 405 - 420 , below. Note that depending upon the specific embodiment in question, some or all of the exemplary filtration criteria in 405 - 420 may be included.
- the filtration criteria may include, but are not limited to: time of day; type of event (e.g., unauthorized attempt); number of events (e.g., more than 5); number of events within a certain time period (e.g., more than 5 within 5 minutes); key words found in alert; or severity of alert; or any combination thereof (e.g., more than 5 SQL injection attempts within 1 hour from a single source).
- the rules engine 115 may determine whether the type of the log entry is defined as a trigger.
- the rules, which may define each of the triggered types, may be configured by a user of the alert system 103 through the setup and maintenance screens 135.
- the third party application that creates the third party log file 105 defines the type of the log entry. For example, if the third party log file 105 is created by a web server application firewall, there may be various types of entries in the log file (e.g., SQL injection, cross-site scripting, web crawler). By way of illustration and not limitation, a user may select SQL injection and cross-site scripting types as a trigger in a rule, but may not select the web crawler type.
- the SQL injection and cross-site scripting types would exist in the rules table 130 and therefore both entry types would be in the trigger. If the trigger determination routine confirms that the log entry type is not a trigger, the communication process may proceed directly to 420 . If, however, the log entry type is found to be a trigger, then the communication process may proceed to one or more trigger routines, beginning at 410 .
- the rules engine 115 may be configured to determine whether the severity of the subject log entry is defined as a trigger.
- a third party application may include a severity field in its third party log file 105.
- this severity field may indicate the relative importance of each log entry in the third party log file 105 .
- the third party application that creates the third party log file 105 may also define the severity scale.
- a web server application firewall may define severity on a scale of 1 to 5 (e.g., 1 being the least severe and 5 being the most severe). The rule may then define as a trigger a log entry with a severity of 5.
- the rules engine then compares the severity defined by the rule to the severity in the log entry to determine if the severity is a trigger. If the rules engine determines that the log entry severity is not a trigger, the process may proceed to an increment counter routine 430. If, however, the rules engine determines that the log entry severity is a trigger, the process may proceed to 415.
- the rules engine 115 may determine whether the velocity (e.g., frequency) of the log entry is defined as a trigger.
- velocity may be the frequency that a type of log entry occurs.
- velocity may be measured as a number per time period (e.g., minute, hour, day, week, month, year), though it may also be based on other periodic measures.
- the rule might define the velocity frequency trigger for a SQL injection type in a web application firewall as 5 times per hour.
- the rules engine 115 may be configured to compare the frequency defined in the rule against the increment counter variable to determine if the log entry velocity is a trigger.
- if, at 415, the rules engine determines that the log entry velocity is not a trigger, the process may proceed to the increment counter routine 430. If the log entry velocity is a trigger, the process may move to 420.
- in some embodiments, if desired, the filtering criteria (e.g., 405-420) may be modified to create a custom filtration profile for each of several users.
- the rules engine 115 may be configured to determine whether the source of the log entry is defined as a trigger.
- the third party application that creates the third party log file 105 may also define the source. For example, if the third party log file 105 is created by a web server application firewall, there may be a source IP address that gets logged with each log entry (e.g., 192.168.0.76). The rule may select an IP address of 204.234.23.2 as the source. The rules engine 115 may then compare the source defined in the rule against the source in the log entry. If the sources match, the source entry is a trigger. If, at 420 , the rules engine determines that the log entry source is not a trigger, the process may proceed to the increment counter routine 430 . If, however, the rules engine determines that the log entry source is a trigger, the process may proceed to a transmit alert routine 425 .
- in a transmit alert routine 425, information regarding any new log entries that have been filtered by the rules with the filtration criteria, and that the rules engine 115 has determined should be passed to the alert engine 120, may now be passed to the alert engine 120.
- in 430, the increment counter routine may update the increment counter with information on whether any new log entry did or did not meet certain filtration criteria.
- the increment counter may be a multi-dimensional incremental counter, which may store information about which of the multiple criteria each new log entry met.
- the increment counter may store information regarding whether or not each new log entry met the filtration criteria of type, source, severity and/or velocity, etc.
- once the increment counter routine is complete, in 440, it can be determined whether other rules exist that need to be processed. If the end of the rule collection has not been reached, the process may return to 400 (e.g., to check the new log entry against additional rules and their filtration criteria, such as criteria in addition to the filtration criteria discussed in 405-420). If the end of the rule collection has been reached, such that the new log entry does not need to be checked against additional rules, the process may proceed to 445, where the rules engine 115 may return to sleep mode.
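The following sketch puts the 405-420 checks and the increment counter of 430 together. It simplifies FIG. 4 by treating the four checks as a plain conjunction and by keying the counter on (type, source); both are assumptions about one reasonable implementation rather than the patent's exact branch structure.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Multi-dimensional increment counter (430): recent occurrence times keyed by (type, source).
increment_counter = defaultdict(list)

def entry_is_trigger(entry, rule, now=None):
    """Apply the type (405), severity (410), velocity (415), and source (420) checks."""
    now = now or datetime.now()
    key = (entry["type"], entry["source"])
    increment_counter[key].append(now)                        # 430: record this occurrence

    if entry["type"] not in rule["trigger_types"]:            # 405: type defined as a trigger?
        return False
    if entry["severity"] < rule["trigger_severity"]:          # 410: severe enough?
        return False

    window = timedelta(seconds=rule["velocity_window_seconds"])
    recent = [t for t in increment_counter[key] if now - t <= window]
    increment_counter[key] = recent
    if len(recent) < rule["velocity_count"]:                  # 415: occurring often enough?
        return False

    if rule.get("source") and entry["source"] != rule["source"]:   # 420: matching source?
        return False
    return True                                               # 425: pass to the alert engine

# Example rule: five or more severity-5 SQL injection attempts within one hour from one source.
rule = {
    "trigger_types": {"sql_injection", "cross_site_scripting"},
    "trigger_severity": 5,
    "velocity_count": 5,
    "velocity_window_seconds": 3600,
    "source": "204.234.23.2",
}
entry = {"type": "sql_injection", "severity": 5, "source": "204.234.23.2"}
print(entry_is_trigger(entry, rule))   # False until five such entries arrive within the hour
```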
- FIG. 5 illustrates an example alert creation/addition process 220 for creating alerts and/or adding alerts to an alert queue if rules have been violated, according to one embodiment.
- various details of an example alert creation/addition process 220 are shown, including features wherein, if the rules engine 115 has determined that information about a new log entry should be sent to the alert engine 120 , the alert engine 120 may create alerts and/or add alerts to an alert queue.
- in 500, when the alert engine 120 receives a new log entry from the rules engine 115, an alert can be created (e.g., utilizing 505-520, explained below).
- in a first lookup routine 505, the alert type of the new log entry may be looked up by the alert engine 120, which may check the new log entry against each active alert to determine which active alert(s) (e.g., email message, text message, page, cell phone call, etc.) are appropriate for the new log entry (e.g., tblAlertLog 745 may be used to find the alert log ID, the profile ID, and the rule ID; tblRules 715 may be used to find the profile ID, the frequency ID, and a rules description; and tblFrequency 710 may be used to find the frequency ID, the frequency occurrence, and the frequency conditions).
- in a second lookup routine 510, the contact information (e.g., email address, cell phone number, pager number, etc.) listed for the alert type may be found by searching the database 130 (e.g., tblProfileUsers 720, tblUserProfiles 705, and tblProfiles 730 may be used to find the user ID and the profile ID, which can both be used to find the necessary contact information).
- in an alert creation routine 515, an alert may be created by utilizing the information from the new log entry together with the appropriate contact information.
- in an add alert routine 520, the alert may be added to the alert queue.
- at 530, an end of collection check is performed to see if additional alerts need to be created. If additional alerts need to be created, the process may return to 505. If the end has been reached, and additional alerts do not need to be created, the alert engine 120 may return to sleep mode in 535.
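A minimal sketch of this create-and-queue step is given below. The in-memory lists and dictionaries stand in for the profile and user tables consulted in the 510 lookup, and the alert fields (recipient, subject, body) are assumptions about what an email-style alert might carry.

```python
import queue

alert_queue = queue.Queue()

# Simplified in-memory stand-ins for the FIG. 7 tables used by the lookups.
tblProfileUsers = [{"ProfileID": 1, "UserID": 10}]
tblUserProfiles = {10: {"Email": "oncall@example.com", "Cell": "+1-555-0100"}}

def create_alerts(log_entry, rule):
    """FIG. 5 sketch: look up the contacts for the rule's profile, build alerts, queue them."""
    profile_id = rule["ProfileID"]                              # which profile the rule notifies
    user_ids = [row["UserID"] for row in tblProfileUsers
                if row["ProfileID"] == profile_id]              # 510: contacts for that profile
    for user_id in user_ids:
        contact = tblUserProfiles[user_id]
        alert = {                                               # 515: build the alert
            "to": contact["Email"],
            "subject": f"Alert: {rule['Description']}",
            "body": f"{log_entry['type']} from {log_entry['source']} at {log_entry['timestamp']}",
        }
        alert_queue.put(alert)                                  # 520: add it to the alert queue

create_alerts(
    {"type": "sql_injection", "source": "203.0.113.9", "timestamp": "2017-04-27T10:15:30"},
    {"ProfileID": 1, "Description": "SQL injection attempts"},
)
print(alert_queue.qsize())   # 1
```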
- FIG. 6 illustrates an example alert checking and sending process 225 wherein an alert processor may check the alert queue for available alerts and send any available alerts, according to one embodiment.
- in a checking routine 610, the alert processor 125 may check the alert queue 605 every X time units for available alerts that need to be sent. (Note that X may be a time unit or interval configured by the user and/or determined by a computer.)
- An available alert may be an alert in the alert queue where certain pre-defined conditions (e.g., alert rules and roles) have been met such that the alert is considered “available” to be sent.
- for example, certain individuals or groups of individuals based on names (e.g., John Smith, Jane Smith, etc.), roles (e.g., user, administrator, data security specialist, web master, business executive), or pre-defined groups (e.g., production support team, web development team, marketing department, management team, auditors) can be sent certain alerts based on various criteria.
- examples of alerts based on various pre-defined criteria include, but are not limited to: pre-defined “on” hours when the at least one alert may be sent to certain individuals (e.g., the normal user of the computer may be sent an alert from 9 AM-5 PM local time, and a back-up administrator can be sent an alert between 5:01 PM and 8:59 AM local time); certain individuals may only be sent an alert after a pre-defined number of unauthorized attempts to access a system have been made (e.g., within a certain pre-defined time period); certain individuals may be sent an alert based on the subject matter of the at least one alert (e.g., the normal user of the computer may be sent alerts with the key words “unauthorized access attempt”, and a back-up administrator may be sent alerts with the key word “unauthorized data changes”); certain individuals may be sent an alert based on the severity level of the alert (e.g., the normal user of the computer may be sent alerts that are low, moderate, and severe, but the back-up administrator may only be sent alerts that are severe); business executives may be sent an alert above a certain severity if there are more than three in one day, but not notified after normal business hours; or data security specialists may be sent an alert at any time of day of severity 3 events if there are more than three such events per hour; or any combination thereof.
- if an available alert is found, in 615, the alert processor 125 may send the alert and log the alert in the database 130. If no available alerts are found, the alert processor 125 may wait for the next X time units to check again.
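The sketch below shows one way the alert processor's loop could look. The is_available check implements only the "on hours" condition from the examples above; the send and log_to_db callables are placeholders for a real mail/SMS gateway and for an insert into tblAlertLog.

```python
import queue
import time
from datetime import datetime

def is_available(alert, now=None):
    """Availability check: deliver only during the recipient's pre-defined 'on' hours."""
    now = now or datetime.now()
    start, end = alert.get("on_hours", (0, 24))      # default: always available
    return start <= now.hour < end

def run_alert_processor(alert_queue, send, log_to_db, interval_seconds=30, cycles=1):
    """Every X time units, send any available alerts and log them; hold the rest."""
    for _ in range(cycles):
        held = []
        while not alert_queue.empty():
            alert = alert_queue.get()
            if is_available(alert):
                send(alert)                           # 615: deliver via mail/SMS gateway
                log_to_db(alert)                      # ...and record it (e.g., in tblAlertLog)
            else:
                held.append(alert)                    # not yet available; keep it queued
        for alert in held:
            alert_queue.put(alert)
        time.sleep(interval_seconds)                  # wait X time units before checking again

q = queue.Queue()
q.put({"to": "oncall@example.com", "subject": "Alert", "on_hours": (9, 17)})
run_alert_processor(q, send=print, log_to_db=lambda alert: None, interval_seconds=0)
```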
- the ability to generate an alert has been described above.
- the alert may also provide the ability to perform certain functions, as discussed below with respect to FIGS. 12-14.
- FIG. 12 illustrates a blocking mechanism for blocking source IP addresses (e.g., from malicious and/or compromised computers) by accessing a hyperlink, according to embodiments of the invention.
- a source IP address is the address of a source computer (e.g., sending an email or attempting to access another computer) connected to an IP network.
- the hyperlink may be accessed via a smartphone 1205 , tablet 1210 , or other computer 1215 , or any other device with access to a network (e.g., the Internet 1220 ).
- the hyperlink may be a uniform resource locator (URL).
- a web server 1225 may decode information stored in the URL and look up information in the database 130 .
- an application programming interface (API) 1230 (e.g., the Microsoft Internet Information Services (IIS) web server) may then be accessed to perform a function (e.g., block an IP address).
- the URL may be unencrypted or encrypted.
- the URL may contain many types of information.
- the URL may contain an alert ID and/or a source IP address.
- the alert ID and/or source IP address may be encrypted or unencrypted, and may be stored in database 130 .
- the alert ID may also include many other types of information, including: computer control information (e.g., turn on the PC's screen saver with a password, power off the PC), information on reports/graphs (e.g., pull and display reports/graphs), and firewall information (e.g., change the security levels (e.g., low, medium, high)).
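As one illustration of the unencrypted variant, the sketch below packs an alert ID and source IP into a single URL-safe path segment using base64-encoded JSON; an encrypted or signed URL would substitute a cipher or an HMAC for the plain encoding step. The URL shape mirrors the www.mysite.com/servicename/... example, but the field names are assumptions.

```python
import base64
import json

def make_alert_url(alert_id, source_ip, base="https://www.mysite.com/servicename/"):
    """Pack the alert ID and source IP into one URL-safe path segment (unencrypted variant)."""
    payload = json.dumps({"alert_id": alert_id, "source_ip": source_ip}).encode()
    token = base64.urlsafe_b64encode(payload).decode().rstrip("=")
    return base + token

def decode_alert_token(token):
    """Recover the fields packed by make_alert_url."""
    raw = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))   # restore padding
    return json.loads(raw)

url = make_alert_url(42, "192.168.1.1")
print(url)                                          # .../servicename/eyJhbGVydF9pZCI6IDQyLCAi...
print(decode_alert_token(url.rsplit("/", 1)[1]))    # {'alert_id': 42, 'source_ip': '192.168.1.1'}
```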
- FIG. 13 illustrates an example blocking mechanism, according to an embodiment.
- in 1305, the user may receive an alert (e.g., a report) comprising a URL incorporating a source IP address, and access (e.g., click on) the URL and source IP address (e.g., www.mysite.com/servicename/1DFGRR452XXX).
- in 1310, the browser may connect to the web server 1225.
- in 1315, the web server 1225 (e.g., using a web service) may intercept the URL link and decrypt one or more pieces of the URL (e.g., 1DFGRR452XXX) to discover the source IP address (e.g., 192.168.1.1).
- in 1320, the web server 1225 may call the API 1230 (e.g., Microsoft IIS) to enable execution of a function that blocks a source IP address on the web server by passing the source IP address as a parameter to the function to execute a command to block the source IP address.
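A sketch of the server side of this flow, using Python's standard http.server purely for illustration: the handler extracts the token from the path, decodes it, and hands the recovered source IP to a placeholder block_source_ip function. The real web server 1225 and its blocking API (e.g., IIS) are not modeled here.

```python
import base64
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def decode_token(token):
    # Matches the unencrypted base64/JSON token from the previous sketch.
    raw = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))
    return json.loads(raw)

def block_source_ip(source_ip):
    # Placeholder for the real blocking call (e.g., passing the source IP as a parameter
    # to the web server's blocking API, as described for 1320). Here it only prints.
    print(f"blocking {source_ip}")

class AlertLinkHandler(BaseHTTPRequestHandler):
    """GET /servicename/<token>: decode the token and block the embedded source IP."""
    def do_GET(self):
        try:
            token = self.path.rstrip("/").rsplit("/", 1)[-1]
            fields = decode_token(token)
            block_source_ip(fields["source_ip"])
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Source IP blocked")
        except Exception:
            self.send_response(400)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), AlertLinkHandler).serve_forever()
```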
- FIG. 14 illustrates an example mechanism that uses an alert to perform a function(s), according to an embodiment.
- the user may access (e.g., click on) a URL that incorporates information about an alert (e.g., www.mysite.com/servicename/1DFGRR452XXX).
- the browser may connect to the web server 1225 .
- the web server 1225 (e.g., using a web service) may run a SQL statement against the database 130 to look up information stored related to the alert ID.
- the SQL statement may return the information stored related to the alert ID (e.g., the source IP address, such as 192.168.1.1).
- the web server 1225 may call the API 1230 (e.g., Microsoft IIS) to execute any functions stored for the alert ID.
- a function may be executed that blocks a source IP address on the web server by passing the source IP address as a parameter to the function to execute a command to block the source IP address.
- graph and/or report information may be displayed.
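One way to represent "functions stored for the alert ID" is a small dispatch table, as sketched below. The stored actions, their parameters, and the function names are illustrative assumptions about what the SQL lookup might return.

```python
# A small dispatch table standing in for functions stored for the alert ID in database 130.
# The action records and function names below are illustrative assumptions.

def block_source_ip(source_ip):
    print(f"blocking {source_ip}")

def show_report(report_name):
    print(f"displaying report {report_name}")

FUNCTIONS = {"block_source_ip": block_source_ip, "show_report": show_report}

# What the SQL lookup for an alert ID might return, much simplified.
alert_record = {
    "alert_id": 42,
    "actions": [
        {"function": "block_source_ip", "args": {"source_ip": "192.168.1.1"}},
        {"function": "show_report", "args": {"report_name": "blocked-sources-daily"}},
    ],
}

def execute_alert_actions(record):
    """Run every function stored for the alert ID, passing its stored parameters."""
    for action in record["actions"]:
        FUNCTIONS[action["function"]](**action["args"])

execute_alert_actions(alert_record)
```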
- the alert system 103 may run on a web server alongside a pre-existing web firewall.
- when an intrusion attempt occurs, the pre-existing third-party firewall would typically block the intrusion attempt and write an entry to its third party log file 105.
- the alert system 103 may monitor the third party file 105 .
- the alert system 103 would determine that a new log entry had been added: the blocking of the intrusion attempt. The blocking of the intrusion attempt log entry would be sent to the rules engine 115 and processed as set forth in FIG. 4.
- the first rule would be processed, using, for example, pre-set criteria (e.g., similar to, but not limited by, 405 - 420 in FIG. 4 ). If the blocking of the intrusion attempt log entry file met all of the filtering criteria, an instant alert would be sent (e.g., via email or cellular phone text message) to intended recipients, following the procedures set forth in FIGS. 5 and 6 .
- real-time alerts may be provided to users.
- users do not need to be logged online to the firewall when an intrusion attempt occurs, nor do users need to review past log files after an intrusion attempt has occurred, to discover an intrusion attempt.
- a user may thus take action (e.g., block all access from the intruder's IP address, shut down the user's web site until the threat has passed) to stop an intruder or potential intruder before the intruder or potential intruder has the opportunity to attempt many types and variants of penetrations (e.g., which may eventually be successful if given enough time).
- the alert system 103 may allow a user to customize the notifications. For example, a small business owner may want to be notified of all attempts during business hours, but during non-business hours, the small business owner may want to have the web master notified only of any instance of more than ten attempts within five minutes by a single source (or IP address). As another example, a home user may want to be notified only of attempts: exceeding a certain frequency, by time of day, or by severity, or any combination thereof.
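Such customizations could be represented as per-recipient rule records, as in the hedged sketch below; the field names are assumptions about how rules like these might be stored (the velocity window and per-source counting would reuse the increment counter logic sketched for FIG. 4).

```python
# Illustrative per-recipient notification rules for the small-business example; the field
# names are assumptions about how such rules might be stored in tblRules/tblFrequency.
notification_rules = [
    {"recipient": "owner@example.com",     "hours": (9, 17), "min_attempts": 1},
    {"recipient": "webmaster@example.com", "hours": (17, 9), "min_attempts": 10},  # wraps midnight
]

def recipients_for(hour, attempts_from_single_source):
    """Return who should be notified given the current hour and the per-source attempt count."""
    selected = []
    for rule in notification_rules:
        start, end = rule["hours"]
        in_hours = start <= hour < end if start < end else (hour >= start or hour < end)
        if in_hours and attempts_from_single_source >= rule["min_attempts"]:
            selected.append(rule["recipient"])
    return selected

# The per-source attempt count within a time window would come from the same kind of
# increment counter sketched for FIG. 4.
print(recipients_for(hour=14, attempts_from_single_source=1))    # ['owner@example.com']
print(recipients_for(hour=23, attempts_from_single_source=12))   # ['webmaster@example.com']
```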
- the alert system 103 may run alongside a parental control application.
- when a user attempts to visit an unauthorized web site, the parental control application would typically block the unauthorized web site and write an entry to its third party log file 105.
- the alert system 103 may monitor the third party file 105 .
- the alert system 103 would determine that a new log entry had been added: the blocking of the unauthorized web site.
- the blocking of the unauthorized web site log entry would be sent to rules engine 115 .
- the first rule would be processed, using, for example, pre-set criteria (e.g., similar to, but not limited by, 405 - 420 in FIG. 4 ). If the blocking of the unauthorized web site met all of the filtering criteria, an instant alert would be sent (e.g., via email or cellular phone text message) to intended recipients, following the procedures set forth in FIGS. 5 and 6 .
- real-time alerts may be provided to parents (or guardians, or caretakers, school administrators, teachers, etc.).
- parents do not need to be logged in to the parental control application when an unauthorized access attempt occurs, nor do parents need to review past log files after the attempt has occurred, to discover it.
- the alert system 103 may allow a parent to customize the notifications. For example, a parent may select to only be notified of a number of repeated attempts within a certain timeframe, or by severity, as defined by specific types of web sites. So, for example, a parent could be notified instantly of three or more attempts to enter adult web sites within a certain timeframe (such as three or more attempts in a day), but a fewer number of attempts to access social media websites after midnight.
- the alert system 103 may run alongside a standard WINDOWS application as well as an entity's proprietary application.
- ATM machines may be driven by on-board WINDOWS-based personal computers (PCs), along with the ATM manufacturer's proprietary software.
- the log files from the WINDOWS software and the proprietary software may include: hardware failures, software events, currency status, receipt paper supply, number and dollars of withdrawals and deposits, etc.
- when, for example, the cash supply in the ATM runs low, the proprietary application could write an entry to its third party log file 105.
- the alert system 103 may monitor the third party file 105 .
- the alert system 103 would determine that a new log entry had been added: the cash being low.
- the low cash log entry would be sent to rules engine 115 .
- the first rule would be processed, using, for example, pre-set criteria (e.g., similar to, but not limited by, 405 - 420 in FIG. 4 ). If the low cash log entry met all of the filtering criteria, an instant alert would be sent (e.g., via email or cellular phone text message) to intended recipients, following the procedures set forth in FIGS. 5 and 6 .
- the alert system 103 may allow customized notifications. For example, the alert system 103 might notify one user when cash is getting low in the ATM, but another user of transaction volumes to be used for profitability calculations.
- the alert system 103 may comprise one or more computers.
- a computer may be any programmable machine capable of performing arithmetic and/or logical operations.
- computers may comprise processors, memories, data storage devices, and/or other commonly known or novel components. These components may be connected physically or through network or wireless links.
- Computers may be referred to with terms that are commonly used by those of ordinary skill in the relevant arts, such as servers, PCs, mobile devices, and other terms. It will be understood by those of ordinary skill that those terms used herein are interchangeable, and any computer capable of performing the described functions may be used.
- although the term “server” may appear in the following specification, the disclosed embodiments are not limited to servers.
- a module is defined here as an isolatable element that performs a defined function and has a defined interface to other elements.
- the modules described in this disclosure may be implemented in hardware, a combination of hardware and software, firmware, wetware (i.e., hardware with a biological element) or a combination thereof, all of which are behaviorally equivalent.
- modules may be implemented using computer hardware in combination with software routine(s) written in a computer language (such as C, C++, Fortran, Java, Basic, Matlab or the like) or a modeling/simulation program such as Simulink, Stateflow, GNU Script, or LabVIEW MathScript.
- modules may also be implemented using physical hardware that incorporates discrete or programmable analog, digital, and/or quantum hardware.
- programmable hardware include: computers, microcontrollers, microprocessors, application-specific integrated circuits (ASICs); field programmable gate arrays (FPGAs); and complex programmable logic devices (CPLDs).
- Computers, microcontrollers and microprocessors are programmed using languages such as assembly, C, C++ or the like.
- FPGAs, ASICs and CPLDs are often programmed using hardware description languages (HDL) such as VHSIC hardware description language (VHDL) or Verilog that configure connections between internal hardware modules with lesser functionality on a programmable device.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- General Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Virology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Quality & Reliability (AREA)
- Debugging And Monitoring (AREA)
- Computer And Data Communications (AREA)
Description
- This application is a continuation of U.S. patent application Ser. No. 13/486,133, filed Jun. 1, 2012, which claims the benefit of U.S. Provisional Application No. 61/492,199, filed Jun. 1, 2011, both of which are incorporated by reference in their entireties.
-
FIG. 1A illustrates anexample system 100 for providing information from third party applications to devices, according to one embodiment. -
FIG. 1B illustrates details of anexample alert system 103, according to one embodiment. -
FIG. 2 illustrates anexample method 200 for providing information from other applications to electronic devices, according to one embodiment. -
FIG. 3 illustrates anexample process 205 for checking third party log files, according to one embodiment. -
FIG. 4 illustrates anexample communication process 215 for determining whether any rules have been violated, according to one embodiment. -
FIG. 5 illustrates an example alert creation/addition process 220 for creating/adding alerts to an alert queue if rules have been violated, according to one embodiment. -
FIG. 6 illustrates anexample process 225 for an alert processor to check the alert queue for available alerts and send the alerts, according to one embodiment. -
FIG. 7 illustrates an example table diagram with relationships of the system SQLdatabase 130, according to one embodiment. -
FIG. 8-11 are example screen shots with may be utilized in one embodiment of the invention. -
FIG. 12 illustrates a blocking mechanism for blocking source IP addresses, according to one embodiment. -
FIG. 13 illustrates an example blocking mechanism, according to an embodiment. -
FIG. 14 illustrates an example mechanism that uses an alert to perform a function(s), according to an embodiment. -
FIG. 1A illustrates asystem 100 for providing information from third party applications to devices, according to one embodiment. Consistent with the innovations here,such system 100 may include, but is not limited to adevice 101, anetwork 102, and analert system 103. Here, for example, thedevice 101 may comprise, though is not limited to, any mobile device (e.g., pager, personal digital assistant, phone, i-phone, etc.) and/or any non-mobile device (e.g., personal computer, lap-top computer, etc.). Theelectronic device 101 may utilize a user interface that displays information received from thealert system 103. Additionally, thenetwork 102 may include, but is not limited to the Internet and/or an intranet. -
FIG. 1B illustrates details of thealert system 103, according to one embodiment. Thealert system 103 may include, though is not limited to aservice 110, setup/maintenance screens 135, and/or adatabase 130. Thealert system 103 may access a thirdparty log file 105, which may be a record of all system exceptions, anomalies, events, etc., tracked by a third-party application. In certain embodiments, this information is recorded chronologically. Theservice 110 may be a platform or component that includes, though is not limited to, alog monitor 113, arules engine 115, analert engine 120, and/or analert processor 125. - According to some embodiments, the
log monitor 113 is configured to monitor the thirdparty log files 105 from third party applications. Here, the third party applications may include any application that runs on a computer, including, but not limited to, a web server application firewall (e.g., DOTDEFENDER), a personal computer firewall (e.g., MCAFEE, TREND MICRO), a computer operating system (e.g., MS WINDOWS), parental control software (e.g., WEBWATCHER, CONTENT PROTECT), an automated teller machine (e.g., NCR, Triton), a server system (e.g., MS IIS, MS SQL), or any combination(s) thereof. Thelog monitor 113 may also be programmed to check the thirdparty log files 105 for one or more new log entries every predetermined time unit (e.g., a predetermined time interval X, such as one second, thirty seconds, one hour, one day, one week). Here, the time unit may be configured by the user and/or by a computer. If any new log entries are found by thelog monitor 113 when checking the thirdparty log file 105, the new log entry information may then be sent to therules engine 115 for processing. If no new log entries are found, thelog monitor 113 may wait for the next time unit to check again for new log entries. - According to some implementations, the
rules engine 115 remains in a sleep state until a new log entry is passed to it. Once a new log entry is received, therules engine 115 may be configured to check the new log entry against each active rule in thedatabase 130 that includes filtration criteria. An active rule may be defined as a record in a rule table of thedatabase 130 that contains at least one of the filtration criteria. In one embodiment, the filtration criteria may include, though are not limited to, one or more of the following: type of event, severity of event, velocity of event, source of event, or any combination thereof. If information regarding any new log entry meets the predefined filtration criteria, the information for that new log entry may be passed to thealert engine 120. - The
alert engine 120 may be configured to create alerts and add alerts to an alert queue. Finally, analert processing component 125 may be configured to check the alert queue for available alerts and process any available alerts. - The
alert system 103 ofFIG. 1B may also include setup/maintenance screens 135, which may be used to setup user profiles, rules, mail sever information, roles, and other configuration information.FIGS. 8-11 are example setup/maintenance screen 135.FIG. 8 is an example mail server setup screen that may be used to set up the mail server thatalert system 103 may use to send alerts. The user may set mail server settings before any alerts can be sent. Any or all of the mail server name, mail server IP address, mail server username, mail server password, and email address may be entered.FIG. 9 is an example user setup screen that can be used to set up new users in the system. A user may be any contact who will receive alerts. Any or all of the user ID, username, first name, last name, cell phone number, and email address may be entered.FIG. 10 is an example profile setup screen that may be used to set up profiles. A profile may be a list of users. The profiles may allow alerts to easily be configured to be sent to multiple users in a group. In order for a user to be added to a profile, the user may be first set up using the user set up screen. Once the user is set up, the user can be added to one or more profiles. Alerts may then be configured to go to certain profiles and/or individual users.FIG. 11 is an example service control screen, which may be used to manually start and stop thealert system 103. If the button displays “start monitoring”, then thealert system 103 is in a stopped state. Clicking the button will start the service. If the button displays “stop monitoring”, then thealert system 103 is in a running state. Clicking the button will then stop the service. - One or
more databases 130 present in or associated with the alert system environment may include multiple types of data utilized by theservice 110 and the setup-maintenance screens. One embodiment of anexample database 130 is described in more detail inFIG. 7 .FIG. 7 is a block diagram showing various illustrative tables and associated relationships within thedatabase 130, according to one embodiment. Here, for example, a table of user profiles (tblUserProfiles) 705 may be used to store the users and their contact information, which may include, though is not limited to: a username, a user first name, a user last name, a user phone number, or a user email, or any combination thereof. One example use of this user profile information is discussed in more detail below with respect to 510 ofFIG. 5 . A frequency table (tblFrequency) 710 may be used to store an increment counter variable (frequcondstring), discussed in more detail below, for example, with respect to 430 ofFIG. 4 . This increment counter variable may store, for example, information about when to check which of multiple criteria each new log entry meets. The increment counter variable information may include, for example, information about the frequency of occurrence (e.g., how often to check). Further, a rules table (tblRules) 715 may be used to store information about various rules, as discussed above with respect to therules engine 115. This information may include, for example, the profile ID of the rule, a frequency ID of the rule (e.g., which may be a unique number assigned to identify a particular frequency), and a description of the rule. A user profile table (tblProfileUsers) 720 may also be included to build groups of users to be notified by thealert engine 510. The groups of users may include, for example, information on the profile ID of the users and the user IDs of the users. Moreover, another profiles table (tblProfiles) 730 may be used to give the groups of users (e.g., those created and stored in tblProfileUsers 720) a specific descriptive name and an ID. Finally, an alert log table (tblAlertLog) 745 may be used to store alerts, and may include a profile ID and rule ID for each alert, as well as information on when each alert was sent. -
FIG. 2 illustrates anexample method 200 for providing information from other applications to electronic devices, according to one embodiment. In achecking process 205 of the example method, here, the log monitor checks the third party log files (e.g., from the third party applications) for any new log entries. Incommunication process 210, if there are any new log entries, information about these new log entries may then be sent to therules engine 115. In 215, a process of determining whether or not any of the new log entries violate any rules in therules engine 115 may be performed. In 220, thealert engine 120 may create alerts. In 225, the alerts can be added to an alert queue and sent. -
FIG. 3 illustrates anexample process 205 for checking third party log files, according to one embodiment. Referring toFIG. 3 , various details of thechecking process 205 ofFIG. 2 are shown. First, amonitoring process 305 may be performed, wherein the log monitor 113 may periodically monitor the thirdparty log file 105 to see if any new log entries have appeared. Next, at an appointed time period, interval or other trigger, adetermination process 310 may be performed, wherein thelog monitor 113 determines if any new log entries have been found. Here, for example, the log monitor 113 may be configured to scan the most recent entry in the thirdparty log file 105 and compare the date and time of that entry against the date and time of the last entry found, which may be stored in memory. If no date and time is found in memory, the date and time of the log entry may be written to memory and the entry may then be passed to therules engine 115. If the date and time of the most recent entry is after the date and time of the last entry found, this indicates that a new log entry has been found and that date and time may then be written to memory over the previous date and time. - If, at 310, the log monitor confirms that a new log entry has been found, the new log entry information may then be sent to the
rules engine 115 for processing. If, at 310, the log monitor confirms that no new log entry has been found, the process may return to 305, where the log monitor 113 may wait for the next time unit or other triggering event to check again for any new log entries. -
FIG. 4 illustrates details relating to process 215, to determine if any rules have been violated, according to one embodiment. Upon receipt of a new log entry, therules engine 115 may process an active rule on each new log entry in 400 to see if it violates the active rule (e.g., beginning with the first rule in a rules queue). The processing of the rules may also take into account filtration criteria. In one embodiment, the filtration criteria may include, but are not limited to: type of event, severity of event, velocity of event, source of event, or any combination thereof. Additional features of these exemplary filtration criteria are discussed with respect to actions 405-420, below. Note that depending upon the specific embodiment in question, some or all of the exemplary filtration criteria in 405-420 may be included. In other embodiments, other filtration criteria, in addition to or instead of the filtration criteria in 405-420, may be utilized. The filtration criteria may include, but are not limited to: time of day; type of event (e.g., unauthorized attempt) number of events (e.g., more than 5); number of events within a certain time period (e.g., more than 5 within 5 minutes); key words found in alert; or severity of alert; or any combination thereof (e.g. more than 5 SQL injection attempts within 1 hour from a single source). - In 405, the
rules engine 115 may determine whether the type of the log entry is defined as a trigger. In some embodiments, the rules, which may define each of the triggered types, may be configured by a user ofalert system 103 through the setup and maintenance screens 135. In one embodiment, the third party application that creates the thirdparty log file 105 defines the type of the log entry. For example, if the thirdparty log file 105 is created by a web server application firewall, there may be various types of entries in the log file (e.g., SQL injection, cross-site scripting, web crawler). By way of illustration and not limitation, a user may select SQL injection and cross-site scripting types as a trigger in a rule, but may not select the web crawler type. In this illustration, the SQL injection and cross-site scripting types would exist in the rules table 130 and therefore both entry types would be in the trigger. If the trigger determination routine confirms that the log entry type is not a trigger, the communication process may proceed directly to 420. If, however, the log entry type is found to be a trigger, then the communication process may proceed to one or more trigger routines, beginning at 410. - In 410, the
rules engine 115 may be configured to determine whether the severity of the subject log entry is defined as a trigger. In some embodiments, a third party application may include a severity field in their thirdparty log file 105. Here, for example, this severity field may indicate the relative importance of each log entry in the thirdparty log file 105. The third party application that creates the thirdparty log file 105 may also define the severity scale. In one example implementation, a web server application firewall may define severity on a scale of 1 to 5 (e.g., 1 being the least severe and 5 being the most severe). The rule may then define as a trigger a log entry with a severity of 5. The rules engine then compares the severity defined by the rule to the severity in the log entry to determine if the severity is a trigger. If the rules engine determines that the log entry severity is not a trigger, the process may precede to anincrement counter routine 430. If, however, the rules engine determines that the log entry severity is a trigger, the process may proceed to 415. - In 415, the
rules engine 115 may determine whether the velocity (e.g., frequency) of the log entry is defined as a trigger. Here, velocity may be the frequency that a type of log entry occurs. In one embodiment, velocity may be measured as a number per time period (e.g., minute, hour, day, week, month, year), though it may also be based on other periodic measures. According to some implementations, for example, the rule might define the velocity frequency trigger for a SQL injection type in a web application firewall as 5 times per hour. As such, therules engine 115 may be configured to compare the frequency defined in the rule against the increment counter variable to determine if the log entry velocity is a trigger. If, at 415, the rules engine determines that the log entry velocity is not a trigger, the process may proceed to theincrement counter routine 430. If yes, the log entry velocity is a trigger, the process may move to 420. In some embodiments, if desired, the filtering criteria (e.g., 405-420) may be modified to create a custom filtration profile for each of several users. - In 420, the
rules engine 115 may be configured to determine whether the source of the log entry is defined as a trigger. Here, the third party application that creates the thirdparty log file 105 may also define the source. For example, if the thirdparty log file 105 is created by a web server application firewall, there may be a source IP address that gets logged with each log entry (e.g., 192.168.0.76). The rule may select an IP address of 204.234.23.2 as the source. Therules engine 115 may then compare the source defined in the rule against the source in the log entry. If the sources match, the source entry is a trigger. If, at 420, the rules engine determines that the log entry source is not a trigger, the process may proceed to theincrement counter routine 430. If, however, the rules engine determines that the log entry source is a trigger, the process may proceed to a transmitalert routine 425. - According to an illustrative transmit alert routine 425, information regarding any new log entries that have been filtered by the rules with the filtration criteria, which the
rules engine 115 has determined should be passed to thealert engine 120, may now be passed to thealert engine 120. In 430, the increment counter routine may update the increment counter with information on whether any new log entry did or did not meet certain filtration criteria. In some embodiments, the increment counter may be a multi-dimensional incremental counter, which may store information about which of the multiple criteria each new log entry met. Here, for example, the increment counter may store information regarding whether or not each new log entry met the filtration criteria of type, source, severity and/or velocity, etc. - Once the increment counter routine is complete, in 440, it can be determined whether other rules exist that need to be processed. If, at 440, it is determined that the end of the rule collection has not been reached, the process may return to 400 (e.g., additional filtration criteria, such as criteria in addition to that filtration criteria discussed in 405-420). If the end of rule collection has been reached, such that the new log entry does not need to be checked against additional rules, the process may proceed to 445 where the
rules engine 115 may return to sleep mode. -
FIG. 5 illustrates an example alert creation/addition process 220 for creating alerts and/or adding alerts to an alert queue if rules have been violated, according to one embodiment. Referring toFIG. 5 , various details of an example alert creation/addition process 220 are shown, including features wherein, if therules engine 115 has determined that information about a new log entry should be sent to thealert engine 120, thealert engine 120 may create alerts and/or add alerts to an alert queue. In 500, when thealert engine 120 receives a new log entry from therules engine 115, an alert can be created (e.g., utilizing 505-520, explained below). - In a
first lookup routine 505, the alert type of the new log entry may be looked up by thealert engine 220, which may check the new log entry against each active alert to determine which active alert(s) (e.g., email message, text message, page, cell phone call, etc.) are appropriate for the new log entry (e.g.,tblalertlog 745 may be used to find the alert log ID, the profile ID, and the rule ID; tblrules 715 may be used to find the profile ID, the frequency ID, and a rules description; andtblfrequency 710 may be used to find the frequency ID, the frequency occurrence, and the frequency conditions). Next, in asecond lookup routine 510, the contact information (e.g., email address, cell phone number, pager number, etc.) listed for the alert type may be found by searching database 130 (e.g.,tblprofileusers 720,tbleuserprofiles 705, andtblprofiles 730 may be used to find the user ID and the profile ID, which can both be used to find the necessary contact information). After such lookup, according to an alert creation routine 515, an alert may be created by utilizing the information from the new log entry with the appropriate contact information. Additionally, in anadd alert routine 520, the alert may be added to the alert queue. At 530, an end of collection check is performed to see if additional alerts need to be created. If additional alerts need to be created, the process may return to 505. If the end has been reached, and additional alerts do not need to be created, thealert engine 120 may return to sleep mode in 535. -
FIG. 6 illustrates an example alert checking and sendingprocess 225 wherein an alert processor may check the alert queue for available alerts and send any available alerts, according to one embodiment. Referring toFIG. 6 , various details of an example alert checking routine 225, wherein analert processor 125 may check an alert queue for available alerts and/or send such alerts, are shown. In achecking routine 610, thealert processor 125 may check thealert queue 605 every X time units for available alerts that need to be sent. (Note that X may be a time unit or interval configured by the user and/or determined by a computer.) - An available alert may be an alert in the alert queue where certain pre-defined conditions (e.g., alert rules and roles) have been met such that the alert is considered “available” to be sent. For example, certain individuals or groups of individuals based on names (e.g., John Smith, Jane Smith, etc.), roles (e.g., user, administrator, data security specialist, web master, business executive), or pre-defined groups (e.g., production support team, web development team, marketing department, management team, auditors) can be sent certain alerts based on various criteria.
- Examples of alerts based on various pre-defined criteria include, but are not limited to: pre-defined "on" hours when the at least one alert may be sent to certain individuals (e.g., the normal user of the computer may be sent an alert from 9 AM-5 PM local time, and a back-up administrator can be sent an alert between 5:01 PM and 8:59 AM local time); certain individuals may only be sent an alert after a pre-defined number of unauthorized attempts to access a system have been made (e.g., within a certain pre-defined time period); certain individuals may be sent an alert based on the subject matter of the at least one alert (e.g., the normal user of the computer may be sent alerts with the key words "unauthorized access attempt", and a back-up administrator may be sent alerts with the key word "unauthorized data changes"); certain individuals may be sent an alert based on the severity level of the alert (e.g., the normal user of the computer may be sent alerts that are low, moderate, and severe, but the back-up administrator may only be sent alerts that are severe); business executives may be sent an alert above a certain severity if there are more than three in one day, but may not be notified after normal business hours; or data security specialists may be sent an alert at any time of day for severity 3 events if there are more than three such events per hour; or any combination thereof.
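- A hedged sketch of how such routing criteria might be expressed and evaluated is shown below; the field names (role, on_hours, min_severity, keywords) are hypothetical and merely illustrate one way the "available" test used by the alert processor 125 could combine the criteria listed above.

```python
from datetime import datetime

# Hypothetical per-recipient routing criteria mirroring the examples above.
ROUTING_RULES = [
    {"recipient": "normal_user",  "on_hours": (9, 17), "min_severity": 1,
     "keywords": ["unauthorized access attempt"]},
    {"recipient": "backup_admin", "on_hours": (17, 9), "min_severity": 3,
     "keywords": ["unauthorized data changes"]},
]

def in_on_hours(rule, now=None):
    """True if the current local hour falls inside the recipient's 'on' hours."""
    hour = (now or datetime.now()).hour
    start, end = rule["on_hours"]
    return start <= hour < end if start < end else (hour >= start or hour < end)

def recipients_for(alert):
    """Return the recipients for which this alert is 'available' right now."""
    return [r["recipient"] for r in ROUTING_RULES
            if in_on_hours(r)
            and alert["severity"] >= r["min_severity"]
            and any(k in alert["subject"] for k in r["keywords"])]

# Example: route a severe unauthorized-access alert.
print(recipients_for({"severity": 3, "subject": "unauthorized access attempt on web server"}))
```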
- If an available alert is found, in 615, the
alert processor 125 may send the alert and log the alert in the database 130. If no available alerts are found, the alert processor 125 may wait for the next X time units before checking again. - The ability to generate an alert has been described above. The alert may also provide the ability to perform certain functions, as discussed below with respect to FIGS. 12-14. -
FIG. 12 illustrates a blocking mechanism for blocking source IP addresses (e.g., from malicious and/or compromised computers) by accessing a hyperlink, according to embodiments of the invention. (A source IP address is the address of a source computer (e.g., one sending an email or attempting to access another computer) connected to an IP network.) The hyperlink may be accessed via a smartphone 1205, tablet 1210, or other computer 1215, or any other device with access to a network (e.g., the Internet 1220). The hyperlink may be a uniform resource locator (URL). A web server 1225 may decode information stored in the URL and look up information in the database 130. An application programming interface (API) 1230 (e.g., the API of the Microsoft Internet Information Services (IIS) web server) may then be accessed to perform a function (e.g., block an IP address). - The URL may be unencrypted or encrypted. The URL may contain many types of information. For example, in an embodiment, the URL may contain an alert ID and/or a source IP address. The alert ID and/or source IP address may be encrypted or unencrypted, and may be stored in
database 130. The alert ID may also include many other types of information, comprising: computer control information (e.g., turn on the PC's screen saver with a password, power off the PC), information on reports/graphs (e.g., pull and display reports/graphs), and firewall information (e.g., change the security levels (e.g., low, medium, high)). -
FIG. 13 illustrates an example blocking mechanism, according to an embodiment. In 1305, the user may receive an alert (e.g., a report) comprising a URL incorporating a source IP address, and access (e.g., click on) the URL and source IP address (e.g., www.mysite.com/servicename/1DFGRR452XXX). In 1310, the browser may connect to the web server 1225. In 1315, the web server 1225 (e.g., using a web service) may intercept the URL link and decrypt one or more pieces of the URL (e.g., 1DFGRR452XXX) to discover the source IP address (e.g., 192.168.1.1). In 1320, the web server 1225 may call the API 1230 (e.g., Microsoft IIS) to enable execution of a function that blocks a source IP address on the web server, passing the source IP address as a parameter to the function so that it can execute a command to block that address. -
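A minimal sketch of the 1315-1320 steps follows, assuming a hypothetical symmetric decryption helper and a hypothetical block_ip callable standing in for the call into the web server API 1230 (the actual IIS interface is not shown here).

```python
import base64

SECRET_KEY = b"example-key"   # hypothetical shared key for the URL token

def decrypt_token(token: str) -> str:
    """Hypothetical stand-in for decrypting the URL piece (e.g., 1DFGRR452XXX).

    A trivial XOR-with-key scheme over a base64 token is used purely for
    illustration; a real deployment would use a proper cipher.
    """
    raw = base64.urlsafe_b64decode(token)
    return bytes(b ^ SECRET_KEY[i % len(SECRET_KEY)] for i, b in enumerate(raw)).decode()

def handle_block_link(token: str, block_ip) -> str:
    """1315: decode the token to recover the source IP; 1320: block it."""
    source_ip = decrypt_token(token)     # e.g., "192.168.1.1"
    block_ip(source_ip)                  # call into the web server API (1230)
    return source_ip

# Example usage with a stand-in blocking function.
if __name__ == "__main__":
    def block_ip(ip):                    # hypothetical; would invoke the server API
        print(f"blocking {ip}")

    # Token produced by the same XOR/base64 scheme for "192.168.1.1".
    encoded = bytes(b ^ SECRET_KEY[i % len(SECRET_KEY)]
                    for i, b in enumerate(b"192.168.1.1"))
    handle_block_link(base64.urlsafe_b64encode(encoded).decode(), block_ip)
```
-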
FIG. 14 illustrates an example mechanism that uses an alert to perform one or more functions, according to an embodiment. In 1405, the user may access (e.g., click on) a URL that incorporates information about an alert (e.g., www.mysite.com/servicename/1DFGRR452XXX). In 1410, the browser may connect to the web server 1225. In 1415, the web server 1225 (e.g., using a web service) may intercept the URL link and decrypt one or more pieces of the URL (e.g., 1DFGRR452XXX) to discover an alert ID (e.g., 1101). In 1420, the web server 1225 may run a SQL statement against a database 130 to look up information stored for the alert ID. For example, the alert ID may be associated with information for blocking the source IP address (e.g., Select Source IP from tblalertlog where alertID=1101). In 1425, the SQL statement may return the information stored for the alert ID. For example, the source IP address (e.g., 192.168.1.1) may be returned. In 1430, the web server 1225 may call the API 1230 (e.g., Microsoft IIS) to execute any functions stored for the alert ID. For example, a function may be executed that blocks a source IP address on the web server, passing the source IP address as a parameter to the function so that it can execute a command to block that address. As another example, graph and/or report information may be displayed. - Several example embodiments are set forth below. However, many other embodiments are also possible.
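- Before turning to those examples, the following sketches the FIG. 14 lookup flow (1415-1430) using sqlite3 as a stand-in for database 130; the tblalertlog schema shown is hypothetical and only mirrors the example query above, and execute_alert_function stands in for the call into the API 1230.

```python
import sqlite3

# Hypothetical stand-in schema for the tblalertlog table in database 130.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblalertlog (alertID INTEGER PRIMARY KEY, sourceIP TEXT)")
conn.execute("INSERT INTO tblalertlog VALUES (1101, '192.168.1.1')")

def lookup_source_ip(alert_id: int):
    """1420-1425: run a SQL statement to return the source IP stored for the alert ID."""
    row = conn.execute(
        "SELECT sourceIP FROM tblalertlog WHERE alertID = ?", (alert_id,)
    ).fetchone()
    return row[0] if row else None

def execute_alert_function(alert_id: int, block_ip):
    """1430: execute the function stored for the alert ID (here, block the source IP)."""
    source_ip = lookup_source_ip(alert_id)
    if source_ip:
        block_ip(source_ip)              # hypothetical call into the web server API

# Example usage: the decrypted URL piece yielded alert ID 1101 (1415).
execute_alert_function(1101, lambda ip: print(f"blocking {ip}"))
```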
- In one example, the
alert system 103 may run on a web server alongside a pre-existing web firewall. In the event of an intrusion attempt into the web site hosted on the web server, the pre-existing third-party firewall would typically block the intrusion attempt and write an entry to its third party log file 105. Referring to FIG. 3 above, in 305, the alert system 103 may monitor the third party log file 105. In 310, the alert system 103 would determine that a new log entry had been added: the blocking of the intrusion attempt. The blocking of the intrusion attempt log entry would be sent to the rules engine 115. As set forth in FIG. 4, in 400 the first rule would be processed, using, for example, pre-set criteria (e.g., similar to, but not limited by, 405-420 in FIG. 4). If the blocking of the intrusion attempt log entry met all of the filtering criteria, an instant alert would be sent (e.g., via email or cellular phone text message) to intended recipients, following the procedures set forth in FIGS. 5 and 6. - In the above manner, real-time alerts may be provided to users. In this way, users do not need to be logged in to the firewall when an intrusion attempt occurs, nor do users need to review past log files after an intrusion attempt has occurred, in order to discover it. A user may thus take action (e.g., block all access from the intruder's IP address, shut down the user's web site until the threat has passed) to stop an intruder or potential intruder before the intruder or potential intruder has the opportunity to attempt many types and variants of penetrations (e.g., which may eventually be successful if given enough time).
- In some embodiments, the
alert system 103 may allow a user to customize the notifications. For example, a small business owner may want to be notified of all attempts during business hours, but during non-business hours may want the web master notified only of any instance of more than ten attempts within five minutes from a single source (or IP address). As another example, a home user may want to be notified only of attempts exceeding a certain frequency, occurring at certain times of day, or of a certain severity, or any combination thereof. - In another example, the
alert system 103 may run alongside a parental control application. In the event of an attempt to access an unauthorized web site, the parental control application would typically block the unauthorized web site and write an entry to its third party log file 105. Referring to FIG. 3 above, in 305, the alert system 103 may monitor the third party log file 105. In 310, the alert system 103 would determine that a new log entry had been added: the blocking of the unauthorized web site. The blocking of the unauthorized web site log entry would be sent to the rules engine 115. As set forth in FIG. 4, in 400 the first rule would be processed, using, for example, pre-set criteria (e.g., similar to, but not limited by, 405-420 in FIG. 4). If the blocking of the unauthorized web site met all of the filtering criteria, an instant alert would be sent (e.g., via email or cellular phone text message) to intended recipients, following the procedures set forth in FIGS. 5 and 6. - In the above manner, real-time alerts may be provided to parents (or guardians, caretakers, school administrators, teachers, etc.). In this way, parents do not need to be logged in to the parental control application when an unauthorized access attempt occurs, nor do parents need to review past log files after such an attempt has occurred, in order to discover it.
- In some embodiments, the
alert system 103 may allow a parent to customize the notifications. For example, a parent may choose to be notified only of a number of repeated attempts within a certain timeframe, or by severity as defined by specific types of web sites. So, for example, a parent could be notified instantly of three or more attempts to enter adult web sites within a certain timeframe (such as three or more attempts in a day), but notified after a smaller number of attempts to access social media web sites after midnight. - In another example, the
alert system 103 may run alongside a standard WINDOWS application as well as an entity's proprietary application. For example, ATM machines may be driven by on-board WINDOWS-based personal computers (PCs), along with the ATM manufacturer's proprietary software. The log files from the WINDOWS software and the proprietary software may include: hardware failures, software events, currency status, receipt paper supply, number and dollars of withdrawals and deposits, etc. - In the event of, for example, cash being low, the proprietary application could write an entry to its third
party log file 105. Referring to FIG. 3 above, in 305, the alert system 103 may monitor the third party log file 105. In 310, the alert system 103 would determine that a new log entry had been added: the cash being low. The low cash log entry would be sent to the rules engine 115. As set forth in FIG. 4, in 400 the first rule would be processed, using, for example, pre-set criteria (e.g., similar to, but not limited by, 405-420 in FIG. 4). If the low cash log entry met all of the filtering criteria, an instant alert would be sent (e.g., via email or cellular phone text message) to intended recipients, following the procedures set forth in FIGS. 5 and 6. - In some embodiments, the
alert system 103 may allow customized notifications. For example, the alert system 103 might notify one user when cash is getting low in the ATM, but notify another user of transaction volumes to be used for profitability calculations. - In this specification, "a" and "an" and similar phrases are to be interpreted as "at least one" and "one or more." References to "an" embodiment in this disclosure are not necessarily to the same embodiment.
- It should also be noted that the
alert system 103 may comprise one or more computers. A computer may be any programmable machine capable of performing arithmetic and/or logical operations. In some embodiments, computers may comprise processors, memories, data storage devices, and/or other commonly known or novel components. These components may be connected physically or through network or wireless links. Computers may be referred to with terms that are commonly used by those of ordinary skill in the relevant arts, such as servers, PCs, mobile devices, and other terms. It will be understood by those of ordinary skill that those terms used herein are interchangeable, and any computer capable of performing the described functions may be used. For example, though the term “server” may appear in the following specification, the disclosed embodiments are not limited to servers. - Many of the elements described in the disclosed embodiments may be implemented as modules. A module is defined here as an isolatable element that performs a defined function and has a defined interface to other elements. The modules described in this disclosure may be implemented in hardware, a combination of hardware and software, firmware, wetware (i.e., hardware with a biological element) or a combination thereof, all of which are behaviorally equivalent. For example, modules may be implemented using computer hardware in combination with software routine(s) written in a computer language (such as C, C++, Fortran, Java, Basic, Matlab or the like) or a modeling/simulation program such as Simulink, Stateflow, GNU Octave, or LabVIEW MathScript. Additionally, it may be possible to implement modules using physical hardware that incorporates discrete or programmable analog, digital and/or quantum hardware. Examples of programmable hardware include: computers, microcontrollers, microprocessors, application-specific integrated circuits (ASICs); field programmable gate arrays (FPGAs); and complex programmable logic devices (CPLDs). Computers, microcontrollers and microprocessors are programmed using languages such as assembly, C, C++ or the like. FPGAs, ASICs and CPLDs are often programmed using hardware description languages (HDL) such as VHSIC hardware description language (VHDL) or Verilog that configure connections between internal hardware modules with lesser functionality on a programmable device. Finally, it needs to be emphasized that the above mentioned technologies may be used in combination to achieve the result of a functional module.
- The disclosure of this patent document incorporates material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, for the limited purposes required by law, but otherwise reserves all copyright rights whatsoever.
- While various embodiments have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail may be made therein without departing from the spirit and scope. In fact, after reading the above description, it will be apparent to one skilled in the relevant art(s) how to implement alternative embodiments. Thus, the present embodiments should not be limited by any of the above described example embodiments.
- In addition, it should be understood that any figures that highlight any functionality and/or advantages, are presented for example purposes only. The disclosed architecture is sufficiently flexible and configurable, such that it may be utilized in ways other than that shown. For example, the steps listed in any flowchart may be re-ordered or only optionally used in some embodiments.
- It should be noted that Applicant has, for consistency reasons, used the phrase “comprising” throughout the claims instead of “including, but not limited to”. However, it should be noted that “comprising” should be interpreted as meaning “including, but not limited to”.
- In addition, it should be noted that, if not already set forth explicitly in the claims, the term “a” should be interpreted as “at least one” and “the”, “said”, etc. should be interpreted as “the at least one”, “said at least one”, etc.
- Further, the purpose of any Abstract of the Disclosure is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract of the Disclosure is not intended to be limiting as to the scope in any way.
- Finally, it is the applicant's intent that only claims that include the express language “means for” or “step for” be interpreted under 35 U.S.C. 112, paragraph 6. Claims that do not expressly include the phrase “means for” or “step for” are not to be interpreted under 35 U.S.C. 112, paragraph 6.
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/499,616 US20170308452A1 (en) | 2011-06-01 | 2017-04-27 | Method and system for providing information from third party applications to devices |
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201161492199P | 2011-06-01 | 2011-06-01 | |
| US13/486,133 US9665458B2 (en) | 2011-06-01 | 2012-06-01 | Method and system for providing information from third party applications to devices |
| US15/499,616 US20170308452A1 (en) | 2011-06-01 | 2017-04-27 | Method and system for providing information from third party applications to devices |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/486,133 Continuation US9665458B2 (en) | 2011-06-01 | 2012-06-01 | Method and system for providing information from third party applications to devices |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20170308452A1 true US20170308452A1 (en) | 2017-10-26 |
Family
ID=47260377
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/486,133 Expired - Fee Related US9665458B2 (en) | 2011-06-01 | 2012-06-01 | Method and system for providing information from third party applications to devices |
| US15/499,616 Abandoned US20170308452A1 (en) | 2011-06-01 | 2017-04-27 | Method and system for providing information from third party applications to devices |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/486,133 Expired - Fee Related US9665458B2 (en) | 2011-06-01 | 2012-06-01 | Method and system for providing information from third party applications to devices |
Country Status (2)
| Country | Link |
|---|---|
| US (2) | US9665458B2 (en) |
| WO (1) | WO2012167066A2 (en) |
Families Citing this family (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9436765B2 (en) | 2012-10-15 | 2016-09-06 | Wix.Com Ltd. | System for deep linking and search engine support for web sites integrating third party application and components |
| US9213807B2 (en) * | 2013-09-04 | 2015-12-15 | Raytheon Cyber Products, Llc | Detection of code injection attacks |
| JP5915637B2 (en) * | 2013-12-19 | 2016-05-11 | トヨタ自動車株式会社 | Rare earth magnet manufacturing method |
| WO2017136695A1 (en) * | 2016-02-05 | 2017-08-10 | Defensestorm, Inc. | Enterprise policy tracking with security incident integration |
| CN108055150A (en) * | 2017-12-11 | 2018-05-18 | 中盈优创资讯科技有限公司 | A kind of daily record shields method and device |
| CN111030857B (en) * | 2019-12-06 | 2024-11-01 | 深圳前海微众银行股份有限公司 | Network alarm method, device, system and computer readable storage medium |
| US12309179B2 (en) * | 2022-02-01 | 2025-05-20 | Sap Se | Log entry buffer extension network |
| US20250106239A1 (en) * | 2023-09-26 | 2025-03-27 | Honeywell International Inc. | Systems, apparatuses, methods, and computer program products for cybersecurity threat assessment |
Family Cites Families (31)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5958012A (en) | 1996-07-18 | 1999-09-28 | Computer Associates International, Inc. | Network management system using virtual reality techniques to display and simulate navigation to network components |
| US7680879B2 (en) | 1996-07-18 | 2010-03-16 | Computer Associates Think, Inc. | Method and apparatus for maintaining data integrity across distributed computer systems |
| US7342581B2 (en) | 1996-07-18 | 2008-03-11 | Computer Associates Think, Inc. | Method and apparatus for displaying 3-D state indicators |
| US8621032B2 (en) | 1996-07-18 | 2013-12-31 | Ca, Inc. | Method and apparatus for intuitively administering networked computer systems |
| US7003587B1 (en) | 1996-07-18 | 2006-02-21 | Computer Associates Think, Inc. | Method and apparatus for maintaining data integrity across distributed computer systems |
| US7693941B2 (en) | 1996-07-18 | 2010-04-06 | Reuven Battat | Method and apparatus for predictively and graphically administering a networked system in a time dimension |
| US5876240A (en) | 1997-04-01 | 1999-03-02 | The Whitaker Corp | Stacked electrical connector with visual indicators |
| US20030033402A1 (en) | 1996-07-18 | 2003-02-13 | Reuven Battat | Method and apparatus for intuitively administering networked computer systems |
| US5991881A (en) | 1996-11-08 | 1999-11-23 | Harris Corporation | Network surveillance system |
| US7315893B2 (en) | 1997-07-15 | 2008-01-01 | Computer Associates Think, Inc. | Method and apparatus for filtering messages based on context |
| US20030023721A1 (en) | 1997-07-15 | 2003-01-30 | Computer Associates Think, Inc. | Method and apparatus for generating context-descriptive messages |
| US20030018771A1 (en) | 1997-07-15 | 2003-01-23 | Computer Associates Think, Inc. | Method and apparatus for generating and recognizing speech as a user interface element in systems and network management |
| US6493755B1 (en) | 1999-01-15 | 2002-12-10 | Compaq Information Technologies Group, L.P. | Automatic notification rule definition for a network management system |
| BR0010095A (en) | 1999-04-26 | 2002-05-21 | Computer Ass Think Inc | Method and apparatus for displaying a state of each of a plurality of network system components, and, computer-readable encoded storage medium with processing instructions for implementing a process for presenting a state of each of a plurality of system components network |
| GB0022485D0 (en) | 2000-09-13 | 2000-11-01 | Apl Financial Services Oversea | Monitoring network activity |
| US20070192863A1 (en) | 2005-07-01 | 2007-08-16 | Harsh Kapoor | Systems and methods for processing data flows |
| US20020147809A1 (en) | 2000-10-17 | 2002-10-10 | Anders Vinberg | Method and apparatus for selectively displaying layered network diagrams |
| US7093292B1 (en) | 2002-02-08 | 2006-08-15 | Mcafee, Inc. | System, method and computer program product for monitoring hacker activities |
| AU2003243253B2 (en) | 2002-05-14 | 2009-12-03 | Cisco Technology, Inc. | Method and system for analyzing and addressing alarms from network intrusion detection systems |
| US7152242B2 (en) | 2002-09-11 | 2006-12-19 | Enterasys Networks, Inc. | Modular system for detecting, filtering and providing notice about attack events associated with network security |
| US7376969B1 (en) | 2002-12-02 | 2008-05-20 | Arcsight, Inc. | Real time monitoring and analysis of events from multiple network security devices |
| US7246156B2 (en) | 2003-06-09 | 2007-07-17 | Industrial Defender, Inc. | Method and computer program product for monitoring an industrial network |
| DE10337144A1 (en) | 2003-08-11 | 2005-03-17 | Hewlett-Packard Company, Palo Alto | Method for recording event logs |
| WO2005026872A2 (en) | 2003-09-16 | 2005-03-24 | Terassic-5 Infosec Ltd | Internal lan perimeter security appliance composed of a pci card and complementary software |
| US20060236395A1 (en) | 2004-09-30 | 2006-10-19 | David Barker | System and method for conducting surveillance on a distributed network |
| US7962616B2 (en) | 2005-08-11 | 2011-06-14 | Micro Focus (Us), Inc. | Real-time activity monitoring and reporting |
| US7653633B2 (en) | 2005-11-12 | 2010-01-26 | Logrhythm, Inc. | Log collection, structuring and processing |
| US7962957B2 (en) * | 2007-04-23 | 2011-06-14 | International Business Machines Corporation | Method and apparatus for detecting port scans with fake source address |
| US20090287603A1 (en) | 2008-05-15 | 2009-11-19 | Bank Of America Corporation | Actionable Alerts in Corporate Mobile Banking |
| US20100251370A1 (en) * | 2009-03-26 | 2010-09-30 | Inventec Corporation | Network intrusion detection system |
| US8677487B2 (en) * | 2011-10-18 | 2014-03-18 | Mcafee, Inc. | System and method for detecting a malicious command and control channel |
- 2012
  - 2012-06-01 US US13/486,133 patent/US9665458B2/en not_active Expired - Fee Related
  - 2012-06-01 WO PCT/US2012/040441 patent/WO2012167066A2/en not_active Ceased
- 2017
  - 2017-04-27 US US15/499,616 patent/US20170308452A1/en not_active Abandoned
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7143439B2 (en) * | 2000-01-07 | 2006-11-28 | Security, Inc. | Efficient evaluation of rules |
| US7278160B2 (en) * | 2001-08-16 | 2007-10-02 | International Business Machines Corporation | Presentation of correlated events as situation classes |
| US20070222589A1 (en) * | 2002-06-27 | 2007-09-27 | Richard Gorman | Identifying security threats |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2012167066A2 (en) | 2012-12-06 |
| US9665458B2 (en) | 2017-05-30 |
| US20130007836A1 (en) | 2013-01-03 |
| WO2012167066A3 (en) | 2013-02-28 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20170308452A1 (en) | Method and system for providing information from third party applications to devices | |
| USRE50335E1 (en) | Contextual security behavior management and change execution | |
| US10664785B2 (en) | Systems, structures, and processes for interconnected devices and risk management | |
| US12010151B2 (en) | Systems and methods for deploying configurations on computing devices and validating compliance with the configurations during scheduled intervals | |
| US7941851B2 (en) | Architecture for identifying electronic threat patterns | |
| US9324119B2 (en) | Identity and asset risk score intelligence and threat mitigation | |
| US11244270B2 (en) | Systems, structures, and processes for interconnected devices and risk management | |
| US20230179611A1 (en) | Digital Safety and Account Discovery | |
| US20190028557A1 (en) | Predictive human behavioral analysis of psychometric features on a computer network | |
| US10157369B2 (en) | Role tailored dashboards and scorecards in a portal solution that integrates retrieved metrics across an enterprise | |
| WO2019226615A1 (en) | Digital visualization and perspective manager | |
| US12323427B2 (en) | User risk scoring based on role and event risk scores | |
| US9477934B2 (en) | Enterprise collaboration content governance framework | |
| EP3529969B1 (en) | Digital safety and account discovery | |
| Weinz et al. | The Impact of Emerging Phishing Threats: Assessing Quishing and LLM-generated Phishing Emails against Organizations | |
| Kuypers et al. | Designing organizations for cyber security resilience | |
| US20230239314A1 (en) | Risk management security system | |
| US12443704B2 (en) | Network security probe | |
| US9261951B2 (en) | Systems and methods for managing security data | |
| Horsman | Can signs of digital coercive control be evidenced in mobile operating system settings?-A guide for first responders | |
| Nickle et al. | Notification of Data Security Incident at Professional Compounding Centers of America, Inc.(PCCA) | |
| Tsang et al. | Security Alert Management System for Internet Data Center Based on ISO/IEC 27001 Ontology | |
| Baharudin et al. | Developing ADUN e-Community Portal for Community in Malaysia | |
| Heaton | Securing the Digital Home Front. | |
| Davidson | After the latest hack attack, can feds trust Uncle Sam with their personal information? |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: HORIZON MARKETING GROUP, INC., FLORIDA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: STEVENSON, THOMAS; MATYGER, ALLAN; SMITH, PAUL; AND OTHERS; SIGNING DATES FROM 20120709 TO 20120816; REEL/FRAME: 042193/0572. Owner name: WILMINGTON SAVINGS FUND SOCIETY, FSB, DELAWARE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: HORIZON MARKETING GROUP, INC.; REEL/FRAME: 042193/0586. Effective date: 20120614. Owner name: DATA SECURITY SOLUTIONS, LLC, DELAWARE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: WILMINGTON SAVINGS FUND SOCIETY, FSB; REEL/FRAME: 042193/0589. Effective date: 20170410. |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |