US20240297913A1 - Communications network topology for minimizing latency in a many-to-one environment - Google Patents
- Publication number: US20240297913A1
- Authority
- US
- United States
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F16/2379—Updates performed during online database operations; commit processing
- G06F16/24556—Aggregation; Duplicate elimination
- G06F16/2457—Query processing with adaptation to user needs
- G06F16/256—Integrating or interfacing systems involving database management systems in federated or virtual databases
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
- G06F16/282—Hierarchical databases, e.g. IMS, LDAP data stores or Lotus Notes
- H04L47/829—Topology based
- H04L67/10015—Access to distributed or replicated servers, e.g. using brokers
Definitions
- An example of a method of minimizing latency in a communication network in which a plurality of user devices update a first data field may include the steps of: providing a principal server including a principal stored value of the first data field; providing a first sub-layer of subordinate servers in communication with the principal server, wherein each subordinate server in the first sub-layer includes a first sub-layer stored value of the first data field, wherein the first sub-layer of subordinate servers includes at least N-number of servers; providing an N-number of groups of second sub-layer of subordinate servers in communication with the first sub-layer of subordinate servers, wherein each subordinate server in the second sub-layer includes a second sub-layer stored value of the first data field, wherein each group of the second sub-layer of subordinate servers includes at least N-number of servers in communication with a respective one of the subordinate servers in the first sub-layer of subordinate servers; in each of the subordinate servers in the second sub-layer, receiving an end user input value of the first data field from one or more of the
- An example of a communication network may include: a principal server including a principal stored value of a first data field; a first sub-layer of subordinate servers in communication with the principal server, wherein each subordinate server in the first sub-layer includes a first sub-layer stored value of the first data field, wherein the first sub-layer of subordinate servers includes at least N-number of servers; and an N-number of groups of second sub-layer of subordinate servers in communication with the first sub-layer of subordinate servers, wherein each subordinate server in the second sub-layer includes a second sub-layer stored value of the first data field, wherein each group of the second sub-layer of subordinate servers includes at least N-number of servers in communication with a respective one of the subordinate servers in the first sub-layer of subordinate servers; wherein each of the subordinate servers in the second sub-layer receives an end user input value of the first data field from one or more of a plurality of user devices; wherein each of the subordinate servers in the first sub-layer receives a second sub
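The cascading aggregation described in the examples above can be sketched as follows. This is a minimal illustration under stated assumptions; the class and method names are hypothetical, not taken from the disclosure:

```python
# Minimal sketch of the cascading aggregation described above.
# Class and method names are hypothetical, not from the disclosure.

class Server:
    """One node in the pyramid; leaf servers receive user inputs."""
    def __init__(self, children=()):
        self.stored_value = 0          # this node's copy of the data field
        self.children = list(children)

    def receive_user_input(self, value):
        self.stored_value += value     # e.g. +1 per cheer

    def flush(self):
        """Aggregate this node's value and its children's, then reset."""
        total = self.stored_value + sum(c.flush() for c in self.children)
        self.stored_value = 0
        return total

# N = 3 for illustration: a principal server, a first sub-layer of three
# subordinate servers, and three second sub-layer servers under each.
N = 3
second_sublayer = [[Server() for _ in range(N)] for _ in range(N)]
first_sublayer = [Server(group) for group in second_sublayer]
principal = Server(first_sublayer)

for group in second_sublayer:
    for server in group:
        server.receive_user_input(1)   # nine users each submit one update

aggregate = principal.flush()          # 9
```

In the disclosed topology each level would flush on its predetermined interval; here a single synchronous pass stands in for one such cycle.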
- the principal stored value of the first data field may be, for example, a sentiment value.
- the sentiment value may relate to a first participant in a live sporting event, whether an individual or a team/organization.
- the aggregated value of the end user input value of the first data field from each of the plurality of user devices triggers one of a visual effect or an audible effect at the live sporting event.
- the visual effect may be a text, images, or video on one or more displays in the stadium hosting the live sporting event.
- a Sound Off! Fan Meter may be displayed on a jumbotron in the stadium.
- the audible effect may be a noise amplified and projected at the stadium hosting the live sporting event.
- the audible effect may be simulated cheering at the live sporting event.
- the sounds could be a team fight song, a goal horn, a “sad trombone” sound when a team fails to achieve an objective, etc.
- the occurrence and strength of the sound may be related to the aggregated value of the end user input value. For example, some cheers could lead to an amplified cheering sound, while even more cheers could lead to the playing of the fight song. A smaller number of jeers may lead to an amplified booing sound, while a greater number of jeers may lead to a humorously mocking movie quote or similar.
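A threshold mapping of this kind can be sketched as below; the threshold values and effect names are entirely hypothetical, chosen only to illustrate the tiered cheer/jeer behavior described above:

```python
# Hypothetical mapping from aggregated sentiment values to in-stadium
# effects; thresholds and effect names are illustrative only.

def select_effect(cheers: int, jeers: int) -> str:
    """Pick an effect from the aggregated cheer/jeer counts."""
    if cheers >= 50_000:
        return "fight song"            # a large cheer count
    if jeers >= 50_000:
        return "mocking movie quote"   # a large jeer count
    if cheers >= 10_000:
        return "amplified cheering"
    if jeers >= 10_000:
        return "amplified booing"
    return "no effect"

effect = select_effect(cheers=62_000, jeers=4_000)   # "fight song"
```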
- the principal server further includes a principal stored value of a second data field.
- the principal stored value of the second data field may be a sentiment value and the sentiment value may relate to a second participant in a live sporting event.
- a method of managing simultaneous user updates to a database to minimize latency includes providing at least one master server, the master server comprising at least one database with a single shared value; providing a plurality of slave servers, each slave server comprising the database with the single shared value; simultaneously updating, through a plurality of users, the single shared value on the user personal devices and transmitting the update to the slave servers, wherein the master server and the plurality of slave servers are capable of completing a maximum number of database updates within a timeframe; organizing the master server and the plurality of slave servers into a server topology comprising a top-tier, at least one mid-tier, and a bottom-tier, wherein the top-tier comprises the master server, the at least one mid-tier comprises slave servers which update the single shared value of the master server, and the bottom-tier comprises slave servers which update the single shared value of the at least one mid-tier; determining the number of the at least one mid-tier and bottom-tier slave servers in the server topology by comparing the maximum number of database updates within a
- the method further comprises erasing, by the slave servers, data from the single shared value when the server transmits an update to another server.
- the method further comprises transmitting, by the master server, updates to the users back through the mid-tier and bottom-tier servers.
- the method further comprises transmitting, by the master server, updates to the users back through a feedback server.
- An object of the subject matter presented herein is to improve the fan experience for live sporting events by providing a “game within a game” by merging a live sporting event with a related live interactive game (typically provided on a mobile device such as a smartphone or tablet).
- Another object of the invention is to provide a simple to use real-time interaction with a live sporting event that does not interfere with the viewer's focus on the live event and creates a sense of involvement in the live event by providing a meaningful mechanism for remote involvement.
- Another object of the invention is to provide a new communication network topology that enables the recording of specific user input values in near real-time from a plethora of users.
- FIG. 1 is a schematic diagram illustrating an example of a pyramid server topology according to the teachings provided herein.
- FIG. 2 is a schematic diagram illustrating a further example of a pyramid server topology according to the teachings provided herein.
- FIG. 3 is a schematic diagram illustrating a still further example of a pyramid server topology according to the teachings provided herein.
- FIGS. 1-3 illustrate exemplary systems 100, 200, 300 for providing a server topology that helps to minimize latency when receiving data inputs from a multitude of end user computing devices to update aggregated values in one or more data fields in a principal server.
- the present invention was developed to allow millions of users to update an application database.
- the application database contains a single shared value which is updated by many users operating a variety of personal devices, such as cellular phones, tablets, laptops, etc.
- updates to the database single shared value must update within a predetermined timeframe from receiving the update from the user. Millions of users may be updating the application simultaneously, and the number of simultaneous database updates cannot negatively affect the update timeframe of the single shared value.
- The system 100 includes a single principal server 102 in communication with a first sub-layer 104 of subordinate servers 106.
- Each of the subordinate servers 106 in the first sub-layer 104 is in communication with a second sub-layer 108 of subordinate servers 106.
- End user devices 110 (e.g., mobile devices) provide data inputs to the subordinate servers 106 in the second sub-layer 108.
- the system 100 shown in FIG. 1 provides the communication network topology to implement the systems and methods described herein.
- The illustrated server topology 200 utilizes a master server 201, which sits at the top of the server topology.
- The master server 201 connects to a first sub-layer of slave servers 202.
- The master server 201 and slave servers 202 both contain databases with a same shared value.
- The shared value on the master server 201 database is fed the data from the databases of a first tier 203 of slave servers 202 at a predetermined time interval.
- The time interval is based on the maximum number of connections that can be serviced within the desired timeframe.
- A second tier 204 of slave servers 202, and any number of subsequent tiers of slave servers 202, with databases containing the same shared value, can be added under each of the first tier 203 of slave servers 202 to increase capacity.
- Adding slave servers 202 to the first tier 203 will not add to the total processing time, but adding a tier of slave servers 202 will.
- If the master server 201 can handle 1,000 slave servers 202 or user connections in one second, then each slave server 202 of the first tier 203 can also handle 1,000 slave servers or user connections in one second. That means that the time it takes the database of the master server 201 to service all 1,000 databases of the slave servers 202 of the first tier 203 is the same amount of time it will take each slave server 202 of the first tier 203 to service all 1,000 databases of the slave servers 202 of the second tier 204.
- The final outcome is that increasing the load from 1,000 user connections to 1,000,000 user connections will only double the time to two seconds.
- With a second tier 204, the slave servers 202 can service up to 1 billion user connections while only tripling the time it takes the master server 201 to process its user connections.
- If the master server 201 can service 1,000 connections in one second using the above-described server topology, it will take only two seconds to service a million user connections with a first tier 203 of slave servers 202 and only three seconds to service a billion user connections with a second tier 204 of slave servers 202.
- These relationships can be expressed by Equations 1-5, whereby X is the number of users needed to be reached, T is the maximum amount of time to refresh value X, t is the maximum allowed time per tier, # is the number of slave server tiers needed, and N is the number of user connections a single server can update in t.
- The maximum number of connections that can update X in T time is the number of servers in the lowermost tier, which is simply N raised to the next power for each tier added.
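Consistent with the variables defined above, the number of slave server tiers needed for X users, and the resulting refresh time T, can be sketched as follows (function names are illustrative; Equations 1-5 themselves are not reproduced here):

```python
# Sketch consistent with the variables above: X users to reach, N
# connections one server can update in time t, and "#" the number of
# slave server tiers needed. Function names are illustrative.

def slave_tiers_needed(X, N):
    """Smallest number of slave tiers so the lowermost tier reaches X.
    The master alone services N connections; each added tier
    multiplies the reachable connections by N."""
    tiers, capacity = 0, N
    while capacity < X:
        tiers += 1
        capacity *= N
    return tiers

def refresh_time(X, N, t):
    """Maximum time T to refresh value X: one interval t per level."""
    return (slave_tiers_needed(X, N) + 1) * t

# With N = 1,000 connections per server and t = 1 second, matching the
# worked example above:
tiers = slave_tiers_needed(1_000_000_000, 1_000)   # 2 slave tiers
T = refresh_time(1_000_000_000, 1_000, 1)          # 3 seconds
```

An integer loop is used rather than a logarithm so that exact powers of N are not misrounded by floating-point arithmetic.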
- When a slave server 202 sends an update of the single shared value, which is in the form of a data packet of the single aggregate value of all the updates received by the slave server 202, to either the database in the server tier above that slave server or the database of the master server 201, that slave server's database will reset the single shared value to a default value, e.g., zero, if the single shared value is performing a counting function.
- As a result, the slave server database does not expend processing time determining the difference between the current value of the single shared value and the value of the single shared value at the time the slave server database last updated either the server in the tier above that slave server or the master server 201.
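The reset-on-update behavior described above can be sketched as a simple counter; the names here are hypothetical, not from the disclosure:

```python
# Sketch of the reset-on-update behavior: after flushing its aggregate
# upstream, the slave's counter returns to a default value, so no time
# is spent diffing against the previously reported value.

class SharedValueCounter:
    def __init__(self, default=0):
        self.default = default
        self.value = default

    def apply_update(self, amount=1):
        self.value += amount           # e.g. one cheer per user update

    def flush_upstream(self):
        """Emit this interval's aggregate as a packet, then reset."""
        packet, self.value = self.value, self.default
        return packet

counter = SharedValueCounter()
for _ in range(5):
    counter.apply_update()
first = counter.flush_upstream()   # 5: aggregate since the last flush
second = counter.flush_upstream()  # 0: already reset to the default
```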
- The flow of data between the databases of the master server 201 and the slave servers 202 is two-way; therefore, as the slave servers 202 are updating the single shared value in the database of the master server 201, the master server 201 is updating the databases of its slave servers 202 with the aggregate value of the single shared value, which is then shared with the users 110.
- A separate feedback server with a database 305 is utilized to update the users 110 with the aggregate value of the single shared value.
- In this arrangement, the flow of database values for the single shared value between the master server 301 and the slave servers 302 is one-way; therefore, as the slave servers 302 are updating the single shared value in the database of the master server 301, the master server 301 shares the aggregate value of the single shared value with the feedback server(s) 305, which then relay it to the users 110.
Description
- This application is a national stage application under 35 USC 371 of international application PCT/US2021/065644 filed Dec. 30, 2021, which claims the benefit of priority to U.S. application Ser. No. 17/316,603 filed May 10, 2021, which claims the benefit of priority to U.S. Provisional No. 63/132,987 filed Dec. 31, 2020.
- The present subject matter relates generally to systems and methods for structuring a communications network topology in order to manage user updates to a database. More specifically, the present invention relates to a database network topology including the logical layout of a cascading series of servers in a communications network in which many user devices update individual data fields in a central server in a close to real-time environment.
- The term “network topology” is used to describe the logical layout of a communication network. Within a topology, the term “node” describes an element in the system (a server, a user device, etc.) and the term “link” describes the communication path between the connected nodes, whether wired or wireless. There are various commonly known server topologies, such as point-to-point, daisy chain, bus, star, ring, mesh, tree, and hybrid (e.g., a combination of two or more of the known topologies). Each of these layouts provides its own strengths and weaknesses. For example, a bus topology may be easy to set up and maintain, but it becomes decreasingly efficient as the number of nodes increases and there can be security risks when each node in the network has access to the communications along the bus.
- As the world becomes increasingly connected by networked communications, and as each user has essentially become a mobile node himself or herself (e.g., by accessing networks through their mobile devices), there is a need for server topologies that handle real-time communications amongst a massive number of nodes. For example, the network topology that supports social media platforms, such as Facebook, Twitter, Instagram, etc. must be capable of handling incoming communications from millions of users to update data fields in the system in a near real-time basis.
- In many instances, these millions of social media users are not trying to update a single shared value in a central server. Instead, many users may be submitting unique content (e.g., a new post) and others may be submitting an update to a shared value (the number of likes on a given post). In rare instances in which millions of users may be updating a single shared value (e.g., the number of likes on a Facebook post of a famous account) the speed at which the aggregated data field is updated may not be critical.
- By contrast, in a survey application, a question may be posed to a vast audience that may be answered by each audience member with a “yes” or a “no” response. In this case, the central server receives each user's response and updates the survey results in response to each user response. For example, the central server may receive 823,945 yes responses and 789,352 no responses and record the aggregated value for each in the central server. Solutions, such as a Redis Cluster, provide a mesh network of distributed servers in a cluster to service millions of users and replicate data between master and slave nodes. However, these solutions are designed to provide optimum failover and have the downside of introducing latency as additional servers are added into the network. There are instances in which the speed at which the users desire the values to be updated exceeds the capacity of these known topologies.
- To make the point clear, when a given Tweet is posted to Twitter, the Tweet does not need to reach every other Twitter user within one second. Also, Twitter does not have one million users all updating a single Tweet at the same time. By contrast, a real-time survey application may want to be able to present survey results from millions of users to the millions of users in near real-time and, therefore, such systems require that millions of users are able to update a common shared value on the server in near real-time.
- Accordingly, there is a need for a server topology that helps to minimize latency when receiving data inputs from a multitude of end user computing devices to update aggregated values in one or more data fields in a principal server, as described herein.
- To meet the needs described above and others, the present disclosure provides systems and methods for providing a database network or server topology that helps to minimize latency when receiving data inputs from a multitude of end user computing devices to update aggregated values in one or more data fields or databases in a principal or master server. The database network topology described herein allows a single shared value to be updated by a multitude of users simultaneously on multiple sub-level servers, which feed higher-level servers, which in turn aggregate the data from the sub-level servers and feed a master server, which aggregates all the data updates and feeds the aggregated data back to the users. The network topology is best represented as a pyramid.
- The examples provided herein are made with reference to a mobile application referred to herein as the Sound Off! App. For purposes of this disclosure, the Sound Off! App is a mobile application in which sports fans are able to cheer for or jeer against participants in a live sporting event by communicating a “sentiment value.” Using a live football game between the Tampa Bay Buccaneers and the Kansas City Chiefs as an example, in the Sound Off! App, each of the following sentiment values may be recorded: (1) the number of cheers for the Tampa Bay Buccaneers; (2) the number of jeers against the Tampa Bay Buccaneers; (3) the number of cheers for the Kansas City Chiefs; and (4) the number of jeers against the Kansas City Chiefs. These four values (i.e., sentiment values) can be visualized and presented in-stadium, as well as in-app, to the viewing and listening audience to demonstrate how engaged the fans of each team are at any given time. This dynamic interaction by fans, whether engaged in-person at the game or remotely, creates another mechanism for fans to be and feel engaged with the sporting event and connected to the other fans.
- In the Sound Off! App, the speed at which the four tracked values (e.g., (1) the number of cheers for the Tampa Bay Buccaneers, (2) the number of jeers against the Tampa Bay Buccaneers, (3) the number of cheers for the Kansas City Chiefs, and (4) the number of jeers against the Kansas City Chiefs) are updated within the system is important. Reactions to events within a sporting event feel more compelling the more instantaneous they are. The longer the delay between a play and a reaction, the less engaged and the less a part of the experience the user may feel.
- As described above, known server topologies struggle to update shared values based on input from a multitude of users without introducing unwanted latency. To solve this problem, the present subject matter presents a unique server topology referred to herein as a pyramid topology. The pyramid topology is intended to enable the maximum number of nodes to interact with the exact same data point within the least amount of time.
- In the pyramid topology presented herein, an authoritative server sits at the top of the pyramid. This is the “principal” or “master” server. The principal server connects to a sub-layer of “subordinate” or “slave” servers. The principal server gathers data from the subordinate servers at a predetermined interval based on the maximum number of connections that can be serviced within the desired timeframe. An additional layer of lower-level subordinate servers can be added under each of the higher-level subordinate servers to increase the network capacity. Additional levels of subordinate servers can be added to further increase the network capacity such that a single network may include a principal server and many cascading levels of subordinate servers to form the pyramid structure.
- In the pyramid structure, each new layer of subordinate servers increases the latency of the system. For example, if the principal server can handle communications with 1,000 subordinate servers to update the four tracked values in one second, then each subordinate server can also handle communications with 1,000 subordinate servers to update the four tracked values in one second. Accordingly, by adding a second subordinate sub-layer of 1,000 servers to each of the first 1,000 subordinate servers, the network capacity increases from 1,000 users to 1,000,000 users and the time to update the four tracked values increases from one second to two seconds. Adding a third subordinate sub-layer of 1,000 servers to each server in the second subordinate layer of servers, the network capacity increases from 1,000,000 users to 1,000,000,000 users and the time to update the four tracked values increases from two seconds to three seconds.
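The capacity-versus-latency tradeoff described above can be sketched numerically. This is an illustrative calculation using the fan-out (1,000 connections per server) and one-second interval from the example; neither figure is a fixed property of the topology:

```python
# Capacity vs. latency of the pyramid topology: each added sub-layer
# multiplies the number of reachable users by N but adds one update
# interval t to the end-to-end propagation time.
N = 1_000   # connections a single server can update per interval (example figure)
t = 1       # seconds per interval, i.e., one round of updates up one layer

for levels in (1, 2, 3):       # number of subordinate sub-layers
    capacity = N ** levels     # users reachable through the lowest sub-layer
    latency = t * levels       # seconds for inputs to reach the principal server
    print(f"{levels} sub-layer(s): {capacity:,} users in {latency} s")
```

This reproduces the progression in the text: 1,000 users in one second, 1,000,000 users in two seconds, and 1,000,000,000 users in three seconds.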
- Using the pyramid topology taught herein, the following variables interact when solving for a system design: T is the maximum amount of time the system will take to update value X, t is the maximum allowed time per level, N is the number of connections a single server can update in time t, and L is the number of levels of N nodes implemented. Accordingly, t times L equals T. The maximum number of connections that can update value X in T time is the number of nodes in the lowest level, which is equal to N raised to the power of L, the number of levels of subordinate sub-layers.
- An example of a method of minimizing latency in a communication network in which a plurality of user devices update a first data field may include the steps of: providing a principal server including a principal stored value of the first data field; providing a first sub-layer of subordinate servers in communication with the principal server, wherein each subordinate server in the first sub-layer includes a first sub-layer stored value of the first data field, wherein the first sub-layer of subordinate servers includes at least N-number of servers; providing an N-number of groups of second sub-layer of subordinate servers in communication with the first sub-layer of subordinate servers, wherein each subordinate server in the second sub-layer includes a second sub-layer stored value of the first data field, wherein each group of the second sub-layer of subordinate servers includes at least N-number of servers in communication with a respective one of the subordinate servers in the first sub-layer of subordinate servers; in each of the subordinate servers in the second sub-layer, receiving an end user input value of the first data field from one or more of the plurality of user devices; in each of the subordinate servers in the first sub-layer, receiving a second sub-layer input value of the first data field from each of the subordinate servers in the respective group of second sub-layer servers; in the principal server, receiving a first sub-layer input value of the first data field from each of the subordinate servers in the first sub-layer; and in the principal server, updating the principal stored value of the first data field to equal an aggregated value of the first sub-layer input value of the first data field from each of the subordinate servers in the first sub-layer, which is equal to an aggregated value of the end user input value of the first data field from each of the plurality of user devices.
- An example of a communication network may include: a principal server including a principal stored value of a first data field; a first sub-layer of subordinate servers in communication with the principal server, wherein each subordinate server in the first sub-layer includes a first sub-layer stored value of the first data field, wherein the first sub-layer of subordinate servers includes at least N-number of servers; and an N-number of groups of second sub-layer of subordinate servers in communication with the first sub-layer of subordinate servers, wherein each subordinate server in the second sub-layer includes a second sub-layer stored value of the first data field, wherein each group of the second sub-layer of subordinate servers includes at least N-number of servers in communication with a respective one of the subordinate servers in the first sub-layer of subordinate servers; wherein each of the subordinate servers in the second sub-layer receives an end user input value of the first data field from one or more of a plurality of user devices; wherein each of the subordinate servers in the first sub-layer receives a second sub-layer input value of the first data field from each of the subordinate servers in the respective group of second sub-layer servers; wherein the principal server receives a first sub-layer input value of the first data field from each of the subordinate servers in the first sub-layer and updates the principal stored value of the first data field to equal an aggregated value of the first sub-layer input value of the first data field from each of the subordinate servers in the first sub-layer, which is equal to an aggregated value of the end user input value of the first data field from each of the plurality of user devices.
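The aggregation path described in the method and system examples above can be sketched as follows. The fan-out of N = 4 and the random end-user inputs are illustrative only; the disclosure leaves N to the system designer:

```python
import random

random.seed(7)
N = 4  # illustrative fan-out per layer

# Each of the N*N second-sub-layer servers receives input values
# from its own group of user devices.
user_inputs = [[random.randint(0, 5) for _ in range(10)] for _ in range(N * N)]

# Second sub-layer: each server aggregates its own users' input values.
second_layer = [sum(group) for group in user_inputs]

# First sub-layer: each server aggregates N second-sub-layer servers.
first_layer = [sum(second_layer[i * N:(i + 1) * N]) for i in range(N)]

# Principal server: aggregates the first sub-layer. The result equals the
# aggregate of every end-user input value, as the method requires.
principal = sum(first_layer)
assert principal == sum(sum(group) for group in user_inputs)
```

The final assertion is the invariant stated in the claims: the principal stored value equals the aggregated value of the end user inputs, even though no server ever communicates with more than N peers.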
- In each of the example system and method described above, the principal stored value of the first data field may be, for example, a sentiment value. The sentiment value may relate to a first participant in a live sporting event, whether an individual or a team/organization.
- In some embodiments, the aggregated value of the end user input value of the first data field from each of the plurality of user devices triggers one of a visual effect or an audible effect at the live sporting event. For example, the visual effect may be text, images, or video on one or more displays in the stadium hosting the live sporting event. For example, a Sound Off! Fan Meter may be displayed on a jumbotron in the stadium. The audible effect may be a noise amplified and projected at the stadium hosting the live sporting event. For example, the audible effect may be simulated cheering at the live sporting event. Similarly, the sounds could be a team fight song, a goal horn, a “sad trombone” sound when a team fails to achieve an objective, etc. The occurrence and strength of the sound may be related to the aggregated value of the end user input value. For example, some cheers could lead to an amplified cheering sound, while even more cheers could lead to the playing of the fight song. A smaller number of jeers may lead to an amplified booing sound, while a greater number of jeers may lead to a humorously mocking movie quote or similar.
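A mapping from the aggregated value to an in-stadium effect might be sketched as below. The threshold numbers and effect names are hypothetical, as the disclosure does not specify them:

```python
# Hypothetical tiers mapping an aggregated cheer count to an effect:
# "some cheers" trigger amplified cheering, "even more cheers" trigger
# the team fight song. Thresholds are invented for illustration.
def select_cheer_effect(aggregated_cheers: int) -> str:
    if aggregated_cheers >= 50_000:
        return "team fight song"
    if aggregated_cheers >= 10_000:
        return "amplified cheering"
    return "none"

print(select_cheer_effect(12_000))  # amplified cheering
```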
- In some examples, the principal server further includes a principal stored value of a second data field. The principal stored value of the second data field may be a sentiment value and the sentiment value may relate to a second participant in a live sporting event.
- In another embodiment, a method of managing simultaneous user updates to a database to minimize latency includes providing at least one master server, the master server comprising at least one database with a single shared value; providing a plurality of slave servers, each slave server comprising the database with the single shared value; simultaneously updating, through a plurality of users, the single shared value on the user personal devices and transmitting the updates to the slave servers, wherein the master server and the plurality of slave servers are capable of completing a maximum number of database updates within a timeframe; organizing the master server and the plurality of slave servers into a server topology comprising a top-tier, at least one mid-tier, and a bottom-tier, wherein the top-tier comprises the master server, the at least one mid-tier comprises slave servers which update the single shared value of the master server, and the bottom-tier comprises slave servers which update the single shared value of the at least one mid-tier; determining the number of the at least one mid-tier and bottom-tier slave servers in the server topology by comparing the maximum number of database updates within a timeframe the servers can achieve against a target timeframe for the master server to receive an update to the single shared value once a user update to the single shared value is received by a slave server; wherein a plurality of users simultaneously transmit updates to the single shared value from their devices to the bottom-tier slave servers, which collect and aggregate the user updates into a single update value that is transmitted to the mid-tier servers; wherein the mid-tier servers receive a plurality of updates from mid-tier and bottom-tier slave servers, which collect and aggregate the user updates into a single update value that is transmitted to either the master server or other mid-tier servers; and wherein the master server receives a plurality of updates from mid-tier servers, and collects and aggregates the user updates into a single update value that is transmitted to the users.
- In other embodiments, the method further comprises erasing, by the slave servers, data from the single shared value when the server transmits an update to another server.
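The erase-on-transmit behavior can be sketched as a simple counter. The class and method names are illustrative, not part of the disclosure:

```python
# A slave server accumulates increments into the single shared value and
# resets it to the default (zero) when it forwards the aggregate upward,
# so it never has to compute a delta against a previously reported value.
class SlaveCounter:
    def __init__(self) -> None:
        self.shared_value = 0  # single shared value used as a counter

    def record(self, increment: int) -> None:
        self.shared_value += increment

    def flush(self) -> int:
        """Return the aggregate for the tier above and reset to zero."""
        packet, self.shared_value = self.shared_value, 0
        return packet

server = SlaveCounter()
server.record(3)
server.record(2)
assert server.flush() == 5       # aggregate sent upward
assert server.shared_value == 0  # erased after transmitting
```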
- In still further embodiments, the method further comprises transmitting, by the master server, updates to the users back through the mid-tier and bottom-tier servers.
- In still further embodiments, the method further comprises transmitting, by the master server, updates to the users back through a feedback server.
- An object of the subject matter presented herein is to improve the fan experience for live sporting events by providing a “game within a game” by merging a live sporting event with a related live interactive game (typically provided on a mobile device such as a smartphone or tablet).
- Another object of the invention is to provide simple-to-use, real-time interaction with a live sporting event that does not interfere with the viewer's focus on the live event and that creates a sense of involvement in the live event by providing a meaningful mechanism for remote participation.
- Another object of the invention is to provide a new communication network topology that enables the recording of specific user input values in near real-time from a plethora of users.
- Additional objects, advantages, and novel features of the solutions provided herein will be recognized by those skilled in the art based on the following detailed description and claims, as well as the accompanying drawings, and/or may be learned by production or operation of the examples provided herein.
- The figures depict one or more embodiments of the subject matter described herein. They are provided as examples only. Within the figures, reference numbers are used to refer to elements described in the detailed description.
- FIG. 1 is a schematic diagram illustrating an example of a pyramid server topology according to the teachings provided herein.
- FIG. 2 is a schematic diagram illustrating a further example of a pyramid server topology according to the teachings provided herein.
- FIG. 3 is a schematic diagram illustrating a still further example of a pyramid server topology according to the teachings provided herein.
-
FIGS. 1-3 illustrate exemplary systems 100, 200, 300 for providing a server topology that helps to minimize latency when receiving data inputs from a multitude of end user computing devices to update aggregated values in one or more data fields in a principal server. The present invention was developed to allow millions of users to update an application database. In some embodiments, the application database contains a single shared value which is updated by many users operating a variety of personal devices, such as cellular phones, tablets, laptops, etc. In order for the application to operate effectively, updates to the single shared value in the database must complete within a predetermined timeframe from receipt of the update from the user. Millions of users may be updating the application simultaneously, and the number of simultaneous database updates cannot be allowed to negatively affect the update timeframe of the single shared value.
- As shown in FIG. 1, the system 100 includes a single principal server 102 in communication with a first sub-layer 104 of subordinate servers 106. Each of the subordinate servers 106 in the first sub-layer 104 is in communication with a second sub-layer 108 of subordinate servers 106. As further shown, end user devices 110 (e.g., mobile devices) communicate with the lowest sub-layer of subordinate servers 106. As will be recognized by those skilled in the art, the system 100 shown in FIG. 1 provides the communication network topology to implement the systems and methods described herein.
- Referring to FIG. 2, the illustrated server topology 200 utilizes a master server 201, which sits at the top of the server topology. The master server 201 connects to a first sub-layer of slave servers 202. The master server 201 and slave servers 202 both contain databases with the same shared value. The shared value in the master server 201 database is fed the data from the databases of a first tier 203 of slave servers 202 at a predetermined time interval. The time interval is based on the maximum number of connections that can be serviced within the desired timeframe. A second tier 204 of slave servers 202, and any number of subsequent tiers of slave servers 202, with databases containing the same shared value can be added under each of the first tier 203 of slave servers 202 to increase capacity.
-
Adding slave servers 202 to the first tier 203 will not add to the total processing time, but adding a tier of slave servers 202 will. For example, if the master server 201 can handle 1,000 slave servers 202 or user connections in one second, then each slave server 202 of the first tier 203 can also handle 1,000 slave servers or user connections in one second. That means the time it takes the database of the master server 201 to service all 1,000 databases of the slave servers 202 of the first tier 203 is the same amount of time it will take each slave server 202 of the first tier 203 to service all 1,000 databases of the slave servers 202 of the second tier 204. The final outcome is that increasing the load from 1,000 user connections to 1,000,000 user connections will only double the time, to two seconds.
- Further, if you add a third tier of slave servers 202, you can service up to 1 billion user connections while only tripling the time it takes the master server 201 to process its user connections.
- Therefore, if the master server 201 can service 1,000 connections in one second using the above-described server topology, it will take only two seconds to service a million user connections with a second tier 204 of slave servers 202 and only three seconds to service a billion user connections with a third tier of slave servers 202.
- Below is a mathematical calculation of
FIG. 2 having the form of equations (EQNs) 1-5, whereby X is the number of users needed to be reached, T is the maximum amount of time to refresh value X, t is the maximum allowed time per tier, # is the number of slave server tiers needed, and N is the number of user connections a single server can update in t. The maximum number of connections that can update X in T time is the number of servers in the last or lowermost tier, which is N raised to the power of the number of tiers (N^#). -
- Referring to EQNs 1-5, if N=100, T=l second, and the system must support 800,000 connections, then X=100 at the second level (one tier of slave servers), X=10,000 at the third level (two tiers of slave servers), and X=1,000,000 at the fourth level (three tiers of slave servers). Since 800,000 is greater than the number of servers at the third level and less than the number of servers in the fourth level, three tiers of slave servers are needed. At three tiers of service (#=3) and T=one second, t is 333 ms.
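The variable definitions above can be checked numerically. This sketch reproduces the 800,000-connection example, assuming the required tier count is the smallest # for which N^# ≥ X:

```python
import math

# Solve for the number of slave-server tiers and the per-tier time budget,
# using the figures from the worked example: N = 100 connections per server,
# T = 1 second total, X = 800,000 required connections.
N, T, X = 100, 1.0, 800_000

tiers = math.ceil(math.log(X, N))  # smallest # with N ** tiers >= X
t = T / tiers                      # maximum allowed time per tier

assert N ** (tiers - 1) < X <= N ** tiers
print(tiers, round(t * 1000))      # → 3 333 (three tiers, ~333 ms per tier)
```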
- In order to operate at maximum efficiency, when a
slave server 202 sends an update of the single shared value, which is in the form of a data packet of the single aggregate value of all the updates received by the slave server 202, to either the database in the server tier above that slave server or the database of the master server 201, then that slave server database will reset the single shared value to a default value, e.g., zero, if the single shared value is performing a counting function. By resetting after sending an update, the slave server database does not expend processing time determining the difference between the current value of the single shared value and the value of the single shared value at the time the slave server database last updated either the server in the tier above that slave server or the master server 201.
- Referring to FIG. 2, the flow of data between the databases of the master server 201 and the slave servers 202 is two-way; therefore, as the slave servers 202 are updating the single shared value in the database of their master server 201, the master server 201 is updating the databases of its slave servers 202 with the aggregate value of the single shared value, which is then shared with the users 110.
- Referring to the alternative embodiment illustrated in FIG. 3, a separate feedback server with a database 305 is utilized to update the users 110 with the aggregate value of the single shared value. In this embodiment, the flow of database values for the single shared value between the master server 301 and the slave servers 302 is one-way; therefore, as the slave servers 302 are updating the single shared value in their master server 301 database, the master server 301 is sharing the aggregate value of the single shared value with the feedback server(s) 305, which is then relayed to the users 110.
- It should be noted that various changes and modifications to the presently preferred embodiments described herein will be apparent to those skilled in the art. Such changes and modifications may be made without departing from the spirit and scope of the present invention and without diminishing its attendant advantages.
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US18/270,686 US20240297913A1 (en) | 2020-12-31 | 2021-12-30 | Communications network topology for minimizing latency in a many-to-one environment |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202063132987P | 2020-12-31 | 2020-12-31 | |
| US17/316,603 US20220207024A1 (en) | 2020-12-31 | 2021-05-10 | Tiered server topology |
| US18/270,686 US20240297913A1 (en) | 2020-12-31 | 2021-12-30 | Communications network topology for minimizing latency in a many-to-one environment |
| PCT/US2021/065644 WO2022147220A1 (en) | 2020-12-31 | 2021-12-30 | Communications network topology for minimizing latency in a many-to-one environment |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20240297913A1 true US20240297913A1 (en) | 2024-09-05 |
Family
ID=82118685
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/316,603 Abandoned US20220207024A1 (en) | 2020-12-31 | 2021-05-10 | Tiered server topology |
| US18/270,686 Pending US20240297913A1 (en) | 2020-12-31 | 2021-12-30 | Communications network topology for minimizing latency in a many-to-one environment |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/316,603 Abandoned US20220207024A1 (en) | 2020-12-31 | 2021-05-10 | Tiered server topology |
Country Status (4)
| Country | Link |
|---|---|
| US (2) | US20220207024A1 (en) |
| EP (1) | EP4272375A4 (en) |
| CA (1) | CA3204037A1 (en) |
| WO (1) | WO2022147220A1 (en) |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6961363B1 (en) * | 1999-12-02 | 2005-11-01 | International Business Machines Corporation | Frequency look-ahead and link state history based scheduling in indoor wireless pico-cellular networks |
| US20050283526A1 (en) * | 2001-09-13 | 2005-12-22 | O'neal Mike | Systems for distributing data over a computer network and methods for arranging nodes for distribution of data over a computer network |
| US20080183753A1 (en) * | 2007-01-30 | 2008-07-31 | Oracle International Corporation | Distributed Device Information Management System As A Distributed Information Repository System |
| US20130085993A1 (en) * | 2011-09-29 | 2013-04-04 | Avaya Inc. | System and method to join and cut two-way rest overlay trees for distributed knowledge bases |
| US8433771B1 (en) * | 2009-10-02 | 2013-04-30 | Amazon Technologies, Inc. | Distribution network with forward resource propagation |
| US20180107794A1 (en) * | 2016-10-18 | 2018-04-19 | Medfusion, Inc. | Aggregation servers providing information based on records from a plurality of data portals and related methods and computer program products |
| US20180146262A1 (en) * | 2011-11-16 | 2018-05-24 | Chandrasagaran Murugan | Remote engagement system |
| US20180300350A1 (en) * | 2017-04-18 | 2018-10-18 | Microsoft Technology Licensing, Llc | File table index aggregate statistics |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7487179B2 (en) * | 2006-01-31 | 2009-02-03 | International Business Machines Corporation | Method and program product for automating the submission of multiple server tasks for updating a database |
| EP2080110A4 (en) * | 2006-10-05 | 2014-01-15 | Nat Ict Australia Ltd | MULTI-USER AND DECENTRALIZED ONLINE ENVIRONMENT |
| US9043325B1 (en) * | 2011-06-24 | 2015-05-26 | Google Inc. | Collecting useful user feedback about geographical entities |
| WO2015177802A1 (en) * | 2014-05-23 | 2015-11-26 | Banerjee Saugata | System and method for establishing single window online meaningful access and effective communication |
| CN106027634B (en) * | 2016-05-16 | 2019-06-04 | 白杨 | Message Port Exchange Service System |
| CN107800738B (en) * | 2016-09-05 | 2021-03-05 | 华为数字技术(苏州)有限公司 | Data updating method and device |
| JP6926035B2 (en) * | 2018-07-02 | 2021-08-25 | 株式会社東芝 | Database management device and query partitioning method |
| US11514341B2 (en) * | 2019-05-21 | 2022-11-29 | Azra Analytics, Inc. | Systems and methods for sports data crowdsourcing and analytics |
- 2021
- 2021-05-10 US US17/316,603 patent/US20220207024A1/en not_active Abandoned
- 2021-12-30 US US18/270,686 patent/US20240297913A1/en active Pending
- 2021-12-30 WO PCT/US2021/065644 patent/WO2022147220A1/en not_active Ceased
- 2021-12-30 CA CA3204037A patent/CA3204037A1/en active Pending
- 2021-12-30 EP EP21916475.3A patent/EP4272375A4/en active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| US20220207024A1 (en) | 2022-06-30 |
| CA3204037A1 (en) | 2022-07-07 |
| EP4272375A4 (en) | 2025-03-12 |
| EP4272375A1 (en) | 2023-11-08 |
| WO2022147220A1 (en) | 2022-07-07 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: ALLOWED -- NOTICE OF ALLOWANCE NOT YET MAILED; NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |