US20070005530A1 - Selecting grid executors via a neural network - Google Patents
- Publication number
- US20070005530A1 (Application US11/138,938)
- Authority
- US
- United States
- Prior art keywords
- grid
- executors
- neural network
- work
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5066—Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
Abstract
A method, apparatus, system, and signal-bearing medium are provided that, in an embodiment, send units of work to grid executors, create training data based on the performance of the grid executors, and train a neural network via the training data. The training data includes pairs of input and output data, where the input data is the types of the units of work and the output data is the service strengths of the grid executors. Once the neural network has been trained, subsequent units of work have their grid executors selected by inputting the types of the units of work to the neural network and receiving a service strength from the neural network as output. The grid executors are then selected based on the output service strength from the neural network. In this way, in an embodiment, the grid performance may be increased.
Description
- This invention generally relates to grid computer systems and more specifically relates to selecting a grid executor via a neural network.
- The development of the EDVAC computer system of 1948 is often cited as the beginning of the computer era. Since that time, computer systems have evolved into extremely sophisticated devices, and computer systems may be found in many different settings. Computer systems typically include a combination of hardware, such as semiconductors and circuit boards, and software, also known as computer programs.
- Years ago, computer systems were stand-alone devices that did not communicate with each other. But today, computers are increasingly connected via networks, such as the Internet. When connected via a network, one computer, often called a client, may request services from another computer, often called a server. Further, a computer that acts as a client in one scenario may act as a server in another scenario. In addition to the Internet example above, companies often have internal networks that connect their various computers together. A large company with hundreds of thousands of employees may have hundreds of thousands of computers all connected via a network. Many of these computers are idle for much of the time. For example, typical office workers have computers on their desks, which they use for a few hours each day to check e-mail, compose an occasional document, or request services from a server computer. The rest of the day, the office worker spends on the telephone, in meetings, or at home while the computer sits unused and idle. Thus, many companies have hundreds of millions of dollars invested in computers that are underutilized.
- These companies would naturally like to find a way to use this vast, underutilized, but widely distributed, computer capacity. One technique for using idle computer capacity is called grid computing. In grid computing, a grid controller breaks up a task at one computer into multiple, smaller units of work (UOW). The grid controller sends each unit of work to multiple receiving computers in parallel via a network for execution. Some of these receiving computers execute the unit of work and send the results back quickly. Others execute the unit of work and send the results back more slowly. Still others never receive the unit of work, receive the unit of work but never execute it, or execute the unit of work but never send the results back. The grid controller uses the first results that are returned for a particular unit of work and ignores the other, later results. In addition to saving money by using underutilized computer resources, grid computing also offers a performance benefit: a large task is broken into many smaller units of work that execute in parallel.
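- By way of a hedged illustration of this first-response-wins dispatch (the disclosure does not prescribe any particular API; the `execute` call and host list below are hypothetical stand-ins), a grid controller's fan-out might be sketched as:

```python
# Minimal sketch: fan one unit of work out to several receiving computers in
# parallel and keep the first result returned, ignoring later or missing replies.
# `execute` is a hypothetical stand-in for the real remote call.
from concurrent.futures import ThreadPoolExecutor, as_completed

def execute(host: str, unit_of_work: str) -> str:
    """Hypothetical remote call; a real grid would ship the UOW over the network."""
    return f"result of {unit_of_work} from {host}"

def dispatch_first_wins(unit_of_work: str, hosts: list) -> str:
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        futures = [pool.submit(execute, h, unit_of_work) for h in hosts]
        for future in as_completed(futures):   # results arrive fastest-first
            try:
                return future.result()         # first result wins; the rest are ignored
            except Exception:
                continue                       # a failed executor simply loses the race
    raise RuntimeError("no executor returned a result")
```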
- In order to increase the performance benefits, some grid controllers keep track of the availability of computers in the network, and issue the units of work that have the highest priority to the computers in the network with the highest availability. Similarly, the grid controllers issue the units of work with lower priorities to the computers in the network that have less availability. While the technique of keeping track of computer availability does boost performance, there is a need for more advanced techniques that increase grid performance even more.
- A method, apparatus, system, and signal-bearing medium are provided that, in an embodiment, send units of work to grid executors, create training data based on the performance of the grid executors, and train a neural network via the training data. The training data includes pairs of input and output data, where the input data is the types of the units of work and the output data is the service strengths of the grid executors. Once the neural network has been trained, subsequent units of work have their grid executors selected by inputting the types of the units of work to the neural network and receiving a service strength from the neural network as output. The grid executors are then selected based on the output service strength from the neural network. In this way, in an embodiment, the grid performance may be increased.
- Various embodiments of the present invention are hereinafter described in conjunction with the appended drawings:
- FIG. 1 depicts a high-level block diagram of an example system for implementing an embodiment of the invention.
- FIG. 2 depicts a block diagram of selected components of the example system, according to an embodiment of the invention.
- FIG. 3 depicts a flowchart of processing for registering a grid executor, according to an embodiment of the invention.
- FIG. 4 depicts a flowchart for processing units of work in a training mode, according to an embodiment of the invention.
- FIG. 5 depicts a flowchart for processing units of work in a performance mode, according to an embodiment of the invention.
- It is to be noted, however, that the appended drawings illustrate only example embodiments of the invention, and are therefore not considered limiting of its scope, for the invention may admit to other equally effective embodiments.
- Referring to the Drawings, wherein like numbers denote like parts throughout the several views, FIG. 1 depicts a high-level block diagram representation of a computer system 100 connected via a network 130 to a server 132, according to an embodiment of the present invention. In an embodiment, the hardware components of the computer system 100 may be implemented by an eServer iSeries computer system available from International Business Machines of Armonk, N.Y. However, those skilled in the art will appreciate that the mechanisms and apparatus of embodiments of the present invention apply equally to any appropriate computing system. The computer system 100 acts as a client for the server 132, but the terms “server” and “client” are used for convenience only, and in other embodiments an electronic device that is used as a server in one scenario may be used as a client in another scenario, and vice versa.
- The major components of the computer system 100 include one or more processors 101, a main memory 102, a terminal interface 111, a storage interface 112, an I/O (Input/Output) device interface 113, and communications/network interfaces 114, all of which are coupled for inter-component communication via a memory bus 103, an I/O bus 104, and an I/O bus interface unit 105.
- The computer system 100 contains one or more general-purpose programmable central processing units (CPUs) 101A, 101B, 101C, and 101D, herein generically referred to as the processor 101. In an embodiment, the computer system 100 contains multiple processors typical of a relatively large system; however, in another embodiment the computer system 100 may alternatively be a single CPU system. Each processor 101 executes instructions stored in the main memory 102 and may include one or more levels of on-board cache.
- The main memory 102 is a random-access semiconductor memory for storing data and programs. In another embodiment, the main memory 102 represents the entire virtual memory of the computer system 100, and may also include the virtual memory of other computer systems coupled to the computer system 100 or connected via the network 130. The main memory 102 is conceptually a single monolithic entity, but in other embodiments the main memory 102 is a more complex arrangement, such as a hierarchy of caches and other memory devices. For example, the main memory 102 may exist in multiple levels of caches, and these caches may be further divided by function, so that one cache holds instructions while another holds non-instruction data, which is used by the processor or processors. The main memory 102 may be further distributed and associated with different CPUs or sets of CPUs, as is known in any of various so-called non-uniform memory access (NUMA) computer architectures.
- The main memory 102 includes a grid manager 150, a neural network 152, a grid application 154, and grid data 156. Although the grid manager 150, the neural network 152, the grid application 154, and the grid data 156 are illustrated as being contained within the memory 102 in the computer system 100, in other embodiments some or all of them may be on different computer systems and may be accessed remotely, e.g., via the network 130. The computer system 100 may use virtual addressing mechanisms that allow the programs of the computer system 100 to behave as if they only have access to a large, single storage entity instead of access to multiple, smaller storage entities. Thus, while the grid manager 150, the neural network 152, the grid application 154, and the grid data 156 are illustrated as being contained within the main memory 102, these elements are not necessarily all completely contained in the same storage device at the same time. Further, although the grid manager 150, the neural network 152, the grid application 154, and the grid data 156 are illustrated as being separate entities, in other embodiments some of them, or portions of some of them, may be packaged together.
- The grid manager 150 breaks up tasks generated by the grid application 154 into multiple units of work and sends the units of work to the servers 132 for execution. In various embodiments, the grid application 154 may be a user application, a third party application, an operating system, any portion thereof, or any other appropriate executable or interpretable code or statements. The grid manager 150 uses the grid data 156 and the neural network 152 to choose the appropriate servers 132 to receive the units of work.
- The neural network 152 is a parallel computing model analogous to the human brain, consisting of multiple simple processing units (processors or code) connected by adaptive weights. In various embodiments, the neural network 152 may be either supervised or unsupervised. A supervised neural network differs from conventional programs in that a programmer does not write algorithmic code to tell the neural network how to process data. Instead, the neural network is trained by presenting training data of the desired input/output relationships to the neural network. An unsupervised neural network can extract statistically significant features from input data; it differs from a supervised neural network in that only input data is presented to the neural network during training. The neural network 152 has a learning mechanism, which operates by updating the adaptive weights after each training iteration. Once a sufficient level of training has been achieved by the neural network 152 (for example, the neural network 152 produces the desired input/output relationships specified by the training data), the training of the neural network 152 ceases, and the neural network 152 no longer updates its adaptive weights. Instead, the neural network 152 enters a performance mode, during which the neural network 152 receives input data and produces output data using the trained adaptive weights.
- Many different types of computing models exist that fall under the label “neural networks.” These different models have unique network topologies and learning mechanisms. Examples of known neural network models are the Back Propagation Model, the Adaptive Resonance Theory Model, the Self-Organizing Feature Maps Model, the Self-Organizing TSP Networks Model, and the Bidirectional Associative Memories Model, but in other embodiments any appropriate model may be used.
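- As a purely illustrative toy of the supervised scheme just described (a single logistic unit rather than any of the named models), the adaptive weights are updated after each training presentation, training ceases once the desired input/output pairs are reproduced, and the frozen weights are then used in performance mode. All names below are assumptions of this sketch, not the patented implementation:

```python
# Toy supervised learner: one logistic unit with adaptive weights.
import math
import random

def train(pairs, lr=0.5, tol=0.1, max_epochs=10_000):
    """pairs: list of (input vector, target in [0, 1]) training examples."""
    n = len(pairs[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = random.uniform(-0.5, 0.5)

    def forward(x):
        return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

    for _ in range(max_epochs):
        # Training ceases once the desired input/output relationships are produced.
        if all(abs(t - forward(x)) < tol for x, t in pairs):
            break
        for x, t in pairs:                      # update adaptive weights each iteration
            out = forward(x)
            grad = (t - out) * out * (1.0 - out)
            w = [wi + lr * grad * xi for wi, xi in zip(w, x)]
            b += lr * grad
    return w, b                                 # frozen weights for performance mode
```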
- In an embodiment, the grid manager 150 includes instructions capable of executing on the processor 101 or statements capable of being interpreted by instructions executing on the processor 101 to perform the functions as further described below with reference to FIGS. 3, 4, and 5. In another embodiment, the grid manager 150 may be implemented in microcode. In another embodiment, the grid manager 150 may be implemented in hardware via logic gates and/or other appropriate hardware techniques in lieu of or in addition to a processor-based system.
- The memory bus 103 provides a data communication path for transferring data among the processor 101, the main memory 102, and the I/O bus interface unit 105. The I/O bus interface unit 105 is further coupled to the system I/O bus 104 for transferring data to and from the various I/O units. The I/O bus interface unit 105 communicates with multiple I/O interface units 111, 112, 113, and 114, which are also known as I/O processors (IOPs) or I/O adapters (IOAs), through the system I/O bus 104. The system I/O bus 104 may be, e.g., an industry standard PCI bus, or any other appropriate bus technology.
- The I/O interface units support communication with a variety of storage and I/O devices. For example, the terminal interface unit 111 supports the attachment of one or more user terminals 121, 122, 123, and 124. The storage interface unit 112 supports the attachment of one or more direct access storage devices (DASD) 125, 126, and 127 (which are typically rotating magnetic disk drive storage devices, although they could alternatively be other devices, including arrays of disk drives configured to appear as a single large storage device to a host). The contents of the main memory 102 may be stored to and retrieved from the direct access storage devices 125, 126, and 127, as needed.
- The I/O and other device interface 113 provides an interface to any of various other input/output devices or devices of other types. Two such devices, the printer 128 and the fax machine 129, are shown in the exemplary embodiment of FIG. 1, but in other embodiments many other such devices may exist, which may be of differing types. The network interface 114 provides one or more communications paths from the computer system 100 to other digital devices and computer systems; such paths may include, e.g., one or more networks 130.
- Although the memory bus 103 is shown in FIG. 1 as a relatively simple, single bus structure providing a direct communication path among the processors 101, the main memory 102, and the I/O bus interface 105, in fact the memory bus 103 may comprise multiple different buses or communication paths, which may be arranged in any of various forms, such as point-to-point links in hierarchical, star or web configurations, multiple hierarchical buses, parallel and redundant paths, or any other appropriate type of configuration. Furthermore, while the I/O bus interface 105 and the I/O bus 104 are shown as single respective units, the computer system 100 may in fact contain multiple I/O bus interface units 105 and/or multiple I/O buses 104. While multiple I/O interface units are shown, which separate the system I/O bus 104 from various communications paths running to the various I/O devices, in other embodiments some or all of the I/O devices are connected directly to one or more system I/O buses.
- The computer system 100 depicted in FIG. 1 has multiple attached terminals 121, 122, 123, and 124, such as might be typical of a multi-user “mainframe” computer system. Typically, in such a case the actual number of attached devices is greater than those shown in FIG. 1, although the present invention is not limited to systems of any particular size. The computer system 100 may alternatively be a single-user system, typically containing only a single user display and keyboard input, or might be a server or similar device which has little or no direct user interface, but receives requests from other computer systems (clients). In other embodiments, the computer system 100 may be implemented as a personal computer, portable computer, laptop or notebook computer, PDA (Personal Digital Assistant), tablet computer, pocket computer, telephone, pager, automobile, teleconferencing system, appliance, or any other appropriate type of electronic device.
- The network 130 may be any suitable network or combination of networks and may support any appropriate protocol suitable for communication of data and/or code to/from the computer system 100. In various embodiments, the network 130 may represent a storage device or a combination of storage devices, either connected directly or indirectly to the computer system 100. In an embodiment, the network 130 may support Infiniband. In another embodiment, the network 130 may support wireless communications. In another embodiment, the network 130 may support hard-wired communications, such as a telephone line or cable. In another embodiment, the network 130 may support the Ethernet IEEE (Institute of Electrical and Electronics Engineers) 802.3x specification. In another embodiment, the network 130 may be the Internet and may support IP (Internet Protocol).
- In another embodiment, the network 130 may be a local area network (LAN) or a wide area network (WAN). In another embodiment, the network 130 may be a hotspot service provider network. In another embodiment, the network 130 may be an intranet. In another embodiment, the network 130 may be a GPRS (General Packet Radio Service) network. In another embodiment, the network 130 may be a FRS (Family Radio Service) network. In another embodiment, the network 130 may be any appropriate cellular data network or cell-based radio network technology. In another embodiment, the network 130 may be an IEEE 802.11B wireless network. In still another embodiment, the network 130 may be any suitable network or combination of networks. Although one network 130 is shown, in other embodiments any number (including zero) of networks (of the same or different types) may be present.
- The server 132 includes a grid executor 134 and may also include some or all of the hardware components already described for the computer system 100. In another embodiment, the functions of the server 132 may be implemented as an application in the computer system 100.
- It should be understood that FIG. 1 is intended to depict the representative major components of the computer system 100, the network 130, and the server 132 at a high level, that individual components may have greater complexity than represented in FIG. 1, that components other than or in addition to those shown in FIG. 1 may be present, and that the number, type, and configuration of such components may vary. Several particular examples of such additional complexity or additional variations are disclosed herein; it being understood that these are by way of example only and are not necessarily the only such variations.
- The various software components illustrated in FIG. 1 and implementing various embodiments of the invention may be implemented in a number of manners, including using various computer software applications, routines, components, programs, objects, modules, data structures, etc., referred to hereinafter as “computer programs,” or simply “programs.” The computer programs typically comprise one or more instructions that are resident at various times in various memory and storage devices in the computer system 100, and that, when read and executed by one or more processors 101 in the computer system 100, cause the computer system 100 to perform the steps necessary to execute steps or elements comprising the various aspects of an embodiment of the invention.
- Moreover, while embodiments of the invention have and hereinafter will be described in the context of fully-functioning computer systems, the various embodiments of the invention are capable of being distributed as a program product in a variety of forms, and the invention applies equally regardless of the particular type of signal-bearing medium used to actually carry out the distribution. The programs defining the functions of this embodiment may be stored in, encoded on, and delivered to the computer system 100 via a variety of tangible signal-bearing media, which include, but are not limited to, the following computer-readable media:
- (1) information permanently stored on a non-rewriteable storage medium, e.g., a read-only memory or storage device attached to or within a computer system, such as a CD-ROM, DVD−R, or DVD+R;
- (2) alterable information stored on a rewriteable storage medium, e.g., a hard disk drive (e.g., the DASD 125, 126, or 127), CD-RW, DVD−RW, DVD+RW, DVD-RAM, or diskette; or
- (3) information conveyed by a communications or transmission medium, such as through a computer or a telephone network, e.g., the network 130.
- Such tangible signal-bearing media, when carrying or encoded with computer-readable, processor-readable, or machine-readable instructions or statements that direct or control the functions of the present invention, represent embodiments of the present invention.
- Embodiments of the present invention may also be delivered as part of a service engagement with a client corporation, nonprofit organization, government entity, internal organizational structure, or the like. Aspects of these embodiments may include configuring a computer system to perform, and deploying software systems and web services that implement, some or all of the methods described herein. Aspects of these embodiments may also include analyzing the client company, creating recommendations responsive to the analysis, generating software to implement portions of the recommendations, integrating the software into existing processes and infrastructure, metering use of the methods and systems described herein, allocating expenses to users, and billing users for their use of these methods and systems.
- In addition, various programs described hereinafter may be identified based upon the application for which they are implemented in a specific embodiment of the invention. But any particular program nomenclature that follows is used merely for convenience, and thus embodiments of the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature.
- The exemplary environments illustrated in FIG. 1 are not intended to limit the present invention. Indeed, other alternative hardware and/or software environments may be used without departing from the scope of the invention.
- FIG. 2 depicts a block diagram of selected components of the example system, according to an embodiment of the invention. In the illustrated example system, the computer system 100 is connected to a server 132-1, a server 132-2, and a server 132-3 via the network 130. Each of the servers 132-1, 132-2, and 132-3 is an example of the server 132, as previously described above with reference to FIG. 1. The server 132-1 includes a grid executor A 134-1, the server 132-2 includes a grid executor B 134-2, and the server 132-3 includes a grid executor C 134-3.
- The computer system 100 includes the grid data 156, which includes example records 205, 210, and 215, but in other embodiments any number of records with any appropriate data may be present. Each of the example records includes a grid executor identifier field 220, a service strength field 225, a services available field 230, a unit of work type field 235, a unit of work priority field 240, and a performance statistics field 245.
- The grid executor identifier field 220 identifies one of the grid executors 134, such as the grid executor A 134-1, the grid executor B 134-2, or the grid executor C 134-3. The service strength 225 indicates a service or services for which the associated grid executor 220 performs faster than other services that the grid executor 220 provides. The services available 230 indicates services that are available at the grid executor 220, regardless of the speed at which the grid executor 220 performs them. The service strengths 225 are a subset of the services available 230 for a particular grid executor 220.
- The unit of work type 235 indicates a type of unit of work that the grid manager 150 has sent to the grid executor 220. The unit of work priority 240 indicates the priority of the unit of work type 235, as reported by the grid application 154 or as specified by the grid manager 150. The performance statistics 245 indicates the previous performance of units of work having the unit of work type 235 when issued to the grid executor 220. In various embodiments, the performance statistics 245 may include the response time for processing the unit of work type 235 or the percentage of time that the grid executor 220 is available for processing the unit of work type 235.
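- Purely for illustration, the record layout described above might be modeled as follows; the field names, types, and the choice of a Python dataclass are assumptions of this sketch, not part of the disclosure:

```python
# Sketch of one record of the grid data 156 (fields 220-245).
from dataclasses import dataclass, field

@dataclass
class GridDataRecord:
    grid_executor_id: str              # field 220: identifies a grid executor 134
    service_strengths: set             # field 225: services this executor performs fastest
    services_available: set            # field 230: all services the executor offers
    unit_of_work_type: str = ""        # field 235: type of UOW last sent to the executor
    unit_of_work_priority: int = 0     # field 240: priority of that UOW
    performance_statistics: dict = field(default_factory=dict)
        # field 245: e.g. {"response_time_ms": 120, "availability": 0.97}

    def __post_init__(self):
        # The service strengths 225 are a subset of the services available 230.
        assert self.service_strengths <= self.services_available
```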
- FIG. 3 depicts a flowchart of processing for registering the grid executors 134, according to an embodiment of the invention. Control begins at block 300. Control then continues to block 305 where the grid manager 150 receives service strengths and available services from the grid executors 134. Control then continues to block 310 where the grid manager 150 creates a record (such as the record 205, 210, or 215) in the grid data 156 and stores the grid executor identifier 220, the reported service strengths 225 of the grid executors 134, and the reported available services 230 of the grid executors 134. Control then continues to block 399 where the logic of FIG. 3 returns.
- FIG. 4 depicts a flowchart for processing units of work in a training mode, according to an embodiment of the invention. Control begins at block 400. Control then continues to block 405 where the grid manager 150 creates units of work based on the grid application 154. In various embodiments, the grid manager 150 may create the units of work based on and/or in response to the tasks, functions, requests, messages, interrupts, or actions of the grid application 154. The grid manager 150 further determines the type of the created unit of work and a priority of the created unit of work. The grid manager 150 may determine the priority of the unit of work based on the priority of the grid application 154 on which the unit of work is based, based on a priority reported by the grid application 154 on which the unit of work is based, or based on any other technique.
- Control then continues to block 410 where the grid manager 150 selects grid executors 134 based on the service strengths 225 of the grid executors 134, the services available 230 of the grid executors 134, the type of the created unit of work, and the priority of the created unit of work. In an embodiment, the grid manager 150 may select the grid executor 134 that has a service strength 225 that matches the unit of work type. In another embodiment, the grid manager 150 may use either the services available 230 or the service strengths 225 of the grid executors 134 to select the grid executors 134, depending on the priority of the unit of work. For example, if the priority of the unit of work is high (above a threshold), the grid manager 150 may select the grid executors 134 whose service strengths 225 match the unit of work type, but if the priority of the unit of work is low (below the threshold) the grid manager 150 uses the services available 230 to select the grid executors 134. Thus, the grid manager 150 selects a subset of the grid executors 134 from which the grid manager 150 received the service strengths 225 and the services available 230.
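- A hedged sketch of this block 410 selection rule, reusing the GridDataRecord sketch above; the concrete threshold value is an illustrative assumption, since the disclosure only posits "a threshold":

```python
# High-priority work is matched against service strengths 225; low-priority
# work only needs to appear among the services available 230.
PRIORITY_THRESHOLD = 5  # assumed value for illustration

def select_executors(records, uow_type, uow_priority):
    if uow_priority > PRIORITY_THRESHOLD:
        return [r for r in records if uow_type in r.service_strengths]
    return [r for r in records if uow_type in r.services_available]
```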
- The grid manager 150 stores the unit of work type of the created unit of work into the unit of work type field 235 of the records in the grid data 156 associated with the selected grid executors 134. The grid manager 150 further sets the unit of work priority associated with the created unit of work into the unit of work priority field 240 in the record associated with the selected grid executors 134.
- Control then continues to block 415 where the grid manager 150 sends the created units of work to the selected grid executors 134 in parallel, meaning that the units of work are sent to multiple of the selected grid executors 134 without waiting for a response from any one particular grid executor 134. At least one of the grid executors 134 executes the units of work and returns a response to the grid application 154.
- Control then continues to block 420 where the grid manager 150 retrieves performance statistics data associated with the parallel execution of the units of work and stores the performance statistics data in the performance statistics field 245 of the records associated with the grid executors 220 that executed the units of work.
- Control then continues to block 425 where the grid manager 150 creates training data based on the service strengths 225, the unit of work type 235, and the performance statistics 245. In an embodiment, the grid manager 150 selects those grid executors 220 (those records in the grid data 156), for every unit of work type 235, that have the best performance statistics 245, e.g., the lowest response time or the highest availability. The grid manager 150 then creates training data that includes pairs of unit of work types 235 and service strengths 225. Control then continues to block 430 where the grid manager 150 trains the neural network 152 with the unit of work types 235 as input to the neural network 152 and the respective paired service strengths 225 as output from the neural network 152. That is, the grid manager 150 repeatedly inputs the work types 235 to the neural network 152 until the neural network 152 produces the paired respective service strengths 225 as output at least a threshold percentage of the time. Control then continues to block 499 where the logic of FIG. 4 returns.
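- Blocks 425 and 430 might look roughly like the following sketch, again reusing the record sketch above; the `train_step`/`predict` interface is an assumed stand-in for whatever neural network model is actually used:

```python
# Build (unit of work type -> service strengths) training pairs from the best
# performer per type, then train until the network reproduces the pairs at
# least a threshold percentage of the time.
def build_training_pairs(records):
    best = {}  # unit of work type -> record with the lowest response time seen
    for r in records:
        rt = r.performance_statistics.get("response_time_ms")
        if not r.unit_of_work_type or rt is None:
            continue
        cur = best.get(r.unit_of_work_type)
        if cur is None or rt < cur.performance_statistics["response_time_ms"]:
            best[r.unit_of_work_type] = r
    return [(t, r.service_strengths) for t, r in best.items()]

def train_until_threshold(network, pairs, threshold=0.95, max_rounds=10_000):
    for _ in range(max_rounds):
        for uow_type, strengths in pairs:
            network.train_step(uow_type, strengths)      # assumed model interface
        correct = sum(network.predict(t) == s for t, s in pairs)
        if correct / len(pairs) >= threshold:            # paired outputs reproduced
            return                                       # often enough: stop training
```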
- FIG. 5 depicts a flowchart for processing units of work in a performance mode after the training mode is complete, according to an embodiment of the invention. Control begins at block 500. Control then continues to block 505 where the grid manager 150 creates units of work based on the grid application 154, as previously described above with reference to block 405 of FIG. 4.
- Control then continues to block 510 where the grid manager 150 inputs the types 235 of the units of work into the neural network 152. Control then continues to block 515 where the neural network 152 generates the service strengths 225 as output. Control then continues to block 520 where the grid manager 150 selects the grid executors 134 from the grid data 156 based on the service strengths 225 that were output from the neural network 152. In an embodiment, the grid manager 150 selects those grid executors 134 with service strengths 225 that match the output service strengths from the neural network 152.
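- In performance mode, the selection of blocks 510-520 reduces to a lookup through the trained network; `predict` below is the same assumed interface as in the training sketch:

```python
# Map a unit of work type to service strengths via the trained network, then
# select registered executors whose service strengths 225 match that output.
def select_via_network(network, records, uow_type):
    strengths = network.predict(uow_type)  # trained adaptive weights, no more updates
    return [r for r in records if strengths & r.service_strengths]
```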
- Control then continues to block 525 where the grid manager 150 sends the units of work in parallel to the selected grid executors 134 identified by the grid executor identifier 220. Control then continues to block 530 where at least one of the selected grid executors 134 executes the units of work and returns a response to the grid application 154.
- In the previous detailed description of exemplary embodiments of the invention, reference was made to the accompanying drawings (where like numbers represent like elements), which form a part hereof, and in which is shown by way of illustration specific exemplary embodiments in which the invention may be practiced. These embodiments were described in sufficient detail to enable those skilled in the art to practice the invention, but other embodiments may be utilized and logical, mechanical, electrical, and other changes may be made without departing from the scope of the present invention. Different instances of the word “embodiment” as used within this specification do not necessarily refer to the same embodiment, but they may. The previous detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
- In the previous detailed description of exemplary embodiments of the invention, reference was made to the accompanying drawings (where like numbers represent like elements), which form a part hereof, and in which is shown by way of illustration specific exemplary embodiments in which the invention may be practiced. These embodiments were described in sufficient detail to enable those skilled in the art to practice the invention, but other embodiments may be utilized and logical, mechanical, electrical, and other changes may be made without departing from the scope of the present invention. Different instances of the word "embodiment" as used within this specification do not necessarily refer to the same embodiment, but they may. The previous detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
- In the previous description, numerous specific details were set forth to provide a thorough understanding of embodiments of the invention. However, the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the invention.
Claims (20)
1. A method comprising:
sending a first plurality of units of work to a first plurality of grid executors in parallel;
creating training data based on performance of the first plurality of grid executors;
training a neural network via the training data; and
selecting a second plurality of grid executors via the neural network.
2. The method of claim 1, further comprising:
sending a second unit of work to the second plurality of grid executors in parallel.
3. The method of claim 1, further comprising:
receiving a service strength from each of the first plurality of grid executors.
4. The method of claim 3, wherein the creating the training data further comprises:
creating a plurality of pairs of input data and output data based on the performance, wherein the input data comprises a plurality of types of the first plurality of units of work and the output data comprises the service strengths of the first plurality of grid executors.
5. The method of claim 4, wherein the creating the training data further comprises:
selecting the plurality of types based on response time for the plurality of types at the first plurality of grid executors.
6. The method of claim 2, wherein the selecting further comprises:
inputting a type of the second unit of work to the neural network; and
receiving a second service strength from the neural network.
7. The method of claim 6, wherein the selecting further comprises:
selecting the second plurality of grid executors based on the second service strength from the neural network.
8. A signal-bearing medium encoded with instructions, wherein the instructions when executed comprise:
receiving a service strength from each of a first plurality of grid executors;
selecting a subset of the first plurality of grid executors based on the service strength;
sending a first plurality of units of work to the subset of the first plurality of grid executors in parallel;
creating training data based on performance of the subset of the first plurality of grid executors;
training a neural network via the training data; and
selecting a second plurality of grid executors via the neural network.
9. The signal-bearing medium of claim 8, further comprising:
sending a second unit of work to the second plurality of grid executors in parallel.
10. The signal-bearing medium of claim 8, wherein the creating the training data further comprises:
creating a plurality of pairs of input data and output data based on the performance, wherein the input data comprises a plurality of types of the first plurality of units of work and the output data comprises the service strengths of the subset of the first plurality of grid executors.
11. The signal-bearing medium of claim 10, wherein the creating the training data further comprises:
selecting the plurality of types based on response time for the plurality of types at the subset of the first plurality of grid executors.
12. The signal-bearing medium of claim 9, wherein the selecting further comprises:
inputting a type of the second unit of work to the neural network; and
receiving a second service strength from the neural network.
13. The signal-bearing medium of claim 12, wherein the selecting further comprises:
selecting the second plurality of grid executors based on the second service strength from the neural network.
14. The signal-bearing medium of claim 8, wherein the receiving further comprises:
receiving services available from each of the first plurality of grid executors.
15. A method for configuring a computer, comprising:
configuring the computer to receive a service strength and services available from each of a first plurality of grid executors;
configuring the computer to select a subset of the first plurality of grid executors based on a priority and one of the service strength and services available;
configuring the computer to send a first plurality of units of work to the subset of the first plurality of grid executors in parallel;
configuring the computer to create training data based on performance of the subset of the first plurality of grid executors;
configuring the computer to train a neural network via the training data; and
configuring the computer to select a second plurality of grid executors via the neural network.
16. The method of claim 15, further comprising:
configuring the computer to send a second unit of work to the second plurality of grid executors in parallel.
17. The method of claim 15, wherein the configuring the computer to create the training data further comprises:
configuring the computer to create a plurality of pairs of input data and output data based on the performance, wherein the input data comprises a plurality of types of the first plurality of units of work and the output data comprises the service strengths of the subset of the first plurality of grid executors.
18. The method of claim 17, wherein the configuring the computer to create the training data further comprises:
configuring the computer to select the plurality of types based on response time for the plurality of types at the subset of the first plurality of grid executors.
19. The method of claim 16, wherein the configuring the computer to select further comprises:
configuring the computer to input a type of the second unit of work to the neural network; and
configuring the computer to receive a second service strength from the neural network.
20. The method of claim 19, wherein the configuring the computer to select further comprises:
configuring the computer to select the second plurality of grid executors based on the second service strength from the neural network.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/138,938 US20070005530A1 (en) | 2005-05-26 | 2005-05-26 | Selecting grid executors via a neural network |
CNA2006100678049A CN1869965A (en) | 2005-05-26 | 2006-03-13 | Method and device for selecting grid executors via a neural network |
JP2006144217A JP2006331425A (en) | 2005-05-26 | 2006-05-24 | Method and program for selecting grid executer via neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/138,938 US20070005530A1 (en) | 2005-05-26 | 2005-05-26 | Selecting grid executors via a neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070005530A1 true US20070005530A1 (en) | 2007-01-04 |
Family
ID=37443633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/138,938 Abandoned US20070005530A1 (en) | 2005-05-26 | 2005-05-26 | Selecting grid executors via a neural network |
Country Status (3)
Country | Link |
---|---|
US (1) | US20070005530A1 (en) |
JP (1) | JP2006331425A (en) |
CN (1) | CN1869965A (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016077797A1 (en) | 2014-11-14 | 2016-05-19 | Google Inc. | Generating natural language descriptions of images |
CN106203619B (en) * | 2015-05-29 | 2022-09-13 | 三星电子株式会社 | Data optimized neural network traversal |
CN108369661B (en) * | 2015-11-12 | 2022-03-11 | 谷歌有限责任公司 | Neural network programmer |
2005
- 2005-05-26: US application US11/138,938 filed; published as US20070005530A1; status: abandoned
2006
- 2006-03-13: CN application CNA2006100678049A filed; published as CN1869965A; status: pending
- 2006-05-24: JP application JP2006144217A filed; published as JP2006331425A; status: pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040215931A1 (en) * | 1996-11-29 | 2004-10-28 | Ellis Frampton E. | Global network computers |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11941058B1 (en) | 2008-06-25 | 2024-03-26 | Richard Paiz | Search engine optimizer |
US11675841B1 (en) | 2008-06-25 | 2023-06-13 | Richard Paiz | Search engine optimizer |
US10922363B1 (en) * | 2010-04-21 | 2021-02-16 | Richard Paiz | Codex search patterns |
US8954975B2 (en) * | 2011-11-08 | 2015-02-10 | Electronics And Telecommunications Research Institute | Task scheduling method for real time operating system |
US20130117756A1 (en) * | 2011-11-08 | 2013-05-09 | Electronics And Telecommunications Research Institute | Task scheduling method for real time operating system |
US20130290223A1 (en) * | 2012-04-27 | 2013-10-31 | Yahoo! Inc. | Method and system for distributed machine learning |
US9633315B2 (en) * | 2012-04-27 | 2017-04-25 | Excalibur Ip, Llc | Method and system for distributed machine learning |
US11741090B1 (en) | 2013-02-26 | 2023-08-29 | Richard Paiz | Site rank codex search patterns |
US11809506B1 (en) | 2013-02-26 | 2023-11-07 | Richard Paiz | Multivariant analyzing replicating intelligent ambience evolving system |
CN111670463A (en) * | 2018-02-08 | 2020-09-15 | 谷歌有限责任公司 | Machine Learning-Based Geometric Mesh Simplification |
CN112703682A (en) * | 2018-09-13 | 2021-04-23 | 诺基亚通信公司 | Apparatus and method for designing a beam grid using machine learning |
US11526746B2 (en) | 2018-11-20 | 2022-12-13 | Bank Of America Corporation | System and method for incremental learning through state-based real-time adaptations in neural networks |
CN112418430A (en) * | 2019-08-19 | 2021-02-26 | 发那科株式会社 | Machine learning method and machine learning device for learning work process |
WO2021215906A1 (en) * | 2020-04-24 | 2021-10-28 | Samantaray Shubhabrata | Artificial intelligence-based method for analysing raw data |
Also Published As
Publication number | Publication date |
---|---|
CN1869965A (en) | 2006-11-29 |
JP2006331425A (en) | 2006-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP2006331425A (en) | Method and program for selecting grid executer via neural network | |
Zhou et al. | On cloud service reliability enhancement with optimal resource usage | |
US20080010497A1 (en) | Selecting a Logging Method via Metadata | |
US9619430B2 (en) | Active non-volatile memory post-processing | |
US20080288963A1 (en) | Selective event registration | |
CN108604239B (en) | System and method for efficiently classifying data objects | |
US7613897B2 (en) | Allocating entitled processor cycles for preempted virtual processors | |
US7853928B2 (en) | Creating a physical trace from a virtual trace | |
US8543577B1 (en) | Cross-channel clusters of information | |
US20060036894A1 (en) | Cluster resource license | |
US7536461B2 (en) | Server resource allocation based on averaged server utilization and server power management | |
US7552236B2 (en) | Routing interrupts in a multi-node system | |
US20080221855A1 (en) | Simulating partition resource allocation | |
Han et al. | SlimML: Removing non-critical input data in large-scale iterative machine learning | |
US12235815B2 (en) | Graph-based application performance optimization platform for cloud computing environment | |
Chen et al. | Silhouette: Efficient cloud configuration exploration for large-scale analytics | |
US20060026214A1 (en) | Switching from synchronous to asynchronous processing | |
US7606906B2 (en) | Bundling and sending work units to a server based on a weighted cost | |
CN112380127B (en) | Test case regression method, device, equipment and storage medium | |
US20060248015A1 (en) | Adjusting billing rates based on resource use | |
CN111930485A (en) | A Performance-Based Job Scheduling Method | |
US20070006070A1 (en) | Joining units of work based on complexity metrics | |
US7287196B2 (en) | Measuring reliability of transactions | |
Hassannezhad Najjari et al. | A systematic overview of live virtual machine migration methods | |
Wu et al. | A selective mirrored task based fault tolerance mechanism for big data application using cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BAARTMAN, RANDALL P.;BRANDA, STEVEN J.;DUGGIRALA, SURYA V.;AND OTHERS;REEL/FRAME:016306/0715;SIGNING DATES FROM 20050520 TO 20050523 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |