HK1112301A - Computer network system for synchronizing a second database with a first database and corresponding procedure - Google Patents

Info

Publication number
HK1112301A
HK1112301A (application HK08107217.4A)
Authority
HK
Hong Kong
Prior art keywords
database
data
ssp
error
pic
Prior art date
Application number
HK08107217.4A
Other languages
Chinese (zh)
Inventor
迈克尔‧班克
汉斯贝特‧洛克
Original Assignee
瑞士银行股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 瑞士银行股份有限公司
Publication of HK1112301A

Description

Computer network system for synchronizing a second database with a first database and corresponding process
Technical Field
The present invention relates to computer network systems and processes for building and/or synchronizing a second database from/with a first database. In particular, the present invention relates to those computer network systems in which a first, existing database is to be transferred to a second database that is to be newly constructed.
Background
In complex systems with one or more front-end stations/applications and a back-end, migration traditionally proceeds by first migrating the front-end and then the back-end. In practice, for various reasons (high complexity, long system downtime), synchronous migration of the front-end and the back-end is usually not advisable. In the case of large DP projects, which rule out a single-step migration (a so-called "big bang") from an existing database platform to a new database platform, there is therefore a need for a systematic way of allowing a controlled, gradual transition from an existing database to a new database, for a number of reasons (e.g., because not all applications accessing the new database have yet been completed, or because the operational behavior of the new database must still be studied in detail).
Furthermore, there is often an operational need to bring both databases into a state of actual agreement at a particular defined point in time (e.g., at the end of the day). In other words, data synchronization should be continuously maintained in both database systems, and the user should also be able to maintain the data, for example, using an application software program.
Since a very large number of changes to the data held in the first database can occur within a short time, because the first database continues to be maintained even after the data is initially transferred to the second database (initial load), an approach is required that is economical with respect to computation time and transfer volume. The requirements increase further if changes made online in the first database are to be available in the second database as simultaneously as possible (at least in near real time). In some cases, collective or group changes must also be possible offline (during periods of low operation).
Because a migration from a first database platform to a second database platform is usually performed for technical or IT reasons (faster access, more complex query options, changes in the hardware system platform, etc.) or business reasons (business process optimization, corporate reorganization, etc.), there are usually significant differences in physical implementation, structure, and organization between the first and second databases. This is particularly pronounced when the system architectures (hardware, operating system, database design, and database implementation) of the first and second databases differ substantially. In this case, changes made in the first database (modification of existing entries; detection, creation, and population of new entries) cannot be mapped one-to-one onto the second database. Furthermore, changes are often complex: a change affects a first plurality of entries in the first database but, owing to the different structure and organization, a different plurality of entries in the second database, or requires different changes and/or additional fields in the second database. These circumstances also preclude maintaining changes in the second database immediately and in the same manner as in the first database.
Finally, it must be considered that typically multiple computer program applications access and change the database in the case of large DP engineering. This environment (simultaneous access) has a significant impact on the policy of keeping the second database updated, especially in the case of similar online systems.
Due to the transit times of the message/data flow in a network comprising two databases and/or connecting two database platforms to each other, and due to other influences of a real-time or online environment or even a mixed environment (real-time and batch systems), such as file length and priority, it cannot be directly ensured that changes become available to the application software programs accessing the second database in exactly the sequence in which they were executed in the first database. In other words, when data is transferred from one database to the other, newer data may be overwritten by data transmitted earlier that arrives later. This has the following undesirable consequence: an "older" change may reset "newer" data to an "old" value. Furthermore, due to these effects, a record may not yet be completely maintained in the second database and thus not completely changed, so that, in the end, erroneous data is available to the application software programs accessing the second database.
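The "overtaking" hazard described above can be illustrated with a small sketch (the field names and timestamp format are illustrative, not from the patent): a change transmitted earlier but arriving later must not overwrite newer data, which is why each change carries an ordering identifier.

```python
# Sketch: changes may arrive out of order, so each change carries an
# ordering identifier (here a timestamp string) and is applied only if
# it is newer than the state already stored.

def apply_change(record: dict, change: dict) -> bool:
    """Apply `change` to `record` unless a newer change was already applied."""
    if change["ts"] <= record.get("ts", ""):
        return False  # an "older" change must not reset "newer" data
    record["value"] = change["value"]
    record["ts"] = change["ts"]
    return True

record = {}
# The newer change (12:00:02) happens to arrive first ...
apply_change(record, {"ts": "20240101120002", "value": "new address"})
# ... and the older change (12:00:01) arrives afterwards and is discarded.
applied = apply_change(record, {"ts": "20240101120001", "value": "old address"})
assert applied is False
assert record["value"] == "new address"
```

Without the timestamp guard, the late-arriving "old address" would silently reset the newer value, which is exactly the failure mode the patent describes.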
Not least, care must be taken that the migration process does not significantly (ideally, not at all) limit the quality, operability, performance, etc. of the original database.
US 6,223,187 B1 (Boothby et al.) discloses a computer-implemented method for synchronizing a first database located on a first computer with a second database located on a second computer. To this end, a first history file located on the first computer is used, containing records that reproduce the records of the first database at the time the previous synchronization was terminated. Where records of the first database have been changed or added since the last synchronization, the first computer sends the second computer information that the second computer uses to identify the records of the first database to be changed.
In this method, the history file contains a copy of the result of the previous synchronization of the two databases. The history file is used to determine the changes in the databases since the previous synchronization and to reconstruct records that are not sent again. If no history file exists, all records from both databases are loaded into working memory and compared. In this case, synchronization is the following process: the records of the two databases are compared with the history file to identify changes, additions, or deletions in each of the two databases since the previous synchronization, and to determine which additions, deletions, or updates must be made to the databases to synchronize their records.
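The history-file comparison can be sketched as follows (a simplified model in which dictionaries stand in for the database and the history file; the names are illustrative):

```python
def diff_since_sync(current: dict, history: dict):
    """Classify records as added, removed or changed since the last
    synchronization, using the history file (a copy of the last result)."""
    added   = {k: v for k, v in current.items() if k not in history}
    removed = [k for k in history if k not in current]
    changed = {k: v for k, v in current.items()
               if k in history and history[k] != v}
    return added, removed, changed

history = {"cust1": "Alice", "cust2": "Bob"}
current = {"cust1": "Alice", "cust2": "Bobby", "cust3": "Carol"}
added, removed, changed = diff_since_sync(current, history)
assert added == {"cust3": "Carol"}
assert removed == []
assert changed == {"cust2": "Bobby"}
```

Only the differences relative to the history file need to be transmitted; unchanged records ("cust1" above) generate no traffic, which is the point of keeping the history file at all.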
Disclosure of Invention
It is an object of the present invention to provide a computer network that can efficiently synchronize two database platforms while avoiding the disadvantages and problems of the previous methods as described above.
To achieve this object, the invention provides a computer network system having the features of claim 1.
The invention also provides for performing a comparison between the first and second databases to obtain a status relating to the equality of the information content of the two databases. On the basis of this data comparison, according to the invention, a report (error log file) relating to erroneous and/or missing records is generated. Finally, a correction function for erroneous and/or missing records is provided.
To this end, according to the invention, a data container with a control table and a data table is provided. It is used to simulate transaction brackets in the first and second database environments. Error records based on the data comparison are also written to the container.
This error detection and handling is a sub-function of the synchronization between the two databases. It is based on the structures of the error log file and the data container. During synchronization, all messages are written to and processed from the data container. If an error occurs during synchronization, the affected data is identified, a link is created from the data container to the error log file, and the error is then displayed.
To this end, according to the present invention, the software components error log file, data container, synchronization, re-delivery, and data comparison are combined into one logical unit for error handling. A GUI is available to the user that allows consolidated reporting for the synchronization, initial load, and data comparison components. An option is also provided for manually initiating a re-delivery for data correction directly from the displayed entry.
A retry function may be provided for performing immediate correction of identified differences between the first and second databases. Another function, the re-delivery function, comprises a set of functions for selecting erroneous or missing records of the second database environment in the table; corresponding changes are generated and propagated via the synchronization process back to the second database environment. The re-delivery function corrects the following three possible errors:
A record is missing in the first database but present in the second database.
A record is present in the first database but missing in the second database.
A record is present in the first database but present in the second database with wrong content.
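The three error classes above can be detected by a straightforward key-wise comparison. This sketch uses illustrative names, with dictionaries standing in for the two database platforms:

```python
def classify_errors(db1: dict, db2: dict):
    """Return the three error classes the re-delivery function must correct."""
    errors = []
    for key in sorted(set(db1) | set(db2)):
        if key not in db1:
            errors.append((key, "missing in DB1, present in DB2"))
        elif key not in db2:
            errors.append((key, "present in DB1, missing in DB2"))
        elif db1[key] != db2[key]:
            errors.append((key, "wrong content in DB2"))
    return errors

db1 = {"a": 1, "b": 2}
db2 = {"b": 3, "c": 4}
assert classify_errors(db1, db2) == [
    ("a", "present in DB1, missing in DB2"),
    ("b", "wrong content in DB2"),
    ("c", "missing in DB1, present in DB2"),
]
```

In the patent's scheme, each classified error would be written to the data container, from which the re-delivery or retry function generates the correcting change.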
The data comparison system compares the data sets of the two databases with each other and finds as many differences as possible. The comparison can be performed easily if the data structures on both systems are almost identical. The key issue is that, at a particular critical point in time, a large amount of data must be compared.
The data comparison system essentially has three components: error detection, error analysis and error correction.
In one aspect, error detection includes the extraction and processing of data from both databases. For this, hash values are calculated and compared with one another. If there is a discrepancy, the data is retrieved from the corresponding database. Another part of error detection is a comparison program that compares the divergent data from the first and second databases and documents the differences in detail in the synchronization error log file (and, for synchronization, in the data container). From the data container, an immediate attempt can be made to apply the correct data to the corresponding database by executing the retry function.
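The two-step hash comparison can be sketched as follows (one hash per record; raw data is fetched and compared in detail only where the hashes differ — the field names are illustrative):

```python
import hashlib

def record_hash(record: dict) -> str:
    """Stable hash over a record's sorted attribute/value pairs."""
    canonical = repr(sorted(record.items())).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def find_suspects(db1: dict, db2: dict):
    """Step 1: compare only hashes; return the keys whose raw data must
    be retrieved and compared in detail (step 2)."""
    return [k for k in db1
            if k in db2 and record_hash(db1[k]) != record_hash(db2[k])]

db1 = {"k1": {"name": "Alice"}, "k2": {"name": "Bob"}}
db2 = {"k1": {"name": "Alice"}, "k2": {"name": "Bobby"}}
assert find_suspects(db1, db2) == ["k2"]
```

Comparing short hashes instead of full records keeps the data traffic at the critical comparison point small, which is the cost argument the text makes.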
The error analysis includes a processing function of the error handling that analyzes data from the error log file and the data container and links them to each other. The data is then displayed by a GUI (graphical user interface). The errors can then, if necessary, be analyzed manually. Furthermore, from this GUI, a so-called batch re-delivery function and a retry function can be initiated.
In the case of error correction, there are three variants:
re-delivery and/or retry of individual records;
error correction that writes the erroneous data into the data container and initiates a correction function from the data container;
a partial initial load or mass update, which proceeds in the same way as the initial load.
In the case of an initial load, the affected tables are deleted first.
In the case of error correction, the following data structures are read and written in particular:
data container
Error log file
Unload file
Hash file
Conversion file
Comparison file
Re-delivery file
Q1 database
For the unload file, the same data structure as for the initial-load unload file is used.
The coexist controller program defines which programs or program components are called for a particular record type. The coexist controller program is required to load the data to be modified from the first database into the second database environment.
In the case of a successful re-delivery, the coexist controller program sets the error entry in the data container to "done".
The error message and error data may be displayed (sorted if needed).
In the data container, errors resulting from the reconciliation of the second database can be distinguished from errors resulting from the synchronization between the two databases. Functionality is additionally provided for displaying, modifying, and re-delivering or retrying the data.
With the functionality according to the invention, the longer the systems of the two database environments operate in parallel, the more the number and types of errors are reduced. After the end of a period (day, week, etc.) and depending on the record type, a reconciliation may be performed. It is also possible to check only those records on the second database side that have actually been queried; records that have not been queried may, for example, be checked once a month.
The reconciliation finds the differences between the systems of the two databases and corrects them. In this way, errors that were not discovered during synchronization are detected in the first place. These errors may be:
non-encapsulated batch/online programs on the system of the first database
Loss of messages and/or files on the transmission path
Errors in the second database system environment
Recovery of one of the two systems
Message records that cannot be applied in the second database environment.
It must be assumed that most errors can be corrected by the re-delivery function. Optionally, the second database may also be reloaded by another initial load or a partial initial load (mass update).
Based on the database entries to be compared and their attributes, in a first step hash values are determined and compared with each other. If they differ, the raw data items are compared with each other in a second step. To this end, the hash values are first sent by the encapsulation module to the second database and compared there; if necessary, the raw data follows in the second step.
The approach according to the invention offers a series of advantages during the migration (phases) and in operation:
data traffic, in terms of capacity (volume) and time requirements, is lower than with other approaches in which, for example, an application program writes directly to both databases during the migration phase. The cost of adapting the application software programs is also low. Finally, the cost of searching for errors in the databases and/or application software programs is comparatively low, thanks to the clear allocation according to which only the encapsulation module can access the first database for writing or changing, converts/decomposes the units of work into messages according to defined rules, and then sends the messages to the second database.
Furthermore, the encapsulation module can be set up and programmed so that, when the first database is accessed, it tests whether the original unit of work (unchanged in content, but broken up or divided into separate messages if necessary) is to be sent to the second database, or whether the changed entries resulting from the unit of work (broken up or divided into separate messages if necessary) are to be sent from the first database to the second database. Depending on the result of the test, the corresponding content is then sent. All changing accesses to the first database are made exclusively through the encapsulation module. The application software programs, as well as other (e.g., utility) programs, therefore do not access the first database directly. Instead, these programs present their change commands for the first database to the encapsulation module, which coordinates and performs the access to the first database. In addition, the encapsulation module sends the changes (in a manner described in detail below) to the second database. This ensures that the second database does not "lose" any changes of the first database. The process thus keeps the two database platforms in step.
Furthermore, the approach according to the present invention allows coexistence and interaction between two application worlds (i.e., two different complex DP system environments), each based on its own database core (i.e., the first and second databases). During the coexistence and migration phases, the decentralized workstations of the application worlds, and the application software programs running on them, obtain the required data in real time from one of the two databases without any problem, process the data and, if necessary, write changed data back (at least to the first database). It can even remain hidden from the user that he or she is communicating with two databases. In other words, the user does not notice the existence of the two databases at all, since the content provided on the user interface may alternatively or cumulatively come from one or both of the two databases, and in the individual case the user cannot detect which database is being accessed. The first database may be a hierarchical database whose data is migrated to a relational (second) database or an object-oriented (second) database. Likewise, the first database may be a relational database whose data is migrated to an object-oriented (second) database.
Since the application software programs can only make changes through one of the two databases (i.e., the first database), and the second database tracks the changes in the first database, the two databases actually have the same content, at least at certain critical times (e.g., the end of a day).
Access by units of work to at least the first database is performed from at least one application workstation in order to generate, change, or delete content of the database. At least one first server directs and maintains the first database and is connected to the at least one application workstation; at least one second server directs and maintains the second database; and at least one data connection connects the two servers. Access by a unit of work to the first database occurs through an encapsulation module, which is set up and programmed such that the unit of work is passed to it, the accepted unit of work is broken down into one or more messages, the changes are entered into the first database, and the messages are sent to the second database.
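The role of the encapsulation module described above can be sketched like this (the class and field names are illustrative assumptions, not the patent's actual interface; a dictionary stands in for DB1 and a list for the output wait queue):

```python
class EncapsulationModule:
    """Accepts a unit of work, applies it to DB1, and breaks it down
    into one or more messages that travel together towards DB2."""

    def __init__(self, db1: dict, outbox: list):
        self.db1 = db1
        self.outbox = outbox  # stands in for the output wait queue

    def execute(self, unit_of_work: list) -> None:
        messages = []
        for seq, change in enumerate(unit_of_work, start=1):
            old = self.db1.get(change["key"])
            self.db1[change["key"]] = change["new"]  # change DB1 itself
            messages.append({"seq": seq, "key": change["key"],
                             "old": old, "new": change["new"]})
        # all messages of one unit of work are queued as one packet
        self.outbox.append(messages)

db1, outbox = {}, []
module = EncapsulationModule(db1, outbox)
module.execute([{"key": "cust1", "new": "Alice"},
                {"key": "addr1", "new": "Main St"}])
assert db1 == {"cust1": "Alice", "addr1": "Main St"}
assert len(outbox) == 1 and len(outbox[0]) == 2
```

The essential property is that application programs never touch `db1` directly; every change passes through `execute`, which both updates DB1 and emits the messages for DB2.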
During the migration phase, synchronization only needs to be directed from the first (master) database to the second (slave) database, since all application software programs access and change only the first database. With the encapsulation module, each changing access to the first database is also recorded at another location. That location may be a message list (for real-time transmission) or a batch file (for processing in batch mode).
By breaking down units of work (which may be complex processes initiated by the application software programs, i.e., commands for changing the database) into one or more separate, self-contained messages, the database structures of both sides (which may differ) can be taken into account. In this way, no information content is lost when processing the units of work and/or maintaining the changes in both databases. Furthermore, depending on the structure of the first database relative to the second database, more efficient access is possible, requiring less communication bandwidth and fewer computing/memory resources.
"Self-contained messages" are understood as data that logically belong together or come from one process flow. The data can be structured hierarchically:
header part 1 (e.g., create new customer)
M groups (1-M) (surname, first name, account manager, etc.)
Header part 2 (e.g., create the address of the new customer)
N groups (1-N) (street, city, country, etc.)
Header part 3 (e.g., create additional data)
O groups (1-O) (hobby, birthday, etc.)
Trailer part 3
P groups
Trailer part 2
Q groups
Trailer part 1
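Such a hierarchically structured, self-contained message can be modeled, for example, as nested data (all field names here are illustrative, chosen to mirror the header/group structure above):

```python
# Illustrative model of one self-contained message: header parts, each
# with its repeating groups, logically belonging to one process flow.
message = {
    "header_1": {"action": "create new customer",
                 "groups": [{"surname": "Doe", "first_name": "Jane",
                             "account_manager": "AM-7"}]},
    "header_2": {"action": "create address of new customer",
                 "groups": [{"street": "Main St", "city": "Zurich",
                             "country": "CH"}]},
    "header_3": {"action": "create additional data",
                 "groups": [{"hobby": "chess", "birthday": "1970-01-01"}]},
}

# The message is self-contained: everything needed to apply the change
# in the second database travels in one unit, so nothing is lost if the
# two databases are structured differently.
assert all(part["groups"] for part in message.values())
assert message["header_2"]["groups"][0]["city"] == "Zurich"
```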
New or different organizational structures or criteria (search or classification criteria) may also be generated or used in the second database. This also simplifies the operation of the second database and increases the efficiency of accessing the second database, while the first database can be operated synchronously based on the actual same data.
Another advantage of the approach according to the invention is that the migration can be performed gradually, since the application software programs, which up to now have accessed the first database, can be switched over one by one to a new data exchange protocol for accessing the second database. Thus, the migration may proceed continuously, undetected by the users of the application software programs. The user interface visible to the users of the application software programs may remain unchanged.
A particularly suitable area of use for the approach according to the present invention is master data, i.e., customer data, partner data, product data, process data, etc., as opposed to transaction data (i.e., account movements, orders, transfers, product processing data, etc.).
Advantageous developments and alternatives of the invention
In a preferred embodiment of the invention, the encapsulation module is set up and programmed to provide the messages with a first identifier identifying each message before the messages are sent by the encapsulation module to the second database. In this case, the encapsulation module is set up and programmed to obtain the first identifier from a preferably central unit, which forms the first identifier as a timestamp or serial number. This ensures that the individual messages can be processed in the correct sequence and associated correctly (with their unit of work).
The encapsulation module sends the identifier with each change or message to the second database. If the origin of the change is in the first database, the identifier (typically a timestamp) is carried along with each change into the second database.
Each message contains the content of the first database to be changed or generated and/or the changed or generated content of the first database, and is stored in the first and/or second database. Each message generated by the encapsulation module has the same structure: a technology header part, an application header part, and a content part (old and new). The content parts (old and new) consist of a character sequence of up to several kilobytes. The content depends on the type of encapsulation, the type of update (store, modify, delete), and the type of content transferred.
In other words, depending on the action to be performed, the message contains the code for the action to be performed, the content of the first database to be changed or generated, and/or the changed or generated content of the first database.
The encapsulation module populates the message structure as follows; this preferably applies equally to batch mode:

Update type   Application header part   Old content   New content
Store (S)     X                                       X
Modify (M)    X                         X             X
Delete (D)    X                         X
Data is provided in a message in a manner that ensures that as few "empty" data items or initialization structures as possible must be physically forwarded in the message via the infrastructure. This is related to data security.
The application header part is filled for all three update types "store", "modify", and "delete". The data before the change is the old content, and the data after the change is the new content. For "modify", both the old and the new content are filled. For "delete", only the old content is filled, namely with the last data before the physical delete. For "store", only the new content is filled.
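These fill rules can be expressed directly in code (a sketch; the message layout is a simplification of the structure described above):

```python
def build_message(update_type: str, old=None, new=None) -> dict:
    """Fill the old/new content parts according to the update type:
    store (S) -> new only, modify (M) -> old and new, delete (D) -> old only."""
    assert update_type in ("S", "M", "D")
    msg = {"header": {"update_type": update_type}, "old": None, "new": None}
    if update_type in ("M", "D"):
        msg["old"] = old  # state before the change (last state before delete)
    if update_type in ("S", "M"):
        msg["new"] = new  # state after the change
    return msg

assert build_message("S", new="Alice")["old"] is None
assert build_message("D", old="Alice")["new"] is None
m = build_message("M", old="Alice", new="Alicia")
assert (m["old"], m["new"]) == ("Alice", "Alicia")
```

Leaving the unused content part empty matches the text's requirement that as few "empty" data items as possible be physically forwarded.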
Description of the interface:

Name                    Content
COEX-MUTPRG             Name of the changing program
COEX-AGENTC             Agent code
COEX-APCDE              Application code
COEX-NL                 Processing branch
COEX-UFCC-E             Program function code
COEX-UPTYP              Update type: S = store, M = modify, D = delete
COEX-USERID             User ID of the changing person
COEX-PAKET-TIME-STAMP   Date and time of the packet (YYYYMMDDhhmmssuuuuuu)
COEX-REC-TIME-STAMP     Date and time of the change (YYYYMMDDhhmmssuuuuuu)
COEX-NL-KD              Branch of the client
COEX-KDST               Client code number
COEX-OBJID              Object identification / DB1 key fields
COEX-RECTYP             Record type (record type from DB1, or TERM; a TERM record contains no data part)
COEX-REC-SEQUENCE       Record sequence number (within a packet; for TERM = highest sequence number of the packet)
COEX-ORIGIN             Origin of the record: 0 = initial load, 1 = re-delivery (from DB1), 2 = synchronization, 3 = reconciliation, 4 = functionality (DB1), 5 = online (DB2)
COEX-REQUEST-TYPE       Online or batch processing
COEX-RESYNC-ID          Original key for re-delivery, from TAPCONLINEPACKAGE (or TAPCONLINEATADATA) or from TAPCBATCHPACKAGE (or TAPCBATCHATA)
COEX-RESYNC-STATUS      Return code of the DB1 re-delivery functionality
COEX-RESERVED           Reserved
COEX-DATA               Record, old and new
The COEX-RECTYP attribute in the header part describes the type of data included in the old and new content. In the case of functional encapsulation (described below), this attribute contains the specific transaction code (so-called project messages).
Thus, in particular, each message comprises the following identification data: a message timestamp (identifying the database 1 process) and a sequence number (defining the correct sequence of processes within the process). It is to be understood that embodiments of the present invention do not absolutely require all of the parameters listed in the above tables.
As previously described, the encapsulation module is set up and programmed to store the plurality of messages resulting from the decomposition of a unit of work, together with the first identifier, in a project message, which is then sent to the second database. This ensures that the messages belonging to one unit of work relevant for the second database are not processed until they have all been sent to, and have arrived at, the second database. This effectively prevents older data for a database field from "overtaking" newer data for the same database field (due to batch processes initiated in parallel or nearly simultaneously, different transit times in the DP network caused by different file lengths, etc.) and ultimately creating an erroneous entry in the second database. In the same way, data items that are functionally related to each other are prevented from being processed or entered in the second database in an incorrect order, so that referential integrity is maintained. In this way, account is taken of the fact that, on the second database side, the updates arrive as a series of mutually independent messages.
Thus, the encapsulation module is set up and programmed to place the messages and project messages to be sent into an output wait queue, from which they are sent to the input wait queue of the controller of the second database.
At least as long as data is sent from the first database in the manner described above, a controller is provided on the second database side (preferably set up and programmed to read the messages arriving in its input wait queue). According to the invention, the controller checks whether all messages belonging to one unit of work have reached the input wait queue; performs the appropriate changes in the second database once all messages belonging to one unit of work have arrived; and, if changes or messages belonging to one unit of work must be distributed depending on certain conditions, distributes them, at least in part, to other databases or applications.
In other words, the input wait queue serves as a collection box into which the messages belonging to one unit of work are added piece by piece; the controller begins changing the second database with the message contents only when all messages belonging to that unit of work have been received. This ensures that inputs do not cross each other and thus do not change the second database erroneously. This mechanism avoids erroneous changes, especially in the case of changes that depend on the outcome of other changes.
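The controller's completeness check can be sketched using the TERM record described in the interface above (a TERM record carries the highest sequence number of the packet; the data layout here is an illustrative simplification):

```python
def packet_complete(messages: list) -> bool:
    """A packet may be processed only when the TERM record has arrived
    and every sequence number up to the TERM's is present."""
    terms = [m for m in messages if m["rectype"] == "TERM"]
    if not terms:
        return False
    expected = terms[0]["seq"]  # TERM = highest sequence number per packet
    present = {m["seq"] for m in messages}
    return present == set(range(1, expected + 1))

packet = [{"rectype": "CUST", "seq": 1}, {"rectype": "ADDR", "seq": 2}]
assert packet_complete(packet) is False          # TERM not yet arrived
packet.append({"rectype": "TERM", "seq": 3})
assert packet_complete(packet) is True           # all of 1..3 present
```

Only once `packet_complete` returns true would the controller apply the packet's changes to the second database, guaranteeing that partial units of work never become visible.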
On arrival at the controller of the second database, the header part of each message, preferably unchanged, is forwarded together with the old/new data parts; between them, a part specific to the second database may be inserted. This may be a separate attribute of the relevant database entry (e.g., a second-database-specific code of, say, 16 digits). Depending on the message type, this may be an account ID, business contact ID, address ID, etc. The controller presents the same interface (i.e., the same information in the same format) to all coexistence element programs, which each operate in the same way.
For the automatic maintenance of (partly administrative) data, so-called batch processes are available in the first database. These batch processes are managed (monitored and controlled) independently of the real-time maintenance of the first database. Batch processes are primarily used to process large amounts of data. In particular, these programs prepare files for third parties, generate lists, and perform internal processing, such as a mass change of all objects of a description type xyz.
Since these centralized changes must also access the first database via the encapsulation module, the present invention provides, analogously to the individual accesses of the application software programs, that the encapsulation module is preferably set up and programmed to decompose the units of work from a batch run into corresponding messages and write them to a transfer database, so that after predetermined parameters have been reached, the contents of the transfer database are transferred to the second database.
Finally, there are also intermediate solutions between centralized changes and individual changes: centralized changes are performed in a batch run, while individual changes are typically performed by an application software program. In such an intermediate solution, multiple changes to the first database are submitted by the application software program via a macro routine. In this manner, relatively small numbers (e.g., on the order of 100) of changes can be performed via an application software program from a workstation, without creating and processing an actual batch run.
The encapsulation module is also set up and programmed to break down units of work from the batch run into corresponding messages and write them to the transfer database. A monitor software module is also provided, set up and programmed to transfer the contents of the transfer database to the second database after predetermined parameters have been reached. To do so, the monitor software module begins sending the contents of the transfer database to the second database once the predetermined parameters have been reached. A predetermined parameter may be a predetermined time interval (e.g., every 10-30 minutes), a specific time of day (e.g., at night, when there is little data traffic), a predetermined amount of data, or the like.
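The monitor's trigger behavior can be sketched as follows (threshold-based here; a time-of-day trigger would work analogously — all names are illustrative):

```python
class Monitor:
    """Flushes the transfer database to the second database once a
    predetermined parameter (here: an amount of data) is reached."""

    def __init__(self, transfer_db: list, send, threshold: int):
        self.transfer_db = transfer_db
        self.send = send            # callable that ships one closed batch file
        self.threshold = threshold  # predetermined amount of data

    def check(self) -> None:
        if len(self.transfer_db) >= self.threshold:
            batch = list(self.transfer_db)   # one closed batch transfer file
            self.transfer_db.clear()
            self.send(batch)

sent = []
transfer_db = []
monitor = Monitor(transfer_db, sent.append, threshold=3)
for msg in ["m1", "m2", "m3"]:
    transfer_db.append(msg)
    monitor.check()
assert sent == [["m1", "m2", "m3"]]   # flushed as one closed batch file
assert transfer_db == []
```

Shipping the buffer as one closed list mirrors the requirement below that messages belonging together always land in a single batch file, never split across two.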
Preferably, the contents of the transfer database are then transferred to the second database as one or more closed batch transfer files. A group of messages that belong together is always written into one closed batch file and never distributed across two separate batch files. With a suitable code, the individual batch files of a sequence can be identified. For this purpose, each batch file has a file header from which it can be seen in what context, under what command, on what date, at what time of day, etc., the batch file was created. Furthermore, in the event of an error, the monitor can re-send a specific batch file as needed.
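A closed batch transfer file with such a header can be sketched as follows. The field names and the JSON serialization are illustrative assumptions, not part of the patent; the point is that the header identifies the file so the monitor can re-send it.

```python
# Hypothetical sketch of a "closed batch transfer file": messages that belong
# together go into a single file whose header records context, sequence number
# and creation time, so the file can be identified and re-sent after an error.
import json
from datetime import datetime

def build_batch_file(messages, context, sequence_no):
    return {
        "header": {
            "context": context,
            "sequence": sequence_no,
            "created": datetime(2005, 1, 1, 23, 0).isoformat(),
        },
        "messages": list(messages),  # a group of messages is never split
    }

batch = build_batch_file([{"attr": "name", "value": "A"}], "nightly-run", 7)
serialized = json.dumps(batch)

# In the event of an error, the monitor re-sends the file identified
# via its header.
restored = json.loads(serialized)
print(restored["header"]["sequence"])  # 7
```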
In a similar way to how the encapsulation module performs or prevents all accesses to the first database on the side of the first database, the controller according to the invention preferably ensures, on the side of the second database, that the second database is changed exclusively in the manner controlled by the controller. Thus, preferably, the batch files containing the transferred database contents are also handed over to the controller of the second database for further processing.
For each database or application that receives data from the first database, the controller of the second database preferably has a coexistence element program module that is set up and programmed to synchronize the data of the associated database or application and to execute, in the second database or application (or in the database associated with the application), the changes corresponding to the messages of a unit of work in the input wait queue. In order to achieve a unified interface design, the second database is handled in the same way as any other database or application receiving data from the first database. The only important difference is that the second database is updated before all other databases or applications.
The information required for this by the controller for the second database and/or the other databases or applications is preferably stored in a table. For each database or application for which a coexistence element program module exists, a row identifying the database or application by name is kept in a two-dimensional table; new databases or applications can thus easily be added. For each change or message, i.e., for each attribute of the database, a column is used. In these columns, three different values may be entered: {0, 1, 2}. "0" indicates that the corresponding database or application does not require the attribute or cannot process it; "1" indicates that the corresponding database or application can process the attribute, but should receive it only when a value change occurs; and "2" indicates that the corresponding database or application can process the attribute and receives it in any case.
In a second, three-dimensional table, the dimensions "message type", "database or application" and "database attribute" are preferably kept. For each message type there is preferably, according to the invention, a two-dimensional sub-table. For each database or application for which a coexistence element program module exists, a column is kept in the sub-table; the database or application is identified by its name, and new databases or applications can easily be added. For each attribute there is a row in the sub-table. Two different values can be entered here: {0, 1}. A "0" indicates that this attribute of the message does not affect the database or application; a "1" indicates that this attribute of the message may affect the database or application. The invention also provides the option of swapping rows and columns in a table.
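The interplay of the two tables can be sketched as a delivery decision. This is a hedged illustration: the receiver names, attribute names and lookup order are invented; only the {0, 1, 2} and {0, 1} value semantics are taken from the description above.

```python
# Illustrative model of the two controller tables: a two-dimensional table
# mapping each receiver to a {0,1,2} code per attribute, and per-message-type
# sub-tables with {0,1} flags. Together they decide whether a message's
# attribute change must be delivered to a receiver.

# 2-D table: receiver -> attribute -> 0/1/2
RECEIVER_TABLE = {
    "DB2":      {"name": 2, "city": 1, "fax": 0},
    "PartnerA": {"name": 1, "city": 0, "fax": 0},
}

# 3-D table: message type -> receiver -> attribute -> 0/1
MESSAGE_TYPE_TABLE = {
    "address-change": {
        "DB2":      {"name": 1, "city": 1, "fax": 0},
        "PartnerA": {"name": 1, "city": 0, "fax": 0},
    },
}

def must_deliver(msg_type, receiver, attribute, value_changed):
    if MESSAGE_TYPE_TABLE[msg_type][receiver].get(attribute, 0) == 0:
        return False              # attribute cannot affect this receiver
    code = RECEIVER_TABLE[receiver].get(attribute, 0)
    if code == 0:
        return False              # receiver cannot process the attribute
    if code == 1:
        return value_changed      # deliver only on a value change
    return True                   # code == 2: deliver in any case

print(must_deliver("address-change", "DB2", "name", value_changed=False))      # True
print(must_deliver("address-change", "DB2", "city", value_changed=False))      # False
print(must_deliver("address-change", "PartnerA", "city", value_changed=True))  # False
```

Adding a new receiving database or application amounts to adding one entry per table, which matches the extensibility claim above.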
It is also within the scope of the invention to save and maintain the information for the controller for the second database or the other databases or applications not in a table, but in a chained, possibly multi-dimensionally organized, data object structure.
According to the invention, the controller of the second database is also set up and programmed so that the messages belonging to a unit of work are transmitted to the appropriate coexistence element program modules, by which these messages are further processed. Preferably, the coexistence element program modules are set up and programmed to set an OK flag in a table after successful further processing, and/or to enter a NOK flag (not-OK flag) together with the name of the coexistence element program concerned in an error-handling table, so that the messages are available for display and/or for reprocessing or error correction.
According to the invention, it is provided that messages not yet successfully further processed by a coexistence element program are preferably reprocessed or error-corrected in one of the following ways: after a restart, the message is sent again from the first database and passed by the controller of the second database to the appropriate coexistence element program for further processing; or the message is re-passed by the controller of the second database to the appropriate coexistence element program for further processing; or the message not yet successfully further processed by the coexistence element program is deleted from the second database.
According to the invention, a message packet preferably contains 1 to n messages applied to a transaction of the first database. A message may be associated with multiple coexistence elements. It is also possible to process all messages of a transaction of the first database (a so-called packet) in one transaction in the environment of the second database. Re-delivery can re-deliver all messages of a first-database packet to the second database. Such a packet can be marked as intended for retransmission. A periodic batch run can select all marked packets, write the messages to be re-delivered to a file, and transmit the file to the first database. In the first database, the file is read and the corresponding messages are transmitted to the second database via the synchronization structure. In the environment of the second database, the retransmitted packets are processed, and the marked and retransmitted packets are given the error status "retransmitted".
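The retransmission batch run can be sketched as follows. This is an assumed illustration, not the patent's implementation: the packet fields, the in-memory "file" and the status strings are invented; only the select-write-mark sequence follows the description above.

```python
# Hedged sketch of the retransmission path: packets that could not be
# processed are marked for retransmission; a periodic batch run selects all
# marked packets, writes their messages to a file destined for the first
# database, and sets each packet's error status to "retransmitted".

packets = [
    {"id": 1, "status": "error", "marked": True,  "messages": ["m1", "m2"]},
    {"id": 2, "status": "error", "marked": False, "messages": ["m3"]},
    {"id": 3, "status": "ok",    "marked": False, "messages": ["m4"]},
]

def retransmission_batch_run(packets):
    outfile = []                              # file to be sent to the first DB
    for p in packets:
        if p["marked"]:
            outfile.extend(p["messages"])     # ALL messages of the packet
            p["status"] = "retransmitted"     # record the new error status
            p["marked"] = False
    return outfile

outfile = retransmission_batch_run(packets)
print(outfile)               # ['m1', 'm2']
print(packets[0]["status"])  # retransmitted
```

Note that the whole packet is re-delivered, never individual messages, matching the packet-granularity described above.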
According to the invention, the repeat function can process again a packet that the controller could not successfully process via the coexistence elements. This functionality is used in the case of sequence and/or structural problems.
According to the invention, the terminate function can set the error status of a packet to the error status "done". The packet is then set to "done" for each of the coexistence elements.
According to the invention, reprocessing or error correction links the input data of the controller of the second database (data provided in real time and data provided in batches) with the error events recorded in the error database, and stores them in the error report database. The reprocessed or error-corrected data are incorporated into the database of the controller of the second database. If messages from transactions from the first to the second database cannot be applied to the second database, they are preferably retained in the database of the controller of the second database, where they are processed by reprocessing or error correction.
When an error event is recorded, the message for which the error event occurred is preferably stored under a primary key. Thus, in error analysis, an error event entry can be assigned to the message. This is necessary because, in reprocessing or error correction, the error event entry does not refer to the message but to the packet.
According to the invention, in the event of an error, the external application software program writes an error message into the error event entry that is as differentiated and meaningful as possible, so that error analysis does not take an excessive amount of time. This simplifies the search for errors in the program.
According to the invention, two acknowledgements to the controller are available to the coexistence element programs. The controller of the second database behaves differently depending on which positive acknowledgement is passed back.
Error handling is supported by the following functions:
1. Error detection: recognition of discrepancies and possible error states; interface to an error-recording function for recording errors.
2. Error recording: error-logging function that logs error events and stores input messages that cannot be processed; ensures the linking of messages that cannot be processed with all associated error entries.
3. Error analysis: display of a list of all input messages of an error summary; display of the error table; setting of filters according to error status, start date and time, end date and time, branch, customer code number, object ID, message type and change procedure; display of an input message and/or its content; display of the generated error messages and of all error entries belonging to an input message; calling of the repeat function; calling of re-delivery.
4. Error correction: the repeat function can reprocess packets that the controller of the second database could not successfully process; re-delivery can retransmit such packets from the first database; the terminate function can manually set such packets to the "done" error status.
In the case of sequence problems, reprocessing or error correction makes the repeat functionality available. If the coexistence element program identifies a sequence problem, the repeat can be attempted automatically until a positive acknowledgement is received. The acknowledgements, their allowed values and their meaning are described below.
In accordance with the invention, the software program components in the environment of the user's second database use the error report database to enter errors, or pass them to operational monitoring, in the event of all "Warning" and "Exception" error events. The following table describes how error events are classified.
Status | Acknowledgement | Description
OK | 00 | Successfully processed. No error handling/reprocessing is required.
Warning | 04 | Processed, but should be checked again.
Exception | 08 | The desired processing could not be executed and was terminated. All resources are reset to their original state. When inputs are validated, multiple errors may be logged before the processing terminates.
Forced termination (exception) | 12 | Status provided for batch processing. If this occurs, the entire processing should be terminated (program stop).
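The controller's reaction to these return codes can be sketched as a simple dispatch. The reaction strings are invented for illustration; only the code-to-status mapping is taken from the table above.

```python
# Sketch of how a controller might react to the acknowledgement a coexistence
# element program passes back. Return codes follow the classification table.

ACK = {0: "OK", 4: "Warning", 8: "Exception", 12: "Forced termination"}

def react(ack):
    status = ACK[ack]
    if status == "OK":
        return "no error handling"
    if status == "Warning":
        return "processed, flag for re-check"
    if status == "Exception":
        return "terminate unit, reset resources, log error"
    return "stop entire batch process"  # Forced termination (12)

print(react(0))   # no error handling
print(react(12))  # stop entire batch process
```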
To make the encapsulation module available for different requirements, its functionality is set up and programmed to be controlled by reference data. The reference data control whether the encapsulation module changes the first database and/or sends one or more messages to the second database.
In a preferred embodiment of the invention, the encapsulation module is set up and programmed to send messages to the second database in accordance with logic switches, which are preferably controlled externally and/or by a program.
The encapsulation module provides functionality by which online or batch changes initiated by an application software program in the environment of the first database can be sent to the second database. The functionality of the encapsulation module is controlled by a reference data table. The reference data control whether a message is to be sent to the second database. According to the invention, the tracking of the second database is controlled by two (or more) switches. For example, for each business unit, a first switch defines whether the second database is to be tracked; for each application software program, a second switch controls whether changes initiated by that application software program are to be tracked in the second database. Thus, the second database is tracked only when both switches are "on", i.e., the second database is to be tracked for the business unit (first switch) and the current application software program contains an entry to track the second database (second switch). These functions ensure precise control over the migration of the database platform.
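The two-switch logic can be captured in a few lines. The switch tables below are invented examples standing in for the reference data table; only the AND-combination of the two switches is taken from the description above.

```python
# Sketch of the two-switch tracking control: the second database is tracked
# only when the business-unit switch AND the per-application switch are both
# "on". Unit and application names are illustrative.

BUSINESS_UNIT_SWITCH = {"BU-CH": True, "BU-UK": False}
APPLICATION_SWITCH = {"app-payments": True, "app-archive": False}

def track_second_db(business_unit, application):
    return (BUSINESS_UNIT_SWITCH.get(business_unit, False)
            and APPLICATION_SWITCH.get(application, False))

print(track_second_db("BU-CH", "app-payments"))  # True  (both switches on)
print(track_second_db("BU-UK", "app-payments"))  # False (unit not migrated)
print(track_second_db("BU-CH", "app-archive"))   # False (app not tracked)
```

Because each business unit and each application can be switched independently, migration can proceed one unit or one application at a time.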
Here, "functional encapsulation" is understood to mean the transmission of all changes of an individual attribute to the first and/or second database. This may forward all changes to other software program components in a controlled manner and with lower transmission costs. These software program components then perform functions (modify, delete, insert) in the second database environment. The changed entry to the first database is generated by the application of the work unit through the respective functions from the first database to the second database. Optionally, the changed entry to the first database is generated by the application of the work unit through a respective message from the first database to the second database. In the case of the last-mentioned record-based synchronization or encapsulation, if a change of the first database occurs, the synchronization of the first to the second database takes place for all changed records (═ database entries). In the case of functional synchronization or encapsulation, if a change occurs to the first database, all changed records are not only synchronized from the first database to the second database, but also forward the original message sent to the transaction. The same applies to the synchronization back to the first database from the second database.
The approach according to the invention ensures that the duration of the end-of-day processing (or final processing at other times) does not change too much, so that the relevant processing can still be concluded within the provided time window. In the second database, the tracking of online changes according to the approach of the invention is concluded within a few seconds. Tens of minutes (20-40 minutes) are sufficient for tracking batch-process changes in the second database.
By means of the invention it is ensured that each change to the first database is detected by the encapsulation module and sent to the second database, wherein:
the changes cannot be corrupted during the transfer to the second database,
the changes are guaranteed to arrive at the second database,
the changes are applied in the second database in the correct sequence,
if processing terminates abnormally in the second database, a restart or error handling is possible; control per processing unit is possible; the consistency of the data is ensured, and
unforeseen inconsistencies between the two databases (e.g., due to application errors) can be corrected by reconciliation.
Especially with regard to searching for errors and understanding the processes, it is advantageous if a change record of the changes performed in the first database and/or the second database is kept, preferably in a suitable database or in a working database. These are typically changes to customer master data.
The essential reason for using functional encapsulation is that the number of changed records is unpredictable and, in the case of individual changes, leads to a large number of unavoidable change calls. Once a transaction has placed a relatively large number (on the order of approximately 100 or more) of change calls, the performance of the overall system deteriorates significantly. This means that the response time extends to a few seconds, and the transaction is then terminated due to a timeout. This timeout results from tracking the redundant data of the transaction when the structure of the first database can handle no more than 20-30 persistent messages per second.
Because of functional dependencies, a change of a particular attribute of the first database can trigger an unpredictable number of changes to other attributes of the first database.
According to the invention, at least one software program component may also be provided, by means of which, in the case of a transaction initiated from one application workstation of the first database, a transaction of the same type may be invoked on the second database and vice versa, in which case, from the perspective of the application workstation, the behavior of the transaction of the same type on the second database side is similar to the corresponding transaction on the first database side.
According to the invention, in combination with the coexistence of the first and second databases, the same type of transaction has the following advantages: for clients and dispersed applications, the migration of the (backend) database platform is transparent, i.e., invisible. This approach also allows testing of new components of the second database platform, for example by comparing the database contents of both sides. The inconsistency indicates an error on the second database side. Another advantage is that migration can be done stepwise (e.g., one branch after another).
The aim and purpose of porting transactions from the first database platform to the second database platform as so-called same-type transactions is that the functions, services and data present on the first database platform should be available as quickly as possible in the environment of the second database platform. According to the invention, the same source program is used (the so-called single-source concept). In this way, only one source code, namely that of the first database platform, needs to be maintained (and, if necessary, modified) during the migration phase. The interface to the application software programs does not change when a same-type transaction is activated in the environment of the second database platform. Thus, the applications are not affected by the migration and activation.
Furthermore, through the migration of the data and functionality of the first database to the second database platform, the replacement of the first database by multiple software program components is significantly simplified, since any technical issues of a cross-system replacement can be corrected.
A same-type transaction consists of one or more software program modules. A software application module is, for example, a Cobol program that contains the processing logic instructions and accesses the system via primitives. A primitive consists, for example, of a macro written in the Delta computer language. The macros are available in the second database environment (with the same interface as in the first database environment), but in the background access new Cobol modules. These Cobol modules use the structures of the second database to ensure processing in the new environment according to the old functionality.
Thus, a same-type transaction in the migrated second database environment is based on the same Cobol program code as the "original" transaction in the first database environment. In other words, a same-type transaction in the second database environment is an identical copy of the corresponding transaction in the first database environment, with the essential differences of the system environment being simulated on the second database side.
In conjunction with the above-described migration of application software programs and transaction programs written, for example, in the Cobol programming language, it is possible to continue maintenance work on the software in the first database environment and then transfer the updated code (even automatically) to the environment of the second database.
Since the interface of a same-type transaction in the second database environment corresponds exactly to that of the original transaction in the first database environment, it can be configured precisely whether and how the original transaction in the first database environment or the same-type transaction in the second database environment should be used. As long as the first database environment is the master, all changes to the data stock are performed via the original transactions in the first database environment. However, some read-only same-type transactions may optionally already be activated on the second database environment side. During this period, record-oriented and functional synchronization is performed between the second database environment and the first database environment. For functional synchronization, some modifying or writing same-type transactions may be used before the time at which the second database becomes the master. To this end, the same messages that have already been processed in the first database environment are transmitted. However, the inputs no longer need to be re-validated on the same-type transaction side.
Changes performed in real time (online) on the first database side already use the encapsulation module of the first database. The encapsulation module synchronizes all changed records from the first database to the second database (record synchronization). On the second database side, the records are sent to a master coexistence controller, which tracks them in the coexistence element programs and corresponding application elements (software components) in the environment of the second database platform. The encapsulation module is migrated once and then adapted to the environment of the second database. In this way, changes to the database contents can be sent via the master coexistence controller to the coexistence element programs and corresponding application elements (software components) in the second database platform environment.
Modifying same-type transactions use the same mechanism as record synchronization to write to the second database and to the corresponding application elements (software components) in the second database platform environment.
The second database can be defined as the master once all same-type transactions are available in the second database environment. From this point on, all real-time (but also batch) changes are made via same-type transactions, which trigger synchronization with the first database after a successful change in the second database. At this stage, this synchronization occurs exclusively functionally, i.e., all incoming messages or transactions are passed unchanged to the first database and tracked there. Once this phase is concluded, the same-type transactions can be replaced.
However, since in this way the same data and functionality are available on both the first and second database sides, same-type transactions can also be used for the functional synchronization of the first database to the second database. As mentioned above, all information can thus equally be used for the reverse synchronization from the second database to the first database, in order to keep the two systems synchronized.
In accordance with the present invention, in conjunction with the presence of the first and second databases, the same type of transaction has the following advantages: for clients and dispersed applications, the migration of the (backend) database platform is transparent, i.e., invisible. This approach also allows testing of new components of the second database platform, for example by comparing the database contents of both sides. The inconsistency indicates an error on the second database side. Another advantage is that migration can be done stepwise (e.g., one branch after another).
In summary, it must be stated that same-type transactions can be used to ensure the functional synchronization of the two databases. Same-type transactions can also be used, when the second database is the master, to keep the first database synchronized as well, without any real-time impact on the interfaces. Same-type transactions allow the construction of the individual software program components to be implemented incrementally; if some software program components are not yet available as masters in the environment of the second database, the same-type transactions can serve as a fallback.
The first database is the master database whenever a change occurs in the first database and only thereafter in the second database. During this time, the second database is managed as a slave to the first database.
The second database is the master database once changes occur first on the second database and only then (if needed) in the first database. During this time, the first database is managed (if needed) as a slave to the second database. In order to reach this stage, all same-type transactions must be present. Furthermore, the application software programs are no longer allowed write access to the first database in real-time or batch operation.
A software program component can be the master once all relevant changes in the second database environment are first performed in that software program component and only then tracked in the second database (and, if needed, in the first database). In this case, both the second database and the first database are managed as slaves. To achieve this state, all data of the second and first databases must be present in, and managed by, the software program components.
The maintenance of the first database can be ended only when the application software programs in the first database environment no longer require any data from the first database.
Two synchronization directions are distinguished, depending on whether a change starts from the first database environment or from the second database environment. The starting point of a change thus defines whether the first or the second database is the master database for a particular transaction and a particular processing unit or branch. During migration, for one transaction, the first database may be the master database for one particular processing unit while, at the same time, the second database is the master database for other processing units.
In the case of synchronization in the direction from the first to the second database, the synchronization is record-oriented or functional. Transactions are divided into three categories. This allows for an orderly differentiation of the application software programs to be ported.
The first type of transaction triggers the synchronization of record orientation (i.e., database entry orientation). In particular, these transactions must be used if only some entries in the first database are affected by such a change.
A second type of transaction triggers a function synchronization. In particular, if a relatively large number of entries in the first database are affected by such a change, these transactions must be used.
In the case of record-oriented synchronization, the encapsulation module transmits all entries changed by a transaction in the first database to the master coexistence controller. First, the master coexistence controller invokes the coexistence utility of the coexistence element of the second database environment to bring the changed entries of the first database into the second database environment. After the second database entries have been successfully changed, the master coexistence controller invokes the coexistence elements and/or coexistence utilities of the applications (e.g., partners), which contain the adaptation rules (mapping logic) from the first to the second database and/or to the applications in the second database environment.
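The two-phase ordering of this flow — the second database first, the receiving applications only afterwards — can be sketched as follows. Class and method names are hypothetical; only the invocation order is taken from the description above.

```python
# Hedged sketch of record-oriented synchronization: the master coexistence
# controller first applies every changed entry to the second database via its
# coexistence element, and only then calls the coexistence elements of the
# receiving applications (which hold the mapping logic).

class MasterCoexistenceController:
    def __init__(self):
        self.applied = []  # records the order of invocations, for illustration

    def db2_element(self, entry):
        self.applied.append(("DB2", entry["key"]))

    def partner_element(self, entry):
        self.applied.append(("PartnerA", entry["key"]))

    def process(self, changed_entries):
        for entry in changed_entries:
            self.db2_element(entry)      # the second database is updated first
        for entry in changed_entries:
            self.partner_element(entry)  # then the receiving applications

ctrl = MasterCoexistenceController()
ctrl.process([{"key": 1}, {"key": 2}])
print(ctrl.applied[0])   # ('DB2', 1)
print(ctrl.applied[-1])  # ('PartnerA', 2)
```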
In this case, no same-type transaction of the first database environment is required to bring the data successfully into the second database environment.
In the case of functional synchronization, instead of transmitting those entries of the first database that are changed by one or more transactions to the master coexistence controller via the encapsulation module in real time, the original input messages sent to the transactions of the first database are transmitted to the master coexistence controller via the encapsulation module in real time. Based on the message identifier, the master coexistence controller recognizes that the message is an input message for a same-type transaction rather than a record message, and forwards it directly to the same-type transaction that performs the corresponding first-database transaction. Since the encapsulation module of the first database has also been migrated, all changes to the second database are likewise made via the migrated same-type encapsulation module. The same-type encapsulation module sends the changes as record messages to the master coexistence controller, which — as in the case of record synchronization — calls the coexistence elements and/or coexistence utilities (partners) of the applications containing the adaptation rules (mapping logic) from the first to the second database and/or to the applications in the second database environment.
In this case, the same-type transaction is used to bring the data in the correct format (e.g., as slave records) into the second database and to trigger the synchronization with the application software programs. However, since the content has already been validated in the environment of the first database, no online validation is performed in the environment of the second database. Validation of content in the second database environment is activated only when the second database is the master database. This also makes it possible to perform the functional synchronization later in the reverse direction, from the second to the first database. Although changes continue to "flow down" in record-oriented form from the second database environment and/or from the second database to the application software programs, in this synchronization direction the synchronization from the second to the first database takes place exclusively functionally.
Since the transactions on both sides (first and second database platforms) are the same, all changes occur exclusively via the same-type encapsulation module in the first database environment. The encapsulation module uses the database macros to modify the second database synchronously. The encapsulation module then also sends the same records to the master coexistence controller (which, as in the case of record synchronization, passes them to the coexistence elements and/or coexistence utilities of the application software programs) so that they can be synchronized.
Unlike conventional approaches, the approach of the present invention advantageously provides a migration starting at the back-end. This has the advantage that on the front-end side, i.e., on the side of the application workstations, GUIs, user software, etc., no changes (or only minor changes) are needed, so that the migration has no impact on the users.
By means of the functional encapsulation according to the invention, the logic — including the new database structures and the data structures of the second database to be taken into account in subsequent processing — is implemented identically, or at least as similarly as possible, to the first database. According to the invention, same-type transactions are preferably used for this. The master coexistence controller receives the change messages online or as a batch file. Messages that result from functional encapsulation can be detected by a particular record type or message type. The master controller then invokes a root program and hands over the message. The root program invokes the corresponding same-type transaction. In cooperation with the migrated and adapted encapsulation program, the same-type transaction now creates old/new records (with old/new database entries and/or change tasks) as if they came from the first database, since the master controller normally receives such records from the first database. These records are then placed into an output wait queue, and the master controller processes them as if they came from the first database. A specific code (COX ORIGIN) is set in the header part only, so that it can be detected where a record came from. This is important for error analysis.
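The routing step described above — distinguishing functionally-encapsulated input messages from record messages by their type, and marking the resulting records with their origin — can be sketched as follows. The dictionary keys and the "DB1" origin label are invented; only the `COX ORIGIN` code and the type-based dispatch are taken from the text.

```python
# Hedged sketch of the master controller's dispatch: a functionally
# encapsulated input message is handed to the root program / same-type
# transaction, which produces old/new records marked with COX ORIGIN in the
# header so error analysis can see where each record came from.

def dispatch(message):
    if message["type"] == "functional":
        # Root program invokes the same-type transaction, which creates
        # old/new records for the output wait queue.
        return [{"origin": "COX ORIGIN", "data": message["input"]}]
    # Record messages pass through unchanged; origin is the first database.
    return [{"origin": "DB1", "data": message["record"]}]

out = dispatch({"type": "functional", "input": {"op": "mass-change"}})
print(out[0]["origin"])  # COX ORIGIN

rec = dispatch({"type": "record", "record": {"key": 1}})
print(rec[0]["origin"])  # DB1
```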
Drawings
Fig. 1 shows a schematic representation of a first and a second database in their respective environments, and a communication mechanism between the two databases.
Fig. 2 shows a conceptual, standardized model of a controller table indicating which application elements (software components) of the second database platform a change relates to.
Figs. 3 to 7 use flowcharts to explain the behavior when storing and inserting data, when modifying data, when changing a case, and when deleting a case.
Fig. 8 explains error correction of individual records based on a flowchart.
Fig. 9 explains error correction of a file based on a flowchart.
Detailed Description
In fig. 1, the database structure of the first database DB1 is shown on the left side, and the database structure of the second database DB2 is shown on the right side. On the workstations WS1..WSn, changes to the first database DB1 are initiated within the framework of work units UoW by the application software programs running there. These changes are forwarded to a so-called encapsulation module KM (via a company-wide or worldwide data network, not shown). The encapsulation module KM is set up and programmed to decompose the work units UoW transmitted to it into one or more messages m1..mn, to make the corresponding entries in the first database DB1, and to send the messages m1..mn to the second database DB2. Since the encapsulation module KM handles the accesses to the first database coming from the workstations WS1..WSn, it is preferably set up and programmed to test whether it is more efficient (with respect to transmission duration and transmission quality and/or processing cost in the context of the second database DB2) to send the original work units UoW with unchanged content to the second database DB2 (broken or divided into separate messages, if necessary), or to send the changed entries derived from applying the work units UoW to the first database DB1 (broken or divided into separate messages, if necessary). Depending on the result of this test, the corresponding content is then transmitted.
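The efficiency test described above can be sketched as follows. The cost model (bytes plus a per-message overhead) and the message-size limit are assumptions for illustration; the patent only states that transmission duration/quality and processing cost in the DB2 context decide the choice.

```python
# Illustrative sketch (assumed cost model): the encapsulation module KM
# chooses between forwarding the original work unit UoW and forwarding
# the derived change entries, then splits the winner into messages m1..mn.

def choose_payload(uow, derived_entries, max_message_size=1000):
    def cost(items):
        # crude proxy: total bytes plus a fixed per-message overhead
        size = sum(len(i) for i in items)
        n_messages = max(1, -(-size // max_message_size))  # ceiling division
        return size + 50 * n_messages

    payload = uow if cost(uow) <= cost(derived_entries) else derived_entries
    # break or divide the content into separate messages, if necessary
    joined = "".join(payload)
    return [joined[i:i + max_message_size] for i in range(0, len(joined), max_message_size)]

# A short work unit beats a large set of derived entries under this model.
msgs = choose_payload(["short-uow"], ["a" * 3000])
```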
So that the transmission of the messages m1..mn by the encapsulation module KM to the second database DB2 takes place virtually immediately after arrival and processing of the corresponding work unit UoW, the software module nrt Xfer (near-real-time transfer) is used for cross-platform message transmission. It serves the database synchronization by transmitting time-critical changes occurring in online processing to the second database DB2 in near real time, so that messages sent from the first database platform can also be processed on the second database platform.
In a manner similar to the above-described transfer of incoming online change tasks, there are also work units UoW that derive from batch tasks and are passed by the batch agent Batch to the encapsulation module KM.
In the same way as in the online case, the encapsulation module KM is set up and programmed to decompose the work units UoW transmitted to it by the batch agent Batch into one or more messages m1..mn, to make the corresponding entries in the first database DB1, and to send the messages m1..mn to the second database DB2. To this end, as the original work units UoW are handed over by the batch agent Batch for access to the first database, the encapsulation module KM likewise tests whether it is more efficient (with respect to transmission duration and transmission quality and/or processing cost in the context of the second database DB2) to send the original work units UoW with unchanged content to the second database DB2 (broken or divided into separate messages, if necessary), or to send the changed entries derived from applying the work units UoW to the first database DB1 (broken or divided into separate messages, if necessary). Depending on the result of this test, the corresponding content is then transmitted. In the batch case, the content is not sent directly to the second database DB2 but is written to a transfer database Q1, from which a cross-platform file transfer takes place. For this purpose, a monitor that accesses the transfer database Q1 and a file transfer program are used, which transfer the changes from batch processing, converted into messages, to the second database platform in a file-oriented manner.
On the side of the second database platform DB2, the change messages are obtained (online or as a batch file) by the master coexistence controller COEX. The master coexistence controller COEX comprises a plurality of program modules interacting with each other: an ONL-IN module, an ONL-OUT module, a BAT-OUT module, and a VERTEIL-REGELWERK (distribution rule set) module.
The ONL-IN module is invoked with a message by the online software module nrt Xfer from the first database platform and puts the handed-over message from the first database into the coexistence database COEX-DB. Since the data and item details of a transaction can arrive in any sequence, the messages are collected in the coexistence database COEX-DB until all messages of the transaction have been transmitted. To determine the completeness of a transaction's messages, the grouped messages are managed in a DB2 table per transaction; this table receives and maintains the number of messages transmitted so far from the first database as well as the total number of messages from the first database DB1.
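The completeness check described above can be sketched as follows. The data structure is an illustrative stand-in for the DB2 control table; class and key names are assumptions.

```python
# Illustrative sketch of the ONL-IN completeness check: messages of one
# transaction arrive in arbitrary order and are collected until the number
# received so far equals the announced total number of messages.

class TransactionCollector:
    def __init__(self):
        self.store = {}   # transaction key -> {"total": int, "messages": [...]}

    def receive(self, key, total, message):
        entry = self.store.setdefault(key, {"total": total, "messages": []})
        entry["messages"].append(message)
        # complete once the count transmitted so far reaches the total
        return len(entry["messages"]) == entry["total"]

c = TransactionCollector()
# transaction key assumed to be (branch, packet timestamp), as in the text
done1 = c.receive(("0221", "20010830120000"), 2, "part-b")
done2 = c.receive(("0221", "20010830120000"), 2, "part-a")
```

Only when `receive` reports completeness would ONL-IN initiate the asynchronous call to ONL-OUT.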
A second DB2 table, addressed by the master coexistence controller COEX, is used to store the messages from the first database for further processing.
Before the messages from the first database DB1 are temporarily stored, the VERTEIL-REGELWERK module is invoked with the transmitted message from the first database DB1 as a parameter. The VERTEIL-REGELWERK module, described in detail below, returns either an OK or an error condition. In the case of OK, the current row is first updated in the coexistence database COEX-DB with the supplied flags for the COEX software components. In the case of an error, the error condition is returned to the online software module nrt Xfer without further processing.
Once the message from the first database DB1 of the transaction has been completely transmitted to the second database platform, a call to the ONL-OUT module is initiated by the ONL-IN.
In this case, the call occurs as an asynchronous call with a new request being sent. At invocation, the key of the transaction from the first database is handed over. It comprises the "branch" and/or "packet timestamp" fields of the transaction from the first database.
The ONL-OUT module reads, in a program loop, the data in the technically correct sequence, i.e. per transaction from the first database DB1, together with the messages temporarily stored in the coexistence database (online), and transmits this data in turn. This is supported by the sequence number in the header part of each message. Messages split into two or more rows can thus be put back together after being read (online) from the coexistence database.
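The reassembly by sequence number can be sketched in a few lines. The row representation is an assumption for illustration.

```python
# Illustrative sketch of ONL-OUT reassembly: rows read from the
# coexistence database carry a sequence number from the message header,
# so a message split over several rows can be put back together in the
# technically correct order regardless of the order in which it arrived.

def reassemble(rows):
    """rows: list of (sequence_no, fragment); returns the ordered payload."""
    return "".join(frag for _, frag in sorted(rows, key=lambda r: r[0]))

payload = reassemble([(3, "C"), (1, "A"), (2, "B")])
```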
After successful processing of all messages of a transaction from the first database, eventually, the control message for the relevant transaction is marked as complete. In this manner, the data for the transaction is released for later logical reassembly.
The BAT-OUT module is a batch agent containing a read routine for sequentially reading the files provided by the batch agents in the environment of the first database platform, and it controls the work units UoW. After each read of a message (consisting of header part, database entry old, database entry new), the VERTEIL-REGELWERK module is invoked, with the message transmitted as a parameter. The module is not called for TERM records.
In order to minimize accesses and network load, messages and the database entries contained in them are, where possible, not written to the coexistence database (batch). Instead, the entire packet is read into the BAT-OUT module and held in program memory, provided the packet does not exceed a predetermined size. Only when a packet is too large is it written to the coexistence database (batch). The same processing as in ONL-OUT is then performed, and the corresponding coexistence application elements (software components) are supplied. Depending on where it is held, data is retrieved from program memory or from the coexistence database (batch). If a packet cannot be processed, it must be written to the coexistence database (batch).
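The buffering rule just described can be sketched as follows. The size limit and the storage interfaces are assumptions; the patent only says the packet is held in program memory unless it exceeds a predetermined size.

```python
# Illustrative sketch of the BAT-OUT buffering rule: a whole packet is
# kept in program memory unless it exceeds a predetermined size, in which
# case it is written to the coexistence database (batch) instead.

MAX_PACKET_BYTES = 4096   # assumed "predetermined size"

def stage_packet(packet, memory_buffer, coexistence_db):
    size = sum(len(m) for m in packet)
    if size <= MAX_PACKET_BYTES:
        memory_buffer.extend(packet)    # fast path: no DB access, no network load
        return "memory"
    coexistence_db.extend(packet)       # fallback: packet too large for memory
    return "db"

mem, db = [], []
where_small = stage_packet(["x" * 100], mem, db)
where_big = stage_packet(["x" * 5000], mem, db)
```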
The VERTEIL-REGELWERK module receives as input data the old message (state before the change) from the first database platform and the new message (state after the change) from the first database platform. Each "old" attribute is compared with "new" to determine whether the attribute has changed. If a change has occurred, the application elements (software components) affected by the change are determined via tables (see FIG. 2). For each software component, the message receives a flag identifying whether it is relevant to that component. Fig. 2 shows a conceptual, standardized model of the controller tables. These may be implemented in different ways depending on performance requirements.
The actual controller data can be parameterized efficiently with the following key tables:
REFERENCE_REC
Significance: in this key table, the following fields are saved per record type:
·REC_ID (PK)
·RECTYPE, record type, e.g. D201
·DB2_ID, identifier indicating whether a DB2 key must be determined
REFERENCE_SWCOMP
Significance: in this key table, the following fields are saved per COEX application element (software component, e.g. CCA):
·SWCOMP_ID (PK)
·SWCOMP, name of the software component, e.g. CCA
·ACTIVE, flag (value range Y/N), (de)activates the software component
REFERENCE_COLS
Significance: in this key table, the following fields are saved per record type:
·REC_ID, PK, corresponds to REFERENCE_REC
·COL_NO, PK, serial number
·COL_NAME, field name in the record type
To control the processing, the following tables are provided:
ACTIVE_NL
Significance: (de)activation of the data transfer to the software components per branch. This controls whether the data of a branch (regardless of the data type) is forwarded to a software component.
Fields:
·NL, PK, branch, e.g. 0221
·SWCOMP_ID, PK, corresponds to REFERENCE_SWCOMP
·ACTIVE, flag (value range Y/N), combined (de)activation of branch and SWCOMP_ID
DELIVERY
Significance: defines the conditions under which a record type is forwarded to a software component. The condition is defined per field, for example: if field 02, 04 or 05 is changed in record type 01 (= D201), the record must be forwarded to software component 01 (= CCA).
Fields:
·REC_ID, PK, corresponds to REFERENCE_REC
·SWCOMP_ID, PK, corresponds to REFERENCE_SWCOMP
·COLNO_CHG, PK, corresponds to REFERENCE_COLS
·DELIVERY, flag (value range Y/N), (de)activation of the REC_ID, SWCOMP_ID, COL_NO combination
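The rule lookup over these tables can be sketched as follows. The table contents mirror the example given above (fields 02/04/05 of record type 01 = D201 route to component 01 = CCA); the in-memory representation is an assumption, since the patent leaves the implementation open for performance reasons.

```python
# Illustrative sketch of the VERTEIL-REGELWERK lookup: compare each "old"
# attribute with "new", and flag the record for a software component only
# if a DELIVERY row activates that (record type, component, field)
# combination and the branch/component combination is active in ACTIVE_NL.

DELIVERY = {("01", "01", "02"), ("01", "01", "04"), ("01", "01", "05")}
ACTIVE_NL = {("0221", "01"): "Y"}

def components_for(rec_id, branch, old, new):
    changed = [col for col in new if old.get(col) != new.get(col)]
    flags = set()
    for col in changed:
        for r, comp, c in DELIVERY:
            if r == rec_id and c == col and ACTIVE_NL.get((branch, comp)) == "Y":
                flags.add(comp)
    return flags

# field 02 changed, field 03 unchanged -> component 01 (CCA) is flagged
flags = components_for("01", "0221", {"02": "a", "03": "x"}, {"02": "b", "03": "x"})
```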
In a preferred embodiment of the invention, the messages created by the encapsulation module of the first database have the following structure. The header saves, as attributes, fields that allow process control towards the first and second databases.
05 COEX-IDENT.                         * message identification
   10 COEX-MUTPRG PIC X(06).           * name of the change program
   10 COEX-AGENTC PIC X(02).           * agent code
   10 COEX-APCDE PIC X(02).            * application code
   10 COEX-NL PIC X(04).               * processing branch
   10 COEX-UFCC-E PIC X(03).           * program function code
   10 COEX-UPTYP PIC X(01).            * update type:
                                       *   S = store
                                       *   M = modify
                                       *   D = delete (erase)
   10 COEX-USERID PIC X(06).           * user ID
   10 COEX-PAKET-DATUM-ZEIT.           * packet timestamp
      15 COEX-PAKET-DATUM PIC 9(08).   * date of the packet (YYYYMMDD)
      15 COEX-PAKET-ZEIT PIC 9(12).    * time of the packet (HHMMSSuuuuuu)
   10 COEX-RECORD-DATUM-ZEIT.          * record timestamp
      15 COEX-RECORD-DATUM PIC 9(08).  * date of the change (YYYYMMDD)
      15 COEX-RECORD-ZEIT PIC 9(12).   * time of the change (HHMMSSuuuuuu)
   10 COEX-ID.                         * data identification
      15 COEX-KDID.
         20 COEX-NL-KD PIC X(04).      * branch
         20 COEX-KDST PIC X(08).       * customer code number
      15 COEX-OBJID PIC X(20).         * object identification
   10 COEX-RECTYP PIC X(04).           * record type
   10 COEX-REC-SEQUENZ PIC 9(08).      * record sequence number (within the packet)
   10 COEX-ORIGIN PIC X(01).           * origin of the record:
                                       *   0 = initial load
                                       *   1 = resynchronization
                                       *   2 = synchronization
                                       *   3 = readjustment
                                       *   4 = RIPEROS
   10 COEX-REQUEST-TYPE PIC X(01).     * type of processing:
                                       *   O = online processing
                                       *   B = batch processing
   10 COEX-RESYNC-ID PIC X(32).        * TAPCONLINEPACKAGE
   10 COEX-RESYNC-STATUS PIC X(02).    * DB1 return code of the resynchronization function
   10 COEX-RESERVED PIC X(06).         * reserve, so that the header keeps a length of 150 bytes
05 COEX-DATEN PIC X(10600).            * data of the first database (spaces)
In the field COEX-PAKET-ZEIT, the timestamp of the beginning of the transaction bracket is entered. In the field COEX-RECORD-ZEIT, the timestamp of the change is entered; uniqueness per record type and per record must be ensured. The field COEX-OBJID is initialized with spaces. In the field COEX-REC-SEQUENZ, the record sequence number within the packet is entered (e.g. TERM = the highest sequence number of each packet). In the field COEX-REQUEST-TYPE, "B" is entered for output through batch processing or "O" for online processing.
At initial load, the field COEX-RESYNC-ID is filled with spaces; at resynchronization it is never changed, and at readjustment it is filled with the error code. The field COEX-USERID contains the user ID that triggered the change; even for batch transfers, it must be filled again by the encapsulation module. The field COEX-PAKET-ZEIT contains the date and time (YYYYMMDDhhmmssuuuuuu) at which the packet, or transaction bracket, started; all records of the same transaction bracket have the same timestamp. The field COEX-RECORD-ZEIT contains the date and time of the change (YYYYMMDDhhmmssuuuuuu); uniqueness per record type and per record must be ensured. This timestamp serves as the detection time for the bitemporal data retention, i.e. its value is entered in the BiTemp field BTMP_UOW_START. The field COEX-RECTYP contains "TERM" for the last record created by the encapsulation module; this marks the end of the transaction bracket. The field COEX-REC-SEQUENZ contains the record sequence number within the packet (e.g. TERM = the highest sequence number of each packet); using the record sequence numbers, the sequence of the changes within the transaction bracket can be reconstructed. Depending on the origin {0, 1, ..., 4} of the record, the field COEX-ORIGIN indicates initial load, resynchronization, synchronization, readjustment, or application software from the first database. This is required for the coexistence services, the application software and the error handling.
Depending on the type of processing in the second database environment, the field COEX-REQUEST-TYPE contains {O, B}: O for online processing and B for batch processing. In this way, services in the second database environment relating to (batch) processing can be optimized. In the case of resynchronization, the field COEX-RESYNC-ID contains the error ID and identifies the error-table entry referenced by the resynchronization; in this manner, the status of the entry in the error table can be updated when a resynchronization is received. The field COEX-BTX-ID marks the resynchronization of the initial load and identifies the table entry referenced by the resynchronization; here too, the status of the entry in the error table can be updated when a resynchronization is received. The encapsulation module fills the COEX-PAKET-ZEIT, COEX-RECORD-ZEIT and COEX-REC-SEQUENZ fields, which map the transaction identity from the first database.
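The way the encapsulation module fills these header fields can be sketched as follows. The field names follow the copybook above; the dictionary layout and example values are illustrative assumptions, not the COBOL record itself.

```python
# Illustrative sketch of header construction: COEX-PAKET-DATUM-ZEIT is the
# start of the transaction bracket (shared by all its records),
# COEX-RECORD-DATUM-ZEIT the individual change time (unique per record),
# and COEX-REC-SEQUENZ the order within the packet; the TERM record
# carries the highest sequence number and closes the bracket.

def make_header(paket_zeit, record_zeit, seq, rectyp, origin="2", request_type="O"):
    return {
        "COEX-PAKET-DATUM-ZEIT": paket_zeit,    # same for the whole transaction bracket
        "COEX-RECORD-DATUM-ZEIT": record_zeit,  # unique per record type and record
        "COEX-REC-SEQUENZ": seq,                # TERM = highest number of the packet
        "COEX-RECTYP": rectyp,
        "COEX-ORIGIN": origin,                  # 2 = synchronization
        "COEX-REQUEST-TYPE": request_type,      # O = online, B = batch
    }

h1 = make_header("20010830120000000001", "20010830120000000001", 1, "D201")
term = make_header("20010830120000000001", "20010830120000000002", 2, "TERM")
```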
For the data of the first database, old and new, the 10600 bytes designated "spaces" in the header section are available. The physical boundary between record old and record new is movable, depending on which structure is used. In each case, the length is not fixed but is specified. As an example, the record or copy of the CIF master record D201 is listed below. The copy corresponds to the data description of a database record of the first database.
*************************************************************************
*                                                                       *
* Record: D201SSP   NON-DMS   COB85   Length: 1644 bytes   REL: 0.1     *
*                                                                       *
* Generated: 14.08.2001        Last changed: 30.08.2001                 *
* Description: migration interface                                      *
*                                                                       *
*************************************************************************
05 D201-FILLER-0-SSP PIC X(12).
05 D201-DATA-SSP.
10D 201-DATMUT-SSP PIC 9(08). times.customer change date
10D 201-hatidd-SSP PIC X (36). customer has an indicator
10D 201-HATIKDR-SSP redefines D201-HATIKD-SSP PICX (01)
Appear 36 times
Customer has an indicator
10D 201-STATREC-SSP PIC X (01). times.customer status
10D 201-flag kd-SSP PIC X (72.) customer review
10D 201-FLAGKDR-SSP redefines D201-FLAGKD-SSP PICX (01)
And occurs 72 times.
Customer review
10D 201-flag kd2-SSP PIC X (72.) customer reviews
10D 201-FLAGKD2R-SSP redefines D201-FLAGKD2-SSP PICX (01)
And occurs 72 times.
Customer review
10D 201-flag kd3-SSP PIC X (72.) customer reviews
10D 201-FLAGKD3R-SSP redefines D201-FLAGKD3-SSP PICX (01)
And occurs 72 times.
Customer review
10D 201-flag kd4-SSP PIC X (72.) customer reviews
10D 201-FLAGKD4R-SSP redefines D201-FLAGKD4-SSP PICX (01)
And occurs 72 times.
Customer review
10D 201-flag kd9-SSP PIC X (72.) customer reviews
10D 201-FLAGKD9R-SSP redefines D201-FLAGKD9-SSP PICX (01)
And occurs 72 times.
Customer review
10D 201-NLFLAG-ssp
15D 201-NLFLAGKD-SSP PICX (01) appeared 18 times.
Branch application indicator
10D 201-ADID-ssp.
15D 201-KDID-ssp. customer ID;
20D 201-NL-SSP PIC X (04). times.. branch
20D 201-KDST-SSP PIC X (08). times.customer code number
15D 201-ADRLNR-KD-SSP PIC 9(04) · customer address serial number
10D 201-AGENTC-SSP PIC X (02)
10D 201-C1D201-CC-ssp the following attributes: technical group of D201
15D 201-B1D201-CC-SSP. D201-DOMZIL, D201-NAT technology group
20D 201-DOMZIL-SSP PIC X (05). times.. dwelling
20D 201-NAT-SSP PIC X (03). times.nationality
20D 201-AWIRTCf-SSP PIC 9 (01.) The. cit
Technical group of 15D 201-B3D201-CC-ssp. D201-BRANC, D201-BRA
Technical group of 20D 201-BRANC-ssp. x D201-BRANC1 and D201
25D 201-BRANC1-SSP PIC X (01). times. UBS department code
25D 201-BRANC2-SSP PIC X (03). times. UBS department code
20D 201-BRANCHE-SSP PIC X (05). times.NACE code (department code)
20 D201-FILLER-1-SSP PIC X(03).
Technique group of 15D 201-B2D201-CC-SSP
20D 201-SPRACH-SSP PIC 9(01). multidot.language code correspondence
Technology group of 10D 201-C2D201-CC-SSP
15D 201-U1D311-CC-ssp. a subset of D201-C2D201-CC with various address attributes
20D 201-ADRLNR-SSP PIC 9 (04.) address sequence number
20D 201-VERSART-SSP PIC 9(01). multidot.
20D 201-versefc-SSP PIC 9(01). times.
20D 201-LEITWG-SSP
25D 201-BETREU-SSP PIC X (08). times.route responsible person
25D 201-DATLWGAB-SSP PIC 9(08). multidot.
25D 201-DATLWGBI-SSP PIC 9(08). times.
20D 201-address-ssp; D201-AD4M24 higher level group
20D 201-AD4M24-SSP PIC X (24) 4 times X4X 24 form the address
20D 201-AD2M24-SSP PIC 9(01) appeared 2 times
20D 201-NAMEI2-SSP PIC 9(05) appeared 2 times
20D 201-VORNAMI2-SSP PIC 9(05) appeared 2 times
20D 201-ORTI2-SSP PIC 9(05) 2 occurrences by site
20D 201-VERSRT-SSP
25D 201-LANDC-SSP
30D 201-LANDC1-SSP PIC X (03). times.
30D 201-LANDC2-SSP PIC X (02). times.
25D 201-TARIFC-SSP PIC X (01). times.tariff code
25D 201-PLZ-SSP PIC X (10)
25D 201-PLZ-PF-SSP PIC X (10). times.. post code post office mailbox address
15D 201-U2D201-CC-SSP. D201-KUART and D201-D
20D 201-kurt-ssp
25D 201-KDGR-SSP PIC X (01). times.customer population
25D 201-rekrart-SSP PIC X (02). times.local customer type
20D 201-DATGR-SSP PIC 9 (08.) date of birth or establishment
10D 201-bectru-B1-SSP PIC X (08) · customer principal (numerals 1-4 ═ c ·
Organization unit)
10D 201-BETREU-B2-SSP PIC X (08). times.expert person in charge
10D 201-PERS-SSP PIC X (02)
10D 201-BCNR-SSP PIC X (06)
10D 201-DATGAB-SSP PIC 9 (08.) customers from a certain date
10D 201-DATGBI-SSP PIC 9(08). times.customer deactivation date
10D 201-DATKON-SSP PIC 9(08). times.dead or banked date
10D 201-DATUM-MIG-SSP PIC 9(08). times.migration date merge SBC- > UBS
10D 201-INTCODE-ssp
15D 201-IGC-SSP OCCURS 10 times
20D 201-IGI-SSP PIC X (02). times.times.interesting fields-identification
20D 201-IGN-SSP PIC X (02). times.content of field of interest
Application indicator for 10D 201-FLAGFAP-SSP PIC X (72.) external application
Redefining D201-FLAGFAPR-SSP D201-FLAGFAP-SSP PICX (01)
Now 72 times.
Application indicator of external application
10D 201-VIANZ-SSP PIC 9 (05.) A. the number of dispatched instructions
10D 201-BOKUOC-SSP PIC 9 (01.) x exchange customer conditions (BOKUKO)
Appear
10D 201-BOKUKO-SSP exhibits 0 to 1 in accordance with D201-BOKUOC-SSP
Exchanging customer specific conditions;
15D 201-KUKO-SSP PIC 9(01). multidot.
15D 201-STEKA-SSP PIC 9 (01.) state zip code
15D 201-BROKCA-SSP PIC 9(03) V9(04) basis calculated in%
15D 201-DEPAUT-SSP PIC 9 (01.) safe account instructions (automatic)
15D 201-GENLI-SSP PIC 9(01). times.
15D 201-dpshelle-SSP PIC X (04). times.secure account location
15D 201-ABWKU-SSP PIC 9 (01.) A specific treatment conditions
15D 201-SEGA-SSP PIC 9 (01.) customers connected to SEGA
15D 201-kutpps-SSP PIC 9(02) exchange-related customer type definitions
15D 201-STATI-SSP PIC 9(01)
15D 201-COUKON-SSP PIC 9 (01.) A. Back-off convention
15D 201-STEAD-SSP PIC 9 (01.) recipient zip code
15D 201-INTKTO-SSP PIC 9 (01.) internal account
15D 201-ABSCHB-SSP PIC 9(01). times.
Code of
Symbol of 15D 201-TRAX-SYM-SSP OCCURS 2 times order transmission
20 D201-TRAX1-SSP PIC X(05). ***---no dsc---
20 D201-TRAX2-SSP PIC X(03). ***---no dsc---
15D 201-CEDEL-SSP PIC X (01). times.Cedel reference code
15 D201-FILLER-2-SSP PIC X(03).
15D 201-TITELTYP-SSP PICX (02) appeared 9 times
15D 201-soffspez-SSP PIC X (02). times.sofvex specific account
15D 201-LFZHCH-SEG-ssp
20D 201-LFZH-CSA-SSP PIC X (08). times.
20D 201-LFZH-CSO-SSP PIC X (08). times.
15D 201-LFZHCH-BC-SSP. transfer to Switzerland a header that does not support SEGA
20D 201-LFZH-CBA-SSP PIC X (08). times.A Swiss title is delivered without support for SEGA
20D 201-LFZH-CBO-SSP PIC X (08). times.A Swiss title is delivered without support for SEGA
15D 201-LFZHUEB-SSP appeared 7 times · transferred to the country and shared
20D 201-LFZHHLAND-SSP PIC X (03). times.
20D 201-LFZH-AKT-SSP PIC X (08). times.. transferred to and shared by countries
20D 201-LFZH-OBL-SSP PIC X (08). times.
15D 201-CALAND-SSP appeared 9 times for national and safety type CA calculations
20D 201-CA-LAN-SSP PIC X (03). times.. for national and security type CA calculations
20D 201-CAVORCD-SSP PIC X (01). times.. for national and security type CA calculations
20D 201-CABRA-SSP PIC 9(03) V9(04) for national and security type CA
Computing
10D 201-U3D201-CC-ssp
15D 201-kontrinar-SSP PIC X (06)
10D 201-segarn-SSP PIC X (06)
Technical groups of 10D 201-U4D201-CC-ssp. D201-ZUGRIFFB and D20
15D 201-ZUGRIFFB-SSP PIC X (02). times.object with restricted access
15D 201-ZUGRIFFB-ALT-SSP PIC X (02) from the last employee of the previous employee
'ZUGRIFFB value'
10D 201-KDGR-DH-SSP PIC X (01). times.
Customer group
10D 201-kutpys-EM-SSP PIC 9 (02.) for issued customer types
10D 201-FLAGMKG-SSP PIC X (36.) Pushing entire bank selectors to market
10D 201-FLAGMKGR-SSP redefines D201-FLAGMKG-SSP PICX (01)
Appear 36 times
Marketing the whole bank of selectors
10D 201-FLAGMKN-SSP PIC X (18.) market the selected individuals of the branch
10D 201-FLAGMKNR-SSP redefines D201-FLAGMKN-SSP PICX (01)
Appear 18 times
Marketing the selected branch to market
10 D201-GRUPPANL-KD-SSP PIC X(02).
10 D201-FILLER-3-SSP PIC X(01).
10 D201-M2000-SSP.
15D 201-BETREU-1-SSP PIC X (08). about. EBS customer conclusion (relationship)
15D 201-TELNO-1-SSP PIC X (15). times.
15D 201-BETREU-KD-SSP PIC X (08). times.Credit officer
Account identification of 15D 201-TRXKT-a-SSP PIC X (15)
15D 201-KTONR-TRX-SSR redefines D201-TRXKT-A-SSP
(privilege)
Owner of 20D 201-KTOST-TRX-SSP PIC X (08). times.
Account augmentation of 20D 201-KTOZU-TRX-SSP PIC X (02). times.transaction account
20D 201-KTOLNR-TRX-SSP PIC 9(04) account serial number of transaction account
20 D201-FILLER-4-SSP PIC X(01).
Account identification of 15D 201-TRXKT-UL-SSP PIC X (15)
15D 201-KTONR-UL-SSP REDEFINES D201-TRXKT-UL-SSP
Account number (Enterprise)
Owner of 20D 201-KTOST-UL-SSP PIC X (08). times.transaction account
Account augmentation of 20D 201-KTOZU-UL-SSP PIC X (02). times.transaction account
Account serial number of 20D 201-KTOLNR-UL-SSP PIC 9(04) · transaction account
20 D201-FILLER-5-SSP PIC X(01).
15 D201-FILLER-6-SSP PIC X(03).
15D 201-KDSEGM-1-SSP PIC X (03). times.customer segment
10D 201-GRP-ZUG-SSP PIC X (08). times.group membership code
10 D201-RSTUFE-SSP PIC X(05).
10D 201-RSTUFE-RIS-SSP redefines D201-RSTUFE-SSP;
15D 201-RSTUFE-K-SSP PIC X (03). times.
15D 201-RSTUFE-R1-SSP PIC X (02)
10D 201-SEX-SSP PIC X (01). times.sex code
10D 201-RUECKST-ART-SSP PIC X (01). times.A/B retention type
10D 201-RUECKBET-A-SSP PIC S9(17) results in isolated symbols
Retention amount A
10D 201-CRRI-SSP PIC 9(03). CRRI (credit risk liability indicator)
10D 201-TARIFC-KD-SSP PIC X (01). times.customer desired tariff code
10D 201-RKAT-SSP PIC X (02)
10 D201-FILLER-7-SSP PIC X(01).
10D 201-TELNO-P-SSP PIC X (15) private telephone
10D 201-TELNO-G-SSP PIC X (15). times.. service telephone
10D 201-KRATING-SSP PIC 9(05) V9(02). Suitable values, Swiss region
10D 201-KUSEGM-RAT-SSP PIC X (02) customer segment rate values
10D 201-DATUM-TEL-SSP PIC 9 (8.) date of last telephone bank use
10D 201-ORGANSCH-NR-SSP PIC X (04.) group
10D 201-SALDGSF-DUR-SSP PIC S9(15) V9(02) caused 2 occurrences of the isolated symbol
Assets at the last trading date of a month
10D 201-STATUS-KC-SSP PIC X (01). Key-Club subscriber STATUS
10D 201-EROEFDAT-KC-SSP PIC 9(08) · Key-Club open date
10D 201-DELDAT-KC-SSP PIC 9(08) · Key-Club closure date
10D 201-STATUS-KS-SSP PIC X (01). times.keyhop subscriber STATUS
10D 201-EROEFDAT-KS-SSP PIC 9(08). multidot.keyshop custom open day
10D 201-DELDAT-KS-SSP PIC 9 (08.) Keyshop custom closure date
10D 201-DOMZIL-BO-SSP PIC X (05.) dwelling of beneficiary
10D 201-DATSTUD-SSP PIC 9(08). times.end of learning
10D 201-BETREU-ANR-SSP PIC X (08). times.. internal (investment portfolio manager)
10D 201-GREG-SSP PIC X (02). times.. country, region or large region code
10D 201-LANDC-RSK-SSP PIC X (03). multidot.residence risk
10D 201-NAT-BO-SSP PIC X (03). times.
10D 201-GEPA-SSP PIC 9(01) · private banking code
10D 201-JUZU-SSP PIC X (02). times.times.French (additional identifier)
10D 201-TOGE-SSP PIC X (04). times.sub-company code
10D 201-KUKO-ART-SSP PIC 9(02) subscriber contact type
10D 201-DATUM-KDK-SSP PIC 9(08). times the date the customer contacted
10D 201-KMU-MA-SSP PIC X (02) number of employees of SME
10D 201-RES-3-SSP PIC X (06)
10D 201-VERMGNV-GES-SSP PIC S9(15) V9(02) results in isolated symbols
Assets of multiple households, customers on the last trading day of a month
10D 201-VERMGNL-GES-SSP PIC S9(15) V9(02) results in isolated symbols
Multiple owners, customers, final trade destination property in a month
10D 201-DATUM-HR-SSP PIC 9(08). times.date of commercial enrollment entry
10D 201-DATUM-CAP-SSP PIC 9(08). times.
10D 201-ADID-KC-ssp
15D 201-KDID-KC-ssp
Branch of customer ID of 20D 201-NL-KC-SSP PIC X (04). times.third party address
20D 201-KDST-KC-SSP PIC X (08). times.third party address ID customer ID
Customer owner
15D 201-ADRLNR-KC-SSP PIC 9 (04.) third party address ID
Address serial number of
10D 201-DATUM-MM-SSP PIC 9(08). multidata.multimat's last usage date
10D 201-DATUM-TB-SSP PIC 9(08). times.last use date of telephone bank
Cost category of 10D 201-KREDIT-AWK-SSP PIC X (02). times.Credit Process
10D 201-BETREU-STV-SSP PIC X (08). times.. replace responsible person
10D 201-DATUM-AUS-SSP PIC 9(08). times.employee retirement date
10D 201-PLANING-FIN-SSP PIC X (02)
10D 201-RES-4-SSP PIC X (02). times.. reserved field
10D 201-RES-5-SSP PIC 9(08) · reserved field
(vi) record (D201) end of
In the COBOL program, the interface is used twice, once as 'alt' (old) and once as 'neu' (new):
*PARENT(Root):InputRecord
01 SSP-COEX-REQUEST-BLOCK.
*Header
COPY AHVCHEAD.
*data
02 COEX-DAT-D201.
*------------------------------------------------------
* COEX-RECTYP = 'D201'
*------------------------------------------------------
03 D201-COEX-ALT.
COPY AHVCD201.
03 D201-COEX-NEU.
COPY AHVCD201.
for database changes (write, overwrite, erase), traditionally the following DB primitives are used:
.ADD DBWRITE,RECORD
.ADD DBREWR,RECORD
.ADD DBERASE,RECORD
The primitives consist of macros and Cobol modules. The macro makes the same interface available for both the first and the second database, but accesses the new Cobol module in the background. The Cobol module uses the infrastructure components of the second database to provide processing in the new environment (of the second database) according to the old functionality (i.e., as in the first database platform environment).
The encapsulation module serves to encapsulate all software programs that access the first database and have a changing effect on the (sub-)databases of the first database via the DBWRITE, DBREWR and DBERASE primitives.
According to the invention, a generic module is invoked upon a change of the first database or of one of its (sub-)databases. It performs a plausibility check and calls sub-modules (DBWRITE module, DBREWRITE module and DBERASE module: change validation modules) instead of the DB primitives described above. A parameter field describes which type of change is involved. The generic module contains the corresponding DB primitives and is responsible for tracking the change in the second database. To ensure that changes from multiple programs are not mixed, a packet is formed for each logical process. A logical process will typically correspond to a unit of work. This is illustrated by the following example of a module called CI0010:
module CI0010
Parameter(s)
·T001ACA
·P005PPVC
·CI0010-RECORD-ALT
·CI0010-RECORD-NEU
The P005PPVC contains, among other fields:
p005PPVC-DB1-UPDATE tracks the first database (Y/N)
P005PPVC-SSP-UPDATE traces the second database (Y/N)
P005PPVC-MUTPRG program or transaction name
P005PPVC-NL processing Branch
P005PPVC-NL-S Branch of the principal (Online)
P005PPVC-TZE terminal center Unit (Online)
P005PPVC-TRID terminal identification (Online)
P005PPVC-UFCC-E program function code (online)
P005PPVC-UPTYP DB update type
D = delete (erase)
M = modify (overwrite)
S = store (write)
P005PPVC-USERID principal's user ID (Online)
P005PPVC-SACHBKZ person of responsibility short code (on-line)
P005PPVC-KDID customer ID
P005PPVC-OBJID object ID/Address Serial number
P005PPVC-RECTYP 4-character record type (e.g. K001)
P005PPVC-FUNKTION calling function
I = initialize unit of work
P = process
T = terminate unit of work
A = IPT (if only one record per unit)
P005PPVC-TRANSFER-KEY logical work Unit Key
P005PPVC-STATUS Return State (corresponding to T001-STATUS)
Invocation of CI0010
Calling "CI 0010" using T001ACA
P005PPVC
CI0010-RECORD-ALT
CI0010-RECORD-NEU
According to the invention, each logical unit of work contains the following module calls:
one call with the "initialize" function (opens the packet for the second database)
N-1 calls with the "process" function (inserts the writes, rewrites and erases into the packet)
one call with the "terminate" function (closes the packet for the second database)
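The call pattern of a logical unit of work described above can be illustrated with a minimal Python sketch. The class name `CoexPacket` and the signature of `ci0010` are illustrative assumptions; only the I/P/T function codes and the grouping behavior come from the text.

```python
# Sketch of the unit-of-work call pattern: every logical unit of work
# brackets its N-1 "process" calls with one "initialize" and one
# "terminate" call, so that changes from different programs are never
# mixed within a packet for the second database.

class CoexPacket:
    """Collects the DB changes of one logical unit of work."""
    def __init__(self):
        self.open = False
        self.changes = []

def ci0010(packet, function, record=None, update_type=None):
    """Generic change-tracking module: I = initialize, P = process, T = terminate."""
    if function == "I":
        packet.open = True                  # open the packet
    elif function == "P":
        assert packet.open, "unit of work not initialized"
        packet.changes.append((update_type, record))  # write/rewrite/erase
    elif function == "T":
        packet.open = False                 # close the packet
    return packet

pkt = CoexPacket()
ci0010(pkt, "I")                            # one "initialize" call
ci0010(pkt, "P", {"id": 1}, "S")            # storage (write)
ci0010(pkt, "P", {"id": 1, "x": 2}, "M")    # modification (rewrite)
ci0010(pkt, "T")                            # one "terminate" call
```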
DB changes that occur via batch processing are not transferred directly (online) to the second database, but are first stored in the transfer database Q1. This database is opened and closed by the encapsulation module.
The contents of the transfer database Q1 are merged into a file under the control of the monitor and sent to the second database platform by file transfer.
In the following, the flow in the database component in the second database platform environment is explained as an example. The coexistence element may be used for online synchronization, batch synchronization, and initial loading of the second database.
The sequence problem (messages catch up with each other in online synchronization, or differences between online and batch synchronization) can be handled as follows:
By reading the data of the second database before the change. To this end, in the applications and (sub-)databases of the second database platform, the data is read before the change and the relevant fields are compared with those in the message. The fields to be changed must have the same state in the second database as in the "old" part of the message.
Optionally, the timestamp of the first database may be compared with a timestamp in the second database. The change timestamp of the first database is stored with each record in the second database, and the timestamps are compared before the change: the change timestamp of the first database stored in the second database must be older than the new timestamp of the first database carried in the message.
Finally, in another alternative, the data may be kept in parallel time versions in the second database DB2 (dual time). In this case, each record can simply be inserted; the time series in the second database DB2 are managed based on the change timestamp of the first database, which eliminates any sequence problems. The processing is controlled via a code table. For the application data programs of the second database, this control must be set to "off".
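The timestamp variant can be sketched as follows. The field names (`db1_change_ts`, `data`) and the dict-based records are illustrative assumptions; the rule itself, that the stored DB1 change timestamp must be older than the timestamp carried in the message, comes from the text.

```python
# Minimal sketch of the timestamp comparison: the DB1 change timestamp is
# stored alongside each record in the second database, and an incoming
# message is applied only if its timestamp is newer than the stored one.

def apply_if_newer(db2_record, message):
    """Apply the message and return True only if its DB1 timestamp is newer."""
    stored = db2_record.get("db1_change_ts", "")
    incoming = message["db1_change_ts"]      # format YYYYMMDDhhmmssuuuuuu
    if incoming > stored:                    # lexicographic order works here
        db2_record.update(message["data"])
        db2_record["db1_change_ts"] = incoming
        return True
    return False                             # out-of-order message: skip

rec = {"name": "old", "db1_change_ts": "20050101120000000000"}
msg = {"db1_change_ts": "20050101120000000001", "data": {"name": "new"}}
assert apply_if_newer(rec, msg) is True
assert apply_if_newer(rec, {"db1_change_ts": "20050101115959000000",
                            "data": {"name": "stale"}}) is False
```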
The behavior in the case of storing and inserting data, the behavior in the case of modifying data, the behavior in the case of changing a case, and the behavior in the case of deleting a case are explained based on the flowcharts of fig. 3 to 7.
In the first database platform DB1, entries (master data, individuals, etc.) are uniquely identified by "customer numbers"; a customer having a plurality of customer numbers is consequently managed like a plurality of different customers. In addition, objects (accounts, securities, custody accounts) are defined and identified by similarly constructed account numbers, custody account numbers, securities numbers, etc. These objects are always assigned to a customer.
In contrast, in the second database platform DB2, all entries, customers and objects are uniformly and uniquely identified by the "DB 2 identifier". These "DB 2 identifiers" are completely independent of the "customer number" of the first database platform DB 1.
A stable interpretation between the numbers of the first database and the "DB2 identifiers" is provided throughout the coexistence phase of the two database platforms. For this purpose, "interpretation tables" managed by the coexistence controller are used.
The relationship DB1 customer number < - > "DB 2 identifier" (customer) is implemented by a specific software program component "participant directory" (see fig. 1). The relationship DB1 object number < - > "DB 2 identifier" (object) is implemented in the software program component "contract directory" (see fig. 1).
These relationships are established with first production data received from a first database to a second database and are expanded with each data reception and/or data tracking.
From the moment of reception of the first production data, these relationships are no longer changed; they are simply "extended" or supplemented.
The loss of one of these relationships necessitates the restoration of the corresponding directory.
To interpret a DB1 number as the associated "DB2 identifier", the procedure follows this algorithm:
For the DB1 number, does the corresponding "DB2 identifier" already exist in the software program component "participant directory" or "contract directory"?
If "yes", the found "DB2 identifier" is used.
If "no", a new, unique "DB2 identifier" is generated and entered, together with the DB1 number, into the relation of the software program component "participant directory" or "contract directory".
For a newly opened "DB2 identifier", the absolutely necessary accompanying attributes are entered into the second database platform; the newly opened "DB2 identifier" may then be used.
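The interpretation algorithm above amounts to a lookup-or-create on the "participant directory" / "contract directory". In this sketch the directory is modeled as a dict and the identifier generator is an assumption; the source does not specify the format of a "DB2 identifier".

```python
# Lookup-or-create sketch of the DB1-number -> DB2-identifier interpretation.
import itertools

_id_counter = itertools.count(1)             # illustrative id generator

def db2_identifier(directory, db1_number):
    """Return the DB2 identifier for a DB1 number, creating it if absent."""
    if db1_number in directory:              # "yes": use the found identifier
        return directory[db1_number]
    new_id = f"DB2-{next(_id_counter):08d}"  # "no": generate a new, unique one
    directory[db1_number] = new_id           # enter it into the directory
    # ...here the mandatory accompanying attributes would be written to DB2...
    return new_id

participants = {}
a = db2_identifier(participants, "KD-4711")
b = db2_identifier(participants, "KD-4711")  # second lookup finds the entry
assert a == b
```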
The algorithm is invoked and processed anywhere in the second database platform environment, where the corresponding DB2 identifier of DB1 number must be determined. This includes, inter alia, the above described migration access, "same type" transactions, application software programs CCA, SPK, ALP, BD/BTX, DB2 (see fig. 1), all user-directed services operating on the second database side for the main data.
For the forward interpretation algorithm, one variant for batch operation and one variant for online operation are preferably provided. Both embodiments are designed for multiple parallel use.
For securing coexisting flows and transactions (e.g. "same-type transactions"), an interpretation from the "DB2 identifier" back to the associated DB1 number is also required. For this purpose, too, one variant for batch operation and one variant for online operation are preferably provided. Both embodiments are likewise designed for multiple parallel use, and as a result of this inverse interpretation, the most important attributes of the customer or object are preferably also output.
The change messages distributed by the ONL OUT and BAT OUT modules in the coexistence controller (see fig. 1) are passed on from the first database DB1 to the coexistence applications CCA, SPK, ALP, BD/BTX, DB2 (see fig. 1) of the second database platform. The change messages are transmitted to those application software programs CCA, SPK, ALP, BD/BTX that hold their own data stores (databases), and to the second database DB2. In this example, these are the databases of the participant, contract and product directories, the Core Cash Accounts (CCA), and other application software programs. In a similar manner to the coexistence controller, each of the individual application software programs to which the change messages are transmitted has an input message buffer ENP, in which groups of associated messages can be identified. The associated messages are collected in the coexistence controller and placed together, as a whole set, in the input message buffer ENP of the affected application software programs. The logic for distribution to the application software programs follows these principles:
Only entire (i.e. complete) change messages are placed in the input message buffer ENP of the affected application software programs; individual attributes are not sent in isolation.
In the case of an association record group, only the entire combined message is sent.
The application software program only receives messages in its input message buffer ENP if it is "affected" by a change or message.
For each incoming change or message, the changed attributes are determined based on the "old"/"new" records. This serves as an input parameter for determining, from the table "attribute-influence-application-software-program" described in detail below, to which application software programs (in addition to the second database DB2) the change/message is to be sent. This does not apply to "insert" and "delete" messages. Furthermore, the table "record-type-distribution", also described in detail below, determines which application software programs are "affected" by a message/change. The coexistence controller controls the distribution of messages/changes accordingly.
The "record-type-distribution" table is a manually maintained static table. The ONL OUT and BAT OUT modules read the table for each application program, but never write the table.
The table has two dimensions: component and record type.
For each component (application software program), there is a row. Components are identified by their name, e.g., participants, contracts and product catalogs, Core Cash Accounts (CCAs), etc. New components can be added at any time.
For each record type sent by the encapsulation module KM, there is a column. Each of the functionally encapsulated transaction messages is counted as an independent record type.
In an individual field of the table, the value 0, 1 or 2 may appear, with the following meanings:
"0": the component is not interested in the record type.
"1": the component is interested in the record type in principle, but only receives the message when it is affected by a changed attribute (see below).
"2": the component is interested in the record type and always receives the message.
The form "attribute-influence-application-software-program" form is a manually maintained static form. The ONL OUT and BAT OUT modules read the table for each application program, but never write the table. The table has three dimensions: record type, component, and attribute.
For each record type sent by the encapsulation module KM, there is a two-dimensional sub-table.
For each component (application program), there is a column in the two-dimensional sub-table. Components are identified by their name, e.g., participants, contracts and product catalogs, Core Cash Accounts (CCAs), etc. New components can be added at any time.
For each attribute of the record type, there is a row in the two-dimensional sub-table.
In an individual field of the two-dimensional sub-table, the value 0 or 1 may appear, with the following meanings:
"0": the component does not depend on this attribute of the record type. This means that the attribute is neither kept in the component's local data nor used in the mapping rules. The component is not "affected" by this attribute of the record type.
"1": the component depends on this attribute of the record type. This means that the attribute is kept in the component's local data, or that the attribute enters into the mapping rules for the component's local data retention.
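The interplay of the two tables can be sketched as follows. The table contents, component names and the rule that inserts/deletes are always delivered to interested components are illustrative assumptions drawn from the description above; only the 0/1/2 and 0/1 semantics come directly from the text.

```python
# Sketch of the distribution logic: "record-type-distribution" (0/1/2)
# decides whether a component is interested at all; for value 1 the
# "attribute-influence" table (0/1) is consulted per changed attribute.

RECORD_TYPE_DISTRIBUTION = {               # component -> record type -> 0/1/2
    "CCA":          {"K001": 2, "D201": 1},
    "participants": {"D201": 2, "K001": 0},
}
ATTRIBUTE_INFLUENCE = {                    # record type -> component -> attr -> 0/1
    "D201": {"CCA": {"name": 1, "score": 0}},
}

def recipients(record_type, changed_attributes, update_type):
    """Components whose input message buffer ENP receives this change."""
    out = []
    for component, row in RECORD_TYPE_DISTRIBUTION.items():
        code = row.get(record_type, 0)
        if code == 2:
            out.append(component)          # always receives the message
        elif code == 1 and update_type in ("S", "D"):
            out.append(component)          # insert/delete: no attribute check
        elif code == 1:
            deps = ATTRIBUTE_INFLUENCE.get(record_type, {}).get(component, {})
            if any(deps.get(attr) == 1 for attr in changed_attributes):
                out.append(component)      # affected by a changed attribute
    return out
```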
Another aspect of the invention is at least one software program component by means of which a transaction initiated from an application workstation on the first database invokes a so-called same-type transaction on the second database, and vice versa. In this case, from the perspective of the application workstation, the same-type transaction on the second database side behaves like the transaction on the first database side.
By migrating so-called same-type transactions, the functions, services and data present on the first database side become available in the context of the second database platform as quickly as possible. According to the invention, the same source programs are used. In this way, only one source code, namely that of the first database platform, needs to be maintained (and modified if necessary) during the migration phase. When a same-type transaction is activated in the context of the second database platform, the interfaces to the application software programs are not changed.
A same-type transaction consists of one or more software program modules. The software program modules are Cobol programs that contain the processing logic and access the system environment via primitives. The primitives consist of macros written in the Cobol language. In the second database environment, the macros make the same interfaces available as in the first database environment, but access new Cobol modules in the background. These Cobol modules use the structures of the second database components to ensure that processing in the new environment follows the old functionality.
A same-type transaction in the second database environment is an identical copy of the corresponding transaction in the first database environment, with the differences in the system environment (authorization, transaction-processing middleware, database and help macros) being emulated on the second database side.
The interface of a same-type transaction in the second database environment corresponds to that of the original transaction in the first database environment. As long as the first database is the master, all changes to the data warehouse are performed via the original transactions in the first database environment; read-only same-type transactions may already be activated on the side of the second database environment. During this time, record-oriented and functional synchronization takes place between the first and the second database environment. For functional synchronization, modifying or writing same-type transactions may be used until the switch that makes the second database the master. To this end, the same message that was already processed in the first database environment is transmitted; in this case, no re-validation occurs on the same-type transaction side.
Changes performed in real time on the first database side use the encapsulation module of the first database. In this manner, changed entries (records) of the first database can be synchronized with the second database. On the second database side, the records are sent to the master coexistence controller, which tracks them in the coexistence components and the corresponding application components of the second database environment. The encapsulation module is migrated once and adapted to the environment of the second database. In this manner, changes to the database contents can be sent, via the master coexistence controller, to the coexistence programs and corresponding application components in the second database platform environment. Modifying same-type transactions use the same mechanism as record synchronization to write to the second database and the corresponding application components in the second database platform environment.
After all same-type transactions are available in the second database environment, the second database may be declared the master. From this point on, all real-time (and also batch) changes are made via the same-type transactions, which, after a successful change in the second database, trigger synchronization with the first database. In this phase, synchronization occurs exclusively functionally, i.e. all incoming messages or transactions are passed unchanged to the first database and tracked there. Once this phase is concluded, the same-type transactions can be replaced.
In the case of synchronization in the direction from the first to the second database, the synchronization is record-oriented or functional, and the transactions are divided into corresponding categories. This allows the application software programs to be ported in an orderly, differentiated manner.
A first type of transaction triggers record-oriented (i.e. database-entry-oriented) synchronization. These transactions are used if only a few entries of the first database are affected by a change.
A second type of transaction triggers functional synchronization. These transactions are used if a relatively large number of entries of the first database are affected by a change.
In the case of record-oriented synchronization, the encapsulation module transmits all entries changed by a transaction in the first database to the master coexistence controller. The master coexistence controller first calls the coexistence utility (utility program) of the coexistence element of the second database environment to bring the entries and/or changes of the first database into the second database. After the second database entries have been changed successfully, the master coexistence controller calls the coexistence elements and/or coexistence utilities of the applications (e.g. participants) that contain the adaptation rules (mapping logic) from the first to the second database and/or to the applications in the second database environment.
In this case, no same-type transaction of the first database environment is required to bring the data successfully into the second database environment.
In the case of functional synchronization, it is not the entries of the first database changed by one or more transactions that are transmitted in real time to the master coexistence controller via the encapsulation module; instead, the original input message sent to the transaction of the first database is transmitted in real time to the master coexistence controller via the encapsulation module. Owing to the message identifier, the master coexistence controller recognizes that an input message for a same-type transaction is included rather than a record message, and forwards the message directly to the same-type transaction, which performs the same processing. Since the encapsulation module of the first database is also migrated, all changes to the second database are likewise made via the migrated same-type encapsulation module. The same-type encapsulation module sends the changes as record messages to the master coexistence controller, which, as in the case of record synchronization, calls the coexistence elements and/or coexistence utilities of the applications (e.g. participants) containing the adaptation rules (mapping logic) from the first to the second database and/or to the applications in the second database environment.
In this case, the same-type transaction is used to bring the data in the correct format (e.g. as records) into the second database and to trigger synchronization with the application software programs. However, since the content was already validated in the context of the first database, no online validation is performed in the context of the second database. Validation of content in the second database environment is activated only when the second database is the master.
Since the transactions on both sides are the same, all changes occur exclusively via the encapsulation module in the first database environment. The encapsulation module uses the database macros to modify the second database synchronously. The encapsulation module then also sends the same records to the master coexistence controller (which, as in the case of record synchronization, passes them to the coexistence elements and/or coexistence utilities of the applications) so that they can be synchronized.
As noted above, there are basically two different ways to initiate a same type of transaction.
1. Via HostLink
2. Via message-based synchronization by CART. CART is a middleware solution that provides secure, asynchronous store and forward communications between distributed applications on different platforms.
The following explains where in the overall system what substantive information/data for the second database platform appears, and where it comes from.
If a same-type transaction is requested via HostLink, the request arrives at the online root program. In the online root program, it is determined which transaction and which function are requested. Based on the transmitted transaction code and corresponding function code, a CALL is then used to call the corresponding routine.
For example: CIFRoutine is called using AQYGENERAL T371TPINFO.
In processing, the routine may then use other TP primitives to request additional information, such as an incoming message or terminal record. This information is also provided by the HostLink.
In case of function synchronization, in the context of a first database, CART messages are constructed and sent to the context of a second database. The message contains all necessary data as well as header portions so that the same type of transaction can be processed without using TP primitives.
The CART message is received by the master coexistence controller. From the coexistence header part, the master coexistence controller recognizes that a message from the first database environment is included rather than a database entry, and therefore forwards the message to the functional root program in the second database environment.
In this root program, messages are decomposed and prepared so that CALLs can be used to CALL corresponding same-type routines.
CIFRoutine is called using AQYGENERAL T371TPINFO MESSAGE-BUFFER.
Format of the synchronization message:
Header part: CART header, coexistence header
User part: TP data, message buffer
The CART header portion contains the technical information necessary to route the message to the master coexist controller.
The coexistence header part contains, among other technical data, the function code of the transaction, so that the master coexistence controller can recognize a functional-synchronization message and route it to the functional root program.
The TP data in the user part contains data that, in the online situation, is requested using TPGET TPINFO (e.g. the object branch). This data is needed both by the root programs and by the same-type transactions.
The USER PART message buffer is based on the corresponding transaction and contains critical information and USER input.
Via the function code, the same-type transaction can determine whether the message was received via functional synchronization (CART) or online (HostLink).
If a HostLink input message is included, the same-type transaction performs full validation of the message, including any additional authorization, and triggers the change to the database via the encapsulation module. The input message is obtained via the TP primitive TPGET IMSG, and the user is notified of success (or failure) likewise via TP primitives. The encapsulation module uses the DB macros to update the second database directly, and the master coexistence controller is used to update the coexistence elements and/or coexistence utilities and/or application software programs (e.g. participants).
In the case of functional synchronization, the processing has already been executed on the first database and is now only tracked in the second database and the application software programs. All validation/authorization is therefore bypassed: the message is processed directly and the change is initiated via the encapsulation module. Since, in the case of a functional-synchronization message, there is no HostLink connection to a user workstation, the TP primitives cannot be used; the same-type transaction therefore reads all necessary information from the passed TP data (T371TPINFO) and the message buffer.
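The branching between the two input paths can be sketched as follows. All names (`process_same_type`, `validate`, the `source` strings) are illustrative assumptions; the behavior, full validation only on the HostLink path and pure tracking on the CART path, follows the two paragraphs above.

```python
# Sketch: a same-type transaction checks whether its input arrived online
# via HostLink or as a functional-synchronization message via CART, and
# only the online path performs validation/authorization.

def validate(message):
    """Stand-in for the full validation incl. authorization."""
    return bool(message.get("user"))

def process_same_type(source, message, db_update):
    if source == "HOSTLINK":
        if not validate(message):          # full validation on the online path
            return "rejected"
        db_update(message)                 # change via the encapsulation module
        return "ok"
    elif source == "CART":
        # already validated and executed on the first database:
        # track only, bypassing all validation/authorization
        db_update(message)
        return "tracked"
    raise ValueError(source)

changes = []
assert process_same_type("CART", {"x": 1}, changes.append) == "tracked"
assert process_same_type("HOSTLINK", {"x": 1}, changes.append) == "rejected"
```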
A comparison is performed between the first and second databases to obtain a status regarding the equality of the information content of the two databases. Based on the data comparison, according to the invention, a report (error log file) on erroneous and/or missing records is generated. Finally, a correction function for erroneous and/or missing records is provided.
On the basis of a plan and reference tables, it is controlled each day which processing units of the first database are to be checked against the second database. The reference tables are automatically synchronized between the two databases. If nothing is to be processed, the reference table must be adjusted. The reference table indicates which processing unit may be compared on which day. The structure and logic are as follows:
The task is started daily at 5:00. The program calls the reference table with the keyword "CI/0005/wt/1/RECON" ("wt" being the day of the week, 01 to 07).
The structure of the reference table is as follows:
a processing unit:
01/02/03/04/05/06/07/08/09/10/11/12/13/14/15/16/17/18/34
Processing is performed if the processing unit exists on the first database on which the program runs. On the second database, in the unload program, the respective processing unit is converted into a partitioning criterion and selected accordingly. The record types to be processed are held in the reference table and are partitioned:
AL:D101/D111
KD:D201/D211/D212/D214/D215/D216/D217/D219/D220/D222/D225/D
226/D535
AD:D311/D321/D322
DP:F101/F111/F112/F113/F114/F115/F116/F117
SF:F201/F213/F214/F216/F217/F219
SV:F230
KT:K001/K002/K004/K005/K006/K007/K010/K011/K012/K013/K016
Only those records that have been selected are processed. Overall, only one reference table is accessed per system and per reconciliation run.
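The daily control described above can be sketched as follows. The keyword format "CI/0005/wt/1/RECON" comes from the text; the table contents (which units and record types are compared on which day) are purely illustrative assumptions.

```python
# Sketch of the reference-table-driven comparison control: the daily task
# builds the key with the weekday, looks up which processing units may be
# compared that day, and derives the record types to select.

REFERENCE_TABLE = {
    "CI/0005/03/1/RECON": {               # Wednesday: compare units 01 and 34
        "units": ["01", "34"],
        "record_types": {"KT": ["K001", "K002"], "KD": ["D201"]},
    },
}

def todays_comparison(weekday):
    """Return the comparison plan for the given weekday (1..7), or None."""
    key = f"CI/0005/{weekday:02d}/1/RECON"
    return REFERENCE_TABLE.get(key)        # None means: nothing to compare

plan = todays_comparison(3)
assert plan is not None and "01" in plan["units"]
assert todays_comparison(7) is None
```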
To this end, a data container is provided having a control list and a data list. The data container is used to simulate transactional uniformity in a first database environment in a second database environment. An error record based on the data comparison is also written to the container.
Error detection and handling are based on the structure of the error log file and the data container. During synchronization, all messages are written to the data container and processed from it. If an error occurs during synchronization, the data is marked accordingly; a link is then created from the data container to the error log file, and the error is displayed.
To this end, the software program components error log file, data container, synchronization, re-delivery and error handling for data equality are combined into one logical unit. A GUI is available that allows consolidated reporting across the components synchronization, initial load and data equality. An option is also provided for manually initiating a re-delivery for the correction of individual data entries.
With the repeat function, a correction of identified differences between the first and second databases can be performed immediately. Another function, the re-delivery function, comprises a set of functions for selecting erroneous or missing records in tables of the second database environment; corresponding changes are generated and propagated via the synchronization process back to the second database environment. The re-delivery function corrects the following three possible errors:
A record is missing in the first database but present in the second database.
A record is present in the first database but missing in the second database.
A record is present in the first database, but present in the second database with the wrong content.
The data comparison system compares the data stores of the two databases with each other and finds as many differences as possible. The comparison can be performed easily if the data structures on the two systems are almost identical. The key problem is that, at a particular point in time, a large amount of data must be compared.
One part of error detection comprises the unloading and preparation of the data from the two databases. For this, hash values are calculated and compared with one another; if there is a discrepancy, the data is retrieved from the appropriate database. Another part of error detection is a comparison program that compares the conspicuous data from the first and second databases and documents the differences in detail in the error log file of the synchronization (and places the data for synchronization in the data container). From the data container, an immediate attempt can be made, by performing the repeat function, to apply the new data to the corresponding database.
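The two-step comparison can be sketched as follows. The use of SHA-256 and the string-record model are illustrative assumptions; the principle, compare cheap hash values first and fetch the raw data only on a mismatch, comes from the text.

```python
# Sketch of the two-step comparison: per record type, a hash value is
# computed on each side and compared; only on a mismatch are the raw
# records fetched and diffed in detail.
import hashlib

def dataset_hash(records):
    """Order-independent hash over all records of one record type."""
    h = hashlib.sha256()
    for rec in sorted(records):
        h.update(rec.encode("utf-8"))
    return h.hexdigest()

def compare(db1_records, db2_records):
    """Return [] if the hashes match, else the detailed differences."""
    if dataset_hash(db1_records) == dataset_hash(db2_records):
        return []                          # cheap path: nothing to fetch
    only_db1 = sorted(set(db1_records) - set(db2_records))
    only_db2 = sorted(set(db2_records) - set(db1_records))
    return ([("missing in DB2", r) for r in only_db1] +
            [("missing in DB1", r) for r in only_db2])

assert compare(["a", "b"], ["b", "a"]) == []
assert compare(["a", "b"], ["a"]) == [("missing in DB2", "b")]
```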
The error analysis includes a processing function of error processing for analyzing and linking data from the error log file and the data container to each other. The data is then displayed by a GUI (graphical user interface). Analysis of what errors are included may then be performed manually, if necessary. Furthermore, from this GUI, a so-called batch re-delivery function and a repeat function (retry) can be initiated.
In the case of error correction, there are three variants:
Re-delivery and/or repeat function (retry) for individual records.
Error correction writes error data into the data container and initiates a correction function from the data container.
The partial initial load or mass update is the same as the initial load.
In the case of an initial load, the affected tables are deleted first.
In the case of error correction, the following data structures are read and written in particular:
data container
Error log
Uninstalling the File
Hash file
Convert files
Comparison files
Re-passing the file
Q1 database
For the unload file, the same data structure as that of the initial load-unload file is used.
The hash file has the following structure:
the conversion file has the following structure:
the comparison file uses the same data structure as used for other synchronizations. The header portion of the comparison document is explained below:
Name                   Content                                                              Length
COEX-MUTPRG            Program name of the changing program                                 PIC X(08).
COEX-AGENTC            Agent code                                                           PIC X(02).
COEX-APCDE             Application code                                                     PIC X(02).
COEX-NL                Processing branch                                                    PIC X(04).
COEX-UFCC-E            Program function code                                                PIC X(03).
COEX-UPTYP             Update type: S = storage, M = modification, D = deletion (erase)     PIC X(01).
COEX-USERID            User ID of the principal                                             PIC X(06).
COEX-PAKET-TIME-STAMP  Date and time of the packet (YYYYMMDDhhmmssuuuuuu)                   PIC X(20).
COEX-REC-TIME-STAMP    Date and time of the change (YYYYMMDDhhmmssuuuuuu)                   PIC X(20).
COEX-NL-KD             Branch of the customer                                               PIC X(04).
COEX-KDST              Customer code number                                                 PIC X(08).
COEX-OBJID             Object identification / DB1 key fields                               PIC X(20).
COEX-RECTYP            Record type (record type from database 1, or TERM; a TERM record     PIC X(04).
                       contains no data part)
COEX-REC-SEQUENZ       Record sequence number within a packet (for TERM, the highest        PIC 9(08).
                       sequence number of the packet)
COEX-ORIGIN            Origin of the record: 0 = initial load (BC), 1 = re-delivery (DB1),  PIC X(1)
                       2 = synchronization, 3 = reconciliation (DB2), 5 = online
                       same-type (DB2), 6 = reconciliation (BC)
COEX-REQUEST-TYPE      Online or batch processing                                           PIC X(1)
COEX-RESYNC-ID         Original key from TAPCPACKAGE or TAPCDATA for re-delivery            PIC X(32)
COEX-RESYNC-STATUS     Return code of the database DB1 re-delivery function                 PIC X(2)
COEX-LEVEL3-KEY        Database 1 key field                                                 PIC X(40)
COEX-RESERVED          Reserved                                                             PIC X(6)
COEX-DATA              Record, old and new                                                  PIC X(10600).
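Since the comparison-file header is a fixed-width record, its leading fields can be sliced by position. This sketch covers only the first few fields; the sample record content is invented, while the field names and PIC lengths follow the listing above.

```python
# Sketch: slice the leading fixed-width COEX header fields out of a
# comparison record, using the PIC lengths in declaration order.

COEX_FIELDS = [                # (name, length) per the header listing
    ("COEX-MUTPRG", 8),
    ("COEX-AGENTC", 2),
    ("COEX-APCDE", 2),
    ("COEX-NL", 4),
    ("COEX-UFCC-E", 3),
    ("COEX-UPTYP", 1),
    ("COEX-USERID", 6),
]

def parse_header(raw):
    """Slice the leading fixed-width fields out of a comparison record."""
    out, pos = {}, 0
    for name, length in COEX_FIELDS:
        out[name] = raw[pos:pos + length].strip()
        pos += length
    return out

rec = "CI0010  4201NL01UFCMUSER01"        # hypothetical 26-byte header prefix
hdr = parse_header(rec)
assert hdr["COEX-MUTPRG"] == "CI0010"
assert hdr["COEX-UPTYP"] == "M"
```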
Table             Insert                                        Change                             Delete
Data container    business service, error handling              business service, error handling   reorg job
Error log file    business service, general service             business service, general service  reorg job
Unload file       unload job DB2                                none                               unload job DB2
Hash file         hash program DB1, hash program DB2            none                               job before the start of the reconciliation run
Convert file      comparison program                            none                               job before the start of the reconciliation run
Comparison file   selection program DB1, selection program DB2  none                               job before the start of the reconciliation run
Re-delivery file  re-delivery function, error handling          none                               file rewritten or deleted after migration
Q1 database       re-delivery module                            none                               monitor
The coexistence controller program defines which programs or program components are called for a particular record type. Its task is to load the data to be modified from the first database into the environment of the second database.
In the case of a successful re-delivery, the coexistence controller program sets the error entry in the data container to "done".
An error message and error data may be displayed (sorted, if needed). A function is provided to initiate a re-delivery service.
In the data container, it is possible to distinguish between errors resulting from a reconcilement of the second database and errors resulting from a synchronization between the two databases. Furthermore, functionality is provided for display, correction or re-delivery or retry of data.
Through the functionality according to the invention, the longer the systems of the two database environments operate in parallel, the more the number and types of errors are reduced. After the end of a period (day, week, etc.), and depending on the record type, a reconciliation can be performed. It is also possible to check only records that have actually been requested (queried) on the second database side; for example, records that have not been used may be checked only once a month.
The reconciliation finds the differences between the systems of the two databases and corrects them. In this way, errors are detected that were not discovered during synchronization. These errors may be:
Non-encapsulated batch/online programs on the system of the first database
Loss of messages and/or files on the transmission path
Bugs in the second database System Environment
Recovery of one of the two systems
Message records that cannot be applied in the second database environment.
It is assumed that most errors can be corrected by the re-delivery function. Optionally, the second database can also be reloaded by a further initial load or a partial initial load (mass update).
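The corrective action that re-delivery must take falls into three cases: a record missing in the second database, a record present only in the second database, and a record present in both but with wrong content. A minimal sketch of that decision, using in-memory dictionaries as stand-ins for the two databases (an assumption purely for illustration):

```python
def redeliver(key, db1, db2):
    """Apply the corrective action for one record key (illustrative sketch).

    db1 and db2 are dicts standing in for the first and second database;
    the first database is treated as the authoritative source.
    """
    in1, in2 = key in db1, key in db2
    if in1 and not in2:
        db2[key] = db1[key]      # missing in DB2 -> re-deliver (insert)
    elif not in1 and in2:
        del db2[key]             # present only in DB2 -> delete
    elif in1 and in2 and db1[key] != db2[key]:
        db2[key] = db1[key]      # wrong content in DB2 -> overwrite
```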
Based on the database entries to be compared and their attributes, hash values are determined in a first step and compared with each other. If they differ, the raw data items are compared with each other in a second step. To this end, the hash values are first sent by the encapsulation module to the second database and compared there; only if necessary are the raw data compared in the second step.
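The two-step comparison above can be sketched as: compare cheap hash values first, and fetch and compare the raw records only on a mismatch. The use of SHA-256 and the log format below are assumptions for illustration, not the hash function of the source:

```python
import hashlib

def record_hash(record: str) -> str:
    """Cheap fingerprint for step 1 (SHA-256 chosen here as an assumption)."""
    return hashlib.sha256(record.encode()).hexdigest()

def compare(key, rec1, rec2, error_log):
    """Two-step comparison of one record from each database.

    Step 1 compares hash values; only on a mismatch does step 2 compare the
    raw records in detail and write the difference to the error log.
    """
    if record_hash(rec1) == record_hash(rec2):
        return True                     # step 1: hashes match, records equal
    if rec1 != rec2:                    # step 2: detailed raw comparison
        error_log.append((key, rec1, rec2))
    return False
```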
DB1 record Description
D101 Alfasearch (region)
D111 Second Alfasearch
D201 Customer
D211 Customer contact
D212 Customer object
D214 Notification
D215 Blocking of
D216 Instructions
D217 Avor
D219 Score value
D220 Application program
D222 Customer master data for scoring
D225 Customer master data for enterprise line application scoring
D226 Mobile data for enterprise line scoring
D311 Customer address
D321 Return address
D322 Disclosed is a
D535 Customer premises owner for non-messaging customers
F101 Stock account owner
F111 Proof of availability
F112 Triggering
F113 Blocking of
F114 Instructions
F115 Notification
F116 Indication of
F117 Dispatching instructions
F201 Preservation of
F213 Blocking of
F214 Instructions
F216 Indication of
F217 Dispatching instructions
F219 Safely open cheque
F230 Security management
K001 Account owner external account
K002 Proof of availability
K004 Auxiliary account contact
K005 Separate trigger instruction
K006 Blocking instructions
K007 Instructions
K010 Separate trigger instruction
K011 Dispatching instructions
K012 Base level external account area
K013 Items and conditions of market interest rate method
K016 Notification

Claims (29)

1. A computer network system for performing accesses by work units (UOW) on at least a first database (DB1) from at least one application workstation, in order to generate, change or delete contents of the first database (DB1) and to build and/or synchronize a second database (DB2) from/with the first database (DB1), the computer network system comprising:
1.1. at least one first server (S1) for directing and maintaining a first database (DB1), the server being connected with at least one application workstation,
1.2. at least one second server (S2) for guiding and maintaining a second database (DB2),
1.3. at least one data connection for connecting two servers (S1, S2), wherein
1.4. Software program modules are provided, created and programmed to
1.5. Performing a comparison between the first and second databases (DB1, DB2) to obtain a status for synchronization and relating to equivalence of the information content of the two databases (DB1, DB2), wherein
1.6. Starting from the data comparison, an error log file is generated relating to erroneous and/or missing records, and
1.7. Error detection and processing functions to correct/add erroneous and/or missing records, characterized in that
1.8. Providing a data container in the computer network system, said data container comprising a control table and a data table and being adapted to simulate the transactional consistency of the environment of the first database in the environment of the second database, and
1.9. Erroneous/missing records from the data comparison are written to the data container,
1.10. the data comparison includes three components: error detection, error analysis and error correction, and
1.11. Error detection involves retrieving and processing data in the computer network system from the two databases (DB1, DB2), calculating hash values and comparing them with each other,
1.12. if there is a discrepancy, the data is retrieved from the corresponding database (DB1, DB2),
1.13. the erroneous data from the first and second databases (DB1, DB2) are compared in detail, the differences are written to the synchronization error log file, and their data is written to the data container.
2. The computer network system according to the preceding claim, wherein
2.1. The error detection and handling function is a sub-function of the synchronization between two databases and is based on an error log file and a data container, wherein
2.2. During synchronization, all messages are written to and processed from the data container.
3. Computer network system according to one of the preceding claims, wherein
3.1. If an error occurs during synchronization, the data is identified as erroneous, an
3.2. A link is then created from the data container to the error log file and the error is then displayed/shown.
4. Computer network system according to one of the preceding claims, wherein
4.1. The software program components error log file, data container, error handling during synchronization, re-delivery and data assimilation are combined into one logical unit, making available a GUI that provides unified reporting for the synchronization, initial load and data assimilation components.
5. Computer network system according to one of the preceding claims, wherein
5.1. A repeat function is provided to perform immediate correction of identified differences between the first and second databases.
6. Computer network system according to one of the preceding claims, wherein
6.1. A re-delivery function is provided, the re-delivery function comprising a set of functions: selecting in the table an erroneous or missing record in the context of the second database (DB 2); the corresponding change is generated and sent back to the environment of the second database through a synchronization process.
7. Computer network system according to one of the preceding claims, wherein
7.1. The re-delivery function corrects three possible error cases:
records missing in the first database (DB1) but present in the second database (DB2),
records present in the first database (DB1) but missing in the second database (DB2),
records present in the first database (DB1) but present with wrong content in the second database (DB2).
8. Computer network system according to one of the preceding claims, wherein
8.1. From the data container, the new data is applied to the corresponding database by performing a repeat function.
9. Computer network system according to one of the preceding claims, wherein
9.1. Access to the first database (DB1) by a work unit (UOW) occurs through an encapsulation module (KM), which is set up and programmed to
9.1.1. Forwarding the work Units (UOW) to the encapsulation module,
9.1.2. decomposing a unit of work (UOW) received by the encapsulation module into one or more messages (M1.. Mn),
9.1.3. -entering a message (m1.. Mn) in a first database (DB1), and
9.1.4. the message (m1.. Mn) is sent to the second database (DB 2).
10. Computer network system according to one of the preceding claims, wherein
10.1. The encapsulation module (KM) is set up and programmed to perform those accesses by application software programs and other programs that change the first database, wherein these programs direct their change commands for the first database (DB1) to the encapsulation module (KM), which performs the actual accesses to the first database (DB1).
11. Computer network system according to one of the preceding claims, wherein
11.1. The controller (HS) of the second database (DB2) is set up and programmed so as to be able to execute
11.1.1. For reading a message (M1.. Mn) sent from an input waiting queue (Qin) to the controller,
11.1.2. for checking whether all messages (M1.. Mn) belonging to a work Unit (UOW) have arrived in the input waiting queue (Qin),
11.1.3. for performing an appropriate change in the second database (DB2) when all messages (M1.. Mn) belonging to one work Unit (UOW) have arrived in the input waiting queue (Qin), and optionally
11.1.4. For distributing, at least partially, depending on certain conditions, the respective changes or messages (m1.. Mn) containing said changes and belonging to one unit of work (UOW) to other databases or applications.
12. Computer network system according to one of the preceding claims, wherein
12.1. The encapsulation module (KM) is set up and programmed such that work units (UOW) from a batch run are broken down into corresponding messages (M1.. Mn) upon reaching predetermined parameters, and said messages are written to a transfer database (Q1), and
12.2. a monitor software module is provided, set up and programmed to transfer the contents of the transfer database (Q1) to the second database (DB2) after reaching predetermined parameters.
13. Computer network system according to one of the preceding claims, wherein
13.1. For each database or application receiving data from the first database (DB1), the controller (HS) of the second database (DB2) supplies data to a coexistence element program module, which is set up and programmed
13.1.1. to synchronize, in particular, data for related databases or applications, and
13.1.2. to perform a change corresponding to a message (m1.. Mn) belonging to a unit of work (UOW) in the second database (DB2) or application, or in an input waiting queue (Qin) in a database associated with the application concerned.
14. Computer network system according to one of the preceding claims, wherein
14.1. A controller (HS) is set up and programmed to retrieve information from the table indicating which content is to be provided to which coexistence element programs.
15. Computer network system according to one of the preceding claims, wherein
15.1. A software program component is provided by which, in the case of a transaction initiated from an application workstation of the first database (DB1), a same-type transaction can be invoked on the second database (DB2), and vice versa, in which case, from the application workstation's perspective, the same-type transaction on the second database (DB2) side behaves similarly to the corresponding transaction on the first database (DB1) side.
16. Computer network system according to one of the preceding claims, wherein
16.1. The functions, services and data present on the first database platform are available in the context of a second database platform that uses substantially the same program, and the interface to or to the application software program is not substantially changed when the same type of transaction is activated in the context of the second database platform.
17. Computer network system according to one of the preceding claims, wherein
17.1. The interfaces of the same-type transactions in the second database environment correspond to those of the original transactions in the first database environment, and it is configurable whether and how the original transaction in the first database environment or the same-type transaction in the second database environment is to be used.
18. A computer-supported method for performing accesses by work units (UOW) on at least a first database (DB1) from at least one application workstation, in order to generate, change or delete contents of the first database (DB1) and to build and/or synchronize a second database (DB2) from/with the first database (DB1), the method comprising the steps of:
18.1. directing and maintaining a first database (DB1) using at least one first server (S1) connected to at least one application workstation,
18.2. directing and maintaining a second database (DB2) using at least one second server (S2),
18.3. providing at least one data connection for connecting two servers (S1, S2), wherein
18.4. At least one software program module is provided for performing a comparison between the first and second databases (DB1, DB2) to obtain a status for synchronization and relating to equivalence of the information content of the two databases, wherein
18.5. Starting from the data comparison, an error log file is generated relating to erroneous and/or missing records, and
18.6. Error detection and processing functions to correct/add erroneous and/or missing records, characterized in that
18.7. Simulating the transactional consistency of the environment of the first database in a data container, having a control table and a data table, in the environment of the second database, and
18.8. Erroneous/missing records from the data comparison are written into the data container,
18.9. the data comparison includes three components: error detection, error analysis and error correction, and
18.10. Error detection involves retrieving and processing data in the computer network system from the two databases (DB1, DB2), calculating hash values and comparing them with each other,
18.11. if there is a discrepancy, the data is retrieved from the corresponding database,
18.12. the erroneous data from the first and second databases (DB1, DB2) are compared in detail, and
18.13. the differences are written to the synchronization error log file and their data is written to the data container.
19. The computer-supported method of the preceding method claim, wherein
19.1. The error detection and handling function is a sub-function of the synchronization between the two databases and is based on an error log file and a data container, wherein
19.2. During synchronization, all messages are written to and processed from the data container.
20. Computer-supported method according to one of the preceding method claims, wherein
20.1. If an error occurs during synchronization, the data is identified as erroneous, an
20.2. A link is then created from the data container to the error log file and the error is then displayed/shown.
21. Computer-supported method according to one of the preceding method claims, wherein
21.1. The software program components error log file, data container, error handling during synchronization, re-delivery and data assimilation are combined into one logical unit, making available a GUI that provides unified reporting for the synchronization, initial load and data assimilation components.
22. Computer-supported method according to one of the preceding method claims, wherein
22.1. A repeat function is provided to perform immediate correction of identified differences between the first and second databases.
23. Computer-supported method according to one of the preceding method claims, wherein
23.1. A re-delivery function is provided, the re-delivery function comprising a set of functions: selecting in the table an erroneous or missing record in the context of the second database (DB 2); the corresponding change is generated and sent back to the environment of the second database through a synchronization process.
24. Computer-supported method according to one of the preceding method claims, wherein
24.1. The re-delivery function corrects three possible error cases:
records missing in the first database (DB1) but present in the second database (DB2),
records present in the first database (DB1) but missing in the second database (DB2),
records present in the first database (DB1) but present with wrong content in the second database (DB2).
25. Computer-supported method according to one of the preceding method claims, wherein
25.1. From the data container, the new data is applied to the corresponding database by performing a repeat function.
26. Computer-supported method according to one of the preceding method claims, wherein
26.1. Access to the first database (DB1) by a work unit (UOW) occurs through an encapsulation module (KM), which is set up and programmed to
26.1.1. Forwarding the work Units (UOW) to the encapsulation module,
26.1.2. decomposing a unit of work (UOW) received by the encapsulation module into one or more messages (M1.. Mn),
26.1.3. -entering a message (m1.. Mn) in a first database (DB1), and
26.1.4. the message (m1.. Mn) is sent to the second database (DB 2).
27. Computer-supported method according to one of the preceding method claims, wherein
27.1. The encapsulation module (KM) is set up and programmed to perform those accesses by application software programs and other programs that change the first database, wherein these programs direct their change commands for the first database (DB1) to the encapsulation module (KM), which performs the actual accesses to the first database (DB1).
28. A medium carrying a computer program having computer program code thereon which, if executed in a computer, is set up to carry out the computer-supported method according to one of the preceding claims.
29. A computer program product having computer executable program code which, if executed in a computer, is set up to implement the computer supported method of one of the preceding claims.
HK08107217.4A 2005-03-31 2006-03-31 Computer network system for synchronizing a second database with a first database and corresponding procedure HK1112301A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP05007073.9 2005-03-31

Publications (1)

Publication Number Publication Date
HK1112301A true HK1112301A (en) 2008-08-29
