GB2486914A - Source code control of relational databases - Google Patents
Source code control of relational databases
- Publication number
- GB2486914A (application GB1022108.3A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- database
- source code
- objects
- file
- source
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/21—Design, administration or maintenance of databases
- G06F16/211—Schema design and management
- G06F16/213—Schema design and management with details for schema evolution support
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/28—Databases characterised by their database models, e.g. relational or object models
- G06F16/284—Relational databases
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
A method of extracting source code and metadata from a relational database to text files on a file system, and importing the same source files into a second, target database instance, comprising: identifying objects that qualify for extraction (using the database instance and source schema name) and extracting the source code for those objects into a set of files; importing the source code into the target database by extracting the source code from the target database, comparing the two sets of source code (after both have been normalised, e.g. by white space removal) using a character by character comparison and, if there is a difference, executing database statements to update the target schema; checking whether the update was successful by again extracting the source code from the target database and comparing the two sets of normalised source code character by character; and finally reporting whether the update was successful.
Description
Traditionally, source code control of relational databases tends to be problematic. Since a database is generally required to keep the data stored within it in a valid state at all times, updates to relational database code or metadata have to take into consideration the existing data stored within the database and the existing structure used to store that data. As a result, relational databases usually require such changes to be deployed while the database is in an active, running state. This allows the database to validate any changes and to determine whether and how existing data will be affected.
For many types of database objects, like tables and indexes, these updates cannot be performed by simply replacing the previous version of an object with the new version of the object. For example, you cannot update a database table by simply deploying the new table structure to the database. The reason for this is that the database has no way of knowing how the data contained within the table should be represented within the new table structure. Consider an existing table with two columns A and B, and suppose a change is required such that the new table structure contains two columns with names A and C. If we were to indicate to the database merely the end result, that is to say that the table should instead have two columns A and C, then in general there is no way for the database to make the transition to this new structure without some additional knowledge or instructions about the steps required to get there. In this particular case the final state can be reached along at least two different and equally valid paths. Path one would be to rename column B to C. Path two would be to remove column B and then to add column C. The end results are clearly different. A third path could be to add column C, then to populate column C with a calculated value based on data currently present in column B and possibly some other data in another column in another table, and finally to remove column B from the table.
The point of all this is to illustrate why it is necessary to give a database detailed change instructions to get to the required state, and why the database generally cannot do anything with a code instruction that merely represents the final state of the database.
To be clear, code changes to any object that contains or represents data have to be deployed to the database by means of code instructions representing a change to existing structures or objects. For example, instead of representing in code and deploying to a database the final structure of the table, it is necessary to send an ALTER instruction to the database in order to indicate whether the column with name B should be renamed to C, or whether a new column C must be added to the table and the column with name B must be removed from it.
These ALTER instructions are usually referred to as patches and stored in scripted patch files. Changes to a database are achieved by executing the change instructions contained within these patch files, against or within the database.
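The ambiguity described above, where the same final table structure can be reached by different patch paths with different results for the stored data, can be sketched in plain Python. This is an illustrative model only, not code from the patent; the table is simulated as a list of row dictionaries.

```python
# Illustrative sketch (not from the patent): the same final structure,
# columns A and C, reached via two different patch paths that leave
# different data behind.

def rename_column(rows, old, new):
    """Path one: rename column `old` to `new`, keeping its data."""
    return [{(new if k == old else k): v for k, v in r.items()} for r in rows]

def drop_then_add(rows, drop, add):
    """Path two: drop column `drop`, then add an empty column `add`."""
    return [{**{k: v for k, v in r.items() if k != drop}, add: None} for r in rows]

table = [{"A": 1, "B": 2}]

path_one = rename_column(table, "B", "C")   # data in B survives as C
path_two = drop_then_add(table, "B", "C")   # data in B is lost

print(path_one)  # [{'A': 1, 'C': 2}]
print(path_two)  # [{'A': 1, 'C': None}]
```

Both results have columns A and C, so a tool shown only the end state cannot know which path was intended.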
Patch files are notoriously difficult to work with. In general, patch files have to be executed against the database in a very specific order, namely the order required by the intended changes. Changing the order in which patch files are executed can invalidate all remaining patch files. Once an error has been encountered, it often requires detailed analysis and the writing of custom code to get the database back into the correct and known state.
In general, patch files are not the only way in which updates are deployed to a database. Database administrators often have to perform emergency changes to production databases to alleviate performance bottlenecks. This is usually achieved by creating new indexes, by creating materialized views (Oracle) or even by changing actual code (e.g. optimizing a query or adding an optimization hint).
Once changes outside of the patching process have been applied to a database, this can potentially invalidate all patches that have been developed for a database environment without these emergency changes. Companies usually have processes in place to propagate such emergency changes back into the development database environments. Sometimes, however, these emergency changes are incompatible with changes made to the development database environments subsequent to the most recent release to the production environments.
There is no easy solution to all of this. A strict configuration management policy, disciplined and meticulous developers, and open and clear communication channels across all tiers of the organization can potentially alleviate this problem, but they are unlikely to eliminate it completely.
This picture stands in stark contrast to the level of configuration management that can usually be achieved for computer programs that do not run and live within a database, e.g. programs written in Java, C++, Visual Basic etc.
Within the context of computer programs, the highest quality of software configuration management is usually achieved by means of revision control software. Examples of revision control systems are Subversion, CVS, ClearCase, SourceSafe, etc. These tools provide an environment for storing the source code of your computer program. Control over the configuration of your computer program is achieved by always storing the latest version of the code or text files representing a software program within the revision control system. Common practice is to require software developers to send any changes of source code to the version control system as soon as possible after the change has been completed. In fact this is usually the only accepted path of change to source code. Whenever the latest version of a program has to be built to be sent to customers, this latest version is retrieved from the revision control system directly. The developer is usually not allowed to build and send source code directly to a customer. This would be regarded as bad practice, since once the developer has done so it becomes virtually impossible to keep track of which version of the software is running at a customer site at any point in time. Unless the software running at a customer can be connected to a particular version of the program as stored within the revision control system, it would be prohibitively difficult to provide general support functions relating to reproducing bugs and fixing them.
The key to achieving control over your program source code is to always store a complete copy of the latest state of your software within the revision control system. However, to achieve this with the software and metadata that define the state and behavior of a relational database is prohibitively difficult. The reason for this is that the source code usually produced for a relational database is in the form of patches, i.e. changes. By definition these patch files never contain or represent the final state of a database. They just represent a series of changes to get the database from one state into another.
In other words, it is impossible to determine what the final state of the database should be by looking at the patches only. You also need to know the exact state that the database was in before executing the patch files against the database.
Stated differently, in order to have sufficient control of a database configuration, it becomes critical to verify and confirm that the current state of the database is as expected before applying a patch or set of patch files. However, since the exact and complete state of the database is not stored in the revision control system, this is not possible. Tools are available to compare one database against another database, but this is hardly sufficient. You need a reference within the revision control system.
So why don't developers store the complete state of the database in the revision control system? This is just not practical currently. The problem is that once you have stored source code representing the state of a database in the revision control system, you cannot deploy this to the database. As a result there is no way to know whether the code is in fact valid code, or whether it is even the latest version of the code as represented by the database. Because of this, any such efforts usually degrade slowly in quality over time and rarely end up being useful.
Prior Art
LiquiBase is an open source tool that essentially addresses management of patch files. It creates a databasechangelog table that keeps track of which patches have been installed on each database. It will ensure patches are installed consecutively according to the patch number. The same patch file will not be installed twice on the same database. The main limitation of LiquiBase is that it relies entirely on the databasechangelog table to determine the current status of a database. This assumes that changes to the database are only ever made via the LiquiBase patching process. This assumption is simply not valid for most database environments, particularly not for production databases. One of the main responsibilities of database administrators is to address performance bottlenecks on production environments. Because of the typical urgency in such cases, the measures they take, which can include creating indexes, partitions or materialized views, changing SQL queries etc., are generally not done via patch release cycles. At best these changes are fed back, after the fact, to the development teams. The development team then has to create a LiquiBase patch for these emergency changes to ensure that the development and test environments are in line with the production environment. Now the problem is that you have a LiquiBase patch that will be applied to all environments, even though the changes have already been applied to the production environment. None of these problems is insurmountable, but in the end there are so many opportunities for mistakes that the risk of the production, development and test environments being out of sync is unacceptably high.
In U.S. Pat. No. 7,809,763, granted to Non et al. on October 5, 2010 and entitled "Method(s) for updating database object metadata", a method is proposed to address the same concerns about database configuration management. This invention is embodied in a tool with the name XDF. The XDF tool extracts the metadata of a table and the metadata of all its associated indexes and constraints from a first database into a single XML representation. This XML file can then be used on a second database to update the table definition so that its metadata matches the metadata present in the XML file.
With regard to table columns, the XDF method only supports the adding of new columns. It does not support dropping columns, renaming existing columns, combining the data from two columns into one, or splitting the data from a single column into two columns. The main reason for this limitation is that it does not support patch file deployment. Most of these tasks cannot be automated; developer input is almost always required for these changes. It must be noted that the XDF tool targets a slightly different scenario, where multiple optional application features or updates can be installed to the same target database schema. Because of that it will not throw an error if additional columns are found to exist in the target table.
With regard to XDF and its support for use in a revision control system, the XML generated by the XDF tool contains numerous environment specific fields, like the last updated date and the object id, for all objects. This means that no two database environments will yield the same extracted XML content.
Consequently it would not be possible to use traditional revision control features like branching, merging and diff unless the same database instance is always used to generate the XML. Also, direct comparison of the XML files generated using XDF with the corresponding XML files from any other database instance would yield numerous differences that are not important from a revision control point of view. In order to compensate for this limitation, the XDF tool offers a comparison utility that considers only the subset of XML content that is really significant across database instances.
The key differences between the XDF method and the method of this application are:
1. The method of this invention extracts valid (executable) database code and uses this to determine whether a target database needs updating, by comparing it with the code extracted from the target database. XDF extracts and compares metadata in an XML representation.
2. The method of this invention facilitates the application and execution of patch files, which supports all possible change scenarios for tables. XDF automatically generates the code statements necessary to update a table, but this technique is applicable only to the adding of columns and the changing of some properties of existing columns. It excludes several important scenarios of table changes, such as renaming columns and splitting or combining columns.
3. The method of this invention extracts source code that is normalized with respect to any specific environment. This enables full adoption within a revision control system. XDF generated source code contains environment specific information, which limits its use within revision control systems.
Summary
This invention addresses many of the problems associated with software configuration management of relational database applications, as highlighted in the background section. It does not eliminate the need for patch files. However, as argued above, patch files are inevitable and useful.
This invention provides a way to store the complete state of a database schema in a revision control system and to actively use this information to control the state of any other target schema in the same or another database instance. More importantly it does all of this in an automated way that leaves no opportunity for user errors.
It does so by:
1. Providing an export utility that:
a. Automatically extracts to the computer file system the entire current status of a database schema as valid (executable) code.
b. Implements a fine grained mapping between objects in the database and a file and directory structure.
c. Normalizes database code with respect to any specific database instance or environment.
d. Suppresses extraction of database objects that have been generated by the database as secondary objects. Only objects that have been specifically created by developers are extracted.
2. Providing a corresponding import utility that:
a. Identifies objects that need to be updated by comparing extracted source code with the source code on the file system.
b. Automatically deploys all source code for a schema (except for tables), as extracted by the export utility, to another target schema, by only installing, updating or removing those objects that are not up to date according to the source code on the file system.
c. Provides a method that, during the importing process, resolves all installation dependencies between different objects across all schemas being installed, such that all objects are installed in the correct order. (Example: a constraint can only be created once the table exists.)
d. Provides a method to determine programmatically which patch files need to be applied to tables to update them.
e. Provides a mechanism to verify that the final status of the database after applying a patch is as expected and required.
f. Provides a mechanism to determine whether any changes have been applied to the database outside of the patching path.
g. Provides a method to establish when conflicts occur, i.e. when another developer has changed a database object in the same database instance.
Detailed Description
The invention has been implemented specifically for the Oracle database, but the same principles and algorithms can correspondingly be applied to other relational databases.
The invention is titled "Spoor" and uses the command "export" to extract all database source code for an application to the file system. The export function extracts all source code for the following Oracle database object types and stores it in files on the computer's local file system: Package, Package Body, Procedure, Function, Trigger, Operator, Cluster, Java Source, Type, Type Body, View, Table, Indextype, Global Temporary Table, Materialized View, Materialized View Log, Index, Sequence, Library, Constraint, Synonym, Database Link, Object Privilege, Refresh Group and Dimension. Each object is stored to a file with a name corresponding to the name of the object in the database. Each such file is stored to a specific directory and with a file extension that uniquely identifies the object type.
The directory path and corresponding extension are configured for each object type in the configuration file with the name main.spoor-config. To illustrate, given that the configuration file contains the two settings: TABLE_FOLDER=data/tables
TABLE_EXT=table
A table with the name COMPANY in the database will be saved to the following file and relative path: data/tables/COMPANY.table. This mapping of object type to file name and directory structure is strictly enforced by the tool. During import, files with the wrong directory and file extension combination will be ignored, and if the name of a file does not match the name of the database object whose source code is contained in that file then an error is logged.
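The mapping from object type to file path can be sketched as follows. This is a hypothetical reconstruction: the configuration key names (TABLE_FOLDER, TABLE_EXT) are an assumption based on the partially garbled example settings quoted in the text.

```python
# Hypothetical sketch of the object-to-file mapping described above.
# Key names follow the example configuration settings in the text;
# they are assumptions, not documented Spoor configuration keys.

config = {
    "TABLE_FOLDER": "data/tables",
    "TABLE_EXT": "table",
}

def object_file_path(object_type, object_name, config):
    """Map a database object to its file path on the file system."""
    folder = config[f"{object_type}_FOLDER"]
    ext = config[f"{object_type}_EXT"]
    return f"{folder}/{object_name}.{ext}"

print(object_file_path("TABLE", "COMPANY", config))  # data/tables/COMPANY.table
```

Because the path and extension are derived purely from the object type and name, the reverse mapping (file to object) is unambiguous, which is what allows the import tool to reject misplaced files.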
Spoor extracts the source code as valid DDL and/or PL/SQL, such that it can always be executed against the database without resulting in a syntax error.
In addition, the source code is instance normalized as follows. The source code never contains information that is potentially unique to a specific instance. For example, the current value of a sequence does not appear in the extracted source code. Also, system generated names do not appear in the extracted source code. Oracle syntax allows the creation of some objects (indexes, constraints, partitions) without requiring the user to specify the name of these objects. The Oracle instance then assigns system generated names to these objects. However, the generated names for the same object on two different Oracle instances are usually different. Spoor extracts source code in a format similar to the way in which the objects were initially created, i.e. without specifying the name.
Normalization of the source code in this way is a critical precondition for being able to manage the source configuration of multiple database environments via a revision control system. If the extracted source code did contain instance specific information, then it would not be possible to align any particular database instance with the reference version of the source code as stored in the revision control system. There would always be remaining differences that cannot be reconciled, and this is neither acceptable nor useful from a software configuration management perspective.
The export tool can be executed on an empty directory structure, in which case it will create all directories and files as required, or it can be executed over an existing export. In the latter case it will compare the source files currently present on the file system with the latest source as extracted from the database. It will then only update the source files that are out of date. For new objects it will add the source code for these objects to the existing files, or create new files when necessary. For objects that have changed since the last export it will update the corresponding source code for that object in the file.
For objects that have been removed from the database since the last export it will remove the corresponding code from the existing file. The tool will never delete a source file from the file system, even if all corresponding objects have been removed from the database and the resulting file is empty.
An empty file is maintained as a place holder to indicate that the corresponding object(s) should also be removed from other database environments.
During export, Spoor queries the Oracle dictionary views to determine which objects are available for export and which objects have been automatically created by the database instance and should therefore not be exported. Because the list of objects to export is created automatically and programmatically, developer errors are eliminated. Spoor uses the Oracle DBMS_METADATA package and the USER_SOURCE dictionary views to generate DDL and/or PL/SQL for each database object that is exported. Spoor further performs normalization on the extracted source code as described above.
The complementary command "import" is used to deploy the database source code, as extracted by "export", back to another database instance. It does this automatically, thereby eliminating user errors.
During import it resolves all creation dependencies automatically. The import process performs the following tasks in the following sequence:
1. For all database objects that correspond to the source files selected (by the user) for import, source code is extracted from the target database environment.
2. The extracted source code is compared with the source code as present on the file system.
3. For all object types except tables:
a. If the corresponding object does not exist in the source file then this object is removed from the database.
b. If the corresponding object does not exist in the database then it is created.
c. If the source code as extracted from the database is different from the source code as present in the source file then the object is updated. Objects are updated by either recreating them in place (using the Oracle REPLACE keyword) or by dropping and creating them again.
4. For tables:
a. If the table is not present in the database then it is created.
b. If the table source code is not present in the source file, or is different from the source code in the file, then patch files are considered for execution:
i. All patch files that match the specific naming convention for the table under consideration are considered for execution.
ii. Each patch file corresponds to a specific revision of the table source code.
iii. A patch file is only executed if the source code for the table as extracted from the target database matches a specific revision of the table source code as obtained from the revision control system.
iv. Once the table in the target database matches the table source code in the file, no more patches are considered for execution.
5. The tool verifies that all changes were correctly imported and reports a failure if any object in the database ends up different from the corresponding source code on the file system.
A difference between the source code as extracted from the database and the source code on the file system is detected if the tool finds at least one character difference between the two code versions. For some objects even white space differences are significant and warrant an update. These are all objects that contain PL/SQL as part of their definition: Packages, Package Bodies, Types, Type Bodies, Procedures, Functions, Triggers and Java Source. For the other objects white space is only significant when it appears as part of literal text. For all these objects every contiguous sequence of white space characters is replaced by a single space character before the comparison takes place.
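The normalisation and comparison rule can be sketched as below. This is a simplified model: the patent additionally keeps white space inside literal text significant for non-PL/SQL objects, a refinement omitted here for brevity.

```python
import re

# Simplified sketch of the comparison rule described above. For objects
# without PL/SQL, runs of white space are collapsed to a single space
# before a character-by-character comparison; for PL/SQL objects every
# character, including white space, is significant.

def normalise(source, contains_plsql=False):
    if contains_plsql:
        return source
    return re.sub(r"\s+", " ", source).strip()

def differs(db_source, file_source, contains_plsql=False):
    """True if the two code versions differ after normalisation."""
    return normalise(db_source, contains_plsql) != normalise(file_source, contains_plsql)

# A pure white space difference does not trigger an update for a plain object...
print(differs("CREATE TABLE t (a INT)", "CREATE  TABLE t\n(a INT)"))  # False
# ...but the same kind of difference is significant for PL/SQL objects.
print(differs("BEGIN NULL; END;", "BEGIN  NULL; END;", contains_plsql=True))  # True
```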
The key aspect to note of this comparison is that it is not the metadata of the resulting objects that exist in the database that is compared. It is the source code statements that are required to create these objects that are compared.
During import, objects are brought up to date with the source files in several different ways, depending on the object type. Some object types, in particular the PL/SQL objects and views, support a REPLACE clause which allows these objects to be brought up to date in place, i.e. by re-executing the original statement that created them. Other objects are more closely related to data, e.g. indexes and constraints. For all these objects the tool will by default bring the object up to date by first removing the object from the database and then re-creating it using the source code statement as contained in the source file. For tables the tool will never make any automated changes (except for initially creating the table).
The only changes allowed to existing tables are via patch files. The tool will apply patch files when appropriate.
For the deployment of patch files the tool was designed to relate patch files to object revisions via the name of the patch file. In particular, whenever the patch file name contains a revision number that can be related to a specific revision of the corresponding object in the revision control system, the tool will first extract this revision of the source code from the revision control system. It will compare this older revision of the source code with the current source code of this object as extracted from the database.
Only if the two versions match will the patch file be applied.
To illustrate, consider a patch file with the name COMPANY.table.patch.r143. The file name indicates that this is a patch file for a table with the name COMPANY. The last part of the name indicates that this patch file is applicable to revision 143 of the COMPANY table. In order to determine whether the patch file should be applied to the database instance, the tool will first retrieve from the revision control system revision 143 of the file COMPANY.table. It will then compare the create statement within this file with the corresponding statement as extracted from the database. Only if the two create statements match does the tool determine that the patch file is appropriate to apply to the database.
When more than one patch file exists for the same table, the tool will iterate through all patch files from latest to earliest, retrieving the corresponding source code revision, until it finds a matching revision. It will then apply the corresponding patch file. Once all statements in the patch have been successfully completed, the tool will again compare the extracted table source with the corresponding create statement on the file system. If the statements differ then the tool will attempt to find another applicable patch file among the remaining patch files. It will not consider revisions earlier than those that have previously been applied, only later revisions. This is to avoid an infinite loop situation where two patch files cancel each other.
This approach to determining when to apply patch files solves the problem of not knowing what the initial status of an object is before applying a patch file. This approach alleviates the need for developers to write custom code to determine whether a table is in the correct status for the patch file to be applied. It also alleviates the need for any developer or database administrator to know or remember whether to apply a patch file or not. It provides a mechanism for developers to make sure a patch file will only ever be applied when the table in the target environment is in the status that it was when the patch was originally developed.
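The revision-matching rule for selecting a patch file can be sketched as follows. This is a minimal model, not the patent's implementation: `patches` stands in for the set of patch files on disk, keyed by the revision number taken from each file name, with the table source at that revision (as retrieved from revision control) and the patch statements.

```python
# Hypothetical sketch of the patch-selection rule described above.
# `patches` maps revision number -> (table source at that revision,
# patch statements in the corresponding patch file).

def select_patch(db_source, patches):
    """Return the statements of the latest patch whose recorded revision
    of the table source matches the source extracted from the target
    database, or None if no revision matches."""
    for revision in sorted(patches, reverse=True):   # latest to earliest
        revision_source, patch_statements = patches[revision]
        if revision_source == db_source:
            return patch_statements
    return None

patches = {
    143: ("CREATE TABLE COMPANY (A INT, B INT)",
          ["ALTER TABLE COMPANY RENAME COLUMN B TO C"]),
    150: ("CREATE TABLE COMPANY (A INT, C INT)",
          ["ALTER TABLE COMPANY ADD D INT"]),
}

# The target table still matches revision 143, so that patch is selected.
print(select_patch("CREATE TABLE COMPANY (A INT, B INT)", patches))
```

After the selected patch runs, the tool would re-extract the table source and repeat the selection over later revisions only, which is what terminates the loop described in the text.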
The patching algorithm will exit as soon as the extracted source for a database object matches the corresponding source code on the file system, or when there are no more patch files to consider, whichever comes first. During import the invention resolves all dependencies in a four step process.
Step One: Dropping Objects
The tool removes (drops) all objects of all types from the database that need to be removed (except tables). It then determines whether any dependent objects have been removed as a result of this first action and updates its list of objects present in the schema. This is required since for some object types any dependent objects are also automatically removed. For example, if the tool explicitly removes a materialized view then all related indexes and constraints are removed as well.
Step Two: Applying Patch Files
The tool applies patch files for tables as set out above. It then refreshes the list of objects present in the database and the corresponding source code representation for each object.
Step Three: Creating Objects
The tool creates or updates objects as determined by the comparison process. It does so in a specific order depending on the object type. Specifically, the order is as follows: Sequence, Database Link, Synonym, Library, Type, Type Body, Function, Procedure, Package, Package Body, Java Source, Operator, Indextype, Cluster, Cluster Index, Table, Global Temporary Table, View, Index, Constraint, Dimension, Materialized View Log, Materialized View, Materialized View Index, Materialized View Constraint, Object Privileges, Referential Constraints, Trigger, Refresh Group. This order was chosen to minimize failures due to creation dependencies that are not met. For example, materialized views are created after views and tables, since the creation of a materialized view will fail unless the tables and views on which it is based exist in the correct state at the time of creation of the materialized view.
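Ordering objects for creation by their type amounts to a sort over a fixed type-order list, which can be sketched as below (the order list is abbreviated to a few types for the illustration).

```python
# Sketch of ordering objects for creation by object type, following the
# fixed type order described above (abbreviated here to a few types).

TYPE_ORDER = ["Sequence", "Table", "View", "Index", "Constraint",
              "Materialized View", "Object Privilege", "Trigger"]

def creation_order(objects):
    """Sort (type, name) pairs by the configured type order."""
    return sorted(objects, key=lambda o: TYPE_ORDER.index(o[0]))

objects = [("Trigger", "TRG1"), ("Table", "COMPANY"), ("Index", "IDX1")]
print(creation_order(objects))
# [('Table', 'COMPANY'), ('Index', 'IDX1'), ('Trigger', 'TRG1')]
```

As the next paragraph explains, this type-level ordering alone cannot resolve every dependency, which is why a retry queue is needed as well.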
This sequence of install/update does not resolve all dependency requirements by itself; in fact it is not possible to resolve all dependencies by taking into consideration only the type of an object. To illustrate, consider a table constraint that references a function in a different schema. For the table constraint to be successfully created the schema needs an execute privilege on the function, yet privileges are only created after all constraints are created, so this sequence of creation will not resolve the dependency. Moving the object privilege creation to before constraint creation will not work either, since privileges can only be successfully created after the corresponding objects have been created.
In order to address unresolved dependencies like the above, a fourth step is required.
Step Four: Re-Process Queue Every update, create or patch file statement that fails during execution of any of the earlier steps is put into a queue. Once Step Three has been completed, the import tool iterates through this re-process queue and attempts to execute these statements again. If a statement fails it is added again to the back of the queue. If it succeeds it is permanently removed from the queue.
The import tool keeps cycling through this list of failed statements until all statements are successfully completed, or until all remaining statements fail consecutively without a single success, at which point the tool gives up on trying to resolve the dependencies. If it is at all possible to resolve the dependencies then this procedure will achieve it.
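The re-process queue can be sketched with a double-ended queue, where one full pass without a single success is taken as "all remaining statements fail consecutively". The `execute` callable is an assumed stand-in for running a statement against the database.

```python
from collections import deque

def drain_retry_queue(statements, execute):
    """Retry failed statements until all succeed or no progress is possible.
    Returns the statements that could never be executed successfully."""
    queue = deque(statements)
    while queue:
        progressed = False
        for _ in range(len(queue)):     # one full pass over the queue
            stmt = queue.popleft()
            try:
                execute(stmt)
                progressed = True       # success: statement leaves the queue
            except Exception:
                queue.append(stmt)      # failure: back of the queue
        if not progressed:
            break                       # a whole pass failed; give up
    return list(queue)
```

Because every pass retries each remaining statement once, any dependency chain that is resolvable at all is eventually resolved, which matches the guarantee stated above.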
The program also provides conflict detection. By default the tool maintains a set of base files which are used during import and export to detect conflicts. This is of particular importance when using the import tool in a shared database environment, i.e. where more than one developer is making changes to the same target schema and database instance from their own computers, either using Spoor or any other tool of their choice. Consider the scenario where one developer is in the process of making changes to a database object and, while busy testing it and before committing this change to the revision control system, another developer performs an import using the source code most recently updated from the revision control system. Without conflict detection the program would revert any changes made by the first developer and align the object with the source code in the repository. Ideally the tool should not inadvertently overwrite such work in progress. The tool maintains a copy of all source files being exported or imported in a directory with name .spoor-base. If, on import, the base file is different from the database then the tool declares a conflict and skips importing of the corresponding database object. This conflict is detected per source statement within a source file and not for the entire source file. Similarly the tool will, on export, declare a conflict for an object if the code in the source file is different from the corresponding source code in the base file.
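The import-side conflict decision can be illustrated with a sketch that compares three versions of each object's source: the base file, the current database extract, and the repository source. The function name and the tuple layout are assumptions for illustration, not the tool's actual interface.

```python
def normalize(sql: str) -> str:
    """Collapse whitespace so the comparison ignores formatting differences."""
    return " ".join(sql.split())

def plan_import(objects):
    """objects: iterable of (name, base_src, db_src, repo_src) tuples.
    Returns (to_import, conflicts): an object whose database state still
    matches the base file is safe to align with the repository source;
    one that has drifted from the base file is skipped as a conflict."""
    to_import, conflicts = [], []
    for name, base_src, db_src, repo_src in objects:
        if normalize(base_src) != normalize(db_src):
            conflicts.append(name)          # work in progress in the database
        elif normalize(db_src) != normalize(repo_src):
            to_import.append(name)          # safe to update from the repository
    return to_import, conflicts
```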
The tool will report that the import has failed and that the schema(s) in the target database do not match the source code on the file system if:
* Any remaining update, create or drop statement, or any statement in a patch file, is left on the re-process queue.
* Any table does not match the corresponding source code on the file system.
* Any conflict was detected.
Claims (8)
- What is claimed is: 1. A method of propagating, either the entirety or a subset of the database objects belonging to a database schema, from a source schema to one or more target schemas on either the same or a different database instance, this method comprising the identification, by a computer program embodying this invention, of all objects that qualify for extraction, where the database instance and source schema name are provided as input to the computer program, and where for all such identified schema objects the computer program then extracts the database source code into a set of files on a computer file system, the extracted source code being valid executable source code but normalized with respect to any specific database instance; where the source code on the file system is then imported into a target schema on either the same or a second database instance, by means of a computer program that first extracts the source code from this target schema and, after whitespace normalization of both source code representations, executes a character by character comparison to determine if there are any differences; and for each such object where the computer program has determined that the extracted source code from the target schema is different from the source on the file system the computer program then executes database statements to update the target schema objects so that all objects in the target schema match the corresponding create statements in the source files, and, while installing, dropping and updating objects in the target database schema, all installation dependencies amongst the objects in the target schema are resolved such that all objects can be successfully created or updated; and after completing the updating of all objects that needed to be updated, the computer program then extracts the target schema source code once again and compares this with the source code in the source files on the file system to report whether the objects in the target schema match the corresponding object definitions in the source files, thus reporting whether the importing was successful.
- 2. The method of claim 1 where a set of configuration file properties provides a distinct directory path for each object type, where this directory path is used as the location for the generated source files during export and also where the source files are retrieved from during import, and where another set of configuration file properties provides a file extension for each database object type where each extension is to be used as the extension of the file name of the source file where the generated source code is stored in, and where the main part of the name of the file where the object source code is stored in is set to the name of the object as it appears in the database, where each combination of directory path and file extension uniquely identifies an object type, where object types can be one of, but not limited to, table, cluster, dimension, indextype, library, operator, view, materialized view, index, constraint, trigger, function, procedure, package, package body, type, type body, sequence, privilege, refresh group, database link.
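The directory-and-extension mapping of claim 2 can be illustrated with a hypothetical configuration. Every path and extension below is an invented example, not a value from the patent; the point is only that each (directory, extension) pair is unique, so a file's location and name jointly identify its object type.

```python
# Hypothetical (directory, extension) pair per object type; all values invented.
TYPE_LOCATIONS = {
    "TABLE":        ("tables",   ".tab"),
    "VIEW":         ("views",    ".vw"),
    "PACKAGE":      ("packages", ".pks"),
    "PACKAGE BODY": ("packages", ".pkb"),
    "TRIGGER":      ("triggers", ".trg"),
}

def source_path(object_type: str, object_name: str) -> str:
    """Export side: the main part of the file name is the object name
    as it appears in the database."""
    directory, ext = TYPE_LOCATIONS[object_type]
    return f"{directory}/{object_name}{ext}"

def type_for_path(path: str) -> str:
    """Import side: recover the object type from directory + extension."""
    directory, _, filename = path.rpartition("/")
    for obj_type, (d, ext) in TYPE_LOCATIONS.items():
        if d == directory and filename.endswith(ext):
            return obj_type
    raise ValueError(f"no object type configured for {path}")
```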
- 3. The method of claim 1 where the subset of objects to include during export is determined by the list of directories, passed as input arguments to the computer program embodying this invention, such that all object types that map to these directory paths, or subdirectories of these directory paths, as per the configuration file properties of claim 2, are then included by the computer program when exporting source code from the source schema to the file system, and where the subset of objects to import to a target schema is determined by the directory and file name arguments that are provided as input to the computer program embodying this invention, such that all files that are physically present in these directory paths or in the subdirectory tree of these directory paths are identified by the computer program to be included during the import process, where the expected object types of the source files are as per the configuration file properties for directories as per claim 2.
- 4. The method of claim 2 where, for each object type to export, the list of objects to export is determined by the computer program embodying this invention, by taking the list of all objects present in the data dictionary for that object type and schema but excluding all objects that are generated automatically, as secondary objects, by the database system.
- 5. The method of claim 1 where the computer program achieves the goal of updating the objects in the target schema to match the source file create statements by dropping an object when the create statement for the object is not present in the corresponding source file, and executing the create statement for an object as obtained from the source file if the corresponding object does not exist in the target schema, and otherwise, for those objects that need to be updated since the source code extracted from the target schema is different from the source code in the source files, for object types that support it the objects are re-created in place (REPLACE keyword), whereas for those object types that cannot be re-created in place, the objects are first dropped from the database schema and then re-created again using the create statement from the source file, with the exception of database tables that are created when not existing in the target schema but are not updated or dropped by the computer program, where for database tables the computer program instead facilitates the updating or dropping of tables via the execution of developer provided scripted patch files.
- 6. The method of claim 5 where the computer program, that embodies this invention, identifies scripted patch files on the file system, which have been prepared by developers for tables, by file name where a specific portion of the file name indicates a specific revision of the table source code in a revision control system such that the patch file will only be regarded as applicable to the current state of the table and therefore executed by the computer program if the table source code as extracted from the target schema matches the source code of the specific revision of the file as existing in the revision control system; and where scripted patch files are in this way repeatedly identified and executed when found applicable, until the table extracted source code matches the table create statement in the source file or until no applicable patch files can be found on the file system.
- 7. The method of claim 1 where installation dependencies are resolved by the computer program embodying this invention by first following a predefined order of installation according to the type of the object, where if at any point a statement to create, drop or update an object fails, the corresponding code statement is placed at the end of a retry-queue, where, after completion of the installation or updating according to object type, statements from the front of the retry-queue are re-executed, and where execution succeeds the statement is removed from the queue and where the statement fails again it is put back at the end of the queue, and where this process is continued until all remaining statements in the queue fail consecutively, after which the program exits with a fail condition.
- 8. The method of claim 1 where, when the user requests this as an input argument, the computer program, which is an embodiment of this invention, can maintain a set of base files for both the import and export process such that the base files are a set of duplicate files of the source files being imported, such that the base files keep a record of the state the target database schema was in at the end of the last import process, such that the computer program can detect whether a change was made to the target database schema subsequent to the last import process by another tool or developer, in which case the computer program indicates in its output that a conflict condition has occurred for a particular object, upon which the object is not updated in the target schema, and where the same base file mechanism is in turn also used by the computer program to detect during the exporting process when the source files have been changed since the last export, such that intentional changes to the source files on the local file system are not inadvertently overwritten.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1022108.3A GB2486914A (en) | 2010-12-30 | 2010-12-30 | Source code control of relational databases |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB201022108D0 GB201022108D0 (en) | 2011-02-02 |
| GB2486914A true GB2486914A (en) | 2012-07-04 |
Family
ID=43599114
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB1022108.3A Withdrawn GB2486914A (en) | 2010-12-30 | 2010-12-30 | Source code control of relational databases |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2486914A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20220121640A1 (en) * | 2020-10-21 | 2022-04-21 | Western Digital Technologies, Inc. | Emulation of relational data table relationships using a schema |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113986938A (en) * | 2021-10-28 | 2022-01-28 | 西安热工研究院有限公司 | Basic data and object initialization method of application system |
| CN116737698B (en) * | 2023-08-14 | 2023-11-28 | 金篆信科有限责任公司 | Distributed database configuration comparison method, device, equipment and storage medium |
| CN118733061B (en) * | 2024-08-30 | 2024-12-13 | 中免日上互联科技有限公司 | Automatic deployment method, system, equipment and medium for database |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030212789A1 (en) * | 2002-05-09 | 2003-11-13 | International Business Machines Corporation | Method, system, and program product for sequential coordination of external database application events with asynchronous internal database events |
| US20090292740A1 (en) * | 2008-05-23 | 2009-11-26 | Orbitz Worldwide, L.L.C | Database management system and method |
| US20090300075A1 (en) * | 2008-06-02 | 2009-12-03 | Guan Ruifeng | Method and System for Data Definition Language (DDL) Replication |
Also Published As
| Publication number | Publication date |
|---|---|
| GB201022108D0 (en) | 2011-02-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US7401085B2 (en) | System and method for controlling the release of updates to a database configuration | |
| US10013248B2 (en) | Reducing downtime during upgrades of interrelated components in a database system | |
| EP3477500B1 (en) | Deploying changes in a multi-tenancy database system | |
| US8495564B2 (en) | Automated merging in a software development environment | |
| US7231410B1 (en) | Revision control system for large-scale systems management | |
| US8195621B2 (en) | Database schema management system | |
| US7970745B2 (en) | Schema version management for database management | |
| US8479158B2 (en) | Business information warehouse toolkit and language for warehousing simplification and automation | |
| US10339113B2 (en) | Method and system for effecting incremental changes to a repository | |
| US8527460B2 (en) | Method for carrying out database version control | |
| US20080098046A1 (en) | Low-downtime and zero-downtime upgrades of database-centric applications | |
| US20080140732A1 (en) | Method and system for sharing file based data | |
| Artho et al. | Why do software packages conflict? | |
| US9424323B2 (en) | Application tier data dictionary | |
| US11494349B2 (en) | Code packager with SQL parser | |
| GB2486914A (en) | Source code control of relational databases | |
| US20060123059A1 (en) | Method for organizing a digital database in a traceable form | |
| US11100131B2 (en) | Simulation of a synchronization of records | |
| US11681523B1 (en) | Metadata model and use thereof for cloud native software systems | |
| US7849456B2 (en) | Method, system and computer program product for synchronizing source code repositories | |
| Gorman | Entity Framework: Code-First Database Interaction | |
| CN112800136A (en) | Data selective cascade synchronization tool and synchronization method thereof | |
| Hauser | Review of db4o from db4objects | |
| Ravikumar et al. | Database Upgrades in Data Guard Environments | |
| Kuhn | Tables and Constraints |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |