CN120215994A - A database upgrade method - Google Patents
- Publication number
- CN120215994A (application CN202510271984.5A)
- Authority
- CN
- China
- Prior art keywords
- engine
- transaction
- old
- database
- version
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
- G06F8/65—Updates
- G06F8/656—Updates while running
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/21—Design, administration or maintenance of databases
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
- G06F16/2308—Concurrency control
- G06F16/2336—Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
- G06F16/2343—Locking methods, e.g. distributed locking or locking implementation details
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
- G06F16/2358—Change logging, detection, and notification
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
- G06F16/2365—Ensuring data consistency and integrity
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
- G06F16/2379—Updates performed during online database operations; commit processing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
- G06F16/273—Asynchronous replication or reconciliation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/544—Buffers; Shared memory; Pipes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0823—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
- H04L41/0836—Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability to enhance reliability, e.g. reduce downtime
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0894—Policy-based network configuration management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0896—Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/22—Traffic shaping
- H04L47/225—Determination of shaping rate, e.g. using a moving window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/30—Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/70—Admission control; Resource allocation
- H04L47/76—Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/63—Routing a service request depending on the request content or context
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Data Mining & Analysis (AREA)
- Computer Security & Cryptography (AREA)
- Computing Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention relates to the technical field of database analysis, and in particular to a database upgrading method. The method comprises: establishing a dual-engine parallel environment by starting the new-version and old-version database engines and constructing a cross-version communication channel; in response to the established communication channel, routing client requests in parallel to the new and old engines through a transaction distributor; analyzing data change characteristics in real time based on the transaction operation log generated by the old engine and generating an incremental synchronization instruction set; applying version-coordinated transaction lock management while the new engine replays data according to the generated incremental synchronization instruction set; and, after data consistency verification over a preset period is completed, switching the transaction route to the new engine and closing the old engine's write channel. Through dual-engine parallel processing, incremental data synchronization and a cross-version lock coordination mechanism, the method achieves uninterrupted service and strong data consistency during a database version upgrade.
Description
Technical Field
The invention relates to the technical field of database analysis, in particular to a database upgrading method.
Background
Conventional database version upgrade schemes generally face the dual challenges of poor service continuity and data consistency risk. In the prior art, a shutdown-and-migrate approach interrupts key business; in scenarios with high real-time requirements such as financial transactions and the Internet of Things, every hour of downtime can cause millions in economic losses. Online hot-upgrade methods reduce downtime, but they are constrained by the serial processing mechanism of a single-engine architecture and still suffer from long data migration windows and poor cross-version transaction compatibility, easily leading to data loss or inconsistent state. When large-scale table structure changes or storage engine replacement are involved, traditional schemes lack effective concurrency control: lock granularity that is too coarse degrades service response latency, while fine-grained lock management tends to produce inter-version deadlocks.
In addition, static resource allocation strategies struggle to cope with the dynamic load fluctuation of the new and old engines during the upgrade, so idle resources and bottlenecks easily coexist. Data verification in the prior art mostly relies on after-the-fact full comparison and cannot capture incremental differences in real time, so rollback decisions lag behind. These drawbacks severely limit the availability and upgrade reliability of critical business systems.
Disclosure of Invention
The present invention is directed to a method for upgrading a database, so as to solve the problems set forth in the background art.
In order to achieve the above purpose, the invention provides a method for upgrading a database, comprising the following steps:
S1, establishing a dual-engine parallel environment: starting the new-version and old-version database engines and constructing a cross-version communication channel;
S2, in response to the communication channel established in step S1, routing client requests in parallel to the new and old engines for execution through a transaction distributor;
S3, analyzing data change characteristics in real time and generating an incremental synchronization instruction set based on the transaction operation log generated by the old engine in step S2;
S4, applying version-coordinated transaction lock management while the new engine replays data according to the incremental synchronization instruction set generated in step S3;
S5, after the data consistency verification of a preset period in step S4 is completed, switching the transaction route to the new engine and closing the old engine's write channel. Through dual-engine parallel processing, incremental data synchronization and a cross-version lock coordination mechanism, steps S1 to S5 achieve uninterrupted service and strong data consistency during the database version upgrade.
As a further improvement of the present technical solution, the process of S1 specifically includes:
deploying the new-version and old-version database instances in parallel in a computing cluster, allocating initial computing resources, and setting up a load-sensing probe through which the dual-engine performance indicators, including CPU utilization, are collected;
establishing a bidirectional data pipeline comprising a multi-level buffer queue, wherein the buffer queue comprises three priority levels: urgent, normal, and batch;
triggering resource rebalancing according to a CPU utilization difference threshold, and dynamically adjusting network bandwidth allocation based on the buffer filling rate.
As a further improvement of the present technical solution, the execution performed by the transaction distributor in S2 includes:
executing write operations synchronously on both engines and comparing the results, and triggering a transaction rollback and recording the exception when the difference exceeds a preset threshold.
As a further improvement of the present technical solution, the process of S3 specifically includes:
parsing the transaction operation log into atomic data change units;
merging the units by table dimension to generate a batched operation instruction set containing operation-order constraints, which serves as the incremental synchronization instruction set.
As a further improvement of the present technical solution, the transaction lock management in S4 includes dynamically selecting a lock policy according to an operation feature in the incremental synchronization instruction set, and specifically includes:
dynamically selecting between table-level and row-level locks according to the operation characteristics;
setting a lock timeout mechanism dynamically matched to the instruction execution time;
synchronizing the dual-engine lock state through the cross-version communication channel.
As a further improvement of the present technical solution, synchronizing the dual-engine lock state includes:
when both engines request an exclusive lock on the same resource, automatically releasing the old-version lock according to a preset priority;
allowing both engines to lock the same resource simultaneously with shared locks, supporting parallel reads.
As a further improvement of the present technical solution, the data consistency verification of the preset period includes:
executing a full data comparison and real-time comparison of incremental changes within the preset period;
setting a consistency fault-tolerance threshold and terminating the verification process when it is exceeded.
As a further improvement of the present technical solution, the process of S5 specifically includes:
suspending write operations, switching the transaction route, and closing the old engine's write permission in stages;
retaining the old engine's read function after the switch for monitoring and verification;
releasing the network bandwidth associated with the old engine and tuning the performance of the new engine.
As a further improvement of the present technical solution, the process of triggering resource rebalancing according to the CPU utilization difference threshold specifically includes:
collecting the CPU utilization indicators of both engines in real time, periodically computing the difference between them, and comparing that difference with a preset CPU utilization difference threshold; when the difference exceeds the threshold, dynamically calculating a resource reallocation ratio with a feedback control model based on the current load weight coefficients and transaction priorities of the two engines, and sending a quota adjustment instruction to the resource scheduler according to that ratio, wherein the resource reallocation ratio preferentially guarantees the basic resource supply of the more heavily loaded engine.
As a further improvement of the present technical solution, the process of dynamically adjusting network bandwidth allocation based on the buffer filling rate specifically includes:
monitoring the buffer filling rate in real time; when the buffer filling rate threshold is reached, dynamically calculating the bandwidth allocation ratio according to the current load and its growth trend, optimizing transmission parameters with a sliding-window algorithm, and having the flow controller adjust each channel's bandwidth weight in real time, so that buffer capacity and network resources form a dynamic adaptation mechanism.
Compared with the prior art, the invention has the beneficial effects that:
The database upgrading method achieves zero business interruption during a database upgrade through a dual-engine parallel processing architecture. Combined with the cross-version communication channel and an intelligent transaction distribution mechanism, it ensures the compatibility and integrity of transactions executed cooperatively by the new-version and old-version engines. The incremental data synchronization technique generates a synchronization instruction set from transaction operation logs parsed in real time and, together with dynamic lock policy management, effectively reduces the risk of cross-version data conflicts. The multi-level buffer queue and the load-aware resource allocation mechanism dynamically adapt network bandwidth and computing resources, keeping system throughput stable during the upgrade.
In addition, the consistency verification flow combining full and incremental comparison rapidly identifies data differences within the preset period and triggers fault-tolerance handling, while the old engine's read function is retained for rollback verification in case of anomalies. On the basis of guaranteeing strong data consistency, the overall scheme significantly improves resource utilization efficiency and system reliability during the upgrade, providing a smooth upgrade guarantee for critical business systems.
Drawings
FIG. 1 is a schematic diagram of the method steps of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are only some, and not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to fig. 1, the present invention provides a method for upgrading a database, comprising the following steps:
S1, a dual-engine parallel environment is established: the new-version and old-version database engines are started, and a cross-version communication channel is constructed. This specifically comprises the following steps:
the new-version and old-version database instances are deployed in parallel in a computing cluster, initial computing resources (CPU core count, memory quota, and storage volume) are allocated to the two engines, and a load-sensing probe is started to collect the performance indicators of both engines in real time, wherein the dual engines are the new-version and old-version database engines;
establishing a bidirectional data pipeline to realize cross-version data interaction between the two engines, and setting up a multi-level buffer queue comprising three priority levels: urgent, normal, and batch;
dynamically monitoring key channel indicators over the bidirectional data pipeline, including the buffer filling rate, a quantitative indicator that directly reflects the channel load state and is used to trigger dynamic bandwidth adjustment;
periodically computing the CPU utilization difference between the two engines, triggering resource rebalancing when the difference exceeds a set threshold, and adjusting each engine's resource quota according to a dynamic algorithm, wherein triggering resource rebalancing according to the CPU utilization difference threshold specifically comprises:
collecting the CPU utilization indicators of both engines in real time, periodically computing the difference between them and comparing it with a preset CPU utilization difference threshold; when the difference exceeds the threshold, dynamically calculating a resource reallocation ratio with a feedback control model based on the current load weight coefficients and transaction processing priorities of the two engines, the feedback control model being used to preferentially guarantee the basic resource supply of the more heavily loaded engine;
when the buffer filling rate of the bidirectional data pipeline rises to a preset level, automatically increasing the network bandwidth allocation in proportion, reflecting the adaptive supply of channel resources, which specifically comprises:
monitoring the change of the data buffer filling rate in real time; when the filling rate is detected to exceed a preset buffer filling rate threshold, dynamically constructing a bandwidth allocation model based on the current network load state and the buffer growth trend, and preferentially increasing the bandwidth quota of channels whose buffers have a high filling rate;
at the same time, combining historical transmission efficiency data, calculating the optimal bandwidth ratio with a sliding-window algorithm, and having the flow controller adjust each channel's bandwidth allocation weight in real time, forming a dynamic balancing mechanism between buffer capacity and network resource supply, which effectively prevents buffer overflow or idling and continuously optimizes data transmission efficiency.
The established communication channel realizes real-time data interaction and dynamic resource coordination between the new and old engines through the bidirectional data pipeline and the multi-level buffer queue, ensures that the two engines remain able to cooperate under load fluctuation, and provides infrastructure support for bidirectional flow control and data consistency guarantees in subsequent stages.
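By way of illustration only, the following sketch shows one possible realization of the CPU-difference-triggered rebalancing and the fill-rate-driven bandwidth adjustment described above; the class names, thresholds, and the proportional feedback rule are assumptions made for this example, not the claimed implementation.

```python
# Illustrative sketch (not the claimed implementation): feedback-based
# resource rebalancing and bandwidth adjustment for the dual engines.
from dataclasses import dataclass

@dataclass
class EngineMetrics:
    cpu_util: float        # 0.0 - 1.0
    load_weight: float     # relative load weight coefficient
    txn_priority: float    # aggregate transaction priority

def rebalance_cpu_quota(old: EngineMetrics, new: EngineMetrics,
                        diff_threshold: float = 0.20):
    """Return (old_share, new_share) CPU quota fractions, or None if balanced.

    When the utilization gap exceeds the threshold, a simple feedback rule
    shifts quota toward the more heavily loaded engine, weighted by its
    load weight and transaction priority (an assumed model).
    """
    diff = abs(old.cpu_util - new.cpu_util)
    if diff <= diff_threshold:
        return None  # no rebalancing needed
    old_score = old.load_weight * old.txn_priority * old.cpu_util
    new_score = new.load_weight * new.txn_priority * new.cpu_util
    total = (old_score + new_score) or 1.0
    # Guarantee a minimum 20% quota so neither engine is starved.
    old_share = max(0.2, min(0.8, old_score / total))
    return old_share, 1.0 - old_share

def adjust_bandwidth(fill_rates: dict, fill_threshold: float = 0.7):
    """Distribute bandwidth weights across the urgent/normal/batch channels.

    Channels whose buffer fill rate exceeds the threshold receive a larger
    share, approximating the 'adaptive supply' behaviour described above.
    """
    boosted = {ch: (rate * 2.0 if rate > fill_threshold else rate)
               for ch, rate in fill_rates.items()}
    total = sum(boosted.values()) or 1.0
    return {ch: round(v / total, 3) for ch, v in boosted.items()}

if __name__ == "__main__":
    old_eng = EngineMetrics(cpu_util=0.85, load_weight=1.0, txn_priority=1.2)
    new_eng = EngineMetrics(cpu_util=0.40, load_weight=0.8, txn_priority=1.0)
    print("CPU quota:", rebalance_cpu_quota(old_eng, new_eng))
    print("Bandwidth:", adjust_bandwidth({"urgent": 0.9, "normal": 0.5, "batch": 0.3}))
```

In this sketch the 20%/80% quota bounds simply prevent either engine from being starved; a real deployment would tune these limits and the boost factor to its own workload.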
S2, in response to the communication channel established in step S1, client requests are routed to the new and old engines for execution through the transaction distributor, which specifically comprises the following steps:
identifying the transaction type of the client request (query, update, or structure change) and extracting its key characteristics (such as the SQL operation pattern and table-object version dependencies);
calculating the real-time service capability indicators of both engines by combining the CPU utilization and buffer filling rates monitored in stage S1;
sending write operations to both engines for simultaneous execution and comparing the results returned by the two engines field by field; when the difference in execution time between the two engines exceeds a preset threshold, a transaction rollback is triggered and the exception is recorded, and operational consistency is ensured through the result comparison (a simplified sketch of this dual-execution step is given after the next paragraph);
In the old-version database engine, a transaction operation log is recorded whenever a transaction operation (such as a query, update, or structure change) occurs. The log contains detailed information about the transaction. A log collection tool or agent (such as Fluentd or Logstash) collects the transaction operation log from the old-version engine in real time, reading and forwarding the log files as they are written.
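A minimal sketch of the dual-execution and comparison step described above might look like the following; the engine interface (execute/rollback/commit returning result rows), the field-by-field comparison, and the timing threshold are assumptions made for the example.

```python
# Illustrative sketch: dual-engine write dispatch with result comparison
# and rollback on excessive divergence between the two engines.
import time

class DistributorError(Exception):
    pass

def execute_write(old_engine, new_engine, sql, params=(),
                  max_time_diff_s: float = 0.5):
    """Run a write on both engines; roll back and raise if they diverge.

    `old_engine` / `new_engine` are assumed to expose execute() returning
    the affected/result rows, plus rollback() and commit().
    """
    t0 = time.monotonic()
    old_rows = old_engine.execute(sql, params)
    t_old = time.monotonic() - t0

    t0 = time.monotonic()
    new_rows = new_engine.execute(sql, params)
    t_new = time.monotonic() - t0

    # Field-by-field comparison of the returned rows.
    mismatch = list(old_rows) != list(new_rows)
    too_slow = abs(t_old - t_new) > max_time_diff_s

    if mismatch or too_slow:
        old_engine.rollback()
        new_engine.rollback()
        raise DistributorError(
            f"dual-engine divergence: mismatch={mismatch}, "
            f"time_diff={abs(t_old - t_new):.3f}s")
    old_engine.commit()
    new_engine.commit()
    return new_rows
```

Rolling both engines back on divergence keeps the two copies identical, so the incremental synchronization in S3 never has to reconcile a half-applied write.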
S3, the data change characteristics are analyzed in real time and an incremental synchronization instruction set is generated based on the transaction operation log produced by the old engine in step S2, which specifically comprises the following steps (an illustrative sketch follows these steps):
continuously collecting the transaction operation log generated by the old-version engine and parsing its log entries into atomic data change units;
merging the parsed atomic data change units into batched operation instructions by table dimension and marking operation-order constraints in the generated synchronization instruction set, which serves as the incremental synchronization instruction set.
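For illustration, one possible way to parse log entries into atomic change units and merge them by table is sketched below; the log record format, the field names (`table`, `op`, `row`), and the grouping rule are assumptions of this sketch, not part of the claimed method.

```python
# Illustrative sketch: parse old-engine transaction log entries into atomic
# change units and merge them per table into an ordered instruction set.
from collections import defaultdict
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class ChangeUnit:
    seq: int                  # global order taken from the log position
    table: str
    op: str                   # "insert" | "update" | "delete"
    row: Dict[str, Any]

def parse_log_entries(entries: List[dict]) -> List[ChangeUnit]:
    """Turn raw log records (assumed JSON-like dicts) into atomic units."""
    return [ChangeUnit(seq=i, table=e["table"], op=e["op"], row=e["row"])
            for i, e in enumerate(entries)]

def build_sync_instruction_set(units: List[ChangeUnit]) -> Dict[str, List[ChangeUnit]]:
    """Group units by table while keeping the original operation order,
    which acts as the operation-order constraint of the instruction set."""
    grouped: Dict[str, List[ChangeUnit]] = defaultdict(list)
    for unit in sorted(units, key=lambda u: u.seq):
        grouped[unit.table].append(unit)
    return dict(grouped)

if __name__ == "__main__":
    raw = [
        {"table": "orders", "op": "insert", "row": {"id": 1, "amt": 10}},
        {"table": "users",  "op": "update", "row": {"id": 7, "name": "a"}},
        {"table": "orders", "op": "delete", "row": {"id": 1}},
    ]
    print(build_sync_instruction_set(parse_log_entries(raw)))
```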
S4, version-coordinated transaction lock management is applied while the new engine replays data according to the incremental synchronization instruction set generated in step S3, which specifically comprises the following steps:
S4.1, dynamically selecting a lock policy according to the operation characteristics (such as batch writes versus single-row updates) marked in the incremental synchronization instruction set generated in S3 (a simplified selection sketch is given after the following items), wherein:
a table-level lock is used for structure changes or whole-table data migration, locking the entire table to guarantee the atomicity of the operation;
a row-level lock is used for single-row data operations, locking only the target data row to maximize concurrency;
a lock timeout mechanism sets an upper limit on the lock holding time, dynamically matched to the estimated execution time of the instruction, to prevent deadlock or long-term blocking.
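A minimal sketch of the lock-policy selection in S4.1 follows, under the assumption that each instruction carries an operation-type tag and an estimated execution time; the names `op_type` and `est_time_s` and the 1.5x safety margin are hypothetical.

```python
# Illustrative sketch: choose table-level vs row-level locks per instruction
# and derive a timeout from the estimated execution time.
def choose_lock(instruction: dict) -> dict:
    """instruction is assumed to look like
    {"op_type": "ddl" | "bulk_write" | "row_update", "est_time_s": float}."""
    if instruction["op_type"] in ("ddl", "bulk_write"):
        lock_type = "table"      # structure change / whole-table migration
    else:
        lock_type = "row"        # single-row operation, maximise concurrency
    # Timeout dynamically matched to the estimated execution time,
    # with a small safety margin (factor chosen arbitrarily here).
    timeout_s = max(1.0, instruction["est_time_s"] * 1.5)
    return {"lock_type": lock_type, "timeout_s": timeout_s}
```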
S4.2, synchronizing the lock state held by the old-version engine (such as lock type, lock range, and holding time) to the new-version engine in real time through the cross-version communication channel established in stage S1 (an illustrative coordination sketch follows the two rules below), wherein:
when both engines request an exclusive lock on the same resource, the old-version lock is automatically released according to the preset priority (the new version takes priority);
both engines are allowed to place shared locks on the same resource simultaneously, supporting parallel reads;
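Purely as an illustration of the coordination rules in S4.2, a toy cross-version lock table might look like the following; the data structure and the way the priority rule is expressed in code are assumptions, not the claimed implementation.

```python
# Illustrative sketch: cross-version lock table implementing
# "new engine wins exclusive conflicts" and "shared locks may coexist".
class CrossVersionLockTable:
    def __init__(self):
        # resource -> list of (engine, mode) with mode in {"S", "X"}
        self.locks = {}

    def request(self, engine: str, resource: str, mode: str) -> bool:
        """engine is 'old' or 'new'; returns True if the lock is granted."""
        holders = self.locks.setdefault(resource, [])
        if mode == "S" and all(m == "S" for _, m in holders):
            holders.append((engine, "S"))      # parallel reads allowed
            return True
        if mode == "X":
            if not holders:
                holders.append((engine, "X"))
                return True
            # Exclusive conflict: the new engine takes priority, so any
            # lock held only by the old engine is released automatically.
            if engine == "new" and all(e == "old" for e, _ in holders):
                holders.clear()
                holders.append(("new", "X"))
                return True
        return False                            # caller waits or retries
```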
S4.3, defining a preset time period (such as 24 hours or 48 hours) and continuously verifying data consistency during that period; when the verification period starts, performing one full data comparison to ensure that all tables and data in the new and old engines are completely consistent;
during the verification period, continuously monitoring and recording the incremental data changes of the new and old engines and comparing them in real time, including transaction log parsing, data synchronization instruction set generation, real-time comparison, and consistency determination (an illustrative sketch follows these items), wherein:
transaction log parsing: continuously collecting the transaction operation log generated by the old-version engine and parsing its log entries into atomic data change units;
instruction set generation: merging the parsed atomic data change units into batched operation instructions by table dimension and marking operation-order constraints in the generated synchronization instruction set;
real-time comparison: comparing the incremental synchronization instructions executed by the new engine against the actual data of the old engine in real time to ensure that every change is applied correctly;
consistency determination: defining a fault-tolerance threshold for data consistency, such as a maximum allowed number or proportion of inconsistencies, continuously monitoring that number or proportion during the verification period, and terminating the verification process immediately when the set threshold is exceeded.
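By way of example, the fault-tolerance check of S4.3 could be sketched as follows; the comparison callables and the threshold value are placeholders for whatever full and incremental comparison the deployment actually performs.

```python
# Illustrative sketch: consistency verification loop with a fault-tolerance
# threshold that terminates verification when exceeded.
from typing import Callable, Iterable, Tuple

class VerificationFailed(Exception):
    pass

def verify_consistency(full_compare: Callable[[], int],
                       incremental_batches: Iterable[Tuple[int, int]],
                       max_mismatches: int = 10) -> int:
    """full_compare() returns the mismatch count of the initial full check;
    incremental_batches yields (checked, mismatched) pairs over the period.
    Raises VerificationFailed once the cumulative mismatch count exceeds
    the allowed threshold; otherwise returns the total mismatches seen."""
    mismatches = full_compare()
    if mismatches > max_mismatches:
        raise VerificationFailed(f"full comparison found {mismatches} mismatches")
    for _, batch_mismatches in incremental_batches:
        mismatches += batch_mismatches
        if mismatches > max_mismatches:
            raise VerificationFailed(
                f"fault-tolerance threshold exceeded ({mismatches} > {max_mismatches})")
    return mismatches
```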
S5, after the data consistency verification of the preset period in step S4 is completed, the transaction route is switched to the new engine and the old engine's write channel is closed; that is, write-operation suspension, transaction route switching, and closing of the old engine's write permission are executed in stages, the old engine's read function is retained for post-switch monitoring and verification, the network bandwidth associated with the old engine is released, and the new engine is performance-tuned. The specific steps comprise (an illustrative ordering is sketched after these steps):
suspending write operation requests from all clients to ensure that no new data changes occur;
stopping the mode in which write operations are sent to both the new and old engines for simultaneous execution, during which all read and write requests are temporarily suspended;
modifying the configuration of the transaction distributor so that it routes all subsequent transaction requests (both reads and writes) only to the new-version database engine;
after confirming that the new engine can independently handle all types of operation requests, setting the new engine to exclusive mode so that it carries the entire workload;
cutting off the old engine's write permission to prevent any accidental data writes, while retaining its read function for a period of time to facilitate monitoring and verification;
removing the portions of the bidirectional data pipeline related to the old engine, releasing network bandwidth and other resources that are no longer used, and performance-tuning the new engine.
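Finally, an illustrative ordering of the staged cutover of S5 is sketched below; the router, engine, and resource-manager interfaces are hypothetical names introduced only for this example.

```python
# Illustrative sketch: staged cutover from dual-engine mode to the new engine.
def staged_cutover(router, old_engine, new_engine, resource_manager):
    """Each argument is assumed to expose the small interface used below."""
    router.pause_writes()                 # 1. suspend client write requests
    router.stop_dual_execution()          # 2. stop mirroring writes to both engines
    router.route_all_to(new_engine)       # 3. reads and writes go to the new engine only
    new_engine.set_exclusive_mode()       # 4. new engine carries the full workload
    old_engine.revoke_write_permission()  # 5. old engine becomes read-only for verification
    resource_manager.release_pipeline(old_engine)   # 6. free old-engine bandwidth/resources
    new_engine.tune_performance()         # 7. post-switch performance tuning
    router.resume_writes()                # 8. resume normal traffic on the new engine
```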
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the above-described embodiments, and that the above-described embodiments and descriptions are only preferred embodiments of the present invention, and are not intended to limit the invention, and that various changes and modifications may be made therein without departing from the spirit and scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202510271984.5A CN120215994B (en) | 2025-03-07 | 2025-03-07 | Database upgrading method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202510271984.5A CN120215994B (en) | 2025-03-07 | 2025-03-07 | Database upgrading method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN120215994A (en) | 2025-06-27 |
CN120215994B (en) | 2025-08-12 |
Family
ID=96102000
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202510271984.5A Active CN120215994B (en) | 2025-03-07 | 2025-03-07 | Database upgrading method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN120215994B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6351753B1 (en) * | 1998-02-20 | 2002-02-26 | At&T Corp. | Method and apparatus for asynchronous version advancement in a three version database |
CN105122241A (en) * | 2013-03-15 | 2015-12-02 | 亚马逊科技公司 | Database system with database engine and independent distributed storage service |
CN107077495A (en) * | 2014-10-19 | 2017-08-18 | 微软技术许可有限责任公司 | High performance transaction in data base management system |
CN113722052A (en) * | 2021-08-23 | 2021-11-30 | 华中科技大学 | Nonvolatile memory updating method based on data double versions |
CN118733604A (en) * | 2024-09-03 | 2024-10-01 | 天翼视联科技有限公司 | A database updating method, device, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN120215994B (en) | 2025-08-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Luo et al. | On performance stability in LSM-based storage systems (extended version) | |
CN110362390B (en) | Distributed data integration job scheduling method and device | |
CN111597015B (en) | Transaction processing method and device, computer equipment and storage medium | |
CN104915407B (en) | A kind of resource regulating method based under Hadoop multi-job environment | |
CN104407926B (en) | A kind of dispatching method of cloud computing resources | |
CN112068934B (en) | Control system and method for shrinking container cloud service instance | |
CN110781244B (en) | Method and device for controlling concurrent operation of database | |
CN108491159B (en) | A checkpoint data writing method for massively parallel systems based on random delays to alleviate I/O bottlenecks | |
CN117032974A (en) | Dynamic scheduling method and terminal based on resource application | |
CN109241194A (en) | The load-balancing method and device of Database Systems based on High-Performance Computing Cluster distribution | |
CN119739498A (en) | Intelligent resource scheduling and dynamic data priority management method and system | |
CN106557492A (en) | A kind of method of data synchronization and device | |
CN111737226B (en) | A method for optimizing HBase cluster performance based on Redis cluster | |
WO2022266975A1 (en) | Method for millisecond-level accurate slicing of time series stream data | |
Zheng et al. | Lion: Minimizing distributed transactions through adaptive replica provision | |
CN120215994B (en) | Database upgrading method | |
CN114297002A (en) | A method and system for mass data backup based on object storage | |
Zhuang et al. | Geotp: Latency-aware geo-distributed transaction processing in database middlewares (extended version) | |
Zhang et al. | Online nonstop task management for storm-based distributed stream processing engines | |
CN118193565A (en) | Distributed big data calculation engine | |
Bouazizi et al. | Management of QoS and data freshness in RTDBSs using feedback control scheduling and data versions | |
CN118677913B (en) | Distributed storage service request processing method, device and distributed storage system | |
CN117311992B (en) | Method for predicting and automatically dynamically balancing internal resources of cluster based on established resources | |
CN120353839B (en) | Multi-level caching method, equipment and medium for distributed database | |
Jiang et al. | The analysis and design of ship monitoring system based on hybrid replication technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |