
CN114064360B - Important information backup and encryption method based on combination of big data analysis and cloud computing - Google Patents


Info

Publication number
CN114064360B
Authority
CN
China
Prior art keywords
data
information
backup
important
analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111353162.XA
Other languages
Chinese (zh)
Other versions
CN114064360A (en)
Inventor
孙宇宁
黄翔
陈英达
刘晓静
谢志行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Southern Power Grid Digital Platform Technology Guangdong Co ltd
Southern Power Grid Digital Grid Research Institute Co Ltd
Original Assignee
Southern Power Grid Digital Grid Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Southern Power Grid Digital Grid Research Institute Co Ltd
Priority to CN202111353162.XA
Publication of CN114064360A
Application granted
Publication of CN114064360B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14: Error detection or correction of the data by redundancy in operation
    • G06F 11/1402: Saving, restoring, recovering or retrying
    • G06F 11/1446: Point-in-time backing up or restoration of persistent data
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/22: Indexing; Data structures therefor; Storage structures
    • G06F 16/2282: Tablespace storage structures; Management thereof
    • G06F 16/26: Visual data mining; Browsing structured data
    • G06F 16/28: Databases characterised by their database models, e.g. relational or object models
    • G06F 16/283: Multi-dimensional databases or data warehouses, e.g. MOLAP or ROLAP
    • G06F 16/284: Relational databases
    • G06F 16/285: Clustering or classification
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218: Protecting access to data via a platform to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6227: Protecting access to data where protection concerns the structure of data, e.g. records, types, queries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Bioethics (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an important information backup and encryption method based on the combination of big data analysis and cloud computing, which comprises the following steps: STEP1, information acquisition: log in to the DataWorks console, create a workspace table and import the data; on the table's editing page click DDL mode, enter the relevant table-creation statements in the DDL dialog box, click to generate the table structure, and after confirming the structure is correct click confirm and submit, uploading the information to the cloud over the network. The invention backs up and encrypts the important information, while unimportant general information is left unencrypted or only generally encrypted according to the enterprise's actual situation. The backed-up and encrypted important information is thereby effectively and regularly classified, making data retrieval faster and more efficient; redundant information is reduced and the efficiency of adding, deleting and modifying important information is multiplied, so that backup and encryption of important information proceed accurately and efficiently, saving the enterprise a great deal of time and effectively reducing enterprise costs.

Description

Important information backup and encryption method based on combination of big data analysis and cloud computing
Technical Field
The invention belongs to the technical field of cloud computing encryption, and particularly relates to an important information backup and encryption method based on the combination of big data analysis and cloud computing.
Background
With the development of cloud computing, more and more enterprises upload important information to the cloud for encryption and backup. Cloud computing essentially connects to servers over a network; such a server is a supercomputer with extremely fast computing speed, extremely strong computing capacity and extremely large storage space, and is in essence an integration of computer hardware. A user only needs to log in to operate on his own data, and of course the cloud provider charges according to its own usage-metering rules. Uploading information to the cloud as-is is tantamount to exposing it, so to prevent important information from being leaked, important information is generally backed up and encrypted.
At present, with the development of big data analysis, more and more enterprises choose to back up and encrypt important information by combining big data analysis with cloud computing. In this process, however, an enterprise's information is vast, and a considerable part of it can be disclosed or stored separately, so encrypting all of it wastes network resources. Meanwhile, the information to be backed up and encrypted is usually uploaded directly, without being reasonably sorted and organized; like a wardrobe into which clothes are stuffed without being folded and arranged, this easily makes the uploading process chaotic and inefficient. Both uploading and downloading the information then take more time, which is wasteful for enterprise management, so the chaos in the information backup and encryption process urgently needs to be resolved.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an important information backup and encryption method based on the combination of big data analysis and cloud computing, which has the advantages of accurate backup encryption and convenient retrieval.
In order to achieve the above purpose, the present invention provides the following technical solution: an important information backup and encryption method based on the combination of big data analysis and cloud computing, comprising the following steps:
STEP1, information acquisition: log in to the DataWorks console, create a workspace table and import the data; on the table's editing page click DDL mode, enter the relevant table-creation statements in the DDL dialog box, click to generate the table structure, and after confirming the structure is correct click confirm and submit, uploading the information to the cloud over the network;
STEP2, classification: the information uploaded to the cloud exists in the form of data, which must now be classified; a data classification model is established, the data is processed in the cloud's high-speed computing environment and then imported, so that the original arrangement is broken up and the data is presented anew;
STEP3, normalization: the reclassified data is ordered but presented in packaged form; it must now be unpacked and normalized, the links between the pieces of information sorted out, and the normalization completed;
STEP4, data backup: a backup module is established to back up the data in an orderly manner;
STEP5, reclassification of the backup data:
STEP51, classify according to the importance of the information: unimportant, disclosable data is filtered out so that the data is effectively classified, the important data is extracted and organized, and a data analysis model is used for the data analysis;
STEP52, classify according to the links between pieces of information: the links between the organized data are analyzed, a data analysis model is introduced to analyze and judge them, and the links between important data and general data are found and assessed; general data linked to important data is reclassified as important, so that the classification of the data achieves longitudinal linkage;
STEP6, the classified data is divided into four categories: (1) disclosable information; (2) general information that cannot be disclosed; (3) important confidential information; (4) core data;
STEP7, big data analysis: the data is imported into a big data analysis model: (1) the data first undergoes visual analysis, which makes it transparent; presenting the data visually is friendly to data-analysis experts and ordinary users alike; (2) data mining algorithms present the data to a machine for observation; clustering, partitioning, outlier analysis and other algorithms penetrate the data to mine its value; (3) predictive analysis: data mining lets the analyst understand the data better, and on the basis of the visual analysis and the data mining results the analyst can make predictive judgments; (4) a semantic engine is introduced to analyze and extract from the data; (5) data quality and data management: the data is processed through standardized processes and tools, ensuring a predefined, high-quality analysis result;
STEP8, establish a general data warehouse: the category (1) data from STEP6 is stored in it, and a login key is established;
STEP9, encryption: the category (2), (3) and (4) data from STEP6 is encrypted; a secret key K is randomly generated by an encryption module, and the category (2), (3) and (4) data is encrypted in sequence;
STEP10, storage: an important-data warehouse is established, and the encrypted important information is imported into the database.
Preferably, the information is acquired by building a table and uploading it. DataWorks enables fast uploading of local data; building the table locally avoids disorder and makes the upload faster and more efficient, and even if the upload is interrupted the data remains partitioned in table form, which facilitates subsequent retrieval and keeps the data effectively ordered.
Preferably, the classification in STEP2 is a primary classification. The data uploaded to the cloud is presented in table form, and the primary classification model is mainly based on a neural network engine. At this point the data is still disordered, and some irrelevant information does not need to be encrypted, so the data must undergo two stages of classification: the primary classification first separates out the unimportant, disclosable information, completing the first stage, after which the backup is performed; this guarantees the integrity of the data and creates the conditions for the subsequent, refined secondary classification.
Preferably, the data backup adopts a pack-and-upload approach, and the backup may be any one of an ordinary data backup method, a Linux virtual machine backup or a Windows virtual machine backup. Packing allows the data to be backed up quickly, and an administrator can choose among these options according to the type of enterprise and the type of data, so the backup has a wide selection space, which helps reduce enterprise costs.
Preferably, the data is reclassified in two ways: by the importance of the information and by the links between pieces of information. The primary classification filters the data sequentially, whereas the reclassification compares the links between the data and their sensitivity longitudinally. Confidential information concerning the enterprise is classified as confidential, and because some seemingly irrelevant information linked to it could, once disclosed, allow important core secrets to be deduced, the important information must be reclassified to prevent such problems.
Preferably, the category (2) information in STEP6 is data that has links to the category (3) or (4) information, and the category (2) information in STEP6 cannot be disclosed: although it does not itself involve the core secrets in the backup data uploaded by the enterprise, it is linked to the important secrets, and if disclosed, the core secrets could well be deduced from it, causing a serious leak of the important confidential data.
Preferably, the data classified in STEP2 falls into the following types: (1) nominal data; (2) ordinal data; (3) interval data; (4) ratio data; statistical data can be divided into these four types according to its level of measurement. Nominal data is the lowest level: the data is classified by category attribute, and the categories stand in a relation of equality. Ordinal data is an intermediate level: the data is divided into different categories whose quality can be compared through their ordering. Interval data allows the difference between two values to be obtained. Ratio data takes the same form of expression as interval data, both being actual measured values.
Preferably, the data warehouse in STEP10 is any one of Teradata AsterData, EMC Greenplum and HP Vertica, and is used for storing the important data; choosing the type of data warehouse to suit the data type improves the data storage environment and makes the data safer.
Compared with the prior art, the invention has the following beneficial effects:
according to the method, the information is uploaded and primarily classified by changing the information uploading mode, the information uploaded to the cloud is presented in the form of a data+table, the data is transversely and longitudinally cut through two classifications, general information which is not important and related to important information is filtered, the information is accurately cut by taking security as a standard, the important information is backed up and encrypted, the general information which is not important is selected to be not encrypted or generally encrypted according to the actual situation of an enterprise, the important information which is backed up and encrypted is effectively classified regularly, the data is searched more rapidly and efficiently, redundant information is reduced, the adding and deleting efficiency of the important information is doubled, the backup and encryption of the important information are developed accurately and efficiently, a large amount of time is saved for the enterprise, and the enterprise cost is effectively reduced.
Detailed Description
All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the invention, without inventive effort, fall within the scope of protection of the invention.
The invention provides the following technical solution: an important information backup and encryption method based on the combination of big data analysis and cloud computing, comprising the following steps:
STEP1, information acquisition: log in to the DataWorks console, create a workspace table and import the data; on the table's editing page click DDL mode, enter the relevant table-creation statements in the DDL dialog box, click to generate the table structure, and after confirming the structure is correct click confirm and submit, uploading the information to the cloud over the network;
STEP2, classification: the information uploaded to the cloud exists in the form of data, which must now be classified; a data classification model is established, the data is processed in the cloud's high-speed computing environment and then imported, so that the original arrangement is broken up and the data is presented anew;
STEP3, normalization: the reclassified data is ordered but presented in packaged form; it must now be unpacked and normalized, the links between the pieces of information sorted out, and the normalization completed;
STEP4, data backup: a backup module is established to back up the data in an orderly manner;
STEP5, reclassification of the backup data:
STEP51, classify according to the importance of the information: unimportant, disclosable data is filtered out so that the data is effectively classified, the important data is extracted and organized, and a data analysis model is used for the data analysis;
STEP52, classify according to the links between pieces of information: the links between the organized data are analyzed, a data analysis model is introduced to analyze and judge them, and the links between important data and general data are found and assessed; general data linked to important data is reclassified as important, so that the classification of the data achieves longitudinal linkage;
STEP6, the classified data is divided into four categories: (1) disclosable information; (2) general information that cannot be disclosed; (3) important confidential information; (4) core data;
STEP7, big data analysis: the data is imported into a big data analysis model: (1) the data first undergoes visual analysis, which makes it transparent; presenting the data visually is friendly to data-analysis experts and ordinary users alike; (2) data mining algorithms present the data to a machine for observation; clustering, partitioning, outlier analysis and other algorithms penetrate the data to mine its value; (3) predictive analysis: data mining lets the analyst understand the data better, and on the basis of the visual analysis and the data mining results the analyst can make predictive judgments; (4) a semantic engine is introduced to analyze and extract from the data; (5) data quality and data management: the data is processed through standardized processes and tools, ensuring a predefined, high-quality analysis result;
STEP8, establish a general data warehouse: the category (1) data from STEP6 is stored in it, and a login key is established;
STEP9, encryption: the category (2), (3) and (4) data from STEP6 is encrypted; a secret key K is randomly generated by an encryption module, and the category (2), (3) and (4) data is encrypted in sequence (an illustrative encryption sketch follows this step list);
STEP10, storage: an important-data warehouse is established, and the encrypted important information is imported into the database.
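The patent specifies a randomly generated secret key K for STEP9 but does not name a concrete cipher. As a hedged illustration only, the sketch below uses Fernet symmetric encryption from the Python cryptography package to stand in for the unspecified encryption module; the encrypt_categories helper and the category names are hypothetical.

# Illustrative sketch of STEP9, assuming the "cryptography" package is
# available; Fernet stands in for the unspecified encryption module.
from cryptography.fernet import Fernet

def encrypt_categories(categories):
    """Encrypt the category (2), (3) and (4) data in sequence with one random key K."""
    key_k = Fernet.generate_key()   # the randomly generated secret key K
    cipher = Fernet(key_k)
    return key_k, {
        name: [cipher.encrypt(record) for record in records]
        for name, records in categories.items()
    }

# Hypothetical records for the three non-disclosable categories of STEP6.
backup = {
    "general_non_disclosable": [b"supplier price list"],
    "important_confidential": [b"unreleased contract terms"],
    "core_data": [b"grid topology master record"],
}
key_k, sealed = encrypt_categories(backup)

In practice the key K itself would be held separately from the important-data warehouse of STEP10.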
The information is acquired by building a table and uploading it. DataWorks enables fast uploading of local data, and building the table locally makes the upload faster and more efficient; even if the upload is interrupted, the data remains separated in table form, which facilitates subsequent retrieval and keeps the data effectively ordered. A sketch of the kind of table-creation statement involved is given below.
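The patent describes the DDL step only as console operations. Under that assumption, the statement below shows the kind of table-creation DDL that might be entered in the DDL dialog box; the table name and columns are hypothetical, and the script merely splits and prints the statements for review before they are pasted into the console.

# Hypothetical table-creation (DDL) statement for the DataWorks DDL dialog;
# the table name and columns are illustrative only.
BACKUP_TABLE_DDL = """
CREATE TABLE IF NOT EXISTS enterprise_backup_info (
    record_id   BIGINT,
    category    STRING,    -- (1) disclosable through (4) core data
    content     STRING,
    uploaded_at DATETIME
);
"""

def split_statements(ddl):
    """Split a DDL script into individual statements for review."""
    return [s.strip() for s in ddl.split(";") if s.strip()]

for statement in split_statements(BACKUP_TABLE_DDL):
    print(statement)   # review, then paste into the DDL dialog and submit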
The classification in STEP2 is a primary classification. The data uploaded to the cloud is presented in table form, and the primary classification model is mainly based on a neural network engine. At this point the data is still disordered, and some irrelevant information does not need to be encrypted, so the data must undergo two stages of classification: the primary classification first separates out the unimportant, disclosable information, completing the first stage, after which the backup is performed; this guarantees the integrity of the data and creates the conditions for the subsequent, refined secondary classification. A minimal classifier sketch follows.
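The patent names only "a neural network engine" for the primary classification model. As a minimal sketch, assuming scikit-learn is available, a small multilayer perceptron over TF-IDF features stands in for that engine; the sample rows and labels are hypothetical.

# Minimal primary-classification sketch, assuming scikit-learn; an MLP
# over TF-IDF features stands in for the "neural network engine".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rows = ["public tariff notice", "core grid topology", "press release", "contract terms"]
labels = [0, 1, 0, 1]   # 0 = unimportant and disclosable, 1 = keep for encryption

primary_classifier = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
primary_classifier.fit(rows, labels)
print(primary_classifier.predict(["internal pricing memo"]))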
The data backup adopts a pack-and-upload approach, and the backup may be any one of an ordinary data backup method, a Linux virtual machine backup or a Windows virtual machine backup. Packing allows the data to be backed up quickly, and an administrator can choose among these options according to the type of enterprise and the type of data, so the backup has a wide selection space, which helps reduce enterprise costs. A packing sketch follows.
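As one hedged reading of the pack-and-upload approach, the sketch below packs a directory into a compressed archive with Python's standard tarfile module. The upload call is a placeholder, since the patent does not name a transfer API, and the virtual machine backup options would replace this mechanism wholesale.

# Pack-and-upload backup sketch using only the standard library.
import tarfile
from pathlib import Path

def pack_backup(source_dir, archive_path):
    """Pack a directory into a compressed archive ready for upload."""
    archive = Path(archive_path)
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=Path(source_dir).name)
    return archive

# archive = pack_backup("important_records", "backup_20211115.tar.gz")
# upload_to_cloud(archive)   # hypothetical transfer step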
The data is reclassified in two ways: by the importance of the information and by the links between pieces of information. The primary classification filters the data sequentially, whereas the reclassification compares the links between the data and their sensitivity longitudinally. Confidential information concerning the enterprise is classified as confidential, and because some seemingly irrelevant information linked to it could, once disclosed, allow important core secrets to be deduced, the important information must be reclassified to prevent such problems; the link-based promotion is sketched below.
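The longitudinal linkage of STEP52 can be read as a closure over a link graph: any general record reachable from an important record is promoted to important. The sketch below implements that reading with a breadth-first traversal; the record names and the adjacency mapping are hypothetical.

# Link-based promotion sketch for STEP52: general data linked to
# important data is itself reclassified as important.
from collections import deque

def promote_linked_records(important, links):
    """Return the closure of important records over the link graph."""
    promoted, frontier = set(important), deque(important)
    while frontier:
        record = frontier.popleft()
        for neighbour in links.get(record, ()):
            if neighbour not in promoted:
                promoted.add(neighbour)
                frontier.append(neighbour)
    return promoted

links = {"core_secret": {"meeting_notes"}, "meeting_notes": {"travel_log"}}
print(promote_linked_records({"core_secret"}, links))
# {'core_secret', 'meeting_notes', 'travel_log'}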
The category (2) information in STEP6 is data that has links to the category (3) or (4) information, and the category (2) information in STEP6 cannot be disclosed: although it does not itself involve the core secrets in the backup data uploaded by the enterprise, it is linked to the important secrets, and if disclosed, the core secrets could well be deduced from it, causing a serious leak of the important confidential data.
The data classified in STEP2 falls into the following types: (1) nominal data; (2) ordinal data; (3) interval data; (4) ratio data; statistical data can be divided into these four types according to its level of measurement. Nominal data is the lowest level: the data is classified by category attribute, and the categories stand in a relation of equality. Ordinal data is an intermediate level: the data is divided into different categories whose quality can be compared through their ordering. Interval data allows the difference between two values to be obtained. Ratio data takes the same form of expression as interval data, both being actual measured values. A toy illustration of the four levels follows.
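A toy illustration of the four measurement levels, with hypothetical field names:

# Nominal, ordinal, interval and ratio fields in one hypothetical record.
record = {
    "department": "grid-ops",   # nominal: categories, only equality comparisons
    "priority": "high",         # ordinal: ordered, but gaps are not meaningful
    "room_temp_c": 21.5,        # interval: differences meaningful, no true zero
    "size_mb": 148.0,           # ratio: true zero, so ratios are meaningful
}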
The data warehouse in STEP10 is any one of Teradata AsterData, EMC Greenplum and HP Vertica, and is used for storing the important data; choosing the type of data warehouse to suit the data type improves the data storage environment and makes the data safer. A storage sketch follows.
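As a runnable stand-in for the named warehouses, the sketch below stores only the category label and the ciphertext produced in STEP9, using SQLite purely so the example is self-contained; a production deployment would use the Teradata, Greenplum or Vertica client instead.

# Important-data warehouse sketch; SQLite stands in for the real warehouse.
import sqlite3

def store_encrypted(db_path, rows):
    """Insert (category, ciphertext) rows into the important-data warehouse."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS important_data "
            "(id INTEGER PRIMARY KEY, category TEXT, ciphertext BLOB)"
        )
        conn.executemany(
            "INSERT INTO important_data (category, ciphertext) VALUES (?, ?)", rows
        )

# store_encrypted("warehouse.db", [("core_data", b"...ciphertext from STEP9...")])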
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising" and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (8)

1. An important information backup and encryption method based on the combination of big data analysis and cloud computing, characterized in that the method comprises the following steps:
STEP1, information acquisition: log in to the DataWorks console, create a workspace table and import the data; on the table's editing page click DDL mode, enter the relevant table-creation statements in the DDL dialog box, click to generate the table structure, and after confirming the structure is correct click confirm and submit, uploading the information to the cloud over the network;
STEP2, classification: the information uploaded to the cloud exists in the form of data, which must now be classified; a data classification model is established, the data is processed in the cloud's high-speed computing environment and then imported, so that the original arrangement is broken up and the data is presented anew;
STEP3, normalization: the reclassified data is ordered but presented in packaged form; it must now be unpacked and normalized, the links between the pieces of information sorted out, and the normalization completed;
STEP4, data backup: a backup module is established to back up the data in an orderly manner;
STEP5, reclassification of the backup data:
STEP51, classify according to the importance of the information: unimportant, disclosable data is filtered out so that the data is effectively classified, the important data is extracted and organized, and a data analysis model is used for the data analysis;
STEP52, classify according to the links between pieces of information: the links between the organized data are analyzed, a data analysis model is introduced to analyze and judge them, and the links between important data and general data are found and assessed; general data linked to important data is reclassified as important, so that the classification of the data achieves longitudinal linkage;
STEP6, the classified data is divided into four categories: (1) disclosable information; (2) general information that cannot be disclosed; (3) important confidential information; (4) core data;
STEP7, big data analysis: the data is imported into a big data analysis model: (1) the data first undergoes visual analysis, which makes it transparent; presenting the data visually is friendly to data-analysis experts and ordinary users alike; (2) data mining algorithms present the data to a machine for observation; clustering, partitioning, outlier analysis and other algorithms penetrate the data to mine its value; (3) predictive analysis: data mining lets the analyst understand the data better, and on the basis of the visual analysis and the data mining results the analyst can make predictive judgments; (4) a semantic engine is introduced to analyze and extract from the data; (5) data quality and data management: the data is processed through standardized processes and tools, ensuring a predefined, high-quality analysis result;
STEP8, establish a general data warehouse: the category (1) data from STEP6 is stored in it, and a login key is established;
STEP9, encryption: the category (2), (3) and (4) data from STEP6 is encrypted; a secret key K is randomly generated by an encryption module, and the category (2), (3) and (4) data is encrypted in sequence;
STEP10, storage: an important-data warehouse is established, and the encrypted important information is imported into the database.
2. The important information backup and encryption method based on the combination of big data analysis and cloud computing according to claim 1, characterized in that: the information is acquired by building a table and uploading it, and DataWorks enables fast uploading of local data.
3. The important information backup and encryption method based on the combination of big data analysis and cloud computing according to claim 1, characterized in that: the classification in STEP2 is a primary classification, the data uploaded to the cloud is presented in table form, and the primary classification model is mainly based on a neural network engine.
4. The important information backup and encryption method based on the combination of big data analysis and cloud computing according to claim 1, characterized in that: the data backup adopts a pack-and-upload approach, and the backup may be any one of an ordinary data backup method, a Linux virtual machine backup or a Windows virtual machine backup.
5. The important information backup and encryption method based on the combination of big data analysis and cloud computing according to claim 1, characterized in that: the data is reclassified in two ways, by the importance of the information and by the links between pieces of information respectively.
6. The important information backup and encryption method based on the combination of big data analysis and cloud computing according to claim 1, characterized in that: the category (2) information in STEP6 is data having links to the category (3) or (4) information, and the category (2) information in STEP6 cannot be disclosed.
7. The important information backup and encryption method based on the combination of big data analysis and cloud computing according to claim 1, characterized in that: the data classified in STEP2 falls into the following types: (1) nominal data; (2) ordinal data; (3) interval data; (4) ratio data.
8. The important information backup and encryption method based on the combination of big data analysis and cloud computing according to claim 1, characterized in that: the data warehouse in STEP10 is any one of Teradata AsterData, EMC Greenplum and HP Vertica.
CN202111353162.XA 2021-11-15 2021-11-15 Important information backup and encryption method based on combination of big data analysis and cloud computing Active CN114064360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111353162.XA CN114064360B (en) 2021-11-15 2021-11-15 Important information backup and encryption method based on combination of big data analysis and cloud computing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111353162.XA CN114064360B (en) 2021-11-15 2021-11-15 Important information backup and encryption method based on combination of big data analysis and cloud computing

Publications (2)

Publication Number Publication Date
CN114064360A CN114064360A (en) 2022-02-18
CN114064360B (en) 2024-04-09

Family

ID=80272648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111353162.XA Active CN114064360B (en) 2021-11-15 2021-11-15 Important information backup and encryption method based on combination of big data analysis and cloud computing

Country Status (1)

Country Link
CN (1) CN114064360B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1231039A (en) * 1996-07-22 1999-10-06 Cyva研究公司 Tools for personal information security and exchange
CN103065088A (en) * 2011-09-20 2013-04-24 卡巴斯基实验室封闭式股份公司 System and method for detecting computer security threat based on decision of computer use
CN106951781A (en) * 2017-03-22 2017-07-14 福建平实科技有限公司 Extort software defense method and apparatus
CN109729170A (en) * 2019-01-09 2019-05-07 武汉巨正环保科技有限公司 A kind of cloud computing data backup of new algorithm and restoring method
CN111263938A (en) * 2017-09-29 2020-06-09 甲骨文国际公司 Rule-based autonomous database cloud service framework
CN111724046A (en) * 2020-05-29 2020-09-29 国网福建省电力有限公司 A power purchase management system
CN113239406A (en) * 2021-06-07 2021-08-10 深圳市蔚来芯科技有限公司 Enterprise cloud client management system and method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1231039A (en) * 1996-07-22 1999-10-06 Cyva研究公司 Tools for personal information security and exchange
CN1497453A (en) * 1996-07-22 2004-05-19 CYVA�о���˾ Tool for safety and exchanging personal information
CN103065088A (en) * 2011-09-20 2013-04-24 卡巴斯基实验室封闭式股份公司 System and method for detecting computer security threat based on decision of computer use
CN106951781A (en) * 2017-03-22 2017-07-14 福建平实科技有限公司 Extort software defense method and apparatus
CN111263938A (en) * 2017-09-29 2020-06-09 甲骨文国际公司 Rule-based autonomous database cloud service framework
CN109729170A (en) * 2019-01-09 2019-05-07 武汉巨正环保科技有限公司 A kind of cloud computing data backup of new algorithm and restoring method
CN111724046A (en) * 2020-05-29 2020-09-29 国网福建省电力有限公司 A power purchase management system
CN113239406A (en) * 2021-06-07 2021-08-10 深圳市蔚来芯科技有限公司 Enterprise cloud client management system and method

Also Published As

Publication number Publication date
CN114064360A (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN101419627B (en) Cigarette composition maintenance action digging system based on associations ruler and method thereof
DE102014204842A1 (en) Clustering of data
CN110442620B (en) Big data exploration and cognition method, device, equipment and computer storage medium
CN114429563B (en) Surrounding rock integrity identification method and system based on while-drilling test and TBM rock slag image
Ajibade et al. Big data research outputs in the library and information science: South African’s contribution using bibliometric study of knowledge production
CN117951118B (en) Geotechnical engineering investigation big data archiving method and system
Sembiring et al. Factors Analysis And Profit Achievement For Trading Company By Using Rough Set Method
CN116235158A (en) System and method for implementing automated feature engineering
CN114998004A (en) Method and system based on enterprise financial loan wind control
CN114064360B (en) Important information backup and encryption method based on combination of big data analysis and cloud computing
Girsang et al. Business intelligence for construction company acknowledgement reporting system
CN117522329A (en) Enterprise management comprehensive application method for big data mining analysis modeling
US11803761B2 (en) Analytic insights for hierarchies
CN115277124B (en) Online system and server for searching matching attack mode based on system traceability graph
US9665621B1 (en) Accelerated query execution within a storage array
Bagozi et al. Interactive data exploration as a service for the smart factory
CN117972111B (en) A knowledge reasoning method based on online graph processing technology for knowledge graph
CN117827382B (en) Container cloud resource management method based on resource deployment audit
Singh et al. Knowledge based retrieval scheme from big data for aviation industry
CN116777687A (en) Patent data management system, method, medium and equipment
Nowak-Brzezińska et al. Exploratory clustering and visualization
CN111090708A (en) User characteristic output method and system based on data warehouse
Wang et al. Decision rule mining for machining method chains based on rough set theory
CN113157191A (en) Data visualization method based on OLAP system
Kaidalova et al. An inventory of the business and IT alignment research field

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 406, No.1, Yichuang street, Zhongxin Guangzhou Knowledge City, Huangpu District, Guangzhou, Guangdong 510000

Patentee after: Southern Power Grid Digital Grid Research Institute Co.,Ltd.

Country or region after: China

Address before: Room 406, No.1, Yichuang street, Zhongxin Guangzhou Knowledge City, Huangpu District, Guangzhou, Guangdong 510000

Patentee before: Southern Power Grid Digital Grid Research Institute Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20240904

Address after: 518101, 3rd Floor, Building 40, Baotian Industrial Zone, Chentian Community, Xixiang Street, Bao'an District, Shenzhen City, Guangdong Province

Patentee after: China Southern Power Grid Digital Platform Technology (Guangdong) Co.,Ltd.

Country or region after: China

Address before: Room 406, No.1, Yichuang street, Zhongxin Guangzhou Knowledge City, Huangpu District, Guangzhou, Guangdong 510000

Patentee before: Southern Power Grid Digital Grid Research Institute Co.,Ltd.

Country or region before: China
