
CN118982347A - IT operation and maintenance service management method based on big data - Google Patents


Info

Publication number
CN118982347A
CN118982347A
Authority
CN
China
Prior art keywords
data
user
cluster
service management
management method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411472875.1A
Other languages
Chinese (zh)
Other versions
CN118982347B (en)
Inventor
丁旻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Aoju Technology Co ltd
Original Assignee
Hangzhou Aoju Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Aoju Technology Co ltd filed Critical Hangzhou Aoju Technology Co ltd
Priority to CN202411472875.1A
Publication of CN118982347A
Application granted
Publication of CN118982347B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 — Administration; Management
    • G06Q10/20 — Administration of product repair or maintenance
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 — Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21 — Design, administration or maintenance of databases
    • G06F16/215 — Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 — Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 — Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 — Relational databases
    • G06F16/285 — Clustering or classification
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/10 — Pre-processing; Data cleansing
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/23 — Clustering techniques
    • G06F18/231 — Hierarchical techniques, i.e. dividing or merging pattern sets so as to obtain a dendrogram
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/23 — Clustering techniques
    • G06F18/232 — Non-hierarchical techniques
    • G06F18/2321 — Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 — Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/08 — Learning methods
    • G06N3/092 — Reinforcement learning
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 — Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/50 — Business processes related to the communications industry

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Human Resources & Organizations (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Quality & Reliability (AREA)
  • Economics (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Primary Health Care (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract


The present invention relates to an IT operation and maintenance service management method based on big data, comprising the following steps: S1: collecting data from logs, monitoring tools, and user behavior, and performing preprocessing; S2: performing anomaly detection on the preprocessed data set, and performing secondary data cleaning; S3: predicting the trend of system performance indicators based on the data after secondary cleaning; S4: constructing a static user portrait from user behavior data using hierarchical cluster analysis, and dynamically updating the user portrait to adapt to the latest behavior data based on dynamic path analysis; S5: optimizing resource allocation and task scheduling with a reinforcement learning algorithm, based on the trend of the system performance indicators and the user portraits; S6: integrating external security intelligence and using an adaptive strategy adjustment mechanism. The present invention not only effectively improves system operating efficiency and reliability, but also enables effective management of IT operations in complex and dynamically changing environments.

Description

IT operation and maintenance service management method based on big data
Technical Field
The invention relates to the field of IT operation and maintenance management, in particular to an IT operation and maintenance service management method based on big data.
Background
In today's digital and information-intensive environments, the IT infrastructure of an enterprise has become the basic support for its operations and strategy. IT operation and maintenance service management based on big data is becoming an important means for organizations to ensure system stability, security, and efficient operation. With the rapid development of the Internet of Things (IoT), cloud computing, and big data technologies, the amount of data facing enterprises has grown explosively, and operating environments have become more complex and dynamic. These changes present new challenges: how to collect and process massive data effectively, detect system abnormalities rapidly, predict system performance trends accurately, and make intelligent decisions in a changeable environment. Meanwhile, the security and reliability of enterprise systems directly affect their market competitiveness and customer satisfaction.
Disclosure of Invention
In order to solve the above problems, the invention aims to provide an IT operation and maintenance service management method based on big data, which not only effectively improves the operating efficiency and reliability of the system, but also enables effective management of complex, dynamically changing IT operation and maintenance environments.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
An IT operation and maintenance service management method based on big data comprises the following steps:
S1, collecting data from logs, monitoring tools, and user behavior, and preprocessing it with the Pandas library of Python, wherein the preprocessing comprises data cleaning and missing-value processing;
S2, performing anomaly detection on the preprocessed data set by combining K-Means and DBSCAN, carrying out secondary data cleaning based on the anomaly-detection results, and optimizing the data-cleaning strategy;
S3, predicting the trend of system performance indicators with an ARIMA model, based on the data after secondary cleaning;
S4, constructing a static user portrait from the user behavior data using a hierarchical clustering algorithm, and dynamically updating the user portrait to adapt to the latest behavior data based on dynamic path analysis;
S5, optimizing resource allocation and task scheduling with a reinforcement learning algorithm, based on the trend of the system performance indicators and the user portraits;
S6, integrating external security intelligence, applying it to security-event analysis, and automatically modifying the protection strategy according to the security situation through an adaptive strategy adjustment mechanism.
Further, S1 is specifically:
using Logstash as the log collector, extracting log data from different servers and applications, and sending it to Elasticsearch for storage and indexing;
collecting network and system performance indicators with Prometheus and Zabbix;
deriving user access and interaction data from a user behavior analysis tool;
using Apache Kafka as the data streaming platform, transmitting the data of all data sources to a database in real time;
loading the data from the database with the Pandas library, removing duplicate values with Pandas' drop_duplicates() function, filtering out unnecessary rows or columns according to given conditions, and handling missing values with forward fill.
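The Pandas steps above can be sketched as follows; the column names and filter condition are illustrative, not taken from the patent:

```python
import pandas as pd

# Hypothetical log records; in the method these would be loaded from the database.
df = pd.DataFrame({
    "host": ["web1", "web1", "web2", "web2"],
    "cpu_pct": [55.0, 55.0, None, 81.0],
})

df = df.drop_duplicates()              # remove repeated rows
df = df[df["host"].notna()]            # filter out rows failing a condition
df["cpu_pct"] = df["cpu_pct"].ffill()  # forward-fill missing values
```

After deduplication, three rows remain, and the missing CPU reading is filled forward from the previous row.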
Further, S2 is specifically:
Carrying out standardization processing on the preprocessed data, performing cluster analysis on the standardized data with the K-Means algorithm, selecting a cluster number, initializing the cluster centers, and repeating the following assignment and update steps until convergence:

$c_i = \arg\min_j \lVert x_i - \mu_j \rVert^2, \qquad \mu_j = \frac{1}{|C_j|} \sum_{x_i \in C_j} x_i$

wherein $C_j$ is the set of all member samples of cluster j, and $\mu_j$ is the center of the j-th cluster; $x_i$ is the feature vector of the i-th data point, and $c_i$ is the cluster to which the i-th data point is assigned;
Calculating the Euclidean distance from each point to the center of the cluster to which it belongs:

$d(x_i, \mu_j) = \sqrt{\sum_{k=1}^{m} (x_{i,k} - \mu_{j,k})^2}$

where m is the total number of dimensions of the feature space; $x_{i,k}$ denotes the value of data point $x_i$ in the k-th dimension; $\mu_{j,k}$ denotes the value of cluster center $\mu_j$ in the k-th dimension;
finding outliers according to a distance threshold:

$x_d$ is flagged as an outlier if $d(x_d, \mu_{c_d}) > D_u$

wherein $\mu_{c_d}$ is the center of the cluster to which data point $x_d$ belongs; $x_d$ denotes an outlier; $D_u$ denotes the distance threshold;
Density clustering of the data by DBSCAN: for each data point, count the number of points in its ε-neighborhood (eps); if the count reaches min_samples, the point is a core point; a point that does not meet the core-point criterion but lies in the neighborhood of some core point is a border point; points that are neither core nor border points are noise points, and noise points are marked as abnormal;
in combination with the abnormal data sets obtained from K-Means and DBSCAN, intersection and independent portions are analyzed and secondary cleaning is performed to improve the quality of the data sets.
Further, in combination with the abnormal data sets obtained from K-Means and DBSCAN, intersections and independent portions are analyzed and secondary cleaning is performed to improve the quality of the data sets, as follows: data points marked as abnormal by both K-Means and DBSCAN are treated as definite outliers, and their numerical values are corrected with a data-correction strategy (such as interpolation or median substitution); for data points identified as abnormal only by K-Means, check whether they are edge values and filter them out if the analysis is not affected; for data points identified as abnormal only by DBSCAN, confirm that the data is normal and retain it; abnormal data that cannot be corrected or is invalid for analysis is removed from the data set.
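A minimal sketch of the two-detector scheme with scikit-learn; the synthetic data, the eps and min_samples values, and the 98th-percentile distance threshold standing in for $D_u$ are illustrative assumptions, not values from the patent:

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(200, 2))
X[:3] += 8                                  # three obvious planted outliers

Xs = StandardScaler().fit_transform(X)      # standardization step from S2

# K-Means: flag points far from their assigned cluster centre.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Xs)
dist = np.linalg.norm(Xs - km.cluster_centers_[km.labels_], axis=1)
km_outliers = dist > np.quantile(dist, 0.98)   # assumed distance threshold

# DBSCAN: label -1 marks noise points.
db = DBSCAN(eps=0.5, min_samples=5).fit(Xs)
db_outliers = db.labels_ == -1

both = km_outliers & db_outliers      # definite outliers -> correct or remove
only_km = km_outliers & ~db_outliers  # edge values -> filter if analysis unaffected
```

The intersection (`both`) and the detector-specific sets then drive the secondary-cleaning decisions described above.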
further, S3 is specifically:
Checking the stationarity of the data by ADF, if not stationary, applying differential operations to remove trends and seasonal effects;
dividing data into a training set and a testing set, wherein the training set is used for model fitting, and the testing set is used for model verification;
Selecting appropriate parameters (p, d, q) according to the behavior characteristics of the autocorrelation function ACF and the partial autocorrelation function PACF; p is the autoregressive partial order; d is the degree of difference; q is the moving average partial order;
constructing an ARIMA model with the selected parameters, fitting it to the denoised training data, and training it to obtain parameter estimates and model-fit results:

$y_t = c + \sum_{i=1}^{p} \phi_i y_{t-i} + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j} + \varepsilon_t$

wherein $y_t$ is the observation of the time series at time t; c is a constant term; $\phi_i$ are the coefficients of the autoregressive part; p is the order of the autoregressive part and $y_{t-i}$ are the corresponding lagged observations; $\theta_j$ are the coefficients of the moving-average part, q is the order of the moving-average part, and $\varepsilon_{t-j}$ are the past error terms; $\varepsilon_t$ is the error term at time t;
adopting the Ljung-Box test to check whether the residuals are white noise, so as to confirm that the model has captured the data structure:

$Q = n(n+2) \sum_{k=1}^{h} \frac{\hat{\rho}_k^2}{n-k}$

wherein Q is the Ljung-Box statistic; n is the sample size, i.e. the number of observations of the time-series data; $\hat{\rho}_k$ is the autocorrelation coefficient of the residuals at lag k; h is the number of lags tested.
Further, S4 is specifically:
extracting key features from the user behavior log, and constructing the key features into feature vectors for cluster analysis;
Analyzing the user behavior data by using a Ward variance minimization method to obtain a clustering result;
classifying users into different groups according to the clustering result, calculating the characteristic mean value of each group, and forming a characteristic set representing the group portrait to obtain a static user portrait;
analyzing a user behavior path, and identifying an interaction mode of a user by modeling the user behavior as a state transition process, and adding the interaction mode as a dynamic characteristic into a static user portrait to obtain a dynamic user portrait;
The user portrait is adjusted by using a real-time data updating mechanism, the dynamic updating is carried out by a weighted average method, and new and old data are combined:
Updated_Profile=α×Old_Profile+(1−α)×New_Data;
wherein updated_profile represents the Updated user representation, old_profile represents the original user representation, new_data represents the New Data, and α is the weight.
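The weighted-average update can be written directly; the feature names and the value of α are illustrative:

```python
def update_profile(old_profile: dict, new_data: dict, alpha: float = 0.7) -> dict:
    """Updated_Profile = alpha * Old_Profile + (1 - alpha) * New_Data, per feature."""
    return {k: alpha * old_profile[k] + (1 - alpha) * new_data[k] for k in old_profile}

old = {"session_duration": 300.0, "click_rate": 0.10}
new = {"session_duration": 200.0, "click_rate": 0.20}
updated = update_profile(old, new, alpha=0.7)
# session_duration: 0.7 * 300 + 0.3 * 200 = 270.0
```

A larger α makes the portrait more stable; a smaller α makes it adapt faster to the latest behavior data.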
Further, the Ward variance minimization method is adopted to analyze the user behavior data, and a clustering result is obtained, and the method is specifically as follows:
extracting key behavior features, including session duration, number of page visits, and click-through rate, to form a feature matrix X in which each row corresponds to a user; using the Euclidean distance as the standard distance metric, calculating the distances between different users and constructing a distance matrix;
Clustering with the Ward algorithm according to the calculated distance matrix: the dispersion of each cluster is computed and, at each step, the two clusters whose merger yields the smallest increase in the within-cluster sum of squared errors (WCSS) are combined, generating a cluster tree and constructing a dendrogram:

$\Delta WCSS(A,B) = WCSS(A \cup B) - WCSS(A) - WCSS(B), \qquad WCSS(C) = \sum_{x_i \in C} \lVert x_i - \mu_C \rVert^2$

wherein $\Delta WCSS$ is the WCSS increment caused by merging clusters A and B; $x_i$ is a user in cluster C; $\mu_C$ is the mean of cluster C;
Based on the dendrogram, the cluster number is determined by the CH (Calinski-Harabasz) index and the Dunn index:

$CH = \dfrac{\mathrm{tr}(B_k)/(k-1)}{\mathrm{tr}(W_k)/(n-k)}$

wherein $B_k$ is the between-cluster scatter matrix and $W_k$ is the within-cluster scatter matrix; k is the cluster number; n is the total number of data points;
the Dunn index is the ratio of the minimum distance between two nearest-neighbor clusters to the maximum cluster diameter; the cluster number that scores best on these indices is selected.
Further, analyzing the behavior path of the user, and identifying the interaction mode of the user in the application by modeling the user behavior as a state transition process (such as a Markov chain); determining all possible user behaviors as states in the Markov chain;
Counting the frequency of each behavior transition and calculating the transition probabilities between states:

$P(s_i \to s_j) = \dfrac{C(s_i \to s_j)}{\sum_k C(s_i \to s_k)}$

wherein $P(s_i \to s_j)$ is the probability of transitioning from state $s_i$ to state $s_j$; $C(\cdot)$ is a transition counter; $\sum_k C(s_i \to s_k)$ denotes the total number of transitions out of state $s_i$;
Based on the calculated transition probabilities between the states, a transition matrix is constructed, and a high-frequency transition path is identified by using the transition matrix, so that a user interaction mode is determined.
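The transition-count formula above can be sketched as follows; the page names and sessions are hypothetical:

```python
from collections import Counter, defaultdict

# Hypothetical click-path sessions; pages act as Markov states.
sessions = [
    ["home", "search", "product", "cart"],
    ["home", "search", "product", "home"],
    ["home", "product", "cart"],
]

counts = defaultdict(Counter)
for path in sessions:
    for a, b in zip(path, path[1:]):
        counts[a][b] += 1  # C(s_i -> s_j)

# P(s_i -> s_j) = C(s_i -> s_j) / sum_k C(s_i -> s_k)
P = {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
     for s, nxt in counts.items()}
```

Rows of `P` form the transition matrix; high-probability entries mark the high-frequency paths that characterize a user's interaction pattern.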
Further, S5 is specifically:
Defining a state space as a combination of a system load level and a user path position, and defining an action set as a resource adjustment and user guidance strategy; combining the user portrait information with the system state characteristics to form the state input of reinforcement learning, so that the scheduling decision is more personalized; the action space is adjusted according to different user groups;
The bonus function R (s, a) combines resource utilization and path completion rate:
R(s,a)=w1×Resource_Efficiency(s,a)+w2×Path_Completion_Success(s);
Wherein Resource_Efficiency(s, a) is the resource utilization rate; Path_Completion_Success(s) is the path completion rate; $w_1$ and $w_2$ are the corresponding weights;
Q value update:

$Q(s,a) \leftarrow Q(s,a) + \alpha \left[ R(s,a) + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]$

wherein Q(s, a) is the value estimate of performing action a in state s; $\alpha$ is the learning rate; $\gamma$ is a discount factor weighting the present value of future rewards, with $0 \le \gamma < 1$; $\max_{a'} Q(s',a')$ is the maximum expected Q value over all possible actions a' in the next state s';
Collecting behavior data and performance indexes, obtaining an optimization strategy through Q-Learning adjustment strategy and training, and improving resource allocation efficiency and user experience based on the optimization strategy;
In the real-time streaming implementation, an ε-greedy policy balances exploration and exploitation in action selection.
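A toy Q-Learning loop illustrating the update rule and ε-greedy selection described above; the states, actions, reward function, and environment dynamics are invented for illustration and are far simpler than the patent's state space:

```python
import random

# States = load levels, actions = resource adjustments (assumed toy setting).
states, actions = ["low", "high"], ["scale_down", "scale_up"]
Q = {s: {a: 0.0 for a in actions} for s in states}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def reward(s, a):
    # Assumed reward: scaling up under high load pays off, otherwise small penalty.
    return 1.0 if (s == "high" and a == "scale_up") else -0.1

random.seed(0)
s = "low"
for _ in range(2000):
    # epsilon-greedy: explore with probability epsilon, otherwise exploit.
    a = (random.choice(actions) if random.random() < epsilon
         else max(Q[s], key=Q[s].get))
    r = reward(s, a)
    s_next = random.choice(states)  # simplified environment dynamics
    # Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
    Q[s][a] += alpha * (r + gamma * max(Q[s_next].values()) - Q[s][a])
    s = s_next

best_high = max(Q["high"], key=Q["high"].get)
```

After training, the greedy policy in the high-load state prefers scaling up, which is the behavior the reward encodes.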
Further, S6 is specifically:
Setting performance and experience thresholds with the CEP module of Flink, analyzing the data stream in real time, and triggering an alarm or automatically adjusting rules when an abnormality occurs;
packaging the feedback information into a Flink pipeline, and automatically adjusting the learning rate and the update frequency of the state features according to the trigger rules.
The invention has the following beneficial effects:
1. The invention not only effectively improves the operating efficiency and reliability of the system, but also enables effective management of complex, dynamically changing IT operation and maintenance environments.
2. Through a multi-stage data optimization method with secondary cleaning, abnormal data is identified and corrected, and the overall quality of the data set is improved through deletion or correction operations, ensuring that more accurate data enters the analysis flow for decision making and realizing efficient, accurate anomaly detection and data cleaning;
3. The invention can effectively identify different user behavior patterns with the Ward method and construct user portraits on that basis; it combines precise feature extraction with Euclidean distance to achieve efficient data segmentation and group analysis; by introducing path analysis, it responds dynamically to changes in user state, identifies users' behavior patterns in the system, and reflects them in user-portrait updates and resource-optimization decisions; finally, by combining a Markov process with a reinforcement learning algorithm, it can address the coordination of user experience and performance in complex application environments, realizing personalized resource management and user-guidance mechanisms and improving system efficiency and user satisfaction.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the attached drawings and specific examples:
Referring to fig. 1, in this embodiment, an IT operation and maintenance service management method based on big data is provided, which includes the following steps:
S1, collecting data from various sources (system logs, application logs, network monitoring tools such as Prometheus and Zabbix, and user behavior logs), and preprocessing it with the Pandas library of Python, including data cleaning and missing-value processing;
S2, performing anomaly detection on the preprocessed data set by combining K-Means and DBSCAN, carrying out secondary data cleaning based on the anomaly-detection results, and optimizing the data-cleaning strategy;
S3, predicting the trend of system performance indicators with an ARIMA model, based on the data after secondary cleaning;
S4, constructing a static user portrait from the user behavior data using a hierarchical cluster analysis algorithm, and dynamically updating the user portrait to adapt to the latest behavior data based on dynamic path analysis;
S5, optimizing resource allocation and task scheduling with a reinforcement learning algorithm (such as Q-Learning), based on the trend of the system performance indicators and the user portraits;
S6, integrating external cyber threat intelligence (CTI), applying it to security-event analysis, and automatically modifying the protection strategy according to the security situation through an adaptive strategy adjustment mechanism.
In this embodiment, S1 is specifically:
using Logstash as the log collector, extracting log data from different servers and applications, and sending it to Elasticsearch for storage and indexing;
collecting network and system performance indicators with Prometheus and Zabbix;
deriving user access and interaction data from a user behavior analysis tool;
using Apache Kafka as the data streaming platform, transmitting the data of all data sources to a database in real time;
loading the data from the database with the Pandas library, removing duplicate values with Pandas' drop_duplicates() function, filtering out unnecessary rows or columns according to given conditions, and handling missing values with forward fill.
In this embodiment, S2 is specifically:
Carrying out standardization processing on the preprocessed data, performing cluster analysis on the standardized data with the K-Means algorithm, selecting a cluster number, initializing the cluster centers, and repeating the following assignment and update steps until convergence:

$c_i = \arg\min_j \lVert x_i - \mu_j \rVert^2, \qquad \mu_j = \frac{1}{|C_j|} \sum_{x_i \in C_j} x_i$

wherein $C_j$ is the set of all member samples of cluster j, and $\mu_j$ is the center of the j-th cluster; $x_i$ is the feature vector of the i-th data point, and $c_i$ is the cluster to which the i-th data point is assigned;
Calculating the Euclidean distance from each point to the center of the cluster to which it belongs:

$d(x_i, \mu_j) = \sqrt{\sum_{k=1}^{m} (x_{i,k} - \mu_{j,k})^2}$

where m is the total number of dimensions of the feature space; $x_{i,k}$ denotes the value of data point $x_i$ in the k-th dimension; $\mu_{j,k}$ denotes the value of cluster center $\mu_j$ in the k-th dimension;
finding outliers according to a distance threshold:

$x_d$ is flagged as an outlier if $d(x_d, \mu_{c_d}) > D_u$

wherein $\mu_{c_d}$ is the center of the cluster to which data point $x_d$ belongs; $x_d$ denotes an outlier; $D_u$ denotes the distance threshold;
Density clustering of the data by DBSCAN: for each data point, count the number of points in its ε-neighborhood (eps); if the count reaches min_samples, the point is a core point; a point that does not meet the core-point criterion but lies in the neighborhood of some core point is a border point; points that are neither core nor border points are noise points, and noise points are marked as abnormal;
in combination with the abnormal data sets obtained from K-Means and DBSCAN, intersection and independent portions are analyzed and secondary cleaning is performed to improve the quality of the data sets.
In this embodiment, intersection and independent parts are analyzed in combination with the abnormal data sets obtained from K-Means and DBSCAN, and a secondary cleaning is performed to improve the quality of the data sets, concretely as follows: identifying data points marked as abnormal by K-Means and DBSCAN at the same time, and modifying the digital data by using a data modification strategy (such as interpolation and median substitution) for the specific abnormal points; for the data points which are only identified as abnormal by the K-Means, checking whether the data points belong to edge values or not, and if analysis is not affected, filtering; for the data points which are only identified as abnormal by the DBSCAN, confirming that the data is normal, and retaining; for abnormal data that is uncorrectable or invalid for analysis, removing from the dataset;
in this embodiment, S3 is specifically:
Checking the stationarity of the data by ADF, if not stationary, applying differential operations to remove trends and seasonal effects;
dividing data into a training set and a testing set, wherein the training set is used for model fitting, and the testing set is used for model verification;
Selecting appropriate parameters (p, d, q) according to the behavior characteristics of the autocorrelation function ACF and the partial autocorrelation function PACF; p is the autoregressive partial order; d is the degree of difference; q is the moving average partial order;
constructing an ARIMA model with the selected parameters, fitting it to the denoised training data, and training it to obtain parameter estimates and model-fit results:

$y_t = c + \sum_{i=1}^{p} \phi_i y_{t-i} + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j} + \varepsilon_t$

wherein $y_t$ is the observation of the time series at time t; c is a constant term; $\phi_i$ are the coefficients of the autoregressive part; p is the order of the autoregressive part and $y_{t-i}$ are the corresponding lagged observations; $\theta_j$ are the coefficients of the moving-average part, q is the order of the moving-average part, and $\varepsilon_{t-j}$ are the past error terms; $\varepsilon_t$ is the error term at time t;
adopting the Ljung-Box test to check whether the residuals are white noise, so as to confirm that the model has captured the data structure:

$Q = n(n+2) \sum_{k=1}^{h} \frac{\hat{\rho}_k^2}{n-k}$

wherein Q is the Ljung-Box statistic; n is the sample size, i.e. the number of observations of the time-series data; $\hat{\rho}_k$ is the autocorrelation coefficient of the residuals at lag k; h is the number of lags tested.
In this embodiment, S4 is specifically:
Extracting key features, such as session duration, page dwell time, and number of clicks, from the user behavior log, and constructing them into feature vectors for cluster analysis;
Analyzing the user behavior data by using a Ward variance minimization method to obtain a clustering result;
Classifying users into different groups according to the clustering result, calculating the characteristic mean value of each group, and forming a characteristic set representing the group portrait to obtain a static user portrait; (user portraits include demographics, consumption habits, traffic usage patterns, personalized interest preferences, etc.);
Analyzing a user behavior path, and identifying an interaction mode of the user by modeling the user behavior as a state transition process (such as a Markov chain), and adding the interaction mode as a dynamic characteristic into a static user portrait to obtain a dynamic user portrait;
Using a real-time data updating mechanism to adjust the user portrait (updating the feature matrix and the state transition probabilities in real time), the portrait is dynamically updated by a weighted-average method that combines new and old data:
Updated_Profile=α×Old_Profile+(1−α)×New_Data;
wherein updated_profile represents the Updated user representation, old_profile represents the original user representation, new_data represents the New Data, and α is the weight.
In this embodiment, a Ward variance minimization method is adopted to analyze user behavior data, and a clustering result is obtained, which is specifically as follows:
extracting key behavior features, including session duration, number of page visits, and click-through rate, to form a feature matrix X in which each row corresponds to a user; using the Euclidean distance as the standard distance metric, calculating the distances between different users and constructing a distance matrix;
Clustering with the Ward algorithm according to the calculated distance matrix: the dispersion of each cluster is computed and, at each step, the two clusters whose merger yields the smallest increase in the within-cluster sum of squared errors (WCSS) are combined, generating a cluster tree and constructing a dendrogram:

$\Delta WCSS(A,B) = WCSS(A \cup B) - WCSS(A) - WCSS(B), \qquad WCSS(C) = \sum_{x_i \in C} \lVert x_i - \mu_C \rVert^2$

wherein $\Delta WCSS$ is the WCSS increment caused by merging clusters A and B; $x_i$ is a user in cluster C; $\mu_C$ is the mean of cluster C;
Based on the dendrogram, the cluster number is determined by the CH (Calinski-Harabasz) index and the Dunn index:

$CH = \dfrac{\mathrm{tr}(B_k)/(k-1)}{\mathrm{tr}(W_k)/(n-k)}$

wherein $B_k$ is the between-cluster scatter matrix and $W_k$ is the within-cluster scatter matrix; k is the cluster number; n is the total number of data points;
the Dunn index is the ratio of the minimum distance between two nearest-neighbor clusters to the maximum cluster diameter; the cluster number that scores best on these indices is selected.
In this embodiment, the behavior path of the user is analyzed, and the interaction mode of the user in the application is identified by modeling the user behavior as a state transition process (such as a markov chain); determining all possible user behaviors as states in the Markov chain;
Counting the frequency of each behavior conversion, and calculating the transition probability between states;
P(s_i → s_j) = C(s_i, s_j) / Σ_k C(s_i, s_k);
wherein P(s_i → s_j) is the probability of transitioning from state s_i to state s_j; C(·,·) is the transition counter; Σ_k C(s_i, s_k) is the total number of transitions leaving state s_i;
Based on the calculated transition probabilities between the states, a transition matrix is constructed, and a high-frequency transition path is identified by using the transition matrix, so that a user interaction mode is determined.
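The transition-probability estimate can be computed directly from recorded behavior paths; a minimal sketch, in which the page/event names in the sample sessions are hypothetical:

```python
from collections import Counter, defaultdict

def transition_matrix(sessions):
    """Estimate P(s_i -> s_j) = C(s_i, s_j) / sum_k C(s_i, s_k)."""
    counts = defaultdict(Counter)
    for path in sessions:
        for src, dst in zip(path, path[1:]):
            counts[src][dst] += 1          # count each observed transition
    return {src: {dst: n / sum(nxt.values()) for dst, n in nxt.items()}
            for src, nxt in counts.items()}

# Hypothetical behavior paths; event names are illustrative only.
sessions = [["home", "search", "item", "cart"],
            ["home", "search", "search", "item"],
            ["home", "item", "cart", "checkout"]]
P = transition_matrix(sessions)
print(P["home"])  # {'search': 0.666..., 'item': 0.333...}
```

High-frequency paths then fall out by following the largest probabilities row by row.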
In this embodiment, S5 is specifically:
Defining a state space as a combination of a system load level and a user path position, and defining an action set as a resource adjustment and user guidance strategy; combining the user portrait information with the system state characteristics to form the state input of reinforcement learning, so that the scheduling decision is more personalized; the action space is adjusted according to different user groups;
The reward function R(s,a) combines resource utilization and path completion rate:
R(s,a)=w1×Resource_Efficiency(s,a)+w2×Path_Completion_Success(s);
wherein Resource_Efficiency(s,a) is the resource utilization rate; Path_Completion_Success(s) is the path completion rate; w 1 and w 2 are the corresponding weights;
Q value update:
Q(s,a) ← Q(s,a) + α[R(s,a) + γ·max_{a′} Q(s′,a′) − Q(s,a)];
wherein Q(s,a) is the value estimate of performing action a in state s; α is the learning rate; γ is the discount factor (0 ≤ γ < 1), weighting the present value of future rewards; max_{a′} Q(s′,a′) is the maximum expected Q value over all possible actions a′ in the next state s′;
Behavior data and performance indicators are collected; the strategy is adjusted through Q-Learning and an optimized strategy is obtained through training; resource allocation efficiency and user experience are improved based on the optimized strategy;
An ε-greedy policy is implemented in real-time stream processing to balance action selection in the model.
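A minimal sketch of the Q-value update and ε-greedy action selection described above; the state encoding, action names, and hyperparameter values are illustrative assumptions:

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["scale_up", "scale_down", "hold"]   # illustrative action set
Q = defaultdict(float)                         # Q[(state, action)] -> value

def choose_action(state):
    """Epsilon-greedy: explore with probability EPSILON, else exploit."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(s, a, r, s_next):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# One toy step: state = (load level, user path position), reward from R(s, a).
s = ("high_load", "mid_path")
q_update(s, "scale_up", r=1.0, s_next=("normal_load", "mid_path"))
print(Q[(s, "scale_up")])  # 0.1 after a single update from zero
```

In the streaming setting, `q_update` would run per observed (state, action, reward, next-state) event, with the reward assembled from the weighted resource-utilization and path-completion terms.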
In this embodiment, S6 is specifically:
Using the CEP (complex event processing) module of Flink, performance and experience thresholds are set, the data stream is analyzed in real time, and an alarm is triggered or rules are adjusted automatically when an anomaly occurs.
Feedback information is packaged into the Flink pipeline, and ε, the learning rate, and the update frequency of the state features are adjusted automatically according to the trigger rules.
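Flink's CEP module is a Java/Scala API; as a language-neutral sketch of the thresholding logic only (metric names and limits are hypothetical, not from the embodiment), the rule might look like:

```python
THRESHOLDS = {"p95_latency_ms": 500.0, "error_rate": 0.05}  # hypothetical limits

def check_event(event):
    """Return the (metric, value) pairs that breach their threshold."""
    return [(m, event[m]) for m, limit in THRESHOLDS.items()
            if event.get(m, 0.0) > limit]

def on_event(event):
    breaches = check_event(event)
    if breaches:
        # A real deployment would raise an alarm here, or feed the breach
        # back to adjust epsilon, the learning rate, and the update frequency.
        return f"ALERT: {breaches}"
    return "OK"

print(on_event({"p95_latency_ms": 620.0, "error_rate": 0.01}))
# -> ALERT: [('p95_latency_ms', 620.0)]
```

In Flink proper this corresponds to a CEP pattern over the metric stream whose match handler emits the alert or the rule-adjustment event.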
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention and is not intended to limit the invention in any way; any person skilled in the art may modify or alter the disclosed technical content into equivalent embodiments. However, any simple modification, equivalent variation or alteration of the above embodiments according to the technical substance of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (10)

1. An IT operation and maintenance service management method based on big data, characterized in that it comprises the following steps:
S1: collecting data from logs, monitoring tools and user behavior, and preprocessing the data with Python's Pandas library, including data cleaning and handling of missing values;
S2: performing anomaly detection on the preprocessed data set by combining K-Means and DBSCAN, and performing secondary data cleaning based on the anomaly detection results to optimize the data cleaning strategy;
S3: predicting the trend of system performance indicators with an ARIMA model, based on the data after secondary cleaning;
S4: constructing static user profiles from user behavior data with a hierarchical clustering algorithm, and dynamically updating the user profiles based on dynamic path analysis so that they adapt to the latest behavior data;
S5: optimizing resource allocation and task scheduling with a reinforcement learning algorithm, based on the trend of the system performance indicators and the user profiles;
S6: integrating external security intelligence into security incident analysis, and automatically modifying protection policies according to the security situation through an adaptive policy adjustment mechanism.

2. The IT operation and maintenance service management method based on big data according to claim 1, wherein S1 specifically comprises:
using Logstash as a log collector to extract log data from different servers and applications, and sending the log data to Elasticsearch for storage and indexing;
collecting network and system performance metrics with Prometheus and Zabbix;
exporting user access and interaction data from user behavior analysis tools;
using Apache Kafka as a data streaming platform to transfer data from all data sources into the database in real time;
loading the data from the database with the Pandas library, removing duplicate values with Pandas' drop_duplicates() function, filtering out unneeded rows or columns according to specific conditions, and handling missing values with forward filling.

3. The IT operation and maintenance service management method based on big data according to claim 1, wherein S2 specifically comprises:
standardizing the preprocessed data, performing cluster analysis on the standardized data with the K-Means algorithm, selecting the number of clusters, initializing the cluster centers, and repeating the following steps until convergence:
μ_j = (1/|C_j|) Σ_{x_i∈C_j} x_i; c_i = argmin_j ||x_i − μ_j||²;
where C_j is the set of all member samples of cluster j, μ_j is the center of the j-th cluster, x_i is the feature vector of the i-th data point, and c_i is the cluster to which the i-th data point is assigned;
computing the Euclidean distance from each point to the center of its cluster:
d(x_i, μ_j) = sqrt(Σ_{k=1}^{m} (x_{i,k} − μ_{j,k})²);
where m is the total dimensionality of the feature space, x_{i,k} is the value of data point x_i in the k-th dimension, and μ_{j,k} is the value of cluster center μ_j in the k-th dimension;
identifying outliers according to a distance threshold: x_d is an outlier if d(x_d, μ_{c_d}) > d_u;
where c_d is the cluster to which data point x_d belongs, x_d is the outlier, and d_u is the distance threshold;
performing density clustering on the data with DBSCAN: for each data point, counting the number of points within its ε-neighborhood; if the count reaches min_samples, the point is a core point; a point that does not meet the core-point criterion but lies in the neighborhood of a core point is a border point; otherwise it is a noise point, and noise points are marked as anomalous;
combining the anomaly sets obtained from K-Means and DBSCAN, analyzing their intersection and independent parts, and performing secondary cleaning to improve the quality of the data set.

4. The IT operation and maintenance service management method based on big data according to claim 3, wherein combining the anomaly sets obtained from K-Means and DBSCAN, analyzing their intersection and independent parts, and performing secondary cleaning to improve the quality of the data set specifically comprises: identifying data points marked as anomalous by both K-Means and DBSCAN as definite anomalies and modifying the numerical data with a data correction strategy; for data points identified as anomalous only by K-Means, checking whether they are edge values and filtering them out if they do not affect the analysis; for data points identified as anomalous only by DBSCAN, retaining them if the data are confirmed normal; and removing from the data set anomalous data that cannot be corrected or are useless for the analysis.

5. The IT operation and maintenance service management method based on big data according to claim 1, wherein S3 specifically comprises:
testing the stationarity of the data with the ADF test and, if the data are non-stationary, applying differencing to remove trend and seasonal effects;
dividing the data into a training set and a test set, the training set being used for model fitting and the test set for model validation;
selecting suitable parameters (p, d, q) according to the behavior of the autocorrelation function ACF and the partial autocorrelation function PACF, where p is the order of the autoregressive part, d the degree of differencing, and q the order of the moving-average part;
building an ARIMA model with the selected parameters and fitting the denoised training data, training the model on the training set to obtain parameter estimates and fitting results:
y_t = c + Σ_{i=1}^{p} φ_i y_{t−i} + Σ_{j=1}^{q} θ_j ε_{t−j} + ε_t;
where y_t is the observation of the time series at time t; c is a constant term; φ_i are the coefficients of the autoregressive part; p is the order of the autoregressive part and i its index; y_{t−i} is the observation at lag i; θ_j are the coefficients of the moving-average part; q is the order of the moving-average part and j its index; ε_{t−j} are past error terms; and ε_t is the error term at time t;
testing with the Ljung-Box statistic whether the residuals are white noise, to confirm that the model has captured the structure of the data:
Q = n(n+2) Σ_{k=1}^{h} ρ̂_k² / (n−k);
where Q is the Ljung-Box statistic; n is the sample size, i.e. the number of observations of the time series; ρ̂_k is the autocorrelation coefficient of the residuals at lag k; and h is the number of lags tested.

6. The IT operation and maintenance service management method based on big data according to claim 1, wherein S4 specifically comprises:
extracting key features from the user behavior logs and constructing them into feature vectors for cluster analysis;
analyzing the user behavior data with the Ward variance-minimization method to obtain clustering results;
classifying the users into groups according to the clustering results, computing the feature means of each group to form a feature set representing the group's profile, and obtaining static user profiles;
analyzing the users' behavior paths, identifying their interaction patterns by modeling user behavior as a state-transition process, and adding the patterns to the static user profiles as dynamic features to obtain dynamic user profiles;
adjusting the user profiles with a real-time data update mechanism, updating dynamically with a weighted-average method that merges new and old data:
Updated_Profile = α×Old_Profile + (1−α)×New_Data;
where Updated_Profile is the updated user profile, Old_Profile is the original user profile, New_Data is the new data, and α is the weight.

7. The IT operation and maintenance service management method based on big data according to claim 6, wherein analyzing the user behavior data with the Ward variance-minimization method to obtain clustering results specifically comprises:
extracting key behavior features, including session duration, page views and click rate, to form a feature matrix X; computing the distances between different users with the Euclidean distance as the standard distance metric, and constructing a distance matrix;
applying the Ward algorithm to cluster according to the computed distance matrix, clustering by computing the dispersion of each cluster and merging two clusters per iteration so that the increment of the within-cluster sum of squared errors WCSS after merging is minimal, producing a cluster tree and constructing a dendrogram:
WCSS = Σ_{x∈C} ||x − μ_C||²;
where WCSS is the WCSS increment, x is a user belonging to cluster C, and μ_C is the mean of cluster C;
determining the number of clusters from the dendrogram with the CH index and the Dunn index:
CH = [tr(B_k)/(k−1)] / [tr(W_k)/(n−k)];
where B_k is the between-cluster scatter matrix, W_k is the within-cluster scatter matrix, k is the number of clusters, and n is the total number of data points;
the Dunn index is the ratio of the minimum distance between the two nearest neighboring clusters to the maximum cluster diameter.

8. The IT operation and maintenance service management method based on big data according to claim 6, wherein analyzing the users' behavior paths comprises: modeling user behavior as a state-transition process and identifying the users' in-application interaction patterns with a Markov chain; determining all possible user behaviors as states of the Markov chain;
counting the frequency of each behavior transition and computing the transition probabilities between states:
P(s_i → s_j) = C(s_i, s_j) / Σ_k C(s_i, s_k);
where P(s_i → s_j) is the probability of transitioning from state s_i to state s_j; C(·,·) is the transition counter; and Σ_k C(s_i, s_k) is the total number of transitions leaving state s_i;
constructing a transition matrix from the computed transition probabilities, and using the transition matrix to identify high-frequency transition paths and determine the user interaction patterns.

9. The IT operation and maintenance service management method based on big data according to claim 1, wherein S5 specifically comprises:
defining the state space as the combination of system load level and user path position, and the action set as resource adjustment and user guidance strategies; combining user-profile information with system state features to form the state input of reinforcement learning, making scheduling decisions more personalized; adjusting the action space for different user groups;
combining resource utilization and path completion rate in the reward function R(s,a):
R(s,a) = w1×Resource_Efficiency(s,a) + w2×Path_Completion_Success(s);
where Resource_Efficiency(s,a) is the resource utilization rate, Path_Completion_Success(s) is the path completion rate, and w1 and w2 are the corresponding weights;
Q value update:
Q(s,a) ← Q(s,a) + α[R(s,a) + γ·max_{a′} Q(s′,a′) − Q(s,a)];
where Q(s,a) is the value estimate of performing action a in state s; α is the learning rate; γ is the discount factor; and max_{a′} Q(s′,a′) is the maximum expected Q value over all possible actions a′ in the next state s′;
collecting behavior data and performance indicators, adjusting the strategy through Q-Learning, obtaining an optimized strategy through training, and improving resource allocation efficiency and user experience based on the optimized strategy;
implementing an ε-greedy policy in real-time stream processing to balance action selection in the model.

10. The IT operation and maintenance service management method based on big data according to claim 9, wherein S6 specifically comprises:
setting performance and experience thresholds with Flink's CEP module, analyzing the data stream in real time, and triggering alarms or automatically adjusting rules when anomalies occur;
packaging the feedback information into the Flink pipeline, and automatically adjusting ε, the learning rate, and the update frequency of the state features according to the trigger rules.
CN202411472875.1A 2024-10-22 2024-10-22 IT operation and maintenance service management method based on big data Active CN118982347B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411472875.1A CN118982347B (en) 2024-10-22 2024-10-22 IT operation and maintenance service management method based on big data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411472875.1A CN118982347B (en) 2024-10-22 2024-10-22 IT operation and maintenance service management method based on big data

Publications (2)

Publication Number Publication Date
CN118982347A true CN118982347A (en) 2024-11-19
CN118982347B CN118982347B (en) 2025-01-21

Family

ID=93449608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411472875.1A Active CN118982347B (en) 2024-10-22 2024-10-22 IT operation and maintenance service management method based on big data

Country Status (1)

Country Link
CN (1) CN118982347B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119412786A (en) * 2024-11-29 2025-02-11 江苏征途电气科技有限公司 A load remote control method and system based on remote control and energy-saving optimization

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3129987A1 (en) * 2020-09-03 2022-03-03 Royal Bank Of Canada Systems and methods of dynamic resource allocation among networked computing devices
CN117478208A (en) * 2023-12-26 2024-01-30 中国电子科技集团公司第五十四研究所 Dynamic resource allocation satellite mobile communication system for differentiated user group
CN117950865A (en) * 2024-01-29 2024-04-30 苏州辰瓴光学有限公司 Digital resource allocation method, system, device and storage medium for metaverse
CN118070053A (en) * 2024-03-01 2024-05-24 江西欧易科技有限公司 A remote management device and method for rotational molding machine supporting multi-user collaboration
CN118363765A (en) * 2024-06-20 2024-07-19 湖南索思科技开发有限公司 Cloud resource automatic allocation system
CN118797291A (en) * 2024-07-09 2024-10-18 南京源成数字科技有限公司 A resource intelligent matching method based on metaverse and metaverse system


Also Published As

Publication number Publication date
CN118982347B (en) 2025-01-21

Similar Documents

Publication Publication Date Title
CN119512883B (en) Python monitoring task resource using method and system
CN118761745B (en) OA collaborative workflow optimization method applied to enterprise
CN119938335A (en) Service proxy method and system based on Doris front-end node
CN119577556B (en) Power grid fault online identification system and method based on big data
CN118764395B (en) A cloud computing node fault prediction method based on improved graph neural network
JP2023504103A (en) MODEL UPDATE SYSTEM, MODEL UPDATE METHOD AND RELATED DEVICE
CN119690725B (en) Intelligent prediction and tracking method and system for memory leaks in microservice architecture
CN119988157B (en) A data collection method and system based on big data of intelligent operation and maintenance platform
CN117094184B (en) Modeling method, system and medium of risk prediction model based on intranet platform
CN120336190A (en) An integrated test scenario prediction method and system based on multi-dimensional data
CN120547219A (en) Method and device for identifying abnormal network status
CN118982347B (en) IT operation and maintenance service management method based on big data
CN118694673A (en) A life cycle risk management method based on neural network algorithm
CN119939277B (en) Equipment fault identification method, system and storage medium
CN119051996B (en) Training method and device for abnormal flow detection model, monitoring method and equipment
CN119356807B (en) An adaptive scheduling method and apparatus based on open-source components
CN120872665A (en) Computer equipment fault detection system and method based on artificial intelligence
CN120146468A (en) A method and system for intelligent property management and deployment based on big data
CN119988240A (en) Test risk identification method and system based on artificial intelligence
CN117971337A (en) A hybrid cloud automatic configuration method based on LSTM model
CN115858318A (en) Micro-service elastic scaling method and system for performance bottleneck perception
CN120611352B (en) Neural network reasoning performance analysis method oriented to NPU computing architecture
CN118449842B (en) Network failure prediction method, prediction model generation method, storage medium, and device
CN121010180B (en) A method and system for intelligent management of furniture production
CN121542243A (en) Database operation and maintenance decision method, equipment and medium based on large language model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: IT operation and maintenance service management method based on big data

Granted publication date: 20250121

Pledgee: Hangzhou High-tech Financing Guarantee Co.,Ltd.

Pledgor: Hangzhou aoju Technology Co.,Ltd.

Registration number: Y2025330000240