Disclosure of Invention
The invention aims to provide an asset management method and system based on an AI large model and IT asset data, which solves the problem of poor comprehensiveness in existing asset management.
To achieve the above object, in a first aspect, the present invention provides an asset management method based on an AI large model and IT asset data, including the steps of:
Collecting original data from a plurality of IT asset data sources, mapping and converting structured data in the original data, and writing the unstructured data in the original data into an asset data feature set after data enhancement;
inputting the asset data feature set into the adjusted AI large model for analysis, and generating a management suggestion list;
and carrying out multidimensional early warning judgment based on the management suggestion list, calling the adjusted AI large model to generate structured asset early warning information according to an early warning signal triggered by a judgment result, and outputting an asset management work order.
Wherein the step of collecting original data from a plurality of IT asset data sources, mapping and converting the structured data in the original data, and writing the unstructured data in the original data into the asset data feature set after data enhancement comprises the following steps:
When a data source is connected for the first time, detecting the data type of the current data source, and generating a corresponding source data characteristic description list;
acquiring all data of the current data source in a mixed mode of full acquisition and incremental acquisition, encapsulating the acquired original data, and publishing the resulting initial asset data to a bus;
mapping and converting the structured data in the asset data feature set;
And after the unstructured data is subjected to data enhancement, writing the unstructured data into the asset data feature set.
Wherein mapping the structured data into a standard feature set and writing the standard feature set into an asset data feature set comprises:
matching the structured data in the initial asset data against a set semantic mapping rule base, and extracting the original data according to the matched rules for cleaning and format conversion;
And ranking the multiple records of the same asset among the obtained standard asset data records according to source weight and timestamp, and selecting the first record for writing into the asset data feature set.
Wherein writing the unstructured data in the initial asset data into the asset data feature set after data enhancement comprises the following steps:
converting unstructured data in the initial asset data into text semantic vectors;
extracting association relations from the standard asset data records based on configuration rules, constructing an asset association graph, and calculating corresponding graph-derived features;
adding the graph-derived features to the standard asset data records and writing the records into the asset data feature set.
Wherein before inputting the asset data feature set into the adjusted AI large model for analysis, the method further comprises:
continuously pre-training the AI large model by using the constructed IT field corpus, and constructing a multi-task training sample based on the historical asset data feature set and the corresponding real event label;
and adjusting the AI large model by combining a mixing loss function and a fine tuning strategy.
The asset data feature set is input into an adjusted AI large model for analysis, and a management suggestion list is generated, which comprises the following steps:
converting the current standard asset data record and the corresponding natural language template into an input prompt;
And inputting the input prompt into the adjusted AI large model, and combining an output result with the asset association graph to generate a management suggestion list.
Wherein the step of carrying out multidimensional early warning judgment based on the management suggestion list, calling the adjusted AI large model according to an early warning signal triggered by the judgment result to generate structured asset early warning information, and outputting an asset management work order comprises the following steps:
comparing various probability values in the management suggestion list with corresponding preset thresholds, and calculating the characteristic abnormal deviation degree if any probability value is larger than the corresponding threshold;
performing gradient enhancement on the early warning signal according to the calculated characteristic deviation degree, and generating associated early warnings through the asset association graph;
and calling the adjusted AI large model to generate structured asset early warning information and outputting an asset management work order.
Wherein calling the adjusted AI large model to generate structured asset early warning information and outputting an asset management work order comprises the following steps:
calling the adjusted AI large model and generating structured asset early warning information by combining the abnormal features that triggered the warning with the asset association graph;
performing similarity aggregation and repeated verification of characteristic deviation degree on the asset early warning information;
and outputting a predicted demand value by using the adjusted AI large model in combination with the historical performance metric time series data, business prediction indexes and current configuration information, and generating a management work order in combination with the current asset early warning information.
In a second aspect, the present invention provides an asset management system based on an AI large model and IT asset data, applied to the asset management method based on an AI large model and IT asset data as provided in the first aspect. The asset management system comprises a data feature unification processing module, an AI large model integration and training module, and an asset management module.
The data feature unification processing module is used for collecting original data from a plurality of IT asset data sources, mapping and converting structured data in the original data, and writing the unstructured data in the original data into an asset data feature set after data enhancement;
the AI large model integration and training module is used for inputting the asset data feature set into the adjusted AI large model for analysis to generate a management suggestion list;
and the asset management module is used for carrying out multidimensional early warning judgment based on the management suggestion list, calling the adjusted AI large model to generate structured asset early warning information according to an early warning signal triggered by a judgment result, and outputting an asset management work order.
The invention discloses an asset management method and system based on an AI large model and IT asset data. The asset management system comprises a data feature unification processing module, an AI large model integration and training module, and an asset management module. Original data are collected from a plurality of IT asset data sources, structured data in the original data are mapped and converted, and unstructured data in the original data are written into an asset data feature set after data enhancement. The asset data feature set is input into the adjusted AI large model for analysis to generate a management suggestion list. Multidimensional early warning judgment is carried out based on the management suggestion list, the adjusted AI large model is called according to an early warning signal triggered by the judgment result to generate structured asset early warning information, and an asset management work order is output, thereby solving the problem of poor comprehensiveness of current asset management.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the application. The term "if" as used herein may be interpreted as "upon" or "when" or "in response to a determination", depending on the context.
The first embodiment of the application is as follows:
Referring to fig. 1-3, the present invention provides an asset management method based on an AI large model and IT asset data, comprising the following steps:
S101, collecting original data from a plurality of IT asset data sources, mapping and converting structured data in the original data, and writing the unstructured data in the original data into an asset data feature set after data enhancement.
Specifically, an architecture of intelligent adapters and a unified data bus is adopted to replace the traditional manual-configuration or single-interface acquisition mode. The architecture has self-description capability and can dynamically adapt to new data sources and data structure changes. The intelligent adapter layer consists of software components that interface directly with specific data sources. Each type of source (e.g., vCenter, AWS, K8s, network device SNMP, a local CMDB) has a corresponding dedicated intelligent adapter. When first connecting to a data source, the adapter automatically probes the data model of the source (e.g., by calling the cloud's API description interface or reading the database schema) to generate a source data feature description list. The list records all available fields of the source together with their original names, data types (e.g., integer, string) and sample data. The adapter supports two acquisition modes. The first acquisition is a full acquisition, which collects snapshots of all assets. Subsequent operation mainly uses incremental acquisition, which obtains only changed data by monitoring event logs (such as CloudTrail), polling change timestamps or identifying sequence number changes, greatly reducing network and system load. Regardless of the source data format, after acquisition the adapter encapsulates the raw data into a unified initial asset data object. The object comprises two parts:
a metadata area, which stores management information such as the source data feature description list, the acquisition timestamp and the data source identifier; and
a payload data area, which stores the original asset information (JSON, key-value pairs or text blocks) acquired from the source.
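A minimal Python sketch of this two-part object as an adapter might publish it to the bus; the field names and the `encapsulate` helper are illustrative assumptions, not the patent's schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any

@dataclass
class InitialAssetData:
    """Unified envelope an adapter publishes to the data bus.

    metadata: management info (source feature description list, timestamp, source id)
    payload:  raw asset information as collected (JSON, key-value pairs or text)
    """
    metadata: dict
    payload: dict

def encapsulate(source_id: str, profile: list, raw: dict) -> InitialAssetData:
    # Metadata area: source data feature description list + acquisition
    # timestamp + data source identifier, as described above.
    meta = {
        "source_profile": profile,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "source_id": source_id,
    }
    return InitialAssetData(metadata=meta, payload=raw)

obj = encapsulate(
    "vcenter-01",
    [{"field": "hostname", "type": "string", "sample": "web-01"}],
    {"hostname": "web-01", "cpu": 8},
)
```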
The unified data bus layer is a high-availability, high-throughput messaging channel (e.g., implemented on Apache Kafka or Pulsar). All intelligent adapters publish their encapsulated initial asset data objects onto the bus. The bus decouples acquisition from subsequent processing: processing units can subscribe to the data they are interested in as needed, giving the system scalability and flexibility. The bus is also responsible for preliminary ordering and caching of the data, ensuring that no data are lost.
The source data feature description list in the initial asset data object is read and matched against a predefined semantic mapping rule base. The core of the rule base is the mapping logic from thousands of possible source fields to the unified standard fields (i.e., the asset data feature set, such as asset identifier, asset type and status indicator). For key features such as the asset type, the rule base contains not only simple string matches but also synonym and context matches based on natural language processing. For example, source fields named "hostname" or "SERVERNAME" can both be mapped correctly to the "hostname" attribute under the standard asset identifier property. For performance metric values (e.g., memory size), the rule base can identify different units (e.g., "GB", "GiB", "MB") and automatically convert them to a standard unit (e.g., "GB"). According to the matched rules, the engine extracts the original value from the payload data area, cleans it (e.g., removing illegal characters), converts the format, and fills the corresponding field of the standard asset data record.
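The synonym matching and unit normalization above can be sketched as follows; the rule entries and the GiB-to-GB conversion factor are illustrative assumptions, not the patent's actual rule base.

```python
from typing import Optional

# Hypothetical rule base: synonym sets mapping source field names to standard
# features, plus unit normalization for metric values.
SYNONYMS = {
    "hostname": {"hostname", "servername", "host_name"},
    "memory_gb": {"mem", "memory", "ram"},
}
UNIT_TO_GB = {"gb": 1.0, "gib": 1.073741824, "mb": 0.001}

def map_field(source_name: str) -> Optional[str]:
    """Map a raw source field name to a standard feature name, if known."""
    name = source_name.lower()
    for standard, aliases in SYNONYMS.items():
        if name in aliases:
            return standard
    return None

def normalize_memory(value: float, unit: str) -> float:
    """Convert any recognized memory unit to the standard unit (GB)."""
    return round(value * UNIT_TO_GB[unit.lower()], 3)
```

For instance, `map_field("SERVERNAME")` resolves to `"hostname"`, mirroring the example in the text.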
Because the same asset may be collected by multiple sources (for example, a cloud host is monitored both by the cloud platform and by the agent deployed on it), conflicting data records can result. Therefore, a data source confidence weight needs to be predefined for each data source. When multiple records describing the same asset are found, for a given feature (e.g., IP address) the value from the highest-weight source is selected as the main value, while for status-class features (such as "running" or "powered off" in the status indicator) the record with the most recent timestamp is preferentially used. The final output is a unique, conflict-resolved standard asset data record.
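A sketch of this conflict resolution, with attribute values taken from the highest-weight source and status taken from the freshest record; the source names and weights are illustrative assumptions.

```python
# Hypothetical per-source confidence weights (higher wins for attribute values).
SOURCE_WEIGHT = {"agent": 3, "cloud_api": 2, "cmdb": 1}

def resolve(records: list) -> dict:
    """Merge records describing the same asset into one standard record.

    records: [{'source': ..., 'ts': ..., 'ip': ..., 'status': ...}, ...]
    """
    by_weight = sorted(records, key=lambda r: SOURCE_WEIGHT[r["source"]],
                       reverse=True)
    by_time = sorted(records, key=lambda r: r["ts"], reverse=True)
    merged = dict(by_weight[0])              # values from highest-weight source
    merged["status"] = by_time[0]["status"]  # status from most recent record
    return merged
```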
All standard asset data records processed as described above, organized according to a unified asset data feature set, are written into a centralized standardized asset data warehouse (such as a data lake or large database). This repository is a single trusted data source for subsequent AI model training and reasoning.
Unstructured text fields, such as asset descriptions, fault logs and change records, are extracted from the standardized asset data records. These texts are converted into text semantic vectors using a lightweight text embedding model (e.g., Sentence-BERT). The vectors numerically represent the deep semantics of the text, and new, valuable labels can be automatically discovered and generated by performing cluster analysis on the text semantic vectors. For example, from a large number of maintenance records, implicit asset health tags such as "suspected disk aging" and "network connection fluctuation" are clustered out and added as new features to the asset data feature set.
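As an illustration of deriving labels from semantic vectors, the sketch below clusters precomputed vectors by cosine similarity. In practice the vectors would come from an embedding model such as Sentence-BERT; the greedy seed-based grouping here is a stand-in assumption for the unspecified cluster analysis.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def greedy_cluster(vectors, threshold=0.9):
    """Assign each vector to the first cluster whose seed vector is similar
    enough; otherwise start a new cluster. Returns one label per vector."""
    seeds, labels = [], []
    for v in vectors:
        for cid, s in enumerate(seeds):
            if cosine(v, s) >= threshold:
                labels.append(cid)
                break
        else:
            seeds.append(v)
            labels.append(len(seeds) - 1)
    return labels
```

Each resulting cluster could then be reviewed and named (e.g., "suspected disk aging") before being attached as a health tag.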
Association relations are extracted from the standardized asset data records based on configuration rules (e.g., "runs on", "connects to"). For example, from the relations "virtual machine runs on physical host" and "application is deployed in virtual machine", an asset association graph of "physical machine - virtual machine - application" is constructed.
Graph-derived feature calculation: a series of graph-derived features is calculated for each asset node based on graph algorithms, for example the number of associated assets (the number of other assets directly associated with the asset) and a topology importance score (the centrality of the asset in the whole graph, calculated using algorithms such as PageRank). A load balancer will score significantly higher than an ordinary backend server. These graph-derived features are added as new, important dimensions to the standard asset data record of the asset.
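The degree and topology-importance features can be sketched with plain power-iteration PageRank over an undirected association graph; this is a simplification of whatever graph algorithms the system actually uses.

```python
def graph_features(edges, damping=0.85, iters=50):
    """Return (degree, pagerank) for each node of an undirected graph.

    degree   : number of other assets directly associated with the asset
    pagerank : topology importance score via power iteration
    """
    nodes = sorted({n for e in edges for n in e})
    neighbors = {n: set() for n in nodes}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    degree = {n: len(neighbors[n]) for n in nodes}
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {}
        for n in nodes:
            rank = sum(pr[m] / len(neighbors[m]) for m in neighbors[n])
            nxt[n] = (1 - damping) / len(nodes) + damping * rank
        pr = nxt
    return degree, pr
```

On a star-shaped "load balancer with three backends" graph, the hub scores highest, matching the load balancer example above.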
After data enhancement is carried out on the unstructured data in the initial asset data, the standardized asset data records are greatly enriched: in addition to the original standard features, they include text semantic vectors, asset health tags, graph-derived features and time series pattern features. This enhanced, multidimensional asset data feature set serves as the ideal input to the AI model inference engine, enabling deeper and more accurate analysis and prediction.
S102, inputting the asset data feature set into the adjusted AI large model for analysis, and generating a management suggestion list.
Specifically, before analysis, the AI large model needs to be adjusted, and the specific adjustment process is as follows:
A large language model pre-trained on large-scale general corpora and code, such as LLaMA or ChatGLM, is selected as the basic pre-trained model. Such a model has strong language understanding and logical reasoning capability. Lightweight continued pre-training is then performed on the model using an IT field corpus formed from published IT documents, technical manuals, fault reports and the like, so that the model's underlying parameters are preliminarily adapted to the terminology and contexts of the IT field, providing a better starting point for subsequent fine-tuning. The model produced by this process is called the IT-field-adapted base model.
Based on the historical asset data feature set and the corresponding real event labels, such as whether a fault occurred, the type of maintenance work order and the resource utilization level, multi-task training samples are constructed. Each sample contains:
an input sequence: all the features (both basic and enhanced) in a standard asset data record are converted into a natural language description according to a predefined template. For example: the asset identifier is SVR-001, the asset type is virtual machine, the CPU utilization is 85%, the memory health score is 0.7, the number of associated assets is 5, and the latest log semantic vector indicates an IO delay warning.
multi-task targets: several learning objectives are set for the model simultaneously, such as:
a failure prediction task: whether a failure occurs in the next period (binary classification);
a lifecycle stage classification task: which stage the asset is in, such as introduction, stable or decline (multi-class classification);
a resource demand regression task: predicting the peak CPU demand of the next month (regression).
The task losses are combined by weighted summation using the mixed loss function, ensuring that the model does not ignore any task.
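A minimal sketch of the weighted multi-task (mixed) loss; the weight values and the particular per-task loss functions are illustrative assumptions.

```python
import math

def bce(p: float, y: int) -> float:
    """Binary cross-entropy for the failure-prediction task
    (p = predicted failure probability, y = actual 0/1 label)."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def mixed_loss(fault_loss, stage_loss, demand_loss, weights=(1.0, 0.5, 0.5)):
    """Weighted sum of the per-task losses so no task is ignored:
    failure prediction, lifecycle classification, demand regression."""
    w1, w2, w3 = weights
    return w1 * fault_loss + w2 * stage_loss + w3 * demand_loss
```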
The progressive fine tuning strategy is:
Stage one (feature adaptation): only the parameters of the last few layers of the model are fine-tuned, allowing the model to first learn the input format and basic patterns of the asset data feature set.
Stage two (knowledge fusion): more middle-layer parameters are unfrozen and deeper fine-tuning is performed, so that the model deeply fuses the domain knowledge of IT asset management into its internal representation.
Stage three (instruction refinement): conversational data containing specific management instructions (e.g., "please evaluate the asset risk") are used for further fine-tuning, so that the model learns to generate structured output on demand, not just classifications or regressions. The final output of this stage is the domain-adaptive AI large model.
After the AI large model is adjusted, the adjusted AI large model is used to analyze real-time incoming standard asset data records and to generate operable, interpretable intelligent management suggestions.
When a new standard asset data record enters the standardized asset data warehouse, the inference scheduler places it in a queue as an inference task. The record is converted into a model input prompt according to the same natural language template as in the fine-tuning stage, for example: "Please analyze the asset status: [asset feature list]. Please judge the fault risk and life cycle stage, and give management advice."
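The template conversion can be sketched as follows; the template wording and record keys are illustrative, following the example prompt above.

```python
# Hypothetical natural language template shared by fine-tuning and inference.
TEMPLATE = (
    "Please analyze the asset status: {features}. "
    "Please judge the fault risk and life cycle stage, "
    "and give management advice."
)

def build_prompt(record: dict) -> str:
    """Render a standard asset data record into the model input prompt."""
    features = ", ".join(f"{k} is {v}" for k, v in record.items())
    return TEMPLATE.format(features=features)

prompt = build_prompt({"asset identifier": "SVR-001",
                       "CPU utilization": "85%"})
```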
The constructed input prompt is input into the adjusted AI large model, and the model is required to output its reasoning process step by step before giving a final conclusion. For example:
Step one, feature analysis: the CPU utilization has continuously exceeded 80% for one week, and memory allocation errors appear in the log.
Step two, risk assessment: high load superimposed on potential memory problems raises the failure probability to a high level.
Step three, life cycle judgment: the asset has been in continuous operation for 3 years and shows a declining trend, so it is in the "decline period".
Step four, preliminary suggestion: an immediate health check is recommended, and migration or replacement should be planned.
This process converts the "black box" calculation inside the model into a traceable chain of thought that conforms to human logic, greatly improving the credibility and interpretability of the results.
The enterprise's policy repository (such as budget periods and maintenance windows) is then queried to ensure that the suggestion is feasible. For example, if the model suggests "immediate replacement" but no procurement is currently possible, the engine may revise it to "apply for emergency procurement" or "enable a temporary standby". The asset association graph is combined to automatically supplement the affected scope of the suggestion. For example, next to the suggestion "restart server" an additional hint is provided: "this will affect the two applications A and B that depend on it; execution during the low-traffic period is suggested."
Finally, standardized intelligent management suggestions are generated, each comprising the following fields: a target asset identifier, a problem description, a risk level (such as P0/P1/P2), a specific action suggestion (an operable task description), the suggestion basis (the cited key features and reasoning steps), and the expected impact and dependencies. For complex situations, the model can generate 2-3 alternative suggestions, such as expansion, migration or application optimization, with the estimated cost, implementation difficulty and risk of each scheme attached for the administrator's decision-making reference.
In order to enable the domain-adaptive AI large model to continuously evolve as the IT environment changes and new data are generated, and to prevent model performance from decaying over time, a closed-loop optimization system based on online learning and feedback driving is constructed. The detailed technical scheme is as follows:
Feedback loop establishment
Explicit feedback collection: feedback buttons (such as "adopted", "effective", "ineffective") are provided on the management interface for each generated intelligent management suggestion, and the administrator is allowed to fill in the reason when marking a suggestion ineffective.
Implicit feedback verification: the system automatically tracks the follow-up results of suggestions. For example, if an early warning of high fault risk is issued and the asset does fail, the correctness of the early warning is verified; if the asset remains safe through the risk period, the early warning is a false positive; and if maintenance action is taken, the system attempts to learn from the associated maintenance records.
Incremental data pool and management
All newly generated standard asset data records, corresponding model reasoning results and explicit and implicit feedback together form an incremental learning data pool.
Incremental data value assessment: not all new data are equally important. The system evaluates the "information value" of each data sample:
uncertainty sampling: samples with low model prediction confidence are selected preferentially;
difference sampling: samples that differ significantly from the existing training data distribution, possibly representing new fault or asset types, are selected preferentially.
High-value samples are placed into the incremental learning data pool preferentially.
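A sketch of the uncertainty and difference sampling above; the thresholds and the precomputed distance-to-training-distribution field are illustrative assumptions.

```python
def select_high_value(samples, conf_threshold=0.6, dist_threshold=2.0):
    """Pick high-information-value samples for the incremental learning pool.

    samples: [{'id': ..., 'confidence': ..., 'dist_to_train': ...}, ...]
    Keep samples the model is unsure about (uncertainty sampling), or that
    lie far from the existing training distribution (difference sampling,
    possibly new fault or asset types).
    """
    return [
        s for s in samples
        if s["confidence"] < conf_threshold
        or s["dist_to_train"] > dist_threshold
    ]
```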
Safe and efficient online fine tuning
Triggering mechanism: an online fine-tuning process is triggered when the incremental learning data pool accumulates to a certain scale, or when the prediction accuracy of the model on recent data is monitored to decline continuously.
Elastic Weight Consolidation (EWC) technique: when performing online fine-tuning, the EWC algorithm computes the importance of each model parameter to the old knowledge already learned. When parameters are updated, constraints are applied to important parameters to prevent them from changing drastically, which effectively alleviates the catastrophic forgetting problem; that is, the model does not forget old knowledge while learning new knowledge.
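The EWC-regularized loss has a standard scalar form, shown below for flat parameter lists; in practice the Fisher importance values would be estimated from gradients on the old tasks, and the regularization strength `lam` is an illustrative assumption.

```python
def ewc_loss(task_loss, params, old_params, fisher, lam=0.4):
    """New-task loss plus the EWC penalty that anchors important parameters.

    fisher[i] estimates how important parameter i was for previously learned
    knowledge; large values make drastic changes to that parameter costly:
        L = L_task + (lam / 2) * sum_i F_i * (theta_i - theta_old_i)^2
    """
    penalty = sum(f * (p - po) ** 2
                  for f, p, po in zip(fisher, params, old_params))
    return task_loss + (lam / 2) * penalty
```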
Shadow mode and A/B testing: a newly fine-tuned model version does not immediately replace the online version; instead, it runs for a period of time in shadow mode, its predictions are compared with those of the online version, the effect is verified by a small-traffic A/B test, and the new version is fully brought online only after a performance improvement is confirmed, thereby ensuring system stability.
Model version management and rollback
Each version of the domain-adaptive AI large model is archived and managed, and its performance indicators are recorded. If a new version becomes problematic, a quick rollback to a stable version can be made.
Through this adaptive learning mechanism, the whole asset management method changes from a static system into a living system that grows together with the enterprise's IT environment and becomes increasingly intelligent, realizing true intelligence and sustainability.
S103, carrying out multidimensional early warning judgment based on the management suggestion list, calling the adjusted AI large model to generate structured asset early warning information according to an early warning signal triggered by a judgment result, and outputting an asset management work order.
In particular, a real-time or near-real-time standard asset data record is input, containing all of the basic features and enhancement features. Whether to trigger an early warning is comprehensively judged through the following dimensions:
Model prediction probability threshold: the failure probability and performance deterioration probability output by the adjusted AI large model. When a probability exceeds a set dynamic threshold (e.g., failure probability > 0.8), it serves as the primary trigger condition.
Feature deviation degree: the statistical deviation (such as a Z-score) of the current asset's feature values from the normal baseline of similar assets or from its own historical baseline is calculated. When several key features (such as CPU utilization and memory error count) deviate significantly, the early warning signal is strengthened in a gradient manner.
Association graph conduction effect: the asset association graph is analyzed. If an important asset (e.g., a core switch) triggers an early warning, the system evaluates the "risk conduction likelihood" of its associated assets (e.g., its downstream servers) and considers generating low-level associated early warnings in advance.
The early warning probability threshold is not fixed. The system dynamically adjusts it according to the historical early warning accuracy for that type of asset. For example, if the false alarm rate of a certain asset type (such as old hard disks) is high, the system automatically raises the early warning trigger threshold for that type to reduce noise.
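The probability-threshold and feature-deviation dimensions above can be sketched as follows; the baseline handling and the thresholds are illustrative assumptions.

```python
import statistics

def z_score(value, baseline):
    """Statistical deviation of a feature value from a normal baseline
    (list of historical observations for this asset or similar assets)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return (value - mu) / sigma

def should_trigger(fault_prob, feature_devs,
                   prob_threshold=0.8, z_threshold=3.0):
    """Primary trigger: model probability over its (dynamic) threshold.
    Signal strengthening: count of key features deviating significantly."""
    triggered = fault_prob > prob_threshold
    strength = sum(1 for z in feature_devs if abs(z) > z_threshold)
    return triggered, strength
```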
After an early warning is triggered, the system calls the adjusted AI large model and generates structured asset early warning information by combining the key feature combination that triggered the warning with the chained reasoning process. The information includes: an early warning unique identifier; a target asset identifier; an early warning type (such as "performance bottleneck", "potential fault", "security risk" or "configuration drift"); an early warning level (such as "emergency", "important", "warning" or "prompt"); a core problem description in compact natural language (such as "the disk IO delay of database server DB-01 has been continuously 3 standard deviations above the normal baseline, and the trend is still rising"); the key evidence features (a list of the most important features, and their values, that led to the warning); and an expected impact time window (based on time series prediction, the time range within which the problem may erupt). The early warning level is dynamically calculated by an algorithm rather than preset: the calculation comprehensively considers the model prediction probability value, the asset topology importance score and the criticality grade of the affected business. For example, a core database server with a prediction probability of 0.7 may receive a higher early warning level than a test environment server with a prediction probability of 0.9.
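A minimal sketch of such a dynamic level calculation; the combining formula, the 0.5 offsets and the level cut-offs are illustrative assumptions chosen only to reproduce the core-database-versus-test-server example above.

```python
def warning_level(prob, topo_score, biz_criticality):
    """Dynamic early warning level from model probability, topology
    importance score and business criticality (all assumed in [0, 1])."""
    score = prob * (0.5 + 0.5 * topo_score) * (0.5 + 0.5 * biz_criticality)
    if score >= 0.75:
        return "emergency"
    if score >= 0.5:
        return "important"
    if score >= 0.3:
        return "warning"
    return "prompt"
```

With these assumed weights, a core database server (probability 0.7, high importance) outranks a test server (probability 0.9, low importance), as in the text.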
A state machine is also maintained for each piece of asset early warning information, with states such as "active", "confirmed", "resolved", "false positive" and "auto-closed". To avoid "early warning storms", the system sets automatic convergence rules:
Similarity aggregation: multiple similar early warnings generated within a short time are aggregated into one based on the asset identifier, early warning type and feature similarity, and the occurrence frequency is marked.
Repeated verification of feature deviation degree: if an early warning is in its active period, the feature deviation degree is re-verified; if the corresponding key evidence features show that the problem has subsided on its own (e.g., CPU utilization has returned to normal), the system automatically lowers the early warning level or, after confirming stability, automatically closes the early warning and records the reason.
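The similarity aggregation rule can be sketched as grouping by asset identifier and early warning type within a time window, using exact key equality as a stand-in for the feature-similarity matching; the window length is an illustrative assumption.

```python
from collections import defaultdict

def aggregate_warnings(warnings, window=300):
    """Collapse similar warnings (same asset id + type) raised within a
    short window into one record carrying the occurrence count.

    warnings: [{'asset_id': ..., 'type': ..., 'ts': ...}, ...]
    """
    groups = defaultdict(list)
    for w in warnings:
        groups[(w["asset_id"], w["type"])].append(w)
    merged = []
    for ws in groups.values():
        ws.sort(key=lambda w: w["ts"])
        first = dict(ws[0])
        # Count only occurrences inside the window of the first alert.
        first["count"] = sum(1 for w in ws if w["ts"] - ws[0]["ts"] <= window)
        merged.append(first)
    return merged
```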
The regression task module of the adjusted AI large model is responsible for predicting the resource demand peaks of a specific future period (such as the next month or next quarter), e.g., the number of CPU cores, memory size and storage IOPS, by combining the asset's historical performance metric time series data, business prediction indexes (such as user growth) and current configuration information. By analyzing trends, the model identifies the type of resource about to become a bottleneck and its severity. For example, the memory requirement of application APP-X will exceed 150% of the currently allocated value within the next two months.
Various suggestion policy templates are preset, such as "vertical capacity expansion" (upgrading an existing server), "horizontal capacity expansion" (adding server instances), "load migration" (migrating to other resource pools) and "resource recovery" (reclaiming idle resources), while considering a number of often conflicting objectives:
target 1 performance target-ensure application performance SLA (service level agreement).
Goal 2 cost goal-minimize resource procurement or cloud service charges.
Target 3 stability target-minimize the risk of disruption to traffic due to changes.
For the identified bottlenecks, the execution effects of the different suggested policies are simulated. For example, for a virtual machine requiring capacity expansion, the system simultaneously simulates the two schemes "vertical expansion to 8 cores 16G" and "horizontal expansion to two 4-core 8G instances", and predicts the effect of each scheme on performance, cost and stability.
At the same time, the current asset early warning information is combined to output a set of Pareto-optimal allocation suggestions. For example:
Scheme A (cost first): perform resource tuning now; a 20% cost saving is expected, with a 5% risk of performance degradation.
Scheme B (performance first): expand capacity immediately; performance is fully guaranteed, but cost increases by 30%.
Scheme C (balanced): expand capacity next quarter and optimize the load in the meantime; cost is unchanged, with a 10% risk of performance degradation.
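Selecting such a Pareto-optimal set can be sketched as non-dominated filtering over the three objectives above. The scheme names echo the examples A, B, C (plus an assumed dominated scheme D); the objective encoding (lower is better on every axis) is an illustrative assumption, not the patented optimization.

```python
# Minimal Pareto-front filter over (cost change, performance risk,
# change/disruption risk), all lower-is-better. Scheme D is an assumed
# strictly-worse candidate added to show the filtering.

def pareto_front(schemes: dict[str, tuple[float, ...]]) -> set[str]:
    """Keep schemes that no other scheme dominates on every objective."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return {
        name for name, obj in schemes.items()
        if not any(dominates(other, obj)
                   for o_name, other in schemes.items() if o_name != name)
    }

# objectives: (relative cost change, perf-degradation risk, disruption risk)
candidates = {
    "A (cost first)":   (-0.20, 0.05, 0.6),  # saves cost, changes assets now
    "B (performance)":  (+0.30, 0.00, 0.4),  # expand immediately
    "C (balanced)":     ( 0.00, 0.10, 0.1),  # defer expansion to next quarter
    "D (dominated)":    (+0.35, 0.10, 0.5),  # worse than B on every axis
}
front = pareto_front(candidates)  # A, B, C survive; D is filtered out
```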
The system continuously learns from historical maintenance work orders, change records, and execution results (success/failure) to form a scheme knowledge base. Each historical scheme is deconstructed into a series of scheme elements, and the relationships between them are constructed. Elements include the problem type, asset type, operational actions (e.g., restart, replacement, configuration modification), required tools, expected duration, responsible-person roles, checkpoints, and so on. These interrelated elements form a scheme element graph.
Combining the asset early warning information and the allocation suggestion set, the current problem (including asset type, warning type, and key features) is matched for similarity against the historical cases in the scheme knowledge base to find the N most similar successful cases. General, proven scheme elements are extracted from the matched cases and then intelligently adapted and recombined with the specific context of the current asset (such as its unique software configuration and network environment) to generate a brand-new, highly customized intelligent management work order scheme. For example, for a "Web server CPU too high" warning, the system matches case A (optimize code) and case B (add caching). The generated scheme may be: first, check recent code changes, with reference to case A; second, add CDN caching for static resources, with reference to case B; and additionally, because the server runs containerized workloads, add a step to "check container resource limits".
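The case-retrieval step can be sketched as set-based similarity matching. Representing each problem and case as a set of features and ranking by Jaccard similarity is an assumed, deliberately simple stand-in for the patent's matching process; the case names mirror the Web-server example above.

```python
# Hedged sketch: retrieve the N historical cases most similar to the
# current problem, using Jaccard similarity over feature sets
# (asset type, warning type, key characteristics). Feature names and
# the similarity measure are illustrative assumptions.

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def top_n_cases(problem: set[str],
                knowledge_base: dict[str, set[str]], n: int) -> list[str]:
    ranked = sorted(knowledge_base,
                    key=lambda k: jaccard(problem, knowledge_base[k]),
                    reverse=True)
    return ranked[:n]

problem = {"web-server", "cpu-high", "containerized"}
knowledge_base = {
    "case A (optimize code)": {"web-server", "cpu-high", "recent-deploy"},
    "case B (add cache)":     {"web-server", "cpu-high", "static-resources"},
    "case C (disk failure)":  {"db-server", "disk-io", "raid"},
}
matches = top_n_cases(problem, knowledge_base, n=2)  # cases A and B
```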
The final intelligent management work order scheme comprises the following:
Work order title (clearly describing the problem and action);
Execution summary (a brief overview for the manager);
Detailed operation steps (step-by-step, executable instructions);
Rollback plan (how to recover if an operation fails);
Success criteria (how to verify that the operation succeeded);
Recommended execution time window (based on business impact analysis).
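The six components above can be expressed as a simple data structure. The field names and the sample values (WEB-01, the CDN step, the 80% threshold) are illustrative assumptions tied to the Web-server example; the patent does not prescribe a concrete schema.

```python
# Sketch of the work order scheme as a dataclass; all sample content is
# assumed for illustration, following the Web-server CPU example.
from dataclasses import dataclass

@dataclass
class ManagementWorkOrder:
    title: str             # clearly describes the problem and action
    summary: str           # brief overview for the manager
    steps: list[str]       # step-by-step, executable instructions
    rollback_plan: str     # how to recover if an operation fails
    success_criteria: str  # how to verify the operation succeeded
    execution_window: str  # recommended window from business impact analysis

order = ManagementWorkOrder(
    title="Mitigate high CPU on Web server WEB-01",
    summary="Check recent code changes, then add CDN caching for static assets.",
    steps=[
        "1. Review code changes deployed in the last 7 days (per case A).",
        "2. Enable CDN caching for static resources (per case B).",
        "3. Check container resource limits on WEB-01.",
    ],
    rollback_plan="Revert CDN config and restore the previous deployment.",
    success_criteria="CPU utilization below 80% for 30 consecutive minutes.",
    execution_window="Sat 02:00-04:00 (lowest traffic per impact analysis)",
)
```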
Where possible, the system may attempt to simulate the key steps of an execution scheme in a sandbox environment to verify its feasibility and discover potential problems in advance. The system may also collect feedback on the execution effect of the generated management work orders and use it to further train, optimize, and update the AI large model.
Through the above scheme, the system outputs not a simple alarm or generic suggestion, but a detailed, near-expert action plan that integrates data analysis, optimization algorithms, and historical intelligence, greatly improving the comprehensiveness and decision quality of IT management.
The second embodiment of the application is as follows:
Referring to fig. 4, the present invention provides an asset management system based on an AI large model and IT asset data, applied to the asset management method based on an AI large model and IT asset data provided in the first embodiment, comprising a data feature unification processing module 101, an AI large model integration and training module 102, and an asset management module 103.
The data feature unification processing module 101 is configured to collect raw data from a plurality of IT asset data sources, map and convert structured data therein, and write unstructured data therein into an asset data feature set after data enhancement;
The AI large model integration and training module 102 is configured to input the asset data feature set into the adjusted AI large model for analysis, and generate a management suggestion list;
the asset management module 103 is configured to perform multidimensional early warning judgment based on the management suggestion list, invoke the adjusted AI large model to generate structured asset early warning information according to an early warning signal triggered by the judgment result, and output an asset management work order.
The specific manner in which the various modules of the systems of the above embodiments perform their operations has been described in detail in the embodiments of the method and will not be repeated here.
For the system embodiments, since they essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present application. Those of ordinary skill in the art can understand and implement the present application without undue effort.
Correspondingly, the application further provides an electronic device comprising one or more processors and a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the asset management method based on the AI large model and IT asset data. Fig. 5 shows a hardware structure diagram of a device with data processing capability in which the asset management system based on the AI large model and IT asset data is located, according to an embodiment of the present application; in addition to the processor, memory, and network interface shown in fig. 5, the device may further include other hardware according to its actual function, which is not described herein.
Accordingly, the present application also provides a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the asset management method based on the AI large model and IT asset data as described above. The computer-readable storage medium may be an internal storage unit, such as a hard disk or memory, of any of the data-processing-capable devices described in the previous embodiments. It may also be an external storage device of the device, such as a plug-in hard disk, a smart media card (SMC), an SD card, or a flash card. Further, the computer-readable storage medium may include both the internal storage unit and the external storage device of any device having data processing capability. It is used to store the computer program as well as other programs and data required by the device, and may also be used to temporarily store data that has been output or is to be output.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof.