CN118714408B - Method and system for controlling intelligent viewing terminal equipment - Google Patents
Method and system for controlling intelligent viewing terminal equipment
- Publication number
- CN118714408B (application CN202411195040.6A)
- Authority
- CN
- China
- Prior art keywords
- control
- vector
- feature
- user
- viewing terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N21/4532 — Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
- G06F18/22 — Matching criteria, e.g. proximity measures
- G06F18/23213 — Non-hierarchical clustering techniques using statistics or function optimisation, with fixed number of clusters, e.g. K-means clustering
- G06N3/042 — Knowledge-based neural networks; Logical representations of neural networks
- G06N3/08 — Learning methods
- G06N5/022 — Knowledge engineering; Knowledge acquisition
- G06N5/04 — Inference or reasoning models
- H04N21/44218 — Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
- H04N21/4542 — Blocking scenes or portions of the received content, e.g. censoring scenes
- H04N21/4627 — Rights management associated to the content
- H04N21/4667 — Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
- H04N21/6543 — Transmission by server directed to the client for forcing some client operations, e.g. recording
Abstract
The invention provides a method and system for managing and controlling an intelligent viewing terminal device, relating to the technical field of intelligent terminals. The method comprises: establishing a secure communication connection, based on blockchain technology, with a cloud management and control platform, and receiving a management and control task issued by the platform; generating a management and control policy and structured control parameters based on the task; after the intelligent viewing terminal device receives the control parameters, analyzing the user's viewing behavior with a pre-trained, attention-based user-portrait extraction model, and judging through deep feature matching and correlation calculation whether the current viewing behavior satisfies the control conditions; and, if the control conditions are satisfied, dynamically deciding the optimal control action with a reinforcement learning algorithm according to the control behavior mode, generating a corresponding restrictive operation instruction, and issuing it to the intelligent viewing terminal device to intervene in and guide the current viewing behavior.
Description
Technical Field
The invention relates to intelligent terminal technology, and in particular to a method and system for managing and controlling an intelligent viewing terminal device.
Background
With the popularization of intelligent viewing terminal devices such as smart televisions and set-top boxes, users obtain a wide range of viewing programs and services through these devices, which have become an indispensable part of daily life. Alongside this convenience, however, several problems urgently need to be solved: minors become addicted to television or are exposed to inappropriate content, the health of some users suffers from excessive viewing time, and copyright protection of audiovisual content is inadequate. How to effectively manage and control intelligent viewing terminal devices, guide users toward reasonable use, and ensure content safety has therefore become an important focus of the industry.
Traditional approaches to managing and controlling intelligent viewing terminal devices mainly include the following:
(1) Viewing time limits: the user sets a viewing-duration limit for a specific time period through the device's control function, and when the set duration is exceeded, the device automatically shuts down or locks. However, this method is inflexible and cannot be adjusted dynamically according to the user's actual situation.
(2) Content rating and restriction: audiovisual content is rated, the content levels viewable by different age groups are set, and when a user selects content beyond their age group, a password or parental authorization must be entered. This approach can, to some extent, prevent minors from accessing inappropriate content, but rating criteria are difficult to unify and are easily bypassed.
(3) Blacklists and whitelists: the device presets, or the user defines, blacklists and whitelists of channels and programs; blacklisted content cannot be watched, while whitelisted content can be watched freely. The control effect is clear, but setting up and maintaining the lists is cumbersome, and content outside the lists cannot be judged.
(4) Hardware locks: a physical or electronic lock is installed on the device, which cannot be used without unlocking via a key or password. This approach is highly reliable but inconvenient to use, and once the lock is broken, control fails.
Each of these traditional control modes has its own advantages and disadvantages, and all show certain limitations in practical application.
Disclosure of Invention
Embodiments of the invention provide a method and system for managing and controlling an intelligent viewing terminal device, which aim to solve at least some of the problems in the prior art.
In a first aspect of an embodiment of the present invention,
There is provided a method for managing and controlling an intelligent viewing terminal device, comprising the following steps:
establishing a secure communication connection, based on blockchain technology, with a cloud management and control platform, and receiving a management and control task issued by the platform, wherein the task comprises a management and control rule base, a list of managed objects, and a management and control policy generation model; and, based on the task, performing feature extraction and grouping identification on the intelligent viewing terminal device using a knowledge graph of intelligent viewing terminal devices and a machine learning model, to generate a management and control policy and structured control parameters;
after the intelligent viewing terminal device receives the control parameters, analyzing the user's viewing behavior with a pre-trained, attention-based user-portrait extraction model, and judging through deep feature matching and correlation calculation whether the current viewing behavior satisfies the control conditions; if the control conditions are not satisfied, entering the next round of real-time data sensing and analysis, and cyclically executing the step of analyzing the user's viewing behavior with the pre-trained model and the subsequent steps until the current system time exceeds the control time interval;
if the control conditions are satisfied, dynamically deciding the optimal control action with a reinforcement learning algorithm according to the control behavior mode, generating a corresponding restrictive operation instruction, and issuing it to the intelligent viewing terminal device to intervene in and guide the current viewing behavior; the restrictive operation instruction includes at least one of: limiting channel switching, limiting volume adjustment, limiting power on/off, popping up a control prompt, forcibly switching to a designated channel, forcibly playing designated content, intelligently recommending suitable content, dynamically adjusting control intensity, and adding friendly reminder interactions.
In a second aspect of an embodiment of the present invention,
there is provided a system for managing and controlling an intelligent viewing terminal device, comprising:
a first unit, configured to establish a secure communication connection, based on blockchain technology, with a cloud management and control platform and receive a management and control task issued by the platform, wherein the task comprises a management and control rule base, a list of managed objects, and a management and control policy generation model; and, based on the task, to perform feature extraction and grouping identification on the intelligent viewing terminal device using a knowledge graph of intelligent viewing terminal devices and a machine learning model, to generate a management and control policy and structured control parameters;
a second unit, configured to, after the intelligent viewing terminal device receives the control parameters, analyze the user's viewing behavior with a pre-trained, attention-based user-portrait extraction model, and judge through deep feature matching and correlation calculation whether the current viewing behavior satisfies the control conditions; and, if the control conditions are not satisfied, to enter the next round of real-time data sensing and analysis, cyclically executing the step of analyzing the user's viewing behavior with the pre-trained model and the subsequent steps until the current system time exceeds the control time interval;
a third unit, configured to, if the control conditions are satisfied, dynamically decide the optimal control action with a reinforcement learning algorithm according to the control behavior mode, generate a corresponding restrictive operation instruction, and issue it to the intelligent viewing terminal device to intervene in and guide the current viewing behavior; the restrictive operation instruction includes at least one of: limiting channel switching, limiting volume adjustment, limiting power on/off, popping up a control prompt, forcibly switching to a designated channel, forcibly playing designated content, intelligently recommending suitable content, dynamically adjusting control intensity, and adding friendly reminder interactions.
In a third aspect of an embodiment of the present invention,
There is provided an electronic device including:
A processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method described previously.
In a fourth aspect of an embodiment of the present invention,
There is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method as described above.
Introducing a knowledge graph to build semantically enhanced feature representations of the intelligent viewing terminal device allows the multidimensional features of devices and users to be fully mined, laying a foundation for subsequent control. A graph neural network learns hidden feature representations of the managed objects in the feature graph, and a clustering algorithm realizes control grouping, so that managed objects can be grouped adaptively according to control requirements. The generated control policy is constrained and optimized with the control rule base, improving its interpretability and executability, and trusted issuing of the control parameters through a smart contract ensures the security of control.
The control-condition judgment model, based on deep feature matching and correlation calculation, comprehensively considers user features and viewing content, dynamically judges whether the control conditions are satisfied, and thereby realizes intelligent control of viewing behavior. The model introduces word embeddings and attention mechanisms to improve the accuracy of user-portrait matching and content-correlation calculation. Meanwhile, the strictness of control can be adjusted flexibly through weighted fusion and threshold comparison, meeting different control requirements.
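As an illustration of the weighted fusion and threshold comparison described above, the following Python sketch combines a user-portrait match score and a content-relevance score. The function and parameter names (`w_user`, `w_content`, `threshold`) are hypothetical; the patent does not specify the exact formula, so this is only a plausible instantiation using cosine similarity.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def control_condition_met(user_vec, portrait_vec, content_vec, target_vec,
                          w_user=0.6, w_content=0.4, threshold=0.7):
    """Weighted fusion of user-portrait matching and content-correlation
    scores, compared against a configurable strictness threshold."""
    user_match = cosine_sim(user_vec, portrait_vec)    # deep feature matching
    content_rel = cosine_sim(content_vec, target_vec)  # correlation calculation
    score = w_user * user_match + w_content * content_rel
    return score >= threshold, score
```

Raising `threshold` (or the weight of either term) tightens control; lowering it relaxes control, which is one way the "control strictness" knob could be exposed.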
Drawings
Fig. 1 is a flow chart of a control method of an intelligent viewing terminal device according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a management and control system of an intelligent viewing terminal device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The technical scheme of the invention is described in detail below through specific examples. The following embodiments may be combined with each other, and identical or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a flow chart of a control method of an intelligent viewing terminal device according to an embodiment of the present invention, as shown in fig. 1, where the method includes:
S101: establishing a secure communication connection, based on blockchain technology, with a cloud management and control platform, and receiving a management and control task issued by the platform, wherein the task comprises a management and control rule base, a list of managed objects, and a management and control policy generation model; and, based on the task, performing feature extraction and grouping identification on the intelligent viewing terminal device using a knowledge graph of intelligent viewing terminal devices and a machine learning model, to generate a management and control policy and structured control parameters;
S102: after the intelligent viewing terminal device receives the control parameters, analyzing the user's viewing behavior with a pre-trained, attention-based user-portrait extraction model, and judging through deep feature matching and correlation calculation whether the current viewing behavior satisfies the control conditions; if the control conditions are not satisfied, entering the next round of real-time data sensing and analysis, and cyclically executing the step of analyzing the user's viewing behavior with the pre-trained model and the subsequent steps until the current system time exceeds the control time interval;
S103: if the control conditions are satisfied, dynamically deciding the optimal control action with a reinforcement learning algorithm according to the control behavior mode, generating a corresponding restrictive operation instruction, and issuing it to the intelligent viewing terminal device to intervene in and guide the current viewing behavior; the restrictive operation instruction includes at least one of: limiting channel switching, limiting volume adjustment, limiting power on/off, popping up a control prompt, forcibly switching to a designated channel, forcibly playing designated content, intelligently recommending suitable content, dynamically adjusting control intensity, and adding friendly reminder interactions.
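The sense-judge-act loop of S102/S103 can be sketched in Python as follows. All names here (`params`, `blocked`, the injected callbacks) are hypothetical; the real decision step would be a reinforcement-learning policy, which is stubbed out as a callback for clarity.

```python
def control_loop(params, sensed_behaviors, condition_met, decide_action):
    """Loop over sensed viewing behaviors until the control time interval
    expires; when the control condition is met, decide and record a
    restrictive operation instruction. Returns the instructions issued."""
    issued = []
    for t, behavior in sensed_behaviors:          # (system time, sensed behavior)
        if not (params["start"] <= t <= params["end"]):
            break                                 # control time interval exceeded
        if condition_met(behavior, params):       # deep matching / correlation step
            issued.append(decide_action(behavior, params))  # RL decision (stubbed)
    return issued

# Illustrative use: block a channel named "cartoon" inside the interval [0, 10].
params = {"start": 0, "end": 10, "blocked": {"cartoon"}}
sensed = [(1, {"channel": "news"}), (2, {"channel": "cartoon"}),
          (99, {"channel": "cartoon"})]           # last sample is out of interval
issued = control_loop(params, sensed,
                      condition_met=lambda b, p: b["channel"] in p["blocked"],
                      decide_action=lambda b, p: ("limit_channel_switch", b["channel"]))
```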
In an alternative embodiment of the present invention,
Based on the management and control task, the intelligent viewing terminal equipment knowledge graph and the machine learning model are utilized to carry out feature extraction and grouping identification on the intelligent viewing terminal equipment, and a management and control strategy and a structured management and control parameter are generated, which comprises the following steps:
using the managed-object list, in combination with a pre-constructed knowledge graph of intelligent viewing terminal devices, performing feature extraction and vectorized representation of the multidimensional features of the current device through knowledge reasoning and semantic association analysis, to generate a feature graph of the current device, wherein the multidimensional features include any one or more of hardware configuration, software system, network environment, and user-group characteristics;
constructing a graph neural network; inputting the feature graph into the network, and dividing different intelligent viewing terminal devices into corresponding control groups by applying a k-means or density-based clustering algorithm to the feature vectors output by the network; for the grouping result, measuring intra-group cohesion and inter-group separation with clustering evaluation indices including the silhouette coefficient and the Calinski-Harabasz index, estimating the reliability of the clustering, and taking groups whose reliability exceeds a preset confidence threshold as the identified control groups;
for each identified control group, generating an optimized control policy using a pre-trained policy generation model, combined with the rule templates and constraints in the control rule base, the policy covering control time planning, control content recommendation, control mode selection, and control intensity adjustment; and converting the generated policy into structured control parameters and securely issuing them to the intelligent viewing terminal device through a smart contract mechanism, the parameters including a control time interval, a controlled channel set, a controlled program list, a controlled user portrait, and a control behavior mode.
Illustratively, the method uses the pre-constructed knowledge graph of intelligent viewing terminal devices, together with the managed-object list, to extract and vectorize the multidimensional features of the device. The specific implementation steps are as follows.
Constructing the knowledge graph of intelligent viewing terminal devices. A knowledge graph is a structured semantic network composed of three basic elements: entities (Entity), relations (Relation), and attributes (Attribute). The knowledge graph for this domain is constructed by crawling public data on the Internet, such as smart-TV model parameters and user reviews, and extracting entities, relations, and attributes with natural language processing techniques. For the construction process, reference can be made to methods in the literature.
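A minimal sketch of the entity/relation/attribute structure, with made-up device and user identifiers (the patent does not give a concrete schema). Triples are stored as `(subject, relation, object)` tuples, and a one-hop lookup stands in for the semantic association analysis used later:

```python
# Hypothetical triples for a smart-TV knowledge graph; names are illustrative.
triples = [
    ("SmartTV-X1", "has_attribute", ("screen_size", "55in")),
    ("SmartTV-X1", "runs", "AndroidTV-11"),
    ("SmartTV-X1", "reviewed_by", "user_123"),
    ("user_123", "has_attribute", ("age_group", "minor")),
]

def neighbors(graph, entity):
    """One-hop lookup: all (relation, object) pairs attached to an entity."""
    return [(r, o) for s, r, o in graph if s == entity]
```

Chaining such lookups (device → reviewer → age group) is the kind of knowledge reasoning that lets user-group characteristics be attached to a device.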
Extracting multidimensional features. Given the managed-object list, i.e., the set of intelligent viewing terminal devices to be controlled, the multidimensional features related to the managed objects are extracted from the knowledge graph using knowledge reasoning and semantic association analysis. The multidimensional features considered here include:
hardware configuration: parameters such as screen size, resolution, CPU model, memory size, storage capacity and the like are included;
Software system: the information comprises an operating system version, a middleware version, an application program list and the like;
Network environment: parameters including access network type (e.g., WIFI, 4G), network bandwidth, latency, etc.;
User group characteristics: including statistics such as user age bracket distribution, gender ratio, viewing preferences, etc.
Feature vectorization. The extracted multidimensional features are numericized and standardized to generate feature vectors for subsequent grouping identification and control-policy generation. Common feature vectorization methods include One-Hot encoding and Word2Vec. One-Hot encoding is suitable for discrete features, such as operating-system type; Word2Vec is suitable for text features, such as user tags.
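The One-Hot encoding and standardization steps can be sketched as follows (the example values, such as the OS vocabulary and screen sizes, are invented for illustration):

```python
import numpy as np

def one_hot(value, vocabulary):
    """One-Hot encode a discrete feature such as operating-system type."""
    vec = np.zeros(len(vocabulary))
    vec[vocabulary.index(value)] = 1.0
    return vec

def standardize(x):
    """Zero-mean, unit-variance scaling for numeric features."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

os_types = ["Android", "Linux", "RTOS"]          # hypothetical vocabulary
screen_z = standardize([55, 43, 65, 50])         # e.g. screen sizes in inches
feature_vec = np.concatenate([one_hot("Android", os_types), screen_z[:1]])
```

Concatenating the encoded discrete features with the standardized numeric ones yields the per-device feature vector used in the feature graph.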
Generating the feature graph. The managed objects and their feature vectors are constructed into a heterogeneous graph (Heterogeneous Graph), called the feature graph. It contains two types of nodes, managed-object nodes and feature nodes, and the edges connecting them. The heterogeneous graph fuses the structural and semantic information of the managed objects well, providing a richer feature representation for subsequent grouping identification.
After generating the feature graph of the intelligent viewing terminal devices, it is proposed herein to identify groups of managed objects using a graph neural network (Graph Neural Network, GNN). A GNN is a deep learning model for graph-structured data that can effectively learn feature representations of the nodes in a graph. The grouping identification method adopted herein mainly comprises the following steps:
Constructing the graph neural network. A graph convolutional network (Graph Convolutional Network, GCN) is used to learn hidden feature representations of the nodes in the feature graph. The GCN implements the convolution operation with the graph Laplacian matrix and can aggregate both the structural and attribute information of the nodes. A two-layer GCN is built: the first layer performs low-dimensional feature learning on the managed-object and feature nodes, and the second layer generates the final feature representation of the managed-object nodes on that basis.
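A forward pass of such a two-layer GCN can be sketched in plain NumPy. This is only the standard symmetric-normalized propagation rule (Kipf and Welling's formulation) with random weights, not the patent's trained model; the toy adjacency matrix and dimensions are assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)                # toy feature-graph adjacency
H0 = rng.normal(size=(4, 8))                             # initial node features
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
H2 = gcn_layer(A, gcn_layer(A, H0, W1), W2)              # final node embeddings
```

The rows of `H2` are the managed-object feature vectors that the clustering step below would consume.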
Applying the clustering algorithm. Based on the managed-object feature vectors output by the GCN, the managed objects are grouped with an unsupervised clustering algorithm. The k-means algorithm and the density-based DBSCAN algorithm were chosen for experimental comparison: k-means requires the number of clusters k to be specified in advance, while DBSCAN generates clusters adaptively. In both cases, Euclidean distance is used to measure the similarity between objects.
Clustering evaluation. To evaluate the clustering effect, two indices are used herein: the silhouette coefficient (Silhouette Coefficient) and the Calinski-Harabasz index. The silhouette coefficient measures the cohesion and separation of clusters; its value ranges over [-1, 1], and larger values indicate a better clustering. The Calinski-Harabasz index evaluates cluster quality from the intra-class and inter-class variances, with larger values again indicating better results. A result whose clustering effect meets a preset threshold (e.g., silhouette coefficient greater than 0.5) is considered a trusted control group.
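The k-means grouping with Euclidean distance and the silhouette check can be sketched together in NumPy (in practice scikit-learn's `KMeans` and `silhouette_score` would likely be used; this self-contained version just makes the definitions explicit):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means with Euclidean distance; returns cluster labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

def silhouette(X, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per sample,
    where a is mean intra-cluster distance and b the nearest other cluster."""
    scores = []
    for i, x in enumerate(X):
        same = X[labels == labels[i]]
        a = np.linalg.norm(same - x, axis=1).sum() / max(len(same) - 1, 1)
        b = min(np.linalg.norm(X[labels == c] - x, axis=1).mean()
                for c in set(labels.tolist()) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

With two well-separated groups of devices the mean silhouette comfortably exceeds the 0.5 threshold, so the grouping would count as trusted.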
After identifying the trusted management and control group, the present document further utilizes a machine learning method to generate corresponding management and control policies and management and control parameters for different groups. The specific implementation steps are as follows:
and constructing a management and control strategy generation model. The generation of the management and control strategy can be regarded as a sequence generation problem, and is thus implemented herein using a Transformer-based sequence-to-sequence (Seq2Seq) model. Firstly, the Seq2Seq model is pre-trained with existing management and control strategy samples (such as management and control time planning, management and control content recommendation and the like), so that the model acquires the basic capability of management and control strategy generation. Then, for an identified management and control group, the common characteristics of the management and control objects in the group (such as user group preference, hardware configuration level and the like) are extracted and input into the Seq2Seq model to generate a management and control strategy sequence adapted to the group.
And (5) optimizing a management and control strategy. In order to improve the feasibility and rationality of the generated control strategy, a control rule base is introduced to restrict and optimize the generated result. A series of rule templates and constraint conditions are predefined in the management rule base, such as 'management time cannot exceed 6 hours per day', 'management content must conform to laws and regulations', and the like. And matching the control strategy generated by the Seq2Seq model with a rule base, filtering out strategies which do not meet constraint conditions, and filling and perfecting the strategies by utilizing rule templates.
And (5) generating management and control parameters. And extracting structured control parameters including a control time interval, a control channel set, a control program list, a control user portrait, a control behavior mode and the like according to the optimized control strategy. These control parameters can be directly applied to the control execution of the intelligent viewing terminal device.
And (5) controlling and issuing. The management and control parameters are issued safely and credibly to the corresponding intelligent viewing terminal equipment through a smart contract mechanism. A smart contract is a trusted execution environment based on blockchain technology, and can ensure the integrity and tamper-resistance of the management and control parameters. The smart contract for management and control issuance is implemented on the Ethereum platform, and includes functions such as encryption, signing, and verification of the management and control parameters, so that the safe issuing and execution of the management and control strategy are ensured.
The knowledge graph is introduced to carry out semantic enhancement feature representation on the intelligent viewing terminal equipment, so that multidimensional features of the equipment and the user can be fully mined, and a foundation is laid for subsequent management and control; the hidden characteristic representation of the control object in the characteristic map is learned by adopting a graph neural network, and the control grouping is realized by utilizing a clustering algorithm, so that the control object can be adaptively grouped according to the control requirement; the generated control strategy is constrained and optimized by using the control rule base, so that the interpretability and the executable performance of the strategy are improved; the trusted issuing of the control parameters is realized through the intelligent contract, so that the safety of control is ensured.
In an alternative embodiment of the present invention,
And carrying out feature extraction and vectorization representation on the multidimensional features of the current intelligent viewing terminal equipment by utilizing the management and control object list and combining with a pre-constructed knowledge graph of the intelligent viewing terminal equipment through knowledge reasoning and semantic association analysis, and generating a feature graph of the current intelligent viewing terminal equipment, wherein the feature graph comprises the following steps:
Mapping the current intelligent viewing terminal equipment in the management and control object list to corresponding entity nodes in the knowledge graph of the intelligent viewing terminal equipment; generating a multi-hop associated subgraph by using the entity node of the current intelligent viewing terminal equipment as a central node through a random walk algorithm, wherein the multi-hop associated subgraph comprises entity nodes and relationship edges which have semantic association with the central node; based on the multi-hop associated subgraph, learning knowledge enhancement feature representation of the entity node of the current intelligent viewing terminal equipment through message transmission and aggregation operation in a graph neural network model;
Acquiring multi-dimensional heterogeneous attribute information of the entity node of the current intelligent viewing terminal equipment, and extracting a structured multi-dimensional characteristic through a characteristic template; splicing the multidimensional features of the entity nodes of the current intelligent viewing terminal equipment with the knowledge enhancement feature representations, and learning importance weights of the features with different dimensions through an attention mechanism to generate a comprehensive feature embedding vector of the entity nodes of the current intelligent viewing terminal equipment; inputting the comprehensive feature embedded vector of the entity node in the multi-hop associated subgraph into a graph neural network model, and updating the comprehensive feature embedded vector through multi-layer message transmission and feature aggregation;
Optimizing the comprehensive feature embedded vector of the entity node of the current intelligent viewing terminal equipment through a contrastive learning model; calculating the pairwise similarity between the optimized comprehensive feature embedded vectors of the entity nodes of the current intelligent viewing terminal equipment, and constructing a similarity matrix; performing feature decomposition on the similarity matrix to obtain feature vectors corresponding to the first m maximum feature values, wherein the feature vectors are used as low-dimensional representation of the comprehensive feature embedding vectors; and carrying out nonlinear dimension reduction on the low-dimensional representation of the comprehensive feature embedded vectors by using a t-distributed stochastic neighbor embedding algorithm, minimizing the KL divergence among nodes in the low-dimensional space by gradient descent optimization, obtaining node coordinates on a two-dimensional plane, and generating a visualized feature map of the current intelligent viewing terminal equipment.
Illustratively, the application utilizes knowledge graph and deep learning technology to mine the features of the intelligent viewing terminal equipment from multiple dimensions, generate comprehensive feature embedding vectors and finally construct a visualized feature graph. The specific implementation steps are as follows:
First, mapping the current intelligent viewing terminal equipment in the management and control object list to the corresponding entity node in the pre-constructed knowledge graph of the intelligent viewing terminal equipment. The knowledge graph is a structured semantic network, and consists of three basic elements, namely an Entity (Entity), a relationship (Relation) and an Attribute (Attribute). And establishing association between the control object and the entity node in the knowledge graph through mapping operation.
And then, taking the entity node of the current intelligent viewing terminal equipment as a central node, and generating a multi-hop associated subgraph through a random walk algorithm. The random walk algorithm is a common graph sampling method, and a local subgraph related to a central node is acquired by random walk in the graph. Specifically, starting from the central node, randomly selecting the neighbor nodes to walk with a certain probability until the preset hop count or coverage requirement is reached. The generated multi-hop associative subgraph comprises entity nodes with semantic associations with the central node and relationship edges connecting them.
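A toy random-walk sampler along these lines might look as follows; the graph, node names, walk length, and walk count are all illustrative assumptions:

```python
# Sketch: sampling a multi-hop associated subgraph around a central
# entity node via random walks over a toy knowledge-graph neighborhood.
import random

graph = {                       # illustrative adjacency list
    "device_1": ["user_A", "model_X"],
    "user_A": ["device_1", "pref_sports"],
    "model_X": ["device_1", "vendor_V"],
    "pref_sports": ["user_A"],
    "vendor_V": ["model_X"],
}

def sample_subgraph(center, walk_len=4, n_walks=20, seed=0):
    """Collect nodes reached by short random walks from the center."""
    rng = random.Random(seed)
    nodes = {center}
    for _ in range(n_walks):
        cur = center
        for _ in range(walk_len):          # bounded hop count
            cur = rng.choice(graph[cur])   # uniform neighbor choice
            nodes.add(cur)
    # keep only edges whose endpoints were both visited
    edges = {(u, v) for u in nodes for v in graph[u] if v in nodes}
    return nodes, edges

nodes, edges = sample_subgraph("device_1")
print(sorted(nodes))
```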
Based on the multi-hop associated subgraph, the knowledge-enhanced feature representation of the entity node of the current intelligent viewing terminal equipment is learned through a graph neural network (Graph Neural Network, GNN) model. GNN is a deep learning model for graph-structured data, and can effectively aggregate the structural information and attribute information of nodes.
A graph attention network (Graph Attention Network, GAT) is employed herein to implement feature learning. The GAT distributes different weights to the neighbor nodes through the attention mechanism, and can adaptively aggregate important neighbor information. Specifically, GAT is applied on the multi-hop associative subgraph, and the characteristic representation of the node is updated through multi-layer message passing and aggregation operations. The calculation formula of each layer is as follows:
$h_i^{(l+1)} = \sigma\Big(\sum_{j \in N_i} \alpha_{ij}^{(l)} W^{(l)} h_j^{(l)}\Big)$;
wherein $h_i^{(l+1)}$ represents the feature vector of node $i$ at layer $l+1$, $N_i$ represents the set of neighbor nodes of node $i$, $\alpha_{ij}^{(l)}$ represents the attention weight between node $i$ and neighbor node $j$, $W^{(l)}$ represents the linear transformation matrix of the $l$-th layer, and $\sigma(\cdot)$ represents an activation function.
Through the calculation of multi-layer GAT, the knowledge-enhanced feature representation of the entity node of the current intelligent viewing terminal equipment can be obtained, denoted $h_{center}^{(L)}$, where $L$ is the number of GAT layers.
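A single GAT layer of this form can be sketched in NumPy; the toy sizes and the additive attention score (from the original GAT paper) are assumptions, and the weights are untrained random stand-ins:

```python
# Minimal single GAT layer: h_i' = sigma(sum_j alpha_ij W h_j), with
# additive attention scores e_ij = LeakyReLU(a^T [Wh_i || Wh_j]).
import numpy as np

def leaky_relu(x, a=0.2): return np.where(x > 0, x, a * x)
def softmax(x): e = np.exp(x - x.max()); return e / e.sum()

def gat_layer(H, adj, W, attn_vec):
    """H: (N,F) node features; adj: (N,N) 0/1 adjacency with self-loops."""
    Z = H @ W                               # linear transform, (N,F')
    out = np.zeros((H.shape[0], W.shape[1]))
    for i in range(H.shape[0]):
        nbrs = np.flatnonzero(adj[i])
        scores = np.array([leaky_relu(attn_vec @ np.concatenate([Z[i], Z[j]]))
                           for j in nbrs])
        alpha = softmax(scores)             # attention weights over neighbors
        out[i] = np.maximum(0, alpha @ Z[nbrs])   # ReLU activation
    return out

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))                  # 4 nodes, 3 input features
adj = np.ones((4, 4))                        # fully connected toy graph
W = rng.normal(size=(3, 5))
a = rng.normal(size=10)                      # attention vector, size 2*F'
H1 = gat_layer(H, adj, W, a)
print(H1.shape)                              # (4, 5)
```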
In addition to semantic information in the knowledge graph, the intelligent viewing terminal device also has heterogeneous properties of multiple dimensions, such as hardware configuration, software version, network environment and the like. In order to fully utilize the attribute information, a multi-dimensional feature extraction and fusion method is provided.
Firstly, obtaining multidimensional heterogeneous attribute information of entity nodes of the current intelligent viewing terminal equipment, and extracting structured multidimensional features through a feature template. Feature templates are a rule-based feature extraction method that matches and extracts attribute values by defining a series of templates. For example, for a hardware configuration dimension, "CPU model" may be defined: { model }, "memory size: { size } "and the like, and then extracting the corresponding value from the attribute information.
And then, the extracted multidimensional features are spliced with the knowledge-enhanced feature representation to obtain a comprehensive feature vector. To learn the importance weights of the features of different dimensions, an attention mechanism is introduced herein. Specifically, the comprehensive feature vector is subjected to linear transformation and softmax normalization to obtain the attention weight of each dimension's feature. Finally, the comprehensive feature vector is multiplied by the attention weights to obtain a weighted, fused comprehensive feature embedded vector, denoted e_center.
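The splice-and-attend fusion step might be sketched as follows; the vector sizes are toy choices and the transformation matrix is an untrained random stand-in:

```python
# Sketch: fusing template-extracted attribute features with the
# knowledge-enhanced representation, weighting each dimension by an
# attention score obtained via linear transform + softmax.
import numpy as np

def softmax(x): e = np.exp(x - x.max()); return e / e.sum()

rng = np.random.default_rng(1)
h_kg = rng.normal(size=8)        # knowledge-enhanced representation
x_attr = rng.normal(size=4)      # structured attribute features
v = np.concatenate([h_kg, x_attr])         # spliced comprehensive vector

W_attn = rng.normal(size=(12, 12))         # linear transform (untrained)
alpha = softmax(W_attn @ v)                # per-dimension attention weights
e_center = alpha * v                       # weighted fused embedding
print(e_center.shape)                      # (12,)
```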
In order to further mine the association information between the nodes in the multi-hop association subgraph, the graph neural network model is applied again, and message transmission and feature aggregation are carried out on the comprehensive feature embedded vectors of the nodes.
Specifically, the comprehensive feature embedded vectors of all nodes in the multi-hop associated subgraph form a matrix $E \in \mathbb{R}^{N \times d}$, where $N$ is the number of nodes in the subgraph and $d$ is the embedding dimension. Then, a multi-layer convolution operation is performed on the feature matrix by using a graph convolutional network (Graph Convolutional Network, GCN), and the feature representations of the nodes are updated. The calculation formula of each layer is as follows:
$H^{(l+1)} = \sigma\big(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)}\big)$;
wherein $H^{(l)}$ represents the node feature matrix of the $l$-th layer (with $H^{(0)} = E$), $\tilde{A}$ represents the adjacency matrix of the subgraph with self-loops, and $\tilde{D}$ represents the degree matrix of $\tilde{A}$.
The updated node characteristic matrix can be obtained through the calculation of the multi-layer GCN, each row in the updated node characteristic matrix corresponds to a characteristic vector of a node, and the characteristics of the node and the characteristic information of the neighbor nodes are integrated.
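A single propagation step of this GCN update can be sketched in NumPy; the 3-node subgraph and random weights are toy stand-ins:

```python
# Minimal GCN propagation step: H' = sigma(D~^{-1/2} A~ D~^{-1/2} H W),
# with self-loops added to the subgraph adjacency matrix.
import numpy as np

def gcn_layer(H, A, W):
    A_tilde = A + np.eye(A.shape[0])           # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)            # D~^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization
    return np.maximum(0, A_hat @ H @ W)        # ReLU activation

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # 3-node subgraph
H = rng.normal(size=(3, 4))                    # comprehensive embeddings
W = rng.normal(size=(4, 2))
H1 = gcn_layer(H, A, W)
print(H1.shape)                                # (3, 2)
```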
To further optimize the comprehensive feature embedding vector of the entity node of the current intelligent viewing terminal equipment, a contrastive learning (Contrastive Learning) model is introduced. Contrastive learning is an unsupervised representation learning method that learns a feature representation of the data by maximizing the similarity of positive sample pairs and minimizing the similarity of negative sample pairs.
Specifically, the comprehensive feature embedded vector of the entity node of the current intelligent viewing terminal equipment is used as an anchor (Anchor), and positive and negative sample nodes are sampled from the multi-hop associated subgraph. Positive sample nodes are nodes highly correlated with the anchor, and negative sample nodes are nodes uncorrelated or only weakly correlated with the anchor. The comprehensive feature embedding vector is then optimized by minimizing the contrastive loss function:
$\mathcal{L} = -\log \dfrac{\exp(\mathrm{sim}(e_{center}, e^{+})/\tau)}{\exp(\mathrm{sim}(e_{center}, e^{+})/\tau) + \sum_{i=1}^{K} \exp(\mathrm{sim}(e_{center}, e_{i}^{-})/\tau)}$;
wherein $e^{+}$ represents the feature embedding vector of the positive sample node, $e_{i}^{-}$ represents the feature embedding vector of the $i$-th negative sample node, $K$ is the number of negative samples, $\mathrm{sim}(\cdot)$ represents the similarity function, and $\tau$ represents the temperature hyperparameter.
The optimized comprehensive feature embedding vector of the entity node of the current intelligent viewing terminal equipment can then be obtained by minimizing the contrastive loss function via gradient descent.
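The contrastive objective can be sketched as an InfoNCE-style loss; cosine similarity, the temperature value, and the random vectors are illustrative assumptions:

```python
# Sketch of the contrastive loss from the text: one anchor embedding,
# one positive sample, K negatives, cosine similarity, temperature tau.
import numpy as np

def cos_sim(a, b): return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def info_nce(anchor, pos, negs, tau=0.1):
    pos_term = np.exp(cos_sim(anchor, pos) / tau)
    neg_terms = sum(np.exp(cos_sim(anchor, n) / tau) for n in negs)
    return -np.log(pos_term / (pos_term + neg_terms))

rng = np.random.default_rng(0)
anchor = rng.normal(size=16)
pos = anchor + 0.05 * rng.normal(size=16)       # highly correlated node
negs = [rng.normal(size=16) for _ in range(8)]  # unrelated nodes

loss = info_nce(anchor, pos, negs)
print(round(float(loss), 4))   # small: the positive pair dominates
```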
On the basis of the optimized comprehensive feature embedded vector, the pairwise similarity between the entity node of the current intelligent viewing terminal equipment and other nodes is calculated, and a similarity matrix S is constructed.
And then, carrying out feature decomposition on the similarity matrix to obtain feature vectors corresponding to the first m maximum feature values, and using the feature vectors as low-dimensional representation of the comprehensive feature embedding vector. Feature decomposition is a commonly used dimension reduction method that can map high-dimensional data into low-dimensional space while preserving the main features of the data.
Finally, the low-dimensional representation E_m of the comprehensive feature embedding vectors is subjected to nonlinear dimension reduction using the t-distributed stochastic neighbor embedding (t-Distributed Stochastic Neighbor Embedding, t-SNE) algorithm and mapped onto a two-dimensional plane. t-SNE is a manifold learning algorithm that preserves the local structure of the data by minimizing the KL divergence between the node similarity distributions in the high-dimensional and low-dimensional spaces.
In particular, the goal of the t-SNE algorithm is to find a low-dimensional embedding space such that the distribution of nodes in that space is as similar as possible to the distribution in the high-dimensional space. The following cost function is optimized by gradient descent:
$C = \mathrm{KL}(P \parallel Q) = \sum_{i}\sum_{j \neq i} p_{ij} \log \dfrac{p_{ij}}{q_{ij}}$;
wherein $P$ represents the similarity distribution between nodes in the high-dimensional space, $Q$ represents the similarity distribution between nodes in the low-dimensional space, and $p_{ij}$ and $q_{ij}$ represent the similarity of node $i$ and node $j$ in the high-dimensional and low-dimensional spaces, respectively.
Node coordinates on a two-dimensional plane can be obtained through t-SNE dimension reduction, the coordinate points are drawn on the two-dimensional plane, and the edges are connected according to the node types and the relations, so that the characteristic map of the visualized current intelligent viewing terminal equipment can be obtained. The characteristic map intuitively displays the association relation and the relative position between the current intelligent viewing terminal equipment and other entities, and provides an intuitive reference for subsequent management and control grouping and strategy generation.
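The decomposition-then-t-SNE pipeline might be sketched as follows; the synthetic embeddings, m = 5, and the perplexity setting are illustrative:

```python
# Sketch: pairwise cosine-similarity matrix over (synthetic) optimized
# embeddings, top-m eigenvector reduction, then t-SNE to 2-D coordinates.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
E = rng.normal(size=(30, 16))                 # stand-in embeddings
E /= np.linalg.norm(E, axis=1, keepdims=True)
S = E @ E.T                                   # cosine similarity matrix

m = 5                                         # keep top-m eigenvectors
vals, vecs = np.linalg.eigh(S)                # eigenvalues ascending
E_m = vecs[:, -m:]                            # low-dimensional representation

coords = TSNE(n_components=2, perplexity=5,
              random_state=0).fit_transform(E_m)
print(coords.shape)                           # (30, 2)
```

The resulting `coords` are the node positions to draw on the two-dimensional plane.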
The application further discloses a complete technical scheme for extracting the characteristics and vectorizing the representation of the intelligent viewing terminal equipment. The scheme fully utilizes the technologies of knowledge graph, deep learning, dimension reduction visualization and the like, the characteristics of the intelligent viewing terminal equipment are mined and fused from multiple dimensions, and comprehensive characteristic embedding vectors and visualized characteristic graphs are generated, so that a foundation is laid for intelligent management and control.
In an alternative embodiment of the present invention,
Generating a model for the identified management and control group by utilizing a pre-trained management and control strategy, and generating an optimized management and control strategy for the management and control group by combining rule templates and constraint conditions in a management and control rule base, wherein the method comprises the following steps:
Calculating a similarity matrix among the devices according to the attribute feature vector of each device node in the management and control group, and dividing the similarity matrix by applying a spectral clustering algorithm to obtain device sub-groups; the attribute feature vectors of the sub-groups of each device are aggregated, the representation vectors of the sub-groups are learned through an attention mechanism and matched with rule templates in a management and control rule base, and candidate rule sets suitable for the sub-groups are screened out;
the group feature vectors of the management and control groups and the representation vectors of the equipment sub-groups are spliced to form management and control context vectors, and the management and control context vectors are input into a pre-trained management and control time planning model to generate a candidate management and control time scheme; carrying out feasibility analysis on each candidate management and control time scheme, and screening out infeasible time schemes through constraint satisfaction calculation and rule conflict detection to obtain a management and control time planning sequence; inputting the behavior feature vector and the control time planning sequence of the control group into a pre-trained control content recommendation model, adopting a collaborative filtering algorithm based on a graph neural network, realizing interactive modeling of user-content through message transmission and embedding aggregation, and generating a control content recommendation list;
Constructing a heterogeneous graph representation according to the network topology structure and the communication mode of the fine-grained device sub-groups, inputting the heterogeneous graph representation into a pre-trained management and control mode selection model, realizing aggregated representation learning of nodes and edges through a graph attention network, and adopting a graph classification algorithm to predict the management and control deployment mode selection; constructing a management and control intensity evaluation index system, inputting it into a pre-trained management and control intensity adjustment model, and generating a management and control intensity adjustment configuration; and carrying out combined optimization on the management and control time planning sequence, the management and control content recommendation list, the management and control deployment mode selection and the management and control intensity adjustment configuration, and solving the constraint satisfaction problem through an integer programming method to obtain an optimized management and control strategy for the management and control group.
Illustratively, the present application utilizes a pre-trained management and control strategy generation model, in combination with rule templates and constraints in a management and control rule base, to automatically generate a personalized management and control strategy for management and control groups through a series of machine learning and optimization algorithms. The specific implementation steps are as follows:
Firstly, calculating a similarity matrix between devices through cosine similarity according to attribute feature vectors of each device node in a management group. Cosine similarity is a commonly used vector similarity measurement method, and the degree of similarity is measured by calculating the cosine value of the included angle between two vectors.
And then, the similarity matrix is divided using a spectral clustering algorithm to obtain device sub-groups. Spectral clustering is a clustering method based on graph theory, which divides the data into different subgroups by performing feature decomposition on the Laplacian matrix of the data. Specifically, the similarity matrix is converted into the adjacency matrix of an undirected weighted graph, the normalized Laplacian matrix is calculated, and eigenvalue decomposition is performed on this matrix. The feature vectors corresponding to the first k minimum non-zero eigenvalues form a feature matrix, and k-means clustering is performed on the feature matrix to obtain k device sub-groups.
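This pipeline can be sketched at toy scale; the sketch follows the closely related Ng-Jordan-Weiss formulation (k smallest eigenvectors of the normalized Laplacian, row-normalized, then k-means), and the two-block similarity matrix is an invented stand-in:

```python
# Sketch: spectral clustering of a device similarity matrix with two
# obvious blocks (Ng-Jordan-Weiss variant; illustrative data).
import numpy as np
from sklearn.cluster import KMeans

def spectral_clusters(S, k):
    d = S.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    L = np.eye(len(S)) - D_inv_sqrt @ S @ D_inv_sqrt  # normalized Laplacian
    _, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    U = vecs[:, :k]                       # k smallest eigenvectors
    U /= np.linalg.norm(U, axis=1, keepdims=True)     # row-normalize
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

S = np.full((6, 6), 0.05)                 # weak cross-block similarity
S[:3, :3] = S[3:, 3:] = 0.9               # two strongly similar blocks
labels = spectral_clusters(S, 2)
print(labels)
```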
For each device sub-group, the attribute feature vectors of its internal device nodes are aggregated, and the representation vector of the sub-group is learned by an attention mechanism. The attention mechanism is a common feature aggregation method that can adaptively assign weights to different features. Assuming that the sub-group G_i contains n_i device nodes with attribute feature vector matrix X_i, the attention mechanism is used to calculate the sub-group representation vector g_i. First, X_i is nonlinearly transformed through parameter matrices W_1 and W_2 to calculate the attention distribution a_i, and then a_i and X_i are weighted and summed to obtain g_i.
Next, the representation vector g_i of each sub-group is matched with the rule templates in the management and control rule base, and the candidate rule set applicable to the sub-group is screened out. The rule templates define the basic structure and parameter ranges of the management and control rules, and can be matched with the sub-groups through constraint conditions on the attribute characteristics. Cosine similarity is used to calculate the similarity between g_i and the rule template representation, and rules with similarity larger than a preset threshold are selected as candidates.
When the management and control time scheme is generated, firstly, the group feature vector of the management and control group and the representation vector of each device sub-group are spliced to form a management and control context vector, which serves as the input of the management and control time planning model. The model adopts a sequence-to-sequence (Seq2Seq) neural network structure, maps the management and control context vector to a hidden space through an encoder, and then generates candidate management and control time sequences through a decoder.
A feasibility analysis is then performed for each candidate time scheme. And (3) calculating the satisfaction degree of the time scheme to the rule constraint conditions, detecting conflicts among different rules, and screening out the infeasible time scheme to obtain a final control time planning sequence. The constraint satisfaction degree can be calculated by measuring the deviation degree of time scheme parameters and rule requirements, and rule conflict detection is carried out by constructing a rule dependency graph and analyzing the combination of different rules by using a graph algorithm.
The management and control content recommendation aims at screening the optimal content combination from the candidate management and control content library according to the behavior characteristics of the management and control group. The relationship between the user and the content is modeled by a user-content interaction graph using a collaborative filtering algorithm based on a Graph Neural Network (GNN).
Firstly, constructing a user-content interaction diagram according to the behavior feature vector of the control group and the control time planning sequence. The management and control group and the content item are regarded as nodes of the graph, and the interaction behavior (such as browsing, clicking and the like) is regarded as edges of the graph, so that the heterogeneous interaction graph is constructed. Then, through the message passing and embedding aggregation mechanism of the GNN, the embedded representation of the user node and the content node is learned, and the interactive modeling of the user-content is realized. And finally, calculating preference scores of the user on different contents according to the similarity embedded by the nodes, and generating a personalized management and control content recommendation list.
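One propagation-and-scoring round on such an interaction graph can be sketched as follows; the LightGCN-style mean aggregation is a simplifying assumption, and the interaction matrix and embeddings are toy stand-ins:

```python
# Sketch: one round of embedding propagation on a user-content
# interaction graph, then dot-product preference scoring and ranking.
import numpy as np

R = np.array([[1, 1, 0],                 # group x content interactions
              [0, 1, 1]], float)
rng = np.random.default_rng(0)
U = rng.normal(size=(2, 4))              # user-group embeddings
V = rng.normal(size=(3, 4))              # content-item embeddings

# message passing: each side aggregates its interacted neighbors
deg_u = R.sum(1, keepdims=True)
deg_v = R.sum(0, keepdims=True).T
U1 = (R @ V) / deg_u                     # user <- mean of content nbrs
V1 = (R.T @ U) / deg_v                   # content <- mean of user nbrs

scores = U1 @ V1.T                       # preference scores
rec_for_group0 = np.argsort(-scores[0])  # ranked recommendation list
print(rec_for_group0)
```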
For fine-grained device sub-groups, a suitable management and control deployment mode needs to be selected according to the network topology and communication mode. First, the network information and communication records of the sub-group are constructed as a heterogeneous graph representation including device nodes, link edges, and communication edges. The heterogeneous graph is then input into a pre-trained graph attention network (GAT) model, and the representation vectors of the nodes and edges of the graph are learned through attention over the nodes and edges. Finally, a graph classification algorithm such as a graph convolutional network (GCN) is adopted to predict the management and control deployment mode from the embedded representations of the nodes and edges, realizing adaptive selection of the management and control mode.
The aim of controlling the intensity adjustment is to balance the controlling effect and the user experience, and the strictness degree of the controlling strategy is dynamically adjusted. Firstly, constructing a multi-dimensional management and control intensity evaluation index system, comprehensively considering factors such as management and control cost, user satisfaction, safety risk and the like, and quantifying the influence of management and control intensity. And inputting the attribute characteristics and the management and control historical data of the management and control groups into a pre-trained management and control intensity adjustment model, and learning a nonlinear mapping relation between management and control intensity and an evaluation index through a deep neural network to generate an optimal management and control intensity adjustment configuration.
On the basis of generating a management and control time plan, content recommendation, deployment mode selection and strength adjustment, the overall management and control strategy is further obtained through combination optimization. And modeling discrete variables of the management and control strategy by adopting an integer programming method, setting an objective function to maximize comprehensive management and control effectiveness, and solving an integer programming problem to obtain an optimal strategy combination by constraint conditions including satisfaction of management and control rules, resource limitation and the like. The combination optimization process can balance the trade-off among different control elements to obtain a globally optimal control strategy.
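At toy scale, the combinatorial selection can be illustrated by exhaustive search in place of a full integer-programming solver; the option values, scoring function, and rule constraint are all invented stand-ins:

```python
# Sketch: combining the four policy elements by brute force over a tiny
# discrete space, maximizing a toy effectiveness score subject to a
# feasibility rule (stand-in for the integer-programming formulation).
import itertools

times = ["19:00-20:00", "20:00-22:00"]
contents = ["edu_pack", "family_pack"]
modes = ["gateway", "on_device"]
strengths = ["soft", "strict"]

def effectiveness(t, c, m, s):          # toy scoring function
    score = {"20:00-22:00": 2, "19:00-20:00": 1}[t]
    score += {"edu_pack": 2, "family_pack": 1}[c]
    score += {"gateway": 1, "on_device": 2}[m]
    score += {"soft": 2, "strict": 1}[s]      # user-experience bonus
    return score

def feasible(t, c, m, s):               # toy rule constraint
    return not (s == "strict" and t == "20:00-22:00")

best = max((combo for combo in itertools.product(times, contents,
                                                 modes, strengths)
            if feasible(*combo)),
           key=lambda combo: effectiveness(*combo))
print(best)
```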
Through the steps, the technical scheme realizes the self-adaptive strategy optimization of the management and control group. The machine learning algorithm and the optimizing method are comprehensively utilized, the modeling and solving of the system are performed in the aspects of equipment sub-grouping, strategy element generation, strategy combination optimization and the like, and personalized and fine management and control strategies can be automatically generated.
In an alternative embodiment of the present invention,
The group feature vector of the management and control group and the representation vector of each equipment sub-group are spliced to form a management and control context vector, and the management and control context vector is input into a pre-trained management and control time planning model to generate a candidate management and control time scheme, wherein the method comprises the following steps of:
Arranging the device sub-group representation vectors and the corresponding group feature vectors according to a time stamp sequence to form a control context sequence, and encoding time stamp information into the vector representation by using a position-based embedding method; according to the multi-layer stacked bidirectional gating cyclic unit network, the control context sequence is taken as input, and the time sequence dependency relationship between sub-packets is obtained through information transmission in the forward direction and the backward direction; applying an attention mechanism to each layer of the bidirectional gating cyclic unit network, and adjusting the importance of different time step characteristics by calculating and controlling the similarity weight between the context vectors to obtain a hidden state vector after weight aggregation;
Carrying out feature fusion on the hidden state vector of the last layer of bi-directional gating circulating unit through a multi-head self-attention mechanism, wherein each attention head independently calculates feature interaction of different subspaces, and splicing the outputs of all heads to form a comprehensive feature representation of a management and control decision; adding the output of the multi-head self-attention mechanism to the management and control context sequence by adopting residual connection, and applying layer normalization operation to obtain enhanced management and control context vector representation;
The enhanced control context vector is expressed and input into a control time attribute decoder, three parallel sub-decoders in the control time attribute decoder are utilized to respectively process probability distribution of output control starting time, duration and repetition period, and a first activation function is applied to the probability distribution of the control starting time to obtain a prediction result of the control starting time; applying a second activation function to the probability distribution of the duration and the repetition period, and multiplying the probability distribution by a preset maximum duration and period value to obtain a prediction result of the continuous value; and combining the predicted results of the control start time and the continuous value, constructing a search space of the control time scheme, and performing heuristic search by applying a beam search algorithm to obtain the first N candidate control time schemes with highest probability.
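The beam-search step over the decoder outputs might look like this; the three probability tables are invented stand-ins for the sub-decoder outputs:

```python
# Sketch: beam search over per-step probability distributions to keep
# the N most probable control-time schemes (log-prob scoring).
import math

steps = [
    {"20:00": 0.6, "21:00": 0.3, "19:00": 0.1},     # start time
    {"1h": 0.5, "2h": 0.4, "3h": 0.1},              # duration
    {"daily": 0.7, "weekly": 0.3},                  # repetition period
]

def beam_search(steps, beam_width=3):
    beams = [([], 0.0)]                  # (partial scheme, log-prob)
    for dist in steps:
        candidates = [(seq + [tok], score + math.log(p))
                      for seq, score in beams
                      for tok, p in dist.items()]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]  # keep top-N partial schemes
    return beams

for scheme, logp in beam_search(steps):
    print(scheme, round(math.exp(logp), 3))
```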
Illustratively, the management and control time plan is a key link in optimization of the management and control strategy, and aims to automatically generate an optimal management and control time scheme according to group characteristics and equipment sub-group characteristics of the management and control group. The application provides a control time planning model based on a depth sequence model and an attention mechanism, which can model the time sequence dependency relationship of control contexts, excavate the control rules of different time granularities and generate a personalized control time scheme.
Firstly, splicing group characteristic vectors of the management and control groups and representing vectors of each equipment sub-group to form a management and control context vector which is used as input of a management and control time planning model. Specifically, the representation vectors of the device sub-groups and the group feature vectors of the corresponding management and control groups are arranged according to the time stamp order to form a management and control context sequence. To introduce time information into the feature representation, the time stamp is encoded using a location-based embedding method. The position embedding maps the time stamps to a high-dimensional space by a trigonometric function, with even dimensions using a sine function and odd dimensions using a cosine function to distinguish between features at different time positions. And adding the position embedding vector and the control context vector element by element to obtain a final sequence representation.
To capture the timing dependencies in a management and control context sequence, the sequence is modeled using a network of multi-layered stacked bi-directional gated recurrent units (BiGRU). BiGRU allows both forward and backward information transfer through the sequence, and thus characterizes the timing interaction pattern between sub-groups more completely. At each time step, the BiGRU controls the updating of the hidden state through a reset gate and an update gate: the reset gate determines the importance of the past state, and the update gate controls the degree of fusion between the current input and the past state. Through this gating mechanism, the BiGRU can adaptively choose to memorize and forget information at different time scales. The forward and reverse hidden states are spliced at each time step to form a complete sequence representation.
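The gating updates above can be sketched as a single NumPy BiGRU layer; the weight shapes, initialization scale, and hidden size are illustrative placeholders rather than trained parameters from the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, P):
    """One GRU step: the reset gate r weighs the past state, the update
    gate z blends the candidate state with the past state."""
    z = sigmoid(x @ P["Wz"] + h @ P["Uz"])          # update gate
    r = sigmoid(x @ P["Wr"] + h @ P["Ur"])          # reset gate
    h_tilde = np.tanh(x @ P["Wh"] + (r * h) @ P["Uh"])
    return (1 - z) * h + z * h_tilde

def bigru_layer(seq, params_fwd, params_bwd, hidden):
    """Run the sequence forward and backward, then splice the two hidden
    states at every time step into one representation."""
    T = len(seq)
    hf, hb = np.zeros(hidden), np.zeros(hidden)
    fwd, bwd = [], [None] * T
    for t in range(T):
        hf = gru_cell(seq[t], hf, params_fwd)
        fwd.append(hf)
    for t in reversed(range(T)):
        hb = gru_cell(seq[t], hb, params_bwd)
        bwd[t] = hb
    return np.stack([np.concatenate([f, b]) for f, b in zip(fwd, bwd)])

def init_params(d_in, d_h, rng):
    # Random placeholder weights; W* act on inputs, U* on hidden states.
    return {k: rng.standard_normal((d_in if k[0] == "W" else d_h, d_h)) * 0.1
            for k in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}

rng = np.random.default_rng(0)
seq = rng.standard_normal((5, 8))          # 5 time steps, 8-dim context vectors
out = bigru_layer(seq, init_params(8, 4, rng), init_params(8, 4, rng), 4)
```

Each output row concatenates a 4-dim forward and a 4-dim backward hidden state, giving the spliced per-step representation the text describes.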
At each BiGRU layer, an attention mechanism is introduced to adaptively adjust the importance of features by calculating the correlation between features at different time steps. Using dot-product attention, the BiGRU hidden states are first mapped into three vector spaces, namely query, key and value, through linear transformations. Then the dot products of each query vector with all key vectors are calculated, and the attention weight distribution is obtained through softmax normalization, which ensures the comparability of attention weights between different time steps. Finally, the attention weights and the corresponding value vectors are weighted and summed to obtain the aggregated feature representation. The attention mechanism can dynamically focus on the salient features of different time steps and extract the key information for management and control time planning.
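The query/key/value projection and softmax aggregation can be sketched as follows; the division by the square root of the key dimension is a common stabilizer assumed here, not something the text specifies:

```python
import numpy as np

def dot_product_attention(H, Wq, Wk, Wv):
    """Project hidden states H to queries, keys, values; softmax-normalize
    the query-key dot products; weight-sum the values."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # scaling: assumed convention
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = w / w.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
H = rng.standard_normal((5, 6))                # 5 time-step hidden states
Wq, Wk, Wv = (rng.standard_normal((6, 6)) for _ in range(3))
aggregated, weights = dot_product_attention(H, Wq, Wk, Wv)
```

Each row of `weights` is a probability distribution over the time steps, which is what makes the attention weights comparable across steps.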
At the last layer of the BiGRU network, a multi-head self-attention mechanism is employed to further enhance the representation capability of the management and control context vector. Multi-head self-attention captures more diversified dependency relationships by calculating feature interactions of different subspaces in parallel through a plurality of independent attention heads. Each attention head performs the attention calculation independently within its subspace using a different query, key and value transformation matrix. The outputs of all attention heads are then spliced, and the features captured by the different heads are fused through a linear transformation. Multi-head self-attention can thus mine diversified interaction modes of the management and control context in different feature subspaces and improve the representation capability for time planning.
To facilitate optimization and generalization of the model, a residual connection and layer normalization are introduced after the multi-head self-attention. The multi-head self-attention output is added element-by-element to the original management and control context sequence to form the residual connection, which alleviates the optimization problem of deep networks and promotes the back propagation of gradients. Then, the result of the residual connection is subjected to layer normalization, which normalizes each feature to zero mean and unit variance by subtracting the mean and dividing by the standard deviation. After normalization, the model can adaptively learn the feature distribution by adjusting scaling and offset parameters. Layer normalization accelerates model convergence and improves training stability.
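The residual-plus-normalization step reduces to a few lines; the epsilon term and the default scale/offset values are standard assumptions, not given by the text:

```python
import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature vector to zero mean and unit variance
    (subtract mean, divide by std), then apply learnable scale/offset."""
    mean = x.mean(axis=-1, keepdims=True)
    std = np.sqrt(x.var(axis=-1, keepdims=True) + eps)
    return gamma * (x - mean) / std + beta

def residual_block(context_seq, attention_out):
    """Residual connection (element-wise add) followed by layer norm."""
    return layer_norm(context_seq + attention_out)

x = np.random.randn(4, 8)                      # toy context sequence
y = residual_block(x, np.random.randn(4, 8))   # toy attention output
```

After normalization every row of `y` has approximately zero mean and unit variance, which is the stabilizing property the text relies on.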
From the enhanced management and control context vector representation, a final management and control time scheme is generated by a management and control time attribute decoder. The decoder consists of three parallel sub-decoders, which respectively predict the start time, duration and repetition period of the management and control. For the start time, a softmax function maps the feature representation to a probability distribution over time steps, and the time step with the highest probability is taken as the prediction result. For the duration and the repetition period, the feature representation is mapped to a continuous value space through a linear transformation and multiplied by preset maximum duration and period values to obtain the final prediction results. By decoding the different time attributes in parallel, diversified management and control time schemes can be flexibly generated.
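The three parallel heads can be sketched as below; the weight matrices, the 24 start slots, and the maximum values of 120 minutes and 7 days are invented for illustration, and the sigmoid squashing of the continuous heads matches the "second activation function" multiplied by preset maxima:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_time_attributes(h, W_start, W_dur, W_per, max_duration, max_period):
    """Three parallel sub-decoders: softmax over discrete start slots,
    sigmoid scaled by preset maxima for duration and repetition period."""
    start_probs = softmax(h @ W_start)
    start = int(np.argmax(start_probs))                # most probable slot
    duration = float(sigmoid(h @ W_dur)) * max_duration
    period = float(sigmoid(h @ W_per)) * max_period
    return start, duration, period, start_probs

rng = np.random.default_rng(0)
h = rng.standard_normal(8)                             # enhanced context vector
start, duration, period, probs = decode_time_attributes(
    h, rng.standard_normal((8, 24)), rng.standard_normal(8),
    rng.standard_normal(8), max_duration=120.0, max_period=7.0)
```

Because the sigmoid output lies in (0, 1), the predicted duration and period are guaranteed to stay within the preset maxima.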
To further optimize the management and control time scheme, a beam search algorithm is used to evaluate and screen multiple candidate schemes. Beam search maintains a fixed-size set of candidate solutions, extends the current best solutions at each time step, and prunes low-quality solutions. By comprehensively considering the generation probability and the management and control utility, the first N optimal management and control time schemes are selected as the final output. Beam search can efficiently find near-optimal solutions in the search space, balancing scheme quality and generation efficiency.
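A minimal beam search over per-step log-probabilities, keeping a fixed-width candidate set and pruning the rest; the scoring by cumulative log-probability alone is a simplification (the text also weighs management utility):

```python
import heapq
import numpy as np

def beam_search(step_log_probs, beam_width):
    """Extend every surviving candidate with every option at each step,
    then keep only the `beam_width` highest-scoring partial schemes."""
    beams = [(0.0, [])]                        # (cumulative log-prob, choices)
    for log_probs in step_log_probs:           # one row per decoded attribute
        expanded = [(score + lp, path + [i])
                    for score, path in beams
                    for i, lp in enumerate(log_probs)]
        beams = heapq.nlargest(beam_width, expanded, key=lambda b: b[0])
    return beams                               # best-first list of N schemes

log_probs = np.log(np.array([[0.7, 0.2, 0.1],    # toy step-1 distribution
                             [0.1, 0.6, 0.3]]))  # toy step-2 distribution
top = beam_search(log_probs, beam_width=2)
```

With these toy distributions the best scheme picks option 0 then option 1 (probability 0.42), and the runner-up picks option 0 then option 2 (0.21).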
Through the steps, the control time planning model can automatically generate a personalized control time scheme according to the group characteristics of the control groups and the representation vectors of the equipment sub-groups. The model comprehensively utilizes a depth sequence model, an attention mechanism and a decoding optimization technology, can model the time sequence dependency relationship of the control context, excavates the control rules with multiple granularities, and generates flexible and various control time attribute combinations. The intelligent control method provides an effective technical means for realizing intelligent optimization of the control strategy, and can remarkably improve timeliness and accuracy of control.
In an alternative embodiment of the present invention,
Analyzing the user viewing behavior by using a pre-trained attention-mechanism-based user portrait extraction model based on the management and control parameters, comprising:
The intelligent viewing terminal equipment analyzes the received management and control parameters, extracts the management and control start time and end time of the management and control time interval, and converts them into the device-internal timestamp format; the current system time is acquired through a built-in clock circuit or a network time protocol, and compared with the start and stop timestamps of the management and control time interval to judge whether the current system time falls within the management and control time interval; if the comparison result shows that the current system time is not within the management and control time interval, the intelligent viewing terminal equipment enters a standby monitoring state until the management and control effective condition is met;
If the comparison result shows that the current system time is in the control time interval, the intelligent viewing terminal equipment triggers a control effective mark, which shows that the system time is in the control effective period, and initializes the control execution environment; continuously acquiring user viewing behaviors of the intelligent viewing terminal equipment in a management and control effective period, wherein the user viewing behaviors comprise channel switching, program selection and viewing time; the intelligent viewing terminal equipment preprocesses the collected user viewing behaviors and inputs a user portrait extraction model based on an attention mechanism, and maps the preprocessed user viewing behaviors into dense vectors through an embedding layer to generate user behavior feature vectors;
Performing multi-head self-attention coding on the generated user behavior feature vector, and extracting key modes and preferences of user behaviors by calculating similarity weights among different features; modeling the extracted sequence of the key mode and preference of the user behavior through a gating circulation unit network, and determining a change trend vector representing the preference of the user; fusing the key mode and preference of the user behavior with the change trend vector, and obtaining a comprehensive user behavior representation vector through residual connection and layer normalization technology; based on the comprehensive user behavior representation vector, a multi-dimensional feature image of the user including demographic attributes, interest preferences, viewing habits is predicted.
Illustratively, under the guidance of the management and control parameters, the intelligent viewing terminal equipment needs to analyze the viewing behaviors of the user, extract the multidimensional feature portraits of the user, and provide data support for personalized content recommendation and advertisement delivery. The application provides a user portrait extraction model based on an attention mechanism, which can mine key modes and preferences from the viewing behaviors of users and forecast multidimensional features such as demographic attributes, hobbies and viewing habits of the users.
The intelligent viewing terminal equipment firstly analyzes the received management and control parameters, extracts the start and stop time of the management and control time interval, and converts the start and stop time into a timestamp format in the equipment. The current system time is obtained through a built-in clock circuit or a network time setting protocol, and the current time is compared with the control time interval. If the current time is not in the control time interval, the intelligent viewing terminal equipment enters a standby monitoring state, and the control effective condition is checked regularly. When the control effective condition is met, namely the current time enters a control time interval, the intelligent viewing terminal equipment triggers a control effective mark to indicate entering a control effective period, and initializes a control execution environment.
And in the management and control effective period, the intelligent viewing terminal equipment continuously acquires the viewing behavior data of the user. Viewing behavior includes channel switching, program selection, viewing duration, etc. Preprocessing the collected user viewing behavior data, such as data cleaning, feature extraction, normalization and the like. The preprocessed user viewing behavior data is represented in a fixed-length sequence, and each time step corresponds to one viewing behavior record.
The preprocessed sequence of user viewing behaviors is input into an embedded layer, and each viewing behavior is mapped into a dense vector representation by a look-up table operation. The embedded layer may learn semantic similarity between viewing behaviors, converting discrete behavior IDs into continuous feature space. The dimension of the embedded vector is optimized by cross-validation and other methods to balance the feature representation capability and the computational complexity. The output of the embedded layer is a sequence of user behavior feature vectors, which serve as input to a subsequent attention mechanism.
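The look-up-table operation of the embedding layer can be sketched as follows; the vocabulary size, embedding dimension, and behavior IDs are invented for illustration, and in practice the table would be learned rather than random:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, embed_dim = 100, 16                # illustrative sizes
embedding_table = rng.standard_normal((vocab_size, embed_dim)) * 0.1

def embed(behavior_ids):
    """Look-up-table embedding: each discrete viewing-behavior ID indexes
    one row of the table, yielding a dense feature vector per record."""
    return embedding_table[np.asarray(behavior_ids)]

features = embed([3, 17, 17, 42])              # hypothetical behavior IDs
```

Identical behavior IDs map to identical vectors, which is what lets the layer learn semantic similarity between recurring viewing behaviors.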
Multi-head self-attention coding is performed on the user behavior feature vector sequence, and the key patterns and preferences of user behavior are extracted by calculating the correlations among different features. The multi-head self-attention mechanism includes a plurality of parallel attention heads, each independently calculating attention weights and feature aggregation. For each attention head, the sequence of user behavior feature vectors is mapped to query, key and value vectors by linear transformations. The dot-product similarity of each query vector to all key vectors is then calculated, and a softmax function is applied to obtain normalized attention weights. The attention weights and the corresponding value vectors are weighted and summed to obtain the feature aggregation result of the current head. Finally, the outputs of all attention heads are spliced, and the multi-head self-attention encoded output vector sequence is obtained through a linear transformation.
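The per-head computation and final splice-and-mix step can be sketched as below; the random weights, two heads, and subspace split `d // heads` are illustrative assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_self_attention(X, heads, rng):
    """Each head attends within its own subspace with its own Q/K/V
    matrices; head outputs are spliced and fused by a linear transform."""
    T, d = X.shape
    dk = d // heads                              # per-head subspace size
    head_outputs = []
    for _ in range(heads):
        Wq, Wk, Wv = (rng.standard_normal((d, dk)) * 0.1 for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = softmax(Q @ K.T / np.sqrt(dk))       # normalized attention weights
        head_outputs.append(A @ V)               # per-head aggregation
    Wo = rng.standard_normal((heads * dk, d)) * 0.1
    return np.concatenate(head_outputs, axis=-1) @ Wo

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 8))                  # 6 behavior feature vectors
encoded = multi_head_self_attention(X, heads=2, rng=rng)
```

The output keeps the sequence shape, so it can feed directly into the GRU sequence modeling of the next step.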
The vector sequences of key patterns and preferences of user behavior obtained by multi-head self-attention coding are input into a gating and circulating unit (GRU) network, and sequence modeling is carried out on the vector sequences. The GRU network adaptively updates the hidden state through a gating mechanism, which can capture long-term dependence and dynamic changes of user preferences. At each time step, the GRU controls the flow and update of information by resetting the gates and updating the gates according to the current input and past hidden states. The reset gate determines the importance of the past state and the update gate controls the degree of fusion of the current input and the past state. The hidden state of the GRU is extracted at the last time step of the sequence as a vector representing the trend of the user preference change.
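Extracting the trend vector as the final GRU hidden state can be sketched as follows; the random parameters and hidden size are placeholders for a trained network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_last_hidden(seq, P, hidden):
    """Run a GRU over the encoded behavior sequence and keep only the
    final hidden state as the preference-change trend vector."""
    h = np.zeros(hidden)
    for x in seq:
        z = sigmoid(x @ P["Wz"] + h @ P["Uz"])   # update gate
        r = sigmoid(x @ P["Wr"] + h @ P["Ur"])   # reset gate
        h = (1 - z) * h + z * np.tanh(x @ P["Wh"] + (r * h) @ P["Uh"])
    return h

rng = np.random.default_rng(0)
P = {k: rng.standard_normal((6 if k[0] == "W" else 4, 4)) * 0.1
     for k in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}
trend = gru_last_hidden(rng.standard_normal((10, 6)), P, hidden=4)
```

Because later inputs pass through fewer gate applications, the final hidden state is weighted toward recent behavior, which is why it serves as the change-trend summary.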
The key pattern and preference vectors of user behavior are spliced with the change trend vector obtained by the GRU to form a preliminary user behavior representation. To further enhance the representation capability of the features, residual connection and layer normalization techniques are introduced. The preliminary user behavior representation is added element-by-element to the original user behavior feature vector sequence to form a residual connection, which alleviates the optimization problem of deep networks and promotes the back propagation of gradients. Then, the result of the residual connection is subjected to layer normalization, which subtracts the mean from each feature and divides by the standard deviation so that each feature has zero mean and unit variance. After normalization, the model can adaptively learn the feature distribution by adjusting scaling and offset parameters. Layer normalization accelerates model convergence and improves training stability. The finally obtained vector is a comprehensive user behavior representation, fusing the key patterns of user behavior, the change trend of user preference, and the original behavior features.
Based on the comprehensive user behavior representation vector, the multi-dimensional feature image of the user is predicted through the full connection layer and the activation function. For different types of features, different prediction modes are adopted. For demographic attributes such as gender, age, etc., a softmax classifier is used to map the user behavior representation to a corresponding class probability distribution and select the class with the highest probability as the prediction result. For continuous value features such as interest preferences and viewing habits, linear regression is used to map the user behavior representation to predicted values for the feature. The normalized degree of the feature is represented by compressing the predicted value to between 0 and 1 by a sigmoid function. The final user portraits comprise a plurality of dimensions such as demographic attributes, interest preferences, viewing habits and the like, and rich user characteristics are provided for subsequent personalized recommendation and advertisement delivery.
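The two prediction modes can be sketched as parallel heads over the comprehensive representation; the class count, feature count, and random weights are illustrative stand-ins:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_portrait(u, W_demo, W_pref):
    """Demographic attributes via a softmax classifier (highest-probability
    class); continuous preference/habit scores via linear regression
    squashed to [0, 1] by a sigmoid."""
    demo_probs = softmax(u @ W_demo)
    demographic = int(np.argmax(demo_probs))
    preferences = sigmoid(u @ W_pref)            # normalized feature degrees
    return demographic, preferences

rng = np.random.default_rng(0)
u = rng.standard_normal(8)                       # comprehensive representation
demographic, preferences = predict_portrait(
    u, rng.standard_normal((8, 3)), rng.standard_normal((8, 5)))
```

The sigmoid guarantees every continuous score lands strictly between 0 and 1, matching the normalized-degree interpretation above.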
Through the steps, the user portrait extraction model based on the attention mechanism can automatically learn the key preference and the behavior mode of the user from the viewing behavior of the user, and predict the multi-dimensional user feature portrait. The model fully utilizes the advantages of the attention mechanism in the aspect of extracting key information, and the long-term dependence and dynamic change trend of the user behavior are excavated through the multi-head self-attention coding and gating circulating unit network. Meanwhile, residual connection and layer normalization technology are introduced, so that the characteristic representation capability and training stability of the model are enhanced.
In an alternative embodiment of the present invention,
Judging whether the current viewing behavior meets the control condition or not through depth feature matching and correlation calculation, wherein the method comprises the following steps:
Converting the management user portraits in the management parameters into management user portraits vectors, wherein the management user portraits are defined in a structured attribute-value pair form and represent the characteristics of target user groups to be managed, and semantic vector mapping is carried out on the attributes and the values in the management user portraits by adopting a word embedding model to obtain the management user portraits vectors; calculating a similarity score between a real-time user portrait vector and the management user portrait vector by using a depth feature matching algorithm, wherein the real-time user portrait vector is obtained by vectorizing a multi-dimensional feature portrait of a user, the depth feature matching algorithm adopts cosine similarity as a measurement function of feature matching, and an included angle cosine value of the real-time user portrait vector and the management user portrait vector is used as the similarity score;
The current viewing channel identification and the program identification are subjected to single-hot coding to obtain a Boolean type channel feature vector and a Boolean type program feature vector, a management and control channel set and a management and control program list are also expressed as a single-hot coding feature matrix, and channel attention weight distribution and program attention weight distribution are obtained based on attention mechanism calculation; multiplying the channel attention weight distribution and the channel feature vector element by adopting a weighted summation mode, and summing to obtain a channel matching degree score; multiplying the program attention weight distribution by the program feature vector element by element and summing to obtain a program matching degree score; according to the channel matching degree score and the program matching degree score, calculating the comprehensive content relevance score in a fusion mode of weighted summation;
Calculating the comprehensive satisfaction degree of the control condition through a weighted fusion function according to the similarity scores of the real-time user image vectors and the control user image vectors and the content correlation scores, wherein the weighted fusion function controls the contribution ratio of the user image similarity and the content correlation to the control condition through a weight coefficient; comparing the calculated satisfaction degree of the control condition with a preset threshold, and when the satisfaction degree exceeds the preset threshold, judging that the current viewing behavior meets the control condition and taking corresponding control measures; otherwise, judging that the current viewing behavior does not trigger the control condition, and not executing the control operation.
Illustratively, after obtaining the real-time portrayal of the user, the intelligent viewing terminal device needs to determine whether the viewing behavior of the current user meets the control conditions defined in the control parameters. The application provides a control condition judgment model based on depth feature matching and correlation calculation, which is used for calculating the satisfaction degree of control conditions by comprehensively considering the similarity of user images and the correlation of viewing contents and carrying out control decision according to a preset threshold value.
First, the administrative user representation in the administrative parameters is converted into a vectorized representation. The administrative user representation is defined in the form of structured attribute-value pairs representing characteristics of the target user population to be managed. For each attribute-value pair, a pre-trained Word embedding model, such as Word2Vec or GloVe, is used to map the attributes and values into semantic vectors. By stitching or averaging the attribute vectors and the value vectors, an embedded vector representing a single attribute-value pair is obtained. And finally, carrying out weighted average or splicing on the embedded vectors of all the attribute-value pairs to obtain the management and control user portrait vector. The administrative user portrait vector is expressed in the form of a dense vector of fixed length, capturing the semantic features of the administrative user population.
For example, assume that the administrative user representation is { "gender": "male", "age group": "young", "hobbies": "sporting event" }, which can be converted by the word embedding model into a 300-dimensional managed user portrait vector, such as [0.2, 0.1, …, 0.5].
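A toy version of this conversion is sketched below; the 4-dimensional random table stands in for a pre-trained model such as Word2Vec or GloVe, and averaging (rather than splicing) the pair vectors is one of the two fusion options the text allows:

```python
import numpy as np

# Toy embedding table standing in for a pre-trained word embedding model;
# the vectors and 4-dim size are invented for illustration only.
rng = np.random.default_rng(3)
vocab = ["gender", "male", "age group", "young", "hobbies", "sporting event"]
emb = {w: rng.standard_normal(4) for w in vocab}

def portrait_vector(profile):
    """Average each attribute embedding with its value embedding, then
    average all pair vectors into one fixed-length portrait vector."""
    pair_vecs = [(emb[attr] + emb[val]) / 2.0 for attr, val in profile.items()]
    return np.mean(pair_vecs, axis=0)

vec = portrait_vector({"gender": "male", "age group": "young",
                       "hobbies": "sporting event"})
```

The result is a fixed-length dense vector regardless of how many attribute-value pairs the managed-user profile contains.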
And calculating the similarity between the real-time user portrait vector and the management user portrait vector by using a depth feature matching algorithm. The real-time user image vector is obtained by vectorizing the multi-dimensional feature image of the user, and has the same dimension as the management user image vector. And (3) taking cosine similarity as a measurement function of feature matching, and calculating an included angle cosine value between the two vectors as a similarity score. The cosine similarity has a value ranging from-1 to 1, with a larger value indicating that the two vectors are more similar. By setting the similarity threshold, it can be determined whether the real-time user portrait matches the administrative user portrait.
For example, assume that the real-time user image vector is [0.3, 0.2, …, 0.6]; calculating cosine similarity with the above-mentioned managed user portrait vector yields a similarity score of 0.8, indicating that the two user portraits have a high degree of matching.
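The metric itself is a one-liner; the 3-dimensional vectors below are truncated stand-ins for the 300-dimensional portrait vectors of the example:

```python
import numpy as np

def cosine_similarity(u, v):
    """Angle cosine between the real-time and managed portrait vectors;
    ranges from -1 to 1, with larger values meaning more similar."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim = cosine_similarity(np.array([0.3, 0.2, 0.6]),   # real-time (toy)
                        np.array([0.2, 0.1, 0.5]))   # managed (toy)
```

Matching is then decided by comparing `sim` against a similarity threshold, as the text describes.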
And respectively carrying out single-hot coding on the channel and the program watched by the current user to obtain a Boolean type channel feature vector and a Boolean type program feature vector. The single hot code maps channels and programs into binary vectors of fixed length, each dimension corresponds to a unique channel or program, and the value of each dimension is 0 or 1, so that whether the channel or program is currently watched or not is indicated. Meanwhile, the management and control channel set and the management and control program list in the management and control parameters are also expressed as a feature matrix of the single-hot coding.
Next, a channel attention weight distribution and a program attention weight distribution are calculated based on the attention mechanism. For the channel feature vector, the correlation weight of the channel feature vector and each channel in the management channel set is calculated through an attention mechanism. Specifically, dot product calculation is carried out on the channel characteristic vector and the management channel characteristic matrix to obtain an attention score vector. And normalizing the attention score vector by using a softmax function to obtain channel attention weight distribution, wherein the channel attention weight distribution represents the correlation degree of the current watching channel and each channel in the management channel set. Similarly, attention calculation is performed on the program feature vectors and the management program feature matrix to obtain program attention weight distribution.
Then, the channel attention weight distribution and the channel feature vector are multiplied element by element and summed in a weighted summation mode to obtain the channel matching degree score. Similarly, the program attention weight distribution and the program feature vector are multiplied element by element and summed to obtain the program matching degree score. The channel matching degree and the program matching degree have the value range of 0 to 1, and the larger the value is, the higher the correlation between the currently watched content and the controlled content is.
And finally, combining the channel matching degree score and the program matching degree score by a fusion mode of weighted summation to obtain the comprehensive content relevance score. The fusion weight can be adjusted according to the importance of the channel and the program to the control condition, for example, the channel is given a higher weight.
For example, assume that the current user is watching channel A and program B, the channel feature vector is [0, 1, 0, …, 0], and the program feature vector is [0, 0, 1, …, 0]. The set of managed channels is {A, C, D}, and the list of managed programs is {B, E, F}. The channel attention weight distribution obtained through the attention mechanism is [0.6, 0.3, 0.1] and the program attention weight distribution is [0.8, 0.1, 0.1]. Weighted summation gives a channel matching degree of 0.6 and a program matching degree of 0.8. Assuming the channel weight is 0.6 and the program weight is 0.4, the overall content correlation score is 0.6×0.6 + 0.4×0.8 = 0.68.
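The one-hot encoding and attention-based matching can be sketched as below. This is a simplified reading of the mechanism: the channel universe, the softmax scoring, and the hit-indicator weighting are assumptions for illustration, so the scores differ from the hand-picked weights of the worked example:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def matching_score(onehot, managed_matrix):
    """Attention over managed items: dot-product scores of each managed
    item against the one-hot viewing vector, softmax weights, then a
    weighted sum with the per-item hit indicators."""
    hits = managed_matrix @ onehot        # 1 where the managed item is watched
    weights = softmax(hits)               # attention weight distribution
    return float(weights @ hits)

channels = ["A", "B", "C", "D", "E", "F"]     # hypothetical channel universe

def one_hot(name):
    v = np.zeros(len(channels))
    v[channels.index(name)] = 1.0
    return v

managed_channels = np.stack([one_hot(c) for c in ["A", "C", "D"]])
channel_match = matching_score(one_hot("A"), managed_channels)   # A is managed
```

Watching a managed channel yields a positive match score, while an unmanaged channel (say B) yields zero, which is the contrast the comprehensive relevance score then aggregates.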
And calculating the comprehensive satisfaction degree of the control conditions through a weighted fusion function according to the similarity scores and the content correlation scores of the real-time user image vectors and the control user image vectors. The weighted fusion function controls the contribution ratio of the user portrait similarity and the content correlation to the control condition through the weight coefficient, for example, the user portrait similarity can be given higher weight. The satisfaction of the control condition is in the range of 0 to 1, and the larger the value is, the more possible the control condition is triggered by the current viewing behavior.
Comparing the calculated satisfaction degree of the control condition with a preset threshold value, and when the satisfaction degree exceeds the preset threshold value, judging that the current viewing behavior meets the control condition, wherein the intelligent viewing terminal equipment needs to take corresponding control measures, such as shielding programs, playing prompt information and the like. Otherwise, judging that the current viewing behavior does not trigger the control condition, and the intelligent viewing terminal equipment continues to normally play the program without executing the control operation. The preset threshold value can be adjusted according to the control strictness, and the higher the threshold value is, the stricter the control condition judgment is.
For example, assuming that the user portrait similarity is 0.8, the content relevance score is 0.68, the similarity weight is 0.7, and the content weight is 0.3, the management condition satisfaction is 0.7x0.8+0.3x0.68=0.764. Assuming that the preset threshold is 0.75, the current viewing behavior satisfies the control condition, and a corresponding control measure needs to be executed.
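The fusion-and-threshold decision reduces to a few lines; the weights and threshold below are taken from the worked example and would be tuned to the required management strictness in practice:

```python
def control_decision(similarity, content_score, w_sim=0.7, w_content=0.3,
                     threshold=0.75):
    """Weighted fusion of portrait similarity and content relevance,
    compared against a preset threshold to decide whether the current
    viewing behavior triggers the management and control condition."""
    satisfaction = w_sim * similarity + w_content * content_score
    return satisfaction, satisfaction > threshold

# Reproduces the worked example: 0.7*0.8 + 0.3*0.68 = 0.764 > 0.75
satisfaction, triggered = control_decision(0.8, 0.68)
```

Raising the threshold makes the judgment stricter, as the text notes; raising `w_sim` shifts the decision toward who is watching rather than what is watched.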
Through the steps, the control condition judgment model based on depth feature matching and correlation calculation can comprehensively consider user features and viewing contents, dynamically judge the satisfaction condition of the control conditions, and realize intelligent viewing behavior control. The model introduces word embedding and attention mechanisms, and improves the accuracy of user portrait matching and content correlation calculation. Meanwhile, the control strictness degree can be flexibly adjusted through weighted fusion and threshold comparison, and different control requirements are met.
Fig. 2 is a schematic structural diagram of a management and control system of an intelligent viewing terminal device according to an embodiment of the present invention, as shown in fig. 2, where the system includes:
The system comprises a first unit, a second unit and a third unit, wherein the first unit is used for receiving a management and control task issued by a cloud management and control platform by establishing a safe communication connection based on a blockchain technology with the cloud management and control platform, and the management and control task comprises a management and control rule base, a management and control object list and a management and control strategy generation model; based on the management and control task, carrying out feature extraction and grouping identification on the intelligent viewing terminal equipment by utilizing the knowledge graph and the machine learning model of the intelligent viewing terminal equipment to generate a management and control strategy and a structured management and control parameter;
The second unit is used for, after the intelligent viewing terminal equipment receives the management and control parameters, analyzing the user viewing behaviors by using a pre-trained attention-mechanism-based user portrait extraction model based on the management and control parameters, and judging whether the current viewing behavior meets the management and control conditions through depth feature matching and correlation calculation; if the management and control conditions are not met, entering the next round of real-time data sensing and analysis, and circularly executing the step of analyzing the user viewing behaviors by using the pre-trained attention-mechanism-based user portrait extraction model and the subsequent steps, until the current system time exceeds the management and control time interval;
The third unit is used for dynamically deciding the optimal control execution action by using a reinforcement learning algorithm according to the control behavior mode if the control condition is met, generating a corresponding restrictive operation instruction and transmitting the corresponding restrictive operation instruction to the intelligent viewing terminal equipment so as to intervene and guide the current viewing behavior; the restrictive operation instructions include at least one of: limiting channel switching, limiting volume adjustment, limiting on/off, popping up control prompt information, forcibly switching to a designated channel, forcibly playing designated content, intelligently recommending proper content, dynamically adjusting control intensity, and increasing affinity reminding interaction.
In a third aspect of an embodiment of the present invention,
There is provided an electronic device including:
A processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method described previously.
In a fourth aspect of an embodiment of the present invention,
There is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method as described above.
The present invention may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing various aspects of the present invention.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411195040.6A CN118714408B (en) | 2024-08-29 | 2024-08-29 | Method and system for controlling intelligent viewing terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118714408A (en) | 2024-09-27
CN118714408B (en) | 2024-11-08
Family
ID=92816631
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411195040.6A Active CN118714408B (en) | 2024-08-29 | 2024-08-29 | Method and system for controlling intelligent viewing terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118714408B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118921526A (en) * | 2024-10-09 | 2024-11-08 | 宁波江北华数广电网络有限公司 | Non-intelligent set top box double-management operation method and system |
CN119322485A (en) * | 2024-10-17 | 2025-01-17 | 杭州鲸云智能工业科技有限公司 | Manufacturing equipment monitoring method based on Internet of things |
CN119443253A (en) * | 2025-01-10 | 2025-02-14 | 西安欣创电子技术有限公司 | Knowledge graph construction system and method based on prompt words |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108462888A (en) * | 2018-03-14 | 2018-08-28 | 江苏有线数据网络有限责任公司 | The intelligent association analysis method and system of user's TV and internet behavior |
CN108650520A (en) * | 2018-03-30 | 2018-10-12 | 北京金山安全软件有限公司 | Video live broadcast control method, related equipment and computer storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8181201B2 (en) * | 2005-08-30 | 2012-05-15 | Nds Limited | Enhanced electronic program guides |
CN117768665A (en) * | 2023-11-17 | 2024-03-26 | 吉蛋互娱(武汉)科技有限公司 | Live broadcast accurate drainage method and system based on Internet big data analysis |
CN118138794B (en) * | 2024-05-08 | 2024-09-17 | 深圳市科路教育科技有限公司 | Mobile network-based teaching video live broadcast control method |
- 2024-08-29: CN application CN202411195040.6A filed, granted as patent CN118714408B (active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN118714408B (en) | Method and system for controlling intelligent viewing terminal equipment | |
Lin et al. | A survey on reinforcement learning for recommender systems | |
Wang et al. | App-net: A hybrid neural network for encrypted mobile traffic classification | |
CN118734254A (en) | Safety education and training monitoring and evaluation method and system based on operation mechanism optimization | |
D’Aniello et al. | Effective quality-aware sensor data management | |
He et al. | Learning informative representation for fairness-aware multivariate time-series forecasting: A group-based perspective | |
Chen et al. | Trajectory-user linking via hierarchical spatio-temporal attention networks | |
Wang et al. | Enhancing user interest modeling with knowledge-enriched itemsets for sequential recommendation | |
CN118921526A (en) | Non-intelligent set top box double-management operation method and system | |
CN119312160A (en) | Multi-source data information fusion method and system based on Internet of Things protocol | |
Fong et al. | Gesture recognition from data streams of human motion sensor using accelerated PSO swarm search feature selection algorithm | |
CN119719670B (en) | Distribution network data asset vulnerability identification method, device, system, and storage medium | |
Gopalakrishna et al. | Relevance in cyber‐physical systems with humans in the loop | |
CN120074883A (en) | Multi-level dynamic threat monitoring system based on deep learning | |
CN118898049B (en) | Cross-modal data fusion method and system based on knowledge graph and deep learning | |
CN119961628A (en) | Model hallucination detection method and device, storage medium and electronic device | |
Guo et al. | Identification of perceptive users based on the graph convolutional network | |
Zhang et al. | Network security situation assessment based on BKA and cross dual-channel | |
CN116756554B (en) | Training method, device, equipment, medium and program product for alignment model | |
CN119782580B (en) | A heterogeneous graph information mining method for online learning video recommendation | |
CN116521972B (en) | Information prediction method, device, electronic equipment and storage medium | |
Shang et al. | Triadic Closure-Heterogeneity-Harmony GCN for Link Prediction | |
CN118587780B (en) | A jewelry inventory management authentication method and system based on multimodal data recognition | |
CN119760170B (en) | Online learning video resource personalized recommendation method based on graph neural network | |
Wang et al. | SA-LSPL: Sequence-Aware Long-and Short-Term Preference Learning for next POI recommendation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||