
CN118714408B - Method and system for controlling intelligent viewing terminal equipment - Google Patents

Method and system for controlling intelligent viewing terminal equipment

Info

Publication number
CN118714408B
CN118714408B (application CN202411195040.6A)
Authority
CN
China
Prior art keywords
control
vector
feature
user
viewing terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411195040.6A
Other languages
Chinese (zh)
Other versions
CN118714408A (en)
Inventor
刘克娜
林培
庞士武
李文清
王晓飞
张帆
王猛
蔺文燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Longyu Tianxia Beijing Culture Media Co ltd
Original Assignee
Longyu Tianxia Beijing Culture Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Longyu Tianxia Beijing Culture Media Co ltd
Priority to CN202411195040.6A
Publication of CN118714408A
Application granted
Publication of CN118714408B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/042Knowledge-based neural networks; Logical representations of neural networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/454Content or additional data filtering, e.g. blocking advertisements
    • H04N21/4542Blocking scenes or portions of the received content, e.g. censoring scenes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4627Rights management associated to the content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/654Transmission by server directed to the client
    • H04N21/6543Transmission by server directed to the client for forcing some client operations, e.g. recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Social Psychology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a management and control method and system for intelligent viewing terminal equipment, relating to the technical field of intelligent terminals. The method comprises: establishing a secure communication connection based on blockchain technology with a cloud management and control platform, and receiving a management and control task issued by the platform; generating a management and control strategy and structured management and control parameters based on the task; after receiving the parameters, the intelligent viewing terminal equipment analyzes the user's viewing behavior using a pre-trained attention-based user portrait extraction model, and judges through depth feature matching and correlation calculation whether the current viewing behavior satisfies the management and control conditions; if the conditions are satisfied, the optimal control execution action is dynamically decided by a reinforcement learning algorithm according to the control behavior mode, and a corresponding restrictive operation instruction is generated and issued to the intelligent viewing terminal equipment to intervene in and guide the current viewing behavior.

Description

Control method and system for intelligent viewing terminal equipment
Technical Field
The invention relates to intelligent terminal technology, and in particular to a control method and a control system for intelligent viewing terminal equipment.
Background
With the popularization of intelligent viewing terminal devices such as smart televisions and set-top boxes, users obtain a wide variety of viewing programs and services through these devices, which have become an indispensable part of daily life. However, alongside this convenience there are urgent problems to be solved: minors becoming addicted to television or exposed to inappropriate content, users' health being affected by excessive viewing time, and inadequate copyright protection for audiovisual content. How to effectively manage and control intelligent viewing terminal equipment, guide users toward reasonable use, and ensure content safety has therefore become an important focus of the industry.
The traditional intelligent viewing terminal equipment management and control mode mainly comprises the following steps:
(1) Setting a viewing time limit: the user can set a viewing duration limit for a specific time period through the device's control function; when the set duration is exceeded, the device is automatically shut down or locked. However, this method is inflexible and cannot be dynamically adjusted to the user's actual situation.
(2) Content rating and restriction: audiovisual content is ranked and viewable content levels are set for different age groups; when a user selects content beyond their age group, a password or parental authorization must be entered. This approach can to some extent prevent minors from accessing inappropriate content, but rating criteria are difficult to unify and are easily bypassed.
(3) Black and white lists: the device presets, or the user defines, blacklists and whitelists of channels and programs; blacklisted content cannot be watched, while whitelisted content can be watched freely. This approach has a clear control effect, but list setup and maintenance are cumbersome, and content outside the lists cannot be judged.
(4) Hardware lock: a physical or electronic lock is installed on the device, which cannot be used without unlocking via a key or password. This approach is highly reliable but inconvenient to use, and once the lock is broken, the control fails entirely.
Each of these traditional control approaches has its advantages and disadvantages, and all exhibit certain limitations in practical application.
Disclosure of Invention
The embodiment of the invention provides a control method and a control system for intelligent viewing terminal equipment, which can at least solve part of problems in the prior art.
In a first aspect of an embodiment of the present invention,
The management and control method for the intelligent viewing terminal equipment comprises the following steps:
Establishing a secure communication connection based on blockchain technology with a cloud management and control platform, and receiving a management and control task issued by the cloud management and control platform, wherein the management and control task comprises a management and control rule base, a management and control object list, and a management and control strategy generation model; based on the management and control task, performing feature extraction and grouping identification on the intelligent viewing terminal equipment using the knowledge graph of the intelligent viewing terminal equipment and a machine learning model, to generate a management and control strategy and structured management and control parameters;
After receiving the control parameters, the intelligent viewing terminal equipment analyzes the user's viewing behavior using a pre-trained attention-based user portrait extraction model based on the control parameters, and judges through depth feature matching and correlation calculation whether the current viewing behavior satisfies the control conditions; if the control conditions are not satisfied, the next round of real-time data sensing and analysis is entered, and the step of analyzing the user's viewing behavior using the pre-trained attention-based user portrait extraction model and the subsequent steps are executed cyclically, until the current system time exceeds the control time interval;
If the control conditions are met, the optimal control execution action is dynamically decided using a reinforcement learning algorithm according to the control behavior mode, and a corresponding restrictive operation instruction is generated and issued to the intelligent viewing terminal equipment so as to intervene in and guide the current viewing behavior; the restrictive operation instructions include at least one of: limiting channel switching, limiting volume adjustment, limiting power on/off, popping up control prompt information, forcibly switching to a designated channel, forcibly playing designated content, intelligently recommending appropriate content, dynamically adjusting control intensity, and adding friendly reminder interactions.
In a second aspect of an embodiment of the present invention,
Providing a management and control system of intelligent viewing terminal equipment, comprising:
The system comprises a first unit, a second unit, and a third unit. The first unit is used for establishing a secure communication connection based on blockchain technology with the cloud management and control platform and receiving a management and control task issued by the platform, wherein the management and control task comprises a management and control rule base, a management and control object list, and a management and control strategy generation model; and for performing, based on the management and control task, feature extraction and grouping identification on the intelligent viewing terminal equipment using the knowledge graph of the intelligent viewing terminal equipment and a machine learning model, to generate a management and control strategy and structured management and control parameters;
The second unit is used for, after the intelligent viewing terminal equipment receives the control parameters, analyzing the user's viewing behavior using a pre-trained attention-based user portrait extraction model based on the control parameters, and judging through depth feature matching and correlation calculation whether the current viewing behavior satisfies the control conditions; if the control conditions are not satisfied, entering the next round of real-time data sensing and analysis, and cyclically executing the step of analyzing the user's viewing behavior using the pre-trained attention-based user portrait extraction model and the subsequent steps, until the current system time exceeds the control time interval;
The third unit is used for, if the control conditions are met, dynamically deciding the optimal control execution action using a reinforcement learning algorithm according to the control behavior mode, generating a corresponding restrictive operation instruction, and issuing it to the intelligent viewing terminal equipment so as to intervene in and guide the current viewing behavior; the restrictive operation instructions include at least one of: limiting channel switching, limiting volume adjustment, limiting power on/off, popping up control prompt information, forcibly switching to a designated channel, forcibly playing designated content, intelligently recommending appropriate content, dynamically adjusting control intensity, and adding friendly reminder interactions.
In a third aspect of an embodiment of the present invention,
There is provided an electronic device including:
A processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method described previously.
In a fourth aspect of an embodiment of the present invention,
There is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method as described above.
Introducing the knowledge graph to build a semantically enhanced feature representation of the intelligent viewing terminal equipment allows the multidimensional features of devices and users to be fully mined, laying a foundation for subsequent management and control. A graph neural network is used to learn hidden feature representations of the management and control objects in the feature map, and a clustering algorithm realizes the management and control grouping, so that objects can be adaptively grouped according to control requirements. The generated control strategy is constrained and optimized using the control rule base, improving the interpretability and executability of the strategy. Trusted issuance of the control parameters is realized through a smart contract, ensuring the security of the control process.
The control-condition judgment model based on depth feature matching and correlation calculation can comprehensively consider user features and viewing content, dynamically judge whether the control conditions are satisfied, and thereby realize intelligent viewing behavior control. The model introduces word embedding and attention mechanisms to improve the accuracy of user portrait matching and content correlation calculation. Meanwhile, the strictness of control can be flexibly adjusted through weighted fusion and threshold comparison to meet different control requirements.
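The weighted fusion and threshold comparison mentioned here can be sketched as follows. This is an illustrative assumption, not the patent's actual implementation: a user-portrait match score and a content-correlation score (each modeled as a cosine similarity) are fused with adjustable weights and compared against a threshold; all function names, weights, and vectors are invented for the example.

```python
# Hypothetical sketch of weighted-fusion control-condition judgment: fuse a
# user-portrait match score and a content-correlation score, then compare the
# fused value to a threshold. Names, weights, and vectors are illustrative.

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def control_condition_met(user_vec, portrait_vec, content_vec, target_vec,
                          w_user=0.6, w_content=0.4, threshold=0.7):
    """Fuse portrait-match and content-correlation scores against a threshold.

    Raising `threshold` (or either weight) tightens control; lowering relaxes it.
    """
    user_score = cosine_similarity(user_vec, portrait_vec)
    content_score = cosine_similarity(content_vec, target_vec)
    fused = w_user * user_score + w_content * content_score
    return fused >= threshold

# Example: a session whose user and content vectors closely match the managed
# portrait and the restricted-content profile triggers control.
print(control_condition_met([1.0, 0.9, 0.1], [1.0, 1.0, 0.0],
                            [0.8, 0.2], [0.9, 0.1]))   # True
```

Tuning `w_user`, `w_content`, and `threshold` is one concrete way the "control strictness" described above could be adjusted per group.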
Drawings
Fig. 1 is a flow chart of a control method of an intelligent viewing terminal device according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a management and control system of an intelligent viewing terminal device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Fig. 1 is a flow chart of a control method of an intelligent viewing terminal device according to an embodiment of the present invention, as shown in fig. 1, where the method includes:
S101, a secure communication connection based on blockchain technology is established with the cloud management and control platform, and a management and control task issued by the platform is received, wherein the management and control task comprises a management and control rule base, a management and control object list, and a management and control strategy generation model; based on the management and control task, feature extraction and grouping identification are performed on the intelligent viewing terminal equipment using the knowledge graph of the intelligent viewing terminal equipment and a machine learning model, generating a management and control strategy and structured management and control parameters;
S102, after receiving the control parameters, the intelligent viewing terminal equipment analyzes the user's viewing behavior using a pre-trained attention-based user portrait extraction model based on the control parameters, and judges through depth feature matching and correlation calculation whether the current viewing behavior satisfies the control conditions; if the control conditions are not satisfied, the next round of real-time data sensing and analysis is entered, and the step of analyzing the user's viewing behavior using the pre-trained attention-based user portrait extraction model and the subsequent steps are executed cyclically, until the current system time exceeds the control time interval;
S103, if the control conditions are met, the optimal control execution action is dynamically decided using a reinforcement learning algorithm according to the control behavior mode, and a corresponding restrictive operation instruction is generated and issued to the intelligent viewing terminal equipment so as to intervene in and guide the current viewing behavior; the restrictive operation instructions include at least one of: limiting channel switching, limiting volume adjustment, limiting power on/off, popping up control prompt information, forcibly switching to a designated channel, forcibly playing designated content, intelligently recommending appropriate content, dynamically adjusting control intensity, and adding friendly reminder interactions.
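The patent does not specify which reinforcement learning algorithm is used; a minimal tabular Q-learning agent with epsilon-greedy selection is one plausible sketch of how the control execution action in S103 could be decided. The states, rewards, and hyperparameters below are invented for illustration; the action names are drawn loosely from the restrictive-operation list.

```python
import random

# Illustrative (not the patent's actual) RL decision step: tabular Q-learning
# with epsilon-greedy selection over a few restrictive operations. States,
# rewards, and hyperparameters are assumptions for the example.

ACTIONS = ["limit_channel_switch", "popup_prompt", "force_switch_channel",
           "recommend_content"]

class ControlAgent:
    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
        self.q = {}                      # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = random.Random(seed)

    def choose(self, state):
        if self.rng.random() < self.epsilon:                       # explore
            return self.rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))  # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# Example: the agent learns that a gentle prompt works well for a hypothetical
# "minor_overtime" state and comes to prefer it over a forced channel switch.
agent = ControlAgent()
for _ in range(50):
    agent.update("minor_overtime", "popup_prompt", 1.0, "compliant")
    agent.update("minor_overtime", "force_switch_channel", -0.5, "user_annoyed")
print(agent.choose("minor_overtime"))    # popup_prompt
```

In this framing, "dynamically adjusting control intensity" corresponds to the reward signal steering the policy toward softer or harder interventions over time.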
In an alternative embodiment of the present invention,
Based on the management and control task, using the knowledge graph of the intelligent viewing terminal equipment and a machine learning model to perform feature extraction and grouping identification on the intelligent viewing terminal equipment and generate a management and control strategy and structured management and control parameters comprises the following steps:
Utilizing the management and control object list in combination with the pre-constructed knowledge graph of the intelligent viewing terminal equipment, feature extraction and vectorized representation of the multidimensional features of the current intelligent viewing terminal equipment are carried out through knowledge reasoning and semantic association analysis, generating a feature map of the current intelligent viewing terminal equipment, wherein the multidimensional features comprise any one or more of hardware configuration, software system, network environment, and user group features;
Constructing a graph neural network; inputting the feature map into the graph neural network, and dividing the different intelligent viewing terminal devices into corresponding management and control groups via a k-means clustering algorithm or a density clustering algorithm based on the feature vectors output by the graph neural network; for the grouping result, adopting clustering evaluation indices comprising the silhouette coefficient and the Calinski-Harabasz index to measure the intra-cluster cohesion and inter-cluster separation of the groups, estimating the reliability of the clustering effect, and taking the groups whose reliability is higher than a preset confidence threshold as the identified management and control groups;
For the identified management and control groups, a pre-trained management and control strategy generation model is used, combined with the rule templates and constraint conditions in the management and control rule base, to generate an optimized management and control strategy for each group, the strategy comprising control time planning, control content recommendation, control mode selection, and control intensity adjustment; the generated control strategy is converted into structured control parameters and securely issued to the intelligent viewing terminal equipment through a smart contract mechanism, wherein the control parameters comprise a control time interval, a control channel set, a control program list, a control user portrait, and a control behavior mode.
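The structured control parameters above could be represented as a simple keyed record; the sketch below mirrors the five parameter categories named in the text, with invented field values. The SHA-256 digest here merely stands in for the integrity guarantee the smart contract / blockchain channel is said to provide; the actual on-chain mechanism is not specified in the source.

```python
import json, hashlib

# Hedged sketch of "structured control parameters". Field names mirror the five
# categories in the text; all values are hypothetical. The digest is a stand-in
# for smart-contract integrity checking, not the patent's actual mechanism.

control_params = {
    "control_time_interval": {"start": "19:00", "end": "21:00"},
    "control_channel_set": ["CH-101", "CH-205"],           # hypothetical IDs
    "control_program_list": ["late-night-drama"],
    "control_user_portrait": {"age_group": "minor", "viewing_pref": "cartoon"},
    "control_behavior_mode": "gentle_guidance",
}

def sign_params(params: dict) -> str:
    """Deterministically serialize the parameters and return a SHA-256 digest."""
    payload = json.dumps(params, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def verify_params(params: dict, digest: str) -> bool:
    """Terminal-side check that received parameters were not tampered with."""
    return sign_params(params) == digest

digest = sign_params(control_params)
print(verify_params(control_params, digest))              # True: untampered
tampered = dict(control_params, control_behavior_mode="forced")
print(verify_params(tampered, digest))                    # False: modified
```

Sorting keys before hashing makes the serialization deterministic, so any party holding the same parameters computes the same digest.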
Illustratively, the method utilizes a pre-constructed knowledge graph of the intelligent viewing terminal equipment to extract and vectorize the multidimensional features of the intelligent viewing terminal equipment in combination with a management and control object list. The specific implementation steps are as follows:
Constructing the knowledge graph of the intelligent viewing terminal equipment. A knowledge graph is a structured semantic network consisting of three basic elements: Entity, Relation, and Attribute. The knowledge graph for the intelligent viewing terminal equipment domain is constructed by crawling public data on the Internet, such as smart TV model parameters and user comments, and extracting entities, relations, and attributes using natural language processing techniques. The construction process of the knowledge graph can refer to the method in the literature.
Extracting multidimensional features. Given the management and control object list, i.e., the set of intelligent viewing terminal devices to be managed, the multidimensional features related to the management and control objects are extracted from the knowledge graph using knowledge reasoning and semantic association analysis. The multidimensional features considered herein include:
hardware configuration: parameters such as screen size, resolution, CPU model, memory size, storage capacity and the like are included;
Software system: the information comprises an operating system version, a middleware version, an application program list and the like;
Network environment: parameters including access network type (e.g., Wi-Fi, 4G), network bandwidth, latency, etc.;
User group characteristics: including statistics such as user age bracket distribution, gender ratio, viewing preferences, etc.
Feature vectorization. The extracted multidimensional features are numericalized and standardized to generate feature vectors for subsequent grouping identification and management and control strategy generation. Common feature vectorization methods include One-Hot encoding and Word2Vec. One-Hot encoding is suitable for discrete features, such as operating system type; Word2Vec is suitable for text features, such as user tags.
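The two preprocessing steps named here can be sketched minimally. Word2Vec requires training on a real corpus, so only One-Hot encoding and a min-max standardization pass are shown; the vocabulary and value ranges are invented for illustration.

```python
# Minimal sketch of the vectorization step: One-Hot encoding for discrete
# features and min-max standardization for numeric ones. The operating-system
# vocabulary and the numeric values are illustrative assumptions.

def one_hot(value, vocabulary):
    """One-Hot encode a discrete feature such as the operating system type."""
    return [1.0 if value == v else 0.0 for v in vocabulary]

def min_max_normalize(values):
    """Scale numeric features (e.g. memory size in GB) into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:                       # constant feature carries no information
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

os_vocab = ["Android TV", "Linux", "HarmonyOS"]        # assumed vocabulary
print(one_hot("Linux", os_vocab))                      # [0.0, 1.0, 0.0]
print(min_max_normalize([2, 4, 8]))                    # [0.0, 0.333..., 1.0]
```

Concatenating such encodings per device yields the feature vectors that feed the grouping identification described next.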
Generating the feature map. The management and control objects and their feature vectors are constructed as a heterogeneous graph (Heterogeneous Graph), called the feature map. The heterogeneous graph contains two types of nodes, management and control object nodes and feature nodes, together with the edges connecting the two types. The heterogeneous graph fuses the structural and semantic information of the management and control objects well, providing a richer feature representation for subsequent grouping identification.
After generating the feature map of the intelligent viewing terminal equipment, it is proposed herein to perform grouping identification of the management and control objects using a graph neural network (Graph Neural Network, GNN). A GNN is a deep learning model for graph-structured data that can effectively learn feature representations of the nodes in a graph. The grouping identification method adopted herein mainly comprises the following steps:
And constructing a graph neural network. A graph convolution neural network (Graph Convolutional Network, GCN) is used to learn the hidden feature representation of the nodes in the feature map. The GCN realizes convolution operation by using the graph Laplace matrix, and can aggregate the structure information and the attribute information of the nodes. The neural network structure comprising two layers of GCNs is built, wherein the first layer of GCNs is used for carrying out low-dimensional feature learning on the management and control object nodes and the feature nodes, and the second layer of GCNs is used for generating final feature representation of the management and control object nodes on the basis of the first layer.
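The two-layer GCN propagation described above can be sketched in NumPy; the tiny adjacency matrix, feature dimensions, and random weights are illustrative, and no training loop is shown:

```python
import numpy as np

def normalized_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_two_layer(A, X, W1, W2):
    """Layer 1: low-dimensional feature learning (ReLU); layer 2: final representations."""
    A_norm = normalized_adj(A)
    H1 = np.maximum(A_norm @ X @ W1, 0.0)  # first GCN layer with ReLU
    return A_norm @ H1 @ W2                # second GCN layer (linear output)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy feature map
X = rng.normal(size=(3, 4))                                    # 4-dim node inputs
Z = gcn_two_layer(A, X, rng.normal(size=(4, 8)), rng.normal(size=(8, 2)))
```

Each row of Z is the final feature representation of one management and control object node, aggregating its own attributes with those of its graph neighbors.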
Clustering algorithm application. And grouping the management and control objects by using a clustering algorithm in unsupervised learning according to the management and control object feature vectors output by the GCN. The k-means clustering algorithm and the density clustering algorithm (DBSCAN) were chosen for experimental comparison. Wherein, the k-means algorithm needs to pre-specify the cluster number k, and the DBSCAN algorithm can adaptively generate a clustering result. In the implementation process, the similarity between objects is measured using the Euclidean distance.
And (5) clustering evaluation. To evaluate the clustering effect, two indices, the silhouette coefficient (Silhouette Coefficient) and the Calinski-Harabasz index (Calinski-Harabasz Index), are used herein. The silhouette coefficient measures the cohesiveness and separability of the clusters, with a value range of [-1, 1]; the larger the value, the better the clustering effect. The Calinski-Harabasz index evaluates cluster quality by calculating intra-class variance and inter-class variance, with larger values indicating better clustering results. Groups whose clustering effect meets a preset threshold (e.g., a silhouette coefficient larger than 0.5) are considered trusted management and control groups.
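The silhouette coefficient can be computed directly from pairwise Euclidean distances; the toy points below form two clean clusters, so the score comfortably exceeds the 0.5 threshold mentioned above:

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient over all points (Euclidean distance)."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False                                  # exclude the point itself
        a = D[i, same].mean() if same.any() else 0.0     # intra-cluster cohesion
        b = min(D[i, labels == c].mean()                 # nearest other cluster
                for c in set(labels.tolist()) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

X = [[0, 0], [0, 1], [10, 10], [10, 11]]  # two well-separated toy groups
labels = [0, 0, 1, 1]
score = silhouette(X, labels)             # close to 1 for clean clusters
```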
After identifying the trusted management and control group, the present document further utilizes a machine learning method to generate corresponding management and control policies and management and control parameters for different groups. The specific implementation steps are as follows:
and constructing a management and control strategy generation model. The generation of the management and control strategy can be seen as a sequence generation problem, and is thus implemented herein using a sequence-to-sequence model (Seq2Seq) based on a Transformer. Firstly, the Seq2Seq model is pre-trained using existing management and control strategy samples (such as management and control time planning, management and control content recommendation and the like) so that the model has the basic capability of management and control strategy generation. Then, for the identified management and control group, the common characteristics (such as user group preference, hardware configuration level and the like) of the management and control objects in the group are extracted and input into the Seq2Seq model, generating a management and control strategy sequence adapted to the group.
And (5) optimizing a management and control strategy. In order to improve the feasibility and rationality of the generated control strategy, a control rule base is introduced to restrict and optimize the generated result. A series of rule templates and constraint conditions are predefined in the management rule base, such as 'management time cannot exceed 6 hours per day', 'management content must conform to laws and regulations', and the like. And matching the control strategy generated by the Seq2Seq model with a rule base, filtering out strategies which do not meet constraint conditions, and filling and perfecting the strategies by utilizing rule templates.
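Rule-base filtering of generated strategies can be sketched as predicate checks; the policy records and field names below are hypothetical stand-ins for the Seq2Seq output, and only the two example constraints quoted above are encoded:

```python
# Hypothetical policy records produced by the strategy generation model.
policies = [
    {"id": "p1", "daily_hours": 4.5, "content_rating": "approved"},
    {"id": "p2", "daily_hours": 7.0, "content_rating": "approved"},
    {"id": "p3", "daily_hours": 3.0, "content_rating": "unreviewed"},
]

# Rule base: each constraint is a predicate plus a human-readable description.
rules = [
    (lambda p: p["daily_hours"] <= 6.0,
     "management time cannot exceed 6 hours per day"),
    (lambda p: p["content_rating"] == "approved",
     "management content must conform to laws and regulations"),
]

def filter_policies(policies, rules):
    """Keep only policies satisfying every constraint in the rule base."""
    return [p for p in policies if all(pred(p) for pred, _ in rules)]

valid = filter_policies(policies, rules)  # only p1 satisfies both constraints
```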
And (5) generating management and control parameters. And extracting structured control parameters including a control time interval, a control channel set, a control program list, a control user portrait, a control behavior mode and the like according to the optimized control strategy. These control parameters can be directly applied to the control execution of the intelligent viewing terminal device.
And (5) controlling and issuing. The management and control parameters are safely and credibly issued to the corresponding intelligent viewing terminal equipment through a smart contract mechanism. The smart contract is a trusted execution environment based on blockchain technology, and can ensure the integrity and tamper-resistance of the management and control parameters. The smart contract for management and control issuing is implemented using the Ethereum platform, and includes functions such as encryption, signing, and verification of the management and control parameters, ensuring the safe issuing and execution of the management and control strategy.
The knowledge graph is introduced to carry out semantic enhancement feature representation on the intelligent viewing terminal equipment, so that multidimensional features of the equipment and the user can be fully mined, and a foundation is laid for subsequent management and control; the hidden characteristic representation of the control object in the characteristic map is learned by adopting a graph neural network, and the control grouping is realized by utilizing a clustering algorithm, so that the control objects can be adaptively grouped according to control requirements; the generated control strategy is constrained and optimized by using the control rule base, improving the interpretability and executability of the strategy; the trusted issuing of the control parameters is realized through smart contracts, ensuring the safety of control.
In an alternative embodiment of the present invention,
And carrying out feature extraction and vectorization representation on the multidimensional features of the current intelligent viewing terminal equipment by utilizing the management and control object list and combining with a pre-constructed knowledge graph of the intelligent viewing terminal equipment through knowledge reasoning and semantic association analysis, and generating a feature graph of the current intelligent viewing terminal equipment, wherein the feature graph comprises the following steps:
Mapping the current intelligent viewing terminal equipment in the management and control object list to corresponding entity nodes in the knowledge graph of the intelligent viewing terminal equipment; generating a multi-hop associated subgraph by using the entity node of the current intelligent viewing terminal equipment as a central node through a random walk algorithm, wherein the multi-hop associated subgraph comprises entity nodes and relationship edges which have semantic association with the central node; based on the multi-jump associated subgraph, learning knowledge enhancement feature representation of the entity node of the current intelligent viewing terminal equipment through message transmission and aggregation operation in a graph neural network model;
Acquiring multi-dimensional heterogeneous attribute information of the entity node of the current intelligent viewing terminal equipment, and extracting a structured multi-dimensional characteristic through a characteristic template; splicing the multidimensional features of the entity nodes of the current intelligent viewing terminal equipment with the knowledge enhancement feature representations, and learning importance weights of the features with different dimensions through an attention mechanism to generate a comprehensive feature embedding vector of the entity nodes of the current intelligent viewing terminal equipment; inputting the comprehensive feature embedded vector of the entity node in the multi-hop associated subgraph into a graph neural network model, and updating the comprehensive feature embedded vector through multi-layer message transmission and feature aggregation;
Optimizing the comprehensive feature embedded vector of the entity node of the current intelligent viewing terminal equipment by comparing with a learning model; calculating the pair-wise similarity between the optimized comprehensive feature embedded vectors of the entity nodes of the current intelligent viewing terminal equipment, and constructing a similarity matrix; performing feature decomposition on the similarity matrix to obtain feature vectors corresponding to the first m maximum feature values, wherein the feature vectors are used as low-dimensional representation of the comprehensive feature embedding vectors; and carrying out nonlinear dimension reduction on the low-dimensional representation of the comprehensive feature embedded vector by using a t-distribution neighborhood embedding algorithm, minimizing KL divergence among nodes in a low-dimensional space by gradient descent optimization, obtaining node coordinates on a two-dimensional plane, and generating a feature map of the visualized current intelligent viewing terminal equipment.
Illustratively, the application utilizes knowledge graph and deep learning technology to mine the features of the intelligent viewing terminal equipment from multiple dimensions, generate comprehensive feature embedding vectors and finally construct a visualized feature graph. The specific implementation steps are as follows:
First, mapping the current intelligent viewing terminal equipment in the management and control object list to the corresponding entity node in the pre-constructed knowledge graph of the intelligent viewing terminal equipment. The knowledge graph is a structured semantic network, and consists of three basic elements, namely an Entity (Entity), a relationship (Relation) and an Attribute (Attribute). And establishing association between the control object and the entity node in the knowledge graph through mapping operation.
And then, taking the entity node of the current intelligent viewing terminal equipment as a central node, and generating a multi-hop associated subgraph through a random walk algorithm. The random walk algorithm is a common graph sampling method, and a local subgraph related to a central node is acquired by random walk in the graph. Specifically, starting from the central node, randomly selecting the neighbor nodes to walk with a certain probability until the preset hop count or coverage requirement is reached. The generated multi-hop associative subgraph comprises entity nodes with semantic associations with the central node and relationship edges connecting them.
Based on the multi-jump associated subgraph, learning the knowledge enhancement characteristic representation of the entity node of the current intelligent viewing terminal equipment through a graph neural network (Graph Neural Network, GNN) model. GNN is a deep learning model based on graph structure data, and can effectively aggregate structure information and attribute information of nodes.
A graph attention network (Graph Attention Network, GAT) is employed herein to implement feature learning. The GAT assigns different weights to the neighbor nodes through the attention mechanism, and can adaptively aggregate important neighbor information. Specifically, GAT is applied on the multi-hop associated subgraph, and the characteristic representation of each node is updated through multi-layer message passing and aggregation operations. The calculation formula of each layer is as follows:

$$h_i^{(l+1)} = \sigma\Big(\sum_{j \in N_i} \alpha_{ij}^{(l)} W^{(l)} h_j^{(l)}\Big)$$

wherein $h_i^{(l+1)}$ represents the feature vector of node i at layer l+1, $N_i$ represents the set of neighbor nodes of node i, $\alpha_{ij}^{(l)}$ represents the attention weight between node i and neighbor node j, $W^{(l)}$ represents the linear transformation matrix of the l-th layer, and $\sigma(\cdot)$ represents the activation function.
Through the calculation of multi-layer GAT, the knowledge enhancement characteristic representation of the entity node of the current intelligent viewing terminal equipment can be obtained, denoted $h^{(L)}$, where L is the number of GAT layers.
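A single GAT layer of the kind described can be sketched didactically in NumPy (one attention head; tanh stands in for the activation function, and the neighborhood includes the node itself):

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(A, H, W, a):
    """One graph-attention layer: attention-weighted neighbor aggregation."""
    N = A.shape[0]
    HW = H @ W                              # linear transform W h_j
    A_self = A + np.eye(N)                  # include self in the neighborhood
    scores = np.full((N, N), -np.inf)
    for i in range(N):
        for j in range(N):
            if A_self[i, j] > 0:            # only attend to neighbors
                scores[i, j] = leaky_relu(a @ np.concatenate([HW[i], HW[j]]))
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)  # softmax over each neighborhood
    return np.tanh(alpha @ HW)                 # activation of the weighted sum

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H = rng.normal(size=(3, 4))
H_next = gat_layer(A, H, rng.normal(size=(4, 5)), rng.normal(size=10))
```

Stacking L such layers (with learned W and a per layer) yields the knowledge-enhanced node representation described above.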
In addition to semantic information in the knowledge graph, the intelligent viewing terminal device also has heterogeneous properties of multiple dimensions, such as hardware configuration, software version, network environment and the like. In order to fully utilize the attribute information, a multi-dimensional feature extraction and fusion method is provided.
Firstly, obtaining the multidimensional heterogeneous attribute information of the entity nodes of the current intelligent viewing terminal equipment, and extracting structured multidimensional features through a feature template. Feature templates are a rule-based feature extraction method that matches and extracts attribute values by defining a series of templates. For example, for the hardware configuration dimension, templates such as "CPU model: {model}" and "memory size: {size}" may be defined, and the corresponding values are then extracted from the attribute information.
And then, splicing the extracted multidimensional features with the knowledge enhancement feature representation to obtain a comprehensive feature vector. To learn the importance weights of different dimensional features, attention mechanisms are introduced herein. Specifically, the comprehensive feature vector is subjected to linear transformation and SoftMax normalization to obtain the attention weight of each dimension feature. And finally, multiplying the comprehensive feature vector by the attention weight to obtain a weighted and fused comprehensive feature embedded vector, which is marked as e center.
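The attention-weighted fusion can be illustrated as follows; the per-dimension feature blocks and the scoring weights w are hypothetical stand-ins for the learned linear transform:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_with_attention(feature_blocks, w):
    """Score each feature block, softmax-normalize into attention weights,
    then scale and concatenate into one fused embedding."""
    scores = np.array([w[k] * np.linalg.norm(f)
                       for k, f in enumerate(feature_blocks)])
    alpha = softmax(scores)  # importance weight of each feature dimension
    fused = np.concatenate([a * f for a, f in zip(alpha, feature_blocks)])
    return fused, alpha

# Hypothetical per-dimension features for one device node.
hardware = np.array([1.0, 0.2])
network = np.array([0.5, 0.5, 0.1])
knowledge = np.array([0.3, 0.9])   # knowledge-enhanced representation
e_center, alpha = fuse_with_attention([hardware, network, knowledge],
                                      w=[1.0, 0.5, 2.0])
```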
In order to further mine the association information between the nodes in the multi-hop association subgraph, the graph neural network model is applied again, and message transmission and feature aggregation are carried out on the comprehensive feature embedded vectors of the nodes.
Specifically, the integrated characteristic embedded vectors of all nodes in the multi-hop associated subgraph form a matrix $E \in \mathbb{R}^{N \times d}$, where N is the number of nodes in the subgraph and d is the embedding dimension. Then, a multi-layer convolution operation is performed on the feature matrix by using a graph convolution network (Graph Convolutional Network, GCN), and the feature representation of the nodes is updated. The calculation formula of each layer is as follows:

$$H^{(l+1)} = \sigma\big(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)}\big)$$

wherein $H^{(l)}$ represents the node feature matrix of the l-th layer, $\tilde{A}$ represents the adjacency matrix of the subgraph (with self-loops added), and $\tilde{D}$ represents the degree matrix of that adjacency matrix; $W^{(l)}$ is the trainable weight matrix of the l-th layer and $\sigma(\cdot)$ the activation function.
The updated node characteristic matrix can be obtained through the calculation of the multi-layer GCN, each row in the updated node characteristic matrix corresponds to a characteristic vector of a node, and the characteristics of the node and the characteristic information of the neighbor nodes are integrated.
To further optimize the comprehensive feature embedding vector of the entity node of the current intelligent viewing terminal equipment, a contrast learning (Contrastive Learning) model is introduced. Contrast learning is an unsupervised representation learning method that learns a characteristic representation of data by maximizing the similarity of positive pairs of samples and minimizing the similarity of negative pairs of samples.
Specifically, the comprehensive feature embedded vector of the entity node of the current intelligent viewing terminal equipment is used as an anchor point (Anchor), and positive sample nodes and negative sample nodes are obtained by sampling from the multi-hop associated subgraph. Positive sample nodes are nodes with high correlation to the anchor point, and negative sample nodes are nodes that are uncorrelated or weakly correlated with the anchor point. The integrated feature embedding vector is then optimized by minimizing the contrast loss function:

$$\mathcal{L} = -\log \frac{\exp\big(\mathrm{sim}(e, e^{+})/\tau\big)}{\exp\big(\mathrm{sim}(e, e^{+})/\tau\big) + \sum_{i=1}^{K} \exp\big(\mathrm{sim}(e, e_i^{-})/\tau\big)}$$

wherein $e$ is the anchor embedding, $e^{+}$ represents the feature embedding vector of the positive sample node, $e_i^{-}$ represents the feature embedding vector of the i-th negative sample node, K is the number of negative samples, $\mathrm{sim}(\cdot,\cdot)$ represents the similarity function, and $\tau$ denotes the temperature hyper-parameter.
And the comprehensive feature embedding vector of the entity node of the current intelligent viewing terminal equipment after optimization can be obtained by minimizing the contrast loss function through a gradient descent method.
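A minimal NumPy version of the contrast (InfoNCE-style) loss for a single anchor, using cosine similarity as the similarity function:

```python
import numpy as np

def cos_sim(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def contrast_loss(anchor, positive, negatives, tau=0.5):
    """Contrastive loss for one anchor with K negative samples."""
    pos = np.exp(cos_sim(anchor, positive) / tau)
    neg = sum(np.exp(cos_sim(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

anchor = np.array([1.0, 0.0])
# A well-aligned positive pair yields a smaller loss than a misaligned one.
loss_good = contrast_loss(anchor, np.array([0.9, 0.1]), [np.array([-1.0, 0.0])])
loss_bad = contrast_loss(anchor, np.array([-1.0, 0.0]), [np.array([0.9, 0.1])])
```

Gradient descent on this loss pulls the anchor embedding toward its positive sample and away from the negatives, which is the optimization effect described above.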
And on the basis of the optimized integrated feature embedded vector, calculating the paired similarity between the entity node of the current intelligent viewing terminal equipment and other nodes, and constructing a similarity matrix S.
And then, carrying out feature decomposition on the similarity matrix to obtain feature vectors corresponding to the first m maximum feature values, and using the feature vectors as low-dimensional representation of the comprehensive feature embedding vector. Feature decomposition is a commonly used dimension reduction method that can map high-dimensional data into low-dimensional space while preserving the main features of the data.
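The eigendecomposition step can be sketched with numpy.linalg.eigh; the similarity matrix below is a hypothetical 4-device example with two tight pairs:

```python
import numpy as np

def top_m_embedding(S, m):
    """Eigenvectors of the m largest eigenvalues of a symmetric similarity
    matrix, used as the low-dimensional node representation."""
    vals, vecs = np.linalg.eigh(S)      # eigh returns eigenvalues ascending
    order = np.argsort(vals)[::-1][:m]  # indices of the m largest eigenvalues
    return vecs[:, order]               # N x m low-dimensional representation

# Pairwise similarity of 4 hypothetical devices: {0,1} alike, {2,3} alike.
S = np.array([[1.0, 0.9, 0.1, 0.1],
              [0.9, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.9],
              [0.1, 0.1, 0.9, 1.0]])
E_m = top_m_embedding(S, m=2)  # similar devices get near-identical rows
```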
Finally, the low-dimensional representation E_m of the integrated feature embedding vector is subjected to nonlinear dimension reduction by using a t-distribution neighborhood embedding (t-Distributed Stochastic Neighbor Embedding, t-SNE) algorithm, and mapped onto a two-dimensional plane. t-SNE is a manifold learning algorithm that maintains the local structure of the data by minimizing the KL divergence between nodes in the low-dimensional space.
In particular, the goal of the t-SNE algorithm is to find a low-dimensional embedding space such that the distribution of nodes in that space is as similar as possible to the distribution in the high-dimensional space. The following cost function is optimized by gradient descent:

$$C = \mathrm{KL}(P \,\|\, Q) = \sum_i \sum_j p_{ij} \log \frac{p_{ij}}{q_{ij}}$$

wherein P represents the similarity distribution between nodes in the high-dimensional space, Q represents the similarity distribution between nodes in the low-dimensional space, and $p_{ij}$ and $q_{ij}$ represent the similarity of node i and node j in the high-dimensional and low-dimensional spaces, respectively.
Node coordinates on a two-dimensional plane can be obtained through t-SNE dimension reduction, the coordinate points are drawn on the two-dimensional plane, and the edges are connected according to the node types and the relations, so that the characteristic map of the visualized current intelligent viewing terminal equipment can be obtained. The characteristic map intuitively displays the association relation and the relative position between the current intelligent viewing terminal equipment and other entities, and provides an intuitive reference for subsequent management and control grouping and strategy generation.
The application further discloses a complete technical scheme for extracting the characteristics and vectorizing the representation of the intelligent viewing terminal equipment. The scheme fully utilizes the technologies of knowledge graph, deep learning, dimension reduction visualization and the like, the characteristics of the intelligent viewing terminal equipment are mined and fused from multiple dimensions, and comprehensive characteristic embedding vectors and visualized characteristic graphs are generated, so that a foundation is laid for intelligent management and control.
In an alternative embodiment of the present invention,
Generating a model for the identified management and control group by utilizing a pre-trained management and control strategy, and generating an optimized management and control strategy for the management and control group by combining rule templates and constraint conditions in a management and control rule base, wherein the method comprises the following steps:
Calculating a similarity matrix among the devices according to the attribute feature vector of each device node in the management and control group, and dividing the similarity matrix by applying a spectral clustering algorithm to obtain device sub-groups; the attribute feature vectors of the sub-groups of each device are aggregated, the representation vectors of the sub-groups are learned through an attention mechanism and matched with rule templates in a management and control rule base, and candidate rule sets suitable for the sub-groups are screened out;
the group feature vectors of the management and control groups and the representation vectors of the equipment sub-groups are spliced to form management and control context vectors, and the management and control context vectors are input into a pre-trained management and control time planning model to generate a candidate management and control time scheme; carrying out feasibility analysis on each candidate management and control time scheme, and screening out infeasible time schemes through constraint satisfaction calculation and rule conflict detection to obtain a management and control time planning sequence; inputting the behavior feature vector and the control time planning sequence of the control group into a pre-trained control content recommendation model, adopting a collaborative filtering algorithm based on a graph neural network, realizing interactive modeling of user-content through message transmission and embedding aggregation, and generating a control content recommendation list;
Constructing heterogram representation according to the network topology structure and the communication mode of the sub-groups of the fine-grained equipment, inputting the heterogram representation into a pre-trained management and control mode selection model, realizing the aggregation representation learning of nodes and edges through a graph attention network, and predicting by adopting a graph classification algorithm to obtain management and control deployment mode selection; constructing a management and control intensity evaluation index system, inputting the management and control intensity evaluation index system into a pre-trained management and control intensity adjustment model, and generating management and control intensity adjustment configuration; and carrying out combined optimization on the control time planning sequence, the control content recommendation list, the control deployment mode selection and the control intensity adjustment configuration, and solving the constraint satisfaction problem through an integer programming method to obtain an optimized control strategy aiming at the control group.
Illustratively, the present application utilizes a pre-trained management and control strategy generation model, in combination with rule templates and constraints in a management and control rule base, to automatically generate a personalized management and control strategy for management and control groups through a series of machine learning and optimization algorithms. The specific implementation steps are as follows:
Firstly, calculating a similarity matrix between devices through cosine similarity according to attribute feature vectors of each device node in a management group. Cosine similarity is a commonly used vector similarity measurement method, and the degree of similarity is measured by calculating the cosine value of the included angle between two vectors.
And then, dividing the similarity matrix by using a spectral clustering algorithm to obtain equipment sub-groups. Spectral clustering is a clustering method based on graph theory, and the data is divided into different subgroups by carrying out feature decomposition on the Laplacian matrix of the data. Specifically, the similarity matrix is converted into an adjacent matrix of the undirected weighted graph, the normalized Laplacian matrix is calculated, and eigenvalue decomposition is carried out on the matrix. And taking feature vectors corresponding to the first k minimum non-zero feature values to form a feature matrix, and carrying out k-means clustering on the feature matrix to obtain k equipment sub-groups.
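A minimal two-way special case of this spectral clustering can be sketched with the sign of the Fiedler vector of the normalized Laplacian (a simplification: the full procedure takes the first k eigenvectors and runs k-means on them):

```python
import numpy as np

def spectral_bipartition(S):
    """Split devices into two sub-groups via the sign of the Fiedler vector
    (eigenvector of the 2nd-smallest eigenvalue of the normalized Laplacian)."""
    d = S.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_norm = np.eye(len(S)) - D_inv_sqrt @ S @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_norm)
    fiedler = vecs[:, np.argsort(vals)[1]]  # 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)

# Similarity between 4 hypothetical devices: {0,1} alike, {2,3} alike.
S = np.array([[1.0, 0.9, 0.1, 0.1],
              [0.9, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.9],
              [0.1, 0.1, 0.9, 1.0]])
groups = spectral_bipartition(S)
```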
For each device sub-group, the attribute feature vectors of its internal device nodes are aggregated, and the representation vector of the sub-group is learned through an attention mechanism. The attention mechanism is a common feature aggregation method that can adaptively assign weights to different features. Assuming that the sub-group G_i contains n_i device nodes and its attribute feature vector matrix is X_i, the attention mechanism is adopted to calculate the sub-group representation vector g_i. First, X_i is nonlinearly transformed through the parameter matrices W_1 and W_2 to compute the attention distribution a_i, and then a_i and X_i are weighted-summed to obtain g_i.
Next, the representation vector g_i of each sub-group is matched with the rule templates in the management and control rule base, and the candidate rule set applicable to the sub-group is screened out. The rule templates define the basic structure and parameter ranges of the management and control rules, and can be matched with the sub-groups through the constraint conditions on the attribute characteristics. Cosine similarity is adopted to calculate the similarity between g_i and the rule template representations, and rules with similarity larger than a preset threshold are selected as candidates.
When the control time scheme is generated, firstly, the group feature vector of the control group and the representation vector of each equipment sub-group are spliced to form a control context vector which is used as the input of the control time planning model. The model adopts a sequence-to-sequence (Seq2Seq) neural network structure, maps the management and control context vector to the hidden space through an encoder, and then generates a candidate management and control time sequence through a decoder.
A feasibility analysis is then performed for each candidate time scheme. And (3) calculating the satisfaction degree of the time scheme to the rule constraint conditions, detecting conflicts among different rules, and screening out the infeasible time scheme to obtain a final control time planning sequence. The constraint satisfaction degree can be calculated by measuring the deviation degree of time scheme parameters and rule requirements, and rule conflict detection is carried out by constructing a rule dependency graph and analyzing the combination of different rules by using a graph algorithm.
The management and control content recommendation aims at screening the optimal content combination from the candidate management and control content library according to the behavior characteristics of the management and control group. The relationship between the user and the content is modeled by a user-content interaction graph using a collaborative filtering algorithm based on a Graph Neural Network (GNN).
Firstly, constructing a user-content interaction diagram according to the behavior feature vector of the control group and the control time planning sequence. The management and control group and the content item are regarded as nodes of the graph, and the interaction behavior (such as browsing, clicking and the like) is regarded as edges of the graph, so that the heterogeneous interaction graph is constructed. Then, through the message passing and embedding aggregation mechanism of the GNN, the embedded representation of the user node and the content node is learned, and the interactive modeling of the user-content is realized. And finally, calculating preference scores of the user on different contents according to the similarity embedded by the nodes, and generating a personalized management and control content recommendation list.
For fine-grained device sub-groups, a proper management and control deployment mode needs to be selected according to the network topology and communication mode. First, the network information and communication records of the sub-groups are constructed as a heterogeneous graph representation including device nodes, link edges, and communication edges. The heterogeneous graph is then input into a pre-trained graph attention network (GAT) model, and the representation vectors of the nodes and edges are learned through attention over the nodes and edges. Finally, a graph classification algorithm such as a graph convolutional network (GCN) is adopted to predict the management and control deployment mode according to the embedded representations of the nodes and edges, realizing adaptive selection of the management and control mode.
The aim of controlling the intensity adjustment is to balance the controlling effect and the user experience, and the strictness degree of the controlling strategy is dynamically adjusted. Firstly, constructing a multi-dimensional management and control intensity evaluation index system, comprehensively considering factors such as management and control cost, user satisfaction, safety risk and the like, and quantifying the influence of management and control intensity. And inputting the attribute characteristics and the management and control historical data of the management and control groups into a pre-trained management and control intensity adjustment model, and learning a nonlinear mapping relation between management and control intensity and an evaluation index through a deep neural network to generate an optimal management and control intensity adjustment configuration.
On the basis of generating a management and control time plan, content recommendation, deployment mode selection and strength adjustment, the overall management and control strategy is further obtained through combination optimization. And modeling discrete variables of the management and control strategy by adopting an integer programming method, setting an objective function to maximize comprehensive management and control effectiveness, and solving an integer programming problem to obtain an optimal strategy combination by constraint conditions including satisfaction of management and control rules, resource limitation and the like. The combination optimization process can balance the trade-off among different control elements to obtain a globally optimal control strategy.
Through the steps, the technical scheme realizes the self-adaptive strategy optimization of the management and control group. The machine learning algorithm and the optimizing method are comprehensively utilized, the modeling and solving of the system are performed in the aspects of equipment sub-grouping, strategy element generation, strategy combination optimization and the like, and personalized and fine management and control strategies can be automatically generated.
In an alternative embodiment of the present invention,
The group feature vector of the management and control group and the representation vector of each equipment sub-group are spliced to form a management and control context vector, and the management and control context vector is input into a pre-trained management and control time planning model to generate a candidate management and control time scheme, wherein the method comprises the following steps of:
Arranging the device sub-group representation vectors and the corresponding group feature vectors in timestamp order to form a control context sequence, and encoding timestamp information into the vector representation using a position-based embedding method; taking the control context sequence as input to a multi-layer stacked bidirectional gated recurrent unit network, and obtaining the time sequence dependency relationships between sub-groups through forward and backward information transmission; applying an attention mechanism at each layer of the bidirectional gated recurrent unit network, and adjusting the importance of features at different time steps by calculating similarity weights between control context vectors, to obtain weight-aggregated hidden state vectors;
Carrying out feature fusion on the hidden state vectors of the last bidirectional gated recurrent unit layer through a multi-head self-attention mechanism, wherein each attention head independently calculates feature interactions in a different subspace, and the outputs of all heads are spliced to form a comprehensive feature representation for the management and control decision; adding the output of the multi-head self-attention mechanism to the management and control context sequence through a residual connection, and applying a layer normalization operation to obtain an enhanced management and control context vector representation;
Inputting the enhanced management and control context vector representation into a control time attribute decoder, wherein three parallel sub-decoders in the control time attribute decoder respectively output probability distributions of the control start time, duration and repetition period; applying a first activation function to the probability distribution of the control start time to obtain a prediction result for the control start time; applying a second activation function to the probability distributions of the duration and the repetition period, and multiplying by preset maximum duration and period values to obtain prediction results for the continuous values; combining the prediction results of the control start time and the continuous values to construct a search space of control time schemes, and performing heuristic search with a beam search algorithm to obtain the top N candidate control time schemes with the highest probability.
Illustratively, the management and control time plan is a key link in optimization of the management and control strategy, and aims to automatically generate an optimal management and control time scheme according to group characteristics and equipment sub-group characteristics of the management and control group. The application provides a control time planning model based on a depth sequence model and an attention mechanism, which can model the time sequence dependency relationship of control contexts, excavate the control rules of different time granularities and generate a personalized control time scheme.
Firstly, splicing group characteristic vectors of the management and control groups and representing vectors of each equipment sub-group to form a management and control context vector which is used as input of a management and control time planning model. Specifically, the representation vectors of the device sub-groups and the group feature vectors of the corresponding management and control groups are arranged according to the time stamp order to form a management and control context sequence. To introduce time information into the feature representation, the time stamp is encoded using a location-based embedding method. The position embedding maps the time stamps to a high-dimensional space by a trigonometric function, with even dimensions using a sine function and odd dimensions using a cosine function to distinguish between features at different time positions. And adding the position embedding vector and the control context vector element by element to obtain a final sequence representation.
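The position-based embedding described above can be sketched as follows. This is an illustrative simplification in plain Python; the function names, the base constant 10000 and the toy dimensions are assumptions of this sketch, not part of the claimed implementation:

```python
import math

def positional_embedding(timestep: int, dim: int) -> list:
    """Map a timestamp index to a dim-dimensional vector:
    even dimensions use a sine function, odd dimensions a cosine function."""
    vec = []
    for i in range(dim):
        # frequency decreases with dimension index (Transformer-style encoding)
        angle = timestep / (10000 ** (2 * (i // 2) / dim))
        vec.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return vec

def add_position(context_vec: list, timestep: int) -> list:
    """Element-wise addition of the position embedding and the control context vector."""
    pos = positional_embedding(timestep, len(context_vec))
    return [c + p for c, p in zip(context_vec, pos)]
```

Because the encoding depends only on the timestamp index and the dimension, features at different time positions receive distinct, deterministic offsets.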
To capture timing dependencies in the control context sequence, the sequence is modeled using a multi-layer stacked network of bidirectional gated recurrent units (BiGRU). The BiGRU allows information transfer in both the forward and backward directions of the sequence, characterizing the timing interaction pattern between sub-groups more completely. At each time step, the BiGRU controls the updating of the hidden state through a reset gate and an update gate. The reset gate determines the importance of the past state, and the update gate controls the degree of fusion between the current input and the past state. Through this gating mechanism, the BiGRU can adaptively choose to memorize and forget information at different time scales. The forward and backward hidden states are spliced at each time step to form a complete sequence representation.
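A minimal single-feature sketch of the BiGRU update described above. This is illustrative only: real implementations use weight matrices over hidden vectors, whereas this toy version uses scalar weights for readability, and the weight names are hypothetical:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x: float, h_prev: float, w: dict) -> float:
    """One GRU step: the reset gate r weighs the past state,
    the update gate z blends the candidate state with the past state."""
    r = sigmoid(w["wr"] * x + w["ur"] * h_prev)               # reset gate
    z = sigmoid(w["wz"] * x + w["uz"] * h_prev)               # update gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_cand                      # gated fusion

def bigru(sequence: list, w: dict) -> list:
    """Run the GRU forward and backward over the sequence and
    splice both hidden states at each time step."""
    fwd, h = [], 0.0
    for x in sequence:
        h = gru_cell(x, h, w)
        fwd.append(h)
    bwd, h = [], 0.0
    for x in reversed(sequence):
        h = gru_cell(x, h, w)
        bwd.append(h)
    bwd.reverse()
    return list(zip(fwd, bwd))
```

The spliced (forward, backward) pair at each position is what the attention layers below operate on.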
An attention mechanism is introduced at each layer of the BiGRU, adaptively adjusting feature importance by calculating the correlation between features at different time steps. Using dot-product attention, the hidden states of the BiGRU are first mapped into query, key and value vector spaces through linear transformations. Then, the dot products of the query vector with all key vectors are computed and normalized by softmax to obtain the attention weight distribution. Normalization ensures the comparability of attention weights across time steps. Finally, the attention weights and the corresponding value vectors are weighted and summed to obtain the aggregated feature representation. The attention mechanism can dynamically attend to the salient features of different time steps and extract key information for control time planning.
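The dot-product attention step can be illustrated as follows. This is a toy sketch over plain Python lists; the query/key/value linear transformations are assumed to have already been applied, and the scaling factor often used in practice is omitted for brevity:

```python
import math

def softmax(scores: list) -> list:
    """Normalize scores into a probability distribution (numerically stable)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot_product_attention(query: list, keys: list, values: list) -> list:
    """Weight each value vector by the softmax-normalized query-key dot
    product and return the aggregated feature representation."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
```

Because the weights sum to 1, the output is a convex combination of the value vectors, dominated by the time steps most similar to the query.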
At the last layer of the BiGRU network, a multi-head self-attention mechanism is employed to further enhance the representation capability of the control context vector. Multi-head self-attention computes feature interactions in different subspaces in parallel through multiple independent attention heads, capturing more diverse dependency relationships. Each attention head performs the attention calculation independently within its subspace using different query, key and value transformation matrices. The outputs of all attention heads are then spliced, and the features captured by the different heads are fused through a linear transformation. Multi-head self-attention can mine the diverse interaction patterns of the control context in different feature subspaces and improve the representation capability for time planning.
To facilitate optimization and generalization of the model, residual connection and layer normalization are introduced after the multi-head self-attention. The multi-head self-attention output is added element by element to the original control context sequence to form a residual connection. The residual connection alleviates the optimization difficulties of deep networks and promotes the back propagation of gradients. Then, the result of the residual connection is layer-normalized. Layer normalization normalizes each feature to zero mean and unit variance by subtracting the mean and dividing by the standard deviation. After normalization, the model can adaptively learn the feature distribution by adjusting scaling and offset parameters. Layer normalization accelerates model convergence and improves training stability.
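The residual connection and layer normalization can be sketched as follows. This is illustrative; `gamma` and `beta` stand in for the learnable scaling and offset parameters, and `eps` is the usual small constant for numerical stability:

```python
import math

def layer_norm(x: list, gamma: float = 1.0, beta: float = 0.0, eps: float = 1e-5) -> list:
    """Normalize features to zero mean and unit variance, then apply
    the learnable scale (gamma) and shift (beta)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [gamma * (v - mean) / math.sqrt(var + eps) + beta for v in x]

def residual_layer_norm(x: list, sublayer_out: list) -> list:
    """Add the sub-layer output to its input (residual connection),
    then layer-normalize the sum."""
    return layer_norm([a + b for a, b in zip(x, sublayer_out)])
```

With `gamma = 1` and `beta = 0`, the output of `layer_norm` always has (approximately) zero mean regardless of the input scale.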
After obtaining the enhanced control context vector representation, the final control time scheme is generated by the control time attribute decoder. The decoder consists of three parallel sub-decoders, which respectively predict the start time, duration and repetition period of the control. For the start time, a softmax function maps the feature representation to a probability distribution over time steps, and the time step with the highest probability is taken as the prediction result. For the duration and repetition period, the feature representation is mapped to a continuous value space through a linear transformation and multiplied by the preset maximum duration and period values to obtain the final prediction results. By decoding the different time attributes in parallel, diverse control time schemes can be generated flexibly.
To further optimize the control time scheme, a beam search algorithm is used to evaluate and screen multiple candidate schemes. Beam search maintains a fixed-size set of candidate solutions, extending the current best solutions at each time step and pruning low-quality solutions. By comprehensively considering generation probability and control utility, the top N control time schemes are selected as the final output. Beam search can efficiently approximate the globally optimal solution in the search space, balancing scheme quality and generation efficiency.
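A minimal beam search over per-step probability tables, in the spirit of the description above. This is an illustrative sketch: the attribute values and probabilities are hypothetical, and a real scorer would also fold in the control utility mentioned in the text rather than probability alone:

```python
def beam_search(step_probs: list, beam_width: int) -> list:
    """Keep the beam_width highest-probability partial sequences at each step.
    step_probs[t] maps each candidate attribute value at step t to its probability;
    returns (sequence, score) pairs sorted by descending score."""
    beams = [([], 1.0)]
    for probs in step_probs:
        expanded = [(seq + [val], score * p)
                    for seq, score in beams
                    for val, p in probs.items()]
        expanded.sort(key=lambda b: b[1], reverse=True)
        beams = expanded[:beam_width]  # prune low-quality candidates
    return beams
```

With two attribute steps (e.g. a start time, then a duration), the search keeps only the most promising combinations instead of enumerating the full product space.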
Through the steps, the control time planning model can automatically generate a personalized control time scheme according to the group characteristics of the control groups and the representation vectors of the equipment sub-groups. The model comprehensively utilizes a depth sequence model, an attention mechanism and a decoding optimization technology, can model the time sequence dependency relationship of the control context, excavates the control rules with multiple granularities, and generates flexible and various control time attribute combinations. The intelligent control method provides an effective technical means for realizing intelligent optimization of the control strategy, and can remarkably improve timeliness and accuracy of control.
In an alternative embodiment of the present invention,
Analyzing the user viewing behavior by using a pre-trained attention-mechanism-based user portrait extraction model based on the management and control parameters, comprising:
The intelligent viewing terminal equipment analyzes the received management and control parameters, extracts the management and control start time and end time of the management and control time interval, and converts them into the timestamp format used inside the equipment; the current system time is acquired through a built-in clock circuit or a network time synchronization protocol and compared with the start and stop timestamps of the management and control time interval to judge whether the current system time falls within the management and control time interval; if the comparison result shows that the current system time is not within the management and control time interval, the intelligent viewing terminal equipment enters a standby monitoring state until the management and control effective condition is met;
If the comparison result shows that the current system time is in the control time interval, the intelligent viewing terminal equipment triggers a control effective mark, which shows that the system time is in the control effective period, and initializes the control execution environment; continuously acquiring user viewing behaviors of the intelligent viewing terminal equipment in a management and control effective period, wherein the user viewing behaviors comprise channel switching, program selection and viewing time; the intelligent viewing terminal equipment preprocesses the collected user viewing behaviors and inputs a user portrait extraction model based on an attention mechanism, and maps the preprocessed user viewing behaviors into dense vectors through an embedding layer to generate user behavior feature vectors;
Performing multi-head self-attention coding on the generated user behavior feature vector, and extracting key modes and preferences of user behaviors by calculating similarity weights among different features; modeling the extracted sequence of the key mode and preference of the user behavior through a gating circulation unit network, and determining a change trend vector representing the preference of the user; fusing the key mode and preference of the user behavior with the change trend vector, and obtaining a comprehensive user behavior representation vector through residual connection and layer normalization technology; based on the comprehensive user behavior representation vector, a multi-dimensional feature image of the user including demographic attributes, interest preferences, viewing habits is predicted.
Illustratively, under the guidance of the management and control parameters, the intelligent viewing terminal equipment needs to analyze the viewing behaviors of the user, extract the multidimensional feature portraits of the user, and provide data support for personalized content recommendation and advertisement delivery. The application provides a user portrait extraction model based on an attention mechanism, which can mine key modes and preferences from the viewing behaviors of users and forecast multidimensional features such as demographic attributes, hobbies and viewing habits of the users.
The intelligent viewing terminal equipment first analyzes the received management and control parameters, extracts the start and stop times of the management and control time interval, and converts them into the timestamp format used inside the equipment. The current system time is acquired through a built-in clock circuit or a network time synchronization protocol and compared with the management and control time interval. If the current time is not within the management and control time interval, the intelligent viewing terminal equipment enters a standby monitoring state and periodically checks the management and control effective condition. When the management and control effective condition is met, that is, the current time enters the management and control time interval, the intelligent viewing terminal equipment triggers a management and control effective flag to indicate entry into the management and control effective period, and initializes the management and control execution environment.
And in the management and control effective period, the intelligent viewing terminal equipment continuously acquires the viewing behavior data of the user. Viewing behavior includes channel switching, program selection, viewing duration, etc. Preprocessing the collected user viewing behavior data, such as data cleaning, feature extraction, normalization and the like. The preprocessed user viewing behavior data is represented in a fixed-length sequence, and each time step corresponds to one viewing behavior record.
The preprocessed sequence of user viewing behaviors is input into an embedded layer, and each viewing behavior is mapped into a dense vector representation by a look-up table operation. The embedded layer may learn semantic similarity between viewing behaviors, converting discrete behavior IDs into continuous feature space. The dimension of the embedded vector is optimized by cross-validation and other methods to balance the feature representation capability and the computational complexity. The output of the embedded layer is a sequence of user behavior feature vectors, which serve as input to a subsequent attention mechanism.
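The look-up table operation of the embedding layer can be sketched as follows. This is illustrative; the `<unk>` fallback for unseen behavior IDs is an assumption of this sketch, not stated in the text:

```python
def embed_sequence(behavior_ids: list, embedding_table: dict) -> list:
    """Look-up table operation: map each discrete behavior ID to its
    dense vector; unknown IDs fall back to a shared <unk> vector."""
    unk = embedding_table["<unk>"]
    return [embedding_table.get(bid, unk) for bid in behavior_ids]
```

In a trained model the table entries are learned parameters, so behaviors with similar viewing semantics end up with nearby vectors.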
And carrying out multi-head self-attention coding on the user behavior feature vector sequence, and extracting key modes and preferences of the user behavior by calculating correlations among different features. The multi-headed self-attention mechanism includes a plurality of parallel attention heads, each independently calculating an attention weight and feature aggregation. For each attention header, a sequence of user behavior feature vectors is mapped to a query vector, a key vector, and a value vector by linear transformation. The dot product similarity of the query vector to all key vectors is then calculated and a softmax function is applied to derive a normalized attention weight. And carrying out weighted summation on the attention weight and the corresponding value vector to obtain the characteristic aggregation result of the current head. Finally, the outputs of all attention heads are spliced, and a multi-head self-attention encoded output vector sequence is obtained through linear transformation.
The vector sequence of key patterns and preferences of user behavior obtained by multi-head self-attention encoding is input into a gated recurrent unit (GRU) network for sequence modeling. The GRU network adaptively updates its hidden state through a gating mechanism, capturing long-term dependence and dynamic changes in user preferences. At each time step, the GRU controls the flow and updating of information through a reset gate and an update gate according to the current input and the past hidden state. The reset gate determines the importance of the past state, and the update gate controls the degree of fusion between the current input and the past state. The hidden state of the GRU at the last time step of the sequence is extracted as the vector representing the trend of change in user preferences.
The key pattern and preference vectors of user behavior are spliced with the change trend vector obtained by the GRU to form a preliminary user behavior representation. To further enhance the representation capability of the features, residual connection and layer normalization techniques are introduced. The preliminary user behavior representation is added element by element to the original user behavior feature vector sequence to form a residual connection. The residual connection alleviates the optimization difficulties of deep networks and promotes the back propagation of gradients. Then, the result of the residual connection is layer-normalized: each feature is normalized to zero mean and unit variance by subtracting the mean and dividing by the standard deviation. After normalization, the model can adaptively learn the feature distribution by adjusting scaling and offset parameters. Layer normalization accelerates model convergence and improves training stability. The resulting vector is the comprehensive user behavior representation, fusing the key patterns, change trends and original behavioral features of user preferences.
Based on the comprehensive user behavior representation vector, the multi-dimensional feature image of the user is predicted through the full connection layer and the activation function. For different types of features, different prediction modes are adopted. For demographic attributes such as gender, age, etc., a softmax classifier is used to map the user behavior representation to a corresponding class probability distribution and select the class with the highest probability as the prediction result. For continuous value features such as interest preferences and viewing habits, linear regression is used to map the user behavior representation to predicted values for the feature. The normalized degree of the feature is represented by compressing the predicted value to between 0 and 1 by a sigmoid function. The final user portraits comprise a plurality of dimensions such as demographic attributes, interest preferences, viewing habits and the like, and rich user characteristics are provided for subsequent personalized recommendation and advertisement delivery.
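The two prediction heads described above can be illustrated as follows. This is a toy sketch; the label names, logit values and function names are hypothetical, and the fully connected layers that produce the logits are omitted:

```python
import math

def softmax(scores: list) -> list:
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_categorical(logits: list, labels: list) -> str:
    """Softmax head for demographic attributes: return the label
    with the highest class probability."""
    probs = softmax(logits)
    return labels[probs.index(max(probs))]

def predict_continuous(score: float) -> float:
    """Sigmoid head for interest-preference / viewing-habit features:
    compress the regression output to the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-score))
```

Each portrait dimension thus gets the head matching its type: discrete attributes go through the softmax classifier, continuous features through the sigmoid-compressed regression.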
Through the steps, the user portrait extraction model based on the attention mechanism can automatically learn the key preference and the behavior mode of the user from the viewing behavior of the user, and predict the multi-dimensional user feature portrait. The model fully utilizes the advantages of the attention mechanism in the aspect of extracting key information, and the long-term dependence and dynamic change trend of the user behavior are excavated through the multi-head self-attention coding and gating circulating unit network. Meanwhile, residual connection and layer normalization technology are introduced, so that the characteristic representation capability and training stability of the model are enhanced.
In an alternative embodiment of the present invention,
Judging whether the current viewing behavior meets the control condition or not through depth feature matching and correlation calculation, wherein the method comprises the following steps:
Converting the management user portrait in the management and control parameters into a management user portrait vector, wherein the management user portrait is defined in the form of structured attribute-value pairs and represents the characteristics of the target user group to be managed, and a word embedding model is used to perform semantic vector mapping on the attributes and values in the management user portrait to obtain the management user portrait vector; calculating a similarity score between the real-time user portrait vector and the management user portrait vector by using a depth feature matching algorithm, wherein the real-time user portrait vector is obtained by vectorizing the multi-dimensional feature portrait of the user, the depth feature matching algorithm adopts cosine similarity as the metric function for feature matching, and the cosine of the angle between the real-time user portrait vector and the management user portrait vector is taken as the similarity score;
The current viewing channel identification and program identification are one-hot encoded to obtain a Boolean channel feature vector and a Boolean program feature vector; the management channel set and the management program list are likewise expressed as one-hot encoded feature matrices, and the channel attention weight distribution and program attention weight distribution are calculated based on an attention mechanism; using weighted summation, the channel attention weight distribution is multiplied element by element with the channel feature vector and summed to obtain a channel matching degree score; the program attention weight distribution is multiplied element by element with the program feature vector and summed to obtain a program matching degree score; according to the channel matching degree score and the program matching degree score, a comprehensive content relevance score is calculated through weighted-summation fusion;
Calculating the comprehensive satisfaction degree of the control condition through a weighted fusion function according to the similarity scores of the real-time user image vectors and the control user image vectors and the content correlation scores, wherein the weighted fusion function controls the contribution ratio of the user image similarity and the content correlation to the control condition through a weight coefficient; comparing the calculated satisfaction degree of the control condition with a preset threshold, and when the satisfaction degree exceeds the preset threshold, judging that the current viewing behavior meets the control condition and taking corresponding control measures; otherwise, judging that the current viewing behavior does not trigger the control condition, and not executing the control operation.
Illustratively, after obtaining the real-time portrayal of the user, the intelligent viewing terminal device needs to determine whether the viewing behavior of the current user meets the control conditions defined in the control parameters. The application provides a control condition judgment model based on depth feature matching and correlation calculation, which is used for calculating the satisfaction degree of control conditions by comprehensively considering the similarity of user images and the correlation of viewing contents and carrying out control decision according to a preset threshold value.
First, the administrative user representation in the administrative parameters is converted into a vectorized representation. The administrative user representation is defined in the form of structured attribute-value pairs representing characteristics of the target user population to be managed. For each attribute-value pair, a pre-trained Word embedding model, such as Word2Vec or GloVe, is used to map the attributes and values into semantic vectors. By stitching or averaging the attribute vectors and the value vectors, an embedded vector representing a single attribute-value pair is obtained. And finally, carrying out weighted average or splicing on the embedded vectors of all the attribute-value pairs to obtain the management and control user portrait vector. The administrative user portrait vector is expressed in the form of a dense vector of fixed length, capturing the semantic features of the administrative user population.
For example, assume that the administrative user representation is { "gender": "male", "age group": "young", "hobbies": "sporting events" }, which can be converted by the word embedding model into a 300-dimensional managed user portrait vector, such as [0.2, 0.1, …, 0.5].
And calculating the similarity between the real-time user portrait vector and the management user portrait vector by using a depth feature matching algorithm. The real-time user image vector is obtained by vectorizing the multi-dimensional feature image of the user, and has the same dimension as the management user image vector. And (3) taking cosine similarity as a measurement function of feature matching, and calculating an included angle cosine value between the two vectors as a similarity score. The cosine similarity has a value ranging from-1 to 1, with a larger value indicating that the two vectors are more similar. By setting the similarity threshold, it can be determined whether the real-time user portrait matches the administrative user portrait.
For example, assume that the real-time user portrait vector is [0.3, 0.2, …, 0.6]; calculating cosine similarity with the above managed user portrait vector yields a similarity score of 0.8, indicating that the two user portraits have a high degree of matching.
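The cosine similarity computation can be sketched as follows. This is illustrative; the three-dimensional vectors in the checks below are toy stand-ins for the 300-dimensional portrait vectors of the example:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two portrait vectors, in [-1, 1];
    larger values indicate more similar portraits."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Because the measure depends only on direction and not magnitude, portraits with proportionally similar feature profiles score close to 1 even if their absolute feature values differ.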
The channel and program currently watched by the user are each one-hot encoded to obtain a Boolean channel feature vector and a Boolean program feature vector. One-hot encoding maps channels and programs into fixed-length binary vectors, each dimension corresponding to a unique channel or program and taking the value 0 or 1 to indicate whether that channel or program is currently being watched. Meanwhile, the management channel set and the management program list in the management and control parameters are also expressed as one-hot encoded feature matrices.
Next, a channel attention weight distribution and a program attention weight distribution are calculated based on the attention mechanism. For the channel feature vector, the correlation weight of the channel feature vector and each channel in the management channel set is calculated through an attention mechanism. Specifically, dot product calculation is carried out on the channel characteristic vector and the management channel characteristic matrix to obtain an attention score vector. And normalizing the attention score vector by using a softmax function to obtain channel attention weight distribution, wherein the channel attention weight distribution represents the correlation degree of the current watching channel and each channel in the management channel set. Similarly, attention calculation is performed on the program feature vectors and the management program feature matrix to obtain program attention weight distribution.
Then, the channel attention weight distribution and the channel feature vector are multiplied element by element and summed in a weighted summation mode to obtain the channel matching degree score. Similarly, the program attention weight distribution and the program feature vector are multiplied element by element and summed to obtain the program matching degree score. The channel matching degree and the program matching degree have the value range of 0 to 1, and the larger the value is, the higher the correlation between the currently watched content and the controlled content is.
And finally, combining the channel matching degree score and the program matching degree score by a fusion mode of weighted summation to obtain the comprehensive content relevance score. The fusion weight can be adjusted according to the importance of the channel and the program to the control condition, for example, the channel is given a higher weight.
For example, assume that the current user is watching channel A and program B, the channel feature vector is [0, 1, 0, …, 0], and the program feature vector is [0, 0, 1, …, 0]. The set of managed channels is {A, C, D}, and the list of managed programs is {B, E, F}. The channel attention weight distribution obtained through the attention mechanism is [0.6, 0.3, 0.1], and the program attention weight distribution is [0.8, 0.1, 0.1]. Weighted summation yields a channel matching degree of 0.6 and a program matching degree of 0.8. Assuming a channel weight of 0.6 and a program weight of 0.4, the overall content correlation score is 0.6×0.6+0.4×0.8=0.68.
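The worked example above can be reproduced with a short sketch. Illustrative only: for brevity the one-hot vectors here are indexed over the managed set alone, so channel A (first of {A, C, D}) and program B (first of {B, E, F}) each map to [1, 0, 0]:

```python
def matching_score(attention_weights: list, feature_vector: list) -> float:
    """Element-wise product of attention weights and the one-hot
    feature vector, summed into a single matching degree score."""
    return sum(w * f for w, f in zip(attention_weights, feature_vector))

# Worked example from the text: watching channel A and program B
channel_match = matching_score([0.6, 0.3, 0.1], [1, 0, 0])  # -> 0.6
program_match = matching_score([0.8, 0.1, 0.1], [1, 0, 0])  # -> 0.8
content_score = 0.6 * channel_match + 0.4 * program_match   # weighted fusion -> 0.68
```

The channel weight of 0.6 versus the program weight of 0.4 reflects the example's choice of giving channels higher importance in the fusion.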
And calculating the comprehensive satisfaction degree of the control conditions through a weighted fusion function according to the similarity scores and the content correlation scores of the real-time user image vectors and the control user image vectors. The weighted fusion function controls the contribution ratio of the user portrait similarity and the content correlation to the control condition through the weight coefficient, for example, the user portrait similarity can be given higher weight. The satisfaction of the control condition is in the range of 0 to 1, and the larger the value is, the more possible the control condition is triggered by the current viewing behavior.
Comparing the calculated satisfaction degree of the control condition with a preset threshold value, and when the satisfaction degree exceeds the preset threshold value, judging that the current viewing behavior meets the control condition, wherein the intelligent viewing terminal equipment needs to take corresponding control measures, such as shielding programs, playing prompt information and the like. Otherwise, judging that the current viewing behavior does not trigger the control condition, and the intelligent viewing terminal equipment continues to normally play the program without executing the control operation. The preset threshold value can be adjusted according to the control strictness, and the higher the threshold value is, the stricter the control condition judgment is.
For example, assuming that the user portrait similarity is 0.8, the content relevance score is 0.68, the similarity weight is 0.7, and the content weight is 0.3, the management and control condition satisfaction is 0.7×0.8+0.3×0.68=0.764. Assuming that the preset threshold is 0.75, the current viewing behavior satisfies the control condition, and a corresponding control measure needs to be executed.
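The weighted fusion and threshold comparison of this example can be sketched as follows. Illustrative only: the weight coefficients and threshold are the example values from the text, not fixed by the method:

```python
def control_decision(portrait_sim: float, content_rel: float,
                     w_sim: float = 0.7, w_rel: float = 0.3,
                     threshold: float = 0.75):
    """Weighted fusion of portrait similarity and content relevance,
    compared against the control threshold; returns (satisfaction, triggered)."""
    satisfaction = w_sim * portrait_sim + w_rel * content_rel
    return satisfaction, satisfaction > threshold

# Worked example from the text: similarity 0.8, relevance 0.68 -> 0.764 > 0.75
score, triggered = control_decision(0.8, 0.68)
```

Raising the threshold makes the judgment stricter, so fewer viewing behaviors trigger control measures, which matches the tunability described in the text.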
Through the steps, the control condition judgment model based on depth feature matching and correlation calculation can comprehensively consider user features and viewing contents, dynamically judge the satisfaction condition of the control conditions, and realize intelligent viewing behavior control. The model introduces word embedding and attention mechanisms, and improves the accuracy of user portrait matching and content correlation calculation. Meanwhile, the control strictness degree can be flexibly adjusted through weighted fusion and threshold comparison, and different control requirements are met.
Fig. 2 is a schematic structural diagram of a management and control system of an intelligent viewing terminal device according to an embodiment of the present invention, as shown in fig. 2, where the system includes:
The system comprises a first unit, a second unit and a third unit, wherein the first unit is used for receiving a management and control task issued by a cloud management and control platform by establishing a safe communication connection based on a blockchain technology with the cloud management and control platform, and the management and control task comprises a management and control rule base, a management and control object list and a management and control strategy generation model; based on the management and control task, carrying out feature extraction and grouping identification on the intelligent viewing terminal equipment by utilizing the knowledge graph and the machine learning model of the intelligent viewing terminal equipment to generate a management and control strategy and a structured management and control parameter;
The second unit is configured, after the intelligent viewing terminal device receives the management and control parameters, to analyze user viewing behavior based on those parameters using a pre-trained attention-based user portrait extraction model, and to judge, through deep feature matching and correlation calculation, whether the current viewing behavior satisfies the management and control conditions; if the conditions are not satisfied, to enter the next round of the real-time data sensing and analysis stage, cyclically executing the step of analyzing user viewing behavior with the pre-trained attention-based user portrait extraction model and the subsequent steps, until the current system time exceeds the management and control time interval.
The third unit is configured, if the management and control conditions are satisfied, to dynamically decide the optimal control execution action using a reinforcement learning algorithm according to the management and control behavior mode, generate the corresponding restrictive operation instruction, and issue it to the intelligent viewing terminal device so as to intervene in and guide the current viewing behavior. The restrictive operation instructions include at least one of: limiting channel switching, limiting volume adjustment, limiting power on/off, popping up management and control prompt information, forcibly switching to a designated channel, forcibly playing designated content, intelligently recommending suitable content, dynamically adjusting the management and control intensity, and adding family-reminder interaction.
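The patent does not specify which reinforcement learning algorithm the third unit uses. As one hedged illustration only, an epsilon-greedy tabular Q-learning policy over the listed restrictive operations might look like the following; the action names, state encoding, and hyperparameters are all assumptions for the sketch.

```python
import random

# Candidate restrictive operations listed in the text.
ACTIONS = [
    "limit_channel_switching", "limit_volume", "limit_power",
    "popup_prompt", "force_channel", "force_content",
    "recommend_content", "adjust_intensity", "family_reminder",
]

class ControlPolicy:
    """Epsilon-greedy tabular Q-learning; a sketch, not the patent's model."""
    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9, seed=0):
        self.q = {}  # (state, action) -> estimated value
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma
        self.rng = random.Random(seed)

    def choose(self, state):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(ACTIONS)  # explore
        # Exploit: action with the highest learned value for this state.
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

policy = ControlPolicy()
action = policy.choose(state="evening_overwatch")
policy.update("evening_overwatch", action, reward=1.0,
              next_state="compliant")
```

The reward signal would come from whether the intervention actually curbed the viewing behavior, which is what lets the policy adjust control intensity over time.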
In a third aspect of an embodiment of the present invention, there is provided an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method described above.
In a fourth aspect of an embodiment of the present invention, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method described above.
The present invention may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing various aspects of the present invention.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (8)

1. A method for controlling a smart viewing terminal device, comprising:

establishing a secure communication connection based on blockchain technology with a cloud control platform and receiving a control task issued by the cloud control platform, the control task including a control rule library, a control object list, and a control strategy generation model; based on the control task, performing feature extraction and group identification on smart viewing terminal devices using a smart viewing terminal device knowledge graph and a machine learning model, and generating a control strategy and structured control parameters;

after the smart viewing terminal device receives the control parameters, analyzing user viewing behavior based on the control parameters using a pre-trained attention-based user portrait extraction model, and judging, through deep feature matching and correlation calculation, whether the current viewing behavior satisfies the control conditions; if the control conditions are not satisfied, entering the next round of the real-time data sensing and analysis stage, and cyclically executing the step of analyzing user viewing behavior based on the control parameters using the pre-trained attention-based user portrait extraction model and the subsequent steps, until the current system time exceeds the control time interval;

if the control conditions are satisfied, dynamically deciding the optimal control execution action using a reinforcement learning algorithm according to the control behavior mode, generating a corresponding restrictive operation instruction, and issuing it to the smart viewing terminal device so as to intervene in and guide the current viewing behavior, the restrictive operation instruction including at least one of: limiting channel switching, limiting volume adjustment, limiting power on/off, popping up control prompt information, forcibly switching to a designated channel, forcibly playing designated content, intelligently recommending suitable content, dynamically adjusting control intensity, and adding family-reminder interaction;

wherein performing feature extraction and group identification on smart viewing terminal devices using the smart viewing terminal device knowledge graph and the machine learning model based on the control task, and generating the control strategy and structured control parameters, includes:

using the control object list, in combination with a pre-built smart viewing terminal device knowledge graph, performing feature extraction and vectorized representation of multi-dimensional features of the current smart viewing terminal device through knowledge reasoning and semantic association analysis to generate a feature graph of the current smart viewing terminal device, the multi-dimensional features including any one or more of hardware configuration, software system, network environment, and user group characteristics;

constructing a graph neural network; inputting the feature graph into the graph neural network, and dividing different smart viewing terminal devices into corresponding control groups based on the feature vectors output by the graph neural network through a k-means clustering algorithm or a density clustering algorithm; for the control grouping results, using clustering evaluation indices including the silhouette coefficient and the Calinski-Harabasz index to measure the cohesion and separation of the control groups and estimate the credibility of the clustering effect, and taking control groups whose credibility is higher than a preset credibility threshold as the identified control groups;

for an identified control group, using the pre-trained control strategy generation model, combined with the rule templates and constraints in the control rule library, to generate an optimized control strategy for that control group, the control strategy including control time planning, control content recommendation, control mode selection, and control intensity adjustment; converting the generated control strategy into structured control parameters and securely issuing the control parameters to the smart viewing terminal device through a smart contract mechanism, the control parameters including a control time interval, a control channel set, a control program list, a control user portrait, and a control behavior pattern;

and wherein generating the feature graph of the current smart viewing terminal device includes:

mapping the current smart viewing terminal device in the control object list to the corresponding entity node in the smart viewing terminal device knowledge graph; taking the current smart viewing terminal device entity node as the central node, generating a multi-hop association subgraph through a random walk algorithm, the multi-hop association subgraph containing entity nodes and relationship edges semantically associated with the central node; on the basis of the multi-hop association subgraph, learning a knowledge-enhanced feature representation of the current smart viewing terminal device entity node through message passing and aggregation operations in a graph neural network model;

acquiring multi-dimensional heterogeneous attribute information of the current smart viewing terminal device entity node and extracting structured multi-dimensional features through feature templates; concatenating the multi-dimensional features with the knowledge-enhanced feature representation, learning importance weights of features of different dimensions through an attention mechanism, and generating a comprehensive feature embedding vector of the current smart viewing terminal device entity node; inputting the comprehensive feature embedding vectors of the entity nodes in the multi-hop association subgraph into the graph neural network model, and updating the comprehensive feature embedding vectors through multi-layer message passing and feature aggregation;

optimizing the comprehensive feature embedding vector of the current smart viewing terminal device entity node through a contrastive learning model; calculating pairwise similarities between the optimized comprehensive feature embedding vectors to construct a similarity matrix; performing eigendecomposition of the similarity matrix to obtain the eigenvectors corresponding to the m largest eigenvalues as a low-dimensional representation of the comprehensive feature embedding vectors; applying a t-distributed neighborhood embedding algorithm to the low-dimensional representation for nonlinear dimensionality reduction, minimizing the KL divergence between nodes in the low-dimensional space through gradient descent optimization to obtain node coordinates on a two-dimensional plane, and generating a visualized feature graph of the current smart viewing terminal device.

2. The method according to claim 1, characterized in that, for an identified control group, using the pre-trained control strategy generation model combined with the rule templates and constraints in the control rule library to generate an optimized control strategy for that control group includes:

calculating a similarity matrix between devices by cosine similarity according to the attribute feature vector of each device node in the control group, and applying a spectral clustering algorithm to partition the similarity matrix to obtain device subgroups; aggregating the attribute feature vectors of each device subgroup, learning a representation vector of the subgroup through an attention mechanism, matching it against the rule templates in the control rule library, and screening out the candidate rule set applicable to that subgroup;

concatenating the group feature vector of the control group and the representation vector of each device subgroup to form a control context vector, and inputting it into a pre-trained control time planning model to generate candidate control time plans; performing feasibility analysis on each candidate control time plan, screening out infeasible plans through constraint satisfaction calculation and rule conflict detection, and obtaining a control time planning sequence; inputting the behavior feature vector of the control group and the control time planning sequence into a pre-trained control content recommendation model, which uses a graph-neural-network-based collaborative filtering algorithm to model user-content interaction through message passing and embedding aggregation, generating a control content recommendation list;

constructing a heterogeneous graph representation according to the network topology and communication modes of the fine-grained device subgroups, inputting it into a pre-trained control mode selection model, learning aggregated representations of nodes and edges through a graph attention network, and predicting the control deployment mode selection with a graph classification algorithm; constructing a control intensity evaluation index system and inputting it into a pre-trained control intensity adjustment model to generate a control intensity adjustment configuration; jointly optimizing the control time planning sequence, the control content recommendation list, the control deployment mode selection, and the control intensity adjustment configuration, solving the constraint satisfaction problem by integer programming, and obtaining the optimized control strategy for the control group.

3. The method according to claim 2, characterized in that concatenating the group feature vector of the control group and the representation vector of each device subgroup to form a control context vector, and inputting it into a pre-trained control time planning model to generate candidate control time plans, includes:

arranging the device subgroup representation vectors and the corresponding group feature vectors in timestamp order to form a control context sequence, and encoding the timestamp information into the vector representation using a position-based embedding method; taking the control context sequence as input to a multi-layer stacked bidirectional gated recurrent unit network and obtaining the temporal dependencies between subgroups through forward and backward information propagation; applying an attention mechanism at each layer of the bidirectional gated recurrent unit network, adjusting the importance of features at different time steps by computing similarity weights between control context vectors, and obtaining weight-aggregated hidden state vectors;

fusing the hidden state vectors of the last bidirectional gated recurrent unit layer through a multi-head self-attention mechanism, in which each attention head independently computes feature interactions in a different subspace and the outputs of all heads are concatenated to form a comprehensive feature representation for the control decision; adding the output of the multi-head self-attention mechanism to the control context sequence via a residual connection and applying layer normalization to obtain an enhanced control context vector representation;

inputting the enhanced control context vector representation into a control time attribute decoder whose three parallel sub-decoders respectively output probability distributions over the control start time, the duration, and the repetition period; applying a first activation function to the probability distribution of the control start time to obtain the predicted control start time; applying a second activation function to the probability distributions of the duration and the repetition period and multiplying by preset maximum duration and period values to obtain continuous-valued predictions; combining the predicted control start time and the continuous-valued predictions to construct the search space of control time plans, and applying a beam search algorithm for heuristic search to obtain the N highest-probability candidate control time plans.

4. The method according to claim 1, characterized in that analyzing user viewing behavior based on the control parameters using a pre-trained attention-based user portrait extraction model includes:

the smart viewing terminal device parsing the received control parameters, extracting the control start time and control end time of the control time interval, and converting them into the device's internal timestamp format; obtaining the current system time through a built-in clock circuit or a network time synchronization protocol, comparing the current system time with the start and end timestamps of the control time interval, and judging whether the current system time is within the control time interval; if the comparison shows that the current system time is not within the control time interval, the device entering a standby listening state until the conditions for the control to take effect are met;

if the comparison shows that the current system time is within the control time interval, the device setting a control-in-effect flag indicating the control effective period and initializing the control execution environment; during the control effective period, continuously collecting the user viewing behavior of the device, the user viewing behavior including channel switching, program selection, and viewing duration; pre-processing the collected user viewing behavior and inputting it into the attention-based user portrait extraction model, mapping the pre-processed user viewing behavior to dense vectors through an embedding layer to generate user behavior feature vectors;

applying multi-head self-attention encoding to the generated user behavior feature vectors and extracting key patterns and preferences of user behavior by computing similarity weights between different features; modeling the sequence of extracted key patterns and preferences through a gated recurrent unit network to determine a trend vector representing the change in user preference; fusing the key patterns and preferences of user behavior with the trend vector through residual connections and layer normalization to obtain a comprehensive user behavior representation vector; and, based on the comprehensive user behavior representation vector, predicting the user's multi-dimensional feature portrait, including demographic attributes, interest preferences, and viewing habits.

5. The method according to claim 4, characterized in that judging whether the current viewing behavior satisfies the control conditions through deep feature matching and correlation calculation includes:

converting the control user portrait in the control parameters into a control user portrait vector, the control user portrait being defined as structured attribute-value pairs representing the characteristics of the target user group to be controlled, and obtaining the control user portrait vector by semantically mapping the attributes and values of the control user portrait with a word embedding model; calculating a similarity score between the real-time user portrait vector and the control user portrait vector with a deep feature matching algorithm, the real-time user portrait vector being obtained by vectorizing the user's multi-dimensional feature portrait, the deep feature matching algorithm using cosine similarity as the matching metric and taking the cosine of the angle between the real-time user portrait vector and the control user portrait vector as the similarity score;

one-hot encoding the current channel identifier and program identifier to obtain Boolean channel and program feature vectors, representing the control channel set and the control program list as one-hot-encoded feature matrices, and computing a channel attention weight distribution and a program attention weight distribution based on an attention mechanism; multiplying the channel attention weight distribution element-wise with the channel feature vector and summing to obtain a channel matching score; multiplying the program attention weight distribution element-wise with the program feature vector and summing to obtain a program matching score; and computing a comprehensive content relevance score from the channel matching score and the program matching score by weighted summation;

computing a comprehensive control condition satisfaction from the similarity score and the content relevance score through a weighted fusion function whose weight coefficients control the respective contributions of user portrait similarity and content relevance; comparing the computed control condition satisfaction with a preset threshold; when the satisfaction exceeds the threshold, judging that the current viewing behavior satisfies the control conditions and taking the corresponding control measures; otherwise, judging that the current viewing behavior does not trigger the control conditions and performing no control operation.

6. A control system for a smart viewing terminal device, for implementing the method according to any one of claims 1 to 5, comprising:

a first unit, configured to establish a secure communication connection based on blockchain technology with a cloud control platform and receive a control task issued by the cloud control platform, the control task including a control rule library, a control object list, and a control strategy generation model; and, based on the control task, to perform feature extraction and group identification on smart viewing terminal devices using a smart viewing terminal device knowledge graph and a machine learning model, generating a control strategy and structured control parameters;

a second unit, configured, after the smart viewing terminal device receives the control parameters, to analyze user viewing behavior based on the control parameters using a pre-trained attention-based user portrait extraction model, and to judge, through deep feature matching and correlation calculation, whether the current viewing behavior satisfies the control conditions; if the control conditions are not satisfied, to enter the next round of the real-time data sensing and analysis stage, cyclically executing the step of analyzing user viewing behavior and the subsequent steps until the current system time exceeds the control time interval;

a third unit, configured, if the control conditions are satisfied, to dynamically decide the optimal control execution action using a reinforcement learning algorithm according to the control behavior mode, generate the corresponding restrictive operation instruction, and issue it to the smart viewing terminal device so as to intervene in and guide the current viewing behavior, the restrictive operation instruction including at least one of: limiting channel switching, limiting volume adjustment, limiting power on/off, popping up control prompt information, forcibly switching to a designated channel, forcibly playing designated content, intelligently recommending suitable content, dynamically adjusting control intensity, and adding family-reminder interaction.

7. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to execute the method according to any one of claims 1 to 5.

8. A computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 5.
CN202411195040.6A 2024-08-29 2024-08-29 Method and system for controlling intelligent viewing terminal equipment Active CN118714408B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411195040.6A CN118714408B (en) 2024-08-29 2024-08-29 Method and system for controlling intelligent viewing terminal equipment


Publications (2)

Publication Number Publication Date
CN118714408A CN118714408A (en) 2024-09-27
CN118714408B true CN118714408B (en) 2024-11-08

Family

ID=92816631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411195040.6A Active CN118714408B (en) 2024-08-29 2024-08-29 Method and system for controlling intelligent viewing terminal equipment

Country Status (1)

Country Link
CN (1) CN118714408B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118921526A (en) * 2024-10-09 2024-11-08 宁波江北华数广电网络有限公司 Non-intelligent set top box double-management operation method and system
CN119322485A (en) * 2024-10-17 2025-01-17 杭州鲸云智能工业科技有限公司 Manufacturing equipment monitoring method based on Internet of things
CN119443253A (en) * 2025-01-10 2025-02-14 西安欣创电子技术有限公司 Knowledge graph construction system and method based on prompt words

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108462888A (en) * 2018-03-14 2018-08-28 江苏有线数据网络有限责任公司 The intelligent association analysis method and system of user's TV and internet behavior
CN108650520A (en) * 2018-03-30 2018-10-12 北京金山安全软件有限公司 Video live broadcast control method, related equipment and computer storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8181201B2 (en) * 2005-08-30 2012-05-15 Nds Limited Enhanced electronic program guides
CN117768665A (en) * 2023-11-17 2024-03-26 吉蛋互娱(武汉)科技有限公司 Live broadcast accurate drainage method and system based on Internet big data analysis
CN118138794B (en) * 2024-05-08 2024-09-17 深圳市科路教育科技有限公司 Mobile network-based teaching video live broadcast control method


Also Published As

Publication number Publication date
CN118714408A (en) 2024-09-27

Similar Documents

Publication Publication Date Title
CN118714408B (en) Method and system for controlling intelligent viewing terminal equipment
Lin et al. A survey on reinforcement learning for recommender systems
Wang et al. App-net: A hybrid neural network for encrypted mobile traffic classification
CN118734254A (en) Safety education and training monitoring and evaluation method and system based on operation mechanism optimization
D’Aniello et al. Effective quality-aware sensor data management
He et al. Learning informative representation for fairness-aware multivariate time-series forecasting: A group-based perspective
Chen et al. Trajectory-user linking via hierarchical spatio-temporal attention networks
Wang et al. Enhancing user interest modeling with knowledge-enriched itemsets for sequential recommendation
CN118921526A (en) Non-intelligent set top box double-management operation method and system
CN119312160A (en) Multi-source data information fusion method and system based on Internet of Things protocol
Fong et al. Gesture recognition from data streams of human motion sensor using accelerated PSO swarm search feature selection algorithm
CN119719670B (en) Distribution network data asset vulnerability identification method, device, system, and storage medium
Gopalakrishna et al. Relevance in cyber‐physical systems with humans in the loop
CN120074883A (en) Multi-level dynamic threat monitoring system based on deep learning
CN118898049B (en) Cross-modal data fusion method and system based on knowledge graph and deep learning
CN119961628A (en) Model hallucination detection method and device, storage medium and electronic device
Guo et al. Identification of perceptive users based on the graph convolutional network
Zhang et al. Network security situation assessment based on BKA and cross dual-channel
CN116756554B (en) Training method, device, equipment, medium and program product for alignment model
CN119782580B (en) A heterogeneous graph information mining method for online learning video recommendation
CN116521972B (en) Information prediction method, device, electronic equipment and storage medium
Shang et al. Triadic Closure-Heterogeneity-Harmony GCN for Link Prediction
CN118587780B (en) A jewelry inventory management authentication method and system based on multimodal data recognition
CN119760170B (en) Online learning video resource personalized recommendation method based on graph neural network
Wang et al. SA-LSPL: Sequence-Aware Long-and Short-Term Preference Learning for next POI recommendation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant