
CN114756714B - A method, device and storage medium for processing graph data - Google Patents


Info

Publication number
CN114756714B
CN114756714B (application CN202210288728.3A)
Authority
CN
China
Prior art keywords
partition
partitions
graph
nodes
graph data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210288728.3A
Other languages
Chinese (zh)
Other versions
CN114756714A (en)
Inventor
林文清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210288728.3A priority Critical patent/CN114756714B/en
Publication of CN114756714A publication Critical patent/CN114756714A/en
Application granted granted Critical
Publication of CN114756714B publication Critical patent/CN114756714B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/901Indexing; Data structures therefor; Storage structures
    • G06F16/9024Graphs; Linked lists
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70Game security or game management aspects
    • A63F13/79Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F13/795Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; for building a team; for providing a buddy list
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • G06F16/288Entity relationship models
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/55Details of game data or player data management
    • A63F2300/5546Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
    • A63F2300/5566Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history by matching opponents or finding partners to build a team, e.g. by skill level, geographical area, background, play style

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract


The present application discloses a method, device and storage medium for processing graph data, which can be applied to the field of maps. The initial graph data is divided into a first partition; a partition subgraph corresponding to the first partition is determined; each partition subgraph is segmented to obtain a subgraph segmentation partition corresponding to each partition subgraph; thumbnails are generated using the subgraph segmentation partitions as graph nodes; the thumbnails are segmented into second partitions, and the nodes in the initial graph data are partitioned based on the second partitions to obtain a target node set. Thus, the process of parallel graph data processing is realized. Since multiple partitions are used, different devices can be used for processing, that is, each device performs graph segmentation on its own data, which can be mapped and restored through thumbnails, making full use of the advantages of parallel processing and improving the processing efficiency of graph data.

Description

Graph data processing method and device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for processing graph data, and a storage medium.
Background
With the rapid development of internet technology, the demand for social networks has grown rapidly. A social network can be constructed from graph data, in which the points represent entities and the edges represent the relationships between pairs of entities; by processing the graph data, the social network can be associated or simplified.
In general, the graph data may be processed using the METIS algorithm, i.e., by continuously compressing the graph structure, such that two nodes may be combined into a new node. Thus, a smaller scale graph is obtained based on the new nodes.
However, in a large-scale data processing scenario, the METIS algorithm must load all of the data into memory, which incurs high computational cost and long running times and thus limits the efficiency of graph data processing.
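The coarsening idea mentioned above can be illustrated with a minimal sketch. This is our own illustration, not the METIS implementation: one pass greedily matches each unmatched node with an unmatched neighbor and contracts the pair into a single node, producing a smaller graph.

```python
# Hypothetical sketch of METIS-style coarsening (not the actual METIS code):
# repeatedly merge a matched pair of adjacent nodes into one new node.
def coarsen_once(adj):
    """adj: dict mapping node -> set of neighbors (undirected graph).
    Greedily match each unmatched node with one unmatched neighbor and
    contract the pair. Returns (new_adj, mapping old node -> merged node)."""
    mapping = {}
    for u in sorted(adj):
        if u in mapping:
            continue
        mapping[u] = u                      # u represents its own group
        for v in sorted(adj[u]):
            if v not in mapping:            # first unmatched neighbor
                mapping[v] = u              # contract v into u
                break
    new_adj = {}
    for u, nbrs in adj.items():
        cu = mapping[u]
        new_adj.setdefault(cu, set())
        for v in nbrs:
            cv = mapping[v]
            if cu != cv:                    # drop self-loops created by merging
                new_adj[cu].add(cv)
                new_adj.setdefault(cv, set()).add(cu)
    return new_adj, mapping

# A 4-node path 0-1-2-3 coarsens to 2 merged nodes.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
coarse, mapping = coarsen_once(path)
```

Repeating such passes yields the "smaller scale graph" the background refers to, at the cost of keeping the whole graph in memory.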
Disclosure of Invention
In view of the above, the present application provides a method for processing graph data, which can effectively improve the efficiency of graph data processing.
The first aspect of the present application provides a method for processing graph data, which can be applied to a system or a program including a processing function of the graph data in a terminal device, and specifically includes:
acquiring initial graph data and dividing the initial graph data into K first partitions, wherein the data volume corresponding to each first partition is close, each first partition corresponds to a different execution object, and K is a positive integer with K ≥ 2;
Determining a partition subgraph corresponding to the first partition;
performing graph segmentation on each partition subgraph to obtain T sub-graph segmentation partitions corresponding to each partition subgraph, wherein T is a positive integer and T ≥ 2;
taking the K×T sub-graph segmentation partitions as graph nodes to generate a thumbnail corresponding to the initial graph data;
dividing the thumbnail into K second partitions, and partitioning the nodes in the initial graph data based on the second partitions to obtain a target node set, wherein the target node set is used for executing a recommendation task, and each second partition corresponds to a different execution object.
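The five claimed steps can be sketched end to end. This is a hedged, simplified illustration under our own assumptions: the random first split and the round-robin sub-splits stand in for real graph partitioners, and all names (`pipeline`, `part_of`, and so on) are ours, not the patent's.

```python
import random

# Hedged sketch of the claimed pipeline: (1) split nodes into K first
# partitions, (2) split each partition's induced subgraph into T sub-parts,
# (3) build a "thumbnail" whose graph nodes are the K*T sub-parts,
# (4) split the thumbnail into K second partitions, (5) map thumbnail parts
# back to sets of original nodes.
def pipeline(adj, K=2, T=2, seed=0):
    rng = random.Random(seed)
    nodes = sorted(adj)
    rng.shuffle(nodes)
    # (1) K first partitions of roughly equal size (random placeholder)
    first = [nodes[i::K] for i in range(K)]
    # (2) T sub-parts per first partition (placeholder: round-robin; a real
    # system would run a local graph partitioner on each partition subgraph)
    parts = [p[t::T] for p in first for t in range(T)]
    part_of = {v: i for i, part in enumerate(parts) for v in part}
    # (3) thumbnail: the K*T parts are nodes; weight = cross-part edge count
    thumb = {}
    for u in adj:
        for v in adj[u]:
            if u < v and part_of[u] != part_of[v]:
                key = (min(part_of[u], part_of[v]), max(part_of[u], part_of[v]))
                thumb[key] = thumb.get(key, 0) + 1
    # (4) K second partitions of the thumbnail (placeholder grouping)
    second = [range(k, len(parts), K) for k in range(K)]
    # (5) map thumbnail parts back to target node sets
    target = [sorted(v for i in grp for v in parts[i]) for grp in second]
    return target, thumb

demo_adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
target_sets, thumbnail = pipeline(demo_adj)
```

The point of the structure, as the patent argues, is that steps (2) and (3) can run on different execution objects per first partition, with only the small thumbnail processed centrally.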
Optionally, in some possible implementations of the present application, the acquiring initial graph data and dividing the initial graph data into K first partitions includes:
acquiring the initial graph data;
Determining neighbor nodes corresponding to each node in the initial graph data to generate a first tuple;
generating R random numbers, and combining the random numbers with the first tuple to generate a second tuple, wherein 0 ≤ R ≤ K;
and aggregating based on the random numbers in the second tuple to obtain the K first partitions.
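A minimal sketch of this tuple-based random split, with all names our own: each node is paired with its neighbor list (first tuple), a random partition id is attached (second tuple), and aggregation by that id yields the K first partitions.

```python
import random

# Hedged sketch of the claimed initial split (names are ours, not the patent's).
def random_first_partitions(adj, K, seed=42):
    rng = random.Random(seed)
    # first tuple: (node, neighbor list)
    first_tuples = [(v, sorted(adj[v])) for v in sorted(adj)]
    # second tuple: attach a random partition id in [0, K)
    second_tuples = [(rng.randrange(K), t) for t in first_tuples]
    # aggregate by the random id to obtain K first partitions
    partitions = [[] for _ in range(K)]
    for r, t in second_tuples:
        partitions[r].append(t)
    return partitions

parts = random_first_partitions({0: {1}, 1: {0, 2}, 2: {1}}, K=2)
```

Because assignment is random, the expected data volume per partition is close, matching the "close data volume" condition in the claim.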
Optionally, in some possible implementations of the present application, the method further includes:
determining processing capability information corresponding to an execution object;
Determining unit processing parameters corresponding to the execution objects based on the number of the neighbor nodes and the processing capability information;
and determining the value of K according to the ratio of the data volume of the initial graph data to the unit processing parameter.
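As an arithmetic illustration of this ratio, consider a hypothetical `choose_k` helper. The patent gives no concrete formula, so the unit processing parameter below is simply assumed to be per-object capability multiplied by a time budget.

```python
import math

# Hedged sketch: choose K as the ratio of total data volume to the
# unit processing parameter of one execution object, rounded up.
def choose_k(total_edges, edges_per_second, time_budget_s):
    # unit processing parameter: edges one execution object can handle
    # within the time budget (our assumption, not the patent's formula)
    unit = edges_per_second * time_budget_s
    return max(2, math.ceil(total_edges / unit))

# 10M edges, 50k edges/s per object, 60 s budget -> unit = 3M -> K = 4
k = choose_k(total_edges=10_000_000, edges_per_second=50_000, time_budget_s=60)
```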
Optionally, in some possible implementations of the present application, the generating of the thumbnail corresponding to the initial graph data by using the K×T sub-graph segmentation partitions as graph nodes includes:
taking the K×T sub-graph segmentation partitions as graph nodes;
configuring weight information based on the number of nodes in each sub-graph segmentation partition;
determining, among the graph nodes, the connecting edges that cross sub-graph segmentation partitions, to serve as thumbnail edges;
and determining the number of the thumbnail edges based on the weight information to generate the thumbnail corresponding to the initial graph data.
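These steps amount to building a weighted quotient graph. A hedged sketch under our own naming (`build_thumbnail`, `part_of` are not the patent's terms): each partition becomes one node weighted by its node count, and every original edge whose endpoints lie in different partitions contributes to a thumbnail edge.

```python
# Hedged sketch of thumbnail construction: each sub-graph segmentation
# partition becomes one graph node weighted by its node count; each original
# edge crossing two partitions increments the corresponding thumbnail edge.
def build_thumbnail(adj, part_of):
    """adj: node -> set of neighbors; part_of: node -> partition id."""
    weights = {}
    for v, p in part_of.items():
        weights[p] = weights.get(p, 0) + 1          # node-count weight
    edges = {}
    for u in adj:
        for v in adj[u]:
            if u < v and part_of[u] != part_of[v]:  # crossing edge
                key = tuple(sorted((part_of[u], part_of[v])))
                edges[key] = edges.get(key, 0) + 1
    return weights, edges

# Path 0-1-2-3 with nodes {0,1} in partition 0 and {2,3} in partition 1:
# one crossing edge (1,2) becomes the single thumbnail edge (0,1).
w, e = build_thumbnail({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}},
                       {0: 0, 1: 0, 2: 1, 3: 1})
```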
Optionally, in some possible implementations of the present application, the determining, based on the weight information, the number of the thumbnail edges to generate the thumbnail corresponding to the initial graph data includes:
Obtaining average node degrees corresponding to the thumbnail to determine a thumbnail coefficient;
Performing node sparse processing on each graph node based on the thumbnail coefficients, and extracting edges subjected to the sparse processing to obtain processed edges;
sorting based on the weight information of the processing edges to determine target edges;
and generating the thumbnail corresponding to the initial graph data according to the target edges.
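The sparsification step can be sketched as follows. This is our own assumption about the mechanics: the thumbnail coefficient is derived from the average degree, each node keeps only its heaviest incident edges, and the surviving weight-sorted edges become the target edges.

```python
import math

# Hedged sketch of edge sparsification: derive a per-node budget from the
# average degree scaled by a coefficient, then keep only each node's
# heaviest incident edges (sorted by weight).
def sparsify(edges, num_nodes, coeff=1.0):
    """edges: dict (u, v) -> weight. Keep, for each node, at most
    ceil(coeff * average_degree) of its heaviest incident edges."""
    avg_deg = 2 * len(edges) / max(1, num_nodes)
    keep_per_node = max(1, math.ceil(coeff * avg_deg))
    by_node = {}
    for (u, v), w in edges.items():
        by_node.setdefault(u, []).append(((u, v), w))
        by_node.setdefault(v, []).append(((u, v), w))
    kept = set()
    for inc in by_node.values():
        inc.sort(key=lambda e: -e[1])               # heaviest edges first
        kept.update(key for key, _ in inc[:keep_per_node])
    return {key: edges[key] for key in kept}

# With coeff=0.5 each node keeps its single heaviest edge; the light edge
# (0, 2) is dropped because neither endpoint ranks it first.
pruned = sparsify({(0, 1): 5, (0, 2): 1, (0, 3): 1, (1, 2): 4},
                  num_nodes=4, coeff=0.5)
```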
Optionally, in some possible implementations of the present application, the method further includes:
determining a partition subgraph corresponding to the second partition;
carrying out graph segmentation on the sub-graph corresponding to each second partition to obtain T sub-graph segmentation partitions corresponding to the second partitions;
taking the K×T sub-graph segmentation partitions corresponding to the second partitions as graph nodes to generate a thumbnail corresponding to the second partitions;
Dividing the thumbnail corresponding to the second partition into K third partitions, and partitioning nodes in the initial graph data based on the third partitions to obtain a circulating node set.
Optionally, in some possible implementations of the present application, the method further includes:
Obtaining the number of the dividing edges corresponding to the circulating node set;
and if the number of the dividing edges has converged, executing a recommendation task based on the circulating node set.
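The iterate-until-converged control flow can be sketched generically. The `run_round` callback below is hypothetical, standing in for one full segmentation round; iteration stops once the cut-edge (dividing-edge) count stops improving.

```python
# Hedged sketch of the convergence loop: repeat the partitioning round and
# stop once the number of cut (dividing) edges no longer improves.
def iterate_until_converged(run_round, max_rounds=10, tol=0):
    """run_round(prev_partition) -> (partition, cut_edges); a hypothetical
    callback standing in for one full segmentation round."""
    partition, best_cut = run_round(None)
    for _ in range(max_rounds - 1):
        nxt, cut = run_round(partition)
        if best_cut - cut <= tol:       # converged: no further improvement
            break
        partition, best_cut = nxt, cut
    return partition, best_cut

# Toy rounds whose cut-edge counts shrink then flatten: 9, 6, 5, 5, ...
cuts = iter([9, 6, 5, 5, 5])
result, final_cut = iterate_until_converged(lambda p: ("part", next(cuts)))
```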
A second aspect of the present application provides a processing apparatus for graph data, including:
The acquisition unit is used for acquiring initial graph data and dividing the initial graph data into K first partitions, wherein the data volume corresponding to each first partition is close, and K is a positive integer with K ≥ 2;
The determining unit is used for determining a partition subgraph corresponding to the first partition;
The processing unit is used for carrying out graph segmentation on each partition subgraph so as to obtain T subgraph segmentation partitions corresponding to each partition subgraph, wherein T is more than or equal to 2, and T is a positive integer;
The processing unit is further configured to use the K×T sub-graph segmentation partitions as graph nodes, so as to generate a thumbnail corresponding to the initial graph data;
The processing unit is further configured to partition the thumbnail into K second partitions, and partition nodes in the initial graph data based on the second partitions, so as to obtain a target node set, where the target node set is used for recommending execution of a task, and each second partition corresponds to a different execution object.
Optionally, in some possible implementations of the present application, the acquiring unit is specifically configured to acquire the initial graph data;
the acquiring unit is specifically configured to determine a neighboring node corresponding to each node in the initial graph data, so as to generate a first tuple;
The acquisition unit is specifically used for generating R random numbers and combining the random numbers with the first tuple to generate a second tuple, wherein 0 ≤ R ≤ K;
the acquiring unit is specifically configured to aggregate based on the random numbers in the second tuple, so as to obtain K first partitions.
Optionally, in some possible implementations of the present application, the acquiring unit is specifically configured to determine processing capability information corresponding to an execution object;
The acquisition unit is specifically configured to determine a unit processing parameter corresponding to the execution object based on the number of the neighboring nodes and the processing capability information;
The acquisition unit is specifically configured to determine the value of K according to the ratio of the data volume of the initial graph data to the unit processing parameter.
Optionally, in some possible implementations of the present application, the processing unit is specifically configured to use the K×T sub-graph segmentation partitions as graph nodes;
The processing unit is specifically configured to configure weight information based on the number of nodes of the sub-graph partition in the graph nodes;
the processing unit is specifically configured to determine, among the graph nodes, the connecting edges that cross sub-graph segmentation partitions, as thumbnail edges;
The processing unit is specifically configured to determine, based on the weight information, the number of thumbnail edges, so as to generate the thumbnail corresponding to the initial graph data.
Optionally, in some possible implementation manners of the present application, the processing unit is specifically configured to obtain an average node degree corresponding to the thumbnail, so as to determine a thumbnail coefficient;
The processing unit is specifically configured to perform node sparse processing on each graph node based on the thumbnail coefficients, and extract edges after the sparse processing to obtain processed edges;
The processing unit is specifically configured to sort the processing edges based on weight information of the processing edges, so as to determine target edges;
The processing unit is specifically configured to generate the thumbnail corresponding to the initial graph data according to the target edge.
Optionally, in some possible implementations of the present application, the processing unit is specifically configured to determine a partition sub-graph corresponding to the second partition;
The processing unit is specifically configured to perform graph segmentation on the partition subgraph corresponding to each second partition, so as to obtain T subgraph segmentation partitions corresponding to the second partitions;
the processing unit is specifically configured to use the K×T sub-graph segmentation partitions corresponding to the second partitions as graph nodes, so as to generate a thumbnail corresponding to the second partitions;
the processing unit is specifically configured to partition the thumbnail corresponding to the second partition into K third partitions, and partition the nodes in the initial graph data based on the third partitions, so as to obtain a circulating node set.
Optionally, in some possible implementation manners of the present application, the processing unit is specifically configured to obtain the number of partition edges corresponding to the circulating node set;
The processing unit is specifically configured to execute a recommendation task based on the circulating node set if the number of the dividing edges has converged.
A third aspect of the application provides a computer device comprising a memory, a processor, and a bus system, wherein the memory is configured to store program code and the processor is configured to execute, according to instructions in the program code, the method for processing graph data of the first aspect or any implementation of the first aspect.
A fourth aspect of the present application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of processing graph data of the first aspect or any one of the first aspects.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method of processing graph data provided in the above-mentioned first aspect or various alternative implementations of the first aspect.
From the above technical solutions, the embodiment of the present application has the following advantages:
The method includes: acquiring initial graph data and dividing it into K first partitions, wherein the data volume corresponding to each first partition is close and each first partition corresponds to a different execution object; determining the partition subgraphs corresponding to the first partitions; performing graph segmentation on each partition subgraph to obtain T sub-graph segmentation partitions corresponding to each partition subgraph; taking the K×T sub-graph segmentation partitions as graph nodes to generate a thumbnail corresponding to the initial graph data; dividing the thumbnail into K second partitions; and partitioning the nodes in the initial graph data based on the second partitions to obtain a target node set, which is used for executing a recommendation task. A parallel graph data processing procedure is thus realized: since the multiple first partitions can be processed by different devices, each device performs graph segmentation on its own data and the results can be mapped back through the thumbnail, so the advantages of parallel processing are fully utilized and the processing efficiency of graph data is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions of the prior art, the following description will briefly explain the drawings that can be used in the embodiments or the prior art, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a network architecture diagram of the operation of the graph data processing system;
FIG. 2 is a flow chart of a graph data process according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for processing graph data according to an embodiment of the present application;
Fig. 4 is a schematic view of a scenario of a method for processing graph data according to an embodiment of the present application;
FIG. 5 is a schematic view of a scenario illustrating another method for processing graph data according to an embodiment of the present application;
FIG. 6 is a flowchart of another method for processing graph data according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a processing device for graph data according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application;
FIG. 10A is a diagram illustrating a data sharing system according to an embodiment of the present application;
FIG. 10B is a block diagram of a blockchain of an embodiment of the present application;
Fig. 10C is a schematic diagram of blockchain node input information provided in an embodiment of the present application.
Detailed Description
The embodiment of the application provides a processing method of graph data and a related device, which can be applied to a system or program containing a graph data processing function in a terminal device. Initial graph data is acquired and divided into K first partitions, wherein the data volume corresponding to each first partition is close and each first partition corresponds to a different execution object. The partition subgraphs corresponding to the first partitions are then determined, and graph segmentation is performed on each partition subgraph to obtain T sub-graph segmentation partitions corresponding to each partition subgraph. The K×T sub-graph segmentation partitions are further used as graph nodes to generate a thumbnail corresponding to the initial graph data; the thumbnail is then divided into K second partitions, and the nodes in the initial graph data are partitioned based on the second partitions to obtain a target node set, wherein the target node set is used for executing a recommendation task. A parallel graph data processing procedure is thus realized: since the multiple first partitions can be processed by different devices, each device performs graph segmentation on its own data and the results can be mapped back through the thumbnail, so the advantages of parallel processing are fully utilized and the processing efficiency of graph data is improved.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "includes" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
First, some terms that may appear in the embodiments of the present application will be explained.
Graph partitioning divides the node set V of a graph G = (V, E) into k equal-sized, non-overlapping subsets, denoted V_1, V_2, ..., V_k, where k is a user-defined hyper-parameter, V is the node set, and E is the edge set. Thus, for any 1 ≤ i < j ≤ k, V_i ∩ V_j = ∅, and V_1 ∪ V_2 ∪ ... ∪ V_k = V.
It should be understood that the method for processing graph data provided in the present application may be applied to a system or program that includes a graph data processing function in a terminal device, for example a social application. Specifically, the graph data processing system may operate in the network architecture shown in FIG. 1, which is a network architecture diagram of the operation of the graph data processing system. The system can provide graph data processing for multiple information sources; that is, a request operation on the terminal side triggers the server to process the graph data corresponding to the terminal and return the corresponding processing result. It may be understood that FIG. 1 shows various terminal devices, which may be computer devices; in an actual scenario, more or fewer kinds of terminal devices may participate in the graph data processing procedure, and the specific number and kinds are not limited herein. In addition, FIG. 1 shows one server, but in an actual scenario multiple servers may participate, with the specific number determined by the actual scenario.
In this embodiment, the server may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminals and servers may be directly or indirectly connected by wired or wireless communication, and the terminals and servers may be connected to form a blockchain network, which is not limited herein.
It can be understood that the graph data processing system may run on a personal mobile terminal (for example, as part of a social application), on a server, or on a third-party device that provides graph data processing to obtain processing results for an information source. The graph data processing system may run in such devices in the form of a program, as a system component, or as a cloud service program; the specific operation mode depends on the actual scenario and is not limited herein.
With the rapid development of internet technology, the demand for social networks has grown rapidly. A social network can be constructed from graph data, in which the points represent entities and the edges represent the relationships between pairs of entities; by processing the graph data, the social network can be associated or simplified.
In general, the graph data may be processed using the METIS algorithm, i.e., by continuously compressing the graph structure, such that two nodes may be combined into a new node. Thus, a smaller scale graph is obtained based on the new nodes.
However, in a large-scale data processing scenario, the METIS algorithm must load all of the data into memory, which incurs high computational cost and long running times and thus limits the efficiency of graph data processing.
In order to solve the above problems, the present application provides a method for processing graph data, applied to the graph data processing flow framework shown in FIG. 2, which is provided in an embodiment of the present application. As shown in FIG. 2, a user generates a corresponding social relationship network at the server through interactive operations on a terminal, and graph data processing is performed on the social relationship network to determine segmented sets, so as to make targeted object recommendations.
In the process of processing the graph data, a divide-and-conquer and iteration strategy is used: the data is randomly distributed across different machines, and each machine performs graph segmentation on its own data. A communication graph is then constructed from the results of all machines, and by segmenting the communication graph, all of the graph data can be partitioned again, yielding the result of the current round. The final result is obtained through multiple rounds of iterative computation.
It can be understood that the method provided by the present application may be implemented as a program, serving as processing logic in a hardware system, or as a graph data processing apparatus in which the processing logic is realized in an integrated or external manner. The graph data processing apparatus acquires initial graph data and divides it into K first partitions, wherein the data volume corresponding to each first partition is close and each first partition corresponds to a different execution object. It then determines the partition subgraphs corresponding to the first partitions and performs graph segmentation on each partition subgraph to obtain T sub-graph segmentation partitions corresponding to each partition subgraph. The K×T sub-graph segmentation partitions are further used as graph nodes to generate a thumbnail corresponding to the initial graph data; the thumbnail is then divided into K second partitions, and the nodes in the initial graph data are partitioned based on the second partitions to obtain a target node set, which is used for executing a recommendation task. A parallel graph data processing procedure is thus realized: since the multiple first partitions can be processed by different devices, each device performs graph segmentation on its own data and the results can be mapped back through the thumbnail, so the advantages of parallel processing are fully utilized and the processing efficiency of graph data is improved.
With reference to fig. 3, fig. 3 is a flowchart of a method for processing graph data according to an embodiment of the present application. The method for processing graph data may be executed by a server or a terminal, and the embodiment of the present application includes at least the following steps:
301. Acquire initial graph data, and divide the initial graph data into K first partitions.
In this embodiment, the initial graph data may be a social network of users; the complex social network can be segmented (simplified) to obtain targeted object recommendations, such as friend recommendations or mentor-apprentice recommendations. The distributed graph partitioning algorithm is applied to a game's friend recommendation system as a recall method for friend recommendation. Specifically, in the mentor-apprentice recommendation of a battle game, the game's social network may first be segmented with the distributed graph partitioning algorithm. Fig. 4 is a schematic view of a scenario of the graph data processing method according to an embodiment of the present application, showing a mentor recommendation made to a user, where player 1, player 2 and player 3 are obtained by the graph data processing method of this embodiment. Each player in the game thus belongs to one partition. For each player, 50 players may be randomly selected from the partition in which the player is located, and ranking-based recommendation is then performed. Selecting within a partition can be compared with selecting from the entire user set, and experimental results show a large improvement.
Specifically, the first partitions are sub-partitions whose data volumes are approximately equal, each first partition corresponds to a different execution object, K >= 2, and K is a positive integer, so that distributed processing can be performed. That is, given graph data G = (V, E), where V is the node set and E is the edge set, the initial partitioning (of the initial graph data) divides V into k subsets, denoted V_1, V_2, ..., V_k, such that: (1) V_1 ∪ V_2 ∪ ... ∪ V_k = V; (2) V_i ∩ V_j = ∅ for any i ≠ j; (3) |V_i| = (1 + α)|V|/k for some -1 < α < 1, i.e., the sizes of the subsets are relatively close.
Optionally, for the segmentation of the initial graph data, the initial graph data may first be obtained; the neighbor nodes corresponding to each node in the initial graph data are then determined to generate a first tuple; a random number r with 0 <= r < K is generated and combined with the first tuple to generate a second tuple; aggregation is then performed based on the random numbers in the second tuples to obtain the K first partitions. This reduces computational complexity and avoids a chicken-and-egg problem. That is, the graph data G = (V, E) is stored as an adjacency list: each node u stores a neighbor tuple <u, nbr(u)> (the first tuple), where nbr(u) is the set of neighbor nodes of u in G. In the distributed computation, when the neighbor tuple <u, nbr(u)> of each node u is processed, a random number generator produces a number r satisfying 0 <= r < k. The tuple is thus extended to a new tuple <r, <u, nbr(u)>> (the second tuple), whose first element is the random number r and whose second element is the neighbor tuple of node u. The new tuples are then aggregated by their first element (e.g., using the Group By function in Spark), so that nodes with the same random number, together with their neighbor tuples, are gathered onto the same partition, forming the input partitions of the initial distributed graph partitioning algorithm.
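A minimal single-machine sketch of this random initial partitioning (plain Python standing in for the distributed Spark Group By step described above; the function and variable names are illustrative, not from the original):

```python
import random
from collections import defaultdict

def random_initial_partition(adj, k, seed=None):
    """Assign each node's adjacency tuple <u, nbr(u)> a random
    partition number r with 0 <= r < k, then aggregate tuples that
    share the same r (the Group By step)."""
    rng = random.Random(seed)
    partitions = defaultdict(dict)  # r -> {u: nbr(u)}
    for u, nbrs in adj.items():
        r = rng.randrange(k)        # the random number r, 0 <= r < k
        partitions[r][u] = nbrs
    return partitions

# Toy graph stored as an adjacency list.
adj = {1: {2, 3}, 2: {1}, 3: {1, 4}, 4: {3}, 5: {6}, 6: {5}}
parts = random_initial_partition(adj, k=2, seed=42)
# Every node lands in exactly one of the k partitions.
assert sum(len(p) for p in parts.values()) == len(adj)
```

Because the assignment is uniform at random, the resulting partitions are similar in size in expectation, matching the balance property described above.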
It will be appreciated that, owing to the random allocation, the resulting k initial partitions are relatively similar in size. Moreover, because an aggregation operation is used, all data with the same partition number converge on the same machine.
In addition, the setting of the parameter k can take into account the relation between machine memory size, sub-graph partitioning performance, and data volume: determine the processing capability information of an execution object, determine the unit processing parameter of the execution object based on the number of neighbor nodes and the processing capability information, and determine the value of K from the ratio of the data volume of the initial graph data to the unit processing parameter. That is, assuming the size of a subgraph that can be handled well on one machine is y (the unit processing parameter) and the total data volume is x (the data volume of the initial graph data), then k = x/y. The value of y should take into account the relation between single-machine graph partitioning and machine memory: letting m denote the machine memory size (the processing capability information), the data volume a single-machine graph partition can process is y = m/d, where d is the average degree of the subgraph (i.e., the average number of neighbors of a node). Hence k = dx/m.
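The sizing rule above can be written down directly; the sample values for x, d and m below are assumptions chosen purely for illustration:

```python
def choose_k(total_nodes_x, avg_degree_d, memory_m):
    """k = d * x / m: one partition's subgraph (y = m/d nodes at
    average degree d) should fit in one machine's memory m."""
    y = memory_m / avg_degree_d          # nodes one machine can hold
    return max(1, round(total_nodes_x / y))

# e.g. 1e9 nodes, average degree 20, memory for 1e8 edge endpoints
k = choose_k(1e9, 20, 1e8)
assert k == 200                          # k = d*x/m = 20 * 1e9 / 1e8
```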
It can be appreciated that the initial graph partitioning can use a label propagation method to compute an initial partition for each node, thereby improving the partitioning efficiency of the initial graph data.
302. Determine the partition subgraph corresponding to each first partition.
In this embodiment, for the data of the same partition V_i (a first partition), i.e., the new tuples <i, <u, nbr(u)>> sharing the same first element, the subgraph G_i = (V_i, E_i) on that partition is constructed. Specifically, the first elements of the neighbor tuples are extracted to form the node set V_i; then, for the second element of each neighbor tuple, only the neighbor nodes v that lie in V_i, i.e., v ∈ nbr(u) and v ∈ V_i, are retained. Each retained neighbor node v forms an edge (u, v) with node u, which is placed into the set E_i, yielding the partition subgraph.
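The subgraph construction above can be sketched as follows (an illustrative helper, assuming the partition's data are held as a dict of neighbor tuples):

```python
def build_partition_subgraph(part):
    """From the tuples <u, nbr(u)> gathered on one partition, keep
    only edges whose both endpoints lie in that partition:
    V_i = nodes on the partition, E_i = {(u, v) : v in nbr(u), v in V_i}."""
    v_i = set(part)                      # node set V_i
    e_i = set()                          # edge set E_i
    for u, nbrs in part.items():
        for v in nbrs:
            if v in v_i:                 # drop neighbors outside V_i
                e_i.add((min(u, v), max(u, v)))
    return v_i, e_i

part = {1: {2, 3, 9}, 2: {1}, 3: {1, 9}}   # node 9 lives on another partition
v_i, e_i = build_partition_subgraph(part)
assert v_i == {1, 2, 3} and e_i == {(1, 2), (1, 3)}
```

Edges to nodes stored on other partitions (node 9 above) are simply dropped; they reappear later as thumbnail edges between sub-graph partitions.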
303. Perform graph partitioning on each partition subgraph to obtain T sub-graph partitions corresponding to each partition subgraph.
In this embodiment, T >= 2 and T is a positive integer. That is, given a subgraph G_i = (V_i, E_i), G_i may be partitioned into T subsets, denoted V_i^1, V_i^2, ..., V_i^T, satisfying: (1) V_i^1 ∪ V_i^2 ∪ ... ∪ V_i^T = V_i; (2) V_i^j ∩ V_i^j' = ∅ for any j ≠ j'; (3) |V_i^j| = (1 + α)|V_i|/T for some -1 < α < 1; (4) the number of edges crossing different subsets is minimized. The METIS algorithm can be used for sub-graph partitioning on a single machine. When all K subgraphs have completed their respective partitioning, K*T subsets (sub-graph partitions) are generated in total.
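Conditions (1)-(4) can be checked mechanically for any candidate partitioning; the sketch below does not reimplement METIS, it only verifies the cover/disjointness/balance constraints and reports the objective of (4), the number of crossing edges (all names are illustrative):

```python
def check_partition(nodes, edges, subsets, alpha=0.2):
    """Verify conditions (1)-(3) for a candidate partition and
    return the objective of (4): the number of crossing edges."""
    # (1) the subsets cover V_i; (2) they are pairwise disjoint
    seen = [u for s in subsets for u in s]
    assert set(seen) == set(nodes) and len(seen) == len(nodes)
    # (3) each subset stays within (1 + alpha) * |V_i| / T
    bound = (1 + alpha) * len(nodes) / len(subsets)
    assert all(len(s) <= bound for s in subsets)
    # (4) objective: edges whose endpoints fall in different subsets
    owner = {u: i for i, s in enumerate(subsets) for u in s}
    return sum(1 for u, v in edges if owner[u] != owner[v])

cut = check_partition([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)],
                      [{1, 2}, {3, 4}])
assert cut == 1        # only edge (2, 3) crosses the two subsets
```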
In addition, the setting of the parameter T can take into account the final total number of subsets: the value of K*T cannot be too large, because the size of the subsequent thumbnail is related to K*T; if T is too large, the thumbnail cannot be processed on a single machine. For the thumbnail to fit in one machine's memory, assuming the memory size is m and the average node degree of the thumbnail is h, we require K*T*h = m, i.e., T = m/(K*h).
304. Use the K*T sub-graph partitions as graph nodes to generate the thumbnail corresponding to the initial graph data.
In this embodiment, a thumbnail is constructed based on the K*T subsets (sub-graph partitions), so that the overall structure of the graph can be depicted, much as pixels depict an image.
Specifically, the K*T sub-graph partitions may first be used as graph nodes: the j-th subset V_i^j of each subgraph G_i forms a node p_{i,j} of the thumbnail. Weight information is then configured based on the number of nodes of each sub-graph partition: the weight of node p_{i,j} is set to the number of nodes in the subset, i.e., w(p_{i,j}) = |V_i^j|. Connection nodes whose connecting edges cross sub-graph partitions are determined as thumbnail edges: for any two nodes u ∈ V_i^j and v ∈ V_{i'}^{j'}, if there is an edge (u, v) ∈ E crossing the subsets, an edge (p_{i,j}, p_{i',j'}) of the thumbnail is constructed. The thumbnail edge weights are determined from the number of crossing edges: the weight of edge (p_{i,j}, p_{i',j'}) is set to the number of edges crossing the two different subsets, w'(p_{i,j}, p_{i',j'}) = |{(u, v) ∈ E : u ∈ V_i^j, v ∈ V_{i'}^{j'}}|, thereby generating the thumbnail corresponding to the initial graph data.
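The thumbnail construction above, with node weights |V_i^j| and edge weights equal to crossing-edge counts, can be sketched as follows (illustrative names; subsets are keyed by their thumbnail node ID):

```python
from collections import Counter

def build_thumbnail(subsets, edges):
    """Each subset V_i^j becomes thumbnail node p_{i,j} with weight
    w(p) = |V_i^j|; each original edge crossing two subsets adds 1
    to the weight w'(p, p') of the corresponding thumbnail edge."""
    owner = {u: p for p, s in subsets.items() for u in s}
    node_w = {p: len(s) for p, s in subsets.items()}
    edge_w = Counter()
    for u, v in edges:
        pu, pv = owner[u], owner[v]
        if pu != pv:                       # only crossing edges count
            edge_w[tuple(sorted((pu, pv)))] += 1
    return node_w, edge_w

subsets = {"p00": {1, 2}, "p01": {3}, "p10": {4, 5}}
edges = [(1, 2), (2, 3), (3, 4), (1, 4)]
node_w, edge_w = build_thumbnail(subsets, edges)
assert node_w == {"p00": 2, "p01": 1, "p10": 2}
assert edge_w == {("p00", "p01"): 1, ("p01", "p10"): 1, ("p00", "p10"): 1}
```

Edge (1, 2) is internal to subset p00 and so contributes nothing; the three crossing edges each produce a thumbnail edge of weight 1.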
When the quality of the graph cut is low, the number of thumbnail edges may be relatively large, which can make the thumbnail exceed the machine memory and prevent efficient processing. To solve this problem, sparsification may be employed to reduce the number of thumbnail edges while preserving the connectivity of the graph. First, the average node degree of the thumbnail is obtained to determine the sparsification coefficient: for each node p_{i,j}, the h/2 neighbors with the largest edge weights are retained, so that each node keeps h/2 edges in this step. Node sparsification is performed on each graph node based on this coefficient, and the sparsified edges are extracted to obtain the processed edges; for example, the K*T*h/2 edges with the largest weights overall are taken, and the edges from this step are merged into a new edge set, yielding the final sparse thumbnail edge set. The processed edges are then sorted by their weight information to determine the target edges, and the thumbnail corresponding to the initial graph data is generated from the target edges.
Further, one way to take the K*T*h/2 edges with the largest weights in the above step is to sort all edges in descending order of weight and take the first K*T*h/2 after sorting. However, this approach has O(M log M) computational complexity; when the thumbnail has a relatively large number of edges, the running time is still very long. To improve efficiency, sampling may be used. Specifically, the total number of thumbnail edges is first counted and denoted M, and the retention ratio is computed as q = K*T*h/(2M). A fixed number of edges, e.g., 10000, is randomly sampled from all edges; the sampled edges are sorted in descending order of weight, and the minimum weight among the top 10000q sampled edges, i.e., the weight of the edge at position 10000q after sorting, is taken and denoted w'. Every thumbnail edge with weight greater than or equal to w' is then retained to form the result set of the algorithm. This method scans all edges only once, so it has linear computational complexity and higher running efficiency.
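The sampling-based threshold estimation can be sketched as follows (a simplified illustration of the idea, with a tiny sample size and illustrative names; on a real thumbnail the sample would be the 10000 edges described above):

```python
import random

def sparsify_by_sampling(edge_weights, keep, sample_size=10000, seed=0):
    """Linear-time sparsification: estimate the weight threshold w'
    from a fixed-size random sample instead of sorting all M edges,
    then retain every edge with weight >= w'."""
    edges = list(edge_weights.items())
    m = len(edges)
    q = keep / m                                   # retention ratio q
    sample = random.Random(seed).choices(edges, k=min(sample_size, m))
    ws = sorted((w for _, w in sample), reverse=True)
    idx = max(0, min(len(ws) - 1, int(len(ws) * q) - 1))
    w_threshold = ws[idx]          # weight at position ~ sample_size * q
    return {e: w for e, w in edge_weights.items() if w >= w_threshold}

weights = {("a", "b"): 9, ("b", "c"): 7, ("c", "d"): 2, ("a", "d"): 1}
kept = sparsify_by_sampling(weights, keep=2, sample_size=4)
assert ("a", "b") in kept          # the heaviest edge always survives
```

Only one full pass over the edges is needed at the end, which is what gives the linear complexity claimed above.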
305. Divide the thumbnail into K second partitions, and partition the nodes in the initial graph data based on the second partitions to obtain a target node set.
In this embodiment, the target node set is used for executing a recommendation task, where the recommendation task selects objects from the target node set to recommend, for example, mentor-apprentice recommendation in a game. The distributed graph partitioning algorithm of this embodiment may first segment the game's social network; since each player in the game belongs to a partition, for each player 50 players may be randomly selected from the partition where that player is located, and ranking-based recommendation of mentor-apprentice relationships is then performed. That is, the recommendation task selects, from the 50 randomly chosen players, objects having a social association with the object initiating the task. This embodiment is not limited to mentor-apprentice recommendation; it also applies to object recommendation scenarios associated with a social network, such as friend recommendation or classmate recommendation.
It will be appreciated that the recommendation task is not limited to object recommendation based on social networks; it may also be based on transaction relationship networks, interaction relationship networks, etc. Relationship networks of these types are partitioned by the same process as the social network, and no limitation is imposed here.
Specifically, the partitioning process based on the second partitions is given a thumbnail G_a = (V_a, E_a, w, w'), where V_a is the node set, E_a is the edge set, w gives the node weights, and w' gives the edge weights. Unlike graph partitioning on the subgraphs, partitioning the thumbnail takes into account both node weights and edge weights, so the thumbnail is partitioned into k subsets, denoted V_1, V_2, ..., V_k, satisfying: (1) V_1 ∪ V_2 ∪ ... ∪ V_k = V_a; (2) V_i ∩ V_j = ∅ for any i ≠ j; (3) Σ_{p∈V_i} w(p) = (1 + α) Σ_{p∈V_a} w(p)/k for some -1 < α < 1; (4) the total weight of edges crossing different subsets is minimized. That is, the thumbnail is partitioned so that the node-weight sums of the partitions are relatively close, and hence the numbers of corresponding original-graph nodes are also relatively close. The METIS algorithm [1] can be used for thumbnail partitioning on a single machine.
Let γ(p) denote the partition ID of thumbnail node p, with 1 <= γ(p) <= k. For the subset P of original nodes corresponding to thumbnail node p, all nodes in P receive the partition ID γ(p). At this step, the nodes of the original graph have been divided into k new partitions. The number of cut edges of the new partitioning and the iteration count can then be computed to decide whether another round of iterative computation should be performed; otherwise, the current partitioning result is returned as the final graph partitioning result.
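The mapping from thumbnail partition IDs γ(p) back to the original nodes can be sketched as follows (illustrative names; `gamma` stands for the thumbnail partitioning result):

```python
def map_back(gamma, subsets):
    """Node u in subset V_i^j (thumbnail node p) receives the
    partition ID gamma(p) assigned by the thumbnail partitioning."""
    return {u: gamma[p] for p, nodes in subsets.items() for u in nodes}

subsets = {"p00": {1, 2}, "p01": {3}, "p10": {4, 5}}
gamma = {"p00": 0, "p01": 0, "p10": 1}     # thumbnail partition IDs
labels = map_back(gamma, subsets)
assert labels == {1: 0, 2: 0, 3: 0, 4: 1, 5: 1}
```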
In each iteration, this embodiment adopts a divide-and-conquer strategy: fine-grained partitioning is performed on top of the previous round's result, then coarse-grained partitioning is performed over the resulting blocks, finally yielding that round's partitioning result.
It can be understood that the above process may be iterated: after the second partitions are determined, the partition subgraphs corresponding to the second partitions are determined, and graph partitioning is performed on the partition subgraph of each second partition to obtain T sub-graph partitions per second partition; the K*T sub-graph partitions of the second partitions are then used as graph nodes to generate a thumbnail corresponding to the second partitions; this thumbnail is further divided into K third partitions, and the nodes in the initial graph data are partitioned based on the third partitions to obtain a loop node set, thereby implementing the iterative process.
In one possible scenario, each first partition corresponds to a different execution object; when the second partitions are determined, they likewise correspond to different execution objects, and so do the third partitions. That is, the implementation adopts a distributed-architecture processing mode, and the graph partitioning process executed in parallel across multiple execution objects can improve computing efficiency.
It will be appreciated that the execution objects may be different servers, or different partitions obtained by dividing the processing resources of the same server; no limitation is imposed here.
It should be noted that the initiating object of the recommendation task may itself be an execution object, i.e., an execution object performs, for example, friend recommendation; or the initiating object may be a service server other than the execution objects, for example a social network server responsible for storing social network information initiates the recommendation task while the execution objects in the computing group perform the graph partitioning of this embodiment. The specific initiating object depends on the actual scenario.
In addition, the stopping condition of the iteration may be based on the number of cut edges corresponding to the loop node set: if the number of cut edges has converged, the recommendation task is executed based on the loop node set. Alternatively, the iteration stops when a preset number of rounds is reached.
In one possible scenario, the process of graph processing is shown in fig. 5, and fig. 5 is a schematic diagram of a scenario of another graph data processing method provided by an embodiment of the present application, where the following process is shown in the figure:
(1) Partition the initial nodes, dividing them into k partitions whose data volumes are relatively close and small enough to fit into a machine's memory for processing;
(2) Construct a subgraph on each partition based on that partition's data;
(3) Partition each subgraph to obtain T sub-graph partitions;
(4) Construct a thumbnail based on the sub-graph partitions, so that the outline structure of the graph can be described;
(5) Re-partition the thumbnail and map the result back to the original nodes to obtain each node's partition ID;
(6) Compute the number of cut edges of the new partitioning result;
(7) Stop if the number of iterations reaches a preset limit or the number of cut edges has converged; otherwise, jump to step (2) and continue a new round of computation.
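The steps above fit a driver loop of the following shape (a skeleton under the assumption that a `one_round` callable performs steps (2)-(6) and returns the new labels plus the cut-edge count; here it is stubbed with a shrinking cut count purely to exercise the stopping logic):

```python
def iterate_partitioning(one_round, labels, max_rounds=10, tol=0):
    """Repeat steps (2)-(6) until the cut-edge count converges
    (improvement <= tol) or max_rounds is reached: step (7)."""
    prev_cut = float("inf")
    for _ in range(max_rounds):
        labels, cut = one_round(labels)
        if prev_cut - cut <= tol:      # converged: no further improvement
            break
        prev_cut = cut
    return labels, prev_cut

# Stub standing in for one full round; real code would re-split the
# subgraphs, rebuild the thumbnail and re-map the node labels.
cuts = iter([90, 50, 40, 40, 40])
labels, cut = iterate_partitioning(lambda lb: (lb, next(cuts)), {})
assert cut == 40                        # stops once the cut stops shrinking
```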
According to this embodiment, initial graph data are acquired and divided into K first partitions, where the data volumes of the first partitions are approximately equal and each first partition corresponds to a different execution object; the partition subgraph corresponding to each first partition is determined, and graph partitioning is performed on each partition subgraph to obtain T sub-graph partitions per partition subgraph; the K*T sub-graph partitions are then used as graph nodes to generate a thumbnail corresponding to the initial graph data; the thumbnail is further divided into K second partitions, and the nodes in the initial graph data are partitioned based on the second partitions to obtain a target node set used for executing recommendation tasks. This realizes a parallel graph data processing procedure: because the first partitions can be processed by different devices, each device performing graph partitioning on its own data, and the results can be reduced via the thumbnail, the advantages of parallel processing are fully exploited and the processing efficiency of graph data is improved.
The above embodiment describes the processing of graph data; the following describes that processing in combination with a social recommendation scenario. Referring to fig. 6, fig. 6 is a flowchart of another method for processing graph data according to an embodiment of the present application, where the embodiment of the present application at least includes the following steps:
601. Acquire a recommendation request initiated by a target object.
In this embodiment, the recommendation request is a friend recommendation request: the distributed graph partitioning algorithm is applied to a game's friend recommendation system as a recall method for friend recommendation. In particular, for mentor-apprentice recommendation in a battle game, the game's social network may first be segmented by the distributed graph partitioning algorithm, so that each player in the game belongs to a partition. For each player, 50 players may be randomly selected from the partition in which the player is located, and ranking-based recommendation is then performed. Selecting within a partition can be compared with selecting from the entire user set, and experimental results show a large improvement.
602. Expand the social relationship network of the target object in response to the recommendation request.
In this embodiment, expanding the social relationship network of the target object means extending the edge nodes in the social network through the target object's friend relationships, or retrieving the graph data corresponding to the social network at a larger scale.
603. Perform graph partitioning on the expanded social data to obtain a target recommendation set.
In this embodiment, the graph partitioning process follows steps 301-305 shown in fig. 3 and is not repeated here.
Specifically, in a mentor-apprentice recommendation scenario for a battle game, 10 other players may be recommended to each player A, so that player A may click on these players to establish a mentor-apprentice relationship. Since the number of other players that could be recommended to player A may be very large, a candidate set is computed first, and a ranking model then performs fine-grained ranking on the candidate set. The fine-grained ranking model uses xgboost, and the candidate set is generated with the graph partitioning method of this technical solution. Specifically, the distributed graph partitioning method of this embodiment may be run on the in-game friend social network of the battle game's players, so that each player belongs to a certain partition. For each player A, 100 players may be randomly selected from A's partition as A's recommendation candidate set. As a comparison method, 100 players may be randomly selected from all players as a candidate set. The same fine-grained ranking model is used with both candidate-set generation methods. Table 1 compares the application effects of the two methods: the click-through rate of the distributed graph partitioning method is 11.62% higher than that of the random method, and the same-day pass rate is improved by 3.12%.
Table 1 Comparison of application effects of different algorithms

Algorithm                                    | Click-through rate (%) | Same-day pass rate (%)
Random candidate set                         | 31.58                  | 67.26
Distributed graph segmentation candidate set | 35.25                  | 35.25
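As a quick consistency check, the reported relative click-through lift can be recomputed from the two click-through values in the table (plain arithmetic, no assumptions beyond those two figures):

```python
random_ctr, distributed_ctr = 31.58, 35.25   # click-through rates (%)
lift = (distributed_ctr / random_ctr - 1) * 100
assert abs(lift - 11.62) < 0.01              # matches the reported +11.62%
```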
This embodiment designs a large-scale distributed graph partitioning computation framework that can process graph data at the billion scale based on the Spark or MapReduce computation framework, gradually improving partition quality through repeated iterative computation. Each iteration adopts a two-level method: through a divide-and-conquer strategy, the large graph is coarsely divided into multiple blocks, each block is then finely partitioned, and finally the finely partitioned blocks are re-partitioned to obtain that level's partitioning result. A thumbnail construction method and a graph sparsification method are designed, which effectively improve composition efficiency. The distributed graph partitioning algorithm is also used to generate recommendation candidate sets, improving the effect of service applications.
In order to better implement the above-described aspects of the embodiments of the present application, the following provides related apparatuses for implementing the above-described aspects. Referring to fig. 7, fig. 7 is a schematic structural diagram of a processing device for graph data according to an embodiment of the present application, where the processing device 700 for graph data includes:
An obtaining unit 701, configured to obtain initial graph data and divide the initial graph data into K first partitions, where the data volume corresponding to each first partition is close, K >= 2, and K is a positive integer;
A determining unit 702, configured to determine a partition sub-graph corresponding to the first partition;
A processing unit 703, configured to perform graph segmentation on each partition sub-graph to obtain T sub-graph segmented partitions corresponding to each partition sub-graph, where T is greater than or equal to 2, and T is a positive integer;
the processing unit 703 is further configured to use the K*T sub-graph partitions as graph nodes, so as to generate a thumbnail corresponding to the initial graph data;
The processing unit 703 is further configured to partition the thumbnail into K second partitions, and partition nodes in the initial graph data based on the second partitions, so as to obtain a target node set, where the target node set is used for executing recommendation tasks.
Optionally, in some possible implementations of the present application, the acquiring unit 701 is specifically configured to acquire the initial graph data;
The acquiring unit 701 is specifically configured to determine a neighboring node corresponding to each node in the initial graph data, so as to generate a first tuple;
the acquiring unit 701 is specifically configured to generate a random number r, and combine the random number with the first tuple to generate a second tuple, where 0 <= r < K;
The acquiring unit 701 is specifically configured to aggregate based on the random numbers in the second tuple, so as to obtain K first partitions.
Optionally, in some possible implementations of the present application, the acquiring unit 701 is specifically configured to determine processing capability information corresponding to an execution object;
The acquiring unit 701 is specifically configured to determine a unit processing parameter corresponding to the execution object based on the number of the neighboring nodes and the processing capability information;
The acquiring unit 701 is specifically configured to determine a value of K according to a ratio of the data amount corresponding to the initial map data to the unit processing parameter.
Optionally, in some possible implementations of the present application, the processing unit 703 is specifically configured to use k×t sub-graph partition as a graph node;
The processing unit 703 is specifically configured to configure weight information based on the number of nodes of the sub-graph partition in the graph nodes;
the processing unit 703 is specifically configured to determine, as thumbnail edges, the connection nodes in the graph nodes whose connecting edges cross sub-graph partitions;
The processing unit 703 is specifically configured to determine, based on the weight information, the number of the thumbnail edges, so as to generate the thumbnail corresponding to the initial map data.
Optionally, in some possible implementations of the present application, the processing unit 703 is specifically configured to obtain an average node degree corresponding to the thumbnail, so as to determine a thumbnail coefficient;
the processing unit 703 is specifically configured to perform node sparse processing on each graph node based on the thumbnail coefficients, and extract edges after the sparse processing to obtain processed edges;
The processing unit 703 is specifically configured to sort the processing edges based on the weight information of the processing edges, so as to determine a target edge;
the processing unit 703 is specifically configured to generate the thumbnail corresponding to the initial image data according to the target edge.
Optionally, in some possible implementations of the present application, the processing unit 703 is specifically configured to determine a partition sub-graph corresponding to the second partition;
the processing unit 703 is specifically configured to perform graph segmentation on the sub-graph corresponding to each second partition, so as to obtain T sub-graph segmented partitions corresponding to the second partitions;
the processing unit 703 is specifically configured to use sub-graph partition areas corresponding to k×t second partition areas as graph nodes, so as to generate thumbnail images corresponding to the second partition areas;
the processing unit 703 is specifically configured to partition the thumbnail corresponding to the second partition into K third partitions, and partition the nodes in the initial graph data based on the third partitions, so as to obtain a cyclic node set.
Optionally, in some possible implementations of the present application, the processing unit 703 is specifically configured to obtain the number of dividing edges corresponding to the set of circulating nodes;
the processing unit 703 is specifically configured to execute a recommendation task based on the set of loop nodes if the number of the split edges converges.
The device obtains initial graph data and divides it into K first partitions, where the data volumes of the first partitions are approximately equal and each first partition corresponds to a different execution object. It determines the partition subgraph corresponding to each first partition and performs graph partitioning on each partition subgraph to obtain T sub-graph partitions per partition subgraph; the K*T sub-graph partitions are then used as graph nodes to generate a thumbnail corresponding to the initial graph data; the thumbnail is divided into K second partitions, and the nodes in the initial graph data are partitioned based on the second partitions to obtain a target node set used for executing recommendation tasks. This realizes a parallel graph data processing procedure: because the first partitions can be processed by different devices, each device performing graph partitioning on its own data, and the results can be reduced via the thumbnail, the advantages of parallel processing are fully exploited and the processing efficiency of graph data is improved.
The embodiment of the present application further provides a terminal device, as shown in fig. 8, which is a schematic structural diagram of another terminal device provided in the embodiment of the present application, for convenience of explanation, only the portion related to the embodiment of the present application is shown, and specific technical details are not disclosed, please refer to the method portion of the embodiment of the present application. The terminal may be any terminal device including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a point of sale (POS), a vehicle-mounted computer, and the like, taking the mobile phone as an example:
Fig. 8 is a block diagram showing part of the structure of a mobile phone related to a terminal provided by an embodiment of the present application. Referring to fig. 8, the handset includes radio frequency (RF) circuitry 810, memory 820, input unit 830, display unit 840, sensor 850, audio circuitry 860, wireless fidelity (WiFi) module 870, processor 880, and power supply 890. Those skilled in the art will appreciate that the handset configuration shown in fig. 8 is not limiting of the handset, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes the components of the mobile phone in detail with reference to fig. 8:
The RF circuit 810 may be used for receiving and transmitting signals during a message or a call; in particular, it receives downlink information from a base station and delivers it to the processor 880 for processing, and transmits uplink data to the base station. Typically, the RF circuitry 810 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 810 may also communicate with networks and other devices via wireless communications. The wireless communications may use any communication standard or protocol, including, but not limited to, the Global System for Mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), email, the short messaging service (SMS), and the like.
The memory 820 may be used to store software programs and modules, and the processor 880 performs the various functional applications and data processing of the mobile phone by executing the software programs and modules stored in the memory 820. The memory 820 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phonebook). In addition, the memory 820 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 830 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset. In particular, the input unit 830 may include a touch panel 831 and other input devices 832. The touch panel 831, also referred to as a touch screen, can collect touch operations performed by a user on or near it (for example, operations performed with a finger, a stylus, or any other suitable object or accessory on or near the touch panel 831, as well as hovering touch operations within a certain range of the touch panel 831), and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 831 may include two parts: a touch detection device and a touch controller. The touch controller receives touch information from the touch detection device, converts it into touch-point coordinates, and sends the coordinates to the processor 880; it can also receive and execute commands sent by the processor 880. The touch panel 831 may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel, among other types. Besides the touch panel 831, the input unit 830 may include other input devices 832, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick.
The display unit 840 may be used to display information input by the user or provided to the user, as well as the various menus of the mobile phone. The display unit 840 may include a display panel 841; optionally, the display panel 841 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 831 may overlay the display panel 841: when the touch panel 831 detects a touch operation on or near it, the operation is passed to the processor 880 to determine the type of the touch event, and the processor 880 then provides a corresponding visual output on the display panel 841 according to that type. Although in fig. 8 the touch panel 831 and the display panel 841 are implemented as two separate components to realize the input and output functions of the mobile phone, in some embodiments the touch panel 831 and the display panel 841 may be integrated to realize those functions.
The handset may also include at least one sensor 850, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 841 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 841 and/or the backlight when the mobile phone moves to the ear. As one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer and tap detection). Other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor may also be configured on the mobile phone and are not described herein.
The audio circuitry 860, speaker 861, and microphone 862 may provide an audio interface between the user and the handset. The audio circuit 860 may convert received audio data into an electrical signal and transmit it to the speaker 861, which converts it into a sound signal for output. Conversely, the microphone 862 converts a collected sound signal into an electrical signal, which the audio circuit 860 receives and converts into audio data; the audio data is then output to the processor 880 for processing and, for example, sent to another mobile phone via the RF circuit 810, or output to the memory 820 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 870, the mobile phone can help the user send and receive e-mails, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although fig. 8 shows the WiFi module 870, it is understood that it is not an essential part of the mobile phone and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 880 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile phone and processes data by running or executing software programs and/or modules stored in the memory 820 and calling data stored in the memory 820, thereby performing overall management of the mobile phone. Alternatively, the processor 880 may include one or more processing units, and alternatively, the processor 880 may integrate an application processor that primarily processes an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 880.
The handset further includes a power supply 890 (such as a battery) for powering the various components. Optionally, the power supply is logically connected to the processor 880 through a power management system, so that functions such as charge management, discharge management, and power-consumption management are performed through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which will not be described herein.
In the embodiment of the present application, the processor 880 included in the terminal further has the function of executing the steps of the graph data processing method described above.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application. The server 900 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 922 (e.g., one or more processors), a memory 932, and one or more storage media 930 (e.g., one or more mass storage devices) storing application programs 942 or data 944. The memory 932 and the storage media 930 may provide transitory or persistent storage. The programs stored in a storage medium 930 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Still further, the central processing unit 922 may be arranged to communicate with the storage medium 930 to execute, on the server 900, the series of instruction operations in the storage medium 930.
The server 900 may also include one or more power supplies 926, one or more wired or wireless network interfaces 950, one or more input/output interfaces 958, and/or one or more operating systems 941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps performed by the graph data processing apparatus in the above-described embodiments may be based on the server structure shown in fig. 9.
In an embodiment of the present application, there is further provided a computer-readable storage medium storing processing instructions for graph data which, when run on a computer, cause the computer to perform the steps performed by the graph data processing apparatus in the methods described in the foregoing embodiments shown in figs. 3 to 6.
There is also provided in an embodiment of the application a computer program product comprising processing instructions for graph data, which when run on a computer causes the computer to perform the steps performed by the processing means for graph data in the method described in the embodiments of figures 3 to 6 as described above.
The embodiment of the application also provides a processing system of the graph data, which can comprise the processing device of the graph data in the embodiment described in fig. 7, or the terminal equipment in the embodiment described in fig. 8, or the server described in fig. 9.
In one possible scenario, the graph data processing method of the present application is applied to blockchain devices; that is, the execution object corresponding to each partition is a blockchain device, and each blockchain device is a node in a blockchain. This is described below with reference to the accompanying drawings. The data sharing system 1000 shown in fig. 10A is a system for sharing data between nodes. The data sharing system may include a plurality of nodes 1001, which may be the respective clients in the system. Each node 1001 may receive input information during normal operation and maintain the shared data within the data sharing system based on the received input information. To ensure information interworking in the data sharing system, an information connection may exist between each pair of nodes, and the nodes may transmit information through these connections. For example, when any node in the data sharing system receives input information, the other nodes acquire the input information according to a consensus algorithm and store it as data in the shared data, so that the data stored on all nodes in the data sharing system is consistent.
Each node in the data sharing system has a corresponding node identifier, and each node may store the node identifiers of the other nodes in the system, so that a generated block can later be broadcast to those nodes according to their identifiers. Each node may maintain a node identification list such as the one shown below, in which node names and node identifiers are stored correspondingly. The node identifier may be an IP (Internet Protocol) address or any other information that can identify the node; Table 2 uses IP addresses only as an example.
Table 2 correspondence between node names and node identifications
Node name Node identification
Node 1 117.114.151.174
Node 2 117.116.189.145
... ...
Node N 119.123.789.258
Each node in the data sharing system stores one and the same blockchain. Referring to fig. 10B, the blockchain consists of a plurality of blocks. The starting block includes a block header and a block body; the block header stores an input-information characteristic value, a version number, a timestamp, and a difficulty value, and the block body stores the input information. The next block takes the starting block as its parent block and likewise includes a block header and a block body; its block header stores the input-information characteristic value of the current block together with the block-header characteristic value, version number, timestamp, and difficulty value of the parent block, and so on. In this way, the block data stored in each block of the blockchain is associated with the block data stored in its parent block, ensuring the security of the input information in the blocks.
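The parent-child linkage described above can be illustrated with a minimal sketch. The field names and hashing scheme here are simplified assumptions for illustration, not the patent's exact block format:

```python
import hashlib
import json

def make_block(parent_hash, body):
    """Hypothetical sketch: the block header stores the parent block's
    header characteristic value (prev_hash), chaining each block to its parent."""
    header = {
        "prev_hash": parent_hash,
        "body_hash": hashlib.sha256(body.encode()).hexdigest(),
        "version": 1,
    }
    header_hash = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()
    ).hexdigest()
    return {"header": header, "body": body, "hash": header_hash}

def verify_chain(chain):
    """A chain is consistent when every block's prev_hash matches its parent's hash."""
    return all(
        child["header"]["prev_hash"] == parent["hash"]
        for parent, child in zip(chain, chain[1:])
    )
```

Because each header embeds the parent's hash, altering any block breaks the link to its child, which is what ties the security of the input information to the chain structure.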
When each block in the blockchain is generated, referring to fig. 10C, the node where the blockchain is located checks the input information upon receiving it, stores the input information into a memory pool after the check is completed, updates the hash tree used for recording the input information, then updates the update timestamp and tries different random numbers until the following formula is satisfied:
SHA256(SHA256(version+prev_hash+merkle_root+ntime+nbits+x))<TARGET
Here, SHA256 is the characteristic-value algorithm used for calculating characteristic values; version is the version information of the relevant block protocol in the blockchain; prev_hash is the block-header characteristic value of the parent block of the current block; merkle_root is the characteristic value of the input information; ntime is the update time of the update timestamp; nbits is the current difficulty, which is a fixed value for a period of time and is re-determined after that period elapses; x is a random number; and TARGET is a characteristic-value threshold, which can be determined from nbits.
Thus, when a random number satisfying the above formula is calculated, the information can be stored correspondingly to generate the block header and block body, thereby obtaining the current block. The node where the blockchain is located then sends the newly generated block to the other nodes in its data sharing system according to their node identifiers; the other nodes verify the newly generated block and, after verification is completed, add it to the blockchains they store.
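The nonce search described by the inequality above can be sketched as follows. The byte layout of the header fields is a simplifying assumption (the formula concatenates version, prev_hash, merkle_root, ntime, nbits, and x, whose exact encodings are not specified here):

```python
import hashlib

def double_sha256(data: bytes) -> int:
    """SHA256(SHA256(data)) interpreted as a big-endian integer."""
    return int.from_bytes(
        hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big"
    )

def find_nonce(header_prefix: bytes, target: int, max_tries: int = 100_000):
    """Try candidate values of x until the doubled hash falls below TARGET.

    header_prefix stands in for the concatenation of version, prev_hash,
    merkle_root, ntime, and nbits; returns None if no nonce is found."""
    for x in range(max_tries):
        if double_sha256(header_prefix + x.to_bytes(8, "big")) < target:
            return x
    return None
```

A lower TARGET (higher difficulty, i.e., nbits) shrinks the set of acceptable hashes, so more candidates must be tried on average before a valid random number is found.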
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a graph data processing apparatus, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or various other media capable of storing program code.
While the application has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications or replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the application.

Claims (12)

1. A method for processing graph data, comprising:
obtaining initial graph data, and dividing the initial graph data into K first partitions, wherein the data quantities corresponding to the first partitions are close, each first partition corresponds to a different execution object, K≥2, and K is a positive integer;
determining a partition subgraph corresponding to each first partition;
performing graph segmentation on each partition subgraph to obtain T subgraph segmentation partitions corresponding to each partition subgraph, wherein T≥2 and T is a positive integer;
taking the K*T subgraph segmentation partitions as graph nodes to generate a thumbnail corresponding to the initial graph data;
dividing the thumbnail into K second partitions, and partitioning the nodes in the initial graph data based on the second partitions to obtain a target node set, wherein the target node set is used for executing a recommendation task, and each second partition corresponds to a different execution object;
wherein obtaining the initial graph data and dividing the initial graph data into K first partitions comprises:
obtaining the initial graph data;
determining the neighbor nodes corresponding to each node in the initial graph data, to generate first two-tuples;
generating R random numbers, and combining the random numbers with the first two-tuples to generate second two-tuples, wherein 0≤R<K;
aggregating based on the random numbers in the second two-tuples, to obtain the K first partitions;
and wherein the value of K is determined by:
determining processing capability information corresponding to an execution object;
determining a unit processing parameter corresponding to the execution object based on the number of neighbor nodes and the processing capability information;
determining the value of K according to the ratio of the data quantity corresponding to the initial graph data to the unit processing parameter.

2. The method according to claim 1, wherein taking the K*T subgraph segmentation partitions as graph nodes to generate the thumbnail corresponding to the initial graph data comprises:
taking the K*T subgraph segmentation partitions as graph nodes;
configuring weight information based on the number of nodes of the subgraph segmentation partition in each graph node;
determining, among the graph nodes, connection nodes whose connecting edges cross the edges of the subgraph segmentation partitions, as thumbnail edges;
determining the number of thumbnail edges based on the weight information, to generate the thumbnail corresponding to the initial graph data.

3. The method according to claim 2, wherein determining the number of thumbnail edges based on the weight information to generate the thumbnail corresponding to the initial graph data comprises:
obtaining an average node degree corresponding to the thumbnail, to determine a reduction coefficient;
performing node sparsification on each graph node based on the reduction coefficient, and extracting the sparsified edges to obtain processed edges;
sorting based on the weight information of the processed edges, to determine target edges;
generating the thumbnail corresponding to the initial graph data according to the target edges.

4. The method according to any one of claims 1 to 3, further comprising:
determining a partition subgraph corresponding to each second partition;
performing graph segmentation on the partition subgraph corresponding to each second partition, to obtain T subgraph segmentation partitions corresponding to the second partitions;
taking the K*T subgraph segmentation partitions corresponding to the second partitions as graph nodes, to generate a thumbnail corresponding to the second partitions;
dividing the thumbnail corresponding to the second partitions into K third partitions, and partitioning the nodes in the initial graph data based on the third partitions, to obtain a cyclic node set.

5. The method according to claim 4, further comprising:
obtaining the number of segmentation edges corresponding to the cyclic node set;
if the number of segmentation edges converges, executing a recommendation task based on the cyclic node set.

6. An apparatus for processing graph data, comprising:
an obtaining unit, configured to obtain initial graph data and divide the initial graph data into K first partitions, wherein the data quantities corresponding to the first partitions are close, each first partition corresponds to a different execution object, K≥2, and K is a positive integer;
a determining unit, configured to determine a partition subgraph corresponding to each first partition;
a processing unit, configured to perform graph segmentation on each partition subgraph to obtain T subgraph segmentation partitions corresponding to each partition subgraph, wherein T≥2 and T is a positive integer;
the processing unit being further configured to take the K*T subgraph segmentation partitions as graph nodes, to generate a thumbnail corresponding to the initial graph data;
the processing unit being further configured to divide the thumbnail into K second partitions and partition the nodes in the initial graph data based on the second partitions, to obtain a target node set, wherein the target node set is used for executing a recommendation task and each second partition corresponds to a different execution object;
wherein the obtaining unit is specifically configured to: obtain the initial graph data; determine the neighbor nodes corresponding to each node in the initial graph data, to generate first two-tuples; generate R random numbers, and combine the random numbers with the first two-tuples to generate second two-tuples, wherein 0≤R<K; and aggregate based on the random numbers in the second two-tuples, to obtain the K first partitions;
and wherein the obtaining unit is further specifically configured to: determine processing capability information corresponding to an execution object; determine a unit processing parameter corresponding to the execution object based on the number of neighbor nodes and the processing capability information; and determine the value of K according to the ratio of the data quantity corresponding to the initial graph data to the unit processing parameter.

7. The apparatus according to claim 6, wherein the processing unit is specifically configured to: take the K*T subgraph segmentation partitions as graph nodes; configure weight information based on the number of nodes of the subgraph segmentation partition in each graph node; determine, among the graph nodes, connection nodes whose connecting edges cross the edges of the subgraph segmentation partitions, as thumbnail edges; and determine the number of thumbnail edges based on the weight information, to generate the thumbnail corresponding to the initial graph data.

8. The apparatus according to claim 7, wherein the processing unit is specifically configured to: obtain an average node degree corresponding to the thumbnail, to determine a reduction coefficient; perform node sparsification on each graph node based on the reduction coefficient, and extract the sparsified edges to obtain processed edges; sort based on the weight information of the processed edges, to determine target edges; and generate the thumbnail corresponding to the initial graph data according to the target edges.

9. The apparatus according to any one of claims 6 to 8, wherein the processing unit is specifically configured to: determine a partition subgraph corresponding to each second partition; perform graph segmentation on the partition subgraph corresponding to each second partition, to obtain T subgraph segmentation partitions corresponding to the second partitions; take the K*T subgraph segmentation partitions corresponding to the second partitions as graph nodes, to generate a thumbnail corresponding to the second partitions; and divide the thumbnail corresponding to the second partitions into K third partitions and partition the nodes in the initial graph data based on the third partitions, to obtain a cyclic node set.

10. The apparatus according to claim 9, wherein the processing unit is specifically configured to: obtain the number of segmentation edges corresponding to the cyclic node set; and, if the number of segmentation edges converges, execute a recommendation task based on the cyclic node set.

11. A computer device, comprising a processor and a memory, wherein the memory is configured to store program code, and the processor is configured to execute, according to instructions in the program code, the method for processing graph data according to any one of claims 1 to 5.

12. A computer program product comprising a computer program/instructions, wherein, when the computer program/instructions are executed by a processor, the steps of the method for processing graph data according to any one of claims 1 to 5 are implemented.
CN202210288728.3A 2022-03-23 2022-03-23 A method, device and storage medium for processing graph data Active CN114756714B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210288728.3A CN114756714B (en) 2022-03-23 2022-03-23 A method, device and storage medium for processing graph data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210288728.3A CN114756714B (en) 2022-03-23 2022-03-23 A method, device and storage medium for processing graph data

Publications (2)

Publication Number Publication Date
CN114756714A CN114756714A (en) 2022-07-15
CN114756714B true CN114756714B (en) 2024-12-10

Family

ID=82327851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210288728.3A Active CN114756714B (en) 2022-03-23 2022-03-23 A method, device and storage medium for processing graph data

Country Status (1)

Country Link
CN (1) CN114756714B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116594958A (en) * 2023-05-25 2023-08-15 之江实验室 Graph dataset loading method, system, electronic device and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112069412A (en) * 2020-09-11 2020-12-11 腾讯科技(深圳)有限公司 Information recommendation method and device, computer equipment and storage medium
CN112100450A (en) * 2020-09-07 2020-12-18 厦门渊亭信息科技有限公司 Graph calculation data segmentation method, terminal device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8578130B2 (en) * 2003-03-10 2013-11-05 International Business Machines Corporation Partitioning of node into more than one partition
US8738559B2 (en) * 2011-01-24 2014-05-27 Microsoft Corporation Graph partitioning with natural cuts
CN104618153B (en) * 2015-01-20 2018-08-03 北京大学 Dynamic fault-tolerant method and system based on P2P in the processing of distributed parallel figure
CN108319698B (en) * 2018-02-02 2021-01-15 华中科技大学 Game-based flow graph dividing method and system
CN111144577B (en) * 2019-12-26 2022-04-22 北京百度网讯科技有限公司 Method, apparatus and electronic device for generating node representation in heterogeneous graph
CN112214499B (en) * 2020-12-03 2021-03-19 腾讯科技(深圳)有限公司 Graph data processing method and device, computer equipment and storage medium
CN112699134A (en) * 2021-03-25 2021-04-23 北京智源人工智能研究院 Distributed graph database storage and query method based on graph subdivision

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112100450A (en) * 2020-09-07 2020-12-18 厦门渊亭信息科技有限公司 Graph calculation data segmentation method, terminal device and storage medium
CN112069412A (en) * 2020-09-11 2020-12-11 腾讯科技(深圳)有限公司 Information recommendation method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN114756714A (en) 2022-07-15

Similar Documents

Publication Publication Date Title
CN112084422B (en) Account data intelligent processing method and device
WO2019200714A1 (en) Server connection method, computer readable storage medium, terminal device, and apparatus
CN110995810B (en) Object identification method based on artificial intelligence and related device
CN108846767B (en) Feature acquisition method and device for social group
CN115050079B (en) Face recognition method, device and storage medium
WO2018223772A1 (en) Content recommendation method and system
CN116450808B (en) Data processing method and device and storage medium
CN106874936A (en) Image propagates monitoring method and device
CN114756714B (en) A method, device and storage medium for processing graph data
CN113051126A (en) Image construction method, device and equipment and storage medium
CN113032160B (en) Data synchronization management method and related device
CN112383386B (en) Data transmission method, device, computer equipment and computer readable storage medium
CN111506730B (en) Data clustering method and related device
CN115861662B (en) Prediction method, device, equipment and medium based on combined neural network model
WO2024088119A1 (en) Data processing method and apparatus, and terminal and network-side device
CN110674134A (en) Geographic information data storage method, query method and device
CN116029368A (en) Super-parameter optimization method, related device and storage medium
CN112783992B (en) Map functional area determining method and device based on interest points
CN111914945A (en) Data processing method and device, image generation method and electronic equipment
CN117688230B (en) A model training method, model application method and related device
CN116955712A (en) Data processing method, device, equipment and storage medium
CN114462461B (en) A method and related device for determining community characteristics based on relationship network
CN120031155A (en) Model training method, device, equipment and storage medium based on federated learning
CN116644128A (en) Block chain-based data processing method, device and storage medium
CN117237652A (en) Object set detection method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant