
CN117934805B - Object screening method and device, storage medium and electronic equipment - Google Patents

Object screening method and device, storage medium and electronic equipment

Info

Publication number
CN117934805B
CN117934805B (application CN202410341988.1A; published earlier as CN117934805A)
Authority
CN
China
Prior art keywords
scene
competition
position coordinates
candidate object
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410341988.1A
Other languages
Chinese (zh)
Other versions
CN117934805A
Inventor
张奔
郑磊
郑中
涂海龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202410341988.1A
Publication of CN117934805A
Application granted
Publication of CN117934805B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an object screening method and apparatus, a storage medium, and an electronic device. The method includes: determining N scene pictures containing a candidate object from a scene picture sequence obtained by capturing images of the competition scene in which the candidate object is located; acquiring the first position coordinates of the candidate object's position in each of the N scene pictures to obtain a first position coordinate set containing N first position coordinates; mapping each first position coordinate in the first position coordinate set to a scene top view matched with the competition scene to obtain a second position coordinate set; determining the candidate object's competition physical ability parameters in the competition scene using the movement time-sequence relationship and the movement state parameters among the second position coordinates in the second position coordinate set; and determining whether to screen the candidate object into a target object list. The application solves the technical problem of low accuracy in the object screening process in the related art.

Description

Object screening method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing, and in particular, to an object screening method and apparatus, a storage medium, and an electronic device.
Background
In scenarios where potential objects are selected for certain important projects, candidate objects willing to participate in the project are typically organized to play an athletic game, and target objects meeting predetermined requirements are then screened out based on their performance on the field.
The target-object screening approach provided in the related art relies mainly on motion-capture equipment: sensors are attached to key joints of a candidate object to capture the motion parameters of a predetermined reference action. The captured actual action is then compared with the standard action to determine whether the candidate object's actual action matches the standard action, and target objects are screened out according to each candidate object's action comparison result.
However, motion-capture equipment is expensive, and the motion parameters it collects are limited and one-dimensional, so a target object screened from multiple candidate objects often fails to meet the predetermined requirements. That is, the object screening method provided by the related art suffers from the technical problem of low accuracy.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the present application provide an object screening method and apparatus, a storage medium, and an electronic device, which at least solve the technical problem of low accuracy in the object screening process.
According to an aspect of an embodiment of the present application, there is provided an object screening method, including: determining N scene images of the candidate object from a scene image sequence obtained by image acquisition of the competition scene where the candidate object is located, wherein N is a positive integer greater than or equal to 1; acquiring first position coordinates of positions of candidate objects respectively appearing in N scene pictures, and obtaining a first position coordinate set containing N first position coordinates; mapping each first position coordinate in the first position coordinate set to a scene top view matched with the competition scene respectively to obtain a second position coordinate set containing N second position coordinates, wherein the second position coordinates are coordinates of positions of candidate objects in the scene top view; determining competition physical performance parameters of the candidate object in the competition scene by utilizing the movement time sequence relation and the movement state parameters among the second position coordinates in the second position coordinate set; and under the condition that the competition physical ability parameter reaches the physical ability index indicated by the screening condition, determining to screen the candidate objects into a target object list.
Optionally, determining the competition physical performance parameter of the candidate object in the competition scene by using the movement time sequence relation and the movement state parameter between the second position coordinates in the second position coordinate set includes: determining N third position coordinates of the candidate object by utilizing the target conversion matrix, wherein the N third position coordinates are coordinates of positions of the candidate object in the competition scene at each of N moments, the N moments are moments of collecting N scene images, and the N moments are in one-to-one correspondence with the N scene images; determining the moving state parameters of the candidate objects in the competition scene according to the N third position coordinates and the moving time sequence relation; and determining the competition physical ability parameters of the candidate objects in the competition scene according to the movement state parameters.
Optionally, determining N third location coordinates of the candidate object using the target transformation matrix includes: determining a position function according to the N second position coordinates, the target conversion matrix and the target scaling, wherein the position function is a function representing the change of coordinates of positions of candidate objects in the competition scene along with time, and the target scaling is used for representing the scaling between the competition scene and the scene top view; n third position coordinates of the candidate object are determined according to the position function.
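As a minimal illustration of this step, the position function can be represented discretely as a mapping from each acquisition moment to a scaled real-scene coordinate. The names, the discrete-table representation, and the assumption that the target scale `k` converts top-view units into competition-scene units are all illustrative; the patent leaves the function's exact form open.

```python
def position_function(timed_second_coords, k):
    """Build a discrete position function t -> real-scene coordinate.

    timed_second_coords: list of (t, x, y) second position coordinates in
    the scene top view, already mapped via the target conversion matrix.
    k: assumed scale factor from top-view units to competition-scene units.
    """
    # Look-up table from acquisition moment to scaled coordinate.
    table = {t: (k * x, k * y) for t, x, y in timed_second_coords}

    def pos(t):
        # The third position coordinate at moment t.
        return table[t]

    return pos
```

In a full implementation the function could also interpolate between sampled moments; here it simply tabulates the N sampled coordinates.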
Optionally, determining the movement state parameter of the candidate object in the competition scene according to the N third position coordinates and the movement time sequence relation includes: performing differentiation processing on the position function to obtain the speed information of the candidate object; and determining the moving distance of the candidate object in the competition scene according to the moving time sequence relation and the N third position coordinates, wherein the moving state parameters comprise speed information and the moving distance.
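With discrete position samples, the differentiation of the position function described above reduces to finite differences between consecutive samples. A sketch under that assumption (the names are illustrative, not from the patent):

```python
def speeds(timed_coords):
    """Approximate speed between consecutive samples of the position function.

    timed_coords: time-ordered list of (t, x, y) coordinates in the
    competition scene. Returns one speed value per consecutive pair,
    a discrete stand-in for differentiating the continuous position function.
    """
    out = []
    for (t0, x0, y0), (t1, x1, y1) in zip(timed_coords, timed_coords[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5  # Euclidean step
        out.append(dist / (t1 - t0))  # distance over elapsed time
    return out
```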
Optionally, determining the moving distance of the candidate object in the competition scene according to the moving time sequence relation and the N third position coordinates includes: sorting the N third position coordinates according to the moving time sequence relationship to obtain N sorted third position coordinates; and summing the distances between any two adjacent ordered third position coordinates to obtain the moving distance.
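The sort-and-sum procedure just described can be sketched in a few lines; the tuple layout `(t, x, y)` is an assumption for illustration:

```python
from math import hypot

def moving_distance(timed_coords):
    """Sum the distances between adjacent positions after time-ordering.

    timed_coords: list of (t, x, y) third position coordinates in any order;
    sorting by t realizes the movement time-sequence relationship.
    """
    ordered = sorted(timed_coords, key=lambda c: c[0])
    return sum(
        hypot(b[1] - a[1], b[2] - a[2])       # distance between adjacent pair
        for a, b in zip(ordered, ordered[1:])
    )
```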
Optionally, mapping each first position coordinate in the first position coordinate set to a scene top view matched with the competition scene to obtain a second position coordinate set including N second position coordinates, where the mapping includes: obtaining M second position coordinates matched with M first position coordinates in a first position coordinate set, wherein M is a positive integer which is larger than or equal to a preset value and smaller than or equal to N; determining a target transformation matrix according to the M first position coordinates and the M second position coordinates, wherein the second position coordinate set comprises the M second position coordinates; and mapping each first position coordinate in the first position coordinate set to a scene top view by utilizing the target transformation matrix to obtain N second position coordinates in the second position coordinate set.
Optionally, determining the target transformation matrix according to the M first position coordinates and the M second position coordinates includes: determining a set of matching relationships among the M first position coordinates, the M second position coordinates, and a set of matrix elements, wherein the set of matrix elements includes matrix elements in the target transformation matrix; determining the value of each matrix element in a group of matrix elements according to a group of matching relations; and determining a target conversion matrix according to the value of each matrix element.
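The patent does not fix a particular solver for the matrix elements; a standard choice for such a set of matching relations is the direct linear transform (DLT), solving for a 3x3 projective matrix (up to scale) from M >= 4 matched point pairs. The sketch below assumes NumPy and exact, non-degenerate correspondences (in practice a robust estimator such as OpenCV's `findHomography` would be used):

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Estimate a 3x3 transform H (up to scale) so that dst ~ H @ src
    in homogeneous coordinates, from M >= 4 matched point pairs."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each matching relation contributes two linear equations in the
        # nine matrix elements.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A, dtype=float)
    # H is the null-space direction of A: the last right singular vector.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right element is 1

def apply_homography(H, pt):
    """Map a point through H with homogeneous normalization."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```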
Optionally, the obtaining the first position coordinates of positions of the candidate object in the N scene pictures respectively to obtain a first position coordinate set including N first position coordinates includes: detecting positions of candidate objects in N scene pictures respectively to obtain N positions; and determining the position coordinates corresponding to the N positions as N first position coordinates.
Optionally, acquiring the first position coordinates of the candidate object's positions in the N scene pictures to obtain a first position coordinate set containing N first position coordinates includes: detecting the candidate object's positions in Q scene pictures to obtain Q positions, where the N scene pictures include the Q scene pictures and Q is a positive integer greater than or equal to 1 and less than N; acquiring the position coordinates corresponding to the Q positions to obtain Q position coordinates; determining the position coordinates of the remaining positions among the N positions other than the Q positions according to the picture frame rate and the Q position coordinates to obtain the remaining position coordinates, where the picture frame rate represents the number of scene pictures acquired within a preset time; and determining the Q position coordinates and the remaining position coordinates as the N first position coordinates.
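One way to fill in the undetected frames, given that the frame rate makes frame indices equivalent to timestamps, is linear interpolation between the nearest detected frames. The patent does not fix the interpolation scheme; the function below (names illustrative) clamps frames before the first or after the last detection to the nearest known coordinate:

```python
def fill_missing_coords(known, n_frames):
    """Interpolate position coordinates for frames with no detection.

    known: dict mapping frame index -> (x, y) for the Q detected frames.
    n_frames: total number N of scene pictures.
    Returns a list of N (x, y) coordinates.
    """
    idxs = sorted(known)
    coords = []
    for f in range(n_frames):
        if f in known:
            coords.append(known[f])
            continue
        prev = max((i for i in idxs if i < f), default=None)
        nxt = min((i for i in idxs if i > f), default=None)
        if prev is None:          # before the first detection: clamp
            coords.append(known[nxt])
        elif nxt is None:         # after the last detection: clamp
            coords.append(known[prev])
        else:                     # linear interpolation between detections
            t = (f - prev) / (nxt - prev)
            (x0, y0), (x1, y1) = known[prev], known[nxt]
            coords.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return coords
```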
Optionally, in the case that the competition physical ability parameter reaches the physical ability index indicated by the screening condition, determining to screen the candidate object into the target object list includes: determining a distance threshold and a count threshold allowed for the candidate object when moving along a preset competition route in the competition scene; determining to screen the candidate object into the target object list when the candidate object's actual moving distance is smaller than the distance threshold; or determining to screen the candidate object into the target object list when the candidate object's actual number of moves is greater than the count threshold; or determining to screen the candidate object into the target object list when the actual moving distance is smaller than the distance threshold and the actual number of moves is greater than the count threshold; where the competition physical ability parameters include the actual moving distance and the actual number of moves.
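The third of these variants (both thresholds must be satisfied) can be sketched as a one-line predicate; the parameter names are illustrative, not from the patent:

```python
def screen_candidate(actual_distance, actual_moves,
                     distance_threshold, count_threshold):
    """Return True if the candidate should enter the target object list:
    actual moving distance below the distance threshold AND actual number
    of moves above the count threshold."""
    return actual_distance < distance_threshold and actual_moves > count_threshold
```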
Optionally, after determining to filter the candidate object into the target object list in the case that the competition physical ability parameter reaches the physical ability index indicated by the filtering condition, the method further includes: and according to the competition physical ability parameters, corresponding training plans are established for the candidate objects, or initial training plans are adjusted.
According to still another aspect of the embodiment of the present application, there is also provided an object screening apparatus, including: the first processing unit is used for determining N scene images of the candidate object from a scene image sequence obtained by image acquisition of the competition scene where the candidate object is located, wherein N is a positive integer greater than or equal to 1; the first acquisition unit is used for acquiring first position coordinates of positions of the candidate objects in the N scene pictures respectively to obtain a first position coordinate set containing N first position coordinates; the second processing unit is used for mapping each first position coordinate in the first position coordinate set to a scene top view matched with the competition scene respectively to obtain a second position coordinate set containing N second position coordinates, wherein the second position coordinates are coordinates of positions of candidate objects in the scene top view; the third processing unit is used for determining the competition physical performance parameters of the candidate objects in the competition scene by utilizing the movement time sequence relation and the movement state parameters among the second position coordinates in the second position coordinate set; and the fourth processing unit is used for determining to screen the candidate objects into the target object list under the condition that the competition physical ability parameter reaches the physical ability index indicated by the screening condition.
According to still another aspect of the embodiments of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the above-described object screening method when executed by an electronic device.
According to a further aspect of embodiments of the present application, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the above method.
According to still another aspect of the embodiments of the present application, there is also provided an electronic device including a memory in which a computer program is stored, and a processor configured to execute the object screening method by the computer program.
In the embodiments of the present application, scene pictures of the competition scene are acquired in real time; a first position coordinate set of the candidate object's positions in N scene pictures is determined; the first position coordinate set is mapped to a scene top view matched with the competition scene; and the candidate object's competition physical ability parameters in the competition scene are determined using the movement time-sequence relationship and the movement state parameters among the second position coordinates in the mapped second position coordinate set, so that whether the candidate object meets the predetermined requirements can be determined according to those parameters. In other words, through conversion between image coordinates, competition physical ability parameters representing the candidate object's physical data during the competition are obtained, allowing the candidate object to be evaluated more objectively. This solves the technical problem of low accuracy caused by screening candidate objects only by action parameters in the related art, and achieves the technical effect of improving the accuracy of screening results.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application.
Fig. 1 is a schematic diagram of an application scenario of an alternative object screening method according to an embodiment of the present application.
Fig. 2 is a flow chart of an alternative object screening method according to an embodiment of the present application.
Fig. 3 is an overall flow chart of an alternative object screening method according to an embodiment of the present application.
Fig. 4 is an overall schematic diagram of an alternative object screening method according to an embodiment of the present application.
FIG. 5 is a schematic diagram of an alternative scene top view matched with the competition scene according to an embodiment of the application.
Fig. 6 is a schematic diagram of an alternative camera coordinate system according to an embodiment of the application.
Fig. 7 is a schematic diagram of an alternative division of speed intervals according to an embodiment of the present application.
Fig. 8 is a schematic diagram of an alternative scene top view and pairs of matching points in a scene picture according to an embodiment of the application.
Fig. 9 is a schematic diagram of mapping locations of candidates in a scene picture to a scene top view.
FIG. 10 is a data report of candidate objects generated from physical performance data and playing patterns.
Fig. 11 is an overall flowchart of another alternative object screening method according to an embodiment of the present application.
Fig. 12 is a schematic structural view of an alternative object screening apparatus according to an embodiment of the present application.
Fig. 13 is a schematic structural view of an alternative electronic device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solutions in the embodiments of the present application comply with applicable laws and regulations during implementation; when operations are performed according to these technical solutions, the data used does not involve user privacy, and data security is ensured while keeping the operation process compliant.
In addition, when the above embodiments of the present application are applied to a specific product or technology, user approval or consent needs to be obtained, and the collection, use and processing of relevant data needs to comply with relevant regulations and standards of the relevant country or region.
According to an aspect of an embodiment of the present application, there is provided an object screening method. As an alternative embodiment, the object screening method may be applied to, but is not limited to, the application scenario shown in fig. 1. In that scenario, the target terminal 102 may communicate with the server 106 via the network 104, and the server 106 may perform operations on the database 108, such as write-data or read-data operations. The target terminal 102 may include, but is not limited to, a man-machine interaction screen, a processor, and a memory. The man-machine interaction screen may be used, but is not limited to being used, to display the N scene pictures, the scene top view, the target object list, etc. on the target terminal 102. The processor may be configured to perform a corresponding operation in response to the man-machine interaction, or to generate a corresponding instruction and send it to the server 106. The memory is used to store related processing data such as the first position coordinate set, the second position coordinate set, and the movement state parameters.
Alternatively, in this embodiment, the target terminal may be a terminal configured with a target client, and may include, but is not limited to, at least one of the following: a mobile phone (such as an Android or iOS phone), a notebook computer, a tablet computer, a palmtop computer, an MID (Mobile Internet Device), a PAD, a desktop computer, a smart television, etc. The target client may be a video client, an instant messaging client, a browser client, an educational client, or the like. The network may include, but is not limited to, a wired network or a wireless network, where the wired network includes local area networks, metropolitan area networks, and wide area networks, and the wireless network includes Bluetooth, Wi-Fi, and other networks enabling wireless communication. The server may be a single server, a server cluster composed of multiple servers, or a cloud server.
The technical solutions of the embodiments of the present application can be applied to screening potential objects for important projects or to making object training plans, for example screening football players, basketball players, or virtual objects in virtual scenes. The main idea is to acquire images of the candidate object in a real scene and obtain the candidate object's first position coordinates in the images; then determine, through coordinate conversion, a set of position coordinates of the candidate object in the real scene, and generate the candidate object's physical data from this set of position coordinates; and finally determine whether the physical data satisfies the screening condition, thereby determining whether the candidate object is determined as the target object.
In order to solve the problem of low accuracy in the object screening process, an object screening method is provided in the embodiment of the present application, and fig. 2 is a flowchart of the object screening method according to the embodiment of the present application, where the flowchart includes the following steps S202 to S210.
It should be noted that, the object screening method shown in step S202 to step S210 may be performed by, but not limited to, an electronic device, which may be, but not limited to, a target terminal or a server as shown in fig. 1.
Step S202, determining N scene images of the candidate object from a scene image sequence obtained by image acquisition of the competition scene where the candidate object is located, wherein N is a positive integer greater than or equal to 1.
Step S204, first position coordinates of positions of the candidate object in N scene images are obtained, and a first position coordinate set containing N first position coordinates is obtained.
In step S206, each first position coordinate in the first position coordinate set is mapped to a scene top view matched with the competition scene, so as to obtain a second position coordinate set containing N second position coordinates, where the second position coordinates are coordinates of positions of the candidate object in the scene top view.
Step S208, determining the competition physical ability parameters of the candidate object in the competition scene by using the movement time sequence relation and the movement state parameters among the second position coordinates in the second position coordinate set.
Step S210, when the competition physical ability parameter reaches the physical ability index indicated by the screening condition, the candidate object is determined to be screened into the target object list.
In order to facilitate understanding of the above object screening method, in the embodiment of the present application, screening of football players is taken as an example, and the above object screening method is explained.
The basic process of the object screening method will be described with reference to the overall flowchart shown in fig. 3 and the overall schematic diagram shown in fig. 4.
The above competition scenario is shown in fig. 4, in which the football playing field and the candidate object are football players on the field, and specific screening steps of the target football player are given below, and the following steps S302 to S312 may be referred to specifically.
S302, carrying out target recognition on each scene picture in the scene picture sequence acquired in advance to obtain N scene pictures of the candidate object 1.
Obviously, for the same football player (hereinafter simply referred to as the athlete), the positions captured in scene pictures acquired at different moments may differ; the same player may appear at different positions across the images in the acquired image sequence, or appear only in some of the images in the sequence.
The method for obtaining the scene picture sequence in advance may be, but not limited to, as shown in fig. 4, that uses a single fixed camera or other image acquisition device to shoot the scene of the football match in real time, so as to obtain the corresponding scene picture sequence.
S304, the position coordinates (which can be understood as pixel coordinates) of the athlete in each of the N scene pictures are obtained.
In this embodiment, player detection in the football match video may be performed, but is not limited to being performed, using a target detection network, obtaining the pixel coordinates of each player (which can be understood as a candidate object) in the scene pictures. The implementation of determining the first position coordinates in each scene picture will be described in connection with specific embodiments.
Assume that, according to the above target detection method, the first position coordinates of candidate object 1 in the N=3 scene pictures are determined to be L1-C1, L1-C2, and L1-C3, respectively.
As shown in fig. 4, for a plurality of players on a soccer field, the change in position coordinates of each player in a scene picture sequence can be determined by, but not limited to, using a multi-target tracking method.
S306, converting the first position coordinate in the scene picture to the second position coordinate in the scene top view.
As shown in fig. 5, a scene top view similar to the live football field is first determined, wherein the scale between the scene top view and the real football field is k.
And calculating a mapping relation between the second position coordinate in the top view of the scene and the first position coordinate in the scene picture by utilizing a camera imaging principle, so that position coordinate conversion between different camera coordinate systems is realized.
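Once such a mapping has been determined (e.g. as a 3x3 projective matrix from matched point pairs), applying it to a first position coordinate is a straightforward homogeneous transformation. A pure-Python sketch, where `H` and the function name are assumptions for illustration:

```python
def map_point(H, x, y):
    """Map a pixel coordinate (x, y) in the scene picture to a coordinate
    in the scene top view using a 3x3 projective transformation H,
    given as a nested list. H is assumed to be already computed."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)  # homogeneous normalization
```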
S308, converting the second position coordinate in the top view of the scene to the third position coordinate in the competition scene by using the target conversion matrix.
The detailed description of how the target transformation matrix is determined and the specific implementation of the transformation between the second position coordinates and the third position coordinates using the target transformation matrix will be described in connection with specific embodiments.
S310, determining the competition physical ability parameters of the candidate objects by using the third position coordinates.
Specifically, according to the set of second position coordinates in the scene top view, the movement time-sequence relationship of candidate object 1 in the scene top view is determined, and a planar movement route map of candidate object 1 (athlete 1) is generated; for example, the position coordinates L2-C1, L2-C2, and L2-C3 shown in fig. 3 are connected in sequence to obtain the planar movement route map of candidate object 1.
According to the plane movement route map, the movement state parameters such as the movement distance (which can be understood as the running distance) of the candidate object 1 in the running process, the speed information and the like are obtained. The racing physical ability parameters of the candidate 1 in the soccer field are generated based on the movement state parameters.
S312, judging whether the competition physical ability parameter of the candidate object 1 meets the preset screening condition, and determining the candidate object 1 as a target object under the condition that the screening condition is met.
It should be noted that the process of determining whether candidate object 1 is the target object in this embodiment is only an example and is not limiting. For example, multiple candidate objects in the scene picture may be detected at the same time to obtain a set of first position coordinates for each candidate object; each set of first position coordinates may then be converted into a set of second position coordinates in the scene top view; and the competition physical ability parameters of each candidate object on the football field may finally be determined, thereby identifying the candidate objects among the multiple candidate objects that satisfy the screening condition.
By adopting the above method, the project group can quickly acquire the physical ability data of multiple candidate objects in the competition scene and accurately judge whether each candidate object reaches the screening condition, thereby improving the efficiency and accuracy of object screening.
The method comprises: acquiring scene pictures of the competition scene in real time; determining a first position coordinate set of the positions of a candidate object in N scene pictures; mapping the first position coordinate set into a scene top view matched with the competition scene; and determining the competition physical ability parameters of the candidate object in the competition scene by using the movement time sequence relationship and the movement state parameters between the second position coordinates in the mapped second position coordinate set, so as to determine, according to the competition physical ability parameters, whether the candidate object meets the preset requirements. In other words, through the conversion between image coordinates, competition physical ability parameters used for representing the physical ability data of the candidate object in the competition process are obtained, and according to these parameters it can be judged more directly and accurately whether the candidate object meets the preset screening conditions, thereby solving the technical problem in the related art of low accuracy caused by screening candidate objects only by means of action parameters, and achieving the technical effect of improving the accuracy of screening results.
As an optional example, determining the racing physical performance parameter of the candidate object in the racing scene by using the movement timing relationship and the movement state parameter between the second position coordinates in the second position coordinate set includes: determining N third position coordinates of the candidate object by utilizing the target conversion matrix, wherein the N third position coordinates are coordinates of positions of the candidate object in the competition scene at each of N moments, the N moments are moments of collecting N scene images, and the N moments are in one-to-one correspondence with the N scene images; determining the moving state parameters of the candidate objects in the competition scene according to the N third position coordinates and the moving time sequence relation; and determining the competition physical ability parameters of the candidate objects in the competition scene according to the movement state parameters.
In this embodiment, the target transformation matrix may be, but is not limited to, a projection matrix for representing coordinate transformation from one plane to another, also referred to as a homography matrix.
The homography matrix is mainly used in fields such as image correction, image stitching, camera pose estimation and visual SLAM. In general, different camera shooting angles correspond to different homography matrices, and a homography matrix is usually a 3×3 matrix.
In the embodiment of the present application, the target transformation matrix may be, but is not limited to, a homography matrix; specifically, it is a projection matrix that performs coordinate transformation from a first camera coordinate system m in the scene picture to a second camera coordinate system n in the scene top view.
A third position coordinate of the candidate object in the competition scene is obtained according to the target transformation matrix and the second position coordinates in the scene top view; a movement state parameter of the candidate object in the competition scene is then determined according to the movement time sequence relationship between the second position coordinates, and the competition physical ability parameters of the candidate object in the competition scene are further determined.
The movement time sequence relationship between the second position coordinates may, but is not limited to, be obtained by mapping the N first position coordinates of the candidate object in the N scene pictures corresponding to the N times into N second position coordinates in the scene top view in time order. As shown in fig. 3, the time sequence relationship between the second position coordinates L2-C1, L2-C2 and L2-C3 is the same as that between the first position coordinates L1-C1, L1-C2 and L1-C3; that is, the candidate object moves from the point corresponding to L2-C1 to the point corresponding to L2-C2, and then from the point corresponding to L2-C2 to the point corresponding to L2-C3.
After determining the N third location coordinates of the candidate object in the competition scene, the movement state parameters such as the movement distance, the speed information, and the like of the candidate object in the competition scene may be determined, and the implementation process of determining the movement state parameters will be described in connection with the specific embodiment.
As an optional example, determining N third location coordinates of the candidate object using the target transformation matrix includes: determining a position function according to the N second position coordinates, the target conversion matrix and the target scaling, wherein the position function is a function representing the change of coordinates of positions of candidate objects in the competition scene along with time, and the target scaling is used for representing the scaling between the competition scene and the scene top view; n third position coordinates of the candidate object are determined according to the position function.
Assuming that the scaling ratio between the scene top view and the competition scene (which, again, can be understood as the real competition venue) is k, the position coordinates of the candidate object in the competition scene can be obtained by the following formula (1):

L_j(t) = k · H · p_j(t)  (1)

wherein H is the homography matrix from the first camera coordinate system m in the video stream to the second camera coordinate system n in the scene top view, i.e. the above-mentioned target transformation matrix; H · p_j(t) represents the second position coordinate corresponding to the j-th candidate object at time t, and p_j(t) represents the first position coordinate corresponding to the j-th candidate object at time t; each of the first position coordinate and the second position coordinate comprises two coordinate values, on a first coordinate axis u and a second coordinate axis v.
By the position function in the above formula (1), the third position coordinates of each candidate object in the competition scene at different times t (for example, N times in total) can be obtained.
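A small sketch of this position-function pipeline, assuming H is the target transformation matrix and k the scaling ratio (the function names are illustrative only):

```python
def apply_homography(H, u, v):
    """Apply a 3x3 homography (nested lists) to a point,
    including the perspective division."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

def to_real(H, k, u, v):
    """Third position coordinate as in formula (1): map the picture
    point into the top view with H, then scale by the ratio k."""
    x, y = apply_homography(H, u, v)
    return k * x, k * y

# With the identity homography the point is only scaled by k.
real = to_real([[1, 0, 0], [0, 1, 0], [0, 0, 1]], 2.0, 3.0, 4.0)
```

Evaluating this at each of the N acquisition times yields the N third position coordinates.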
By adopting the mode, the position of each candidate object in the competition scene at different times can be rapidly calculated, and the problem of overhigh equipment cost caused by capturing the position of each candidate object in real time by using the dynamic capturing equipment in the related technology is solved.
As an optional example, determining the movement state parameter of the candidate object in the competition scene according to the N third position coordinates and the movement time sequence relation includes: performing differentiation processing on the position function to obtain the speed information of the candidate object; and determining the moving distance of the candidate object in the competition scene according to the moving time sequence relation and the N third position coordinates, wherein the moving state parameters comprise speed information and the moving distance.
The position function L_j(t) in formula (1) is differentiated with respect to time t by the following formula (2) to obtain the speed information of the candidate object:

v_j(t) = dL_j(t)/dt  (2)
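For sampled positions, the differentiation in formula (2) reduces to finite differences; a hedged sketch (names are illustrative):

```python
def velocities(track):
    """Finite-difference stand-in for the differentiation in formula
    (2): velocity components between consecutive (t, x, y) samples."""
    out = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dt = t1 - t0
        out.append(((x1 - x0) / dt, (y1 - y0) / dt))
    return out

v = velocities([(0, 0, 0), (2, 4, 6)])
```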
In the embodiment of the application, by carrying out statistical analysis on the on-field performance and physical ability data of relevant athletes in competitive events, the speed information of a football player on the football field is refined into the speed intervals shown in fig. 7 according to the different states of the player, with each interval corresponding to a different grade of running speed.
Wherein, when the speed is greater than or equal to 0 and less than or equal to 1.2 m/s, the running speed is determined as walking or standing; when the speed is greater than or equal to 2.4 m/s and less than or equal to 3.7 m/s, the running speed is determined as low-speed running; when the speed is greater than or equal to 3.7 m/s and less than or equal to 4.9 m/s, the running speed is determined as medium-speed running; and so on.
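These intervals can be encoded as a simple lookup; note that the 1.2-2.4 m/s band is not specified in the excerpt, so the grade assigned to it below is an assumption, as are the labels above 4.9 m/s:

```python
def speed_grade(v):
    """Map a speed in m/s to the running grades described above.
    The 1.2-2.4 m/s band and the top band are assumed labels."""
    if v < 0:
        raise ValueError("speed must be non-negative")
    if v <= 1.2:
        return "walk/stand"
    if v < 2.4:
        return "jog (assumed)"
    if v <= 3.7:
        return "low-speed run"
    if v <= 4.9:
        return "medium run"
    return "high-speed run or above"

grade = speed_grade(3.0)
```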
In addition to the above speed information, the sum of the running distances of the candidate object (athlete) in the competition scene or competition venue may be determined as the movement distance. The movement state parameters include, but are not limited to, the above speed information and movement distance, where the movement distance can be understood as the running distance, for example the sum of the distances run by the athlete during the time on the field.
As an optional implementation manner, determining the moving distance of the candidate object in the competition scene according to the moving time sequence relation and the N third position coordinates includes: sorting the N third position coordinates according to the moving time sequence relationship to obtain N sorted third position coordinates; and summing the distances between any two adjacent ordered third position coordinates to obtain the moving distance.
In addition to the total movement distance and speed information of the candidate object in the competition scene, physical ability parameters of the candidate object in the competition process, also called competition physical ability parameters, such as the effective running distance, high-speed running distance, number of sprint runs, fastest running speed, number of high-intensity runs and average high-intensity-run intermittent time, can be determined from the speed and time information.
The effective running distance may, but is not limited to, refer to the running distance over which a candidate object (for example, an athlete) reaches medium-speed running or above during the time on the field; the high-speed running distance may, but is not limited to, refer to the sum of the running distances over which the candidate object reaches high-speed running during the time on the field; the sprint running distance may, but is not limited to, refer to the sum of the running distances over which the candidate object reaches sprint speed during the time on the field; and the fastest running speed may, but is not limited to, refer to the highest speed at which the candidate object runs within a certain period during the time on the field.
In addition, the number of times the candidate object runs at different grades of speed during the competition can be counted, for example the number of sprint runs, the number of high-speed runs, and so on. One sprint run is counted when the running speed of the candidate object reaches the sprint range and lasts for more than 0.6 s, and the accumulated number of such runs during the time on the field is counted as the number of sprint runs; one high-speed run is counted when the running speed reaches the high-speed range and lasts for more than 0.6 s, and the accumulated number of such runs during the time on the field is counted as the number of high-speed runs.
In the case where both sprint runs and high-speed runs are defined as high-intensity runs, the sum of the number of high-speed runs and the number of sprint runs is determined as the number of high-intensity runs, and the average intermittent time of high-intensity running can also be calculated: the intermittent duration between every two high-intensity runs is calculated, and the ratio between the accumulated intermittent duration and the number of intermissions is determined as the average intermittent time of high-intensity running.
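Counting qualifying runs and the average intermittent time, as described above, might look like the following sketch; the 0.6 s minimum duration comes from the text, while the 5.5 m/s threshold and all names are assumed values for illustration:

```python
def run_counts(speeds, dt, thresh=5.5, min_dur=0.6):
    """Count runs whose speed stays at or above `thresh` (m/s) for at
    least `min_dur` seconds, given speed samples taken every `dt`
    seconds, and return the average intermittent time between runs."""
    runs, start = [], None
    for i, v in enumerate(speeds + [0.0]):   # sentinel closes a final run
        if v >= thresh and start is None:
            start = i
        elif v < thresh and start is not None:
            if (i - start) * dt >= min_dur:
                runs.append((start * dt, i * dt))  # (begin, end) in seconds
            start = None
    gaps = [s1 - e0 for (_, e0), (s1, _) in zip(runs, runs[1:])]
    avg_gap = sum(gaps) / len(gaps) if gaps else 0.0
    return len(runs), avg_gap

n_runs, avg_gap = run_counts([6.0] * 4 + [1.0] * 5 + [6.0] * 4, dt=0.2)
```

Two qualifying bursts separated by one second of low speed give two runs and a one-second average intermission.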
Parameters such as the effective running distance, high-speed running distance, number of sprint runs and average high-intensity-run intermittent time, calculated from the running distance and speed information obtained in the above embodiments, are the competition physical ability parameters of the candidate object, and the collection of physical ability parameters in different dimensions is referred to as the physical ability data.
The physical performance parameters of the competition can truly reflect the physical performance states of each candidate object in multiple dimensions in the competition scene, so that the physical performance index of the whole candidate object is evaluated, and an effective reference basis is provided for screening target objects.
As an optional example, mapping each first position coordinate in the first position coordinate set to a top view of a scene matched with the competition scene to obtain a second position coordinate set including N second position coordinates includes: obtaining M second position coordinates matched with M first position coordinates in a first position coordinate set, wherein M is a positive integer which is larger than or equal to a preset value and smaller than or equal to N; determining a target transformation matrix according to the M first position coordinates and the M second position coordinates, wherein the second position coordinate set comprises the M second position coordinates; and mapping each first position coordinate in the first position coordinate set to a scene top view by utilizing the target transformation matrix to obtain N second position coordinates in the second position coordinate set.
Firstly, calibrating a scene top view of a competition scene, and obtaining M (M is a positive integer greater than or equal to 4) pairs of matching points in the scene top view and the scene picture, wherein each pair of matching points comprises one point in the scene top view and one point in the scene picture.
Assuming that the position coordinates corresponding to the two matching points in a pair of matching points are q_mi and q_ni, respectively, there is a matching relationship between the two position coordinates and the target transformation matrix as shown in the following formula (3), written in homogeneous coordinates:

q_mi = H · q_ni  (3)

where q_mi is the second position coordinate of a matching point in the scene top view, q_ni is the first position coordinate of the corresponding matching point in the scene picture, q_mi comprises the coordinate value u_mi on the first coordinate axis u and the coordinate value v_mi on the second coordinate axis v of the second camera coordinate system, and q_ni comprises the coordinate value u_ni on the first coordinate axis u and the coordinate value v_ni on the second coordinate axis v of the first camera coordinate system.
Each h_ab (a, b = 1, 2, 3) is a matrix element in the 3×3 target transformation matrix, and the above pair of matching points provides the two equations in the following formula (4):

u_mi = (h11·u_ni + h12·v_ni + h13) / (h31·u_ni + h32·v_ni + h33)
v_mi = (h21·u_ni + h22·v_ni + h23) / (h31·u_ni + h32·v_ni + h33)  (4)

According to the position coordinates corresponding to each matching point in the M pairs of matching points, the homography matrix between the image acquisition device (for example, a camera) and the scene top view, i.e. the target transformation matrix, is determined.
According to the target transformation matrix, each first position coordinate in the first position coordinate set in the scene picture is mapped into the scene top view to obtain the second position coordinate set, specifically through the following formula (5):

p'_j(t) = H · p_j(t)  (5)

where j denotes the j-th candidate object in the competition scene, p'_j(t) represents the second position coordinate of the j-th candidate object in the scene top view at time t, and p_j(t) represents the first position coordinate of the j-th candidate object in the scene picture at time t.
For example, as shown in fig. 9, the position coordinates of each athlete or each candidate in the scene top view are obtained according to the above method, wherein each number represents the position or position coordinates of one athlete in the scene top view.
As an optional implementation manner, determining the target transformation matrix according to the M first position coordinates and the M second position coordinates includes: determining a set of matching relationships among the M first position coordinates, the M second position coordinates, and a set of matrix elements, wherein the set of matrix elements includes matrix elements in the target transformation matrix; determining the value of each matrix element in a group of matrix elements according to a group of matching relations; and determining a target conversion matrix according to the value of each matrix element.
Wherein, a set of matching relations can refer to the description of the formula (3) part, and the process of determining the target transformation matrix is explained below in connection with the specific embodiment.
The specific process of determining the target conversion matrix refers to the following steps (1) - (3).
(1) Firstly, a scene top view resembling the competition scene is obtained, and the scaling ratio k between the scene top view and the real field or competition scene is determined.
(2) According to the pinhole imaging principle of a camera, the relationship shown in the following formula (6) exists between the position coordinates in the scene picture and the position coordinates in the scene top view:

p_n(t) = H_mn · p_m(t)  (6)

wherein p_m(t) is the first position coordinate of the candidate object in the scene picture (which can, in turn, be understood as the image coordinate under the first camera coordinate system), p_n(t) is the second position coordinate of the candidate object in the scene top view (which can, again, be understood as the image coordinate under the second camera coordinate system), and H_mn is the homography matrix from the first camera coordinate system m in the video stream to the second camera coordinate system n in the scene top view, i.e. the target transformation matrix described above, a matrix describing the positional relationship of the same object under the pixel coordinate systems of the two cameras.
In theory, H_mn can be represented by the following formula (7):

H_mn = K (R + t·n^T / d) K^(-1)  (7)

wherein K is the intrinsic parameter matrix of the camera taking the scene picture, R and t are respectively the rotation matrix and the translation vector between the first camera coordinate system and the second camera coordinate system, and n and d are the plane parameters (the normal vector of the scene plane and the distance from the camera to the plane).
The camera coordinate system here may be, but is not limited to, a pixel coordinate system established in units of pixels with the upper left corner of the scene picture as the origin, as shown in fig. 6.
(3) And solving the value of each matrix element in the target conversion matrix according to M pairs of matching points in the scene picture and the scene top view.
As shown in fig. 8, it is assumed that M=4 pairs of matching points are obtained in total by image calibration, namely A_n in the scene picture and A_m in the scene top view, B_n in the scene picture and B_m in the scene top view, C_n in the scene picture and C_m in the scene top view, and D_n in the scene picture and D_m in the scene top view, where the calibrated points are chosen so that no three of them are collinear.
According to the above formula (4), each pair of matching points provides two equations, so 4 pairs of matching points provide 8 equations, from which the 8 unknown matrix elements of the homography matrix (h11, h12, ..., h32, with h33 normalized to 1) can be solved by using the least squares method.
It is obvious that the above M=4 is only an example and is not limiting; for example, 5 or more pairs of matching points may also be used.
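As a rough sketch of this solving step (assuming exactly four matching pairs, so the 8×8 system of formula (4) is solved exactly by Gauss-Jordan elimination rather than least squares; all names are illustrative):

```python
def solve_homography(src, dst):
    """Solve the 3x3 homography mapping four (u, v) points in the
    scene picture onto (x, y) points in the scene top view, with
    h33 fixed to 1. Plain elimination stands in for the
    least-squares solver used when more than 4 pairs are given."""
    M = []
    for (u, v), (x, y) in zip(src, dst):
        # the two equations contributed by one pair of matching points
        M.append([u, v, 1, 0, 0, 0, -x * u, -x * v, x])
        M.append([0, 0, 0, u, v, 1, -y * u, -y * v, y])
    n = len(M)  # 8 equations for exactly 4 pairs
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]   # partial pivoting
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    h = [M[i][n] / M[i][i] for i in range(n)] + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

# Four pairs related by a pure scaling should recover a diagonal H.
H = solve_homography([(0, 0), (1, 0), (0, 1), (1, 1)],
                     [(0, 0), (2, 0), (0, 2), (2, 2)])
```

With noisy or redundant pairs, an overdetermined least-squares (or RANSAC) fit of the same equations is the usual choice.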
In addition, it should be noted that in the above embodiments the camera or other image acquisition device shooting the scene picture has a fixed position and shooting angle, so the homography matrix only needs to be solved once to calculate the position coordinates of each candidate object. In some scenes a moving camera may be used, in which case a corresponding homography matrix needs to be calculated for each frame of image; for example, a deep learning method may be used to determine several pairs of feature points between the competition scene and the scene top view, and a homography matrix is then computed per frame.
By adopting the mode, the homography matrix from the scene picture to the scene top view can be rapidly solved by using an artificial intelligence mode, and the motion of the candidate object in the scene is converted into the scene top view by using the solved homography matrix, so that the moving route of the candidate object in the competition scene is determined, and a foundation is laid for calculating the competition physical performance parameters of the candidate object.
As an optional example, the acquiring the first position coordinates of the positions of the candidate objects in the N scene pictures respectively, to obtain a first position coordinate set including N first position coordinates includes: detecting positions of candidate objects in N scene pictures respectively to obtain N positions; and determining the position coordinates corresponding to the N positions as N first position coordinates.
In the embodiment of the present application, an athlete (candidate object) in the event video may be, but is not limited to being, subjected to object recognition (which can also be understood as object detection) using a target detection network; the N scene pictures in which the candidate object appears are acquired, the pixel coordinates of the candidate object in each scene picture are acquired, and those pixel coordinates are determined as the first position coordinates.
For multiple candidate objects in the competition scene, a multi-target tracking method can be adopted to acquire the first position coordinates of the multiple candidate objects in each scene picture, for example using the ByteTrack algorithm for multi-target tracking.
As an optional example, the acquiring the first position coordinates of the positions of the candidate objects in the N scene pictures respectively, to obtain a first position coordinate set including N first position coordinates includes: detecting positions of candidate objects in Q scene images respectively to obtain Q positions, wherein N scene images comprise Q scene images, and Q is a positive integer which is greater than or equal to 1 and less than N; acquiring position coordinates corresponding to the Q positions respectively to obtain Q position coordinates; determining position coordinates corresponding to the rest positions except the Q positions in the N positions according to the frame rate of the picture and the Q position coordinates, and obtaining rest position coordinates, wherein the frame rate of the picture represents the frame number of the scene picture acquired in the preset time; the Q position coordinates and the remaining portion position coordinates are determined as N first position coordinates.
The manner of obtaining the first position coordinates of each candidate object in the N scene pictures includes, but is not limited to, the following two.
(1) The position coordinates of the candidate object in each of the N scene images are detected in turn using the target detection network directly.
(2) Using the target detection network, only part of the position coordinates of the candidate objects, in part of the scene pictures, are detected; the frame rate information is then used to obtain the pixel coordinates of each candidate object in the video stream at each time t, i.e. the coordinates of the positions of the candidate object in the scene pictures at different times.
With respect to mode (2), the time difference between two first position coordinates can be obtained from the frame rate of the video picture, since adjacent frames are separated by the reciprocal of the frame rate; the time difference may be, but is not limited to, the difference between the times respectively corresponding to the two first position coordinates. Given the i-th first position coordinate of the candidate object in the i-th scene picture and the frame rate, the j-th first position coordinate of the candidate object in the j-th scene picture can then be solved directly, assuming the movement between detections is approximately uniform.
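Mode (2) can be sketched as linear interpolation between detected frames, using the fact that frame i occurs at time i/fps (uniform motion between detections is an added assumption, and all names are illustrative):

```python
def interpolate_track(known, fps, n_frames):
    """Linearly interpolate pixel coordinates for frames that were not
    run through the detector. `known` maps frame index -> (u, v);
    returns frame -> (t, u, v) with t = frame index / fps."""
    idx = sorted(known)
    out = {}
    for i in range(n_frames):
        t = i / fps
        if i in known:
            out[i] = (t,) + tuple(known[i])
            continue
        lo = max((j for j in idx if j < i), default=None)
        hi = min((j for j in idx if j > i), default=None)
        if lo is None or hi is None:
            continue  # cannot extrapolate outside the detected range
        a = (i - lo) / (hi - lo)
        (u0, v0), (u1, v1) = known[lo], known[hi]
        out[i] = (t, u0 + a * (u1 - u0), v0 + a * (v1 - v0))
    return out

track = interpolate_track({0: (0.0, 0.0), 4: (8.0, 4.0)}, fps=25, n_frames=5)
```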
As an optional example, in the case that the competition physical ability parameters reach the physical ability index indicated by the screening condition, determining to screen the candidate object into the target object list includes: determining a distance threshold and a times threshold allowed for the candidate object moving in the competition scene when moving according to a preset competition route; determining to screen the candidate object into the target object list in the case that the actual moving distance of the candidate object is smaller than the distance threshold; or determining to screen the candidate object into the target object list in the case that the actual number of moves of the candidate object is greater than the times threshold; or determining to screen the candidate object into the target object list in the case that the actual moving distance of the candidate object is smaller than the distance threshold and the actual number of moves of the candidate object is greater than the times threshold; wherein the competition physical ability parameters include the actual moving distance and the actual number of moves.
For the competition physical ability parameters of the candidate objects obtained according to the above embodiments, information analysis may be directly performed to generate physical ability data of each candidate object, and then whether the physical ability data satisfies the screening condition may be determined, thereby determining whether the candidate object is determined as the target object.
In addition, whether the candidate object satisfies the screening condition may be determined by at least one of the following means.
(1) And estimating the maximum moving distance and the maximum moving times of the candidate object in the competition scene according to the preset competition route.
(2) And comparing the actual moving distance with the maximum moving distance, the actual moving times with the maximum moving times of the candidate object, and judging whether the screening condition is met or not according to the comparison result.
(3) The candidate is more fully evaluated with reference to its racing performance parameters and related performance on the playing field (e.g., score, pass success rate, ball control rate, etc.).
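The threshold comparison in means (1) and (2) can be sketched as follows; this is one hedged reading of the condition, and the parameter names are illustrative rather than taken from the source:

```python
def passes_screening(actual_distance, actual_moves,
                     distance_threshold, times_threshold):
    """Candidate qualifies when the actual moving distance stays below
    the allowed distance threshold, or the actual number of moves
    exceeds the times threshold (or both)."""
    return (actual_distance < distance_threshold or
            actual_moves > times_threshold)

ok = passes_screening(actual_distance=900.0, actual_moves=12,
                      distance_threshold=1000.0, times_threshold=20)
```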
In the above manner, each candidate object can be evaluated more objectively according to its physical ability data and on-field performance, helping a coach screen out excellent players; it can also help the project group and coach recruit athletes on a wider scale.
As an optional example, after determining to filter the candidate object into the target object list in the case that the racing performance parameter reaches the performance index indicated by the filtering condition, the method further includes: and according to the competition physical ability parameters, corresponding training plans are established for the candidate objects, or initial training plans are adjusted.
According to the above method, the competition physical ability parameters of each candidate object are obtained and combined with each candidate object's on-field performance to form a report as shown in fig. 10, helping the project group or a coach with selection, customizing a personalized training plan for each candidate object, or adjusting a historical training plan in real time.
In order to more clearly understand the above object screening method, the following description is further given with reference to the overall flowchart shown in fig. 11.
S1100, obtaining a scene top view similar to the competition scene.
Specifically, reference may be made to the top view of the scene shown in fig. 5, where a preset scaling ratio is provided between the two.
S1102, obtaining a video stream of a competition picture in a competition scene.
This may be, but is not limited to, a video stream obtained by photographing the competition scene with a fixed camera.
S1104, performing target detection on a scene picture sequence corresponding to the video stream to obtain N scene pictures of the candidate object.
S1106, acquiring N first position coordinates of the candidate object in N scene pictures by utilizing a multi-target tracking algorithm.
A multi-target tracking algorithm can be utilized to track the pixel coordinates of each candidate object in the scene picture; or, the multi-target tracking algorithm is utilized to obtain N first position coordinates of each candidate object in N scene pictures in combination with frame rate information, and the description in the above embodiment may be referred to for details, which are not repeated herein.
S1108, calibrating the scene picture and the scene top view to obtain a projection matrix from a first camera coordinate system where the scene picture is located to a second camera coordinate system where the scene top view is located.
The implementation process of determining the projection matrix (target transformation matrix) may refer to the description in the above embodiment, and will not be repeated here.
S1110, determining the second position coordinates of the candidate object in the scene top view by using the projection matrix, and obtaining the third position coordinates of the candidate object in the competition scene according to the moving time sequence relation.
According to the third position coordinates, the movement state parameters of the candidate objects are obtained, and then the movement state parameters are analyzed and processed to obtain a plurality of competition physical performance parameters in different dimensions, and particularly, reference can be made to fig. 10.
S1112, generating a physical ability report according to the competition physical ability parameters.
S1114, it is determined whether the screening condition is satisfied based on the performance data, and/or the field performance of the candidate.
The competition field performance is data obtained through real-time recording in the competition process.
In the embodiment of the application, an artificial intelligence-based football player recruitment/screening technique is provided. First, n points (n >= 4) in the video field of view of a relevant football event are calibrated against a top view of the football field to obtain a homography matrix between the camera picture and the field picture. After the video frames are processed by an image detection network, the pixel coordinates of the players in each frame are obtained, and the pixel coordinate changes of each player are tracked through a multi-target tracking algorithm. The pixel coordinates of the players on the top view of the event are then obtained through the homography matrix, and the position of each player at each moment of the match is obtained by combining the scaling between the field picture and the real field. Finally, the speed information and acceleration information of each player at each moment are obtained through simple differential processing.
After this information is analyzed, the physical ability data of the athletes on the competition field can be obtained through the calculation methods of the related physical ability indexes, and a physical ability report for each athlete in the competition can be obtained by simple processing of the physical ability data. By combining the physical ability report with each athlete's on-field performance (such as score, pass success rate and ball possession rate), a football coach can understand the athletes on the field more comprehensively, which helps the coach screen excellent players or conduct personalized training.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
According to still another aspect of the embodiment of the present application, there is also provided an object screening apparatus as shown in fig. 12, including: a first processing unit 1202, configured to determine N scene images appearing on a candidate object from a scene image sequence obtained by performing image acquisition on a competition scene where the candidate object is located, where N is a positive integer greater than or equal to 1; a first obtaining unit 1204, configured to obtain first position coordinates of positions of the candidate object appearing in the N scene pictures, respectively, to obtain a first position coordinate set including N first position coordinates; a second processing unit 1206, configured to map each first position coordinate in the first position coordinate set to a scene top view matched with the competition scene, to obtain a second position coordinate set including N second position coordinates, where the second position coordinates are coordinates of a position where the candidate object appears in the scene top view; a third processing unit 1208, configured to determine a competition physical performance parameter of the candidate object in the competition scene by using the movement timing relationship and the movement status parameter between the second position coordinates in the second position coordinate set; the fourth processing unit 1210 is configured to determine to screen the candidate object into the target object list in the case where the racing performance parameter reaches the performance index indicated by the screening condition.
Optionally, the third processing unit 1208 includes: the first processing module is used for determining N third position coordinates of the candidate object by utilizing the target conversion matrix, wherein the N third position coordinates are coordinates of positions of the candidate object in the competition scene at each of N moments, the N moments are moments of collecting N scene images, and the N moments are in one-to-one correspondence with the N scene images; the second processing module is used for determining the moving state parameters of the candidate objects in the competition scene according to the N third position coordinates and the moving time sequence relation; and the third processing module is used for determining the competition physical performance parameters of the candidate objects in the competition scene according to the movement state parameters.
Optionally, the first processing module includes: the first processing submodule is used for determining a position function according to N second position coordinates, a target conversion matrix and a target scaling, wherein the position function is a function which represents the change of coordinates of positions of candidate objects in a competition scene along with time, and the target scaling is used for representing the scaling between the competition scene and a scene top view; and the second processing submodule is used for determining N third position coordinates of the candidate object according to the position function.
Optionally, the second processing module includes: the third processing sub-module is used for carrying out differentiation processing on the position function to obtain the speed information of the candidate object; and the fourth processing sub-module is used for determining the moving distance of the candidate object in the competition scene according to the moving time sequence relation and the N third position coordinates, wherein the moving state parameters comprise speed information and the moving distance.
Optionally, the second processing module includes: the sorting sub-module is used for sorting the N third position coordinates according to the moving time sequence relationship to obtain N sorted third position coordinates; and the summation sub-module is used for summing the distances between any two adjacent ordered third position coordinates to obtain the moving distance.
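The sorting and summation sub-modules above can be sketched as follows, assuming (as an illustration only) that third position coordinates are stored as `(t, x, y)` triples in seconds and metres:

```python
# Illustrative computation of the moving distance: order the third position
# coordinates by their timestamps, then sum the distances between each pair
# of adjacent ordered coordinates.
import math

def moving_distance(timed_coords):
    """timed_coords: list of (t, x, y) triples, in any order."""
    ordered = sorted(timed_coords)                    # sort by timestamp t
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(ordered, ordered[1:]))
```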
Optionally, the second processing unit 1206 includes: the first acquisition module is used for acquiring M second position coordinates matched with M first position coordinates in the first position coordinate set, wherein M is a positive integer which is larger than or equal to a preset value and smaller than or equal to N; the fourth processing module is used for determining a target conversion matrix according to the M first position coordinates and the M second position coordinates, wherein the second position coordinate set comprises the M second position coordinates; and the fifth processing module is used for mapping each first position coordinate in the first position coordinate set to the scene top view by utilizing the target conversion matrix to obtain N second position coordinates in the second position coordinate set.
Optionally, the fourth processing module includes: a fifth processing sub-module, configured to determine a set of matching relationships between the M first position coordinates, the M second position coordinates, and a set of matrix elements, where a set of matrix elements includes matrix elements in the target transformation matrix; a sixth processing sub-module, configured to determine a value of each matrix element in the set of matrix elements according to the set of matching relationships; and the seventh processing sub-module is used for determining a target conversion matrix according to the value of each matrix element.
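As an illustration of how the group of matching relationships determines the matrix elements: fixing h33 = 1, each point match (x, y) → (u, v) contributes two linear equations in the eight remaining elements of the target conversion matrix, so M = 4 matches suffice. The sketch below solves the resulting 8×8 system by Gaussian elimination; a production implementation would more likely call `cv2.findHomography`:

```python
# Hedged sketch of determining the target conversion matrix (homography) from
# four point matches between the scene picture and the scene top view.
# With h33 fixed to 1, each match (x, y) -> (u, v) gives:
#   h11*x + h12*y + h13 - u*(h31*x + h32*y) = u
#   h21*x + h22*y + h23 - v*(h31*x + h32*y) = v

def solve_homography(src, dst):
    """src, dst: four (x, y) pairs in the scene picture / scene top view."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u*x, -u*y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v*x, -v*y]); b.append(v)
    n = 8
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (b[r] - sum(A[r][c] * h[c] for c in range(r + 1, n))) / A[r][r]
    return [h[0:3], h[3:6], h[6:8] + [1.0]]
```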
Optionally, the first obtaining unit 1204 includes: the first detection module is used for detecting positions of candidate objects in N scene pictures respectively to obtain N positions; and the sixth processing module is used for determining the position coordinates corresponding to the N positions as N first position coordinates.
Optionally, the first obtaining unit 1204 includes: the second detection module is used for detecting positions of candidate objects in Q scene images respectively to obtain Q positions, wherein N scene images comprise Q scene images, and Q is a positive integer which is greater than or equal to 1 and less than N; the second acquisition module is used for acquiring the position coordinates corresponding to the Q positions respectively to obtain the Q position coordinates; a seventh processing module, configured to determine, according to a frame rate of a frame and Q position coordinates, position coordinates corresponding to remaining positions except the Q positions in the N positions, to obtain remaining position coordinates, where the frame rate of the frame represents a frame number of a scene frame acquired in a preset time; and an eighth processing module, configured to determine the Q position coordinates and the remaining position coordinates as N first position coordinates.
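A hedged sketch of the seventh processing module, assuming frames are equally spaced in time (constant picture frame rate) and that each undetected frame lies between two detected ones, so that the remaining position coordinates can be estimated by linear interpolation:

```python
# Illustrative sketch: positions the detector missed are filled in by linear
# interpolation between the nearest detected coordinates; the constant frame
# rate places every frame at a known point on the time axis.

def fill_missing(coords):
    """coords: per-frame (x, y) tuples, or None for undetected frames.
    Assumes every None frame has a detected frame on both sides."""
    filled = list(coords)
    known = [i for i, c in enumerate(coords) if c is not None]
    for i, c in enumerate(coords):
        if c is None:
            left = max(k for k in known if k < i)
            right = min(k for k in known if k > i)
            f = (i - left) / (right - left)          # fraction of the gap
            (x1, y1), (x2, y2) = coords[left], coords[right]
            filled[i] = (x1 + f * (x2 - x1), y1 + f * (y2 - y1))
    return filled
```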
Optionally, the fourth processing unit 1210 includes: a ninth processing module, configured to determine a distance threshold and a frequency threshold that allow the candidate object to move in the competition scene when moving according to the preset competition route; a tenth processing module, configured to determine to screen the candidate object into the target object list if the actual moving distance of the candidate object is less than the distance threshold; or under the condition that the actual moving times of the candidate objects are larger than a time threshold, determining to screen the candidate objects into a target object list; or under the condition that the actual moving distance of the candidate object is smaller than a distance threshold value and the actual moving times of the candidate object is larger than a times threshold value, determining to screen the candidate object into a target object list; the competition physical ability parameters comprise actual moving distance and actual moving times.
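The threshold logic of the ninth and tenth processing modules can be sketched as follows; the `mode` parameter expressing the "or"/"and" combination is an assumed name, not part of the embodiment:

```python
# Illustrative screening decision: the candidate object passes when its actual
# moving distance is below the distance threshold, and/or its actual number of
# moves is above the times threshold, both derived from the preset route.

def passes_screening(actual_distance, actual_moves,
                     distance_threshold, moves_threshold, mode="or"):
    below_dist = actual_distance < distance_threshold
    above_moves = actual_moves > moves_threshold
    if mode == "and":
        return below_dist and above_moves
    return below_dist or above_moves
```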
Optionally, the apparatus further includes: and the fifth processing unit is used for setting a corresponding training plan for the candidate object or adjusting an initial training plan according to the competition physical ability parameter after the candidate object is determined to be screened into the target object list under the condition that the competition physical ability parameter reaches the physical ability index indicated by the screening condition.
By applying the above device, the conversion between image coordinate systems yields competition physical ability parameters that represent the physical ability data of the candidate object during the competition. The candidate object can therefore be evaluated more objectively through the competition physical ability parameters, which solves the technical problem in the related art of low accuracy caused by screening candidate objects only by means of action parameters, and achieves the technical effect of improving the accuracy of screening results.
It should be noted that, the embodiments of the object screening apparatus herein may refer to the embodiments of the object screening method described above, and will not be described herein again.
According to still another aspect of the embodiment of the present application, there is also provided an electronic device for implementing the above object screening method, where the electronic device may be a target terminal or a server shown in fig. 1. The present embodiment is described taking the electronic device as a target terminal as an example. As shown in fig. 13, the electronic device comprises a memory 1302 and a processor 1304, the memory 1302 having stored therein a computer program, the processor 1304 being arranged to perform the steps of any of the method embodiments described above by means of the computer program.
Alternatively, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of the computer network.
Alternatively, in the present embodiment, the above-mentioned processor may be configured to execute the following steps S1 to S5 by a computer program.
S1, determining N scene images of the candidate object from a scene image sequence obtained by image acquisition of the competition scene where the candidate object is located, wherein N is a positive integer greater than or equal to 1.
S2, obtaining first position coordinates of positions of the candidate objects in the N scene pictures respectively, and obtaining a first position coordinate set containing N first position coordinates.
And S3, mapping each first position coordinate in the first position coordinate set to a scene top view matched with the competition scene to obtain a second position coordinate set containing N second position coordinates, wherein the second position coordinates are coordinates of positions of candidate objects in the scene top view.
S4, determining the competition physical performance parameters of the candidate objects in the competition scene by utilizing the movement time sequence relation and the movement state parameters among the second position coordinates in the second position coordinate set.
And S5, under the condition that the competition physical ability parameter reaches the physical ability index indicated by the screening condition, the candidate object is determined to be screened into the target object list.
Alternatively, it will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 13 is merely illustrative and does not limit the configuration of the above electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 13, or have a different configuration from that shown in fig. 13.
The memory 1302 may be used to store software programs and modules, such as program instructions/modules corresponding to the object screening method and apparatus in the embodiments of the present application, and the processor 1304 executes the software programs and modules stored in the memory 1302, thereby performing various functional applications and data processing, that is, implementing the object screening method described above. Memory 1302 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 1302 may further include memory located remotely from processor 1304, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1302 may be used to store, but is not limited to, N scene pictures, a scene top view, a first set of position coordinates, and the like. As an example, as shown in fig. 13, the memory 1302 may include, but is not limited to, the first processing unit 1202, the first acquiring unit 1204, the second processing unit 1206, the third processing unit 1208, and the fourth processing unit 1210 in the object screening apparatus. In addition, other module units in the object screening apparatus may be included, but are not limited to, and are not described in detail in this example.
Optionally, the transmission device 1306 is configured to receive or transmit data via a network. Specific examples of the network described above may include wired networks and wireless networks. In one example, the transmission means 1306 comprises a network adapter (Network Interface Controller, NIC) which can be connected to other network devices and routers via a network cable so as to communicate with the internet or a local area network. In one example, the transmission device 1306 is a Radio Frequency (RF) module for communicating wirelessly with the internet.
In addition, the electronic device further includes: a display 1308 for displaying the scene and the target object list; and a connection bus 1310 for connecting the respective module components in the above-described electronic device.
In other embodiments, the target terminal or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting a plurality of nodes through network communication. The nodes may form a peer-to-peer network, and any type of computing device, such as a server or a target terminal, may become a node in the blockchain system by joining the peer-to-peer network.
According to yet another aspect of the present application, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the object screening method provided in the various alternative implementations described above, where the computer program is configured to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for executing the following steps S1 to S5.
S1, determining N scene images of the candidate object from a scene image sequence obtained by image acquisition of the competition scene where the candidate object is located, wherein N is a positive integer greater than or equal to 1.
S2, obtaining first position coordinates of positions of the candidate objects in the N scene pictures respectively, and obtaining a first position coordinate set containing N first position coordinates.
And S3, mapping each first position coordinate in the first position coordinate set to a scene top view matched with the competition scene to obtain a second position coordinate set containing N second position coordinates, wherein the second position coordinates are coordinates of positions of candidate objects in the scene top view.
S4, determining the competition physical performance parameters of the candidate objects in the competition scene by utilizing the movement time sequence relation and the movement state parameters among the second position coordinates in the second position coordinate set.
And S5, under the condition that the competition physical ability parameter reaches the physical ability index indicated by the screening condition, the candidate object is determined to be screened into the target object list.
Alternatively, in embodiments of the present application, the term "module" or "unit" refers to a computer program or a part of a computer program having a predetermined function and working together with other relevant parts to achieve a predetermined object, and may be implemented in whole or in part by using software, hardware (such as a processing circuit or a memory), or a combination thereof. Also, a processor (or multiple processors or memories) may be used to implement one or more modules or units. Furthermore, each module or unit may be part of an overall module or unit that incorporates the functionality of the module or unit.
Alternatively, in this embodiment, it will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be performed by a program for instructing the target terminal related hardware, and the program may be stored in a computer readable storage medium, where the storage medium may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments may be stored in the above-described computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; the described division into units is merely a logical functional division, and there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.

Claims (14)

1. An object screening method, comprising:
Determining N scene images of a candidate object from a scene image sequence obtained by image acquisition of a competition scene in which the candidate object is located, wherein N is a positive integer greater than or equal to 1;
Acquiring first position coordinates of positions of the candidate objects in the N scene images respectively to obtain a first position coordinate set containing N first position coordinates;
Mapping each first position coordinate in the first position coordinate set to the scene top view matched with the competition scene by using a target conversion matrix constructed based on the scene picture and the scene top view matched with the competition scene to obtain a second position coordinate set containing N second position coordinates, wherein the second position coordinates are coordinates of positions of the candidate object in the scene top view, and the target conversion matrix is used for indicating a position conversion relation between a display position of the same candidate object in a position coordinate system corresponding to the scene picture and an appearance position of the same candidate object in a position coordinate system corresponding to the scene top view;
Determining a competition physical ability parameter of the candidate object in the competition scene by using a movement time sequence relation and a movement state parameter among the second position coordinates in the second position coordinate set, wherein the method comprises the following steps: determining a position function according to the N second position coordinates, the target transformation matrix and the target scaling; determining N third position coordinates of the candidate object according to the position function; determining the movement state parameters of the candidate objects in the competition scene according to the N third position coordinates and the movement time sequence relation; determining the competition physical performance parameter of the candidate object in the competition scene according to the movement state parameter, wherein the competition field of the candidate object in the competition scene is represented by data obtained by real-time recording in the competition process, the position function is a function of time change of coordinates representing positions of the candidate object in the competition scene, the target scaling is used for representing scaling between the competition scene and the scene top view, products between the target scaling and N second position coordinates corresponding to the candidate object in different times are determined as N third position coordinates obtained by utilizing the position function, and the N second position coordinates are obtained by multiplying the target conversion matrix and the N first position coordinates, and the N third position coordinates represent the position coordinates of the candidate object in the competition scene in different times; obtaining the competition field performance of the candidate object in the competition scene;
generating a competition performance report of the candidate object in the competition scene based on the competition physical performance parameters and the competition field performance;
and determining whether screening conditions are met according to the competition physical performance parameters in the competition performance report and the competition field performance in the competition performance report, and screening the candidate objects meeting the screening conditions into a target object list.
2. The method of claim 1, wherein determining the game performance parameters of the candidate object in the game scene using the movement timing relationship and the movement state parameters between each of the second set of position coordinates further comprises:
And determining the N third position coordinates of the candidate object by using the target conversion matrix, wherein the N third position coordinates are coordinates of positions of the candidate object in the competition scene at each of N moments, the N moments are moments when the N scene pictures are acquired, and the N moments are in one-to-one correspondence with the N scene pictures.
3. The method of claim 1, wherein the determining the movement state parameters of the candidate object in the competition scene according to the N third position coordinates and the movement timing relationship comprises:
performing differentiation processing on the position function to obtain the speed information of the candidate object;
And determining the moving distance of the candidate object in the competition scene according to the moving time sequence relation and the N third position coordinates, wherein the moving state parameters comprise the speed information and the moving distance.
4. A method according to claim 3, wherein said determining a distance of movement of said candidate object in said competition scene based on said movement timing relationship and said N third position coordinates comprises:
sorting the N third position coordinates according to the movement time sequence relation to obtain N sorted third position coordinates;
And summing the distances between any two adjacent ordered third position coordinates to obtain the moving distance.
5. The method of claim 1, wherein mapping each of the first position coordinates in the first set of position coordinates to the top view of the scene matching the competition scene to obtain a second set of position coordinates including N second position coordinates, comprises:
Obtaining M second position coordinates matched with M first position coordinates in the first position coordinate set, wherein M is a positive integer which is larger than or equal to a preset value and smaller than or equal to N;
determining a target transformation matrix according to the M first position coordinates and the M second position coordinates, wherein the second position coordinate set comprises the M second position coordinates;
And mapping each first position coordinate in the first position coordinate set to the scene top view by using the target conversion matrix to obtain the N second position coordinates in the second position coordinate set.
6. The method of claim 5, wherein determining a target transformation matrix from the M first location coordinates and the M second location coordinates comprises:
Determining a set of matching relationships among the M first position coordinates, the M second position coordinates, and a set of matrix elements, wherein the set of matrix elements includes matrix elements in the target transformation matrix;
Determining the value of each matrix element in the group of matrix elements according to the group of matching relations;
And determining the target conversion matrix according to the value of each matrix element.
7. The method according to claim 1, wherein the obtaining first position coordinates of positions of the candidate object in the N scene pictures respectively, to obtain a first position coordinate set including N first position coordinates, includes:
Detecting positions of the candidate objects in the N scene images respectively to obtain N positions;
and determining the position coordinates corresponding to the N positions as the N first position coordinates.
8. The method of claim 7, wherein the obtaining first position coordinates of positions of the candidate object in the N scene frames respectively, to obtain a first set of position coordinates including N first position coordinates, comprises:
detecting positions of the candidate object in Q scene images respectively to obtain Q positions, wherein the N scene images comprise the Q scene images, and Q is a positive integer which is greater than or equal to 1 and less than N;
acquiring position coordinates corresponding to the Q positions respectively to obtain Q position coordinates;
Determining position coordinates corresponding to the rest positions except the Q positions in the N positions according to a picture frame rate and the Q position coordinates, and obtaining rest position coordinates, wherein the picture frame rate represents the number of frames of a scene picture acquired in preset time;
And determining the Q position coordinates and the rest position coordinates as the N first position coordinates.
9. The method according to claim 1, wherein the determining whether a screening condition is satisfied according to the competition physical performance parameters in the competition performance report and the competition field performance in the competition performance report, and screening the candidate object satisfying the screening condition into a target object list, comprises:
determining a distance threshold and a count threshold permitted for the candidate object when moving in the competition scene along a preset competition route;
determining to screen the candidate object into the target object list when the actual moving distance of the candidate object is smaller than the distance threshold; or determining to screen the candidate object into the target object list when the actual movement count of the candidate object is greater than the count threshold; or determining to screen the candidate object into the target object list when the actual moving distance of the candidate object is smaller than the distance threshold and the actual movement count of the candidate object is greater than the count threshold;
wherein the competition physical performance parameters include the actual moving distance and the actual movement count.
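The three alternative branches recited in claim 9 (distance below threshold, count above threshold, or both) collapse into a single disjunction, since the conjunctive branch is subsumed by either disjunct. A minimal sketch, with hypothetical names:

```python
def meets_screening_condition(actual_distance, actual_move_count,
                              distance_threshold, count_threshold):
    """Screening test of claim 9: the candidate is screened into the
    target object list if its actual moving distance is below the
    permitted distance threshold, or its actual movement count exceeds
    the count threshold (the 'and' branch is covered by either case)."""
    return (actual_distance < distance_threshold
            or actual_move_count > count_threshold)
```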
10. The method according to any one of claims 1 to 9, wherein after the determining whether a screening condition is satisfied according to the competition physical performance parameters in the competition performance report and the competition field performance in the competition performance report, and screening the candidate object satisfying the screening condition into a target object list, the method further comprises:
formulating a corresponding training plan for the candidate object, or adjusting an initial training plan, according to the competition physical performance parameters.
11. An object screening apparatus, comprising:
a first processing unit, configured to determine N scene pictures containing the candidate object from a scene picture sequence obtained by performing image acquisition on the competition scene where the candidate object is located, wherein N is a positive integer greater than or equal to 1;
a first acquisition unit, configured to acquire first position coordinates of the positions of the candidate object in the N scene pictures, respectively, to obtain a first position coordinate set containing N first position coordinates;
a second processing unit, configured to map each first position coordinate in the first position coordinate set to a scene plan view matched with the competition scene by using a target transformation matrix constructed based on the scene picture and the scene plan view matched with the competition scene, to obtain a second position coordinate set containing N second position coordinates, wherein the second position coordinates are coordinates of the positions where the candidate object appears in the scene plan view, and the target transformation matrix is used to indicate a position transformation relationship between a display position of the same candidate object in a position coordinate system corresponding to the scene picture and an appearance position of the same candidate object in a position coordinate system corresponding to the scene plan view;
a third processing unit, configured to determine competition physical performance parameters of the candidate object in the competition scene by using a movement time sequence relationship among the second position coordinates in the second position coordinate set and movement state parameters, wherein a position function is determined according to the N second position coordinates, the target transformation matrix and a target scaling; N third position coordinates of the candidate object are determined according to the position function; the movement state parameters of the candidate object in the competition scene are determined according to the N third position coordinates and the movement time sequence relationship; and the competition physical performance parameters of the candidate object in the competition scene are determined according to the movement state parameters; wherein the competition field performance of the candidate object in the competition scene is represented by data recorded in real time during the competition, the position function is a function representing how the coordinates of the positions of the candidate object in the competition scene change over time, the target scaling is used to represent the scaling between the competition scene and the scene plan view, the products of the target scaling and the N second position coordinates corresponding to the candidate object at different times are determined as the N third position coordinates obtained by using the position function, the N second position coordinates are obtained by multiplying the target transformation matrix and the N first position coordinates, and the N third position coordinates represent the position coordinates of the candidate object in the competition scene at different times; the third processing unit is further configured to obtain the competition field performance of the candidate object in the competition scene, and to generate a competition performance report of the candidate object in the competition scene based on the competition physical performance parameters and the competition field performance;
a fourth processing unit, configured to determine whether a screening condition is satisfied according to the competition physical performance parameters in the competition performance report and the competition field performance in the competition performance report, and to screen the candidate object satisfying the screening condition into a target object list.
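The coordinate pipeline recited in claim 11 (pixel position in the scene picture → plan-view position via the target transformation matrix → real-world "third" coordinates via the target scaling → movement-state parameters) can be sketched as follows. The 3×3 homography `H`, the scale factor, and the frame interval are illustrative placeholders, not values from the patent:

```python
import numpy as np

def map_to_plan_view(H, first_coords):
    """Map pixel coordinates from a scene picture to the scene plan view
    using a 3x3 homography H (the claim's target transformation matrix)."""
    pts = np.hstack([first_coords, np.ones((len(first_coords), 1))])  # homogeneous
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]  # back to Cartesian coordinates

def physical_parameters(second_coords, scale, frame_interval):
    """Scale plan-view (second) coordinates to real-world (third)
    coordinates and derive movement-state parameters from them."""
    third = np.asarray(second_coords) * scale          # claim's target scaling
    steps = np.linalg.norm(np.diff(third, axis=0), axis=1)  # per-frame distances
    return {
        "total_distance": float(steps.sum()),
        "mean_speed": float(steps.mean() / frame_interval) if len(steps) else 0.0,
    }
```

In practice the homography would be estimated once from corresponding reference points in a scene picture and the scene plan view (e.g. field markings), then reused for every frame.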
12. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program, when run by a terminal device or a computer, performs the method according to any one of claims 1 to 10.
13. A computer program product comprising a computer program/instructions which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
14. An electronic device comprising a memory and a processor, characterized in that the memory stores a computer program, and the processor is configured to execute the method according to any one of claims 1 to 10 by means of the computer program.
CN202410341988.1A 2024-03-25 2024-03-25 Object screening method and device, storage medium and electronic equipment Active CN117934805B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410341988.1A CN117934805B (en) 2024-03-25 2024-03-25 Object screening method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN117934805A CN117934805A (en) 2024-04-26
CN117934805B true CN117934805B (en) 2024-07-12

Family

ID=90768766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410341988.1A Active CN117934805B (en) 2024-03-25 2024-03-25 Object screening method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN117934805B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119169323B (en) * 2024-11-20 2025-05-16 福瑞泰克智能系统有限公司 Road object matching method, device, storage medium and electronic device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN114329072A (en) * 2021-12-23 2022-04-12 北京市商汤科技开发有限公司 Video processing method and device, electronic equipment and storage medium
CN115191008A (en) * 2021-12-10 2022-10-14 商汤国际私人有限公司 Object identification method, device, equipment and storage medium
CN116309686A (en) * 2023-05-19 2023-06-23 北京航天时代光电科技有限公司 Video positioning and speed measuring method, device and equipment for swimmers and storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US7702673B2 (en) * 2004-10-01 2010-04-20 Ricoh Co., Ltd. System and methods for creation and use of a mixed media environment
KR20140137893A (en) * 2013-05-24 2014-12-03 한국전자통신연구원 Method and appratus for tracking object
JP6276719B2 (en) * 2015-02-05 2018-02-07 クラリオン株式会社 Image generation device, coordinate conversion table creation device, and creation method
CN113808200B (en) * 2021-08-03 2023-04-07 嘉洋智慧安全科技(北京)股份有限公司 Method and device for detecting moving speed of target object and electronic equipment
CN114332444B (en) * 2021-12-27 2023-06-16 中国科学院光电技术研究所 Complex star sky background target identification method based on incremental drift clustering


Similar Documents

Publication Publication Date Title
CN111724414B (en) A basketball motion analysis method based on 3D pose estimation
CN109190508B (en) Multi-camera data fusion method based on space coordinate system
US20180137363A1 (en) System for the automated analisys of a sporting match
CN108229294B (en) Motion data acquisition method and device, electronic equipment and storage medium
CN111444890A (en) Sports data analysis system and method based on machine learning
CN109313805A (en) Image processing apparatus, image processing system, image processing method and program
CN107820593A (en) A kind of virtual reality exchange method, apparatus and system
TWI537872B (en) Method for generating three-dimensional information from identifying two-dimensional images.
CN113297883B (en) Information processing method, method for obtaining analysis model, device and electronic equipment
KR20150039252A (en) Apparatus and method for providing application service by using action recognition
CN117934805B (en) Object screening method and device, storage medium and electronic equipment
CN109313806A (en) Image processing apparatus, image processing system, image processing method and program
CN113902084A (en) Motion counting method and device, electronic equipment and computer storage medium
CN114495169A (en) Training data processing method, device and equipment for human body posture recognition
CN108370412A (en) Control device, control method and program
CN116703968B (en) Visual tracking method, device, system, equipment and medium for target object
CN114037923A (en) Target activity hotspot graph drawing method, system, equipment and storage medium
EP4455905A2 (en) A computing system and a computer-implemented method for sensing gameplay events and augmentation of video feed with overlay
CN114120168A (en) Target running distance measuring and calculating method, system, equipment and storage medium
CN113569693A (en) Motion state identification method, device and equipment
Quattrocchi et al. Put Your PPE on: A Tool for Synthetic Data Generation and Related Benchmark in Construction Site Scenarios.
CN111898471A (en) Pedestrian tracking method and device
CN117893563A (en) Sphere tracking system and method
CN115457176A (en) Image generation method and device, electronic equipment and storage medium
CN113971693A (en) Live screen generation method, system, device and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant