
WO2018121541A1 - User attribute extraction method and apparatus, and electronic device - Google Patents

User attribute extraction method and apparatus, and electronic device

Info

Publication number
WO2018121541A1
WO2018121541A1 (PCT/CN2017/118705)
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
image data
user attribute
information
attribute information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/118705
Other languages
English (en)
French (fr)
Inventor
张帆
彭彬绪
陈楷佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to US16/314,410 priority Critical patent/US20190228227A1/en
Publication of WO2018121541A1 publication Critical patent/WO2018121541A1/zh
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/55 Push-based network services
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241 Advertisements
    • G06Q 30/0242 Determining effectiveness of advertisements
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F 16/7837 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G06F 16/784 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/211 Selection of the most significant subset of features
    • G06F 18/2113 Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition

Definitions

  • the embodiments of the present application relate to data processing technologies, and in particular, to a user attribute extraction method, apparatus, and electronic device.
  • Determining user attributes based on user characteristics is important for areas such as user research, personalized recommendations, and precision marketing.
  • the embodiment of the present application provides a user attribute extraction scheme.
  • a user attribute extraction method for a first terminal, including: receiving image data sent by a second terminal; extracting user attribute information based on the image data; and determining a target business object corresponding to the user attribute information.
  • the image data includes video image data or static image data.
  • before receiving the image data sent by the second terminal, the method further includes: sending an information acquisition request to the second terminal to trigger the second terminal to send image data, where the information acquisition request is used to instruct the second terminal to collect image data through an image collection device.
  • the sending an information acquisition request to the second terminal includes: sending an information acquisition request to the second terminal according to a time interval.
  • the extracting user attribute information based on the image data includes: when the image data includes multiple persons, taking the person with the highest occurrence ratio among the multiple persons as a target person, and extracting the user attribute information corresponding to the target person.
  • the user attribute information includes any one or more of the following: age information, gender information, hairstyle information, preference information, expression information, and clothing information.
  • the method further includes: pushing the target service object to the second terminal.
  • a user attribute extraction apparatus includes: a first receiving module, configured to receive image data sent by a second terminal; an extracting module, configured to extract user attribute information based on the image data; and a determining module, configured to determine a target service object corresponding to the user attribute information.
  • the image data includes video image data or static image data.
  • the apparatus further includes: a first sending module, configured to send an information acquisition request to the second terminal to trigger the second terminal to send image data, where the information acquisition request is used to instruct the second terminal to collect image data through an image collection device.
  • the first sending module is configured to send an information acquisition request to the second terminal according to a time interval.
  • the extraction module is configured to: when the image data includes multiple persons, take the person with the highest occurrence ratio among the multiple persons as a target person, and extract the user attribute information corresponding to the target person.
  • the user attribute information includes at least one or more of the following: age information, gender information, hairstyle information, preference information, expression information, clothing information.
  • the apparatus further includes: a second sending module, configured to push the target service object to the second terminal.
  • another user attribute extraction method includes: acquiring image data when receiving an information acquisition request sent by a first terminal; and sending the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target service object corresponding to the user attribute information.
  • the image data includes video image data or static image data.
  • acquiring the image data includes: when the information acquisition request sent by the first terminal is received, collecting image data by an image collection device.
  • collecting the image data by the image collection device includes: when the information acquisition request sent by the first terminal is received, displaying an image collection device enable prompt message; and when a user confirmation instruction based on the image collection device enable prompt message is detected, collecting the image data by the image collection device.
  • the image collection device includes: a camera of the second terminal, or a smart device with a shooting function associated with the second terminal.
  • the method further includes: receiving a target service object that is pushed by the first terminal; and displaying the target service object.
  • a user attribute extraction apparatus includes: an acquisition module, configured to acquire image data when receiving an information acquisition request sent by a first terminal; and a third sending module, configured to send the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target service object corresponding to the user attribute information.
  • the image data includes video image data or static image data.
  • the acquiring module is configured to collect image data by using an image collection device when receiving the information acquisition request sent by the first terminal.
  • the obtaining module includes: a display submodule, configured to display an image collection device enable prompt message when receiving the information acquisition request sent by the first terminal; and a collection submodule, configured to collect image data by the image collection device when detecting a user confirmation instruction based on the image collection device enable prompt message.
  • the image collection device includes: a camera of the second terminal, or a smart device with a shooting function associated with the second terminal.
  • the apparatus further includes: a second receiving module, configured to receive a target service object that is pushed by the first terminal; and a display module, configured to display The target business object.
  • an electronic device includes: a processor and a memory; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the user attribute extraction method described in any of the foregoing embodiments of the present application.
  • another electronic device includes: a processor and the user attribute extraction apparatus according to any one of the foregoing embodiments of the present application; when the processor runs the user attribute extraction apparatus, the units in the user attribute extraction apparatus described in any of the foregoing embodiments of the present application are executed.
  • a computer program includes computer readable code; when the computer readable code is run on a device, a processor in the device executes instructions for implementing the steps of the user attribute extraction method described in any of the foregoing embodiments of the present application.
  • a computer readable storage medium is configured to store computer readable instructions; when the instructions are executed, the operations in the steps of the user attribute extraction method according to any one of the foregoing embodiments of the present application are implemented.
  • the user attribute extraction scheme provided in this embodiment receives the image data sent by the second terminal, extracts user attribute information based on the image data, and determines a target service object corresponding to the user attribute information.
  • Acquiring the user's current image in real time is simple and fast, ensures the authenticity of the user attribute information, and makes the target business object determined from the user attribute information better match the user's needs.
  • FIG. 1 is a flowchart of a user attribute extraction method according to an embodiment of the present application.
  • FIG. 2 is a flowchart of another user attribute extraction method according to an embodiment of the present application.
  • FIG. 3 is a structural block diagram of a user attribute extraction apparatus according to an embodiment of the present application.
  • FIG. 4 is a structural block diagram of another user attribute extraction apparatus according to an embodiment of the present application.
  • FIG. 5 is a flowchart of still another method for extracting user attributes according to an embodiment of the present application.
  • FIG. 6 is a flowchart of still another user attribute extraction method according to an embodiment of the present application.
  • FIG. 7 is a structural block diagram of still another user attribute extraction apparatus according to an embodiment of the present application.
  • FIG. 8 is a structural block diagram of still another user attribute extraction apparatus according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an application embodiment of an electronic device according to an embodiment of the present application.
  • Embodiments of the present application can be applied to electronic devices such as terminal devices, computer systems, servers, etc., which can operate with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, and distributed cloud computing environments including any of the above, and the like.
  • Electronic devices such as terminal devices, computer systems, servers, etc., can be described in the general context of computer system executable instructions (such as program modules) being executed by a computer system.
  • program modules may include routines, programs, target programs, components, logic, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • the computer system/server can be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communication network.
  • program modules may be located on a local or remote computing system storage medium including storage devices.
  • FIG. 1 a flowchart of a method for extracting user attributes according to an embodiment of the present application is shown.
  • This embodiment is used for the first terminal, and the method for extracting user attributes in the embodiment of the present application is explained by taking the anchor end in the live broadcast scenario as an example.
  • the user attribute extraction method of this embodiment may include:
  • Step 102 Receive image data sent by the second terminal.
  • to obtain the attribute information of the user in real time, the image data of the user is acquired, so that the user attribute information can be obtained by analyzing the image data.
  • the embodiments of the present application can be applied to a live broadcast scenario, where the first terminal (such as the anchor end) establishes a video communication connection with the second terminal (such as a fan terminal) through the background server of the live broadcast platform where the anchor is located.
  • the first terminal receives the image data sent by the second terminal, where the image data may be actively sent by the second terminal, or may be returned by the second terminal in response to an information acquisition request from the first terminal.
  • the image data therein may include, but is not limited to, image data of a user of the second terminal.
  • the step 102 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first receiving module 302 executed by the processor.
  • Step 104 Extract user attribute information based on the image data.
  • the first terminal determines the image region corresponding to the person in the image by performing person recognition on the image data, and then performs feature analysis on that image region according to a feature extraction algorithm to determine the user attribute information corresponding to the user.
  • the user attribute information may include, but is not limited to, any one or more of the following: age information, gender information, hairstyle information, preference information, expression information, and clothing information.
  • the user attribute information of the embodiments of the present application may be determined by using a face detection algorithm or a neural network model, and other feature extraction algorithms may also be used.
  • the step 104 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by an extraction module 304 executed by the processor.
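The following Python sketch is not part of the original disclosure; it only illustrates the shape of step 104 under assumed helper names (`detect_persons` and `analyse_region` stand in for whatever face detection algorithm or neural network model an implementation actually uses):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PersonRegion:
    x: int
    y: int
    w: int
    h: int

def detect_persons(image_bytes: bytes) -> List[PersonRegion]:
    """Placeholder person/face detector; a real system would analyse image_bytes."""
    return []

def analyse_region(image_bytes: bytes, region: PersonRegion) -> Dict[str, str]:
    """Placeholder attribute model returning attribute name -> value."""
    return {"age": "unknown", "gender": "unknown", "clothing": "unknown"}

def extract_user_attributes(image_bytes: bytes) -> Dict[str, str]:
    regions = detect_persons(image_bytes)
    if not regions:
        return {}
    # Single-person case; the multi-person rule (highest occurrence ratio)
    # is sketched separately after step 206 below.
    return analyse_region(image_bytes, regions[0])
```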
  • Step 106 Determine a target service object corresponding to the user attribute information.
  • the first terminal determines, according to the user attribute information, a target business object corresponding to the user attribute information, where the target business object is a special effect containing semantic information, for example any one or more of the following special effects containing advertisement information: a two-dimensional sticker special effect, a three-dimensional (3D) special effect, or a particle special effect. For example, an advertisement may be displayed in the form of a sticker (i.e., an advertisement sticker), or as an effect for displaying an advertisement, such as a 3D advertisement effect.
  • however, the present application is not limited thereto, and other forms of business objects are also applicable to the scheme provided by the embodiments of the present application, such as a text description or introduction of an APP or other application, or some form of object (such as an electronic pet) that interacts with the video audience.
  • the step 106 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a determination module 306 executed by the processor.
  • the user attribute extraction method receives the image data sent by the second terminal, extracts the user attribute information based on the image data, and determines the target business object corresponding to the user attribute information; the user's current image can be obtained in real time, which is simple and fast, the authenticity of the user attribute information is ensured, and the target business object determined from the user attribute information better matches the user's needs.
  • FIG. 2 a flowchart of another user attribute extraction method in the embodiment of the present application is shown.
  • Step 202 Send an information acquisition request to the second terminal to trigger the second terminal to send the image data.
  • the first terminal sends an information acquisition request to the second terminal, and the second terminal acquires image data of the second terminal user according to the information acquisition request; the information acquisition request may take various forms, such as a notification message or an interaction request attached to a game object.
  • the anchor sends an interactive game request to the fan users of the plurality of second terminals through the first terminal, and the information acquisition request is carried in the interactive game request.
  • for example, the anchor calls on the fans to play an interactive game during the live broadcast and sends interaction requests for the interactive game to multiple fans; after a fan receives the interaction request, the interactive game is displayed on the fan-side interface, and, triggered by the interaction request, permission for the fan-side camera is obtained and image data of the fan is collected.
  • the step 202 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first transmitting module 308 that is executed by the processor.
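As an illustration only, a minimal sketch of the periodic request in step 202, assuming a hypothetical transport hook `send_to_terminal` (the real delivery path, for example via the live broadcast platform's background server, is not specified here):

```python
import time

def send_to_terminal(terminal_id: str, message: dict) -> None:
    """Placeholder transport hook (e.g. delivery via the platform's background server)."""
    print(f"sending to {terminal_id}: {message}")

def request_image_data_periodically(terminal_id: str, interval_seconds: float, rounds: int) -> None:
    # The request instructs the second terminal to collect image data
    # through its image collection device.
    for _ in range(rounds):
        send_to_terminal(terminal_id, {"type": "information_acquisition_request"})
        time.sleep(interval_seconds)
```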
  • Step 204 Receive image data sent by the second terminal.
  • the image data in various embodiments of the present application may include video image data or still image data, such as a small video or a picture.
  • the step 204 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by the first receiving module 302 being executed by the processor.
  • Step 206 Extract user attribute information based on the image data.
  • the identifier (ID number) of each second terminal device has a corresponding user. To make the user attribute information more valuable, it is necessary to determine a target person corresponding to each second terminal ID. The user attribute information is extracted from the image data of the target person.
  • after the information acquisition request is sent to the second terminal, a plurality of pieces of image data are acquired, person recognition is performed on the plurality of pieces of image data, the person with the highest occurrence ratio in the plurality of pieces of image data (i.e., the most frequently occurring person) is determined to be the target person, and the user attribute information of the target person is determined.
  • the determined user attribute information is stored; image data may also be acquired at a time interval, user attribute information extracted from the newly acquired image data, and the stored user attribute information updated based on the new user attribute information, that is, the user attribute information is updated at the time interval.
  • the person with the highest occurrence ratio among the plurality of persons is determined as the target person, and the user attribute information of the target person is determined.
  • based on the image data, it is determined whether the persons in the image data include the target person; when it is determined that the persons in the image data include the target person, feature analysis is performed on the image region corresponding to the target person, and the user attribute information of the target person is determined.
  • the step 206 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by an extraction module 304 executed by the processor.
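A minimal sketch of the target-person rule described above, assuming person identities have already been recognised per piece of image data (the recognition step itself is out of scope here):

```python
from collections import Counter
from typing import List, Optional

def choose_target_person(person_ids_per_image: List[List[str]]) -> Optional[str]:
    """person_ids_per_image: for each piece of image data, the person identities recognised in it."""
    counts = Counter()
    for ids in person_ids_per_image:
        counts.update(set(ids))  # count each person at most once per image
    if not counts:
        return None
    target, _ = counts.most_common(1)[0]
    return target

# Example: person "A" appears in 3 of 4 images and "B" in 1, so "A" is the target person.
print(choose_target_person([["A"], ["A", "B"], ["A"], []]))  # -> A
```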
  • Step 208 Determine a target service object corresponding to the user attribute information.
  • a weighted calculation is performed on each item of the user attribute information, and the weight of each item may be set according to the nature of that item; for example, the age information and the gender information change relatively little, so a smaller weight can be set, while the clothing information changes greatly with the season, so a larger weight can be set, and the target business object is then determined accordingly.
  • for example, the weight of the age information is 10%, the weight of the gender information is 10%, the weight of the hairstyle information is 10%, the weight of the preference information is 10%, the weight of the expression information is 20%, and the weight of the clothing information is 40%.
  • the target business object with the best matching degree to the user attribute information may also be determined by using each item of attribute information in turn.
  • the target service object in this embodiment is similar to the target service object in the foregoing embodiment, and details are not described herein again.
  • for example, the user's age group and gender are first determined according to the user attribute information; the user's personality is then determined from the hairstyle information and the expression information in the user attribute information; and finally the user's clothing is determined from the clothing information in the user attribute information. For example, if it is determined that the user is a male aged 15-18, the user's personality is sunny, and the clothing information shows that the user wears Nike sports series clothing, it can be determined that the target business object to be pushed is Nike series sportswear for male teenagers.
  • the step 208 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a determination module 306 that is executed by the processor.
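A hedged sketch of the weighted matching in step 208, reusing the example weights above; the candidate business objects and their tag structure are assumptions introduced only for illustration, not part of the original disclosure:

```python
from typing import Dict, List

# Example weights from this embodiment.
WEIGHTS = {"age": 0.10, "gender": 0.10, "hairstyle": 0.10,
           "preference": 0.10, "expression": 0.20, "clothing": 0.40}

def match_score(user_attrs: Dict[str, str], object_tags: Dict[str, str]) -> float:
    # Sum the weights of the attribute items on which the candidate matches the user.
    return sum(weight for attr, weight in WEIGHTS.items()
               if attr in object_tags and user_attrs.get(attr) == object_tags[attr])

def choose_target_object(user_attrs: Dict[str, str], candidates: List[dict]) -> dict:
    # The candidate business object with the best weighted matching degree is chosen.
    return max(candidates, key=lambda c: match_score(user_attrs, c["tags"]))

user = {"age": "15-18", "gender": "male", "clothing": "sportswear"}
candidates = [
    {"name": "teen sportswear advertisement sticker",
     "tags": {"age": "15-18", "gender": "male", "clothing": "sportswear"}},
    {"name": "generic 3D advertisement effect", "tags": {}},
]
print(choose_target_object(user, candidates)["name"])  # -> teen sportswear advertisement sticker
```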
  • Step 210 Push a target service object to the second terminal.
  • the first terminal pushes the determined target service object to the second terminal, and the second terminal can display the target service object on the live interface of the second terminal.
  • the step 210 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a second transmitting module 310 that is executed by the processor.
  • the application scenarios in the embodiments of the present application may also include other forms of video interaction, such as video calls in social software (for example, WeChat video or QQ video), which is not limited in this embodiment.
  • in the user attribute extraction method of this embodiment of the present application, an information acquisition request is sent to the second terminal, the image data sent by the second terminal is received, person recognition is performed on the image data to determine whether a person in the image data is the target person, the user attribute information of the target person is extracted, the target business object corresponding to the user attribute information is determined, and the target business object is pushed to the second terminal. In this way, the user attribute information can be determined from the user's current image, which is simple, fast, real, and effective; the target business object determined from the user attribute information better matches the user's needs, realizing personalized recommendation and precise marketing; and acquiring image data at a time interval allows the user attribute information to be updated regularly, ensuring the validity of the information.
  • FIG. 3 a block diagram of a user attribute extraction apparatus according to an embodiment of the present application is shown.
  • the user attribute extraction apparatus of this embodiment can be used as a first terminal to perform a user attribute extraction method as shown in FIG.
  • the user attribute extraction apparatus of this embodiment may include the following modules:
  • the first receiving module 302 is configured to receive image data sent by the second terminal.
  • the extracting module 304 is configured to extract user attribute information based on the image data.
  • the determining module 306 is configured to determine a target service object corresponding to the user attribute information.
  • the user attribute extraction device receives the image data sent by the second terminal, extracts the user attribute information based on the image data, and determines the target business object corresponding to the user attribute information; the user's current image can be obtained in real time, which is simple and fast, the authenticity of the user attribute information is ensured, and the target business object determined from the user attribute information better matches the user's needs.
  • FIG. 4 a structural block diagram of another user attribute extraction apparatus according to an embodiment of the present application is shown.
  • the user attribute extraction apparatus of this embodiment can be used as the first terminal to execute the user attribute extraction method shown in FIG. 2.
  • the user attribute extraction apparatus of this embodiment may include the following modules:
  • the first sending module 308 is configured to send an information acquisition request to the second terminal to trigger the second terminal to send image data, where the information acquisition request is used to instruct the second terminal to collect image data by using an image collection device.
  • the first sending module 308 is further configured to send an information acquisition request to the second terminal according to a time interval.
  • the image data may include, but is not limited to, video image data or still image data.
  • the first receiving module 302 is configured to receive image data sent by the second terminal.
  • the extracting module 304 is configured to: when a plurality of persons are included in the image data, take the person with the highest occurrence ratio among the plurality of persons as a target person, and extract the user attribute information corresponding to the target person.
  • the user attribute information may include, for example but not limited to, any one or more of the following: age information, gender information, hair style information, favorite information, expression information, clothing information.
  • the determining module 306 is configured to determine a target service object corresponding to the user attribute information.
  • the second sending module 310 is configured to push the target service object to the second terminal.
  • in the user attribute extraction device of this embodiment of the present application, an information acquisition request is sent to the second terminal, the image data sent by the second terminal is received, person recognition is performed on the image data to determine whether a person in the image data is the target person, the user attribute information of the target person is extracted, the target business object corresponding to the user attribute information is determined, and the target business object is pushed to the second terminal. In this way, the user attribute information can be determined from the user's current image, which is simple, fast, real, and effective; the target business object determined from the user attribute information better matches the user's needs, realizing personalized recommendation and precise marketing; and acquiring image data at a time interval allows the user attribute information to be updated regularly, ensuring the validity of the information.
  • FIG. 5 a flowchart of still another user attribute extraction method in the embodiment of the present application is shown.
  • This embodiment is used for the second terminal, and the user attribute extraction method in the embodiment of the present application is explained by taking the fan end in the live broadcast scenario as an example.
  • the user attribute extraction method of this embodiment may include:
  • Step 502 Acquire image data when receiving an information acquisition request sent by the first terminal.
  • to obtain the attribute information of the user in real time, the image data of the user is acquired, so that the user attribute information can be obtained by analyzing the image data.
  • the embodiments of the present application can be applied to a live broadcast scenario, where the first terminal (such as the anchor end) establishes a video communication connection with the second terminal (such as a fan terminal) through the background server of the live broadcast platform where the anchor is located.
  • upon the confirmation of the information acquisition request by the user of the second terminal, the image data of the second terminal user is acquired.
  • the image data therein may include video image data or still image data such as a small video or a picture.
  • the step 502 can be performed by a processor invoking a corresponding instruction stored in the memory, or can be performed by the acquisition module 702 being executed by the processor.
  • Step 504 Send image data to the first terminal, so that the first terminal extracts user attribute information based on the image data, and determines a target service object corresponding to the user attribute information.
  • after the image data is collected, the second terminal sends the image data to the first terminal; after receiving the image data, the first terminal determines the image region corresponding to the person in the image by performing person recognition on the image data, and performs feature analysis on that region according to a feature extraction algorithm to determine the user attribute information corresponding to the user.
  • the user attribute information in each embodiment of the present application may include, but is not limited to, any one or more of the following: age information, gender information, hairstyle information, preference information, expression information, and clothing information.
  • the user attribute information in this embodiment may be determined by using a face detection algorithm or a neural network model, and other feature extraction algorithms may also be used.
  • the first terminal determines a target business object corresponding to the user attribute information, where the target business object is a special effect containing semantic information, such as at least one of the following special effects containing advertisement information: a two-dimensional sticker special effect, a three-dimensional special effect, or a particle special effect. For example, an advertisement may be displayed in the form of a sticker (i.e., an advertisement sticker), or as an effect for displaying an advertisement, such as a 3D advertisement effect.
  • however, the present application is not limited thereto, and other forms of business objects are also applicable to the scheme provided by the embodiments of the present application, such as a text description or introduction of an APP or other application, or some form of object (such as an electronic pet) that interacts with the video audience.
  • the step 504 can be performed by the processor invoking a corresponding instruction stored in the memory or by the third transmitting module 704 being executed by the processor.
  • in the user attribute extraction method, when the information acquisition request sent by the first terminal is received, the image data is acquired and sent to the first terminal, so that the first terminal extracts the user attribute information based on the image data and determines the target business object corresponding to the user attribute information; obtaining the user's current image in real time through the information acquisition request is simple and fast, ensures the authenticity of the user attribute information, and the target business object determined from the user attribute information better matches the user's needs.
  • the method for the second terminal in this embodiment may include the following steps:
  • Step 602 When receiving the information acquisition request sent by the first terminal, the image data is collected by the image collection device.
  • when the second terminal receives the information acquisition request sent by the first terminal, it obtains permission for the image collection device of the second terminal, triggered by the information acquisition request, and collects the image data of the second terminal user by using the image collection device.
  • the information acquisition request may take various forms, such as a notification message or an interaction request attached to a game object.
  • the anchor sends an interactive game request to the fan users of the plurality of second terminals through the first terminal, and the interactive game request carries the information acquisition request.
  • when the information acquisition request sent by the first terminal is received, the image collection device enable prompt message is displayed; when a user confirmation instruction based on the image collection device enable prompt message is detected, the image data is collected by the image collection device.
  • the image collection device of this embodiment may include a camera of the second terminal or a smart device with a shooting function associated with the second terminal.
  • the anchor calls the fans to play interactive games during the live broadcast, and sends interactive requests for interactive games to multiple fans.
  • after a fan receives the interaction request, the image collection device enable prompt message is displayed on the fan-side interface, triggered by the interaction request; when the fan confirms enabling the image collection device, the interactive game is displayed on the fan-side interface, and permission for the fan-side camera is obtained to collect the image data of the fan.
  • the step 602 can be performed by the processor invoking a corresponding instruction stored in the memory, or can be performed by the acquisition module 702 being executed by the processor.
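For illustration, a minimal sketch of how the second terminal might handle the information acquisition request (step 602, followed by the send of step 604); `show_prompt`, `capture_image`, and `send_to_first_terminal` are hypothetical hooks standing in for the terminal's UI, camera, and network code, not APIs from the original disclosure:

```python
def show_prompt() -> bool:
    """Display the image collection device enable prompt; return True if the user confirms."""
    return True  # a real UI would wait for the user's confirmation instruction

def capture_image() -> bytes:
    """Placeholder for the camera (or an associated smart device with a shooting function)."""
    return b"...image bytes..."

def send_to_first_terminal(image_data: bytes) -> None:
    print(f"sending {len(image_data)} bytes of image data to the first terminal")

def handle_information_acquisition_request() -> None:
    if show_prompt():              # user confirmation instruction detected
        image_data = capture_image()
        send_to_first_terminal(image_data)
    # otherwise no image data is collected
```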
  • Step 604 Send image data to the first terminal, so that the first terminal extracts user attribute information based on the image data, and determines a target service object corresponding to the user attribute information.
  • after the image data is collected, the second terminal sends the image data to the first terminal; after receiving the image data, the first terminal determines the image region corresponding to the person in the image by performing person recognition on the image data, and performs feature analysis on that region according to a feature extraction algorithm to determine the user attribute information corresponding to the user.
  • the target business object with the best matching degree to the user attribute information may also be determined by using each item of attribute information in turn.
  • the target service object in this embodiment is similar to the target service object in the foregoing embodiment, and details are not described herein again.
  • for example, the user's age group and gender are first determined according to the user attribute information; the user's personality is then determined from the hairstyle information and the expression information in the user attribute information; and finally the user's clothing is determined from the clothing information in the user attribute information. For example, if it is determined that the user is a male aged 15-18, the user's personality is sunny, and the clothing information shows that the user wears Nike sports series clothing, it can be determined that the target business object to be pushed is Nike series sportswear for male teenagers.
  • the step 604 can be performed by the processor invoking a corresponding instruction stored in the memory or by the third transmitting module 704 being executed by the processor.
  • Step 606 Receive a target service object that is pushed by the first terminal.
  • the step 606 can be performed by the processor invoking a corresponding instruction stored in the memory or by the second receiving module 706 being executed by the processor.
  • Step 608 displaying a target business object.
  • the first terminal pushes the determined target service object to the second terminal, and the second terminal can display the target service object on the live interface of the second terminal.
  • the step 608 can be performed by the processor invoking a corresponding instruction stored in the memory or by the presentation module 708 being executed by the processor.
  • the application scenario in this embodiment may also include other types of video interaction, such as video calls in social software (for example, WeChat video or QQ video), which is not limited in this embodiment.
  • in the user attribute extraction method, when the information acquisition request sent by the first terminal is received, the image data is acquired and sent to the first terminal, so that the first terminal extracts the user attribute information based on the image data and determines the target business object corresponding to the user attribute information; obtaining the user's current image in real time through the information acquisition request is simple and fast, ensures the authenticity of the user attribute information, and the target business object determined from the user attribute information better matches the user's needs, realizing personalized recommendation and precise marketing. Fans watching the live broadcast see target business objects that meet their own needs, which improves the user experience.
  • FIG. 7 a block diagram of a user attribute extraction apparatus according to another embodiment of the present application is shown.
  • the user attribute extraction apparatus of this embodiment can be used for a second terminal to perform a user attribute extraction method as shown in FIG. 5.
  • the user attribute extraction apparatus of this embodiment may include the following modules:
  • the obtaining module 702 is configured to acquire image data when receiving an information acquisition request sent by the first terminal.
  • the third sending module 704 is configured to send the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data, and determines a target service object corresponding to the user attribute information.
  • when receiving the information acquisition request sent by the first terminal, the user attribute extraction device acquires the image data and sends it to the first terminal, so that the first terminal extracts the user attribute information based on the image data and determines the target business object corresponding to the user attribute information; obtaining the user's current image in real time through the information acquisition request is simple and fast, ensures the authenticity of the user attribute information, and the target business object determined from the user attribute information better matches the user's needs.
  • FIG. 8 a block diagram of a user attribute extraction apparatus provided in Embodiment 9 of the present application is shown.
  • the user attribute extraction apparatus of this embodiment can be used in a second terminal to execute the user attribute extraction method as shown in FIG. 6.
  • the user attribute extraction apparatus of this embodiment may include the following modules:
  • the obtaining module 702 is configured to collect image data by using an image collecting device when receiving the information acquiring request sent by the first terminal.
  • the obtaining module 702 includes: a display sub-module 7022 and a collection sub-module 7024.
  • the display sub-module 7022 is configured to display an image collection device enable prompt message when receiving the information acquisition request sent by the first terminal;
  • the collecting sub-module 7024 is configured to collect image data by the image collecting device when detecting a user confirmation instruction based on the image capturing device enabling prompt message.
  • the image data includes video image data or static image data; the image capturing device includes a camera of the second terminal, or a smart device with a shooting function associated with the second terminal.
  • the third sending module 704 is configured to send the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data, and determines a target service object corresponding to the user attribute information.
  • the second receiving module 706 is configured to receive a target service object that is pushed by the first terminal.
  • the display module 708 is configured to display the target business object.
  • when receiving the information acquisition request sent by the first terminal, the user attribute extraction device acquires the image data and sends it to the first terminal, so that the first terminal extracts the user attribute information based on the image data and determines the target business object corresponding to the user attribute information; obtaining the user's current image in real time through the information acquisition request is simple and fast, ensures the authenticity of the user attribute information, and the target business object determined from the user attribute information better matches the user's needs, realizing personalized recommendation and precise marketing. Fans watching the live broadcast see target business objects that meet their own needs, which improves the user experience.
  • an embodiment of the present application further provides an electronic device, including: a processor and a memory;
  • the memory is configured to store at least one executable instruction that causes the processor to perform an operation corresponding to the user attribute extraction method as described in any of the embodiments of the present application.
  • the embodiment of the present application further provides another electronic device, including: a processor and the user attribute extraction apparatus according to any one of the embodiments of the present application; when the processor runs the user attribute extraction apparatus, the units in the user attribute extraction apparatus described in any of the embodiments of the present application are executed.
  • the electronic device may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like.
  • the embodiment of the present application further provides a computer program, including computer readable code; when the computer readable code is run on a device, a processor in the device executes instructions for implementing the steps of the user attribute extraction method in any embodiment of the present application.
  • the embodiment of the present application further provides a computer readable storage medium, which is configured to store computer readable instructions; when the instructions are executed, the operations in the steps of the user attribute extraction method according to any embodiment of the present application are implemented.
  • a schematic structural diagram of an application embodiment of an electronic device 1000 suitable for implementing a terminal device or a server of an embodiment of the present application is shown.
  • the electronic device 900 includes one or more processors.
  • the one or more processors include, for example: one or more central processing units (CPUs) 901, and/or one or more graphics processors (GPUs) 913, etc.; the processor may perform various appropriate actions and processes according to executable instructions stored in a read only memory (ROM) 902 or executable instructions loaded from the storage portion 908 into a random access memory (RAM) 903.
  • the communication component includes a communication component 912 and/or a communication interface 909.
  • the communication component 912 may include, but is not limited to, a network card, and the network card may include, but is not limited to, an IB (InfiniBand) network card; the communication interface 909 includes a communication interface of a network interface card such as a LAN card or a modem, and the communication interface 909 performs communication processing via a network such as the Internet.
  • the processor can communicate with the read-only memory 902 and/or the random access memory 903 to execute executable instructions, connect to the communication component 912 via a communication bus 904, and communicate with other target devices via the communication component 912, thereby completing operations corresponding to any user attribute extraction method provided by the embodiments of the present application, for example: receiving image data sent by the second terminal; extracting user attribute information based on the image data; and determining a target business object corresponding to the user attribute information.
  • or, for another example: when receiving the information acquisition request sent by the first terminal, acquiring image data; and sending the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target business object corresponding to the user attribute information.
  • in addition, the RAM 903 can store various programs and data required for the operation of the device.
  • the CPU 901 or the GPU 913, the ROM 902, and the RAM 903 are connected to each other through a communication bus 904.
  • ROM 902 is an optional module.
  • the RAM 903 stores executable instructions, or executable instructions are written into the ROM 902 at runtime, and the executable instructions cause the processor to perform operations corresponding to the above-described method.
  • An input/output (I/O) interface 905 is also coupled to communication bus 904.
  • the communication component 912 can be integrated or can be configured to have multiple sub-modules (e.g., multiple IB network cards) and be on a communication bus link.
  • the following components are connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, and the like; an output portion 907 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), and the like; a storage portion 908 including a hard disk and the like; and a communication interface 909 including a network interface card such as a LAN card or a modem.
  • a drive 910 is also connected to the I/O interface 905 as needed.
  • a removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive 910 as needed so that a computer program read therefrom is installed into the storage portion 908 as needed.
  • FIG. 9 is only an optional implementation manner.
  • the number and type of the components in FIG. 9 may be selected, deleted, added, or replaced according to actual needs; different functional components may be implemented separately or in an integrated manner, for example, the GPU and the CPU may be set separately or the GPU may be integrated on the CPU, and the communication component may be set separately or integrated on the CPU or the GPU, and so on.
  • in particular, embodiments of the present application include a computer program product comprising a computer program tangibly embodied on a machine readable medium, the computer program comprising program code for executing the method illustrated in the flowchart, where the program code may include instructions corresponding to the method steps provided by the embodiments of the present application, for example: when receiving the information acquisition request sent by the first terminal, acquiring image data; and sending the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target service object corresponding to the user attribute information.
  • the computer program can be downloaded and installed from the network via a communication component, and/or installed from the removable media 911.
  • the above-described functions defined in the method of the embodiments of the present application are executed when the computer program is executed by the processor.
  • when receiving the information acquisition request sent by the first terminal, the electronic device acquires the image data and sends the image data to the first terminal, so that the first terminal extracts the user attribute information based on the image data and determines the target business object corresponding to the user attribute information; obtaining the user's current image in real time through the information acquisition request is simple and fast, ensures the authenticity of the user attribute information, and the target business object determined from the user attribute information better matches the user's needs.
  • Any user attribute extraction method provided by the embodiment of the present application may be performed by any suitable device having data processing capability, including but not limited to: a terminal device, a server, and the like.
  • any user attribute extraction method provided by the embodiment of the present application may be executed by a processor.
  • the processor performs any user attribute extraction method mentioned in the embodiment of the present application by calling a corresponding instruction stored in the memory. This will not be repeated below.
  • Those of ordinary skill in the art will understand that all or some of the steps of the foregoing method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
  • The methods, apparatuses, and devices of the present application may be implemented in many ways. For example, the methods, apparatuses, and devices of the embodiments of the present application may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described sequence of the method steps is for illustrative purposes only; the steps of the methods of the embodiments of the present application are not limited to the order described above unless otherwise specified.
  • In addition, in some embodiments, the present application may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing a method according to the embodiments of the present application. Accordingly, the present application also covers a recording medium storing a program for executing a method according to the embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the present application disclose a user attribute extraction method and apparatus and an electronic device. The method includes: receiving image data sent by a second terminal; extracting user attribute information based on the image data; and determining a target business object corresponding to the user attribute information. The user's current biological image is acquired in real time, which is simple and quick and also ensures the authenticity of the user attribute information, and the target business object determined from the user attribute information better meets the user's current needs.

Description

用户属性提取方法、装置和电子设备
本申请要求在2016年12月28日提交中国专利局、申请号为CN 201611235485.8、发明名称为“用户属性提取方法、装置和电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及数据处理技术,尤其涉及一种用户属性提取方法、装置和电子设备。
背景技术
根据用户的特征确定用户属性,对于用户研究、个性化推荐以及精准营销等领域都具有重要意义。
发明内容
本申请实施例提供一种用户属性提取方案。
根据本申请实施例的一个方面,提供了一种用户属性提取方法,用于第一终端,包括:接收第二终端发送的图像数据;基于所述图像数据,提取用户属性信息;确定所述用户属性信息对应的目标业务对象。
可选地,结合本申请实施例提供的任一种用户属性提取方法,所述图像数据包括视频图像数据或静态图像数据。
可选地,结合本申请实施例提供的任一种用户属性提取方法,所述接收第二终端发送的图像数据之前,还包括:向所述第二终端发送信息获取请求,以触发所述第二终端发送图像数据;其中,所述信息获取请求用于指示所述第二终端通过图像采集设备采集图像数据。
可选地,结合本申请实施例提供的任一种用户属性提取方法,所述向所述第二终端发送信息获取请求,包括:按照时间间隔,向所述第二终端发送信息获取请求。
可选地,结合本申请实施例提供的任一种用户属性提取方法,所述基于所述图像数据,提取用户属性信息,包括:当所述图像数据中包括多个人物时,将所述多个人物中出现比率最高的人物作为目标人物;提取所述目标人物对应的用户属性信息。
可选地，结合本申请实施例提供的任一种用户属性提取方法，所述用户属性信息至少包括以下任意一项或多项：年龄信息、性别信息、发型信息、喜好信息、表情信息、衣着信息。
可选地,结合本申请实施例提供的任一种用户属性提取方法,所述方法还包括:向所述第二终端推送所述目标业务对象。
根据本申请实施例的另一个方面,提供了一种用户属性提取装置,包括:第一接收模块,用于接收第二终端发送的图像数据;提取模块,用于基于所述图像数据,提取用户属性信息;确定模块,用于确定所述用户属性信息对应的目标业务对象。
可选地,结合本申请实施例提供的任一种用户属性提取装置,所述图像数据包括视频图像数据或静态图像数据。
可选地,结合本申请实施例提供的任一种用户属性提取装置,所述装置还包括:第一发送模块,用于向所述第二终端发送信息获取请求,以触发所述第二终端发送图像数据;其中,所述信息获取请求用于指示所述第二终端通过图像采集设备采集图像数据。
可选地,结合本申请实施例提供的任一种用户属性提取装置,所述第一发送模块,用于按照时间间隔,向所述第二终端发送信息获取请求。
可选地,结合本申请实施例提供的任一种用户属性提取装置,所述提取模块,用于当所述图像数据中包括多个人物时,将所述多个人物中出现比率最高的人物作为目标人物;提取所述目标人物对应的用户属性信息。
可选地,结合本申请实施例提供的任一种用户属性提取装置,所述用户属性信息至少包括以下任意一项或多项:年龄信息、性别信息、发型信息、喜好信息、表情信息、衣着信息。
可选地,结合本申请实施例提供的任一种用户属性提取装置,所述装置还包括:第二发送模块,用于向所述第二终端推送所述目标业务对象。
根据本申请实施例的又一个方面,提供了另一种用户属性提取方法,包括:当接收到第一终端发送的信息获取请求时,获取图像数据;将所述图像数据发送给所述第一终端,以使所述第一终端基于所述图像数据提取用户属性信息,并确定所述用户属性信息对应的目标业务对象。
可选地,结合本申请实施例提供的任一种用户属性提取方法,所述图像数据包括视频图像数据或静态图像数据。
可选地，结合本申请实施例提供的任一种用户属性提取方法，所述当接收到第一终端发送的信息获取请求时，获取图像数据，包括：当接收到所述第一终端发送的所述信息获取请求时，通过图像采集设备采集图像数据。
可选地，结合本申请实施例提供的任一种用户属性提取方法，所述当接收到所述第一终端发送的所述信息获取请求时，通过图像采集设备采集图像数据，包括：当接收到所述第一终端发送的所述信息获取请求时，显示图像采集设备启用提示消息；当检测到基于所述图像采集设备启用提示消息的用户确认指令时，通过图像采集设备采集图像数据。
可选地,结合本申请实施例提供的任一种用户属性提取方法,所述图像采集设备包括:所述第二终端的摄像头,或与所述第二终端关联的具备拍摄功能的智能设备。
可选地,结合本申请实施例提供的任一种用户属性提取方法,所述方法还包括:接收所述第一终端推送的目标业务对象;展示所述目标业务对象。
根据本申请实施例的再一个方面，提供了一种用户属性提取装置，包括：获取模块，用于当接收到第一终端发送的信息获取请求时，获取图像数据；第三发送模块，用于将所述图像数据发送给所述第一终端，以使所述第一终端基于所述图像数据提取用户属性信息，并确定所述用户属性信息对应的目标业务对象。
可选地，结合本申请实施例提供的任一种用户属性提取装置，所述图像数据包括视频图像数据或静态图像数据。
可选地,结合本申请实施例提供的任一种用户属性提取装置,所述获取模块,用于当接收到所述第一终端发送的所述信息获取请求时,通过图像采集设备采集图像数据。
可选地，结合本申请实施例提供的任一种用户属性提取装置，所述获取模块包括：显示子模块，用于当接收到所述第一终端发送的所述信息获取请求时，显示图像采集设备启用提示消息；采集子模块，用于当检测到基于所述图像采集设备启用提示消息的用户确认指令时，通过图像采集设备采集图像数据。
可选地,结合本申请实施例提供的任一种用户属性提取装置,所述图像采集设备包括:所述第二终端的摄像头,或与所述第二终端关联的具备拍摄功能的智能设备。
可选地,结合本申请实施例提供的任一种用户属性提取装置,所述装置还包括:第二接收模块,用于接收所述第一终端推送的目标业务对象;展示模块,用于展示所述目标业务对象。
根据本申请实施例的再一个方面,提供了一种电子设备,包括:处理器和存储器;所述存储器用于存放至少一可执行指令,所述可执行指令使所述处理器执行本申请上述任一实施例所述用户属性提取方法。
根据本申请实施例的再一个方面，提供了另一种电子设备，包括：处理器和本申请上述任一实施例所述用户属性提取装置；在处理器运行所述用户属性提取装置时，本申请上述任一实施例所述用户属性提取装置中的单元被运行。
根据本申请实施例的再一个方面,提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在设备上运行时,所述设备中的处理器执行用于实现本申请上述任一实施例所述用户属性提取方法中各步骤的指令。
根据本申请实施例的再一个方面，提供了一种计算机可读存储介质，用于存储计算机可读取的指令，所述指令被执行时实现本申请上述任一实施例所述用户属性提取方法中各步骤的操作。
本实施例提供的用户属性提取方案,通过接收第二终端发送的图像数据,基于图像数据,提取用户属性信息,确定用户属性信息对应的目标业务对象。实时获取用户的生物图像,简单快捷,还可以确保用户属性信息的真实性,通过该用户属性信息确定出的目标业务对象,更符合用户的需求。
下面通过附图和实施例,对本申请的技术方案做进一步的详细描述。
附图说明
构成说明书的一部分的附图描述了本申请的实施例,并且连同描述一起用于解释本申请的原理。
参照附图,根据下面的详细描述,可以更加清楚地理解本申请,其中:
图1是本申请实施例一种用户属性提取方法的流程图;
图2是本申请实施例另一种用户属性提取方法的流程图;
图3是本申请实施例一种用户属性提取装置的结构框图;
图4是本申请实施例另一种用户属性提取装置的结构框图;
图5是本申请实施例又一种用户属性提取方法的流程图;
图6是本申请实施例再一种用户属性提取方法的流程图;
图7是本申请实施例又一种用户属性提取装置的结构框图;
图8是本申请实施例再一种用户属性提取装置的结构框图;
图9是本申请实施例一种电子设备应用实施例的结构示意图。
具体实施方式
下面结合附图(若干附图中相同的标号表示相同的元素)和实施例,对本申请实施例的可选实施方式作进一步详细说明。以下实施例用于说明本申请实施例,但不用来限制本申请实施例的范围。
本领域技术人员可以理解,本申请实施例中的“第一”、“第二”等术语仅用于区别不同步骤、设备或模块等,既不代表任何特定技术含义,也不表示它们之间的必然逻辑顺序。
应注意到:除非另外可选说明,否则在这些实施例中阐述的部件和步骤的相对布置、数字表达式和数值不限制本申请的范围。
同时,应当明白,为了便于描述,附图中所示出的各个部分的尺寸并不是按照实际的比例关系绘制的。
以下对至少一个示例性实施例的描述实际上仅仅是说明性的,决不作为对本申请及其应用或使用的任何限制。
对于相关领域普通技术人员已知的技术、方法和设备可能不作详细讨论,但在适当情况下,所述技术、方法和设备应当被视为说明书的一部分。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步讨论。
本申请实施例可以应用于终端设备、计算机系统、服务器等电子设备,其可与众多其它通用或专用计算系统环境或配置一起操作。适于与终端设备、计算机系统、服务器等电子设备一起使用的众所周知的终端设备、计算系统、环境和/或配置的例子包括但不限于:个人计算机系统、服务器计算机系统、瘦客户机、厚客户机、手持或膝上设备、基于微处理器的系统、机顶盒、可编程消费电子产品、网络个人电脑、小型计算机系统﹑大型计算机系统和包括上述任何系统的分布式云计算技术环境,等等。
终端设备、计算机系统、服务器等电子设备可以在由计算机系统执行的计算机系统可执行指令(诸如程序模块)的一般语境下描述。通常,程序模块可以包括例程、程序、目标程序、组件、逻辑、数据结构等等,它们执行特定的任务或者实现特定的抽象数据类型。计算机系统/服务器可以在分布式云计算环境中实施,分布式云计算环境中,任务是由通过通信网络链接的远程处理设备执行的。在分布式云计算环境中,程序模块可以位于包括存储设备的本地或远程计算系统存储介质上。
参照图1,示出了本申请实施例一种用户属性提取方法的流程图。本实施例用于第一终端,以直播场景下的主播端为例,对本申请实施例的用户属性提取方法进行解释说明。本实施例的用户属性提取方法可以包括:
步骤102、接收第二终端发送的图像数据。
本申请各实施例为得到实时用户的属性信息,通过获取用户的图像数据,以便通过对图像数据进行分析得到用户的用户属性信息。
本申请各实施例可应用于直播场景下,第一终端(如主播端)通过后台服务器与主播所在直播平台直播间的第二终端(如粉丝端)建立视频通信连接。
第一终端接收第二终端发送的图像数据,其中,该图像数据可以由第二终端主动发送,还可以是第二终端接收第一终端的信息获取请求返回的图像数据。其中的图像数据可以包括但不限于:第二终端的用户的图像数据。
在一个可选示例中,该步骤102可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第一接收模块302执行。
步骤104、基于图像数据,提取用户属性信息。
第一终端通过对图像数据进行人物识别,确定该图像中的人物对应的图像区域,再根据特征提取算法对该图像区域进行特征分析确定用户对应的用户属性信息。
本申请各实施例中，用户属性信息可以包括但不限于以下至少任意一项或多项：年龄信息、性别信息、发型信息、喜好信息、表情信息、衣着信息。
本申请各实施例的用户属性信息可以采用人脸检测算法或神经网络模型确定,还可以使用其它特征提取算法,对此本申请实施例不作可选限定。
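As an illustrative sketch of this extraction step (not part of the disclosed embodiments), the snippet below uses OpenCV's bundled Haar-cascade detector as one possible face-detection algorithm and leaves the attribute model as a hypothetical callable supplied by the caller; the function and parameter names are assumptions made only for this example.

```python
import cv2

def extract_user_attributes(image, attribute_model):
    """Locate the face region in `image` and run an attribute model on it.

    `attribute_model` is a hypothetical callable returning a dict such as
    {"age": ..., "gender": ..., "hairstyle": ..., "expression": ...}.
    """
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                      # no person detected in this frame
    x, y, w, h = faces[0]                # use the first detected face region
    return attribute_model(image[y:y + h, x:x + w])
```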
在一个可选示例中,该步骤104可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的提取模块304执行。
步骤106、确定用户属性信息对应的目标业务对象。
第一终端根据用户属性信息确定与之对应的目标业务对象,该目标业务对象为包含有语义信息的特效,例如可以包含广告信息的以下任意一种或多种形式的特效:二维贴纸特效、三维特效、粒子特效。例如使用贴纸形式展示的广告(即广告贴纸);或者,用于展示广告的特效,例如3D广告特效。但不限于此,其它形式的业务对象也同样适用本申请实施例提供的业务统计方案,例如APP或其它应用的文字说明或介绍,或者一定形式的与视频观众交互的对象(如电子宠物)等。
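To make the mapping from extracted attributes to a target business object concrete, here is a minimal rule-based sketch; the candidate list, field names, and matching rule are invented for illustration and do not come from the application. In practice the matching could equally be a learned ranking model.

```python
# Hypothetical candidate business objects (e.g. ad stickers) with simple targeting rules.
CANDIDATES = [
    {"id": "sticker_teen_sportswear", "gender": "male", "age_range": (13, 25)},
    {"id": "sticker_cosmetics", "gender": "female", "age_range": (18, 40)},
]

def pick_business_object(attributes):
    """Return the id of the first candidate whose targeting rule matches the attributes."""
    for candidate in CANDIDATES:
        low, high = candidate["age_range"]
        if (attributes.get("gender") == candidate["gender"]
                and low <= attributes.get("age", -1) <= high):
            return candidate["id"]
    return None                          # no candidate matched
```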
在一个可选示例中,该步骤106可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的确定模块306执行。
本实施例提供的用户属性提取方法,通过接收第二终端发送的图像数据,基于图像数据,提取用户属性信息,确定用户属性信息对应的目标业务对象,可以实时获取用户的生物图像,简单快捷,还可以确保用户属性信息的真实性,通过该用户属性信息确定出的目标业务对象,更符合用户的需求。
参照图2，示出了本申请实施例另一种用户属性提取方法的流程图，本实施例用于第一终端，可以包括：
步骤202、向第二终端发送信息获取请求,以触发第二终端发送图像数据。
第一终端向第二终端发送信息获取请求,第二终端根据信息该获取请求获取该第二终端用户的图像数据,该信息获取请求可以是多种形式,例如通知消息,又如附加在游戏对象上的互动请求。
例如,主播通过第一终端向多个第二终端的粉丝用户发送互动游戏请求,在该互动游戏请求中携带信息获取请求。
例如,主播在直播过程中号召粉丝一起玩互动游戏,并向多个粉丝发送互动游戏的互动请求,粉丝接收到互动请求后,通过互动请求的触发,互动游戏将在粉丝端的界面进行展示,同时获取到粉丝端摄像头的权限,以及采集粉丝的图像数据。
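The following sketch shows one hypothetical shape that an interactive-game invitation carrying an embedded information acquisition request might take; every field name here is an assumption made for illustration, as the application does not specify a message format.

```python
import json
import time

def build_interactive_game_request(anchor_id, game_id):
    """Compose an interactive-game invitation that also carries an information
    acquisition request asking the fan-side terminal to capture image data."""
    return json.dumps({
        "type": "interactive_game_invite",
        "anchor_id": anchor_id,
        "game_id": game_id,
        "timestamp": int(time.time()),
        # embedded information acquisition request (illustrative field names)
        "info_request": {"capture": "image", "source": "camera"},
    })
```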
在一个可选示例中,该步骤202可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第一发送模块308执行。
步骤204、接收第二终端发送的图像数据。
第二终端接收到由第一终端发送的信息获取请求,通过该第二终端的粉丝用户对该信息获取请求的确定,由第二终端的图像采集设备采集该第二终端用户的图像数据,并将该图像数据发送给第一终端。
本申请各实施例中的图像数据可以包括视频图像数据或静态图像数据,例如小视频或图片。
在一个可选示例中,该步骤204可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第一接收模块302执行。
步骤206、基于图像数据,提取用户属性信息。
在本申请各实施例中，每个第二终端设备的标识（ID号）都有一相对应的用户，为使用户属性信息更有价值，需要确定与每个第二终端ID对应的目标人物，对目标人物的图像数据进行用户属性信息提取。
例如,按照时间间隔,向第二终端发送信息获取请求,获取多个图像数据,对多个图像数据进行人物识别,将多个图像数据中人物出现比率最高(即:出现次数最多)的人物确定为目标人物,并确定该目标人物的用户属性信息。
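A minimal sketch of selecting the target person as the one appearing most often across the collected images might look as follows; `identify_person` is a hypothetical helper that returns a stable person identifier for one image (for example by matching face embeddings), and is not specified by the application.

```python
from collections import Counter

def select_target_person(images, identify_person):
    """Return the identifier of the person that appears most often in `images`."""
    counts = Counter()
    for image in images:
        person_id = identify_person(image)   # hypothetical: stable ID or None
        if person_id is not None:
            counts[person_id] += 1
    if not counts:
        return None
    target_id, _ = counts.most_common(1)[0]
    return target_id
```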
需要说明的是,在本实施例中将确定的用户属性信息进行存储,还可以按照时间间隔进行图像数据的获取,并基于获取到的图像数据提取用户属性信息,基于新的用户属性信息对存储的用户属性信息进行更新,即根据时间间隔更新用户属性信息。
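One possible way to realise the time-interval update described above is a simple refresh loop, sketched below under the assumption of three hypothetical helpers (`request_image`, `extract_attributes`, and a `store` mapping); none of these names come from the application.

```python
import time

def refresh_user_attributes(terminal_id, interval_seconds,
                            request_image, extract_attributes, store):
    """Periodically request fresh image data and overwrite the stored attributes."""
    while True:
        image = request_image(terminal_id)   # ask the second terminal for new image data
        if image is not None:
            store[terminal_id] = extract_attributes(image)
        time.sleep(interval_seconds)
```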
在本申请各实施例的一种可选方案中,当获取到的图像数据中包括多个人物时,将多个人物中出现比率最高的人物确定为目标人物,并确定目标人物的用户属性信息。
基于图像数据,判断该图像数据中的人物是否存在目标人物,当确定该图像数据中的人物存在目标人物时,对该目标人物对应的图像区域进行特征分析,确定目标人物的用户属性信息。
在一个可选示例中,该步骤206可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的提取模块304执行。
步骤208、确定用户属性信息对应的目标业务对象。
在本实施例中，对用户属性信息中的各个信息进行加权计算，用户属性信息中每个信息的权值可根据该信息的属性进行设定，例如年龄信息和性别信息相对变化较小，则可设定较小的权值，而衣着信息随季节的变化较大，则可设定较大的权值，据此，确定出用户属性信息。例如年龄信息的权值为10%、性别信息的权值为10%、发型信息的权值为10%、喜好信息的权值为10%、表情信息的权值为20%、衣着信息的权值为40%。
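Using the example weights given above (age 10%, gender 10%, hairstyle 10%, preference 10%, expression 20%, clothing 40%), a weighted match between the user attribute information and a business object's target profile could be sketched as follows; the per-field `similarity` function is a hypothetical helper returning a value in [0, 1].

```python
# Weights taken from the example in the paragraph above.
WEIGHTS = {
    "age": 0.10, "gender": 0.10, "hairstyle": 0.10,
    "preference": 0.10, "expression": 0.20, "clothing": 0.40,
}

def match_score(user_attributes, target_profile, similarity):
    """Weighted sum of per-field similarities between attributes and a target profile."""
    score = 0.0
    for field, weight in WEIGHTS.items():
        score += weight * similarity(field,
                                     user_attributes.get(field),
                                     target_profile.get(field))
    return score
```

The business object whose target profile yields the highest score would then be chosen as the target business object.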
确定与用户属性信息匹配度最优的目标业务对象,可依次采用每个属性信息确定。
在本实施例中的目标业务对象与上述实施例中的目标业务对象类似,在此不再赘述。
例如:先根据用户属性信息确定用户的年龄段、性别;再由用户属性信息中的发型信息和表情信息确定用户的性格;最后根据用户属性信息确定用户的衣着信息。例如,确定为用户为15-18岁的男性、用户的性格为阳光开朗型、衣着信息显示用户的衣着为耐克运动系列服装,由此,可以确定待推送的目标业务对象为男性青少年耐克系列的运动装。
在一个可选示例中,该步骤208可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的确定模块306执行。
步骤210、向第二终端推送目标业务对象。
第一终端将确定好的目标业务对象向第二终端推送,该第二终端接收到目标业务对象后可以在第二终端的直播界面进行展示。
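A minimal sketch of the push step, assuming a hypothetical transport helper `send(terminal_id, payload)` (for example a long-lived connection held by the streaming backend):

```python
import json

def push_business_object(send, terminal_id, business_object_id):
    """Push the chosen business object to the fan-side terminal for display
    on its live-streaming interface."""
    payload = json.dumps({"type": "business_object_push",
                          "object_id": business_object_id})
    send(terminal_id, payload)
```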
在一个可选示例中,该步骤210可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第二发送模块310执行。
在本申请各实施例中的应用场景除直播类视频互动外,还可包括其它形式的视频互动,例如社交软件类的视频通话,又如微信视频、QQ视频等,对此本实施例不作可选限定。
本申请实施例的用户属性提取方法，通过向第二终端发送信息获取请求，接收第二终端发送的图像数据，基于图像数据，对图像数据进行人物识别，确定图像数据中的人物是否为目标人物，当是目标人物时，进而提取目标人物的用户属性信息，再确定用户属性信息对应的目标业务对象，向第二终端推送目标业务对象，可实现根据生物图像确定用户属性信息，既简单快捷，又真实有效，通过该用户属性信息确定出的目标业务对象，更符合用户的需求，实现了个性化推荐和精准营销的策略，通过时间间隔进行图像数据获取，还可以对用户属性信息进行定时更新，保证信息的有效性。
参照图3,示出了本申请实施例一种用户属性提取装置的结构框图,通过本实施例的用户属性提取装置可作为第一终端,执行如图1所示的用户属性提取方法。该实施例的用户属性提取装置可以包括如下模块:
第一接收模块302,用于接收第二终端发送的图像数据。
提取模块304,用于基于所述图像数据,提取用户属性信息。
确定模块306,用于确定所述用户属性信息对应的目标业务对象。
本实施例提供的用户属性提取装置,通过接收第二终端发送的图像数据,基于图像数据,提取用户属性信息,确定用户属性信息对应的目标业务对象,可以实时获取用户的生物图像,简单快捷,还可以确保用户属性信息的真实性,通过该用户属性信息确定出的目标业务对象,更符合用户的需求。
参照图4,示出了本申请实施例另一种用户属性提取装置的结构框图,通过本实施例的用户属性提取装置可作为第一终端,执行如图2所示的用户属性提取方法。该实施例的用户属性提取装置可以包括如下模块:
第一发送模块308,用于向所述第二终端发送信息获取请求,以触发所述第二终端发送图像数据;其中,所述信息获取请求用于指示所述第二终端通过图像采集设备采集图像数据。
可选地,所述第一发送模块308还可用于按照时间间隔,向所述第二终端发送信息获取请求。
其中,所述图像数据可以包括但不限于视频图像数据或静态图像数据。
第一接收模块302,用于接收第二终端发送的图像数据。
提取模块304,用于当所述图像数据中包括多个人物时,将所述多个人物中出现比率最高的人物作为目标人物;提取所述目标人物对应的用户属性信息。
其中,所述用户属性信息例如可以包括但不限于以下任意一项或多项:年龄信息、性别信息、发型信息、喜好信息、表情信息、衣着信息。
确定模块306,用于确定所述用户属性信息对应的目标业务对象。
第二发送模块310,用于向所述第二终端推送所述目标业务对象。
本申请实施例的用户属性提取装置，通过向第二终端发送信息获取请求，接收第二终端发送的图像数据，基于图像数据，对图像数据进行人物识别，确定图像数据中的人物是否为目标人物，当是目标人物时，进而提取目标人物的用户属性信息，再确定用户属性信息对应的目标业务对象，向第二终端推送目标业务对象，可实现根据生物图像确定用户属性信息，既简单快捷，又真实有效，通过该用户属性信息确定出的目标业务对象，更符合用户的需求，实现了个性化推荐和精准营销的策略，通过时间间隔进行图像数据获取，还可以对用户属性信息进行定时更新，保证信息的有效性。
参照图5，示出了本申请实施例又一种用户属性提取方法的流程图。本实施例用于第二终端，以直播场景下的粉丝端为例，对本申请实施例的用户属性提取方法进行解释说明。本实施例的用户属性提取方法可以包括：
步骤502、当接收到第一终端发送的信息获取请求时,获取图像数据。
本申请实施例为得到实时用户的属性信息,通过获取用户的图像数据,进而通过对图像数据进行分析得到用户的用户属性信息。
本申请各实施例可以应用于直播场景下,第一终端(如主播端)通过后台服务器与主播所在直播平台直播间的第二终端(粉丝端)建立视频通信连接。
当第二终端接收到第一终端发送的信息获取请求时,该第二终端的用户通过对该信息获取请求的确认,以使该第二终端的图像采集设备获取该第二终端用户的图像数据。
其中的图像数据可以包括视频图像数据或静态图像数据,例如小视频或图片。
在一个可选示例中,该步骤502可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的获取模块702执行。
步骤504、将图像数据发送给第一终端,以使第一终端基于图像数据提取用户属性信息,并确定该用户属性信息对应的目标业务对象。
在图像数据采集完成后,由第二终端向第一终端发送,第一终端接收到该图像数据后,第一终端通过对图像数据进行人物识别,确定该图像中的人物对应的图像区域,再根据特征提取算法对该区域进行特征分析确定用户对应的用户属性信息。
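On the second-terminal side, the capture-and-send part of this flow could be sketched as below; OpenCV is used for capture and JPEG encoding as an assumption, and `send_to_first_terminal` is a hypothetical transport helper rather than an interface defined by the application.

```python
import cv2

def handle_info_request(send_to_first_terminal, camera_index=0):
    """Capture one frame from the local camera, JPEG-encode it, and send it
    to the first terminal as the requested image data."""
    cap = cv2.VideoCapture(camera_index)
    grabbed, frame = cap.read()
    cap.release()
    if not grabbed:
        return False                      # camera unavailable, nothing to send
    ok, encoded = cv2.imencode(".jpg", frame)
    if not ok:
        return False
    send_to_first_terminal(encoded.tobytes())
    return True
```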
本申请各实施例中的用户属性信息可以包括但不限于以下任意一项或多项：年龄信息、性别信息、发型信息、喜好信息、表情信息、衣着信息。
本实施例的用户属性信息可以采用人脸检测算法或神经网络模型确定,还可以使用其它特征提取算法,对此本申请实施例不作可选限定。
第一终端根据用户属性信息确定与之对应的目标业务对象,目标业务对象为包含有语义信息的特效,如包含广告信息的以下至少一种形式的特效:二维贴纸特效、三维特效、粒子特效。如使用贴纸形式展示的广告(即广告贴纸);或者,用于展示广告的特效,如3D广告特效。但不限于此,其它形式的业务对象也同样适用本申请实施例提供的业务统计方案,如APP或其它应用的文字说明或介绍,或者一定形式的与视频观众交互的对象(如电子宠物)等。
在一个可选示例中,该步骤504可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第三发送模块704执行。
本实施例提供的用户属性提取方法,通过当接收到第一终端发送的信息获取请求时,获取图像数据,将图像数据发送给第一终端,以使第一终端基于图像数据提取用户属性信息,并确定用户属性信息对应的目标业务对象,通过信息获取请求实时获取用户的生物图像,简单快捷,同时还可以确保用户属性信息的真实性,通过该用户属性信息确定出的目标业务对象,更符合用户的需求。
参照图6,示出了本申请实施例一种用户属性提取方法的步骤流程图,本实施例用于第二终端,可以包括如下步骤:
步骤602、当接收到第一终端发送的信息获取请求时,通过图像采集设备采集图像数据。
当第二终端接收到第一终端发送的信息获取请求时,通过对该信息获取请求的触发,以获取到第二终端图像采集设备的权限并通过该图像采集设备采集第二终端用户的图像数据。其中的信息获取请求可以是多种形式,例如通知消息,又如附加在游戏对象上的互动请求。
例如，主播通过第一终端向多个第二终端的粉丝用户发送互动游戏请求，在该互动游戏请求中携带信息获取请求。
例如,当接收到第一终端发送的信息获取请求时,显示图像采集设备启用提示消息;当检测到基于图像采集设备启用提示消息的用户确认指令时,通过图像采集设备采集图像数据。
本实施例的图像采集设备可以包括第二终端的摄像头,或与第二终端关联的具备拍摄功能的智能设备。
例如，主播在直播过程中号召粉丝一起玩互动游戏，并向多个粉丝发送互动游戏的互动请求，粉丝接收到互动请求后在粉丝端界面显示图像采集设备启用提示消息，通过互动请求的触发，即对图像采集设备启用提示消息的确认，互动游戏将在粉丝端的界面进行展示，同时获取到粉丝端摄像头的权限，以采集粉丝的图像数据。
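A sketch of the confirm-then-capture behaviour on the fan side is given below; a console prompt stands in for the camera-enable prompt message described above, and OpenCV capture is again an illustrative assumption.

```python
import cv2

def capture_after_confirmation(camera_index=0):
    """Show a camera-enable prompt and capture one frame only after the user confirms."""
    answer = input("The host invites you to an interactive game; enable the camera? [y/N] ")
    if answer.strip().lower() != "y":
        return None                      # user declined, no image data is collected
    cap = cv2.VideoCapture(camera_index)
    grabbed, frame = cap.read()          # grab a single frame as the image data
    cap.release()
    return frame if grabbed else None
```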
在一个可选示例中,该步骤602可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的获取模块702执行。
步骤604、将图像数据发送给第一终端,以使第一终端基于图像数据提取用户属性信息,并确定用户属性信息对应的目标业务对象。
在图像数据采集完成后,由第二终端向第一终端发送,第一终端接收到该图像数据后,第一终端通过对图像数据进行人物识别,确定该图像中的人物对应的图像区域,再根据特征提取算法对该区域进行特征分析确定用户对应的用户属性信息。
确定与用户属性信息匹配度最优的目标业务对象,可依次采用每个属性信息确定。
在本实施例中的目标业务对象与上述实施例中的目标业务对象类似,在此不再赘述。
例如,先根据用户属性信息确定用户的年龄段、性别;再由用户属性信息中的发型信息和表情信息确定用户的性格;最后根据用户属性信息确定用户的衣着信息。如,确定为用户为15-18岁的男性、用户的性格为阳光开朗型、衣着信息显示用户的衣着为耐克运动系列服装,由此,可以确定待推送的目标业务对象为男性青少年耐克系列的运动装。
在一个可选示例中,该步骤604可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第三发送模块704执行。
步骤606、接收第一终端推送的目标业务对象。
在一个可选示例中,该步骤606可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第二接收模块706执行。
步骤608、展示目标业务对象。
第一终端将确定好的目标业务对象向第二终端推送,第二终端接收到目标业务对象后可以在第二终端的直播界面进行展示。
在一个可选示例中,该步骤608可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的展示模块708执行。
在本实施例中的应用场景除直播类视频互动外,还可包括其它形式的视频互动,如社交软件类的视频通话,又如微信视频、QQ视频等,对此本实施例不作可选限定。
本实施例提供的用户属性提取方法，通过当接收到第一终端发送的信息获取请求时，获取图像数据，将图像数据发送给第一终端，以使第一终端基于图像数据提取用户属性信息，并确定用户属性信息对应的目标业务对象，通过信息获取请求实时获取用户的生物图像，简单快捷，还可以确保用户属性信息的真实性，通过该用户属性信息确定出的目标业务对象，更符合用户的需求，实现了个性化推荐和精准营销的策略，粉丝可通过在观看直播的同时查看符合自己的需求的目标业务对象，提高了用户体验。
参照图7,示出了本申请实施例又一种用户属性提取装置的结构框图,通过本实施例的用户属性提取装置可用于第二终端,执行如图5所示的用户属性提取方法。该实施例的用户属性提取装置可以包括如下模块:
获取模块702,用于当接收到第一终端发送的信息获取请求时,获取图像数据。
第三发送模块704,用于将所述图像数据发送给所述第一终端,以使所述第一终端基于所述图像数据提取用户属性信息,并确定所述用户属性信息对应的目标业务对象。
本实施例提供的用户属性提取装置,通过当接收到第一终端发送的信息获取请求时,获取图像数据,将图像数据发送给第一终端,以使第一终端基于图像数据提取用户属性信息,并确定用户属性信息对应的目标业务对象,通过信息获取请求实时获取用户的生物图像,简单快捷,还可以确保用户属性信息的真实性,通过该用户属性信息确定出的目标业务对象,更符合用户的需求。
参照图8,示出了本申请实施例九提供的一种用户属性提取装置的结构框图,通过本实施例的用户属性提取装置可用于第二终端,执行如图6所示的用户属性提取方法。该实施例的用户属性提取装置可以包括如下模块:
获取模块702,用于当接收到所述第一终端发送的所述信息获取请求时,通过图像采集设备采集图像数据。
作为改进:所述获取模块702包括:显示子模块7022和采集子模块7024。
其中,显示子模块7022,用于当收到所述第一终端发送的所述信息获取请求时,显示图像采集设备启用提示消息;以及,
采集子模块7024,用于当检测到基于所述图像采集设备启用提示消息的用户确认指令时,通过图像采集设备采集图像数据。
其中,所述图像数据包括视频图像数据或静态图像数据;所述图像采集设备包括所述第二终端的摄像头,或与所述第二终端关联的具备拍摄功能的智能设备。
第三发送模块704,用于将所述图像数据发送给所述第一终端,以使所述第一终端基于所述图像数据提取用户属性信息,并确定所述用户属性信息对应的目标业务对象。
第二接收模块706,用于接收所述第一终端推送的目标业务对象。
展示模块708,用于展示所述目标业务对象。
本实施例提供的用户属性提取装置,通过当接收到第一终端发送的信息获取请求时,获取图像数据,将图像数据发送给第一终端,以使第一终端基于图像数据提取用户属性信息,并确定用户属性信息对应的目标业务对象,通过信息获取请求实时获取用户的生物图像,简单快捷,还可以确保用户属性信息的真实性,通过该用户属性信息确定出的目标业务对象,更符合用户的需求,实现了个性化推荐和精准营销的策略,粉丝可通过在观看直播的同时查看符合自己的需求的目标业务对象,提高了用户体验。
另外,本申请实施例还提供了一种电子设备,包括:处理器和存储器;
所述存储器用于存放至少一可执行指令,所述可执行指令使所述处理器执行如本申请任一实施例所述的用户属性提取方法对应的操作。
另外,本申请实施例还提供了另一种电子设备,包括:
处理器和本申请任一实施例所述的用户属性提取装置;
在处理器运行所述用户属性提取装置时,本申请任一实施例所述的用户属性提取装置中的单元被运行。
本申请各实施例的电子设备，例如可以是移动终端、个人计算机(PC)、平板电脑、服务器等。
另外,本申请实施例还提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在设备上运行时,所述设备中的处理器执行用于实现本申请任一实施例所述的用户属性提取方法中各步骤的指令。
另外，本申请实施例还提供了一种计算机可读存储介质，用于存储计算机可读取的指令，所述指令被执行时实现本申请任一实施例所述的用户属性提取方法中各步骤的操作。
下面参考图9，其示出了适于用来实现本申请实施例的终端设备或服务器的电子设备900的一个应用实施例的结构示意图：如图9所示，电子设备900包括一个或多个处理器、通信元件等，所述一个或多个处理器例如：一个或多个中央处理单元（CPU）901，和/或一个或多个图像处理器（GPU）913等，处理器可以根据存储在只读存储器（ROM）902中的可执行指令或者从存储部分908加载到随机访问存储器（RAM）903中的可执行指令而执行各种适当的动作和处理。通信元件包括通信组件912和/或通信接口909。其中，通信组件912可包括但不限于网卡，所述网卡可包括但不限于IB（Infiniband）网卡，通信接口909包括诸如LAN卡、调制解调器等的网络接口卡的通信接口，通信接口909经由诸如因特网的网络执行通信处理。
处理器可与只读存储器902和/或随机访问存储器903通信以执行可执行指令，通过通信总线904与通信组件912相连、并经通信组件912与其他目标设备通信，从而完成本申请实施例提供的任一项用户属性提取方法对应的操作，例如，接收第二终端发送的图像数据；基于所述图像数据，提取用户属性信息；确定所述用户属性信息对应的目标业务对象。再如，当接收到第一终端发送的信息获取请求时，获取图像数据；将所述图像数据发送给所述第一终端，以使所述第一终端基于所述图像数据提取用户属性信息，并确定所述用户属性信息对应的目标业务对象。
此外，在RAM 903中，还可存储有装置操作所需的各种程序和数据。CPU901或GPU913、ROM902以及RAM903通过通信总线904彼此相连。在有RAM903的情况下，ROM902为可选模块。RAM903存储可执行指令，或在运行时向ROM902中写入可执行指令，可执行指令使处理器执行上述用户属性提取方法对应的操作。输入/输出（I/O）接口905也连接至通信总线904。通信组件912可以集成设置，也可以设置为具有多个子模块（例如多个IB网卡），并在通信总线链接上。
以下部件连接至I/O接口905：包括键盘、鼠标等的输入部分906；包括诸如阴极射线管（CRT）、液晶显示器（LCD）等以及扬声器等的输出部分907；包括硬盘等的存储部分908；以及包括诸如LAN卡、调制解调器等的网络接口卡的通信接口909。驱动器910也根据需要连接至I/O接口905。可拆卸介质911，诸如磁盘、光盘、磁光盘、半导体存储器等等，根据需要安装在驱动器910上，以便于从其上读出的计算机程序根据需要被安装入存储部分908。
需要说明的,如图9所示的架构仅为一种可选实现方式,在可选实践过程中,可根据实际需要对上述图9的部件数量和类型进行选择、删减、增加或替换;在不同功能部件设置上,也可采用分离设置或集成设置等实现方式,例如GPU和CPU可分离设置或者可将GPU集成在CPU上,通信元件可分离设置,也可集成设置在CPU或GPU上,等等。这些可替换的实施方式均落入本申请的保护范围。
特别地,根据本申请实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本申请实施例包括一种计算机程序产品,其包括有形地包含在机器可读介质上的计算机程序,计算机程序包含用于执行流程图所示的方法的程序代码,程序代码可包括对应执行本申请实施例提供的方法步骤对应的指令,例如,当接收到第一终端发送的信息获取请求时,获取图像数据;将所述图像数据发送给所述第一终端,以使所述第一终端基于所述图像数据提取用户属性信息,并确定所述用户属性信息对应的目标业务对象。在这样的实施例中,该计算机程序可以通过通信元件从网络上被下载和安装,和/或从可拆卸介 质911被安装。在该计算机程序被处理器执行时,执行本申请实施例的方法中限定的上述功能。
本实施例提供的电子设备,通过当接收到第一终端发送的信息获取请求时,获取图像数据,将图像数据发送给第一终端,以使第一终端基于图像数据提取用户属性信息,并确定用户属性信息对应的目标业务对象,通过信息获取请求实时获取用户的生物图像,简单快捷,同时还可以确保用户属性信息的真实性,通过该用户属性信息确定出的目标业务对象,更符合用户的需求。
本申请实施例提供的任一种用户属性提取方法可以由任意适当的具有数据处理能力的设备执行,包括但不限于:终端设备和服务器等。或者,本申请实施例提供的任一种用户属性提取方法可以由处理器执行,如处理器通过调用存储器存储的相应指令来执行本申请实施例提及的任一种用户属性提取方法。下文不再赘述。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。本说明书中各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似的部分相互参见即可。对于装置、设备实施例而言,由于其与方法实施例基本对应,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
可能以许多方式来实现本申请的方法和装置、设备。例如,可通过软件、硬件、固件或者软件、硬件、固件的任何组合来实现本申请实施例的方法和装置、设备。用于方法的步骤的上述顺序仅是为了进行说明,本申请实施例的方法的步骤不限于以上可选描述的顺序,除非以其它方式特别说明。此外,在一些实施例中,还可将本申请实施为记录在记录介质中的程序,这些程序包括用于实现根据本申请实施例的方法的机器可读指令。因而,本申请还覆盖存储用于执行根据本申请实施例的方法的程序的记录介质。
本申请实施例的描述是为了示例和描述起见而给出的,而并不是无遗漏的或者将本申请限于所公开的形式,很多修改和变化对于本领域的普通技术人员而言是显然的。选择和描述实施例是为了更好说明本申请的原理和实际应用,并且使本领域的普通技术人员能够理解本申请从而设计适于特定用途的带有各种修改的各种实施例。

Claims (30)

  1. 一种用户属性提取方法,用于第一终端,其特征在于,包括:
    接收第二终端发送的图像数据;
    基于所述图像数据,提取用户属性信息;
    确定所述用户属性信息对应的目标业务对象。
  2. 根据权利要求1所述的方法,其特征在于,所述图像数据包括视频图像数据或静态图像数据。
  3. 根据权利要求1或2所述的方法,其特征在于,所述接收第二终端发送的图像数据之前,还包括:
    向所述第二终端发送信息获取请求,以触发所述第二终端发送图像数据;其中,所述信息获取请求用于指示所述第二终端通过图像采集设备采集图像数据。
  4. 根据权利要求3所述的方法,其特征在于,所述向所述第二终端发送信息获取请求,包括:
    按照时间间隔,向所述第二终端发送信息获取请求。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述基于所述图像数据,提取用户属性信息,包括:
    当所述图像数据中包括多个人物时,将所述多个人物中出现比率最高的人物作为目标人物;
    提取所述目标人物对应的用户属性信息。
  6. 根据权利要求1-5任一所述的方法,其特征在于,所述用户属性信息至少包括以下任意一项或多项:年龄信息、性别信息、发型信息、喜好信息、表情信息、衣着信息。
  7. 根据权利要求1-6任一所述的方法,其特征在于,还包括:
    向所述第二终端推送所述目标业务对象。
  8. 一种用户属性提取方法,用于第二终端,其特征在于,包括:
    当接收到第一终端发送的信息获取请求时,获取图像数据;
    将所述图像数据发送给所述第一终端,以使所述第一终端基于所述图像数据提取用户属性信息,并确定所述用户属性信息对应的目标业务对象。
  9. 根据权利要求8所述的方法,其特征在于,所述图像数据包括视频图像数据或静态图像数据。
  10. 根据权利要求8或9所述的方法，其特征在于，所述当接收到第一终端发送的信息获取请求时，获取图像数据，包括：
    当接收到所述第一终端发送的所述信息获取请求时,通过图像采集设备采集图像数据。
  11. 根据权利要求10所述的方法,其特征在于,所述当接收到所述第一终端发送的所述信息获取请求时,通过图像采集设备采集图像数据,包括:
    当接收到所述第一终端发送的所述信息获取请求时,显示图像采集设备启用提示消息;
    当检测到基于所述图像采集设备启用提示消息的用户确认指令时,通过图像采集设备采集图像数据。
  12. 根据权利要求10或11所述的方法,其特征在于,所述图像采集设备包括:所述第二终端的摄像头,或与所述第二终端关联的具备拍摄功能的智能设备。
  13. 根据权利要求8-12任一项所述的方法,其特征在于,还包括:
    接收所述第一终端推送的目标业务对象;
    展示所述目标业务对象。
  14. 一种用户属性提取装置,其特征在于,包括:
    第一接收模块,用于接收第二终端发送的图像数据;
    提取模块,用于基于所述图像数据,提取用户属性信息;
    确定模块,用于确定所述用户属性信息对应的目标业务对象。
  15. 根据权利要求14所述的装置,其特征在于,所述图像数据包括视频图像数据或静态图像数据。
  16. 根据权利要求14或15所述的装置,其特征在于,还包括:
    第一发送模块,用于向所述第二终端发送信息获取请求,以触发所述第二终端发送图像数据;其中,所述信息获取请求用于指示所述第二终端通过图像采集设备采集图像数据。
  17. 根据权利要求16所述的装置,其特征在于,所述第一发送模块,用于按照时间间隔,向所述第二终端发送信息获取请求。
  18. 根据权利要求14-17任一项所述的装置,其特征在于,
    所述提取模块,用于当所述图像数据中包括多个人物时,将所述多个人物中出现比率最高的人物作为目标人物;提取所述目标人物对应的用户属性信息。
  19. 根据权利要求14-18任一所述的装置,其特征在于,所述用户属性信息至少包括以下任意一项或多项:年龄信息、性别信息、发型信息、喜好信息、表情信息、衣着信息。
  20. 根据权利要求14-19任一所述的装置,其特征在于,还包括:
    第二发送模块,用于向所述第二终端推送所述目标业务对象。
  21. 一种用户属性提取装置,其特征在于,包括:
    获取模块,用于当接收到第一终端发送的信息获取请求时,获取图像数据;
    第三发送模块,用于将所述图像数据发送给所述第一终端,以使所述第一终端基于所述图像数据提取用户属性信息,并确定所述用户属性信息对应的目标业务对象。
  22. 根据权利要求21所述的装置,其特征在于,所述图像数据包括视频图像数据或静态图像数据。
  23. 根据权利要求21或22所述的装置,其特征在于,所述获取模块,用于当接收到所述第一终端发送的所述信息获取请求时,通过图像采集设备采集图像数据。
  24. 根据权利要求23所述的装置,其特征在于,所述获取模块包括:
    显示子模块，用于当接收到所述第一终端发送的所述信息获取请求时，显示图像采集设备启用提示消息；
    采集子模块,用于当检测到基于所述图像采集设备启用提示消息的用户确认指令时,通过图像采集设备采集图像数据。
  25. 根据权利要求23或24所述的装置,其特征在于,所述图像采集设备包括:所述第二终端的摄像头,或与所述第二终端关联的具备拍摄功能的智能设备。
  26. 根据权利要求21-25任一项所述的装置,其特征在于,还包括:
    第二接收模块,用于接收所述第一终端推送的目标业务对象;
    展示模块,用于展示所述目标业务对象。
  27. 一种电子设备,包括:处理器和存储器;
    所述存储器用于存放至少一可执行指令,所述可执行指令使所述处理器执行如权利要求1-13任一所述用户属性提取方法。
  28. 一种电子设备,其特征在于,包括:
    处理器和权利要求14-26任一项所述用户属性提取装置;
    在处理器运行所述用户属性提取装置时，权利要求14-26任一项所述用户属性提取装置中的单元被运行。
  29. 一种计算机程序,包括计算机可读代码,其特征在于,当所述计算机可读代码在设备上运行时,所述设备中的处理器执行用于实现权利要求1-13任一所述用户属性提取方法中各步骤的指令。
  30. 一种计算机可读存储介质,用于存储计算机可读取的指令,其特征在于,所述指令被执行时实现权利要求1-13任一所述用户属性提取方法中各步骤的操作。
PCT/CN2017/118705 2016-12-28 2017-12-26 用户属性提取方法、装置和电子设备 Ceased WO2018121541A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/314,410 US20190228227A1 (en) 2016-12-28 2017-12-26 Method and apparatus for extracting a user attribute, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611235485.8 2016-12-28
CN201611235485.8A CN108076128A (zh) 2016-12-28 2016-12-28 用户属性提取方法、装置和电子设备

Publications (1)

Publication Number Publication Date
WO2018121541A1 true WO2018121541A1 (zh) 2018-07-05

Family

ID=62161529

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/118705 Ceased WO2018121541A1 (zh) 2016-12-28 2017-12-26 用户属性提取方法、装置和电子设备

Country Status (3)

Country Link
US (1) US20190228227A1 (zh)
CN (1) CN108076128A (zh)
WO (1) WO2018121541A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738676A (zh) * 2020-06-05 2020-10-02 天津玛斯特车身装备技术有限公司 一种柔性生产线作业方法及系统
CN115103457A (zh) * 2022-06-02 2022-09-23 Oppo广东移动通信有限公司 数据传输方法、装置、电子设备和存储介质

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019227426A1 (zh) * 2018-05-31 2019-12-05 优视科技新加坡有限公司 多媒体数据处理方法、装置和设备/终端/服务器
CN109598578A (zh) * 2018-11-09 2019-04-09 深圳壹账通智能科技有限公司 业务对象数据的推送方法及装置、存储介质、计算机设备
CN109697196A (zh) * 2018-12-10 2019-04-30 北京大学 一种情境建模方法、装置及设备
CN109982148B (zh) * 2019-04-03 2022-05-20 广州虎牙信息科技有限公司 一种直播方法、装置、计算机设备与存储介质
CN111311303A (zh) * 2020-01-17 2020-06-19 北京市商汤科技开发有限公司 一种信息投放方法及装置、电子设备、存储介质
CN113823285A (zh) * 2021-09-30 2021-12-21 广东美的厨房电器制造有限公司 信息录入方法及其装置、家用电器和可读存储介质
CN114648796A (zh) * 2022-03-18 2022-06-21 成都商汤科技有限公司 用户识别方法、装置、存储介质及电子设备
CN115273149A (zh) * 2022-08-08 2022-11-01 浙江大华技术股份有限公司 对象的识别方法、装置、存储介质和电子装置

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2570965A2 (en) * 2011-09-15 2013-03-20 Omron Corporation Detection device, display control device and imaging control device, provided with the detection device, body detection method, control program, and recording medium
CN103164450A (zh) * 2011-12-15 2013-06-19 腾讯科技(深圳)有限公司 一种向目标用户推送信息的方法及装置
CN103377293A (zh) * 2013-07-05 2013-10-30 河海大学常州校区 多源输入、信息智能优化处理的全息触摸交互展示系统
US20150023552A1 (en) * 2013-07-18 2015-01-22 GumGum, Inc. Systems and methods for determining image safety
CN104915000A (zh) * 2015-05-27 2015-09-16 天津科技大学 用于裸眼3d广告的多感知生物识别交互方法
US20160314442A1 (en) * 2015-04-21 2016-10-27 Xiaomi Inc. Numerical value transfer method, terminal, cloud server and storage medium
CN106200918A (zh) * 2016-06-28 2016-12-07 广东欧珀移动通信有限公司 一种基于ar的信息显示方法、装置和移动终端
CN106326433A (zh) * 2016-08-25 2017-01-11 武克易 一种广告播放装置

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8156116B2 (en) * 2006-07-31 2012-04-10 Ricoh Co., Ltd Dynamic presentation of targeted information in a mixed media reality recognition system
US8997006B2 (en) * 2009-12-23 2015-03-31 Facebook, Inc. Interface for sharing posts about a live online event among users of a social networking system
KR20120076673A (ko) * 2010-12-13 2012-07-09 삼성전자주식회사 이동통신 시스템에서 광고 서비스 제공 방법 및 장치
US8401343B2 (en) * 2011-03-27 2013-03-19 Edwin Braun System and method for defining an augmented reality character in computer generated virtual reality using coded stickers
US20130095855A1 (en) * 2011-10-13 2013-04-18 Google Inc. Method, System, and Computer Program Product for Obtaining Images to Enhance Imagery Coverage
US20140066044A1 (en) * 2012-02-21 2014-03-06 Manoj Ramnani Crowd-sourced contact information and updating system using artificial intelligence
US9124950B2 (en) * 2012-03-26 2015-09-01 Max Abecassis Providing item information notification during video playing
JP5188639B1 (ja) * 2012-05-31 2013-04-24 株式会社 ディー・エヌ・エー ゲームプログラム、及び、情報処理装置
CN103488402B (zh) * 2012-06-14 2018-09-04 腾讯科技(深圳)有限公司 显示控制的方法、设备及系统
US20140101781A1 (en) * 2012-10-05 2014-04-10 Sedrick Andrew Bouknight Peer-to-peer, real-time, digital media distribution
US20140245335A1 (en) * 2013-02-25 2014-08-28 Comcast Cable Communications, Llc Environment Object Recognition
JP6369067B2 (ja) * 2014-03-14 2018-08-08 株式会社リコー 情報処理システム、情報処理方法、及びプログラム
CN103984741B (zh) * 2014-05-23 2016-09-21 合一信息技术(北京)有限公司 用户属性信息提取方法及其系统
WO2016004330A1 (en) * 2014-07-03 2016-01-07 Oim Squared Inc. Interactive content generation
CN104166713A (zh) * 2014-08-14 2014-11-26 百度在线网络技术(北京)有限公司 网络业务的推荐方法和装置
US20160205443A1 (en) * 2015-01-13 2016-07-14 Adsparx USA Inc System and method for real-time advertisments in a broadcast content
US11071919B2 (en) * 2015-06-30 2021-07-27 Amazon Technologies, Inc. Joining games from a spectating system
US10366440B2 (en) * 2015-10-28 2019-07-30 Adobe Inc. Monitoring consumer-product view interaction to improve upsell recommendations
US10221260B2 (en) * 2016-07-29 2019-03-05 Exxonmobil Chemical Patents Inc. Phenolate transition metal complexes, production and use thereof
US10057310B1 (en) * 2017-06-12 2018-08-21 Facebook, Inc. Interactive spectating interface for live videos
JP7030452B2 (ja) * 2017-08-30 2022-03-07 キヤノン株式会社 情報処理装置、情報処理装置の制御方法、情報処理システム及びプログラム

Also Published As

Publication number Publication date
CN108076128A (zh) 2018-05-25
US20190228227A1 (en) 2019-07-25

Similar Documents

Publication Publication Date Title
WO2018121541A1 (zh) 用户属性提取方法、装置和电子设备
JP6267861B2 (ja) 対話型広告のための使用測定技法およびシステム
US10325372B2 (en) Intelligent auto-cropping of images
WO2018033143A1 (zh) 视频图像的处理方法、装置和电子设备
US20200037040A1 (en) Visual hash tags via trending recognition activities, systems and methods
TWI648641B (zh) Wisdom TV data processing method, smart TV and smart TV system
US11087140B2 (en) Information generating method and apparatus applied to terminal device
EP3285222A1 (en) Facilitating television based interaction with social networking tools
US20190155864A1 (en) Method and apparatus for recommending business object, electronic device, and storage medium
US20180225377A1 (en) Method, server and terminal for acquiring information and method and apparatus for constructing database
CN108763532A (zh) 用于推送信息、展现信息的方法和设备
US12395711B2 (en) Dynamic code integration within network-delivered media
Indrawan et al. Face recognition for social media with mobile cloud computing
CN114442869A (zh) 用户分流处理的方法、装置、电子设备及存储介质
CN114283349A (zh) 一种数据处理方法、装置、计算机设备及存储介质
GB2574431A (en) Systems and method for automated boxing data collection and analytics platform
JP7130771B2 (ja) 注目情報の処理方法および装置、記憶媒体ならびに電子機器
JP2019057245A (ja) 情報処理装置及びプログラム
CN111756863A (zh) 内容推送方法、装置、处理设备及存储介质
US20130138505A1 (en) Analytics-to-content interface for interactive advertising
US20160315886A1 (en) Network information push method, apparatus and system based on instant messaging
CN110033291A (zh) 信息对象推送方法、装置和系统
EP3783561A1 (en) Information processing device, information processing system, information processing method, and program
CN112767006B (zh) 广告资源数据处理方法、装置、电子设备和可读存储介质
WO2015100070A1 (en) Presenting information based on a video

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17886214

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17886214

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 05/02/2020)
