WO2018121541A1 - User attribute extraction method, apparatus and electronic device - Google Patents
User attribute extraction method, apparatus and electronic device
- Publication number
- WO2018121541A1 (PCT/CN2017/118705)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- terminal
- image data
- user attribute
- information
- attribute information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/55—Push-based network services
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0242—Determining effectiveness of advertisements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
- G06F16/784—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/211—Selection of the most significant subset of features
- G06F18/2113—Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Definitions
- The embodiments of the present application relate to data processing technologies, and in particular, to a user attribute extraction method, apparatus, and electronic device.
- Determining user attributes based on user characteristics is important for areas such as user research, personalized recommendations, and precision marketing.
- The embodiments of the present application provide a user attribute extraction scheme.
- A user attribute extraction method for a first terminal is provided, including: receiving image data sent by a second terminal; extracting user attribute information based on the image data; and determining a target business object corresponding to the user attribute information.
- The image data includes video image data or static image data.
- Before receiving the image data sent by the second terminal, the method further includes: sending an information acquisition request to the second terminal to trigger the second terminal to send image data, where the information acquisition request is used to instruct the second terminal to collect image data through an image collection device.
- Sending the information acquisition request to the second terminal includes: sending the information acquisition request to the second terminal at a time interval.
- Extracting the user attribute information based on the image data includes: when the image data includes multiple persons, taking the person with the highest appearance ratio among the multiple persons as the target person, and extracting the user attribute information corresponding to the target person.
- The user attribute information includes at least one or more of the following: age information, gender information, hairstyle information, preference information, expression information, and clothing information.
- The method further includes: pushing the target business object to the second terminal.
- A user attribute extraction apparatus is provided, including: a first receiving module, configured to receive image data sent by a second terminal; an extraction module, configured to extract user attribute information based on the image data; and a determining module, configured to determine a target business object corresponding to the user attribute information.
- The image data includes video image data or static image data.
- The apparatus further includes: a first sending module, configured to send an information acquisition request to the second terminal to trigger the second terminal to send image data, where the information acquisition request is used to instruct the second terminal to collect image data through an image collection device.
- The first sending module is configured to send the information acquisition request to the second terminal at a time interval.
- The extraction module is configured to: when the image data includes multiple persons, take the person with the highest appearance ratio among the multiple persons as a target person, and extract user attribute information corresponding to the target person.
- The user attribute information includes at least one or more of the following: age information, gender information, hairstyle information, preference information, expression information, and clothing information.
- The apparatus further includes: a second sending module, configured to push the target business object to the second terminal.
- Another user attribute extraction method, for a second terminal, is provided, including: acquiring image data when an information acquisition request sent by a first terminal is received; and sending the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target business object corresponding to the user attribute information.
- The image data includes video image data or static image data.
- Acquiring the image data includes: collecting image data through an image collection device when the information acquisition request sent by the first terminal is received.
- Collecting the image data through the image collection device includes: displaying an image collection device enable prompt message when the information acquisition request sent by the first terminal is received; and collecting image data through the image collection device when a user confirmation instruction based on the image collection device enable prompt message is detected.
- The image collection device includes: a camera of the second terminal, or a smart device with a shooting function associated with the second terminal.
- The method further includes: receiving a target business object pushed by the first terminal; and displaying the target business object.
- Another user attribute extraction apparatus is provided, including: an acquisition module, configured to acquire image data when an information acquisition request sent by a first terminal is received; and a third sending module, configured to send the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target business object corresponding to the user attribute information.
- The image data includes video image data or static image data.
- The acquisition module is configured to collect image data through an image collection device when the information acquisition request sent by the first terminal is received.
- The acquisition module includes: a display submodule, configured to display an image collection device enable prompt message when the information acquisition request sent by the first terminal is received; and a collection submodule, configured to collect image data through the image collection device when a user confirmation instruction based on the image collection device enable prompt message is detected.
- The image collection device includes: a camera of the second terminal, or a smart device with a shooting function associated with the second terminal.
- The apparatus further includes: a second receiving module, configured to receive a target business object pushed by the first terminal; and a display module, configured to display the target business object.
- An electronic device is provided, including: a processor and a memory, where the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the user attribute extraction method described above in the present application.
- Another electronic device is provided, including: a processor and the user attribute extraction apparatus according to any one of the foregoing embodiments of the present application; when the processor runs the operation apparatus of the business object, the units in the user attribute extraction apparatus according to any one of the foregoing embodiments of the present application are run.
- A computer program is provided, including computer readable code; when the computer readable code runs on a device, a processor in the device executes instructions for implementing the steps of the user attribute extraction method according to any one of the foregoing embodiments of the present application.
- A computer readable storage medium is provided, configured to store computer readable instructions; when the instructions are executed, the operations of the steps of the user attribute extraction method according to any one of the foregoing embodiments of the present application are implemented.
- The user attribute extraction scheme provided in the embodiments of the present application receives the image data sent by the second terminal, extracts user attribute information based on the image data, and determines a target business object corresponding to the user attribute information.
- Acquiring a biological image of the user in real time is simple and fast, and can also ensure the authenticity of the user attribute information, and the target business object determined from the user attribute information better matches the user's needs.
- FIG. 1 is a flowchart of a user attribute extraction method according to an embodiment of the present application.
- FIG. 2 is a flowchart of another user attribute extraction method according to an embodiment of the present application.
- FIG. 3 is a structural block diagram of a user attribute extraction apparatus according to an embodiment of the present application.
- FIG. 4 is a structural block diagram of another user attribute extraction apparatus according to an embodiment of the present application.
- FIG. 5 is a flowchart of still another method for extracting user attributes according to an embodiment of the present application.
- FIG. 6 is a flowchart of still another user attribute extraction method according to an embodiment of the present application.
- FIG. 7 is a structural block diagram of still another user attribute extraction apparatus according to an embodiment of the present application.
- FIG. 8 is a structural block diagram of still another user attribute extraction apparatus according to an embodiment of the present application.
- FIG. 9 is a schematic structural diagram of an application embodiment of an electronic device according to an embodiment of the present application.
- Embodiments of the present application can be applied to electronic devices such as terminal devices, computer systems, servers, etc., which can operate with numerous other general purpose or special purpose computing system environments or configurations.
- Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, distributed cloud computing technology environments including any of the above, and the like.
- Electronic devices such as terminal devices, computer systems, and servers can be described in the general context of computer system executable instructions (such as program modules) being executed by a computer system.
- Program modules may include routines, programs, target programs, components, logic, data structures, and the like that perform particular tasks or implement particular abstract data types.
- The computer system/server can be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communication network.
- Program modules may be located on local or remote computing system storage media including storage devices.
- Referring to FIG. 1, a flowchart of a user attribute extraction method according to an embodiment of the present application is shown.
- This embodiment is used for the first terminal, and the user attribute extraction method in the embodiment of the present application is explained by taking the anchor end in a live broadcast scenario as an example.
- The user attribute extraction method of this embodiment may include:
- Step 102: Receive image data sent by the second terminal.
- To obtain attribute information of the user in real time, image data of the user is acquired, so that the user attribute information is obtained by analyzing the image data.
- The embodiments of the present application can be applied to a live broadcast scenario, where the first terminal (such as the anchor end) establishes a video communication connection with the second terminal (such as a fan end) during a live broadcast on the live broadcast platform where the anchor is located.
- The first terminal receives the image data sent by the second terminal, where the image data may be actively sent by the second terminal, or may be returned by the second terminal in response to an information acquisition request from the first terminal.
- The image data may include, but is not limited to, image data of a user of the second terminal.
- Step 102 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first receiving module 302 executed by the processor.
- Step 104: Extract user attribute information based on the image data.
- The first terminal determines the image region corresponding to a person in the image by performing person recognition on the image data, and then performs feature analysis on the image region according to a feature extraction algorithm to determine the user attribute information corresponding to the user.
- The user attribute information may include, but is not limited to, at least one or more of the following: age information, gender information, hairstyle information, preference information, expression information, and clothing information.
- The user attribute information in the embodiments of the present application may be determined by using a face detection algorithm or a neural network model, and other feature extraction algorithms may also be used.
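- As a non-limiting illustration of the extraction in step 104, the following Python sketch detects faces in the received image data and passes each face crop to an attribute classifier; the OpenCV Haar cascade is used for face detection, while `attribute_model` is a hypothetical component (any face-attribute classifier could be substituted), since the embodiments do not prescribe a particular algorithm.

```python
# Illustrative sketch of step 104: extract user attribute information from
# received image data. Face detection uses OpenCV's Haar cascade;
# `attribute_model` is an assumed classifier mapping a face crop to
# attributes such as age, gender, and expression.
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_user_attributes(image_bytes, attribute_model):
    """Decode image data, locate faces, and infer attribute information."""
    image = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), cv2.IMREAD_COLOR)
    if image is None:
        return []
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    attributes = []
    for (x, y, w, h) in faces:
        face_crop = image[y:y + h, x:x + w]
        # attribute_model is an assumed component, e.g. a neural network that
        # returns {"age": ..., "gender": ..., "expression": ...} for a face.
        attributes.append(attribute_model(face_crop))
    return attributes
```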
- Step 104 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by an extraction module 304 executed by the processor.
- Step 106: Determine a target business object corresponding to the user attribute information.
- The first terminal determines, according to the user attribute information, a corresponding target business object, where the target business object is a special effect containing semantic information, for example, any one or more of the following special effects containing advertisement information: a two-dimensional sticker special effect, a three-dimensional special effect, or a particle special effect.
- For example, the target business object may be an advertisement displayed in the form of a sticker (that is, an advertisement sticker), or an effect for displaying an advertisement, such as a 3D advertisement effect.
- The present application is not limited thereto, and other forms of business objects are also applicable to the solution provided by the embodiments of the present application, such as a text description or introduction of an APP or another application, or an object in a certain form (such as an electronic pet) that interacts with the video audience.
- Step 106 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a determination module 306 executed by the processor.
- The user attribute extraction method receives the image data sent by the second terminal, extracts the user attribute information based on the image data, and determines the target business object corresponding to the user attribute information; the biological image of the user can be obtained in real time, which is simple and fast, the authenticity of the user attribute information can be ensured, and the target business object determined from the user attribute information better matches the user's needs.
- Referring to FIG. 2, a flowchart of another user attribute extraction method in the embodiment of the present application is shown.
- Step 202: Send an information acquisition request to the second terminal to trigger the second terminal to send the image data.
- The first terminal sends an information acquisition request to the second terminal, and the second terminal acquires image data of the second terminal user according to the information acquisition request; the information acquisition request may take various forms, such as a notification message or an interaction request attached to a game object.
- For example, the anchor sends an interactive game request to the fan users of multiple second terminals through the first terminal, and the information acquisition request is carried in the interactive game request.
- The anchor calls on the fans to play an interactive game during the live broadcast and sends interactive requests for the interactive game to multiple fans; after a fan receives the interactive request, the interactive game is displayed on the fan-side interface through the trigger of the interactive request, permission to access the fan-side camera is obtained, and image data of the fan is captured.
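- For illustration only, the information acquisition request described above could be embedded as a field of the interactive game request; the Python sketch below shows one possible message layout, where the field names (`type`, `game_id`, `info_acquisition`, and so on) are assumptions rather than a format defined by the embodiments.

```python
# Hypothetical interactive game request (step 202) that carries an embedded
# information acquisition request instructing the fan side to collect image
# data via its camera. All field names are illustrative assumptions.
import json
import time

def build_interactive_game_request(anchor_id, game_id):
    return json.dumps({
        "type": "interactive_game_request",
        "from": anchor_id,
        "game_id": game_id,
        "timestamp": int(time.time()),
        "info_acquisition": {            # embedded information acquisition request
            "requested": True,
            "capture_source": "camera",  # collect via the image collection device
            "image_type": "static_or_video",
        },
    })

request_payload = build_interactive_game_request("anchor_001", "quiz_42")
```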
- Step 202 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first sending module 308 executed by the processor.
- Step 204: Receive image data sent by the second terminal.
- The image data in various embodiments of the present application may include video image data or static image data, such as a short video or a picture.
- Step 204 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by the first receiving module 302 executed by the processor.
- Step 206: Extract user attribute information based on the image data.
- The identifier (ID number) of each second terminal device has a corresponding user. To make the user attribute information more valuable, a target person corresponding to each second terminal ID needs to be determined, and the user attribute information is extracted from the image data of the target person.
- After the information acquisition request is sent to the second terminal, multiple pieces of image data are acquired, person recognition is performed on the multiple pieces of image data, the person with the highest occurrence ratio in the multiple pieces of image data (that is, the person who appears most frequently) is determined as the target person, and the user attribute information of the target person is determined.
- The determined user attribute information is stored; image data may also be acquired at a time interval, user attribute information extracted based on the newly acquired image data, and the stored user attribute information updated based on the new user attribute information.
- That is, the user attribute information is updated at a time interval.
- When the image data includes multiple persons, the person with the highest occurrence ratio among the multiple persons is determined as the target person, and the user attribute information of the target person is determined.
- Based on the image data, it is determined whether the persons in the image data include the target person; when it is determined that the persons in the image data include the target person, feature analysis is performed on the image region corresponding to the target person, and the user attribute information of the target person is determined.
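- A minimal sketch of the target-person selection described above is given below; it assumes a hypothetical `identify_persons(image)` helper (for example, face recognition returning a stable person identifier for each detected face) and takes the person seen in the largest fraction of the collected images as the target person.

```python
# Illustrative sketch of step 206: choose the target person as the person
# with the highest occurrence ratio across multiple pieces of image data.
# `identify_persons` is an assumed helper, not specified by the embodiments.
from collections import Counter

def select_target_person(images, identify_persons):
    counts = Counter()
    for image in images:
        # Count each person at most once per image so the tally reflects the
        # fraction of images in which the person appears.
        for person_id in set(identify_persons(image)):
            counts[person_id] += 1
    if not counts:
        return None, 0.0
    person_id, hits = counts.most_common(1)[0]
    return person_id, hits / len(images)
```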
- Step 206 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by an extraction module 304 executed by the processor.
- Step 208: Determine a target business object corresponding to the user attribute information.
- A weighted calculation is performed on each piece of information in the user attribute information, and the weight of each piece of information may be set according to the nature of that information; for example, age information and gender information change relatively little and may be given smaller weights, while clothing information changes greatly with the season and may be given a larger weight, and the weighted user attribute information is determined accordingly.
- For example, the weight of the age information is 10%, the weight of the gender information is 10%, the weight of the hairstyle information is 10%, the weight of the preference information is 10%, the weight of the expression information is 20%, and the weight of the clothing information is 40%.
- The target business object that best matches the user attribute information may also be determined by considering each piece of attribute information in turn.
- The target business object in this embodiment is similar to the target business object in the foregoing embodiment, and details are not described herein again.
- For example, the user's age group and gender are first determined according to the user attribute information; the user's personality is then determined from the hairstyle information and the expression information in the user attribute information; and finally the user's clothing is determined according to the clothing information in the user attribute information. For example, if it is determined that the user is a male aged 15-18, the user's personality is sunny, and the clothing information shows that the user wears Nike sports series clothing, it can be determined that the target business object to be pushed is Nike series sportswear for male teenagers.
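- The following sketch illustrates one way the weighted matching described above could be computed; the weights mirror the example percentages given earlier, while the per-attribute similarity (a simple equality test) and the candidate representation are assumptions made only for the sake of the example.

```python
# Illustrative weighted matching of candidate business objects against the
# user attribute information (step 208). Weights follow the example above;
# the equality-based similarity and the (object, tags) representation are
# simplifying assumptions.
WEIGHTS = {"age": 0.10, "gender": 0.10, "hairstyle": 0.10,
           "preference": 0.10, "expression": 0.20, "clothing": 0.40}

def match_score(user_attrs, business_object_tags):
    score = 0.0
    for attr, weight in WEIGHTS.items():
        if attr in user_attrs and user_attrs[attr] == business_object_tags.get(attr):
            score += weight
    return score

def select_target_business_object(user_attrs, candidates):
    # `candidates` is a non-empty list of (business_object, tags) pairs.
    return max(candidates, key=lambda c: match_score(user_attrs, c[1]))[0]
```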
- Step 208 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a determination module 306 executed by the processor.
- Step 210: Push the target business object to the second terminal.
- The first terminal pushes the determined target business object to the second terminal, and the second terminal can display the target business object on the live interface of the second terminal.
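- As an illustration of step 210, the push could be a small message that names the business object to render on the live interface of the second terminal; the field names and the `send_to_terminal` transport callback below are assumptions, since the embodiments do not prescribe a wire format or transport.

```python
# Hypothetical push of the target business object to the second terminal
# (step 210). `send_to_terminal` stands for whatever transport the live
# platform already uses (e.g. an existing long connection); all field names
# are illustrative assumptions.
import json

def push_business_object(terminal_id, business_object, send_to_terminal):
    message = json.dumps({
        "type": "business_object_push",
        "terminal_id": terminal_id,
        "object": {
            # e.g. a 2D sticker, 3D effect, or particle effect
            "kind": business_object.get("kind", "2d_sticker"),
            "resource_url": business_object.get("resource_url"),
        },
    })
    send_to_terminal(terminal_id, message)
```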
- Step 210 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a second sending module 310 executed by the processor.
- The application scenarios in the embodiments of the present application may also include other forms of video interaction, such as a video call in social software, for example a WeChat video call or a QQ video call, which is not limited in this embodiment.
- The user attribute extraction method of the embodiment of the present application sends an information acquisition request to the second terminal, receives the image data sent by the second terminal, and performs person recognition on the image data to determine whether a person in the image data is the target person.
- The user attribute information of the target person is extracted, and the target business object corresponding to the user attribute information is then determined and pushed to the second terminal. In this way, the user attribute information can be determined from a biological image, which is simple, fast, real, and effective; the target business object determined from the user attribute information better matches the user's needs, realizing personalized recommendation and precision marketing; and by acquiring image data at time intervals, the user attribute information can be updated regularly to ensure its validity.
- Referring to FIG. 3, a structural block diagram of a user attribute extraction apparatus according to an embodiment of the present application is shown.
- The user attribute extraction apparatus of this embodiment can be used as a first terminal to perform the user attribute extraction method shown in FIG. 1.
- The user attribute extraction apparatus of this embodiment may include the following modules:
- The first receiving module 302 is configured to receive image data sent by the second terminal.
- The extraction module 304 is configured to extract user attribute information based on the image data.
- The determining module 306 is configured to determine a target business object corresponding to the user attribute information.
- The user attribute extraction apparatus receives the image data sent by the second terminal, extracts the user attribute information based on the image data, and determines the target business object corresponding to the user attribute information; the biological image of the user can be obtained in real time, which is simple and fast, the authenticity of the user attribute information can be ensured, and the target business object determined from the user attribute information better matches the user's needs.
- Referring to FIG. 4, a structural block diagram of another user attribute extraction apparatus according to an embodiment of the present application is shown.
- The user attribute extraction apparatus of this embodiment can be used as the first terminal to execute the user attribute extraction method shown in FIG. 2.
- The user attribute extraction apparatus of this embodiment may include the following modules:
- The first sending module 308 is configured to send an information acquisition request to the second terminal to trigger the second terminal to send image data, where the information acquisition request is used to instruct the second terminal to collect image data through an image collection device.
- The first sending module 308 is further configured to send the information acquisition request to the second terminal at a time interval.
- The image data may include, but is not limited to, video image data or static image data.
- The first receiving module 302 is configured to receive the image data sent by the second terminal.
- The extraction module 304 is configured to: when multiple persons are included in the image data, take the person with the highest occurrence ratio among the multiple persons as a target person, and extract user attribute information corresponding to the target person.
- The user attribute information may include, for example but not limited to, any one or more of the following: age information, gender information, hairstyle information, preference information, expression information, and clothing information.
- The determining module 306 is configured to determine a target business object corresponding to the user attribute information.
- The second sending module 310 is configured to push the target business object to the second terminal.
- The user attribute extraction apparatus of the embodiment of the present application sends an information acquisition request to the second terminal, receives the image data sent by the second terminal, and performs person recognition on the image data to determine whether a person in the image data is the target person.
- The user attribute information of the target person is extracted, and the target business object corresponding to the user attribute information is then determined and pushed to the second terminal. In this way, the user attribute information can be determined from a biological image, which is simple, fast, real, and effective; the target business object determined from the user attribute information better matches the user's needs, realizing personalized recommendation and precision marketing; and by acquiring image data at time intervals, the user attribute information can be updated regularly to ensure its validity.
- Referring to FIG. 5, a flowchart of still another user attribute extraction method in the embodiment of the present application is shown.
- This embodiment is used for the second terminal, and the user attribute extraction method in the embodiment of the present application is explained by taking the fan end in the live broadcast scenario as an example.
- The user attribute extraction method of this embodiment may include:
- Step 502: Acquire image data when an information acquisition request sent by the first terminal is received.
- To obtain attribute information of the user in real time, image data of the user is acquired, so that the user attribute information is obtained by analyzing the image data.
- The embodiments of the present application can be applied to a live broadcast scenario, where the first terminal (such as the anchor end) establishes a video communication connection with the second terminal (the fan end) on the live broadcast platform through a background server.
- The user of the second terminal confirms the information acquisition request, and the image data of the second terminal user is thereby obtained.
- The image data may include video image data or static image data, such as a short video or a picture.
- Step 502 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by an acquisition module 702 executed by the processor.
- Step 504: Send the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target business object corresponding to the user attribute information.
- After collecting the image data, the second terminal sends it to the first terminal. After receiving the image data, the first terminal determines the image region corresponding to a person in the image by performing person recognition on the image data, and performs feature analysis on the image region according to a feature extraction algorithm to determine the user attribute information corresponding to the user.
- The user attribute information in each embodiment of the present application may include, but is not limited to, any one or more of the following: age information, gender information, hairstyle information, preference information, expression information, and clothing information.
- The user attribute information in this embodiment may be determined by using a face detection algorithm or a neural network model, and other feature extraction algorithms may also be used.
- The first terminal determines a target business object corresponding to the user attribute information, where the target business object is a special effect containing semantic information, such as at least one of the following special effects containing advertisement information: a two-dimensional sticker special effect, a three-dimensional special effect, or a particle special effect.
- For example, the target business object may be an advertisement displayed in the form of a sticker (that is, an advertisement sticker), or an effect for displaying an advertisement, such as a 3D advertisement effect.
- The present application is not limited thereto, and other forms of business objects are also applicable to the solution provided by the embodiments of the present application, such as a text description or introduction of an APP or another application, or an object in a certain form (such as an electronic pet) that interacts with the video audience.
- Step 504 may be performed by the processor invoking a corresponding instruction stored in the memory, or by a third sending module 704 executed by the processor.
- In the user attribute extraction method, when the information acquisition request sent by the first terminal is received, the image data is acquired and sent to the first terminal, so that the first terminal extracts the user attribute information based on the image data and determines the target business object corresponding to the user attribute information; the biological image of the user is obtained in real time through the information acquisition request, which is simple and fast, the authenticity of the user attribute information can be ensured, and the target business object determined from the user attribute information better matches the user's needs.
- Referring to FIG. 6, the user attribute extraction method for the second terminal in this embodiment may include the following steps:
- Step 602: When the information acquisition request sent by the first terminal is received, collect image data through the image collection device.
- When the second terminal receives the information acquisition request sent by the first terminal, it obtains permission to use the image collection device of the second terminal through the trigger of the information acquisition request, and collects the image data of the second terminal user through the image collection device.
- The information acquisition request may take various forms, such as a notification message or an interaction request attached to a game object.
- For example, the anchor sends an interactive game request to the fan users of multiple second terminals through the first terminal, and the interactive game request carries the information acquisition request.
- When the information acquisition request sent by the first terminal is received, an image collection device enable prompt message is displayed; when a user confirmation instruction based on the image collection device enable prompt message is detected, the image data is collected through the image collection device.
- The image collection device of this embodiment may include a camera of the second terminal or a smart device with a shooting function associated with the second terminal.
- The anchor calls on the fans to play an interactive game during the live broadcast and sends interactive requests for the interactive game to multiple fans.
- The fan side displays the image collection device enable prompt message on the fan interface; when the prompt is confirmed through the trigger of the interactive request, that is, when an enable confirmation message for the image collection device is received, the interactive game is displayed on the fan-side interface, permission to use the fan-side camera is obtained, and the image data of the fan is collected.
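- A minimal fan-side sketch of step 602 follows: a console prompt stands in for the image collection device enable prompt message, and OpenCV's `VideoCapture` stands in for the camera of the second terminal; an actual client would use its platform's UI and camera APIs instead.

```python
# Illustrative second-terminal flow (step 602): display an enable prompt for
# the image collection device and, only after the user confirms, capture a
# frame from the camera. A real client would use platform UI/camera APIs.
import cv2

def capture_after_confirmation():
    answer = input("The anchor requests access to your camera for an "
                   "interactive game. Enable it? [y/N] ")
    if answer.strip().lower() != "y":
        return None                    # user declined; no image data collected
    camera = cv2.VideoCapture(0)       # default camera of the terminal
    ok, frame = camera.read()          # capture a single static image
    camera.release()
    return frame if ok else None
```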
- Step 602 may be performed by the processor invoking a corresponding instruction stored in the memory, or may be performed by the acquisition module 702 executed by the processor.
- Step 604: Send the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target business object corresponding to the user attribute information.
- After collecting the image data, the second terminal sends it to the first terminal. After receiving the image data, the first terminal determines the image region corresponding to a person in the image by performing person recognition on the image data, and performs feature analysis on the image region according to a feature extraction algorithm to determine the user attribute information corresponding to the user.
- The target business object that best matches the user attribute information may be determined by considering each piece of attribute information in turn.
- The target business object in this embodiment is similar to the target business object in the foregoing embodiment, and details are not described herein again.
- For example, the user's age group and gender are first determined according to the user attribute information; the user's personality is then determined from the hairstyle information and the expression information in the user attribute information; and finally the user's clothing is determined according to the clothing information in the user attribute information. For example, if it is determined that the user is a male aged 15-18, the user's personality is sunny, and the clothing information shows that the user wears Nike sports series clothing, it can be determined that the target business object to be pushed is Nike series sportswear for male teenagers.
- Step 604 may be performed by the processor invoking a corresponding instruction stored in the memory, or by the third sending module 704 executed by the processor.
- Step 606: Receive the target business object pushed by the first terminal.
- Step 606 may be performed by the processor invoking a corresponding instruction stored in the memory, or by the second receiving module 706 executed by the processor.
- Step 608: Display the target business object.
- The first terminal pushes the determined target business object to the second terminal, and the second terminal can display the target business object on the live interface of the second terminal.
- Step 608 may be performed by the processor invoking a corresponding instruction stored in the memory, or by a display module 708 executed by the processor.
- The application scenario in this embodiment may include other types of video interaction, such as a video call in social software, for example a WeChat video call or a QQ video call, which is not limited in this embodiment.
- In the user attribute extraction method, when the information acquisition request sent by the first terminal is received, the image data is acquired and sent to the first terminal, so that the first terminal extracts the user attribute information based on the image data and determines the target business object corresponding to the user attribute information; the biological image of the user is obtained in real time through the information acquisition request, which is simple and fast, the authenticity of the user attribute information can be ensured, and the target business object determined from the user attribute information better matches the user's needs.
- This realizes personalized recommendation and precision marketing; fans can see target business objects that meet their own needs while watching the live broadcast, which improves the user experience.
- Referring to FIG. 7, a structural block diagram of a user attribute extraction apparatus according to another embodiment of the present application is shown.
- The user attribute extraction apparatus of this embodiment can be used in a second terminal to perform the user attribute extraction method shown in FIG. 5.
- The user attribute extraction apparatus of this embodiment may include the following modules:
- The acquisition module 702 is configured to acquire image data when an information acquisition request sent by the first terminal is received.
- The third sending module 704 is configured to send the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target business object corresponding to the user attribute information.
- The user attribute extraction apparatus, when receiving the information acquisition request sent by the first terminal, acquires the image data and sends it to the first terminal, so that the first terminal extracts the user attribute information based on the image data and determines the target business object corresponding to the user attribute information; the biological image of the user is obtained in real time through the information acquisition request, which is simple and fast, the authenticity of the user attribute information can be ensured, and the target business object determined from the user attribute information better matches the user's needs.
- Referring to FIG. 8, a structural block diagram of a user attribute extraction apparatus provided in Embodiment 9 of the present application is shown.
- The user attribute extraction apparatus of this embodiment can be used in a second terminal to execute the user attribute extraction method shown in FIG. 6.
- The user attribute extraction apparatus of this embodiment may include the following modules:
- The acquisition module 702 is configured to collect image data through an image collection device when the information acquisition request sent by the first terminal is received.
- The acquisition module 702 includes a display submodule 7022 and a collection submodule 7024.
- The display submodule 7022 is configured to display an image collection device enable prompt message when the information acquisition request sent by the first terminal is received.
- The collection submodule 7024 is configured to collect image data through the image collection device when a user confirmation instruction based on the image collection device enable prompt message is detected.
- The image data includes video image data or static image data; the image collection device includes a camera of the second terminal, or a smart device with a shooting function associated with the second terminal.
- The third sending module 704 is configured to send the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target business object corresponding to the user attribute information.
- The second receiving module 706 is configured to receive the target business object pushed by the first terminal.
- The display module 708 is configured to display the target business object.
- The user attribute extraction apparatus, when receiving the information acquisition request sent by the first terminal, acquires the image data and sends it to the first terminal, so that the first terminal extracts the user attribute information based on the image data and determines the target business object corresponding to the user attribute information; the biological image of the user is obtained in real time through the information acquisition request, which is simple and fast, the authenticity of the user attribute information can be ensured, and the target business object determined from the user attribute information better matches the user's needs.
- This realizes personalized recommendation and precision marketing; fans can see target business objects that meet their own needs while watching the live broadcast, which improves the user experience.
- An embodiment of the present application further provides an electronic device, including: a processor and a memory.
- The memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the user attribute extraction method described in any of the embodiments of the present application.
- The embodiment of the present application further provides another electronic device, including: a processor and the user attribute extraction apparatus according to any of the embodiments of the present application; when the processor runs the user attribute extraction apparatus, the units in the user attribute extraction apparatus described in any of the embodiments of the present application are run.
- The electronic device may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like.
- The embodiment of the present application further provides a computer program, including computer readable code; when the computer readable code runs on a device, a processor in the device executes instructions for implementing the steps of the user attribute extraction method according to any of the embodiments of the present application.
- The embodiment of the present application further provides a computer readable storage medium, configured to store computer readable instructions; when the instructions are executed, the operations of the steps of the user attribute extraction method according to any embodiment of the present application are implemented.
- Referring to FIG. 9, a schematic structural diagram of an application embodiment of an electronic device suitable for implementing a terminal device or a server of an embodiment of the present application is shown.
- The electronic device 900 includes one or more processors.
- The one or more processors include, for example, one or more central processing units (CPUs) 901 and/or one or more graphics processors (GPUs) 913; the processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 902 or executable instructions loaded from a storage portion 908 into a random access memory (RAM) 903.
- The communication part includes a communication component 912 and/or a communication interface 909.
- The communication component 912 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card; the communication interface 909 includes a communication interface of a network interface card such as a LAN card or a modem, and the communication interface 909 performs communication processing via a network such as the Internet.
- The processor can communicate with the read-only memory 902 and/or the random access memory 903 to execute executable instructions, is connected to the communication component 912 via a communication bus 904, and communicates with other target devices via the communication component 912, thereby completing the operations corresponding to any user attribute extraction method provided by the embodiments of the present application, for example: receiving image data sent by the second terminal; extracting user attribute information based on the image data; and determining a target business object corresponding to the user attribute information.
- For another example: when receiving the information acquisition request sent by the first terminal, acquiring image data; and sending the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target business object corresponding to the user attribute information.
- In addition, the RAM 903 may store various programs and data required for the operation of the device.
- The CPU 901 or the GPU 913, the ROM 902, and the RAM 903 are connected to each other through the communication bus 904.
- The ROM 902 is an optional module.
- The RAM 903 stores executable instructions, or executable instructions are written into the ROM 902 at runtime, and the executable instructions cause the processor to perform operations corresponding to the above-described methods.
- An input/output (I/O) interface 905 is also coupled to the communication bus 904.
- The communication component 912 may be integrated, or may be configured to have multiple sub-modules (for example, multiple IB network cards) linked on the communication bus.
- The following components are connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, and the like; an output portion 907 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like; a storage portion 908 including a hard disk and the like; and a communication interface 909 including a network interface card such as a LAN card or a modem.
- A drive 910 is also connected to the I/O interface 905 as needed.
- A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 910 as needed, so that a computer program read therefrom is installed into the storage portion 908 as needed.
- The architecture shown in FIG. 9 is only an optional implementation manner.
- During practice, the number and types of the components in FIG. 9 may be selected, deleted, added, or replaced according to actual needs; different functional components may be implemented separately or in an integrated manner, for example, the GPU and the CPU may be set separately or the GPU may be integrated on the CPU, and the communication component may be set separately or may be integrated on the CPU or the GPU, and so on.
- The embodiments of the present application include a computer program product, which includes a computer program tangibly embodied on a machine readable medium; the computer program includes program code for executing the method illustrated in the flowchart, and the program code may include instructions for correspondingly executing the method steps provided by the embodiments of the present application, for example: when receiving the information acquisition request sent by the first terminal, acquiring image data; and sending the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target business object corresponding to the user attribute information.
- The computer program can be downloaded and installed from a network via the communication component, and/or installed from the removable medium 911.
- The above-described functions defined in the methods of the embodiments of the present application are executed when the computer program is executed by the processor.
- The electronic device, when receiving the information acquisition request sent by the first terminal, acquires the image data and sends it to the first terminal, so that the first terminal extracts the user attribute information based on the image data and determines the target business object corresponding to the user attribute information; the biological image of the user is obtained in real time through the information acquisition request, which is simple and fast, the authenticity of the user attribute information can be ensured, and the target business object determined from the user attribute information better matches the user's needs.
- Any user attribute extraction method provided by the embodiments of the present application may be performed by any suitable device having data processing capability, including but not limited to a terminal device, a server, and the like.
- Any user attribute extraction method provided by the embodiments of the present application may be executed by a processor.
- For example, the processor performs any user attribute extraction method mentioned in the embodiments of the present application by calling a corresponding instruction stored in a memory; this is not repeated below.
- The foregoing program may be stored in a computer readable storage medium; when the program is executed, the steps including the steps of the foregoing method embodiments are performed; and the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
- The methods, apparatuses, and devices of the present application may be implemented in many ways.
- For example, the methods, apparatuses, and devices of the embodiments of the present application can be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware.
- The above-described sequence of steps of the methods is for illustrative purposes only, and the steps of the methods of the embodiments of the present application are not limited to the order described above unless otherwise specified.
- In addition, the present application may also be embodied as programs recorded in a recording medium, the programs including machine readable instructions for implementing the methods according to the embodiments of the present application.
- Thus, the present application also covers a recording medium storing a program for executing the methods according to the embodiments of the present application.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Finance (AREA)
- Strategic Management (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Library & Information Science (AREA)
- Multimedia (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Economics (AREA)
- Game Theory and Decision Science (AREA)
- Entrepreneurship & Innovation (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Information Transfer Between Computers (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
Description
Claims (30)
- A user attribute extraction method, applied to a first terminal, characterized by comprising: receiving image data sent by a second terminal; extracting user attribute information based on the image data; and determining a target business object corresponding to the user attribute information.
- The method according to claim 1, characterized in that the image data comprises video image data or static image data.
- The method according to claim 1 or 2, characterized in that, before receiving the image data sent by the second terminal, the method further comprises: sending an information acquisition request to the second terminal to trigger the second terminal to send image data, wherein the information acquisition request is used to instruct the second terminal to collect image data through an image acquisition device.
- The method according to claim 3, characterized in that sending the information acquisition request to the second terminal comprises: sending the information acquisition request to the second terminal at time intervals.
- The method according to any one of claims 1-4, characterized in that extracting the user attribute information based on the image data comprises: when the image data includes a plurality of persons, taking the person with the highest appearance ratio among the plurality of persons as a target person; and extracting the user attribute information corresponding to the target person.
- The method according to any one of claims 1-5, characterized in that the user attribute information comprises at least any one or more of the following: age information, gender information, hairstyle information, preference information, expression information, and clothing information.
- The method according to any one of claims 1-6, characterized by further comprising: pushing the target business object to the second terminal.
- A user attribute extraction method, applied to a second terminal, characterized by comprising: acquiring image data when an information acquisition request sent by a first terminal is received; and sending the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target business object corresponding to the user attribute information.
- The method according to claim 8, characterized in that the image data comprises video image data or static image data.
- The method according to claim 8 or 9, characterized in that acquiring the image data when the information acquisition request sent by the first terminal is received comprises: collecting image data through an image acquisition device when the information acquisition request sent by the first terminal is received.
- The method according to claim 10, characterized in that collecting image data through the image acquisition device when the information acquisition request sent by the first terminal is received comprises: displaying an image acquisition device enabling prompt message when the information acquisition request sent by the first terminal is received; and collecting image data through the image acquisition device when a user confirmation instruction based on the image acquisition device enabling prompt message is detected.
- The method according to claim 10 or 11, characterized in that the image acquisition device comprises: a camera of the second terminal, or a smart device with a shooting function associated with the second terminal.
- The method according to any one of claims 8-12, characterized by further comprising: receiving the target business object pushed by the first terminal; and displaying the target business object.
- A user attribute extraction apparatus, characterized by comprising: a first receiving module configured to receive image data sent by a second terminal; an extraction module configured to extract user attribute information based on the image data; and a determination module configured to determine a target business object corresponding to the user attribute information.
- The apparatus according to claim 14, characterized in that the image data comprises video image data or static image data.
- The apparatus according to claim 14 or 15, characterized by further comprising: a first sending module configured to send an information acquisition request to the second terminal to trigger the second terminal to send image data, wherein the information acquisition request is used to instruct the second terminal to collect image data through an image acquisition device.
- The apparatus according to claim 16, characterized in that the first sending module is configured to send the information acquisition request to the second terminal at time intervals.
- The apparatus according to any one of claims 14-17, characterized in that the extraction module is configured to: when the image data includes a plurality of persons, take the person with the highest appearance ratio among the plurality of persons as a target person; and extract the user attribute information corresponding to the target person.
- The apparatus according to any one of claims 14-18, characterized in that the user attribute information comprises at least any one or more of the following: age information, gender information, hairstyle information, preference information, expression information, and clothing information.
- The apparatus according to any one of claims 14-19, characterized by further comprising: a second sending module configured to push the target business object to the second terminal.
- A user attribute extraction apparatus, characterized by comprising: an acquisition module configured to acquire image data when an information acquisition request sent by a first terminal is received; and a third sending module configured to send the image data to the first terminal, so that the first terminal extracts user attribute information based on the image data and determines a target business object corresponding to the user attribute information.
- The apparatus according to claim 21, characterized in that the image data comprises video image data or static image data.
- The apparatus according to claim 21 or 22, characterized in that the acquisition module is configured to collect image data through an image acquisition device when the information acquisition request sent by the first terminal is received.
- The apparatus according to claim 23, characterized in that the acquisition module comprises: a display sub-module configured to display an image acquisition device enabling prompt message when the information acquisition request sent by the first terminal is received; and a collection sub-module configured to collect image data through the image acquisition device when a user confirmation instruction based on the image acquisition device enabling prompt message is detected.
- The apparatus according to claim 23 or 24, characterized in that the image acquisition device comprises: a camera of the second terminal, or a smart device with a shooting function associated with the second terminal.
- The apparatus according to any one of claims 21-25, characterized by further comprising: a second receiving module configured to receive the target business object pushed by the first terminal; and a display module configured to display the target business object.
- An electronic device, comprising: a processor and a memory, wherein the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the user attribute extraction method according to any one of claims 1-13.
- An electronic device, characterized by comprising: a processor and the user attribute extraction apparatus according to any one of claims 14-26; when the processor runs the user attribute extraction apparatus, the units in the user attribute extraction apparatus according to any one of claims 14-26 are executed.
- A computer program, comprising computer-readable code, characterized in that, when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the steps of the user attribute extraction method according to any one of claims 1-13.
- A computer-readable storage medium for storing computer-readable instructions, characterized in that, when the instructions are executed, the operations of the steps of the user attribute extraction method according to any one of claims 1-13 are implemented.
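For readers approaching the claims from an implementation angle, the first-terminal flow recited in claims 1 and 5-7 can be sketched as follows. This is a minimal illustration only, not the claimed implementation; `select_target_person`, `handle_image_data`, the attribute extractor, and the business-object catalog are names assumed for this sketch.

```python
# Illustrative-only sketch of the first-terminal side (claims 1 and 5-7).
# The attribute extractor and business-object catalog are assumed components.
from collections import Counter
from typing import Dict, List


def select_target_person(frames: List[List[str]]) -> str:
    """Pick the person appearing in the largest share of frames (claim 5)."""
    counts = Counter(person_id for frame in frames for person_id in set(frame))
    target_person, _ = counts.most_common(1)[0]
    return target_person


def handle_image_data(frames: List[List[str]], attribute_extractor, catalog) -> Dict:
    # Claim 5: when several persons appear, keep only the most frequent one.
    target_person = select_target_person(frames)
    # Claims 1 and 6: extract attribute information (age, gender, hairstyle, ...).
    user_attributes = attribute_extractor.extract(target_person, frames)
    # Claim 1: determine the target business object matching those attributes.
    target_object = catalog.match(user_attributes)
    # Claim 7: the caller can then push target_object back to the second terminal.
    return {"person": target_person,
            "attributes": user_attributes,
            "business_object": target_object}
```

Dividing each count by the total number of frames gives the appearance ratio recited in claim 5; because every person shares the same denominator, comparing raw counts selects the same target person.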
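Similarly, the prompt-then-capture behaviour recited in claims 10, 11, and 24 could look roughly like the sketch below; the `ui` and `camera` objects are placeholders for whatever display and image acquisition device the second terminal provides (per claim 12, its own camera or an associated smart device with a shooting function), and none of these names come from the patent itself.

```python
# Illustrative-only sketch of claims 10-11: show an enabling prompt for the
# image acquisition device and capture image data only after the user confirms.
from typing import Optional


class ImageAcquisitionFlow:
    def __init__(self, ui, camera):
        self.ui = ui          # placeholder prompt/confirmation interface
        self.camera = camera  # placeholder image acquisition device

    def on_info_acquisition_request(self) -> Optional[bytes]:
        # Claim 11, step 1: display the image-acquisition-device enabling prompt.
        self.ui.show_prompt("Enable the camera to continue?")
        # Claim 11, step 2: capture only after a user confirmation instruction
        # based on that prompt is detected.
        if self.ui.wait_for_confirmation():
            return self.camera.capture()
        return None
```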
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/314,410 US20190228227A1 (en) | 2016-12-28 | 2017-12-26 | Method and apparatus for extracting a user attribute, and electronic device |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201611235485.8 | 2016-12-28 | ||
| CN201611235485.8A CN108076128A (zh) | 2016-12-28 | 2016-12-28 | User attribute extraction method, apparatus and electronic device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018121541A1 true WO2018121541A1 (zh) | 2018-07-05 |
Family
ID=62161529
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/118705 Ceased WO2018121541A1 (zh) | User attribute extraction method, apparatus and electronic device | 2016-12-28 | 2017-12-26 |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US20190228227A1 (zh) |
| CN (1) | CN108076128A (zh) |
| WO (1) | WO2018121541A1 (zh) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111738676A (zh) * | 2020-06-05 | 2020-10-02 | 天津玛斯特车身装备技术有限公司 | Flexible production line operation method and system |
| CN115103457A (zh) * | 2022-06-02 | 2022-09-23 | Oppo广东移动通信有限公司 | Data transmission method and apparatus, electronic device, and storage medium |
Families Citing this family (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019227426A1 (zh) * | 2018-05-31 | 2019-12-05 | 优视科技新加坡有限公司 | Multimedia data processing method, apparatus, and device/terminal/server |
| CN109598578A (zh) * | 2018-11-09 | 2019-04-09 | 深圳壹账通智能科技有限公司 | Business object data pushing method and apparatus, storage medium, and computer device |
| CN109697196A (zh) * | 2018-12-10 | 2019-04-30 | 北京大学 | Context modeling method, apparatus, and device |
| CN109982148B (zh) * | 2019-04-03 | 2022-05-20 | 广州虎牙信息科技有限公司 | Live streaming method and apparatus, computer device, and storage medium |
| CN111311303A (zh) * | 2020-01-17 | 2020-06-19 | 北京市商汤科技开发有限公司 | Information delivery method and apparatus, electronic device, and storage medium |
| CN113823285A (zh) * | 2021-09-30 | 2021-12-21 | 广东美的厨房电器制造有限公司 | Information entry method and apparatus, household appliance, and readable storage medium |
| CN114648796A (zh) * | 2022-03-18 | 2022-06-21 | 成都商汤科技有限公司 | User identification method and apparatus, storage medium, and electronic device |
| CN115273149A (zh) * | 2022-08-08 | 2022-11-01 | 浙江大华技术股份有限公司 | Object identification method and apparatus, storage medium, and electronic apparatus |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2570965A2 (en) * | 2011-09-15 | 2013-03-20 | Omron Corporation | Detection device, display control device and imaging control device, provided with the detection device, body detection method, control program, and recording medium |
| CN103164450A (zh) * | 2011-12-15 | 2013-06-19 | 腾讯科技(深圳)有限公司 | Method and apparatus for pushing information to a target user |
| CN103377293A (zh) * | 2013-07-05 | 2013-10-30 | 河海大学常州校区 | Holographic touch interactive display system with multi-source input and intelligent information optimization processing |
| US20150023552A1 (en) * | 2013-07-18 | 2015-01-22 | GumGum, Inc. | Systems and methods for determining image safety |
| CN104915000A (zh) * | 2015-05-27 | 2015-09-16 | 天津科技大学 | Multi-sensory biometric interaction method for glasses-free 3D advertisements |
| US20160314442A1 (en) * | 2015-04-21 | 2016-10-27 | Xiaomi Inc. | Numerical value transfer method, terminal, cloud server and storage medium |
| CN106200918A (zh) * | 2016-06-28 | 2016-12-07 | 广东欧珀移动通信有限公司 | AR-based information display method, apparatus, and mobile terminal |
| CN106326433A (zh) * | 2016-08-25 | 2017-01-11 | 武克易 | Advertisement playing apparatus |
Family Cites Families (21)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8156116B2 (en) * | 2006-07-31 | 2012-04-10 | Ricoh Co., Ltd | Dynamic presentation of targeted information in a mixed media reality recognition system |
| US8997006B2 (en) * | 2009-12-23 | 2015-03-31 | Facebook, Inc. | Interface for sharing posts about a live online event among users of a social networking system |
| KR20120076673A (ko) * | 2010-12-13 | 2012-07-09 | 삼성전자주식회사 | Method and apparatus for providing an advertisement service in a mobile communication system |
| US8401343B2 (en) * | 2011-03-27 | 2013-03-19 | Edwin Braun | System and method for defining an augmented reality character in computer generated virtual reality using coded stickers |
| US20130095855A1 (en) * | 2011-10-13 | 2013-04-18 | Google Inc. | Method, System, and Computer Program Product for Obtaining Images to Enhance Imagery Coverage |
| US20140066044A1 (en) * | 2012-02-21 | 2014-03-06 | Manoj Ramnani | Crowd-sourced contact information and updating system using artificial intelligence |
| US9124950B2 (en) * | 2012-03-26 | 2015-09-01 | Max Abecassis | Providing item information notification during video playing |
| JP5188639B1 (ja) * | 2012-05-31 | 2013-04-24 | 株式会社 ディー・エヌ・エー | Game program and information processing device |
| CN103488402B (zh) * | 2012-06-14 | 2018-09-04 | 腾讯科技(深圳)有限公司 | Display control method, device, and system |
| US20140101781A1 (en) * | 2012-10-05 | 2014-04-10 | Sedrick Andrew Bouknight | Peer-to-peer, real-time, digital media distribution |
| US20140245335A1 (en) * | 2013-02-25 | 2014-08-28 | Comcast Cable Communications, Llc | Environment Object Recognition |
| JP6369067B2 (ja) * | 2014-03-14 | 2018-08-08 | 株式会社リコー | Information processing system, information processing method, and program |
| CN103984741B (zh) * | 2014-05-23 | 2016-09-21 | 合一信息技术(北京)有限公司 | User attribute information extraction method and system |
| WO2016004330A1 (en) * | 2014-07-03 | 2016-01-07 | Oim Squared Inc. | Interactive content generation |
| CN104166713A (zh) * | 2014-08-14 | 2014-11-26 | 百度在线网络技术(北京)有限公司 | Network service recommendation method and apparatus |
| US20160205443A1 (en) * | 2015-01-13 | 2016-07-14 | Adsparx USA Inc | System and method for real-time advertisments in a broadcast content |
| US11071919B2 (en) * | 2015-06-30 | 2021-07-27 | Amazon Technologies, Inc. | Joining games from a spectating system |
| US10366440B2 (en) * | 2015-10-28 | 2019-07-30 | Adobe Inc. | Monitoring consumer-product view interaction to improve upsell recommendations |
| US10221260B2 (en) * | 2016-07-29 | 2019-03-05 | Exxonmobil Chemical Patents Inc. | Phenolate transition metal complexes, production and use thereof |
| US10057310B1 (en) * | 2017-06-12 | 2018-08-21 | Facebook, Inc. | Interactive spectating interface for live videos |
| JP7030452B2 (ja) * | 2017-08-30 | 2022-03-07 | キヤノン株式会社 | Information processing apparatus, control method of information processing apparatus, information processing system, and program |
- 2016-12-28: CN application CN201611235485.8A filed (publication CN108076128A; status: Pending)
- 2017-12-26: US application US16/314,410 filed (publication US20190228227A1; status: Abandoned)
- 2017-12-26: WO application PCT/CN2017/118705 filed (publication WO2018121541A1; status: Ceased)
Also Published As
| Publication number | Publication date |
|---|---|
| CN108076128A (zh) | 2018-05-25 |
| US20190228227A1 (en) | 2019-07-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018121541A1 (zh) | | User attribute extraction method, apparatus and electronic device |
| JP6267861B2 (ja) | | Usage measurement techniques and systems for interactive advertising |
| US10325372B2 (en) | | Intelligent auto-cropping of images |
| WO2018033143A1 (zh) | | Video image processing method and apparatus, and electronic device |
| US20200037040A1 (en) | | Visual hash tags via trending recognition activities, systems and methods |
| TWI648641B (zh) | | Wisdom TV data processing method, smart TV and smart TV system |
| US11087140B2 (en) | | Information generating method and apparatus applied to terminal device |
| EP3285222A1 (en) | | Facilitating television based interaction with social networking tools |
| US20190155864A1 (en) | | Method and apparatus for recommending business object, electronic device, and storage medium |
| US20180225377A1 (en) | | Method, server and terminal for acquiring information and method and apparatus for constructing database |
| CN108763532A (zh) | | Method and device for pushing information and presenting information |
| US12395711B2 (en) | | Dynamic code integration within network-delivered media |
| Indrawan et al. | | Face recognition for social media with mobile cloud computing |
| CN114442869A (zh) | | User traffic-splitting processing method and apparatus, electronic device, and storage medium |
| CN114283349A (zh) | | Data processing method and apparatus, computer device, and storage medium |
| GB2574431A (en) | | Systems and method for automated boxing data collection and analytics platform |
| JP7130771B2 (ja) | | Attention information processing method and apparatus, storage medium, and electronic device |
| JP2019057245A (ja) | | Information processing apparatus and program |
| CN111756863A (zh) | | Content pushing method and apparatus, processing device, and storage medium |
| US20130138505A1 (en) | | Analytics-to-content interface for interactive advertising |
| US20160315886A1 (en) | | Network information push method, apparatus and system based on instant messaging |
| CN110033291A (zh) | | Information object pushing method, apparatus and system |
| EP3783561A1 (en) | | Information processing device, information processing system, information processing method, and program |
| CN112767006B (zh) | | Advertisement resource data processing method and apparatus, electronic device and readable storage medium |
| WO2015100070A1 (en) | | Presenting information based on a video |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17886214; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17886214; Country of ref document: EP; Kind code of ref document: A1 |
| | 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 05/02/2020) |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17886214; Country of ref document: EP; Kind code of ref document: A1 |