
CN101493951A - Skin design system and method in input tool - Google Patents


Info

Publication number: CN101493951A
Application number: CN200910079252A (CNA2009100792527A)
Authority: CN (China)
Prior art keywords: picture, information, user, server, image
Legal status: Pending (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 张会鹏 (Zhang Huipeng), 宋爱元 (Song Aiyuan), 王松旭 (Wang Songxu), 陈坚 (Chen Jian)
Original and current assignee (as listed): Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CNA2009100792527A priority Critical patent/CN101493951A/en
Publication of CN101493951A publication Critical patent/CN101493951A/en
Pending legal-status Critical Current

Classifications

  • Editing Of Facsimile Originals (AREA)

Abstract

The invention discloses a skin design system in an input tool, comprising an interception unit and an embedding unit. The interception unit intercepts (i.e., crops) an acquired original picture; the embedding unit applies mask processing to the intercepted picture to obtain a target picture and, after fusion processing, embeds the target picture into the skin display area of a designated photo frame. The invention also discloses a skin design method in an input tool, comprising the following steps: intercept an acquired original picture; apply mask processing to the intercepted picture to obtain a target picture and, after fusion processing, embed the target picture into the skin display area of a designated photo frame. With the system and method provided by the invention, users' personalized requirements can be met while users can conveniently design skins.

Description

Skin design system and method in input tool
Technical Field
The present invention relates to skin design technology, and more particularly, to a skin design system and method in an input tool.
Background
With the rapid development of computers, they have become essential information-processing and communication tools in daily life. A computer user generally uses Chinese character input software to enter Chinese characters: the input tool is software running on the operating system that converts codes entered via a keyboard, or data from non-keyboard input media, into Chinese characters. Current Chinese input software tools fall into two types, keyboard-based input and non-keyboard-based input, described below.
A keyboard-based Chinese character input tool enters Chinese characters via the keyboard according to coding rules. English needs no additional input software, since its 26 letters correspond directly to the 26 letter keys on the keyboard. Chinese, however, has tens of thousands of characters with no inherent correspondence to the keyboard. To enter Chinese characters into a computer, the characters must first be encoded and the codes associated with keys on the keyboard, so that the user ultimately types a character's code on the keyboard and the tool converts that code into the character.
At present there are hundreds of Chinese character coding schemes, of which dozens are in common use. Because a Chinese character is a graphic symbol commonly expressed through its sound, shape, and meaning, Chinese input coding methods basically associate sound, shape, and meaning elements with specific keys and then combine them, differently for each character, to complete the input of the character.
For non-keyboard based Chinese input tools, the tools include handwriting input tools, speech input tools, and Optical Character Recognition (OCR) input tools, among others.
The handwriting input tool recognizes handwritten Chinese characters in a pen-based environment, matching people's habit of writing Chinese characters with a pen: whatever is written on the writing tablet in the usual way can be recognized and displayed by the computer. It requires a matching hardware writing tablet, on which the characters are written and recorded with any kind of hard-tipped pen; it is convenient and fast, with a low error rate.
The voice input tool captures speech through a microphone and converts it into text. Although convenient to use, its error rate remains high, especially for untrained terms and uncommon words.
OCR requires that the document to be input first be converted into an image by a scanner for recognition, so a scanner is necessary. The higher the document's print quality, the higher the recognition accuracy; printed matter such as books and magazines is generally preferred. If the original is thin, patterns and characters on the back of the paper may show through during scanning and interfere with the final recognition result.
In summary, each approach in both categories, keyboard-based input and non-keyboard-based input, has its strengths and weaknesses. In the prior art, keyboard-based input is the most mature and most widely used, and within this category, keyboard-based Chinese input tools usually provide a skin function to improve the user experience. The skin function means that the user can select different tool interfaces as the skin, and can even make a skin himself. For skin selection, a keyboard-based Chinese input tool comes with several sets of skins for the user to choose from, and the user can also obtain skins elsewhere; to make a skin, the user typically uses the skin editor bundled with the tool, which requires mastering the editor's usage.
The disadvantages of the prior art are therefore evident. For skin selection: users' personalized requirements cannot be met, and a user may have difficulty finding a satisfying skin. For self-made skins: the prior art can meet personalized requirements, but only for users who have mastered the skin editor; for most ordinary users the design threshold of making a skin is relatively high, and in general they do not make skins themselves. Clearly, how to meet users' personalized requirements while letting them make skins conveniently has become an urgent problem on both fronts, and no solution currently exists.
Disclosure of Invention
In view of the above, the main objective of the present invention is to provide a skin design system and method in an input tool, which can not only meet the personalized requirements of users, but also enable users to make skin designs conveniently and quickly.
To achieve this objective, the technical scheme of the invention is realized as follows:
a skin design system in an input tool, the system comprising: an intercepting unit and an embedding unit;
wherein,
the intercepting unit is used for intercepting the acquired original picture;
and the embedding unit is used for performing mask processing on the intercepted picture to obtain a target picture, and embedding the target picture into a specified photo frame skin display area after performing fusion processing on the target picture.
The system also comprises a network server used for storing all user information identified by the user account.
The intercepting unit is further configured to acquire the original picture from a local user terminal or the network server, and intercept the original picture with a size display scale that is the same as that of the target picture.
The embedding unit is further used for carrying out masking processing and fusion processing on the intercepted picture based on the intermediate carrier for picture conversion and fusion.
Wherein, the intermediate carrier based on image conversion and fusion comprises: a mask matched with the photo frame skin display area; the embedding unit further includes: a mask processing module and a fusion processing module;
wherein,
the mask processing module is used for applying the mask's picture to the intercepted picture to obtain the target picture displayed in the photo frame skin display area;
and the fusion processing module is used for fusing, using the mask's picture, the target picture with the photo frame skin display area to obtain the photo frame skin.
Wherein the network server comprises an image storage server; or, the system comprises a character storage server and an image processing server; or, the system comprises an image storage server, a character storage server and an image processing server;
the image storage server is used for storing the image information of the user identified by the user account and the image information of the system provided by the system;
the text storage server is used for storing the text information of the user identified by the user account and the text information of the system provided by the system;
the picture processing server is used for converting the text information into information in picture format; or for combining the text information and the image information, the combined content containing both types of information and itself being information in picture format.
A method of skin design in an input tool, the method comprising the steps of:
intercepting the obtained original picture;
and performing mask processing on the intercepted picture to obtain a target picture, and embedding the target picture into a specified photo frame skin display area after performing fusion processing on the target picture.
The capturing of the original picture specifically includes: and acquiring the original picture from a local user terminal or a network server, and intercepting the original picture by adopting the same size display scale as the target picture.
Wherein, the masking and fusing the picture specifically comprises the following steps: and performing mask processing and fusion processing on the intercepted picture based on the intermediate carrier for picture conversion and fusion.
Wherein the original picture comprises: images, text, or a combination of images and text;
the display mode of the target picture is as follows: static display, or dynamic display.
Wherein, the intermediate carrier for image conversion and fusion comprises: and the mask is matched with the skin display area of the photo frame.
When the original picture is obtained from the network server, the obtaining of the original picture specifically includes the following steps:
x1, according to the user account and the login key, after the login server successfully verifies the login request of the user terminal, the login server informs the image storage server to issue the index of the image type information to the user terminal; or,
the login server informs the picture processing server to send the index of the character type information to the user terminal; or,
the login server informs the picture processing server to issue an index of the content of the combination of the image type information and the character type information to the user terminal;
x2, the user terminal obtains the image type information from the image storage server according to the index of the image type information; or,
the user terminal acquires the character type information from the image processing server according to the index of the character type information; or,
and the user terminal acquires the content of the combination of the image type information and the character type information from the image processing server according to the index of the content of the combination of the image type information and the character type information.
Wherein, the method further comprises: when the user information in the network server is updated, the network server actively sends a user-information update message to all user terminals, or notifies all user terminals periodically after updated user information has accumulated, and each user independently chooses whether to reacquire the original picture; or,
the network server passively issues a user-information update message to a requesting user terminal, in response to a user-information update query initiated by that terminal.
In the skin design system and method in an input tool provided by the invention, an intercepting unit intercepts an acquired original picture; an embedding unit applies mask processing to the intercepted picture to obtain a target picture and, after fusion processing, embeds the target picture into the designated photo frame skin display area in the input tool. Furthermore, the user can obtain the original picture from the local user terminal and customize, on its basis, the user-defined target picture subsequently embedded in the photo frame skin display area; the user can equally obtain the original picture from a remote network server and customize the target picture on its basis in the same way.
The invention is simple to operate; it can meet users' personalized requirements and also their need to make skins themselves.
Drawings
FIG. 1 is a schematic diagram of an input field according to the present invention;
FIG. 2 is a schematic view of a status bar according to the present invention;
FIG. 3 is a schematic diagram of the structure of the system of the present invention;
FIG. 4 is a schematic flow chart of the implementation of the method of the present invention;
FIG. 5 is a schematic diagram of a setting frame according to the present invention;
FIG. 6 is a schematic view of a display shape of a photo frame skin display area according to the present invention;
FIG. 7 is a schematic diagram of a display of a captured image according to the present invention;
FIG. 8 is a schematic diagram illustrating a target picture according to the present invention.
Detailed Description
The basic idea of the invention is: intercepting the obtained original picture; and performing mask processing on the intercepted picture to obtain a target picture, and embedding the target picture into a specified photo frame skin display area in an input tool after performing fusion processing on the target picture.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings by way of examples.
In short, the skin design scheme of the invention provides a photo frame skin function, which means: a photo frame skin display area is arranged on the input tool, and the user can design a custom picture to be shown in this area, meeting the user's personalized customization needs. The custom picture can be derived from a picture stored locally on the user terminal or from a picture stored remotely. The obtained picture is the original picture on which the generated target picture is based; the target picture is the picture finally displayed in the photo frame skin display area.
FIG. 1 is a schematic display diagram of an input field of the invention; the circular area filled with left-slanting lines in FIG. 1 represents a photo frame skin display area. FIG. 2 is a schematic display diagram of a status bar of the invention; the rectangular area filled with cross-hatching in FIG. 2 represents another photo frame skin display area. The display area can show not only an original picture obtained from a picture stored locally on the user terminal, for example a real photo of the user himself in FIG. 1, but also an original picture obtained from remote storage, for example the system-provided virtual cartoon picture obtained remotely in FIG. 2.
The skin design of the present invention is specifically set forth below.
Fig. 3 shows a skin design system in an input tool according to the invention. As shown in Fig. 3, the system comprises an intercepting unit and an embedding unit; the intercepting unit intercepts an acquired original picture and sends the intercepted picture to the embedding unit; the embedding unit applies mask processing to the intercepted picture to obtain a target picture and, after fusion processing, embeds the target picture into the designated photo frame skin display area in the input tool.
Here, the intercepting unit and the embedding unit are located in a user terminal, and the system further includes a network server for storing all user information identified by the user account. All user information refers to: information used when a user operates on various user account based service platforms. All user information includes: and aiming at the information such as picture information, character information, login password and the like of the user account.
Here, the intercepting unit is further configured to obtain the original picture from the local user terminal or the network server, and to intercept it at the same size display scale as the target picture. The size display scale may be the picture's aspect ratio: throughout the interception of the original picture, the aspect ratio of the intercepted portion must always equal the aspect ratio of the target picture. The target picture is the picture finally displayed in the photo frame skin display area designated in the input tool.
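The aspect-ratio constraint described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the helper name and the shrink-to-fit policy are assumptions.

```python
# A minimal sketch of intercepting at the same size display scale as the
# target picture: the selection rectangle is constrained so that its
# width/height ratio always matches the photo frame skin display area.

def constrain_selection(sel_w, sel_h, target_w, target_h):
    """Shrink the requested selection so its aspect ratio equals the target's."""
    target_ratio = target_w / target_h
    if sel_w / sel_h > target_ratio:
        sel_w = sel_h * target_ratio   # selection too wide: trim the width
    else:
        sel_h = sel_w / target_ratio   # selection too tall: trim the height
    return sel_w, sel_h

# A 400x300 drag over a picture, for a 100x100 display area, becomes 300x300.
w, h = constrain_selection(400, 300, 100, 100)
```

The intercepted region can then be scaled down to the display area's exact size without distortion, since the ratios already agree.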
Here, the embedding unit is further configured to perform mask processing and fusion processing on the intercepted picture based on the intermediate carrier for picture conversion and fusion. This intermediate carrier may be a mask matched to the photo frame skin display area. Both the mask processing and the fusion processing are performed with the picture library provided by the WINDOWS system, in combination with the mask.
Here, the embedding unit further includes a mask processing module and a fusion processing module. The mask processing module applies the mask's picture to the intercepted picture to obtain the target picture finally displayed in the photo frame skin display area. The fusion processing module fuses, using the mask's picture, the target picture with the photo frame skin display area to obtain the final photo frame skin.
For the mask: masking is performed after the original file is loaded. Loading the original file means loading picture files of various formats into memory, implemented mainly with CxImage, an open-source picture library. The masking process is specifically as follows: load the picture and the mask into memory with CxImage; create in memory a 32-bit memory bitmap of the same size as the mask; scale the picture onto the memory bitmap; then traverse every point of the mask, and if the currently examined point is white, make the picture invisible at that point by setting the alpha value of the corresponding point of the memory bitmap to 0 (0 meaning fully transparent); if the point is not white, leave the alpha value of the corresponding point unchanged. When the traversal finishes, the masking of the memory bitmap is complete. Any arbitrarily complex picture mask can be realized this way, so the shape of the photo frame skin display area can be set as desired. In short, the picture from the original file is scaled onto a memory bitmap of the same size as the mask, the mask is then applied, and the resulting memory image is the finished target picture, i.e., the picture finally displayed in the photo frame skin display area designated in the input tool. Here, alpha denotes the degree of transparency.
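The mask traversal described above can be sketched in a few lines. This is an illustrative sketch only: it models pictures as flat lists of RGBA tuples rather than the CxImage bitmaps and 32-bit Windows memory bitmap the patent uses, and the function name is an assumption.

```python
# A minimal sketch of the masking step: wherever the mask is white, the
# corresponding pixel of the (already scaled) picture gets alpha 0, i.e. it
# becomes fully transparent; elsewhere the alpha value is left unchanged.

WHITE = (255, 255, 255)

def apply_mask(picture, mask):
    """picture: list of (r, g, b, a) pixels, already scaled to the mask's size.
    mask: list of (r, g, b) pixels of the same length."""
    out = []
    for (r, g, b, a), m in zip(picture, mask):
        out.append((r, g, b, 0) if m == WHITE else (r, g, b, a))
    return out

# Example: a 2x2 picture whose top row is masked out by a half-white mask.
picture = [(10, 20, 30, 255)] * 4
mask = [WHITE, WHITE, (0, 0, 0), (0, 0, 0)]
target = apply_mask(picture, mask)
```

Because the mask can be any bitmap, the visible region (and hence the display area's shape) is arbitrary, as the description notes.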
For fusion: the photo frame is likewise loaded into memory with CxImage. First, a fully transparent black memory bitmap serves as the base map; then the memory images that have completed masking are drawn onto the base map layer by layer using alpha blending, i.e., the target picture is blended onto the base map; finally the photo frame skin display area is blended onto the base map, completing the fusion.
For a network server, the network server includes three implementation manners, a first implementation manner: the network server comprises an image storage server; the second implementation mode comprises the following steps: the network server comprises a character storage server and an image processing server; the third implementation mode comprises the following steps: the network server comprises an image storage server, a character storage server and an image processing server.
The image storage server is used for storing the image information of the user identified by the user account and the image information of the system provided by the system. Here, the image information of the user may be: image information that the user has previously uploaded to the server. The image information of the system may be: including virtual avatar information of cartoon animals or humans, etc.
And the text storage server is used for storing the text information of the user identified by the user account and the text information of the system provided by the system. Here, the text information of the user may be: the user has previously uploaded textual information to the server.
It should be noted that, on one hand, the image information and text information that a user uploaded to the server earlier are, specifically, left behind by operations the user performed on the various user-account-based service platforms after logging in with the user account; both kinds of information may be retained in the server, where "server" here means the image storage server and the text storage server. On the other hand, for system-provided image information and text information: in particular, when the system promotes a new service, intuitive image and text information can be recommended to the user through display on the photo frame skin, where "new service" means a new service developed on the user-account-based service platform.
The picture processing server has two implementations. In the first, it converts the user's text information and/or the system's text information, obtained from the text storage server, into information in picture format. In the second, it combines the user's and/or the system's text information, obtained from the text storage server, with the user's and/or the system's image information, obtained from the image storage server, producing combined content that contains both types of information and is itself in picture format. Of the two types, "text-type information" means: the user's text information, the system's text information, or both; "image-type information" means: the user's image information, the system's image information, or both.
Fig. 4 shows a skin design method in the input tool of the invention, which comprises the following steps:
step 101, intercepting the obtained original picture.
This step can be accomplished by the intercepting unit. The obtained picture can be an original picture acquired from the local user terminal, on whose basis the user-defined target picture subsequently embedded into the photo frame skin display area is customized; it can equally be an original picture acquired from a remote network server, with the target picture customized on its basis in the same way.
And 102, performing mask processing on the intercepted picture to obtain a target picture, and embedding the target picture, after fusion processing, into the designated photo frame skin display area in the input tool.
This step may be performed by the embedding unit.
For the above technical solution comprising steps 101 to 102, the specific processing procedure of step 101 is as follows: the intercepting unit acquires an original picture from a local user terminal or a network server and intercepts the original picture by adopting the same size display proportion as that of the target picture.
Here, the types of the original picture include: image, text, or a combination of image and text.
Here, the user terminal includes: a desktop personal computer (PC), a laptop PC, a personal digital assistant (PDA), a mobile terminal such as a mobile phone, and the like.
Here, the size display scale may be a scale of an aspect ratio of the picture, and in the process of capturing the original picture by the capturing unit, it is always required to ensure that the scale of the aspect ratio of the captured portion is the same as the scale of the aspect ratio of the target picture. The target picture is: and finally displaying the target picture in the picture frame skin display area of the input tool. After the original picture is captured, the captured picture needs to be scaled to the same size as the skin display area of the photo frame, and then the subsequent step 102 is executed.
The specific processing procedure of step 102 is: the embedding unit is used for carrying out masking processing and fusion processing on the intercepted picture based on the intermediate carrier for picture conversion and fusion.
Here, the intermediate carrier for picture conversion and fusion includes: a mask matched with the skin display area of the photo frame. In step 102, firstly, masking the intercepted picture and the picture of the mask plate to obtain a target picture finally displayed in the picture frame skin display area of the input tool; and then, fusing the target picture with the skin of the photo frame, namely fusing the target picture with the skin display area of the photo frame by using the picture of the mask, and embedding the target picture into the skin display area of the photo frame to obtain the final skin of the photo frame.
In step 102, the target picture is further embedded in the photo frame skin display area for display. The display mode of the target picture can be static or dynamic, and a dynamic display mode can be customized by the user in advance. For example, the user selects three pictures in advance from the locally stored pictures; these may depict the same user, and combining them in a display order set by the user produces a dynamically changing display based on the three pictures. The three pictures may also depict different users.
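The dynamic display mode above amounts to cycling through a user-chosen sequence of pictures. A minimal sketch, with illustrative names and no timing policy (the patent does not specify one):

```python
# A minimal sketch of the dynamic display mode: the user picks several
# pictures and a display order, and the skin cycles through them endlessly.

from itertools import cycle

def dynamic_frames(pictures, order):
    """Yield picture names endlessly in the user-chosen display order."""
    return cycle(pictures[i] for i in order)

frames = dynamic_frames(["a.png", "b.png", "c.png"], order=[2, 0, 1])
shown = [next(frames) for _ in range(5)]   # what the skin would show first
```

In a real input tool each frame would be a fully processed target picture (masked and fused as above) swapped into the display area on a timer.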
Further, it should be noted that, in step 101, when the original picture is acquired from the network server, the acquisition of the original picture includes the following three cases.
The first case: after the login server successfully verifies the user terminal's login request against the user account and login key, it notifies the image storage server to issue an index of image-type information to the terminal. The terminal then uses that index to obtain the image-type information from the image storage server, namely the user's image information, the system's image information, or both.
The second case: after successful login verification, the login server notifies the picture processing server to issue an index of text-type information to the terminal. The terminal then uses that index to obtain the text-type information from the picture processing server, namely the user's text information, the system's text information, or both; this text-type information is information in picture format obtained by the picture processing server's conversion.
The third case: after successful login verification, the login server notifies the picture processing server to issue an index of the combined image-and-text content to the terminal. The terminal then uses that index to obtain the combined content containing both types of information. Here the text-type information is the user's text information, the system's text information, or both; the image-type information is the user's image information, the system's image information, or both; and the combination is information in picture format obtained by the picture processing server's combining process.
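All three cases share one shape: verify login, receive an index from a content server, then fetch by index. A hedged sketch of that flow follows; every class, method, and key name is illustrative, since the patent specifies no wire protocol.

```python
# A minimal sketch of the acquire-by-index flow common to the three cases:
# the content server stands in for the image storage server or the picture
# processing server, keyed by (account, item-name).

from dataclasses import dataclass, field

@dataclass
class ContentServer:
    store: dict = field(default_factory=dict)

    def issue_index(self, account):
        """Index of the content available to this account."""
        return sorted(k for k in self.store if k[0] == account)

    def fetch(self, key):
        return self.store[key]

def acquire(login_ok, account, server):
    """If login verification succeeded, get the index, then fetch each item."""
    if not login_ok:
        return None
    index = server.issue_index(account)
    return [server.fetch(k) for k in index]

server = ContentServer({("u1", "avatar"): b"png-bytes", ("u2", "logo"): b"..."})
pictures = acquire(True, "u1", server)
```

The three cases then differ only in which server issues the index and whether the fetched bytes are image-type, converted text-type, or combined content.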
Further, it should be noted that step 102 also includes a processing step for updating the user information, which covers two cases. In the first case, when the user information in the network server is updated, the network server actively sends a user information update message to all user terminals, or notifies all user terminals periodically after the updated user information has accumulated; each user logged in at a user terminal then independently chooses whether to reacquire the original picture. Here, "all user terminals" refers to the terminals at which users are logged in, based on their user accounts, across the various service platforms. In the second case, the network server passively issues a user information update message to a requesting user terminal, in response to a user information update query initiated by that terminal. The difference between the two cases lies in which terminals receive the message: in the first case, every logged-in user terminal receives the update message through the server's active delivery, whether or not it requested one; in the second case, only the terminal that initiated the query receives it.
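The two update-delivery cases above amount to a push versus query-reply protocol, which can be sketched as follows. This is an illustrative sketch only: all class and method names (`NetworkServer`, `push_updates`, and so on) are assumptions, since the patent specifies behaviour, not an API.

```python
class NetworkServer:
    def __init__(self):
        self.logged_in_terminals = []   # all terminals logged in under user accounts
        self.pending_updates = []       # accumulated user-information updates

    def record_update(self, update):
        self.pending_updates.append(update)

    def push_updates(self):
        """Case 1: actively send accumulated updates to ALL logged-in terminals."""
        for terminal in self.logged_in_terminals:
            terminal.on_update_message(list(self.pending_updates))
        self.pending_updates.clear()

    def answer_query(self, terminal):
        """Case 2: passively reply only to the terminal that initiated a query."""
        terminal.on_update_message(list(self.pending_updates))


class UserTerminal:
    def __init__(self, reacquire=False):
        self.reacquire = reacquire      # user independently chooses whether to re-fetch
        self.received = []

    def on_update_message(self, updates):
        self.received.extend(updates)
        if self.reacquire:
            # stand-in for re-downloading the original picture
            self.original_picture = "refetched"
```

Note that in case 1 every terminal in `logged_in_terminals` receives the message, while `answer_query` touches only the requesting terminal, mirroring the active/passive distinction above.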
A method embodiment is as follows: in this embodiment, the scheme by which the user performs skin design and obtains a customized photo frame skin includes the following steps:
step 201, the capturing unit captures an original picture obtained from a picture locally stored in the user terminal.
Here, step 201 is the bottom-layer operation that implements original picture acquisition and capture. The corresponding user operation in the upper-layer application is, for example: the user selects an original picture from the pictures stored locally at the user terminal, and the selected picture is displayed in the photo frame skin editing area on the left side of fig. 5; the user can then drag the rectangular selection frame, shown by the dotted line in fig. 5, to capture part of the selected picture, and the display effect of the captured picture in the final photo frame skin display area is shown in the photo frame skin preview area on the right side of fig. 5. In practical applications, the selection frame may also take other shapes.
Fig. 5 is a schematic diagram of a setting frame for setting a photo frame skin according to the present invention. Fig. 5 includes: the photo frame skin editing area on the left and the photo frame skin preview area on the right, with the rectangular selection frame used for capture shown by a dotted line in the editing area. Fig. 5 also includes a selection confirmation area for the status bar photo frame skin and a selection confirmation area for the input bar photo frame skin; these two areas correspond to two radio buttons, which switch between the status bar and the input bar. Fig. 5 further includes a use confirmation area for the virtual avatar of the instant messaging tool and a use confirmation area for the local picture, corresponding to two use confirmation buttons. These buttons allow the user to obtain the original picture not only from a local picture but also from a remote network server, for example the virtual avatar of the instant messaging tool provided by the system. When the use confirmation button in the virtual avatar area is clicked, the virtual avatar of the instant messaging tool is obtained and displayed in the photo frame skin editing area on the left side of fig. 5. In addition, if the user has not logged into the input tool with a user account, the user's operation in the upper-layer application further includes: when the setting frame shown in fig. 5 is used to set the photo frame skin, the user is prompted to enter the user account and login password used to log into the input tool.
Step 202, the embedding unit performs mask processing on the intercepted picture based on a preset mask to obtain a target picture, performs fusion processing on the target picture, and embeds the target picture into a specified photo frame skin display area in the input tool.
It should be noted that the captured picture is usually a regular (rectangular) picture, while the photo frame skin display area is usually an irregular area, that is, it can only display an irregular picture. The mask therefore serves as an intermediate carrier for picture conversion and fusion: mask processing is applied to the regular picture to convert it into an irregular picture, or into a picture with whatever shape the photo frame skin display area requires. The mask matches the photo frame skin display area and can be preset to any required shape, so a picture that has undergone mask processing not only takes the required shape but also matches the display area. Because of this match, the target picture is obtained by performing fusion processing on the regular picture with the mask as the intermediate carrier. The target picture is the picture finally embedded and displayed in the photo frame skin display area of the input tool.
Here, the photo frame skin display area provided by default by the input tool is transparent. Taking the unfilled circle in fig. 6 as an example, fig. 6 is a schematic view of the display shape of a photo frame skin display area provided by the present invention. The transparent display area is composed of a picture with an alpha channel, where alpha indicates the degree of transparency; the area is made transparent by using the transparency of each point in the picture. The input tool also provides a mask. The mask picture is the same size as the photo frame skin display area and consists of only two colors, black and white. Black designates the display area of the photo frame skin, i.e. the area into which the user's customized original picture is finally converted as the target picture. In the technical implementation, the white area is usually specified as transparent and the non-white area as non-transparent.
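The black/white mask convention just described (white marks transparent areas, non-white marks the opaque display area) can be illustrated with a minimal alpha-channel computation. Pixels are modelled as plain RGB tuples so the sketch needs no imaging library; `apply_mask` and the pixel representation are assumptions for illustration, not part of the patent.

```python
WHITE = (255, 255, 255)

def apply_mask(picture, mask):
    """Return an RGBA picture: alpha 0 where the mask is white, 255 elsewhere.

    picture and mask are lists of rows of (r, g, b) tuples of equal size.
    """
    assert len(picture) == len(mask) and len(picture[0]) == len(mask[0]), \
        "mask must be the same size as the picture"
    result = []
    for pic_row, mask_row in zip(picture, mask):
        row = []
        for (r, g, b), m in zip(pic_row, mask_row):
            alpha = 0 if m == WHITE else 255   # white = transparent, else opaque
            row.append((r, g, b, alpha))
        result.append(row)
    return result
```

A real implementation would operate on an RGBA bitmap (e.g. via an imaging library), but the per-pixel rule is the same.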
Here, step 202 is the bottom-layer operation that implements the mask processing and fusion processing. The corresponding user operations in the upper-layer application include, for example, the following:
a. The user selects a local picture and captures part of it.
Here, the cut picture is shown in fig. 7 as a rectangle filled with right oblique lines.
Here, during capture it is always ensured that the aspect ratio of the captured picture is the same as the aspect ratio of the target picture, the target picture being the picture finally displayed in the photo frame skin display area of the input tool.
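The aspect-ratio constraint above can be enforced by shrinking the user's dragged selection to the largest sub-rectangle that matches the target picture's aspect ratio, so the later scaling step does not distort the image. A minimal sketch, with an assumed function name:

```python
def fit_selection(sel_w, sel_h, target_w, target_h):
    """Largest (w, h) inside sel_w x sel_h whose aspect ratio is target_w:target_h."""
    if sel_w * target_h > sel_h * target_w:
        # selection too wide: keep the height, narrow the width
        return (sel_h * target_w) // target_h, sel_h
    # selection too tall (or an exact match): keep the width, reduce the height
    return sel_w, (sel_w * target_h) // target_w
```

The cross-multiplication avoids floating-point division, so the comparison is exact for integer pixel sizes.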
b. The captured picture is stored in a separate file.
c. The file is then read, the captured picture is scaled to the same size as the target picture, and mask processing is applied to the scaled picture together with the mask picture to obtain the target picture.
Here, the target picture is represented by a rectangle filled with right oblique lines in fig. 8, and fig. 8 is a schematic display diagram of the target picture after the frame skin is set according to the present invention.
d. The target picture is then fused with the photo frame skin display area to obtain the final photo frame skin.
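Steps a through d can be sketched end to end: nearest-neighbour scaling of the captured picture to the target size, masking with the white-transparent convention described earlier, and fusion with the display-area background. All function names and the tuple-based pixel model are illustrative assumptions, not the patent's implementation.

```python
WHITE = (255, 255, 255)

def scale(picture, new_w, new_h):
    """Nearest-neighbour scaling of a list-of-rows picture to new_w x new_h."""
    old_h, old_w = len(picture), len(picture[0])
    return [[picture[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)] for y in range(new_h)]

def mask_and_fuse(picture, mask, background):
    """Keep picture pixels where the mask is non-white; background pixels elsewhere."""
    return [[bg if m == WHITE else (r, g, b)
             for (r, g, b), m, bg in zip(pic_row, mask_row, bg_row)]
            for pic_row, mask_row, bg_row in zip(picture, mask, background)]

def make_skin(captured, mask, background):
    """Scale the captured picture to the mask size, then mask and fuse it."""
    target_h, target_w = len(mask), len(mask[0])
    return mask_and_fuse(scale(captured, target_w, target_h), mask, background)
```

A production input tool would perform the same steps with a graphics API and proper alpha compositing; the sketch only shows the order of operations.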
A specific application example is as follows: in an implementation where the network server includes an image storage server, specifically the storage server for the virtual avatars of the instant messaging tool, the scheme by which the user terminal obtains the virtual avatar of the instant messaging tool from that storage server includes the following steps:
step 301, the user terminal reads the locally stored user account and login password.
Step 302, the user terminal initiates a login request to the login server, wherein the login request is packaged with a user account and a login password.
Step 303, the login server parses the user account and the login password in the login request, and compares them with the user account and login password stored at the login server to perform login verification.
Step 304, it is judged whether the verification succeeded; if so, a verification success message, in which an index of the virtual avatar of the instant messaging tool is encapsulated, is returned to the user terminal, and step 305 is executed; otherwise, a verification failure message is returned to the user terminal, the user terminal is prompted that the operation of obtaining the virtual avatar of the instant messaging tool failed, and the current acquisition flow ends.
Step 305, the user terminal parses the index of the virtual avatar of the instant messaging tool in the verification success message, and acquires the virtual avatar of the instant messaging tool from the virtual avatar storage server according to that index.
Step 306, it is judged whether the virtual avatar of the instant messaging tool was successfully acquired; if so, step 307 is executed; otherwise, the user terminal is prompted that the operation of obtaining the virtual avatar of the instant messaging tool failed, and the current acquisition flow ends.
Step 307, the obtained virtual avatar of the instant messaging tool is stored in a local file of the user terminal and displayed in the photo frame skin editing area on the left side of fig. 5.
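The flow of steps 301 through 307 can be sketched with in-memory stand-ins for the login server and the avatar storage server. All class names, message shapes, and the credential layout are assumptions for illustration; the patent describes the protocol, not this API.

```python
class LoginServer:
    def __init__(self, accounts):
        self.accounts = accounts          # account -> (password, avatar_index)

    def verify(self, account, password):
        """Steps 303-304: compare credentials; on success, return the avatar index."""
        stored = self.accounts.get(account)
        if stored and stored[0] == password:
            return {"ok": True, "avatar_index": stored[1]}
        return {"ok": False}


class AvatarStorageServer:
    def __init__(self, avatars):
        self.avatars = avatars            # index -> avatar data

    def fetch(self, index):
        """Step 305: look up the avatar by its index (None if missing)."""
        return self.avatars.get(index)


def obtain_avatar(terminal_credentials, login_server, storage_server):
    account, password = terminal_credentials          # step 301: read local credentials
    reply = login_server.verify(account, password)    # steps 302-304: login request
    if not reply["ok"]:
        return None                                   # failure: end the flow
    return storage_server.fetch(reply["avatar_index"])  # steps 305-307: fetch avatar
```

A real client would also store the fetched avatar in a local file and display it (step 307); the sketch stops at retrieval.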
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (13)

1. A skin design system in an input tool, the system comprising: an intercepting unit and an embedding unit; wherein,
the intercepting unit is used for intercepting the acquired original picture;
and the embedding unit is used for performing mask processing on the intercepted picture to obtain a target picture, and embedding the target picture into a specified photo frame skin display area after performing fusion processing on the target picture.
2. The system of claim 1, wherein the intercepting unit and the embedding unit are located in a user terminal, and the system further comprises a web server for storing all user information identified by a user account.
3. The system according to claim 2, wherein the intercepting unit is further configured to obtain the original picture from the local user terminal or the network server, and to intercept the original picture using the same display scale as the target picture.
4. The system of claim 2, wherein the embedding unit is further configured to perform mask processing and fusion processing on the intercepted picture based on an intermediate carrier for picture conversion and fusion.
5. The system of claim 4, wherein the intermediate carrier for picture conversion and fusion comprises: a mask matched with the photo frame skin display area; the embedding unit further includes: a mask processing module and a fusion processing module; wherein,
the mask processing module is used for performing mask processing on the intercepted picture together with the picture of the mask to obtain the target picture displayed in the photo frame skin display area;
and the fusion processing module is used for fusing the target picture with the photo frame skin display area by using the picture of the mask to obtain the photo frame skin.
6. The system according to any one of claims 2 to 5, wherein the network server comprises an image storage server; or, the system comprises a character storage server and an image processing server; or, the system comprises an image storage server, a character storage server and an image processing server;
the image storage server is used for storing the image information of the user identified by the user account and the image information of the system provided by the system;
the text storage server is used for storing the text information of the user identified by the user account and the text information of the system provided by the system;
the picture processing server is used for converting the character information to obtain information in picture format; or for combining the character information and the image information to obtain combined content including both the character information and the image information, the combined content being information in picture format.
7. A method of skin design in an input tool, the method comprising the steps of:
intercepting the obtained original picture;
and performing mask processing on the intercepted picture to obtain a target picture, and embedding the target picture into a specified photo frame skin display area after performing fusion processing on the target picture.
8. The method according to claim 7, wherein intercepting the original picture specifically comprises: obtaining the original picture from a local user terminal or a network server, and intercepting the original picture using the same display scale as the target picture.
9. The method according to claim 7, wherein the masking and blending the picture is specifically: and performing mask processing and fusion processing on the intercepted picture based on the intermediate carrier for picture conversion and fusion.
10. The method according to any of claims 7 to 9, wherein the original picture comprises: images, text, or a combination of images and text;
the display mode of the target picture is as follows: static display, or dynamic display.
11. The method of claim 9, wherein the intermediate carrier for picture conversion and fusion comprises: a mask matched with the photo frame skin display area.
12. The method according to claim 8, wherein when the original picture is obtained from a network server, the obtaining of the original picture specifically comprises the steps of:
x1, according to the user account and the login key, after the login server successfully verifies the login request of the user terminal, the login server informs the image storage server to issue the index of the image type information to the user terminal; or,
the login server informs the picture processing server to send the index of the character type information to the user terminal; or,
the login server informs the picture processing server to issue an index of the content of the combination of the image type information and the character type information to the user terminal;
x2, the user terminal obtains the image type information from the image storage server according to the index of the image type information; or,
the user terminal acquires the character type information from the image processing server according to the index of the character type information; or,
and the user terminal acquires the content of the combination of the image type information and the character type information from the image processing server according to the index of the content of the combination of the image type information and the character type information.
13. The method of claim 8, further comprising: when the user information in the network server is updated, the network server actively sends a user information updating message to all the user terminals, or periodically notifies all the user terminals after the updated user information is accumulated; users independently select whether to reacquire the original picture; or,
based on the user information updating inquiry request initiated by the user terminal, the network server passively issues a user information updating message to the requesting user terminal.
CNA2009100792527A 2009-03-05 2009-03-05 Skin design system and method in input tool Pending CN101493951A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2009100792527A CN101493951A (en) 2009-03-05 2009-03-05 Skin design system and method in input tool


Publications (1)

Publication Number Publication Date
CN101493951A true CN101493951A (en) 2009-07-29


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622214A (en) * 2011-01-27 2012-08-01 腾讯科技(深圳)有限公司 Method and device for realizing multiple-display mode universal icons
CN102810307A (en) * 2011-06-02 2012-12-05 精工爱普生株式会社 Display device, method of controlling display device, and recording medium
CN103064691A (en) * 2013-01-30 2013-04-24 广东欧珀移动通信有限公司 Method and device for producing desktop icon of electronic equipment
CN103150150A (en) * 2011-12-06 2013-06-12 腾讯科技(深圳)有限公司 Method and device for displaying weather information
CN103677791A (en) * 2012-09-26 2014-03-26 联想(北京)有限公司 Icon processing method and electronic device
CN103903292A (en) * 2012-12-27 2014-07-02 北京新媒传信科技有限公司 Method and system for realizing head portrait editing interface
CN104715205A (en) * 2013-12-12 2015-06-17 中国移动通信集团公司 Image resource processing, publishing and obtaining method and related device
CN105678695A (en) * 2014-11-19 2016-06-15 腾讯科技(深圳)有限公司 Picture processing method and device



Legal Events

C06, PB01: Publication (application publication date: 20090729)
C10, SE01: Entry into force of request for substantive examination
C12, RJ01: Rejection of the patent application after its publication