US20100141679A1 - System to Compose Pictorial/Video Image Contents With a Face Image Designated by the User - Google Patents
- Publication number
- US20100141679A1 (application US 12/093,907)
- Authority
- US
- United States
- Prior art keywords
- creating
- reu
- pictorial
- dcu
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
Definitions
- the present invention relates to a system for composing pictorial/video image contents in which the Face Image which the User designates (hereinafter referred to as “FIU”) is reflected, and more particularly, to a system for composing pictorial/video image contents reflecting the FIU, in which the system provides a series of pictorial/video image composing pipelines capable of changing the face of a specific source character that appears in pictorial/video image contents to an FIU pattern, and guides a video related company (for example, a producer, a distributor, a sales agency (provider), etc.) in establishing a base infrastructure for producing/manufacturing/marketing video on demand (VOD) contents that reflect the individual desires of users, so that it can satisfy user needs in changing the face image of a specific character appearing in pictorial/video image contents into the face image of a person the user designates (for example, his/her own face image, the face image of an acquaintance, the face image of a specific celebrity, the face image of a specific politician, and so on).
- An object of the invention is to provide a system for composing pictorial/video image contents where the Face Image which the User designates (hereinafter referred to as “FIU”) is reflected.
- the object of the invention is achieved through a computational module that converts a representative expression image of a source character appearing in a video content to an FIU in which a user's preferences are reflected, to generate a Representative Expression image for User design character (hereinafter referred to as “REU”); a computational module that converts the standard expression images of the source character on the basis of a specific relationship between the REU and the representative expression image of the source character, to create plural Standard Expression images for User design character (hereinafter referred to as “SEU”); a computational module that combines and transforms the SEUs in an appropriate manner on the basis of the generation characteristics of an expression pictorial/video image of the source character, to create an Expression pictorial/viDeo image for User design character (hereinafter referred to as “EDU”); and a computational module that combines the EDU with the background of the source image to create a DCU (pictorial/viDeo image Contents where the FIU is reflected) in which the face image of the character appearing in the image is newly replaced with the FIU pattern.
- These computational modules provide a series of video composing pipelines capable of changing the face of a specific source character that appears in pictorial/video image contents to an FIU pattern, and guide a video related company (for example, a producer, a distributor, a sales agency (provider), etc.) in establishing a base infrastructure for producing/manufacturing/marketing video on demand (VOD) contents that reflect the individual desires of users, so that it can satisfy user needs in changing the face image of a specific character appearing in pictorial/video image contents into the face image of a person the user designates (for example, his/her own face image, the face image of an acquaintance, the face image of a specific celebrity, the face image of a specific politician, and so on).
- a system for composing a pictorial/video image content where the FIU (face image which the user designates) is reflected, comprising: a DCU (pictorial/viDeo image Contents where the FIU is reflected) production control module installed in an information processing device having an operation system, for storing and managing information related to a source pictorial/video image content, for outputting and operating a production guide window, and for performing overall control over changing a face of a specific source character appearing in the source pictorial/video image content to an FIU pattern; an REU creating module for converting, under the control of the DCU production control module, a representative expression image of a specific source character appearing in the source pictorial/video image content into one matching the FIU, to create an REU (Representative Expression image for User design character); and an SEU creating module for converting, under the control of the DCU production control module, standard expression images of the source character based on conversion features between the representative expression image of the source character and the REU, to create plural SEUs (Standard Expression images for User design character).
- the invention is realized through a computational module that converts a representative expression image of a source character appearing in a video content to an FIU in which a user's preferences are reflected, to generate an REU; a computational module that converts the standard expression images of the source character on the basis of a specific relationship between the REU and the representative expression image of the source character, to create plural SEUs; a computational module that combines and transforms the SEUs in an appropriate manner on the basis of the generation characteristics of an expression pictorial/video image of the source character, to create an EDU; and a computational module that combines the EDU with the background of the source image (character face layer, background layer, costume layer, etc.) to create a DCU in which the face image of the character appearing in the image is newly replaced with the FIU pattern.
- FIG. 1 conceptually shows the general structure of a DCU production system according to an embodiment of the invention.
- FIG. 2 conceptually shows a displayed state of a production guide window according to an embodiment of the invention.
- FIG. 3 conceptually shows a saved state of source character standard expression images according to an embodiment of the invention.
- FIG. 4 conceptually shows a saved state of an expression pictorial/video image of a source character according to an embodiment of the invention.
- FIG. 5 conceptually shows an FIU according to an embodiment of the invention.
- FIG. 6 conceptually shows a representative expression image of a source character according to an embodiment of the invention.
- FIG. 7 conceptually shows an REU according to an embodiment of the invention.
- FIG. 8 conceptually shows a detailed structure of an REU creating module according to an embodiment of the invention.
- FIG. 9 conceptually shows a performance result of a salient point designation guide section that belongs to an REU creating module according to an embodiment of the invention.
- FIG. 10 conceptually shows a performance of an REU creating engine that belongs to an REU creating module according to an embodiment of the invention.
- FIG. 11 conceptually shows an SEU according to an embodiment of the invention.
- FIG. 12 conceptually shows a detailed structure of an SEU creating module according to an embodiment of the invention.
- FIG. 13 conceptually shows a performance of an SEU creating engine that belongs to an SEU creating module according to an embodiment of the invention.
- FIG. 14 conceptually shows an EDU according to an embodiment of the invention.
- FIG. 15 conceptually shows a detailed structure of an EDU creating module according to an embodiment of the invention.
- FIG. 16 and FIG. 17 conceptually show a performance of an EDU creating engine that belongs to an EDU creating module according to an embodiment of the invention.
- FIG. 18 conceptually shows a DCU according to an embodiment of the invention.
- FIG. 19 conceptually shows a detailed structure of a DCU creating module according to an embodiment of the invention.
- FIG. 20 conceptually shows a performance of a DCU creating engine that belongs to a DCU creating module according to an embodiment of the invention.
- a system 100 for composing pictorial/video image contents where the FIU is reflected is installed in an information processing apparatus 10 such as a notebook computer, a desktop computer, etc.
- a video related company executes the pictorial/video image contents composing system of the invention through the medium of an input/output device 13 (e.g., a mouse, a keyboard, a monitor, etc.), an operation system 11 , an application 12 and so on, to produce and further manufacture/sell video on demand (VOD) contents that meet the needs of individual users, through which the face image of a specific character (Genie, Snow-White, Heung-bu, Sinbad, etc.) appearing in pictorial/video image contents can be changed to the face image of another person whom the user designates (for example, his/her own face image, the face image of an acquaintance, the face image of a specific celebrity, the face image of a specific politician, and so on).
- the system 100 for composing pictorial/video image contents where the FIU is reflected is largely constituted by a DCU production control module 110 , and by a production guide window operation module 180 , an FIU acquisition module 120 , a source image content related information storage module 150 , an REU creating module 130 , an SEU creating module 140 , an EDU creating module 170 , and a DCU creating module 160 , all controlled overall by the DCU production control module 110 , each being closely combined with one another.
- the DCU production control module 110 maintains a close connection with the operating system 11 , the application 12 , etc. on the side of the information processing apparatus 10 through the medium of the interface module 111 , and overall controls/manages the process of changing the face of a specific character (e.g., Genie, Snow-White, Heung-bu, Sinbad, etc.) appearing in pictorial/video image contents (e.g., Aladdin, Snow-White, The Story of Heung-bu, The Adventures of Sinbad) to an FIU pattern in accordance with the work of a video related company.
- the production guide window operation module 180 flexibly extracts, under the control of the DCU production control module 110 , various kinds of operating information stored in a private information storage area (e.g., image/text information, skin information, link information, setting information, etc. for generating a guide window), and generates a production guide window 201 as depicted in FIG. 2 on the basis of the extracted operating information. The production guide window operation module 180 then displays the production guide window 201 through an output device 13 on the side of the information processing apparatus 10 , so that the basic environment required for the various service procedures can be established smoothly, without particular problems, by the DCU production control module 110 .
- the source pictorial/video image content related information storage module 150 being controlled by the DCU production control module 110 includes a source character standard expression image storage section 151 , a source character expression pictorial/video image storage section 152 and the like to store and manage “various source character standard expression images as shown in FIG. 3 ” and “source character expression images as shown in FIG. 4 ” (in this case, source character standard expression images indicate standard facial expression images used as a practical basis for source character expression images, e.g., a crying face, an angry face, an astonished face, a laughing face, an image having a mouth shape in pronouncing a given phonetic symbol and so on).
- the source pictorial/video image content related information storage module 150 includes a source pictorial/video image background content storage section 153 and a source pictorial/video image setting information storage section 154 , so that source pictorial/video image background contents (e.g., background pictorial/video image layer, source character body pictorial/video image layer, other source character layer, source character accessories/clothes layer, etc.), and source pictorial/video image setting information (e.g., source character's representative expression image designation information, conversion characteristics information between source character's standard expression image and expression pictorial/video image, proximal expression image related information and the like) are stored and managed in a stable manner.
- the FIU acquisition module 120 , being controlled by the DCU production control module 110 , builds a series of communication relationships with the operation system 11 and the application 12 via the interface module 111 , and a video related company operates the production guide window 201 to progress a computation work that provides a privately designated FIU to the system 100 (the FIU at this time is the face image of a user/celebrity/politician, etc., designated by a user who has placed a special order with the video related company for the production of pictorial/video image contents; the procedure of acquiring such a user designated face image may undergo diverse changes in accordance with the circumstances of the video related company).
- the FIU acquisition module 120 acquires an FIU similar to the one shown in FIG. 5 , for example, by the medium of the interface module 111 , and then stores and manages the acquired FIU in a private information storage buffer 121 .
- the REU creating module 130 communicates, under the control of the DCU production control module 110 , with the source character standard expression image storage section 151 and with the source pictorial/video image setting information storage section 154 after the FIU is secured and stored in the information storage buffer 121 by the FIU acquisition module 120 . Accordingly, the REU creating module 130 extracts a source character representative expression image similar to the one shown in FIG. 6 (in this case, the source character representative expression image means an expression image that can represent the standard expression images of the source character) out of the standard expression images of the source character (refer to FIG. 3 ) stored in the source character standard expression image storage section 151 , and then converts the source character representative expression image into one that matches the FIU, to create an REU similar to the one shown in FIG. 7 .
- the REU creating module 130 is constituted by an REU creating control section 131 that is in charge of overall control of the REU creating procedure, and other constituents that operate under the control of the REU creating control section 131 , i.e., an FIU loading section 135 , a source character representative expression image loading section 134 , a salient point designation guide section 133 , and an REU creating engine 137 , each being closely combined with one another.
- the FIU loading section 135 communicates, under the control of the REU creating control section 131 , with the FIU acquisition module 120 via an information exchange section 132 after the FIU is secured and stored (refer to FIG. 5 ) in the information storage buffer 121 by the FIU acquisition module 120 , and loads the acquired FIU in a processing buffer 136 .
- the source character representative expression image loading section 134 communicates, under the control of the REU creating control section 131 , with the source pictorial/video image setting information storage section 154 via the information exchange section 132 after the FIU loading procedure is completed by the FIU loading section 135 , to figure out the stored source character's representative expression image designation information (e.g., information that describes the source character's representative expression image). Later, the source character representative expression image loading section 134 communicates with the source character standard expression image storage section 151 via the information exchange section 132 , to selectively extract a source character representative expression image (refer to FIG. 6 ) out of the stored standard expression images of the source character (refer to FIG. 3 ), and loads the extracted source character representative expression image into the processing buffer 136 (of course, the performance of the source character representative expression image loading section may precede the performance of the FIU loading section described earlier).
- the salient point designation guide section 133 communicates, under the control of the REU creating control section 131 , with the production guide window operation module 180 after extracting FIU that has been loaded into the processing buffer 136 by the FIU loading section 135 , and displays the corresponding FIU through the production guide window 201 as shown in FIG. 9 .
- a video related company or a user may easily designate a number of salient points on main parts of FIU (eyes, nose, philtrum, etc.) through the production guide window 201 .
- the REU creating engine 137 , under the control of the REU creating control section 131 , acquires, as illustrated in FIG. 10 , “a difference degree (for example, a degree indicating the difference between two positions) between the position (e.g., m_k) of a salient point appointed to a main part of the FIU 2 and the position (e.g., v_k) of a vertex constituting the polygon mesh PS of the representative expression image 1 ”.
- the REU creating engine 137 analyzes/acquires the difference between the two positions under the limiting condition shown in Math FIG. 1 below, i.e., the condition that “there is little difference between the position m_k of the salient point appointed to the main part of the FIU 2 and the position v_k of the corresponding vertex of the representative expression image.” Consequently, the REU creating engine 137 guides the subsequent procedures, i.e., the REU acquisition, the SEU acquisition, and the EDU acquisition, to progress more rapidly while minimizing deformation of the source character images (the source character representative expression image and the source character standard expression images).
- m_k is the position of the k-th salient point appointed to the main part of the FIU.
- v_k is the position of the k-th vertex constituting a polygon mesh of the source character representative expression image.
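The equation of Math FIG. 1 is an image in the original publication and is not reproduced in this text. From the definitions of m_k and v_k above, the limiting condition it expresses can plausibly be reconstructed (as an assumption, not a verbatim copy of the original formula) as:

```latex
d_k = \lVert m_k - v_k \rVert , \qquad d_k \approx 0 \quad (k = 1, \dots, K)
```

where K is the number of salient points designated on the main parts of the FIU.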
- the REU creating engine 137 calculates Math FIG. 2 below based on the difference degree to obtain a summation of the respective items, thereby progressing a process of estimating the positions of the vertexes (V_I, for example) that will constitute the polygon mesh PN of the REU 3 .
- the positions of the vertexes (V_I, for example) that will constitute the polygon mesh PN of the REU 3 are estimated through a least square method as shown in Math FIG. 2 . Therefore, in the invention, the vertexes constituting the polygon mesh PS of the source character representative expression image 1 exhibit, within a minimum deformation range, a transition pattern that becomes optimally similar to the features of the vertexes constituting the polygon mesh PT of the FIU 2 (i.e., the feature error between the two sets of vertexes is minimized).
- the source character representative expression image 1 can thus eventually be changed into the REU 3 , which optimally reflects the features of the FIU 2 , while minimizing the deformation of the source character representative expression image 1 .
- V_I is the position of the I-th vertex that will constitute the polygon mesh of the REU.
- T_i is the transform matrix of the i-th triangle constituting the polygon mesh of the source character representative expression image.
- T_j is the transform matrix of the j-th triangle neighboring T_i.
- I is an ideal transform matrix that is almost the same as T_i.
- v_i is the position of the i-th vertex constituting the polygon mesh of the source character representative expression image.
- c_i is the position of the i-th vertex constituting the polygon mesh of the FIU, as the nearest corresponding position to v_i.
- the matrix norm ‖·‖_F is the Frobenius norm.
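The equation of Math FIG. 2 is likewise an image in the original publication. From the symbol definitions above and the three weighted items discussed in the text (neighbor smoothness of T_i, closeness of T_i to the ideal transform I, and closeness of the transformed vertex to c_i), a plausible least-squares reconstruction, offered as an assumption rather than a verbatim copy, is:

```latex
\{V_i\} = \operatorname*{arg\,min}_{V_1,\dots,V_n}\;
    w_s \sum_{i} \sum_{j \in \mathrm{adj}(i)} \lVert T_i - T_j \rVert_F^2
  + w_m \sum_{i} \lVert T_i - I \rVert_F^2
  + w_d \sum_{i} \lVert V_i - c_i \rVert^2
```

Here each transform matrix T_i is a function of the unknown new vertex positions, so the whole expression is solved as a single least-squares problem; adj(i) denotes the triangles neighboring the i-th triangle, and the ideal matrix I is independent of the vertex index.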
- the first item included in Math FIG. 2 is a term that estimates a V_I value so that the transform matrix T_i of the i-th triangle P 1 constituting the polygon mesh PS of the source character representative expression image 1 is transformed while having a value as similar as possible to the transform matrix T_j of the j-th triangle P 2 neighboring T_i, in a situation where v_i is converted into V_I to fully form the polygon mesh PN of the REU 3 (refer to FIG. 10 ).
- as V_I is finally estimated by the calculation of this item of Math FIG. 2 , the REU 3 can maintain an optimized, very smooth shape due to an increase in the similarity of neighboring polygon meshes.
- the second item included in Math FIG. 2 is a term that estimates a V_I value so that the transform matrix T_i of the i-th triangle P 1 constituting the polygon mesh PS of the source character representative expression image 1 is transformed while having a value as close as possible to the ideal transform matrix I that is almost the same as T_i, in a situation where v_i is converted into V_I to fully form the polygon mesh PN of the REU 3 (refer to FIG. 10 ).
- as V_I is finally estimated by the calculation of this item of Math FIG. 2 , the vertexes constituting the polygon mesh PS of the source character representative expression image 1 can naturally form the REU 3 , which optimally reflects the features of the FIU 2 , even within the minimum deformation range.
- the third item included in Math FIG. 2 is a term that estimates a V_I value so that the position v_i of the i-th vertex constituting the polygon mesh PS of the source character representative expression image 1 is transformed while minimizing, as far as possible, its difference from the position c_i of the i-th vertex constituting the polygon mesh PT of the FIU 2 , which is the nearest corresponding position to v_i, in a situation where v_i is converted into V_I to form the polygon mesh PN of the REU 3 in earnest (refer to FIG. 10 ).
- as V_I is finally estimated by the calculation of this item of Math FIG. 2 , the REU 3 can naturally form a shape that is closest to the features of the FIU.
- w_s, w_m, w_d and the like included in the respective items of Math FIG. 2 are weight factors of the corresponding items.
- the REU creating engine 137 sets the weight factors of the respective items differently depending on conditions (for example, w_s: 0.01, w_m: 0.1, and w_d: 0.2) in the calculation of Math FIG. 2 , thereby enabling the REU 3 , which will finally be completed, to have a shape matching the FIU 2 more efficiently.
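As a concrete illustration of the least-squares estimation described above, the sketch below fits new vertex positions V to salient-point targets while preserving the original mesh shape. It is a deliberately simplified stand-in: the patent's Math FIG. 2 operates on per-triangle transform matrices (T_i vs. T_j and T_i vs. I), whereas this toy version uses an edge-difference smoothness term; the function name, the tiny 2D mesh, and the target positions c are all made up for the example.

```python
import numpy as np

def fit_vertices(v, c, edges, w_s=0.01, w_d=0.2):
    """Estimate new vertex positions V minimizing
    w_s * sum_edges ||(V_i - V_j) - (v_i - v_j)||^2   (preserve original shape)
    + w_d * sum_i   ||V_i - c_i||^2                   (move toward targets).
    v, c: (n, 2) arrays; edges: list of (i, j) index pairs."""
    n = v.shape[0]
    rows, rhs = [], []
    for (i, j) in edges:                        # shape-preservation equations
        r = np.zeros(n)
        r[i], r[j] = 1.0, -1.0
        rows.append(np.sqrt(w_s) * r)
        rhs.append(np.sqrt(w_s) * (v[i] - v[j]))
    for i in range(n):                          # target-closeness equations
        r = np.zeros(n)
        r[i] = 1.0
        rows.append(np.sqrt(w_d) * r)
        rhs.append(np.sqrt(w_d) * c[i])
    A = np.vstack(rows)                         # (m, n) stacked system
    b = np.vstack(rhs)                          # (m, 2), solved per coordinate
    V, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solve
    return V

v = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # source mesh vertices
c = np.array([[0.1, 0.0], [1.1, 0.0], [0.1, 1.0]])   # nearest target positions
edges = [(0, 1), (0, 2), (1, 2)]
V = fit_vertices(v, c, edges)
```

Because the targets here are a pure translation of the source mesh, both terms can be satisfied exactly and the solve recovers the targets; with conflicting targets, the weights trade off shape preservation against target fidelity, as the patent's w_s/w_m/w_d weights do.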
- the REU creating control section 131 communicates with the DCU production control module 110 by the medium of the information exchange section 132 , to guide the REU 3 to be stably stored and managed in the processing buffer 112 of the DCU production control module 110 .
- the SEU creating module 140 , which is controlled by the DCU production control module 110 similarly to the FIU acquisition module 120 and the REU creating module 130 , communicates with the REU creating module 130 , the source character standard expression image storage section 151 and the like after the REU creating module 130 stores the REU 3 in the processing buffer 112 of the DCU production control module 110 , and converts the source character standard expression images (refer to FIG. 3 ) based on the transform features between the source character representative expression image 1 and the REU 3 , to thereby create plural SEUs as shown in FIG. 11 .
- the SEU creating module 140 is constituted by an SEU creating control section 141 that is in charge of overall control of the SEU creating procedure, and other constituents that operate under the control of the SEU creating control section 141 , i.e., an REU conversion characteristic acquisition section 143 , a source character standard expression image loading section 144 , and an SEU creating engine 146 , each being closely combined with one another.
- the REU conversion characteristic acquisition section 143 communicates, under the control of the SEU creating control section 141 , with the REU creating module 130 , the processing buffer 112 on the side of the DCU production control module 110 , etc., via the information exchange section 142 after the REU creating module 130 completes the creation of the REU 3 , to acquire the position conversion characteristics exhibited when the source character's representative expression image 1 is converted into the REU 3 (e.g., the position conversion characteristics when v_i is transformed into V_I to constitute the polygon mesh PN of the REU 3 ).
- the acquisition result data is loaded and stored into the processing buffer 145 by the REU conversion characteristic acquisition section 143 .
- the source character standard expression image loading section 144 communicates, under the control of the SEU creating control section 141 , with the source character standard expression image storage section 151 by the medium of the information exchange section 142 after the REU conversion characteristic acquisition section 143 completes loading of the position conversion characteristic data into the processing buffer 145 , to extract source character standard expression images (refer to FIG. 3 ) having been stored therein.
- These extracted source character standard expression images are loaded into the processing buffer 145 by the source character standard expression image loading section 144 (of course, the performance of the source character standard expression image loading section may precede the performance of the REU conversion characteristic acquisition section).
- the SEU creating engine 146 , which is controlled by the SEU creating control section 141 , communicates with the processing buffer 145 to figure out, as shown in FIG. 13 , the position conversion characteristics of the vertexes constituting the polygon mesh PS of the source character representative expression image 1 (e.g., the characteristics exhibited when v_i is transformed into V_I to constitute the polygon mesh PN of the REU 3 ), and converts the positions of the vertexes (v_n, for example) that constitute the polygon mesh PS of the respective source character standard expression images 4 into V_N, for example, to constitute a new polygon mesh PNN.
- because the SEU creating engine 146 progresses the creation procedure of the plural SEUs 5 having figured out “the position conversion characteristics when the source character representative expression image 1 is converted into the REU 3 ” in advance, the processing speed of the creation of the plural SEUs 5 can be accelerated to an optimal state.
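The SEU creation step described above can be sketched as a transfer of the representative-to-REU conversion onto every standard expression image. The per-vertex displacement transfer below is an illustrative simplification of the patent's “position conversion characteristics” (which may involve full per-triangle transforms rather than plain displacements); the function name and the tiny meshes are toy assumptions.

```python
import numpy as np

def create_seu(rep, reu, standard_images):
    """rep, reu: (n, 2) vertex arrays of the representative expression image
    and of the REU; standard_images: list of (n, 2) vertex arrays, one per
    standard expression image of the source character.
    Returns one SEU mesh per standard expression image."""
    displacement = reu - rep                  # conversion characteristics v_i -> V_I
    return [std + displacement for std in standard_images]

rep = np.array([[0.0, 0.0], [1.0, 0.0]])      # representative expression vertices
reu = np.array([[0.2, 0.1], [1.2, 0.1]])      # REU vertices after FIU matching
smile = np.array([[0.0, 0.5], [1.0, 0.5]])    # a toy "standard expression" mesh
seus = create_seu(rep, reu, [smile])
```

Because the conversion characteristics are computed once and re-applied to every standard expression image, the per-image cost is a single array addition, which mirrors the speed-up the text attributes to figuring out the characteristics in advance.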
- the SEU creating control section 141 communicates with the DCU production control module 110 by the medium of the information exchange section 142 , to guide the corresponding SEUs 5 to be stably stored and managed in the processing buffer 112 of the DCU production control module 110 .
- the EDU creating module 170 communicates, under the control of the DCU production control module 110 , with the SEU creating module 140 , the source pictorial/video image setting information storage section 154 and the like, to selectively combine/transform the SEUs 5 on the basis of the conversion features exhibited when the proximate expression image that is most proximate, among the standard expression images of the source character, to each reproduced expression pictorial/video image of the source character (refer to FIG. 4 ) is converted into that reproduced expression image, to thereby create an EDU similar to the one shown in FIG. 14 .
- the EDU creating module 170 is constituted by an EDU creating control section 171 that is in charge of overall control of the EDU creating procedure, and other constituents that operate under the control of the EDU creating control section 171 , i.e., an SEU loading section 174 , a source character expression pictorial/video image creating characteristic acquisition section 173 , and an EDU creating engine 176 , each being closely combined with one another.
- the SEU loading section 174 which is controlled by the EDU creating control section 171 , communicates via the information exchange section 172 with the processing buffer 112 of the DCU production control module 110 where SEUs 5 had been stored by the SEU creating module 140 , so as to extract corresponding SEUs 5 , and loads the extracted SEUs 5 into the processing buffer 175 .
- the source character expression pictorial/video image creating characteristic acquisition section 173 which is controlled by the EDU creating control section 171 , communicates with the source pictorial/video image setting information storage section 154 by the medium of the information exchange section 172 after the SEU loading section completes loading of the SEUs 5 into the processing buffer 175 , to check information having been stored therein, i.e., (as shown in FIG. 16 ) information about which proximate expression image 4 a, 4 b, and 4 c among the standard expression images 4 of the source character is most proximate to each reproduced expression image 6 , 6 a, 6 b, and 6 c: refer to FIG.
- the source character expression pictorial/video image creating characteristic acquisition section 173 acquires "mixture weight features when the proximate expression images 4 a, 4 b, and 4 c are mixed with other standard expression images 4 to transform the reproduced expression pictorial/video images 6 a, 6 b, and 6 c every moment" (of course, the performance of the source character expression pictorial/video image creating characteristic acquisition section 173 may precede the performance of the SEU loading section 174).
- upon the completion of the procedures in the respective computation parts, the EDU creating engine 176, which is controlled by the EDU creating control section 171, estimates the proximate SEUs 5 a, 5 b, and 5 c that are most proximate to the EDUs 7, 7 a, 7 b, and 7 c among the SEUs 5, when it is assumed, as shown in FIG. 17, that a user design character appearing in the DCU is reproduced with changes in facial expressions as time passes, on the basis of the information on the proximate expression images 4 a, 4 b, and 4 c having been acquired by the source character expression pictorial/video image creating characteristic acquisition section 173. Moreover, the EDU creating engine 176 calculates Math FIG. 3 below according to the mixture weight features, and combines the proximate SEUs 5 a, 5 b, and 5 c with the other SEUs 5 every moment, to create the EDU 7 depending on the reproduction time of the DCU (refer to FIG. 17).
- F(t) = Σi wi(t)·Mi [Math FIG. 3]
- where, F(t) is the EDU varying by the reproduction time flow of the DCU, wi(t) is a function of the mixture weights with respect to time, and Mi is the i th proximate SEU.
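Math FIG. 3 describes a time-varying linear blend of the proximate SEU meshes. A minimal sketch, assuming each SEU is a vertex array and the per-instant mixture weights wi(t) are supplied by the caller (how the weight functions are derived from the source character's expression characteristics is not shown here):

```python
import numpy as np

def edu_frame(seus, weights):
    """F(t) = sum_i w_i(t) * M_i : blend the proximate SEUs into the
    EDU mesh for one reproduction instant t.

    seus    : list of (N, D) vertex arrays (the proximate SEUs M_i)
    weights : list of floats w_i(t) for this instant, summing to 1
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "mixture weights should sum to 1"
    return sum(w * m for w, m in zip(weights, seus))

# Two proximate SEUs blended 70/30 at some instant t.
neutral = np.zeros((3, 3))
smile   = np.ones((3, 3))
frame = edu_frame([neutral, smile], [0.7, 0.3])
print(frame[0, 0])   # -> 0.3
```

Evaluating this blend once per reproduction instant, with weights that vary over time, yields the expression video of the user design character.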
- the EDU creating engine 176 can accelerate the processing speed of the creation of the EDU 7 to an optimal state, because the SEUs 5 obtained based on the standard expression images 4 of the source character are utilized as the basis of the EDU formation, and "the mixture weight features at the time the proximate expression images 4 a, 4 b, and 4 c are mixed with other standard expression images 4 for conversion of the reproduced expression pictorial/video images 6 every moment" are completely employed as the mixture weight features for the formation of the EDU 7.
- the EDU creating control section 171 communicates with the DCU production control module 110 via the information exchange section 172 , to guide the corresponding EDU 7 to be stored and managed in the processing buffer 112 of the DCU production control module 110 in a stable manner.
- the DCU creating module 160 communicates, under the control of the DCU production control module 110, with the source pictorial/video image background content storage section 153 and combines the EDU 7 with the background of the source pictorial/video image content, to create a DCU having the face image of the source character changed to the FIU 2 as shown in FIG. 18.
- the DCU creating module 160 is constituted by a DCU creating control section 161 that is in charge of overall control of the DCU creating procedure, and other constituents that operate under the control of the DCU creating control section 161 , i.e., a background content loading section 163 , an EDU loading section 164 , and a DCU creating engine 166 , each being closely combined with one another.
- the background content loading section 163 communicates, under the control of the DCU creating control section 161 , with the source pictorial/video image background content storage section 153 by the medium of the information exchange section 162 after the EDU creating module 170 stores the EDU 7 in the processing buffer 112 of the DCU production control module 110 , and extracts the background of the source pictorial/video image content having been stored (e.g., background pictorial/video image layer, source character body pictorial/video image layer, other source character layer, source character accessories/clothes layer, etc.).
- the background content loading section 163 loads the extracted background data into the processing buffer 165 .
- the EDU loading section 164 communicates, under the control of the DCU creating control section 161, with the processing buffer 112 of the DCU production control module 110 by the medium of the information exchange section 162 after the EDU creating module 170 stores the EDU 7 in the processing buffer 112, to extract the corresponding EDU 7.
- the EDU loading section 164 loads the extracted EDU 7 into the processing buffer 165 (of course, the performance of the EDU loading section may precede the performance of the background content loading section).
- the DCU creating engine 166 communicates, under the control of the DCU creating control section 161 , with the processing buffer 165 and synthesizes the EDU 7 with a source character face image f of the background data B 1 , B 2 , and B 3 , according to the reproduction flow of the source pictorial/video image contents as shown in FIG. 20 .
- a DCU 8 having the source character face image f being newly replaced into the FIU 2 pattern is created (refer to FIG. 18 ).
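The per-frame synthesis by the DCU creating engine amounts to layer compositing: at each reproduction instant the EDU frame is written over the source character face region f of the background layers B1, B2, and B3. A simplified sketch, assuming frames are RGB arrays and the face region is a fixed rectangle (the real engine works with the stored layer structure, not a rectangle, so this is illustrative only):

```python
import numpy as np

def compose_dcu_frame(background, edu_face, region):
    """Replace the source character's face region in one background
    frame with the corresponding EDU face frame."""
    top, left = region
    h, w, _ = edu_face.shape
    frame = background.copy()
    frame[top:top + h, left:left + w] = edu_face   # overwrite face region f
    return frame

def compose_dcu(background_frames, edu_frames, region):
    """Synthesize the whole DCU along the reproduction time flow."""
    return [compose_dcu_frame(b, e, region)
            for b, e in zip(background_frames, edu_frames)]

# Toy example: 2 frames of an 8x8 background, 3x3 face pasted at (2, 2).
bg = [np.zeros((8, 8, 3), dtype=np.uint8) for _ in range(2)]
face = [np.full((3, 3, 3), 255, dtype=np.uint8) for _ in range(2)]
dcu = compose_dcu(bg, face, (2, 2))
print(dcu[0][2, 2])   # -> [255 255 255]
```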
- the DCU creating control section 161 communicates with the DCU production control module 110 by the medium of the information exchange section 162 , to guide the corresponding DCU 8 to be stably stored and managed in the processing buffer 112 of the DCU production control module.
Description
- The present invention relates to a system for composing pictorial/video image contents where the Face Image which the User designates (hereinafter referred to as "FIU") is reflected, and more particularly, to a system for composing pictorial/video image contents reflecting the FIU, in which the system provides a series of pictorial/video image composing pipeline capable of changing the face of a specific source character that appears in pictorial/video image contents to an FIU pattern, and guides a video related company (for example, a producer, a distributor, a sales agency (provider), etc.) to be able to establish a base infrastructure for producing/manufacturing/marketing a video on demand (VOD) content that reflects the individual desires of a user, so that it can satisfy user needs in changing the face image of a specific character appearing in pictorial/video image contents into the face image of his/her favorite person (for example, his/her own face image, the face image of his/her acquaintance, the face image of a specific celebrity, the face image of a specific politician, and so on).
- With the widespread use of home appliances such as DVD players, CD players, video players, and so on in recent years, the number of users who purchase/watch pictorial/video image contents (e.g., music videos, movies, games, animations, etc.) has also increased to a great extent. Keeping abreast with such an increase in users, a variety of video content products such as DVDs, CDs, videos and so on have come onto the market, and their scale has increased sharply.
- Under the traditional system, pictorial/video image contents related companies usually produced the contents and put them on the market en bloc, without paying special attention to the needs of individual users. Therefore, it was a matter of course that the same character appearing in each video content released (sold) to the public always had the same face image that was originally designed by the producers.
- For instance, suppose that 1,000 products of a video content such as <Aladdin> were sold to the public. A character ‘Genie’ in the content has the same face image that a producer originally designed in each of the 1,000 products. Similarly, suppose that 1,000 products of a video content such as <The story of Heung-bu> (a Korean classic novel) were sold to the public. A character ‘Heung-bu’ in the content has the same face image that a producer originally designed in each of the 1,000 products.
- In the case that the same characters appearing in each video content released (sold) to the public had the same face images that a producer originally designed, users had no choice but to accept them as they were. For example, although a user may desire to place a special order to change the face image of a specific character appearing in a video content into the face image of his/her favorite person (for example, his/her own face image, the face image of his/her acquaintance, the face image of a specific celebrity, the face image of a specific politician, and so on), such an order could not be realized at all.
- The invention has been made to solve the above problems occurring in the prior art. An object of the invention is to provide a system for composing pictorial/video image contents where the Face Image which the User designates (hereinafter referred to as "FIU") is reflected. The object of the invention is achieved through a computational module that converts a representative expression image of a source character appearing in a video content to an FIU where a user's preferences are reflected, to generate a Representative Expression image for User design character (hereinafter referred to as "REU"); a computational module that converts the standard expression images of the source character on the basis of a specific relationship between the REU and the representative expression image of the source character, to create plural Standard Expression images for User design character (hereinafter referred to as "SEU"); a computational module that combines and transforms the SEUs in an appropriate manner on the basis of the generation characteristics of an expression pictorial/video image of the source character, to create an Expression pictorial/viDeo image for User design character (hereinafter referred to as "EDU"); and a computational module that combines the EDU with the background of a source image (character face layer, background layer, costume layer, etc.), to create a pictorial/viDeo image Contents where the face image which the User designates is reflected (hereinafter referred to as "DCU"), in which the face image of a character appearing in the image is newly replaced in the FIU pattern. These computational modules provide a series of video composing pipeline capable of changing the face of a specific source character that appears in pictorial/video image contents to an FIU pattern, and guide a video related company (for example, a producer, a distributor, a sales agency (provider), etc.) to be able to establish a base infrastructure for producing/manufacturing/marketing a video on demand (VOD) content that reflects the individual desires of a user, so that they can satisfy user needs in changing the face image of a specific character appearing in pictorial/video image contents into the face image of his/her favorite person (for example, his/her own face image, the face image of his/her acquaintance, the face image of a specific celebrity, the face image of a specific politician, and so on).
- To achieve the foregoing object, there is provided a system for composing a pictorial/video image content where the FIU (face image which the user designates) is reflected, comprising: a DCU (pictorial/viDeo image Contents where the FIU is reflected) production control module installed in an information processing device having an operation system, for storing and managing information related to a source pictorial/video image content, for outputting and operating a production guide window, and for performing overall control over changing a face of a specific source character appearing in the source pictorial/video image content to an FIU pattern; an REU creating module for converting, under the control of the DCU production control module, a representative expression image of a specific source character appearing in the source pictorial/video image content into one matching with the FIU, to create an REU (Representative Expression image for User design character); an SEU creating module for converting, under the control of the DCU production control module, standard expression images of the source character based on conversion features between the representative expression image of the source character and the REU, to create plural SEUs (Standard Expression images for User design character); an EDU creating module for selectively combining and transforming the SEUs, under the control of the DCU production control module, based on conversion features exhibited when a proximate expression image that is most proximate to each reproduced expression pictorial/video image of the source character among the standard expression images of the source character is converted into each reproduced expression image, to create an EDU (Expression pictorial/viDeo image for User design character); and a DCU creating module for creating, under the control of the DCU production control module, a DCU having a face image of the source character newly replaced into the FIU pattern by combining the EDU with a background of the source pictorial/video image content.
- The invention is realized through a computational module that converts a representative expression image of a source character appearing in a video content to an FIU where a user's preferences are reflected, to generate an REU; a computational module that converts the standard expression images of the source character on the basis of a specific relationship between the REU and the representative expression image of the source character, to create plural SEUs; a computational module that combines and transforms the SEUs in an appropriate manner on the basis of the generation characteristics of an expression pictorial/video image of the source character, to create an EDU; and a computational module that combines the EDU with the background of a source image (character face layer, background layer, costume layer, etc.), to create a DCU in which the face image of a character appearing in the image is newly replaced in the FIU pattern. These computational modules provide a series of video composing pipeline capable of changing the face of a specific source character that appears in pictorial/video image contents to an FIU pattern, and guide a video related company (for example, a producer, a distributor, a sales agency (provider), etc.) to be able to establish a base infrastructure for producing/manufacturing/marketing a video on demand (VOD) content that reflects the individual desires of a user, so that they can satisfy user needs in changing the face image of a specific character appearing in pictorial/video image contents into the face image of his/her favorite person (for example, his/her own face image, the face image of his/her acquaintance, the face image of a specific celebrity, the face image of a specific politician, and so on).
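The four computational modules form a strict pipeline (FIU → REU → SEUs → EDU → DCU). The data flow can be sketched as follows, with trivial string-valued stand-ins for each module so the chain can be executed end to end (all function names and the dictionary layout are illustrative assumptions, not the patent's identifiers):

```python
# Trivial stand-ins so the module chain can be executed and inspected.
def create_reu(fiu, rep):
    return f"REU({fiu},{rep})"

def create_seus(reu, standards):
    return [f"SEU({reu},{s})" for s in standards]

def create_edu(seus, video):
    return f"EDU({'+'.join(seus)},{video})"

def create_dcu(edu, layers):
    return f"DCU({edu},{layers})"

def produce_dcu(fiu, source):
    """Run the composing pipeline: FIU -> REU -> SEUs -> EDU -> DCU."""
    reu = create_reu(fiu, source["rep"])          # representative image matched to FIU
    seus = create_seus(reu, source["standards"])  # standard expressions converted
    edu = create_edu(seus, source["video"])       # expression video of user character
    return create_dcu(edu, source["layers"])      # composited with background layers

result = produce_dcu("face.png", {"rep": "genie_rep", "standards": ["smile"],
                                  "video": "clip", "layers": "bg"})
print(result)
```

The nesting of the resulting string mirrors the order in which each module consumes the previous module's output.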
- The above and other objects, features and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
- FIG. 1 conceptually shows a general structure of a DCU production system according to an embodiment of the invention;
- FIG. 2 conceptually shows a noticed state of a production guide window according to an embodiment of the invention;
- FIG. 3 conceptually shows a saved state of source character standard face images according to an embodiment of the invention;
- FIG. 4 conceptually shows a saved state of an expression pictorial/video image of a source character according to an embodiment of the invention;
- FIG. 5 conceptually shows an FIU according to an embodiment of the invention;
- FIG. 6 conceptually shows a representative expression image of a source character according to an embodiment of the invention;
- FIG. 7 conceptually shows an REU according to an embodiment of the invention;
- FIG. 8 conceptually shows a detailed structure of an REU creating module according to an embodiment of the invention;
- FIG. 9 conceptually shows a performance result of a salient point designation guide section that belongs to an REU creating module according to an embodiment of the invention;
- FIG. 10 conceptually shows a performance of an REU creating engine that belongs to an REU creating module according to an embodiment of the invention;
- FIG. 11 conceptually shows an SEU according to an embodiment of the invention;
- FIG. 12 conceptually shows a detailed structure of an SEU creating module according to an embodiment of the invention;
- FIG. 13 conceptually shows a performance of an SEU creating engine that belongs to an SEU creating module according to an embodiment of the invention;
- FIG. 14 conceptually shows an EDU according to an embodiment of the invention;
- FIG. 15 conceptually shows a detailed structure of an EDU creating module according to an embodiment of the invention;
- FIG. 16 and FIG. 17 conceptually show a performance of an EDU creating engine that belongs to an EDU creating module according to an embodiment of the invention;
- FIG. 18 conceptually shows a DCU according to an embodiment of the invention;
- FIG. 19 conceptually shows a detailed structure of a DCU creating module according to an embodiment of the invention; and
- FIG. 20 conceptually shows a performance of a DCU creating engine that belongs to a DCU creating module according to an embodiment of the invention.
- Hereinafter, a preferred embodiment of a system for composing pictorial/video image contents where FIU is reflected according to the present invention will be described with reference to the accompanying drawings.
- As shown in FIG. 1, a system 100 for composing pictorial/video image contents where FIU is reflected according to an embodiment of the invention is attached to an information processing apparatus 10 such as a notebook computer, a desktop computer, etc.
- In such circumstances, a video related company (e.g., a producer, a distributor, a selling agency (provider), etc.) executes the pictorial/video image contents composing system of the invention through the medium of an input/output device 13 (e.g., a mouse, a keyboard, a monitor, etc.), an operation system 11, an application 12 and so on, to produce and further manufacture/sell a video on demand (VOD) content to meet the needs of individual users, through which the face image of a specific character (Genie, Snow-White, Heung-bu, Sinbad, etc.) appearing in pictorial/video image contents can be changed to the face image of another person who a user ordered (for example, his/her own face image, the face image of his/her acquaintance, the face image of a specific celebrity, the face image of a specific politician, and so on).
- At this time, as can be seen from the drawing, the system 100 for composing pictorial/video image contents where FIU is reflected according to an embodiment of the invention is largely constituted by a DCU production control module 110, a production guide window operation module 180 controlled overall by the DCU production control module 110, an FIU acquisition module 120, a source image content related information storage module 150, an REU creating module 130, an SEU creating module 140, an EDU creating module 170, and a DCU creating module 160, each being closely combined with one another.
- In this case, the DCU production control module 110 maintains close connection with the operating system 11, the application 12, etc. on the side of the information processing apparatus 10 by the medium of the interface module 111, and overall controls/manages the process of changing the face of a specific character (e.g., Genie, Snow-White, Heung-bu, Sinbad, etc.) appearing in pictorial/video image contents (e.g., Aladdin, Snow-White, The story of Heung-bu, The adventure of Sinbad) to an FIU pattern in accordance with the work of a video related company.
- At this time, the production guide
window operation module 180 flexibly extracts, under the control of the DCU production control module 110, various kinds of operating information stored in a private information storage area, e.g., image/text information, skin information, link information, setting information, etc. for generating a guide window, and generates a production guide window 201 as depicted in FIG. 2 on the basis of the various operating information having been extracted. The production guide window operation module 180 then selects and notifies the production guide window 201 through an output device 13 on the side of the information processing apparatus 10, so that the basic environment required for various service procedures can be established smoothly, without particular problems, by the DCU production control module 110.
- The source pictorial/video image content related information storage module 150 being controlled by the DCU production control module 110 includes a source character standard expression image storage section 151, a source character expression pictorial/video image storage section 152 and the like, to store and manage "various source character standard expression images as shown in FIG. 3" and "source character expression images as shown in FIG. 4" (in this case, the source character standard expression images indicate standard facial expression images used as a practical basis for the source character expression images, e.g., a crying face, an angry face, an astonished face, a laughing face, an image having a mouth shape in pronouncing a given phonetic symbol, and so on). Moreover, the source pictorial/video image content related information storage module 150 includes a source pictorial/video image background content storage section 153 and a source pictorial/video image setting information storage section 154, so that source pictorial/video image background contents (e.g., background pictorial/video image layer, source character body pictorial/video image layer, other source character layer, source character accessories/clothes layer, etc.) and source pictorial/video image setting information (e.g., the source character's representative expression image designation information, conversion characteristics information between the source character's standard expression images and expression pictorial/video images, proximate expression image related information, and the like) are stored and managed in a stable manner.
- Under such a basic infrastructure, the FIU acquisition module 120 being controlled by the DCU production control module 110 builds a series of communication relationships with the operation system 11 and the application 12 via an interface module 111, and a video related company operates the production guide window 201 to progress a computation work providing a privately designated FIU to the system 100 (the FIU at this time is the face image of a user/celebrity/politician, etc., designated by a user who made a special order to the video related company for production of pictorial/video image contents, and the procedure of acquiring such a user designated face image may undergo diverse changes in accordance with the circumstances of the video related company). In this case, the FIU acquisition module 120 acquires an FIU similar to the one shown in FIG. 5 by the medium of the interface module 111, and then stores and manages the acquired FIU in a private information storage buffer 121.
- In addition, the REU creating
module 130 communicates, under the control of the DCU production control module 110, with the source character standard expression image storage section 151 and with the source pictorial/video image setting information storage section 154 after the FIU is secured and stored in the information storage buffer 121 by the FIU acquisition module 120. Accordingly, the REU creating module 130 extracts a source character representative expression image similar to the one shown in FIG. 6 (in this case, the source character representative expression image means an expression image of the source character that can represent the standard expression images of each source character) out of the standard expression images of the source character (refer to FIG. 3) having been stored in the source character standard expression image storage section 151, and then converts the source character representative expression image to match with the FIU, to create an REU similar to the one shown in FIG. 7.
- At this time, as shown in FIG. 8, the REU creating module 130 is constituted by an REU creating control section 131 that is in charge of overall control of the REU creating procedure, and other constituents that operate under the control of the REU creating control section 131, i.e., an FIU loading section 135, a source character representative expression image loading section 134, a salient point designation guide section 133, and an REU creating engine 137, each being closely combined with one another.
- Here, the FIU
loading section 135 communicates, under the control of the REU creating control section 131, with the FIU acquisition module 120 via an information exchange section 132 after the FIU is secured and stored (refer to FIG. 5) in the information storage buffer 121 by the FIU acquisition module 120, and loads the acquired FIU into a processing buffer 136.
- Meanwhile, the source character representative expression image loading section 134 communicates, under the control of the REU creating control section 131, with the source pictorial/video image setting information storage section 154 via the information exchange section 132 after the FIU loading procedure is completed by the FIU loading section 135, to figure out the source character's representative expression image designation information (e.g., information that explains about the source character's representative expression image) having been stored. Later, the source character representative expression image loading section 134 communicates with the source character standard expression image storage section 151 via the information exchange section 132, to selectively extract a source character representative expression image (refer to FIG. 6) out of the standard expression images of the source character having been stored (refer to FIG. 3), and loads the extracted source character representative expression image into the processing buffer 136 (of course, the performance of the source character representative expression image loading section may precede the performance of the FIU loading section described earlier).
- Furthermore, the salient point designation guide section 133 communicates, under the control of the REU creating control section 131, with the production guide window operation module 180 after extracting the FIU that has been loaded into the processing buffer 136 by the FIU loading section 135, and displays the corresponding FIU through the production guide window 201 as shown in FIG. 9. In this manner, a video related company (or a user) may easily designate a number of salient points on the main parts of the FIU (eyes, nose, philtrum, etc.) through the production guide window 201.
- Upon the completion of the procedures in respective computation parts, the
REU creating engine 137, under the control of the REU creating control section 131, acquires, as illustrated in FIG. 10, "a difference degree (for example, a degree indicating the difference between two positions) between a position (e.g., mk) of the salient point appointed to the main part of the FIU 2 and a position (e.g., vk) of a vertex constituting a polygon mesh (PS) of the representative expression image 1".
- In this case, the REU creating engine 137 analyzes/acquires the difference between the two positions under the limited condition shown in Math FIG. 1 below, i.e., the limited condition that "there is little difference between the position mk of the salient point appointed to the main part of the FIU 2 and the position vk of the vertex of the representative expression image." Consequently, the REU creating engine 137 guides the subsequent procedures, i.e., the REU acquisition, the SEU acquisition, and the EDU acquisition, to progress more rapidly while minimizing deformation of the source character images (the source character representative expression image and the source character standard expression images).
- vk ≡ mk [Math FIG. 1]
- where, mk is a position of the kth salient point appointed to the main part of the FIU, and vk is a position of the kth vertex constituting a polygon mesh of the source character representative expression image.
representative expression image 1” is acquired through the above procedure, theREU creating engine 137 calculates MathFIG. 2 below based on the difference degree to obtain a summation of respective items, thereby progressing a process of estimating positions of vertexes, VI for example, that will constitute a polygon mesh PN of the REU 3. - At this time, the positions of vertexes, VI for example, that will constitute a polygon mesh PN of the REU 3 are estimated through a least square method as shown Math
FIG. 2 . Therefore, in the invention, the vertexes, VI for example, constituting the polygon mesh PS of the source characterrepresentative expression image 1 exhibits, within a minimum deformation range, a transition pattern that becomes optimally similar to features of the vertexes constituting a polygon mesh PT of the FIU 2 (i.e., a feature error of the two vertexes is minimized). As a result, in “a process of converting the positions vi of the vertexes constituting the polygon mesh PS of the source characterrepresentative expression image 1 into the positions VI of vertexes that will constitute a polygon mesh PN of the REU 3”, which will be progressed later, the source characterrepresentative expression image 1 can be eventually changed into the REU 3 having optimally reflected the feature of the FIU 2 401, while minimizing the deformation of the source characterrepresentative expression image 1. -
{V1, ..., Vn} = argmin [ ws·Σi Σj∈adj(i) ∥Ti − Tj∥F² + wm·Σi ∥Ti − I∥F² + wd·Σi ∥Vi − ci∥² ] [Math FIG. 2] - where, VI is the position of the Ith vertex that will constitute a polygon mesh of the REU, Ti is the transform matrix of the ith triangle constituting a polygon mesh of the source character representative expression image, Tj is the transform matrix of the jth triangle neighboring Ti, I is an ideal transform matrix that is almost the same as Ti, vi is the position of the ith vertex constituting a polygon mesh of the source character representative expression image, ci is the position of the vertex constituting a polygon mesh of the FIU that corresponds most closely to vi, and the matrix norm ∥ ∥F is the Frobenius norm.
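Assuming Math FIG. 2 takes the standard least-squares form implied by these definitions (a smoothness item over neighboring transforms, an identity item against I, and a closest-point item toward ci), the estimation can be sketched as a linear least-squares solve. The 2D restriction, the two-triangle mesh, the translation-only target, and all names below are our simplifications, not the patent's full face-mesh formulation:

```python
import numpy as np

def tri_transform_rows(rest, tri, n_verts):
    """Linear map from the flattened vertex vector V to the four entries of the
    2x2 affine transform T of triangle (a, b, c), where T satisfies
    T @ [r_b - r_a | r_c - r_a] = [V_b - V_a | V_c - V_a]."""
    a, b, c = tri
    R = np.column_stack([rest[b] - rest[a], rest[c] - rest[a]])  # rest-pose edge matrix
    Rinv = np.linalg.inv(R)
    rows = np.zeros((4, 2 * n_verts))
    for p in range(2):              # row index of T
        for q in range(2):          # column index of T
            r = 2 * p + q
            rows[r, 2 * b + p] += Rinv[0, q]
            rows[r, 2 * c + p] += Rinv[1, q]
            rows[r, 2 * a + p] -= Rinv[0, q] + Rinv[1, q]
    return rows

ws, wm, wd = 0.01, 0.1, 0.2                                  # example weight factors from the text
rest = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])    # representative-image vertices (v_i)
tris = [(0, 1, 2), (0, 2, 3)]                                # two triangles of the polygon mesh PS
neighbors = [(0, 1)]                                         # neighboring triangle pairs (Ti, Tj)
targets = rest + np.array([1.0, 2.0])                        # c_i: nearest FIU correspondences (a pure shift)

n = len(rest)
T_rows = [tri_transform_rows(rest, t, n) for t in tris]
A, b = [], []
for i, j in neighbors:                                       # item 1: minimize ||Ti - Tj||_F^2
    A.append(np.sqrt(ws) * (T_rows[i] - T_rows[j])); b.append(np.zeros(4))
for rows in T_rows:                                          # item 2: minimize ||Ti - I||_F^2
    A.append(np.sqrt(wm) * rows); b.append(np.sqrt(wm) * np.eye(2).ravel())
A.append(np.sqrt(wd) * np.eye(2 * n))                        # item 3: minimize ||Vi - ci||^2
b.append(np.sqrt(wd) * targets.ravel())

V_new = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0].reshape(n, 2)
```

Because the target here is a pure translation, all three items can reach zero simultaneously and V_new coincides with the targets; on a real face mesh the items trade off against one another according to ws, wm, and wd.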
- At this time, the item ws·Σi Σj∈adj(i) ∥Ti − Tj∥F² included in Math FIG. 2 estimates a VI value such that the transform matrix Ti of the ith triangle P1 constituting the polygon mesh PS of the source character representative expression image 1 is transformed while keeping a value as similar as possible to the transform matrix Tj of the jth triangle P2 neighboring Ti, in the situation where vi is converted into VI to fully form the polygon mesh PN of the REU 3 (refer to FIG. 10). Needless to say, when VI is finally estimated by the calculation of Math FIG. 2 including this "adjusting item that minimizes the transform matrix difference between neighboring polygon meshes of the source character" and vi is transformed into VI to constitute the polygon mesh PN of the REU 3, the REU 3 can maintain an optimized, very smooth shape owing to the increased similarity between neighboring polygon meshes. - In addition, the item
wm·Σi ∥Ti − I∥F² included in Math FIG. 2 is an item that estimates a VI value such that the transform matrix Ti of the ith triangle P1 constituting the polygon mesh PS of the source character representative expression image 1 is transformed while keeping a value as close as possible to the ideal transform matrix I that is almost the same as Ti, in the situation where vi is converted into VI to fully form the polygon mesh PN of the REU 3 (refer to FIG. 10). Needless to say, when VI is finally estimated by the calculation of Math FIG. 2 including this "adjusting item that minimizes the degree of deformation of the polygon mesh PS of the source character" and vi is transformed into VI to constitute the polygon mesh PN of the REU 3, the vertexes constituting the polygon mesh PS of the source character representative expression image 1 can naturally form the REU 3 that optimally reflects the features of the FIU 2, even within the minimum deformation range. - Furthermore, the item
wd·Σi ∥Vi − ci∥² included in Math FIG. 2 is an item that estimates a VI value such that the position vi of the ith vertex constituting the polygon mesh PS of the source character representative expression image 1 is transformed while minimizing, as far as possible, its difference from the position ci of the vertex constituting the polygon mesh PT of the FIU 2 that corresponds most closely to vi, in the situation where vi is converted into VI to form the polygon mesh PN of the REU 3 in earnest (refer to FIG. 10). Needless to say, when VI is finally estimated by the calculation of Math FIG. 2 including this "adjusting item that makes the vertex positions of the source character representative expression image as close as possible to the vertex positions of the FIU 2" and vi is transformed into VI to constitute the polygon mesh of the REU 3, the REU 3 can naturally form a shape that is closest to the features of the FIU. - Here, ws, wm, and wd included in the respective items of Math FIG. 2 are the weight factors of the corresponding items. The REU creating engine 137 sets the weight factors of the respective items differently depending on conditions (for example, ws: 0.01, wm: 0.1, and wd: 0.2) when calculating Math FIG. 2, thereby enabling the REU 3, which will be finally completed, to match the shape of the FIU 2 more efficiently. - When the REU 3 is created through the procedures described above, the REU creating
control section 131 communicates with the DCU production control module 110 by the medium of the information exchange section 132, to guide the REU 3 to be stably stored and managed in the processing buffer 112 of the DCU production control module 110. - In the meantime, the
SEU creating module 140, which is controlled by the DCU production control module 110 similarly to the FIU acquisition module 120 and the REU creating module 130, communicates with the REU creating module 130, the source character standard expression image storage section 151 and the like after the REU creating module 130 stores the REU 3 in the processing buffer of the DCU production control module 110, and converts the source character standard expression images (refer to FIG. 3) based on the transform features between the source character representative expression image 1 and the REU 3, to thereby create plural SEUs as shown in FIG. 11. - At this time, as shown in
FIG. 12, the SEU creating module 140 is constituted by an SEU creating control section 141 that is in charge of overall control of the SEU creating procedure, and other constituents that operate under the control of the SEU creating control section 141, i.e., an REU conversion characteristic acquisition section 143, a source character standard expression image loading section 144, and an SEU creating engine 146, each being closely combined with one another. - Here, the REU conversion
characteristic acquisition section 143 communicates, under the control of the SEU creating control section 141, with the REU creating module 130, the processing buffer 112 on the side of the DCU production control module 110, etc., via the information exchange section 142 after the REU creating module 130 completes the creation of the REU 3, to acquire the position conversion characteristics observed when the source character representative expression image 1 is converted into the REU 3 (e.g., the position conversion characteristics when vi is transformed into VI to constitute the polygon mesh PN of the REU 3). The acquisition result data is loaded and stored into the processing buffer 145 by the REU conversion characteristic acquisition section 143. - In addition, the source character standard expression
image loading section 144 communicates, under the control of the SEU creating control section 141, with the source character standard expression image storage section 151 by the medium of the information exchange section 142 after the REU conversion characteristic acquisition section 143 completes loading of the position conversion characteristic data into the processing buffer 145, to extract the source character standard expression images (refer to FIG. 3) having been stored therein. These extracted source character standard expression images are loaded into the processing buffer 145 by the source character standard expression image loading section 144 (of course, the performance of the source character standard expression image loading section may precede the performance of the REU conversion characteristic acquisition section). - Upon the completion of the procedures in respective computation parts, the
SEU creating engine 146, which is controlled by the SEU creating control section 141, communicates with the processing buffer 145 to figure out, as shown in FIG. 13, the position conversion characteristics of the vertexes constituting the polygon mesh PS of the source character representative expression image 1 (e.g., the characteristics when vi is transformed into vertexes, VI for example, constituting the polygon mesh PN of the REU 3), and converts the positions of the vertexes, vn for example, that constitute the polygon mesh PS of the respective source character standard expression images 4 into VN, for example, to constitute a new polygon mesh PNN. As a result, plural SEUs 5 are created that maintain the same expressions (images including a crying face, an angry face, an astonished face, a laughing face, an image having a mouth shape in pronouncing a given phonetic symbol, and so on) as the source character standard expression images 4 while keeping the basic facial features of the REU 3 (refer to FIG. 11). -
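Reduced to its simplest possible form, reusing the acquired conversion characteristic can be sketched as follows; here the characteristic is modeled as per-vertex displacements, which is our assumption rather than the patent's full polygon-mesh transform, and all arrays are illustrative stand-ins:

```python
import numpy as np

# illustrative vertex arrays standing in for polygon-mesh vertices
representative = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # source representative image (v_i)
reu            = np.array([[0.1, 0.0], [1.1, 0.1], [0.5, 1.2]])  # REU vertices (V_I)
standard_exprs = [representative + np.array([0.0, -0.2]),        # e.g. a "crying" variant
                  representative + np.array([0.0,  0.2])]        # e.g. an "astonished" variant

# the position conversion characteristic acquired once (v_i -> V_I)
delta = reu - representative

# apply the same characteristic to every standard expression image (v_n -> V_N)
seus = [expr + delta for expr in standard_exprs]
```

Here delta plays the role of the figured-out position conversion characteristic; acquiring it once and reusing it for every standard expression image is what the text credits for the accelerated SEU creation.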
SEU creating engine 146 progresses the creation procedure of plural SEUs 5 while having figured out “the position conversion characteristics when the source characterrepresentative expression image 1 is converted into the REU 3” in advance, the processing speed of the creation of plural SEUs 5 can be accelerated to an optimal state. - When the creation of plural SEUs 5 is completed through the procedure described above, the SEU creating
control section 141 communicates with the DCU production control module 110 by the medium of the information exchange section 142, to guide the SEUs 5 to be stably stored and managed in the processing buffer 112 of the DCU production control module. - Meanwhile, when the performance of the
SEU creating module 140 is completed, the EDU creating module 170 communicates, under the control of the DCU production control module 110, with the SEU creating module 140, the source pictorial/video image setting information storage section 154 and the like, to selectively combine/transform the SEUs 5 on the basis of the conversion features exhibited when the proximate expression image that is most proximate to each reproduced expression pictorial/video image of the source character (refer to FIG. 4) among the standard expression images of the source character is converted into each reproduced expression image, to thereby create an EDU similar to the one shown in FIG. 14. - At this time, as shown in
FIG. 15, the EDU creating module 170 is constituted by an EDU creating control section 171 that is in charge of overall control of the EDU creating procedure, and other constituents that operate under the control of the EDU creating control section 171, i.e., an SEU loading section 174, a source character expression pictorial/video image creating characteristic acquisition section 173, and an EDU creating engine 176, each being closely combined with one another. - Here, the
SEU loading section 174, which is controlled by the EDU creating control section 171, communicates via the information exchange section 172 with the processing buffer 112 of the DCU production control module 110, where the SEUs 5 had been stored by the SEU creating module 140, so as to extract the corresponding SEUs 5, and loads the extracted SEUs 5 into the processing buffer 175. - In addition, the source character expression pictorial/video image creating
characteristic acquisition section 173, which is controlled by the EDU creating control section 171, communicates with the source pictorial/video image setting information storage section 154 by the medium of the information exchange section 172 after the SEU loading section completes loading of the SEUs 5 into the processing buffer 175, to check the information having been stored therein, i.e., (as shown in FIG. 16) information about which proximate expression images 4a, 4b, and 4c among the standard expression images 4 of the source character are most proximate to each reproduced expression image 6, 6a, 6b, and 6c (refer to FIG. 4) when a specific source character appearing in the source pictorial/video image contents is reproduced while making changes in facial expressions as time passes. Moreover, the source character expression pictorial/video image creating characteristic acquisition section 173 acquires "the mixture weight features when the proximate expression images 4a, 4b, and 4c are mixed with other standard expression images 4 to transform into the reproduced expression pictorial/video images 6a, 6b, and 6c every moment" (of course, the performance of the source character expression pictorial/video image creating characteristic acquisition section may precede the performance of the SEU loading section). - Upon the completion of the procedures in respective computation parts, the
EDU creating engine 176, which is controlled by the EDU creating control section 171, estimates the proximate SEUs 5a, 5b, and 5c that are most proximate to the EDUs 7, 7a, 7b, and 7c among the SEUs 5, when it is assumed, as shown in FIG. 17, that a user design character appearing in the DCU is reproduced with changes in facial expressions as time passes, on the basis of the information on the proximate expression images 4a, 4b, and 4c having been acquired by the source character expression pictorial/video image creating characteristic acquisition section 173. Moreover, the EDU creating engine 176 calculates Math FIG. 3 below according to the mixture weight features, combines the proximate SEUs 5a, 5b, and 5c with other SEUs 5 every moment, and creates the EDU 7 depending on the reproduction time of the DCU (refer to FIG. 14). -
F(t) = Σi wi(t)·Mi [Math FIG. 3] - where, F(t) is the EDU varying with the reproduction time flow of the DCU, wi(t) is a function of the mixture weights with respect to time, and Mi is the ith proximate SEU.
FIG. 3 aforementioned, theEDU creating engine 171 can accelerate the processing speed of the creation of the EDU 7 to an optimal state because SEUs 5 obtained based on the standard expression image 4 of the source character is utilized as a basis of the EDU formation and “the mixture weight features at the time the 4 a, 4 b, and 4 c are mixed with other standard expression images 4 for conversion of the reproduced expression pictorial/video images 6 every moment” are completely employed and taken advantage of as the mixture weight features for the formation of EDU 7.proximate expression images - When the EDU 7 is created through the above procedure, the EDU creating
control section 171 communicates with the DCU production control module 110 via the information exchange section 172, to guide the corresponding EDU 7 to be stored and managed in the processing buffer 112 of the DCU production control module 110 in a stable manner. - In the meantime, when the
EDU creating module 170 completes the EDU creation and storage, the DCU creating module 160 communicates, under the control of the DCU production control module 110, with the source pictorial/video image background content storage section 153 and combines the EDU 7 with the background of the source pictorial/video image content, to create a DCU in which the face image of the source character is changed to the FIU 2, as shown in FIG. 18. - At this time, as shown in
FIG. 19, the DCU creating module 160 is constituted by a DCU creating control section 161 that is in charge of overall control of the DCU creating procedure, and other constituents that operate under the control of the DCU creating control section 161, i.e., a background content loading section 163, an EDU loading section 164, and a DCU creating engine 166, each being closely combined with one another. - Here, the background
content loading section 163 communicates, under the control of the DCU creating control section 161, with the source pictorial/video image background content storage section 153 by the medium of the information exchange section 162 after the EDU creating module 170 stores the EDU 7 in the processing buffer 112 of the DCU production control module 110, and extracts the background of the source pictorial/video image content having been stored (e.g., a background pictorial/video image layer, a source character body pictorial/video image layer, other source character layers, a source character accessories/clothes layer, etc.). The background content loading section 163 loads the extracted background data into the processing buffer 165. - In addition, the
EDU loading section 164 communicates, under the control of the DCU creating control section 161, with the processing buffer 112 of the DCU production control module 110 by the medium of the information exchange section 162 after the EDU creating module 170 stores the EDU 7 in the processing buffer 112, to extract the corresponding EDU 7. The EDU loading section 164 loads the extracted EDU 7 into the processing buffer 165 (of course, the performance of the EDU loading section may precede the performance of the background content loading section). - Upon the completion of the procedures in respective computation parts described earlier, the
DCU creating engine 166 communicates, under the control of the DCU creating control section 161, with the processing buffer 165 and synthesizes the EDU 7 with the source character face image f of the background data B1, B2, and B3, according to the reproduction flow of the source pictorial/video image contents, as shown in FIG. 20. As a result of the synthesis, a DCU 8 in which the source character face image f is newly replaced with the FIU 2 pattern is created (refer to FIG. 18). - When the
DCU 8 is created through the procedure described above, the DCU creating control section 161 communicates with the DCU production control module 110 by the medium of the information exchange section 162, to guide the corresponding DCU 8 to be stably stored and managed in the processing buffer 112 of the DCU production control module. - Accordingly, when the
DCU 8 in which the source character face image is newly replaced with the FIU 2 pattern is secured through the pictorial/video image composing pipeline, a video related company (for example, a producer, a distributor, a sales agency (provider), etc.) becomes capable of producing/manufacturing/marketing a video on demand (VOD) content that reflects the individual desires of a user, so that it can satisfy user needs in changing the face image of a specific character appearing in pictorial/video image contents into the face image of his/her favorite person (for example, his/her own face image, the face image of his/her acquaintance, the face image of a specific celebrity, the face image of a specific politician, and so on). - While the invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made thereto without departing from the spirit and scope of the invention as defined by the appended claims.
- The invention relates to a system for composing pictorial/video image contents in which the FIU is reflected, and more particularly, to a system that provides a series of pictorial/video image composing pipelines capable of changing the face of a specific source character appearing in pictorial/video image contents to an FIU pattern, and that guides a video related company (for example, a producer, a distributor, a sales agency (provider), etc.) to establish a base infrastructure for producing/manufacturing/marketing a video on demand (VOD) content that reflects the individual desires of a user, so that it can satisfy user needs in changing the face image of a specific character appearing in pictorial/video image contents into the face image of his/her favorite person (for example, the user's own face image, the face image of an acquaintance, the face image of a specific celebrity, the face image of a specific politician, and so on).
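The final DCU synthesis step described earlier, where the EDU is re-layered over the extracted background content, can be pictured with a schematic sketch; the array shapes, the rectangular face region, and the negative-value transparency convention are all our illustrative assumptions:

```python
import numpy as np

def compose_dcu_frame(background_layers, edu_face, face_region):
    """Stack the source background layers in order (background, body,
    accessories, ...), then paste the EDU face into the source character's
    face region f (a simple rectangular stand-in for per-frame placement)."""
    frame = background_layers[0].copy()
    for layer in background_layers[1:]:
        mask = layer >= 0                      # negative pixels mark "transparent" here
        frame[mask] = layer[mask]
    y0, y1, x0, x1 = face_region
    frame[y0:y1, x0:x1] = edu_face             # replace the face region f with the EDU
    return frame

bg   = np.zeros((4, 4))                        # background pictorial layer
body = np.full((4, 4), -1.0); body[2:, :] = 2.0  # body layer (-1 = transparent)
face = np.full((2, 2), 9.0)                    # EDU face pixels
frame = compose_dcu_frame([bg, body], face, (0, 2, 1, 3))
```

Repeating this per frame, following the reproduction flow of the source contents, yields the DCU in which the source character's face is replaced with the FIU pattern.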
Claims (7)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2007-0037047 | 2007-04-16 | ||
| KR1020070037047A KR100874962B1 (en) | 2007-04-16 | 2007-04-16 | Video Contents Production System Reflecting Custom Face Image |
| PCT/KR2007/003496 WO2008126964A1 (en) | 2007-04-16 | 2007-07-19 | The system which compose a pictorial/video image contents where the face image which the user designates is reflected |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20100141679A1 true US20100141679A1 (en) | 2010-06-10 |
| US8106925B2 US8106925B2 (en) | 2012-01-31 |
Family
ID=39864046
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/093,907 Expired - Fee Related US8106925B2 (en) | 2007-04-16 | 2007-07-19 | System to compose pictorial/video image contents with a face image designated by the user |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US8106925B2 (en) |
| JP (1) | JP4665147B2 (en) |
| KR (1) | KR100874962B1 (en) |
| TW (1) | TWI358038B (en) |
| WO (1) | WO2008126964A1 (en) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100209073A1 (en) * | 2008-09-18 | 2010-08-19 | Dennis Fountaine | Interactive Entertainment System for Recording Performance |
| US20190147841A1 (en) * | 2017-11-13 | 2019-05-16 | Facebook, Inc. | Methods and systems for displaying a karaoke interface |
| US10599916B2 (en) | 2017-11-13 | 2020-03-24 | Facebook, Inc. | Methods and systems for playing musical elements based on a tracked face or facial feature |
| US10783716B2 (en) * | 2016-03-02 | 2020-09-22 | Adobe Inc. | Three dimensional facial expression generation |
| US10810779B2 (en) | 2017-12-07 | 2020-10-20 | Facebook, Inc. | Methods and systems for identifying target images for a media effect |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20190108936A (en) | 2018-03-16 | 2019-09-25 | 주식회사 엘케이 | Cable Hanger and Hanger Roller |
| WO2020037679A1 (en) * | 2018-08-24 | 2020-02-27 | 太平洋未来科技(深圳)有限公司 | Video processing method and apparatus, and electronic device |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5611037A (en) * | 1994-03-22 | 1997-03-11 | Casio Computer Co., Ltd. | Method and apparatus for generating image |
| US20020087329A1 (en) * | 2000-09-21 | 2002-07-04 | The Regents Of The University Of California | Visual display methods for in computer-animated speech |
| US20080039163A1 (en) * | 2006-06-29 | 2008-02-14 | Nokia Corporation | System for providing a personalized comic strip |
| US20090144173A1 (en) * | 2004-12-27 | 2009-06-04 | Yeong-Il Mo | Method for converting 2d image into pseudo 3d image and user-adapted total coordination method in use artificial intelligence, and service business method thereof |
Family Cites Families (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH10126687A (en) * | 1996-10-16 | 1998-05-15 | Matsushita Electric Ind Co Ltd | Substitution editing system |
| JPH10240908A (en) * | 1997-02-27 | 1998-09-11 | Hitachi Ltd | Video composition method |
| JP2000287824A (en) * | 1999-04-02 | 2000-10-17 | Koji Nakamura | Imaging device for ceremonial occasions and its imaging software |
| JP2000312336A (en) * | 1999-04-27 | 2000-11-07 | Koji Nakamura | Video television device |
| KR20010090308A (en) * | 2000-03-24 | 2001-10-18 | 박선은 | Method and system for substituting an actor's facial image with a client's facial image |
| JP2002232783A (en) * | 2001-02-06 | 2002-08-16 | Sony Corp | Image processing apparatus, image processing method, and program storage medium |
| KR100422470B1 (en) * | 2001-02-15 | 2004-03-11 | 비쥬텍쓰리디(주) | Method and apparatus for replacing a model face of moving image |
| JP2009515375A (en) * | 2005-09-16 | 2009-04-09 | フリクサー,インコーポレーテッド | Operation to personalize video |
| KR20060115700A (en) * | 2006-10-20 | 2006-11-09 | 주식회사 제스틴 | Flash language learning system for children with easy face changes |
-
2007
- 2007-04-16 KR KR1020070037047A patent/KR100874962B1/en not_active Expired - Fee Related
- 2007-07-19 US US12/093,907 patent/US8106925B2/en not_active Expired - Fee Related
- 2007-07-19 WO PCT/KR2007/003496 patent/WO2008126964A1/en not_active Ceased
- 2007-11-21 TW TW096144007A patent/TWI358038B/en not_active IP Right Cessation
- 2007-11-27 JP JP2007305803A patent/JP4665147B2/en not_active Expired - Fee Related
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5611037A (en) * | 1994-03-22 | 1997-03-11 | Casio Computer Co., Ltd. | Method and apparatus for generating image |
| US20020087329A1 (en) * | 2000-09-21 | 2002-07-04 | The Regents Of The University Of California | Visual display methods for in computer-animated speech |
| US20090144173A1 (en) * | 2004-12-27 | 2009-06-04 | Yeong-Il Mo | Method for converting 2d image into pseudo 3d image and user-adapted total coordination method in use artificial intelligence, and service business method thereof |
| US20080039163A1 (en) * | 2006-06-29 | 2008-02-14 | Nokia Corporation | System for providing a personalized comic strip |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100209073A1 (en) * | 2008-09-18 | 2010-08-19 | Dennis Fountaine | Interactive Entertainment System for Recording Performance |
| US20100209069A1 (en) * | 2008-09-18 | 2010-08-19 | Dennis Fountaine | System and Method for Pre-Engineering Video Clips |
| US20100211876A1 (en) * | 2008-09-18 | 2010-08-19 | Dennis Fountaine | System and Method for Casting Call |
| US10783716B2 (en) * | 2016-03-02 | 2020-09-22 | Adobe Inc. | Three dimensional facial expression generation |
| US20190147841A1 (en) * | 2017-11-13 | 2019-05-16 | Facebook, Inc. | Methods and systems for displaying a karaoke interface |
| US10599916B2 (en) | 2017-11-13 | 2020-03-24 | Facebook, Inc. | Methods and systems for playing musical elements based on a tracked face or facial feature |
| US10810779B2 (en) | 2017-12-07 | 2020-10-20 | Facebook, Inc. | Methods and systems for identifying target images for a media effect |
Also Published As
| Publication number | Publication date |
|---|---|
| TW200842732A (en) | 2008-11-01 |
| US8106925B2 (en) | 2012-01-31 |
| JP4665147B2 (en) | 2011-04-06 |
| KR20080093291A (en) | 2008-10-21 |
| TWI358038B (en) | 2012-02-11 |
| WO2008126964A1 (en) | 2008-10-23 |
| KR100874962B1 (en) | 2008-12-19 |
| JP2008271495A (en) | 2008-11-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8106925B2 (en) | System to compose pictorial/video image contents with a face image designated by the user | |
| Ghorbani et al. | ZeroEGGS: Zero‐shot Example‐based Gesture Generation from Speech | |
| US11144597B2 (en) | Computer generated emulation of a subject | |
| Haque et al. | Facexhubert: Text-less speech-driven e (x) pressive 3d facial animation synthesis using self-supervised speech representation learning | |
| US9959657B2 (en) | Computer generated head | |
| KR102058783B1 (en) | Method and apparatus for generating adaptlve song lip sync animation based on text | |
| KR102119868B1 (en) | System and method for producting promotional media contents | |
| US20090132371A1 (en) | Systems and methods for interactive advertising using personalized head models | |
| US20110064388A1 (en) | User Customized Animated Video and Method For Making the Same | |
| WO2018049979A1 (en) | Animation synthesis method and device | |
| US20140210831A1 (en) | Computer generated head | |
| KR20210114521A (en) | Real-time generation of speech animations | |
| Pan et al. | Emotional voice puppetry | |
| Khan | Role of generative AI for developing personalized content based websites | |
| Zhang et al. | Towards ai-driven sign language generation with non-manual markers | |
| Abootorabi et al. | Generative AI for character animation: A comprehensive survey of techniques, applications, and future directions | |
| Ostermann et al. | Talking heads and synthetic speech: An architecture for supporting electronic commerce | |
| Niraula | Shifting rhetoric of Teej songs in the context of consumer culture in Nepal | |
| CN117636106A (en) | A fashion product image generation method based on attention generative adversarial network | |
| Wahba et al. | Creating a digital human twin: Cloning voice, face, and attitude | |
| Bozkurt | Personalized speech-driven expressive 3d facial animation synthesis with style control | |
| Kong | A study on optimizing deep learning models for creative generation of animated new media advertisements: an application based on improved generative adversarial networks (GANs) and variational autocoders (VAEs) | |
| Vilchis | School of Engineering and Sciences | |
| Sundararaman et al. | Seeing Voices: Generating A-Roll Video from Audio with Mirage | |
| Karmakar et al. | Unfolding a Hidden Risk of Direct 3D Software Usage for Animation Character Design |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: FXGEAR, INC.,KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LEE, CHANG HWAN;REEL/FRAME:023021/0455 Effective date: 20080625 Owner name: FXGEAR, INC., KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LEE, CHANG HWAN;REEL/FRAME:023021/0455 Effective date: 20080625 |
|
| ZAAA | Notice of allowance and fees due |
Free format text: ORIGINAL CODE: NOA |
|
| ZAAB | Notice of allowance mailed |
Free format text: ORIGINAL CODE: MN/=. |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| FPAY | Fee payment |
Year of fee payment: 4 |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
| FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
| FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20240131 |