NL1043015B1 - Device and method for content generation - Google Patents
- Publication number
- NL1043015B1 (application NL1043015A)
- Authority
- NL
- Netherlands
- Prior art keywords
- item
- software application
- processor
- user
- data structure
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/63—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/65—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
- A63F13/655—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A computing device and a method for creating custom content data for use in an interactive software application, said software application being executed by a processor. The software application comprises a customizable data structure which is customized with the custom content data, and the processor generates an element of the custom content data, whereby the generation of the element is derived from input resulting from capturing an item of the physical world. The processor is arranged for capturing the item from a peripheral device, processing the captured item, storing the processed item in the memory, converting the stored item into the element; and executing the software application to customize the customizable data structure by incorporating the element in the customizable data structure.
Description
DEVICE AND METHOD FOR CONTENT GENERATION
TECHNICAL FIELD
The present invention relates generally to customized content data in a computing device. More particularly, the present invention relates to a method for converting captured items from the physical world into digital data in interactive software applications.
BACKGROUND
Software applications, such as interactive games, often offer some kind of customization. A user may for example choose an appearance (a so-called “skin”) of an avatar or adjust the playing field according to his or her wishes (hereinafter, wherever the words “she” or “her” are used, they should also be interpreted as “he” or “his”, without prejudice to any gender). These adjustments are usually referred to as user-generated content. In this way, a user can express her creativity by generating her own content. Additionally, by adding network services to the interactive application, the user can easily share her creation and retrieve other users’ creations. Some solutions have been proposed to facilitate the creative process by using elements from a captured image in a software application.
One such solution is disclosed in United States Patent US 9,908,050 B2, by Disney Enterprises, Inc., which is summarized as a system and method for image-recognized content creation. The method comprises capturing an image from a camera, analyzing the image to recognize a plurality of elements, converting the plurality of elements into custom content data, and executing an interactive application using the custom content data. In one embodiment, the interactive application may comprise a racing video game, and the image may comprise a racetrack layout for use in the racing video game. Thus, a user can provide a racetrack by drawing simple lines and curves on a piece of paper, which are detected by the camera and converted into valid data assets for the video game.
Current solutions are limited in their use of custom content data. The quality of the custom content data largely depends on the quality of the created pictures, drawings, etc. Additionally, capturing e.g. images which are randomly selected or drawn by a user requires quite powerful image recognition software and considerable processing power. In practice this means that identifying and translating elements of an image into relevant input data for a software application often results in faulty interpretation. Children, for example, may select a totally irrelevant image or draw in an incomprehensible manner. There is a need for a more reliable and user-friendly way of entering image data into a software application, which provides the user with means to control an interactive software application without compromising quality in looks and interaction.
DISCLOSURE OF INVENTION
It is an object of the present invention to overcome the drawbacks of current solutions. It is a further object to provide a low-threshold manner for a user to express her creativity in generating content for a software application. In particular, it is an object of the invention to provide even inexperienced users and/or young children with a means to generate this content, without compromising the object of creative expression or the quality of visuals and other media applied in the software application. It is yet a further object of the present invention to empower the user to determine the level, scope and/or difficulty of interaction with the software application. It is yet a further object of the invention to provide control of an interactive software application, such as a video game, a serious game, or simulation software incorporating these objects.
The object is realized by a computing device and a method for creating custom content data for use in an interactive software application, said software application being executed by a processor. The software application comprises a customizable data structure which is customized with the custom content data, and the processor generates an element of the custom content data, whereby the generation of the element is derived from input resulting from capturing an item of the physical world. The processor is arranged for capturing the item from a peripheral device, processing the captured item, storing the processed item in the memory, converting the stored item into the element; and executing the software application to customize the customizable data structure by incorporating the element in the customizable data structure.
The invention is summarized in the following clauses.
1. A computing device arranged for creating custom content data for use in an interactive software application, said software application being executed by a processor in the computing device, the computing device comprising a memory, and the software application comprising a plurality of available data structures, whereby the software application comprises a customizable data structure which is customized with the custom content data, wherein the processor is arranged for generating an element of the custom content data, the generation of said element being derived from input resulting from capturing an item of the physical world, the processor further arranged for:
- capturing the item from a peripheral device, said peripheral device comprising an external device connected to the computing device, or a device integrated in the computing device;
- processing the captured item;
- storing the processed item in the memory;
- converting the stored item into the element; and
- executing the software application to customize the customizable data structure by incorporating the element in the customizable data structure.
2. The computing device according to clause 1, characterized in that the item of the physical world comprises any one of the group comprising:
- an image, such as a picture, an illustration created by a person or a visual output of a computer screen;
- a sound, such as an audible output of a speaker, a sound in nature, a piece of music, or a spoken word by a person;
- a location;
- a movement;
- an orientation;
- a physical manipulation, such as pressing, stretching or bending of the device.
3. The computing device according to any one of the preceding clauses, characterized in that the peripheral device comprises any one of the group comprising:
- a camera arranged for capturing an image;
- a microphone arranged for capturing a sound;
- a GPS device arranged for determining a position;
- an accelerometer arranged for determining acceleration, change of direction and/or deceleration;
- a tactile input device arranged for determining a tactile input.
4. The computing device according to any one of the preceding clauses, characterized in that the processor is arranged for converting the stored item into the element, by analyzing the semantics of the stored item, and based on the analyzed semantics, selecting a related element, a series of related elements and/or a sequence of related elements from a set of elements stored in the memory, and incorporating said respectively related element, series of related elements and/or sequence of related elements, instead of the stored item.
5. The computing device according to any one of the preceding clauses, characterized in that the processor is arranged for converting the stored item into the element by analyzing the semantics of the stored item, and based on the analyzed semantics changing the syntactics of the customizable data structure.
6. A method for creating custom content data for use in an interactive software application, said software application being executed by a processor in a computing device, the computing device comprising a memory, and the software application comprising a plurality of data structures, whereby the plurality of data structures comprises a customizable data structure which is customized with the custom content data, wherein customization comprises generating an element of the custom content data, the generation of said element being derived from input resulting from capturing an item of the physical world, the method comprising the steps of:
- a peripheral device capturing the item, said peripheral device comprising an external device connected to the computing device, or a device integrated in the computing device;
- the processor processing the captured item;
- the processor storing the processed item in the memory;
- the processor converting the stored item into the element; and
- executing, by the processor, the software application to customize the customizable data structure by incorporating the element in the customizable data structure.
7. The method according to clause 6, characterized in that the item of the physical world comprises any one of the group comprising:
- an image, such as a picture, an illustration created by a person or a visual output of a computer screen;
- a sound, such as an audible output of a speaker, a piece of music, or a spoken word by a person;
- a location, a movement or an orientation;
- a physical manipulation, such as pressing, stretching or bending.
8. The method according to clause 6 or 7, characterized in that the peripheral device comprises any one of the group comprising:
- a camera for capturing an image;
- a microphone for capturing a sound;
- a GPS device for determining a position;
- an accelerometer for determining acceleration, change of direction and/or deceleration;
- a tactile input device for determining a tactile input.
9. The method according to any one of the clauses 6-8, characterized in that the step of converting the stored item into the element comprises analyzing the semantics of the stored item, and based on the analyzed semantics, select a related element, a series of related elements or a sequence of related elements from a set of elements stored in the memory, and incorporating said respectively related element, series of related elements, or sequence of related elements, as replacement for the stored item.
10. The method according to any one of the clauses 6-9, characterized in that the step of converting the stored item into the element comprises analyzing the semantics of the stored item, and based on the analyzed semantics changing the syntactics of the customizable data structure.
11. The method according to any one of the clauses 6-10, characterized in that, before the step of a peripheral device capturing the item, the method comprises the step of creating a collection of one or more illustrations on a paper sheet by a user of the software application by creating an illustration in an area of one or more areas on the paper sheet, said one or more areas being indicated by one or more markers on the paper sheet.
12. The method according to any one of the clauses 6-11, characterized in that the step of a peripheral device capturing the item comprises the steps of:
- a camera connected to the computer identifying markers of the one or more markers, indicative of the outer borders of the collection;
- the camera capturing an image of the collection within said indicated outer borders;
- identifying one or more areas in the captured image by identifying markers in the captured image indicative for the position, shape and/or size of the area.
13. The method according to clause 11 or 12, characterized in that the type and/or position of a marker on the paper sheet is predetermined and known to the software application.
14. The method according to clause 12 or 13, characterized in that the marker, indicative of the outer borders of the collection, comprises a fiducial marker.
15. The method according to any one of the clauses 12-14, characterized in that the marker, indicative for the position, shape and/or size of the area comprises an outline of uniquely identifiable size and/or shape.
16. The method according to any one of the clauses 11-15, characterized in that the one or more markers are organized in a lay-out, generated by the software application, said lay-out being selectable by a user.
17. The method according to any one of the clauses 15-16, characterized in that, when an outline is empty, a standard element is used instead of a created illustration, or a user-selectable element in the software application is used instead of a created illustration.
18. The method according to any one of the clauses 6-17, characterized in that the stored item is analyzed by the software application and split into multiple elements.
19. The method according to clause 18, characterized in that the multiple elements are animated by the software application.
20. A paper sheet for use in the method according to any one of the clauses 6-19, wherein the paper sheet comprises a lay-out of one or more markers, arranged for identification of areas on the paper sheet which are arranged for a user to create an illustration in, said markers comprising any one of the group of markers comprising:
- a marker indicating an outer corner of an area for the user to create the illustration in;
- an outline indicating the outer corners of an area for the user to create the illustration in, or to colorize;
- a fiducial marker.
BRIEF DESCRIPTION OF THE DRAWINGS
The figure shows an embodiment in accordance with the present invention. FIGURE 1 shows an example of a paper sheet with a printed lay-out.
DETAILED DESCRIPTION
The invention is now described by the following aspects and embodiments, with reference to the figures.
The invention proposes to convert items which are present in the physical world into digital elements for use in a computing device. A computing device may run a software application which is adapted to incorporate customized content data. The customization is created by a user of the application, by the application itself, or by a combination of both. The level of control of the customization may for example be dependent on the level of competence of the user in using the application, or may depend on the availability of relevant input. The input is provided by peripheral devices which are arranged for capturing various types of items in the physical world. Such an item may comprise for example a paper sheet with an illustration. The illustration may be created by a person, such as the user of the application, or may be a readily available image, which is for example printed from a computer. Other examples are printed pictures, or a photograph taken with a photo camera or with a camera of a smartphone on which the software application according to the invention may be running.
The example of an illustration on a paper sheet is further explained in the clauses above and in the description below.
For the sake of explaining the invented method, a typical type of software application comprises a software game (hereinafter referred to as game). Games have many variations in play, interaction and application area. A game may for example be designed purely for the sake of entertainment, but it may also be used for training personnel in e.g. industrial automation, or as a simulation of e.g. disasters, which gives insight into the behavior of people in various situations and may help to improve the competence of rescue workers. Hereinafter these variants are all referred to as “game”. Taking the example of training for emergency situations after, for example, a natural disaster, it is cumbersome to write a simulation in software which represents a realistic situation that is new and unexpected every time a rescue worker has to be trained. Rather than rewriting the software application for every emergency exercise, the present invention proposes to add elements to the game which can be created ad hoc by a user of the game. The user may for example be a trainer or a trainee, or any other supervisor. A user in a mountainous area may for example have a need for mountain-like elements in the game. In this case the user may take pictures of a real mountain, or she may draw a picture of the mountain. The image recognition software, which is currently already at a relatively high standard, is arranged for recognizing the picture of the mountain or the drawing of the mountain (both referred to as illustration), and the software application (the game) is arranged for incorporating mountainous elements in the game. There are a couple of options in how the illustration is processed. The invention proposes for example to include the illustration substantially unchanged in the game as a customized element. The illustration may for example be placed in a digitally created environment which in itself is made up, but the customized element placed in the environment, i.e. 
the illustration of a mountain, is realistic. Alternatively, the invention proposes not to include the illustration itself, but to first apply a recognition procedure, in this case image recognition, to the illustration and then to customize one or more elements in the game based on the resulting identification of the image. Therefore, the illustration itself is not presented one-on-one in the game, but other elements of the game may be customized according to predetermined rules (predetermined in the game), or rules controlled by a user. In the example of an illustration of a mountain, the rule of the game may comprise that fictitious mountains are created in the game. The game may be a video game, but a purely text-based game is also perfectly possible. When a game for handling an emergency after a natural disaster is for example based on a story with a main fixed plot or data structure, the challenging elements, which may make the rescue of victims more difficult for example, may comprise mountains, or the story may be set in a mountainous area. The input or trigger in this case is just an illustration of a mountain, or some line drawings which resemble a mountain.
Another example comprises an adventure game for a child, whose storyline involves a dragon. In one embodiment a child may create his own expression of a dragon on a piece of paper. An alternative would be that the child opens a drawing application supported by the game, where he can digitally create a dragon. In the case of a drawing on paper, the game supports that a picture or scan is taken of the illustrated dragon, or alternatively of a computer screen when the drawing is created in a drawing program or any third-party digital drawing tool. Subsequently the illustration may be used one-on-one in the game. Alternatively, in the case where the storyline of the game is for example still open to any setting or direction, the captured illustration of the dragon may be analyzed and identified as a dragon, upon which parts or the whole of the game may then be customized with the dragon as basis, or the game may be directed towards a story line including dragons, knights, castles, etcetera, based on a rule such as: when the illustration is identified as a dragon, then start up the dragon story. If the user were to capture an illustration of a car, the rule may comprise that a racing game is started when the illustration is identified as a car.
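The rule-based dispatch described above can be sketched as a simple lookup from a recognition label to a game mode. This is a minimal illustrative sketch, not the patent's implementation: the label strings, mode names, and function name are all assumptions, and a real system would obtain the label from an image recognition step.

```python
# Hypothetical dispatch table: a recognized illustration label selects a
# storyline/game mode. Labels and mode names are illustrative assumptions.
STORY_RULES = {
    "dragon": "dragon_adventure",   # identified dragon -> dragon story
    "car": "racing_game",           # identified car -> racing game starts
    "mountain": "mountain_rescue",  # identified mountain -> mountainous setting
}

def select_game_mode(recognized_label: str, default: str = "free_play") -> str:
    """Map an image-recognition label to a game mode, with a fallback mode."""
    return STORY_RULES.get(recognized_label.strip().lower(), default)
```

A label not covered by any rule falls back to a default mode, which mirrors the idea that the game remains playable when no customization trigger is recognized.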
Besides capturing illustrations, all kinds of real-world items may be captured. For example, sounds may be captured by a microphone, or a location may be captured by a GPS location device. In the example of audio capturing, a user may for example make the sound he wants to be used as a dragon’s breath. Location data may be used to change the setting of the game to a setting which is relevant for the user at the location where he is at the moment of playing the game. In the example of the dragon game, the scenery may comprise mountains when it is determined, based on the user’s location, that the user is located in a mountainous area.
A particular advantage of the use of an illustration on a paper sheet is that especially children will be stimulated to be creative and to be involved in the game. For empowering children to create custom content data with a low threshold, with the least chance of erroneous input and the least possible need for often complicated image recognition software, a paper sheet may be provided with a predetermined lay-out or with markers which indicate in which area illustrations should be drawn. In this way, the lay-out may comprise for example a frame in which an avatar should be drawn, whereas in another frame a barrier should be drawn. Further examples comprise a frame for a background, a welcome message, or even a level. When the user enters an A, B or C, this may for example mean a selection of, respectively, an easy, medium or difficult starting level in the game.
Image recognition software may be arranged to identify elements of an illustration which can be animated, such as the wings of a dragon. In this case a simulation of the captured dragon may even comprise that the dragon flies and moves its wings while flying.
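One simple way to animate a segmented wing element is to rotate it with a periodic flapping function. The sketch below assumes the wing has already been split off from the illustration; the amplitude and flap-rate values are illustrative assumptions, not values from the patent.

```python
import math

def wing_angle(t: float, amplitude_deg: float = 30.0,
               flaps_per_second: float = 2.0) -> float:
    """Wing rotation in degrees at time t (seconds): a sinusoidal flap.

    The angle oscillates between -amplitude_deg and +amplitude_deg,
    completing `flaps_per_second` full flap cycles per second.
    """
    return amplitude_deg * math.sin(2.0 * math.pi * flaps_per_second * t)
```

A renderer would sample `wing_angle` each frame and apply the result as a rotation of the wing segment around its attachment point.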
Examples of creating custom content data by capturing real life items from the physical world are summed up in the following:
- creating content data based on visual input;
- scanning a piece of paper (which includes taking a photograph with e.g. a built-in camera of a smartphone) and using designated areas as textures;
- scanning a piece of paper and using detected colored blobs as a functional object generator, whereby for example black means walls, red means lava, blue means start, green means stop, and yellow means powerup.
A further example: in a software application such as a garden designer, the color of blobs may represent species of plants or trees. Other colors may represent types of tiles. When using augmented reality (AR), the drawn garden can then be projected onto the real garden.
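The color-to-object rule above can be sketched as a nearest-color classification. This is a minimal sketch assuming RGB input; the palette values follow the example in the text (black = walls, red = lava, blue = start, green = stop, yellow = powerup), while the exact reference colors and function name are assumptions.

```python
# Palette of reference colors mapped to game objects, following the
# example rule in the text. Real scans are noisy, so each sampled blob
# color is matched to the nearest palette entry by squared RGB distance.
PALETTE = {
    (0, 0, 0): "wall",
    (255, 0, 0): "lava",
    (0, 0, 255): "start",
    (0, 255, 0): "stop",
    (255, 255, 0): "powerup",
}

def classify_blob(rgb):
    """Return the game object whose palette color is closest to `rgb`."""
    def sq_dist(ref):
        return sum((a - b) ** 2 for a, b in zip(ref, rgb))
    return PALETTE[min(PALETTE, key=sq_dist)]
```

For instance, a slightly washed-out red blob still classifies as lava, which is the tolerance to imperfect drawings that the text aims for.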
Further examples comprise:
- scanning a piece of paper and using detected lines as an audio generator;
- scanning a piece of paper and using detected forms as audio filters;
- scanning a piece of paper and using detected forms as a way to set the mood for music (slow, fast, upbeat, romantic, etc.).
The invention also proposes that a user may take a picture and manually specify an element to be used. An example of how the resulting image may be incorporated in the customizable data structure: a picture of an apple on a table determines which apple is used in the software application, and where.
The image resulting from taking the picture may be used one-on-one, but the software application may also be used to analyze the image and use the semantics of the image to derive information which can be used to change the behavior of the software application or the syntax of the customizable data structure, or to introduce elements which are not a copy of the image itself but are related to the identified semantics.
For example, when a user takes a picture of a dog, the software application, such as a game, may be built around dogs, or dogs may be included in a story line.
Likewise, a picture of a particular color may be mapped onto a color scheme. For example, a picture of an ocean will lead to the use of more blue colors as the color scheme in a game.
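The color-scheme mapping can be sketched by averaging the picture's pixels and matching the result to the nearest named scheme. The scheme names and their reference colors below are illustrative assumptions; only the ocean-to-blue mapping comes from the text.

```python
# Hypothetical named color schemes with a representative RGB color each.
SCHEMES = {
    "ocean_blue": (30, 90, 180),
    "forest_green": (40, 140, 60),
    "desert_sand": (210, 180, 120),
}

def pick_color_scheme(pixels):
    """pixels: iterable of (r, g, b) tuples. Returns the scheme whose
    representative color is closest to the average pixel color."""
    r = g = b = n = 0
    for pr, pg, pb in pixels:
        r, g, b, n = r + pr, g + pg, b + pb, n + 1
    avg = (r / n, g / n, b / n)
    def sq_dist(name):
        return sum((c - a) ** 2 for c, a in zip(SCHEMES[name], avg))
    return min(SCHEMES, key=sq_dist)
```

A picture dominated by blue tones thus selects the blue scheme, matching the ocean example.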
Other types of content data customization comprise content based on auditory input. For example, recorded audio samples may be used for sound effects, recorded music samples may be used as music, and spoken words may be recognized and used for selecting a type of music. Spoken words may for example be: “classical”, “rock”, “pop”, “blues”. Subsequently, these types of music may be played in the software application.
Other examples comprise:
- Record audio samples over time and use recorded frequency to generate content;
- Example: roller coaster generation: high pitch is high track; low pitch is low track;
- Record audio samples over time and use recorded volume to generate content;
- Racetrack generation: high volume is wide track; low volume is narrow track;
- Record audio samples over time and use audio spectrum to generate content;
- Object generation audio: Every detected audio blip is used to generate objects of a certain type. The length of the blip will indicate the size of the generated object;
- Record audio samples and use recognized words for object generation, for example roller coaster object generation: “Start... Tree... House... Tree... Flower... Bird... Finish”;
- Create content based on GPS coordinates;
- Use GPS coordinates to place country related content, such as to generate nation flags;
- Use themed music based on GPS coordinates;
- Use local weather conditions inside the simulated environment like rain/snow/wind based on the GPS coordinates;
- Create content based on device acceleration values (accelerometer);
- Record user generated acceleration to modify virtual object acceleration over time. For example, device acceleration over time is used to modify the dynamic behavior of a virtual object, like an animation;
- Record user generated acceleration values over time as generated object placement, such as rollercoaster generation based on acceleration values, or racetrack generation based on acceleration values;
- Create content based on device orientation values, by using e.g. a gyroscope;
- Record user generated orientation values to modify virtual object orientation over time, for example device orientation over time is used to modify the direction an application character is looking, or rollercoaster generation based on orientation values over time, or racetrack generation based on orientation values over time;
- Create content based on compass values, for example user generated compass values are used to modify virtual object generation over time. Examples of this are rollercoaster generation based on compass values over time, or racetrack generation based on compass values over time.
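The pitch-to-height and volume-to-width mappings from the audio examples above can be sketched as a per-sample transformation. The scaling constants and the track representation are assumptions for illustration; the mapping directions (high pitch → high track, high volume → wide track) follow the text.

```python
def generate_track(pitches_hz, volumes):
    """Generate track segments from recorded audio samples.

    pitches_hz: per-sample pitch in Hz; high pitch -> high track.
    volumes: per-sample volume normalized to [0, 1]; high volume -> wide track.
    Returns a list of {"height", "width"} segments, one per sample.
    """
    track = []
    for pitch, volume in zip(pitches_hz, volumes):
        height = pitch / 100.0      # assumed scale: 440 Hz -> 4.4 units high
        width = 1.0 + 4.0 * volume  # assumed scale: width ranges from 1 to 5
        track.append({"height": height, "width": width})
    return track
```

A game would then sweep through the segments to lay out a rollercoaster or racetrack whose shape follows the recorded audio over time.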
FIGURE 1 shows an example of a paper sheet 100 with a printed lay-out. The paper sheet 100 comprises a lay-out of one or more markers 101a,b,c,d,e,f arranged for identification of areas on the paper sheet which are arranged for a user to create an illustration in. A marker may comprise for example a marker indicating an outer corner of an area, such as markers 101a,b,c,d. Such a marker may comprise a fiducial as shown in the figure. A fiducial has a pattern of black and white areas which may be interpreted by the software application as outer corners. This is useful for determining the area that needs to be scanned. The area within those corner markers indicates the outer borders within which a user should create the illustration. There are also markers, such as 101e,f, which indicate the outer corners of a specific area for the user to create the illustration in. In the figure this is in the shape of a frame or an outline 101e,f. A user may for example draw a game title in area 101e. An area may be indicated by a text such as “Game title” as in text box 102e. Frame 101f may comprise a special area, where a user may enter a level of difficulty (also optionally indicated by text box 102f). In this case the software application applies optical character recognition and determines e.g. that the entered letter is a “B”. This is subsequently converted into an element of custom content data which sets the level of the game to, for example, medium. In this way the image is not incorporated as an element as such in the game, but the semantics are interpreted and translated into a command for the software application.
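The two marker roles described for Figure 1 can be sketched as two small helpers: the four corner fiducials define the region to scan, and the OCR'd letter in the level frame is translated into a difficulty command. The A/B/C mapping follows the text; the corner-center coordinates, the axis-aligned bounding box, and the function names are assumptions (a real scan would also need a perspective correction step).

```python
# Difficulty mapping from the text: A/B/C -> easy/medium/hard.
DIFFICULTY = {"A": "easy", "B": "medium", "C": "hard"}

def crop_region(corner_centers):
    """Given the (x, y) centers of the four corner fiducials, return the
    axis-aligned bounding box (min_x, min_y, max_x, max_y) to scan."""
    xs = [x for x, _ in corner_centers]
    ys = [y for _, y in corner_centers]
    return (min(xs), min(ys), max(xs), max(ys))

def difficulty_from_letter(letter: str, default: str = "medium") -> str:
    """Translate an OCR result from the level frame into a game command."""
    return DIFFICULTY.get(letter.strip().upper(), default)
```

An unreadable or missing letter falls back to a default level, consistent with clause 17's idea of using a standard element when an outline is empty.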
The paper sheet may be part of a game and may be printed by a user. The user can then draw illustrations, titles, avatars and levels, for example. The paper sheet is then scanned, and the items are converted into elements for the customizable data structure. The scanned image may be shared with other users via social media or other means of communication. The sheet may also be colored via a computer drawing application, and imported or scanned into the software application.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that a person skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb to comprise and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The term and/or includes any and all combinations of one or more of the associated listed items. The article a or an preceding an element does not exclude the presence of a plurality of such elements. The article the preceding an element does not exclude the presence of a plurality of such elements. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| NL1043015A NL1043015B1 (en) | 2018-09-28 | 2018-09-28 | Device and method for content generation |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| NL1043015A NL1043015B1 (en) | 2018-09-28 | 2018-09-28 | Device and method for content generation |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| NL1043015B1 (en) | 2020-05-29 |
Family
ID=64755660
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| NL1043015A NL1043015B1 (en) | 2018-09-28 | 2018-09-28 | Device and method for content generation |
Country Status (1)
| Country | Link |
|---|---|
| NL (1) | NL1043015B1 (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030103611A1 (en) * | 1999-12-01 | 2003-06-05 | Paul Lapstun | Method and system for telephone control using sensor with identifier |
| EP2156869A1 (en) * | 2008-08-19 | 2010-02-24 | Sony Computer Entertainment Europe Limited | Entertainment device and method of interaction |
| US20140365993A1 (en) * | 2013-06-10 | 2014-12-11 | Pixel Press Technology, LLC | Systems and Methods for Creating a Playable Video Game From A Static Model |
| US9908050B2 (en) | 2010-07-28 | 2018-03-06 | Disney Enterprises, Inc. | System and method for image recognized content creation |
- 2018-09-28: NL application NL1043015A, patent NL1043015B1 (en), active
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20220148271A1 (en) | Immersive story creation | |
| CN113383369B (en) | Body pose estimation | |
| Wells | Animation: Genre and authorship | |
| Vernallis et al. | The Oxford handbook of sound and image in digital media | |
| US12172087B2 (en) | Systems and methods for improved player interaction using augmented reality | |
| US20150142434A1 (en) | Illustrated Story Creation System and Device | |
| CN104978758A (en) | Animation video generating method and device based on user-created images | |
| CN105608934B (en) | AR children's story early education stage play system | |
| Nubia et al. | Development of a mobile application in augmented reality to improve the communication field of autistic children at a Neurorehabilitar Clinic | |
| Zeng et al. | Implementation of escape room system based on augmented reality involving deep convolutional neural network | |
| US20160253911A1 (en) | Playground system | |
| NL1043015B1 (en) | Device and method for content generation | |
| KR20220105354A (en) | Method and system for providing educational contents experience service based on Augmented Reality | |
| US20240037877A1 (en) | Augmented reality system for enhancing the experience of playing with toys | |
| CN114555198A (en) | Interactive music playback system | |
| Seide et al. | Virtual cinematic heritage for the lost Singaporean film Pontianak (1957) | |
| CN117793409A (en) | Video generation method and device, electronic equipment and readable storage medium | |
| Stern | Interactive Art: Interventions in/to Process | |
| Lievianto et al. | The Design of 3D Virtual Reality Animation of Javan Rhino for Educational Media of Endangered Animals in Indonesia | |
| Doroski | Thoughts of spirits in madness: Virtual production animation and digital technologies for the expansion of independent storytelling | |
| Slowik | Defining Cinema: Rouben Mamoulian and Hollywood Film Style, 1929-1957 | |
| Xie | Sonic Interaction Design in Immersive Theatre | |
| CN120069093B (en) | Method, system, device, storage medium and product for constructing image and text data sets | |
| KR102344518B1 (en) | Method for providing mixed rendering content using mixed reality in forest experience environment and apparatus using the same | |
| Lawrence | Up, down and amongst: perceptions and productions of space in vertical dance practices |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 20210914 | PD | Change of ownership | Owner name: IDEALISERS BEHEER BV; NL. Free format text: DETAILS ASSIGNMENT: CHANGE OF OWNER(S), ASSIGNMENT; FORMER OWNER NAME: PATENTPUNT EINDHOVEN. Effective date: 20210914 |