
GB2481790A - Displaying a simulated environment on a mobile device - Google Patents

Displaying a simulated environment on a mobile device

Info

Publication number
GB2481790A
Authority
GB
United Kingdom
Prior art keywords
simulated environment
images
client device
server
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1011196.1A
Other versions
GB201011196D0 (en)
Inventor
Ajay Soni
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TXT TV FZ LLC
Original Assignee
TXT TV FZ LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TXT TV FZ LLC filed Critical TXT TV FZ LLC
Priority to GB1011196.1A priority Critical patent/GB2481790A/en
Publication of GB201011196D0 publication Critical patent/GB201011196D0/en
Priority to PCT/GB2011/001007 priority patent/WO2012001376A1/en
Publication of GB2481790A publication Critical patent/GB2481790A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/131Protocols for games, networked simulations or virtual reality
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/33Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
    • A63F13/332Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using wireless networks, e.g. cellular phone networks
    • A63F13/10
    • A63F13/12
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45Controlling the progress of the video game
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/02Details
    • H04L12/16Arrangements for providing special services to substations
    • H04L12/18Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L29/06034
    • H04L29/08108
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/04Protocols specially adapted for terminals or networks with limited capabilities; specially adapted for terminal portability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72427User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting games or graphical animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4781Games
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/40Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of platform network
    • A63F2300/406Transmission via wireless network, e.g. pager or GSM
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/55Details of game data or player data management
    • A63F2300/5546Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
    • A63F2300/5553Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history user representation in the game field, e.g. avatar
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082Virtual reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/08Bandwidth reduction

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Processing Or Creating Images (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The method comprises: storing the images of the simulated environment and a descriptor of the attributes of the simulated environment on a server from which the images of the simulated environment can be served to the mobile device. The simulated environment on the mobile device is created by rendering the images of the simulated environment and applying the attributes of the environment described in the descriptor. In this way, the attributes of the environment are described to the client device very concisely, which is beneficial in reducing the volume of data transmission.

Description

A Method of Displaying a Simulated Environment on a Client Device

The present invention relates to a method of displaying a simulated environment on a client device, such as a mobile phone.
Modern mobile phones are normally capable of running programmes or applications besides those required to make a telephone call. Those applications are normally stand-alone executable programs which are either downloaded or pre-stored during manufacture. The application can then be run to permit a user of the mobile phone to use that software. An example of such software would be the game "The Sims Virtual World", which is Java-based. The user of the mobile phone can play the game once it is loaded. However, such an arrangement does have some problems. First of all, if it is necessary to download the game, it may take a long time, which could be expensive. Secondly, a complicated game or piece of software will require a significant amount of storage. Thirdly, it will require a large amount of memory when in use. Finally, when the software is upgraded or modified, it is necessary to go through the complicated exercise of downloading and installing updates.
All of this is costly and requires a high-specification, expensive mobile phone with much greater resources than have been available in the past.
The present invention aims to overcome or at least reduce some of the disadvantages set out above.
According to a first aspect of the invention, a method of displaying a simulated environment on a client device comprises: storing the images of the simulated environment and a descriptor of the attributes of the simulated environment on a server; serving the images of the simulated environment to the client device; creating the simulated environment on the client device by rendering the images of the simulated environment and applying the attributes of the environment described in the descriptor.
In this way, the attributes of the environment are described to the client device very concisely, which is beneficial in reducing the volume of data transmission.
Preferably, the descriptor includes the position of an object of the simulated environment when the object is rendered by the client device. The use of the descriptor means that if the object moves, a whole new image does not need to be transmitted from the server to the client device.
Preferably, the descriptor includes the behaviour of an object of the simulated environment when the object is rendered by the client device.
Preferably, the descriptor includes the appearance of an object of the simulated environment when the object is rendered by the client device.
An embodiment of the present invention will now be described by way of example only, and with reference to the drawings in which:
Figure 1 is a schematic drawing showing a number of mobile phones connected to a server in a network;
Figure 2 shows the servers of Figure 1 in more detail;
Figures 3A-C show three mobile phones displaying avatars located in different virtual environments;
Figure 4 schematically shows the way in which a simulated environment is built;
Figure 5 shows an avatar together with images of that avatar in a number of different positions;
Figure 6 schematically shows the building of a simulated environment;
Figure 7 shows a built simulated environment together with a data descriptor defining the environment;
Figure 8 shows an avatar including a hot spot in an environment having a collision zone; and
Figure 9 shows an avatar including a hot spot in an environment having an interaction zone.
It has become increasingly popular for people using PCs to interact with each other in the virtual world within simulated environments hosted on the Internet. Normally, a user will devise a character for themselves which is rendered in the virtual world as an avatar. The avatar is able to move through the simulated environment, such as rooms in a building or open spaces, and interact with the simulated environment within which the avatar is located, picking up objects, standing or sitting on objects and the like.
Avatars of different users can interact with each other, exchanging objects or talking to each other.
Since the present invention is intended to operate on a mobile phone (cell phone), the virtual environment must be arranged so as to meet the requirements of a mobile phone, taking account of its connection, for example to the internet, its processing power, its memory and storage, and any other characteristics. Since most mobile phones tend to have relatively small memories and storage, and since it is intended that the avatar of one user is able to interact with avatars of other users within a "public" or shared virtual environment, the mobile phone must be able to connect to a server, for example via the internet, in order to be able to establish that interaction. It should be appreciated that this application is not limited to mobile phone client devices, but includes other client devices too.
To achieve this, Figure 1 schematically shows the interaction of mobile phones with one or more servers. In Figure 1, a number of mobile phones 1 are shown, each held by a user 2. Each mobile phone has its own unique process identifier (PID) which distinguishes one mobile phone 1 from another. Each mobile phone 1 can be connected to a workflow server 3 which has connections permitting data to be transferred between the mobile phones 1 and the workflow server 3. This connection might, most conveniently, be via an internet connection provided by the mobile phone service of the mobile phone 1, or by some other form of network connection, as appropriate.
The passing of data between the workflow server 3 and the mobile phones 1 may be asynchronous or synchronous. In this embodiment, the workflow server 3 includes its own unique process identifier (PID), and is, effectively, a form of middleware between the mobile phones 1 of the users 2 and various backend services which are conducted by one or more backend servers 4. The workflow server 3 handles messages between the mobile phones 1 and the backend servers 4; these messages are predominantly asynchronous JSON objects with text-formatted bodies which are universally understood by the client application on the mobile phone 1 and by the backend servers 4. The workflow server 3 also handles the queuing of messages and the load balancing of messages across the multiple backend servers 4. Handling messages within the workflow server 3 allows very large numbers of messages sent from thousands of clients to be handled without data loss.
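By way of illustration only, a message of the kind described above might be built as follows. The field names, the function and the example PIDs are assumptions made for this sketch; the specification states only that the messages are asynchronous JSON objects understood by both the client application and the backend servers.

```python
import json

def make_message(sender_pid, recipient_pid, action, body):
    """Build an asynchronous JSON message for routing via the workflow server.

    All field names here are illustrative assumptions, not part of the
    specification.
    """
    return json.dumps({
        "from": sender_pid,    # PID of the sending phone or service
        "to": recipient_pid,   # PID of the receiving workflow or backend server
        "action": action,      # e.g. a chat, movement or room request
        "body": body,          # text-formatted payload
    })

msg = make_message("phone-0001", "workflow-01", "chat", {"text": "Hello"})
decoded = json.loads(msg)
```

Because each message is self-describing text, the workflow server can queue it and load-balance it across backend servers without either end needing to know the other's internal representation.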
Figure 2 shows the system in more detail. The backend servers 4 include an avatar service 5 which, on request, generates avatar animations based on various visual characteristics held within a graphics library 7, with the created avatar animations being stored in a graphics cache 8. A graphics processor 9 converts images into transparent PNG files which can be saved in the graphics library 7 for use when avatars, rooms and objects are created. A designer can create images of objects to be stored and accessed from the graphics library, such as clothing to be worn by avatars, hair of different styles and colours, faces of different shapes and appearances, chairs, tables, lamps, sofas, whole rooms, parts of rooms and layers of rooms. When the avatar service generates the avatar animations, many different features are available from the graphics library to build the animation.
Likewise, a room service 6 is included which, on request, generates rooms and objects, such as furniture, which are within the room. The component parts are built from images within the graphics library 7, and the room is stored within the graphics cache 8. The room images in the library might be whole rooms or parts of rooms. In most instances, a room is built up from two or more layers, each layer being a separate image. Objects, such as tables, chairs, lamps, wine glasses and the like are added into the image, as appropriate.
The backend servers 4 also include a number of other services which include an orchestration service 10 which runs the core business processes and integrates all the backend services together, a chat room service 11 which defines the room templates/types, and manages people in multiple instances of room types, a friends service 12 which holds connections between users and manages an invitation life cycle, and a database 13. The backend server 4 also includes an inbox service 14 for holding virtual email boxes for users, and which identifies email types, including notifications, messages from users and invitations. A session service 15 is included which holds the active session details for a user and holds all actions performed by a user. A shop service 16 is included which manages the shopping process and payments, and holds what is bought by which user. A bank service 17 holds user and shop accounts and credit and debit transaction histories. A profile service 18 manages prospects and user accounts, provides security and avatar DNAs which define the characteristics of the avatar. An SMS service 19 sends SMS messages to multiple service providers.
In addition to the backend servers 4, a management information system 20 is included as well as an administration and monitoring function 21. A portal 22 for access by a moderator is also included connected to the workflow server 3.
Referring now to Figure 3, three mobile phones are shown in Figures 3A, 3B and 3C illustrating the presence of avatars within virtual environments. In Figure 3A, a user's avatar is shown standing in a simulated environment representing a scene on a beach with sunny weather. The ground is of sand, and the background of blue sky. Included in the simulated environment are objects including a chair, a parasol and glasses of cocktails. A user can operate his or her avatar to move within the simulated environment so as to walk into another position within the environment, to sit down, to stand up, to pick up objects such as cocktail glasses, and to interact with any other objects or with other avatars.
In Figure 3B, a user's avatar is within a simulated environment with avatars of two other users. In this case, the simulated environment is a cafe with objects such as sofas and chairs and tables. The avatars are able to interact by talking to each other.
In this case, the user's avatar is speaking to the avatars of the other two users. The other two users, whilst connected via their own mobile phones, are able to receive the words 'spoken' by the central avatar in the form of text-formatted messages and are then able to reply.
In Figure 3C, a user's avatar is shown in a simulated environment representing a room of a home. The avatar is sitting in a chair, but is also able to stand up and to walk about the rest of the environment.
In this specification, the term 'room' normally refers to the simulated environment, whether the environment is a room of a building, the beach, a market square or a cafe.
The room normally permits an avatar to move around, sit, if there is something to sit on, interact with objects within the room, and interact with other avatars. The room may be layered so that avatars and objects can have positions at different depths as well as being able to move from side to side.
The mobile phone 1 runs an application which allows the user to display the room, objects within the room and avatars, and to control the movement and actions of the avatar. It also permits the user to converse with the users controlling other avatars by entering messages, for example in text format. However, the mobile phone application is relatively compact in terms of the size of the software and the hardware resources that are required by the mobile phone to run it. For example, the mobile phone does not store all the different designs of objects, avatars and rooms, but downloads only those which are required and stores them in a cache while they are used. This means that the resources of the mobile phone are relatively lightly used compared with other software products, but the system still permits access to a vast library of objects, avatars and simulated environments which are held on the server and are accessible by the mobile phone. This also means that, as new designs and upgrades are introduced, these are held on the server, and do not require the mobile phone software to be upgraded. Thus, new environments and objects can be created rapidly on the server and pushed to the client as required without any need for software or code changes and dynamic loading. The amount of data required for a simulated environment is small, saving significant data usage.
The cache can be arranged to store recently-used images and/or frequently-used images to avoid downloading them repeatedly. A small number of standard designed rooms or avatars might be stored permanently on the mobile phone. The user's own avatar is worth storing on the mobile phone because it will not normally change very often.
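A cache that retains recently-used images, as described above, could be sketched as a least-recently-used store. The eviction policy and the capacity are illustrative assumptions; the specification does not fix either.

```python
from collections import OrderedDict

class ImageCache:
    """A minimal least-recently-used image cache of the kind the phone
    might keep. Capacity and policy are assumptions for illustration."""

    def __init__(self, capacity=32):
        self.capacity = capacity
        self._items = OrderedDict()  # data descriptor -> image bytes

    def get(self, descriptor):
        if descriptor not in self._items:
            return None                          # miss: caller fetches from the server
        self._items.move_to_end(descriptor)      # mark as recently used
        return self._items[descriptor]

    def put(self, descriptor, image):
        self._items[descriptor] = image
        self._items.move_to_end(descriptor)
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)      # evict the least recently used image
```

Permanently stored items, such as the user's own avatar, would simply live outside a cache like this so that they are never evicted.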
Referring now to Figure 4, the synthesis of a room, the avatars and objects will now be described. When the mobile phone 1 is running the application, it needs to build rooms with the avatars and objects within them. When an avatar enters a room, an image of the room must be displayed. The mobile phone requests the graphics of the room to be supplied, including any layers making up the room and any objects located within the room, and requests the images, or animations, of any avatar to be added into the room to be supplied as well. These things are requested from the servers 3 and 4, and in particular the backend server 4, via the mobile phone service provider's 3G or GPRS network, or any other suitable wireless or wired network connection, connecting via the Internet.
The server identifies the room which is to be displayed, and sends a data description of the room to the mobile phone instead of an image. It also identifies the avatar or avatars to be displayed, and sends a data description of it or them to the mobile phone instead of the image or images. More information about the data description of the room will be given later in the specification. The mobile phone searches its cache to see if it already has the images of the room and the avatar to be displayed. If it does, it takes the images and builds an on-screen image of the room, or a part of the room. If it does not, it sends the data description of the room or avatar to the server with a request for the image.
If the room or avatar is one which has already been built, perhaps because the user has been in that room before, the image will already be stored in the graphics cache on the server. It can then be sent to the mobile phone. If it has not previously been built, the room service 6 will take the image of the room from the graphics library 7, which forms a catalogue of parts, or where the room is stored in layers, the necessary layers, and will add any objects which are to be included in the room and compile them into an image of a room. The graphical representation of the room is then stored in the graphics cache 8 of the server with the data description and sent to the mobile phone.
Similarly, when an avatar is requested, if a graphical representation is already stored in the graphics cache 8, that graphical representation is sent to the mobile phone. If the graphics cache 8 does not have a graphical representation of the avatar, the avatar service 5 generates avatar animations from the graphics library and stores them in the graphics cache 8. The images of the avatar are transferred to the mobile phone 1, where the simulation is created by combining the room and the avatars.
The mobile phone assembles the image of the room and any avatars into an image which is displayed. The mobile phone might also cache the images together with their data description in case they are used again. The user is able to move his or her avatar about the room under the control of the application.
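The cache-or-fetch lookup described over the preceding paragraphs can be sketched as follows. The function and parameter names are assumptions for illustration, and fetch_from_server stands in for the network request to the workflow and backend servers.

```python
def resolve_image(descriptor, cache, fetch_from_server):
    """Return the image for a data descriptor, consulting the local cache first."""
    if descriptor in cache:
        return cache[descriptor]           # cache hit: no network traffic needed
    image = fetch_from_server(descriptor)  # miss: the server builds or serves the image
    cache[descriptor] = image              # keep the image in case it is used again
    return image
```

Repeated requests for the same descriptor then cost only one round trip to the server, which is the saving on data transmission that the description above relies on.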
Mention is made above of the graphics library, or library of parts 7 which is a catalogue of individual objects such as furniture, clothes, body shapes/types/colours for use in synthesising avatars, objects and simulated environments. Because it is held on the server, the graphics library 7 can continuously be updated with new designs and new objects which are available for use by the mobile phone application.
With reference to Figures 4 and 6, it should be understood that each item in the catalogue has a unique part identifier. So, for example, a three-seater sofa of a particular style might have a part identifier '359823'. If that sofa is to be yellow and hard, the sofa is taken from the graphics library 7 and is given those attributes in a synthesis stage 25 by the room service 6. This adds a position and interaction behaviour identifier to the part identifier so that the data descriptor for the sofa becomes '359823543876'. The first six digits of the data descriptor represent the object, and the last six identify its position and interaction so that, when the image is received by the mobile phone, it is able to position the object in the room in the correct place, and give it the right attributes. In effect, the data descriptor of an object, room or avatar is like a DNA data sequence defining its characteristics.
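Following the worked example above, splitting a twelve-digit descriptor into its two halves is straightforward. The internal layout of the last six digits is not specified, so it is left opaque here.

```python
def parse_descriptor(descriptor):
    """Split a twelve-digit data descriptor into its two six-digit halves:
    the unique part identifier and the position/interaction behaviour
    identifier, following the '359823543876' example."""
    if len(descriptor) != 12 or not descriptor.isdigit():
        raise ValueError("expected a twelve-digit data descriptor")
    part_id = descriptor[:6]               # which catalogue part this is
    position_interaction = descriptor[6:]  # where it sits and how it behaves
    return part_id, position_interaction

part, attrs = parse_descriptor("359823543876")
# part is "359823" (the sofa), attrs is "543876" (its position and behaviour)
```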
A wide range of objects are held in the graphics library 7 including things such as chairs, sofas, tables, parasols, televisions, bookcases, cocktail glasses, and the like.
Objects can be given other characteristics, attributes and behaviours. For example, a chair could be given the attribute of high bounciness so that an avatar which approaches the chair too quickly will bounce off. In the synthesis stage 25, the server takes the raw image corresponding to the part identifier and combines it with the attributes represented by the position and interaction behaviour identifier to create simulation artefacts that are transmitted to the mobile phone 1. The data descriptor includes the part identifier of the individual object as well as the position and interaction behaviour identifier which indicates the object's attributes. In addition to being transmitted to the mobile phone 1, the simulation artefacts are also stored in the graphics cache 8.
Likewise, as shown in Figures 4 and 5, when an avatar is created, a DNA-like data descriptor is created identifying the characteristics of the avatar, including body type, colour, style of hair, clothing, and other characteristics. Again the first half of the data descriptor is a unique part identifier for the body type, and the second half defines the attributes of the avatar, such as skin colour, style of hair, clothing, face shape and eye colour. The synthesis stage 25 creates a number of images of the avatar which can be used as still images, or put together to form an animation representing movement of the avatar. The images are stored in the graphics cache 8 until they are called by the mobile phone.
In Figure 5, an avatar is shown in which various characteristics have been defined, including gender, skin colour, hair colour and style, eye colour and shape, lip shape and colour, beard, upper clothing, lower clothing, shoes and accessories. These characteristics can all be represented in the data descriptor associated with the avatar.
Once the user has chosen the gender of the avatar, the server sets a default set of characteristics which can be modified by the user so as to create an avatar having a particular appearance. The building of the avatar with the various characteristics and the rendering of it is called synthesis. The synthesised avatar can then be stored on the server against the user's profile. When the avatar appears in a room, the graphical representation of the avatar is obtained from the graphics cache 8 of the server where it is stored, and transferred to the mobile phone.
Figure 5 shows the synthesised avatar together with ten different positions in which the avatar can be displayed. When an avatar is to be displayed by the mobile phone, these images of the avatar are downloaded into the mobile phone's cache for use in displaying the avatar in different positions. The mobile phone can request the sending of the images of the avatar by requesting the images associated with a particular data descriptor which represents that avatar. In the case of Figure 5, this would be '000000232300'. It will be appreciated that some of the images of the avatar will represent the avatar being stationary in particular positions such as standing or sitting, and some may simulate the movement of the avatar, for example by walking, when put together in an animation. Since the individual images can be assembled into an animation to simulate movement, the group of images might best be described as avatar animations. Of course, other renderings of the avatar are also possible.
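The assembly of still images into an animation could be sketched as follows. The pose names, frame indices and walk-cycle order are all illustrative assumptions; the specification says only that some images are stationary poses and some combine into movement.

```python
# Hypothetical mapping of pose names to the frame indices of the ten
# downloaded avatar images; these names do not come from the specification.
AVATAR_POSES = {
    "stand": 0,
    "sit": 1,
    "walk_1": 2,
    "walk_2": 3,
    "walk_3": 4,
}

def walk_cycle_frames(steps):
    """Return the frame indices for a repeating three-frame walk cycle."""
    cycle = ["walk_1", "walk_2", "walk_3"]
    return [AVATAR_POSES[cycle[i % len(cycle)]] for i in range(steps)]

frames = walk_cycle_frames(5)  # cycles 2, 3, 4, 2, 3 to simulate walking
```

Displaying the indexed images in sequence gives the impression of movement without any further data being sent from the server.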
With reference to Figures 4, 6 and 7, the library 7 also includes a range of different rooms, such as rooms of an apartment, open spaces such as market squares and the beach, shops, cafes, and the like. Figure 6 illustrates the way in which a simulated environment, in this case a room, is created. In some respects, the creation of a room is similar to that of an avatar, in that the room has a data descriptor representing characteristics of the room, such as background, floors, windows and the like. The room can be made up of layers, and each layer can be represented in a separate data descriptor, if appropriate. In addition, the data descriptor of the room will also include the data descriptor of any objects within the room, which includes not only the part identifier but also a position and interaction behaviour identifier.
When a user moves his avatar into a room, the data descriptor of the room is obtained from the server, the images are obtained from the server (as described above), and the mobile phone then creates a graphical representation of the room with all the objects in it. It then adds the avatar, which the user can move about the room using the application. Figure 7 shows a data descriptor which represents the complete room with objects, and which is quite long.
The data descriptor of a room is more complex than that of an avatar since a single data descriptor represents the room, the objects within the room, the position of the objects and also the objects' appearance, attributes and interaction behaviour.
In Figure 6, a room is shown which has been synthesised by the server, and this is transmitted to the mobile phone. The mobile phone shows a part of the room, together with avatars added to the room. Each avatar is shown to have a behaviour, and in the case of the central avatar, the behaviour exhibited by the avatar is to speak.
The data descriptor is shown in Figure 6 including a part identifier which indicates the object, and then a position and interaction behaviour which refers to the position of the object within the room, perhaps using X, Y and Z coordinates, as well as the location of collision zones around an object. The use of collision zones is set out in more detail later in the specification. The position and interaction behaviour also includes interaction behaviour information, such as whether an object can be pushed. The example given in Figure 6 is a data descriptor representing a chair, so the position and interaction information relating to the chair is added to the part identifier for the chair. In this case, the chair cannot be collided with, and cannot be sat on.
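A room descriptor of the kind described above could be assembled as sketched below: each object contributes its part identifier followed by position and interaction behaviour information, and the entries are concatenated into one long descriptor, as in Figure 7. The field widths, encoding and names here are all illustrative assumptions, not the actual format used in the figures.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a room descriptor built by concatenating the
// descriptors of the objects it contains. The fixed-width layout
// (6-char part id, 3-digit x, 3-digit y, behaviour flags) is hypothetical.
public class RoomDescriptor {
    private final List<String> entries = new ArrayList<>();

    // partId identifies the object's graphic; x/y give its position in the
    // room; behaviourFlags encode interaction behaviour (e.g. collidable,
    // sittable), "00" meaning neither.
    public void addObject(String partId, int x, int y, String behaviourFlags) {
        entries.add(partId + String.format("%03d%03d", x, y) + behaviourFlags);
    }

    public String encode() {
        return String.join("", entries);
    }
}
```

For the chair of Figure 6, which can neither be collided with nor sat on, the entry would carry its part identifier plus position plus "no-interaction" flags.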
The user can use his avatar to interact with other avatars and objects within the room.
However, adding or moving objects within the room will cause the representation of the room, and therefore the data descriptor, to change. When a change occurs, the new data descriptor is sent to the server so that it keeps track of changes. The run time environment simulates a real physical environment with objects, collisions and the application of physics in the interaction between objects.
The mobile phone application simulates the environment with the avatars inside based on this information. The mobile phone has a set of pre-built behaviour libraries, for example to sit on or to push an object. An object is assigned one or more of the behaviours in its data descriptor. The mobile phone, or client application, is then responsible for simulating the behaviours. Most graphical representation and generation is performed by the server.
It will be appreciated from the foregoing that the encoding of the virtual environment including the simulated environment, the objects and the avatars using the data descriptors to define appearance, positions and behaviours is particularly useful because the end user application running on the mobile phone has the ability to graphically simulate the environment without itself having to store the library of graphics. It just caches those images that are required, and uses the DNA sequence to build the graphical representation of the room without having to download it all every time there is a change. The application on the mobile phone is able to simulate different rooms by taking a server-downloadable data descriptor, mutating the descriptor (for customising the room), and generating the simulation.
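The caching behaviour described above, where the client holds only the images it needs and fetches missing parts from the server by data descriptor, might be sketched as follows. The `ImageFetcher` callback and class names are hypothetical stand-ins for the phone's actual network layer.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the client-side image cache: images are looked up by data
// descriptor, and only missing parts are requested from the server.
public class ClientGraphicsCache {
    // Hypothetical callback representing a request to the server,
    // e.g. an HTTP fetch keyed by the data descriptor.
    public interface ImageFetcher {
        byte[] fetch(String descriptor);
    }

    private final Map<String, byte[]> cache = new HashMap<>();
    private final ImageFetcher server;

    public ClientGraphicsCache(ImageFetcher server) {
        this.server = server;
    }

    public byte[] getImage(String descriptor) {
        // Serve from the cache when possible; otherwise request the missing
        // part from the server and cache it for next time.
        return cache.computeIfAbsent(descriptor, server::fetch);
    }
}
```

A repeated request for the same descriptor never goes back to the server, which is the data-saving property the passage relies on.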
In addition, the images that are actually sent to a mobile phone may be modified based on the model or characteristics of the phone. Clearly, if the phone's screen is low resolution, there is no point in transmitting high-definition images. Also, different phones have different screen sizes. Therefore, once an image has been created, it may be re-sized and downgraded in quality to match the mobile phone being used. This reduces the data transmission required.
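One way the server might choose a target size for a given handset is sketched below. The fit-within-screen, never-upscale rule is an illustrative assumption; the specification only says images may be re-sized and downgraded to match the phone.

```java
// Illustrative sketch: pick a served image size for a handset, assuming the
// server knows the client's screen dimensions. Scales down to fit within
// the screen while preserving aspect ratio; never scales up.
public class ImageSizer {
    public static int[] targetSize(int imgW, int imgH, int screenW, int screenH) {
        double scale = Math.min(1.0, Math.min(
                (double) screenW / imgW, (double) screenH / imgH));
        return new int[]{(int) Math.round(imgW * scale),
                         (int) Math.round(imgH * scale)};
    }
}
```

A 640x480 image served to a 320x240 screen would be halved in each dimension, roughly quartering the data transmitted.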
Mention is made above of collision zones. This is now explained in more detail.
Collision zones are defined areas within a simulated environment. Referring to Figure 8, the simulated environment includes areas called collision zones, as illustrated. The avatar includes a point or hot spot which is not visible to a user in the final rendering of the environment, but that hot spot cannot move into a collision zone. In the example given, the avatar would be unable to jump into the air because there is a collision zone just above the hot spot. If the avatar were to move to the left, he would be able to jump because he would then be clear of the collision zone. If, however, he were to move further to the right, he would encounter another collision zone which would stop him from moving any further to the right.
By default, the mobile application will create a five-pixel margin around the collision zone to prevent the avatar getting stuck. The collision zones are created to prevent the avatar from moving off the screen, colliding with furniture, or going around furniture.
The mobile application will automatically scale the collision zones depending on the selected screen resolution, using a screen scaling factor predefined in Java.
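The collision test described above, where the avatar's invisible hot spot may not enter a collision zone that has been expanded by the five-pixel margin and scaled for the screen, can be sketched as below. The rectangular zone shape and the method names are illustrative assumptions.

```java
// Sketch of a collision zone test: the avatar's hot spot may not move into
// the zone, which is grown by the default five-pixel margin and scaled by
// the screen scaling factor. A rectangular zone is assumed for illustration.
public class CollisionZone {
    private static final int MARGIN = 5; // default margin to prevent the avatar getting stuck
    private final int x, y, w, h;        // zone rectangle in unscaled coordinates

    public CollisionZone(int x, int y, int w, int h) {
        this.x = x; this.y = y; this.w = w; this.h = h;
    }

    // Returns true if the hot spot may move to (px, py) at the given scale.
    public boolean allows(int px, int py, double scale) {
        int left   = (int) (x * scale) - MARGIN;
        int top    = (int) (y * scale) - MARGIN;
        int right  = (int) ((x + w) * scale) + MARGIN;
        int bottom = (int) ((y + h) * scale) + MARGIN;
        return px < left || px > right || py < top || py > bottom;
    }
}
```

A hot spot just outside the margin may move; one inside the zone, or within the margin, is blocked, which is what stops the avatar jumping into the zone in the Figure 8 example.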
Figure 9 shows an interaction zone which, like the collision zone in Figure 8, is located within a room. Rather than preventing the hot spot of an avatar from moving into the zone, the hot spot is allowed into the interaction zone. Once in the interaction zone, the avatar can carry out additional functions, such as sitting left, sitting right, watching TV, interacting with another avatar, or entering a room. Thus, when the hot spot moves into the interaction zone, additional functionality becomes available.
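An interaction zone differs from a collision zone in that the hot spot is allowed in, and entering it unlocks extra actions. A minimal sketch, again assuming a rectangular zone and using action names drawn from the description, could look like this; the class shape is hypothetical.

```java
import java.util.Collections;
import java.util.List;

// Sketch of an interaction zone: the hot spot may enter, and while inside,
// additional functions (sit left, sit right, watch TV, and so on) become
// available. A rectangular zone is assumed for illustration.
public class InteractionZone {
    private final int x, y, w, h;
    private final List<String> actions;

    public InteractionZone(int x, int y, int w, int h, List<String> actions) {
        this.x = x; this.y = y; this.w = w; this.h = h;
        this.actions = actions;
    }

    // Actions available when the avatar's hot spot is at (px, py):
    // the zone's actions while inside, none while outside.
    public List<String> availableActions(int px, int py) {
        boolean inside = px >= x && px <= x + w && py >= y && py <= y + h;
        return inside ? actions : Collections.<String>emptyList();
    }
}
```

The same hot spot used for collision testing drives this check, so the client needs no extra state to decide when the extra functionality is offered.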

Claims (29)

  1. A method of displaying a simulated environment on a client device comprising: storing the images of the simulated environment and a descriptor of the attributes of the simulated environment on a server; serving the images of the simulated environment to the client device; creating the simulated environment on the client device by rendering the images of the simulated environment and applying the attributes of the environment described in the descriptor.
  2. A method according to claim 1, wherein the descriptor includes the identity of an object of the simulated environment.
  3. A method according to claim 1 or 2, wherein the descriptor includes the position of an object of the simulated environment when the object is rendered by the client device.
  4. A method according to any one of claims 1 to 3, wherein the descriptor includes the behaviour of an object of the simulated environment when the object is rendered by the client device.
  5. A method according to any one of the preceding claims, wherein the descriptor includes the appearance of an object of the simulated environment when the object is rendered by the client device.
  6. A method according to any one of the preceding claims, further comprising storing the images served to the client device on the client device.
  7. A method according to claim 6, wherein at least one of: (i) the most recently used images; (ii) the most frequently used images; (iii) the most recently stored images; and (iv) a number of standard images are stored on the client device.
  8. A method according to any one of the preceding claims, further comprising, before serving the images of the simulated environment to the client device, sending a data description of the simulated environment and the images which form that environment to the client device; and searching the client device for the images forming the simulated environment.
  9. A method according to claim 8, wherein, if all of the images for forming the simulated environment are stored in the client device, building an image of the simulated environment.
  10. A method according to claim 8, wherein, if any of the images for forming the simulated environment are not stored in the client device, sending the data description of the missing part to the server.
  11. A method according to any one of the preceding claims, wherein the descriptor is alphanumeric.
  12. A method according to any one of the preceding claims, wherein the simulated environment is formed from one or more of the following images:
    (a) a background layer;
    (b) a ground layer;
    (c) a foreground layer;
    (d) one or more objects; and
    (e) one or more avatars.
  13. A method according to any one of the preceding claims, further comprising allocating a descriptor to every object.
  14. A method according to any preceding claim, wherein each object has a unique descriptor.
  15. A method according to any one of the preceding claims, wherein the simulated environment or an object can be given an attribute which establishes a collision zone on or around the object.
  16. A method according to claim 15, wherein an object can be given an attribute of a hot spot, and the object is prevented from being moved or positioned such that the hot spot is located within a collision zone.
  17. A method according to any one of the preceding claims, wherein the simulated environment or an object can be given an attribute which establishes an interaction zone.
  18. A method according to claim 17, wherein an object can be given an attribute of a hot spot, and the object interacts with the simulated environment or another object when the hot spot is located within the interaction zone.
  19. A system for displaying a simulated environment, comprising: a server storing the images of the simulated environment and a descriptor of the attributes of the simulated environment, and serving the images; a client device arranged to receive the served images from the server and to create the simulated environment by rendering the images of the simulated environment and applying the attributes of the environment described in the descriptor.
  20. A system according to claim 19, wherein, before the server serves the images of the simulated environment to the client device, it sends a data description of the simulated environment and the images which form that environment to the client device; and the client device searches for the images forming the simulated environment.
  21. A system according to claim 20, wherein, if all of the images for forming the simulated environment are stored in the client device, the client device builds an image of the simulated environment.
  22. A system according to claim 20, wherein, if any of the images for forming the simulated environment are not stored in the client device, the client device sends the data description of the missing part to the server.
  23. A client device arranged to display a simulated environment, comprising: data storage arranged to store images of the simulated environment and a descriptor of the attributes of the simulated environment; the client device being arranged to receive the images from the server and to create the simulated environment by rendering the images of the simulated environment and applying the attributes of the environment described in the descriptor.
  24. A client device according to claim 23, arranged to receive a data description of the simulated environment and the images which form that environment; and to search its storage for the images forming the simulated environment.
  25. A client device according to claim 24, wherein, if all of the images for forming the simulated environment are stored in the storage, the client device is arranged to build an image of the simulated environment.
  26. A client device according to claim 24, wherein, if any of the images for forming the simulated environment are not stored in the storage of the client device, the client device is arranged to send the data description of the missing part to the server.
  27. A server arranged to serve images to a client device to enable it to display a simulated environment, the server comprising: server storage arranged to store the images of the simulated environment and a descriptor of the attributes of the simulated environment; the server being arranged to serve the images of the simulated environment to the client device.
  28. A server according to claim 27, wherein, before the server serves the images of the simulated environment to the client device, it is arranged to send a data description of the simulated environment and the images which form that environment to the client device.
  29. A server according to claim 28, wherein, if any of the images for forming the simulated environment are not stored in the client device, the client device sends the data description of the missing part to the server, and the server is arranged to send the image or images to the client device.
GB1011196.1A 2010-07-02 2010-07-02 Displaying a simulated environment on a mobile device Withdrawn GB2481790A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1011196.1A GB2481790A (en) 2010-07-02 2010-07-02 Displaying a simulated environment on a mobile device
PCT/GB2011/001007 WO2012001376A1 (en) 2010-07-02 2011-07-04 A method of displaying a simulated environment on a client device


Publications (2)

Publication Number Publication Date
GB201011196D0 GB201011196D0 (en) 2010-08-18
GB2481790A true GB2481790A (en) 2012-01-11

Family

ID=42669127



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008151424A1 (en) * 2007-06-11 2008-12-18 Darwin Dimensions Inc. Metadata for avatar generation in virtual environments
US20090158170A1 (en) * 2007-12-14 2009-06-18 Rajesh Narayanan Automatic profile-based avatar generation
US20100229108A1 (en) * 2009-02-09 2010-09-09 Last Legion Games, LLC Computational Delivery System for Avatar and Background Game Content
US20100304869A1 (en) * 2009-06-02 2010-12-02 Trion World Network, Inc. Synthetic environment broadcasting
US20110055267A1 (en) * 2009-08-27 2011-03-03 International Business Machines Corporation Virtual Universe Rendering Based on Prioritized Metadata Terms
WO2011110855A2 (en) * 2010-03-10 2011-09-15 Tangentix Limited Multimedia content delivery system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5880731A (en) * 1995-12-14 1999-03-09 Microsoft Corporation Use of avatars with automatic gesturing and bounded interaction in on-line chat session
AU2003292160A1 (en) * 2002-12-12 2004-06-30 Sony Ericsson Mobile Communications Ab System and method for implementing avatars in a mobile environment
US7979067B2 (en) * 2007-02-15 2011-07-12 Yahoo! Inc. Context avatar
JP5039922B2 (en) * 2008-03-21 2012-10-03 インターナショナル・ビジネス・マシーンズ・コーポレーション Image drawing system, image drawing server, image drawing method, and computer program


Also Published As

Publication number Publication date
GB201011196D0 (en) 2010-08-18
WO2012001376A1 (en) 2012-01-05


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)