
AU2003200217A1 - Communication Method - Google Patents


Info

Publication number
AU2003200217A1
Authority
AU
Australia
Prior art keywords
command
animated
user
vector
characters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2003200217A
Inventor
Benjamin POWELL
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HUMANARENA Ltd
Original Assignee
HUMANARENA Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HUMANARENA Ltd filed Critical HUMANARENA Ltd
Priority to AU2003200217A priority Critical patent/AU2003200217A1/en
Publication of AU2003200217A1 publication Critical patent/AU2003200217A1/en
Abandoned legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Description

P/00/0011 Regulation 3.2
AUSTRALIA
Patents Act 1990

COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name of Applicant: HumanArena Ltd
Actual Inventor: Benjamin Powell
Address for service in Australia: WALLINGTON-DUMMER, GPO Box 3888, Sydney NSW 2001 (Suite 904, 37 Bligh Street, Sydney NSW 2000)
Invention Title: Communication Method

The following statement is a full description of this invention, including the best method of performing it known to us:

COMMUNICATION METHOD

The present invention relates to a communication method and, more particularly, to such a method adapted for use over computer networks and more particularly, but not exclusively, in the context of animated characters.
BACKGROUND
The interconnected network of computers currently known as the "Internet" has led to the adoption of digital communication technology as an important means of communication between people. Initially the transducing device for the digital communication over the Internet or like network of interconnected computers has been a personal computer. In more recent times many other devices, particularly of the hand-held variety, have been produced to aid human communication over these same digital networks.
These hand-held devices include personal digital assistants, "palm" computers and mobile telephone devices.
Where it is desired to represent movement in a visual manner, for example on a VDU screen of a personal computer, and it is desired to communicate that movement over the Internet, it has been observed that bandwidth restrictions come into play, making it difficult to render movement, particularly in or near real time.
The personal computers and other transducing devices being used to initiate communication did so essentially in a text-based format. Even where graphical user interfaces are employed, much of the communication of meaningful information, and of the context of that information, is still, even today, based on the transmission of text.
People, however, are "visual" in nature and frequently comprehend communications better where the communication can be placed in a visual context.
Heretofore, bandwidth restrictions in particular, but not exclusively, have hindered the use of personal computers and other transducing devices in attempts to communicate movement and visual context over the Internet and other interconnected networks of computers.
It is an object of the present invention to address or ameliorate the abovementioned problems or at least provide a useful choice.
BRIEF DESCRIPTION OF INVENTION

Accordingly, in one broad form of the invention there is provided a method of depicting motion of body components communicated over a network; said method comprising: communicating the basic form of each said body component to a target display device; subsequently communicating desired movement of one or more of said body components by transmitting vector information over said network to said target display device.
Preferably said vector information comprises motion start and motion finish points.
Preferably said vector information further includes information pertaining to rate of movement between said start and finish points.
In a further broad form of the invention there is provided a method of assisting an on-line chat participant to visualize participants, said method comprising: displaying on a visual display of a first user an animated representation of said first user and an animated display of a representation of each other user with which said first user is in current online communication; causing relative movement of body components of said animated characters and causing relative movement between said animated characters on said visual display such that said first user views an analogue in time and space of current online chat participants with which said first user is currently in communication.
In yet a further broad form of the invention there is provided a method of communication between animated characters displayed on a visual display device; said method comprising: tracking displayed characteristics of said animated characters; effecting changes to displayed characteristics of said animated characters; effecting changes to the relative position between characters; whereby an observer is given an impression of interaction between said animated characters.
In yet a further broad form of the invention there is provided an online commercial for representation on a video display device; said commercial including representations of one or more animated characters; said characters operating in accordance with any of the above defined methods.
In yet a further broad form of the invention there is provided a command structure for communication of vector-commands over a digital network; said command structure comprising at least a first field which defines a vector-command type, a second field which identifies a character upon which a command is to be acted, and at least a third field associated with the command type.
Preferably said command structure further includes one or more optional detail fields.
Preferably said vector-command is a move character command.
Preferably said vector-command is a move character part command.
Preferably said animated representations corresponding to said users are grouped according to a quadratic dissection method so as to minimize network traffic.
Preferably said quadratic dissection method comprises dissecting said animated display into four equal areas and mapping at least one animated representation located in each square to a hierarchical tree structure.
Preferably each quadrant of said animated display is further subdivided into groups of four equal areas on a recursive basis until each animated representation of a user is isolated to a single quadrant.
BRIEF DESCRIPTION OF DRAWINGS

Embodiments of the present invention will now be described with reference to the accompanying drawings wherein:

Fig. 1 is a diagram of a typical computer-to-computer connection over a computer network with a personal computer being used as a transducing communications device;
Fig. 2 is a block diagram of a "wave" command structure for use in a communication method according to a first embodiment of the present invention;
Fig. 3 is a block diagram of a "move" command structure also for use in conjunction with the communication method of the first embodiment of the present invention;
Fig. 4 is a diagram of implementation of the command structure of Fig. 2 or Fig. 3 in relation to a human character moving from A to B;
Fig. 5 is a further diagram of implementation of the command structure of Fig. 2 and Fig. 3 in relation to movement of a character from A to B shown in Fig. 4, but following a varied path;
Fig. 6 is a diagram of a humanoid character denoting vertices as reference points for the command structure of Fig. 2 or Fig. 3;
Fig. 7 is a diagram of an animal character also denoting vertices for use as points of reference by the command structure of Fig. 2 or Fig. 3;
Fig. 8 is a block diagram of a communications context system according to a second preferred embodiment of the present invention;
Fig. 9 is a block diagram of the communications context system of Fig. 8 implemented in a client/server environment;
Fig. 10 is a block diagram of a relationship system with a view to minimizing network traffic;
Fig. 11 is a block diagram of an example of the method embodied in the relationship system of Fig. 10.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

A first embodiment of the present invention will now be described particularly with reference to Figs. 2, 3, 4, 5, 6 and 7, applicable in the generalized communications context of Fig. 1.
With initial reference to Fig. 1 there is illustrated a digital network 10 over which a first personal computer 11 is adapted to communicate with a second personal computer 12.
The digital network 10 may comprise the interconnected network of computers currently known as the "Internet" which operates on the basis of the TCP/IP protocol. Alternatively digital network 10 could be a corporate intranet or wireless communication operating on other proprietary protocols.
In this instance first user 13 desires to communicate with second user 14 by means of respective first PC 11 communicating with second PC 12.
In this instance first user 13 desires to communicate the movement of figure 15 on VDU 16 from point A to point B so that a figure facsimile 17 on VDU 18 of second PC 12 of second user 14 also appears to move from point A to point B on VDU 18.
In accordance with a first preferred embodiment of the present invention and with particular reference to Figs. 2 and 3, a command structure of the format: VECTOR-COMMAND [character ID; new position; optional first detail field; optional second detail field] is utilized.
In the particular instance of Figs. 2 and 3 the vector-command 20 relates, in this instance, to a character part and can be characterized as a vector-command of type MOVE CHARACTER PART where, in this instance, the character part comprises an arm or limb of a character and is a "WAVE" command where the character is caused to perform a waving motion. In the example given in Fig. 2 character 15 is designated as character C1 in the character ID field 21. The part of the character to be moved is designated in part ID field 22 as the "left" arm or limb of character C1. Finally the first detail field 23 contains data relating to the "times to repeat" the wave movement which, in this instance, is designated as the numeral 3.
In the case of the example of Fig. 3 the vector-command is a move character command with the default vector being move in a straight line. So, the fields of the vector-command comprise the move command 20 having a character ID field 21 which again designates character 15 as identified as character C1. The second ID field comprising field 22 denotes "new position", in this instance designated in Cartesian coordinates (X2, Y2). Optionally the command can include a first detail field 23 which designates speed. The vector-command 20 can also include a second detail field 24 which designates a specific path or trajectory in the instance where the default path of a straight line is not to be used.
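The field layout of Figs. 2 and 3 can be pictured as a small record. The sketch below is a minimal illustration in Python; the class name, field names and example values are assumptions introduced here, not anything specified in the patent:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class VectorCommand:
    """One vector-command 20: a type, the character acted upon, then
    type-specific fields (all names here are illustrative)."""
    command_type: str               # e.g. "MOVE_CHARACTER" or "MOVE_CHARACTER_PART"
    character_id: str               # character ID field 21, e.g. "C1"
    target: Any                     # field 22: a part name, or a new (x, y) position
    detail_1: Optional[Any] = None  # first detail field 23: speed or times-to-repeat
    detail_2: Optional[Any] = None  # second detail field 24: a named trajectory

# The WAVE command of Fig. 2: wave the left arm of character C1 three times.
wave = VectorCommand("MOVE_CHARACTER_PART", "C1", "left_arm", detail_1=3)

# The MOVE command of Fig. 3: move C1 to (X2, Y2) at a given speed,
# along the default straight-line path (detail_2 left empty).
move = VectorCommand("MOVE_CHARACTER", "C1", (40.0, 25.0), detail_1=1.5)
```

Leaving the optional detail fields empty corresponds to the defaults described above (default straight-line path, single execution).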
So, relating back to the movement of the figure 15 in Fig. 1 from point A to point B on VDUs 16, 18, a program (not shown) operating on first PC 11 is invoked by commands entered by first user 13, for example by mouse clicks on a graphical user interface such as the Windows (TM Microsoft Corporation) interface to designate: i. invocation of the "move character" command; ii. definition of the character to be moved as character C1 (figure 15 on first PC 11) currently located at point A and to be moved in a (default) straight line to position B, designated also by a mouse click at that location on VDU 16 having the Cartesian coordinates (X2, Y2).

With the parameters of the move character command having been defined, the vector-command 20 is communicated over digital network 10 to second PC 12 where a program (not shown) running on second PC 12 parses the received vector-command 20, determines that it is a move character command and that it applies to the figure facsimile 17 designated as character C1, and then drives VDU 18 so as to cause figure facsimile 17 to move from point A to point B of VDU 18.
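The encode-then-parse round trip between the two PCs can be sketched as follows; the semicolon-separated wire format used here is invented purely for illustration and is not defined by the patent:

```python
def encode(cmd_type, char_id, *fields):
    """Encode a vector-command as one semicolon-separated line
    (an illustrative wire format, not the patent's)."""
    return ";".join([cmd_type, char_id] + [str(f) for f in fields])

def parse(line):
    """Parse a received line back into its constituent fields."""
    parts = line.split(";")
    return {"type": parts[0], "character": parts[1], "fields": parts[2:]}

# Sender side: first PC 11 encodes "move character C1 to (120, 80)".
wire = encode("MOVE", "C1", 120, 80)

# Receiver side: second PC 12 parses the line and dispatches on the type.
cmd = parse(wire)
if cmd["type"] == "MOVE":
    x, y = float(cmd["fields"][0]), float(cmd["fields"][1])
    # ...here the receiving program would drive the VDU to animate
    # the figure facsimile toward (x, y)...
```

The point of the round trip is that only the short command line crosses the network; the animation itself is regenerated locally on the receiving machine.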
In an alternative form first user 13 can program the move character vector-command 20 by a "click and drag" action of a mouse pointer (not shown) on figure 15 from point A to point B on VDU 16.
With reference to Figs. 4, 5, 6 and 7 the figure 15 can be any kind of character adapted for animation and can, for example, be humanoid or animal in its representation or, indeed, an animation of what is otherwise considered an inanimate object.
Fig. 4 illustrates the example where figure 15 is a humanoid moving in a straight line from point A to point B as previously described with reference to the command of Fig. 3.
Fig. 5 illustrates the situation where the same humanoid character is moved in three-dimensional co-ordinates from point A (X1, Y1, Z1) to point B (X2, Y2, Z2) along an arcuate path 25. In this instance the vector-command 20 of Fig. 3, being a move command, would include an entry in second detail field 24 nominating the trajectory to be followed from point A to point B. The trajectory can be defined with reference to a look-up table (e.g. curved path = 1; ellipsoid path = 2).
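A trajectory look-up table of this kind can be sketched as a mapping from a path number to an interpolation function. The arc shape, its height and the numbering below are assumptions for illustration only:

```python
import math

def straight(a, b, t):
    """Default path: linear interpolation between 3-D points a and b, t in [0, 1]."""
    return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))

def arc(a, b, t, height=10.0):
    """Entry 1 of the look-up table: an arcuate path that bows upward in Z."""
    x, y, z = straight(a, b, t)
    return (x, y, z + height * math.sin(math.pi * t))

# Trajectory look-up table keyed by the second detail field (0 = default line).
PATHS = {0: straight, 1: arc}

A, B = (0.0, 0.0, 0.0), (10.0, 0.0, 0.0)
mid = PATHS[1](A, B, 0.5)   # highest point of the arcuate path
```

A receiving client would step t from 0 to 1 over successive display frames, calling the selected path function to place the character at each step.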
Fig. 6 highlights that, where the vector-command is a command of type move character part, it will often be necessary to define beginning and end vertices of the relevant character part which is to be moved, comprising, in this instance, first vertex 26 and second vertex 27, where first vertex 26 is a vertex about which rotation and displacement takes place whilst second vertex 27 represents the point of maximum displacement of the part.
So, in the case of the vector-command 20 being a wave command as illustrated in Fig. 2 the part ID field 22 will identify the arm of humanoid figure 15 in Fig. 6 and, during execution of the command 20, the arm would rotate about first vertex 26.
In more complex arrangements, such as that of the move command, animation of the humanoid figure 15 requires that, during execution of the move command, the humanoid figure moves its arms and legs forwards and backwards in a walking or running motion. This can include a definition, with reference to a vector path, of the trajectory that both first vertex 26 and second vertex 27 are to traverse during a move command.
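The rotation of an arm about first vertex 26 during a WAVE command can be sketched in two dimensions; the swing angle, step count and vertex coordinates below are illustrative assumptions:

```python
import math

def rotate_about(pivot, point, angle_deg):
    """Rotate `point` about `pivot` by angle_deg degrees in 2-D, as when an
    arm swings about first vertex 26."""
    a = math.radians(angle_deg)
    dx, dy = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + dx * math.cos(a) - dy * math.sin(a),
            pivot[1] + dx * math.sin(a) + dy * math.cos(a))

def wave_frames(pivot, tip, swing_deg=30.0, repeats=3, steps=8):
    """Yield successive positions of second vertex 27 (the arm tip) for
    `repeats` back-and-forth swings, as for WAVE (C1, Left, 3)."""
    for _ in range(repeats):
        for i in list(range(steps)) + list(range(steps, -1, -1)):
            yield rotate_about(pivot, tip, swing_deg * i / steps)

shoulder, hand = (0.0, 0.0), (0.0, 5.0)   # first vertex 26 and second vertex 27
frames = list(wave_frames(shoulder, hand))
```

Only the command parameters cross the network; each client regenerates the frame sequence locally, which is the bandwidth saving the method is aimed at.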
Fig. 7 illustrates the situation where the figure is an animal figure rather than a humanoid figure and again shows that the same principles apply, wherein in this case the part to be moved is a rear shin bone 28 of the animal figure.

COMMUNICATION CONTEXT SYSTEM

The general command structure comprising the vector-commands 20 according to the previously described first preferred embodiment can be used with advantage to communicate across digital networks not just the specific movement of a single character or part of a character but can be used as the basis for a command structure which provides a visual context system 30 as will now be described with reference to Fig. 8.
In this instance the visual context system 30 applies in a context of what are known as "chat" rooms currently available on the Internet in essentially text based form.
In this case each active participant is represented by a humanoid figure. There is one figure for each participant and the figures of all participants are represented on the communications device of each participant.
So, in the case of three participants as illustrated in Fig. 8, comprising first participant 31, second participant 32 and third participant 33, each active participant is represented by a facsimile humanoid, comprising first facsimile humanoid 31A and second facsimile humanoid 32A.
Because third participant 33 is currently not an active participant there is no humanoid representation of him/her.
In this embodiment the digital network 10 of Fig. 1 is utilized, wherein first participant 31 utilises screen 34 of first PC 35 and second participant 32 utilises screen 36 of second PC 37. As will be observed in Fig. 8, facsimiles 31A, 32A appear on both screens 34 and 36.
Third participant 33 merely observes the interactions of first participant 31 and second participant 32 by means of screen 38 on third PC 39.
Should third participant 33 elect to become an active participant then a facsimile humanoid figure 33A would appear on each of screens 34, 36, 38.
The facsimile characters 31A, 32A are moved around the screens utilizing the vector-commands 20 previously described with reference to the first embodiment and Figs. 1-7 inclusive. Additional vector-commands 20 adapted specifically for the chat context can be invoked such as, for example, a "SMILE" command.
Fig. 9 illustrates a client/server implementation of a visual context system 40 wherein personal computers 35, 37, 39 are implemented as clients of server computer 41 and wherein the personal computers and participants are numbered as for the visual context system 30 of the first embodiment. The command structure for movement of the facsimile figures 31A, 32A again utilises vector-commands 20 of the type described with reference to the first embodiment.
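In such a client/server arrangement the server's role can be pictured as simple fan-out of received vector-commands to the other participants. The toy model below is a sketch under that assumption, not the patent's protocol; the class and identifier names are invented:

```python
class CommandRelay:
    """Toy model of server computer 41: it receives a vector-command from one
    client PC and forwards it to every other connected client."""

    def __init__(self):
        self.clients = {}                 # client id -> inbox of commands

    def connect(self, client_id):
        self.clients[client_id] = []

    def send(self, sender_id, command):
        for cid, inbox in self.clients.items():
            if cid != sender_id:          # every other participant sees the move
                inbox.append(command)

relay = CommandRelay()
for pc in ("PC35", "PC37", "PC39"):
    relay.connect(pc)

# PC 35 moves facsimile 31A; the server fans the command out to PCs 37 and 39.
relay.send("PC35", "MOVE;31A;120;80")
```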
SUMMARY
The vector-command structure of the communication method according to the first embodiment is defined by the ability to send a stream of character vertices and associated XYZ coordinate information over a communication network such that the motion of the character is made possible both on the client and other clients throughout the network.
This enables coordinates of vertices derived from a dancing person through the use of existing 3D digitizing technology to be streamed over a communication network to multiple recipients and for their client system to animate a character based upon these movements.
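Such a stream of digitized vertex coordinates can be pictured as a flat sequence of (frame, vertex, x, y, z) records; the vertex names and positions below are hypothetical data for illustration:

```python
def vertex_stream(frames):
    """Flatten per-frame vertex positions into a stream of
    (frame, vertex_id, x, y, z) records, as might be captured from a
    dancing person by a 3-D digitizer and sent to each recipient."""
    for f, frame in enumerate(frames):
        for vid, (x, y, z) in frame.items():
            yield (f, vid, x, y, z)

# Two captured frames of a two-vertex character (hypothetical data).
frames = [
    {"head": (0.0, 0.0, 1.8), "hand": (0.4, 0.0, 1.0)},
    {"head": (0.1, 0.0, 1.8), "hand": (0.4, 0.2, 1.2)},
]
records = list(vertex_stream(frames))
```

A receiving client would apply each record to the corresponding vertex of its local character model, reproducing the captured motion.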
To further reduce data volume the technology also packages certain movement sequences into commands such as the Move and Wave commands described with reference to Figs. 2 and 3. This enables further data compression and, in conjunction with the vector-command structure, provides an enhanced subset of the technology.
In this way a particular sequence of streaming motion can be contained in a command and reproduced on the client with significantly less data volume being subsequently sent.
In order to implement the vector-command structure, the characters require one or more designated vertices. In the case of Figs. 6 and 7, the vertex of the character represents the centre of gravity or key node of the character and the XY location provided is the location that the character needs to be relocated to.
The character can simply be transferred to the new location directly, or may take on the task of walking to the specified location. It is possible for the character to translate, walk, run, fly, etc. to the new location and it is also possible that the system take into account various paths to transit the distance without running through interfering walls, other objects, etc.
Example command: Wave command of Fig. 2

COMMAND STRUCTURE: WAVE (C1, Left, 3)

where "C1" is the character name, "Left" designates the hand to wave and 3 is the number of repetitions. Note that the command can be predefined as shown above or may contain actual streaming motion coordinates for various vertices.
Networked System

With reference to Figs. 10 and 11 the system previously described for the depiction of animated characters can be applied in a networked computer environment. For example one individual animated character can be assigned to each participant in an online "chat" environment and the chat participants can thereby visualize "themselves" and the other participants with whom they are interacting in any given session.
In the example now to be described with reference to Figs. 10 and 11 it is assumed that there is one participant operating one personal computer used as the means for that person to interact with other chat participants in an online chat session.
With reference to Fig. 10 a first user A through computer terminal 50A communicates over a network 51 with three other participants AA, AB, and AC, who communicate themselves respectively via terminals 50AA, 50AB and 50AC. In turn these participants communicate with their own personal groupings, members of which in turn communicate with their own personal groupings and so on, as shown in the network 51 of Fig. 10. Working on an underlying assumption that any one participant will most likely and most often communicate with other users which are in their visual range, which is to say most closely associated with them, a networking model is outlined which increases the probability that network traffic takes the most direct route between any two participants forming a grouping.
With reference to Fig. 11 a visual display 52 on the personal computer (not shown) of first user 53 shows a total of 9 chat participants comprising users 53, 54, 55, 56, 57, 58, 59, 60, 61. In accordance with a network dissection method according to a preferred embodiment, a quadratic dissection of the participants illustrated on screen 52 takes place by clustering the participants in groups of five, comprising a centre participant and four corner participants, the corner participants being those which are either most closely associated with the centre participant or have the highest probability of communicating with the centre participant. So, for example, the screen arrangement 52 is dissected in accordance with the screen arrangement 52A, whereby first user 53 becomes the centre participant in a group of five, with second user 54, third user 55, fourth user 56 and fifth user 57 forming the corner participants in a first dissection grouping 62.
In turn, and still with reference to screen 52A a second dissection grouping 63 is formed based on a centre participant comprising fifth user 57 who becomes associated with corner participants 58, 59, 60, 61 as illustrated in the screen 52A.
Having performed the dissection illustrated in screen 52A as compared with screen 52, it follows that a hierarchical or tree structure exists between the participants thus dissected, as shown in the hierarchical tree structure 64 of Fig. 11.
The dissection method can be applied recursively.
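The recursive quadratic dissection can be sketched as a point quadtree over the display; the function, its key convention and the user positions below are illustrative assumptions:

```python
def dissect(points, x0, y0, size):
    """Recursively split a square region into four equal quadrants until each
    quadrant isolates at most one participant, returning a nested-dict tree.
    Keys are (right_half, top_half) booleans; leaves are single-user dicts."""
    if len(points) <= 1:
        return points
    half = size / 2.0
    quadrants = {}
    for name, (px, py) in points.items():
        key = (px >= x0 + half, py >= y0 + half)
        quadrants.setdefault(key, {})[name] = (px, py)
    return {key: dissect(pts,
                         x0 + (half if key[0] else 0.0),
                         y0 + (half if key[1] else 0.0),
                         half)
            for key, pts in quadrants.items()}

# Four participants on a 100 x 100 display (hypothetical positions).
users = {"A": (10, 10), "B": (90, 10), "C": (10, 90), "D": (60, 60)}
tree = dissect(users, 0.0, 0.0, 100.0)
```

The nesting depth of each leaf then gives the hierarchical tree structure of Fig. 11, with nearby participants sharing ancestors and hence, under the stated assumption, the most direct network routes.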
The above describes only some embodiments of the present invention and modifications, obvious to those skilled in the art, can be made thereto without departing from the scope and spirit of the present invention.

Claims (13)

1. A method of depicting motion of body components communicated over a network; said method comprising: communicating the basic form of each said body component to a target display device; subsequently communicating desired movement of one or more of said body components by transmitting vector information over said network to said target display device.
2. The method of Claim 1 wherein the vector information comprises motion start and motion finish points.
3. The method of Claim 2 wherein the vector information further includes information pertaining to rate of movement between said start and finish points.
4. A method of assisting an on-line chat participant to visualize participants, said method comprising: displaying on a visual display of a first user an animated representation of said first user and an animated display of a representation of each other user with which said first user is in current online communication; causing relative movement of body components of said animated characters and causing relative movement between said animated characters on said visual display such that said first user views an analogue in time and space of current online chat participants with which said first user is currently in communication.
5. A method of communication between animated characters displayed on a visual display device; said method comprising: tracking displayed characteristics of said animated characters; effecting changes to displayed characteristics of said animated characters; effecting changes to the relative position between characters; whereby an observer is given an impression of interaction between said animated characters.
6. An online commercial for representation on a video display device; said commercial including representations of one or more animated characters; said characters operating in accordance with any of the above defined methods.
7. A command structure for communication of vector-commands over a digital network; said command structure comprising at least a first field which defines a vector-command type, a second field which identifies a character upon which a command is to be acted, and at least a third field associated with the command type.
8. The command structure of Claim 7 wherein said command structure further includes one or more optional detail fields.
9. The command structure of Claim 7 or Claim 8 wherein said vector-command is a move character command.
10. The command structure of Claim 7 or Claim 8 wherein said vector-command is a move character part command.
11. The method of Claim 4 or Claim 5 wherein said animated representations corresponding to said users are grouped according to a quadratic dissection method so as to minimize network traffic.
12. The method of Claim 11 wherein said quadratic dissection method comprises dissecting said animated display into four equal areas and mapping at least one animated representation located in each square to a hierarchical tree structure.
13. The method of Claim 12 wherein each quadrant of said animated display is further subdivided into groups of four equal areas on a recursive basis until each animated representation of a user is isolated to a single quadrant.

Dated: 24 January 2003
HumanArena Ltd
By its Patent Attorney
Wallington-Dummer
AU2003200217A 2003-01-24 2003-01-24 Communication Method Abandoned AU2003200217A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003200217A AU2003200217A1 (en) 2003-01-24 2003-01-24 Communication Method


Publications (1)

Publication Number Publication Date
AU2003200217A1 true AU2003200217A1 (en) 2004-08-12

Family

ID=34318190

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2003200217A Abandoned AU2003200217A1 (en) 2003-01-24 2003-01-24 Communication Method

Country Status (1)

Country Link
AU (1) AU2003200217A1 (en)


Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period