WO2012127329A1 - Method of collaboration between devices, and system therefrom - Google Patents
- Publication number
- WO2012127329A1 (PCT/IB2012/050627)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- instruction
- touch event
- image
- touch
- gesture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
Definitions
- the method subsequently involves transmitting the at least one instruction to a second device, shown in Fig. 1 as 16, that was captured in step 14.
- the second device is used by a second user to view the image the first user is viewing on the first device.
- the image is made available from a suitable location, such as a server like PACS.
- the image may be retrieved directly by the second user, or the first user or an administrator may provide permissions and instructions for the server to transfer the image to the second user.
- the second device comprises a second touch enabled user interface.
- the second device need not comprise a touch enabled user interface; it may instead be any one of a desktop computer, laptop computer, mobile communication device, specialized computing device adapted for certain requirements, and the like.
- the at least one instruction is then converted to a format that is recognized by the second device.
- the transmitting may be done in any format known to those skilled in the art. This includes, for example, transmitting through wired networks, such as LAN, telephone ports, and the like; wireless networks such as WLAN, WAN, and the like; and combinations thereof.
- the transmission may also be through secured networks that involve appropriate levels of encryption and decryption, which is also contemplated to be within the scope of the invention. Such levels of security are necessary for many situations, including privacy issues, for obtaining approvals from regulatory authorities, and the like.
- the instruction is then carried out on the second device, as shown in Fig. 1 and depicted by numeral 18.
- the at least one instruction from the first device is transmitted as such to the second device, after which the second device converts the at least one instruction to a format that is recognized by it, and hence capable of executing the at least one instruction.
- the touch event that was enacted on the first device is now re-created on the second device as well.
- the user of the second device sees an updated view on the user interface automatically without having to repeat the touch event.
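- The capture, transmit and re-create flow described above can be sketched as follows; the `TouchInstruction` record, the JSON text format and the view-state dictionary are illustrative assumptions, not a format defined by this application:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TouchInstruction:
    """One captured instruction for a touch event (hypothetical format)."""
    action: str   # e.g. "pan", "zoom", "annotate"
    coords: list  # at least two co-ordinate pairs for the touch event

def capture(action: str, coords: list) -> str:
    """On the first device: capture the instruction in a suitable text format."""
    return json.dumps(asdict(TouchInstruction(action, coords)))

def recreate(message: str, view_state: dict) -> dict:
    """On the second device: carry out the received instruction on the view."""
    instr = TouchInstruction(**json.loads(message))
    if instr.action == "pan":
        (x1, y1), (xn, yn) = instr.coords[0], instr.coords[-1]
        view_state["offset"] = (view_state["offset"][0] + xn - x1,
                                view_state["offset"][1] + yn - y1)
    return view_state

# The first device captures a pan gesture; the second device replays it
# without the second user having to repeat the touch event.
msg = capture("pan", [(10, 10), (40, 60)])
print(recreate(msg, {"offset": (0, 0)}))   # offset moves by (30, 50)
```

Only the short instruction message crosses the network; the view itself is not re-sent.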
- the touch event may be updated automatically on a real-time basis as long as the connection between the first and second device is of a certain quality and sufficient speed.
- the touch event may be performed on a user interface at any time period, as long as the at least one instruction is carried out along with the view.
- a second user on a second device may replay the entire set of views and the touch events that occurred originally, by retrieving the instructions from the storage location along with the views at any later time period as compared to the original time period when the actual set of gestures and touch events were recorded.
- the second device may be the same device as the first device, on which the original views, gestures and touch events were performed by a first user.
- the at least one instruction may be executed by a first device, second device, or combinations thereof, upon instructions on a series of predefined views, such as medical images or medical video images.
- the method of the invention enables one to "zoom" a video to a particular frame to a certain extent, and subsequently carry the same zoom level forward to all frames.
- a set of operations that were conducted on a first image is repeated on every subsequent image automatically.
- rapid analysis of a series of images may be conducted without having to go through a series of repetitive steps manually, thus saving time and resources, making the user experience very comfortable and easy.
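- The carry-forward behaviour described above can be sketched as follows; the frame representation and the `("zoom", factor)` instruction tuple are illustrative assumptions:

```python
# A sketch of carrying a captured "zoom" forward to every frame of a series,
# so the operation performed on the first image is repeated automatically.
def apply_instruction(frame: dict, instruction: tuple) -> dict:
    op, value = instruction
    if op == "zoom":
        frame = {**frame, "zoom": frame["zoom"] * value}
    return frame

def carry_forward(frames: list, instruction: tuple) -> list:
    """Repeat the operation captured on the first frame on all frames."""
    return [apply_instruction(f, instruction) for f in frames]

series = [{"id": i, "zoom": 1.0} for i in range(3)]
zoomed = carry_forward(series, ("zoom", 2.0))
print([f["zoom"] for f in zoomed])   # → [2.0, 2.0, 2.0]
```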
- collaborations and teaching may also be greatly facilitated by the method of the invention.
- the touch event performed on a first device may be carried out on any number of devices associated with it, either on a real-time basis or in a time-delayed manner.
- the method of the invention is especially useful for collaboration between a first user and any number of further users, wherein all users are collaborating over a view.
- when a first user creates a touch event of zooming an MRI scan image using an appropriate gesture, the same touch event is re-created on all the devices involved in the collaboration.
- another user may create a touch event of annotating using a "pointing arrow" at an appropriate location in the MRI image, which touch event will now be re-created on all the collaborating devices using the method of the invention.
- the annotation may also be supplemented by a voice recording regarding the importance of the pointed location of the image.
- the method of the invention may be used for teaching purposes, wherein the views and the instructions associated with the gestures and touch events are recorded in an appropriate location. Subsequently, this entire set of views and the instructions associated with all the gestures and touch events are retrieved from the storage location and re-created on the device comprising a touch enabled user interface.
- Other exemplary uses for the method of the invention will become obvious to one skilled in the art, and are contemplated to be within the scope of the invention.
- the method of the invention avoids transferring the views repeatedly at a certain "frame rate," which consumes a considerable amount of bandwidth and makes real-time collaboration difficult.
- the benefits of the invention stem from the fact that relevant views are transmitted only once from the first device to all the collaborating devices, and any further communication only involves transfer of instructions related to touch gestures and annotations. These instructions will be carried out in all the collaborating devices, thus enabling real-time collaborations while still conserving communication bandwidth.
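- The bandwidth argument above can be illustrated with rough arithmetic; the frame size, frame rate and instruction encoding below are illustrative assumptions, not figures from this application:

```python
import json

# One uncompressed 512x512, 16-bit medical image frame.
FRAME_BYTES = 512 * 512 * 2

# A gesture, captured once as a short instruction message.
instruction = json.dumps({"action": "zoom", "coords": [[100, 100], [180, 160]]})
instruction_traffic = len(instruction.encode())

# Conventional collaboration: frames re-sent at a modest sampling rate.
frames_per_second = 10
frame_traffic = FRAME_BYTES * frames_per_second  # per second of collaboration

# With these illustrative numbers the ratio is on the order of 10^5.
print(frame_traffic // instruction_traffic)
```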
- the method of the invention may be enabled in the form of a software tool written as instructions for executing an algorithm in an appropriate programming language. The software may then be executed on collaborating devices such that, when a touch event is performed on one device, the same touch event is re-created on all of the collaborating devices without the need for intervention by any user other than the first.
- Fig. 2 is a diagrammatic representation of an exemplary embodiment of the system of the invention 20.
- the system of the invention is particularly useful for interacting over images, especially from a medical modality.
- the image may be obtained from a suitable location such as an image server.
- An exemplary image server is PACS described herein.
- the system of the invention 20 comprises a gesture tool kit (not shown in the Fig. 2).
- the gesture tool kit comprises at least one gesture and at least one instruction for each gesture.
- the system of the invention 20 then comprises a first device 22 that comprises a first touch enabled user interface.
- the first device 22 is used by a first collaborator (not shown in Fig. 2) to open an image from the image server.
- the first collaborator creates a touch event through one or more touch screen recognizable gestures on the image.
- the system 20 then comprises a processing device 24 to capture at least one instruction.
- the instruction comprises at least one set of co-ordinates.
- the gesture tool kit may be present as part of the first device 22, and the at least one instruction is generated from the gesture tool kit in the device and captured by the processing device.
- the gesture tool kit may be present as part of the processing device 24, and the touch event is transmitted to the processing device 24, wherein the at least one instruction associated with the touch event is extracted from the gesture tool kit and captured by the processing device 24.
- the gesture tool kit may be a stand-alone separate device, and the processing device 24 extracts the at least one instruction associated with the touch event from the gesture tool kit and captures it.
- the system comprises a transmission means (not shown in figure) that transmits the at least one instruction.
- Suitable transmission means include wired network connections such as LAN, telephone ports, and the like; wireless network connections such as WAN, WLAN, and the like; and combinations thereof.
- the at least one instruction is transmitted to a second device 26.
- the second device is used by a second collaborator and has the same image being viewed by the first collaborator on the first device.
- the second device 26 is configured to receive the at least one instruction from the processing device 24 and carry out the at least one instruction to re-create the touch event on the image on the second device.
- the processing device may be a server which is in constant contact with the devices in collaboration.
- the server may also comprise a storage location which stores the at least one instruction associated with all the touch events related to a collaborative event. This ensures that the image and all the actions, such as zooming, panning, annotating, and the like, may be retrieved at any later point in time.
- the system of the invention may advantageously use an appropriate software tool that encodes the algorithm associated with the method of the invention.
- the software may then be installed in all the collaborators' devices, wherein the at least one instruction for each touch event is converted to an appropriate executable instruction for each of the other devices and the same touch event is re-created on all the devices. Subsequently, the images and the touch events on the device of the first user may be replicated in all the collaborators' devices without the necessity for any other users' intervention, while still conserving bandwidth during communication and avoiding repeated transmission of bandwidth consuming images.
- the system of the invention may also incorporate security features such as encryption and decryption algorithms to secure the information contained within, and the information being received and transmitted. Further, the system may also include secure logging in with password of appropriate strengths to be used for collaborators to log into the system and collaborate freely within the confines of the system. The entire system may be operated within a virtual private network to ensure the privacy and security of all the data.
- a user of a first device places a finger on a specific location of the user interface associated with the image.
- the location of the finger will be referred to by a set of co-ordinates (x1, y1, z1).
- the user drags the finger across the user interface to another location of the user interface.
- Each distinct new location of the user interface that the finger is in contact with will be assigned a set of co-ordinates (x2, y2, z2), (x3, y3, z3), (x4, y4, z4) etc.
- the last point on the user interface that was in contact with the finger has a set of co-ordinates (xn, yn, zn).
- the co-ordinates, along with the action of moving the image, are converted into a set of instructions, which are then saved as an executable file.
- the executable file is then assigned a name which comprises the image file name, date, time, and the number of actions.
- the file is then transmitted to a second device over a LAN, wherein the instructions are executed to re-create the panning action.
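- The file-naming convention just described can be sketched as follows; the `.instr` extension and timestamp layout are illustrative assumptions:

```python
from datetime import datetime

# Sketch: build the instruction file name from the image file name,
# date, time, and the number of actions, as described above.
def instruction_file_name(image_name: str, actions: list,
                          now: datetime) -> str:
    stamp = now.strftime("%Y%m%d_%H%M%S")
    return f"{image_name}_{stamp}_{len(actions)}.instr"

name = instruction_file_name("mri_scan_042", ["pan"],
                             datetime(2012, 3, 1, 9, 30, 0))
print(name)   # → mri_scan_042_20120301_093000_1.instr
```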
- a view comprising a medical image from a modality like CT is viewed by a first and a second user.
- the first user creates a touch event of cropping a certain section of the image, so that the view is updated to a specific portion of the original image.
- This touch event cropping the image is converted into a series of instructions, which may comprise a series of co-ordinates on the screen indicating the area of cropping, and the instruction associated with cropping.
- These instructions are then transmitted to the second user's device, wherein, upon executing the series of instructions, the touch event is re-created and hence, the view is updated to provide the cropped image.
- the communication bandwidth is preserved by sending only the instructions for cropping instead of the entire cropped image.
- the cropped image may be moved from a corner of the screen to the centre of the screen to enhance viewing effect.
- a touch event of moving the cropped image is created.
- This new touch event is then converted to instructions comprising original co-ordinates of the cropped image and the final co-ordinates of the cropped image, along with an instruction for moving.
- the instructions are then transmitted to the second user's device, wherein, upon executing the series of instructions, the touch event is re-created and hence, the cropped image is moved to the appropriate location on the user interface.
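- The crop-then-move sequence above can be sketched as two co-ordinate-based instructions executed against the second device's view; the instruction dictionaries and view-state fields are illustrative assumptions:

```python
# Sketch: re-create a crop and a subsequent move on the second device's
# view by executing the received instructions in order.
def execute(view: dict, instruction: dict) -> dict:
    if instruction["op"] == "crop":
        view = {**view, "region": instruction["area"]}
    elif instruction["op"] == "move":
        view = {**view, "position": instruction["to"]}
    return view

instructions = [
    {"op": "crop", "area": (50, 50, 200, 200)},  # screen co-ordinates of the crop
    {"op": "move", "to": (400, 300)},            # final position (screen centre)
]
view = {"image": "ct_slice_17", "region": None, "position": (0, 0)}
for instr in instructions:
    view = execute(view, instr)
print(view["region"], view["position"])   # → (50, 50, 200, 200) (400, 300)
```

Only these small instruction records are transmitted, never the cropped pixels themselves.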
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention provides a method for capturing a touch event, wherein the touch event has been created through at least one gesture on a first device comprising a first touch enabled user interface. The method involves capturing at least one instruction for the touch event. The method of the invention can be used advantageously for enabling collaborations between multiple users without using all the bandwidth that is used for conventional collaboration methods. The invention also provides a system for enabling collaborative interaction that is based on the method of the invention.
Description
METHOD OF COLLABORATION BETWEEN DEVICES, AND SYSTEM THEREFROM
TECHNICAL FIELD
[0001] The invention relates generally to a method of collaboration and more specifically to a method of collaboration between users of devices wherein at least one device comprises touch enabled user interface.
BACKGROUND
[0002] Advances in communication technology have enabled people to communicate in many new ways, beyond voice calls and text messages. Even video calls are possible using the existing communication networks. Currently, a rich variety of communication is possible between computer systems and mobile phones, such as voice calls, video calls, emails, chat, text messages, etc. There are even ways of taking complete control of another system from a remote location, such as "remote desktop" and "remote windowing" systems, through which a person can work on a remotely located system over the network, using hardware peripherals (mouse, keyboard) attached to a computer closer to them. Through these methods, it is possible for two or more people who are geographically separated, but connected to a good quality computer network, to collaborate using these tools.
[0003] Further, the availability of higher bandwidth allows new applications to be developed for deployment. One such application scenario involves multiple parties wanting to collaboratively review visual media such as images, video or documents. In such situations, what is of importance is for those involved to review the visual media together. Typically, such collaborations involve sending information across the network that comprises "whole image frames" sampled at a certain "frames per second" on the source system. However, such collaborations and "remote desktops" generally involve heavy bandwidth usage over the network, and most often, bandwidth limitations cause applications to fail.
[0004] Thus there is a dire need to develop a method and system that enables collaborations across platforms without stressing the network capacity, while still allowing for real time collaborations.
BRIEF DESCRIPTION
[0005] In one aspect, the invention provides a method for capturing a touch event.
The method comprises creating the touch event through at least one gesture on a first device comprising a first touch enabled user interface. The method then includes capturing at least one instruction for the touch event.
[0006] In another aspect, the invention provides a method for collaborative interaction for an image. The method comprises providing a first collaborator for creating a touch event through a gesture on a first device comprising a first touch enabled user interface having the image. The method then includes capturing at least one instruction for the touch event. The method then involves transmitting the at least one instruction for the touch event to a second device for a second collaborator. The method further comprises carrying out the at least one instruction at the second device to re-create the touch event on the image. The image is accessed by the first and second collaborator from an image server.
[0007] In yet another aspect, the invention provides a system for enabling collaborative interaction. The system comprises a gesture tool kit, a first device comprising a first touch enabled user interface, and a processing device.
DRAWINGS
[0008] These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0009] FIG. 1 shows steps for the method of the invention; and
[0010] FIG. 2 is a diagrammatic representation of an exemplary embodiment of the system of the invention.
DETAILED DESCRIPTION
[0011] As used herein and in the claims, the singular forms "a," "an," and "the" include the plural reference unless the context clearly indicates otherwise.
[0012] As used herein, "user" is a person in possession of and using a device that comprises a touch enabled user interface.
[0013] As used herein, "touch enabled user interface," also sometimes referred to in the art as touch user interface, means any user interface that is based on haptics, that is a user interface that acts on the sensation of touch. The interaction of a user with the touch user interface is also sometimes referred to as a gesture. Without being bound to any theory, it is known that touch enabled user interface comprises arrays of switches on one side of the user interface. One or more switches are activated when a gesture is performed. The exact action to be performed based on the gesture may be present on a database that is linked to the array of switches. The database may be present on a storage location with the capability to execute instructions, such as EPROM, EEPROM, etc. Gesture as used herein also includes interacting through other means such as typing, speaking, pointing, and the like.
[0014] As used herein, a "touch event" means any action that has been triggered by at least one gesture by the user, also sometimes referred to as touch actions. These gestures include, for example, pointing and/or marking a particular region or area of the user interface, turning pages, zooming, panning, scrolling, moving selected portions of a page, cropping out selected sections of a user interface, moving cropped sections to predetermined locations, opening a link provided on the page, closing a page, annotating, and the like, and combinations thereof. Such gestures are known to one of ordinary skill in the art. As an example, panning in some devices would involve placing a finger at a location on the user interface and then moving the finger until a required portion of the user interface is in view. To achieve this, when the finger is placed on the user interface, a set of co-ordinates is immediately generated based on the location. As the finger is moved, the co-ordinates are updated and transmitted to the user interface, where the view is updated until the finger is released, at which time the view is held constant and no more changes are effected. Scrolling and zooming may also be effected in such a manner. Depending on the number of fingers associated with the gesture that are in contact with the touch enabled user interface, a touch event will be triggered and enacted to a desired extent, which will depend on a number of factors, such as time of contact, extent of contact, distance of movement from initial contact, and the like. Thus, for example, two fingers may mean zooming, and the extent of zooming will depend on the distance by which the two fingers are moved apart relative to the initial contact.
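The gesture-to-action database described in [0013] and [0014] can be sketched as a simple lookup; the gesture names and actions below are illustrative assumptions, not an enumeration from this application:

```python
# Sketch: the database linked to the touch interface maps a performed
# gesture to the exact action to be carried out.
GESTURE_DATABASE = {
    ("one_finger", "drag"): "pan",
    ("two_finger", "spread"): "zoom_in",
    ("two_finger", "pinch"): "zoom_out",
}

def action_for(fingers: str, motion: str) -> str:
    """Resolve a gesture to its touch-event action, or 'none' if unmapped."""
    return GESTURE_DATABASE.get((fingers, motion), "none")

print(action_for("two_finger", "spread"))   # → zoom_in
```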
[0015] The view, as used herein, on a user interface may be an image, a text, a video clip, a web page, and the like. In one embodiment, the view is an image. In a specific embodiment, the view on the user interface is an image from a medical modality, such as retinal scan images, X-Ray, Ultrasound, Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and the like. Images obtained from a medical modality are widely used for the diagnosis and treatment of patients undergoing procedures. One skilled in the art will understand that some of the modalities may provide information in the form of a movie clip. An exemplary modality giving video clips as the view includes Ultrasound. Images from medical modalities may be stored and retrieved from secure locations such as image servers. One exemplary storage location for images from medical modalities known in the art is Picture Archiving and Communication Storage, also referred to in the art as PACS. This enables images from different scanning techniques to be stored electronically and viewed on computer screens, and enables doctors and other health care professionals to access information and compare it with previous images electronically. PACS is a combination of hardware and software dedicated to the short and long term storage, retrieval, management, distribution and presentation of images.
[0016] Annotating, as used herein, means any metadata used by a user to mark a given view on a user interface. Annotations may be in the form of texts; drawings, such as arrows, circles or rectangles, and the like; color highlighting, and so on. Arrows, circles and such shapes may be used to emphasize a relevant portion of a view. Text annotations may be used to record comments of a user on the view, in order to provide opinions, rationales and reasoning, and so on. Other text annotations may include device position information, such as "Office", "Work", "Home", or "In Transit", etc.
Such device position information may be made available from a variety of sources, such as the user, a suitable positioning system such as GPS, the server to which the device is connected, and the like, and combinations thereof. Annotations may also be in the form of voice recordings superimposed on a view to provide auditory annotations. Thus, annotations may be generated through an appropriate gesture, such as clicking on an icon, speaking into a microphone, video recording an event, and the like. Annotations may also be converted into a set of instructions that can be captured in a suitable format, such as an XML or HTML format. Besides the actual information input (such as a text, a circle, an arrow, etc.), an annotation will also include the exact location on the screen at which it was added, which may be, in one embodiment, in the form of co-ordinates.
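The conversion of an annotation into a set of instructions in a format such as XML can be sketched as follows. This is a minimal illustration only; the element names, attribute names and the `annotation_to_xml` helper are assumptions made for the sketch, not part of the disclosure:

```python
import xml.etree.ElementTree as ET

def annotation_to_xml(kind, payload, x, y):
    """Serialize one annotation (its type, content, and the exact screen
    location at which it was added) into an XML string."""
    root = ET.Element("annotation", attrib={"type": kind})
    ET.SubElement(root, "payload").text = payload
    ET.SubElement(root, "location", attrib={"x": str(x), "y": str(y)})
    return ET.tostring(root, encoding="unicode")

# A text annotation placed at screen co-ordinates (120, 340):
xml_str = annotation_to_xml("text", "possible lesion here", 120, 340)
```

A voice or shape annotation would follow the same pattern, with the `payload` element carrying a reference to the recording or the shape parameters instead.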
[0017] A number of devices that use touch enabled user interfaces are commercially available today. An exemplary list of such devices includes the iPad™, iPhone™ and iPod Touch™ available from Apple Corporation, USA; Tablet PCs such as the Toshiba Portege™ M700 from Toshiba Corporation, USA, and the Lenovo X300™ from Lenovo Corporation, USA; and the like.

[0018] As noted herein, in one aspect the invention provides a method of capturing a touch event from a first device comprising a touch enabled user interface. The steps involved in the method of the invention 10 are shown in Fig. 1. The method includes creating a touch event on a first device comprising a touch enabled user interface, represented by numeral 12 in Fig. 1.

[0019] The method then includes capturing the at least one instruction for the touch event, represented by numeral 14 in Fig. 1. The instruction may be derived from the database of instructions associated with the touch event. The touch event results in an instruction derived from the database, which is then executed on the first device. The instruction is simultaneously captured in a suitable format such as, but not limited to, a text file, an algorithm coded in a programming language, and the like. Other formats would become known to those skilled in the art and are contemplated to be within the scope of the invention. The at least one instruction in one embodiment may comprise at least two co-ordinates for the touch event. The at least one instruction that is captured in step 14 may now be stored in a suitable format at an appropriate location. The format in which the at least one instruction is stored may be the same as the format in which it was captured, or any other suitable format. The appropriate location for storing the at least one instruction may include a server, a hard drive, a portable storage device, and the like. In one embodiment, the storage location for storing the at least one instruction is the PACS.
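The capture step of paragraph [0019] — deriving an instruction from a database of instructions associated with the touch event and simultaneously recording it — can be sketched as below. The gesture names, instruction names and the `handle_touch_event` helper are hypothetical placeholders, not terms from the disclosure:

```python
# Hypothetical database mapping recognized gestures to instructions.
GESTURE_DB = {"pinch_out": "ZOOM_IN", "pinch_in": "ZOOM_OUT", "drag": "PAN"}

captured_log = []  # instructions captured for later storage or transmission

def handle_touch_event(gesture, coords):
    """Derive the instruction for a gesture from the database, and capture
    it in a serializable form (here, a plain dict) at the same time as it
    would be executed on the first device."""
    instruction = {"op": GESTURE_DB[gesture], "coords": coords}
    captured_log.append(instruction)  # simultaneous capture
    return instruction

instr = handle_touch_event("drag", [(10, 20), (30, 40)])
```

The captured log can then be written to any of the storage locations named above (a server, a hard drive, or the PACS).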
[0020] The method subsequently involves transmitting the at least one instruction that was captured in step 14 to a second device, shown in Fig. 1 as 16. The second device is used by a second user to view the image the first user is viewing on the first device. In a typical use situation, when a second user wishes to view an original image, the image is made available from a suitable location, such as a server like the PACS. The image may be retrieved directly by the second user, or the first user or an administrator may provide permissions and instructions for the server to transfer the image to the second user. In one embodiment, the second device comprises a second touch enabled user interface. In another embodiment, the second device does not comprise a touch enabled user interface; instead it may be any one of a
desktop computer, laptop computer, mobile communication device, specialized computing device adapted for certain requirements, and the like. The at least one instruction is then converted to a format that is recognized by the second device. The transmitting may be done in any manner known to those skilled in the art. This includes, for example, transmitting through wired networks, such as LAN, telephone ports, and the like; wireless networks, such as WLAN, WAN, and the like; and combinations thereof. The transmission may also be through secured networks that involve appropriate levels of encryption and decryption, which is also contemplated to be within the scope of the invention. Such levels of security are necessary in many situations, for example to address privacy concerns or to obtain approvals from regulatory authorities.
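The conversion of a captured instruction into a transportable format that the receiving device recognizes can be sketched as follows. JSON over UTF-8 is assumed here purely for illustration; the disclosure leaves the wire format and transport (LAN, WLAN, WAN, and so on) open:

```python
import json

def encode_for_transmission(instruction):
    """Encode an instruction as UTF-8 JSON bytes, suitable for sending
    over any wired or wireless transport."""
    return json.dumps(instruction).encode("utf-8")

def decode_on_receiver(payload):
    """The receiving device converts the payload back into a form it
    recognizes before executing it."""
    return json.loads(payload.decode("utf-8"))

sent = encode_for_transmission({"op": "ZOOM_IN", "coords": [[50, 60]]})
received = decode_on_receiver(sent)
```

Encryption and decryption for secured networks would wrap the encoded bytes; that layer is omitted from this sketch.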
[0021] Once the second device has received the transmitted instruction, the instruction is carried out on it, as shown in Fig. 1 and depicted by numeral 18. In some embodiments, the at least one instruction from the first device is transmitted as such to the second device, after which the second device converts the at least one instruction to a format that it recognizes and is hence capable of executing. In this manner, the touch event that was enacted on the first device is re-created on the second device as well. Thus, the user of the second device sees an updated view on the user interface automatically, without having to repeat the touch event. Further, the touch event may be updated automatically on a real-time basis as long as the connection between the first and second devices is of sufficient quality and speed. Alternatively, the touch event may be performed on a user interface at any later time, as long as the at least one instruction is carried out along with the view. Thus, a second user on a second device may replay the entire set of views and the touch events that occurred originally, by retrieving the instructions from the storage location along with the views at any time later than the original time period when the actual set of gestures and touch events was recorded. In this context, one skilled in the art will also understand that the second device may be the same device as the first device on which a first user created the original views, gestures and touch events.
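Re-creating a touch event on the receiving device, and replaying a stored sequence of instructions at any later time, can be sketched as below. The instruction shapes (`op`, `factor`, `delta`) and the simple view-state dict are assumptions made for the sketch:

```python
def apply_instruction(view, instruction):
    """Re-create one touch event on the receiving device by updating
    its view state."""
    op = instruction["op"]
    if op == "ZOOM":
        view["zoom"] *= instruction["factor"]
    elif op == "PAN":
        dx, dy = instruction["delta"]
        view["origin"] = (view["origin"][0] + dx, view["origin"][1] + dy)
    return view

def replay(view, instruction_log):
    """Replay a stored sequence of instructions, either live as they
    arrive or retrieved from storage at a later time."""
    for instr in instruction_log:
        apply_instruction(view, instr)
    return view

log = [{"op": "ZOOM", "factor": 2.0}, {"op": "PAN", "delta": (15, -5)}]
final = replay({"zoom": 1.0, "origin": (0, 0)}, log)
```

The same `replay` path serves both the real-time case (a log of length one per event) and the time-delayed case (the full log retrieved from storage).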
[0022] In one embodiment, the at least one instruction may be executed by a first device, a second device, or combinations thereof, upon instructions on a series of predefined views, such as medical images or medical video images. For example, the method of the invention enables one to "ZOOM" a particular frame of a video to a certain zoom extent and subsequently carry the same zoom level forward to all subsequent frames. In this manner, a set of operations that was conducted on a first image is repeated on every subsequent image automatically, and rapid analysis of a series of images may be conducted without having to go through a series of repetitive steps manually, thus saving time and resources and making the user experience comfortable and easy. Further, collaboration and teaching may also be greatly facilitated using the method of the invention.
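Carrying an operation performed on one frame forward to every frame of a series can be sketched as follows; the per-frame representation is a hypothetical simplification:

```python
def zoom_all_frames(frames, factor):
    """Repeat the zoom performed on one frame across every frame of the
    series automatically, instead of zooming each frame manually."""
    return [{"frame": f["frame"], "zoom": f["zoom"] * factor} for f in frames]

series = [{"frame": i, "zoom": 1.0} for i in range(3)]
zoomed = zoom_all_frames(series, 4.0)
```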
[0023] Using the method of the invention, the touch event performed on a first device may be carried out on any number of devices associated with it, either on a real-time basis or in a time-delayed manner.
[0024] One of ordinary skill in the art will also understand that the method of the invention can be extended to a number of such touch events within a given time period, all of these touch events being re-created on a second device and on any number of other devices. Further, for explanation purposes, the invention is described as collaboration between two users, but one skilled in the art will appreciate that the method of the invention may be extended to any number of users involved in a collaboration.
[0025] The method of the invention is especially useful for collaboration between a first user and any number of further users, wherein all users are collaborating over a view. Thus, in one exemplary embodiment, during collaboration between several users, when a first user creates a zoom touch event on an MRI scan image using an appropriate gesture, the same touch event is re-created on all the devices involved in the collaboration. Subsequently, another user may create a touch event of annotating with a "pointing arrow" at an appropriate location in the MRI image, which touch event will then be re-created on all the collaborating devices using the method of the invention. The annotation may also be supplemented by a voice recording regarding the importance of the pointed location of the image.
[0026] In another exemplary embodiment, the method of the invention may be used for teaching purposes, wherein the views and the instructions associated with gestures and touch events are recorded in an appropriate location. Subsequently, this entire set of views and the instructions associated with all the gestures and touch events is retrieved from the storage location and re-created on a device comprising a touch enabled user interface.
[0027] Other exemplary uses for the method of the invention will become obvious to one skilled in the art, and are contemplated to be within the scope of the invention.
[0028] The method of the invention avoids the transfer of the views repeatedly at a certain "frame rate," which consumes a considerable amount of bandwidth and makes real-time collaboration difficult. The benefits of the invention stem from the fact that relevant views are transmitted only once from the first device to all the collaborating devices, and any further communication only involves the transfer of instructions related to touch gestures and annotations. These instructions are carried out in all the collaborating devices, thus enabling real-time collaboration while still conserving communication bandwidth.

[0029] One skilled in the art will appreciate that the method of the invention may be enabled in the form of a software tool written as instructions for executing an algorithm in an appropriate programming language. The software may then be executed on the collaborating devices such that when a touch event is performed on one device, the same touch event is re-created on all of the collaborating devices without the need for any intervention by any of the users other than the first user.
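The bandwidth argument of paragraph [0028] can be made concrete with rough arithmetic, comparing repeated transfer of rendered views at a frame rate against sending the image once followed by small instruction messages. The byte counts below are illustrative assumptions only, not figures from the disclosure:

```python
def bytes_streamed(frame_bytes, fps, seconds):
    """Bandwidth consumed by repeatedly re-sending the rendered view
    at a given frame rate."""
    return frame_bytes * fps * seconds

def bytes_instruction_based(image_bytes, instr_bytes, n_instructions):
    """Bandwidth when the view is sent once and only instructions follow."""
    return image_bytes + instr_bytes * n_instructions

# Assumed: a 200 kB view, 10 fps for one minute, versus one image
# transfer plus 30 instructions of ~100 bytes each.
streamed = bytes_streamed(frame_bytes=200_000, fps=10, seconds=60)
instructed = bytes_instruction_based(image_bytes=200_000,
                                     instr_bytes=100, n_instructions=30)
```

Under these assumptions the instruction-based approach transfers roughly three orders of magnitude less data for the same session.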
[0030] In another aspect, the invention provides a system for enabling collaborative interaction. Fig. 2 is a diagrammatic representation of an exemplary embodiment of the system of the invention 20. The system of the invention is particularly useful for interacting over images, especially from a medical modality. The image may be obtained from a suitable location such as an image server. An exemplary image server is PACS described herein. The system of the invention 20 comprises a gesture tool kit (not shown in the Fig. 2). The gesture tool kit comprises at least one gesture and at least one instruction for each gesture. The system of the invention 20 then comprises a first device 22 that comprises a first touch enabled user interface. The first device 22 is used by a first collaborator (not shown in Fig. 2) to open an image from the image server. The first collaborator creates a touch event through one or more touch screen recognizable gestures on the image.
[0031] The system 20 then comprises a processing device 24 to capture at least one instruction. The instruction comprises at least one set of co-ordinates. In one embodiment, the gesture tool kit may be present as part of the first device 22, and the at least one instruction is generated from the gesture tool kit in the device and captured by the processing device. In another embodiment, the gesture tool kit may be present as part of the processing
device 24, and the touch event is transmitted to the processing device 24, wherein the at least one instruction associated with the touch event is extracted from the gesture tool kit and captured by the processing device 24. In an alternate embodiment, the gesture tool kit may be a stand-alone separate device, and the processing device 24 extracts the at least one instruction associated with the touch event from the gesture tool kit and captures it.
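The three placements of the gesture tool kit described above share the same interaction: the processing device extracts the instruction associated with a touch event from the tool kit and captures it. A minimal sketch of that interaction, with hypothetical class and method names:

```python
class GestureToolkit:
    """Tool kit holding at least one gesture and at least one instruction
    for each gesture; it may live on the first device, on the processing
    device, or stand alone."""
    def __init__(self):
        self._table = {"two_finger_spread": "ZOOM_IN", "one_finger_drag": "PAN"}

    def instruction_for(self, gesture):
        return self._table[gesture]

class ProcessingDevice:
    """Extracts the instruction for a touch event from the tool kit and
    captures it, together with its co-ordinates."""
    def __init__(self, toolkit):
        self.toolkit = toolkit
        self.captured = []

    def capture(self, gesture, coords):
        instr = {"op": self.toolkit.instruction_for(gesture), "coords": coords}
        self.captured.append(instr)
        return instr

proc = ProcessingDevice(GestureToolkit())
instr = proc.capture("one_finger_drag", [(0, 0), (5, 5)])
```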
[0032] The system comprises a transmission means (not shown in the figure) that transmits the at least one instruction. Suitable transmission means include wired network connections such as LAN, telephone ports, and the like; wireless network connections such as WAN, WLAN, and the like; and combinations thereof. The at least one instruction is transmitted to a second device 26. The second device is used by a second collaborator and has the same image being viewed by the first collaborator on the first device. The second device 26 is configured to receive the at least one instruction from the processing device 24 and carry out the at least one instruction to re-create the touch event on the image on the second device.

[0033] The processing device may be a server which is in constant contact with the devices in collaboration. The server may also comprise a storage location which stores the at least one instruction associated with all the touch events related to a collaborative event. This ensures that the image and all the actions, such as zooming, panning, annotating, and the like, may be retrieved at any later point in time.

[0034] One skilled in the art may appreciate that the system of the invention may advantageously use an appropriate software tool that encodes the algorithm associated with the method of the invention. The software may then be installed on all the collaborators' devices, wherein the at least one instruction for each touch event is converted to an appropriate executable instruction for each of the other devices and the same touch event is re-created on all the devices. Subsequently, the images and the touch events on the device of the first user may be replicated on all the collaborators' devices without the necessity for any other user's intervention, while still conserving bandwidth during communication and avoiding the repeated transmission of bandwidth-consuming images.
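The server-based arrangement of paragraphs [0032] and [0033] — storing every instruction and relaying it to all collaborating devices other than the originator — can be sketched as follows, with hypothetical names throughout:

```python
class CollaborationServer:
    """Server in constant contact with the collaborating devices: it
    stores every instruction for later retrieval and relays each one to
    all devices except the device that originated it."""
    def __init__(self):
        self.devices = {}   # device_id -> inbox of received instructions
        self.history = []   # stored instructions for later replay

    def register(self, device_id):
        self.devices[device_id] = []

    def publish(self, origin_id, instruction):
        self.history.append(instruction)
        for device_id, inbox in self.devices.items():
            if device_id != origin_id:
                inbox.append(instruction)

server = CollaborationServer()
for d in ("tablet_a", "tablet_b", "desktop_c"):
    server.register(d)
server.publish("tablet_a", {"op": "ANNOTATE", "shape": "arrow", "at": (12, 34)})
```

Because `history` retains every instruction, the entire collaborative session can be retrieved and replayed at any later point in time.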
[0035] The system of the invention may also incorporate security features such as encryption and decryption algorithms to secure the information contained within, and the information being received and transmitted. Further, the system may also include secure logging in with passwords of appropriate strength, to be used by collaborators to log into the system and collaborate freely within its confines. The entire system may be operated within a virtual private network to ensure the privacy and security of all the data.
EXAMPLES

Transmitting Instructions for Panning of an Image
[0036] A user of a first device places a finger on a specific location of the user interface associated with the image. The location of the finger will be referred to by a set of co-ordinates (x1, y1). Then, the user drags the finger across the user interface to another location of the user interface. Each distinct new location of the user interface that the finger is in contact with will be assigned a set of co-ordinates (x2, y2), (x3, y3), (x4, y4), etc. The last point on the user interface that was in contact with the finger has the set of co-ordinates (xn, yn). The co-ordinates, along with the action of moving the image, are converted into a set of instructions, which are then saved as an executable file. The executable file is then assigned a name which comprises the image file name, the date, the time, and the number of actions. The file is then transmitted to a second device through a LAN line, wherein the instructions are executed to re-create the panning action.
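The panning example can be sketched as an instruction record built from the finger's contact co-ordinates from touch-down to lift-off; the field names and the `pan_instruction` helper are assumptions made for the sketch:

```python
import json

def pan_instruction(image_name, touch_points):
    """Build a serializable pan instruction from the finger's contact
    points (x1, y1) ... (xn, yn)."""
    return {
        "image": image_name,
        "op": "PAN",
        "path": touch_points,  # every distinct contact co-ordinate
        "delta": (touch_points[-1][0] - touch_points[0][0],
                  touch_points[-1][1] - touch_points[0][1]),
    }

instr = pan_instruction("mri_scan_042.dcm", [(10, 10), (20, 15), (40, 30)])
payload = json.dumps(instr)  # ready to transmit to the second device
```

On the second device, decoding the payload and applying `delta` to the image origin re-creates the panning action without retransmitting the image itself.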
Cropping of an Image and Moving the Cropped Image
[0037] In another exemplary embodiment, a view comprising a medical image from a modality like CT is viewed by a first and a second user. The first user creates a touch event of cropping a certain section of the image; thus the view is updated to a specific portion of the original image. This touch event of cropping the image is converted into a series of instructions, which may comprise a series of co-ordinates on the screen indicating the area of cropping, and the instruction associated with cropping. These instructions are then transmitted to the second user's device, wherein, upon executing the series of instructions, the touch event is re-created and hence the view is updated to provide the cropped image. In this manner, the communication bandwidth is preserved by sending only the instructions for cropping instead of the entire cropped image. Further, the cropped image may be moved from a corner of the screen to the centre of the screen to enhance the viewing effect. In this instance, a touch event of moving the cropped image is created. This new touch event is then converted to instructions comprising the original co-ordinates of the cropped image and the final co-ordinates of the
cropped image, along with an instruction for moving. The instructions are then transmitted to the second user's device, wherein, upon executing the series of instructions, the touch event is re-created and hence, the cropped image is moved to the appropriate location on the user interface. [0038] While only certain features of the invention have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
Claims
1. A method for capturing a touch event, the method comprising: creating the touch event through at least one gesture on a first device comprising a first touch enabled user interface; and capturing at least one instruction for the touch event.
2. The method of claim 1 further comprising: transmitting the at least one instruction for the touch event to a second device; and carrying out the at least one instruction to re-create the touch event on the second device.
3. The method of claim 2, wherein the transmitting is through a server.
4. The method of claim 2, wherein the second device comprises a second touch enabled user interface.
5. The method of claim 1, wherein the at least one instruction comprises two or more co-ordinates for the touch event.
6. The method of claim 1, wherein the gesture is at least one of zooming, pointing, panning, annotating with annotations, and combinations thereof.
7. The method of claim 1, wherein the gesture is in relation with an image.
8. The method of claim 7, wherein the image is a video image.
9. The method of claim 6, wherein the annotation is at least one of voice, text, shape, device position information and color.
10. The method of claim 1 further comprising storing the at least one instruction at a storage location.
11. The method of claim 10 further comprising retrieving the at least one instruction from the storage location.
12. The method of claim 2 further comprising collaborating between at least a first user of the first device comprising the first touch enabled user interface and a second user for the second device using the at least one instruction.
13. The method of claim 12, wherein the collaborating is real-time.
14. The method of claim 2 wherein the transmitting is through at least one of wired or wireless transmission technique.
15. A system for enabling collaborative interaction, comprising: a gesture tool kit comprising at least one gesture and at least one instruction for each gesture; a first device comprising a first touch enabled user interface having the image used by a first collaborator for creating a touch event through one or more touch screen recognizable gestures on the image; and a processing device for capturing at least one instruction for the touch event.
16. The system of claim 15 further comprising a transmission means for transmitting the at least one instruction.
17. The system of claim 16 further comprising a second device having the image viewed by a second collaborator and configured to receive the at least one instruction for the touch event from the transmission means to re-create the touch event on the image at the second device.
18. The system of claim 17, wherein the second device comprises a second touch enabled user interface.
19. The system of claim 15, wherein the at least one instruction comprises two or more co-ordinates for the touch event.
20. The system of claim 15, wherein the at least one gesture is at least one of zooming, pointing, panning, annotating with annotations, and combinations thereof.
21. The system of claim 15, wherein the gesture is in relation to an image.
22. The system of claim 21, wherein the image is from a medical modality.
23. The system of claim 20, wherein the annotations are at least one of voice, text, shape, device position information and color.
24. The system of claim 15 further comprising storing the at least one instruction in a storage location.
25. The system of claim 24 further comprising retrieving the at least one instruction from the storage location.
26. The system of claim 16 wherein the transmission means is at least one of a wired means of communication or a wireless means of communication.
27. The system of claim 15, wherein the collaborative interaction is real-time.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN863/CHE/2011 | 2011-03-21 | ||
| IN863CH2011 | 2011-03-21 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2012127329A1 true WO2012127329A1 (en) | 2012-09-27 |
Family
ID=46878686
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2012/050627 Ceased WO2012127329A1 (en) | 2011-03-21 | 2012-02-13 | Method of collaboration between devices, and system therefrom |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2012127329A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101587392A (en) * | 2008-05-20 | 2009-11-25 | 宏碁股份有限公司 | Remote system synchronous operation method and local end touch screen synchronous operation method |
| US20100169842A1 (en) * | 2008-12-31 | 2010-07-01 | Microsoft Corporation | Control Function Gestures |
| US20100277337A1 (en) * | 2009-05-01 | 2010-11-04 | Apple Inc. | Directional touch remote |
| CN101893964A (en) * | 2010-07-21 | 2010-11-24 | 中兴通讯股份有限公司 | Mobile terminal remote control method and mobile terminal |
| US20100333043A1 (en) * | 2009-06-25 | 2010-12-30 | Motorola, Inc. | Terminating a Communication Session by Performing a Gesture on a User Interface |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12761395 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 12761395 Country of ref document: EP Kind code of ref document: A1 |