
WO2012127329A1 - Method of collaboration between devices, and associated system - Google Patents

Method of collaboration between devices, and associated system

Info

Publication number
WO2012127329A1
WO2012127329A1 (PCT/IB2012/050627)
Authority
WO
WIPO (PCT)
Prior art keywords
instruction
touch event
image
touch
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2012/050627
Other languages
English (en)
Inventor
Shyamol BANERJI
Sriram Kannan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of WO2012127329A1 publication Critical patent/WO2012127329A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • the invention relates generally to a method of collaboration, and more specifically to a method of collaboration between users of devices wherein at least one device comprises a touch enabled user interface.
  • the invention provides a method for capturing a touch event.
  • the method comprises creating the touch event through at least one gesture on a first device comprising a first touch enabled user interface.
  • the method then includes capturing at least one instruction for the touch event.
  • the invention provides a method for collaborative interaction for an image.
  • the method comprises providing a first collaborator for creating a touch event through a gesture on a first device comprising a first touch enabled user interface having the image.
  • the method then includes capturing at least one instruction for the touch event.
  • the method then involves transmitting the at least one instruction for the touch event to a second device for a second collaborator.
  • the method further comprises carrying out the at least one instruction at the second device to re-create the touch event on the image.
  • the image is accessed by the first and second collaborator from an image server.
  • the invention provides a system for enabling collaborative interaction.
  • the system comprises a gesture tool kit, a first device comprising a first touch enabled user interface, and a processing device.
  • FIG. 1 shows steps for the method of the invention
  • FIG. 2 is a diagrammatic representation of an exemplary embodiment of the system of the invention.
  • touch enabled user interface means any user interface that is based on haptics, that is a user interface that acts on the sensation of touch.
  • the interaction of a user with the touch user interface is also sometimes referred to as a gesture.
  • touch enabled user interface comprises arrays of switches on one side of the user interface. One or more switches are activated when a gesture is performed. The exact action to be performed based on the gesture may be present on a database that is linked to the array of switches. The database may be present on a storage location with the capability to execute instructions, such as EPROM, EEPROM, etc.
  • Gesture as used herein also includes interacting through other means such as typing, speaking, pointing, and the like.
  • a "touch event” means any action that has been triggered by at least one gesture by the user, also sometimes referred to as touch actions.
  • These gestures include, for example, pointing and/or marking a particular region or area of the user interface, turning pages, zooming, panning, scrolling, moving selected portions of a page, cropping out selected sections of a user interface, moving cropped sections to a predetermined locations, opening a link provided on the page, closing a page, annotating, and the like, and combinations thereof.
  • Such gestures are known to one of ordinary skill in the art.
  • panning in some devices would involve placing a finger at a location on the user interface and then moving the finger until a required portion of the user interface is in view.
  • a set of co-ordinates is generated based on the location.
  • the co-ordinates are updated, which are then transmitted to the user interface, where the view is updated until the finger is released, at which time the view is held constant and no more changes are effected.
  • Scrolling and zooming may also be effected in such a manner.
  • a touch event will be triggered and enacted to the desired extent, which will depend on a number of factors, such as time of contact, extent of contact, distance of movement from initial contact, and the like.
  • a two-finger gesture may mean zooming; the extent of zooming will depend on the distance the two fingers are moved apart relative to the initial contact.
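The dependence of zoom extent on finger separation described above can be sketched as follows. The helper name and formula are illustrative assumptions; the patent does not specify a particular mapping from gesture to zoom factor:

```python
import math

def zoom_factor(start_pts, end_pts, base=1.0):
    """Derive a zoom factor from a two-finger pinch gesture.

    start_pts / end_pts are [(x, y), (x, y)] pairs for the two fingers
    at initial contact and at release.  The factor scales with the
    change in finger separation, as described in the text above.
    """
    def separation(pts):
        (x1, y1), (x2, y2) = pts
        return math.hypot(x2 - x1, y2 - y1)

    d0, d1 = separation(start_pts), separation(end_pts)
    if d0 == 0:
        return base  # degenerate gesture: leave the zoom unchanged
    return base * (d1 / d0)

# Fingers move from 100 px apart to 200 px apart -> 2x zoom.
factor = zoom_factor([(10, 0), (110, 0)], [(0, 0), (200, 0)])
```

A real gesture toolkit would sample many intermediate positions rather than only the two endpoints; this sketch keeps just the quantities the text mentions.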
  • the view, as used herein, on a user interface may be an image, a text, a video clip, a web page, and the like.
  • the view is an image.
  • the view on the user interface is an image from a medical modality, such as retinal scan images, X-Ray, Ultrasound, Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and the like. Images obtained from a medical modality are widely used for the diagnosis and treatment of patients undergoing procedures. One skilled in the art will understand that some of the modalities may provide information in the form of a movie clip. An exemplary modality giving video clips as the view is Ultrasound.
  • Images from medical modalities may be stored and retrieved from secure locations such as image servers.
  • One exemplary storage location for images from medical modalities known in the art is Picture Archiving and Communication Storage, also referred to in the art as PACS.
  • This enables images from different scanning techniques to be stored electronically and viewed on computer screens, and allows doctors and other health care professionals to access information and compare it with previous images electronically.
  • PACS is a combination of hardware and software dedicated to the short and long term storage, retrieval, management, distribution and presentation of images.
  • Annotating, as used herein, means any metadata used to mark on a user interface by a user on a given view.
  • Annotations may be in the form of texts; drawings, such as arrows, circles or rectangles, and the like; color highlighting, and so on. Arrows, circles and such shapes may be used to emphasize a relevant portion of a view.
  • Text annotations may be used to record comments of a user on the view, in order to provide opinions, rationales and reasoning, and so on.
  • Other text annotations may include device position information, such as "Office", "Work", "Home", or "In Transit". Such device position information may be made available from a variety of sources, such as the user, a suitable positioning system such as GPS, or the server the device is connected to, and combinations thereof.
  • Annotations may also be in the form of voice recordings superimposed on a view to provide auditory annotations.
  • annotations may be generated through an appropriate gesture, such as clicking on an icon, speaking into a microphone, video recording an event, and the like.
  • Annotations may also be converted into a set of instructions that can be captured in a suitable format, such as XML or HTML format.
  • Annotations will also include, besides the actual information input (such as text, a circle, an arrow, etc.), the exact location on the screen at which the annotation was added, which may be, in one embodiment, in the form of co-ordinates.
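The capture of an annotation into an XML instruction, as described above, might look like the following minimal sketch. The element and attribute names are hypothetical illustrations, not a format defined by the patent:

```python
import xml.etree.ElementTree as ET

def annotation_to_xml(kind, x, y, payload=""):
    """Serialize an annotation (arrow, circle, text, ...) together with
    the screen co-ordinates at which it was added, as suggested above."""
    ann = ET.Element("annotation", kind=kind, x=str(x), y=str(y))
    ann.text = payload
    return ET.tostring(ann, encoding="unicode")

# A text annotation placed at screen position (120, 340).
serialized = annotation_to_xml("text", 120, 340, "possible lesion here")
```

The same structure could equally be captured as HTML or JSON; XML is used here because the text names it as one suitable format.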
  • a number of devices that use touch enabled user interfaces are commercially available today.
  • the invention provides a method of capturing a touch event from a first device comprising a touch enabled user interface.
  • the steps involved in the method of the invention 10 are shown in Fig. 1.
  • the method includes creating a touch event on a first device comprising a touch enabled user interface, represented by numeral 12 in Fig. 1.
  • the method then includes capturing the at least one instruction for the touch event, represented by numeral 14 in Fig. 1.
  • the instruction may be derived from the database of instructions associated with the touch event.
  • the touch event results in an instruction derived from the database, which is then executed on the first device.
  • the instruction is simultaneously captured in a suitable format such as, but not limited to, a text file, an algorithm coded in a programming language, and the like. Other formats will be apparent to those skilled in the art and are contemplated to be within the scope of the invention.
  • the at least one instruction in one embodiment may comprise at least two co-ordinates for the touch event.
  • the at least one instruction that is captured in step 14 may now be stored in a suitable format at an appropriate location.
  • the format in which the at least one instruction is stored may be the same as the format in which it was captured, or in any other suitable format.
  • the appropriate location for storing the at least one instruction may include a server, a hard drive, a portable storage device, and the like. In one embodiment, the storage location for storing the at least one instruction is the PACS.
  • the method subsequently involves transmitting the at least one instruction that was captured in step 14 to a second device, shown in Fig. 1 as 16.
  • the second device is used by a second user to view the image the first user is viewing on the first device.
  • the image is made available from a suitable location, such as a server like PACS.
  • the image may be retrieved directly by the second user, or the first user or an administrator may provide permissions and instructions for the server to transfer the image to the second user.
  • the second device comprises a second touch enabled user interface.
  • the second device does not comprise a touch enabled user interface; instead it may be any one of a desktop computer, laptop computer, mobile communication device, specialized computing device adapted for certain requirements, and the like.
  • the at least one instruction is then converted to a format that is recognized by the second device.
  • the transmitting may be done in any format known to those skilled in the art. This includes, for example, transmitting through wired networks, such as LAN, telephone ports, and the like; wireless networks such as WLAN, WAN, and the like; and combinations thereof.
  • the transmission may also be through secured networks that involve appropriate levels of encryption and decryption, which is also contemplated to be within the scope of the invention. Such levels of security are necessary for many situations, including privacy issues, for obtaining approvals from regulatory authorities, and the like.
  • the instruction is carried out on it, as shown in Fig. 1 and depicted by numeral 18.
  • the at least one instruction from the first device is transmitted as such to the second device, after which the second device converts the at least one instruction to a format that is recognized by it, and hence capable of executing the at least one instruction.
  • the touch event that was enacted on the first device is now re-created on the second device as well.
  • the user of the second device sees an updated view on the user interface automatically without having to repeat the touch event.
  • the touch event may be updated automatically on a real-time basis as long as the connection between the first and second device is of a certain quality and sufficient speed.
  • the touch event may be performed on a user interface at any time period, as long as the at least one instruction is carried out along with the view.
  • a second user on a second device may replay the entire set of views and the touch events that occurred originally, by retrieving the instructions from the storage location along with the views at any later time period as compared to the original time period when the actual set of gestures and touch events were recorded.
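The replay described above — a second device re-creating recorded touch events by executing stored instructions rather than receiving frames — might be sketched as follows. The JSON instruction schema and view model are invented for illustration:

```python
import json

class ReplayDevice:
    """Minimal sketch of a second device that re-creates touch events
    by executing a stored instruction log."""

    def __init__(self):
        self.view = {"zoom": 1.0, "pan": (0, 0), "annotations": []}

    def execute(self, instr):
        op = instr["op"]
        if op == "zoom":
            self.view["zoom"] *= instr["factor"]
        elif op == "pan":
            dx, dy = instr["delta"]
            x, y = self.view["pan"]
            self.view["pan"] = (x + dx, y + dy)
        elif op == "annotate":
            self.view["annotations"].append(instr["payload"])

    def replay(self, log):
        """Execute every instruction in a stored log, in order."""
        for instr in json.loads(log):
            self.execute(instr)

# A log as it might have been recorded on the first device.
log = json.dumps([
    {"op": "zoom", "factor": 2.0},
    {"op": "pan", "delta": [30, -10]},
    {"op": "annotate", "payload": {"kind": "arrow", "x": 120, "y": 340}},
])
dev = ReplayDevice()
dev.replay(log)
```

Because the log is just data, the same replay works in real time (instructions streamed as they occur) or at any later time (instructions retrieved from storage), as the text describes.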
  • the second device may be the same device as the first device where the original views, gestures and touch events occurred by a first user.
  • the at least one instruction may be executed by a first device, second device, or combinations thereof, upon instructions on a series of predefined views, such as medical images or medical video images.
  • the method of the invention enables one to "ZOOM" a video to a particular frame to a certain zoom extent and subsequently carry the "ZOOM" to the same extent of zoom levels forward to all frames.
  • a set of operations that were conducted on a first image is repeated on every subsequent image automatically.
  • rapid analysis of a series of images may be conducted without having to go through a series of repetitive steps manually, thus saving time and resources, making the user experience very comfortable and easy.
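Carrying one recorded set of operations forward to every image of a series, as just described, can be sketched like this. The per-frame view representation is an assumption made for the example:

```python
def carry_forward(frames, ops):
    """Apply a recorded set of operations (e.g. a zoom and a pan) to
    every frame of a series, avoiding repetitive manual steps."""
    out = []
    for f in frames:
        view = {"frame": f, "zoom": 1.0, "pan": (0, 0)}
        for op in ops:
            if op["op"] == "zoom":
                view["zoom"] *= op["factor"]
            elif op["op"] == "pan":
                view["pan"] = tuple(a + b for a, b in zip(view["pan"], op["delta"]))
        out.append(view)
    return out

# The operations performed once on the first frame...
recorded = [{"op": "zoom", "factor": 3.0}, {"op": "pan", "delta": (5, 5)}]
# ...are applied automatically to all subsequent frames.
views = carry_forward(["frame1", "frame2"], recorded)
```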
  • collaborations and teaching may also be facilitated in a great manner using the method of the invention.
  • the touch event performed on a first device may be carried out on any number of devices associated with it, either on a real-time basis or in a time-delayed manner.
  • the method of the invention is especially useful for collaboration between a first user and any number of further users, wherein all users are collaborating over a view.
  • a first user creates a touch event zoom of a MRI scan image using an appropriate gesture, the same touch event is re-created in all the devices involved in the collaboration.
  • another user may create a touch event of annotating using a "pointing arrow" at an appropriate location in the MRI image, which touch event will now be re-created on all the collaborating devices using the method of the invention.
  • the annotation may also be supplemented by a voice recording regarding the importance of the pointed location of the image.
  • the method of the invention may be used for teaching purposes, wherein the views and the instructions associated with gestures and touch events are recorded in an appropriate location. Subsequently, this entire set of views and instructions is retrieved from the storage location and re-created on a device comprising a touch enabled user interface.
  • Other exemplary uses for the method of the invention will become obvious to one skilled in the art, and are contemplated to be within the scope of the invention.
  • the method of the invention avoids the repeated transfer of views at a certain "frame rate," which consumes a considerable amount of bandwidth and makes real-time collaboration difficult.
  • the benefits of the invention stem from the fact that relevant views are transmitted only once from the first device to all the collaborating devices, and any further communication only involves transfer of instructions related to touch gestures and annotations. These instructions will be carried out in all the collaborating devices, thus enabling real-time collaborations while still conserving communication bandwidth.
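The bandwidth argument above can be made concrete with rough arithmetic. The frame size and rate below are assumed figures chosen only to illustrate the order of magnitude:

```python
import json

# Assumption: a 512x512, 16-bit medical image slice re-sent
# at 10 frames per second for one second of interaction.
frame_bytes = 512 * 512 * 2        # bytes per frame
stream_bytes = frame_bytes * 10    # bytes for one second of frames

# The same second of interaction expressed only as instructions
# (ten small pan updates), as the method proposes.
instructions = [{"op": "pan", "delta": [3, -1]} for _ in range(10)]
instruction_bytes = len(json.dumps(instructions).encode())

# Instructions are several orders of magnitude smaller than frames.
ratio = stream_bytes / instruction_bytes
```

Under these assumptions the instruction stream is thousands of times smaller than re-sending the view, which is the saving the text claims once the image itself has been transmitted a single time.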
  • the method of the invention may be enabled in the form of a software tool written as instructions for executing an algorithm in an appropriate programming language. The software may then be executed in collaborating devices such that when a touch event is performed in one device, the same touch event is re-created in all of the collaborating devices without the need for any intervention by any user except the first user.
  • Fig. 2 is a diagrammatic representation of an exemplary embodiment of the system of the invention 20.
  • the system of the invention is particularly useful for interacting over images, especially from a medical modality.
  • the image may be obtained from a suitable location such as an image server.
  • An exemplary image server is PACS described herein.
  • the system of the invention 20 comprises a gesture tool kit (not shown in the Fig. 2).
  • the gesture tool kit comprises at least one gesture and at least one instruction for each gesture.
  • the system of the invention 20 then comprises a first device 22 that comprises a first touch enabled user interface.
  • the first device 22 is used by a first collaborator (not shown in Fig. 2) to open an image from the image server.
  • the first collaborator creates a touch event through one or more touch screen recognizable gestures on the image.
  • the system 20 then comprises a processing device 24 to capture at least one instruction.
  • the instruction comprises at least one set of co-ordinates.
  • the gesture tool kit may be present as part of the first device 22, and the at least one instruction is generated from the gesture tool kit in the device and captured by the processing device.
  • the gesture tool kit may be present as part of the processing device 24, and the touch event is transmitted to the processing device 24, wherein the at least one instruction associated with the touch event is extracted from the gesture tool kit and captured by the processing device 24.
  • the gesture tool kit may be a stand-alone separate device, and the processing device 24 extracts the at least one instruction associated with the touch event from the gesture tool kit and captures it.
  • the system comprises a transmission means (not shown in figure) that transmits the at least one instruction.
  • Suitable transmission means include wired network connections such as LAN, telephone ports, and the like; wireless network connections such as WAN, WLAN, and the like; and combinations thereof.
  • the at least one instruction is transmitted to a second device 26.
  • the second device is used by a second collaborator and displays the same image being viewed by the first collaborator on the first device.
  • the second device 26 is configured to receive the at least one instruction from the processing device 24 and carry out the at least one instruction to re-create the touch event on the image on the second device.
  • the processing device may be a server which is in constant contact with the devices in collaboration.
  • the server may also comprise a storage location which stores the at least one instruction associated with all the touch events related to a collaborative event. This ensures that the image and all the actions, such as zooming, panning, annotating, and the like, may be retrieved at any later point in time.
  • the system of the invention may advantageously use an appropriate software tool that encodes the algorithm associated with the method of the invention.
  • the software may then be installed in all the collaborators' devices, wherein the at least one instruction for each touch event is converted to an appropriate executable instruction for each of the other devices and the same touch event is re-created on all the devices. Subsequently, the images and the touch events on the device of the first user may be replicated in all the collaborators' devices without the necessity for any other users' intervention, while still conserving bandwidth during communication and avoiding repeated transmission of bandwidth consuming images.
  • the system of the invention may also incorporate security features such as encryption and decryption algorithms to secure the information contained within, and the information being received and transmitted. Further, the system may also include secure logging in with password of appropriate strengths to be used for collaborators to log into the system and collaborate freely within the confines of the system. The entire system may be operated within a virtual private network to ensure the privacy and security of all the data.
  • a user of a first device places a finger on a specific location of the user interface associated with the image.
  • the location of the finger will be referred to by a set of co-ordinates (x1, y1, z1).
  • the user drags the finger across the user interface to another location of the user interface.
  • Each distinct new location of the user interface that the finger is in contact with will be assigned a set of co-ordinates (x2, y2, z2), (x3, y3, z3), (x4, y4, z4), etc.
  • The last point on the user interface in contact with the finger has a set of co-ordinates (xn, yn, zn).
  • the co-ordinates, along with the action of moving the image, are converted into a set of instructions, which are then saved as an executable file.
  • the executable file is then assigned a name which comprises the image file name, date, time, and the number of actions.
  • the file is then transmitted to a second device through a LAN line, wherein the instructions are executed to re-create the panning action.
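The panning example above can be sketched end to end. The file-naming scheme and the JSON instruction format are assumptions for illustration; the patent leaves the exact formats open:

```python
import json
from datetime import datetime

def capture_pan(image_name, points):
    """Package a panning gesture's co-ordinate trail as an instruction
    file named after the image, the date/time, and the action count."""
    instructions = [{"op": "pan", "to": [x, y]} for x, y in points]
    stamp = datetime(2012, 2, 13, 9, 30)  # fixed timestamp for the example
    fname = "{}_{}_{}actions.json".format(
        image_name, stamp.strftime("%Y%m%d-%H%M"), len(instructions))
    return fname, json.dumps(instructions)

# Three sampled finger positions during a pan across the image.
fname, payload = capture_pan("ct_slice_042", [(10, 10), (12, 11), (15, 13)])
```

On the second device, decoding the payload and replaying each `pan` entry re-creates the panning action without the image ever being re-sent.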
  • [Note: [x,y,z] may be changed to just [x,y], as a 3D UI is likely not in the purview of this patent.]
  • a view comprising a medical image from a modality like CT is viewed by a first and a second user.
  • the first user creates a touch event of cropping a certain section of the image, thus the view is updated to a specific portion of the original image.
  • This touch event cropping the image is converted into a series of instructions, which may comprise a series of co-ordinates on the screen indicating the area of cropping, and the instruction associated with cropping.
  • These instructions are then transmitted to the second user's device, wherein, upon executing the series of instructions, the touch event is re-created and hence the view is updated to provide the cropped image.
  • the communication bandwidth is preserved by sending only the instructions for cropping instead of the entire cropped image.
  • the cropped image may be moved from a corner of the screen to the centre of the screen to enhance viewing effect.
  • a touch event of moving the cropped image is created.
  • This new touch event is then converted to instructions comprising original co-ordinates of the cropped image and the final co-ordinates of the cropped image, along with an instruction for moving.
  • the instructions are then transmitted to the second user's device, wherein, upon executing the series of instructions, the touch event is re-created and hence, the cropped image is moved to the appropriate location on the user interface.
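The crop-then-move sequence above might be represented as instructions like the following sketch. The operation names and the view model are hypothetical, chosen only to mirror the two touch events in the text:

```python
def crop_instruction(x0, y0, x1, y1):
    """Cropping: the screen co-ordinates bounding the area to keep."""
    return {"op": "crop", "rect": [x0, y0, x1, y1]}

def move_instruction(src, dst):
    """Moving the cropped image: original and final co-ordinates."""
    return {"op": "move", "from": src, "to": dst}

def apply_instruction(view, instr):
    """Sketch of the second device updating its view from an instruction."""
    if instr["op"] == "crop":
        view["visible"] = instr["rect"]
    elif instr["op"] == "move":
        view["position"] = instr["to"]
    return view

# Crop a region, then move the cropped image toward screen centre.
view = {"visible": None, "position": (0, 0)}
for instr in [crop_instruction(50, 50, 300, 300),
              move_instruction((50, 50), (256, 256))]:
    view = apply_instruction(view, instr)
```

Only these two small dictionaries cross the network; the cropped pixels themselves are never re-transmitted, which is the bandwidth saving the example claims.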

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a method for capturing a touch event, the touch event having been created by means of at least one gesture on a first device comprising a first touch enabled user interface. The method comprises capturing at least one instruction for the touch event. The method of the invention may advantageously be used to enable collaboration between several users without consuming the full bandwidth required by conventional collaboration methods. The invention also relates to a system for enabling collaborative interaction that relies on the method of the invention.
PCT/IB2012/050627 2011-03-21 2012-02-13 Method of collaboration between devices, and associated system Ceased WO2012127329A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN863/CHE/2011 2011-03-21
IN863CH2011 2011-03-21

Publications (1)

Publication Number Publication Date
WO2012127329A1 (fr) 2012-09-27

Family

ID=46878686

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2012/050627 Ceased WO2012127329A1 (fr) 2011-03-21 2012-02-13 Method of collaboration between devices, and associated system

Country Status (1)

Country Link
WO (1) WO2012127329A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587392A (zh) * 2008-05-20 2009-11-25 Acer Inc. Remote system synchronization operation method and local touch screen synchronization operation method
US20100169842A1 (en) * 2008-12-31 2010-07-01 Microsoft Corporation Control Function Gestures
US20100277337A1 (en) * 2009-05-01 2010-11-04 Apple Inc. Directional touch remote
CN101893964A (zh) * 2010-07-21 2010-11-24 ZTE Corp. Mobile terminal remote control method and mobile terminal
US20100333043A1 (en) * 2009-06-25 2010-12-30 Motorola, Inc. Terminating a Communication Session by Performing a Gesture on a User Interface


Similar Documents

Publication Publication Date Title
US8924864B2 (en) System and method for collaboratively communicating on images and saving those communications and images in a standard known format
US20250168212A1 (en) Privacy Management And Adaptive Layouts Within A Communication Session
US8843852B2 (en) Medical interface, annotation and communication systems
US8886726B2 (en) Systems and methods for interactive smart medical communication and collaboration
US20180011627A1 (en) Meeting collaboration systems, devices, and methods
US11417367B2 (en) Systems and methods for reviewing video content
US20110113329A1 (en) Multi-touch sensing device for use with radiological workstations and associated methods of use
US10638089B2 (en) System and method of collaboratively communication on images via input illustrations and have those illustrations auto erase
US9641799B2 (en) Multimodal cognitive communications and collaborative knowledge exchange with visual neural networking and packetized augmented intelligence
US20150049163A1 (en) Network system apparatus and method of use adapted for visual neural networking with multi-channel multiplexed streaming medical imagery and packetized clinical informatics
US9778779B2 (en) Device and method for visual sharing of data
CN107615266A (zh) Method for capturing layered screen content
JP6407526B2 (ja) Medical information processing system, medical information processing method, and information processing system
Karim et al. Telepointer technology in telemedicine: a review
US20070020603A1 (en) Synchronous communications systems and methods for distance education
JP4696480B2 (ja) Remote conference system, site server, and program
WO2012127329A1 (fr) Method of collaboration between devices, and associated system
Shurtz Application Sharing from Mobile Devices with a Collaborative Shared Display
Cohen A practical guide to graphic communication for quality assurance, education, and patient care in echocardiography
Bogen et al. Telemedical technologies in urological cancer care: past, present and future applications
Denoue et al. Building digital project rooms for web meetings
US12477196B1 (en) AI-based video summary generation for content consumption
WO2025120517A1 (fr) Online collaborative medical platform for facilitating collaboration between remote medical practitioners
KR101468915B1 (ko) Multimedia service providing system and method
Poon et al. Internet-based videoconferencing and data collaboration for the imaging community

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12761395

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12761395

Country of ref document: EP

Kind code of ref document: A1