GB2503654A - Methods of outputting a manipulation of a graphic upon a boundary condition being met - Google Patents
- Publication number
- GB2503654A (application GB1211415.3A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- graphics
- data set
- image data
- image
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0485—Scrolling or panning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
In a first method an image data set including one or more image objects to be displayed is retrieved. First graphics corresponding to at least a portion of the retrieved image data set are displayed. A user input relating to a manipulation (e.g. scroll, zoom or pan) is detected, and at least part of the image data set is manipulated. If a boundary condition relating to a limit of the retrieved data set, beyond which there is no further data, is determined to be satisfied, a different manipulation process (e.g. stretch or warp) is applied to at least part of the image data set. In a second method a spatially non-uniform geometric transformation (shrink and stretch) is conducted on at least a portion of an image data set upon detecting a user input requesting an image transformation that goes beyond a limit of the retrieved image data set corresponding to a limit of the display.
Description
A METHOD AND APPARATUS FOR OUTPUTTING GRAPHICS TO A DISPLAY
Technical Field
The present invention relates to a method and an apparatus for outputting graphics to a display.
Background
User interfaces are a necessary tool in technology because they enable users to interact with machines such as computers, mobile phones, and other such electronic or mechanical equipment. The interaction enables the user to operate the equipment in order to perform specified functions. For example, a keyboard type user interface can be used to operate a computer and can be used to input instructions through typing. A separate user interface, in the form of a visual display, such as a computer monitor display, can be used to provide the user with visual feedback representative of the function performed by the computer (e.g. displaying typed characters in an electronic document).
The use of touch-sensitive displays, more commonly known as "touch screens", is becoming more important as technology continues to evolve, and such displays are becoming increasingly prominent, in particular in mobile phone technology. Using a touch-sensitive display in a mobile phone may be of particular benefit because it can forego the need for a dedicated keypad, navigation pad and separate display screen.
As the touch screen can be used for both the user input and visual output, the mobile phone need not reserve space on its exterior surfaces for a keypad/navigation pad and can thereby have a larger display area/touch screen. Other types of interfaces, such as non-touch interfaces, are also evolving and, for example, infra-red, radar, magnetic field and camera sensors are increasingly being used to generate user inputs.
As such, it has become of primary importance that user interfaces are intuitive and easy to use. It is also important that they provide feedback and information to the user so that the user is made aware of the actions they are performing.
For example, in situations where a touch-screen is displaying a virtual keypad, a tactile/haptic feedback to the user can be implemented so that the user can be made aware that their input of a key on the keypad has been registered by the phone. This can be done by, for example, visually highlighting the displayed key selected by the user and enabling the mobile phone to vibrate as the key input is registered by the mobile phone. The user is thereby provided with feedback indicating that the key input operation has been performed by the mobile phone.
As a further example, a user can scroll through image objects in an image gallery (i.e. a sequence of image objects) using their mobile device. The user may place their finger on or near the touch screen and slide their finger across the surface of the touch screen so that the currently displayed image object is translated towards the general movement direction of the user's finger. When the user removes their finger from the touch screen surface, the image object currently being translated is "released" and may continue to scroll without further user input. The translation of the image object may continue with a perceived dampened momentum so that the translation of the image object or objects will slow and then eventually stop. The image object can be translated so that it leaves the screen and is no longer displayed.
The image object that leaves the display area is then replaced by a next image object in the gallery. The translation and replacing of the image objects continues until the momentum of the scrolling motion is fully dampened, at which point the user may initiate another gesture to scroll through further image objects in the gallery. The translation of the currently displayed image object along with the user's gesture provides visual feedback to the user informing the user that they are scrolling through the image objects of the gallery.
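The dampened-momentum ("free scrolling") behaviour described above can be sketched in a few lines. This is an illustrative model only: the function name, friction constant and stop threshold are assumptions for the sketch, not values specified in the patent.

```python
# Hypothetical sketch of free scrolling with dampened momentum: after the
# user releases a fling, the offset keeps advancing each frame while the
# velocity decays, until the speed falls below a stop threshold.

def free_scroll(offset, velocity, friction=0.9, dt=1.0, stop_speed=0.5):
    """Advance the scroll offset until the fling momentum is fully damped.

    Returns the list of offsets rendered frame by frame.
    """
    frames = []
    while abs(velocity) > stop_speed:
        offset += velocity * dt
        velocity *= friction  # perceived dampening of the momentum
        frames.append(offset)
    return frames

frames = free_scroll(offset=0.0, velocity=40.0)
```

Each returned offset would be rendered as one frame of the translation, so the image objects visibly slow and then stop, exactly as the gallery behaviour above describes.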
Summary
According to a first aspect of the present invention, there is provided a method of outputting graphics to a display, the method comprising: retrieving an image data set, the retrieved image data set comprising one or more image objects to be displayed; outputting at least first graphics to a display area of the display, the first graphics corresponding to at least a portion of the retrieved image data set; detecting an input from a user representative of an image manipulation request; performing a first image manipulation process on at least part of the retrieved image data set in accordance with the image manipulation request to produce second graphics, the first image manipulation process providing a first type of alteration to said at least part of the retrieved image data set; outputting the second graphics to the display area of the display; determining that a boundary condition relating to the retrieved image data set has been satisfied, the boundary condition relating to a limit of the retrieved image data set beyond which there is no further element of the retrieved image data set to be displayed; in response to determining that the boundary condition has been satisfied, performing a second image manipulation process on at least part of the retrieved image data set to produce third graphics, the second image manipulation process providing a second type of alteration to said retrieved image data set, the second type of alteration being of a different type than said first type of alteration; and outputting the third graphics to the display area of the display.
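The switch from the first manipulation process to the second once the boundary condition is met might be modelled in one dimension as follows. The names and the simple scroll model are assumptions made for this sketch; the patent does not prescribe any particular implementation.

```python
# Illustrative sketch of the first aspect: apply the first manipulation
# (a translation) while data remains, and switch to a second, different
# manipulation (e.g. a stretch) once the boundary condition is satisfied.

def manipulate(scroll_pos, delta, content_length, view_length):
    """Return (new_pos, manipulation) for a scroll request of `delta`."""
    max_pos = content_length - view_length  # limit of the image data set
    new_pos = scroll_pos + delta
    if 0 <= new_pos <= max_pos:
        return new_pos, "translate"  # first image manipulation process
    # Boundary condition satisfied: no further element of the data set
    # beyond this limit, so a different type of alteration is applied.
    clamped = min(max(new_pos, 0), max_pos)
    return clamped, "stretch"  # second image manipulation process
```

Because the second process returns a distinct kind of alteration rather than simply clamping the translation, the user receives visually different feedback at the limit, which is the essence of the claimed method.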
Performing a first image manipulation process comprising a first type of alteration on at least part of the retrieved image data set in accordance with the image manipulation request enables a user to be provided with visual feedback relating to the actions they are performing (i.e. the image manipulation request). Providing a boundary condition and performing a second image manipulation process comprising a second, different type of alteration on the retrieved image data set when the boundary condition is satisfied enables the user to also be provided with visual feedback indicative of the boundary condition being satisfied. The different types of alterations are preferably performed on the same image object. As the second type of alteration is different from the first type of alteration, the user can readily distinguish between the two forms of visual feedback and therefore can rapidly recognise a difference between them. As such, the user may be made aware of boundary conditions relating to the functions that the user is trying to perform in a surprisingly effective manner.
The first type of alteration may be a first type of geometric transformation applied to at least part of the image data set and the second type of alteration may be a second type of geometric transformation applied to at least part of the image data set.
By using two different types of geometric transformations, the two different types of graphical alteration may both include movement of graphical elements on the display in correspondence with movement input by a user as the image manipulation request.
The first type of alteration may be a spatially uniform geometric transformation applied to at least part of the image data set and the second type of alteration may be a spatially non-uniform geometric transformation applied to at least part of the image data set.
In this manner, each of the different types of alteration can provide a distinctive effect so as to provide easily recognisable visual indications of the boundary conditions relating to the functions that the user is trying to perform in a highly effective manner.
A characteristic of the non-uniformity of the spatially non-uniform geometric transformation may be dependent on a position of a representation of the user input in relation to the display.
Hence, the spatially non-uniform geometric transformation has position dependency such that, as the user represented input changes position, the transformation evolves. This may be used to create a visual effect suggesting that the user is physically manipulating the displayed graphics and therefore provides the user with effective and intuitive feedback.
The spatially uniform geometric transformation may result in a translation of the first graphics in the general direction of the user input to produce the second graphics. Thus, the present invention can be used during scrolling so that the user can, for example, browse through multiple image objects on the display and be made aware of a boundary condition occurring during the scrolling.
The spatially non-uniform geometric transformation may result in a stretching of the first graphics in the general direction of the user input to produce the second graphics. The stretching acts to inform the user that their requested function has reached a boundary condition beyond which the function cannot be performed.
The boundary condition may, for example, relate to no further image objects being available, or the image data for a next image object in a series of image objects being determined to be corrupt, or the image data for a next image object in a series of image objects being determined to be in an unknown format. As the user is made aware of this, they can cease or change the image manipulation request.
The spatially non-uniform geometric transformation may result in a shrinking of the first graphics along two dimensions to produce the second graphics. This could create the effect of zooming out of currently displayed graphics.
The spatially non-uniform geometric transformation may result in a stretching of the first graphics along two dimensions to produce the second graphics. This could create the effect of zooming into the currently displayed graphics.
The spatially non-uniform geometric transformation may result in a warping of the second graphics in the general direction of the user input to produce the third graphics, wherein the degree of warping is dependent on the position of the user input in relation to the display. The warping can provide an indication to the user that a boundary condition has been satisfied.
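One way to realise a warp whose degree depends on the position of the user input is to map the distance travelled past the boundary to a stretch factor with diminishing returns. The formula and constants below are assumptions made purely for illustration; the patent does not specify any particular mapping.

```python
# Hypothetical position-dependent warp: the further the input travels
# past the boundary, the larger the stretch factor, saturating so the
# graphics never detach from the display edge.

def stretch_factor(overscroll_px, resistance=100.0, max_extra=0.25):
    """Map overscroll distance (px) to a stretch factor in [1, 1 + max_extra)."""
    return 1.0 + max_extra * (overscroll_px / (overscroll_px + resistance))
```

Applying this factor along the direction of the user input, anchored at the boundary edge, produces the evolving warp described above: as the input moves further, the warping grows, signalling that the boundary condition has been satisfied.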
A release of the input from the user representative of the image manipulation request may be detected during said first image manipulation process, and the second image manipulation process may be performed without further user input to produce the third graphics. Therefore, a translation of image objects can continue after a scroll gesture, in a "free scrolling" type manner, whereby the translation can occur without continued user input.
The second image manipulation process may be reversed without further user input, after the third graphics have been output, to produce fourth graphics. The reversing of the second image manipulation process therefore allows the return of graphics to their previous state. Such a process can create a bounce-like effect to provide an intuitive indication to the user that the boundary condition has been satisfied.
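The bounce-like reversal could be animated by easing the stretch factor back to its rest value over a few frames, with no further user input. The frame count and linear easing below are illustrative assumptions only.

```python
# Hypothetical reversal of the second manipulation: ease the stretch
# factor from its current value back to 1.0 (the rest state) frame by
# frame, producing the fourth graphics without further user input.

def reverse_stretch(start_factor, frames=5):
    """Return stretch factors easing from `start_factor` back to 1.0."""
    steps = []
    for i in range(1, frames + 1):
        t = i / frames  # linear easing parameter in (0, 1]
        steps.append(start_factor + (1.0 - start_factor) * t)
    return steps
```

Rendering one step per frame makes the stretched graphics visibly relax back to their previous state, which the user perceives as a bounce at the boundary.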
The release of the input from the user representative of the image manipulation request may occur during said second image manipulation process, and the second image manipulation process may be reversed in response to the detected release to produce fourth graphics. The reversing of the second image manipulation process therefore allows the return of graphics to their previous state.
The determination of the boundary condition being satisfied may comprise determining that at least one outer limit of the image data set has met at least one outer limit of the display area. This may be indicative that there is no further data in the image data set for display beyond the graphics displayed when the boundary condition is satisfied.
The image manipulation request may relate to a representative movement of the user input, the representative movement moving on the display towards at least one outer limit of the retrieved image data set. The first type of alteration may comprise a translation of image objects corresponding to the image manipulation request movement, applied to at least part of the image data set. The boundary condition may relate to the at least one outer limit of the retrieved image data set. The second type of alteration may be an image shrinking alteration applied to at least part of the image data set.
The image manipulation request may relate to a representative movement of the user input, the representative movement moving on the display away from at least one outer limit of the retrieved image data set. The first type of alteration may comprise a translation of image objects corresponding to the image manipulation request movement, applied to at least part of the image data set. The boundary condition may relate to the at least one outer limit of the retrieved image data set. The second type of alteration may be an image stretching alteration applied to at least part of the image data set.
The boundary condition may relate to a single outer limit of the image data set, and the second type of alteration may be a one-dimensional image transformation applied to at least part of the image data set.
The boundary condition may relate to two outer limits of the image data set, and the second type of alteration is a two-dimensional image transformation applied to at least part of the image data set.
The image manipulation request may comprise a zoom-out request and the determination of the boundary condition being satisfied may comprise determining that a maximum zoom-out limit, beyond which no further image data set is present, has been reached.
The image manipulation request may comprise a zoom-in request and the determination of the boundary condition being satisfied may comprise determining that a maximum zoom-in limit, beyond which no further image data set is present, has been reached.
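The zoom-in and zoom-out boundary checks described in the two preceding paragraphs might be sketched as a simple comparison against assumed minimum and maximum zoom levels. The limit values and return strings here are illustrative assumptions, not taken from the patent.

```python
# Hypothetical zoom boundary check: a requested zoom level outside the
# assumed limits satisfies the boundary condition, triggering the
# second manipulation process instead of a plain zoom.

def classify_zoom(requested, zoom_min=1.0, zoom_max=4.0):
    """Classify a requested zoom level against assumed zoom limits."""
    if requested > zoom_max:
        return "boundary: max zoom-in reached"
    if requested < zoom_min:
        return "boundary: max zoom-out reached"
    return "zoom"
```

Only when a limit is exceeded, beyond which no further image data set is present, does the method switch to the distinct boundary feedback.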
The display may comprise a touch-sensitive display and the image manipulation request may comprise a touch-sensitive gesture.
The image data set may include one or more image data portions which are not output on said display area before the image manipulation request is detected.
Therefore, the image manipulation request can be initiated to view image objects that are "hidden" from view.
According to a second aspect of the present invention, there is provided an apparatus for outputting graphics to a display, comprising: at least one processor; at least one memory; a display; wherein operation of the processor causes the apparatus to: retrieve an image data set from the at least one memory, the retrieved image data set comprising one or more image objects to be displayed by the display; output at least first graphics to a display area of the display, the first graphics corresponding to at least a portion of the retrieved image data set; detect an input from a user representative of an image manipulation request; perform a first image manipulation process on at least part of the retrieved image data set in accordance with the image manipulation request to produce second graphics, the first image manipulation process providing a first type of alteration to said at least part of the retrieved image data set; output the second graphics to the display area of the display; determine that a boundary condition relating to the retrieved image data set has been satisfied, the boundary condition relating to a limit of the retrieved image data set beyond which there is no further element of the retrieved image data set to be displayed; perform, in response to determining that the boundary condition has been satisfied, a second image manipulation process on at least part of the retrieved image data set to produce third graphics, the second image manipulation process providing a second type of alteration to said retrieved image data set, the second type of alteration being of a different type than said first type of alteration; and output the third graphics to the display area of the display.
Through the use of first and second image manipulation processes, an apparatus, such as a mobile phone, can be used to indicate to a user the performance of various requested functions. The user is therefore provided with an intuitive and easy-to-use device that provides informative feedback relating to the user input detected by the device.
According to a third aspect of the present invention, there is provided a computer program comprising computer program instructions, which, when performed by a computer, enable the computer to perform: retrieving an image data set, the retrieved image data set comprising one or more image objects to be displayed; outputting at least first graphics to a display area of the display, the first graphics corresponding to at least a portion of the retrieved image data set; detecting an input from a user representative of an image manipulation request; performing a first image manipulation process on at least part of the retrieved image data set in accordance with the image manipulation request to produce second graphics, the first image manipulation process providing a first type of alteration to said at least part of the retrieved image data set; outputting the second graphics to the display area of the display; determining that a boundary condition relating to the retrieved image data set has been satisfied, the boundary condition relating to a limit of the retrieved image data set beyond which there is no further element of the retrieved image data set to be displayed; in response to determining that the boundary condition has been satisfied, performing a second image manipulation process on at least part of the retrieved image data set to produce third graphics, the second image manipulation process providing a second type of alteration to said retrieved image data set, the second type of alteration being of a different type than said first type of alteration; and outputting the third graphics to the display area of the display.
According to a fourth aspect of the present invention, there is provided a method of outputting images to a display, the method comprising: retrieving an image data set, the retrieved image data set comprising one or more image objects to be displayed; outputting at least first graphics to a display area of the display, the at least first graphics corresponding to at least a portion of the retrieved image data set, wherein a limit of the retrieved image data set corresponds with a limit of a display area when the at least first graphics are displayed therein; detecting an input from a user representative of an image manipulation request to perform a geometric image transformation which goes beyond said limit; performing an image manipulation process on at least part of the retrieved image data set in accordance with the image manipulation request to produce second graphics, the image manipulation process comprising conducting a spatially non-uniform geometric transformation to the at least a portion of said retrieved image data set to provide visual feedback to the user indicating that said image manipulation request is a request to perform a geometric image transformation which goes beyond said limit; and outputting the second graphics to the display area of the display.
Providing a user with output second graphics relating to transformed retrieved image data using a spatially non-uniform geometric transformation in response to an image manipulation request allows an intuitive user interface to be provided that is responsive to a user's input and can provide feedback to the user indicative of their image manipulation request. As the transformation is applied when the limit of the retrieved image data set corresponds to the limit of a display, the user is provided with feedback indicative of the two limits corresponding with one another.
This can provide the user with an indication that the displayed graphic corresponds with at least one edge of a displayed image object.
A characteristic of the non-uniformity of the spatially non-uniform geometric transformation may be dependent on a position of a representation of the user input in relation to the display.
Therefore, the spatially non-uniform geometric transformation has position dependency such that, as the user represented input changes position, the transformation evolves. This creates the effect that the user is physically manipulating the displayed graphics and therefore provides the user with effective and intuitive feedback.
The conducting of the spatially non-uniform geometric transformation of the image data set may comprise: determining the point of the at least first graphics corresponding to the starting point of the user input; segmenting the at least first graphics at the starting point into two parts along a line that is perpendicular to a general direction of the user input; associating the line with the user input, wherein the movement of the line corresponds with the movement of the user input; shrinking the part of the first image that the line is moving towards; and stretching the part of the first image that the line is moving away from.
The stretching and shrinking may occur so that the first image retains its area and shape.
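The area-preserving shrink/stretch just described can be illustrated numerically: when the split line moves, the part it moves away from is scaled up and the part it moves towards is scaled down, such that the total width (and hence, at fixed height, the area) is retained. The simple linear model and names below are assumptions for this sketch.

```python
# Illustrative computation for the segmented shrink/stretch: the first
# graphics are split at the touch point `split_x`; as the line moves by
# `line_offset` px, one part is stretched and the other shrunk so that
# the total width of the image is preserved.

def segment_scales(total_width, split_x, line_offset):
    """Return (left_scale, right_scale) after the split line moves
    `line_offset` px in the positive direction from `split_x`."""
    new_split = split_x + line_offset
    left_scale = new_split / split_x                            # stretched part
    right_scale = (total_width - new_split) / (total_width - split_x)  # shrunk part
    return left_scale, right_scale
```

For example, splitting a 200 px image at its midpoint and moving the line 20 px gives scale factors of 1.2 and 0.8, and the scaled widths still sum to 200 px, so the image retains its overall size and shape as claimed.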
It may be determined whether or not the movement of the line has satisfied a threshold, and when the movement of the line has been determined to satisfy the threshold, a second image manipulation process may be performed on at least part of the retrieved image data set to produce third graphics.
The second image manipulation process may provide a spatially uniform geometric transformation to the at least part of the retrieved image data set.
The spatially uniform geometric transformation may result in a translation of the second graphics in the general direction of the user input to produce the third graphics.
When the movement of the line has been determined not to satisfy the threshold, the first image manipulation may be reversed to produce third graphics.
According to a fifth aspect of the present invention, there is provided a method substantially as described herein with reference to the description and accompanying drawings.
According to a sixth aspect of the present invention, there is provided an apparatus substantially as described herein with reference to the description and accompanying drawings.
According to a seventh aspect of the present invention, there is provided a computer program comprising computer program instructions which, when performed by a computer, enable the computer to perform the method substantially as described herein with reference to the description and accompanying drawings.
Further features and advantages of the invention will become apparent from the following description of preferred embodiments of the invention, given by way of example only, which is made with reference to the accompanying drawings.
Brief Description of the Drawings
Figure 1 shows a schematic diagram of an example of a mobile phone according to an embodiment of the present invention;
Figure 2 shows a schematic diagram of an example of a graphical framework according to an embodiment of the present invention;
Figure 3 shows a schematic flow diagram of the processes that occur in an example method of an embodiment of the present invention;
Figure 4a shows a schematic diagram of a first example of a display state according to an embodiment of the present invention, the display outputting first graphics;
Figure 4b shows a schematic diagram of the first example of a display state according to an embodiment of the present invention, the display outputting second graphics;
Figure 4c shows a schematic diagram of the first example of a display state according to an embodiment of the present invention, the display outputting third graphics;
Figure 4d shows a schematic diagram of the first example of a display state according to an embodiment of the present invention, the display outputting fourth graphics;
Figure 5a shows a schematic diagram of a second example of a display state according to an embodiment of the present invention, the display outputting first graphics;
Figure 5b shows a schematic diagram of the second example of a display state according to an embodiment of the present invention, the display outputting second graphics;
Figure 5c shows a schematic diagram of the second example of a display state according to an embodiment of the present invention, the display outputting third graphics;
Figure 5d shows a schematic diagram of the second example of a display state according to an embodiment of the present invention, the display outputting fourth graphics;
Figure 5e shows a schematic diagram of the processing which occurs in the second example of a method according to an embodiment of the present invention;
Figure 6 shows a schematic flow diagram of the processes that occur in an example method of an embodiment of the present invention;
Figure 7a shows a schematic diagram of a third example of a display state according to an embodiment of the present invention, the display outputting first graphics;
Figure 7b shows a schematic diagram of the third example of a display state according to an embodiment of the present invention, the display outputting second graphics;
Figure 7c shows a schematic diagram of the third example of a display state according to an embodiment of the present invention, the display outputting third graphics;
Figure 7d shows a schematic diagram of the third example of a display state according to an embodiment of the present invention, the display outputting fourth graphics;
Figure 8a shows a schematic diagram of a fourth example of a display state according to an embodiment of the present invention, the display outputting first graphics;
Figure 8b shows a schematic diagram of the fourth example of a display state according to an embodiment of the present invention, the display outputting second graphics;
Figure 8c shows a schematic diagram of the fourth example of a display state according to an embodiment of the present invention, the display outputting third graphics;
Figure 8d shows a schematic diagram of the fourth example of a display state according to an embodiment of the present invention, the display outputting fourth graphics;
Figure 9a shows a schematic diagram of a fifth example of a display state according to an embodiment of the present invention, the display outputting first graphics;
Figure 9b shows a schematic diagram of the fifth example of a display state according to an embodiment of the present invention, the display outputting second graphics;
Figure 9c shows a schematic diagram of the fifth example of a display state according to an embodiment of the present invention, the display outputting third graphics;
Figure 9d shows a schematic diagram of the fifth example of a display state according to an embodiment of the present invention, the display outputting fourth graphics;
Figure 10 shows a schematic diagram of a sixth example of a display state according to an embodiment of the present invention, the display outputting various graphics;
Figure 11 shows a schematic diagram of a seventh example of a display state according to an embodiment of the present invention, the display outputting various graphics;
Figure 12 shows a schematic diagram of an example of a display state according to an embodiment of the present invention, the display outputting various graphics.
Detailed Description
Figure 1 shows a top view of a mobile phone 102 (as seen from a user's perspective if they were facing a primary display of the mobile phone 102) having, in accordance with embodiments of the invention, a touch-sensitive input device, such as touch screen display 104, a front-facing camera 106, a speaker 108, a loudspeaker 110, and soft keys 112, 114, 116. The touch screen 104 is operable to display graphics. The mobile phone 102 also comprises at least one processor and at least one memory (not shown).
Figure 2 illustrates a schematic overview of some of the components of the mobile phone 102 which are involved in the process of viewing and manipulating image objects on the mobile phone 102. These components include hardware components such as a Central Processing Unit (CPU) (not shown), display hardware 232, for example the display part of a touch screen display 104, a Graphics Processing Unit (GPU) 234 and input hardware 236, for example the touch-sensitive part of the touch screen display 104. The components also include middleware components which form part of the operating system of the mobile phone 102, including a graphic framework module 224, a display driver 226, an input event handler module 228 and an input driver 230, and a document viewer application 222, which is executed when image objects are to be viewed on the display hardware 232. Note that the GPU 234 may be either a hardware component or a software component that is run on the Central Processing Unit (CPU) (not shown). The document viewer application 222 enables interpretation of the touch movement and touch release of a user's input on the touch screen 104, via the input hardware 236, the input driver 230 and the input event handler 228. This input is translated to appropriate parameter values for the graphic framework module 224 to control the GPU 234, which also receives one or more image objects, via an input buffer, which are being viewed using the document viewer 222. The GPU 234 performs graphical transformations on the one or more image objects, or parts thereof, responsive to the input, and stores the resulting image data in an output buffer. The graphic framework module 224 will pass data, from the output buffer, to an input frame buffer of the display driver 226. The input frame buffer may be a direct memory access module (not shown) so that the display driver 226 can pick it up for display.
The display driver 226 outputs image data to an output frame buffer of the display 104, which in turn outputs it as graphics.
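Purely by way of illustration, the input-to-display flow described above may be sketched as follows; the identifiers used here (input_driver, event_handler, GraphicFramework) are hypothetical stand-ins for the input driver 230, input event handler 228 and graphic framework module 224, and are not taken from the embodiments themselves:

```python
# Hypothetical sketch of the input-to-display flow; all names are
# illustrative assumptions, not identifiers from the embodiments.

def input_driver(raw_event):
    """Translate a raw touch sample into a structured (x, y, phase) event."""
    x, y, phase = raw_event
    return {"x": x, "y": y, "phase": phase}

def event_handler(event):
    """Classify the event for the document viewer (move versus release)."""
    kind = "move" if event["phase"] == "move" else "release"
    return (kind, event["x"], event["y"])

class GraphicFramework:
    """Receives parameter values from the viewer, drives a (stubbed) GPU
    transformation, and holds the result for the display driver to pick up."""
    def __init__(self):
        self.frame_buffer = None

    def render(self, image_object, dx):
        # GPU stand-in: translate every pixel coordinate by dx.
        self.frame_buffer = [(x + dx, y) for (x, y) in image_object]
        return self.frame_buffer

framework = GraphicFramework()
kind, gx, gy = event_handler(input_driver((10, 20, "move")))
output = framework.render([(0, 0), (1, 0)], dx=gx)
```

Here the GPU is stood in for by a simple per-pixel translation; in the embodiments the transformation would be performed by the GPU 234 and the result passed, via the output buffer, to the display driver 226.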
Figure 3 shows a schematic block diagram of an example of a method according to an embodiment of the present invention. At step 302, an image data set comprising one or more image objects is retrieved from memory (not shown). The image objects relate to image data such as pictures, electronic documents or the like.
At least first graphics are determined for outputting to a display in accordance with a function performed by the mobile phone 102, the at least first graphics corresponding to at least a portion of the retrieved image data set, and the at least first graphics are output for rendering on the display 104 (step 304). At step 306, a user input is detected in the form of an image manipulation request. The image manipulation request is associated with a particular function to be performed by the mobile phone 102, such that the user can perform various image manipulation requests to perform various associated functions. For example, a first image manipulation request could be indicative that the user wishes to scroll through image objects in a gallery. A second, different image manipulation request could be indicative of the user wishing to zoom in or out of an image object, and so on. At step 308, a first image manipulation process associated with the image manipulation request is performed on at least part of the retrieved image data set in order to produce or generate second graphics resultant from a first type of alteration applied to the at least part of the retrieved image data set. The generated second graphics are representative of the image manipulation request and provide feedback to the user indicative of the action requested by the user via the image manipulation request. For example, in the case that the user wishes to scroll from the currently displayed image object to a next image object in a gallery, the user can slide his finger across the touch screen 104.
In response to the user's slide motion across the screen 104, the currently displayed graphics are altered so that the second graphics are output (step 310), which second graphics represent a first image object translating outside of the display area of the screen 104 and a second image object translating onto the display area of the screen 104 as the first image object is translated off the display area, such that the first image object is replaced by the second image object. At step 312, it is determined whether a boundary condition has been satisfied. This is where it is determined that the image manipulation request is indicative of a user request to view data in the image data set that is not available. For example, in the case of scrolling through image objects of a gallery, the last image object of the gallery will terminate the scrolling because there would be no further image objects to view, and therefore, if a user attempts to scroll past the last image object, the boundary condition is met. When it has been determined that the boundary condition relating to the retrieved image data set has been satisfied, a second image manipulation process is performed (at step 314) on at least part of the retrieved data set to produce third graphics. This second image manipulation process applies a second type of alteration, different from the first type of alteration, to the retrieved image data set to produce the third graphics. The second image manipulation process manipulates the image data set so that the output third graphics (at step 316) provide an indication to the user that no further image data is available for rendering on the display according to the desired function associated with the image manipulation request.
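The branch between the first and second image manipulation processes may be sketched, under the assumption of a simple scrollable gallery, as follows; the function name and the returned labels are assumptions for illustration, not part of the embodiments:

```python
# Illustrative sketch of the branch in the method of Figure 3; all
# identifiers here are assumed, not taken from the embodiments.

def handle_manipulation_request(image_data_set, index, direction):
    """Scroll-type request: translate to the next image object, or, where
    the boundary condition is met, fall back to the second alteration."""
    target = index + direction
    if 0 <= target < len(image_data_set):
        # First image manipulation process (step 308): translation to the
        # requested next image object, producing second graphics.
        return ("second_graphics", image_data_set[target])
    # Boundary condition satisfied (step 312): no further image data in this
    # direction, so a second, different type of alteration is applied
    # (step 314), e.g. a stretch-and-recoil of the current object.
    return ("third_graphics", image_data_set[index])

gallery = ["image_a", "image_b", "image_c"]
```

Scrolling forward from the first object yields second graphics for the next object, while scrolling past the last object yields third graphics for the same object, signalling that no further data is available.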
Figures 4a, 4b, 4c and 4d show a schematic drawing of the display 402 of the mobile phone 102 of Figure 1 in more detail. First graphics 400-1 (corresponding to rendered image data from an image data set retrieved from memory) displayed in Figure 4a illustrate a snapshot of a transition between a first image object 417 and an appended second image object 418. More particularly, the first graphics 400-1 illustrate a portion of the first image object 417 and a portion of the second image object 418 that is appended to the first image object 417. The rendered portion of the second image object 418, more clearly shown in Figure 4b, comprises multiple features 442 in the form of a hexagon 442-1, a circle 442-2 and a square 442-3. The hexagon has a width denoted as 'x' and a height denoted as 'y'.
As shown in Figures 4a, 4b, 4c and 4d, the touch screen 404 is generally responsive to a user's touch or other object 444 designed to register an input to the mobile phone 402. Therefore, as the object 444 is brought near or onto the surface of the touch screen 404 and within a detection range of the touch screen 404 surface, the mobile phone 402 senses the presence of the object 444, such as by capacitive sensing, determines the sensed object 444 to be an input and registers the input in dependence on the sensed object 444 in order to perform an operation. As shown in Figure 4a, the object 444 is first placed near or on the bottom-right region of the surface of the touch screen 404 so that it is sensed by the mobile phone 402. The object 444 is then moved in a slide type motion across the screen 404, whilst maintaining its sensed touch with the screen 404, towards the left side edge 440 of the screen 404, as indicated by motion direction arrow 446. As the object 444 is moved across the screen 404, the mobile phone 402 continues to register the sensed object 444 as an input and accordingly processes the input to determine a corresponding action to take. Figure 4b illustrates the object 444 having moved a first distance across the screen 404. Figure 4c illustrates the object 444 having moved across the screen 404 by a second distance, the second distance being greater than the first distance shown in Figure 4b. Figure 4d illustrates the object 444 having been removed or released from the screen 404 so that it is no longer sensed.
The movement of the object 444 on the screen is known as a "gesture", a "movement request" or an "image manipulation request". The gesture is a form of user input and has characteristics such as position, direction, distance, and sensed time. The gesture can be one of a number of multiple predetermined patterns or movements that have associated actions or functions that have been programmed into the mobile phone 402 for the mobile phone 402 to take. A mobile phone processor recognises the gesture, and determines, based on the detected or determined characteristics as well as any boundary conditions relating to the retrieved image data set, an appropriate associated action for the mobile phone 402 to take.
In response to the gesture, a first image manipulation process such as an image transformation or deformation is applied to the displayed graphics 400. The image transformation is defined as changing the form of the displayed graphics 400. Figures 4a and 4b show a spatially uniform geometric transformation of first graphics 400-1 to provide second graphics 400-2. The spatially uniform geometric transformation takes the form of a translation in the general direction 446 of the gesture. Figure 4c shows a spatially non-uniform geometric transformation whereby the second graphics 400-2 of Figure 4b are altered such that a portion of the second graphics 400-2 is shrunk along a first dimension, but not in the second dimension, and another portion of the second graphics 400-2 is stretched along the first dimension, thereby providing third graphics 400-3.
The geometric transformations are applied using an algorithm to analyse the displayed graphics 400 and determine how the transformation should occur, depending on the determined gesture characteristics and also depending on conditions of the retrieved image data set used to render the displayed graphics 400. The displayed graphics 400 are then manipulated to provide transformation effects of a translation (in the case of Figures 4a and 4b), and a stretch and a shrink (in the case of Figure 4c). The algorithm operates by, in response to detecting the gesture, determining the initiation point of the gesture (i.e. where the gesture begins) and determining the corresponding spatial point within the displayed graphics 400-1 (and hence the pixel points within the image data set corresponding to the determined spatial point). An intersect line 450 is then associated with the determined corresponding point of the displayed graphics 400-1. The intersect line 450 is a line orthogonal to the general movement direction 446 of the gesture, which line is shown in Figures 4a, 4b, 4c and 4d to have a vertical orientation. The intersect line 450 is associated with the gesture such that the intersect line 450 and corresponding displayed graphics 400 move along with the gesture. The entire graphics 400 can thereby be translated in the general direction of the gesture, in association with the movement of the gesture, to enable the user to scroll through image objects in a gallery, as shown in Figures 4a and 4b. The algorithm is adapted to determine when no further image data in the retrieved image data set is available for display (which can be determined either before the outputting of the first graphics 400-1 or second graphics 400-2, or when a boundary condition is met).
The algorithm determines or recognises the edges of the last image object 418 and selects the edges of the last image object 418 that, when the image object 418 is displayed, the gesture is moving towards and away from. The edge that the gesture is moving away from is called the "trailing edge" 452-1. The edge which is in the general direction of the gesture is called the "leading edge" 452-2. The graphical region between the intersect line 450 and the trailing edge 452-1 is defined as the "trailing region" 418-1. The graphical region between the intersect line 450 and the leading edge 452-2 is defined as the "leading region" 418-2. The algorithm temporarily fixes the trailing edge 452-1 and the leading edge 452-2 to their instant positions (i.e. the respective edges 438, 440 of the graphic display area, the graphic display area being the area on the touch screen 404 that the processor has determined for the display of graphics 400) until an event is flagged indicating that the respective edges need not be fixed any longer. As the leading and trailing edges 452-1, 452-2 are fixed to the edges 438, 440 of the graphic display area, the movement of the intersect line 450 causes the leading and trailing regions 418-1, 418-2 to shrink and stretch in order to accommodate the movement.
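One hypothetical way to express the fixed-edge stretch and shrink is as a piecewise-linear remapping of each horizontal pixel coordinate about the intersect line; the function below is an illustrative sketch under that assumption, not the algorithm as claimed:

```python
# Hypothetical piecewise-linear remap: the trailing region is stretched and
# the leading region shrunk while both outer edges remain fixed. The
# function name and signature are illustrative assumptions.

def remap_x(x, trailing_edge, leading_edge, intersect_from, intersect_to):
    """Map an original x coordinate to its transformed position when the
    intersect line moves from intersect_from to intersect_to."""
    if x <= intersect_from:
        # Trailing region: stretch [trailing_edge, intersect_from]
        # linearly onto [trailing_edge, intersect_to].
        scale = (intersect_to - trailing_edge) / (intersect_from - trailing_edge)
        return trailing_edge + (x - trailing_edge) * scale
    # Leading region: shrink [intersect_from, leading_edge]
    # linearly onto [intersect_to, leading_edge].
    scale = (leading_edge - intersect_to) / (leading_edge - intersect_from)
    return intersect_to + (x - intersect_from) * scale
```

With the intersect line moved from x = 50 to x = 60 across a display area spanning x = 0 to x = 100, a pixel at x = 25 in the trailing region maps to x = 30, while the fixed edges at x = 0 and x = 100 do not move, consistent with the linear stretch described above.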
In more detail, and as shown in Figures 4a and 4b, the first graphics 400-1 are shown to transform by translating in the general direction of the gesture 446. The translation occurs so that the leading edge 452-2 of the image object 418, the trailing edge 452-1 of the image object 418, along with intersect line 450 moves towards display edge 440. In Figure 4b, the image object 418 is shown to have moved onto the graphic display area thereby having replaced image object 417 on the display 404.
In Figure 4b, the algorithm determines that the user gesture is indicating a desire to display another image object but that no further image objects are available for output (i.e. the boundary condition is satisfied). A second image transformation process is then applied by the algorithm, in response to the boundary condition being satisfied, to the currently displayed second graphics 400-2 whereby the trailing edge 452-1 and leading edge 452-2 are fixed to the respective edges 438, 440 of the graphic display area and the second graphics 400-2 (which now display only the image object 418) are transformed in order to output third graphics 400-3. In particular, the algorithm applies a spatially non-uniform geometric transformation whereby the trailing region 418-1 of the last image object 418 is stretched in a first direction in a transverse manner along a horizontal axis as the intersect line 450 moves in the gesture direction 446, and the leading region 418-2 is shrunk transversely to accommodate the stretching of the trailing region 418-1 so that the overall size and shape of the image object 418 is maintained. The stretch is applied linearly so that the image data between corresponding points along the intersect line 450 and the trailing edge 452-1 experience the same degree of stretching. The stretching and shrinking are dependent on the gesture such that, as the object 444 moves, the image object 418 stretches and shrinks. The amount of stretching and shrinking of the image object 418 increases linearly as the distance travelled by the slide gesture increases but is limited to a critical point beyond which any further stretching would cause an unwanted distortion of the displayed third graphics.
The stretching and shrinking can easily be observed with reference to the shapes 442-1, 442-2, 442-3 in Figures 4b and 4c. As shown, the hexagon 442-1 initially has a width of x. After the slide gesture, the hexagon 442-1 width is shown to have expanded to x', where x' is greater than x (only the part of the hexagon 442-1 in the trailing region 418-1 has expanded; the part of the hexagon 442-1 in the leading region 418-2 of the image object 418 has experienced a corresponding shrink).
Similarly, the square 442-3 of Figure 4b undergoes a transformation; however, instead of stretching, the square shrinks in the first direction so that it becomes a rectangle. Once the object 444 is released, the second image transformation process is reversed to output fourth graphics 400-4 so that the transformed (i.e. stretched and shrunk) image object 418 returns to its original non-transformed state, as shown in Figure 4d where the hexagon 442-1 width x'' is equal to x. The square 442-3 correspondingly returns to its original shape. The return to the original image object 418 state is gradual and spring-like so that the image object regions 418-1, 418-2 appear to recoil once the object 444 has been released, thereby giving the user an impression that the image object 418 was under the bias of object 444.
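The gradual, spring-like return may be sketched as a per-frame decay of the stretch displacement; the decay factor and threshold below are assumed parameters for illustration, not values specified in the embodiments:

```python
# Hedged sketch of the spring-like recoil on release: the stretch
# displacement decays toward zero over successive frames. The decay factor
# and threshold are illustrative assumptions.

def recoil_frames(displacement, decay=0.5, threshold=0.5):
    """Return the sequence of decreasing displacements until the image
    object settles back in its original, non-transformed state."""
    frames = []
    d = displacement
    while abs(d) > threshold:
        d *= decay  # each frame the stretched region springs back toward rest
        frames.append(d)
    frames.append(0.0)  # settle exactly at the untransformed state
    return frames
```

Because each frame removes a fixed fraction of the remaining displacement, the recoil is fast at first and eases out, which is consistent with the "under bias" impression described above.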
As shown in Figures 4a, 4b, 4c and 4d, different types of geometric image transformation processes are applied depending on the gesture characteristics and the conditions of the retrieved image data set. The geometric image transformation processes use mathematical transformations to crop, pad, scale, rotate, transpose or otherwise alter an image data array, thereby producing a modified graphical output.
The transformation relocates pixels within the image data set relating to the displayed graphics from their original spatial coordinates to new positions depending on the type of transformation selected (which is dependent on the determined gesture). A spatially uniform geometric transformation is where the mathematical function is applied in a linear fashion to each pixel within a selected group of pixels and can therefore result in, for example, a translation of displayed graphics. A spatially non-uniform geometric transformation is where the mathematical function has a non-linear effect on the pixels within a selected group of pixels and can therefore result in an appearance of a stretch or shrink, or other type of warping of the displayed graphics.
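The distinction between the two classes of transformation can be illustrated with a minimal sketch over an assumed, simplified pixel-list representation: a spatially uniform transformation displaces every pixel equally, whereas a spatially non-uniform one displaces pixels by an amount that depends on their position:

```python
# Minimal illustrative sketch (assumed pixel-list representation) of the
# two transformation classes; not the claimed implementation.

def uniform_translate(pixels, dx):
    """Spatially uniform: the same offset is applied linearly to every
    pixel, producing a translation of the displayed graphics."""
    return [(x + dx, y) for (x, y) in pixels]

def nonuniform_stretch(pixels, factor):
    """Spatially non-uniform: the displacement x * (factor - 1) depends on
    position, giving the appearance of a stretch rather than a shift."""
    return [(x * factor, y) for (x, y) in pixels]
```

Under the translation every pixel moves by the same amount, whereas under the stretch pixels further from the origin move further, which is what produces the warped appearance.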
The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged. For example, in the above embodiment, it was assumed that the entire first image object 417 and second image object 418 would each occupy the whole graphic display area of the display 404 once they have been navigated or scrolled to. In another embodiment, the first image object corresponding to a picture or electronic document may be larger in size than the graphic display area either in a vertical dimension, a horizontal dimension or in both dimensions. For example, Figures 5a, 5b, 5c, 5d and 5e illustrate an image object 554 in the form of a contact list 554 that is larger than the longitudinal axis of the display area of the display 504. The contact list 554 comprises multiple entries of contact information arranged in multiple rows with each contact being represented by an icon 558 and information 559.
Before a gesture to scroll through the contact list is initiated, first graphics 500-1 are displayed in the graphic display area of the display 504 (Figure 5a). The first graphics 500-1 relate to part of the image data set that represents a portion of the contact list 554 that does not show the terminus 552-1 (i.e. a portion of the contact list 554 that is away from the beginning 552-1 of the contact list 554 so that the beginning 552-1 of the contact list is not visible in the graphic display area). The scrolling gesture 556 is then initiated and moves in a downward direction in order to reveal portions of the contact list beyond the display 504 and towards the beginning 552-1 of the contact list 554, as shown in Figure 5b. The scroll type gesture may consist of a vertical slide motion in a downward direction with a quick release (i.e. the object 444 is not held in place after the slide for longer than a defined threshold time). In response, the contact list 554 begins to translate in the direction of the gesture 556 with a perceived momentum corresponding to the determined characteristics of the gesture, for example, distance and speed. The momentum is dampened so that the scrolling of the contact list 554 slows and eventually stops, depending on the characteristics of the gesture. If the beginning 552-1 of the contact list 554 is not reached after the first scroll gesture, the user can initiate another scroll gesture. The scrolling of the contact list 554 enables portions of the contact list 554 beyond the graphic display area to be revealed by translating (i.e. using a spatially uniform geometric transformation) the displayed first graphics 500-1 in the general direction of the gesture 556 to produce second graphics 500-2. As shown in Figure 5c, when the beginning 552-1 is reached and the momentum of the scroll indicates that the scrolling should continue, the contact list 554 is made to briefly stretch (i.e. 
using a spatially non-uniform geometric transformation) in the direction of the gesture as indicated by arrow 560 to produce third graphics 500-3, before shrinking (i.e. reversing the spatially non-uniform geometric transformation) in the opposite direction indicated by arrow 562 to produce fourth graphics 500-4 (Figure 5d). The stretch and shrink are applied so that the graphics after the shrink (i.e. fourth graphics 500-4) are the same as the graphics before the stretch (i.e. second graphics 500-2).
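The dampened momentum described above may be sketched as follows; the damping coefficient and stopping speed are assumptions for illustration, not values from the embodiments:

```python
# Illustrative sketch of a dampened-momentum scroll: the list offset
# advances while the velocity decays, so the scrolling slows and stops.
# Parameter values are assumed for illustration.

def momentum_scroll(position, velocity, damping=0.5, stop_speed=1.0):
    """Advance the list offset until the perceived momentum runs out."""
    while abs(velocity) >= stop_speed:
        position += velocity
        velocity *= damping  # dampen so the scrolling slows and eventually stops
    return position
```

A quick-release gesture thus carries the list a finite distance determined by the initial velocity and the damping, after which a further gesture is needed if the beginning of the list has not been reached.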
Figure 5e shows an example of how the image manipulation process using the spatially non-uniform geometric transformation can be determined. As shown, once the beginning 552-1 of the contact list 554 has been reached, the edge 552-1 representing the beginning of the contact list 554 is fixed to its instant position (the edge 538 of the graphics display area). The contact entry that is furthest away from the edge 552-1 is then pushed beyond the opposing edge 540 of the graphics display area so that the portion of the contact list 554 stretches to produce third graphics 500-3.
The stretch is gradual. The spatially non-uniform geometric transformation is then reversed so that the displayed contact list 554 shrinks to its original non-stretched state, as indicated by fourth graphics 500-4 in Figure 5d. The transformations produce a stretch-and-recoil type effect or "bounce" effect, whereby the user is provided with an indication that they have reached the beginning 552-1 of the contact list 554 where they can scroll no further.
Figure 6 illustrates a schematic flow diagram of the above contact list 554 embodiment. At step 602, an image manipulation request 556 is detected. The image manipulation request 556 indicates a desire to scroll the displayed contact list 554 in order to reveal hidden or non-displayed portions of the contact list 554. In response to detecting and determining the image manipulation request 556, the contact list 554 or electronic document is translated in the general direction of the image manipulation request 556. The contact list 554 translates in accordance with the image manipulation request 556 by a distance corresponding to the characteristics of the image manipulation request 556 (steps 606, 608 and 610). Once it has been determined that the boundary condition has been satisfied (step 612), the end 552-1 of the contact list 554 is fixed to its current position and the opposing end 552-2 of the displayed contact list 554 is stretched in the direction of the image manipulation request 556 so that it moves beyond the edge 540 of the graphics display area (step 614). The stretching of the contact list 554 is then reversed so that the contact list 554 shrinks back to its original, non-stretched size (step 616). If at step 612, the end 552-1 of the displayed contact list 554 has not been reached, then the scrolling or translation of the contact list 554 continues until either the end 552-1 is reached or the power or momentum of the scrolling motion has run out (step 610).
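The flow of Figure 6 may be sketched for a one-dimensional list offset as follows; the event labels and the zero minimum offset are illustrative assumptions, not part of the embodiments:

```python
# Illustrative one-dimensional sketch of the Figure 6 flow; identifiers
# and conventions here are assumed for illustration.

def scroll_step(offset, delta, min_offset=0):
    """Return the new list offset and any boundary events triggered."""
    new_offset = offset + delta
    if new_offset < min_offset:
        overshoot = min_offset - new_offset
        # Boundary condition satisfied (step 612): fix the reached end,
        # stretch by the overshoot (step 614), then reverse it (step 616).
        return min_offset, ["stretch(%d)" % overshoot, "shrink"]
    # Otherwise the translation simply continues (step 610).
    return new_offset, []
```

A scroll that stays within the list just updates the offset, while a scroll that would pass the end clamps the offset and triggers the stretch-then-shrink bounce.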
In the above embodiment, in addition to the assumption that the entire first image object 417 and second image object 418 would each occupy the whole graphic display area, it was also assumed that a scroll could only be along a longitudinal or transverse direction of the display. However, in another embodiment, the image object 418 or electronic document may be larger in size than the graphic display area in both directions and the scrolling motion may have both longitudinal as well as transverse components. In this case, as shown in Figures 7a, 7b, 7c and 7d, the image object 718 travels or is translated diagonally, along with the movement of the diagonal scroll gesture 764 (Figures 7a and 7b). As the corner 752-2 of the image object is reached (Figure 7b), the displayed portion of the image object 718 is stretched (Figure 7c) before recoiling (Figure 7d). The stretching occurs in a similar manner to the above contact list 554 embodiment, but instead of stretching only in one dimension it is stretched in two dimensions.
In the above embodiment, the spatially non-uniform transformations were applied along one dimension. In the diagonal scroll embodiment, the transformation was applied along two dimensions. In other embodiments the geometric transformation may be applied in a non-linear manner such as to apply a warping effect, as is shown in Figures 8c and 9c. For example, the transformation may be substantially radial about one or more points. Therefore, for example, using a "pinch" type gesture, whereby a forefinger and thumb are brought towards each other on the touch screen 804, a user may request to "zoom out" from displayed first graphics 800-1. The pinch gesture is represented by a first user input 868-1 and a second user input 868-2 being brought together on the display 804. As shown in Figure 8a, a rectangle 866 is displayed by the output first graphics 800-1. As the first user input 868-1 and the second user input 868-2 are brought together, the first graphics 800-1 and displayed rectangle 866 are shrunk along two dimensions so that the aspect ratio of the rectangle 866 remains the same, as shown by the output second graphics 800-2 in Figure 8b. The shrinking is represented by arrows 870. The amount of shrinking increases until a critical limit is reached, at which point any further zooming out would cause unwanted distortion of the image object. The critical limit may be known beforehand and programmed into the processor, or can be determined by the processor based on the knowledge of the resolution of the image object and the zoom level. Once the critical limit has been reached, and if the zoom out request is still being made, a second image manipulation process, such as a spatially non-uniform geometric transformation, is applied to the displayed graphics. The spatially non-uniform geometric transformation can apply a warping to the second graphics 800-2 in order to produce the warped rectangle 866 shown in the output third graphics 800-3 of Figure 8c.
As shown, the warping occurs so that there is a greater amount of shrinking along the direct path between the first user input 868-1 and the second user input 868-2, represented by arrows 870, and less shrinking on either side of the direct path, represented by arrows 872. The warping shown in third graphics 800-3 is additionally represented by dashed warping lines 874. The warping of the graphics provides an indication to the user that they have reached the maximum zoom out level. The warping effect can be reversed either after a threshold period of time or in response to the user inputs 868-1, 868-2 being released, in order that the rectangle 866 shown by the third graphics 800-3 can return to its original unwarped state, which is output in Figure 8d as fourth graphics 800-4. The return of the initially displayed graphics to its original shape is such that the second graphics 800-2 and the fourth graphics 800-4 appear the same.
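The position-dependent shrink may be sketched as a scale factor that is strongest on the direct path between the two inputs and falls off either side of it; the weighting function and its parameters are assumptions for illustration:

```python
# Hypothetical weighting for the warped "zoom out" shrink: strongest on
# the direct path between the two pinch inputs, weaker further from it.
# All parameter values are illustrative assumptions.

def warp_scale(y, path_y, base_scale=0.8, extra=0.1, spread=50.0):
    """Scale factor for a pixel row at vertical position y, given the
    vertical position path_y of the direct path between the inputs."""
    falloff = max(0.0, 1.0 - abs(y - path_y) / spread)
    # A smaller scale factor means more shrinking, so the shrink is
    # greatest where falloff == 1.0, i.e. on the direct path itself.
    return base_scale - extra * falloff
```

Rows on the path shrink by the full amount while rows beyond the spread shrink only by the base amount, giving the warped appearance of Figure 8c.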
Similar to the "zoom out" embodiment described above, the user may make an image manipulation request constituting a desire to "zoom in" on displayed graphics.
Figure 9a shows output first graphics 900-1 comprising a rectangle 966. A first user input 976-1 and a second user input 976-2 are shown to move in opposing directions on the display 904, for example when a user places their thumb and forefinger on the touch screen 904 and moves them apart from one another. As shown in Figure 9b, as the first and second user inputs 976-1, 976-2 are moved apart, a first image manipulation process is applied to the first graphics 900-1 to effect a spatially uniform geometric transformation, which in this case is a stretch in two dimensions so that the aspect ratio of the rectangle 966 remains the same. The enlarged rectangle is output as a part of second graphics 900-2. The stretching is depicted in Figure 9b by arrows 978. When a critical threshold is reached, indicating that any further zooming in would result in unwanted distortion of the graphics, a second image manipulation process is applied to the displayed graphics. The second image manipulation, as shown in Figure 9c, applies a spatially non-uniform geometric transformation to the second graphics 900-2 to produce the output third graphics 900-3. In particular, a warped stretching is applied to the second graphics 900-2 such that there is a greater amount of stretching in proximity to the user input points 976-1, 976-2 when compared with adjacent areas. As shown in Figure 9c, the arrows 978 represent a greater amount of stretching compared with arrows 980. The warping shown on third graphics 900-3 is also represented by dashed warping lines 982. The warping of the graphics provides an indication to the user that they have reached the maximum zoom in level. 
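Correspondingly, the warped zoom-in may be sketched as a stretch factor that is boosted in proximity to either input point; again the weights and parameters are illustrative assumptions, not values from the embodiments:

```python
# Hypothetical weighting for the warped zoom-in: the stretch factor is
# boosted near either input point and weaker between them. Parameter
# values are illustrative assumptions.

def stretch_amount(x, input_points, base=1.2, extra=0.3, spread=40.0):
    """Stretch factor for a pixel column at x, given the input positions."""
    nearest = min(abs(x - p) for p in input_points)
    falloff = max(0.0, 1.0 - nearest / spread)
    # Greater stretching in proximity to a user input point.
    return base + extra * falloff
```

Columns at an input point receive the full boost while columns midway between distant inputs receive only the base stretch, matching the greater stretching near the input points 976-1, 976-2 described above.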
The warping effect can either be reversed after a threshold period of time or in response to the user inputs 976-1, 976-2 being released, so that the rectangle 966 shown by third graphics 900-3 returns to its original unwarped state output in Figure 9d as fourth graphics 900-4 (where the second graphics 900-2 and the fourth graphics 900-4 are the same).
In the above embodiment, a first alteration and a second, different alteration were applied to the displayed graphics to effect a translation of the displayed graphics and then a "bounce" of the image object or displayed graphics. In other embodiments, a translation may not be required. Instead, a stretching, shrinking, warping or other type of spatially non-uniform geometric transformation may be used to provide the user with an enhanced indication of an action that they are requesting be performed. In particular, after retrieving an image data set comprising one or more image objects to be displayed, first graphics may be output to a display area of the display, the first graphics corresponding to at least a portion of the retrieved image data set. A limit of the retrieved image data set is determined to correspond with a limit of the display area when the at least first graphics are displayed therein. For example, the boundary condition could already be in place when the first graphics are produced, whereby the edge of an image object of the first graphics meets the edge of the graphics display area. An input from a user representative of an image manipulation request to perform a geometric image transformation which goes beyond said limit, such as a slide gesture, is detected. In response to the slide gesture, an image manipulation process is performed on at least part of the retrieved image data set in accordance with the image manipulation request to produce second graphics, the image manipulation process comprising conducting a spatially non-uniform geometric transformation to the at least a portion of said retrieved image data set to provide visual feedback to the user indicating that said image manipulation request is a request to perform a geometric image transformation which goes beyond said limit. The second graphics are then output to the display area of the display.
Figure 10 shows a schematic example of first graphics 1000-1 showing an image object 1018 having an intersect line 1050, a trailing portion 1018-1, and a leading portion 1018-2. The image object has a trailing edge 1052-1 and a leading edge 1052-2. The graphics display area of the display 1004 has a first edge 1004-1 and a second edge 1004-2. A slide gesture 1046 is shown to be initiated moving from the first edge 1004-1 towards the second edge 1004-2 of the graphics display area.
The trailing edge 1052-1 and the leading edge 1052-2 are determined as being mapped onto the edges 1004-1, 1004-2 of the graphics display area and are temporarily fixed to their instant positions. The intersect line 1050 moves along with the gesture 1046 such that the trailing region 1018-1 is stretched, as indicated by arrow 1048, and the leading region 1018-2 is shrunk, as indicated by arrow 1049, in order to output second graphics 1000-2. The stretching and shrinking are limited to prevent unwanted distortion to the output graphics. Once the gesture 1046 is completed and the user input is removed, the stretching and shrinking transformations are reversed such that the trailing region 1018-1 shrinks and the leading region 1018-2 stretches to output third graphics 1000-3. The image object 1018 thereby returns to its original state, where the first graphics 1000-1 are the same as the third graphics 1000-3.
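The intersect-line behaviour described above, with both edges pinned, the line dragged with the gesture, the trailing region stretched and the leading region shrunk, can be sketched as a piecewise-linear remapping of horizontal pixel coordinates. The mapping below is one illustrative choice; this document does not specify the exact function:

```python
def remap_x(x, width, line_x, drag):
    """Piecewise-linear horizontal remap for the intersect-line effect.

    The trailing edge (x = 0) and the leading edge (x = width) stay
    fixed, while the intersect line at `line_x` is dragged by `drag`
    pixels: the trailing region stretches and the leading one shrinks.
    Illustrative sketch only.
    """
    new_line = line_x + drag
    if x <= line_x:
        # Trailing region: [0, line_x] -> [0, new_line] (stretch).
        return x * new_line / line_x
    # Leading region: [line_x, width] -> [new_line, width] (shrink).
    return new_line + (x - line_x) * (width - new_line) / (width - line_x)
```

Reversing the transformation on release simply corresponds to animating `drag` back to zero.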
In the example illustrated in Figure 10, it was assumed that a release of the gesture 1046 would allow the transformed image object 1018 displayed by second graphics 1000-2 to return to its original non-transformed state. In other embodiments, the user may wish to scroll to a next image object upon release of the gesture. Figure 11 illustrates a transition to a next image object. As shown, output first graphics 1100-1 and second graphics 1100-2 are the same as first graphics 1000-1 and second graphics 1000-2 of Figure 10. In Figure 11, the stretch applied to produce the second graphics 1100-2 continues so that third graphics 1100-3 are produced and output, whereby the intersect line 1150 is moved so that the maximum stretching and shrinking limits of the trailing region 1118-1 and leading region 1118-2 are reached, beyond which unwanted image distortion would occur (as determined based on the resolution of the image data set or as defined by a programmable limit programmed into the memory of the mobile phone). Once the slide gesture has been completed, the characteristics of the gesture, such as the distance travelled and the calculated speed, are compared with a predetermined threshold (which has been programmed into the memory). If the characteristics of the gesture do not satisfy the threshold, then the image object 1118 returns to its original, non-transformed state by enabling the leading region 1118-2 to gradually expand to its original form and enabling the trailing region 1118-1 to gradually shrink to its original form, similar to what is shown in Figure 10. If the processor determines that the threshold has been satisfied, then the processor checks whether a next image object 1119 is available for display. For example, the currently displayed image object 1118 may form a part of an image gallery comprising a sequence of image objects.
If there is no next image object 1119 to display, the transformed image is again returned back to its original form (as with Figure 10). Where both the threshold has been satisfied and a next image object 1119 has been determined to be available, an event flag is raised so that the temporary fixing of the trailing edge 1152-1 and leading edge 1152-2 is released. The processor then fixes or makes constant the aspect ratios and sizes of the stretched trailing region 1118-1 and the shrunken leading region 1118-2 so that no further transformation is applied to the image object 1118. The next image object 1119 is then appended to the first image object 1118 so that there are no gaps between the image objects. This is done by fixing the left side edge of the next image object 1119 to the trailing edge 1152-1 of the first image object 1118. The transformed first image object 1118 is then made to transition "off" the touch screen so that it is no longer displayed. As the image object 1118 translates beyond the graphics display area, the left edge of the appended next image 1119 is "dragged" onto the graphics display area to output fourth graphics 1100-4 and fifth graphics 1100-5. The transition between image objects is gradual so that the user is provided with a visual rolling effect.
The threshold is conditional and situation dependent. For example, the threshold may only be relevant when a next image object 1119 is available. In the case of Figure 11, the threshold is defined as a predetermined distance travelled by the gesture. Therefore, if the gesture is determined to have moved a distance that is equal to or greater than the distance threshold and the gesture 1146 has been released, then a transition to the next image object 1119 is initiated. If the determined gesture distance is below that of the distance threshold and the gesture 1146 is released, then the transformation of the first image object 1118 is reversed so that the first image object 1118 returns to its original state.
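The release-time decision described for Figure 11 can be summarised in a few lines. The function name and the returned labels are assumptions for illustration, not identifiers from this document:

```python
def on_gesture_release(distance, threshold, has_next):
    """Decide the outcome when the slide gesture is released.

    The distance threshold only matters when a next image object is
    available; otherwise the stretch/shrink is always reversed.
    """
    if has_next and distance >= threshold:
        return "transition"   # roll on to the next image object
    return "revert"           # reverse the stretch/shrink transformation
```

For example, a 120-pixel slide against a 100-pixel threshold initiates a transition only if a next image object exists; the same slide with no next object, or an 80-pixel slide, reverts the transformation.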
In the above embodiment, a single gesture from a single object 444 was described. In another embodiment, multiple gestures resulting from multiple inputs may be present. In particular, as shown in Figure 12, a user may bring two objects 1284-1, 1284-2 together on the touch screen 1204 in a "pinch"-like motion. The area 1218-2 between the two objects 1284-1, 1284-2 is effectively squeezed and thereby shrinks. The areas 1218-1, 1218-3 outside of the two objects 1284-1, 1284-2 expand so that the overall shape and area of the image object 1218 are retained. Upon release of the objects 1284-1, 1284-2, the image object 1218 returns to its non-deformed state.
In the above embodiment, the threshold was defined as being a distance threshold based on the distance travelled by the gesture satisfying a criterion. In other embodiments, the threshold may be related to one or more of the distance travelled by the gesture, the speed, the latency (time that the user input is held in one position), the position, the velocity or the pattern.
It would be useful if a user could determine whether a next image object is available for viewing before enabling a full transition to the next image object.
Therefore, in another embodiment, the processor determines whether a next image object is available before assessing whether the threshold is satisfied. If no next image object is available, the processor applies a stretch and recoil as described in, for example, the contact list embodiment. If it is determined that a next image object is available, the next image object is first appended to the currently displayed image object by attaching the opposing edges of each image object to each other. The currently displayed image object is then translated along with the gesture so that part of the currently displayed image object is translated outside of the graphics display area of the display. When the currently displayed image object is being translated, the edge of the next image object that is appended to the currently displayed image object is allowed to travel with the currently displayed image object whilst the opposing edge of the next image object is retained in its initial virtual position. This initial virtual position corresponds to calculated positional data of the edge of the next image object in the image data set if the appended next image object were to be virtually placed side-by-side the currently displayed image object. The next image object is thereby "dragged" and "stretched" onto the graphics display area of the display.
When the object is released, a determination is then made regarding whether the threshold has been satisfied. For example, if more than half of the currently displayed image object has disappeared beyond the graphics display area, then the threshold is satisfied and a transition between image objects occurs, otherwise the currently displayed image object returns to its original position (either by translating over with no stretching or shrinking, or by stretching back to its original position in the graphics display area). The transition involves moving the currently displayed image object beyond the edge of the graphics display area in the general direction of the gesture and dragging the appended edge of the next image object towards the same edge of the graphics display area. The next image object fully transitions onto the screen by allowing the virtual opposing edge of the next image object to be unfixed so that this edge can transition onto the graphics display area, effectively allowing the next image object to shrink onto the graphics display area.
In the above embodiment, it was assumed that the amount of stretching and/or shrinking of the image object would be proportional to the distance travelled by the gesture. However, in other embodiments, the amount of stretching is also dependent on the speed of the gesture. If the gesture is fast and no next document is available, the amount of stretching is limited to prevent unwanted distortion and processing burden. If the gesture is slow and there is no next document available, the processor has more time and therefore can allow the image object to be stretched or shrunk further whilst minimizing unwanted distortion.
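One way to realise a speed-dependent stretch limit is a cap that shrinks for fast gestures. The constants below (a 30% base limit and a 1500 px/s "fast" speed) are assumed values for illustration:

```python
def max_stretch(speed, base_limit=0.3, fast_speed=1500.0):
    """Cap the allowed stretch fraction according to gesture speed.

    A fast gesture gets a smaller stretch limit, since less processing
    time is available; a slow gesture may be stretched further.  The
    constants are illustrative assumptions, not values from the text.
    """
    if speed >= fast_speed:
        return base_limit * 0.5   # fast gesture: limit the distortion
    # Slower gestures: scale the limit smoothly up towards the base limit.
    return base_limit * (1.0 - 0.5 * speed / fast_speed)
```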
In the above embodiment, after the image object has been stretched, the image object was then shown to recoil (if no transition occurred) to the original image object. The recoil action may, in some embodiments, use a damped sinusoidal function (rather than a critically damped function) so that the return to the original image object occurs via a pendulum stretch and shrinking motion with continually decreasing amplitude. This provides the user with the appearance of a "bounce" or spring-like return to the original image object.
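A damped sinusoidal recoil of this kind can be modelled as an exponentially decaying cosine; the frequency and damping constants below are illustrative assumptions:

```python
import math

def recoil_offset(t, amplitude, frequency=4.0, damping=3.0):
    """Displacement of the recoiling image edge at time t (seconds).

    An exponentially damped sinusoid gives the pendulum-like stretch
    and shrink with continually decreasing amplitude, producing a
    spring-like "bounce" back to the original image object.
    """
    return amplitude * math.exp(-damping * t) * math.cos(2 * math.pi * frequency * t)
```

Each successive oscillation peak is smaller than the last, so the edge settles at its original position; a critically damped function would instead return without overshooting.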
In the above embodiment, a particular algorithm was used to apply the stretch and shrinking. In other embodiments, a gesture-dependent convolution function can be applied to the image data of the displayed image object to effect the transformation.
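As a hedged illustration of a gesture-dependent convolution, the sketch below applies a moving-average kernel to one row of pixel values, with the kernel width scaled by a normalised gesture speed; the text does not specify any particular kernel, so this is only one possible choice:

```python
def gesture_convolve(row, speed, max_kernel=7):
    """Apply a gesture-dependent smoothing convolution to one pixel row.

    The kernel width grows with gesture speed (normalised to 0..1), so
    a faster gesture produces a stronger smearing of the image data.
    Illustrative sketch only; the kernel is an assumption.
    """
    k = max(1, int(1 + speed * (max_kernel - 1)))  # kernel width in pixels
    half = k // 2
    out = []
    for i in range(len(row)):
        # Average over a window centred on pixel i, clipped at the edges.
        window = row[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out
```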
In the above embodiment, a touch screen user interface was used to allow an image manipulation function to be registered and interpreted by a mobile phone and also to provide a visual representation of various graphics. In other embodiments, other types of interfaces or displays may be used, such as non-touch interfaces and other motion-recognition-based input systems. For example, infra-red, radar, magnetic field and camera sensors can be used to generate user inputs. The display could be a projector output or any other such system of generating a display.
In the above embodiments, examples were explained with reference to mobile phones. However, in other embodiments, the mobile phone can be replaced with other apparatuses such as PDAs, laptops, desktop computers, printers, tablet personal computers, or any other device or apparatus that uses a visual display.
In the above embodiments, a touch screen was used whereby a gesture and display output utilise the same user interface. In other embodiments, the user interface for the gesture can be separate from the user interface used to provide the display output.
In the embodiments where a linear stretch is applied, there may be a discontinuity present due to the expansion of the space between pixelated image data.
In other embodiments, the stretch is applied in a non-linear manner, for example using a curved stretch which applies a greater amount of stretching towards one extremity of the output graphics when compared with the opposing extremity.
It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.
Claims (35)
- CLAIMS1. A method of outputting graphics to a display, the method comprising: retrieving an image data set, the retrieved image data set comprising one or more image objects to be displayed; outputting at least first graphics to a display area of the display, the first graphics corresponding to at least a portion of the retrieved image data set; detecting an input from a user representative of an image manipulation request; performing a first image manipulation process on at least part of the retrieved image data set in accordance with the image manipulation request to produce second graphics, the first image manipulation process providing a first type of alteration to said at least part of the retrieved image data set; outputting the second graphics to the display area of the display; determining that a boundary condition relating to the retrieved image data set has been satisfied, the boundary condition relating to a limit of the retrieved image data set beyond which there is no further element of the retrieved image data set to be displayed; in response to determining that the boundary condition has been satisfied, performing a second image manipulation process on at least part of the retrieved image data set to produce third graphics, the second image manipulation process providing a second type of alteration to said retrieved image data set, the second type of alteration being of a different type than said first type of alteration; and outputting the third graphics to the display area of the display.
- 2. A method according to claim 1, wherein the first type of alteration is a first type of geometric transformation applied to at least part of the image data set and the second type of alteration is a second type of geometric transformation applied to at least part of the image data set.
- 3. A method according to claim 2, wherein the first type of alteration is a spatially uniform geometric transformation applied to at least part of the image data set and the second type of alteration is a spatially non-uniform geometric transformation applied to at least part of the image data set.
- 4. A method according to claim 3, wherein a characteristic of the non-uniformity of the spatially non-uniform geometric transformation is dependent on a position of a representation of the user input in relation to the display.
- 5. A method according to claim 3 or 4, wherein the spatially uniform geometric transformation results in a translation of the first graphics in the general direction of the user input to produce the second graphics.
- 6. A method according to claim 3, 4 or 5, wherein the spatially non-uniform geometric transformation results in a stretching of the first graphics in the general direction of the user input to produce the second graphics.
- 7. A method according to claim 3 or 4, wherein the spatially non-uniform geometric transformation results in a shrinking of the first graphics along two dimensions to produce the second graphics.
- 8. A method according to claim 3 or 4, wherein the spatially non-uniform geometric transformation results in a stretching of the first graphics along two dimensions to produce the second graphics.
- 9. A method according to claim 7 or 8, wherein the spatially non-uniform geometric transformation results in a warping of the second graphics in the general direction of the user input to produce the third graphics, wherein the degree of warping is dependent on the position of the user input in relation to the display.
- 10. A method according to any preceding claim, comprising detecting a release of the input from the user representative of the image manipulation request during said first image manipulation process, and performing said second image manipulation process without further user input to produce the third graphics.
- 11. A method according to claim 10, comprising reversing said second image manipulation process without further user input, after the third graphics have been output, to produce fourth graphics.
- 12. A method according to any of claims 1 to 9, comprising detecting a release of the input from the user representative of the image manipulation request during said second image manipulation process, and reversing said second image manipulation process in response to the detected release to produce fourth graphics.
- 13. A method according to any preceding claim, wherein the determination of the boundary condition being satisfied comprises determining that at least one outer limit of the image data set has met at least one outer limit of the display area.
- 14. A method according to any preceding claim, wherein: the image manipulation request relates to a representative movement of the user input, the representative movement moving on the display towards at least one outer limit of the retrieved image data set; the first type of alteration comprises a translation of image objects corresponding to the image manipulation request movement, applied to at least part of the image data set; the boundary condition relates to the at least one outer limit of the retrieved image data set; and the second type of alteration is an image shrinking alteration applied to at least part of the image data set.
- 15. A method according to any of claims 1 to 9, wherein: the image manipulation request relates to a representative movement of the user input, the representative movement moving on the display away from at least one outer limit of the retrieved image data set; the first type of alteration comprises a translation of image objects corresponding to the image manipulation request movement, applied to at least part of the image data set; the boundary condition relates to the at least one outer limit of the retrieved image data set; and the second type of alteration is an image stretching alteration applied to at least part of the image data set.
- 16. A method according to any preceding claim, wherein the boundary condition relates to a single outer limit of the image data set, and the second type of alteration is a one-dimensional image transformation applied to at least part of the image data set.
- 17. A method according to any of claims 1 to 11, wherein the boundary condition relates to two outer limits of the image data set, and the second type of alteration is a two-dimensional image transformation applied to at least part of the image data set.
- 18. A method according to any of claims 1 to 9, wherein the image manipulation request comprises a zoom-out request and wherein the determination of the boundary condition being satisfied comprises determining that a maximum zoom-out limit, beyond which no further image data set is present, has been reached.
- 19. A method according to any of claims 1 to 9, wherein the image manipulation request comprises a zoom-in request and wherein the determination of the boundary condition being satisfied comprises determining that a maximum zoom-in limit, beyond which no further image data set is present, has been reached.
- 20. A method according to any preceding claim, wherein the display comprises a touch-sensitive display and wherein the image manipulation request comprises a touch-sensitive gesture.
- 21. A method according to any preceding claim, wherein the image data set includes one or more image data portions which are not output on said display area before the image manipulation request is detected.
- 22. An apparatus for outputting graphics to a display, comprising: at least one processor; at least one memory; a display; wherein operation of the processor causes the apparatus to: retrieve an image data set from the at least one memory, the retrieved image data set comprising one or more image objects to be displayed by the display; output at least first graphics to a display area of the display, the first graphics corresponding to at least a portion of the retrieved image data set; detect an input from a user representative of an image manipulation request; perform a first image manipulation process on at least part of the retrieved image data set in accordance with the image manipulation request to produce second graphics, the first image manipulation process providing a first type of alteration to said at least part of the retrieved image data set; output the second graphics to the display area of the display; determine that a boundary condition relating to the retrieved image data set has been satisfied, the boundary condition relating to a limit of the retrieved image data set beyond which there is no further element of the retrieved image data set to be displayed; perform, in response to determining that the boundary condition has been satisfied, a second image manipulation process on at least part of the retrieved image data set to produce third graphics, the second image manipulation process providing a second type of alteration to said retrieved image data set, the second type of alteration being of a different type than said first type of alteration; and output the third graphics to the display area of the display.
- 23. An apparatus according to claim 22 operable to perform the method steps of any of claims 1 to 21.
- 24. A computer program comprising computer program instructions, which, when performed by a computer, enable the computer to perform the method of any of claims 1 to 21.
- 25. A method of outputting images to a display, the method comprising: retrieving an image data set, the retrieved image data set comprising one or more image objects to be displayed; outputting at least first graphics to a display area of the display, the at least first graphics corresponding to at least a portion of the retrieved image data set, wherein a limit of the retrieved image data set corresponds with a limit of a display area when the at least first graphics are displayed therein; detecting an input from a user representative of an image manipulation request to perform a geometric image transformation which goes beyond said limit; performing an image manipulation process on at least part of the retrieved image data set in accordance with the image manipulation request to produce second graphics, the image manipulation process comprising conducting a spatially non-uniform geometric transformation to the at least a portion of said retrieved image data set to provide visual feedback to the user indicating that said image manipulation request is a request to perform a geometric image transformation which goes beyond said limit; and outputting the second graphics to the display area of the display.
- 26. A method according to claim 25, wherein a characteristic of the non-uniformity of the spatially non-uniform geometric transformation is dependent on a position of a representation of the user input in relation to the display.
- 27. A method according to claims 25 or 26, wherein conducting the spatially non-uniform geometric transformation of the image data set comprises: determining the point of the at least first graphics corresponding to the starting point of the user input; segmenting the at least first graphics at the starting point into two parts along a line that is perpendicular to a general direction of the user input, associating the line with the user input, wherein the movement of the line corresponds with the movement of the user input; shrinking the part of the first image that the line is moving towards; and stretching the part of the first image that the line is moving away from.
- 28. A method according to claim 27, wherein the stretching and shrinking occur so that the first image retains its area and shape.
- 29. A method according to claims 27 or 28, comprising determining whether or not the movement of the line has satisfied a threshold, and when the movement of the line has been determined to satisfy the threshold, performing a second image manipulation process on at least part of the retrieved image data set to produce third graphics.
- 30. A method according to claim 29 wherein the second image manipulation process provides a spatially uniform geometric transformation to the at least part of the retrieved image data set.
- 31. A method according to claim 29 or 30, wherein the spatially uniform geometric transformation results in a translation of the second graphics in the general direction of the user input to produce the third graphics.
- 32. A method according to claim 27 or 28, wherein when the movement of the line has been determined not to satisfy the threshold, reversing the first image manipulation to produce third graphics.
- 33. A method substantially as described herein with reference to the description and accompanying drawings.
- 34. An apparatus substantially as described herein with reference to the description and accompanying drawings.
- 35. A computer program comprising computer program instructions which, when performed by a computer, enable the computer to perform the method substantially as described herein with reference to the description and accompanying drawings.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1211415.3A GB2503654B (en) | 2012-06-27 | 2012-06-27 | A method and apparatus for outputting graphics to a display |
| KR1020130065245A KR20140001753A (en) | 2012-06-27 | 2013-06-07 | A method and apparatus for outputting graphics to a display |
| US13/928,730 US20140002502A1 (en) | 2012-06-27 | 2013-06-27 | Method and apparatus for outputting graphics to a display |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1211415.3A GB2503654B (en) | 2012-06-27 | 2012-06-27 | A method and apparatus for outputting graphics to a display |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| GB201211415D0 GB201211415D0 (en) | 2012-08-08 |
| GB2503654A true GB2503654A (en) | 2014-01-08 |
| GB2503654B GB2503654B (en) | 2015-10-28 |
Family
ID=46704305
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB1211415.3A Expired - Fee Related GB2503654B (en) | 2012-06-27 | 2012-06-27 | A method and apparatus for outputting graphics to a display |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US20140002502A1 (en) |
| KR (1) | KR20140001753A (en) |
| GB (1) | GB2503654B (en) |
Families Citing this family (53)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7469381B2 (en) | 2007-01-07 | 2008-12-23 | Apple Inc. | List scrolling and document translation, scaling, and rotation on a touch-screen display |
| US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
| US9858649B2 (en) | 2015-09-30 | 2018-01-02 | Lytro, Inc. | Depth-based image blurring |
| JP2014182638A (en) * | 2013-03-19 | 2014-09-29 | Canon Inc | Display control unit, display control method and computer program |
| US10334151B2 (en) | 2013-04-22 | 2019-06-25 | Google Llc | Phase detection autofocus using subaperture images |
| EP3007049A4 (en) * | 2013-05-27 | 2017-02-15 | Nec Corporation | Display control device, control method thereof, and program |
| US10503388B2 (en) * | 2013-09-03 | 2019-12-10 | Apple Inc. | Crown input for a wearable electronic device |
| US11068128B2 (en) | 2013-09-03 | 2021-07-20 | Apple Inc. | User interface object manipulations in a user interface |
| US12287962B2 (en) | 2013-09-03 | 2025-04-29 | Apple Inc. | User interface for manipulating user interface objects |
| AU2014315234A1 (en) | 2013-09-03 | 2016-04-21 | Apple Inc. | User interface for manipulating user interface objects with magnetic properties |
| US10540073B2 (en) * | 2013-09-24 | 2020-01-21 | Lg Electronics Inc. | Mobile terminal and method for controlling camera-mounted external device |
| GB2519558A (en) * | 2013-10-24 | 2015-04-29 | Ibm | Touchscreen device with motion sensor |
| US9448687B1 (en) * | 2014-02-05 | 2016-09-20 | Google Inc. | Zoomable/translatable browser interface for a head mounted device |
| US9530183B1 (en) * | 2014-03-06 | 2016-12-27 | Amazon Technologies, Inc. | Elastic navigation for fixed layout content |
| KR102305114B1 (en) * | 2014-03-07 | 2021-09-27 | 삼성전자주식회사 | Method for processing data and an electronic device thereof |
| CN110825299B (en) | 2014-06-27 | 2024-03-29 | 苹果公司 | Reduced size user interface |
| US20160062571A1 (en) | 2014-09-02 | 2016-03-03 | Apple Inc. | Reduced size user interface |
| CN113824998B (en) | 2014-09-02 | 2024-07-12 | 苹果公司 | Method and apparatus for a music user interface |
| TW201610758A (en) | 2014-09-02 | 2016-03-16 | 蘋果公司 | Button functionality |
| TWI676127B (en) | 2014-09-02 | 2019-11-01 | 美商蘋果公司 | Method, system, electronic device and computer-readable storage medium regarding electronic mail user interface |
| CN104850340B (en) * | 2015-01-30 | 2018-11-30 | 小米科技有限责任公司 | Document display method and device on touching display screen |
| US20160253837A1 (en) * | 2015-02-26 | 2016-09-01 | Lytro, Inc. | Parallax bounce |
| US11328446B2 (en) | 2015-04-15 | 2022-05-10 | Google Llc | Combining light-field data with active depth data for depth map generation |
| US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
| US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
| US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
| US10565734B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline |
| US10440407B2 (en) | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
| US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
| US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
| US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc | Spatial random access enabled video system with a three-dimensional viewing volume |
| US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
| US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
| US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
| AU2016100651B4 (en) | 2015-06-18 | 2016-08-18 | Apple Inc. | Device, method, and graphical user interface for navigating media content |
| US9652125B2 (en) * | 2015-06-18 | 2017-05-16 | Apple Inc. | Device, method, and graphical user interface for navigating media content |
| US9979909B2 (en) | 2015-07-24 | 2018-05-22 | Lytro, Inc. | Automatic lens flare detection and correction for light-field images |
| US10275892B2 (en) | 2016-06-09 | 2019-04-30 | Google Llc | Multi-view scene segmentation and propagation |
| CN107807775B (en) * | 2016-09-09 | 2021-08-03 | Canon Inc. | Display control device, control method thereof, and storage medium storing control program thereof |
| JP6759023B2 (en) * | 2016-09-09 | 2020-09-23 | Canon Inc. | Display control device, control method therefor, program, and storage medium |
| US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
| KR102337875B1 (en) * | 2017-03-31 | 2021-12-10 | Samsung Electronics Co., Ltd. | Electronic apparatus and method |
| US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
| US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
| US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
| US10545215B2 (en) | 2017-09-13 | 2020-01-28 | Google Llc | 4D camera tracking and optical stabilization |
| US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
| US11435830B2 (en) | 2018-09-11 | 2022-09-06 | Apple Inc. | Content-based tactile outputs |
| JP2020135430A (en) * | 2019-02-20 | 2020-08-31 | Pioneer Corporation | Content display control device, content display control method, and program |
| EP3846014A1 (en) * | 2019-12-30 | 2021-07-07 | Dassault Systèmes | Unlock of a 3d view |
| CN114691002B (en) * | 2020-12-14 | 2023-10-20 | Huawei Technologies Co., Ltd. | Page sliding processing method and related devices |
| WO2022216299A1 (en) * | 2021-04-05 | 2022-10-13 | Google Llc | Stretching content to indicate scrolling beyond the end of the content |
| CN121255334 (en) * | 2021-11-30 | 2026-01-02 | Tencent Technology (Shenzhen) Co., Ltd. | Session-based information display method, apparatus, device, medium, and program product |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20110010659A1 (en) * | 2009-07-13 | 2011-01-13 | Samsung Electronics Co., Ltd. | Scrolling method of mobile terminal and apparatus for performing the same |
| US20110090255A1 (en) * | 2009-10-16 | 2011-04-21 | Wilson Diego A | Content boundary signaling techniques |
| US20110107264A1 (en) * | 2009-10-30 | 2011-05-05 | Motorola, Inc. | Method and Device for Enhancing Scrolling Operations in a Display Device |
| US20110161892A1 (en) * | 2009-12-29 | 2011-06-30 | Motorola-Mobility, Inc. | Display Interface and Method for Presenting Visual Feedback of a User Interaction |
| US20110202834A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Visual motion feedback for user interface |
| US20120026181A1 (en) * | 2010-07-30 | 2012-02-02 | Google Inc. | Viewable boundary feedback |
| US20120165078A1 (en) * | 2010-12-24 | 2012-06-28 | Kyocera Corporation | Mobile terminal device and display method of mobile terminal device |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7956847B2 (en) * | 2007-01-05 | 2011-06-07 | Apple Inc. | Gestures for controlling, manipulating, and editing of media files using touch sensitive devices |
2012
- 2012-06-27: GB application GB1211415.3A granted as patent GB2503654B (en), not active: Expired - Fee Related

2013
- 2013-06-07: KR application KR1020130065245 published as KR20140001753A (en), not active: Withdrawn
- 2013-06-27: US application US13/928,730 published as US20140002502A1 (en), not active: Abandoned
Also Published As
| Publication number | Publication date |
|---|---|
| GB201211415D0 (en) | 2012-08-08 |
| US20140002502A1 (en) | 2014-01-02 |
| KR20140001753A (en) | 2014-01-07 |
| GB2503654B (en) | 2015-10-28 |
Similar Documents
| Publication | Title |
|---|---|
| GB2503654A (en) | Methods of outputting a manipulation of a graphic upon a boundary condition being met |
| CN102662566B (en) | Screen content amplification display method and terminal |
| KR102384130B1 (en) | Hover-based interaction with rendered content |
| US9600166B2 (en) | Asynchronous handling of a user interface manipulation |
| JP5664147B2 (en) | Information processing apparatus, information processing method, and program |
| JP6240619B2 (en) | Method and apparatus for adjusting the size of an object displayed on a screen |
| US9250780B2 (en) | Information processing method and electronic device |
| US11003340B2 (en) | Display device |
| JP6171643B2 (en) | Gesture input device |
| US20120092381A1 (en) | Snapping User Interface Elements Based On Touch Input |
| US20120102437A1 (en) | Notification Group Touch Gesture Dismissal Techniques |
| US9685143B2 (en) | Display control device, display control method, and computer-readable storage medium for changing a representation of content displayed on a display screen |
| JP5371798B2 (en) | Information processing apparatus, information processing method and program |
| US9230393B1 (en) | Method and system for advancing through a sequence of items using a touch-sensitive component |
| US20110066983A1 (en) | Electronic device and method for providing shortcut interface |
| JP2011520209A5 (en) | |
| US20120278712A1 (en) | Multi-input gestures in hierarchical regions |
| US8762840B1 (en) | Elastic canvas visual effects in user interface |
| US10042445B1 (en) | Adaptive display of user interface elements based on proximity sensing |
| WO2023284442A1 (en) | Page processing method and apparatus, electronic device, and readable storage medium |
| US9619912B2 (en) | Animated transition from an application window to another application window |
| US10895954B2 (en) | Providing a graphical canvas for handwritten input |
| TWI420381B (en) | Systems and methods for application management, and computer program products thereof |
| JP5075975B2 (en) | Information processing apparatus, information processing method, and program |
| US9501206B2 (en) | Information processing apparatus |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PCNP | Patent ceased through non-payment of renewal fee | Effective date: 20220627 |