HK1061586B - Method for overlaying electronic ink on a document - Google Patents
- Publication number: HK1061586B
- Authority: HK (Hong Kong)
Description
This application claims priority to U.S. provisional patent application serial nos. 60/379749 (attorney docket No. 003797.00401) and 60/379781 (attorney docket No. 003797.87571), both filed 5/14/2002 and entitled "ink interface," the entire contents of which are incorporated herein by reference, including the appended claims.
Technical Field
Aspects of the present invention relate generally to methods and apparatus for overlaying electronic ink and, more particularly, to an application programming interface that enables developers to easily use a variety of ink overlay features.
Background
Typical computer systems, particularly those using graphical user interface (GUI) systems such as Microsoft Windows, are optimized for accepting user input from one or more discrete input devices, such as a keyboard for entering text and a pointing device, such as a mouse with one or more buttons, for driving the user interface. These ubiquitous keyboard and mouse interfaces provide for rapid creation and modification of documents, spreadsheets, database fields, drawings, photographs, and the like. However, there is a significant gap between the flexibility provided by these keyboard and mouse interfaces and that of non-computerized (i.e., standard) pen and paper. With standard pen and paper, a user can edit a document, write notes in the margins, and draw pictures and other shapes, among other things. In some cases, a user may prefer to use a pen to mark up a document rather than review it on a screen, because of the ability to freely make annotations outside the confines of the keyboard and mouse interface.
Some computer systems permit a user to draw on a screen. For example, the Microsoft READER application permits one to add electronic ink (also referred to herein as "ink") to a document. The system stores the ink and provides it to the user on request. Other applications (e.g., drawing applications associated with the Palm 3.x and 4.x and PocketPC operating systems, as known in the relevant art) permit drawings to be captured and stored. Also, various drawing applications (e.g., CorelDRAW) and photo editing applications (e.g., Photoshop) may be used with stylus-based input products (e.g., Wacom writing tablet products). These drawings include properties associated with the strokes used to make them. For instance, the width and color of a line may be stored with the ink. One goal of these systems is to replicate the appearance of physical ink applied to a sheet of paper. However, ink on paper may carry a large amount of information that is not captured by an electronic collection of coordinates and connecting line segments. Some of this information may include the thickness of the nib used (as seen in the width of the physical ink), the angle of the pen to the paper, the shape of the nib, the speed of ink deposition, and so forth.
Electronic ink presents another problem: it has conventionally been treated as part of the application in which it was written. The result is a substantial inability to make the richness of electronic ink available to other applications or environments. While text may be transferred between applications (e.g., by using a clipboard), ink generally lacks this same portability. For example, one may not create an image of a figure eight, copy the created image into a document by means of a clipboard, and then make the ink bolder. One difficulty is the non-portability of the image between applications.
Disclosure of Invention
Aspects of the present invention provide a flexible and efficient interface whose features, used alone or in combination, address one or more of the problems identified above with respect to conventional devices and systems, by invoking methods and/or receiving events related to electronic ink. Aspects of the present invention relate to improving the capabilities of stored ink. Other aspects relate to modifying stored ink.
It may be desirable to allow developers to easily add first-class support for ink features to their existing and new applications. It may also be desirable to encourage a consistent look and feel across applications that implement ink. For example, it may be desirable to add support for writing on and/or interacting with documents that would not otherwise accept ink input.
These and other features of the present invention will become apparent upon consideration of the following detailed description of the preferred embodiments.
Drawings
The foregoing summary of the invention, as well as the following detailed description of preferred embodiments, is better understood when read in conjunction with the appended drawings, which are included by way of example, and not by way of limitation with regard to the claimed invention.
FIG. 1 is a functional block diagram of an illustrative general-purpose digital computing environment that can be used to implement aspects of the present invention.
FIG. 2 is a plan view of an illustrative tablet computer and stylus that may be used in accordance with aspects of the present invention.
FIGS. 3-6 are functional block diagrams of architectures and interfaces that may be used in accordance with aspects of the present invention.
FIGS. 7-9 are illustrative screen shots of a document having one or more ink overlay objects according to aspects of the present invention.
Detailed Description
The following describes one way to overlay electronic ink on a document.
General computing platform
FIG. 1 is a functional block diagram of an example of a conventional general-purpose digital computing environment that can be used to implement aspects of the present invention. In FIG. 1, computer 100 includes a processing unit 110, a system memory 120, and a system bus 130 that couples various system components including the system memory to the processing unit. The system bus 130 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory 120 includes Read Only Memory (ROM) 140 and Random Access Memory (RAM) 150.
A basic input/output system (BIOS) 160, containing the basic routines that help to transfer information between elements within computer 100, such as during start-up, is stored in ROM 140. The computer 100 further includes a hard disk drive 170 for reading from and writing to a hard disk (not shown), a magnetic disk drive 180 for reading from or writing to a removable magnetic disk 190, and an optical disk drive 191 for reading from or writing to a removable optical disk 192 such as a CD-ROM or other optical media. The hard disk drive 170, magnetic disk drive 180, and optical disk drive 191 are connected to the system bus 130 by a hard disk drive interface 194, a magnetic disk drive interface 193, and an optical drive interface 194, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for the personal computer 100. Those skilled in the relevant art will appreciate that other types of computer-readable media that can store data accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, Random Access Memories (RAMs), Read Only Memories (ROMs), and the like, may also be used in the example operating environment.
A number of program modules may be stored on the hard disk drive 170, magnetic disk 190, optical disk 192, ROM 140, or RAM 150, including an operating system 195, one or more application programs 196, other program modules 197, and program data 198. A user may enter commands and information into the computer 100 through input devices such as a keyboard 101 and pointing device 102. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 110 through a serial port interface 106 that is coupled to the system bus, but may be connected by other interfaces, such as a parallel port, game port, or a Universal Serial Bus (USB). Further, these devices may also be connected directly to the system bus 130 via an appropriate interface (not shown). A monitor 107 or other type of display device is also connected to the system bus 130 via an interface, such as a video adapter 108. In addition to the monitor, personal computers typically include other peripheral output devices (not shown), such as speakers and printers. In a preferred embodiment, a pen digitizer 165 and accompanying pen or stylus 166 are provided in order to digitally capture freehand input. Although a direct connection between the pen digitizer 165 and the processing unit 110 is shown, in practice the pen digitizer 165 may be connected to the processing unit 110 via a serial, parallel, or other interface and the system bus 130, as is known in the relevant art. Furthermore, although the digitizer 165 is shown apart from the monitor 107, the usable input area of the digitizer 165 is preferably coextensive with the display area of the monitor 107. Further still, the digitizer 165 may be integrated into the monitor 107, or may exist as a separate device overlaying or otherwise appended to the monitor 107.
The computer 100 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 109. The remote computer 109 can be a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 100, although only a memory storage device 111 has been illustrated in FIG. 1. The logical connections depicted in FIG. 1 include a Local Area Network (LAN)112 and a Wide Area Network (WAN) 113. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
When used in a LAN networking environment, the computer 100 is connected to the local network 112 through a network interface or adapter 114. When used in a WAN networking environment, the personal computer 100 typically includes a modem 115 or other means for establishing communications over the wide area network 113, such as the Internet. The modem 115, which may be internal or external, is connected to the system bus 130 via the serial port interface 106. In a networked environment, program modules depicted relative to the personal computer 100 (or portions thereof) may be stored in the remote memory storage device.
It will be appreciated that the network connections shown are illustrative, and other means of establishing a communications link between the computers may be used. The existence of any of various well-known protocols such as TCP/IP, Ethernet, FTP, HTTP and the like is presumed, and the system can be operated in a client-server configuration to permit a user to retrieve web pages from a web-based server. Any of a variety of conventional web browsers can be used to display and manipulate data on web pages.
FIG. 2 shows an example of a stylus-based computer processing system (also referred to as a tablet PC) 201 that may be used in accordance with aspects of the present invention. Any or all of the features, subsystems, and functions of the system of FIG. 1 can be included in the computer of FIG. 2. Tablet PC 201 includes a large display surface 202 (e.g., a digitizing flat panel display, preferably a Liquid Crystal Display (LCD)) on which a plurality of windows 203 is displayed. Other display technologies that may be used include, but are not limited to, OLED displays, plasma displays, and the like. Using the tip of stylus 204 (also referred to herein as a "pointer"), a user may select, highlight, and write on the digitizing display area. Examples of suitable digitizing display panels include electromagnetic pen digitizers, such as Mutoh or Wacom pen digitizers. Other types of pen digitizers, such as optical digitizers, may also be used. Tablet PC 201 interprets marks made with stylus 204 in order to manipulate data, enter text, and execute conventional computer applications such as spreadsheets, word processing programs, and the like.
The stylus may be equipped with buttons or other features to augment its selection capabilities. In one embodiment, the stylus may be implemented as a "pencil" or "pen," in which one end constitutes a writing portion and the other end constitutes an "eraser" end which, when moved across the display, indicates the portions of the display to be erased. Other types of input devices (e.g., a mouse, trackball, etc.) may also be used. Additionally, a user's own finger may be used to select or indicate portions of the displayed image on a touch-sensitive or proximity-sensitive display screen. Consequently, the term "user input device," as used herein, is intended to have a broad definition and encompasses many variations on well-known input devices.
Concept of electronic Ink and Ink objects
Ink as used herein refers to electronic ink. Electronic ink may comprise a sequence or set of strokes, where each stroke includes a sequence or set of points. The sequence of strokes and/or points may be ordered by the time captured and/or by where the strokes and/or points appear on a page. A set of strokes may include sequences of strokes and/or points, and/or unordered strokes and/or points. Points may be represented using a variety of known techniques, including Cartesian coordinates (X, Y), polar coordinates (r, Θ), and other techniques known in the relevant art. A stroke may alternatively be represented as a point and a vector in the direction of the next point. A stroke is intended to encompass any representation of points or segments relating to ink, irrespective of the underlying representation of points and/or what connects the points. The gathering of ink typically begins at a digitizer (e.g., the digitizer of display surface 202). A user may place a stylus on the digitizer and begin to write or draw. At that point, new packets of ink (i.e., packets of data regarding the ink) are generated. The user may also move the stylus through the air sufficiently close to the digitizer to be sensed by the digitizer. At that point, packets (referred to herein as "in-air packets") may be generated based on the sensed in-air movements of the stylus. Packets may include not only position information but also stylus pressure and/or angle information.
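The stroke-and-packet model described above can be sketched as a minimal data structure. This is an illustrative model only; the class and field names below are assumptions for the sketch, not identifiers from any actual ink API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Packet:
    # One sampled point; pressure is an optional extra data channel
    x: float
    y: float
    pressure: float = 0.0

@dataclass
class Stroke:
    # A stroke: an ordered series of sampled points (packets)
    packets: List[Packet] = field(default_factory=list)

@dataclass
class Ink:
    # Electronic ink: an ordered collection of strokes
    strokes: List[Stroke] = field(default_factory=list)

# Build one short stroke from raw (x, y, pressure) digitizer samples
samples = [(0, 0, 0.4), (1, 2, 0.5), (3, 3, 0.6)]
ink = Ink([Stroke([Packet(x, y, p) for x, y, p in samples])])
```

A stroke could equally be stored as a starting point plus direction vectors, as the text notes; the list-of-points form is used here only because it is the simplest.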
To store ink, an Ink object may be created to represent the original ink strokes drawn by stylus 204 on display surface 202 and/or other input. Ink strokes may be gathered from anywhere on the display surface 202 or from a defined portion thereof (e.g., a particular window). An Ink object is essentially a container of ink data. The particular format in which the ink is stored in the Ink object is not important to the present invention; preferably, however, the Ink object stores the ink strokes as originally drawn.
Two illustrative ink object types may be defined. A tInk object ("t" for "text") may be implemented as an OLE object representing ink that is intended to form letters or words. The tInk object allows handwritten ink to be converted to text, for example by a text recognizer. Ink that is in a text context may thus be referred to as a tInk object. The color and/or font size of the textual ink, and whether the ink is underlined, bold, italic, and so on, may be set programmatically and may be based on the attributes of the text surrounding the tInk object. In other words, the attributes of the surrounding environment at the point where the tInk object is to be inserted may be applied to the tInk object. In one embodiment, a tInk object contains only a single word for submission to the text recognizer, so that a sentence would contain multiple tInk objects. By contrast, an sInk object ("s" for "sketch") may be defined as an object representing ink that is not intended to form words. The sInk object may also be an OLE object. An sInk object may thus be interpreted as a drawing or any other non-textual context. An sInk object may also be used to represent multiple words. An ink-compatible application (and/or a user) may designate an ink object as a tInk object or otherwise (e.g., an sInk object). For purposes of description, these two ink types are referred to herein as "tInk" and "sInk." It will be appreciated that other names may be used to represent the various types of ink objects that may be used. Moreover, alternative object types may be used to store electronic ink in any desired format.
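The tInk/sInk distinction can be illustrated with a small tagged-object model. The names below are illustrative stand-ins for the sketch, not the actual OLE object types.

```python
from enum import Enum

class InkKind(Enum):
    TINK = "text"    # ink meant to form a single word (text context)
    SINK = "sketch"  # drawing ink, or ink spanning multiple words

class InkObject:
    def __init__(self, kind, strokes):
        self.kind = kind
        self.strokes = strokes

# Per the single-word convention above, a two-word sentence becomes
# two tInk objects, while a doodle is a single sInk object.
sentence = [InkObject(InkKind.TINK, ["hello-strokes"]),
            InkObject(InkKind.TINK, ["world-strokes"])]
doodle = InkObject(InkKind.SINK, ["figure-eight-strokes"])
```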
API overview of ink controls
Referring to FIG. 3, an API, referred to herein as the ink control API, provides a developer with an object model for a variety of objects and controls. The ink control API may be made available to developers using a variety of application development platforms, such as the Microsoft native Win32 COM API, the Microsoft ActiveX API, and/or the Microsoft Managed API. The ink control API enables developers to easily add elegant ink support to existing non-ink-aware applications and to new applications; the developer need only add the appropriate controls and set various properties. The ink control API, which further encourages a consistent look and feel across applications that implement ink, may serve as an excellent starting point for implementing a user experience. The ink control API additionally provides ink user interface components that the developer would otherwise need to generate from scratch.
The various objects and controls of the ink control API include an InkCollector automation object 302, an InkCollector managed object 306, an InkOverlay automation object 303, an InkPicture ActiveX control 304, an InkOverlay managed object 305, a PictureBox WinForms control 301, and/or an InkPicture Winforms control 307. The InkOverlay object enables developers to easily add annotation functionality to applications and extend ink gathering functionality to provide support for basic editing, such as selecting, moving, resizing, and erasing ink. The InkPicture control contains some or all of the API components of the InkOverlay object and enables the developer to add a region to the window for gathering and editing ink. The InkPicture control may further enable the developer to add background pictures, images, and/or colors to the window.
These objects and controls, which are described further below, may interact with one or more host applications, such as an ActiveX host application (e.g., VB6) and/or a Win32 host application (collectively 301) and/or a Common Language Runtime (CLR) host application (VB7/C#) 306. The InkOverlay automation objects 303 and the InkPicture ActiveX controls 304 may be used by native Win32/ActiveX developers, while the InkOverlay managed objects 305 and the InkPicture WinForms controls 307 may be used by developers using the CLR. In the present figure, solid arrows represent an illustrative inheritance metaphor, while broken arrows indicate an illustrative usage metaphor.
InkCollector object
The InkCollector object is used to capture ink from an ink input device and/or deliver the ink to an application. The InkCollector object acts, in part, as a faucet, "pouring" ink into one or more different and/or distinct ink objects by gathering ink from one or more ink strokes and storing the ink in the associated ink object(s). The InkCollector object may attach itself to a known application window. It then provides real-time inking on that window by using any or all of the available tablet devices, which may include stylus 204 and/or a mouse. To use the InkCollector object, a developer creates it, specifies the window in which to gather the drawn ink, and enables the object. Once enabled, the InkCollector object may be set to gather ink in a variety of ink gathering modes, in which ink strokes and/or gestures are gathered. A gesture is a motion or other action of stylus 204 that is interpreted not as rendered ink, but as a request or command to perform some action or function. For example, one particular gesture may be performed for the purpose of selecting ink, while another may be performed for the purpose of italicizing ink. For each motion on or near the digitizer input, the InkCollector object gathers a stroke and/or a gesture.
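The gathering modes just described (ink only, gestures only, or both) can be sketched as follows. This toy collector is a stand-in for illustration only; the real InkCollector is a COM/managed component, and only the mode logic is being modeled here.

```python
from enum import Enum, auto

class GatherMode(Enum):
    INK_ONLY = auto()         # treat all input as ink strokes
    GESTURE_ONLY = auto()     # treat all input as gestures
    INK_AND_GESTURE = auto()  # accept both strokes and gestures

class InkCollectorSketch:
    """Toy stand-in for the InkCollector described above (not the real API)."""
    def __init__(self, window):
        self.window = window
        self.mode = GatherMode.INK_AND_GESTURE
        self.enabled = False
        self.strokes, self.gestures = [], []

    def gather(self, motion, is_gesture=False):
        if not self.enabled:
            return  # the object must be enabled before it gathers anything
        if is_gesture and self.mode in (GatherMode.GESTURE_ONLY,
                                        GatherMode.INK_AND_GESTURE):
            self.gestures.append(motion)
        elif self.mode in (GatherMode.INK_ONLY,
                           GatherMode.INK_AND_GESTURE):
            self.strokes.append(motion)

collector = InkCollectorSketch(window="hwnd-1234")  # hypothetical handle
collector.enabled = True                            # enable, as in the text
collector.gather([(0, 0), (5, 5)])                  # a stroke
collector.gather("tap", is_gesture=True)            # a gesture
```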
InkOverlay object
The InkOverlay object is useful in annotation scenarios where the end user is not concerned with performing recognition on the ink, but is instead interested in the size, shape, color, and position of the ink. It is well suited to note taking and basic scribbling. The primary intended use of the object is to display ink as ink. The default user interface is a transparent rectangle with opaque ink. InkOverlay extends the InkCollector class in several ways. For example, the InkOverlay object (and/or the InkPicture control discussed below) may support selecting, erasing, and resizing ink, as well as the delete, cut, copy, and paste commands.
One typical scenario in which the InkOverlay object may be useful is marking up a document, such as by making handwritten annotations, drawings, and the like over an underlying document. The InkOverlay object makes it easy to implement the inking and layering capabilities this scenario requires. For example, to work with InkOverlay, an InkOverlay object may be instantiated, attached to the hWnd of another window, and its Enabled property set to true.
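The instantiate-attach-enable workflow just described can be sketched as follows. The class and handle value are hypothetical; the real InkOverlay is a COM/managed object, not this stand-in.

```python
class InkOverlaySketch:
    # Not the real InkOverlay: a minimal stand-in showing the workflow.
    def __init__(self):
        self.hwnd = None      # window handle the overlay is attached to
        self.enabled = False  # whether ink is being gathered

    def attach(self, hwnd):
        self.hwnd = hwnd

# The three steps from the text: instantiate, attach to an hWnd, enable.
overlay = InkOverlaySketch()
overlay.attach("hwnd-of-document-window")  # hypothetical handle value
overlay.enabled = True
```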
Referring to FIG. 4, a high-level block diagram shows portions of the components that make up an InkOverlay object along with its internal and external dependencies. Arrows indicate a usage metaphor. An InkOverlay object 401 may receive ink from an InkCollector object 402. The InkOverlay object 401 may have a selection management function 403 and/or an editing function 404. As discussed in the examples below, the InkOverlay object 401 may have a transparent overlay window management function 405 for transparently overlaying another object (window) or other displayed data item, such as a scanned paper form. Externally, the InkOverlay object 401 may interact with a variety of applications and APIs. For example, an application may utilize the InkOverlay object 401 to implement various low-level inking functions. In one embodiment, such an application may be the Microsoft WINDOWS® INK SERVICES PLATFORM® (WISP) 406. It should be noted that the application 406 is not limited to WISP, nor to a Microsoft WINDOWS® environment, nor is any other component discussed herein. The InkOverlay object 401 may further interact with an API that automates many of the low-level WISP 406 functions. In this embodiment, this API is referred to as an automation API 407. The automation API 407 includes the ink control API discussed above and provides the developer with an object model that includes an Ink object, an InkCollector object, an InkOverlay object, and an InkPicture control. The InkOverlay object 401 may further interact with one or more operating system APIs (e.g., the Microsoft WINDOWS® Win32 API 408 and/or the Microsoft .NET® API).
The selection management function 403 of the InkOverlay object 401 supports the selection of ink. Ink may be selected in a number of ways, such as with a lasso tool (which selects objects contained within a traced area). The InkOverlay object 401 may also support tap selection, where any ink object tapped on and/or near is selected. When an ink object or group of ink objects is selected, resize handles (e.g., eight resize handles) may appear at the four corners, and at one or more midpoints between adjacent corners, of the ink's bounding box. Moving these resize handles causes the selected ink to resize in accordance with the motion of the handles. A keyboard or other modifier may be used to instruct the InkOverlay object to maintain the original aspect ratio while resizing. The ink may further be resized by any other desired means. Also, a keyboard or other modifier may be used to instruct the InkOverlay object to copy the selected ink during a drag operation, rather than move the ink while dragging. If the user presses and holds anywhere within the selected area, the ink becomes movable within the control. A rectangular selection metaphor, and/or word, sentence, and/or paragraph selection metaphors, may further be utilized. For example, tapping within an ink word may select that word, tapping anywhere within an ink sentence may select the entire sentence, and tapping anywhere within an ink paragraph may select the entire paragraph. Other means of selecting include using particular gestures that indicate selection behavior, such as a single tap on or near certain ink to select that ink, a double tap on or near a word to select the word, and a triple tap to select the entire sentence. Moreover, ink may be selected and/or modified by calling the API of the InkOverlay object directly, programmatically or in response to end-user input.
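The bounding-box resize handles described above (four corners plus the midpoints of the edges) can be computed as in this illustrative sketch; the function name and box representation are assumptions for the example.

```python
def resize_handles(bbox):
    """Return the eight handle positions (corners plus edge midpoints)
    for a selection bounding box given as (left, top, right, bottom)."""
    l, t, r, b = bbox
    mx, my = (l + r) / 2, (t + b) / 2
    return [(l, t), (mx, t), (r, t),    # top corners + top midpoint
            (l, my),         (r, my),   # left/right edge midpoints
            (l, b), (mx, b), (r, b)]    # bottom corners + bottom midpoint

# Handles for a selection whose bounding box is 10 wide and 20 tall
handles = resize_handles((0, 0, 10, 20))
```

Dragging a corner handle rescales in both axes; an aspect-ratio-preserving resize would simply apply the same scale factor to both axes, as the keyboard-modifier behavior in the text suggests.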
The InkOverlay object may also provide ink erasing functionality. For example, the InkOverlay object may provide a stroke erase mode and/or a point erase mode. In stroke erase mode, if the cursor is down and contacts an existing ink stroke, the entire ink stroke is erased. In point erase mode, if the cursor is down and contacts an existing ink stroke, only the area where the cursor and the ink stroke overlap is erased.
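The difference between the two erase modes can be illustrated with a minimal sketch, assuming strokes are simple lists of points (a deliberate simplification of the packet model described earlier) and the eraser hit is a single point.

```python
def stroke_erase(strokes, hit):
    """Stroke erase mode: remove any stroke the eraser touches."""
    return [s for s in strokes if hit not in s]

def point_erase(strokes, hit):
    """Point erase mode: remove only the touched point, splitting the
    stroke into the segments on either side of the erased region."""
    result = []
    for s in strokes:
        if hit not in s:
            result.append(s)
            continue
        i = s.index(hit)
        left, right = s[:i], s[i + 1:]
        result.extend(seg for seg in (left, right) if seg)
    return result

strokes = [[(0, 0), (1, 1), (2, 2)], [(5, 5), (6, 6)]]
after_stroke = stroke_erase(strokes, (1, 1))  # whole first stroke goes
after_point = point_erase(strokes, (1, 1))    # first stroke is split in two
```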
InkOverlay API
An illustrative Application Programming Interface (API) for an InkOverlay object will now be discussed with reference to FIG. 5. In FIG. 5, the InkOverlay object 501 is represented by a box, and the components (or functionally grouped components) of the API are shown as arrows 542-558 leading out of or into the box representing the InkOverlay object 501. In general, an arrow leading into the box of the InkOverlay object 501 refers to an API component (or group of components) that primarily modifies the InkOverlay object 501 (e.g., by changing one of its attributes) and/or provides information to the InkOverlay object 501. An arrow leading out of the box of the InkOverlay object 501 refers to an API component (or group of components) that primarily represents a flag or other information provided by the InkOverlay object 501 to its environment. However, the directions of these arrows are not meant to be limiting: an arrow leading into the InkOverlay object 501 does not preclude that component from also representing information provided by the InkOverlay object 501 to its environment, and likewise an arrow leading out of the InkOverlay object 501 does not preclude that component from also modifying or providing information to the InkOverlay object 501. FIG. 5 further shows a plurality of attributes 502-520 of the InkOverlay object 501.
In this illustrative embodiment, the InkOverlay API has some or all of the following enumerations (in any combination or subset thereof). An application gesture enumeration defines values that set interest in a set of application-specific gestures. A gathering mode enumeration defines values that determine the gathering mode setting of the InkOverlay object. An event interest enumeration defines which events are of interest to a developer using the InkOverlay object and/or the InkCollector object; the InkOverlay object may use this enumeration to determine which information is to be provided to the developer through events. A mouse pointer enumeration defines values that specify the type of mouse pointer displayed; this enumeration also appears in the InkPicture control and the InkCollector object. An overlay attach mode enumeration defines values that specify where to attach a new InkOverlay object: behind or in front of the controls and/or text in the window. Attaching the InkOverlay object in front means that the ink will be rendered in the window in front of the controls and/or text. Attaching the InkOverlay object behind means that the ink will be rendered directly on the window, and thus behind any other controls or child windows in the window hierarchy. An overlay editing mode enumeration defines values that specify which editing mode (drawing ink, deleting ink, or editing ink) the InkOverlay object should use. An eraser mode enumeration defines values that specify the manner in which ink is erased when the editing mode is set to delete. A system gesture enumeration defines values that set interest in a set of operating-system-specific gestures.
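Three of the enumerations described above (attach mode, editing mode, and eraser mode) can be sketched as follows. The member names are illustrative, not the actual identifiers from the Microsoft APIs.

```python
from enum import Enum, auto

class AttachMode(Enum):
    # Where the overlay renders relative to the window's contents
    IN_FRONT = auto()  # ink drawn over the window's controls and text
    BEHIND = auto()    # ink drawn directly on the window, behind children

class EditingMode(Enum):
    INK = auto()     # draw ink
    DELETE = auto()  # erase ink
    SELECT = auto()  # select/edit ink

class EraserMode(Enum):
    # Only meaningful when the editing mode is DELETE
    STROKE_ERASE = auto()
    POINT_ERASE = auto()

# Example configuration: erase whole strokes while in delete mode
mode, eraser = EditingMode.DELETE, EraserMode.STROKE_ERASE
```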
In this illustrative embodiment, the InkOverlay API also has one or more of the following attributes (in any combination or subset thereof), which can be set and which return the information they represent. An attach mode attribute 502 indicates whether the object is attached behind or in front of a given window. An auto-redraw attribute 503 indicates whether the InkCollector will redraw when the window is invalidated. A gathering-ink attribute 504 indicates whether the object is busy gathering ink. A gathering mode attribute 505 indicates whether the object gathers only ink, only gestures, or both ink and gestures. A cursors attribute 506 represents the collection of cursors encountered by the object. A drawing attributes attribute 507 represents the default drawing attributes used in gathering and displaying ink. The drawing attributes specified by this attribute are those given to a new cursor, and are applied to those cursors in the cursors collection whose default drawing attributes are set to none. A packet description attribute 508 represents the packet description for the InkOverlay object 501. A dynamic rendering attribute 509 indicates whether the InkOverlay object will dynamically render ink as it is gathered. An editing mode attribute 510 indicates whether the object is in ink mode, delete mode, or select/edit mode. An InkCollector-enabled attribute indicates whether the InkCollector will gather pen input (packets sent, cursor-in-range events, etc.). A plurality of eraser attributes 512 indicate whether ink is erased by stroke or by point, how the ink is erased, and the width of the eraser tip. A window handle attribute 513 indicates the handle of the window to which the InkOverlay object 501 is attached. An associated Ink object attribute 514 represents the Ink object associated with the InkOverlay object.
A margin attribute 515 represents the x-axis and y-axis margins (preferably in screen coordinates) of the InkOverlay object 501 around the window rectangle associated with the attached window handle. The margin attribute 515 may provide an alternative means of capturing the behavior associated with the window rectangle method 555 discussed below. One or more custom mouse cursor attributes 516 represent the current custom mouse icon, the type of mouse pointer displayed when the mouse is over the InkOverlay object 501 (e.g., over a paintable portion of the object), and/or the type of cursor displayed when an active pointing device (e.g., stylus 204 or mouse 102) causes the displayed cursor to be over the InkOverlay object. A renderer attribute 517 represents the renderer used to draw ink on the screen. A selection attribute 518 represents the set of currently selected ink strokes. A high-contrast ink attribute 519 indicates whether ink will be rendered in high contrast (e.g., in only one color) when the system is in high-contrast mode, and whether all selection UI (e.g., the selection bounding box and selection handles) will be drawn in high contrast. A tablet attribute 520 represents the tablet that the object is currently using to gather cursor input.
The InkOverlay API in this illustrative embodiment also has a plurality of associated events and methods (including in any combination or subset thereof). For example, there may be cursor-related events and methods 542, 544. Such cursor-related events occur depending on whether a cursor (e.g., the tip of stylus 204) is within the physical detection range of the tablet context, or in response to the cursor coming into physical contact with the surface of the digitizing tablet (e.g., surface 202). A cursor-related method is invoked in response to the occurrence of a corresponding cursor-related event. These features enable developers to extend and override the cursor functionality of the InkOverlay object.
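The "extend and override" pattern above is essentially publish/subscribe: developer-supplied handlers run when a cursor-related event fires. A minimal Python sketch, with hypothetical event names chosen for illustration only:

```python
class CursorEventSource:
    """Tiny publish/subscribe sketch of the cursor-related events above:
    handlers registered by a developer run when an event fires, which is
    how the cursor behavior can be extended or overridden."""
    def __init__(self):
        self._handlers = {"cursor_in_range": [], "cursor_down": []}

    def subscribe(self, event, handler):
        self._handlers[event].append(handler)

    def fire(self, event, *args):
        for handler in self._handlers[event]:
            handler(*args)

source = CursorEventSource()
log = []
# fires when the stylus tip enters the tablet's physical detection range
source.subscribe("cursor_in_range", lambda cursor: log.append(("in_range", cursor)))
# fires when the stylus tip touches the digitizing surface
source.subscribe("cursor_down", lambda cursor, x, y: log.append(("down", cursor, x, y)))
source.fire("cursor_in_range", "stylus-tip")
source.fire("cursor_down", "stylus-tip", 120, 45)
```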
The InkOverlay API may further have cursor-button-related events and methods 543. Such a cursor-button event occurs depending on whether a button on a cursor (e.g., stylus 204) is pressed down or released up. A cursor-button-related method is invoked in response to the occurrence of a corresponding cursor-button-related event. These features enable developers to extend and override the cursor button functionality of the InkOverlay object.
The InkOverlay API may further have gesture-related events and methods 545, 554. Such gesture-related events occur in response to either a recognized system gesture or a recognized application-specific gesture. Some gesture-related methods are invoked in response to the occurrence of respective gesture-related events. Another gesture-related method sets or gets the interest of the InkOverlay object in a given set of gestures. These features enable developers to extend and override the gesture functionality of an InkOverlay object.
The InkOverlay API may further have tablet-related events and methods 546, 558. Some tablet-related events occur in response to a tablet being added to or removed from the system. A tablet-related method is invoked in response to the occurrence of a corresponding tablet-related event. Other tablet-related methods 558 specify placing the InkOverlay object into an all-tablets mode or an integrated tablet mode. In all-tablets mode (which may be the default mode), if multiple tablet devices are attached to the system, all tablets are integrated: the available cursors can be used on any of these tablet devices, and each tablet maps to the entire screen with the same drawing attributes. In integrated tablet mode, an integrated tablet-style computer input surface shares the same surface with the display screen; this means that the entire tablet-style computer input surface maps to the entire screen, enabling the window to be updated automatically.
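The mapping behavior shared by both modes (the tablet input rectangle covering the entire screen) can be sketched in a few lines of Python. This is an illustrative sketch with made-up digitizer and display dimensions, not the patent's implementation:

```python
def tablet_to_screen(point, tablet_extent, screen_extent):
    """Map a tablet-space point onto the screen; in both tablet modes
    described above, the tablet input rectangle maps to the whole screen."""
    tx, ty = point
    tw, th = tablet_extent
    sw, sh = screen_extent
    return (tx * sw / tw, ty * sh / th)

# A hypothetical 10000 x 7500 digitizer mapped onto a 1920 x 1080 display:
center = tablet_to_screen((5000, 3750), (10000, 7500), (1920, 1080))
```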
The InkOverlay API may further have packet-related events and methods 547. Such packet-related events occur in response to packets being newly drawn and packets being newly received. A packet-related method is invoked in response to the occurrence of a corresponding packet-related event. These features may enable developers to extend and override the stylus functionality and responses of an InkOverlay object.
The InkOverlay API may also have rendering-related events and methods 548. Such rendering-related events occur just before the InkOverlay object 501 renders ink or any selection of ink, giving the developer the opportunity to change the appearance of the ink or change the ink itself. A rendering-related event may also occur in response to the InkOverlay object 501 having completely rendered some subset of its ink, enabling the developer to draw something in addition to the ink that has been drawn. A rendering-related method is invoked in response to the occurrence of the corresponding rendering-related event. This functionality may enable developers to extend and override the ink-rendering behavior of the InkOverlay object. The rendering-related methods may not actually be part of the InkOverlay object; rather, it may be useful for developers to implement them and connect them with the InkOverlay object so that they are invoked appropriately in response to triggered rendering-related events.
The InkOverlay API may also have selection-related events and methods 549. Some selection-related events occur before a selection change, providing the developer the opportunity to alter the selection change that is about to occur. A selection-related event may also occur in response to a selection having been completely changed (programmatically or as a result of an end-user action). Other selection-related events occur in response to the current selection being moved or having been moved. Still other selection-related events occur in response to the current selection being resized or having been resized. A selection-related method is invoked in response to the occurrence of a respective selection-related event. These features may enable developers to extend and override the selection and editing functionality of the InkOverlay object.
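The paired before/after events above suggest a common two-phase pattern: a "changing" handler runs before the change and may veto it, while "changed" handlers run only after it completes. A hypothetical Python sketch of that pattern (names invented for illustration):

```python
class SelectionModel:
    """Sketch of the paired before/after selection events described above:
    a 'changing' handler runs before the change and may veto it; 'changed'
    handlers run only after the change completes."""
    def __init__(self):
        self.strokes = set()
        self.changing = []     # each handler may return False to veto
        self.changed = []

    def select(self, new_strokes):
        new_strokes = set(new_strokes)
        for handler in self.changing:
            if handler(self.strokes, new_strokes) is False:
                return False                  # change vetoed before it occurred
        self.strokes = new_strokes
        for handler in self.changed:
            handler(self.strokes)
        return True

model = SelectionModel()
history = []
model.changing.append(lambda old, new: len(new) <= 2)   # veto large selections
model.changed.append(lambda strokes: history.append(sorted(strokes)))
accepted = model.select({1, 2})
rejected = model.select({1, 2, 3})
```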
The InkOverlay API may further have stroke-related events and methods 550. One such stroke-related event occurs in response to the user drawing a new stroke on any tablet. Other stroke-related events occur in response to a stroke that is about to be deleted or a stroke that has been deleted. A stroke-related method is invoked in response to the occurrence of a respective stroke-related event. These features may enable developers to extend and override the ink-erasing functionality of the InkOverlay object.
The InkOverlay API may have a variety of further miscellaneous methods. For example, a drawing method 552 may draw the ink and selection UI for a particular rectangle into a provided device context (e.g., screen, printer, etc.). Other methods 553 set or get the current state of a particular InkOverlay event (e.g., whether the event is being listened for or used). Still other methods 555 set the window rectangle (in the window coordinate system) within which ink is drawn, or get that window rectangle. Another method 556 determines whether a given coordinate falls within a resize handle, within the interior of the selected area, or within no selection at all. A constructor 557 specifies the creation of a new InkOverlay object that is attached to a particular window handle, may be on a particular tablet, and maps a window input rectangle to a tablet input rectangle.
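The hit-test method 556 described above classifies a coordinate into one of three outcomes. A minimal, self-contained Python sketch of such a classifier (the geometry and the 6-pixel handle size are assumptions for illustration):

```python
def hit_test(point, selection_rect, handle_size=6):
    """Hypothetical sketch of a hit test like method 556: classify a
    coordinate as hitting a resize handle, the selection interior,
    or nothing at all."""
    x, y = point
    left, top, right, bottom = selection_rect
    # check the four corner resize handles first
    for hx, hy in ((left, top), (right, top), (left, bottom), (right, bottom)):
        if abs(x - hx) <= handle_size and abs(y - hy) <= handle_size:
            return "resize-handle"
    if left <= x <= right and top <= y <= bottom:
        return "selection-interior"
    return "none"

corner = hit_test((100, 100), (100, 100, 200, 150))   # on a corner handle
inside = hit_test((150, 125), (100, 100, 200, 150))   # inside the selection
miss = hit_test((500, 500), (100, 100, 200, 150))     # outside everything
```

Checking the handles before the interior matters: a corner point lies inside the rectangle too, so the more specific outcome must win.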
The InkOverlay API in this illustrative embodiment may also have a plurality of margin constants (not shown). A first margin constant is a value specifying whether to clip strokes when they fall outside the default margin. A second margin constant is the default margin used by the margin attribute. These constants may also appear as attributes in the InkCollector object and the InkPicture control.
InkPicture control
As mentioned previously, a control (referred to herein as an InkPicture control) (which may be, for example, an ActiveX control) may be created that enables a developer to add a window for ink gathering and editing. The InkPicture control provides the ability to place an image into an application or web page to which the user can add ink. The image may be in any format, such as the .jpg, .bmp, .png, or .gif format. The InkPicture control may be used primarily in scenarios where ink need not be recognized as text but may instead (or additionally) be stored as ink. In an illustrative embodiment, the runtime user interface for the InkPicture control is a window having, for example, an opaque background (e.g., a solid color, a picture background, or both) and containing opaque or semi-transparent ink. In one illustrative embodiment, the InkPicture control wraps the InkOverlay object with an ActiveX or other control.
InkPicture API
Referring to FIG. 6, an illustrative InkPicture control 601 is shown. The InkPicture control 601 exposes some or all of the API components of the InkOverlay object 501, as well as some or all of the other API components shown in FIG. 6. For example, in one illustrative embodiment, the InkPicture control 601 may enable access to all InkOverlay API components except the attach mode attribute 502 and/or the window handle attribute 513. As discussed below, the InkPicture control 601 may have its own API that adds functionality to that of the InkOverlay API. In some embodiments, the InkPicture control 601 may be an ActiveX control and may add the following functionality as compared with the InkOverlay object 501: keyboard events, control resizing events, additional mouse events, and/or background color and image-related attributes. Also, the InkPicture control 601 may inherit from the Microsoft PictureBox control. For example, the PictureBox may implement some or all of the attributes discussed herein with respect to the InkPicture control 601, such as the background image.
In FIG. 6, the InkPicture control 601 is represented by a box, and components (or functionally grouped components) of the API are shown as arrows 640-658 coming out of or into the box representing the InkPicture control 601. In general, an arrow into the InkPicture control 601 box refers to an API component (or a group of components) that primarily modifies the InkPicture control 601 (e.g., by changing some attribute thereof) and/or provides information to the InkPicture control 601. An arrow coming out of the InkPicture control 601 box refers to an API component (or a group of components) that primarily represents some flag or other information provided by the InkPicture control 601 to its environment. However, the direction of these arrows is not meant to be limiting; an arrow entering the InkPicture control 601 does not exclude information also being provided by the InkPicture control 601 to its environment. Likewise, an arrow leading from the InkPicture control 601 does not preclude information also being modified in or provided to the InkPicture control 601. FIG. 6 further illustrates a plurality of attributes 602-626 of the InkPicture control 601.
In one illustrative embodiment, the API for the InkPicture control 601 may have one or more enumerations (not shown). For example, an ink picture size enumeration defines values that specify how a background picture behaves within an InkPicture control, such as whether the picture is automatically resized to fit within the control, is centered within the control, is displayed at its normal size within the control, or is stretched within the control. Also, a user interface enumeration defines values that specify the state of the user interface for the InkPicture control, such as the state of the focus and keyboard prompts, whether the focus rectangle is displayed after a state change, and/or whether the keyboard prompt is underlined after a state change.
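The four background-picture behaviors in the size enumeration above can be expressed as a placement function that returns the rectangle at which the picture is drawn. This is a hypothetical sketch; the mode names and geometry are assumptions, not the control's actual values:

```python
def place_picture(picture, control, mode):
    """Hypothetical sketch of the picture-size enumeration above: return the
    (x, y, w, h) rectangle at which a background picture of size `picture`
    is drawn inside a control of size `control`."""
    pw, ph = picture
    cw, ch = control
    if mode == "stretch":
        return (0, 0, cw, ch)
    if mode == "center":
        return ((cw - pw) // 2, (ch - ph) // 2, pw, ph)
    if mode == "auto-fit":                      # scale to fit, keeping aspect
        scale = min(cw / pw, ch / ph)
        w, h = int(pw * scale), int(ph * scale)
        return ((cw - w) // 2, (ch - h) // 2, w, h)
    return (0, 0, pw, ph)                       # "normal": top-left, actual size

stretched = place_picture((100, 50), (200, 200), "stretch")
centered = place_picture((100, 50), (200, 200), "center")
fitted = place_picture((100, 50), (200, 200), "auto-fit")
```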
In the illustrative embodiment, the API for the InkPicture control 601 may have some or all of a plurality of associated attributes 602-626 (including in any combination or subset thereof). For example, one or more accessibility attributes 602 represent the name and description of the InkPicture control used by an accessibility client application, as well as the accessible role of the InkPicture control. An anchor attribute 603 indicates which edges of the InkPicture control are anchored to the edges of its container. One or more background attributes 604 represent the background color for the InkPicture control and the background image displayed in the InkPicture control. An edge style attribute 605 represents the style of the border of the InkPicture control. A validation attribute 606 indicates whether the InkPicture control causes validation to be performed on any control that loses focus when this control receives focus. A container attribute 607 represents the container containing the InkPicture control. A dock attribute 608 indicates which edge or edges of the parent container the InkPicture control docks against. One or more drag attributes 609 indicate the icon displayed when the pointer is in a drag-and-drop operation and whether the manual or automatic drag mode is used for a drag-and-drop operation. An enabled attribute 610 indicates whether the InkPicture control is able to receive focus. One or more dimension attributes 611 represent the height of the InkPicture control, the width of the InkPicture control, and both the height and width of the InkPicture control. These dimension attributes may be in any unit, such as pixels. A context-sensitive help attribute 612 represents a context identifier associated with the InkPicture control and may be used to provide context-sensitive help to an application. A window handle attribute 613 represents the handle of the window in which the ink is drawn. An image attribute 614 represents the image displayed in the InkPicture control.
A control array index attribute 615 represents the number that identifies the InkPicture control in a control array. One or more position attributes 616 represent the distance between the internal left edge of the control and the left edge of its container, and the distance between the internal top edge of the control and the top edge of its container. A lock attribute 617 indicates whether the contents of the InkPicture control can be edited. A visibility attribute 618 indicates whether the InkPicture control is visible. A control name attribute 619 represents the name of the InkPicture control. An object attribute 620 represents the object corresponding to the InkPicture control. A parent object attribute 621 represents the object on which the control is located. A size mode attribute 622 indicates how the InkPicture control handles the placement and resizing of its image. One or more tab attributes 623 indicate the tab order of the InkPicture control within its parent container and whether the user can use the Tab key to give focus to the InkPicture control. An object tag attribute 624 represents extended attributes or custom data about the object. A tool-tip attribute 625 represents the text displayed when the mouse (or stylus) hovers over the InkPicture control. A help attribute 626 represents a context number associated with the InkPicture control; the help attribute 626 may be used to provide context-sensitive help for an application via a "What's This?" pop-up.
The InkPicture API in this illustrative embodiment may further have a plurality of associated events and methods (including in any combination or subset thereof). For example, a set-focus method 640 specifies that focus should be given to the InkPicture control. One or more focus events 641 occur in response to the InkPicture control losing or receiving focus. A user interface focus event 642 occurs in response to a change in the focus or keyboard user-interface prompts. A z-order method 643 specifies whether the InkPicture control is placed at the front or the back of the z-order within its graphical hierarchy. A control size event 644 occurs in response to the InkPicture control having been resized. A size mode event 645 occurs in response to the size mode attribute 622 having been changed. A resize/move method 646 specifies the movement and/or resizing of the InkPicture control. A style event 647 occurs in response to a change in the style of the InkPicture control. A create method 648 specifies the creation of a new InkPicture control. A drag method 649 specifies the start and/or cancellation of a drag operation on the InkPicture control. One or more mouse/stylus button events 650 occur in response to a mouse/stylus pointer being over the InkPicture control and a mouse button (or stylus button) being pressed or released. One or more click events 651 occur in response to the InkPicture control being clicked or double-clicked. One or more mouse entry/exit events 652 occur in response to a mouse/stylus pointer entering or exiting a display region associated with the InkPicture control. One or more mouse movement events 653 occur in response to the mouse/stylus pointer moving over or hovering over the InkPicture control. A mouse wheel event 654 occurs in response to movement of the mouse wheel while the InkPicture control has focus.
A drag-over event 655 occurs in response to an object being dragged over the boundary of the InkPicture control. A drag-and-drop event 656 occurs in response to a drag-and-drop operation being completed. One or more handle methods 657 generate events in response to a handle being created or destroyed. One or more key events 658 occur in response to a key being pressed or released while the InkPicture control has focus. The InkPicture control 601 may further send any or all of the events discussed above with respect to the InkOverlay object 501.
Overlaying electronic ink
Referring to FIG. 7, a document 701 may be generated or provided. The document in the illustrative embodiment of FIG. 7 is a text document. However, the term document should be broadly construed herein to include any other type of document, such as, but not limited to, a word processing document (such as one produced using Microsoft WORD®), an image document, a graphics document, a text-plus-graphics document, a scanned paper document, a spreadsheet document, a photograph, and/or a form having multiple data fields. As used herein to describe the present invention, the term "document" also includes certain software applications within its scope. An InkOverlay object and/or an InkPicture control may be defined to create one or more inking surfaces (e.g., windows) that are placed over some or all of the document 701. The window or other inking surface is preferably transparent (fully transparent or translucent) so that the underlying document 701 is visible. However, some or all of the window may instead be opaque and/or may have a background image and/or color (e.g., by using one or more of the illustrative background attributes 604 of an illustrative InkPicture control). When a background image is used, the background image may be the document itself, as an alternative to overlaying the window on a separate document. The window may optionally have a border (illustratively shown as a dashed line), which may or may not be visible. When a user writes with stylus 204 in the window area on the screen, ink data is gathered from the handwriting and is rendered and displayed as electronic ink 703 in the window. In this manner, the handwritten ink appears to be written on the document 701. The ink data may also be stored in an object, such as an Ink object. Also, one or more events (e.g., rendering-related events 548) may be triggered when the ink is rendered, and/or when rendering begins, and/or after rendering of the ink is complete.
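The flow just described (attach a transparent window to the document window, gather ink there, then render it) can be sketched as a small Python model. Every name here is a hypothetical stand-in for illustration; the real windowing and digitizer plumbing is omitted:

```python
class TransparentInkWindow:
    """Hypothetical sketch of the overlay flow above: a transparent second
    window is created over the document window, ink packets are gathered
    there, and the strokes are rendered so they appear on the document."""
    def __init__(self, document_window_handle):
        self.attached_handle = document_window_handle  # handle of the first window
        self.transparent = True                        # underlying document stays visible
        self.strokes = []                              # gathered ink data
        self.rendered = []                             # strokes that have been drawn

    def collect(self, packets):
        """Gather one handwritten stroke as a list of (x, y) packets."""
        self.strokes.append(list(packets))

    def render(self):
        """Render all gathered ink; returns the number of rendered strokes."""
        self.rendered = [list(s) for s in self.strokes]
        return len(self.rendered)

ink_window = TransparentInkWindow(document_window_handle=0x42)
ink_window.collect([(10, 10), (12, 14), (15, 20)])   # one handwritten stroke
count = ink_window.render()
```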
The user may further select a portion of rendered ink 703 and change the selected portion in a variety of ways. When at least a portion of ink 703 is selected (e.g., by circling the selected portion with stylus 204), an indication (reference) of the selected portion may be stored. The selected portion may be moved and/or resized, in which case one or more events, such as events 549, may be triggered as the selection moves or resizes, and/or when the movement or resizing begins, and/or when the selection completes the movement or resizing. Some or all of ink 703 (e.g., one or more strokes) may further be deleted. For example, a user and/or an application may request that at least a portion of ink 703 be deleted, and one or more events, such as event 551, may be triggered when the ink deletion begins and/or after the ink deletion is complete.
In view of the above, application developers may have programmatic access to the ink within the InkOverlay object and/or the InkPicture control (i.e., they are able to directly modify internal structures without having to go through user input). The developer and/or user may further be able to alter the selection of ink and/or various other attributes. The InkOverlay object may then manage the internal details of setting up the tablet context, listening for digitizer events, and gathering and interpreting ink according to its current mode.
For example, a developer may easily access events associated with a new stroke and may compare the location of the new stroke with text and/or objects in the underlying document 701 by obtaining location metadata from the new stroke. Thus, by accessing the various events and methods described herein, an application developer may add a data structure to an application that enables mapping ink to application data. This may enable, for example, gestures and/or other commands issued by a user and/or an application to alter the underlying document 701 through the InkOverlay object. For example, as shown in FIG. 7, a portion of the text in document 701 is circled in ink, and a large "B" is drawn in the circle. This may be interpreted as a command to change the circled text in document 701 to bold text. Alternatively, a word may be deleted and/or inserted with a gesture and/or other command as shown in FIG. 7 (e.g., deleting a word in document 701 and replacing it with a newly inserted word). The results of these gestures are shown in FIG. 8.
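The location comparison described above amounts to intersecting a stroke's bounding box with the bounding boxes of words in the underlying document. A hypothetical Python sketch (coordinates and word boxes are invented for illustration):

```python
def strokes_to_targets(strokes, word_boxes):
    """Hypothetical sketch of mapping ink to application data: compare each
    stroke's bounding box against the bounding boxes of words in the
    underlying document and report which words it covers."""
    def bbox(points):
        xs, ys = zip(*points)
        return (min(xs), min(ys), max(xs), max(ys))

    def overlaps(a, b):
        # axis-aligned rectangle intersection test
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

    hits = {}
    for i, stroke in enumerate(strokes):
        sb = bbox(stroke)
        hits[i] = [w for w, wb in word_boxes.items() if overlaps(sb, wb)]
    return hits

words = {"ink": (0, 0, 30, 10), "overlay": (40, 0, 90, 10)}
circle = [(38, -2), (95, -2), (95, 12), (38, 12)]  # a stroke drawn around "overlay"
targets = strokes_to_targets([circle], words)
```

An application could then treat a gesture stroke whose box covers a word (e.g., a circled "B") as a command against that word, as in the bolding example above.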
The developer may further easily configure his or her application to rearrange the ink in the InkOverlay object as the underlying text and/or objects move in the underlying document 701. This may be accomplished, for example, by positioning ink strokes in the InkOverlay object window and moving and/or resizing the ink strokes.
Developers can further easily extend the native editing functionality of an InkOverlay object to include concepts such as highlighting by listening to the various events described herein. This may be achieved, for example, by overriding the default drawing attributes. The developer may also add functionality such as selectively read-only strokes (by selectively rejecting user operations on particular strokes), recognition (by passing strokes into a recognizer), and/or natural user gestures such as erasing with the back of stylus 204 (by listening for "new cursor" events and switching the mode of the InkOverlay control).
Also, more than one InkOverlay object and/or InkPicture control may be placed on the document 701 at any one time, and the multiple objects and/or controls may be layered. Referring to FIG. 9, for example, a second InkOverlay object may be instantiated and may have a second window with a second optional border 901. The same user or another user may write ink 902 in the second InkOverlay object window, and the associated ink data may be stored in the second InkOverlay object and/or rendered in the window of the second InkOverlay object. Alternatively, the user may write ink in the first InkOverlay object window where the first and second windows overlap, and the ink may be sent to the second window.
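The layered-windows behavior above, including routing ink written in an overlap region to a window other than the one that received it, can be modeled in a few lines. This is an illustrative sketch with invented names, not the patent's mechanism:

```python
class LayeredOverlays:
    """Hypothetical sketch of stacking multiple overlay windows: ink written
    where windows overlap can be routed to a window other than the topmost."""
    def __init__(self):
        self.windows = []                     # topmost window is last

    def add(self, name, rect):
        self.windows.append({"name": name, "rect": rect, "ink": []})

    def write(self, point, target=None):
        """Store an ink point; by default it goes to the topmost window
        under the point, but it can be redirected to a named window."""
        x, y = point
        for win in reversed(self.windows):    # topmost first
            left, top, right, bottom = win["rect"]
            if left <= x <= right and top <= y <= bottom:
                dest = target or win["name"]  # optionally redirect the ink
                for w in self.windows:
                    if w["name"] == dest:
                        w["ink"].append(point)
                return dest
        return None

stack = LayeredOverlays()
stack.add("first", (0, 0, 100, 100))
stack.add("second", (50, 50, 150, 150))       # overlaps the first window
top = stack.write((60, 60))                   # lands in the topmost window
redirected = stack.write((60, 70), target="first")  # ink sent to the other window
```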
While the exemplary systems and methods described herein embodying various aspects of the present invention are shown by way of example, it will be understood, of course, that the invention is not limited to these embodiments. Modifications may be made by those of ordinary skill in the art, particularly in light of the foregoing teachings. For example, each of the elements of the foregoing embodiments may be utilized alone or in combination with elements of the other embodiments. While the invention has been defined by the appended claims, these claims are illustrative in that the invention is intended to include the elements and steps described herein in any combination or subcombination. Thus, there are any number of alternative combinations for defining the invention, incorporating one or more elements from the specification (including the description, claims, and drawings) in various combinations or subcombinations. In light of the present disclosure, it will be apparent to those of ordinary skill in the art that alternative combinations of aspects of the present disclosure, either alone or in combination with one or more elements or steps defined herein, may be utilized as modifications or alterations of, or as part of, the present disclosure. It is to be understood that the written description of the invention contained herein is intended to cover all such modifications and alterations. Also, it should be appreciated that although various names for objects and other API components are provided herein, such names are merely illustrative and any name may be used without departing from the scope of the present invention.
Claims (23)
1. A method for overlaying ink on a document, the method comprising the steps of:
displaying the document in a first window, the first window having a handle;
creating a transparent second window over the first window;
assigning the second window to a handle of the first window;
collecting ink data in a second window; and
rendering the ink data into rendered ink in a second window.
2. The method of claim 1 including the step of storing the ink data as an object.
3. The method of claim 1, wherein the second window is translucent.
4. The method of claim 1, wherein the second window has an opaque border.
5. The method of claim 1, wherein the document comprises a text document.
6. The method of claim 1, wherein the document comprises a Microsoft WORD® document.
7. The method of claim 1, further comprising the step of:
generating a first event in the rendering step; and
a second event is generated after the rendering step is completed.
8. The method of claim 1, further comprising the step of:
receiving, from a software application separate from the second window, a request to delete at least a portion of the rendered ink;
generating a first event in response to the request;
deleting the at least a portion of the rendered ink; and
a second event is generated after the deletion step is completed.
9. The method of claim 1, further comprising the step of:
receiving a request from a software application separate from the second window to select at least a portion of the rendered ink;
selecting the at least a portion of the rendered ink; and
storing an indication of the at least a portion of the ink.
10. The method of claim 1, further comprising the step of:
selecting at least a portion of the rendered ink;
receiving, from a software application separate from the second window, a request to change the at least a portion of the rendered ink;
generating a first event in response to the request;
changing the at least a portion of the rendered ink; and
a second event is generated after the changing step is completed.
11. The method of claim 10, wherein the step of changing comprises changing the amount of the at least a portion of the rendered ink that is selected.
12. The method of claim 10, wherein the step of changing comprises changing the at least one portion of the rendered ink to another at least one portion of the rendered ink.
13. A method for overlaying ink on a document, said document being displayed in a first window, said first window having a handle, said method comprising the steps of:
generating a second window, the second window being at least partially transparent and generated over at least a portion of the document;
assigning the second window to a handle of the first window;
gathering ink data in the second window at a location over at least a portion of the document; and
rendering the ink data into rendered ink.
14. The method of claim 1, wherein the ink data defines a command, and the method further comprises the step of adjusting the content of the document according to the command.
15. The method of claim 13, wherein the ink data defines a command, and the method further comprises the step of adjusting the content of the document according to the command.
16. The method of claim 13, wherein the rendering step comprises rendering the ink data into rendered ink in a second window.
17. A method for overlaying ink on a document, said method comprising the steps of:
displaying a document in a first window associated with a software application;
the software application requesting, using an application programming interface, to generate a transparent second window on the document;
receiving stylus input directed to a second window;
generating ink data from the stylus input; and
the ink data is rendered in a second window over the document into rendered ink.
18. The method of claim 17, wherein the stylus input is made on a contact-sensitive display, and the rendering step comprises rendering ink data on the contact-sensitive display.
19. The method of claim 1, further comprising the step of a software application corresponding to the first window requesting generation of the second window.
20. The method of claim 1, further comprising setting an attribute associated with the second window and representing a handle to the first window.
21. The method of claim 1, further comprising reading an attribute associated with the second window and indicating whether the second window is in an ink-only collection mode, a gesture-only collection mode, or a gesture and ink mode.
22. The method of claim 1, further comprising the step of a software application separate from the second window selecting at least a portion of the ink data in the second window.
23. The method of claim 1, further comprising the step of a software application separate from the second window requesting modification of at least a portion of the ink data in the second window.
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US37978102P | 2002-05-14 | 2002-05-14 | |
| US37974902P | 2002-05-14 | 2002-05-14 | |
| US60/379,749 | 2002-05-14 | ||
| US60/379,781 | 2002-05-14 | ||
| US10/183,987 | 2002-06-28 | ||
| US10/183,987 US8166388B2 (en) | 2002-05-14 | 2002-06-28 | Overlaying electronic ink |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1061586A1 HK1061586A1 (en) | 2004-09-24 |
| HK1061586B true HK1061586B (en) | 2007-10-26 |