HK1057121B - Method and system for rendering an image as a display comprising a plurality of lines of pixels
- Publication number: HK1057121B (application HK03109409.3A)
- Authority: HK (Hong Kong)
Description
Technical Field
The present invention relates to shape processors, and in particular, to methods and systems for rendering images as displays comprising rows of pixels.
Background
Graphical rendering of abstract shapes can require substantial processing of the shape description data. Known methods for processing shapes are found, for example, in the Java 2D API, which provides a software tool for processing two-dimensional vector graphics. However, there is also a need for shape processing engines that reduce computational complexity to conserve processing resources, particularly in embedded systems that include display devices.
Disclosure of Invention
The shape processor is a rendering module that can be used to stream graphical objects having a predefined format into a frame buffer or physical display. Documents rendered by the shape processor may be broken down into elementary graphical objects and passed to the shape processor, which in turn composes the objects for display. The shape processor advantageously processes each object in gray scale until pixel data for the object is output to a display or frame buffer.
According to an aspect of the invention, there is provided a method for presenting an image as a display comprising a plurality of rows of pixels, comprising: receiving a stream of a plurality of objects to be displayed, each object containing a shape and a fill; for each object, converting the shape of the object into a plurality of lines of encoded scan data, wherein the encoded scan data has one of at least two possible states for each pixel of the display, the at least two possible states including a first state and a second state, wherein the first state indicates that the pixel is inside the shape and the second state indicates that the pixel is outside the shape; and blending (226, 516) each of the lines of encoded scan data and the fill into a line of a frame for the display. The method is characterized in that converting the shape of the object into the plurality of lines of encoded scan data includes: a. representing each pixel of said display as a sub-pixel matrix comprising one or more sub-pixel regions covering the pixel; b. for each horizontal row in the sub-pixel matrix, generating intersection data, wherein the intersection data comprises coordinates for each intersection between the shape of the object and the sub-pixel matrix; c. processing the intersection data for each row of the sub-pixel matrix to extract "on" sub-pixel strings within the shape or "off" sub-pixel strings outside the shape; and, for each row of the display: d. analyzing the extracted sub-pixel strings to identify pixel strings within the shape associated with the first state, pixel strings outside the shape associated with the second state, and transition pixel strings associated with a third state, wherein the transition pixel strings are at the edges of the shape such that they are partially within and partially outside the shape; e. further processing those pixels identified as transition pixels to generate a gray scale value for each transition pixel corresponding to the portion of the transition pixel within the shape; f. generating a line of encoded scan data, wherein the line of encoded scan data includes a length of the pixel string for each state, and generating an associated gray scale value for the transition pixels of the third state.
According to another aspect of the present invention, there is provided a system for processing a graphical object for rendering an image as a display comprising a plurality of rows of pixels, comprising: receiving means for receiving a stream of a plurality of objects to be displayed, each object containing a shape, a fill, and an alpha; converting means for converting the shape of each object into a plurality of lines of encoded scan data, wherein the encoded scan data has one of at least two possible states for each pixel of the display, the at least two possible states including a first state and a second state, the first state indicating that the pixel is inside the shape and the second state indicating that the pixel is outside the shape; and blending means for blending each line of the plurality of lines of encoded scan data, the fill, and the alpha into a line of a frame for the display. The converting means of the system are adapted to: a. represent each pixel of said display as a sub-pixel matrix comprising one or more sub-pixel regions covering the pixel; b. generate intersection data for each horizontal row in the sub-pixel matrix, wherein the intersection data includes coordinates of each intersection between the shape of the object and the sub-pixel matrix; c. process the intersection data for each row of the sub-pixel matrix to extract "on" sub-pixel strings within the shape or "off" sub-pixel strings outside the shape; and, for each row of the display: d. analyze the extracted sub-pixel strings to identify pixel strings within the shape associated with the first state, pixel strings outside the shape associated with the second state, and transition pixel strings associated with a third state, wherein the transition pixel strings are at the edges of the shape such that they are partially within the shape and partially outside the shape; e. further process those pixels identified as transition pixels to generate a gray scale value for each transition pixel corresponding to the portion of the transition pixel within the shape; and f. generate a line of encoded scan data, wherein the line of encoded scan data includes a length of the pixel string for each state, and generate an associated gray scale value for the transition pixels of the third state.
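By way of illustration only, the three-state encoding of steps a–f might be carried in memory as sketched below in C. The type and field names are hypothetical and do not appear in the patent; no particular layout is prescribed.

```c
#include <stdint.h>

/* Hypothetical sketch of the three-state encoding of steps a-f.
 * Names and field widths are illustrative, not taken from the patent. */
enum pixel_state {
    STATE_INSIDE  = 0,   /* first state: pixel entirely inside the shape    */
    STATE_OUTSIDE = 1,   /* second state: pixel entirely outside the shape  */
    STATE_EDGE    = 2    /* third state: transition pixel, partial coverage */
};

/* One run in a line of encoded scan data: a state and the number of
 * consecutive pixels sharing it (step f). Edge runs additionally carry
 * one gray scale value per pixel (step e). */
struct scan_run {
    enum pixel_state state;
    uint16_t length;       /* length of the pixel string in this state */
    const uint8_t *gray;   /* per-pixel coverage for STATE_EDGE, else NULL */
};
```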
A system for processing a graphical object may comprise: an input unit for receiving a stream of objects, wherein each object has a set of parameters defining an image; and an object processor that processes the stream of objects on an object-by-object basis to create an array of pixels.
One of the set of parameters may be a path that the object processor processes to create an array of pixels representing the outline of the image. The object processor may perform antialiasing on the edges of the path. The object processor may run-length encode the contours of the image. One of the set of parameters may be a bounding box that indicates to the object processor an area in which the object is to be rendered. The object processor may receive a smoothing coefficient that specifies the amount of oversampling of the object relative to the pixel array. One of the set of parameters may be a transparency, which contains a transparency value for the shape or a pointer to a bitmap of transparency values.
One of the set of parameters may be a fill, the fill containing at least one of a color, a texture, or a bitmap. The antialiased edges may be represented as gray scale values. A tone response curve may be applied to the gray scale values of the antialiased edges. The pixel array may be transmitted to at least one of a screen, a printer, a network port, or a file. One of the parameters may be pre-processed shape data. The pre-processed shape data may contain a clip mask. The pre-processed shape data may contain a transparency. The pre-processed shape data may contain a fill. The method may further comprise storing intermediate process data in a cache, the intermediate process data comprising at least one of a clip mask, a fill, or a transparency.
A method for image rendering described herein may comprise: receiving an object to be displayed, the object containing a shape and a fill; converting the shape of the object into multiple lines of encoded scan data having one of at least two possible states for each displayed pixel, the at least two possible states including a first state and a second state, wherein the first state indicates that a pixel is inside the shape and the second state indicates that a pixel is outside the shape; and blending each of the lines of encoded scan data and the fill into a line of a frame for display.
The encoded scan data may include a third possible state for a displayed pixel, indicating that a portion of the pixel is within the shape. The shape may comprise a path comprising a plurality of segments. The method may comprise converting one or more curved segments of the path into a plurality of non-curved segments. The frame may correspond to at least one of a video memory or a display device. The frame may correspond to at least one of a non-video memory or an output bitmap format buffer. The shape may comprise a clipping mask containing encoded scan data. A value for the third possible state may be calculated for a pixel by dividing the pixel into a plurality of sub-pixel regions, determining which of the plurality of sub-pixel regions are within the shape, and determining the ratio of those sub-pixel regions within the shape to the plurality of sub-pixel regions. This value may be expressed as a gray scale value.
The object to be displayed may include a transparency, and blending may further include blending each of the lines of the encoded scan data and the transparency into a line of a frame for display. The object to be displayed may contain a transparency, wherein the transparency is pre-processed according to at least one of bit depth correction, tone correction, scaling, decompression, or decoding. The transparency may comprise a pointer to a bitmap of transparency values for the shape. The fill may contain at least one of a color, a texture, or a bitmap. The method may include storing the multiple lines of encoded scan data as a clip mask in a cache. The method may include indexing the clip mask according to the shape.
A method described herein for colorless antialiasing of an edge of a rendered color image may comprise: receiving an object to be displayed, the object comprising a shape and a fill, the fill comprising one or more colors; representing a displayed pixel as a sub-pixel matrix comprising one or more sub-pixel regions covering the pixel; intersecting the shape with the sub-pixel matrix; and converting the sub-pixel matrix into a gray scale value for the pixel.
The method may comprise blending the gray scale value for the pixel and the fill corresponding to the pixel with the previous value for the pixel. The method may comprise repeating the receiving of the object, the representing of the pixels, the intersecting of the shape, the converting of the sub-pixel matrix, and the blending for the pixels of a scan line. The method may include run-length encoding the gray scale values of a scan line of pixels. One or more dimensions of the sub-pixel matrix may be controlled by a smoothness value.
A method for smoothing edges of a graphical object as described herein may comprise: receiving an object to be displayed, the object comprising a path that describes an outline of the object, the path having an inner side and an outer side; for each of a plurality of pixels that intersect the path, oversampling the pixel to obtain a gray scale value representing the portion of the pixel that is within the path; and blending the plurality of pixels with data stored in a pixel array.
The method may comprise, for each of the plurality of pixels, weighting a fill value for the pixel according to the gray scale value and de-weighting the data stored in the video memory according to the gray scale value. The method may comprise, for each of the plurality of pixels, weighting a fill value for the pixel according to a transparency value and de-weighting the data stored in the pixel array according to the transparency value.
A system for processing graphical objects described herein may comprise: receiving means for receiving an object to be displayed, the object containing a shape, a fill, and an alpha (a transparency level); converting means for converting the shape of the object into encoded scan data, wherein the scan data has one of at least two possible states for a pixel, the at least two possible states including a first state and a second state, wherein the first state indicates that the pixel is inside the shape and the second state indicates that the pixel is outside the shape; and blending means for blending the encoded scan data, the fill, and the alpha into a line of a frame.
The encoded scan data may have a third possible state containing a gray scale value representing a pixel on an edge of the shape, where the gray scale value corresponds to the portion of the pixel that is within the shape. The frame may correspond to at least one of a display, a printer, a file, or a network port. The object may contain at least one of a background fill or a replacement fill, which the blending means blends into a line of a frame.
A computer program for processing a graphical object as described herein may comprise: computer executable code that receives an object to be displayed, wherein the object contains a shape, fill, and alpha; computer executable code for converting a shape of an object into encoded scan data, the scan data having one of at least two possible states for pixels of a pixel array, the two states including a first state and a second state, wherein the first state indicates that the pixels are inside the shape and the second state indicates that the pixels are outside the shape; and computer executable code that blends the encoded scan data, fill, and alpha into a row of a frame of the pixel array.
The array of pixels may correspond to at least one of a display, a printer, a file, or a network port. The encoded scan data may have a third possible state containing gray scale values representing pixels that may be on the edges of the shape, where the gray scale values correspond to a portion of the pixels that may be within the shape.
A system for processing graphical objects described herein may comprise: a processor configured to receive a graphical object that may include a shape, a fill, and a transparency (alpha), to convert the shape of the graphical object into encoded scan data, and to combine the encoded scan data, the fill, and the transparency with a line of pixel data, wherein the encoded scan data corresponds to interior pixels, exterior pixels, and transition pixels of a scan line for display, and wherein each transition pixel includes a gray scale value corresponding to the portion of the pixel within the shape; and a memory storing the line of pixel data, the memory being adapted to provide the line of pixel data to the processor, and the memory being adapted to store a new line of pixel data generated when the line of pixel data is combined with the encoded scan data, the fill, and the transparency.
The system may include a display configured to display the contents of the memory. The processor may be one or more of a microprocessor, microcontroller, embedded microcontroller, programmable digital signal processor, application specific integrated circuit, programmable gate array, or programmable array logic. The system may include at least one of: a printer configured to print a plurality of lines of pixel data stored in the memory, a storage device configured to store a plurality of lines of pixel data stored in the memory, or a network device configured to output a plurality of lines of pixel data stored in the memory. The processor may be at least one of a chip, a chipset, or a die. The processor and memory may be at least one of a chip, a chipset, or a die. The display may be a display of at least one of an electronic organizer, a palmtop computer, a handheld gaming device, a web-enabled cellular telephone, a personal digital assistant, an enhanced telephone, a thin network client, or a set-top box.
The display may be at least one of a printer or a plotter. The display may be used in a document management system. The display may be used in at least one of a facsimile, a copier, or a printer of the document management system. The display may be used in an in-vehicle system. The display may be used in at least one of an audio player, a microwave oven, a refrigerator, a washing machine, a clothes dryer, an oven, or a dishwasher. The processor may receive a plurality of graphics objects and process the plurality of graphics objects in parallel.
Brief description of the drawings
The above and other objects and advantages of the present invention will be more fully understood from the following further description, with reference to the accompanying drawings, in which:
FIG. 1 illustrates a data structure for a graphical object, which may be used with a shape processor;
FIG. 2 is a functional block diagram of a shape processor;
FIG. 3 depicts an example of an operation on intersection data, performed by an intersection process;
FIG. 4 shows a data structure for encoded scan data; and
FIG. 5 is a flow chart of a shape processing procedure.
Detailed Description
To provide a thorough understanding of the present invention, certain illustrative embodiments are described below, including a two-dimensional shape processor that employs spatial filtering and tonal control at the edges of a rendered object. However, it will be appreciated by those of ordinary skill in the art that the methods and systems described herein may be suitably adapted to other applications, such as three-dimensional shape processing, and may be combined with whole-image antialiasing. For example, a coarse whole-image antialiasing step can be combined with fine antialiasing of the object edges. All such adaptations and modifications will be apparent to those skilled in the art and are intended to fall within the scope of the present invention as described herein.
FIG. 1 shows a data structure for a graphical object that may be used with a shape processor. The graphical object 100, or simply object 100, may contain a bounding box 101, a shape 102, a fill 104, and an alpha 106. Shape 102 may contain a path 108, with stroke 110 and fill 112 parameters, or a clipping mask 114. The fill 104 may contain a color 116 or a bitmap 118. Alpha 106 may contain a value 120 or a mask 122.
The bounding box 101 may contain a location where the object 100 is to be rendered and may define an area in which the object is to be drawn. This parameter can be used, for example, to simplify the rendering of a circular arc by combining a circular path with a bounding box 101 that covers one quarter of the circle.
Shape 102 may contain a path 108 that defines a series of connected path elements, using a path description in PostScript (a page description language) format. Other path representations are known and may also be used. For example, the path 108 may comprise straight line segments, Bezier curves having direction and curvature controlled by two points, or other path constructions. The path 108 may be open or closed. To support more complex geometries, the path 108 may contain self-intersecting or multiple disjoint regions. The stroke 110 of the path 108 may contain parameters or attributes, for example, a join attribute specifying the appearance of joined path elements, such as round, bevel, or miter, and a cap attribute specifying the appearance of the ends of the path 108, such as round, butt, square, triangle, and so on. The fill 112 may contain a winding rule, or other algorithm or parameter, for distinguishing the inside of the path 108 from the outside of the path 108 so that the appropriate area may be filled. The clip mask 114 may contain a pointer to a cached rendering of the graphical object 100 to reduce unnecessary re-rendering of the object.
Fill 104 may generally contain information about how shape 102 is to be filled. This may include, for example, a color 116, which may be a color value defined on a palette (such as an 8-bit palette), a component-based color such as 24-bit RGB, 15-bit RGB, or 32-bit CMYK, or a gray scale value. Fill 104 may contain a bitmap 118 containing a texture to be used to fill shape 102. Alternatively, the bitmap 118 may contain a pointer to the bitmap to be used to fill the shape 102. Such a bitmap may be provided in any of the color models used for the fill 104.
Alpha 106 may generally contain information regarding the transparency of shape 102 when filled and displayed. Alpha 106 may contain a value 120, a single value describing the transparency of the entire shape 102, typically ranging from 0 (transparent) to 1 (opaque). Alternatively, alpha 106 may contain a mask 122, which is an alpha mask, or a pointer to an alpha mask, holding a value for each pixel of the rendered shape 102.
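For illustration, the FIG. 1 data structure might be expressed in C as sketched below. This is a minimal sketch: the struct and field names are hypothetical, and the alternatives noted in the text (a bitmap pointer for the fill, an alpha mask per pixel) appear as optional pointer fields.

```c
#include <stdint.h>

struct path;      /* path 108, with stroke 110 and fill rule 112 */
struct clipmask;  /* clipping mask 114: cached encoded rendering */

struct bounding_box { int x0, y0, x1, y1; };       /* bounding box 101 */

struct shape {                                     /* shape 102 */
    struct path *path;        /* path 108, or ...              */
    struct clipmask *clip;    /* ... clipping mask 114         */
};

struct fill {                                      /* fill 104 */
    uint32_t color;           /* color 116, e.g. 24-bit RGB    */
    const uint8_t *bitmap;    /* bitmap 118 (texture), or NULL */
};

struct alpha {                                     /* alpha 106 */
    float value;              /* value 120: 0 (transparent) .. 1 (opaque) */
    const uint8_t *mask;      /* mask 122: per-pixel alpha, or NULL       */
};

struct graphical_object {                          /* object 100 */
    struct bounding_box bbox; /* bounding box 101 */
    struct shape shape;
    struct fill fill;
    struct alpha alpha;
};
```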
Appropriate modifications and enhancements to the above-described data structures will be apparent to those skilled in the art. In particular, graphical object 100 may contain other features described in a presentation specification, such as PostScript, the Java 2D API, or the Quartz and QuickDraw libraries used, for example, in the Mac OS X operating system.
FIG. 2 is a functional block diagram of a shape processor. In general, the shape processor 200 provides an input unit for receiving a stream of graphical objects and comprises an object processor that processes the stream of objects on an object-by-object basis to create an array of pixels for display on a screen. The shape processor 200 receives a graphical object, depicted in FIG. 2 as a path 202 (the shape), a bounding box 203, a fill 204, and an alpha 206, which may correspond, for example, to the components of the graphical object 100 described above with reference to FIG. 1. Instead of a path 202, the shape processor 200 may receive a clipping mask 232, which may be passed directly by the shape processor 200 to a scan line mixer 226, as described below.
The control data for shape processor 200 may include a screen bounding box 208, a smoothness 210, a tone response curve 212, a bit depth 214, a color space 216, and a screen base address 218. This control data may store display-related physical parameters, such as the screen base address 218 or the tone response curve 212. As described below, the tone response curve 212 may adjust the gray scale values of the encoded scan data to account for the non-linearity of the display device. For example, a brightness value of 50% of full scale may result in a pixel brightness of 65% on a particular device. The tone response curve 212 may use a look-up table or another algorithmic or look-up based method to adjust for this non-linearity. The other control data may correspond to parameters specified by a user (or programmer). For example, the smoothness 210 stores a value for the fineness or granularity of the edge processing, which may be a value (or values) describing an N by N matrix of sub-regions within each display pixel, as described below.
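The tone response adjustment lends itself to a simple lookup table. The C sketch below assumes, purely for illustration, a gamma-style non-linearity; the text only requires that the table (or other method) map nominal gray values to device-corrected ones, and all names here are hypothetical.

```c
#include <math.h>
#include <stdint.h>

/* Hypothetical tone response curve 212 as a 256-entry lookup table.
 * The gamma model below is an assumption for illustration only. */
static uint8_t tone_lut[256];

static void build_tone_lut(double gamma)
{
    for (int i = 0; i < 256; i++) {
        double corrected = pow(i / 255.0, gamma);
        tone_lut[i] = (uint8_t)(corrected * 255.0 + 0.5);
    }
}

/* Applied to each gray scale edge value before output to the display. */
static inline uint8_t apply_tone(uint8_t gray)
{
    return tone_lut[gray];
}
```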
The path 202 is provided to a scan converter 220, where the scan converter 220 uses data from the intersection 221 to provide intersection data to an intersection buffer 222. An intersection process 224 further processes the intersection data and provides an output to a scan line mixer 226, which combines it with other graphical object descriptors and control data to generate an output to a video memory or a physical display. The intermediate data generated by the shape processor 200 may include a path bounding box 228, a flattened path 230, and a clipping mask 232. The clipping mask 232 or flattened path 230 can be used independently of the shape processor 200, or can be re-presented to the shape processor 200 as input, thereby reducing unnecessary repeated calls to the shape processor 200. Other intermediate data (not shown) may be generated by the shape processor 200 for output, including, by way of example, intersection inputs or other pre-processed adjustments, such as decompression of fill maps, and color space conversion, correction, adjustment, and scaling.
Scan converter 220 may preprocess path 202 prior to scan line processing. For example, unnecessary scan conversions may be avoided by intersecting certain data and determining whether processing is required. For example, bounding box 203 of path 202 and screen bounding box 208 may be intersected in intersection 221. If the output from intersection 221 is null, no further processing is required. Although not explicitly shown in FIG. 2, other intersections may be taken, such as an intersection with a bounding box for fill 204 (which may be inferred by shape processor 200 from the fill data), or a bounding box for alpha 206 (which may likewise be inferred by shape processor 200 from the alpha data). If the intersection set is empty, then no processing of the path 202 is required and the next successive path 202 can be processed immediately. As described above, if a clip mask 232 is presented as the shape instead of path 202, the clip mask 232 may be passed directly to scan line mixer 226, bypassing scan conversion and other path processing steps. Any intermediate processing data, including, for example, clipping mask 232, fill data, alpha data, flattened path data, and so on, may be saved in this manner to avoid or reduce redundant processing.
The scan converter 220 may convert the path 202 into intersections with the scan lines of a target display device. This function may be performed on an oversampled basis using the smoothness 210. That is, before locating the intersections, each row of pixels may be divided into sub-pixel regions, or a sub-pixel matrix, using the smoothness 210 as a parameter. So, for example, a smoothness 210 of 2 may result in one scan line of 100 pixels being processed to generate intersection data as a 2 by 200 array of sub-pixel regions covering the same area of the screen display. A smoothness 210 of 4 may result in the same scan line being processed to generate intersection data as a 4 by 400 array of sub-pixel regions, and so on.
The path 202 may then be applied to the sub-pixel regions. The resulting intersections, or intersection data, may be stored on a horizontal row-by-row basis, including an X coordinate for each intersection and the direction in which the path crosses the horizontal axis (e.g., up or down). Other representations are known and may also be used by the scan converter 220. The scan converter 220 may generate a path bounding box 228. The scan converter 220 may also generate a flattened path 230 as an intermediate step, in which continuous, non-linear segments (such as Bezier curves) are converted into linear path segments. This may reduce the computational complexity of path-dependent operations. The intersection data may be stored in the intersection buffer 222.
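One possible C representation of an intersection record, matching the description above (an X coordinate in sub-pixel units plus a crossing direction), is sketched here; the names are hypothetical.

```c
#include <stdint.h>

enum crossing_dir { CROSS_UP = +1, CROSS_DOWN = -1 };

/* One entry of the intersection buffer 222 for a given sub-pixel row. */
struct intersection {
    int32_t x;              /* sub-pixel X coordinate of the crossing      */
    enum crossing_dir dir;  /* direction in which the path crosses the row */
};

/* The buffer would then hold, per row, a sorted array of such records,
 * e.g. row N of FIG. 3: { {40, CROSS_UP}, {140, CROSS_DOWN} }. */
```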
In general, intersection process 224 analyzes rows of sub-pixel regions and identifies pixel strings that are outside the shape, pixel strings that are inside the shape, and transition pixels. Transition pixels lie on the edge of a shape and intersect the shape such that they are partially inside the shape and partially outside it; they may be smoothed to remove or reduce rendering artifacts and jaggedness. This oversampling technique is described in more detail below with reference to FIG. 3. The inner pixels, outer pixels, and transition pixels may then be blended into the video memory as described below.
FIG. 3 depicts an example of operations on intersection data performed by intersection process 224. In the example of FIG. 3, the intersection data corresponds to one scan line of 100 pixels, and the smoothness 210 has a value corresponding to a 4 by 4 sub-pixel matrix for each scan line pixel.
Graph 301 shows the intersection data received from intersection buffer 222 of FIG. 2. As shown in graph 301, the intersection data may generally include the X coordinate at which path 202 intersects the sub-pixel row, plus the direction of path 202. For the first row, row N, path 202 intersects at the 40th sub-pixel in an upward direction. On the same row, path 202 intersects at the 140th sub-pixel in a downward direction. The intersection data for rows N+1 through N+3 are also listed in graph 301. It should be appreciated that this is a particular example, and that more or less intersection data may be provided for a row of sub-pixel regions depending on the complexity of the path 202.
The intersection data may be processed according to a winding rule or similar method to extract "on" or "off" strings. In the example shown in FIG. 3, the intersection data in graph 301 is processed in this way by applying an even/odd winding rule to generate the encoded data of graph 302.
As depicted in graph 302, the data for each row of sub-pixels may be encoded as data pairs, each including an on/off flag and the number of adjacent sub-pixels in the row that share that flag. In general, the end of a string may be identified by a transition from inside to outside, or vice versa, as determined by applying a winding rule or similar technique to the intersection data. From this data, strings of pixels can be extracted that reflect pixels of the target display lying either entirely within the shape or entirely outside the shape, where the shape is described by the intersection data. In the example of graph 302, a first string of five "off" pixels outside the shape can be readily identified, corresponding to rows N through N+3 and horizontal sub-pixel regions 1-20.
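A minimal sketch of this extraction step, assuming an even/odd winding rule (so the crossing direction can be ignored) and sorted crossing coordinates per row; the function and type names are hypothetical.

```c
#include <stddef.h>
#include <stdint.h>

struct span { int on; int32_t start, len; };   /* run of sub-pixels */

/* xs: sorted crossing X coordinates for one sub-pixel row; n: their count;
 * width: row width in sub-pixels; out: receives up to n + 1 spans.
 * Returns the number of spans written. */
static size_t extract_spans(const int32_t *xs, size_t n, int32_t width,
                            struct span *out)
{
    size_t count = 0;
    int on = 0;                       /* each row starts outside the shape */
    int32_t prev = 0;
    for (size_t i = 0; i <= n; i++) {
        int32_t next = (i < n) ? xs[i] : width;
        if (next > prev)
            out[count++] = (struct span){ on, prev, next - prev };
        on = !on;                 /* each crossing toggles inside/outside */
        prev = next;
    }
    return count;
}
```

Applied to row N of graph 301 (crossings at 40 and 140, width 400), this yields an "off" span of 40 sub-pixels, an "on" span of 100, and a final "off" span of 260.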
As depicted in graph 304, the transition from an "off" string to an "on" string may be characterized by the number of "on" or "off" sub-pixel regions in each row of sub-pixels. In this example, the data following the five "off" pixels of the first string may be grouped into sets of four sub-pixel regions corresponding to pixels, e.g., sub-pixel regions 21-24, 25-28, and so on. The "on" sub-pixel regions in each group may then be summed over the four rows to obtain a total number of "on" sub-pixel regions for one pixel. Graph 304 shows this total for six horizontally consecutive pixels; the first, corresponding to horizontal sub-pixel regions 21-24 and rows N through N+3, contains no "on" sub-pixel regions in rows N through N+2 and four "on" sub-pixel regions in row N+3. This gives a total of four "on" sub-pixel regions for this pixel, corresponding to a ratio of 4:16, or twenty-five percent (4/16 for a 4 by 4 sub-pixel matrix). This is represented as a gray scale value of twenty-five percent for this pixel. This analysis may be repeated for horizontally successive sub-pixel regions until a fully "on" pixel is reached. In the example of FIG. 3, an "on" pixel is reached at sub-pixel regions 41-44, where all sixteen of the 16 sub-pixel regions are "on". The corresponding pixel may begin a run of "on" pixels extending to the end of the scan line, or until the next transition, if one occurs.
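The summation described above reduces to counting "on" sub-pixel regions per pixel. A sketch under the same assumptions (an S by S sub-pixel matrix with one on/off flag per sub-pixel; all names hypothetical):

```c
#include <stdint.h>

/* rows: S pointers to the on/off flags of the S sub-pixel rows of one
 * scan line; px: pixel index within the line; S: smoothness 210. */
static uint8_t pixel_coverage(const uint8_t *const *rows, int px, int S)
{
    int on = 0;
    for (int r = 0; r < S; r++)        /* each sub-pixel row */
        for (int c = 0; c < S; c++)    /* each sub-pixel of this pixel */
            on += rows[r][px * S + c] ? 1 : 0;
    /* e.g. 4 of 16 "on" regions -> 4 * 255 / 16 = 63, i.e. ~25% gray */
    return (uint8_t)((on * 255) / (S * S));
}
```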
The resulting data for each scan line is represented as a number of strings of "on" pixels, a number of strings of "off" pixels, and one or more transition pixels having gray scale values that indicate how much of each transition pixel is within (or, alternatively, outside) the shape. FIG. 4, below, shows an example of a data structure containing scan lines encoded as runs in this form. In some implementations, the gray scale values may include maximum or minimum gray scale values (e.g., 100% or 0%) that represent pixels or runs otherwise in an "on" or "off" state. This may be advantageously applied, for example, to optimize the encoding of data that exhibits short runs switching between "on" and "off".
It should be understood that other techniques may be used to derive the gray scale values for the transition pixels. For example, using point and slope information about path 306, the portion of a pixel within a shape can be determined mathematically. By smoothing the shape edges to gray scale values, a colorless antialiasing operation can be performed for full-color images. The colors may then be supplied in the scan line mixer, as described below. This technique may also be used advantageously without oversampling (i.e., with a smoothness 210 value specifying that each pixel corresponds to a single sub-pixel region), because it defers processing of the alpha and fill values for a shape until the scan line of new pixel data is blended with the scan line of current pixel data. It should also be understood that while the above examples refer to a shape having a single interior region, more complex shapes containing multiple interior and exterior regions may be characterized similarly.
Referring again to FIG. 2, the output of intersection process 224 may be stored as a clipping mask 232. The clip mask 232 may be indexed according to a reference number, for example based on the path pointer of the path 202 that has been processed, along with any scaling information. When stored in this manner, each new path 202 received by the shape processor 200 may be compared against the cache of clip masks, so that redundant processing of the same shape (such as rendering a font in a line of text) may be reduced or avoided.
Scan line mixer 226 may mix the output from intersection process 224, or a clip mask 232, with a frame of current video data. It will be appreciated from FIG. 2 that this may involve additional calculations (not described below) to map the pixel values to display parameters such as display memory addresses, color space, bit depth, and so on. Pre-processing by scan line mixer 226 may include decompression of alpha maps or fill maps, color space conversion, color correction, color adjustment, and scaling.
Scan line mixer 226 may output directly to the screen, to another display device, or to a frame buffer for subsequent bitmap rendering. This may include non-video memory or an output bitmap format buffer. Scan line mixer 226 typically operates on one line of video data, or one line of pixels, at a time. In some embodiments, a number of scan line mixers may be provided to operate in parallel on a number of scan lines. For each pixel, scan line mixer 226 may combine the fill 204 (e.g., a 24-bit color value), the alpha 206, and the intersection process 224 output (or clipping mask, when available) corresponding to that pixel. Typically, the fill 204 is multiplied by the alpha (where 0 < alpha < 1 for transparency) and by the intersection process 224 output (where 0 = "off", 1 = "on", and intermediate values are gray scale edge values). This represents the pixel value generated by the shape processor 200. In scan line mixer 226, this new value is combined with the old value for the pixel, which is de-weighted by a complementary factor. This blending operation can be expressed mathematically as:
P_i = α·e·f + (1 − α·e)·P_(i−1)    [Equation 1]

where:

f is the fill value of the pixel (e.g., a 24-bit color value);

P_i is the scan line mixer output;

P_(i−1) is the previous pixel value (from the buffer);

α is the alpha value of the shape at the pixel; and

e is the edge value of the pixel (the output of the intersection process):

e = 0 if the pixel is outside the shape;

e = 1 if the pixel is inside the shape;

e = the gray scale value (the percentage of the edge pixel within the shape) otherwise.
The blended output may be saved in video memory for display. It should be understood that Equation 1 is representative, and that other equations may be used to combine old and new data on a pixel-by-pixel basis, provided they properly weight the old and new data to reflect, for example, the transparency and edges of the new data. Blending may, for example, be a two-step process in which edge weighting is performed first, followed by transparency weighting. Furthermore, simplified forms of Equation 1 can be used in scan line mixer 226 to reduce processing complexity. For example, when there is a run of pixels within a fully opaque shape (i.e., e = 1 and α = 1), the output of scan line mixer 226 is simply the fill value for each pixel. In this case, the fill value f for the corresponding pixels may be provided directly to the video memory without further processing.
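For one 8-bit color channel, Equation 1 might be implemented as below, with alpha and the edge value e carried in 0..255 fixed point (255 representing 1.0). This is a sketch of the arithmetic only; the names are hypothetical.

```c
#include <stdint.h>

/* Equation 1: P_i = a*e*f + (1 - a*e) * P_(i-1), per channel. */
static inline uint8_t blend_channel(uint8_t fill, uint8_t alpha,
                                    uint8_t e, uint8_t prev)
{
    uint32_t w = (uint32_t)alpha * e;   /* weighting factor, 0 .. 255*255 */
    /* complementary factor (255*255 - w) de-weights the previous value */
    return (uint8_t)((w * fill + (255u * 255u - w) * prev) / (255u * 255u));
}

/* Shortcut from the text: inside a fully opaque run (e = alpha = 255) the
 * result reduces to the fill value, so the arithmetic can be skipped. */
```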
FIG. 4 shows a data structure for encoded scan data as output by the intersection process 224. In general, pixel values may be stored as "on", "off", or "gray scale". "On" pixels correspond to pixels within a shape and will be rendered with the color value provided by the fill 204 of FIG. 2. "Off" pixels correspond to pixels outside the shape and will not affect the existing display or frame buffer. As described above, an object may be provided with further parameters, such as a background fill that provides a fill value for "off" pixels (i.e., pixels outside the shape). As another example, a replacement fill may be provided, which is subtracted from the previous value in the frame buffer before blending. Gray scale values represent shape edges and will be rendered with the color value provided by the fill 204, scaled by the gray scale value. The encoding provides a scheme for representing multiple lines of video data that can significantly reduce processing costs when processing shapes. For example, encoding into strings of "on" and "off" is computationally inexpensive, and the gray scale calculations are economical in memory usage and processor time, because they avoid requiring full pixel arrays for image processing. In addition, run-length encoding provides benefits when storing video data as a clip mask. It should be understood, however, that other compression techniques may suitably be used with the systems described herein.
The run-length encoded data structure 400 may include a header 402, a length 404, a width 406, a height 408, one or more offsets 410, and one or more data segments 412. The header 402 may contain any header information useful for identifying or using the data structure 400. The length 404 may indicate the length of the data structure 400. The width 406 may indicate a value representing the width of a shape in pixels. The height 408 may indicate a value representing the number of scan lines of a shape. The one or more offsets 410 indicate a byte offset to the data segment for each scan line of the shape. The one or more data segments 412 each contain encoded data for one scan line of a shape. A data segment 412 may be represented as: "inner" plus a run length in pixels; "outer" plus a run length in pixels; or "edge" plus the number of pixels in the edge and a gray scale value for each of the pixels in the edge. For example, each edge value may be represented as a one-byte (256-level) gray scale value.
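A possible C layout mirroring FIG. 4 is sketched below; the field widths are not specified by the text and are chosen here for illustration only.

```c
#include <stdint.h>

struct rle_shape {
    uint32_t header;     /* header 402: identifying/usage information */
    uint32_t length;     /* length 404: total length of the structure */
    uint16_t width;      /* width 406: shape width in pixels          */
    uint16_t height;     /* height 408: number of scan lines          */
    uint32_t offsets[];  /* offsets 410: one byte offset per scan line,
                            followed by the data segments 412         */
};

/* Each data segment 412 is then a sequence of tagged runs, e.g.:
 *   INNER, run-length                      (pixels inside the shape)
 *   OUTER, run-length                      (pixels outside the shape)
 *   EDGE, count, gray[0] .. gray[count-1]  (one-byte, 256-level values) */
```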
FIG. 5 is a process flow diagram for shape processing. In the following discussion, the phrase "intersection data" refers to data describing an intersection between a path and a sub-pixel region. In a simplified case, each sub-pixel region may correspond to a complete pixel, so that no smoothing is performed. The phrase "encoded scan data" refers to data, in uncompressed or compressed (e.g., run-length encoded) form, that describes runs of a scan line in one of three states (i.e., on, off, or gray scale). The runs are delimited by transitions from the inside to the outside of a path, as determined by applying a winding rule or similar technique to the intersection data.
The process 500 may begin at 502 with the receipt of an object, as shown in step 504. For example, the object may be the graphical object 100 described above with reference to FIG. 1. In an optional step 506, it is determined whether the shape of the object is in the cache. This determination may be made, for example, using a shape name or any other information that can uniquely identify the shape of the object as corresponding to an item stored in the cache. If the shape of the object is in the cache, process 500 may continue to step 516, where the object may be blended with the current video memory using the cached shape and any fill and transparency data provided with the object. If the shape is not in the cache, process 500 may continue to step 508.
As shown in step 508, the process 500 may generate a flattened path, as described above with reference to the scan converter 220 of FIG. 2. The flattened path may then be used to generate intersection data representing the intersections between the path and the sub-pixel regions, as shown in step 510. It will be appreciated that these intersections may represent the edges of a shape. Encoded scan data may then be generated from the intersection data, as shown in step 512, for example as described above with reference to intersection process 224 of FIG. 2. The encoded scan data, representing the outline of the object's shape, may be stored in a cache, as shown in step 514. The video memory may then be blended with the encoded scan data, as shown in step 516 and as described in detail with reference to scan line mixer 226 of FIG. 2. The process 500 may then return to step 504, where the next successive object may be received.
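The control flow of process 500 can be summarized by the following C sketch. Every function here is a hypothetical stand-in for a step of FIG. 5, not an API defined by the patent.

```c
#include <stddef.h>

struct object;      /* graphical object 100 */
struct clipmask;    /* cached encoded scan data */

struct object   *receive_object(void);                  /* step 504 */
struct clipmask *cache_lookup(const struct object *o);  /* step 506 */
struct clipmask *scan_convert(const struct object *o);  /* steps 508-512 */
void             cache_store(struct clipmask *m);       /* step 514 */
void             blend_into_video_memory(const struct object *o,
                                         const struct clipmask *m); /* 516 */

void shape_processor_loop(void)
{
    struct object *o;
    while ((o = receive_object()) != NULL) {            /* next object */
        struct clipmask *m = cache_lookup(o);           /* reuse cached shape */
        if (m == NULL) {
            m = scan_convert(o);   /* flatten path, intersect, encode */
            cache_store(m);
        }
        blend_into_video_memory(o, m);                  /* blend with frame */
    }
}
```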
The video memory may provide frames of video data to a display, wherein the contents of the video memory are converted to human-viewable form. The video memory may also store one or more frames of previous video data for blending with new lines of video data generated by the shape processor. It should be understood that the display may be a liquid crystal display, a light emitting diode display, or any other display for providing video data in human-viewable form. The display may also be a printer, plotter, or other device for rendering video data in a fixed, tangible medium, such as paper.
It should be understood that the above-described process 500, as well as the shape processor 200 in fig. 2, may be implemented in hardware, software, or some combination thereof. Process 500 may be implemented in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable devices, as well as internal and/or external memory such as read only memory, programmable read only memory, electrically erasable programmable read only memory, random access memory, dynamic random access memory, double data rate random access memory, Rambus direct random access memory, flash memory, or any other volatile or non-volatile memory for storing program instructions, program data, and program outputs or other intermediate or final results. The process 500 and shape processor 200 may also (or instead) include an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device that may be configured to process electronic signals.
Any combination of the above circuits and components, whether packaged separately as a chip, a chipset, or a die, may be suitably adapted for use with the systems described herein. It should further be appreciated that the above-described process 500 and shape processor 200 may be implemented as computer executable code created using a structured programming language such as C, an object-oriented programming language such as C++, or any other high-level or low-level programming language that may be compiled or interpreted to run on one of the above-described devices, a combination of processors, a processor architecture, or a combination of different hardware and software.
Shape processor 200 may be particularly suited to parallel and/or pipelined image processing systems in which different graphical objects may be processed simultaneously and then blended into a frame of video memory. Shape processor 200 may thus be implemented as several physically separate processes, or as several logically separate processes, such as multiple shape processor threads executing on one microprocessor. This approach may similarly be applied to different scan lines of a graphical object.
The above-described system provides efficient image rendering for display devices and may be well suited to small, low-power devices, such as portable devices having a liquid crystal display ("LCD") screen, including electronic organizers, palm-top computers, hand-held gaming devices, web-enabled cellular telephones (or other wireless telephones or communication devices), and personal digital assistants ("PDAs"). The system can also be incorporated into inexpensive terminal devices with display units, such as enhanced telephones, thin network clients, and set-top boxes, as well as other rendering devices such as printers, plotters, and the like. For example, the system may be usefully employed as an embedded system in document processing equipment (e.g., facsimile machines, printers, copiers, etc.), where a display of a working document and/or a user interface may be functionally enhanced. The system may be usefully employed in an in-car system that presents images to a car user and/or provides a graphical user interface, such as in a dashboard or center console of a car. The system described herein may be incorporated into household appliances, including audio players, microwave ovens, refrigerators, washing machines, clothes dryers, ovens, or dishwashers. The system described herein may also usefully be configured in any of the above systems where output is produced to different devices, such as a display, printer, network, and/or file. A single device may use the shape processor for output to any or all of these devices.
While the invention has been disclosed in connection with the preferred embodiments illustrated and described in detail, it is to be understood that the invention is not limited to the embodiments disclosed herein, but is to be understood from the following claims, which are to be interpreted within the broadest scope permitted by law.
Claims (59)
1. A method for presenting an image as a display comprising a plurality of rows of pixels, comprising:
receiving a stream of a plurality of objects (100) to be displayed, each object comprising a shape (102) and a fill (104);
for each object, converting (220, 512) the shape of the object into a plurality of lines of encoded scan data, wherein the encoded scan data has one of at least two possible states for each pixel of the display, the at least two possible states including a first state and a second state, wherein the first state indicates that the pixel is inside the shape and the second state indicates that the pixel is outside the shape; and
blending (226, 516) each of the lines of encoded scan data and the fill into a line of a frame for the display, the method characterized in that:
converting the shape of the object into the plurality of lines of encoded scan data includes:
a. representing each pixel of said display as a sub-pixel matrix comprising one or more sub-pixel regions covering the pixel;
b. for each horizontal row in the sub-pixel matrix, generating intersection data, wherein the intersection data comprises coordinates for each intersection between the shape of the object and the sub-pixel matrix;
c. processing intersection data for each row of the sub-pixel matrix to extract "on" sub-pixel strings within the shape or "off" sub-pixel strings outside the shape;
and, for each row of the display:
d. analyzing the extracted sub-pixel strings to identify pixel strings within the shape associated with the first state, pixel strings outside the shape associated with the second state, and transition pixel strings associated with a third state, wherein the transition pixel strings are at the edges of the shape such that they are partially within and partially outside the shape;
e. further processing those pixels identified as transition pixels to generate a gray scale value for each transition pixel corresponding to the portion of the transition pixel within the shape;
f. generating a line of encoded scan data, wherein the line of encoded scan data includes the length of the pixel string for each state; and generating an associated gray scale value for the transition pixel of the third state.
2. The method of claim 1, wherein the gray scale value for a transition pixel is calculated by determining which of the plurality of sub-pixel regions are within the shape and determining a ratio of those sub-pixel regions within the shape to the plurality of sub-pixel regions.
3. The method of claim 1 or claim 2, wherein the shape comprises a path (108), the path (108) comprising a plurality of segments.
4. The method of claim 3, wherein the path segment includes parameters defining a stroke attribute of the path segment.
5. The method of claim 3, further comprising: converting one or more curved segments of the path into a plurality of non-curved segments.
6. The method of claim 1, wherein the frame comprises at least one of a video memory or a display device.
7. The method of claim 1, wherein the frame corresponds to at least one of non-video memory or an output bitmap format buffer.
8. The method of claim 1, further comprising: storing (514) the plurality of lines of encoded scan data as a clip mask in a cache.
9. The method of claim 8, further comprising: indexing the clip mask according to the shape.
10. The method of claim 8 or claim 9, wherein blending comprises: selecting the clip mask, and blending the encoded scan data associated with the clip mask.
11. The method of claim 1, wherein the object to be displayed comprises a transparency (106), and blending further comprises: blending each row of the plurality of rows of encoded scan data and the transparency into a row in a frame for the display.
12. The method of claim 1, wherein the object to be displayed comprises a transparency (106), wherein the transparency is pre-processed according to at least one of bit depth correction, tone correction, scaling, decompression, or decoding.
13. The method of claim 12, wherein the transparency comprises a pointer to a bitmap of transparency values of the shape.
14. The method of claim 1, wherein the fill (104) comprises at least one of a color (116), a texture, or a bitmap (118).
15. The method of claim 1, wherein the shape includes a clipping mask (114) representing the shape, the clipping mask (114) containing a plurality of lines of encoded scan data.
16. The method of claim 1, further comprising: applying an algorithm for determining the inside of the shape and the outside of the shape.
17. The method of claim 16, wherein applying an algorithm comprises: applying a winding rule.
18. The method of claim 1 for performing colorless antialiasing of edges of a rendered color image, wherein the fill (104) of an object to be displayed includes one or more colors (116, 118), the method further comprising: blending the gray scale value for each transition pixel and the fill (104) corresponding to that pixel with the previous value for that pixel.
19. The method of claim 1, further comprising: run-length encoding the gray scale values for the transition pixels.
20. The method of claim 1, wherein one or more dimensions of the sub-pixel matrix are controlled by a smoothness value.
21. The method of claim 1, for smoothing the edges of said object, wherein the shape of the object comprises a path (108) describing the contour of the object, the path having an inner side and an outer side, the method comprising:
for each of a plurality of pixels intersecting the path, oversampling the pixel to obtain a gray scale value representing the portion of the pixel inside the path; and
the plurality of pixels are blended with data stored in a pixel array.
22. The method of claim 1, wherein each object further comprises a bounding box (101) representing an area in which the object is to be rendered.
23. The method of claim 22, further comprising: the object is preprocessed by intersecting its bounding box with parameters that define the boundaries of the display to determine when the object falls within the display region.
24. The method of claim 23, wherein a preprocessed object whose intersection between the bounding box of the object and the display is null is not processed further.
25. The method of claim 1, wherein blending each row of the plurality of rows of encoded scan data and the fill into a row in a frame for the display comprises: blending the fill (104) corresponding to a pixel with a previous pixel value by weighting the fill (104) with a weighting factor and de-weighting the previous value of the pixel with a complementary factor, wherein the weighting factor is set to 1 for inner pixels, to zero for outer pixels, and to the gray scale value for transition pixels.
26. The method of claim 25, wherein the object to be displayed comprises a transparency (106), and the blending is additionally weighted according to the transparency.
27. The method of claim 1, wherein the intersection data further comprises, for each row of the sub-pixel matrix, an X coordinate of each intersection between the shape of the object and the row of sub-pixels, and a direction, up or down, in which the shape crosses the row.
28. A system for processing a graphical object for rendering an image as a display comprising a plurality of rows of pixels, comprising:
-receiving means (202, 204, 206) for receiving a stream of a plurality of objects (100) to be displayed, each object comprising a shape (102), fill (104) and alpha (106);
converting means (220, 222, 224) for converting (220, 512) the shape of each object into a plurality of lines of encoded scan data, wherein the encoded scan data has one of at least two possible states for each pixel of the display, the at least two possible states including a first state and a second state, the first state indicating that the pixel is inside the shape and the second state indicating that the pixel is outside the shape; and
blending means (226) for blending each line of said plurality of lines of encoded scan data, the fill, and the alpha into a line of a frame for the display, the system characterized in that:
the conversion device is used for:
a. representing each pixel of said display as a sub-pixel matrix comprising one or more sub-pixel regions covering the pixel;
b. generating intersection data for each horizontal row in the sub-pixel matrix, wherein the intersection data includes coordinates of each intersection between the shape of the object and the sub-pixel matrix;
c. processing the intersection data for each row of the sub-pixel matrix to extract "on" sub-pixel strings within the shape or "off" sub-pixel strings outside the shape;
and, for each row of the display:
d. analyzing the extracted sub-pixel strings to identify pixel strings within the shape associated with the first state, pixel strings outside the shape associated with the second state, and transition pixel strings associated with a third state, wherein the transition pixel strings are at the edges of the shape such that they are partially within the shape and partially outside the shape;
e. further processing those pixels identified as transition pixels to generate a gray level value for each transition pixel corresponding to the portion of the transition pixel within the shape;
f. generating a line of encoded scan data, wherein the line of encoded scan data includes a length of the pixel string for each state; and generating an associated gray scale value for the transition pixel of the third state.
29. The system of claim 28, wherein the frame corresponds to at least one of a display, a printer, a file, or a network port.
30. The system of claim 28 or 29, each object further comprising at least one of a background fill or a replacement fill, the blending means blending the at least one of the background fill or the replacement fill into a row in a frame.
31. The system of claim 28, wherein the receiving means, converting means, and blending means comprise a processor and a memory,
the processor is configured to combine the encoded scan data, the fill, and the alpha with a line of pixel data, and
the memory is for storing the line of pixel data and providing the line of pixel data to the processor, and the memory is for storing a new line of pixel data generated when the line of pixel data is combined with the encoded scan data, the fill, and the alpha.
32. The system of claim 31, further comprising a display configured to display the pixel data stored in the memory.
33. The system of claim 31 or claim 32, wherein the processor comprises one or more of a microprocessor, a microcontroller, an embedded microcontroller, a programmable digital signal processor, an application specific integrated circuit, a programmable gate array, or programmable array logic.
34. The system of claim 31, further comprising at least one of: a printer configured to print a plurality of lines of pixel data stored in the memory, a storage device configured to store a plurality of lines of pixel data stored in the memory, or a network device configured to output a plurality of lines of pixel data stored in the memory.
35. The system of claim 31, wherein the processor is at least one of a chip, a chipset, or a die.
36. The system of claim 31, wherein the processor and the memory are at least one of a chip, a chipset, or a die.
37. The system of claim 32, wherein the display is a display of at least one of an electronic organizer, a palmtop computer, a handheld gaming device, a web-enabled cellular telephone, a personal digital assistant, an enhanced telephone, a thin network client, or a set-top box.
38. The system of claim 32, wherein the display is at least one of a printer or a plotter.
39. The system of claim 32, wherein the display is used in a document management system.
40. The system of claim 32, wherein the display is used in at least one of a facsimile machine, a copier, or a printer of a document management system.
41. The system of claim 32, wherein the display is used in an in-vehicle system.
42. The system of claim 32, wherein the display is used in at least one of an audio player, a microwave oven, a refrigerator, a washer, a dryer, an oven, or a dishwasher.
43. The system of claim 31, wherein the processor receives a plurality of graphical objects and processes the plurality of graphical objects in parallel.
44. The system of claim 28, wherein:
the receiving means comprises an input unit for receiving the stream of the plurality of objects, and
the converting means and the blending means comprise an object processor for processing the object stream on an object-by-object basis to create an array of pixels.
45. The system of claim 44, wherein the shape of each object includes a path, the object processor processing the path to create an array of pixels representing the outline of the image.
46. The system of claim 45, wherein the object processor performs anti-aliasing on edges of the path.
47. The system of claim 45 or claim 46, wherein the object processor encodes an outline of the image.
48. The system of claim 44, wherein each object further comprises a bounding box (101) indicating to the object processor an area in which the object is to be rendered.
49. The system of claim 44, wherein said object processor receives a smoothing coefficient (210) specifying an amount of oversampling of the object relative to the pixel array.
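One plausible reading of the smoothing coefficient is as the edge length of the sub-pixel matrix, so that a coefficient of 4 means 16 sub-pixel regions per pixel. The sketch below accumulates per-pixel coverage at that oversampling, reusing `row_intersections` from the earlier sketch; the non-zero winding rule and all coordinate conventions are assumptions:

```python
def pixel_coverage(edges, pixel_row, width, smoothing):
    """Count 'on' sub-pixels per pixel for one display row (sketch).

    smoothing: oversampling factor; each pixel is covered by a
    smoothing x smoothing matrix of sub-pixel regions.
    """
    coverage = [0] * width
    for s in range(smoothing):
        y = pixel_row + (s + 0.5) / smoothing          # sub-pixel row centre
        winding, prev_x = 0, 0.0
        for x, direction in row_intersections(edges, y):
            if winding != 0:                           # span prev_x..x is inside
                for sub in range(int(prev_x * smoothing), int(x * smoothing)):
                    if 0 <= sub // smoothing < width:
                        coverage[sub // smoothing] += 1
            winding += direction
            prev_x = x
    return coverage
```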
50. The system as recited in claim 44, wherein the alpha (106) of each object includes a transparency value or a pointer to a bitmap of transparency values for the shape.
51. The system as recited in claim 44, wherein the fill (104) of each object includes at least one of a color (116), a texture, or a bitmap (118).
52. The system of claim 46, wherein the anti-aliased edges are represented as gray scale values.
53. The system of claim 52, wherein a tone response curve is applied to the gray scale values of the anti-aliased edges.
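Claim 53's tone response curve could, as one assumed example, be a simple gamma curve held in a 256-entry look-up table; a production curve would instead be matched to the target display:

```python
def tone_curve_lut(gamma=2.2):
    """Build a 256-entry look-up table for a gamma-style tone response
    curve (an assumed stand-in for a display-specific curve)."""
    return [round(255 * (v / 255) ** (1.0 / gamma)) for v in range(256)]

lut = tone_curve_lut()
corrected = [lut[g] for g in (0, 47, 127, 255)]   # -> [0, 118, 186, 255]
```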
54. The system of claim 44, wherein the array of pixels is transmitted to at least one of a screen, a printer, a network port, or a file.
55. The system of claim 44, wherein each object includes preprocessed shape data.
56. The system of claim 55, wherein the preprocessed shape data includes a clipping mask representing the shape, the clipping mask including lines of encoded scan data.
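Since both the shape and the clipping mask of claim 56 reduce to lines of gray scale coverage, clipping one line against the other can be sketched as a per-pixel product of coverages; the decoded-list representation and the multiply rule are assumptions:

```python
def apply_clip_mask(shape_grays, mask_grays):
    """Clip one line of shape coverage against a clipping mask line.

    Both inputs are per-pixel gray scale values (0..255) decoded from
    their run-length form; the intersection of the two coverages is
    approximated by a per-pixel multiply."""
    return [s * m // 255 for s, m in zip(shape_grays, mask_grays)]
```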
57. The system of claim 55, wherein the preprocessed shape data includes transparency.
58. The system of claim 55, wherein the preprocessed shape data comprises a fill.
59. The system of claim 44, further comprising a cache for storing intermediate process data, the intermediate process data including at least one of a clipping mask, a fill, or a transparency.
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB0009192.8 | 2000-04-14 | ||
| GBGB0009129.8A GB0009129D0 (en) | 2000-04-14 | 2000-04-14 | Digital document processing |
| US09/703,502 | 2000-10-31 | ||
| US09/703,502 US7055095B1 (en) | 2000-04-14 | 2000-10-31 | Systems and methods for digital document processing |
| PCT/GB2001/001712 WO2001080183A1 (en) | 2000-04-14 | 2001-04-17 | Shape processor |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1057121A1 HK1057121A1 (en) | 2004-03-12 |
| HK1057121B true HK1057121B (en) | 2006-07-28 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN1241150C (en) | Method and system for rendering an image as a display comprising rows of pixels | |
| KR100748802B1 (en) | Method for rendering an image, system for processing graphical objects and computer program storage media | |
| JP3462211B2 (en) | Polymorphic graphic device | |
| CA2221752C (en) | Method and apparatus for reducing storage requirements for display data | |
| JP5595739B2 (en) | Method for processing graphics and apparatus therefor | |
| EP1359545B1 (en) | Process for rendering mixed raster content files | |
| JP4309270B2 (en) | System and method for generating visual representations of graphic data and digital document processing | |
| JPH06180758A (en) | System and method for generating raster graphic picture | |
| JP2008165760A (en) | Method and apparatus for processing graphics | |
| US5287442A (en) | Serpentine rendering of antialiased vectors in a computer graphics system | |
| CN114820370A (en) | Picture conversion method of ink screen equipment, electronic equipment and storage medium | |
| CN1089459C (en) | Ink rendering | |
| JP2013505854A (en) | How to create a printable raster image file | |
| HK1057121B (en) | Method and system for rendering an image as a display comprising a plurality of lines of pixels | |
| CN100377179C (en) | shape processor | |
| CN1748229A (en) | Low-cost supersampling rasterization | |
| HK1089539A1 (en) | Shape processor | |
| HK1089539B (en) | Shape processor | |
| JP2001502485A (en) | Lossless compression and decompression of bitmaps | |
| Ryan | Applications of antialiasing in an image processing framework setting | |
| JP2020203432A (en) | Drawing processing device, drawing processing method and drawing processing program | |
| JP2013068985A (en) | Vector drawing device, vector drawing method, and program | |
| JPH1063253A (en) | High quality character display device and gradation display method | |
| AU2014277651A1 (en) | Generating and rendering an anti-aliased page representation | |
| HK1107170A (en) | Cache efficient rasterization of graphics data |