
AU2005201626A1 - Method and apparatus for trapping using in-render edge marking - Google Patents


Info

Publication number
AU2005201626A1
Authority
AU
Australia
Prior art keywords
pixel
edge
colour
trapping
dominant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2005201626A
Inventor
Benjamin Michael Lever
Kevin John Moore
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2005201626A priority Critical patent/AU2005201626A1/en
Publication of AU2005201626A1 publication Critical patent/AU2005201626A1/en
Abandoned legal-status Critical Current

Landscapes

  • Record Information Processing For Printing (AREA)
  • Image Generation (AREA)

Description

S&F Ref: 707060

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Kevin John Moore, Benjamin Michael Lever
Address for Service: Spruson & Ferguson, St Martins Tower, Level 31, Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Method and apparatus for trapping using in-render edge marking

The following statement is a full description of this invention, including the best method of performing it known to me/us:

METHOD AND APPARATUS FOR TRAPPING USING IN-RENDER EDGE MARKING

FIELD OF INVENTION
The current invention relates to colour document printing and in particular to handling artefacts caused by misalignment between ink layers.
BACKGROUND
Trapping is a process that has traditionally been used in the printing industry to compensate for misalignment of printing presses. In trapping, the area covered by the lower density ink is slightly expanded, so as to cover any gap produced by the misalignment. See Fig. 26 for an example. Fig. 26(a) shows the expected output 2600 from the combination of the separate ink channel bitmap outputs 2610 and 2620 of Fig. 26(b). Fig. 26(c) shows the actual output 2630 when the second ink channel output 2620, from the denser ink, is printed misaligned downwards and to the right with respect to the (lighter) first ink channel output 2610. A white gap area 2640 may be perceived between the two outputs.
Trapping involves expanding the area covered by the lighter ink so that the alignment error does not show. Fig. 27 repeats the example of Fig. 26, except that the first, lighter channel output 2610 in Fig. 26(b) has been trapped by expanding it into the white region to produce the image 2710 in Fig. 27(b). The dashed line 2715 shows the original outline of the shape. The actual output 2730 in Fig. 27(c) shows that the area 2740 that was a white gap 2640 in Fig. 26(c) has been filled in by the lower density ink. The artefact caused by the overlap of the two inks elsewhere around the shape, eg. at 2750, is less noticeable to the eye.
Post-render trapping systems attempt to find areas of constant colour, large enough to require trapping, using only the pixel map produced by the render process itself. The conventional means is simply to look for areas of colour, pixel by pixel. This scanning takes a long time, but because it deals only with colours as rendered, trapping based on the pixel bitmap is more accurate than pre-render trapping based on object descriptions.
The other common alternative is to modify object descriptions before they are passed to the rendering system (pre-render trapping). In pre-render trapping, the drawing order implies two different types of trapping:
  • spread trapping, where the trap applied to the top object increases its area; and
  • choke trapping, where the trap applied to the top object reduces its area.
The rules that are applied for trapping in professional print shops mirror the description above: lighter colour spreads into denser colour, and a denser colour is choked by lighter colour. The effect of these rules is that the area of the lighter ink is expanded into the area of the denser ink.
For multiple-ink data over white, choke trapping is often applied to the lighter ink colour, so that misalignment does not cause colour fringes around objects known as "haloes".
This can be considered as a special case of the choke above.
To apply traps that are associated with an object pair, it is necessary to find the curve where the objects intersect, and apply traps to the intersection. Since this entails scanning through the display list for intersecting objects when a new object is added, this scales poorly.
Binning of objects into segmented areas of the page is an alternative that reduces the complexity.
Other approaches to trapping include pre-rendering, matching the pixel output with the object data, and applying traps accordingly, before producing a new, trapped object description.
SUMMARY OF THE INVENTION
The present invention is intended to ameliorate one or more of the above disadvantages by implementing trapping as a post-render process making use of edge markings which are made during the rendering and stored as pixel attribute bits of the rendered pixel bitmap.
According to one aspect of the present invention there is disclosed a method of trapping an image to be printed, said image being described by a page description comprising at least one graphical object, said method comprising the steps of:
rendering said page description to a rendered colour pixel bitmap while marking pixels at edges of said graphical objects to produce an edge bitmap;
for each marked pixel in said edge bitmap:
comparing the ink density of the corresponding pixel in the colour bitmap with the ink density of a neighbouring pixel indicated by the edge marking;
marking the corresponding pixel or the neighbour pixel as dominant depending on the result of the comparison; and
altering the colour of each pixel marked as dominant using the colour of at least one pixel in a neighbouring object.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention are described with reference to the following figures, in which:
Fig. 1 is a schematic block diagram representation of a computer system incorporating a rendering arrangement;
Fig. 2 is a block diagram showing the functional data flow of the rendering arrangement;
Fig. 3 is a schematic block diagram representation of the pixel sequential rendering apparatus of Fig. 2 and associated display list and temporary stores;
Fig. 4 shows a rectangular run with each of its four edges marked;
Fig. 5 shows dirty vertical edges based on summary bits maintained for the runs on the last scanlines of tiles;
Fig. 6 shows commands where a vertical edge will be marked for given runs;
Fig. 7 shows examples of runs of different lengths and contributing levels, and the runs and run portions that are marked with horizontal edges;
Fig. 8 shows commands where a horizontal edge will be marked for given runs;
Fig. 9 shows commands where a right edge will be marked for given runs;
Fig. 10 shows commands where a bottom edge will be marked for given runs;
Figs. 11 and 12 show two alternative arrangements for dominance checking and trapping;
Fig. 13 shows typical density curves for each colour ink in CMYK colour space;
Fig. 14 shows the dominance checking system with all options included;
Fig. 15 shows an active pixel and its corresponding test area during dominance checking;
Fig. 16 shows an active pixel and its corresponding test area during dominance checking;
Fig. 17 shows pixels within the test area of an active pixel marked as dominant and submissive;
Fig. 18 shows the final marking of dominant pixels for the example of Figs. 15 to 17;
Fig. 19 shows edge marking bits for four rotation states of the same bitmap;
Fig. 20 shows a comparison of the results of the trapping method of the preferred embodiment with those of a prior art method;
Fig. 21 shows trap colour determination for a single channel;
Fig. 22 shows the search area for an active pixel during the trapping operation;
Fig. 23 shows the identified pixels in the search area during the trapping operation;
Fig. 24 shows the determined neighbours of the identified pixels during the trapping operation;
Fig. 25 shows output colour determination for a single channel with a single bit dominance map;
Fig. 26 shows an example of misalignment of ink channels in multiple-ink printing; and
Fig. 27 shows how trapping can ameliorate the artefacts caused by channel misalignment.
DETAILED DESCRIPTION INCLUDING BEST MODE
For a better understanding of the pixel sequential rendering system 1, a brief overview of the system is first undertaken. Then follows a brief discussion of the driver software for interfacing between a third party software application and the pixel sequential rendering apparatus 20 of the system. A brief overview of the pixel sequential rendering apparatus 20 is then discussed. As will become apparent, the pixel sequential rendering apparatus 20 includes an instruction execution module 300; an edge tracking module 400; a priority determination module 500; a fill colour determination module 600; a pixel compositing module 700; and a pixel output module 800.
Fig. 1 illustrates schematically a computer system 1 configured for rendering and presentation of computer graphic object images. The system includes a host processor 2 associated with system random access memory (RAM) 3, which may include a non-volatile hard disk drive or similar device 5 and volatile, semiconductor RAM 4. The system 1 also includes a system read-only memory (ROM) 6 typically founded upon semiconductor ROM 7 and which in many cases may be supplemented by compact disc devices (CD ROM) 8. The system 1 may also incorporate some means 10 for displaying images, such as a video display unit (VDU) or a printer, both of which operate in raster fashion.
The above-described components of the system 1 are interconnected via a bus system 9 and are operable in a normal operating mode of computer systems well known in the art, such as IBM PC/AT type personal computers and arrangements evolved therefrom, Sun Sparcstations and the like.
Also seen in Fig. 1, a pixel sequential rendering apparatus 20 (or renderer) connects to the bus 9, and is configured for the sequential rendering of pixel-based images derived from graphic object-based descriptions supplied with instructions and data from the system 1 via the bus 9. The apparatus 20 may utilise the system RAM 3 for the rendering of object descriptions, although preferably the rendering apparatus 20 may have associated therewith a dedicated rendering store arrangement 30, typically formed of semiconductor RAM.
Image rendering operates generally speaking in the following manner. A render job to be rendered is given to the driver software by third party software for supply to the pixel sequential renderer 20. The render job is typically in a page description language or in a sequence of function calls to a standard graphics application program interface (API), which defines an image comprising objects placed on a page from a rearmost object to a foremost object to be composited in a manner defined by the render job. The driver software converts the render job to an intermediate render job, which is then fed to the pixel sequential renderer 20. The pixel sequential renderer 20 generates the colour and opacity for the pixels one at a time in raster scan order. At any pixel currently being scanned and processed, the pixel sequential renderer 20 composites only those exposed objects that are active at the currently scanned pixel. The pixel sequential renderer determines that an object is active at a currently scanned pixel if that pixel lies within the boundary of the object. The pixel sequential renderer 20 achieves this by reference to a fill counter associated with that object. The fill counter keeps a running fill count that indicates whether the pixel lies within the boundary of the object. When the pixel sequential renderer 20 encounters an edge associated with the object, it increments or decrements the fill count depending upon the direction of the edge.
The renderer 20 is then able to determine whether the current pixel is within the boundary of the object depending upon the fill count and a predetermined winding count rule. The renderer 20 determines whether an active object is exposed with reference to a flag associated with that object. This flag indicates whether or not the object obscures lower order objects. That is, this flag indicates whether the object is partially transparent, in which case the lower order active objects will make a contribution to the colour and opacity of the current pixel. Otherwise, this flag indicates that the object is opaque, in which case active lower order objects will not make any contribution to the colour and opacity of the currently scanned pixel. The pixel sequential renderer 20 determines that an object is contributing if it is the uppermost active object, or if all the active objects above the object have their corresponding flags set to transparent. The renderer 20 then composites these contributing active objects to determine and output the colour and opacity for the currently scanned pixel.
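By way of illustration, the following is a minimal Python sketch of the fill counting and contribution logic described above. It is a behavioural model only: the names (update_fill_count, is_active, the winding-rule labels) are illustrative assumptions, not part of the described apparatus, which implements this logic in hardware.

    # Behavioural sketch; assumes active_objects is ordered topmost first
    # and each object carries an is_transparent flag, as described above.

    def update_fill_count(fill_count, edge_direction):
        # An edge crossing increments or decrements the running fill count
        # depending on the direction of the edge.
        return fill_count + (1 if edge_direction > 0 else -1)

    def is_active(fill_count, winding_rule):
        # A predetermined winding count rule decides whether the current
        # pixel lies within the boundary of the object.
        if winding_rule == "nonzero":
            return fill_count != 0
        if winding_rule == "odd_even":
            return fill_count % 2 == 1
        raise ValueError(winding_rule)

    def contributing_objects(active_objects):
        # The uppermost active object always contributes; lower active
        # objects contribute only while every object above them is
        # transparent (an opaque object obscures everything below it).
        contributing = []
        for obj in active_objects:
            contributing.append(obj)
            if not obj.is_transparent:
                break
        return contributing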
The driver software, in response to the page, also extracts edge information defining the edges of the objects for feeding to the edge tracking module. The driver software also generates a linearised table of priority properties and status information (herein called the level activation table) of the expression tree of the objects and their compositing operations, which is fed to the priority determination module. The level activation table contains one record for each object on the page. In addition, each record contains a field for storing a pointer to an address for the fill of the corresponding object in a fill table. This fill table is also generated by the driver software and contains the fill for the corresponding objects, and is fed to the fill determination module. The level activation table together with the fill table is devoid of any edge information and effectively represents the objects as infinitely extending. The edge information is fed to an edge tracking module, which determines, for each pixel in raster scan order, the edges of any objects that intersect a currently scanned pixel. The edge processing module passes this information on to a priority determination module. Each record of the level activation table contains a counter, which maintains a fill count associated with the corresponding object of the record. The priority determination module processes each pixel in raster scan order. Initially, the fill counts associated with all the objects are zero, and so all objects are inactive. The priority determination module continues processing each pixel until it encounters an edge intersecting that pixel. The priority determination module updates the fill count associated with the object of that edge, and so that object becomes active. The priority determination continues in this fashion, updating the fill counts of the objects and so activating and de-activating the objects.
The priority determination module also determines whether these active objects are exposed or not, and consequently whether they make a contribution to the currently scanned pixel. In the event that they do, the pixel determination module generates a series of messages which ultimately instructs a pixel compositing module to composite the colour and opacity for these exposed active objects in accordance with the compositing operations specified for these objects in the level activation table, so as to generate the resultant colour and opacity for the currently scanned pixel. These series of messages do not at that time actually contain the colour and opacity for that object, but rather an address to the fill table, which a fill determination module uses to determine the colour and opacity of the object.
A software program, hereafter referred to as the driver, is loaded and executed on the host processor 2 for generating instructions and data for the pixel-sequential graphics rendering apparatus 20, from data provided to the driver by a third-party application. The third-party application may provide data in the form of a standard language description of the objects to be drawn on the page, such as PostScript and PCL, or in the form of function calls to the driver through a standard software interface, such as the Windows GDI or X-11.
The driver software separates the data associated with an object, supplied by the third-party application, into data about the edges of the object, any operation or operations associated with painting the object onto the page, and the colour and opacity with which to fill pixels which fall inside the edges of the object.
The driver software partitions the edges of each object into edges which are monotonically increasing in the Y-direction, and then divides each partitioned edge of the object into segments of a form suitable for the edge module described below. Partitioned edges are sorted by the X-value of their starting positions and then by Y. Groups of edges starting at the same Y-value remain sorted by X-value, and may be concatenated together to form a new edge list, suitable for reading in by the edge module when rendering reaches that Y-value.
The driver software sorts the operations, associated with painting objects, into priority order, and generates instructions to load the data structure associated with the priority determination module (described below). This structure includes a field for the fill rule, which describes the topology of how each object is activated by edges, a field for the type of fill which is associated with the object being painted, and a field to identify whether data on levels below the current object is required by the operation. There is also a field, herein called clip count, that identifies an object as a clipping object, that is, as an object which is not, itself, filled, but which enables or disables filling of other objects on the page.
The driver software also prepares a data structure (the fill table) describing how to fill objects. The fill table is indexed by the data structure in the priority determination module.
This allows several levels in the priority determination module to refer to the same fill data structure.
The driver software assembles the aforementioned data into a job containing instructions for loading the data and rendering pixels, in a form that can be read by the rendering system, and transfers the assembled job to the rendering system. This may be performed using one of several methods known to the art, depending on the configuration of the rendering system and its memory.
Referring now to Fig. 2, a functional data flow diagram of the rendering process is shown. The functional flow diagram of Fig. 2 commences with an object graphic description 11 which is used to describe those parameters of graphic objects in a fashion appropriate to be generated by the host processor 2 and/or, where appropriate, stored within the system RAM 3 or derived from the system ROM 6, and which may be interpreted by the pixel sequential rendering apparatus 20 to render therefrom pixel-based images. For example, the object graphic description 11 may incorporate objects with edges in a number of formats including straight edges (simple vectors) that traverse from one point on the display to another, or an orthogonal edge format where a two-dimensional object is defined by a plurality of edges including orthogonal lines. Further formats, where objects are defined by continuous curves, are also appropriate and these can include quadratic polynomial fragments where a single curve may be described by a number of parameters which enable a quadratic based curve to be rendered in a single output space without the need to perform multiplications. Further data formats such as cubic splines and the like may also be used. An object may contain a mixture of many different edge types. Typically, common to all formats are identifiers for the start and end of each line (whether straight or curved) and typically, these are identified by a scan line number, thus defining a specific output space in which the curve may be rendered.
Returning to Fig. 2, having identified the data necessary to describe the graphic objects to the renderer, the graphic system 1 then performs a display list generation step 12.
The display list generation 12 is preferably implemented as a software driver executing on the host processor 2 with attached ROM 6 and RAM 3. The display list generation 12 converts an object graphics description, expressed in any one or more of the well known graphic description languages, graphic library calls, or any other application specific format, into a display list. The display list is typically written into a display list store 13, generally formed within the RAM 4 but which may alternatively be formed within the temporary rendering stores 30. As seen in Fig. 3, the display list store 13 can include a number of components, one being an instruction stream 14, another being edge information 15 and where appropriate, raster image pixel data 16.
The instruction stream 14 includes code interpretable as instructions to be read by the pixel sequential rendering apparatus 20 to render the specific graphic objects desired in any specific image.
The display list store 13 is read by a pixel sequential rendering apparatus 20. The pixel sequential rendering apparatus 20 is typically implemented as an integrated circuit and converts the display list into a stream of raster pixels which can be forwarded to another device, for example, a printer, a display, or a memory store.
Although the pixel sequential rendering apparatus 20 is described as an integrated circuit, it may be implemented as an equivalent software module executing on a general purpose processing unit, such as the host processor 2.
Fig. 3 shows the configuration of the pixel sequential rendering apparatus 20, the display list store 13 and the temporary rendering stores 30. The processing stages 22 of the pixel-sequential rendering apparatus 20 include an instruction executor 300, an edge processing module 400, a priority determination module 500, a fill colour determination module 600, a pixel compositing module 700, and a pixel output module 800. The processing operations use the temporary stores 30 which, as noted above, may share the same device (eg. magnetic disk or semiconductor RAM) as the display list store 13, or may be implemented as individual stores for reasons of speed optimisation. The edge processing module 400 uses an edge record store 32 to hold edge information which is carried forward from scanline to scanline. The priority determination module 500 uses a priority properties and status table 34 to hold information about each priority, and the current state of each priority with respect to edge crossings while a scanline is being rendered. The fill colour determination module 600 uses a fill data table 36 to hold information required to determine the fill colour of a particular priority at a particular position. The pixel compositing module 700 uses a pixel compositing stack 38 to hold intermediate results during the determination of an output pixel that requires the colours from multiple priorities to determine its value.
The display list store 13 and the other stores 32-38 detailed above may be implemented in RAM or any other data storage technology.
The processing steps shown in the arrangement of Fig. 3 take the form of a processing pipeline 22. In this case, the modules of the pipeline may execute simultaneously on different portions of image data in parallel, with messages passed between them as described below. In another arrangement, each message described below may take the form of a synchronous transfer of control to a downstream module, with upstream processing suspended until the downstream module completes the processing of the message.
The priority determination module 500 is responsible for marking edges that are associated with pixel runs of constant fill. Fig. 4 illustrates the four edges 410, 420, 430, 440 adjacent to a run, referred to as left, top, right, and bottom respectively. The four Boolean variables m0, m1, m2, and m3 are referred to as the edge marking bits for a pixel, and refer respectively to the left, top, right, and bottom edges of the pixel.
One of two options can be specified for edge marking:
  • marking of left (vertical) and top (horizontal) edges; and
  • marking of all (perimeter) edges.
It is important to note that false positive edge markings are allowed. That is, the marking of non-existing edges is just considered redundant information. Therefore, as long as all existing edges are appropriately marked, any extra edge markings that are specified for a run will not have an erroneous effect.
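As a concrete illustration of this tolerance of false positives, the edge marking bits can be treated as a small bit mask that is only ever ORed into a pixel's attributes; the encoding below is an assumption for illustration, not a prescribed layout:

    # Hypothetical bit assignment for the four edge marking bits m0..m3.
    M0_LEFT, M1_TOP, M2_RIGHT, M3_BOTTOM = 0x1, 0x2, 0x4, 0x8

    def mark_edges(attr, new_marks):
        # ORing is safe because spurious (false positive) markings are
        # merely redundant: they can never unmark a real edge.
        return attr | new_marks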
A run is marked as having a left edge adjacent to it whenever:
  • its contributing levels are different to those of the previous run on the current scanline; and
  • it is not a background run at the start of a scanline.
During the priority determination stage, a fill command (eg. 600 in Fig. 6) may be generated at the start of a run, containing an instruction list describing how to generate the first pixel of the run. If the run is more than one pixel long, a repeat_for command (eg. 610) will also be generated to indicate the initial pixel generation is to be repeated a given number of times. A background run is generated by a background_repeat (bkg_rpt) command (eg. 620). A vertical edge need only be defined once for a run as it is implicit that it occurs on the left-most pixel. As such, a vertical left edge 630 is marked on either the top-most level of a fill command 600 or in the background_repeat command 620, as indicated by the setting of the left edge bit m0.
A run is marked as having a top edge adjacent to it whenever its contributing levels are different to those of the run in the scanline above it.
Detection of horizontal edges requires a line buffer for comparing the run-length and contributing levels of the current run with those on the previous scanline. As runs are generated by the priority determination module, the contributing level(s) and run-length of the run above it are read in from the previous line buffer.
Refer to Fig. 7. If the run-length of the current run 700 is less than that of the corresponding run 710 on the previous scanline, the current run is emitted and the run-length of the run 710 on the previous scanline is reduced by the length of the current run. If the contributing levels of the current run are different to those of the run on the previous scanline, the emitted run is marked with horizontal edges (eg. 720). The next run received from the priority determination module 500 is then compared to the remaining portion 730 of the run on the previous scanline. If the run-length of the run 710 on the previous scanline is less than that of the current run 700, then a portion 740 of the current run is emitted with a run-length equal to that of the run on the previous scanline. The run-length of the current run is then reduced by the length of the emitted portion 740 and the next run 750 of the previous scanline is read in for comparison with the remaining portion 760 of the current run. If the contributing levels of the current run are different to those of the run on the previous scanline, the emitted run is marked with horizontal edges (eg. 720).
As the runs generated by the priority determination module are compared with those of the previous scanline they are also written to a line buffer. At the completion of each scanline, the line buffers are swapped in a ping-pong fashion so that the current line buffer becomes the previous line buffer for the next line.
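The run comparison described above can be summarised by the following Python sketch, a simplification in which a run is reduced to a (levels, length) pair; the real runs also carry fill commands and edge marking bits:

    def emit_runs_with_top_edges(current_runs, prev_line):
        # Split the current scanline's runs against the previous scanline's
        # runs, marking a top edge on any emitted piece whose contributing
        # levels differ from those of the run above it. Assumes both
        # scanlines cover the same width.
        emitted, new_prev = [], []
        prev = list(prev_line)
        for levels, length in current_runs:
            while length > 0:
                p_levels, p_length = prev[0]
                piece = min(length, p_length)       # overlapping portion
                top_edge = (levels != p_levels)     # differing levels
                emitted.append((levels, piece, top_edge))
                new_prev.append((levels, piece))    # next line's buffer
                length -= piece
                if p_length > piece:
                    prev[0] = (p_levels, p_length - piece)
                else:
                    prev.pop(0)                     # run above consumed
        # new_prev becomes prev_line for the next scanline (ping-pong swap)
        return emitted, new_prev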
Refer to Fig. 8. A horizontal edge must be marked for all pixels in a marked run as the edge affects every pixel. As such, a horizontal top edge 800, indicated by the setting of horizontal edge bit m1, is marked on both the top-most level of a fill command 810 and the following repeat_for command 820, or for a bkg_rpt command 830. A repeating run can also have different horizontal edge markings compared with the original fill, and they will be applied to all pixels within the repeating run. This is illustrated in Fig. 8(c) where the second repeat_for command 840 has a different edge marking from the first repeat_for command 850.
In an alternative rendering arrangement, the rendering apparatus is not completely pixel-sequential, but is tile-sequential, and pixel-sequential within each tile. Pixel runs cannot be readily compared between vertically adjacent tiles, so detecting and marking horizontal edges is difficult for pixels in the top scanline of a tile. As a short cut, a "dirty edge" may be marked for that scanline's first run if there could be a top edge at any pixel on the first scanline of a tile. The dirty edge is preferably marked for the run by marking the run as if it were commenced by a left edge in the manner described above. The dirty edge marking is later, during trapping, interpreted as both a left edge on the top left pixel of the marked tile and a horizontal edge on each pixel in the first scanline of the marked tile.
The decision to insert a dirty edge can be performed in a number of ways, some more effective, others more efficient.
  • (Refer to Fig. 5.) Maintain a summary bit for each tile interval across a scanline, e.g. 520, in a scanline of tiles, and set the summary bit to indicate that the interval is dirty if any non-background run occurs within it. On the following scanline 530, the first run of each tile's first scanline is marked with a dirty edge, e.g. 500, if the summary bit for the tile directly above it was set. While this method will produce fewer spurious vertical edges, the required storage for the summary bits may be quite costly. An alternative to this method, in order to reduce the storage cost, is to increase the number of tiles represented by a summary bit. A sketch of this summary-bit method follows this list.
  • Record the x-coordinate where the first non-background run begins and the x-coordinate where the final non-background run finishes for the final scanlines of tiles. The entire run between these minimum and maximum x-coordinates can be treated as being dirty, and dirty vertical edges inserted at tile intervals between these points on the following scanline. The extension to this method is simply to mark the whole scanline as dirty if it has any non-background runs.
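A minimal sketch of the first, summary-bit method follows; the run representation and tile bookkeeping are simplified assumptions:

    def summary_bits_for_tile_row(runs, page_width, tile_width):
        # One bit per tile interval on the last scanline of a tile row; a
        # bit is set if any non-background run touches its interval.
        # Runs are assumed to be (is_background, length) pairs.
        n_tiles = (page_width + tile_width - 1) // tile_width
        bits = [False] * n_tiles
        x = 0
        for is_background, length in runs:
            if not is_background:
                first = x // tile_width
                last = (x + length - 1) // tile_width
                for t in range(first, last + 1):
                    bits[t] = True
            x += length
        return bits

On the first scanline of the next tile row, the first run of each tile whose summary bit is set is then marked with a dirty edge, as described above.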
If the perimeter edge marking style is selected, a run can be marked as having edges adjacent to it whenever its contributing levels are different to those of the runs surrounding it, in all four directions. Perimeter edge marking will locate and mark runs with left, top, right and bottom adjacent edges, as shown in Fig. 4. Runs with right and bottom adjacent edges will be marked in the fill, repeat_for, and bkg_rpt commands in analogous fashion to left and top edges, respectively. This is illustrated in Figs. 9 and 10 (compare with Figs. 6 and 8).
The remaining pipeline modules (Fill Colour Generation, Pixel Compositing and Pixel Output) handle the edge markings as follows:
  • The Fill Colour Generator makes no attempt to interpret edge markings. If the Pixel Colour Generation module is responsible for expanding repeat_for commands into pixel data, it simply includes the edge marking bits from the repeat_for command into the end_pixel command for each pixel in a run.
  • The Pixel Compositing module makes no attempt to interpret edge markings. If the Pixel Colour Compositing module is expanding a repeat_for command into single pixels, it reads the edge marking bits from the repeat_for command, and appends them to the attribute bits obtained from the compositing process; otherwise, in the case where the compositing is being performed pixel by pixel, it reads the edge marking bits from the end_pixel command, and appends them to the attribute bits for the pixel.
  • The Pixel Output module makes no attempt to interpret edge markings. It simply outputs them in the same way as for any other attribute.
Edge-marked output is collected and interpreted by the trapping module. This is a separate pipeline stage, and is preferably separated out as a post-rendering module.
The edge marking data that reaches the trapping module must have its topology preserved correctly under whatever resolution conversion is applied: if resolution is reduced for intermediate storage, the edge markings for the pixels subtended by a reduced-resolution pixel are bitwise ORed together, so that the continuity of the boundary is maintained.
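For example, a 2x2 reduction of the edge attribute bitmap would be performed as in the sketch below (a simplified assumption: attributes held as a list of lists of integers, with even dimensions):

    def downsample_edge_attrs(attrs):
        # OR the edge markings of the four pixels subtended by each
        # reduced-resolution pixel, preserving boundary continuity.
        h, w = len(attrs), len(attrs[0])
        out = [[0] * (w // 2) for _ in range(h // 2)]
        for y in range(0, h, 2):
            for x in range(0, w, 2):
                out[y // 2][x // 2] = (attrs[y][x] | attrs[y][x + 1]
                                       | attrs[y + 1][x] | attrs[y + 1][x + 1])
        return out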
The result of the process discussed in the previous section is a rendered bitmap of page data, together with an attribute bitmap where two of the attributes are the Left edge and Top edge marking bits m0 and m1. The Right edge and Bottom edge marking bits m2 and m3 (marked under perimeter edge marking) may be ignored by the trapping module except for rotation purposes, described below.
The edge marking bits define the boundaries between regions of the page where different sets of objects are active, and where we might therefore expect to find different colours that need to be trapped against each other.
There are two stages of the trapping process: dominance checking, where pixel colour is compared for the two sides of a boundary, and trap colour determination, where the trap colour is determined for pixels on the dominant side of the boundary.
The dominance check is performed separately, because of the corner condition where three colours meet: this may require changing the dominance state of one of the trap regions, as discussed later. This is not possible if a trap colour has replaced the original colour. Since trapping works with ink values, it is preferable to perform the trapping operation after the colour space conversion to ink-value space. However, dominance can be estimated on the basis of render-space colours (eg. RGB space) and the other attribute bits.
There are two options for implementing the post-processing trapping module:
1. As in the process 1100 shown in Fig. 11, establish dominance and perform trapping in a step 1120 which follows colour space conversion 1110. The advantage of this option is that dominance checking can be performed more accurately, while the disadvantage is that the dominance checking process 1120 (described below) requires knowledge of all four ink component values at once. The ink component value data has to be buffered until required by the print engine 1130, imposing an extra buffer memory cost.
2. As in the process 1200 shown in Fig. 12, establish dominance in a step 1210 before colour space conversion 1220, and perform trapping 1230 after colour space conversion. This option requires an attribute bit, the "dominance bit", in addition to the edge marking and other attribute bits, to be set by the dominance check module. The advantage of this option is that it allows the full separation of ink channels, allowing them to be independently processed in the printing stage 1230. This is crucial to reducing the amount of buffered data required in a multiple drum printer engine. The disadvantage is that dominance checking is based on the "expected" ink channel values, estimated from the rendered colour and attribute bitmaps, rather than on the actual ink channel values.
Dominance checking has to be performed in circumstances where the two neighbouring colours can be directly compared, which preferably means at full page resolution, with attribute bits available. Since checking is required across a scanline boundary, at least two consecutive scanlines of data must be buffered at a time in a "running buffer".
In option 2 above, dominance checking sets attributes called dominance bits in the attribute bitmap to indicate that the colour of certain pixels close to an edge is dominant over the neighbouring object.
Dominance checking is, in essence, determining which of two neighbouring pixels has the greater ink density, and is therefore the dominant pixel. The pixel with the lesser ink density is called submissive. A simple formula for determining ink density in a 4-channel system is:

D = c_C C + c_M M + c_Y Y + c_K K    (Equation 1)

where c_i is the ink value in each channel and the coefficients C, M, Y, K are the (empirically predetermined) ink channel density coefficients in CMYK colour space.
A more complicated calculation for the ink density is a sum of "neutral density" (ND) values for each channel:

D = Σ_channels −1.7 log10( 1 − c_i (1 − 10^(−0.6 d_i)) )    (Equation 2)

where c_i is the ink value in each channel and d_i is the ink density value for that channel. Typical d values for each channel, and the respective multiplying factor 1 − 10^(−0.6d) for each, are:

Channel | d     | 1 − 10^(−0.6d)
C       | 0.610 | 0.5695
M       | 0.760 | 0.6501
Y       | 0.160 | 0.1983
K       | 1.700 | 0.9045

The corresponding neutral density curves, as a function of colour component value, are shown for each channel (C, M, Y, and K) in Fig. 13.
Neutral density functions as contained in Equation 2 are preferably implemented using a 1-D LUT with piecewise linear interpolation for each channel. With 32 samples in the LUT, the maximum error per channel associated with interpolation is 0.007. This should be adequate for threshold values around 0.05, the value used in the preferred trapping embodiment.
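A Python sketch of this LUT-based evaluation of Equation 2 follows; ink values are assumed normalised to [0, 1], and the d values are taken from the table above:

    import math

    D_VALUES = {"C": 0.610, "M": 0.760, "Y": 0.160, "K": 1.700}

    def nd_exact(c, d):
        # Per-channel neutral density term of Equation 2.
        return -1.7 * math.log10(1.0 - c * (1.0 - 10.0 ** (-0.6 * d)))

    def build_lut(d, samples=32):
        # 32 evenly spaced samples of the exact function.
        return [nd_exact(i / (samples - 1), d) for i in range(samples)]

    def nd_lut(c, lut):
        # Piecewise linear interpolation between adjacent LUT samples.
        pos = c * (len(lut) - 1)
        i = min(int(pos), len(lut) - 2)
        frac = pos - i
        return lut[i] * (1.0 - frac) + lut[i + 1] * frac

    def ink_density(pixel, luts):
        # Equation 2: sum of per-channel neutral densities.
        return sum(nd_lut(pixel[ch], luts[ch]) for ch in D_VALUES)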
A pixel is considered dominant if its ink density is greater than that of the pixels in its neighbouring pixel regions. The density differences should be thresholded, so that areas that are close in density are not trapped asymmetrically, leading to an apparent growth in one or other object. Thus, one criterion for a pixel 0 to be dominant over another pixel 1 is

D_0 − D_1 > δ_abs    (Equation 3)

The preferred embodiment has an additional criterion, being that the relative density difference for the two pixels also has to exceed a second threshold value:

D(max) − D(min) > δ_rel × D(min)    (Equation 4)

Both criteria (Equation 3 and Equation 4) must be met for a dominance bit to be generated.
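A sketch of the combined test follows, assuming D_0 and D_1 have already been computed via Equation 2; the default threshold values here are illustrative only (the text suggests threshold values around 0.05 for the absolute criterion):

    def is_dominant(d0, d1, delta_abs=0.05, delta_rel=0.05):
        # Equation 3: the absolute density difference must exceed delta_abs.
        if d0 - d1 <= delta_abs:
            return False
        # Equation 4: the relative density difference must also be exceeded.
        d_max, d_min = max(d0, d1), min(d0, d1)
        return d_max - d_min > delta_rel * d_min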
Once dominance has been established, the dominance check module considers the neutral density of each colour channel individually to establish shared channels between the two pixels and the dominant channel of the dominant pixel. Shared channels are those whose neutral densities are sufficiently close in value as to fail one or other of the criteria in Equation 3 and Equation 4. Shared channels are not trapped, because a single channel cannot become misaligned with itself.
The dominant channel, or dominant ink, is the highest-density of the remaining (non-shared) channels of the dominant pixel. This channel defines the shape of the object, and so must remain unaltered by the trapping system. Only channels within the dominant pixel that are neither shared nor dominant, i.e. submissive channels, are subject to alteration by trapping.
Fig. 14 shows the dominance checking system 1400 with all options included. If the dominance check is performed in rendered colour space (see option 2 above), approximate colour space conversion (step 1410) will be required before the neutral densities are computed. Shared and dominant channel determination (step 1450) may be performed after the pixel dominance check 1440. As discussed below, this module may alternatively be moved to the trap colour generation stage depending on further analysis of the trade-offs involved. If step 1450 is present as shown in Fig. 14, the final stage (1460 or 1470) is a marking of submissive channels within the dominant pixel, either the active pixel (step 1460) or the neighbour pixel (step 1470). Otherwise, the final stage is marking the dominant pixel.
A running buffer is kept for edge attribute and colour bitmap data, with a 1-1 correspondence between the two. The dominance bit is marked to indicate the dominant pixels in the running buffer. For reasons described later, a second attribute bit, "submissive", is required for pixels in the running buffer, but is not passed on to downstream parts of the system. Note that the pixels where the dominance bit is set are the pixels where the trap colour selection operation will be performed: if the dominance bit is not set, the original colour will be output unchanged.
In step 1420, the edge attribute map in the running buffer is scanned, and for each pixel, the edge bits are checked. If the active pixel has its Top edge (m1) or its Left edge (m0) bit set, its neighbour pixel becomes the pixel above or to the left respectively. The dominance state of the active pixel with respect to the neighbour pixel is determined in step 1440 using Equation 3 and Equation 4, having first determined the neutral densities for each ink channel of the active pixel and the neighbour pixel in steps 1425 and 1430 and summed them in steps 1435 and 1437 according to Equation 2.
As mentioned above, if the top left pixel of a tile is marked with a left edge, this will be interpreted as both a left edge on the top left pixel of the marked tile and a horizontal edge on each pixel in the first scanline of the marked tile.
For each active pixel, a surrounding test area is notionally defined. The symmetry point of the test area is to the top left of the active pixel, to ensure that there is no preferred direction for the dominant area. The radius of the test area is the width of the trapping region, which is a predetermined parameter dependent on the print engine specifications. The size of the test area sets the size of the running buffer. Refer to Fig. 15, which shows the active pixel 1510 and its corresponding test area 1520, with a radius of three pixels, within the running buffer 1500. Fig. 16 is similar, but in this case the active pixel 1610 is marked with a left edge 1620.
Pixels within the test area and in the same topological region as a dominant pixel are marked as dominant. Pixels within the test area and in the same topological region as the corresponding submissive pixel are marked as submissive. The topological region is delimited by the Top edge and Left edge bits. Dominant pixels may be modified in the trap operation. See Fig. 17, which shows pixels (eg. 1730) marked as dominant and pixels (eg. 1740) marked as submissive in the case where the active pixel 1710, marked with a left edge, is found to be submissive to its left neighbour pixel 1720.
The submissive state overrides the dominant: that is, a submissive pixel cannot be marked as dominant, and if a dominant pixel is later found to be submissive to something else, it is marked as submissive. For example, in Fig. 17, the pixels (eg. 1740) marked as submissive cannot later become dominant; however, the pixel 1750 above and to the left of the active pixel, being a pixel with two edge markings (where three colours meet), currently marked as dominant, may later be marked as submissive if it is found to be submissive to its top neighbour pixel 1760. For this reason the "submissive" state of a pixel needs to be retained for the duration of the dominance checking stage. Fig. 18 shows the final markings of dominant pixels (eg. 1810) for the example of Figs. 15 to 17.
Dominant and shared channels of the two colours are preferably also determined at this stage in the manner outlined above. This allows the separation of the ink channels to be performed at any later point. Those channels within the dominant pixel that are not shared, and are not the dominant channel, are marked as submissive. Only the channels marked as submissive are subject to later trapping.
There is a trade-off between:
  • determining shared and dominant channels at this stage, which requires a larger attribute bitmap to be stored for later processing by the trap colour determination step; and
  • determining shared and dominant channels at the trap colour determination stage, which requires re-calculation of the neutral densities (Equation 2).
When an image is rotated by a multiple of 90 degrees after rendering and before printing, the orientation and meaning of the edge markings also changes. Horizontal and vertical edge markings are relative to the render orientation: the trapping system needs to know through what angle an image has been rotated in order to extract the trap information correctly.
There are three approaches to this problem:
  • Modify the edge marking bitmaps according to the rotation applied, by rotating and interchanging them. In the case of perimeter edge marking, the interchange is a simple cyclic permutation for each 90 degree counterclockwise rotation: right becomes top, top becomes left, left becomes bottom, and bottom becomes right (see the sketch after this list). However, for top and left marking only, top and left cannot simply be interchanged; there needs to be a shift of one pixel of one of the maps for each 90 degree rotation, as shown in Fig. 19. For example, when rotating an edge map 1900 by 90 degrees counterclockwise, the original top edge bitmap 1910, after 90 degree counterclockwise rotation, becomes the left edge bitmap 1920. The original left edge bitmap 1930, after 90 degrees counterclockwise rotation, requires a downwards shift 1940 of one scanline before becoming the top edge bitmap 1950. In the case where the rotation is tile-wise, the new scanline 1955 (diagonally hatched) at the top of the new top edge bitmap is marked as "dirty" to allow for the appearance of any top edges in that scanline. Likewise the next 90 degree counterclockwise rotation marks the first scanline 1965 (cross-hatched) of the new top edge bitmap 1960 as "dirty". On the final 90 degree rotation, the diagonally hatched scanline 1940 drops off the new top edge bitmap 1970 after the downwards shift and is not replaced at the top because of the knowledge of the original left edge bitmap 1930.
  • Include orientation information in the tile header, to change the interpretation of the flags in the edge marking bitmap. This allows tiles to remain independent, and is the preferred approach.
  • Mark the edge pixels affected in this way when a rotation occurs, and check three, rather than two, colour values at the dominance check stage. This approach confines the problem to the trapping module, and therefore has the fewest implications for other parts of the system.
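The cyclic permutation for perimeter edge marking referred to in the first approach can be sketched as follows (the bit layout is an illustrative assumption; the bitmaps themselves must also be rotated, and top/left-only marking additionally needs the one-scanline shift described above):

    M0_LEFT, M1_TOP, M2_RIGHT, M3_BOTTOM = 0x1, 0x2, 0x4, 0x8

    def rotate_edge_bits_ccw(attr):
        # One 90 degree counterclockwise step: right->top, top->left,
        # left->bottom, bottom->right.
        out = 0
        if attr & M2_RIGHT:
            out |= M1_TOP
        if attr & M1_TOP:
            out |= M0_LEFT
        if attr & M0_LEFT:
            out |= M3_BOTTOM
        if attr & M3_BOTTOM:
            out |= M2_RIGHT
        return out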
Since the other bitmap attributes are available at the time when dominance checking is performed, they can be used to prevent trapping from being performed against particular graphic object types. In particular, natural images, thin lines and small text cause problems for trapping systems:
  • the colour of a natural image varies along its boundary on a small scale: dominance will therefore change from one side of the boundary to the other, and lead to lines that do not appear straight;
  • thin lines and small text, if submissive, may appear to be fatter than intended when a trap is performed, although a well-chosen set of dominance criterion parameters (δ_abs and δ_rel) should minimise this; and
  • thin lines and small text, if dominant, may consist entirely of the trap region, in which case they will not appear in their correct colour.
These three conditions are tested for in step 1480 of Fig. 14. The AND logic gates (eg. 1490) in Fig. 14, having inverted (inhibiting) inputs from the detection step 1480, prevent trapping from being performed across boundaries if one or more of the three conditions occurs.
When checking for dominance in a render colour space, there is a problem in that the mapping into the ink value space is not unique, but is controlled by the attribute bits. There is also the problem of the non-linearity of the colour map to ink value space. These problems mean that the trapping operation cannot be performed correctly before colour space conversion. However, the dominance checking stage can be performed before the final colour space conversion, since this is only a threshold calculation, and does not require exact calculation of the ink channel values.
For example, if the render space is RGB space, a rough guess at the CMYK values is

K = U × (1 − max(R, G, B))
C = 1 − R − K
M = 1 − G − K    (Equation 5)
Y = 1 − B − K

where the undercolour removal fraction, U, is dependent on the other attributes of the rendered bitmap (eg. text-mode). While Equation 5 is not sufficiently accurate to determine the final output colours, it is sufficiently accurate for dominance checking provided that the RGB space is chosen so that the error is smaller than the threshold value used in dominance checking.
SMore complicated functions might be possible, such as linear interpolation between u 2 based on distance from the grey line, however some analysis would be required to determine whether it can be reasonably done in hardware. Anything more complicated would require a 3-D LUT, in which case you might as well do the full colour map.
As mentioned above, the trapping operation is performed for certain channels of dominant pixels, i.e. those marked as submissive. The submissive channels are the nondominant ink channels of the dominant pixel colour that are not shared with the neighbouring object. The trap colour is formed by replacing the submissive ink channel value(s) with the corresponding ink channel value(s) of the neighbouring object. This is referred to herein as the "minimum unshared neighbour" trapping scheme.
Note that this scheme will effect both "choking" of the submissive channel of the dominant object and "spreading" of the submissive object in a single operation.
The creation of trap colours by choosing the neighbour colour for channels that are not dominant or shared produces similar results to that of taking the maximum value between the two channel values in each ink channel, the "max-per-channel" scheme known in the prior art.
However, minimum unshared neighbour trapping also produces correct trapping for mixed colours against a white background. As mentioned above, these should be choke-trapped to prevent "halo" formation. Fig. 20 shows the perfectly-aligned output 2000 of the minimum unshared neighbour trapping scheme compared with the perfectly-aligned output 2010 of the max-per-channel scheme on an image with two objects on a white background. The two images printed with the yellow ink channel misaligned (2020 and 2030 respectively) clearly show the minimisation of the halo effect using the minimum unshared neighbour approach when compared to the max-per-channel approach.
707060 CD-23- When creating trap colours using the max-per-channel approach, dominance checking 00 that simply determines which pixels are dominant is sufficient. The minimum unshared neighbour method needs to determine which channels have to be modified.
If the determination as to which channels need to be modified is done at the N 5 dominance checking stage, as shown in step 1450 of Fig. 14, separate dominance bits are IND created for each channel; the dominance bit is only set for the channels that are subject to t trapping. As described above, for the dominant channel of the dominant object, and any Schannel that is shared by a neighbouring object, the dominance bit is not set. Fig. 21 shows trap colour determination for single channel pixel data 2120 that is obtained by colour conversion 2110 from the rendered bitmap 2100. The dominance bits 2130 are read in channel order, and for pixels and channels where the dominance bit is set, the neighbour pixel channel value read in step 2140 is selected for output 2170 in place of the active pixel channel value read in step 2150. A multiplexer 2160 controlled by the channel dominance bits 2130 accomplishes the selection.
The trapping operation also requires a running buffer. The search area for neighbour pixels is inverted in x and y with respect to the positioning of the test during dominance checking. This ensures that those pixels (with edge bits set) that caused the dominance bit to be set may be found within the search area. Fig. 22 shows the search area 2200 around the active pixel 2210, being the mirror image of the test area 1520 in Fig. An expanding search is performed around the active pixel to identify edge-marked pixels within the search area that are on the boundary of the region topologically connected to the active pixel (the "active region"). Fig. 23 shows, cross-hatched, the seven pixels (eg.
2300) meeting this definition for the current active pixel 2310.
A pixel in the neighbouring region is determined for each of the identified pixels. The reference point in each case is the top left corner of the active pixel. The cases for neighbour determination are: Identified pixel to left of reference point, Left edge bit set: Neighbour pixel is to the left of identified pixel; Identified pixel to right of reference point, Left edge bit set: Neighbour pixel is identified pixel; 707060 -24- Identified pixel above reference point, Top edge bit set 00 Neighbour pixel is above identified pixel; Identified pixel below reference point, Top edge bit set O Neighbour pixel is identified pixel; Fig. 24 shows the seven determined neighbour pixels (eg. 2400) obtained by applying the above rules to the seven identified pixels (eg. 2300) in Fig. 23, computed from the reference point 2420. Two (eg. 2410) are from the region above the active region and five (eg. 2400) are from the region to the right of the active region.
The trap operation takes the minimum of the submissive ink channel values over all the determined neighbour pixels as the value of the active pixel in that channel. (An average of the neighbouring ink channel values may be better, but requires a division). In the example of Fig. 24, this means a value from either of the two regions neighbouring the active region could be used.
Trapping using a single bit dominance map is also feasible, and is sufficient for a system where the max-per-channel approach is used for trapping. The max-per-channel scheme can be performed independently on each channel, so that the process shown in Fig. 21 may be used, with the channel dominance map 2130 replaced by the pixel dominance map, and the multiplexer 2160 replaced by a maximum finder.
The max-per-channel scheme does not choke against white, so it is preferable that the minimum unshared neighbour channel method should be used. In this case, for a single bit dominance map, the full colour ink channel data is required to perform the trap colour determination.
The output determination for a channel with a single bit dominance map using the minimum unshared neighbour scheme is shown in Fig. 25. The RGB bitmap 2500 is converted to the ink channel bitmap 2520 in step 2510. The original RGB value of the active pixel is read in step 2530 and its dominant channel determined in step 2540. The ink channel values of the active pixel and the neighbour pixel are read in steps 2550 and 2560 respectively. Step 2570 determines whether the channel is shared between the active pixel and its neighbour. The AND gate 2580 determines whether the channel is neither shared nor dominant, but is part of a dominant pixel (determined from the dominance bitmap 2585). The 707060 O-i multiplexer 2590 then performs selection between the channel values of the active and oo neighbour pixel in similar fashion to multiplexer 2160.
Determining whether a channel is dominant requires recomputation of the per-channel neutral density data that was previously computed (step 1425) when the dominance bitmap N 5 was created (step 1440). This repetition of neutral density computation is undesirable, so this variation is not preferred.
707060

Claims (2)

1. A method of trapping an image to be printed, said image being described by a page description comprising at least one graphical object, said method comprising the steps of:
rendering said page description to a rendered colour pixel bitmap while marking pixels at edges of said graphical objects to produce an edge bitmap;
for each marked pixel in said edge bitmap:
comparing the ink density of the corresponding pixel in the colour bitmap with the ink density of a neighbouring pixel indicated by the edge marking;
marking the corresponding pixel or the neighbour pixel as dominant depending on the result of the comparison; and
altering the colour of each pixel marked as dominant using the colour of at least one pixel in a neighbouring object.
2. A method of trapping an image to be printed substantially as described herein with reference to any one of the embodiments as that embodiment is illustrated in the drawings.

Dated this 18th day of April 2005
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
Spruson & Ferguson
AU2005201626A 2005-04-18 2005-04-18 Method and apparatus for trapping using in-render edge marking Abandoned AU2005201626A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2005201626A AU2005201626A1 (en) 2005-04-18 2005-04-18 Method and apparatus for trapping using in-render edge marking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2005201626A AU2005201626A1 (en) 2005-04-18 2005-04-18 Method and apparatus for trapping using in-render edge marking

Publications (1)

Publication Number Publication Date
AU2005201626A1 true AU2005201626A1 (en) 2006-11-02

Family

ID=37395657

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2005201626A Abandoned AU2005201626A1 (en) 2005-04-18 2005-04-18 Method and apparatus for trapping using in-render edge marking

Country Status (1)

Country Link
AU (1) AU2005201626A1 (en)

Similar Documents

Publication Publication Date Title
JP3545409B2 (en) How to add traps to print pages specified in page description language format
US7978196B2 (en) Efficient rendering of page descriptions
US9406159B2 (en) Print-ready document editing using intermediate format
US10586381B2 (en) Image processing apparatus
EP2096586A1 (en) Form making system, network system using same, and form making method
US7755629B2 (en) Method of rendering graphic objects
US8723884B2 (en) Scan converting a set of vector edges to a set of pixel aligned edges
US20050099642A1 (en) Trapping method, trapping apparatus, program, and printing system
US8638470B2 (en) Efficient banded hybrid rendering
US6795048B2 (en) Processing pixels of a digital image
US9390689B2 (en) Need-below processing across z-band boundaries
US9384427B2 (en) Systems and methods for optimizing pixel based raster trapping
AU2013200696A1 (en) Image processing apparatus, image processing method, and program
US7046403B1 (en) Image edge color computation
AU2005201626A1 (en) Method and apparatus for trapping using in-render edge marking
US11831839B1 (en) Image processing apparatus, method, and program product for printing based on PDL data in print job for printing monochrome representation of color data
US8537425B2 (en) Method for optimizing the search for trapping regions
CN107203354B (en) Image processing apparatus and control method thereof
KR20080076933A (en) Computer-implemented methods for transparent printing, computer readable media and systems for printing
CN110097147B (en) Method and system for setting primitive drawing attribute, computer equipment and storage medium
AU2005202861A1 (en) Method and apparatus for trapping using a pixel-sequential renderer
US20120200896A1 (en) Method for Optimizing the Search for Trapping Regions
AU2007226809A1 (en) Efficient rendering of page descriptions containing grouped layers
AU2006200899A1 (en) Efficient rendering of page descriptions
AU2005227419A1 (en) Method of clip-centric rendering

Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period