CN118214817A - Method, device, equipment, system and storage medium for layer processing - Google Patents
Method, device, equipment, system and storage medium for layer processing
- Publication number
- CN118214817A (application number CN202211629378.9A)
- Authority
- CN
- China
- Prior art keywords
- image data
- layer
- image
- data block
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/775—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television receiver
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
The embodiments of the present application provide a method, a device, equipment, a system and a storage medium for layer processing, which relate to the technical field of image processing. The method includes: performing editing operations on layers to obtain a target topology of a display image, where the target topology is formed by superimposing a plurality of layer topologies, the layer topologies correspond one-to-one to the layers of the display screen, and the target topology includes at least superposition information of the layer topologies; obtaining a plurality of image data blocks according to the superposition information; performing image processing on the image data of each of the plurality of image data blocks to obtain processed image data of each image data block; and outputting the display image according to the processed image data of each image data block. This scheme provides a layer processing method with good algorithm compatibility, thereby solving the technical problem of quickly adapting to various layer processing algorithms.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, a system, and a storage medium for layer processing.
Background
With the development of the video processing field and the advent of distributed IP systems, video transmission based on IP codec technology allows conventional video processing to be expanded without limit, yielding a large number of signal sources (i.e., layers). In the prior art, in order to accommodate access by multiple signal sources, processing modules must be added to scale the system linearly, and a line-buffer algorithm is used to process images across multiple layers. However, when multiple layers overlap, a line-buffer algorithm may suffer from poor algorithm compatibility and cannot be quickly adapted to various multi-layer image processing algorithms, resulting in poor display quality and high implementation cost.
Disclosure of Invention
The embodiment of the application provides a layer processing method, device, equipment, system and storage medium.
In a first aspect, an embodiment of the present application provides a method for layer processing. The method includes: performing editing operations on layers to obtain a target topology of a display image, where the target topology is formed by superimposing a plurality of layer topologies, the layer topologies correspond one-to-one to the layers of the display screen, and the target topology includes at least superposition information of the layer topologies; obtaining a plurality of image data blocks according to the superposition information; performing image processing on the image data of each of the plurality of image data blocks to obtain processed image data of each image data block; and outputting the display image according to the processed image data of each image data block.
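As a non-limiting illustration, the four steps of the first aspect can be sketched in Python. All names (`LayerTopology`, `compose_target_topology`, `split_into_blocks`) and the fixed-size tiling strategy are assumptions for illustration; the patent does not prescribe a concrete data structure or block size.

```python
# Hypothetical sketch of the first-aspect pipeline; names and the
# fixed-size tiling are illustrative assumptions, not from the patent.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LayerTopology:
    layer_id: int
    priority: int            # stacking order: higher is drawn on top
    origin: Tuple[int, int]  # start coordinates in the display image
    size: Tuple[int, int]    # (width, height)

def compose_target_topology(layers: List[LayerTopology]) -> List[LayerTopology]:
    # Step 1: the target topology is the set of layer topologies plus their
    # superposition information, encoded here simply as priority order.
    return sorted(layers, key=lambda l: l.priority)

def split_into_blocks(topology, block_w, block_h):
    # Step 2: derive image data blocks from the superposition information.
    # This naive version tiles each layer's rectangle into fixed-size blocks.
    blocks = []
    for layer in topology:
        x0, y0 = layer.origin
        w, h = layer.size
        for by in range(y0, y0 + h, block_h):
            for bx in range(x0, x0 + w, block_w):
                blocks.append((layer.layer_id, (bx, by),
                               (min(block_w, x0 + w - bx),
                                min(block_h, y0 + h - by))))
    return blocks

layers = [LayerTopology(0, 0, (0, 0), (64, 64)),
          LayerTopology(1, 1, (32, 32), (64, 64))]
blocks = split_into_blocks(compose_target_topology(layers), 32, 32)
# Each 64x64 layer rectangle splits into four 32x32 blocks, eight in total;
# steps 3 and 4 would then process each block and write it to the output.
```

Processing per block rather than per layer is what gives the serial, algorithm-agnostic structure described below.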
In the method provided by the embodiments of the present application, a target topology of the display image is obtained by performing editing operations on the layers. Because the target topology is formed by superimposing a plurality of layer topologies, superposition information of the layer topologies can be obtained, and a plurality of image data blocks are derived from that superposition information. Image processing is then performed on the image data of each of the image data blocks to obtain processed image data for each block. Since image data is processed in units of image data blocks, the method avoids the need to judge the data content of every layer that arises when image data is processed in units of layers. Finally, the display image is output according to the processed image data of each image data block. The method therefore offers good algorithm compatibility and solves the technical problem of quickly adapting to various kinds of layer processing.
In one possible implementation of the present application, the image data corresponding to one image data block belongs to the same video source and/or the same layer.
In one possible implementation of the present application, outputting a display image based on the processed image data of each image data block includes: the processed image data of each image data block is stored to the first memory such that the output interface reads the processed image data of each image data block from the first memory to output a display image. This may enable the output interface to read the processed image data from the first memory as an image that meets the requirements of the display device.
In one possible implementation of the present application, storing the processed image data of each image data block to the first memory includes: storing the processed image data of all the image data blocks in the first memory according to a target storage mode. In this way, the image data can be stored according to the position of each image data block in the target topology.
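A minimal sketch of such a target storage mode, assuming the first memory is a flat row-major frame buffer (the function and variable names are illustrative, not from the patent):

```python
# Illustrative "target storage mode": each processed block is written into a
# flat row-major frame buffer (the first memory) at the offset given by the
# block's position in the target topology. All names are assumptions.
def store_block(framebuffer, fb_width, block_pixels, origin, size):
    x0, y0 = origin
    w, h = size
    for row in range(h):
        dst = (y0 + row) * fb_width + x0
        framebuffer[dst:dst + w] = block_pixels[row * w:(row + 1) * w]
    return framebuffer

fb = [0] * (8 * 8)    # an 8x8 display image, one value per pixel
store_block(fb, 8, [1] * 4, (3, 3), (2, 2))  # a processed 2x2 block at (3, 3)
```

Once every block has been stored this way, the output interface can read the complete display image from the first memory.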
In one possible implementation of the present application, the overlay information includes layer priority information, layer start coordinate information, layer size information, and/or layer-corresponding video source information for each layer topology.
In one possible implementation of the present application, the target topology further includes output screen topology information including at least start coordinate information and size information of the output screen topology.
In one possible implementation of the present application, performing image processing on image data of each of a plurality of image data blocks to obtain processed image data of each image data block includes: and each time the image data of one image data block is determined, performing image processing on the image data of the image data block until the processed image data of all the image data blocks are obtained. Thus, serial processing can be realized, and system bandwidth and resource consumption of the image processing device can be reduced.
In one possible implementation of the present application, after determining the image data of an image data block and before performing image processing on it, the method provided by the embodiments of the present application further includes: when the original image data of any image data block is determined not to meet the requirements, processing that original image data into image data that meets the requirements, thereby obtaining the image data of that image data block; and when the original image data of any image data block is determined to meet the requirements, using that original image data directly as the image data of that image data block. In this way, original image data that does not meet the requirements can be processed into image data that does, improving the final display effect.
In one possible implementation manner of the present application, the method provided by the embodiment of the present application further includes: storing the processed image data of each image data block to a first memory, such that the output interface reads the processed image data of each image data block from the first memory to output a display image; each time the image data of one image data block is determined, image processing is performed on the image data of the image data block until processed image data of all the image data blocks are obtained, including: and storing the processed image data of the obtained image data blocks in the first memory every time the processed image data of one image data block is obtained until the processed image data of all the image data blocks are stored in the first memory.
In one possible implementation manner of the present application, the method provided by the embodiment of the present application further includes: obtaining dividing information, wherein the dividing information is used for dividing image data blocks of image data corresponding to the layer topological graph; obtaining a plurality of image data blocks according to the superposition information, including: a plurality of image data blocks is determined based on the division information and the superimposition information.
In one possible implementation of the present application, when at least two of the layer topologies included in the target topology have an overlapping area, either the original image data of at least two of the image data blocks overlap, where those blocks belong to different ones of the at least two layers; or the original image data of the image data blocks do not overlap at all. In the latter case, the overlapped area covered in two layers need not be read repeatedly, so the bandwidth and resources required by the image processing device can be reduced to the greatest extent.
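The non-overlapping alternative can be sketched by resolving occlusion before block division, so that each output pixel is owned by exactly one layer (the top-priority one) and occluded regions are never read twice. The per-pixel owner map and all names are assumptions for illustration only:

```python
# Sketch of the non-overlapping alternative: overlaps are resolved before
# block division so each output pixel is owned by exactly one layer and
# occluded regions are never fetched. Illustrative, not from the patent.
def visibility_map(width, height, layers):
    # layers: list of (layer_id, priority, (x0, y0), (w, h));
    # drawing in ascending priority leaves the top layer as owner.
    owner = [[None] * width for _ in range(height)]
    for layer_id, _, (x0, y0), (w, h) in sorted(layers, key=lambda l: l[1]):
        for y in range(y0, min(y0 + h, height)):
            for x in range(x0, min(x0 + w, width)):
                owner[y][x] = layer_id
    return owner

owners = visibility_map(4, 1, [(0, 0, (0, 0), (3, 1)),
                               (1, 1, (2, 0), (2, 1))])
# Layer 1 owns pixels 2-3 of the single row; layer 0 keeps pixels 0-1.
```

Runs of identical owners in such a map correspond directly to non-overlapping image data blocks.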
In one possible implementation of the present application, the division information is used to indicate that layer-consistency judgment is performed sequentially, in a preset order, on the image data of adjacent pixel points of the original image data corresponding to the target topology, and that an area formed by a plurality of consecutive pixel points belonging to the same layer is determined as one image data block; and/or the division information is used to indicate that video-source-consistency judgment is performed in the preset order on the image data of adjacent pixel points of the original image data corresponding to the target topology, and that an area formed by a plurality of consecutive pixel points belonging to the same video source is determined as one image data block. In this way, image data blocks formed by pixel points can be determined based on consistency judgment.
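The layer-consistency division rule can be sketched for a single scan line as follows; the run-length representation of the result is an illustrative assumption:

```python
# Sketch of the division rule: scan the pixels of one row in a preset order
# (left to right) and group consecutive pixels whose top-most layer is the
# same into one image data block. The input map is illustrative.
def divide_row_into_blocks(layer_of_pixel):
    # layer_of_pixel: list of layer ids, one per pixel in the row.
    # Returns (layer_id, start_index, length) runs.
    runs = []
    start = 0
    for i in range(1, len(layer_of_pixel) + 1):
        if i == len(layer_of_pixel) or layer_of_pixel[i] != layer_of_pixel[start]:
            runs.append((layer_of_pixel[start], start, i - start))
            start = i
    return runs

row = [0, 0, 0, 1, 1, 0]  # layer 1 overlaps the middle of layer 0
runs = divide_row_into_blocks(row)  # three blocks: (0,0,3), (1,3,2), (0,5,1)
```

The same scan with video-source ids instead of layer ids yields the video-source-consistency division.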
In one possible implementation manner of the present application, the partition information includes first information and second information, the first information is used for determining a layer to which the image data block belongs, and the second information is used for determining original image data corresponding to the image data block in the layer to which the image data block belongs; determining a plurality of image data blocks based on the division information and the superimposition information, including: reading the layer original image data of the layer to which each image data block belongs according to the first information of each image data block; and determining the original image data corresponding to the corresponding image data block from the layer original image data of the layer to which each image data block belongs according to the second information of each image data block. Based on the division information, the original image data of each image data block can thus be acquired.
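A minimal sketch of reading a block via the first and second information, assuming the first information is a layer identifier and the second information is a rectangle inside that layer (both formats are assumptions for illustration):

```python
# Illustrative reading of a block's original image data: the first
# information selects the layer buffer, and the second information selects
# the region inside that layer. All names and formats are assumptions.
def read_block(layer_buffers, first_info, second_info):
    buf = layer_buffers[first_info]   # layer to which the block belongs
    x0, y0, w, h = second_info        # region inside that layer
    return [row[x0:x0 + w] for row in buf[y0:y0 + h]]

layer_buffers = {5: [[1, 2, 3],
                     [4, 5, 6]]}     # layer 5's original image data
blk = read_block(layer_buffers, 5, (1, 0, 2, 2))  # [[2, 3], [5, 6]]
```

In the implementation below, `layer_buffers` would live in the second memory.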
In one possible implementation of the present application, reading the layer original image data of the layer to which each image data block belongs according to the first information of each image data block includes: reading that layer original image data from a second memory according to the first information of each image data block, where the first memory and the second memory are the same memory or different memories, and the second memory stores at least the layer original image data of the layer to which each image data block belongs. The layer original image data is the original video source data corresponding to the layer to which each image data block belongs; or it is image data corresponding to that layer that has undergone front-stage processing; or it is the image data corresponding to that layer that was first written into the second memory. When the first memory and the second memory are the same memory, the hardware resource consumption of the image processing apparatus can be reduced.
In one possible implementation manner of the present application, reading layer original image data of a layer to which each image data block belongs according to first information of each image data block includes: and reading the layer original image data of the layer to which the image data block belongs, which is determined by the first information of each image data block, according to a preset reading rule.
In one possible implementation of the present application, each time image data of one image data block is determined, image processing is performed on the image data of the image data block, including: and processing the image data of the image data block according to the image processing parameters of the image data block to obtain the processed image data of the image data block.
In one possible implementation of the present application, the image processing parameters include at least a scaling parameter. Each time the image data of an image data block is determined, and before that image data is processed according to the block's image processing parameters to obtain the processed image data, the method further includes: determining the scaling parameter of any image data block according to the display parameters of that image data block and the display parameters of the video source to which it belongs. In this way, each image data block can be processed according to its own scaling parameter.
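A hedged sketch of deriving the scaling parameter from the two sets of display parameters, assuming each set reduces to a width and height (the patent does not specify the parameter format):

```python
# Hedged sketch: the block's scaling parameter is the ratio between the
# block's display size and the native size of the corresponding region of
# its video source. The two-tuple format is an illustrative assumption.
def scaling_params(block_display_size, source_region_size):
    dw, dh = block_display_size
    sw, sh = source_region_size
    return (dw / sw, dh / sh)  # horizontal and vertical scale factors

sx, sy = scaling_params((1920, 1080), (960, 540))  # 2x enlargement both ways
```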
In one possible implementation of the application, the shape of the image data block is rectangular or non-rectangular.
It can be understood that the method for layer processing provided in the first aspect and its various possible implementations may be performed by an image processing apparatus or by a video processing apparatus. When performed by the image processing apparatus, it may be executed by a processing unit in the image processing apparatus; when performed by the video processing apparatus, it may be executed by an image processing unit in the video processing apparatus. This is not limited by the embodiments of the present application.
In a second aspect, an embodiment of the present application provides another method for layer processing, where the method includes: editing the layers to obtain a target topological graph of the display image, wherein the target topological graph is formed by superposing a plurality of layer topological graphs, the layer topological graphs correspond to the display screen layers one by one, and the target topological graph at least comprises superposition information of the layer topological graphs. Original image data of each of the plurality of image data blocks is read from the second memory based on the superimposition information. Each time the original image data of one image data block is read, the image data of the image data block is obtained from the original image data of one image data block. Image processing is performed every time image data of one image data block is obtained. The processed image data of each image data block is stored in the first memory until the processed image data of all image data blocks are stored in the first memory. And calling an output interface to read the processed image data of all the image data blocks from the first memory so as to output a display image.
In a third aspect, an embodiment of the present application provides an apparatus for layer processing, including: a layer stacking module configured to perform editing operations on layers to obtain a target topology of a display image, where the target topology is formed by superimposing a plurality of layer topologies, the layer topologies correspond one-to-one to the layers of the display screen, and the target topology includes at least superposition information of the layer topologies; a reading module configured to obtain a plurality of image data blocks according to the superposition information; a processing module configured to perform image processing on the image data of each of the plurality of image data blocks to obtain processed image data of each image data block; and an output module configured to output the display image according to the processed image data of each image data block.
It may be understood that the device for layer processing in the embodiment of the present application may be an image processing device, or may be a chip or a circuit applied to the image processing device, which is not limited in the embodiment of the present application.
In a possible implementation of the present application, the processing module is further configured to store the processed image data of each image data block to the first memory, so that the output module reads the processed image data of each image data block from the first memory to output the display image.
In a possible implementation manner of the present application, the processing module is further configured to perform image processing on the image data of the image data block every time the image data of one image data block is determined, until processed image data of all the image data blocks are obtained.
In one possible implementation manner of the present application, the processing module is further configured to process the original image data of any one of the image data blocks into the image data that meets the requirements when the determined original image data of any one of the image data blocks does not meet the requirements, obtain the image data of any one of the image data blocks, and use the original image data of any one of the image data blocks as the image data of any one of the image data blocks when the determined original image data of any one of the image data blocks meets the requirements.
In a possible implementation of the present application, the processing module is further configured to store the processed image data of the obtained image data block in the first memory every time the processed image data of one image data block is obtained, until the processed image data of all the image data blocks are stored in the first memory.
In one possible implementation manner of the present application, the layer stacking module is further configured to obtain partition information, where the partition information is used to perform image data block partition on image data corresponding to the layer topology. Correspondingly, the image layer stacking module is further used for determining a plurality of image data blocks according to the dividing information and the stacking information.
In one possible implementation manner of the present application, the processing module is further configured to process the image data of the image data block according to the image processing parameter of the image data block to obtain the processed image data of the image data block when determining the image data of one image data block.
In one possible implementation manner of the present application, the processing module is further configured to determine a scaling parameter of any image data block according to a display parameter of any image data block and a display parameter of a video source to which any image data block belongs.
In a fourth aspect, an embodiment of the present application provides a layer processing system, including: a codec device, an image processing device, and a display device. The codec device is connected to an image processing device, which is connected to a display device, for performing the method of layer processing described in the first aspect, as well as in various possible implementations of the first aspect. For example, the image processing device may be connected to the display device through an output interface.
In a fifth aspect, an embodiment of the present application provides a layer processing system, including: the video processing device is connected with the display device, the video processing device integrates a coding and decoding unit and an image processing unit, the coding and decoding unit is used for coding and decoding video source data received by an input interface of the video processing device, and the image processing unit is used for executing the method of layer processing described in the first aspect or the second aspect and various possible implementation manners of the first aspect or the second aspect.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium having stored therein a computer program or instructions which, when run on a computer, cause the computer to perform a method of layer processing as described in the first aspect or any one of the possible implementations of the first aspect or the second aspect.
In a seventh aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of layer processing described in the first aspect or any one of the possible implementations of the first aspect or the second aspect.
In an eighth aspect, embodiments of the present application provide a chip comprising a processor and a communication interface, the communication interface and the processor being coupled, the processor being configured to execute a computer program or instructions to implement the first aspect or any one of the possible implementations of the first aspect or a method of layer processing described in the second aspect. The communication interface is used for communicating with other modules outside the chip.
Drawings
FIG. 1 is a diagram of an architecture of an image processing system according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 3 is an internal block diagram of a processing unit according to the present application;
FIG. 4 is a flowchart illustrating a method for layer processing according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a target topology provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of another target topology provided by an embodiment of the present application;
FIG. 7A is a schematic diagram of image data block processing according to an embodiment of the present application;
FIG. 7B is a schematic diagram of image data block processing according to an embodiment of the present application;
FIG. 8 is a schematic diagram of image data block preprocessing according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an image scaling process according to an embodiment of the present application;
FIG. 10 is a flowchart illustrating another method for layer processing according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a device for layer processing according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Before describing the embodiments of the present application, the terms related to the present application are first defined as follows:
(1) Geometric transformation of an image refers to a transformation that changes the position of image content without changing its pixel values, so that the image changes in position, shape, or size as required. Transformations that change the position of an image include, for example, translation, rotation, mirroring, and scaling.
(2) Scaling refers to the process of reducing or enlarging an image. Reducing means making the image smaller, in the horizontal direction, in the vertical direction, or in both; enlarging means making the image larger, likewise in the horizontal direction, in the vertical direction, or in both.
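For illustration, nearest-neighbour scaling, one common way to reduce or enlarge in the horizontal and vertical directions independently, can be written as:

```python
# Illustrative nearest-neighbour scaling: each output pixel samples the
# nearest source pixel, so the two axes can be scaled independently.
def scale_nearest(img, new_w, new_h):
    old_h, old_w = len(img), len(img[0])
    return [[img[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)] for y in range(new_h)]

img = [[1, 2], [3, 4]]
enlarged = scale_nearest(img, 4, 2)  # enlarge horizontally only
```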
(3) Image enhancement refers to processing degraded image features, such as edges, contours, and contrast, to improve the visual effect of the image, increase its sharpness, or transform it into a form better suited to analysis by humans or computers.
(4) Image compression refers to reducing the amount of data required to represent an image to save the size of space for storing image data and the image data transfer time.
(5) Image restoration refers to removing noise in an image, thereby restoring image data of an original image.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
In the field of image technology, images frequently need to be processed, for example by geometric transformation, enhancement, compression, or restoration. Because bandwidth and hardware processing speed vary during image processing, an image processing system in the prior art generally consists of a processing device and a display device, with a processing module and a storage module arranged in the processing device. The processing module acquires an image to be processed and transmits the processed image to the storage module; after the storage module acquires the processed image, it transmits the image to the display device, which displays it. Such an image processing system may be applied to image processing based on a single layer as well as to image processing based on multiple layers.
With the development of the image technology field and the advent of distributed Internet Protocol (IP) systems, image transmission based on IP codec technology can expand traditional image processing without a fixed capacity limit and acquire the signal sources of multiple images from the cloud, so that an image formed by superimposing multiple signal sources (i.e., layers) can be realized. In the prior art, in order to accommodate the access of multiple signal sources, the processing module in the processing device may process the images of multiple layers using a line-based computing method. However, in an image formed by stacking multiple layers, the stacking may make lines discontinuous, so a line-buffering algorithm is used to process the image according to the multiple layers. When multiple layers overlap, however, a line-buffering algorithm may have poor compatibility and cannot be quickly adapted to various multi-layer image processing algorithms, resulting in a poor display effect and high implementation cost.
Therefore, the present application provides a layer processing method, device, equipment, system, and storage medium, which offer a layer processing method with good algorithm compatibility, thereby solving the technical problem of quickly adapting to various kinds of layer processing.
In one aspect, as shown in fig. 1, fig. 1 is an architecture diagram of an image processing system according to an embodiment of the present application. As shown in fig. 1, the system includes:
The codec device 100 and the image processing device 200 may be a physical device having a signal codec function and a physical device having an image processing function, respectively, wherein the codec device 100 and the image processing device 200 are connected through a physical interface.
The physical interface may be a communication interface of a high-speed serial bus, or a video transmission interface, for example. The communication interface of the high-speed serial bus has the characteristics of high speed, low noise and low power consumption.
Such as: the communication interface of the high-speed serial bus may be a SerDes (Serializer/Deserializer) communication interface or an LVDS (Low Voltage Differential Signaling) communication interface.
Such as: the video transmission interface may be an HDMI interface (High Definition Multimedia Interface, high-definition multimedia interface), or a DVI interface (Digital Visual Interface).
Specifically, the codec device 100 is configured to obtain the original image data of the video source and/or the layer from the cloud, and send the original image data of the video source and/or the layer to the image processing device 200 for storage and processing by the image processing device 200.
As an example, the codec device 100 may be an IP signal codec device for implementing video IP processing, typically implemented by a dedicated IC. As another example, the video processing device may also obtain video sources of a plurality of video source devices directly through an input interface of the video processing device.
Alternatively, the codec device 100 may decode the encoded original image data of the video source and/or layer acquired from the cloud, and send the decoded original image data of the video source and/or layer to the image processing device 200. In this way, the efficiency of data transmission can be improved. Illustratively, the codec device 100 decodes the encoded original image data of the video source and/or layer acquired from the cloud using a video compression algorithm.
It should be explained that a video compression algorithm uses compression technology to remove redundant information from digital video, reducing the storage required to represent the original video so as to facilitate the transmission and storage of video data. Common video compression algorithms include, for example, H.264 and H.265.
Optionally, the image processing apparatus 200 further has a display screen for displaying the display image processed by the image processing apparatus 200. The image processing apparatus 200 may be, for example, a mobile phone, a tablet, or the like having an image processing function.
Optionally, the system as shown in FIG. 1 may also include a display device 300. The display device 300 is connected to the image processing device 200.
The image processing apparatus 200 is for providing the processed image data to the display apparatus 300 for display by the display apparatus 300, for example.
Wherein the display device 300 and the image processing device 200 may be connected through an output interface. The output interface may be a high definition multimedia interface (High Definition Multimedia Interface, HDMI), or a Video graphics array (Video GRAPHICS ARRAY, VGA) interface, for example.
As shown in fig. 2, fig. 2 is a schematic structural diagram of an image processing apparatus 200 according to an embodiment of the present application, where the image processing apparatus 200 includes: a processing unit 201.
Optionally, the image processing apparatus 200 may further include a storage unit 202.
Wherein the processing unit 201 is connected to the storage unit 202.
When the input and output frame rates of the image processing apparatus 200 are not identical, the storage unit 202 is required to buffer or discard data in order to ensure that the input and output video achieve smooth frame rate conversion. The storage unit 202 is used for storing the original image data of the video source and/or layer provided by the codec device 100.
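The frame-rate conversion role of the storage unit can be sketched as a frame buffer from which frames are repeated or dropped (the function name and the index arithmetic are illustrative assumptions, not the embodiment's specified mechanism):

```python
def convert_frame_rate(frames, in_fps, out_fps):
    """Repeat (out_fps > in_fps) or drop (out_fps < in_fps) buffered frames
    so that input captured at in_fps plays back smoothly at out_fps."""
    n_out = len(frames) * out_fps // in_fps
    # Each output frame re-reads the buffered input frame nearest in time.
    return [frames[i * in_fps // out_fps] for i in range(n_out)]

convert_frame_rate([0, 1, 2, 3], 60, 30)  # drops every other frame
convert_frame_rate([0, 1], 30, 60)        # repeats each frame once
```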
As an example, the storage unit 202 includes a first memory, or the storage unit 202 includes a first memory and a second memory. Optionally, the memory according to the embodiment of the present application is a double data rate memory (DDR memory).
It should be understood that, in fig. 2, the storage unit 202 may be a storage unit inside the image processing apparatus 200, or may be a storage device externally connected to the image processing apparatus 200, which is not limited by the embodiment of the present application.
In the case where the storage unit 202 includes the first memory, referring to fig. 2: the first memory may be used for storing both raw image data and processed image data from the processing unit 201.
In this case, the processing unit 201 is configured to read the raw image data to be processed from the first memory and perform corresponding image processing on it to obtain processed image data; the processing unit 201 then stores the processed image data in the first memory. Alternatively, in the architecture shown in fig. 2, the first memory may be divided into a plurality of storage areas, one for storing raw image data and another for storing processed image data, so that partitioned storage of data can be realized. Of course, in the embodiment of the present application, the processed image data of each image data block may also be stored at the same location in the first memory as its original image data.
In this case, the processing unit 201 is also connected to an output interface of the image processing apparatus 200, so that the image processing apparatus 200 calls the output interface to read the processed image data from the first memory and supply it to the display apparatus 300. By handling communication over a high-speed serial link, the resource consumption between the image processing apparatus 200 and the display apparatus 300 can be reduced.
In the case where the storage unit 202 includes a first memory and a second memory, refer to fig. 2: the first memory is used to receive the processed image data from the processing unit 201 and store it. The second memory is used for storing the original image data.
In this case, the processing unit 201 is configured to read the raw image data to be processed from the second memory and perform corresponding image processing on it to obtain processed image data; the processing unit 201 then stores the processed image data in the first memory. Optionally, the first memory is further connected to an output interface of the image processing apparatus 200, so that the image processing apparatus 200 invokes the output interface to read the processed image data from the first memory and provide it to the display apparatus 300 for display.
The processing unit 201 is a unit for image processing, such as a field-programmable gate array (FPGA) or a video processing IC, which can realize functions such as capturing an image, processing an image, and outputting an image.
As an example, in the internal structural diagram of the processing unit 201 shown in fig. 3, the processing unit 201 in the embodiment of the present application includes: a layer stacking module 2011, a read control module 2012, a control processing module 2013, a data reading module 2014, and an image processing module 2016.
It will be appreciated that, alternatively, the read control module 2012 and the data reading module 2014 may be considered as a whole, i.e., a reading module having the functions of both the read control module 2012 and the data reading module 2014. The control processing module 2013 and the image processing module 2016 may likewise be considered as a whole, i.e., a processing module.
The layer stacking module 2011 is connected to the read control module 2012 and the control processing module 2013, respectively. The read control module 2012 is connected to the control processing module 2013 and the data reading module 2014, respectively; the control processing module 2013 is connected to the image processing module 2016; the data reading module 2014 is connected to the storage unit 202 and the image processing module 2016, respectively; the image processing module 2016 is connected to the storage unit 202.
As an example, the layer stacking module 2011 is configured to obtain a target topology of the display image from a user's editing operations on the layers, and to determine the superimposition information and partition information in the target topology. The read control module 2012 is configured to control the data reading module 2014 to obtain the corresponding image data content. The data reading module 2014 is configured to obtain the raw image data of each image data block from the storage unit 202.
The control processing module 2013 is configured to control the image processing module 2016 to perform image processing on the original image data of the image data block read by the data reading module 2014.
In one possible implementation of the present application, as shown in fig. 3, the processing unit 201 may further include a preprocessing module 2015, wherein the data reading module 2014 and the image processing module 2016 are connected through the preprocessing module 2015, that is, an output of the data reading module 2014 is connected to an input of the preprocessing module 2015, and an output of the preprocessing module 2015 is connected to the image processing module 2016.
The preprocessing module 2015 is configured to obtain the image data of any image data block from the original image data of that block read by the data reading module 2014. For example, when the original image data of an image data block read by the data reading module 2014 does not meet the requirements, the preprocessing module 2015 processes it into image data of the block that meets the requirements; when the original image data of the block already meets the requirements, the preprocessing module 2015 passes it through as the image data of the block.
It may be appreciated that, in the embodiment of the present application, when the architecture shown in fig. 3 does not include the preprocessing module 2015, the image processing module 2016 performs image processing on the original image data of the image data block read by the data reading module 2014 as the image data of that block. When the architecture includes the preprocessing module 2015, the image processing module 2016 processes the image data of the image data block output by the preprocessing module 2015.
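The pass-through-or-convert behaviour of the preprocessing module can be sketched as follows (a minimal sketch; `meets_requirements` and `convert` are hypothetical stand-ins for the format check and conversion the module would actually apply):

```python
def preprocess(raw_block, meets_requirements, convert):
    """Pass a conforming block through unchanged; otherwise convert it first.

    meets_requirements and convert are hypothetical stand-ins for the
    preprocessing module's format check and conversion step."""
    return raw_block if meets_requirements(raw_block) else convert(raw_block)
```

For example, a block already in the required pixel format would be passed through unchanged, while a non-conforming block would be converted before reaching the image processing module.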
Optionally, the image processing apparatus 200 in this embodiment of the present application may further have an output module, where the image processing module 2016 is further connected to the output module, and the output module is configured to output a display image according to the processed image data of each of the image data blocks.
It should be noted that, in the case of a hardware circuit implementation, the storage unit 202 may be a memory. The processing unit 201 may be a processor. The output module may be an output interface.
In another aspect, an embodiment of the present application provides an architecture of an image processing system, including: a video processing apparatus.
The video processing device has a plurality of input interfaces, and the plurality of input interfaces are connected to a plurality of video source devices, and are used for receiving video sources provided by the video source devices and storing the video sources in a memory of the video processing device, where the memory of the video processing device may be an internal memory of the video processing device or an external memory connected to the video processing device, and the embodiment of the application is not limited thereto. As an example, the video processing device may convert a video source provided by the video source device and store the converted video source in a memory of the video processing device.
Optionally, the image processing system may further include a display device connected to the video processing device, where the display device is configured to display the image processed by the video processing device. Of course the video processing device may also have a display.
As an example, the video processing apparatus integrates a codec unit and an image processing unit, where the codec unit is configured to perform codec processing on video source data received by an input interface of the video processing apparatus, and the function of the image processing unit is the same as that of the image processing apparatus 200 described above, which is not described herein again. As an example, the codec unit and the image processing unit may be a module having a signal codec function and a module having an image processing function integrated in one video processing apparatus, and the two modules are connected through a board wiring inside the video processing apparatus.
Illustratively, in response to a user performing an editing operation on a layer to obtain a configuration of a target topology, the video processing apparatus invokes image data of the layer in a video source stored in the video processing apparatus, and performs image processing on the image data.
In the embodiment of the present application, the specific structure of the execution body of the layer processing method is not particularly limited, as long as it can run a program recording the code of the layer processing method of the embodiment of the present application. For example, the execution body of a layer processing method provided in an embodiment of the present application may be a functional module in an image processing apparatus or a video processing apparatus that can call and execute a program, or a device, such as a chip, applied to the image processing apparatus or the video processing apparatus. The application is not limited in this regard. The following embodiments are described taking an image processing apparatus as the execution body of the layer processing method.
As shown in fig. 4, fig. 4 is a flowchart of a method for layer processing according to an embodiment of the present application, where the method includes:
Step 410, the image processing device obtains a target topology of the display image according to editing operations performed on the layers.
The target topology corresponds to the plurality of layers of the display image and at least includes the superimposition information of the plurality of layer topologies.
The target topology is a preview formed by superimposing a plurality of layer topologies through editing operations, displayed on an operation interface of a computer controlling the image processing apparatus, or on a display panel of an image processing apparatus having a display function; the present application is not limited in this respect.
A layer topology is a preview of the processing result of a layer, i.e., a preview of the layer's image data after the processing specified by the editing operations.
It may be understood that the target topology map may be obtained by performing an editing operation on the layer topology map by a user, or may be obtained according to a preset template, or may be obtained by performing automatic editing on the layer topology map by an image processing device, which is not particularly limited in the embodiment of the present application.
The layer topologies are in one-to-one correspondence with the layers of the display image. A display layer is the on-screen display image output after the image data corresponding to the layer topologies is processed according to the user's editing operations on the plurality of layer topologies.
The superposition information of the multiple layer topological graphs comprises the superposition position of the layer topological graphs, the data content displayed by any layer topological graph after superposition, and the like.
It will be appreciated that the size of the target topology in the display image may be the same as or smaller than the size of the display image.
Step 420, the image processing device obtains a plurality of image data blocks according to the superimposition information.
It will be appreciated that a layer topology may contain one image data block or a plurality of image data blocks.
As an example, refer to the schematic diagram of a target topology shown in fig. 5, which includes layer topologies 1 to 4. In the case that layer topologies 1 to 4 do not overlap with each other, the image processing apparatus may obtain, from layer topologies 1 to 4, the one image data block contained in each layer topology. Specifically, layer topology 1 contains image data block A; layer topology 2 contains image data block B; layer topology 3 contains image data block C; layer topology 4 contains image data block D.
As another example, refer to the schematic diagram of another target topology shown in fig. 6, which includes layer topologies 1 to 4 overlapping each other. Specifically, when layer topologies 2 to 4 are overlaid on layer topology 1, as shown in fig. 6, layer topology 1 contains image data block A, image data block B, image data block F, image data block I, and image data block K; layer topology 2 contains image data block C, image data block D, image data block E, and image data block J; layer topology 3 contains image data block G; layer topology 4 contains image data block H.
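The way an overlaid layer topology divides the layer beneath it into image data blocks can be sketched as a rectangle subtraction (a minimal sketch under the assumption that layers and blocks are axis-aligned rectangles; the function name and the four-strip split are illustrative, not the embodiment's specified partition):

```python
def subtract_rect(base, top):
    """Split rectangle `base` into the sub-rectangles not covered by `top`.

    Rectangles are (x, y, w, h); returns up to four non-overlapping pieces,
    analogous to how an overlaid layer topology divides the one beneath it
    into image data blocks."""
    bx, by, bw, bh = base
    tx, ty, tw, th = top
    # Clip `top` to `base`.
    x1, y1 = max(bx, tx), max(by, ty)
    x2, y2 = min(bx + bw, tx + tw), min(by + bh, ty + th)
    if x1 >= x2 or y1 >= y2:          # no overlap: base stays one block
        return [base]
    pieces = []
    if y1 > by:                        # strip above the overlap
        pieces.append((bx, by, bw, y1 - by))
    if y2 < by + bh:                   # strip below the overlap
        pieces.append((bx, y2, bw, by + bh - y2))
    if x1 > bx:                        # strip left of the overlap
        pieces.append((bx, y1, x1 - bx, y2 - y1))
    if x2 < bx + bw:                   # strip right of the overlap
        pieces.append((x2, y1, bx + bw - x2, y2 - y1))
    return pieces
```

Applying this repeatedly for each overlaid topology yields a set of non-overlapping image data blocks such as blocks A, B, F, I, and K in fig. 6.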
Step 430, the image processing device performs image processing on the image data of each of the plurality of image data blocks, to obtain processed image data of each of the image data blocks.
Specifically, the image processing performed by the image processing device on the image data of each image data block may include: geometric transformation processing, such as scaling, translation, rotation, or mirroring, of the image data of the block; and image compression, image enhancement, and image restoration of the image data.
Step 440, the image processing apparatus outputs a display image based on the processed image data of each image data block.
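Steps 410 to 440 above can be sketched as a single pipeline (every callable here is a hypothetical stand-in for the corresponding stage of the method, not the embodiment's actual implementation):

```python
def process_layers(edit_ops, get_topology, split_blocks, process_block, output):
    """Steps 410-440 as one pipeline; all callables are hypothetical stand-ins."""
    topology = get_topology(edit_ops)                # step 410: target topology
    blocks = split_blocks(topology)                  # step 420: image data blocks
    processed = [process_block(b) for b in blocks]   # step 430: per-block processing
    return output(processed)                         # step 440: display image
```

The key structural point is that step 430 operates block by block, not line by line, which is what gives the method its algorithm compatibility.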
The embodiment of the application provides a layer processing method in which the image processing device obtains a target topology of the display image according to editing operations on the layers; since the target topology is formed by superimposing a plurality of layer topologies, the superimposition information of the plurality of layer topologies can be obtained. The image processing device then obtains a plurality of image data blocks according to the superimposition information, and performs image processing on the image data of each of the plurality of image data blocks to obtain the processed image data of each image data block.
In a possible embodiment of the application, the image data corresponding to one image data block belongs to the same video source and/or the same layer. In other words, the image data corresponding to one image data block belongs to one video source. For example, the image data corresponding to image data block A belongs to video source 1; that is, the image data corresponding to image data block A is image data in a certain layer of video source 1. Or the image data corresponding to one image data block belongs to the same layer, i.e., the image data corresponding to image data block A is all image data in layer A. Or, if video source 1 is configured with only one layer A, the image data corresponding to one image data block belongs to the same video source and the same layer.
It will be appreciated that the layers include active layers but may also include passive layers. It should be explained that an active layer has a corresponding video source; in other words, one video source may be configured with a plurality of active layers, which correspond to the same video source. A passive layer has no corresponding video source, so the image processing device cannot acquire the image data of a passive layer through a signal source in a video source.
In the case where the layers are active layers, the image data of one image data block originates from the same active layer in one video source. Specifically, the video source and the active layer may be data from the cloud, or may be image data transmitted by a video source device connected to an input interface of the video processing device. When the image processing device acquires the active layer a, the image data of the corresponding active layer a can be acquired through the video source 1 corresponding to the active layer a.
In the case where the layers are passive layers, the image data of one image data block belongs to the same passive layer. Specifically, since a passive layer has no corresponding video source, the image processing apparatus cannot acquire its image data through a video source. Thus, when the image data of an image data block originates from a passive layer, the image processing apparatus, upon recognizing that the layer corresponding to the image data block has no video source, does not process the image data block, and does not display the portion of the passive layer when outputting the display image. Alternatively, when outputting the image, the image processing apparatus replaces the portion of the passive layer with another image, for example a solid-color image or a preset image.
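The active/passive distinction above can be sketched as follows (the names `PRESET_FILL`, `render_block`, and `fetch` are hypothetical; representing a passive layer by a `None` video source is an assumption for illustration):

```python
PRESET_FILL = "solid-color"  # hypothetical preset substitute for passive layers

def render_block(video_source, fetch):
    """Fetch a block's image data from its video source; a passive layer
    (video_source is None) is replaced by a preset solid-color image."""
    if video_source is None:       # passive layer: no video source to read
        return PRESET_FILL
    return fetch(video_source)     # active layer: read from its video source
```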
In one possible implementation of the present application, the above step 440 may be implemented as follows: the image processing apparatus stores the processed image data of each image data block in the first memory, so that the output interface reads the processed image data from the first memory and outputs the display image.
Because the output interface reads the processed image data from the first memory, it is ensured that, even when the frame rates of the image processing device and the display device are not identical, the processed image data read from the first memory by the output interface is an image meeting the requirements of the display device.
It will be appreciated that the output interface in the embodiments of the present application may be connected to a processing unit in the image processing apparatus, so that the image processing apparatus may call the output interface to read the processed image data from the first memory.
For example, when image processing is performed on the image data of each of the plurality of image data blocks, the processed image data of each image data block is stored in the first memory; the image processing apparatus reads the processed image data from the first memory by calling the output interface, so as to supply it to the display apparatus.
It will be appreciated that the processed image data of all image data blocks is stored in the first memory by the image processing device in accordance with the location of each image data block in the target topology.
In a possible embodiment of the application, the storing of the processed image data of all image data blocks in the first memory may also be realized by: the image processing apparatus stores the processed image data of all the image data blocks in the first memory in the target storage mode.
The target storage mode specifies the position of each image data block in the target topology and the processed image data of that block. For example, when the target storage mode is the display mode of the target topology, the target storage mode specifies information such as the position and size at which each image data block is displayed, according to the target topology and the superimposition information in it, so that the processed image data of each image data block can be stored in the first memory according to the target storage mode.
Through the target storage mode, the image processing apparatus can store the processed image data of each image data block in accordance with that block's position in the target topology.
In the case where the target storage mode is the sequential storage or the random storage, the image processing apparatus sequentially stores or randomly stores the processed image data of all the image data blocks in the first memory in the processing order of each image data block.
As an example, referring to the schematic diagram of image data block processing shown in fig. 7A in combination with fig. 5: in the target topology shown in fig. 5, image data block A is at the upper-left position (position 1), image data block B at the upper-right position (position 2), image data block C at the lower-left position (position 3), and image data block D at the lower-right position (position 4). After performing image processing on the image data of image data block A in the period T1, the image processing apparatus places image data block A at the upper-left position (position 1) according to its position in the target topology shown in fig. 5; after processing the image data of image data block B in the period T2, it places image data block B at the upper-right position (position 2) according to its position in the target topology; after processing the image data of image data block C in the period T3, it places image data block C at the lower-left position (position 3) according to its position in the target topology; and after processing the image data of image data block D in the period T4, it places image data block D at the lower-right position (position 4) according to its position in the target topology.
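Storage according to the target storage mode can be sketched as follows (a minimal sketch; the addressing map `position_of` and the `"processed-X"` payloads are hypothetical illustrations of storing each block at the address dictated by its topology position):

```python
def store_by_topology(first_memory, block_id, data, position_of):
    """Store a processed block at the address dictated by its position in
    the target topology; position_of is a hypothetical addressing map."""
    first_memory[position_of[block_id]] = data

mem = {}
position_of = {"A": 1, "B": 2, "C": 3, "D": 4}  # fig. 5: positions 1-4
for block in ["A", "B", "C", "D"]:              # processed over periods T1-T4
    store_by_topology(mem, block, "processed-" + block, position_of)
```

Regardless of the order in which blocks finish processing, each one lands at the address its topology position dictates.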
In a possible embodiment of the application, the storing of the processed image data of all image data blocks in the first memory may also be realized by: the image processing apparatus may also store the processed image data of all the image data blocks in the first memory sequentially or randomly in the processing order of each image data block.
For example, if the processing order of image data blocks A to D is image data block A → image data block C → image data block D → image data block B, then, as shown in fig. 7B, the image processing apparatus first processes the image data of image data block A in the period T1 and may store the processed image data of image data block A in storage location 1 of the first memory. In the period T2, having processed the image data of image data block C, it may store the processed image data of image data block C in storage location 2 of the first memory. Next, in the period T3, having processed the image data of image data block D, it may store the processed image data of image data block D in storage location 3 of the first memory; and in the period T4, having processed the image data of image data block B, it may store the processed image data of image data block B in storage location 4 of the first memory. The storage addresses of storage locations 1, 2, 3, and 4 may be contiguous or non-contiguous.
In one possible embodiment of the present application, the overlay information includes layer priority information, layer start coordinate information, layer size information, and/or layer-corresponding video source information for each layer topology.
The layer priority information of each layer topology map comprises the stacking order of the layer topology map relative to other layer topology maps.
For example, in the schematic diagram of the target topology shown in fig. 5, the layer topologies do not overlap with each other, so the layer priority information of layer topologies 1 to 4 is the same.
For another example, in the schematic diagram of another target topology shown in fig. 6, layer topologies 2 to 4 are overlaid on layer topology 1, layer topologies 2 and 3 do not overlap each other, and layer topology 4 is overlaid on layer topology 2. Therefore, the priorities of layer topologies 2 to 4 are higher than that of layer topology 1, layer topologies 2 and 3 have the same priority, and the priority of layer topology 4 is higher than that of layer topology 2.
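The stacking order expressed by the layer priority information can be sketched as a topmost-wins lookup (a minimal sketch; representing each layer as an integer priority plus an axis-aligned rectangle is an assumption for illustration):

```python
def visible_at(layers, px, py):
    """Return the id of the topmost layer covering point (px, py), or None.

    layers maps id -> (priority, (x, y, w, h)); a higher priority is drawn
    on top, matching the stacking order of the layer topologies."""
    best, best_prio = None, -1
    for lid, (prio, (x, y, w, h)) in layers.items():
        if x <= px < x + w and y <= py < y + h and prio > best_prio:
            best, best_prio = lid, prio
    return best
```

With layer 1 at the lowest priority and layer 4 above layer 2, a point covered by all three resolves to layer 4, as in fig. 6.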
The layer starting coordinate information includes a starting coordinate of the layer topological graph in the display image, and the starting coordinate may be an upper left corner, an upper right corner, a lower left corner, or a lower right corner of each layer topological graph in the target topological graph.
For example, in the schematic diagram of a target topology map shown in FIG. 5, the starting coordinates of layer topology map 1 are (0, 2); the starting coordinates of layer topology map 2 are (4, 2); the starting coordinates of layer topology map 3 are (0, 0); and the starting coordinates of layer topology map 4 are (4, 0). It should be noted that the starting coordinates of a layer topology map may change according to the size information of other layers. For example, if the starting coordinate is the upper left corner of each layer topology map, the sizes of layer topology map X and layer topology map Y are both 100×100, and the stitching manner aligns and stitches layer topology map X and layer topology map Y horizontally from left to right, then the starting coordinates of layer topology map X are (0, 0) and the starting coordinates of layer topology map Y are (100, 0).
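The left-to-right stitching rule in the example above can be sketched in Python (illustrative names; assumes the starting coordinate is the upper left corner of each layer topology map):

```python
def horizontal_stitch_coords(sizes):
    """Given the (width, height) of each layer topology map stitched
    left to right, return the upper-left starting coordinate of each."""
    coords, x = [], 0
    for width, _height in sizes:
        coords.append((x, 0))
        x += width  # the next map starts where this one ends
    return coords

# Two 100x100 maps X and Y stitched horizontally:
coords = horizontal_stitch_coords([(100, 100), (100, 100)])
# X starts at (0, 0) and Y at (100, 0), as in the example above.
```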
For another example, as shown in fig. 6, the starting coordinates of layer topology map 1 are (0, 0); the starting coordinates of layer topology map 2 are (1, 1); the starting coordinates of layer topology map 3 are (6, 3); and the starting coordinates of layer topology map 4 are (2, 2).
The layer size information includes the expected post-processing size of the layer corresponding to each layer topology map. Specifically, the expected post-processing size of the layer corresponding to each layer topology map may be the size information of the corresponding video source, or may be custom size information of the expected processed layer set by the user.
Wherein, when the layer is an active layer, the superimposition information further includes information of a video source corresponding to the layer. The image processing device can acquire the original layer data corresponding to the layer from the video source according to the video source information.
In one possible embodiment of the application, the target topology further comprises output screen topology information comprising at least start coordinate information and size information of the output screen topology.
The output screen topology map may be composed of at least one output interface. When there is only one output interface, the starting coordinate information and size information of the output screen topology map are identical to those of the output interface. When the output screen topology map is formed by splicing a plurality of output interfaces, the size information of the output screen topology map is determined according to the splicing manner and the size information of each output interface. The size information of the output screen topology map may be in units of pixels or in units of physical distance, such as millimeters or centimeters.
The starting coordinate information of the output screen topology map may be (0, 0), any coordinate position specified by the user, or a coordinate position preset inside the device, for example, the upper left corner, upper right corner, lower left corner, or lower right corner, which is not limited herein.
For example, when the output screen topology map is composed of one output interface and only one layer topology map G is configured, the size information of the edited layer topology map G is the same as that of the output screen topology map, for example, 800×800. When the upper left corner is selected as the starting coordinate, the starting coordinate (0, 0) of layer topology map G may be used as the starting coordinate of the output screen topology map, and the size 800×800 of layer topology map G may be used as the size information of the output screen topology map. That is, when the output screen topology map is composed of one output interface and the edited layer topology map completely overlaps the output screen topology map, the starting coordinate and size information of the layer topology map may replace those of the output screen topology map. Alternatively, when the output screen topology map is composed of one output interface and a plurality of layer topology maps are configured and overlapped, suppose layer topology map G has the highest display priority and its edited size information is the same as that of the output screen topology map, for example, 800×800. When the upper left corner is selected as the starting coordinate, the starting coordinate (0, 0) of layer topology map G may be used as the starting coordinate of the output screen topology map, and the size 800×800 of layer topology map G may be used as the size information of the output screen topology map. That is, when the output screen topology map is composed of one output interface and the edited highest-priority layer topology map completely overlaps the output screen topology map, the starting coordinate and size information of that layer topology map may replace those of the output screen topology map.
For example, as shown in fig. 6, when the output screen topology map is composed of four output interfaces and layer topology maps 1 to 4 are configured, the relative position of each layer topology map, that is, the starting coordinate of each layer topology map on the output screen topology map, needs to be determined from the output screen topology map.
In one possible embodiment of the present application, the step 430 includes: each time the image processing device determines the image data of one image data block, it performs image processing on the image data of that image data block, until the processed image data corresponding to all the image data blocks is obtained.
Because the image processing apparatus performs image processing on the image data of each image data block as soon as it is determined, serial processing can be realized. A serial processing mode, such as time-division multiplexing, can reduce the system bandwidth and resource consumption of the image processing device.
For example, the plurality of image data blocks include at least an image data block a and an image data block B, the image processing apparatus reads the original image data a11 of the image data block a from the first memory, performs image processing on the image data a11 to obtain processed image data a11', and stores the image data a11', for example, in the first memory or the second memory; then, the image processing apparatus reads the original image data B11 of the image data block B from the first memory, performs image processing on the image data B11 to obtain processed image data B11', and stores the image data B11' until the image processing apparatus obtains processed image data of all the image data blocks.
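The serial read-process-store loop described above can be sketched as follows (a hypothetical illustration; modeling the first memory as a Python dictionary and image processing as a callback are assumptions):

```python
def process_blocks_serially(first_memory, block_ids, process):
    """Serially: read each block's original image data from the first
    memory, process it, and store the processed data, one block at a time."""
    for block_id in block_ids:
        raw = first_memory[block_id]                  # read original data
        first_memory[block_id + "'"] = process(raw)   # store processed data

mem = {"A11": [1, 2], "B11": [3, 4]}
process_blocks_serially(mem, ["A11", "B11"], lambda d: [x * 2 for x in d])
# mem now also holds the processed data under "A11'" and "B11'".
```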
Since the image processing apparatus reads the original image data of an image data block from the first memory, in practice the read original image data may not meet the requirements; for example, its shape or size may not meet expectations. In that case, the display effect of the image data on the display device may be poor. Therefore, to improve the final display effect, the image processing device in the embodiment of the present application may process read original image data that does not meet the requirements into image data that does, and then perform image processing. Accordingly, in one possible embodiment of the present application, each time the image processing device determines the image data of one image data block, it performs image processing on that image data until the processed image data corresponding to all image data blocks is obtained, and the method provided by the embodiment of the present application further includes the following two cases:
In case 1, when the determined original image data of any image data block does not meet the requirements, the image processing apparatus processes that original image data into image data that meets the requirements, thereby obtaining the image data of that image data block.
In case 2, when the determined original image data of any image data block meets the requirements, the image processing apparatus uses that original image data directly as the image data of that image data block.
It is understood that when the image processing apparatus adopts the architecture shown in fig. 3, case 1 and case 2 may be performed by the preprocessing module 2015.
It is understood that, before performing image processing on the image data of an image data block, the image processing apparatus may determine whether the acquired original image data of the image data block is consistent with the expected image data of that block. For example, the image processing apparatus may determine whether the original image data of the image data block meets the requirements based on whether its pixel size is identical to the pixel size of the expected image data.
As an example, refer to part (a) of the schematic diagram of image data block preprocessing in fig. 8: the pixel size of image data block A is 3×3 pixels. Although the image processing apparatus acquires the original image data of image data block A based on the related information of the block, due to the characteristics of the first memory, the acquired original image data of image data block A may have a pixel size of 4×3 pixels. The image processing apparatus then determines that the original image data of image data block A does not meet the requirement and processes it into image data that does, thereby obtaining image data that meets the requirement.
As another example, refer to part (b) of the schematic diagram of image data block preprocessing in fig. 8: the pixel size of image data block B is 3×3 pixels, and the pixel size of the acquired original image data of image data block B is also 3×3 pixels. The image processing apparatus determines that the original image data of image data block B meets the requirements and uses this original image data as the image data of image data block B.
Of course, in practice, the shape of the original image data of a certain image data block acquired by the image processing apparatus may not conform to the expected shape. For example, if the expected shape is rectangular and the acquired original image data of the block is non-rectangular, the image processing apparatus may process the original image data of the block into a rectangular shape, thereby obtaining the image data of the image data block.
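Cases 1 and 2 above can be sketched as a simple preprocessing step (illustrative; image data is modeled as a list of pixel rows, and trimming excess data by cropping is an assumption — the patent does not specify how nonconforming data is corrected):

```python
def preprocess_block(raw, expected_h, expected_w):
    """Case 2: if the fetched data already has the expected pixel size,
    pass it through unchanged; case 1: otherwise crop it to the expected
    size (e.g. a 4x3 read for a 3x3 block, as in fig. 8(a))."""
    if (len(raw), len(raw[0])) == (expected_h, expected_w):
        return raw                                     # already satisfactory
    return [row[:expected_w] for row in raw[:expected_h]]

raw_4x3 = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]  # 4 rows x 3 cols
fixed = preprocess_block(raw_4x3, 3, 3)  # the 3x3 region that meets the requirement
```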
In one possible embodiment of the present application, the method provided by the embodiment of the present application further includes: the image processing device stores the processed image data of each image data block into the first memory, so that the output interface reads the processed image data of each image data block from the first memory and outputs a display image. In this case, performing image processing each time the image data of one image data block is determined, until the processed image data corresponding to all the image data blocks is obtained, includes:
and each time the image processing device obtains the processed image data of one image data block, storing the processed image data of the obtained image data block in the first memory until the processed image data of all the image data blocks are stored in the first memory.
As an example, refer to the schematic diagram of image data block processing shown in fig. 7A: the image processing device determines the image data of image data blocks A to D to be processed and adopts a time-division multiplexing manner. In the period T1, the image processing device performs image processing on the image data of image data block A and stores the processed image data in the first memory; in the period T2, it performs image processing on the image data of image data block B and stores the processed image data in the first memory; in the period T3, it performs image processing on the image data of image data block C and stores the processed image data in the first memory; and in the period T4, it performs image processing on the image data of image data block D and stores the processed image data in the first memory.
In one possible embodiment of the present application, the method provided by the embodiment of the present application further includes, before step 420: the image processing device obtains division information, where the division information is used to divide image data blocks of the image data corresponding to the layer topology map, and step 420 may be implemented as follows: the image processing apparatus determines a plurality of image data blocks based on the division information and the superimposition information.
In one possible embodiment of the present application, the method provided by the embodiment of the present application further includes, before step 420: the image processing equipment acquires dividing information, wherein the dividing information is used for dividing image data blocks of image data corresponding to the layer topological graph, the target topological graph also comprises output screen topological graph information, and the output screen topological graph information at least comprises initial coordinate information and size information of the output screen topological graph; the above step 420 may be implemented by: the image processing apparatus determines a plurality of image data blocks based on the division information, the output screen topology information, and the superimposition information.
As to how the image processing apparatus determines the plurality of image data blocks based on the division information and the superimposition information, reference may be made to the following embodiments, and details thereof will not be repeated.
In one possible embodiment of the present application, the method provided in the embodiment of the present application further includes at least one of the following two cases:
In case 1, the division information is used to indicate that layer consistency judgment is performed sequentially, in a preset order, on the image data of each pair of adjacent pixel points of the original image data corresponding to the target topology map, and that an area formed by a plurality of consecutive pixel points with a consistent layer is determined as one image data block.
In case 2, the division information is used to indicate that video source consistency judgment is performed sequentially, in the preset order, on the image data of each pair of adjacent pixel points of the original image data corresponding to the target topology map, and that an area formed by a plurality of consecutive pixel points with a consistent video source is determined as one image data block.
Specifically, in the case where the layer is an active layer, the image processing apparatus needs to determine an area formed by a plurality of consecutive pixels in which the layer coincides as an image data block based on the division information.
The layer consistency determination refers to determining whether the image data of each adjacent pixel point of the original image data is the same layer. If the image data of each adjacent pixel point is of the same image layer, determining the area formed by the continuous pixel points as an image data block; if the image data of each adjacent pixel is not the same layer, the area formed by the pixels cannot be determined as an image data block.
The preset sequence may be a specific processing sequence preset by the image processing apparatus for each layer, for example, refer to a schematic diagram of a target topology diagram as shown in fig. 5: the image processing device determines layers 1 to 4 of the original image data corresponding to the layer topology diagram in the target topology diagram, and presets a specific processing sequence for the four layers, wherein the processing sequence is as follows: layer 1, layer 2, layer 3, layer 4. Of course, the preset sequence may also be a random processing sequence preset by the image processing apparatus for each layer.
Layer consistency judgment is described here as an example. Referring to the schematic diagram of another target topology map shown in fig. 6: the target topology map is composed of layer topology maps 1 to 4, and the image data displayed in layer topology maps 1 to 4 is divided into image data blocks A to K. According to the editing operation on the layers, the image processing device determines the partition information of the target topology map and processes the original image data corresponding to layer topology maps 1 to 4 according to the partition information. Taking layer 1 corresponding to layer topology map 1 as an example: the image processing apparatus determines whether each pair of adjacent pixel points of the original image data belongs to the same layer 1.
Specifically, if the pixel points [1 to 100] and the adjacent pixel points [101 to 120] are the original image data of the same layer 1, the image processing device may determine the area formed by the pixel points [1 to 120] as an image data block. If the adjacent pixels [ 121-150 ] of the pixels [ 1-120 ] are the original image data of the layer 2, the image processing device determines that the pixels [ 1-120 ] are inconsistent with the pixels [ 121-150 ], and the image processing device can respectively determine the areas formed by the pixels [ 1-120 ] as one image data block and the areas formed by the pixels [ 121-150 ] as another image data block.
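The consistency judgment above amounts to run-length grouping of adjacent pixels by layer; a minimal sketch (the pixel indexing and the function name are illustrative, not from the patent):

```python
def partition_by_layer(pixel_layers):
    """Scan adjacent pixels in the preset order and determine each maximal
    run of consecutive pixels with a consistent layer as one image data
    block, returned as (first_pixel, last_pixel, layer)."""
    blocks, start = [], 0
    for i in range(1, len(pixel_layers) + 1):
        # A block ends where the layer changes, or at the last pixel.
        if i == len(pixel_layers) or pixel_layers[i] != pixel_layers[start]:
            blocks.append((start, i - 1, pixel_layers[start]))
            start = i
    return blocks

# Pixels 0..119 carry layer 1, pixels 120..149 carry layer 2:
blocks = partition_by_layer([1] * 120 + [2] * 30)
# -> two image data blocks: (0, 119, 1) and (120, 149, 2)
```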
In one possible embodiment of the present application, the partition information includes first information and second information, where the first information is used to determine a layer to which the image data block belongs, and the second information is used to determine original image data corresponding to the image data block in the layer to which the image data block belongs, and then the step 420 may be specifically implemented by the following manner:
in step 4201, the image processing apparatus reads, according to the first information of each image data block, layer original image data of a layer to which each image data block belongs.
As an example, the first information may include: the video source to which the image data block belongs, and sub information for determining image layer original image data of the image layer to which the image data block belongs. For example, the sub information may be size information of the layer. The size information of the layer includes: the width and height of the layers.
Step 4202, the image processing apparatus determines, according to the second information of each image data block, original image data corresponding to the corresponding image data block from the layer original image data of the layer to which the image processing apparatus belongs.
As an example, the second information may include: the starting coordinates of the image data block in the layer to which it belongs, and the size information of the image data block. Wherein the size information of the image data block includes: the width and height of the image data block.
For example, referring to the schematic diagram of another target topology map shown in fig. 6, taking image data block C in layer topology map 2 as an example: the first information of image data block C includes the identification 1 of video source 1 corresponding to layer 1, to which image data block C belongs, and the size information of layer 1, such as the width and height of layer 1. The second information of image data block C includes the size information of image data block C, for example its width and height, and its starting coordinates (0, 0) in layer 1. Based on identification 1, the image processing device determines that the layer to which image data block C belongs corresponds to video source 1. Based on the size information of layer 1, the image processing apparatus determines the layer original image data of layer 1 from video source 1. Then, according to the second information of image data block C, the image processing device determines the original image data corresponding to image data block C from the layer original image data of layer 1, so as to read the original image data corresponding to image data block C.
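Steps 4201 and 4202 can be sketched together (hypothetical data shapes: a video source is a list of pixel rows, the first information is a (source id, layer size) pair, and the second information is a (start coordinate, block size) pair — none of these representations are specified by the patent):

```python
def read_block(video_sources, first_info, second_info):
    """Step 4201: use the first information to read the layer original
    image data; step 4202: use the second information to slice out the
    block's corresponding original image data."""
    source_id, (layer_w, layer_h) = first_info
    layer = video_sources[source_id]              # layer original image data
    assert len(layer) == layer_h and len(layer[0]) == layer_w
    (x0, y0), (w, h) = second_info
    return [row[x0:x0 + w] for row in layer[y0:y0 + h]]

sources = {1: [[1, 2, 3], [4, 5, 6], [7, 8, 9]]}  # video source 1, a 3x3 layer
block_c = read_block(sources, (1, (3, 3)), ((0, 0), (2, 2)))
# -> the 2x2 region of layer 1 starting at (0, 0)
```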
In one possible embodiment of the present application, the above step 4201 may be implemented as follows: the image processing apparatus reads, from the second memory, the layer original image data of the layer to which each image data block belongs, based on the first information of that image data block. The first memory and the second memory are the same memory or different memories, and the second memory stores at least the layer original image data of the layer to which each image data block belongs.
It should be noted that, when the first memory and the second memory are the same memory, the consumption of hardware resources in the image processing apparatus can be reduced.
The original image data of the layer includes the following three cases:
in case 1, the layer original image data is original video source data corresponding to the layer to which each image data block belongs.
It will be appreciated that in the case where the layer is an active layer, the layer original image data may be the video source in the source device acquired by the video processing device through the input interface, or may be the original video source data acquired by the codec device.
In case 2, the layer original image data is image data that corresponds to the layer to which each image data block belongs and has undergone previous-stage processing.
It should be explained that the image processing apparatus obtains processed image data after image processing of an image data block, and this processed image data is stored in the first memory. When the first memory and the second memory are the same memory, the image processing apparatus may perform image processing on the processed image data a second time. The previous-stage processing may be, for example, reduction or enhancement processing of the image.
In case 3, the layer original image data is image data that corresponds to the layer to which each image data block belongs and is written into the second memory for the first time.
In one possible embodiment of the present application, the above step 4201 may be implemented as follows: the image processing device reads, according to a preset reading rule, the layer original image data of the layer to which each image data block belongs, as determined by the first information of that image data block.
It is understood that the preset reading rule may be sequentially read in the order of reading the plurality of image data blocks. Of course, the preset reading rule may be random reading.
It should be noted that, when the layer original image data of the layer to which each image data block belongs is stored in the first memory/the second memory, the image processing apparatus sequentially reads the layer original image data of the layer to which the image data block belongs determined by the first information of each image data block from the first memory/the second memory according to a preset reading rule.
As an example, refer to the schematic diagram of another target topology map shown in fig. 6: the image processing apparatus determines image data blocks A to K in the target topology map and sets a specific reading order for them, for example: read image data blocks A to K sequentially. Alternatively, the reading order may be: read the image data blocks row by row, from left to right or from right to left, starting from the starting coordinates of the target topology map.
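The row-by-row reading rule can be sketched as a sort over the blocks' starting coordinates (illustrative; assumes (x, y) coordinates with y increasing row by row from the topology map's start):

```python
def reading_order(block_coords, left_to_right=True):
    """Order image data blocks row by row from the topology map's start;
    within a row, read left-to-right or right-to-left."""
    return sorted(block_coords,
                  key=lambda c: (c[1], c[0] if left_to_right else -c[0]))

order = reading_order([(4, 0), (0, 0), (4, 2), (0, 2)])
# -> [(0, 0), (4, 0), (0, 2), (4, 2)]
```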
Since, in a practical application scenario, the display image is formed by superimposing the layer original image data of a plurality of layers, and the layer topology maps of the plurality of layers may have overlapping areas, the following describes the plurality of image data blocks obtained by the image processing device when the layer topology maps of at least two layers have an overlapping area:
in case 1, in the case that the layer topology map of at least two layers in the multiple layer topology maps included in the target topology map has an overlapping area, correspondingly, original image data of at least two image data blocks in the multiple image data blocks overlap, and the at least two image data blocks belong to different layers in the at least two layers. That is, the image data blocks divided by the layer a at least include the image data blocks divided by the overlapping area in the layer a, and the image data blocks divided by the layer B at least include the image data blocks divided by the overlapping area in the layer B.
As shown in fig. 6, for example, layer topology map 2 and layer topology map 4 corresponding to the target topology map have an overlapping area. Specifically, layer topology map 2 includes image data block C, image data block D, image data block E, image data block J, and image data block H', and layer topology map 4 includes image data block H. The original image data of image data block H' is covered by the original image data of image data block H, i.e. the two overlap.
In case 2, in the case where the layer topology map of at least two layers among the plurality of layer topology maps included in the target topology map has an overlapping area, the original image data of each of the plurality of image data blocks does not overlap. That is, in the case where the layer a and the layer B have overlapping areas, if the layer B is overlaid on the layer a, the image data block into which the layer B is divided includes at least the image data block in the layer B divided by the overlapping areas. The image data blocks divided by the layer A comprise image data blocks divided by other areas in the layer A. The other areas are areas of layer a other than the area overlapping layer B.
As shown in fig. 6, for example, layer topology map 2 and layer topology map 4 corresponding to the target topology map have an overlapping area. Layer topology map 2 includes image data block C, image data block D, image data block E, and image data block J, and layer topology map 4 includes image data block H. For the part of layer topology map 2 covered by image data block H, the image processing apparatus does not divide an image data block.
Because the covered overlapping area of the two layers does not need to be read repeatedly, or only the layer data visible to the user needs to be read, case 2 provided by the embodiment of the application can minimize the bandwidth and resources required by the image processing equipment when processing the image.
Of course, there may be cases where the layer raw image data of the respective layers do not overlap each other, in which case each layer may be divided into one or more image data blocks.
In one possible embodiment of the present application, each time the image processing apparatus determines the image data of one image data block, image processing is performed on the image data of the image data block, including: and each time the image processing device determines the image data of one image data block, processing the image data of each image data block according to the image processing parameters of the image data block to obtain the processed image data of the image data block.
The image processing parameters may include any one or more of the following: a reduction parameter, an enlargement parameter, and a color mode parameter.
As an example, taking an image processing parameter as an enlargement parameter, the image processing apparatus performs enlargement processing on image data of an image data block according to the enlargement parameter of the image data block, thereby obtaining image data of the image data block after enlargement.
For example, after the image processing apparatus obtains the image data of the image data block 1, the image processing apparatus may perform the enlarging process on the image data of the image data block 1 according to the enlarging parameter of the image data block 1, store the processed image data of the image data block 1 after the enlarging process, and then read the image data of the image data block 2, and may perform the enlarging process on the image data of the image data block 2 according to the enlarging parameter of the image data block 2, and store the processed image data of the image data block 2 after the enlarging process.
As another example, taking the image processing parameter as a reduction parameter, the image processing apparatus performs reduction processing on the image data of an image data block according to the reduction parameter of the image data block, thereby obtaining the reduced image data of the image data block.
The zoom-in parameter and the zoom-out parameter may include: an enlargement parameter and a reduction parameter in the horizontal direction, and/or an enlargement parameter and a reduction parameter in the vertical direction. In other words, the image processing apparatus may individually enlarge or reduce the width in the horizontal direction of the original image of the image data block or may individually enlarge or reduce the height in the vertical direction of the original image of the image data block when processing the image data block.
Thus, as another example, suppose the image processing parameter for the width in the horizontal direction is an enlargement parameter and the parameter for the height in the vertical direction is a reduction parameter. The image processing apparatus then performs enlargement processing on the width of the image data block according to the enlargement parameter in the horizontal direction and, correspondingly, performs reduction processing on the height of the image data block according to the reduction parameter in the vertical direction, thereby obtaining the processed image data of the image data block. As shown in the schematic diagram of processed image data in fig. 6, for image data block H in layer topology map 4, the image processing apparatus needs to perform enlargement processing in the horizontal direction and reduction processing in the vertical direction on the original image data of image data block H at the corresponding position, so as to obtain the processed image data of image data block H in layer topology map 4 of the target topology map.
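Independent horizontal enlargement and vertical reduction can be sketched with nearest-neighbour resampling (an assumption for illustration — the patent does not specify the interpolation method):

```python
def scale_block(raw, out_w, out_h):
    """Resample a block (list of pixel rows) to out_w x out_h, scaling the
    horizontal and vertical directions independently (nearest neighbour)."""
    in_h, in_w = len(raw), len(raw[0])
    return [[raw[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
            for y in range(out_h)]

raw = [[1, 2], [3, 4], [5, 6], [7, 8]]    # 4 rows x 2 columns
out = scale_block(raw, out_w=4, out_h=2)  # enlarge width x2, reduce height x2
# -> [[1, 1, 2, 2], [5, 5, 6, 6]]
```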
In one possible implementation of the present application, the image processing parameters corresponding to all the image data blocks may be the same or different. Or in a possible implementation manner of the present application, the image processing parameters corresponding to at least two image data blocks belonging to the same layer are the same, and the image processing parameters corresponding to each image data block belonging to different layers may be the same or different.
In a possible implementation of the application, the image processing parameters may be preconfigured in the image processing device, for example by a user inputting the image processing parameters into the image processing device according to actual requirements. Of course, the image processing parameters may also be determined by the image processing apparatus itself, or determined during editing of the target topology map. The following describes how the image processing apparatus acquires the image processing parameters of an image data block, taking scaling parameters as an example.
In one possible embodiment of the present application, the image processing parameters include at least a scaling parameter. If, each time the image processing device determines the image data of one image data block, it processes that image data according to the image processing parameters of the block, then before the processed image data of the block is obtained, the method provided by the embodiment of the present application further includes: the image processing device determines the scaling parameter of any image data block according to the display parameters of that image data block and the display parameters of the video source to which it belongs.
As one possible implementation, the scaling parameters include the scaling ratio by which the original image data of the image data block needs to be scaled, namely the ratio between the expected processed image data X0 of the block corresponding to the target topology map and the original image data X0' of the block. The scaling ratio includes an enlargement ratio and a reduction ratio. The scaling ratio may be preset by the user, automatically determined by the image processing device according to the layer topology map, or determined during editing of the target topology map, which is not limited by the embodiment of the present application.
As an example, fig. 9 shows a schematic diagram of scaling an image, including the expected processed image data X0 of image data block A and the original image data X0' of block A. In the case where the original image data X0' of block A is smaller than the expected processed image data X0 of block A corresponding to the target topology map, the scaling ratio is an enlargement ratio.
It will be appreciated that the display parameters of the image data blocks include the width and height of the image data blocks on the target topology in the display image, such as the width and height of the image data block a on the target topology in the schematic diagram of the other target topology shown in fig. 6.
It will be appreciated that the display parameters of the signal source to which the image data block belongs include the width and height of the original image data of the image data block on the layer and/or the video source, for example, the width and height of the original image data in layer 1 to which the image data block a belongs in the schematic diagram of another target topology shown in fig. 6.
As an example, reference is made to image data block a on the target topology of the schematic diagram of another target topology shown in fig. 6: in the case that the display parameter of the image data block a corresponding to the target topology is smaller than the display parameter of the signal source 1 to which the image data block a belongs, the image processing apparatus determines the reduction parameter of the image data block a.
As another example, refer to image data block G on the target topology map of the schematic diagram of another target topology map shown in fig. 6: in the case where the display parameter of the image data block G is larger than the display parameter of the signal source 3 to which the image data block G belongs, the image processing apparatus determines the enlargement parameter of the image data block G.
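The two comparisons above reduce to computing, per axis, the ratio between the block's display size on the target topology map and its size in the source: a ratio below 1 acts as a reduction parameter, a ratio above 1 as an enlargement parameter. A minimal sketch — the function name and the pixel dimensions are illustrative assumptions, not values from the patent:

```python
def scaling_parameters(block_w, block_h, src_w, src_h):
    """Per-axis scaling ratios: expected processed size over original size.

    A ratio < 1 is a reduction parameter, a ratio > 1 an enlargement
    parameter; the horizontal and vertical axes are determined independently.
    """
    return block_w / src_w, block_h / src_h

# Like block A: display area smaller than its source -> reduction parameters.
sx_a, sy_a = scaling_parameters(block_w=960, block_h=540, src_w=1920, src_h=1080)

# Like block G: display area larger than its source -> enlargement parameters.
sx_g, sy_g = scaling_parameters(block_w=1280, block_h=720, src_w=640, src_h=360)
```

Because the two axes are independent, a block may yield a reduction ratio on one axis and an enlargement ratio on the other, matching the block H example earlier.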
In one possible embodiment of the application, the shape of the image data block is rectangular or non-rectangular.
Specifically, the non-rectangular shape may be any shape other than a rectangle, such as a circle, a heart shape, or a pentagon, which is not limited in the embodiment of the present application.
In the case where the image data blocks are rectangular in shape, as shown in fig. 6, which is a schematic diagram of another target layer topology, the image data blocks a to K are all rectangular in shape. In this way, it can be ensured that there is no undivided image data in the gap between the two image data blocks.
As shown in fig. 10, fig. 10 is a flowchart of another method for layer processing according to an embodiment of the present application, where the method includes:
Step 910, the image processing device performs editing operation on the layer topology map to obtain a target topology map of the display image.
The target topological graph is formed by superposing a plurality of layer topological graphs, the layer topological graphs correspond to the display screen layers one by one, and the target topological graph at least comprises superposition information of the layer topological graphs.
The specific implementation of step 910 may refer to the description in step 410, which is not repeated herein.
As an example, in connection with fig. 3, the above-mentioned step 910 may be implemented specifically by:
In step 920, the image processing apparatus reads the original image data of each of the plurality of image data blocks from the second memory according to the superimposition information.
As an example, in connection with fig. 3, the above-mentioned step 920 may be specifically implemented by: the read control module 2012 controls the data read module 2014 to read the original image data of each of the plurality of image data blocks from the second memory according to the superimposition information.
Step 930, every time the image processing device reads the original image data of one image data block, it obtains the image data of that image data block according to the original image data.
As an example, in connection with fig. 3, the above step 930 may be implemented specifically by: after each reading of the original image data of one of the image data blocks, the data reading module 2014 sends the read original image data of the image data block to the preprocessing module 2015, and then the preprocessing module 2015 obtains the image data of the image data block according to the original image data of one of the image data blocks.
As an example, in the event that the preprocessing module 2015 determines that the original image data of the image data block does not meet the requirements, it processes the original image data of the block into image data that meets the requirements. In the event that the preprocessing module 2015 determines that the original image data of the block meets the requirements, it supplies the original image data to the image processing module 2016 as the image data of the block.
Step 940, the image processing device performs image processing every time the image data of one image data block is obtained.
For example, the image processing apparatus may perform a scaling process or a translation process or an image enhancement or restoration process on the image data of the image data block in step 940.
As an example, in connection with fig. 3, the above step 940 may be implemented specifically by: the preprocessing module 2015 obtains the image data of one image data block, and then sends the image data of the image data block to the image processing module 2016, and the control processing module 2013 controls the image processing module 2016 to perform image processing on the image data of the one image data block.
In step 950, the image processing apparatus stores the processed image data of each image data block in the first memory until the processed image data of all image data blocks are stored in the first memory.
For example, in connection with fig. 3, the above step 950 may be specifically implemented by: the image processing module 2016 stores the image data processed by each image data block into the first memory. For example, each time the image processing module 2016 processes an image of a block of image data, the processed image data of the block of image data is stored in the first memory until the processed image data of all the blocks of image data are stored in the first memory.
In step 960, the image processing apparatus calls an output interface to read the processed image data of all the image data blocks from the first memory to output a display image.
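Steps 910 to 960 amount to a per-block streaming pipeline: read each block, preprocess it, process it, store the result in the first memory, and output the display image once every block is stored. The sketch below illustrates only the control flow; every name and callable is a placeholder standing in for the modules described above (data reading module 2014, preprocessing module 2015, image processing module 2016), not the patent's implementation.

```python
def process_layers(blocks, read_block, preprocess, process, first_memory, output):
    """Sketch of the control flow of steps 910-960: stream blocks through
    read -> preprocess -> process -> store, then output the display image
    once the processed data of every block is in the first memory."""
    for block_id in blocks:                 # step 920: read per superposition info
        raw = read_block(block_id)
        data = preprocess(raw)              # step 930: conform raw data if needed
        result = process(block_id, data)    # step 940: scale / translate / enhance
        first_memory[block_id] = result     # step 950: store block by block
    return output(first_memory)             # step 960: read all blocks and output

# Toy stand-ins for the hardware modules, for demonstration only.
mem = {}
img = process_layers(
    blocks=["A", "B"],
    read_block=lambda b: b.lower(),              # "reads" a block's raw data
    preprocess=str.upper,                        # normalises it
    process=lambda b, d: d + "!",                # "image processing"
    first_memory=mem,
    output=lambda m: "".join(m[b] for b in sorted(m)),
)
```

In the actual device the "first memory" would be the frame buffer that the output interface reads from; a plain dict stands in for it here.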
For parts of the layer processing method provided in this embodiment that are not described in detail, reference may be made to steps 410 to 440 in the above embodiment, which are not repeated herein.
It will be appreciated that each device, such as an image processing device, includes corresponding structures and/or software modules that perform the functions described above. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the present application may perform the division of the functional units according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
The method according to the embodiments of the present application has been described above with reference to fig. 1 to 10; the apparatus for performing the method according to the embodiments of the present application is described below. It will be understood by those skilled in the art that the method and the apparatus may be combined with and refer to each other: the steps performed by the image processing device in the method for layer processing described above may be performed by the layer processing apparatus and the electronic device provided in the embodiments of the present application, respectively.
In the case of using an integrated unit, fig. 11 shows a layer processing apparatus referred to in the above embodiment, which may be an image processing device or an apparatus applied in the image processing device, such as a chip or a processing circuit, and may include: a processor 1101 and an output interface 1102.
In an alternative implementation, the layer processing device may further include at least one memory (e.g., the first memory, the second memory, etc.). The at least one memory is configured to store program code and data for the apparatus for layer processing.
The layer processing apparatus is, for example, an image processing device, or a chip applied in the image processing device. The processor 1101 is configured to edit the layers to obtain a target topology map of the display image, where the target topology map is formed by superposing a plurality of layer topology maps, the layer topology maps are in one-to-one correspondence with the display screen layers, and the target topology map at least includes superposition information of the layer topology maps. The processor 1101 is further configured to obtain a plurality of image data blocks according to the superposition information. The processor 1101 is configured to perform image processing on the image data of each of the plurality of image data blocks to obtain the processed image data of each image data block. The output interface 1102 is configured to output the display image based on the processed image data of each image data block.
In a possible implementation of the present application, the processor 1101 is further configured to store the processed image data of each image data block to the first memory, so that the output interface 1102 reads the processed image data of each image data block from the first memory to output the display image.
In a possible implementation of the present application, the processor 1101 is further configured to, for each determination of the image data of one image data block, perform image processing on the image data of the image data block until processed image data of all image data blocks are obtained.
In a possible implementation manner of the present application, the processor 1101 is further configured to: in the case where the determined original image data of any image data block does not meet the requirements, process the original image data into image data that meets the requirements, thereby obtaining the image data of that image data block; and in the case where the determined original image data of any image data block meets the requirements, use the original image data as the image data of that image data block.
In a possible implementation of the present application, the processor 1101 is further configured to store the processed image data of the obtained image data block in the first memory every time the processed image data of one image data block is obtained, until the processed image data of all image data blocks are stored in the first memory.
In a possible implementation manner of the present application, the processor 1101 is further configured to obtain partition information, where the partition information is used to divide the image data corresponding to the layer topology map into image data blocks, and to determine a plurality of image data blocks according to the partition information and the superposition information.
In a possible implementation manner of the present application, the processor 1101 is further configured to process the image data of the image data block according to the image processing parameter of the image data block every time the image data of one image data block is determined, so as to obtain the processed image data of the image data block.
In a possible implementation of the present application, the processor 1101 is further configured to determine a scaling parameter of any image data block according to a display parameter of any image data block and a display parameter of a video source to which any image data block belongs.
The processor 1101 may be a general purpose central processing unit (central processing unit, CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application, such as an FPGA.
The memory 1103 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be provided separately and coupled to the processor 1101 via a communication line 1104. The memory 1103 may also be integrated with the processor 1101.
The memory 1103 is used for storing the computer-executable instructions for executing the solution of the present application, and the execution is controlled by the processor 1101. The processor 1101 is configured to execute the computer-executable instructions stored in the memory 1103, thereby implementing the layer processing method provided in the above embodiments of the present application.
Alternatively, the computer-executable instructions in the embodiments of the present application may be referred to as application program codes, which are not particularly limited in the embodiments of the present application.
In a particular implementation, the processor 1101 may include one or more CPUs, such as CPU0 and CPU1 of FIG. 11, as an embodiment.
In a particular implementation, as one embodiment, the communication device may include multiple processors, such as processor 1101 and processor 1105 in fig. 11. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
Fig. 12 is a schematic structural diagram of a chip 120 according to an embodiment of the present application. The chip 120 includes one or more (including two) processors 1210 and a communication interface 1230.
Alternatively, the chip 120 may be applied to an image processing apparatus or a video processing apparatus, which is not limited in the embodiment of the present application.
Optionally, the chip 120 further includes a memory 1240; the memory 1240 may include read-only memory and random access memory, and provides operating instructions and data to the processor 1210. A portion of the memory 1240 may also include non-volatile random access memory (non-volatile random access memory, NVRAM). Alternatively, the memory 1240 is an external memory of the chip 120. It should be noted that, whether the memory 1240 is an internal or external memory of the chip 120, it may store the processed image data of the image data blocks and/or the original layer image data of the layer to which each image data block belongs.
In some implementations, the memory 1240 stores the elements, execution modules or data structures, or a subset thereof, or an extended set thereof.
In an embodiment of the present application, the corresponding operation is performed by calling an operation instruction stored in the memory 1240 (which may be stored in an operating system).
The processor 1210 controls the processing operations of the transmitting device, and the processor 1210 may also be referred to as a central processing unit (central processing unit, CPU).
Memory 1240 may include read-only memory and random access memory, and provides instructions and data to processor 1210. A portion of the memory 1240 may also include NVRAM. In application, the processor 1210, the communication interface 1230, and the memory 1240 are coupled together by a bus system 1220, where the bus system 1220 may include a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as the bus system 1220 in fig. 12.
The method disclosed in the above embodiments of the present application may be applied to the processor 1210 or implemented by the processor 1210. The processor 1210 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above methods may be performed by integrated hardware logic circuitry in the processor 1210 or by instructions in the form of software. The processor 1210 may be a general purpose processor, a digital signal processor (DSP), an ASIC, a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic block diagrams disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1240, and the processor 1210 reads the information in the memory 1240 and performs the steps of the above method in combination with its hardware.
In a possible implementation, the communication interface 1230 is used to perform the steps of receiving and transmitting by the communication device in the embodiment of the signal transmission method described above. The processor 1210 is configured to execute the steps of the processing of the image processing apparatus in the embodiment of the signal transmission method described above.
Alternatively, the computer-executable instructions in the embodiments of the present application may be referred to as application program codes, which are not particularly limited in the embodiments of the present application.
In one aspect, a computer readable storage medium is provided having instructions stored therein that, when executed, perform the functions as performed by the image processing device in fig. 4.
In one aspect, a computer program product is provided comprising instructions that when executed perform the functions as performed by an image processing device in fig. 4.
In one aspect, embodiments of the present application provide a chip for use in an image processing device, the chip including at least one processor and a communication interface coupled to the at least one processor for executing instructions to perform functions as performed by the image processing device in fig. 4.
The embodiment of the application provides a layer processing system, which comprises: the device comprises an image processing device, a coding and decoding device and a display device, wherein the image processing device is connected with the display device. The codec device is connected with the image processing device. Wherein the image processing device is adapted to perform the functions of fig. 4.
The embodiment of the application provides a layer processing system, which comprises: a video processing device and a display device, where the video processing device is connected with the display device. The video processing device integrates a coding and decoding unit and an image processing unit; the coding and decoding unit is used for coding and decoding the video source data received by an input interface of the video processing device, and the image processing unit is used for executing the functions performed by the image processing device in fig. 4.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a network device, a user device, or other programmable apparatus. The computer program or instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium; for example, the computer program or instructions may be transmitted from one website site, computer, server, or data center to another website site, computer, server, or data center by wired or wireless means. The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a digital video disc (digital video disc, DVD)), or a semiconductor medium (e.g., a solid state drive (solid state drive, SSD)).
Although the application is described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Although the application has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the application. Accordingly, the specification and drawings are merely exemplary illustrations of the present application as defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the application. It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (23)
1. A method of layer processing, comprising:
Editing the layers to obtain a target topological graph of a display image, wherein the target topological graph is formed by superposing a plurality of layer topological graphs, the layer topological graphs correspond to the display screen layers one by one, and the target topological graph at least comprises superposition information of the layer topological graphs;
Obtaining a plurality of image data blocks according to the superposition information;
Image processing is carried out on the image data of each image data block in the plurality of image data blocks, so that the processed image data of each image data block is obtained;
and outputting the display image according to the processed image data of each image data block.
2. The method according to claim 1, wherein the image data corresponding to one of the image data blocks belongs to the same video source and/or the same layer.
3. The method of claim 1, wherein outputting the display image based on the processed image data of each of the image data blocks comprises:
Storing the processed image data of each of the image data blocks to a first memory, such that an output interface reads the processed image data of each of the image data blocks from the first memory to output the display image.
4. The method according to claim 3, wherein said storing the processed image data of each of said image data blocks to the first memory comprises:
And storing the processed image data of all the image data blocks in the first memory according to a target storage mode.
5. The method of claim 1, wherein the overlay information includes layer priority information, layer start coordinate information, layer size information, and/or layer-corresponding video source information for each layer topology.
6. The method of claim 1, wherein the target topology further comprises output screen topology information including at least start coordinate information and size information of the output screen topology.
7. The method of claim 1, wherein image processing the image data of each of the plurality of image data blocks to obtain processed image data of each of the image data blocks, comprises:
And each time the image data of one image data block is determined, performing image processing on the image data of the image data block until the processed image data of all the image data blocks are obtained.
8. The method of claim 7, wherein after each determination of the image data of one of the image data blocks and before image processing the image data of the image data block, the method further comprises:
Processing the original image data of any one of the image data blocks into image data meeting the requirements under the condition that the determined original image data of any one of the image data blocks does not meet the requirements, and obtaining the image data of any one of the image data blocks;
and taking the original image data of any one of the image data blocks as the image data of any one of the image data blocks when the determined original image data of any one of the image data blocks meets the requirements.
9. The method according to claim 7 or 8, characterized in that the method further comprises:
Storing the processed image data of each of the image data blocks to a first memory, such that an output interface reads the processed image data of each of the image data blocks from the first memory to output the display image;
And performing image processing on the image data of the image data blocks every time the image data of one image data block is determined until the processed image data of all the image data blocks are obtained, wherein the image processing comprises the following steps:
And storing the processed image data of the obtained image data blocks in the first memory every time the processed image data of one image data block is obtained until the processed image data of all the image data blocks are stored in the first memory.
10. The method according to any one of claims 1-8, further comprising:
obtaining partition information, wherein the partition information is used for dividing the image data corresponding to the layer topological graph into image data blocks;
the obtaining a plurality of image data blocks according to the superposition information includes:
determining a plurality of image data blocks according to the partition information and the superposition information; or
obtaining partition information, wherein the partition information is used for dividing the image data corresponding to the layer topological graph into image data blocks, the target topological graph further comprises output screen topological graph information, and the output screen topological graph information at least comprises initial coordinate information and size information of the output screen topological graph;
the obtaining a plurality of image data blocks according to the superposition information includes:
determining a plurality of image data blocks according to the partition information, the output screen topological graph information and the superposition information.
11. The method according to any one of claims 1-8, wherein, in the case where the layer topological graphs of at least two layers among the plurality of layer topological graphs have an overlapping area, the original image data of at least two of the plurality of image data blocks overlaps, and those at least two image data blocks belong to different ones of the at least two layers; or,
in the case where the layer topological graphs of at least two layers among the plurality of layer topological graphs have an overlapping area, the original image data of each of the plurality of image data blocks does not overlap.
12. The method according to claim 10, wherein the division information is used for indicating that the image data of adjacent pixel points of the original image data corresponding to the target topological graph is checked for layer consistency in a preset order, and that an area formed by a plurality of consecutive pixel points belonging to a consistent layer is determined as an image data block; and/or,
the division information is used for indicating that the image data of adjacent pixel points of the original image data corresponding to the target topological graph is checked for video source consistency in the preset order, and that an area formed by a plurality of consecutive pixel points belonging to a consistent video source is determined as an image data block.
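The consistency check described in claim 12 amounts to a run-length grouping over a per-pixel layer map. The sketch below is illustrative, not the patent's implementation; the function name, the flat row-major layer map, and the `(layer_id, start, length)` block format are all assumptions.

```python
def partition_by_consistency(layer_map):
    """Group consecutive pixels with a consistent layer id into blocks.

    layer_map: per-pixel layer ids, already flattened in the preset
    scan order (e.g. row-major). Returns (layer_id, start, length)
    tuples, each describing one candidate image data block.
    """
    blocks = []
    start = 0
    for i in range(1, len(layer_map) + 1):
        # A block ends where the next pixel's layer id differs,
        # or at the end of the scan.
        if i == len(layer_map) or layer_map[i] != layer_map[start]:
            blocks.append((layer_map[start], start, i - start))
            start = i
    return blocks
```

The same loop applies unchanged to the second branch of the claim: substitute a per-pixel video-source id for the layer id to group by video source consistency instead.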
13. The method according to claim 10, wherein the division information includes first information and second information, the first information being used for determining the layer to which an image data block belongs, and the second information being used for determining the original image data corresponding to the image data block within that layer;
the determining a plurality of image data blocks according to the division information and the superposition information comprises:
reading the layer original image data of the layer to which each of the image data blocks belongs according to the first information of each image data block;
and determining the original image data corresponding to each image data block from the layer original image data of the layer to which it belongs, according to the second information of each image data block.
14. The method according to claim 13, wherein reading the layer raw image data of the layer to which each of the image data blocks belongs according to the first information of each of the image data blocks includes:
reading the layer original image data of the layer to which each image data block belongs from a second memory according to the first information of each image data block, wherein the first memory and the second memory are the same memory or different memories,
and the second memory stores at least the layer original image data of the layer to which each image data block belongs, wherein the layer original image data is the original video source data corresponding to the layer to which each image data block belongs;
or the layer original image data is image data corresponding to the layer to which each image data block belongs that has undergone pre-stage processing;
or the layer original image data is the image data corresponding to the layer to which each image data block belongs that was first written into the second memory.
15. The method according to claim 13, wherein reading the layer raw image data of the layer to which each of the image data blocks belongs according to the first information of each of the image data blocks includes:
reading, according to a preset reading rule, the layer original image data of the layer to which each image data block belongs, as determined by the first information of that image data block.
16. The method of claim 7, wherein performing image processing on the image data of each image data block as it is determined comprises:
each time the image data of one image data block is determined, processing the image data of that image data block according to the image processing parameters of that image data block, so as to obtain the processed image data of that image data block.
17. The method of claim 16, wherein the image processing parameters include at least a scaling parameter, and wherein, before the image data of each image data block is processed according to the image processing parameters of that image data block, the method further comprises:
determining the scaling parameter of any image data block according to the display parameters of that image data block and the display parameters of the video source to which it belongs.
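As a minimal illustration of claim 17, the scaling parameter can be derived as the ratio between the block's display size and the display size of its video source. The function name and the width/height tuple format are assumptions, not taken from the patent.

```python
def scaling_parameter(block_display_size, source_display_size):
    """Per-axis scale factors for rendering a block of a video source.

    Both arguments are (width, height) tuples in pixels. A factor
    below 1.0 corresponds to reduction, above 1.0 to enlargement.
    """
    bw, bh = block_display_size
    sw, sh = source_display_size
    return (bw / sw, bh / sh)
```

For example, a block shown at 960x540 from a 1920x1080 source would be scaled by 0.5 in each axis.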
18. The method of any of claims 1-8, wherein the shape of the image data block is rectangular or non-rectangular.
19. A method of layer processing, comprising:
performing an editing operation on the layers to obtain a target topological graph of a display image, wherein the target topological graph is formed by superposing a plurality of layer topological graphs, the layer topological graphs correspond one-to-one to the layers of the display screen, and the target topological graph at least comprises superposition information of the layer topological graphs;
reading the original image data of each of a plurality of image data blocks from a second memory according to the superposition information;
obtaining the image data of each image data block from its original image data, each time the original image data of one image data block is read;
performing image processing each time the image data of one image data block is obtained;
storing the processed image data of each image data block in a first memory, until the processed image data of all the image data blocks is stored in the first memory;
and calling an output interface to read the processed image data of all the image data blocks from the first memory so as to output the display image.
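The steps of claim 19 amount to a per-block read/process/store loop over two memories, which can be sketched as follows. The dict-based memories and the callback names are illustrative assumptions; in hardware the two memories may be the same device or different devices, as claim 14 notes.

```python
def layer_processing_pipeline(second_memory, block_ids, process):
    """Read each block's raw data from the second memory, process it,
    and store the result in the first memory; once every block is
    stored, the output interface reads them all to build the frame.

    second_memory: mapping of block_id -> original image data
    block_ids: blocks in processing order
    process: per-block image-processing function
    """
    first_memory = {}
    for block_id in block_ids:
        raw = second_memory[block_id]          # read original image data
        first_memory[block_id] = process(raw)  # process, then store
    # Output interface: read all processed blocks from the first memory.
    return [first_memory[b] for b in block_ids]
```

Processing block by block, rather than assembling the whole frame first, is what lets the output begin as soon as all blocks are stored, without buffering intermediate full-frame copies.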
20. An apparatus for layer processing, comprising:
a layer superposition module, configured to perform an editing operation on the layers to obtain a target topological graph of the display image, wherein the target topological graph is formed by superposing a plurality of layer topological graphs, the layer topological graphs correspond one-to-one to the layers of the display screen, and the target topological graph at least comprises superposition information of the layer topological graphs;
the layer superposition module being further configured to obtain a plurality of image data blocks according to the superposition information;
a processing module, configured to perform image processing on the image data of each of the plurality of image data blocks to obtain the processed image data of each image data block;
and an output module, configured to output the display image according to the processed image data of each image data block.
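The module structure of claim 20 maps naturally onto a small class whose injected callbacks mirror the claimed responsibilities; the class name, method names, and callback decomposition below are illustrative assumptions, not the patent's design.

```python
class LayerProcessingApparatus:
    """Sketch of claim 20's apparatus with its modules as callbacks."""

    def __init__(self, edit, obtain_blocks, process, output):
        self.edit = edit                    # layers -> target topological graph
        self.obtain_blocks = obtain_blocks  # topology -> image data blocks
        self.process = process              # per-block image processing
        self.output = output                # processed blocks -> display image

    def render(self, layers):
        topology = self.edit(layers)
        blocks = self.obtain_blocks(topology)
        processed = [self.process(b) for b in blocks]
        return self.output(processed)
```

Keeping the modules as separate callables reflects the claim's division of labor: each module can be replaced (e.g. a different block-partition strategy) without touching the others.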
21. An image processing system, comprising a codec device, an image processing device and a display device, the codec device being connected to the image processing device and the image processing device being connected to the display device, wherein the image processing device is configured to perform the method of layer processing of any one of claims 1 to 18; or comprising:
a video processing device connected to the display device, the video processing device integrating a codec unit and an image processing unit, wherein the codec unit is configured to code and decode the video source data received by an input interface of the video processing device, and the image processing unit is configured to perform the method of layer processing of any one of claims 1 to 18.
22. An image processing device, comprising a processor, at least one memory, and a computer program stored in at least one of the memories and executable on the processor, wherein the processor implements the method of any one of claims 1 to 18 when executing the computer program.
23. A computer readable storage medium storing a computer program, which when executed by a processor implements the method of any one of claims 1 to 18.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211629378.9A CN118214817A (en) | 2022-12-16 | 2022-12-16 | Method, device, equipment, system and storage medium for layer processing |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211629378.9A CN118214817A (en) | 2022-12-16 | 2022-12-16 | Method, device, equipment, system and storage medium for layer processing |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118214817A true CN118214817A (en) | 2024-06-18 |
Family
ID=91449436
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211629378.9A Pending CN118214817A (en) | 2022-12-16 | 2022-12-16 | Method, device, equipment, system and storage medium for layer processing |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118214817A (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118573795A (en) * | 2024-07-31 | 2024-08-30 | 北京数字小鸟科技有限公司 | Multi-video stream superposition method based on combination of FPGA and software |
| CN119477655A (en) * | 2025-01-17 | 2025-02-18 | 广东匠芯创科技有限公司 | Image display processing method, device and storage medium |
| CN119477655B (en) * | 2025-01-17 | 2025-06-03 | 广东匠芯创科技有限公司 | Image display processing method, device and storage medium |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8947432B2 (en) | Accelerated rendering with temporally interleaved details | |
| CN103026402B (en) | Show Compressed Super Slice Image | |
| JP6978542B2 (en) | Electronic device and its control method | |
| US8498500B2 (en) | Image processing apparatus and image processing method | |
| TWI698834B (en) | Methods and devices for graphics processing | |
| CN118214817A (en) | Method, device, equipment, system and storage medium for layer processing | |
| CN112399095A (en) | Video processing method, device and system | |
| US8527689B2 (en) | Multi-destination direct memory access transfer | |
| US11212435B2 (en) | Semiconductor device for image distortion correction processing and image reduction processing | |
| US11055820B2 (en) | Methods, apparatus and processor for producing a higher resolution frame | |
| CN114339045A (en) | Image processing system and display device | |
| CN100534125C (en) | An image processing method and an image rotatable digital photo frame for realizing the method | |
| CN109214977B (en) | Image processing device and control method thereof | |
| CN114697555A (en) | Image processing method, device, equipment and storage medium | |
| JP2004328178A (en) | Image processing apparatus | |
| CN116957899B (en) | Graphics processor, system, device, equipment and method | |
| CN114938453B (en) | Video coding method, chip, storage medium and computer equipment | |
| US20240161229A1 (en) | Image processing device and method using video area splitting, and electronic system including the same | |
| CN118138784A (en) | Video segmentation compression method, device, equipment and medium | |
| CN118967439A (en) | Terminal device and image super-resolution method | |
| JP2001195569A (en) | Image data compression and control system | |
| JP6476500B2 (en) | Image processing system, gaming machine | |
| KR102077146B1 (en) | Method and apparatus for processing graphics | |
| CN120495489A (en) | Multimedia content presentation method and device, electronic equipment and storage medium | |
| CN115866167A (en) | LED display screen image data splicing transmission method and device and terminal |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||