
CN116074537A - Encoding method, decoding method, electronic device, and computer-readable storage medium - Google Patents

Encoding method, decoding method, electronic device, and computer-readable storage medium

Info

Publication number
CN116074537A
Authority
CN
China
Prior art keywords: block, template, predicted, matching, prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211743996.6A
Other languages
Chinese (zh)
Inventor
方诚
林聚财
江东
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202211743996.6A priority Critical patent/CN116074537A/en
Publication of CN116074537A publication Critical patent/CN116074537A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/593: using predictive coding involving spatial prediction techniques
    • H04N19/103: using adaptive coding; selection of coding mode or of prediction mode
    • H04N19/105: using adaptive coding; selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/176: using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses an encoding method, a decoding method, an electronic device, and a computer-readable storage medium. The method includes: obtaining a current block in an image frame and geometrically segmenting the current block to obtain at least two blocks to be predicted, where at least some of the blocks to be predicted correspond to a matching template area comprising the matching reconstructed pixels of the block to be predicted; and performing intra-frame template matching prediction within a preset range of the current block using the matching template area of at least one block to be predicted, thereby obtaining a template matching prediction block of the block to be predicted. This scheme improves coding precision.

Description

Encoding method, decoding method, electronic device, and computer-readable storage medium
Technical Field
The present disclosure relates to the field of video encoding technology, and in particular, to an encoding method, a decoding method, an electronic device, and a computer-readable storage medium.
Background
Because the raw data size of video is large, video usually has to be encoded and compressed to reduce its data size. In existing coding standards, intra-frame prediction is performed during intra-frame coding, and the current block is usually predicted according to a general coding standard; for example, the prediction block of the current block is determined using the H.266 standard. The way a prediction block is obtained during intra-frame prediction in the prior art is therefore single and rigid, which results in low coding precision. In view of this, how to improve coding precision is a problem to be solved.
Disclosure of Invention
The technical problem mainly solved by this application is to provide an encoding method, a decoding method, an electronic device, and a computer-readable storage medium that can improve coding precision.
To solve the above technical problem, a first aspect of the present application provides an encoding method, including: obtaining a current block in an image frame and geometrically segmenting the current block to obtain at least two blocks to be predicted, where at least some of the blocks to be predicted correspond to a matching template area comprising the matching reconstructed pixels of the block to be predicted; and performing intra-frame template matching prediction within a preset range of the current block using the matching template area of at least one block to be predicted, to obtain a template matching prediction block of the block to be predicted.
In order to solve the above technical problem, a second aspect of the present application provides a decoding method, including: receiving encoded data sent by an encoder; and decoding the encoded data to obtain a target decoding block corresponding to the current decoding block, where the encoded data is generated by the encoding method according to the first aspect.
To solve the above technical problem, a third aspect of the present application provides an electronic device, including: a memory and a processor coupled to each other, wherein the memory stores program data, the processor invoking the program data to perform the method of the first or second aspect.
To solve the above technical problem, a fourth aspect of the present application provides a computer-readable storage medium having stored thereon program data, which when executed by a processor, implements the method according to the first or second aspect.
According to the above scheme, a current block in an image frame is obtained and geometrically segmented, so that the current block is further divided into at least two blocks to be predicted and the prediction of the current block becomes finer. At least some of the blocks to be predicted correspond to a matching template area, which comprises the matching reconstructed pixels of the block to be predicted. Using the matching template area of at least one block to be predicted, intra-frame template matching prediction (IntraTMP) is performed within a preset range of the current block, yielding a template matching prediction block of that block. In this way, the current block is segmented to refine the encoding process into several blocks to be predicted, and intra-frame template matching prediction is applied to at least one of them using its matching template area to obtain its template matching prediction block, which improves coding precision.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person skilled in the art may derive other drawings from them without inventive effort. Wherein:
FIG. 1 is a flow chart of an embodiment of the encoding method of the present application;
FIG. 2 is a flow chart of another embodiment of the encoding method of the present application;
fig. 3 is a schematic view of an application scenario of an embodiment corresponding to step S201 in fig. 2;
fig. 4 is a schematic view of an application scenario of another embodiment corresponding to step S201 in fig. 2;
fig. 5 is a schematic diagram of an application scenario of an embodiment corresponding to step S202 in fig. 2;
fig. 6 is a schematic diagram of an application scenario of another embodiment corresponding to step S202 in fig. 2;
FIG. 7 is a flow chart of an embodiment of a decoding method of the present application;
FIG. 8 is a schematic diagram of an embodiment of an electronic device of the present application;
fig. 9 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without inventive effort fall within the protection scope of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" merely describes an association between objects and indicates three possible relationships; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects. Further, "a plurality" herein means two or more.
The encoding method provided by this application encodes image frames, and the decoding method decodes image frames encoded by the encoding method. An image frame may be a single image or a video frame obtained from a video. The execution bodies of the encoding and decoding methods are processors capable of accessing the video or images.
Referring to fig. 1, fig. 1 is a flow chart illustrating an embodiment of the encoding method of the present application. The method includes:
s101: and obtaining a current block in the image frame, and performing geometric segmentation on the current block to obtain at least two blocks to be predicted, wherein at least part of the blocks to be predicted correspond to a matching template area, and the matching template area comprises matching reconstruction pixels corresponding to the blocks to be predicted.
Specifically, a current block in an image frame is obtained and geometrically segmented, so that the current block is further divided into at least two blocks to be predicted and the prediction of the current block becomes finer. At least some of the blocks to be predicted correspond to a matching template area, which comprises the matching reconstructed pixels of the block to be predicted.
In one application mode, a current block in an image frame is obtained and its current template area is determined. The current block is geometrically segmented with a segmentation line to obtain two blocks to be predicted. If the extended segmentation line splits the current template area of the current block into two parts, each block to be predicted corresponds to its own matching template area. If the extended segmentation line does not split the current template area, the block to be predicted adjacent to the current template area corresponds to a matching template area, while the block to be predicted far from the current template area has no matching template area.
In another application mode, a current block in an image frame is obtained and its current template area is determined. The current block is geometrically segmented with segmentation lines at several angles to obtain several blocks to be predicted, and the segmentation lines are extended into the area outside the current block. The part of the current template area lying on the same side as a block to be predicted serves as the matching template area of that block. If the block to be predicted on one side of a segmentation line does not adjoin any part of the current template area, that block has no matching template area.
In an application scenario, the current block is segmented into two blocks to be predicted using the spatial geometric partitioning mode (Spatial Geometric Partitioning Mode, SGPM), where the position of a segmentation line is determined by the angle parameter and offset parameter of a specific partition. The angle parameter covers 24 angles quantized at unequal intervals over 360 degrees, and each angle corresponds to at most 4 offsets, yielding 64 candidate segmentation lines for segmenting the current block; at least one candidate segmentation line is used, which improves the accuracy of intra-frame prediction of the current block.
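To make the partition geometry concrete, the sketch below (Python, illustrative only) classifies the pixels of a w x h block to the two sides of a candidate segmentation line given by an angle and an offset. The angle and offset tables here are placeholders: SGPM's normative tables quantize the 24 angles at unequal intervals and assign at most 4 offsets per angle, for 64 lines in total.

```python
import numpy as np

def sgpm_partition_mask(w: int, h: int, angle_deg: float, offset: float) -> np.ndarray:
    """Assign each pixel of a w x h block to one side of a segmentation line.

    The line is modeled as passing near the block center, shifted by `offset`
    along the line normal; pixels with non-negative signed distance form
    partition 0, the rest partition 1.
    """
    theta = np.deg2rad(angle_deg)
    nx, ny = np.cos(theta), np.sin(theta)          # unit normal of the line
    ys, xs = np.mgrid[0:h, 0:w]
    d = (xs + 0.5 - w / 2) * nx + (ys + 0.5 - h / 2) * ny - offset
    return (d >= 0).astype(np.uint8)

# Placeholder candidate set: 24 angles x 4 offsets. The normative SGPM tables
# use unequal angle spacing and fewer offsets for some angles (64 lines total).
ANGLES = [i * 15.0 for i in range(24)]
OFFSETS = [-4.0, -1.5, 1.5, 4.0]
CANDIDATE_LINES = [(a, o) for a in ANGLES for o in OFFSETS]
```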
S102: and carrying out intra-frame template matching prediction in a preset range of the current block by utilizing at least one matching template area corresponding to the block to be predicted, so as to obtain a template matching prediction block of the block to be predicted.
Specifically, using the matching template area of at least one block to be predicted, intra-frame template matching prediction (IntraTMP) is performed within a preset range of the current block based on the matching template area, thereby obtaining a template matching prediction block of the block to be predicted. The preset range comprises already-reconstructed pixels accessible to the current block.
Further, after the predicted values in the template matching prediction block are obtained, the difference between the predicted values and the original pixel values, i.e. the residual, can be determined; the encoder then encodes based on the residual to compress the data amount of the video.
In one application mode, one block to be predicted is selected from all blocks to be predicted. Intra-frame template matching prediction is performed within the preset range of the current block using the matching template area of that block, yielding a reference template area and its corresponding reference block, and the reference block is taken as the template matching prediction block of the block to be predicted.
In another application mode, some of the blocks to be predicted are selected from all blocks to be predicted. Intra-frame template matching prediction is performed within the preset range of the current block using their matching template areas, yielding several reference template areas and corresponding reference blocks. Template cost values are computed from the reference template areas and the matching template areas, and the reference block with the smallest template cost value is taken as the template matching prediction block of the block to be predicted.
In a further application mode, all blocks to be predicted that have a matching template area are selected, and the same procedure is applied: intra-frame template matching prediction within the preset range of the current block yields several reference template areas and corresponding reference blocks, template cost values are computed from the reference template areas and the matching template areas, and the reference block with the smallest template cost value is taken as the template matching prediction block of the block to be predicted.
In an application scenario, the areas of the matching template areas of the blocks to be predicted are compared to find the block to be predicted whose matching template area is largest. A search is performed within the preset range of the current block using the matching template area of that block, yielding at least one reference template area and the reference block corresponding to each reference template area. Template cost values are computed from each reference template area and the matching template area, and the reference block corresponding to the reference template area with the smallest template cost value is taken as the template matching prediction block of the block to be predicted.
Specifically, comparing the areas of the matching template areas means comparing the numbers of matching reconstructed pixels they contain. The matching template area of the block to be predicted with the largest area is used to search within the preset range of the current block, yielding at least one reference template area and its corresponding reference block, where a reference template area has the same shape as the matching template area and a reference block has the same shape as the block to be predicted.
Further, the template cost value is computed from the differences between the reconstructed pixels in each reference template area and those in the matching template area, and the reference block corresponding to the reference template area with the smallest template cost value is taken as the template matching prediction block of the block to be predicted. Since only the block to be predicted with the largest template area undergoes intra-frame template matching prediction, coding efficiency is improved while the impact on coding precision is kept as small as possible. The template cost value may, for example, be computed as a sum of absolute differences (SAD); this application does not specifically limit it.
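The search loop itself can be sketched as follows; this is a minimal illustration, assuming the candidate positions have been pre-restricted to the reconstructed area and that the template cost is SAD as mentioned above. Names such as `recon` and `search_area` are assumptions for illustration, not APIs from any codec.

```python
import numpy as np

def intra_tmp_search(recon, tpl_values, tpl_offsets, blk_w, blk_h, search_area):
    """Return the reference block whose reference template best matches the
    matching template under SAD cost.

    recon:        2-D array of already-reconstructed pixels
    tpl_values:   reconstructed values of the matching template pixels
    tpl_offsets:  (dy, dx) of each template pixel relative to the block's
                  top-left corner (negative for above/left templates)
    search_area:  candidate top-left block positions, all chosen so that the
                  block and its template lie inside the reconstructed area
    """
    best_cost, best_pos = None, None
    for (y, x) in search_area:
        ref = np.array([recon[y + dy, x + dx] for dy, dx in tpl_offsets],
                       dtype=np.int64)
        cost = int(np.abs(ref - tpl_values).sum())   # SAD template cost
        if best_cost is None or cost < best_cost:
            best_cost, best_pos = cost, (y, x)
    y, x = best_pos
    return recon[y:y + blk_h, x:x + blk_w], best_cost
```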
In summary, the current block is segmented to refine the encoding process into several blocks to be predicted, and intra-frame template matching prediction is applied to at least one of them using its matching template area to obtain its template matching prediction block, which improves coding precision.
Referring to fig. 2, fig. 2 is a flow chart illustrating another embodiment of the encoding method of the present application, the method includes:
s201: and obtaining a current block in the image frame, and performing geometric segmentation on the current block to obtain at least two blocks to be predicted, wherein at least part of the blocks to be predicted correspond to a matching template area, and the matching template area comprises matching reconstruction pixels corresponding to the blocks to be predicted.
Specifically, a current block in an image frame is obtained and geometrically segmented with a segmentation line, so that it is divided into at least two blocks to be predicted. A block to be predicted corresponds to a matching template area, which comprises the matching reconstructed pixels of the block to be predicted.
In one application mode, obtaining the current block in an image frame and geometrically segmenting it into at least two blocks to be predicted includes: obtaining the current block in an image frame and determining its current template area; geometrically segmenting the current block with a segmentation line and extending the segmentation line into the area outside the current block, to obtain at least two blocks to be predicted and an initial reference area corresponding to at least some of them, where all initial reference areas together constitute the current template area.
Specifically, referring to fig. 3, fig. 3 is a schematic diagram of an application scenario of an embodiment corresponding to step S201 in fig. 2. The largest rectangular frame in fig. 3 is the current block, and the small rectangular frames adjacent to it above and on the left form the current template area; the segmentation line is indicated by a dotted line. The current block is geometrically segmented by the segmentation line to obtain the blocks to be predicted on its two sides, and the segmentation line is extended into the area outside the current block, so that the current template area is also divided into the areas on the two sides of the line. The part of the current template area on the same side as a block to be predicted is the matching template area of that block.
Referring to fig. 4, fig. 4 is a schematic diagram of an application scenario of another embodiment corresponding to step S201 in fig. 2. The largest rectangular frame in fig. 4 is the current block, and the small rectangular frames adjacent to it above and on the left form the current template area; the segmentation line is indicated by a dotted line. The current block is geometrically segmented by the segmentation line to obtain the blocks to be predicted on its two sides, and the line is extended into the area outside the current block. The side of the line holding the lower-right block to be predicted contains no part of the current template area, so the entire current template area serves as the matching template area of the upper-left block to be predicted. The current block can thus be segmented by lines of different angles and positions, the distribution of blocks to be predicted is randomized, and the accuracy of intra-frame prediction over the whole image frame is improved.
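The assignment of current-template pixels to partitions in figs. 3 and 4 reduces to a side-of-line test on the extended segmentation line. A minimal sketch, assuming the same line parameterization as in the earlier SGPM snippet and template coordinates given relative to the block's top-left corner:

```python
import math

def split_template_pixels(tpl_coords, w, h, angle_deg, offset):
    """Split current-template pixel coordinates (negative dy for the above
    template, negative dx for the left template) by the extended line."""
    theta = math.radians(angle_deg)
    nx, ny = math.cos(theta), math.sin(theta)
    part0, part1 = [], []
    for dy, dx in tpl_coords:
        d = (dx + 0.5 - w / 2) * nx + (dy + 0.5 - h / 2) * ny - offset
        (part0 if d >= 0 else part1).append((dy, dx))
    # Either list may be empty: the fig. 4 case, where one block to be
    # predicted gets the whole current template and the other gets none.
    return part0, part1
```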
S202: and carrying out intra-frame template matching prediction in a preset range of the current block by utilizing at least one matching template area corresponding to the block to be predicted, so as to obtain a template matching prediction block of the block to be predicted.
Optionally, before performing intra-frame template matching prediction within the preset range of the current block, the method includes: judging whether the number of matching reconstructed pixels of the block to be predicted exceeds a number threshold, the number threshold being a preset fixed number or positively correlated with the size of the current block. If it does, the block to be predicted is treated as one on which intra-frame template matching prediction can be performed. Otherwise, the block to be predicted is treated as one on which intra-frame template matching prediction cannot be performed; alternatively, the template area of the block to be predicted is expanded to obtain a modified template area, the matching template area is updated with the modified template area, and the block is then treated as one on which intra-frame template matching prediction can be performed.
Specifically, the number of matching reconstructed pixels of each block to be predicted is compared in turn with the number threshold; if it exceeds the threshold, the block is one on which intra-frame template matching prediction can be performed.
Further, if the threshold is not exceeded, the block to be predicted is treated as one on which intra-frame template matching prediction cannot be performed, so that unnecessary prediction is skipped and coding efficiency improves; or its template area is expanded to obtain a modified template area, and the matching template area is updated accordingly so that intra-frame template matching prediction becomes possible, which improves coding precision.
Specifically, the number threshold is a preset fixed number or positively correlated with the size of the current block. Comparing the number of matching reconstructed pixels of each block to be predicted with the threshold divides the blocks to be predicted into qualifying blocks, on which intra-frame template matching prediction can be performed, and non-qualifying blocks, on which it cannot.
In an application scenario, a fixed number T0 is preset: if the number of matching reconstructed pixels of a block to be predicted is greater than T0, the block qualifies; otherwise it does not. Alternatively, a fixed number T1 is preset: if the number of matching reconstructed pixels is less than T1, the block does not qualify; otherwise it does. This improves the efficiency of identifying non-qualifying blocks.
In another application scenario, a number threshold T2 positively correlated with the size of the current block is preset: if the number of matching reconstructed pixels of a block to be predicted is greater than T2, the block qualifies; otherwise it does not. Alternatively, a threshold T3, also positively correlated with the block size, is preset: if the number of matching reconstructed pixels is less than T3, the block does not qualify; otherwise it does. This improves the accuracy of distinguishing qualifying from non-qualifying blocks.
In a specific application scenario, the size of the current block is given by its width w and height h, and the number threshold is T = log2(w) + log2(h), so that the threshold is positively correlated with the block size. A block to be predicted whose number of matching reconstructed pixels is greater than T qualifies, and one whose number is less than or equal to T does not. In other specific application scenarios, the threshold T may instead be a linear function of w and h; this application does not specifically limit it.
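As a sketch, the qualification test then amounts to the following; the size-dependent rule implements T = log2(w) + log2(h) from the scenario above, and the fixed-number variant covers the T0/T1 thresholds.

```python
import math

def can_use_intra_tmp(num_matched_pixels, w, h, fixed_threshold=None):
    """Return True if the partition has enough matching reconstructed pixels
    for intra template matching prediction."""
    if fixed_threshold is not None:
        t = fixed_threshold                  # preset fixed number (T0/T1)
    else:
        t = math.log2(w) + math.log2(h)      # size-dependent threshold T
    return num_matched_pixels > t

# Example: a 32x16 block gives T = 5 + 4 = 9, so a partition with 9 or fewer
# matching reconstructed pixels would not qualify.
```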
In one application mode, expanding the template area of a block to be predicted to obtain its modified template area includes: expanding the current template area of the current block in each preset direction by the expansion length matched with that direction to obtain an expanded template area, and dividing the expanded template area by the segmentation line of the block to be predicted to obtain the modified template area, where the expansion length is a preset fixed length or positively correlated with the size of the current block.
Specifically, referring to fig. 5, fig. 5 is a schematic diagram of an application scenario of an embodiment corresponding to step S202 in fig. 2. The preset directions of the current template area of the current block are determined, and each preset direction corresponds to an expansion length. The current template area is expanded in each preset direction by the corresponding length; for example, in fig. 5, the template area of width w above the current block is extended to the right by the matched expansion length w', and the template area of height h on the left of the current block is extended downward by the matched expansion length h'. This yields an expanded template area consisting of the current template area plus the newly added area.
It should be noted that if the available reconstructed pixels in a preset direction do not reach the corresponding expansion length, expansion stops at the edge of the available reconstructed pixels.
Further, the expanded template area is divided by the segmentation line of the non-qualifying block to be predicted, and the part of the expanded template area on the same side of the line as that block serves as its matching template area.
Optionally, the expansion length matched with a preset direction may be a fixed length, or a length positively correlated with the width and height of the current block, for example a preset multiple of the width or height, which increases the diversity of expansion lengths.
In another application mode, expanding the template area of a block to be predicted to obtain its modified template area includes: expanding the current template area of the current block in each preset direction until the number of reconstructed pixels in the expanded template area reaches the expansion threshold matched with that direction, and dividing the expanded template area by the segmentation line of the block to be predicted to obtain the modified template area, where the expansion threshold is a preset fixed threshold or positively correlated with the size of the current block.
Specifically, the preset directions of the current template area of the current block are determined, and each preset direction corresponds to an expansion threshold. The current template area is expanded in each preset direction until the number of reference reconstructed pixels in the expanded template area reaches the matched expansion threshold, yielding an expanded template area consisting of the current template area plus the newly added area.
It should be noted that if the available reconstructed pixels in a preset direction do not suffice to reach the corresponding expansion threshold, expansion stops at the edge of the available reconstructed pixels.
Further, the expanded template area is divided by the segmentation line of the non-qualifying block to be predicted, and the part of the expanded template area on the same side of the line as that block serves as its matching template area.
Optionally, the expansion threshold may be a preset fixed threshold, or a value positively correlated with the width and height of the current block, for example a preset multiple of the number of pixels in the width or height direction, which increases the diversity of expansion thresholds.
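Both expansion variants can be sketched together; this is illustrative only, and it assumes a one-pixel-thick template row/column so that one unit of extension adds one reconstructed pixel. The clamping to the edge of the available reconstructed pixels matches the notes above.

```python
def expand_by_length(w, h, avail_right, avail_below, k=1.0):
    """Length-based variant: extend the above template rightward by w' = k*w
    and the left template downward by h' = k*h, clamped to availability."""
    ext_right = min(int(k * w), avail_right)
    ext_down = min(int(k * h), avail_below)
    return ext_right, ext_down

def expand_until_count(current_count, avail, target_count):
    """Count-based variant: grow the template one unit at a time until it
    holds `target_count` reconstructed pixels or the area runs out."""
    ext = 0
    while current_count < target_count and ext < avail:
        ext += 1
        current_count += 1   # assumes one new reconstructed pixel per unit
    return ext
```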
Further, after the matching template area of the block to be predicted is obtained, intra-frame template matching prediction (IntraTMP) can be performed within the preset range of the current block using the matching template area of at least one block to be predicted, yielding the template matching prediction block of the block to be predicted.
In an application mode, a block to be predicted corresponds to two matching template areas in the preset directions, and performing intra-frame template matching prediction within the preset range of the current block includes: searching within the preset range using all matching template areas of at least one block to be predicted, to obtain at least one reference template area and the reference block corresponding to each reference template area; and computing template cost values from each reference template area and the matching template areas, and taking the reference block corresponding to the reference template area with the smallest template cost value as the template matching prediction block of the block to be predicted.
Specifically, when a block to be predicted corresponds to two matching template areas in the preset directions, searching within the preset range of the current block with all of its matching template areas improves search precision; a reference template area has the same shape as the matching template areas, and a reference block has the same shape as the block to be predicted.
Further, the template cost value is computed from the differences between the reconstructed pixels in each reference template area and those in the matching template areas, and the reference block corresponding to the reference template area with the smallest template cost value is taken as the template matching prediction block, improving its precision.
In an embodiment, the search proceeds as follows: in response to the distance between the two matching template areas of the block to be predicted being greater than or equal to a distance threshold, at least one target template area is determined from the two matching template areas based on the ratio between their numbers of matching reconstructed pixels, and the search within the preset range of the current block is performed with the target template area, yielding at least one reference template area and its corresponding reference block.
Specifically, if the distance between the two matching template areas is greater than or equal to the distance threshold, at least one target template area is determined from them based on the ratio between their numbers of matching reconstructed pixels, and the search is performed with the target template area. Taking the distance and continuity between the matching template areas into account in this way improves the accuracy of intra-frame prediction.
In an application scenario, determining the target template areas and searching proceeds as follows: in response to the ratio between the numbers of matching reconstructed pixels in the two matching template areas being greater than a first ratio threshold, the matching template area with more matching reconstructed pixels is taken as the sole target template area, and the search within the preset range of the current block yields one reference template area and its reference block; in response to the ratio being greater than a second ratio threshold, both matching template areas are taken as target template areas, the search yields two reference template areas and corresponding initial reference blocks, and the two initial reference blocks are fused to obtain the reference block of the block to be predicted.
Specifically, when the ratio between the numbers of matching reconstructed pixels exceeds the first ratio threshold, the matching template area with fewer matching reconstructed pixels is discarded and only the one with more pixels serves as the target template area for the search. Prediction then relies only on the matching template area with the greater influence on the block to be predicted, which improves prediction efficiency.
Further, when the ratio exceeds the second ratio threshold (the first ratio threshold being larger than the second), both matching template areas serve as target template areas. Each is used to search within the preset range of the current block, yielding two reference template areas and corresponding initial reference blocks, and the two initial reference blocks are fused into the reference block of the block to be predicted. Predicting with both target template areas and fusing the results improves the accuracy of the final template matching prediction block.
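The selection logic can be summarized as below. The text fixes the two ratio branches; the behavior below the distance threshold and below the second ratio threshold is not spelled out, so those branches are marked as assumptions.

```python
def select_target_templates(n_big, n_small, dist, dist_thr,
                            ratio_thr1, ratio_thr2):
    """Choose target template area(s); n_big >= n_small are the matched
    reconstructed pixel counts of the two areas, ratio_thr1 > ratio_thr2."""
    if dist < dist_thr:
        return "use-both-jointly"     # assumed: treat the areas as one template
    ratio = n_big / n_small
    if ratio > ratio_thr1:
        return "larger-only"          # dominant area searches alone
    if ratio > ratio_thr2:
        return "both-then-fuse"       # two searches, fused reference block
    return "use-both-jointly"         # assumed fallback below both thresholds
```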
In an implementation scenario, fusing the two initial reference blocks includes: determining fusion weights for the two initial reference blocks based on the ratio between the numbers of matching reconstructed pixels in the two matching template areas; and weighting and summing the two initial reference blocks with the fusion weights to obtain the reference block of the block to be predicted.
Specifically, the fusion weights of the two initial reference blocks are determined from the ratio between the numbers of matching reconstructed pixels in the two matching template areas; the initial reference block backed by more matching reconstructed pixels receives the larger fusion weight. With these more reasonable weights, the two initial reference blocks are weighted and summed to obtain the template matching prediction block of the block to be predicted.
In a specific application scenario, referring again to fig. 3, the second ratio threshold is 2.5. The number of matching reconstructed pixels in the left and upper matching template areas of the block to be predicted in the upper-left corner of the current block is 42, and that block is predicted using the matching reconstructed pixels of all its matching template areas to obtain its template matching prediction block. For the block to be predicted in the lower-right corner, the left matching template area contains 22 matching reconstructed pixels and the upper one contains 86; since 86/22 = 3.9 > 2.5, both matching template areas of this block are taken as target template areas.
Further, referring to fig. 6, fig. 6 is a schematic diagram of an application scenario of an embodiment corresponding to step S202 in fig. 2; the block to be predicted in fig. 6 is the same as the lower-right block in fig. 3. Searching and matching with the target template area above the block to be predicted yields the initial prediction block Pb, and searching with the target template area on its left yields the initial prediction block Ps. The ratio of the numbers of matching reconstructed pixels in the two target template areas is 3.9, so the larger pixel count is given the higher fusion weight, at a ratio of 4:1, and the predicted value in the reference block is determined as P = 4/5 x Pb + 1/5 x Ps.
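A count-proportional fusion reproduces the 4:1 example: 86/(86+22) is about 0.80, i.e. roughly 4/5. The sketch below assumes the weights are simply proportional to the matched-pixel counts, which is one way to satisfy the "more pixels, larger weight" rule; the text itself only fixes the rounded 4:1 outcome.

```python
import numpy as np

def fuse_initial_blocks(pb, ps, n_b, n_s):
    """Weighted fusion of the two initial prediction blocks Pb and Ps, the
    block backed by more matched reconstructed pixels getting more weight."""
    wb = n_b / (n_b + n_s)
    return wb * pb + (1.0 - wb) * ps

# Example from the text: n_b = 86, n_s = 22 gives wb ~ 0.80, i.e. roughly
# P = 4/5 * Pb + 1/5 * Ps.
```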
S203: obtaining intra-frame intra-block prediction blocks, wherein the intra-frame intra-block prediction blocks are obtained based on a plurality of intra-frame prediction modes, and the intra-frame intra-partition prediction block and the template matching prediction block correspond to the same block to be predicted.
Specifically, the intra-frame intra-partition prediction block is obtained based on several intra-frame prediction modes, i.e. in the conventional manner described in the Background section.
S204: fusing the intra-frame intra-partition prediction block and the template matching prediction block based on at least one set of weighting coefficients, and obtaining a fusion prediction block and determining a fusion cost value corresponding to the fusion prediction block.
Specifically, when the intra-frame intra-partition prediction block corresponding to the block to be predicted is obtained, the intra-frame intra-partition prediction block and the template matching prediction block are weighted and summed using at least one set of weight coefficients to obtain a fusion prediction block, where the intra-frame intra-partition prediction block is obtained based on several intra-frame prediction modes.
Further, the fusion cost value corresponding to the fusion prediction block is determined, where the fusion cost value is computed either from the fusion prediction block and the original pixels of the block to be predicted, or from the fusion prediction block and the pixels in the template area corresponding to the block to be predicted.
In one application mode, the fusion proceeds as follows: several sets of weight coefficients are obtained; the intra-frame intra-partition prediction block and the template matching prediction block are weighted and summed with each set of weight coefficients, yielding several prediction blocks to be screened; the prediction block to be screened with the lowest cost value is taken as the fusion prediction block, and its fusion cost value is determined.
Specifically, several preset sets of weight coefficients are obtained, and each set is used to weight and sum the intra-frame intra-partition prediction block and the template matching prediction block, yielding several prediction blocks to be screened. The cost value of each prediction block to be screened is determined either from its differences to the original pixels of the block to be predicted, or from its differences to the pixels in the template area corresponding to the block to be predicted. The prediction block to be screened with the lowest cost value is then selected as the fusion prediction block and its fusion cost value is determined. Increasing the diversity of weights in this way raises the probability of obtaining a fusion prediction block with a lower fusion cost value.
In another application mode, the fusion proceeds as follows: one set of weight coefficients is obtained; the weight coefficients are adjusted based on the cost value of the intra-frame intra-partition prediction block and the cost value of the template matching prediction block, yielding target weights for the two blocks; the two blocks are weighted and summed with the target weights to obtain the fusion prediction block, and its fusion cost value is determined, where a target weight is negatively correlated with the corresponding cost value.
Specifically, a preset set of weight coefficients is obtained. The cost value of the intra-frame intra-partition prediction block is determined from its differences to the original pixels of the block to be predicted, or to the pixels in the template area corresponding to the block to be predicted; the cost value of the template matching prediction block is determined in the same way. The weight coefficients are then adjusted using these two cost values such that each target weight is negatively correlated with its cost value, the fusion prediction block is obtained by weighting with the target weights, and its fusion cost value is determined. This improves the precision of the fusion prediction block.
In a specific application scenario, the preset weight of the intra-frame intra-partition prediction block is 0.6 and that of the template matching prediction block is 0.4. Let the template matching prediction block be P0 with cost value R0, and the intra-frame intra-partition prediction block be P1 with cost value R1. If R0 = R1, the initial weights remain unchanged; if R0 > R1, the target weight of P0 is w0 = 0.4 x R1/(R0 + R1) and the target weight of P1 is w1 = 1 - w0; if R0 < R1, the target weight of P0 is w0 = 0.4 + 0.6 x R1/(R0 + R1) and w1 = 1 - w0. In other specific application scenarios, the preset weight coefficients may take other values, as long as the adjustment keeps each target weight negatively correlated with its cost value; this application does not specifically limit it.
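The adjustment rule translates directly into code; a minimal sketch of the (0.6, 0.4) case described above:

```python
def adjust_weights(r0, r1, w0_init=0.4):
    """Adjust the preset weights using the cost values R0 (template matching
    prediction block P0) and R1 (intra-frame intra-partition prediction
    block P1); a lower cost earns a higher target weight."""
    if r0 == r1:
        w0 = w0_init                        # keep the initial weights
    elif r0 > r1:
        w0 = 0.4 * r1 / (r0 + r1)           # P0 is worse: shrink its weight
    else:
        w0 = 0.4 + 0.6 * r1 / (r0 + r1)     # P0 is better: grow its weight
    return w0, 1.0 - w0                     # (weight of P0, weight of P1)
```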
S205: and determining a target prediction block corresponding to the block to be predicted from the fused prediction blocks based on the fused cost value.
Specifically, the fusion prediction block with the lowest fusion cost value is used as the target prediction block corresponding to the block to be predicted.
Further, after the target prediction block is determined from the fusion prediction blocks based on the fusion cost values, the method includes: generating a first syntax element, where the first syntax element indicates the weight coefficients to the decoder; and, in response to the fusion cost value being computed from the fusion prediction block and the original pixels of the block to be predicted, generating a second syntax element, where the second syntax element instructs the decoder to perform, during prediction, intra-frame template matching prediction within the preset range of the current block using the matching template area of at least one block to be predicted, to obtain the template matching prediction block.
Specifically, the first syntax element corresponds to the weight coefficients used in the steps above and instructs the decoder to use the same weight coefficients during decoding, ensuring decoding accuracy.
Further, if the fusion cost value is computed from the fusion prediction block and the original pixels of the block to be predicted, a second syntax element is generated. Because the decoder cannot directly access the pixels of the current decoding block, the second syntax element serves as the enable/disable flag of the method of this application: it indicates that the decoder should apply the intra-frame template matching prediction of this encoding method during prediction, and tells the decoder how to obtain the predicted pixel information of the current decoding block so that decoding can be completed.
It can be understood that if the various cost values, such as the fusion cost value, are obtained based on the template area, the decoding end can also obtain the template area and perform the cost value comparison itself, and can thus complete the decoding process of the current decoding block without such a flag.
In an application scenario, if weighting needs to be applied to obtain the fusion prediction block, a first syntax element sgpm_inter_weight is further generated, where sgpm_inter_weight equal to 0 indicates that the block to be predicted ultimately does not undergo weighted prediction, and sgpm_inter_weight equal to 1 indicates that it does. When sgpm_inter_weight is equal to 1, the conventional intra prediction mode of the block to be predicted is transmitted according to the prior art, and the weight coefficient information is transmitted as well. The preset weight coefficients {(0.5, 0.5), (0.6, 0.4), (0.7, 0.3)} correspond to indices 0 to 2 respectively; if the finally selected weights are (0.6, 0.4), the weight coefficient information index = 1 needs to be transmitted, so that the decoding end obtains the accurate weight coefficients.
Further, when the cost value is related to the original pixels in the block to be predicted, a CU-level second syntax element sgpm_inter_mode is generated, where sgpm_inter_mode equal to 0 indicates that the prior art is used, and sgpm_inter_mode equal to 1 indicates that the scheme of the present disclosure is used.
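A rough encoder-side sketch of this signalling follows. The bitstream-writer interface (write_flag, write_index) is hypothetical; only the element names sgpm_inter_weight and sgpm_inter_mode and the preset weight table come from the scenarios above.

```python
PRESET_WEIGHTS = [(0.5, 0.5), (0.6, 0.4), (0.7, 0.3)]  # indices 0, 1, 2

def signal_cu(writer, used_weighting: bool, weights, cost_uses_original: bool):
    # First syntax element: whether the block to be predicted finally
    # underwent weighted prediction.
    writer.write_flag("sgpm_inter_weight", 1 if used_weighting else 0)
    if used_weighting:
        # Transmit the index of the selected preset weight pair,
        # e.g. index = 1 for (0.6, 0.4).
        writer.write_index("weight_idx", PRESET_WEIGHTS.index(weights))
    # Second syntax element: 1 only when the fusion cost was computed
    # against original pixels, which the decoder cannot reproduce itself.
    writer.write_flag("sgpm_inter_mode", 1 if cost_uses_original else 0)
```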
In this embodiment, the current block in the image frame is geometrically segmented by a segmentation line into at least two blocks to be predicted, and the blocks to be predicted are made to correspond to matching template areas, where a matching template area contains the matching reconstruction pixels corresponding to its block to be predicted. By taking the distance between and the continuity of the matching template areas into account, the accuracy of intra-frame prediction using the matching template areas is improved. After the intra prediction block corresponding to the block to be predicted is obtained, the intra prediction block and the template matching prediction block are weighted and summed using at least one set of weight coefficients to obtain a fusion prediction block, so that a more accurate target prediction block can be determined based on the fusion prediction block and its corresponding fusion cost value.
Referring to fig. 7, fig. 7 is a flow chart illustrating an embodiment of a decoding method according to the present application, the method includes:
s701: and receiving the coded data sent by the encoder.
Specifically, the encoded data is obtained by the encoding method in any of the above embodiments; for the related content, refer to the detailed description of the above method embodiments, which is not repeated herein.
S702: and decoding the encoded data to obtain a target decoding block corresponding to the current decoding block.
Specifically, the same prediction operation is performed at the decoding end, so as to obtain a target decoding block corresponding to the current decoding block.
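For illustration, a decoder-side parsing sketch under the same assumptions as the encoder sketch above; the bitstream-reader interface is hypothetical.

```python
PRESET_WEIGHTS = [(0.5, 0.5), (0.6, 0.4), (0.7, 0.3)]  # same table as encoder

def parse_cu(reader):
    # Returns (use_scheme, weights): whether the present scheme applies
    # to this CU and, if weighted prediction is used, which preset
    # weight pair was selected (None means no weighted prediction).
    use_scheme = reader.read_flag("sgpm_inter_mode") == 1
    weights = None
    if reader.read_flag("sgpm_inter_weight") == 1:
        weights = PRESET_WEIGHTS[reader.read_index("weight_idx")]
    return use_scheme, weights
```

With these values parsed, the decoder repeats the encoder's intra-frame template matching prediction and, where applicable, the same weighted fusion, operating only on already-reconstructed pixels, so that both sides derive the same target block.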
Referring to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of an electronic device according to the present application, where the electronic device 80 includes a memory 801 and a processor 802 coupled to each other, where the memory 801 stores program data (not shown), and the processor 802 invokes the program data to implement a method according to any one of the above embodiments, and the description of the related content refers to the detailed description of the above method embodiments, which is not repeated herein.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a computer readable storage medium 90 of the present application, where the computer readable storage medium 90 stores program data 900, and the program data 900 when executed by a processor implements a method in any of the above embodiments, and details of the related content are described in the above embodiments, which are not repeated herein.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the patent application, and all equivalent structures or equivalent processes using the descriptions and the contents of the present application or other related technical fields are included in the scope of the patent application.

Claims (15)

1. A method of encoding, the method comprising:
obtaining a current block in an image frame, and performing geometric segmentation on the current block to obtain at least two blocks to be predicted; at least part of the blocks to be predicted correspond to a matching template area, and the matching template area comprises matching reconstruction pixels corresponding to the blocks to be predicted;
and carrying out intra-frame template matching prediction in a preset range of the current block by utilizing at least one matching template area corresponding to the block to be predicted, so as to obtain a template matching prediction block of the block to be predicted.
2. The encoding method according to claim 1, wherein before the performing intra-frame template matching prediction within a preset range of the current block by using at least one matching template area corresponding to the block to be predicted to obtain the template matching prediction block of the block to be predicted, the method includes:
judging whether the number of the matching reconstruction pixels corresponding to the block to be predicted exceeds a number threshold; wherein the number threshold is a preset fixed number or is positively correlated with the size of the current block;
if yes, taking the block to be predicted as a block to be predicted which can be subjected to intra-frame template matching prediction;
otherwise, taking the block to be predicted as a block to be predicted which cannot be subjected to intra-frame template matching prediction; or expanding the template area of the block to be predicted to obtain a modified template area corresponding to the block to be predicted, updating the matching template area of the block to be predicted by using the modified template area, and taking the block to be predicted as a block to be predicted which can be subjected to intra-frame template matching prediction.
3. The encoding method according to claim 2, wherein the expanding the template area of the block to be predicted to obtain the modified template area corresponding to the block to be predicted includes:
expanding the current template area of the current block in each preset direction corresponding to the current block according to the expansion length matched with that preset direction to obtain an expanded template area, and dividing the expanded template area by using the dividing line corresponding to the block to be predicted to obtain the modified template area corresponding to the block to be predicted; wherein the expansion length is a preset fixed length or is positively correlated with the size of the current block; or,
expanding the current template area of the current block in a preset direction corresponding to the current block until the number of reconstructed pixels in the expanded template area reaches an expansion threshold matched with the preset direction, to obtain an expanded template area, and dividing the expanded template area by using the dividing line corresponding to the block to be predicted to obtain the modified template area corresponding to the block to be predicted; wherein the expansion threshold is a preset fixed threshold or is positively correlated with the size of the current block.
4. The encoding method according to claim 1, wherein the performing intra-frame template matching prediction within a preset range of the current block by using at least one matching template region corresponding to the block to be predicted to obtain a template matching prediction block of the block to be predicted includes:
comparing the areas of the matching template areas corresponding to the blocks to be predicted to obtain the block to be predicted whose matching template area has the largest area;
searching in a preset range of the current block by utilizing a matching template area corresponding to the block to be predicted with the largest area to obtain at least one reference template area corresponding to the block to be predicted and a reference block corresponding to the reference template area;
and obtaining a template cost value based on each reference template area and each matching template area, and taking the reference block corresponding to the reference template area with the minimum template cost value as the template matching prediction block of the block to be predicted.
5. The encoding method according to claim 1, wherein the block to be predicted corresponds to the matching template areas in two preset directions, and the performing intra-frame template matching prediction within a preset range of the current block by using at least one matching template area corresponding to the block to be predicted to obtain a template matching prediction block of the block to be predicted includes:
searching in a preset range of the current block by utilizing all the matching template areas corresponding to at least one block to be predicted to obtain at least one reference template area corresponding to the block to be predicted and a reference block corresponding to the at least one reference template area;
and obtaining a template cost value based on each reference template area and each matching template area, and taking the reference block corresponding to the reference template area with the minimum template cost value as the template matching prediction block of the block to be predicted.
6. The encoding method according to claim 5, wherein searching within a preset range of the current block by using all the matching template areas corresponding to at least one block to be predicted, to obtain at least one reference template area corresponding to the block to be predicted and a reference block corresponding to the reference template area, comprises:
in response to the distance between the two matching template areas corresponding to the block to be predicted being greater than or equal to a distance threshold, determining at least one target template area from the two matching template areas based on the ratio between the numbers of matching reconstruction pixels in the two matching template areas, and searching within a preset range of the current block by using the target template area corresponding to the block to be predicted to obtain at least one reference template area corresponding to the block to be predicted and a reference block corresponding to the at least one reference template area.
7. The encoding method according to claim 6, wherein determining at least one target template region from the two matching template regions based on a ratio between the numbers of matching reconstructed pixels in the two matching template regions, and searching within a preset range of the current block by using the target template region corresponding to the block to be predicted, to obtain at least one reference template region corresponding to the block to be predicted and a reference block corresponding to the reference template region, comprises:
in response to the ratio between the numbers of matching reconstruction pixels in the two matching template areas being greater than a first ratio threshold, taking the matching template area with the larger number of matching reconstruction pixels as the target template area, and searching within the preset range of the current block by using the target template area corresponding to the block to be predicted to obtain a reference template area corresponding to the block to be predicted and a reference block corresponding to the reference template area;
and in response to the ratio between the numbers of matching reconstruction pixels in the two matching template areas being greater than a second ratio threshold, taking the two matching template areas respectively as the target template areas, searching within a preset range of the current block by using the target template areas corresponding to the block to be predicted to obtain two reference template areas and the initial reference blocks corresponding to the two reference template areas, and fusing the two initial reference blocks to obtain the reference block corresponding to the block to be predicted.
8. The encoding method according to claim 7, wherein the fusing the two initial reference blocks to obtain the reference block corresponding to the block to be predicted includes:
determining fusion weights corresponding to the two initial reference blocks based on the ratio between the numbers of matching reconstruction pixels in the two matching template areas;
and carrying out weighted summation on the two initial reference blocks by utilizing the fusion weight to obtain the reference block corresponding to the block to be predicted.
9. The encoding method according to claim 1, wherein after the performing intra-frame template matching prediction within a preset range of the current block by using at least one matching template area corresponding to the block to be predicted to obtain the template matching prediction block of the block to be predicted, the method includes:
obtaining an intra-frame intra-partition prediction block, wherein the intra-frame intra-partition prediction block is obtained based on a plurality of intra-frame prediction modes, and the intra-frame intra-partition prediction block and the template matching prediction block correspond to the same block to be predicted;
fusing the intra-frame intra-partition prediction block and the template matching prediction block based on at least one set of weight coefficients, obtaining a fusion prediction block and determining a fusion cost value corresponding to the fusion prediction block; wherein the fusion cost value is obtained based on the fusion prediction block and the original pixels in the block to be predicted, or the fusion cost value is obtained based on the fusion prediction block and the pixels in the template area corresponding to the block to be predicted;
and determining a target prediction block corresponding to the block to be predicted from the fused prediction blocks based on the fused cost value.
10. The encoding method according to claim 9, wherein the fusing the intra-frame intra-partition prediction block and the template matching prediction block based on at least one set of weight coefficients to obtain a fusion prediction block and determining a fusion cost value corresponding to the fusion prediction block includes:
obtaining a plurality of groups of weight coefficients, respectively carrying out weighted summation on the intra-frame intra-partition prediction block and the template matching prediction block by utilizing the plurality of groups of weight coefficients, obtaining a plurality of prediction blocks to be screened, taking the prediction block to be screened with the lowest cost value as the fusion prediction block, and determining the fusion cost value corresponding to the fusion prediction block; or,
obtaining a set of the weight coefficients, adjusting the weight coefficients based on the cost value corresponding to the intra-frame intra-partition prediction block and the cost value corresponding to the template matching prediction block to obtain target weights corresponding to the intra-frame intra-partition prediction block and the template matching prediction block, performing weighted summation on the intra-frame intra-partition prediction block and the template matching prediction block with the target weights, and obtaining the fusion prediction block and determining the fusion cost value corresponding to the fusion prediction block; wherein the target weights are negatively correlated with the cost values.
11. The encoding method according to claim 9, wherein after determining the target prediction block corresponding to the block to be predicted from the fused prediction blocks based on the fused cost value, the method comprises:
generating a first syntax element; wherein the first syntax element is for indicating the weight coefficient to a decoder;
generating a second syntax element in response to the fusion cost value being obtained based on the fusion prediction block and the original pixels in the block to be predicted; wherein the second syntax element is used for instructing a decoder to perform, in the prediction process, the step of performing intra-frame template matching prediction within a preset range of the current block by using at least one matching template area corresponding to the block to be predicted to obtain a template matching prediction block of the block to be predicted.
12. The encoding method according to claim 1, wherein the obtaining the current block in the image frame, geometrically partitioning the current block to obtain at least two blocks to be predicted, includes:
obtaining a current block in the image frame, and determining a current template area corresponding to the current block;
geometrically segmenting the current block by using a segmentation line, and extending the segmentation line to an area outside the current block to obtain at least two blocks to be predicted corresponding to the current block and an initial reference area corresponding to at least part of the blocks to be predicted; wherein all of the initial reference areas constitute the current template area.
13. A decoding method, the method comprising:
receiving encoded data sent by an encoder;
decoding the encoded data to obtain a target decoding block corresponding to the current decoding block; wherein the encoded data is processed by the encoding method according to any one of claims 1 to 12.
14. An electronic device, comprising: a memory and a processor coupled to each other, wherein the memory stores program data that the processor invokes to perform the method of any of claims 1-12 or 13.
15. A computer readable storage medium having stored thereon program data, which when executed by a processor implements the method of any of claims 1-12 or 13.
CN202211743996.6A 2022-12-30 2022-12-30 Encoding method, decoding method, electronic device, and computer-readable storage medium Pending CN116074537A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211743996.6A CN116074537A (en) 2022-12-30 2022-12-30 Encoding method, decoding method, electronic device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211743996.6A CN116074537A (en) 2022-12-30 2022-12-30 Encoding method, decoding method, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN116074537A true CN116074537A (en) 2023-05-05

Family

ID=86174297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211743996.6A Pending CN116074537A (en) 2022-12-30 2022-12-30 Encoding method, decoding method, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN116074537A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024255897A1 (en) * 2023-06-14 2024-12-19 Guangdong Oppo Mobile Telecommunications Corp., Ltd. System and method for intra prediction using fractional-pel block vector
WO2025049984A3 (en) * 2023-09-01 2025-04-24 Beijing Dajia Internet Information Technology Co., Ltd Methods and devices of extrapolation filter-based prediction mode
WO2025107200A1 (en) * 2023-11-22 2025-05-30 深圳传音控股股份有限公司 Processing method, processing device, and storage medium
WO2025123743A1 (en) * 2023-12-12 2025-06-19 中兴通讯股份有限公司 Intra-frame template matching prediction method, processing node, and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination