CN112801876B - Information processing method and device, electronic equipment and storage medium - Google Patents
- Publication number: CN112801876B
- Application number: CN202110168959.6A
- Authority
- CN
- China
- Prior art keywords
- image
- images
- frame
- reference image
- auxiliary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
Abstract
The embodiment of the application discloses an information processing method, an information processing apparatus, an electronic device, and a storage medium. After a screen capturing instruction is obtained, N frames of images are obtained in response to the instruction; the N frames of images include at least a reference image, which is the display image of the content shown in the display output area of the display screen when the screen capturing instruction is responded to. A screen capturing image is then generated based on the obtained N frames of images; the screen capturing image includes at least the display content of the reference image, and its resolution is higher than that of the reference image. In the present application, when acquiring a screen capturing image, the display image of the display content at the time of the instruction (i.e., the reference image) is not taken directly as the screen capturing image; instead, N frames of images including at least the reference image are acquired and the screen capturing image is generated based on them, so that a screen capturing image with a higher resolution can be obtained.
Description
Technical Field
The present application relates to the field of information processing technologies, and in particular, to an information processing method, an information processing apparatus, an electronic device, and a storage medium.
Background
When a user browses information on an electronic device, the content displayed on the display screen is often saved through the screen capturing function provided by the device. However, the resolution of an image captured by the current screen capturing function equals the resolution of the display screen; if the display screen has a low resolution, the captured image will also have a low resolution.
Therefore, how to capture the screen so as to obtain a screen capturing image with a higher resolution is a technical problem to be solved.
Disclosure of Invention
The application aims to provide an information processing method and apparatus, an electronic device, and a storage medium, with the following technical solution:
an information processing method, the method comprising:
acquiring a screen capturing instruction;
responding to the screen capturing instruction to obtain N frames of images, wherein the N frames of images include at least a reference image, and the reference image is the display image of the display content of a display output area of a display screen when the screen capturing instruction is responded to;
generating a screen capturing image based on the N frames of images; the screen capturing image at least comprises display content of the reference image, and the resolution of the screen capturing image is higher than that of the reference image.
Preferably, the generating a screen capturing image based on the N frames of images includes:
generating a screen capturing image based on an image intelligent engine processing the reference image and M frames of auxiliary images temporally related to the reference image;
wherein the M frames of auxiliary images belong to the N frames of images.
In the above method, preferably, the processing, by the image intelligent engine, of the reference image and the M frames of auxiliary images temporally related to the reference image includes:
performing motion compensation on each of the M frames of auxiliary images according to the reference image to obtain M frames of compensated auxiliary images;
processing the reference image and the M frames of compensated auxiliary images based on the image intelligent engine to generate the screen capturing image.
In the above method, preferably, the performing motion compensation on the M frames of auxiliary images according to the reference image to obtain M frames of compensated auxiliary images includes:
for any pixel in an auxiliary image, acquiring the error between that pixel and the corresponding pixel in the reference image;
acquiring the weight corresponding to the pixel in the auxiliary image according to the error, wherein the larger the error, the smaller the weight corresponding to the pixel in the auxiliary image;
weighting and summing the pixel and the corresponding pixel in the reference image according to the weight corresponding to the pixel in the auxiliary image, to obtain the motion-compensated pixel for that position in the auxiliary image;
wherein the sum of the weight corresponding to the pixel in the auxiliary image and the weight corresponding to the corresponding pixel in the reference image is 1.
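The per-pixel compensation steps above can be sketched as follows. The exponential error-to-weight mapping is an illustrative assumption: the method only requires that a larger error yield a smaller auxiliary-pixel weight and that the two weights at each position sum to 1.

```python
import numpy as np

def motion_compensate(aux, ref, sigma=25.0):
    """Per-pixel motion compensation of an auxiliary frame against the
    reference frame. The exp(-error/sigma) mapping is an assumption;
    any monotonically decreasing mapping satisfying the sum-to-1
    constraint fits the description above."""
    aux = aux.astype(np.float64)
    ref = ref.astype(np.float64)
    error = np.abs(aux - ref)          # per-pixel error
    w_aux = np.exp(-error / sigma)     # larger error -> smaller weight
    w_ref = 1.0 - w_aux                # the two weights sum to 1
    return w_aux * aux + w_ref * ref   # weighted sum per pixel
```

Where the auxiliary pixel equals the reference pixel the error is 0, so the compensated frame keeps the auxiliary value; where they differ strongly, the result is pulled toward the reference.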
In the above method, preferably, the generating a screen capturing image based on the image intelligent engine processing the reference image and the M frames of compensated auxiliary images includes:
interpolating the reference image to obtain an interpolated image;
extracting features from the reference image and the M frames of compensated auxiliary images through a feature extraction layer of the image intelligent engine to obtain an initial feature map sequence;
performing residual feature extraction on the initial feature map sequence through a residual learning network layer of the image intelligent engine to obtain a residual feature map sequence;
upsampling the residual feature map sequence through an upsampling convolution layer of the image intelligent engine to obtain a residual feature map, the residual feature map representing detail information of the display content in the interpolated image;
performing residual connection on the interpolated image and the residual feature map through a residual connection layer of the image intelligent engine to obtain the screen capturing image.
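The steps above depend on a learned image intelligent engine; the sketch below shows only the outer structure of the pipeline — interpolation of the reference image plus a residual connection with a detail map — using a naive bilinear interpolation. The learned feature extraction, residual learning, and upsampling layers are abstracted into a `residual_map` input, which is an assumption for illustration only.

```python
import numpy as np

def upscale_bilinear(img, scale=2):
    """Naive bilinear interpolation of a 2-D image (the 'interpolated
    image' of step 1)."""
    h, w = img.shape
    H, W = h * scale, w * scale
    ys = (np.arange(H) + 0.5) / scale - 0.5
    xs = (np.arange(W) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0, 1)[:, None]
    wx = np.clip(xs - x0, 0, 1)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def generate_screen_capture(ref, residual_map):
    """Final residual connection: the interpolated reference plus the
    detail (residual) map that the learned layers would predict."""
    return upscale_bilinear(ref) + residual_map
```

The residual connection means the network only has to learn the high-frequency detail missing from the interpolated image, rather than the whole high-resolution image.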
In the above method, preferably, the extracting features from the reference image and the M frames of compensated auxiliary images through the feature extraction layer of the image intelligent engine to obtain an initial feature map sequence includes: performing multi-scale feature extraction on the reference image and the M frames of compensated auxiliary images through the feature extraction layer to obtain feature sequences at multiple scales; and fusing the feature sequences at the multiple scales to obtain the initial feature map sequence.
The number of residual blocks in the residual learning network layer is greater than a threshold value.
In the above method, preferably, the obtaining N frames of images includes:
obtaining a reference image and copying the reference image N-1 times to obtain the N frames of images;
or
acquiring N consecutive frames of images from an image sequence.
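The two strategies above can be sketched as follows. Centering the window of consecutive frames on the reference is an illustrative assumption; the text only requires N consecutive frames containing the reference.

```python
def obtain_n_frames(reference, n, image_sequence=None):
    """Obtain N frames by either strategy: copy the reference N-1 times
    (static content), or take N consecutive frames containing the
    reference from an image sequence such as a video."""
    if image_sequence is None:
        return [reference] * n  # the reference copied N-1 times
    idx = image_sequence.index(reference)
    # Clamp the window so it stays inside the sequence (assumed policy).
    start = max(0, min(idx - n // 2, len(image_sequence) - n))
    return image_sequence[start:start + n]  # N consecutive frames
```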
An information processing apparatus comprising:
the instruction acquisition module is used for acquiring a screen capturing instruction;
The to-be-processed image obtaining module is used for responding to the screen capturing instruction to obtain N frames of images, wherein the N frames of images at least comprise reference images, and the reference images are display images of display contents of a display output area of a display screen when responding to the screen capturing instruction;
The screen capturing image generating module is used for generating a screen capturing image based on the N frames of images; the screen capturing image at least comprises display content of the reference image, and the resolution of the screen capturing image is higher than that of the reference image.
An electronic device comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the information processing method as claimed in any one of the above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, implements the steps of the information processing method as claimed in any one of the preceding claims.
According to the information processing method and apparatus, the electronic device, and the storage medium described above, after a screen capturing instruction is obtained, N frames of images are obtained in response to the instruction; the N frames of images include at least a reference image, which is the display image of the display content of the display output area of the display screen when the screen capturing instruction is responded to. A screen capturing image is generated based on the obtained N frames of images; it includes at least the display content of the reference image, and its resolution is higher than that of the reference image. In the present application, when a screen capturing image is acquired, the display image at the time of the instruction (i.e., the reference image) is not taken directly as the screen capturing image; instead, N frames of images including at least the reference image are acquired and the screen capturing image is generated based on them, so that a screen capturing image with a higher resolution can be obtained.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed for the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flowchart of an implementation of an information processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of an implementation of processing a reference image and an M-frame auxiliary image related to a reference image timing based on an image intelligent engine according to an embodiment of the present application;
FIG. 3 is a flowchart of an implementation of motion compensation for a j-th auxiliary image in M-frame auxiliary images corresponding to an i-th reference image according to the i-th reference image according to an embodiment of the present application;
FIG. 4 is a system architecture diagram of a screen capturing image corresponding to an ith frame reference image generated by processing the ith frame reference image and an M frame compensation auxiliary image based on an image intelligent engine according to an embodiment of the present application;
FIG. 5 is a flowchart of one implementation of generating a screenshot image corresponding to an ith frame reference image based on an image intelligent engine processing the ith frame reference image and an M frame compensation auxiliary image according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a residual block according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an information processing apparatus according to an embodiment of the present application;
FIG. 8 is a block diagram of a hardware configuration of an information processing apparatus according to an embodiment of the present application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in other sequences than those illustrated herein.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without any inventive effort, are intended to be within the scope of the application.
The information processing method provided by the embodiment of the present application is applied to an electronic device having a display screen. While content is being output in the display output area of the display screen, the user can operate the electronic device at any time, so that the electronic device can obtain a screen capturing instruction.
An implementation flowchart of the information processing method provided by the embodiment of the present application is shown in FIG. 1, and may include:
Step S101: obtaining a screen capturing instruction.
In the process of using the electronic equipment, a user can perform preset operation on the electronic equipment at any time so as to trigger the electronic equipment to generate a screen capturing instruction.
For example, if the user sees content to be kept while browsing a web page, a first screen capturing instruction may be triggered so as to obtain a screen capturing image containing the content displayed on the display screen. If the content the user wants to keep is relatively long, the display screen cannot display all of it at once. In this case, the user can trigger the first screen capturing instruction multiple times: each time the instruction is triggered, a screen capturing image of the content currently displayed in the display output area is obtained; the user then scrolls the web page so that the content not yet displayed appears in the display output area, triggers the first screen capturing instruction again to obtain another screen capturing image, and so on, until all the content the user wants to keep has been captured.
Alternatively, if the user wants to keep more content, the user may trigger a second screen capturing instruction (also referred to as a scrolling screen capturing instruction, or a long screen capturing instruction), so that a single image containing all the content to be kept can be obtained without multiple operations by the user.
For another example, if the user sees a picture of interest while watching a video, a first screen capturing instruction may be triggered while the picture of interest is displayed in the display output area of the display screen, so as to obtain a screen capturing image containing that picture. In addition, if the user wants to save a short segment of the video (i.e., to keep multiple consecutive frames of the video), a third screen capturing instruction (also referred to as a screen recording instruction) may be triggered, so as to obtain a screen capturing image (equivalent to a motion picture) containing the short video segment.
For another example, if the user wants to capture a local area of a photo, the user may zoom in on the photo, adjust the local area of interest into the display output area of the display screen, and then trigger the first screen capturing instruction to obtain a screen capturing image containing the area of interest.
Step S102: responding to the screen capturing instruction to obtain N frames of images, wherein the N frames of images include at least a reference image, and the reference image is the display image of the display content of the display output area of the display screen when the screen capturing instruction is responded to.
The N frames of images are images having an association relationship. For example, the N frames may be N copies of the same reference image, or N different reference images, or at least one frame of reference image together with at least one frame of non-reference image.
For example, in an optional scenario, the user is viewing a still picture and wants to capture it, so a first screen capturing instruction is triggered. Only one frame of display image is shown in the display output area of the display screen when the instruction is responded to; this frame is the reference image, and the N frames of images may be N copies of this same reference image.
In another optional scenario, the user is browsing a web page and has content to keep, so the first or second screen capturing instruction is triggered. Only one frame of display image is shown in the display output area when the instruction is responded to; this frame is the reference image, and again the N frames of images may be N copies of the same reference image.
In yet another optional scenario, the user is browsing a web page and has content to keep, so the first or second screen capturing instruction is triggered. Only one frame of display image is shown when the instruction is responded to; this frame is the reference image, and the N frames of images may be the reference image plus N-1 frames of non-reference images. The N-1 frames of non-reference images may carry different web page fragments, each non-reference image having the same size as the reference image, such that the fragments they carry and the fragment carried by the reference image form consecutive fragments of the web page. For example, the N-1 non-reference frames may carry the N-1 consecutive fragments immediately before the fragment carried by the reference image, or the N-1 consecutive fragments immediately after it, or N1 fragments before it and N2 fragments after it, where N1 + N2 = N - 1.
In still another optional scenario, the user is watching a video and wants to keep a short segment of it, so the third screen capturing instruction is triggered. The display images of the display output area in response to the third screen capturing instruction are multiple frames; these frames are all reference images, and they are all different. In this case, the N frames of images may be the multiple frames displayed in response to the third screen capturing instruction, i.e., N different reference images.
In yet another optional scenario, the user likewise triggers the third screen capturing instruction to keep a short video segment, and the display images in response to it are multiple different reference images. In this case, the N frames of images may include the multiple frames displayed in response to the third screen capturing instruction and at least one frame of non-reference image.
In yet another optional scenario, the user is watching a video and wants to keep a certain picture in it, so the first screen capturing instruction is triggered. Only one frame of display image is shown when the instruction is responded to; this frame is the reference image, and the N frames of images may be the reference image plus N-1 frames of non-reference images. The N frames may be N consecutive frames of the video: for example, the N-1 non-reference frames are the N-1 frames before the reference image in the video, or the N-1 frames after it, or N1 frames before it and N2 frames after it, where N1 + N2 = N - 1.
In summary, in the embodiment of the present application, obtaining the N frames of images may mean obtaining one frame of reference image and copying it N-1 times to obtain N frames, or obtaining N consecutive frames from an image sequence, such as the image sequence of a video.
Typically, before outputting an image, the electronic device processes it so that its resolution matches that of the display screen; the resolution of the N frames of images is therefore the resolution of the display screen.
Step S103: generating a screen capturing image based on the N frames of images; the screen capturing image at least comprises display content of the reference image, and the resolution of the screen capturing image is higher than that of the reference image.
The screen capturing image may be a single frame of still image, or a dynamic image composed of multiple frames. Optionally, when the screen capturing instruction indicates capturing one frame, the screen capturing image is one frame of still image; when the screen capturing instruction (for example, a screen recording instruction) indicates capturing multiple frames, the screen capturing image is a dynamic image composed of those frames.
In the information processing method provided by the embodiment of the present application, when acquiring the screen capturing image, the display image of the display content of the display output area at the time the screen capturing instruction is responded to (i.e., the reference image) is not taken directly as the screen capturing image. Instead, N frames of images including at least the reference image are acquired, and a screen capturing image with a higher resolution, whose content includes the display content of the reference image, is generated based on the N frames. This provides a new screen capturing method.
In an optional embodiment, the above information processing method may be performed on the terminal device side, or jointly by a terminal device and a cloud device. Specifically, steps S101 to S102 may be completed by the terminal device; the terminal device then sends the N frames of images to the cloud device, and the cloud device generates the screen capturing image based on the received N frames and returns it to the terminal device.
In an optional embodiment, one implementation of generating the screen capturing image based on the N frames of images may be:
generating the screen capturing image based on the image intelligent engine processing the reference image and the M frames of auxiliary images temporally related to the reference image.
The M frames of auxiliary images belong to the N frames of images. When there are multiple frames of reference images, the M frames of auxiliary images may or may not include reference images.
As described above, the N frames of images include at least one frame of reference image. In the embodiment of the present application, for each frame of reference image, the screen capturing image corresponding to that reference image can be generated based on the image intelligent engine processing the reference image and the M frames of auxiliary images temporally related to it. That is, for the i-th frame reference image (any one of the at least one frame of reference images), the corresponding screen capturing image may be generated based on the image intelligent engine processing the i-th frame reference image and the M frames of auxiliary images temporally related to it.
Optionally, in the case where the N frames of images are N copies of the same reference image, the reference image and the auxiliary images are the same image. In this case, M = N - 1.
In the case where the N frames of images include only one frame of reference image, the M frames of auxiliary images include no reference image, only non-reference images. The M frames of auxiliary images may be the M frames before the reference image in the video (i.e., images that have been displayed), or the M frames after it (i.e., images that have not yet been displayed), or M1 frames before it and M2 frames after it, where M1 + M2 = M. The i-th frame reference image and the M frames of auxiliary images temporally related to it then constitute M+1 consecutive frames of the video. In this case, M = N - 1.
In the case where the N frames of images are N different reference images, the M frames of auxiliary images are M different reference images, and the i-th frame reference image together with the M frames of auxiliary images temporally related to it constitutes M+1 consecutive frames of the video. In this case, M = N - 1 or M < N - 1.
In the case where the N frames of images include multiple frames of reference images and at least one frame of non-reference image, the M frames of auxiliary images may be different reference images, may include only non-reference images, or may include some reference images and some non-reference images. The i-th frame reference image together with the M frames of auxiliary images temporally related to it constitutes M+1 consecutive frames of the video. In this case, M = N - 1 or M < N - 1.
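The frame-selection cases above share one operation: choosing M = M1 + M2 auxiliary frames around the reference so that, together with the reference, they form M+1 consecutive frames. A minimal sketch, with the M1/M2 split left as a parameter (the split itself is a design choice, not fixed by the text):

```python
def select_auxiliary_frames(frames, ref_index, m1, m2):
    """Select m1 consecutive frames before the reference and m2 after
    it, so that reference + auxiliaries form m1 + m2 + 1 consecutive
    frames of the video."""
    if ref_index - m1 < 0 or ref_index + m2 >= len(frames):
        raise ValueError("not enough frames around the reference")
    before = frames[ref_index - m1:ref_index]
    after = frames[ref_index + 1:ref_index + 1 + m2]
    return before + after
```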
In an optional embodiment, a flowchart of an implementation of the above processing, by the image intelligent engine, of the reference image and the M frames of auxiliary images temporally related to the reference image is shown in FIG. 2, and may include:
Step S201: performing motion compensation on each of the M frames of auxiliary images according to the reference image to obtain M frames of compensated auxiliary images.
In the prior art, when the content output in the display output area of the display screen is dynamic, for example a video or a motion picture, if a moving object in an auxiliary frame undergoes a large motion displacement or motion blur relative to the same object in the reference image, or the auxiliary image undergoes a scene change relative to the reference image, the generated screen capturing image is prone to boundary effects and artifacts.
In the embodiment of the present application, to avoid such boundary effects and artifacts, each frame of auxiliary image is motion-compensated according to the reference image, so as to reduce or eliminate the influence on the screen capturing image of motion displacement, motion blur, or scene changes of the auxiliary frame relative to the reference image.
Specifically, for the i-th frame reference image, each of the M auxiliary frames corresponding to it is motion-compensated according to the i-th frame reference image.
Optionally, one implementation of performing motion compensation on the j-th frame auxiliary image among the M frames of auxiliary images corresponding to the i-th frame reference image, according to the i-th frame reference image, may be:
performing a weighted summation of the j-th frame auxiliary image and the i-th frame reference image to obtain the j-th frame compensated auxiliary image. The larger the difference between the j-th frame auxiliary image and the i-th frame reference image, the smaller the weight corresponding to the j-th frame auxiliary image; the smaller the difference, the larger the weight. The sum of the weight corresponding to the j-th frame auxiliary image and the weight corresponding to the i-th frame reference image is 1.
Alternatively, the difference between the j-th frame auxiliary image and the i-th frame reference image may be a distance between the two images, for example a Euclidean distance, a Manhattan distance, or a correlation distance.
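As an illustration, the frame-level weighted summation described above can be sketched in a few lines. This is a minimal, dependency-free sketch; the exponential mapping from distance to weight and the constant k are assumptions made for the example, not a form prescribed by the application:

```python
import math

def frame_compensate(aux, ref, k=0.05):
    """Frame-level motion compensation: a weighted sum of the j-th auxiliary
    frame and the i-th reference frame (both given as lists of pixel rows).
    The auxiliary weight shrinks as the Euclidean distance between the two
    frames grows; the two weights sum to 1. The exp(-k*d) mapping and k are
    illustrative assumptions."""
    # Euclidean distance between the two frames, treated as flat vectors
    dist = math.sqrt(sum((a - r) ** 2
                         for row_a, row_r in zip(aux, ref)
                         for a, r in zip(row_a, row_r)))
    w_aux = math.exp(-k * dist)   # larger difference -> smaller weight
    w_ref = 1.0 - w_aux           # weights sum to 1
    return [[w_aux * a + w_ref * r for a, r in zip(row_a, row_r)]
            for row_a, row_r in zip(aux, ref)]
```

When the two frames are identical the distance is 0, the auxiliary weight is 1, and the compensated frame equals the auxiliary frame; as the frames diverge, the result is pulled toward the reference.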
In the above embodiment, the pixels at different positions in the j-th frame auxiliary image share the same weight, and so do the pixels at different positions in the i-th frame reference image. In a screen capture image generated in this manner, the probability of boundary effects and artifacts is reduced, but there is still room for further reduction. Based on this, fig. 3 shows an implementation flowchart, provided in an embodiment of the present application, of performing motion compensation on the j-th frame auxiliary image among the M frames of auxiliary images corresponding to the i-th frame reference image according to the i-th frame reference image, which may include:
Step S301: for any pixel in the j-th frame auxiliary image (for convenience of description, the pixel at position (a, b) in the j-th frame auxiliary image is denoted j(a, b), and the pixel at position (a, b) in the i-th frame reference image is denoted i(a, b)), acquire the error e(a, b) between the pixel j(a, b) and the corresponding pixel i(a, b) in the i-th frame reference image.
The error e(a, b) may be the absolute value of the difference between pixel j(a, b) and pixel i(a, b).
Step S302: acquire the weight α(a, b) corresponding to the pixel j(a, b) in the j-th frame auxiliary image according to the error e(a, b); the larger the error e(a, b), the smaller the weight α(a, b) corresponding to the pixel j(a, b) in the j-th frame auxiliary image.
Optionally, the weight α(a, b) corresponding to the pixel j(a, b) in the j-th frame auxiliary image may be determined according to a predetermined correspondence between error ranges and weights: first determine the error range to which the error e(a, b) belongs, then take the weight corresponding to that error range as the weight of the pixel j(a, b).
Or alternatively
the weight may be calculated according to a preset relation between the weight and the error. As an example, the weight α(a, b) corresponding to the pixel j(a, b) may be calculated by the following relation:
α(a,b)=exp(-k·e(a,b)) (1)
Where k is a constant.
Step S303: weighting and summing the pixel point j (a, b) and the corresponding pixel point i (a, b) in the ith frame reference image according to the weight corresponding to the pixel point j (a, b) in the jth frame auxiliary image to obtain a pixel point subjected to motion compensation of the pixel point j (a, b) in the jth frame auxiliary image
Wherein the sum of the weight corresponding to the pixel j (a, b) in the j-th frame auxiliary image and the weight corresponding to the pixel i (a, b) in the i-th frame reference image is 1. Specifically, the method can be expressed as follows:
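The per-pixel scheme of steps S301 to S303 can be sketched as follows; the error is taken as the absolute pixel difference and the weight uses relation (1), with k an assumed constant:

```python
import math

def pixel_compensate(aux, ref, k=0.1):
    """Per-pixel motion compensation (steps S301-S303 sketch).
    aux and ref are same-size grayscale images given as lists of rows.
    k is an assumed constant, as in relation (1)."""
    out = []
    for row_a, row_r in zip(aux, ref):
        out_row = []
        for j_ab, i_ab in zip(row_a, row_r):
            e = abs(j_ab - i_ab)          # S301: error e(a, b)
            alpha = math.exp(-k * e)      # S302: larger error -> smaller weight
            # S303: weighted sum; the two weights sum to 1
            out_row.append(alpha * j_ab + (1.0 - alpha) * i_ab)
        out.append(out_row)
    return out
```

Where the auxiliary and reference pixels agree, the error is 0, the weight is 1, and the auxiliary pixel passes through unchanged; where they disagree strongly, the output is pulled toward the reference pixel.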
Step S202: a screen capture image is generated by processing the reference image and the M frames of compensated auxiliary images with the image intelligence engine.
In the embodiment of the application, the weights of pixels at different positions in the j-th frame auxiliary image can differ, as can the weights of pixels at different positions in the i-th frame reference image, which further reduces the probability of boundary effects and artifacts in the screen capture image.
In an alternative embodiment, referring to fig. 4, fig. 4 is a system architecture diagram, provided in an embodiment of the present application, of processing the i-th frame reference image and the M frames of compensated auxiliary images based on the image intelligence engine to generate the screen capture image corresponding to the i-th frame reference image. The system architecture comprises two branches. One branch (for convenience of description, denoted the first branch) inputs the i-th frame reference image into an interpolation module, which interpolates the i-th frame reference image to obtain an interpolated image whose resolution is higher than that of the i-th frame reference image. The interpolated image is prone to loss of high-frequency components, or to severe ringing and overcorrection. To overcome these problems, in the embodiment of the application a residual feature map representing the content details of the interpolated image is obtained through the other branch, and the interpolated image is corrected by this residual feature map.
Specifically, the other branch (for convenience of description, denoted the second branch) inputs M+1 frames of images (i.e., the i-th frame reference image and the M frames of compensated auxiliary images) into the image intelligence engine. The image intelligence engine sequentially performs feature extraction, residual learning, up-sampling convolution, and residual connection on the M+1 frames to obtain a residual feature map representing the content details of the interpolated image, and then residually connects this feature map with the interpolated image produced by the first branch so as to supplement the details of the interpolated image, thereby obtaining a screen capture image with higher resolution and clear details (i.e., the high-frequency components of the image, such as edges and the textures of non-edge regions).
As shown in fig. 5, fig. 5 is a flowchart of an implementation of processing an i-th frame reference image and an M-frame compensation auxiliary image based on an image intelligent engine to generate a screen capturing image corresponding to the i-th frame reference image according to an embodiment of the present application, which may include:
Step S501: and interpolating the ith frame reference image to obtain an interpolation image of the ith frame reference image.
The i-th frame reference image may be interpolated by a bilinear interpolation method, or may be interpolated by a bicubic interpolation method.
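As a concrete illustration of step S501, the following is a minimal bilinear interpolation sketch for a single-channel (grayscale) image given as a list of rows; a production implementation would typically use a library routine rather than this loop:

```python
def bilinear_upscale(img, scale):
    """Upscale a grayscale image by an integer factor using bilinear
    interpolation. img is a list of rows of floats."""
    h, w = len(img), len(img[0])
    H, W = h * scale, w * scale
    out = []
    for y in range(H):
        sy = min(y / scale, h - 1)          # source coordinate (clamped)
        y0 = int(sy); y1 = min(y0 + 1, h - 1); fy = sy - y0
        row = []
        for x in range(W):
            sx = min(x / scale, w - 1)
            x0 = int(sx); x1 = min(x0 + 1, w - 1); fx = sx - x0
            # blend the four surrounding source pixels
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Bicubic interpolation follows the same pattern but blends a 4 × 4 neighborhood with cubic weights, which better preserves smooth gradients at the cost of more computation.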
Step S502: and extracting features of the ith frame of reference image and M frames of compensation auxiliary images corresponding to the ith frame of reference image through a feature extraction layer of the image intelligent engine to obtain an initial feature sequence.
In an alternative embodiment, the feature extraction layer may perform single-scale feature extraction on m+1 frame images (i.e., the ith frame reference image and the M frame compensation auxiliary image corresponding to the ith frame reference image) to obtain the initial feature sequence.
In an alternative embodiment, the feature extraction layer may perform multi-scale feature extraction on the m+1 frame image (i.e., the ith frame reference image and the M frame compensation auxiliary image corresponding to the ith frame reference image) to obtain a feature sequence of multiple scales, and fuse the feature sequences of multiple scales to obtain an initial feature sequence.
Optionally, one implementation manner of performing multi-scale feature extraction on the m+1 frame image through the feature extraction layer may be:
First-scale feature extraction is performed on the M+1 frames of images through the first convolution layer of the feature extraction layer to obtain an intermediate feature map sequence (for convenience of description, denoted the first intermediate feature map sequence). The convolution kernels in the first convolution layer have size f11 × f11 and number C1, so the first convolution layer has M+1 input channels and C1 output channels; that is, the first intermediate feature map sequence contains C1 frames.
The second convolution layer of the feature extraction layer performs feature extraction at two further scales on the first intermediate feature map sequence. Specifically: second-scale feature extraction on the first intermediate feature map sequence yields another intermediate feature map sequence (for convenience of description, denoted the second intermediate feature map sequence), and third-scale feature extraction on the first intermediate feature map sequence yields yet another intermediate feature map sequence (denoted the third intermediate feature map sequence). The second convolution layer has C1 input channels and C2 output channels. For the second-scale extraction, the convolution kernels have size f21 × f21 and number C2, so the second intermediate feature map sequence contains C2 frames. For the third-scale extraction, the convolution kernels have size f22 × f22 and number C2, so the third intermediate feature map sequence likewise contains C2 frames.
The multi-scale feature sequence may specifically be the second and third intermediate feature map sequences stacked along the depth dimension; that is, the second and third intermediate feature map sequences are spliced into one feature sequence with C2 + C2 = 2 × C2 frames, so the initial feature sequence contains 2 × C2 frames. Here f21 and f11 may be the same or different, f22 and f11 may be the same or different, but f21 and f22 are different.
It should be noted that the feature maps in the second intermediate feature map sequence have the same size as those in the third intermediate feature map sequence. Once the kernel size is fixed, the size of an extracted feature map depends on the size of the feature maps in the first intermediate feature map sequence and on the movement step (stride) of the kernel. Therefore, when performing feature extraction with the f21 × f21 kernels and the f22 × f22 kernels, the strides of the two kernels can be controlled, and/or the edges of the feature maps in the first intermediate feature map sequence can be padded with pixels to obtain larger maps for extraction, so that the feature maps extracted by the two kernel sizes end up the same size. How to control the strides and/or pad the feature map edges can follow existing schemes and, as it is not the focus of the present application, is not described in detail here.
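The point made above — that "same" padding keeps feature maps extracted with different kernel sizes spatially aligned, so they can be stacked along depth — can be illustrated with a naive single-channel convolution. This is a dependency-free sketch, not the engine's actual layers:

```python
def conv2d_same(img, kernel):
    """Naive stride-1 'same' convolution with zero padding, so feature maps
    produced by kernels of different sizes keep the input's spatial size."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2          # padding that preserves the size
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    yy, xx = y + dy - ph, x + dx - pw
                    if 0 <= yy < h and 0 <= xx < w:  # zero outside the image
                        s += img[yy][xx] * kernel[dy][dx]
            out[y][x] = s
    return out

# Two scales: a 3x3 and a 5x5 averaging kernel over the same input give
# same-size maps, which can then be stacked along depth (as in the
# 2 x C2-frame initial feature sequence).
img = [[float(x + y) for x in range(4)] for y in range(4)]
k3 = [[1 / 9.0] * 3 for _ in range(3)]
k5 = [[1 / 25.0] * 5 for _ in range(5)]
f3, f5 = conv2d_same(img, k3), conv2d_same(img, k5)
stacked = [f3, f5]  # depth concatenation of the two scales
```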
Step S503: and carrying out residual feature extraction on the initial feature map sequence through a residual learning network layer of the image intelligent engine to obtain a residual feature map sequence.
The purpose of residual learning is to overcome the degradation problem of deep neural networks. Network degradation means that as the depth of a network keeps increasing, its accuracy first rises, then saturates, and then decreases with further depth. With the residual learning network layer, even if the image intelligence engine is extremely deep, degradation does not occur after the network accuracy saturates.
In the embodiment of the application, the residual learning network layer consists of d residual blocks connected in series. The residual blocks may share the same structure, as shown in fig. 6, a schematic structural diagram of a residual block provided in an embodiment of the present application. The residual block contains two convolution layers conv1 and conv2, two batch normalization layers BN1 and BN2, and two nonlinear activation function layers Relu1 and Relu2. Let x denote the input of the residual block and G(x) its output; W1 denotes the parameters of convolution layer conv1 and W2 those of conv2; BN1 and BN2 have no parameters; σ1 denotes the nonlinear activation function layer Relu1 and σ2 denotes the nonlinear activation function layer Relu2. Then:
G(x)=σ2(H(x))
H(x)=F(x)+x
F(x)=BN2(W2*σ1(BN1(W1*x)))
BN1 () and BN2 () in the formula each represent a batch normalization operation; "x" means a convolution operation.
Optionally, the number of layers d of the residual block is greater than a threshold. As an example, the number of layers d of the residual block is greater than 7.
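The formulas above can be illustrated with a toy, dependency-free residual block in which the convolutions conv1/conv2 are stood in for by elementwise weights w1/w2 (an assumption made purely to keep the sketch self-contained; a real block convolves feature maps):

```python
def relu(v):
    """Nonlinear activation sigma: elementwise max(0, x)."""
    return [max(0.0, x) for x in v]

def batch_norm(v, eps=1e-5):
    """Parameter-free normalization over the vector (BN1/BN2 stand-in)."""
    mean = sum(v) / len(v)
    var = sum((x - mean) ** 2 for x in v) / len(v)
    return [(x - mean) / (var + eps) ** 0.5 for x in v]

def residual_block(x, w1, w2):
    """Toy residual block on a 1-D vector x:
    F(x) = BN2(W2 * sigma1(BN1(W1 * x)));  H(x) = F(x) + x;  G(x) = sigma2(H(x))."""
    f = batch_norm([w2 * v for v in relu(batch_norm([w1 * v for v in x]))])
    return relu([fi + xi for fi, xi in zip(f, x)])  # skip connection + Relu2
```

The skip connection H(x) = F(x) + x is what lets a very deep stack of such blocks fall back to the identity mapping instead of degrading.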
Step S504: and upsampling the residual feature map sequence through an upsampling convolution layer of the image intelligent engine to obtain a residual feature map.
Alternatively, the residual feature map sequence may be subjected to sub-pixel convolution by an upsampling convolution layer of the image intelligence engine to obtain a residual feature map.
Assume the residual feature map sequence contains r² residual feature maps. The essence of sub-pixel convolution is to periodically insert the r² low-resolution feature maps into a high-resolution feature map at specific positions (essentially rearranging and recombining the pixels of the r² low-resolution maps). If each of the r² low-resolution feature maps has size H × W × C, where H, W, and C are the height, width, and number of channels of the map, then the high-resolution feature map obtained by rearranging their pixels has size rH × rW × C.
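The pixel rearrangement of sub-pixel convolution can be sketched directly for the C = 1 case: each of the r² low-resolution maps contributes one sub-pixel offset of the high-resolution output.

```python
def pixel_shuffle(maps, r):
    """Rearrange r*r low-resolution feature maps of size H x W into one
    high-resolution map of size rH x rW (sub-pixel convolution, C = 1)."""
    H, W = len(maps[0]), len(maps[0][0])
    out = [[0.0] * (W * r) for _ in range(H * r)]
    for idx, m in enumerate(maps):      # idx selects the sub-pixel offset
        dy, dx = divmod(idx, r)
        for y in range(H):
            for x in range(W):
                out[y * r + dy][x * r + dx] = m[y][x]
    return out
```

For r = 2, four 1 × 1 maps holding 1, 2, 3, 4 are shuffled into the 2 × 2 map [[1, 2], [3, 4]]: each low-resolution map fills one of the four sub-pixel positions of every output block.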
Step S505: and carrying out residual connection on the interpolation image and the residual feature map through a residual connection layer of the image intelligent engine to obtain a screen capturing image.
Optionally, if the size of the residual feature map is the same as that of the interpolated image, residual connection between the interpolated image and the residual feature map may specifically be performed by summing the interpolated image and the residual feature map to obtain the screen capturing image.
In an alternative embodiment, the image intelligence engine may be a pre-trained deep neural network model. The image intelligence engine can be trained as follows:
Inputting at least one training sample into an image intelligent engine to obtain a high-resolution image corresponding to each training sample output by the image intelligent engine; wherein each training sample comprises a reference image and M frames of auxiliary images related to the reference image; the resolution of the high-resolution image corresponding to each training sample is higher than that of the reference image in the training sample, and the high-resolution image corresponding to each training sample comprises the content of the reference image in the training sample;
updating the parameters of the image intelligence engine with the goal of driving the difference between the high-resolution image output by the intelligence engine and the true high-resolution image corresponding to the training sample toward zero;
and returning to the step of inputting at least one training sample into the image intelligence engine to obtain the high-resolution image corresponding to each training sample, until a training end condition is met (for example, the number of training iterations reaches a preset number, or the difference between the high-resolution image output by the intelligence engine and the true high-resolution image corresponding to the training sample satisfies a certain condition).
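The loop above amounts to generic iterative optimization: run the engine, measure the difference from the ground truth, update the parameters, repeat until an end condition holds. A minimal dependency-free sketch (numerical gradients stand in for backpropagation, and all names here are assumptions for illustration):

```python
def numerical_grad(loss_fn, params, eps=1e-6):
    """Finite-difference gradient of loss_fn at params
    (a stand-in for backpropagation in this sketch)."""
    base = loss_fn(params)
    grads = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        grads.append((loss_fn(bumped) - base) / eps)
    return grads

def train(loss_fn, params, lr=0.05, max_iters=500, tol=1e-8):
    """Drive the loss toward zero, or stop when the iteration budget is
    spent -- the two end conditions mentioned above."""
    for _ in range(max_iters):
        if loss_fn(params) < tol:   # difference "close enough" to zero
            break
        g = numerical_grad(loss_fn, params)
        params = [p - lr * gi for p, gi in zip(params, g)]
    return params

# Toy usage: find p minimizing (3p - 6)^2, i.e. p -> 2
fitted = train(lambda p: (3.0 * p[0] - 6.0) ** 2, [0.0])
```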
Alternatively, the difference between the high resolution image output by the intelligent engine and the actual high resolution image corresponding to the training sample may be determined by a loss function. As an example, the loss function L may be:
L=ρ·MSE+(1-ρ)(1-SSIM)
where ρ is a constant defined in the range of 0 to 1.
MSE (Mean Squared Error) is the mean squared error, defined as:
MSE = (1/n) · Σ_{p=1..n} (y_p − ŷ_p)²
where y represents the true high-resolution image (i.e., the high-resolution image corresponding to the training sample), ŷ represents the high-resolution image output by the intelligence engine, y_p and ŷ_p are their p-th pixels, and n is the number of pixels.
SSIM (Structural Similarity Index) is the structural similarity, defined as:
SSIM = [(2·u_y·u_ŷ + c1) · (2·β_yŷ + c2)] / [(u_y² + u_ŷ² + c1) · (β_y² + β_ŷ² + c2)]
where y represents the true high-resolution image and ŷ the high-resolution image output by the intelligence engine; u_y denotes the mean of all pixels in the true high-resolution image, and u_ŷ the mean of all pixels in the output image; β_y denotes the standard deviation of all pixels in the true high-resolution image, β_ŷ the standard deviation of all pixels in the output image, and β_yŷ the cross-covariance between the output image and the true high-resolution image; c1 and c2 are both constants.
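For a single-channel image flattened to a vector, the combined loss can be sketched directly from the definitions above (the c1, c2, and ρ values are illustrative defaults, not values fixed by the application):

```python
def mse(y, y_hat):
    """Mean squared error between two equal-length pixel vectors."""
    n = len(y)
    return sum((a - b) ** 2 for a, b in zip(y, y_hat)) / n

def ssim(y, y_hat, c1=1e-4, c2=9e-4):
    """Global structural similarity of two pixel vectors (single window)."""
    n = len(y)
    u_y = sum(y) / n
    u_h = sum(y_hat) / n
    var_y = sum((a - u_y) ** 2 for a in y) / n
    var_h = sum((b - u_h) ** 2 for b in y_hat) / n
    cov = sum((a - u_y) * (b - u_h) for a, b in zip(y, y_hat)) / n
    return ((2 * u_y * u_h + c1) * (2 * cov + c2)) / \
           ((u_y ** 2 + u_h ** 2 + c1) * (var_y + var_h + c2))

def loss(y, y_hat, rho=0.5):
    """L = rho * MSE + (1 - rho) * (1 - SSIM), rho in [0, 1]."""
    return rho * mse(y, y_hat) + (1 - rho) * (1 - ssim(y, y_hat))
```

When the output matches the ground truth exactly, MSE is 0 and SSIM is 1, so the loss is 0; any mismatch raises both terms.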
Corresponding to the method embodiment, the embodiment of the application also provides an information processing device, and a schematic structural diagram of the information processing device provided by the embodiment of the application is shown in fig. 7, which may include:
an instruction obtaining module 701, a to-be-processed image obtaining module 702, and a screen capture image generating module 703; wherein,
The instruction obtaining module 701 is configured to obtain a screen capturing instruction;
The to-be-processed image obtaining module 702 is configured to obtain N frames of images in response to the screen capturing instruction, where the N frames of images include at least a reference image, and the reference image is a display image of display content of a display output area of the display screen in response to the screen capturing instruction;
the screen capture image generating module 703 is configured to generate a screen capture image based on the N frame images; the screen capturing image at least comprises display content of the reference image, and the resolution of the screen capturing image is higher than that of the reference image.
In the information processing method provided by the embodiment of the application, when acquiring a screen capture image, the display image of the display content of the display output area of the display screen at the time the screen capture instruction is received (namely, the reference image) is not directly taken as the screen capture image. Instead, N frames of images at least comprising the reference image are acquired, and a screen capture image with higher resolution, whose content includes the display content of the reference image, is generated based on the N frames of images. This provides a new screen capture scheme.
In an alternative embodiment, the screenshot image generation module 703 is configured to:
generating a screen capturing image based on the image intelligent engine processing the reference image and the M frame auxiliary images related to the time sequence of the reference image;
the M-frame auxiliary image belongs to the N-frame image.
In an alternative embodiment, the screenshot image generation module 703 includes:
The compensation module is used for respectively carrying out motion compensation on the M frame auxiliary images according to the reference image to obtain M frame compensation auxiliary images;
and the processing module is used for processing the reference image and the M frame compensation auxiliary image based on an image intelligent engine to generate a screen capturing image.
In an alternative embodiment, the compensation module includes:
The error acquisition module is used for acquiring the error between any pixel point in the auxiliary image and the corresponding pixel point in the reference image;
The weight acquisition module is used for acquiring the weight corresponding to the pixel point in the auxiliary image according to the error, wherein the larger the error is, the smaller the weight corresponding to the pixel point in the auxiliary image is;
The weighting module is used for carrying out weighted summation on the pixel point and the corresponding pixel point in the reference image according to the weight corresponding to the pixel point in the auxiliary image to obtain a pixel point of the pixel point in the auxiliary image after motion compensation;
And the sum of the weight corresponding to the pixel point in the auxiliary image and the weight corresponding to the corresponding pixel point in the reference image is 1.
In an alternative embodiment, the processing module includes:
The interpolation module is used for interpolating the reference image to obtain an interpolation image;
the first feature extraction module is used for extracting features of the reference image and the M frame compensation auxiliary image through a feature extraction layer of the image intelligent engine to obtain an initial feature sequence;
The second feature extraction module is used for extracting residual features of the initial feature map sequence through a residual learning network layer of the image intelligent engine to obtain a residual feature map sequence;
The third feature extraction module is used for upsampling the residual feature map sequence through an upsampling convolution layer of the image intelligent engine to obtain a residual feature map; the residual feature map represents detail information of display content in the interpolation image;
And the connecting module is used for carrying out residual connection on the interpolation image and the residual feature map through a residual connecting layer of the image intelligent engine to obtain a screen capturing image.
In an alternative embodiment, the first feature extraction module is configured to: performing multi-scale feature extraction on the reference image and the M-frame compensation auxiliary image through the feature extraction layer to obtain a feature sequence with multiple scales; fusing the feature sequences of the multiple scales to obtain an initial feature sequence;
The number of layers of the residual blocks in the residual learning network layer is greater than a threshold value.
In an alternative embodiment, the image obtaining module to be processed 702 is configured to:
Obtaining a reference image, and copying the reference image for N-1 times to obtain the N frames of images;
Or alternatively
Successive N frames of images are acquired in the image sequence.
Corresponding to the method embodiments, the present application also provides an information processing apparatus, such as a terminal or a server. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms. The terminal may be, but is not limited to, a mobile terminal such as a smart phone, a tablet computer, or a notebook computer, or a desktop computer. In some embodiments, the terminal or the server may be a node in a distributed system, where the distributed system may be a blockchain system formed by connecting a plurality of nodes through network communication. The nodes may form a peer-to-peer (P2P) network, and any type of computing device, such as a server or a terminal, may become a node in the blockchain system by joining the peer-to-peer network.
A block diagram of an example hardware structure of an information processing apparatus provided by an embodiment of the present application is shown in fig. 8 and may include:
A processor 1, a communication interface 2, a memory 3 and a communication bus 4;
wherein the processor 1, the communication interface 2 and the memory 3 complete the communication with each other through the communication bus 4;
Alternatively, the communication interface 2 may be an interface of a communication module, such as an interface of a GSM module;
The processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
The memory 3 may comprise a high-speed RAM memory or may further comprise a non-volatile memory, such as at least one disk memory.
Wherein the processor 1 is specifically configured to execute a computer program stored in the memory 3 to perform the following steps:
acquiring a screen capturing instruction;
Responding to the screen capturing instruction, obtaining N frames of images, wherein the N frames of images at least comprise reference images, and the reference images are display images of display contents of a display output area of a display screen when responding to the screen capturing instruction;
generating a screen capturing image based on the N frames of images; the screen capturing image at least comprises display content of the reference image, and the resolution of the screen capturing image is higher than that of the reference image.
Alternatively, the refinement and expansion functions of the computer program may be as described above.
The embodiment of the application also provides a readable storage medium, which can store a computer program suitable for being executed by a processor, the computer program being used for:
acquiring a screen capturing instruction;
Responding to the screen capturing instruction, obtaining N frames of images, wherein the N frames of images at least comprise reference images, and the reference images are display images of display contents of a display output area of a display screen when responding to the screen capturing instruction;
generating a screen capturing image based on the N frames of images; the screen capturing image at least comprises display content of the reference image, and the resolution of the screen capturing image is higher than that of the reference image.
Alternatively, the refinement and expansion functions of the computer program may be as described above.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The coupling or direct coupling or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices, or units, and may be in electrical, mechanical, or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
It should be understood that in the embodiments of the present application, the claims, the various embodiments, and the features may be combined with each other, so as to solve the foregoing technical problems.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially, or in the part contributing to the prior art, or in part, in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (8)
1. An information processing method, the method comprising:
acquiring a screen capture instruction;
in response to the screen capture instruction, obtaining N frames of images, wherein the N frames of images comprise at least a reference image, the reference image being a display image of the display content of a display output area of a display screen at the time the screen capture instruction is responded to;
generating a screenshot image based on the N frames of images, wherein the screenshot image comprises at least the display content of the reference image, and the resolution of the screenshot image is higher than that of the reference image;
wherein the generating a screenshot image based on the N frames of images comprises:
performing motion compensation on M frames of auxiliary images respectively according to the reference image to obtain M frames of compensated auxiliary images, so as to reduce or eliminate the motion displacement or motion blur of a moving object in the auxiliary image frames relative to the same object in the reference image;
processing the reference image and the M frames of compensated auxiliary images based on an image intelligence engine to generate the screenshot image;
wherein the M frames of auxiliary images belong to the N frames of images.
2. The method according to claim 1, wherein the performing motion compensation on the M frames of auxiliary images respectively according to the reference image to obtain M frames of compensated auxiliary images comprises:
for any pixel in an auxiliary image, acquiring the error between that pixel and the corresponding pixel in the reference image;
acquiring the weight corresponding to the pixel in the auxiliary image according to the error, wherein the larger the error, the smaller the weight corresponding to the pixel in the auxiliary image;
performing a weighted sum of the pixel and the corresponding pixel in the reference image according to the weight corresponding to the pixel in the auxiliary image, to obtain the motion-compensated pixel for that pixel of the auxiliary image;
wherein the sum of the weight corresponding to the pixel in the auxiliary image and the weight corresponding to the corresponding pixel in the reference image is 1.
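The weighting scheme of claim 2 can be sketched in a few lines of NumPy. The claim fixes only the constraints (larger error gives smaller auxiliary weight; the two weights sum to 1), not the exact error-to-weight mapping, so the exponential falloff and the `sigma` parameter below are assumptions for illustration:

```python
import numpy as np

def motion_compensate(aux: np.ndarray, ref: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Blend an auxiliary frame toward the reference frame per pixel.

    The weight of each auxiliary pixel shrinks as its error relative to the
    corresponding reference pixel grows, and the auxiliary and reference
    weights sum to 1, matching the constraints stated in claim 2.
    """
    error = np.abs(aux.astype(np.float64) - ref.astype(np.float64))
    w_aux = np.exp(-error / sigma)   # hypothetical mapping: larger error -> smaller weight
    w_ref = 1.0 - w_aux              # the two weights sum to 1
    return w_aux * aux + w_ref * ref
```

Where a pixel is unchanged between frames the error is zero and the auxiliary pixel keeps full weight; a fast-moving object produces large errors and is pulled toward the reference pixel, which is how displacement and blur relative to the reference are suppressed.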
3. The method according to claim 1, wherein the processing the reference image and the M frames of compensated auxiliary images based on an image intelligence engine to generate the screenshot image comprises:
interpolating the reference image to obtain an interpolated image;
performing feature extraction on the reference image and the M frames of compensated auxiliary images through a feature extraction layer of the image intelligence engine to obtain an initial feature map sequence;
performing residual feature extraction on the initial feature map sequence through a residual learning network layer of the image intelligence engine to obtain a residual feature map sequence;
upsampling the residual feature map sequence through an upsampling convolution layer of the image intelligence engine to obtain a residual feature map, wherein the residual feature map represents detail information of the display content in the interpolated image;
performing residual connection between the interpolated image and the residual feature map through a residual connection layer of the image intelligence engine to obtain the screenshot image.
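The five steps of claim 3 can be laid out structurally in NumPy. The "layers" below are toy placeholders (no trained weights, and nearest-neighbour interpolation stands in for whatever interpolation the engine uses), so this shows only the data flow of the interpolation + learned-residual design, not the patented network itself:

```python
import numpy as np

def upscale_nearest(img: np.ndarray, scale: int = 2) -> np.ndarray:
    """Stand-in for the interpolation step (nearest-neighbour for brevity)."""
    return np.kron(img, np.ones((scale, scale)))

def screenshot_superres(ref: np.ndarray, compensated: list, scale: int = 2) -> np.ndarray:
    """Structural sketch of claim 3: interpolate the reference, extract
    features from all frames, derive a residual detail map, upsample it,
    and add it back onto the interpolated image."""
    # 1. Interpolate the reference image to the target resolution.
    interp = upscale_nearest(ref, scale)
    # 2. Feature extraction layer: stack reference + compensated frames.
    feats = np.stack([ref] + compensated)          # shape (M+1, H, W)
    # 3. Residual learning layer (placeholder): deviation from the mean frame.
    residual_feats = feats - feats.mean(axis=0)
    # 4. Upsampling layer: bring the reference residual to target resolution.
    residual_map = upscale_nearest(residual_feats[0], scale)
    # 5. Residual connection: interpolated image + detail residual.
    return interp + residual_map
```

The design choice the claim encodes is standard in super-resolution: the network only has to learn the high-frequency detail (the residual), while the low-frequency content is carried by plain interpolation of the reference frame.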
4. The method according to claim 3, wherein the performing feature extraction on the reference image and the M frames of compensated auxiliary images through the feature extraction layer of the image intelligence engine to obtain an initial feature map sequence comprises: performing multi-scale feature extraction on the reference image and the M frames of compensated auxiliary images through the feature extraction layer to obtain feature sequences at multiple scales; and fusing the feature sequences at the multiple scales to obtain the initial feature map sequence;
wherein the number of layers of residual blocks in the residual learning network layer is greater than a threshold value.
5. The method according to claim 2 or 3, wherein the obtaining N frames of images comprises:
obtaining a reference image and copying the reference image N-1 times to obtain the N frames of images;
or
acquiring N consecutive frames of images from an image sequence.
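Claim 5's two acquisition modes reduce to a small helper, shown here with plain lists (the function name and signature are illustrative, not from the patent):

```python
def obtain_frames(ref, sequence=None, n: int = 4):
    """Claim 5's two modes of obtaining N frames of images.

    Static display content: duplicate the reference frame N-1 times.
    Moving content (e.g. video): take N consecutive frames from the sequence.
    """
    if sequence is None:
        return [ref] * n      # reference image plus N-1 copies
    return sequence[:n]       # N consecutive frames from the image sequence
```

Duplicating a static reference still helps downstream, since the multi-frame engine of claims 1-3 expects N inputs regardless of whether the content moves.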
6. An information processing apparatus, comprising:
an instruction acquisition module, configured to acquire a screen capture instruction;
a to-be-processed image obtaining module, configured to obtain N frames of images in response to the screen capture instruction, wherein the N frames of images comprise at least a reference image, the reference image being a display image of the display content of a display output area of a display screen at the time the screen capture instruction is responded to;
a screenshot image generating module, configured to generate a screenshot image based on the N frames of images, wherein the screenshot image comprises at least the display content of the reference image, and the resolution of the screenshot image is higher than that of the reference image;
wherein the screenshot image generating module is specifically configured to:
perform motion compensation on M frames of auxiliary images respectively according to the reference image to obtain M frames of compensated auxiliary images, so as to reduce or eliminate the motion displacement or motion blur of a moving object in the auxiliary image frames relative to the same object in the reference image;
process the reference image and the M frames of compensated auxiliary images based on an image intelligence engine to generate the screenshot image;
wherein the M frames of auxiliary images belong to the N frames of images.
7. An information processing apparatus, comprising:
a memory for storing a computer program; and
a processor for executing the computer program to implement the steps of the information processing method according to any one of claims 1-5.
8. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the information processing method according to any one of claims 1 to 5.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110168959.6A CN112801876B (en) | 2021-02-07 | 2021-02-07 | Information processing method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110168959.6A CN112801876B (en) | 2021-02-07 | 2021-02-07 | Information processing method and device, electronic equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112801876A CN112801876A (en) | 2021-05-14 |
| CN112801876B true CN112801876B (en) | 2024-07-26 |
Family
ID=75814668
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110168959.6A Active CN112801876B (en) | 2021-02-07 | 2021-02-07 | Information processing method and device, electronic equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112801876B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113641286B (en) * | 2021-07-30 | 2025-01-24 | 联想(北京)有限公司 | Screen capture method, electronic device, and computer storage medium |
| CN113938631B (en) * | 2021-11-29 | 2023-11-03 | 青岛信芯微电子科技股份有限公司 | Reference monitor, image frame interception method and system |
| CN114554133B (en) * | 2022-02-22 | 2023-01-06 | 联想(北京)有限公司 | Information processing method and device and electronic equipment |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109525874A (en) * | 2018-09-27 | 2019-03-26 | 维沃移动通信有限公司 | A kind of screenshotss method and terminal device |
| CN111047516A (en) * | 2020-03-12 | 2020-04-21 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, computer equipment and storage medium |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2011094292A1 (en) * | 2010-01-28 | 2011-08-04 | Pathway Innovations And Technologies, Inc. | Document imaging system having camera-scanner apparatus and personal computer based processing software |
| CN111399735B (en) * | 2020-04-16 | 2022-04-12 | Oppo广东移动通信有限公司 | Screen capturing method, screen capturing device, electronic equipment and storage medium |
- 2021-02-07: Application CN202110168959.6A filed in China; granted as patent CN112801876B (status: Active)
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109525874A (en) * | 2018-09-27 | 2019-03-26 | 维沃移动通信有限公司 | A kind of screenshotss method and terminal device |
| CN111047516A (en) * | 2020-03-12 | 2020-04-21 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, computer equipment and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN112801876A (en) | 2021-05-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11354785B2 (en) | Image processing method and device, storage medium and electronic device | |
| US12148123B2 (en) | Multi-stage multi-reference bootstrapping for video super-resolution | |
| US20220366193A1 (en) | Neural network model training method and device, and time-lapse photography video generating method and device | |
| US11615510B2 (en) | Kernel-aware super resolution | |
| CN111402139B (en) | Image processing method, apparatus, electronic device, and computer-readable storage medium | |
| CN112602088B (en) | Methods, systems and computer-readable media for improving the quality of low-light images | |
| CN110570356B (en) | Image processing methods and devices, electronic equipment and storage media | |
| CN112801876B (en) | Information processing method and device, electronic equipment and storage medium | |
| US20070237425A1 (en) | Image resolution increasing method and apparatus for the same | |
| CN111951165B (en) | Image processing method, device, computer equipment and computer readable storage medium | |
| CN111507333A (en) | Image correction method and device, electronic equipment and storage medium | |
| CN113628115B (en) | Image reconstruction processing method, device, electronic equipment and storage medium | |
| CN114155152B (en) | A real-time super-resolution reconstruction method and system based on historical feature fusion | |
| CN109862208A (en) | Video processing method, device and computer storage medium | |
| Jeong et al. | Multi-frame example-based super-resolution using locally directional self-similarity | |
| US10007970B2 (en) | Image up-sampling with relative edge growth rate priors | |
| CN108876716B (en) | Super-resolution reconstruction method and device | |
| CN114529456B (en) | Super-resolution processing method, device, equipment and medium for video | |
| US11842463B2 (en) | Deblurring motion in videos | |
| CN114897711A (en) | Method, device and equipment for processing images in video and storage medium | |
| CN118608387A (en) | Method, device and apparatus for super-resolution reconstruction of satellite video frames | |
| CN114445277B (en) | Depth Image Pixel Enhancement Method, Apparatus, and Computer-Readable Storage Medium | |
| CN116962880A (en) | A foreground anti-shake method and device based on deep learning image segmentation | |
| CN115731098A (en) | Video image processing method, network training method, electronic device, medium | |
| CN114140324A (en) | Video image processing method and device, storage medium and terminal equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||