CN114511811A - Video processing method, video processing device, electronic equipment and medium - Google Patents
- Publication number
- CN114511811A (application CN202210107911.9A)
- Authority
- CN
- China
- Prior art keywords
- sub
- feature
- coloring
- object segmentation
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06N3/04—Architecture, e.g. interconnection topology (G—Physics > G06—Computing or calculating; counting > G06N—Computing arrangements based on specific computational models > G06N3/00—Computing arrangements based on biological models > G06N3/02—Neural networks)
- G06N3/08—Learning methods (same hierarchy)
Abstract
The present disclosure provides a method for training a neural network model, a video processing method, an apparatus, an electronic device, and a medium, relating to the field of artificial intelligence and in particular to the fields of deep learning and computer vision. The scheme is implemented as follows: obtaining a plurality of first sample images and a color second sample image corresponding to the plurality of first sample images; inputting the plurality of first sample images into a feature extraction sub-network to obtain a first feature; inputting the second sample image into the feature extraction sub-network to obtain a second feature; inputting the first feature and the second feature into a pre-coloring sub-network to obtain a pre-coloring result; inputting the first feature into an object segmentation sub-network to obtain a plurality of object segmentation features; inputting the pre-coloring result and the plurality of object segmentation features into a final coloring sub-network to obtain a final coloring result; and adjusting parameters of the neural network model based at least on the final coloring result and the second sample image.
Description
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the fields of deep learning and computer vision, and specifically to a method for training a neural network model, a video processing method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Artificial intelligence is the discipline of making computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and involves both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include several major directions such as computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technology.
Image and video processing tasks are a practical application of deep learning algorithms. Using deep learning algorithms can greatly improve the efficiency of image and video processing.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be assumed to have been recognized in any prior art.
Summary of the Invention
The present disclosure provides a method for training a neural network model, a video processing method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, a method for training a neural network model is provided, wherein the neural network model includes a feature extraction sub-network, a pre-coloring sub-network, an object segmentation sub-network, and a final coloring sub-network. The method includes: acquiring a plurality of first sample images and a color second sample image corresponding to the plurality of first sample images, wherein each first sample image in the plurality of first sample images includes an object to be processed; inputting the plurality of first sample images into the feature extraction sub-network to obtain a first feature; inputting the second sample image into the feature extraction sub-network to obtain a second feature; inputting the first feature and the second feature into the pre-coloring sub-network to obtain a pre-coloring result predicted for at least one first sample image in the plurality of first sample images; inputting the first feature into the object segmentation sub-network to obtain a plurality of object segmentation features, wherein each object segmentation feature in the plurality of object segmentation features corresponds to a respective one of the plurality of first sample images; inputting the pre-coloring result and the plurality of object segmentation features into the final coloring sub-network to obtain a final coloring result; and adjusting parameters of the neural network model based at least on the final coloring result and the second sample image.
According to another aspect of the present disclosure, a video processing method is provided. The method includes: inputting a target coloring frame of a video to be colored, at least one frame adjacent to the target coloring frame, and a color reference image into a neural network model to obtain a final coloring result for the target coloring frame, wherein the neural network model is trained according to the method described above.
According to another aspect of the present disclosure, an apparatus for training a neural network model is provided, wherein the neural network model includes a feature extraction sub-network, a pre-coloring sub-network, an object segmentation sub-network, and a final coloring sub-network. The apparatus includes: a first unit configured to acquire a plurality of first sample images and a color second sample image corresponding to the plurality of first sample images, wherein each first sample image in the plurality of first sample images includes an object to be processed; a second unit configured to input the plurality of first sample images into the feature extraction sub-network to obtain a first feature, the second unit being further configured to input the second sample image into the feature extraction sub-network to obtain a second feature; a third unit configured to input the first feature and the second feature into the pre-coloring sub-network to obtain a pre-coloring result predicted for at least one first sample image in the plurality of first sample images; a fourth unit configured to input the first feature into the object segmentation sub-network to obtain a plurality of object segmentation features, wherein each object segmentation feature in the plurality of object segmentation features corresponds to a respective one of the plurality of first sample images; a fifth unit configured to input the pre-coloring result and the plurality of object segmentation features into the final coloring sub-network to obtain a final coloring result; and a sixth unit configured to adjust parameters of the neural network model based at least on the final coloring result and the second sample image.
According to another aspect of the present disclosure, a video processing apparatus is provided. The apparatus includes: a coloring unit configured to input a target coloring frame of a video to be colored, at least one frame adjacent to the target coloring frame, and a color reference image into a neural network model to obtain a final coloring result for the target coloring frame, wherein the neural network model is trained according to the method described above.
According to another aspect of the present disclosure, an electronic device is provided, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform any one of the methods described above.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions is provided, wherein the computer instructions are used to cause a computer to perform any one of the methods described above.
According to another aspect of the present disclosure, a computer program product is provided, including a computer program, wherein the computer program, when executed by a processor, implements any one of the methods described above.
According to one or more embodiments of the present disclosure, the "jitter" or "flicker" effect of a colored video in the time domain can be reduced or eliminated, thereby improving the quality of video coloring.
It should be understood that the content described in this section is not intended to identify key or critical features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become readily understood from the following description.
Brief Description of the Drawings
The accompanying drawings illustrate the embodiments by way of example, constitute a part of the specification, and together with the written description serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for illustrative purposes only and do not limit the scope of the claims. Throughout the drawings, the same reference numerals refer to similar but not necessarily identical elements.
FIG. 1 shows a schematic diagram of an exemplary system in which the various methods described herein may be implemented, according to embodiments of the present disclosure;
FIG. 2 shows a flowchart of a method for training a neural network model according to an embodiment of the present disclosure;
FIG. 3 shows a flowchart of part of an example process in the method of FIG. 2 according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a process of training a neural network model using the training method according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating a process of training a neural network model using the training method according to an embodiment of the present disclosure;
FIG. 6 shows a structural block diagram of an apparatus for training a neural network model according to an embodiment of the present disclosure;
FIG. 7 shows a structural block diagram of a video processing apparatus according to an embodiment of the present disclosure; and
FIG. 8 shows a structural block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments to facilitate understanding, and they should be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope of the present disclosure. Likewise, descriptions of well-known functions and structures are omitted from the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", and the like to describe various elements is not intended to limit the positional, temporal, or importance relationship of these elements; such terms are only used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, while in some cases they may refer to different instances based on the context of the description.
The terminology used in the description of the various examples in the present disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of an element is not specifically limited, the element may be one or more. Furthermore, the term "and/or" as used in the present disclosure covers any one of the listed items and all possible combinations thereof.
In the related art, when coloring a grayscale video, professionals usually fill in the color of each frame of the grayscale video through dedicated video editing software, which consumes a great deal of manpower and material resources and is inefficient. In other related art, deep-learning-based video coloring technology can be used to color a grayscale video with a neural network model obtained after learning on many coloring datasets; however, the coloring results obtained in this way are relatively fixed and cannot be colored according to the user's expectations. In still other related art, a grayscale video can be colored based on a reference image provided by the user. Although this can color the video according to the user's expectations to a certain extent, the coloring results for the same object in the colored video differ at different times. Therefore, when the colored video is played, the video exhibits a noticeable "jitter" or "flicker" effect in the time domain, reducing the quality of video coloring.
To solve the above problems, the present disclosure proposes a method for training a neural network model, which uses a plurality of first sample images to be colored and a corresponding color second sample image as training samples and obtains a corresponding object segmentation feature for each first sample image. The object segmentation features can guide the neural network model to pay more attention to the objects to be colored in the plurality of first sample images. On the basis of coloring the image to be colored with a reference image, a neural network model trained in this way pays more attention to the objects to be colored in the image, so that the coloring results for the same object at different times in the video formed by the colored images are as similar as possible, thereby reducing or eliminating the "jitter" or "flicker" effect of the colored video in the time domain and improving the quality of video coloring.
In the present disclosure, a "sub-network" of the neural network is not necessarily a network structure based on layers composed of neurons; the data, features, and the like input to the sub-network may also be processed through other network structures and processing methods, which are not limited here.
In the technical solutions of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information involved are all in compliance with relevant laws and regulations and do not violate public order and good customs.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
FIG. 1 shows a schematic diagram of an exemplary system 100 in which the various methods and apparatuses described herein may be implemented according to embodiments of the present disclosure. Referring to FIG. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 that couple the one or more client devices to the server 120. The client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable the model training method or the video processing method to be performed.
In some embodiments, the server 120 may also provide other services or software applications, which may include non-virtual and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example provided to users of the client devices 101, 102, 103, 104, 105, and/or 106 under a software-as-a-service (SaaS) model.
In the configuration shown in FIG. 1, the server 120 may include one or more components that implement the functions performed by the server 120. These components may include software components, hardware components, or combinations thereof executable by one or more processors. Users operating the client devices 101, 102, 103, 104, 105, and/or 106 may in turn use one or more client applications to interact with the server 120 to use the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from the system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
A user may use the client devices 101, 102, 103, 104, 105, and/or 106 to provide the video to be colored and the reference image. A client device may provide an interface that enables the user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although FIG. 1 depicts only six client devices, those skilled in the art will understand that the present disclosure can support any number of client devices.
The client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general-purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart-screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors, or other sensing devices. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, and Linux or Linux-like operating systems (such as GOOGLE Chrome OS), or may include various mobile operating systems, such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular phones, smartphones, tablet computers, personal digital assistants (PDAs), and the like. Wearable devices may include head-mounted displays (such as smart glasses) and other devices. Gaming systems may include various handheld gaming devices, Internet-enabled gaming devices, and the like. A client device can execute a variety of different applications, such as various Internet-related applications, communication applications (such as e-mail applications), and short message service (SMS) applications, and can use various communication protocols.
The network 110 may be any type of network well known to those skilled in the art, which may use any of a variety of available protocols (including but not limited to TCP/IP, SNA, IPX, and the like) to support data communication. By way of example only, the one or more networks 110 may be a local area network (LAN), an Ethernet-based network, a token ring, a wide area network (WAN), the Internet, a virtual network, a virtual private network (VPN), an intranet, an extranet, a public switched telephone network (PSTN), an infrared network, a wireless network (for example Bluetooth or WiFi), and/or any combination of these and/or other networks.
The server 120 may include one or more general-purpose computers, dedicated server computers (for example PC (personal computer) servers, UNIX servers, and midrange servers), blade servers, mainframe computers, server clusters, or any other appropriate arrangement and/or combination. The server 120 may include one or more virtual machines running virtual operating systems, or other computing architectures involving virtualization (for example one or more flexible pools of logical storage devices that can be virtualized to maintain virtual storage devices of the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functions described below.
A computing unit in the server 120 may run one or more operating systems, including any of the operating systems described above as well as any commercially available server operating system. The server 120 may also run any of a variety of additional server applications and/or middle-tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 120 may include one or more applications, for example applications for services such as object detection and recognition and signal conversion based on data such as images, videos, speech, text, and digital signals, to process task requests such as voice interaction, text classification, image recognition, or keypoint detection received from the client devices 101, 102, 103, 104, 105, and/or 106. The server may train a neural network model with training samples according to a specific deep learning task, may test each sub-network in a super-network module of the neural network model, and may determine, according to the test results of each sub-network, the structure and parameters of the neural network model used to perform the deep learning task. Various kinds of data, such as image data, audio data, video data, or text data, may be used as training sample data for the deep learning task. After the training of the neural network model is completed, the server 120 may also automatically search for an optimal model structure through model search technology to perform the corresponding task.
In some implementations, the server 120 may be a server of a distributed system, or a server combined with a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in the cloud computing service system that addresses the defects of difficult management and weak business scalability found in traditional physical host and virtual private server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The databases 130 may reside in various locations. For example, a database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The databases 130 may be of different types. In some embodiments, the database used by the server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to commands.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by applications may be of different types, for example key-value stores, object stores, or conventional stores backed by a file system.
The system 100 of FIG. 1 may be configured and operated in various ways to enable the application of the various methods and apparatuses described in accordance with the present disclosure.
FIG. 2 shows a flowchart of a method 200 for training a neural network model according to an embodiment of the present disclosure, wherein the neural network model includes a feature extraction sub-network, a pre-coloring sub-network, an object segmentation sub-network, and a final coloring sub-network. The method 200 includes:
Step S210: acquiring a plurality of first sample images and a color second sample image corresponding to the plurality of first sample images, wherein each first sample image in the plurality of first sample images includes an object to be processed;
Step S220: inputting the plurality of first sample images into the feature extraction sub-network to obtain a first feature;
Step S230: inputting the second sample image into the feature extraction sub-network to obtain a second feature;
Step S240: inputting the first feature and the second feature into the pre-coloring sub-network to obtain a pre-coloring result predicted for at least one first sample image in the plurality of first sample images;
Step S250: inputting the first feature into the object segmentation sub-network to obtain a plurality of object segmentation features, wherein each object segmentation feature in the plurality of object segmentation features corresponds to a respective one of the plurality of first sample images;
Step S260: inputting the pre-coloring result and the plurality of object segmentation features into the final coloring sub-network to obtain a final coloring result; and
Step S270: adjusting parameters of the neural network model based at least on the final coloring result and the second sample image.
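The flow of steps S210 to S270 can be sketched as a single training step. The sketch below is illustrative only: the sub-networks are replaced by hypothetical placeholder functions (`feature_extract`, `pre_color`, `segment`, `final_color`) that are not the disclosed architecture, and the loss is a plain mean-squared error stand-in for whichever loss the model actually uses.

```python
import numpy as np

# Hypothetical placeholder "sub-networks": plain functions standing in for
# the trainable modules named in steps S220-S260.
def feature_extract(images):
    return np.mean(images, axis=0)            # stand-in for the shared backbone

def pre_color(first_feature, second_feature):
    return first_feature + second_feature     # stand-in for the pre-coloring sub-network

def segment(first_feature, num_frames):
    # One segmentation feature per first sample image (S250).
    return [(first_feature > first_feature.mean()).astype(float)
            for _ in range(num_frames)]

def final_color(pre_result, seg_features):
    return pre_result * np.mean(seg_features, axis=0)

def training_step(first_samples, second_sample):
    f1 = feature_extract(first_samples)        # S220: first feature
    f2 = feature_extract(second_sample[None])  # S230: second feature
    pre = pre_color(f1, f2)                    # S240: pre-coloring result
    segs = segment(f1, len(first_samples))     # S250: object segmentation features
    out = final_color(pre, segs)               # S260: final coloring result
    loss = np.mean((out - second_sample) ** 2) # S270: drives the parameter update
    return out, loss

first_samples = np.random.rand(3, 8, 8)        # three grayscale frames (toy data)
second_sample = np.random.rand(8, 8)           # color reference (toy, single-channel)
out, loss = training_step(first_samples, second_sample)
print(out.shape)
```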
The method 200 uses a plurality of first sample images to be colored and a corresponding color second sample image as training samples and obtains a corresponding object segmentation feature for each first sample image. The object segmentation features can guide the neural network model to pay more attention to the objects to be colored in the plurality of first sample images. On the basis of coloring the image to be colored with a reference image, a neural network model trained in this way pays more attention to the objects to be colored in the image, so that the coloring results for the same object at different times in the video formed by the colored images are as similar as possible, thereby reducing or eliminating the "jitter" or "flicker" effect of the colored video in the time domain and improving the quality of video coloring.
It should be understood that the "objects" in the first sample images and the second sample image may be images of instances or physical objects included in the images.
According to some embodiments, each first sample image in the plurality of first sample images may include the same kinds and the same number of objects, and the second sample image may include color reference objects corresponding to the objects in each first sample image.
Among the training samples used, the plurality of first sample images and the second sample image include objects of the same kinds and number, and the objects included in the second sample image are colored objects. In other words, the objects in the second sample image differ from the objects in the plurality of first sample images only in the colors of the included objects. This can improve the quality of the pre-coloring of the first sample images by the trained neural network model, thereby improving the final coloring quality of the first sample images.
According to some embodiments, the second sample image may be a color image in the Lab color space.
Unlike the RGB or CMYK color spaces, the Lab color space consists of three components: one component is the lightness L, while a and b are two color channels. The colors covered by a run from dark green (low value) through gray (middle value) to bright pink (high value); b runs from bright blue (low value) through gray (middle value) to yellow (high value). It can be seen that using the Lab color space to represent the second sample image requires only the features of two channels (that is, the a and b color channel features described above) and does not require the features of the lightness channel L (because, as a reference image used for reference coloring, only the color information of the second sample image is needed, not its lightness information). This reduces the number of feature channels during model computation, thereby improving the speed of model training. In addition, compared with the RGB color space, the Lab color space represents color more linearly, which reduces the difficulty of model training, further improving the efficiency of model training, and also yields higher coloring quality when the trained model is used to color images.
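As a concrete illustration of why a Lab reference contributes only two channels, the sketch below (not part of the disclosure; a standard sRGB-to-Lab conversion under the D65 white point) converts a toy reference image and keeps only its a and b channels:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB image (H, W, 3), values in [0, 1], to Lab (D65 white point)."""
    # Linearize the sRGB transfer function.
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (D65).
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = linear @ m.T
    # Normalize by the D65 white point.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

# A reference image only needs to contribute its two color channels:
ref_rgb = np.ones((4, 4, 3)) * 0.5   # toy mid-gray reference image
ref_lab = rgb_to_lab(ref_rgb)
ref_ab = ref_lab[..., 1:]            # drop L; keep only the (a, b) channels
print(ref_ab.shape)                  # (4, 4, 2): two channels instead of three
```

For a gray image, a and b are (up to rounding of the matrix) zero, matching the description of gray as the middle of both color channels.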
According to some embodiments, the plurality of first sample images may be a plurality of consecutive frames in the video to be colored. Since the positions of the included objects may change between adjacent frames in a plurality of consecutive frames, using a plurality of consecutive frames in the video to be colored as the plurality of first sample images makes the training samples better match the actual inputs when the model is used. A model trained on this basis can further improve the coloring effect over consecutive frames of the video to be colored, thereby reducing or eliminating the "jitter" or "flicker" effect of the colored video in the time domain and improving the quality of video coloring.
According to some embodiments, the plurality of consecutive frames may be 3 consecutive frames, and obtaining the pre-coloring result predicted for at least one first sample image in the plurality of first sample images may include: obtaining the pre-coloring result predicted for the middle frame of the 3 frames.
Thus, 3 consecutive frames are used as the plurality of first sample images, while the pre-coloring result is predicted only for the middle frame, so that the final coloring result the model predicts for the middle frame can pay more attention to the objects in the frames adjacent to it. In the final coloring result, the coloring of objects in the middle frame is then more consistent with the coloring of objects in adjacent frames; that is, the coloring results for the same object in 3 adjacent frames of the colored video are similar, thereby reducing or eliminating the "jitter" or "flicker" effect of the video in the time domain and improving the quality of video coloring.
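The 3-frame arrangement above can be illustrated with a simple sliding window (an illustrative sketch, not from the disclosure): each interior frame of the video appears once as the middle frame of a 3-frame window, and only that middle frame is colored per window.

```python
import numpy as np

def middle_frame_windows(video):
    """Yield (three consecutive frames, index of the middle frame) pairs."""
    for t in range(1, len(video) - 1):
        # Frames t-1, t, t+1 are input together; only frame t gets colored.
        yield video[t - 1:t + 2], t

video = [np.zeros((2, 2)) + t for t in range(5)]   # toy 5-frame grayscale video
targets = [t for _, t in middle_frame_windows(video)]
print(targets)   # [1, 2, 3]: every interior frame is a middle frame exactly once
```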
In some examples, in step S220 and step S230, existing image feature extraction backbone networks such as VGG, ResNet50, and ResNet101, or combinations thereof, may be used to extract features from the plurality of first sample images and the second sample image.
In some examples, a custom feature extraction sub-network may be used. For example, the feature extraction sub-network may first extract features from the plurality of first sample images separately to obtain a plurality of extracted features, and may further merge the plurality of extracted features to obtain one first feature.
In some examples, in step S240, a non-local network may be used as the pre-coloring sub-network to pre-color at least one first sample image in the plurality of first sample images. It should be understood that one, two, or more of the plurality of first sample images may be pre-colored; accordingly, one, two, or more final coloring results may be obtained based on the pre-coloring results.
According to some embodiments, in step S260, inputting the pre-coloring result and the multiple object segmentation features into the final coloring sub-network may include: merging the pre-coloring result and the multiple object segmentation features along the feature channel dimension; and feeding the merged result into the final coloring sub-network to obtain the final coloring result.

By first merging the pre-coloring result and the multiple object segmentation features along the feature channel dimension to obtain a feature block, and then feeding the merged feature block into the final coloring sub-network, the input of the final coloring sub-network is simplified and the computational efficiency of the model is improved.

It should be understood that merging along the feature channel dimension means summing the numbers of feature channels while leaving the height and width of the merged features unchanged.
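For example, a channel-dimension merge can be sketched in NumPy as below; the concrete channel counts (a two-channel ab pre-coloring result and three single-channel masks) are illustrative assumptions:

```python
import numpy as np

h, w = 32, 32
precolor = np.random.rand(h, w, 2)   # predicted a/b chroma channels
masks = np.zeros((h, w, 3))          # one segmentation mask per frame
masks[8:24, 8:24, :] = 1.0           # a segmented object region

# Merge along the channel dimension: height and width are unchanged,
# channel counts add up (2 + 3 = 5).
merged = np.concatenate([precolor, masks], axis=2)
print(merged.shape)  # (32, 32, 5)
```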
According to some embodiments, the object segmentation feature may be an object segmentation mask. The mask may consist of 0s and 1s: a 1 marks a pixel of the image belonging to a segmented object, and a 0 marks a pixel outside any segmented object (for example, the background). With the segmentation mask, the final coloring sub-network can precisely apply the final coloring to the object pixels marked with 1, without mistakenly applying the same coloring to the surrounding pixels marked with 0 (for example, the background). The final coloring result is therefore more accurate across adjacent frames, which reduces or eliminates the "jitter" effect in the colorized video.
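A toy illustration of how a 0/1 mask confines chroma to the object region; the names and shapes here are hypothetical, and the real sub-network learns this gating rather than applying a hard multiply:

```python
import numpy as np

h, w = 16, 16
mask = np.zeros((h, w))      # 1 = object pixel, 0 = background
mask[4:12, 4:12] = 1.0

predicted_ab = np.full((h, w, 2), 0.7)   # chroma the network would assign

# Apply the chroma only where the mask is 1; background keeps zero chroma.
colored_ab = predicted_ab * mask[:, :, None]

print(colored_ab[8, 8, 0])   # inside the object: 0.7
print(colored_ab[0, 0, 0])   # background: 0.0
```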
FIG. 3 shows a flowchart of an example sub-process (i.e., step S270) of the method 200 of FIG. 2. As shown in FIG. 3, according to some embodiments, adjusting the parameters of the neural network model based at least on the final coloring result and the second sample image may include:

S371: computing a first loss value based on the final coloring result and the second sample image;

S372: computing a second loss value based at least on the object segmentation features; and

S373: adjusting the parameters of the neural network model based on the first loss value and the second loss value.

In this way, the model parameters are adjusted not only according to the first loss value, obtained from the final coloring result and the second sample image, but also according to the second loss value, obtained at least from the object segmentation features. The parameters can thus be optimized specifically for the object segmentation features, so that the trained model pays more attention to them, further improving the final coloring quality.

It can be understood that the second loss value may be computed from the object segmentation features and the corresponding ground truth.
According to an embodiment of the present disclosure, a video processing method is also provided, comprising: inputting a target frame of the video to be colorized, at least one frame adjacent to the target frame, and a color reference image into a neural network model to obtain a final coloring result for the target frame, wherein the neural network model is trained by the neural network model training method described above.

Thus, by colorizing video frames with a model trained by the above training method, the coloring of objects in the target frame is more consistent with the coloring of the same objects in adjacent frames; that is, the same object in the colorized video receives similar colors in the target frame and its neighbors, which reduces or eliminates the temporal "jitter" or "flicker" effect and improves the quality of video colorization.
The method for training a neural network model according to an embodiment of the present disclosure is further described below with reference to FIG. 4 and FIG. 5.

FIG. 4 and FIG. 5 are schematic diagrams illustrating the process of training a neural network model with the training method according to an embodiment of the present disclosure.
As shown in FIG. 4, the neural network model 400 includes a feature extraction sub-network 410, a pre-coloring sub-network 420, an object segmentation sub-network 430, and a final coloring sub-network 440. In addition, FIG. 5 shows a pre-coloring sub-network 520, an object segmentation sub-network 530, and a final coloring sub-network 540, which are similar to the pre-coloring sub-network 420, the object segmentation sub-network 430, and the final coloring sub-network 440 of FIG. 4, respectively.

First, multiple first sample images 401, 402, 403 (for example, three consecutive frames of the video to be colorized, i.e., frames t-1, t, and t+1) and a corresponding color second sample image 404 (for example, a reference image) may be acquired. Each of the first sample images 401, 402, 403 contains the same kinds and the same numbers of objects (i.e., frames t-1, t, and t+1 each contain a dolphin, a toy, and an arm), and the second sample image 404 (the reference image) contains color reference objects in one-to-one correspondence with the objects in each first sample image (for example, a colored dolphin, toy, and arm). Here, the second sample image is a color image in the Lab color space.

The acquired first sample images and the color second sample image are input into the feature extraction sub-network 410 of the neural network model 400 to extract features from each of them. For example, for each of the three first sample images, a feature map of dimensions (H1, W1, Cx) may be extracted, and the three (H1, W1, Cx) feature maps may be merged into a first feature of dimensions (H1, W1, C), where H and W denote the height and width of the feature map and C denotes the number of feature channels. For the second sample image, a second feature of dimensions (H2, W2, C) may be extracted.
After the first feature and the second feature are obtained, they can be input into the pre-coloring sub-network 420. Continuing with FIG. 5, the first feature and the second feature are input into the pre-coloring sub-network 520. To simplify computation, the two features may first be flattened into a first flattened feature 501-1 of dimensions ((H1*W1), C) and a second flattened feature 501-2 of dimensions ((H2*W2), C). The second flattened feature 501-2 may then be transposed into a feature matrix of dimensions (C, (H2*W2)). The two may then be multiplied, for example by matrix multiplication, to obtain a similarity matrix 502 (((H1*W1), C) * (C, (H2*W2))), which represents the similarity mapping between each pixel of the frame to be colorized (for example, frame t) and each pixel of the reference image.

The pre-coloring sub-network may then sample the color information of each pixel of the reference image according to the similarity matrix 502 (for example, sampling the a and b channels of the reference image) and map the sampled color information onto the frame to be colorized, yielding the pre-coloring result. Specifically, the similarity matrix 502 may be multiplied by the Lab map of the reference image, i.e., ((H1*W1), (H2*W2)) * ((H2*W2), C) yields ((H1*W1), C), which is then reshaped into (H1, W1, C) to give the pre-coloring result.
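The flatten/transpose/matmul pipeline above can be sketched as follows (NumPy, illustrative shapes). The row-wise softmax is an assumption: the patent only specifies sampling "according to the similarity matrix", and normalizing each row into weights is one common way to do that in non-local attention:

```python
import numpy as np

h1, w1 = 8, 8    # frame to be colorized (feature resolution)
h2, w2 = 6, 6    # reference image (feature resolution)
c = 16           # feature channels

first = np.random.rand(h1, w1, c)    # feature of the frame to color
second = np.random.rand(h2, w2, c)   # feature of the color reference
ref_ab = np.random.rand(h2, w2, 2)   # a/b chroma of the reference

# Flatten to ((H1*W1), C) and ((H2*W2), C), transpose the second,
# then multiply to get the ((H1*W1), (H2*W2)) similarity matrix.
f1 = first.reshape(h1 * w1, c)
f2 = second.reshape(h2 * w2, c)
sim = f1 @ f2.T

# Normalize each row into sampling weights over reference pixels.
weights = np.exp(sim - sim.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)

# Sample reference chroma: ((H1*W1),(H2*W2)) @ ((H2*W2), 2) -> ((H1*W1), 2),
# then reshape back to (H1, W1, 2) to obtain the pre-coloring result.
precolor = (weights @ ref_ab.reshape(h2 * w2, 2)).reshape(h1, w1, 2)
print(precolor.shape)  # (8, 8, 2)
```

Since each output pixel is a convex combination of reference chroma values, the pre-colored chroma stays within the range of the reference chroma.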
Further, referring to FIG. 5, the pre-coloring result needs to be refined. The first feature may therefore be input into the object segmentation sub-network 530 to obtain multiple object segmentation features (for example, the object segmentation masks mentioned above), where each of the multiple object segmentation features corresponds to a respective one of the multiple first sample images. Specifically, for frames t-1, t, and t+1, the first feature may be read out as a three-channel object segmentation mask.

The pre-coloring result and the multiple object segmentation features are then input into the final coloring sub-network 540 to obtain the final coloring result.
Continuing with FIG. 4, the parameters of the neural network model may be adjusted based at least on the final coloring result and the second sample image. Specifically, a first loss value may be computed based on the final coloring result and the second sample image 404 (i.e., the reference image), and a second loss value may be computed based on the object segmentation features and the corresponding object segmentation ground truth 405. Here, the first loss value may be the L1 loss between the final coloring result and the reference image in the ab color space. In this way, the model parameters are adjusted not only according to the first loss value but also according to the second loss value obtained at least from the object segmentation features, so that the parameters are optimized specifically for the object segmentation features. The trained model thus pays more attention to object segmentation, making the coloring of the same object as consistent as possible across time in the colorized video, which reduces or eliminates the temporal "jitter" or "flicker" effect and improves the quality of video colorization.
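The two-term training loss described above can be sketched as follows (NumPy). The L1-in-ab form of the first loss is from the text; the binary cross-entropy form of the second loss and the 0.5 weighting are assumptions for illustration, since the patent leaves them open:

```python
import numpy as np

def l1_ab(final_ab, ref_ab):
    """First loss: L1 between the final coloring result and the
    reference image in the ab color space."""
    return np.abs(final_ab - ref_ab).mean()

def bce(mask_pred, mask_gt, eps=1e-7):
    """Second loss: compares predicted segmentation masks to the
    segmentation ground truth (BCE chosen here for illustration)."""
    p = np.clip(mask_pred, eps, 1 - eps)
    return -(mask_gt * np.log(p) + (1 - mask_gt) * np.log(1 - p)).mean()

final_ab = np.random.rand(32, 32, 2)
ref_ab = np.random.rand(32, 32, 2)
mask_pred = np.random.rand(32, 32, 3)
mask_gt = (np.random.rand(32, 32, 3) > 0.5).astype(float)

loss1 = l1_ab(final_ab, ref_ab)
loss2 = bce(mask_pred, mask_gt)
total = loss1 + 0.5 * loss2   # hypothetical weighting of the two terms
```

Both terms are non-negative, and the combined value is what a gradient-based optimizer would minimize when adjusting the model parameters.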
FIG. 6 shows a structural block diagram of an apparatus 600 for training a neural network model according to an embodiment of the present disclosure. The neural network model includes a feature extraction sub-network, a pre-coloring sub-network, an object segmentation sub-network, and a final coloring sub-network. As shown in FIG. 6, the apparatus 600 includes:

a first unit 610 configured to acquire multiple first sample images and a color second sample image corresponding to the multiple first sample images, wherein each of the multiple first sample images contains an object to be processed;

a second unit 620 configured to input the multiple first sample images into the feature extraction sub-network to obtain a first feature;

the second unit 620 being further configured to input the second sample image into the feature extraction sub-network to obtain a second feature;

a third unit 630 configured to input the first feature and the second feature into the pre-coloring sub-network to obtain a pre-coloring result predicted for at least one of the multiple first sample images;

a fourth unit 640 configured to input the first feature into the object segmentation sub-network to obtain multiple object segmentation features, wherein each of the multiple object segmentation features corresponds to a respective one of the multiple first sample images;

a fifth unit 650 configured to input the pre-coloring result and the multiple object segmentation features into the final coloring sub-network to obtain a final coloring result; and

a sixth unit 660 configured to adjust the parameters of the neural network model based at least on the final coloring result and the second sample image.
According to some embodiments, the sixth unit 660 may be further configured to: compute a first loss value based on the final coloring result and the second sample image; compute a second loss value based at least on the object segmentation features; and adjust the parameters of the neural network model based on the first loss value and the second loss value.

According to some embodiments, the fifth unit 650 may be further configured to: merge the pre-coloring result and the multiple object segmentation features along the feature channel dimension; and input the merged result into the final coloring sub-network to obtain the final coloring result.

According to some embodiments, the multiple first sample images may be consecutive frames of the video to be colorized.

According to some embodiments, the consecutive frames may be three consecutive frames, and the third unit may be further configured to obtain a pre-coloring result predicted for the middle frame of the three frames.

According to some embodiments, the object segmentation features may be object segmentation masks.

According to some embodiments, each of the multiple first sample images may contain the same kinds and the same numbers of objects, and the second sample image may contain color reference objects corresponding to the objects in each first sample image.

According to some embodiments, the second sample image may be a color image in the Lab color space.
FIG. 7 shows a structural block diagram of a video processing apparatus 700 according to an embodiment of the present disclosure.

As shown in FIG. 7, the apparatus 700 includes a coloring unit 710 configured to input a target frame of the video to be colorized, at least one frame adjacent to the target frame, and a color reference image into a neural network model to obtain a final coloring result for the target frame, wherein the neural network model is trained by the method described above.
The present disclosure also provides an electronic device comprising: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform any of the methods described above.

The present disclosure also provides a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform any of the methods described above.

The present disclosure also provides a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements any of the methods described above.
Referring to FIG. 8, a structural block diagram of an electronic device 800 that can serve as the server or client of the present disclosure will now be described; it is an example of a hardware device that can be applied to aspects of the present disclosure. Electronic devices are intended to represent various forms of digital electronic computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely examples and are not intended to limit the implementations of the present disclosure described and/or claimed herein.

As shown in FIG. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or loaded from a storage unit 808 into a random access memory (RAM) 803. The RAM 803 may also store various programs and data required for the operation of the device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to one another via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.

Multiple components of the device 800 are connected to the I/O interface 805, including an input unit 806, an output unit 807, the storage unit 808, and a communication unit 809. The input unit 806 may be any type of device capable of inputting information to the device 800; it may receive input numeric or character information, generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touchscreen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 807 may be any type of device capable of presenting information and may include, but is not limited to, a display, a speaker, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 808 may include, but is not limited to, magnetic disks and optical discs. The communication unit 809 allows the device 800 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver, and/or a chipset, such as a Bluetooth(TM) device, an 802.11 device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.

The computing unit 801 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, etc. The computing unit 801 performs the various methods and processes described above, such as the method 200. For example, in some embodiments, the method according to an embodiment of the present disclosure may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured, by any other appropriate means (for example, by means of firmware), to perform the method according to an embodiment of the present disclosure.
Various implementations of the systems and techniques described herein above may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.

In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having a display device (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and pointing device (for example, a mouse or trackball) through which the user can provide input to the computer. Other kinds of devices may also be used to provide interaction with the user; for example, feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form (including acoustic input, voice input, or tactile input).

The systems and techniques described herein may be implemented in a computing system that includes a back-end component (for example, as a data server), or a computing system that includes a middleware component (for example, an application server), or a computing system that includes a front-end component (for example, a user computer with a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected by digital data communication in any form or medium (for example, a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), and the Internet.

A computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises from computer programs running on the respective computers and having a client-server relationship with each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the results desired by the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.

Although the embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it should be understood that the methods, systems, and devices described above are merely exemplary embodiments or examples, and the scope of the invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced by their equivalents. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, the various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.
Claims (21)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210107911.9A CN114511811A (en) | 2022-01-28 | 2022-01-28 | Video processing method, video processing device, electronic equipment and medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN114511811A true CN114511811A (en) | 2022-05-17 |
Family
ID=81551818
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210107911.9A Pending CN114511811A (en) | 2022-01-28 | 2022-01-28 | Video processing method, video processing device, electronic equipment and medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114511811A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118052893A (en) * | 2023-12-19 | 2024-05-17 | 江西泰豪动漫职业学院 | A video coloring method, device, equipment and storage medium |
| WO2025260926A1 (en) * | 2024-06-20 | 2025-12-26 | 阿里巴巴(中国)有限公司 | Video processing method and apparatus, and sports video processing method and apparatus |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140241625A1 (en) * | 2013-02-27 | 2014-08-28 | Takeshi Suzuki | Image processing method, image processing apparatus, and computer program product |
| CN104044352A (en) * | 2014-06-18 | 2014-09-17 | 浙江工业大学 | Automatic coloring method and device based on image segmentation |
| CN107770618A (en) * | 2017-11-02 | 2018-03-06 | 腾讯科技(深圳)有限公司 | A kind of image processing method, device and storage medium |
| CN110070551A (en) * | 2019-04-29 | 2019-07-30 | 北京字节跳动网络技术有限公司 | Rendering method, device and the electronic equipment of video image |
| CN111476863A (en) * | 2020-04-02 | 2020-07-31 | 北京奇艺世纪科技有限公司 | Method and device for coloring black and white cartoon, electronic equipment and storage medium |
| CN111783986A (en) * | 2020-07-02 | 2020-10-16 | 清华大学 | Network training method and device, attitude prediction method and device |
| CN113177451A (en) * | 2021-04-21 | 2021-07-27 | 北京百度网讯科技有限公司 | Training method and device of image processing model, electronic equipment and storage medium |
| CN113411550A (en) * | 2020-10-29 | 2021-09-17 | 腾讯科技(深圳)有限公司 | Video coloring method, device, equipment and storage medium |
| CN113888560A (en) * | 2021-09-29 | 2022-01-04 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for processing image |
Non-Patent Citations (3)
| Title |
|---|
| YIHAO LIU, HENGYUAN ZHAO, KELVIN C.K. CHAN, XINTAO WANG, CHEN CHANGE LOY, YU QIAO, CHAO DONG: "Temporally Consistent Video Colorization with Deep Feature Propagation and Self-regularization Learning", ARXIV, 9 October 2021 (2021-10-09) * |
| HE SHAN; FANG LI; ZHANG ZHENG: "Image colorization method based on an improved region-based fully convolutional neural network and joint bilateral filtering", Laser & Optoelectronics Progress, no. 12, 25 June 2020 (2020-06-25) * |
| ZHANG NA; QIN PINLE; ZENG JIANCHAO; LI QI: "Grayscale image colorization algorithm based on dense neural networks", Journal of Computer Applications, no. 06, 21 January 2019 (2019-01-21) * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN113807440B (en) | Method, apparatus, and medium for processing multimodal data using neural networks | |
| US20220301108A1 (en) | Image quality enhancing | |
| CN115147558B (en) | Three-dimensional reconstruction model training method, three-dimensional reconstruction method and device | |
| CN114511758A (en) | Image recognition method and device, electronic device and medium | |
| CN114648638A (en) | Semantic segmentation model training method, semantic segmentation method and device | |
| CN115438214B (en) | Method and device for processing text image and training method of neural network | |
| WO2023221422A1 (en) | Neural network used for text recognition, training method thereof and text recognition method | |
| CN113642740B (en) | Model training method and device, electronic device and medium | |
| CN114550313A (en) | Image processing method, neural network and its training method, equipment and medium | |
| CN116306862A (en) | Training method, apparatus and medium for text processing neural network | |
| CN114511811A (en) | Video processing method, video processing device, electronic equipment and medium | |
| CN115511779A (en) | Image detection method, device, electronic device and storage medium | |
| CN112561059B (en) | Method and apparatus for model distillation | |
| CN117273107A (en) | Training method and training device for text generation model | |
| CN114693977B (en) | Image processing method, model training method, device, equipment and medium | |
| CN115601555A (en) | Image processing method and apparatus, device and medium | |
| CN114119935B (en) | Image processing methods and devices | |
| CN114120420B (en) | Image detection method and device | |
| CN114429678A (en) | Model training method and device, electronic device and medium | |
| CN114330576A (en) | Model processing method, device, image recognition method and device | |
| CN114494524A (en) | Video processing method, apparatus, electronic device and medium | |
| CN116612200A (en) | Image processing method, device, equipment and medium | |
| CN114998403A (en) | Depth prediction method, device, electronic device, medium | |
| CN116310626A (en) | Model training and image processing method, device, equipment and storage medium | |
| CN115564992A (en) | Image classification method and training method of image classification model |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| AD01 | Patent right deemed abandoned | | Effective date of abandoning: 20250307 |