Disclosure of Invention
The invention aims to provide an image processing method, an image processing device, an electronic device and a computer readable storage medium based on fluorescence imaging, which can process image data captured by two different cameras in parallel, improve the overall throughput, and at the same time reduce the processing delay of the third main board.
To achieve the purpose, the invention adopts the following technical scheme:
The invention provides an image processing method based on fluorescence imaging, which comprises the following steps:
acquiring, by a first main board, a binocular visible light image of an application scene captured by a first camera, and performing image processing on the binocular visible light image to obtain a binocular visible light processed image, wherein one path of the binocular visible light processed image is sent to a third main board and the other path is available for video output;
acquiring, by a second main board, a binocular fluorescence image of the application scene captured by a second camera, and performing image processing on the binocular fluorescence image to obtain a binocular fluorescence processed image, wherein one path of the binocular fluorescence processed image is sent to the third main board and the other path is available for video output;
and receiving, by the third main board, the binocular visible light processed image and the binocular fluorescence processed image and fusing them, wherein the fused fluorescence binocular image can be output for 3D display, or the fluorescence binocular image can be segmented and the resulting fluorescence monocular image output for 2D display.
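The parallel three-board flow recited above can be illustrated in software terms. The sketch below is a minimal illustration only, with placeholder arithmetic standing in for the actual image processing and fusion; it is not the claimed hardware implementation.

```python
import threading
import queue

def board_process(frames, out_q):
    # Each "board" independently processes its own camera's frames
    # and forwards one path to the fusion stage (the third board).
    for f in frames:
        out_q.put(f * 2)  # placeholder for the board's image processing

# The first and second boards work simultaneously on different cameras.
vis_q, fluo_q = queue.Queue(), queue.Queue()
t1 = threading.Thread(target=board_process, args=([1, 2, 3], vis_q))
t2 = threading.Thread(target=board_process, args=([4, 5, 6], fluo_q))
t1.start(); t2.start()
t1.join(); t2.join()

# "Third board": pair the visible and fluorescence frames and fuse them.
fused = []
while not vis_q.empty() and not fluo_q.empty():
    fused.append((vis_q.get() + fluo_q.get()) / 2)  # placeholder fusion
```

Because each producer runs on its own thread, the two streams are processed concurrently and the fusion stage only pairs up already-processed frames.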
Further, the image processing comprises at least one of the following: 3D stereoscopic cropping, white balance, image quality adjustment, and noise reduction.
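As an illustration of one of the listed operations, a gray-world white balance can be sketched as follows. This is a generic textbook method offered as an assumption; the application does not specify which white balance algorithm the boards use.

```python
def gray_world_white_balance(pixels):
    """Gray-world white balance: scale each channel so that its mean
    matches the overall mean. `pixels` is a list of (r, g, b) tuples."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3
    gains = [target / m if m else 1.0 for m in means]
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]

# After correction, the red, green and blue channel means coincide.
balanced = gray_world_white_balance([(200, 100, 100), (180, 120, 80)])
```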
Further, the step of acquiring the binocular visible light image under the application scene acquired by the first camera includes:
acquiring an MIPI digital signal captured by the first camera;
encoding the MIPI digital signal into a serial digital signal through a serializer chip;
performing long-distance coaxial transmission of the serial digital signal;
decoding the serial digital signal back into an MIPI digital signal through a deserializer chip;
and performing video input processing on the MIPI digital signal to generate the binocular visible light image in a video format.
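The serialize-transmit-deserialize chain can be modeled abstractly: the parallel MIPI lanes are interleaved into one serial stream for the coaxial link, then de-interleaved at the far end. The toy model below only illustrates that the two operations are inverses; it is not the actual line protocol of any serializer chip.

```python
def serialize(lanes):
    # Interleave the parallel lanes into a single serial stream.
    return [b for group in zip(*lanes) for b in group]

def deserialize(stream, n_lanes):
    # De-interleave the serial stream back into parallel lanes.
    return [stream[i::n_lanes] for i in range(n_lanes)]

lanes = [[1, 2, 3], [4, 5, 6]]      # e.g. two MIPI data lanes
stream = serialize(lanes)            # sent over the coaxial line
recovered = deserialize(stream, len(lanes))
```

Round-tripping through `serialize` and `deserialize` recovers the original lanes, which is the property the serializer/deserializer pair must guarantee.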
Further, the step of acquiring the binocular fluorescence image under the application scene acquired by the second camera includes:
acquiring an MIPI digital signal captured by the second camera;
encoding the MIPI digital signal into a serial digital signal through a serializer chip;
performing long-distance coaxial transmission of the serial digital signal;
decoding the serial digital signal back into an MIPI digital signal through a deserializer chip;
and performing video input processing on the MIPI digital signal to generate the binocular fluorescence image in a video format.
Further, the step of sending the processed image to the third main board includes:
converting the processed image in the video format into an MIPI digital signal through a signal conversion chip, so that the third main board receives the MIPI digital signal.
The embodiment of the application also provides an image processing device, the device comprising:
a first image acquisition module, used for acquiring a binocular visible light image to be processed;
a second image acquisition module, used for acquiring a binocular fluorescence image to be processed;
a first image processing module, used for performing image processing on the binocular visible light image to obtain a binocular visible light processed image;
a second image processing module, used for performing image processing on the binocular fluorescence image to obtain a binocular fluorescence processed image;
a first image output module, used for outputting the binocular visible light processed image and connected with a three-dimensional display;
a second image output module, used for outputting the binocular fluorescence processed image;
an image input module, used for simultaneously receiving the binocular visible light processed image and the binocular fluorescence processed image;
an image fusion module, used for fusing the binocular visible light processed image and the binocular fluorescence processed image;
an image segmentation module, used for performing monocular segmentation on the fused fluorescence binocular image;
and a third image output module, used for outputting the fluorescence binocular image and the fluorescence monocular image respectively.
Further, the first image acquisition module, the first image processing module and the first image output module are integrated on a first main board;
the second image acquisition module, the second image processing module and the second image output module are integrated on a second main board;
the image input module, the image fusion module, the image segmentation module and the third image output module are integrated on a third main board;
the first image output module and the second image output module are connected with the image input module through signal links.
Further, the first image output module and the second image output module are both connected with the image input module through an adapter plate.
The embodiment of the application also provides an electronic device, which comprises a processor and a memory, wherein the processor is used for executing an executable computer program stored in the memory so as to implement the image processing method.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer executable program which, when executed by a processor, implements the image processing method.
The image processing method, the device, the electronic equipment and the computer readable storage medium based on fluorescence imaging provided by the embodiment of the application have at least the following beneficial effects:
The first main board is connected with the first camera and the second main board is connected with the second camera, so that each can independently capture its corresponding real-time image data and independently perform tasks such as image processing; because the first main board and the second main board work simultaneously, the image data captured by the two different cameras can be processed in parallel, improving overall throughput.
On the one hand, the first main board and the second main board can output images independently, namely a three-dimensional visible light image and a three-dimensional fluorescence image respectively; on the other hand, the third main board receives the binocular visible light image and the binocular fluorescence image and can fuse them, outputting the fused fluorescence binocular image for 3D display or cutting it into a fused fluorescence monocular image for 2D display, as required.
In addition, the image processing method can realize video output in various forms, meets the actual requirements under different application scenes, and has better adaptability.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present invention, it should be noted that, directions or positional relationships indicated by terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., are directions or positional relationships based on those shown in the drawings, or are directions or positional relationships conventionally put in use of the inventive product, are merely for convenience of describing the present invention and simplifying the description, and are not indicative or implying that the apparatus or element to be referred to must have a specific direction, be constructed and operated in a specific direction, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Furthermore, the terms "horizontal," "vertical," "suspended" and the like do not require that a component be absolutely horizontal or suspended; it may be slightly inclined. "Horizontal" merely means that the direction is closer to horizontal than to "vertical", and does not mean that the structure must be perfectly horizontal; it may be slightly inclined.
In the description of the present invention, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
As shown in fig. 1, an embodiment of the present application provides an electronic device 10. The electronic device 10 may include a memory 11, a processor 12, and an image processing apparatus 100.
The specific type of the electronic device 10 is not limited and may be set according to actual application requirements. For example, the electronic device may include, but is not limited to, a computer, a tablet, and the like.
In detail, the memory 11 and the processor 12 are directly or indirectly electrically connected to each other to realize data transmission or interaction. For example, electrical connection may be made to each other via one or more communication buses or signal lines. The image processing apparatus 100 includes at least one software functional module that may be stored in the memory 11 in the form of software or firmware (firmware). The processor 12 is configured to execute executable computer programs stored in the memory 11, for example, software functional modules and computer programs included in the image processing apparatus 100, so as to implement the image processing method provided by the embodiment of the present application.
The memory 11 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), etc.
The processor 12 may be an integrated circuit chip having signal processing capabilities. The processor 12 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), a system on chip (SoC), etc.
It is to be understood that the configuration shown in fig. 1 is merely illustrative and that electronic device 10 may also include more or fewer components than those shown in fig. 1 or have a different configuration than that shown in fig. 1.
Referring to fig. 2, an embodiment of the present application further provides an image processing method applicable to the electronic device 10. The steps of the image processing method may be implemented by the electronic device 10; the specific flow shown in fig. 2 is described in detail below.
In step S110, the first main board 110 is used to acquire a binocular visible light image of the application scene captured by the first camera 20, and to perform image processing on it to obtain a binocular visible light processed image; one path of the binocular visible light processed image is sent to the third main board 130, and the other path can be used for video output, so that a three-dimensional visible light image can be output.
In the present embodiment, the step of acquiring the binocular visible light image under the application scene acquired by the first camera 20, referring to fig. 3, includes step S111, step S112, step S113, step S114, and step S115.
In step S111, the MIPI digital signal captured by the first camera 20 is acquired. Optionally, the first camera 20 may be a CMOS module, which converts the optical signal into the MIPI digital signal through photoelectric conversion.
In step S112, the MIPI digital signal is encoded into a serial digital signal by the serializer chip. Optionally, the MIPI digital signal may be transmitted through an FPC flexible flat cable.
In the present embodiment, the MIPI digital signal may be transmitted to the first transmitting circuit board 1111 through the FPC flexible flat cable.
In step S113, the serial digital signal is transmitted coaxially over a long distance.
Optionally, the signal may be transmitted through a coaxial line; the transmission distance of the coaxial line may reach up to 6 meters, and the signal is transmitted to the first receiving circuit board 1112 at the far end through one or two coaxial lines.
In step S114, the serial digital signal is decoded into an MIPI digital signal by the deserializer chip. Optionally, the first receiving circuit board 1112 may decode the received serial digital signal into an MIPI digital signal through a deserializer chip.
In step S115, video input processing is performed on the MIPI digital signal to generate a to-be-processed image in a video format. Optionally, the decoded MIPI digital signal is transmitted to the video input module via a board-to-board connector or FPC flex cable.
In other embodiments, the image signal captured by the first camera 20 may be acquired directly; this image signal is an MIPI digital signal, which needs to be converted before image processing, and can be directly converted into a video format signal suitable for image processing.
The video format may be, for example, a 24-bit RGB format or a BT656/1120 format.
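For orientation, the bandwidth implied by a 24-bit RGB stream can be estimated with simple arithmetic. The 1080p/60 figures below are illustrative assumptions, not parameters stated in the application.

```python
# Back-of-envelope bandwidth for a 24-bit RGB video stream.
width, height, bits_per_pixel, fps = 1920, 1080, 24, 60

bytes_per_frame = width * height * bits_per_pixel // 8   # 3 bytes per pixel
mbytes_per_sec = bytes_per_frame * fps / 1e6             # uncompressed rate
```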
In step S120, the second main board 120 is used to obtain a binocular fluorescent image under the application scene acquired by the second camera 30, and the binocular fluorescent image is subjected to image processing, so as to obtain a binocular fluorescent processing image, and the binocular fluorescent processing image is sent to the third main board 130.
In the present embodiment, the step of acquiring the binocular fluorescence image under the application scene acquired by the second camera 30, referring to fig. 4, includes steps S121, S122, S123, S124 and S125.
In step S121, the MIPI digital signal captured by the second camera 30 is acquired. Optionally, the second camera 30 may be a CMOS module that converts the optical signal into the MIPI digital signal through photoelectric conversion.
In step S122, the MIPI digital signal is encoded into a serial digital signal by the serializer chip. Optionally, the MIPI digital signal may be transmitted through an FPC flexible flat cable.
In this embodiment, the MIPI digital signal may be transmitted to the second transmitting circuit board 1211 through the FPC flexible flat cable.
In step S123, the serial digital signal is transmitted coaxially over a long distance.
Optionally, the signal may be transmitted through a coaxial line; the transmission distance of the coaxial line may reach up to 6 meters, and the signal is transmitted to the second receiving circuit board 1212 at the far end through one or two coaxial lines.
In step S124, the serial digital signal is decoded into an MIPI digital signal by the deserializer chip. Optionally, the second receiving circuit board 1212 may decode the received serial digital signal into an MIPI digital signal through a deserializer chip.
In step S125, video input processing is performed on the MIPI digital signal to generate a to-be-processed image in a video format. Optionally, the decoded MIPI digital signal is transmitted to the video input module via a board-to-board connector or FPC flex cable.
In this embodiment, the image processing includes at least one of the following: 3D stereoscopic cropping, white balance, image quality adjustment, and noise reduction; of course, other image processing steps may also be included according to actual needs, which will not be described in detail herein.
In step S130, the binocular visible light processed image and the binocular fluorescence processed image are received by the third main board 130 and fused; the fused fluorescence binocular image can be output for 3D display, or the fluorescence binocular image can be segmented and the fluorescence monocular image output for 2D display.
In this embodiment, the first main board 110 is connected to the first camera 20 and the second main board 120 is connected to the second camera 30, so that each can independently capture its corresponding real-time image data and independently perform tasks such as image processing; because the first main board 110 and the second main board 120 work simultaneously, the image data captured by the two different cameras can be processed in parallel, improving overall throughput.
On the one hand, the first main board 110 and the second main board 120 can output images independently, i.e. a three-dimensional visible light image and a three-dimensional fluorescence image respectively; on the other hand, the third main board 130 receives the binocular visible light processed image and the binocular fluorescence processed image, fuses them, and can output the fused fluorescence binocular image for 3D display, or segment the fluorescence binocular image and output the fluorescence monocular image for 2D display. The third main board 130 thus serves as the fusion and display center: it receives the image data of the first main board 110 and the second main board 120, fuses and displays the two images in real time through an algorithm, and at the same time its processing delay is reduced. In addition, the image processing method can realize video output in various forms, meets the actual requirements of different application scenes, and has good adaptability.
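One common way to realize such a fusion is to tint the visible image wherever the fluorescence intensity is significant. The per-pixel sketch below is a hypothetical illustration (the `alpha` and `threshold` values are invented), not the fusion algorithm claimed by the application.

```python
def fuse(visible, fluorescence, alpha=0.6, threshold=40):
    """Overlay fluorescence intensity onto a visible image.
    `visible` is a list of (r, g, b) pixels; `fluorescence` is a list of
    per-pixel intensities. Pixels above `threshold` receive a green tint."""
    fused = []
    for (r, g, b), f in zip(visible, fluorescence):
        if f > threshold:
            # Blend the fluorescence intensity into the green channel.
            g = min(255, int((1 - alpha) * g + alpha * f))
        fused.append((r, g, b))
    return fused

out = fuse([(10, 20, 30), (50, 60, 70)], [100, 10])
```

Only the first pixel exceeds the threshold, so only its green channel is modified; the second pixel passes through unchanged.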
The step of sending the processed image to the third main board 130 includes: converting the processed image in the video format into an MIPI digital signal through a signal conversion chip, so that the third main board 130 receives the MIPI digital signal.
For example, the model of the signal conversion chip may be LT8918.
To sum up, as shown in fig. 5, the image processing flow in this embodiment is as follows: the first main board 110 acquires the converted MIPI digital signal, performs video input processing on it to convert it into an image signal in a video format, and performs image processing on that signal to obtain the binocular visible light processed image; one path of the processed image is output as video, and the other path is converted back into an MIPI digital signal through signal conversion and received by the third main board 130. In parallel, the second main board 120 acquires the converted MIPI digital signal, performs video input processing on it to convert it into an image signal in a video format, and performs image processing on that signal to obtain the binocular fluorescence processed image; one path of the processed image is output as video, and the other path is converted back into an MIPI digital signal through signal conversion and received by the third main board 130.
The third main board 130 receives the binocular visible light processed image and the binocular fluorescence processed image simultaneously, performs video input processing on the MIPI digital signals again to convert them back into image signals in a video format, and then, as required, outputs the fused fluorescence binocular image for 3D display or segments the fluorescence binocular image and outputs a fluorescence monocular image for 2D display.
The embodiment of the application also provides an image processing device 100, which can be applied to the electronic equipment 10. As shown in fig. 6, the image processing apparatus 100 may include a first image acquisition module 111, a first image processing module 112, a first image output module 113, a second image acquisition module 121, a second image processing module 122, a second image output module 123, an image input module 131, an image fusion module 132, an image segmentation module 133, and a third image output module 134.
Referring to fig. 6, the first image acquisition module 111, the first image processing module 112, and the first image output module 113 are integrated with the first main board 110; the second image acquisition module 121, the second image processing module 122, and the second image output module 123 are integrated with the second main board 120; the image input module 131, the image fusion module 132, the image segmentation module 133, and the third image output module 134 are integrated with the third main board 130.
Specifically, the first image output module 113 and the second image output module 123 are both connected to the image input module 131 through signal links, so as to realize the transmission of signals from the first main board 110 and the second main board 120 to the third main board 130.
A first image acquisition module 111, configured to acquire a binocular visible light image to be processed. In the present embodiment, the first image acquisition module 111 is configured to perform step S110 shown in fig. 2.
Specifically, the image processing apparatus 100 further includes a first image acquisition module; as shown in fig. 7, the first image acquisition module includes a first transmitting circuit board 1111, a first receiving circuit board 1112, and a coaxial line connecting the two, the coaxial line being for transmitting at least an image signal and a control signal therebetween; the first transmitting circuit board 1111 is used to connect with the first camera 20; the first receiving circuit board 1112 is connected to the first motherboard 110.
A first image processing module 112, configured to perform image processing on the binocular visible light image to obtain a binocular visible light processed image; in this embodiment, the first image processing module 112 may be used to perform step S110 shown in fig. 2.
A first image output module 113, configured to be connected with the three-dimensional display so that the three-dimensional display presents a three-dimensional visible light image matching its screen size. In this embodiment, the first image output module 113 may be used to perform step S110 shown in fig. 2.
A second image acquisition module 121, configured to acquire a binocular fluorescence image to be processed. In the present embodiment, the second image acquisition module 121 is configured to perform step S120 shown in fig. 2.
Specifically, the image processing apparatus 100 further includes a second image acquisition module; as shown in fig. 8, the second image acquisition module includes a second transmitting circuit board 1211, a second receiving circuit board 1212, and a coaxial line connecting the two, the coaxial line being for transmitting at least an image signal and a control signal therebetween; the second transmitting circuit board 1211 is used for connection with the second camera 30; the second receiving circuit board 1212 is connected to the second motherboard 120.
A second image processing module 122, configured to perform image processing on the binocular fluorescent image to obtain a binocular fluorescent processed image; in this embodiment, the second image processing module 122 may be used to perform step S120 shown in fig. 2.
A second image output module 123, configured to output the binocular fluorescence processed image. In this embodiment, the second image output module 123 may be used to perform step S120 shown in fig. 2.
An image input module 131 for receiving the binocular visible light processed image and the binocular fluorescence processed image; the image fusion module 132 is used for carrying out fusion processing on the binocular visible light processing image and the binocular fluorescence processing image, and the image segmentation module 133 is used for carrying out monocular segmentation processing on the fused image; the third image output module 134 is configured to output a binocular fusion image or a monocular fusion image, so as to implement display of a three-dimensional fluorescence image and a monocular fluorescence image.
In this embodiment, the image input module 131, the image fusion module 132, the image segmentation module 133, and the third image output module 134 in combination may be used to perform step S130 shown in fig. 2.
In the three-dimensional imaging scene, the image input module 131 receives the MIPI digital signals of the binocular visible light processed image and the binocular fluorescence processed image and performs video input processing on each to convert them into image signals in a video format; the image fusion module 132 then fuses them, one path of the fused image signal is output, and the other path is segmented by the image segmentation module 133 to obtain a left-eye or right-eye fluorescence image.
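Assuming the binocular frame carries the two eyes side by side (an assumption; the application does not specify the packing), the monocular segmentation amounts to splitting each pixel row in half:

```python
def split_binocular(frame):
    """Split a side-by-side binocular frame (a list of pixel rows)
    into left-eye and right-eye images. Assumes an even row width."""
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

# A tiny 2x4 frame: columns 0-1 are the left eye, 2-3 the right eye.
left, right = split_binocular([[1, 2, 3, 4],
                               [5, 6, 7, 8]])
```

Either half can then be sent on its own as the 2D (monocular) output.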
In this embodiment, the first camera 20 and the second camera 30 each adopt a dual-CMOS module, through which two paths of optical imaging can be captured simultaneously, improving left-right eye synchronization and the consistency of the left and right images.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.