
HK1195943B - Methods for detecting objects, display methods and apparatuses - Google Patents


Info

Publication number
HK1195943B
HK1195943B (application HK14109363.4A)
Authority
HK
Hong Kong
Prior art keywords
data
dimensional
depth projection
projection image
baggage
Prior art date
Application number
HK14109363.4A
Other languages
Chinese (zh)
Other versions
HK1195943A (en)
Inventor
张丽
陈志强
赵自然
李强
顾建平
孙运达
Original Assignee
清华大学
同方威视技术股份有限公司
Filing date
Publication date
Application filed by 清华大学, 同方威视技术股份有限公司 filed Critical 清华大学
Publication of HK1195943A publication Critical patent/HK1195943A/en
Publication of HK1195943B publication Critical patent/HK1195943B/en


Description

Methods for inspecting objects, display methods, and apparatuses
Technical Field
The present invention relates to security inspection of objects, and more particularly to a method of inspecting an object, a display method, and corresponding apparatuses.
Background
Perspective imaging is an important means in the field of security inspection. A typical application proceeds as follows: the equipment performs perspective scanning and imaging of luggage articles, and an image inspector manually marks suspect regions during image interpretation, giving each a semantic description such as "lighter" or "bottle of wine". This process depends heavily on human factors: because genuine threats appear with extremely low probability, and because inspectors have limited experience and suffer fatigue, detections can be missed, with serious consequences.
Typical means of solving this problem are dominated by automatic assisted detection, supplemented by increased interaction between the equipment and security personnel. Automatic detection is the main means, but its effect is not yet satisfactory: typical technologies such as explosive detection and high-density alarms cannot fully meet application requirements. On one hand, technical conditions are limited; for example, perspective overlap in DEDR (Dual-Energy Digital Radiography) causes object aliasing. On the other hand, academic research is sparse, and newer technologies such as DECT (Dual-Energy Computed Tomography) require new detection algorithms for support.
DECT is a promising technique for solving this problem. Developed from DR and CT techniques, it can obtain the effective atomic number and equivalent electron density inside an object while acquiring the three-dimensional structure of the scanned object, thereby providing the conditions for three-dimensional data understanding. However, existing research typically targets detection of certain specific objects, relies mainly on density and atomic-number information, and makes little use of the object's shape information.
Disclosure of Invention
In order to perform security inspection of baggage more accurately, a method of inspecting an object, a display method, and corresponding apparatuses are provided.
According to an aspect of the present invention, a method for inspecting baggage in a CT imaging system is provided, comprising the steps of: acquiring tomographic data of the inspected baggage using the CT imaging system; generating three-dimensional volume data of at least one object in the inspected baggage from the tomographic data; calculating a first, a second, and a third depth projection image of the object in three directions based on the three-dimensional volume data, wherein the projection direction of the third depth projection image is orthogonal to the projection directions of the first and second depth projection images; calculating the symmetry metric of each of the first to third depth projection images, the similarity metric between each pair, and the duty ratio and aspect ratio of each image; generating shape feature parameters of the object based on at least these symmetry metrics, pairwise similarity metrics, duty ratios, and aspect ratios; classifying the shape feature parameters with a classifier to obtain a quantifier description reflecting the shape of the object; and outputting a semantic description including at least the quantifier description of the object.
According to another aspect of the present invention, there is provided an apparatus for inspecting baggage in a CT imaging system, comprising: means for acquiring tomographic data of the inspected baggage using the CT imaging system; means for generating three-dimensional volume data of at least one object in the inspected baggage from the tomographic data; means for calculating a first, a second, and a third depth projection image of the object in three directions based on the three-dimensional volume data, wherein the projection direction of the third depth projection image is orthogonal to the projection directions of the first and second depth projection images; means for calculating the symmetry metric of each of the first to third depth projection images, the similarity metric between each pair, and the duty ratio and aspect ratio of each image; means for generating shape feature parameters of the object based on at least these symmetry metrics, pairwise similarity metrics, duty ratios, and aspect ratios; means for classifying the shape feature parameters with a classifier to obtain a quantifier description reflecting the shape of the object; and means for outputting a semantic description including at least the quantifier description of the object.
In another aspect of the present invention, there is provided a method of displaying an object in a CT imaging system, comprising the steps of: acquiring tomographic data of the examined baggage using the CT imaging system; generating three-dimensional volume data for each object in the inspected baggage from the tomographic data; for each object, determining a semantic description including at least a quantifier description of the object based on the three-dimensional volume data; a user selection of an object is received, and a semantic description of the selected object is presented while a three-dimensional image of the object is displayed.
In another aspect of the present invention, there is provided an apparatus for displaying an object in a CT imaging system, including: means for acquiring tomographic data of the baggage under inspection using the CT imaging system; means for generating three-dimensional volume data of at least one object in the inspected baggage from the tomographic data; means for determining, for each object, a semantic description based on the three-dimensional volume data that includes at least a quantifier description of the object; and means for receiving a user selection of an object and presenting a semantic description of the selected object while displaying a three-dimensional image of the object.
With this scheme, automatic assisted inspection of objects is realized through automatic detection and description. The object description results are a necessary supplement to human inspection and a means of enhancing human-computer interaction, with strong application value in addressing the important problem of missed detections.
Drawings
The following figures illustrate embodiments of the present technology. These drawings and embodiments provide some examples of the present technology in a non-limiting, non-exhaustive manner, wherein:
FIG. 1 is a schematic diagram of a CT system according to an embodiment of the present invention;
FIG. 2 is a block diagram of a computer data processor according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a controller according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating depth projection from views View1, View2, View 3;
FIG. 5 is a flow chart describing a method for inspecting an object under inspection in a CT imaging system;
FIG. 6 is a flow chart depicting a method of displaying an object in a CT imaging system in accordance with another embodiment of the present invention;
FIG. 7 is a flow chart depicting a method for creating a three-dimensional model of an object in baggage in a CT imaging system in accordance with another embodiment of the present invention.
Detailed Description
Specific embodiments of the present invention will be described in detail below, and it should be noted that the embodiments described herein are only for illustration and are not intended to limit the present invention. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that: it is not necessary to employ these specific details to practice the present invention. In other instances, well-known structures, materials, or methods have not been described in detail in order to avoid obscuring the present invention.
Throughout the specification, reference to "one embodiment," "an embodiment," "one example," or "an example" means: the particular features, structures, or characteristics described in connection with the embodiment or example are included in at least one embodiment of the invention. Thus, the appearances of the phrases "in one embodiment," "in an embodiment," "one example" or "an example" in various places throughout this specification are not necessarily all referring to the same embodiment or example. Furthermore, the particular features, structures, or characteristics may be combined in any suitable combination and/or sub-combination in one or more embodiments or examples. Further, as used herein, the term "and/or" will be understood by those of ordinary skill in the art to include any and all combinations of one or more of the associated listed items.
To address the shortcoming that the prior art uses only the physical attribute information of objects in the inspected luggage for security checking, an embodiment of the invention provides a method for inspecting luggage in a CT imaging system. After tomographic data of the inspected baggage is acquired using the CT imaging system, three-dimensional volume data of at least one object in the baggage is generated from that tomographic data. A first, a second, and a third depth projection image of the object in three directions are then calculated from the three-dimensional volume data, with the projection direction of the third depth projection image orthogonal to those of the first and second. Next, the symmetry metric of each depth projection image, the similarity metric between each pair, and the duty ratio and aspect ratio of each are calculated. Shape feature parameters of the inspected object are generated based on at least these metrics, and a classifier is applied to them to obtain a quantifier description reflecting the shape of the object. Finally, a semantic description including at least this quantifier description is output. In this way, the shape characteristics of an object are obtained by processing the data produced by the CT imaging system and are output as a semantic description, so the inspector can intuitively and accurately obtain a concrete description of the objects in the inspected luggage, reducing the missed-detection rate.
According to another embodiment of the present invention, a method of displaying objects in a CT imaging system is provided to reduce the missed-detection rate. After tomographic data of the inspected baggage is acquired by the CT imaging system, three-dimensional volume data of each object in the baggage is generated from the tomographic data. Then, for each object, a semantic description including at least a quantifier description of the object is determined based on the three-dimensional volume data. A user selection of an object is received, and the semantic description of the selected object is presented while a three-dimensional image of the object is displayed. In this way, when luggage is checked with the CT imaging device, not only is an image of the objects in the luggage output on the screen, but the semantic description of the object selected by the inspector is also output, so the description is presented visually and the missed-detection rate is reduced.
According to another embodiment of the present invention, in order to extract the shape features of objects in the inspected baggage more accurately, a method of creating a three-dimensional model of an object in the inspected baggage in a CT imaging system is provided. After the CT imaging system acquires tomographic data of the inspected luggage, the tomographic data is interpolated to generate three-dimensional volume data of the luggage. The three-dimensional volume data is then segmented without supervision into a plurality of regions, and isosurface extraction is performed on these regions to obtain the corresponding isosurfaces. Three-dimensional surface segmentation of the isosurfaces then yields a three-dimensional model of each object. The resulting model accurately describes the three-dimensional surface of each object in the inspected luggage and provides a good basis for the subsequent extraction of three-dimensional shape features, so the accuracy of security inspection can be improved.
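The unsupervised segmentation step of this pipeline can be sketched with a minimal example. The density threshold and the use of connected-component labeling here are illustrative assumptions, not the patent's exact algorithm; the patent's pipeline then extracts an isosurface per region (for example with marching cubes) before three-dimensional surface segmentation:

```python
import numpy as np
from scipy import ndimage

def segment_objects(volume, density_threshold=0.1):
    """Unsupervised segmentation sketch: threshold the density volume and
    split the foreground into connected regions, one per candidate object."""
    mask = volume > density_threshold
    labels, n_objects = ndimage.label(mask)
    return [(labels == i) for i in range(1, n_objects + 1)]

# Two disjoint cubes stand in for two objects in the scanned baggage.
vol = np.zeros((10, 10, 10))
vol[1:4, 1:4, 1:4] = 1.0
vol[6:9, 6:9, 6:9] = 0.5
regions = segment_objects(vol)
```

Each returned boolean mask selects the voxels of one candidate object, ready for per-object isosurface extraction.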
Fig. 1 is a schematic structural diagram of a CT apparatus according to an embodiment of the present invention. As shown in fig. 1, the CT apparatus of this embodiment includes a gantry 20, a carrying mechanism 40, a controller 50, a computer data processor 60, and the like. The gantry 20 includes a radiation source 10, such as an X-ray machine, which emits the X-rays used for examination, and a detection and acquisition device 30. The carrying mechanism 40 carries the inspected baggage 70 through the scanning region between the radiation source 10 and the detection and acquisition device 30 while the gantry 20 rotates about the direction of travel of the baggage, so that the rays emitted by the radiation source 10 can penetrate the inspected baggage 70 and a CT scan of it can be performed. The detection and acquisition device 30 is, for example, a detector and data acquirer with an integrated modular structure, such as a flat-panel detector, used to detect the rays transmitted through the inspected baggage, obtain an analog signal, and convert it into a digital signal, thereby outputting X-ray projection data of the inspected baggage 70. The controller 50 controls the synchronous operation of the parts of the whole system. The computer data processor 60 processes and reconstructs the data collected by the data acquirer and outputs the result.
As shown in fig. 1, the radiation source 10 is disposed on one side on which the object to be inspected can be placed, and the detecting and collecting device 30 is disposed on the other side of the baggage 70 to be inspected, and includes a detector and a data collector for acquiring transmission data and/or multi-angle projection data of the baggage 70 to be inspected. The data collector comprises a data amplifying and shaping circuit which can work in a (current) integration mode or a pulse (counting) mode. The data output cable of the detection and acquisition device 30 is connected to the controller 50 and the computer data processor 60, and the acquired data is stored in the computer data processor 60 according to the trigger command.
Fig. 2 shows a block diagram of the computer data processor 60 shown in fig. 1. As shown in fig. 2, the data collected by the data collector is stored in the memory 61 through the interface unit 68 and the bus 64. A Read Only Memory (ROM)62 stores configuration information of the computer data processor and a program. A Random Access Memory (RAM)63 is used to temporarily store various data during operation of the processor 66. In addition, the memory 61 also stores a computer program for performing data processing. The internal bus 64 connects the above-described memory 61, read only memory 62, random access memory 63, input device 65, processor 66, display device 67, and interface unit 68.
After the user inputs an operation command through an input device 65, such as a keyboard and mouse, the instruction code of the computer program directs the processor 66 to execute a predetermined data-processing algorithm. After the data-processing result is obtained, it is displayed on a display device 67, such as an LCD display, or output directly in hard-copy form, such as a printout.
Fig. 3 shows a block diagram of a controller according to an embodiment of the present invention. As shown in fig. 3, the controller 50 includes: a control unit 51 for controlling the radiation source 10, the carrying mechanism 40 and the detecting and collecting device 30 according to instructions from the computer 60; a trigger signal generating unit 52 for generating a trigger command for triggering the actions of the radiation source 10, the detecting and collecting device 30 and the carrying mechanism 40 under the control of the control unit; a first driving device 53 that drives the carrying mechanism 40 to convey the inspected baggage 70 in accordance with a trigger command generated by the trigger signal generating unit 52 under the control of the control unit 51; and a second driving device 54 that rotates the gantry 20 according to a trigger command generated by the trigger signal generating unit 52 under the control of the control unit 51.
The projection data obtained by the detection and acquisition device 30 is stored in the computer 60 for CT tomographic reconstruction to obtain tomographic data of the examined baggage 70. The computer 60 then extracts the three-dimensional shape parameters of at least one object in the inspected baggage 70 from the tomographic image data, for example, by executing software, thereby performing a security inspection. According to other embodiments, the CT imaging system may also be a dual-energy CT system, that is, the X-ray source 10 of the gantry 20 can emit two types of high-energy and low-energy rays, and after the detection and acquisition device 30 detects projection data at different energy levels, the computer data processor 60 performs dual-energy CT reconstruction to obtain equivalent atomic number and equivalent electron density data of each slice of the examined baggage 70.
Fig. 4 shows a schematic diagram of the definition of various views in a method according to an embodiment of the invention. Fig. 5 is a flow chart describing a method for inspecting baggage in a CT imaging system. In step S51, tomographic data of the examined baggage is acquired using the CT imaging system. For example, the baggage to be examined is subjected to a dual-energy CT examination based on the above-mentioned CT apparatus or other CT apparatuses, and tomographic data is obtained, where the tomographic data generally includes tomographic density map data and tomographic atomic number map data. However, in other embodiments, such as in the case of single energy CT, linear attenuation coefficient image data is obtained.
In step S52, three-dimensional volume data of at least one object in the inspected baggage is generated from the tomographic data. For example, interlayer interpolation of the tomographic data yields the three-dimensional volume data of the baggage. As another example, after a series of DECT density maps and atomic-number maps over consecutive slices is obtained, three-dimensional interpolation is performed on each so that the in-slice and between-slice image resolutions become consistent. Many known algorithms exist for three-dimensional interpolation; both the commercial Intel IPP (Intel Integrated Performance Primitives) library and the open-source Kitware VTK (Visualization Toolkit) library provide this function. After interpolation, the two-dimensional tomographic data becomes three-dimensional volume data.
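The interlayer interpolation step can be illustrated with a small numpy sketch using plain linear interpolation between slices (the patent itself points to library implementations such as Intel IPP or VTK; the function name and the factor convention here are illustrative):

```python
import numpy as np

def interpolate_slices(slices, factor):
    """Linearly interpolate between adjacent tomographic slices so that
    between-slice resolution matches in-plane resolution.
    factor = slice spacing / pixel spacing."""
    vol = np.asarray(slices, dtype=float)          # shape: (n, H, W)
    n = vol.shape[0]
    z_new = np.linspace(0, n - 1, int((n - 1) * factor) + 1)
    lo = np.floor(z_new).astype(int)               # lower bracketing slice
    hi = np.minimum(lo + 1, n - 1)                 # upper bracketing slice
    w = (z_new - lo)[:, None, None]                # interpolation weight
    return (1 - w) * vol[lo] + w * vol[hi]

# Three 4x4 slices 2 mm apart with 1 mm pixels -> factor 2.
slices = [np.full((4, 4), v, dtype=float) for v in (0.0, 1.0, 2.0)]
vol = interpolate_slices(slices, factor=2)
```

After interpolation the stack has isotropic spacing, with intermediate slices blended from their neighbours.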
In step S53, a first depth projection image, a second depth projection image, and a third depth projection image of the object in three directions are calculated based on the three-dimensional volume data, wherein a projection direction of the third depth projection image is orthogonal to projection directions of the first and second depth projection images. According to another embodiment, the projection directions of the first depth projection image and the second depth projection image are as orthogonal as possible (e.g. substantially orthogonal), approaching the directions of maximum and minimum projected areas of the object, respectively.
Depth projection (depth buffer, also called Z-buffering) is a basic technique for three-dimensional surface display. It judges the occlusion relations between objects and displays the unoccluded parts on the screen. It is a typical technique in current 3D object retrieval (3DOR), but usually involves tens of projections and is therefore highly complex. As shown in fig. 4, only three depth projection images are used in this embodiment. The first projection, I1, aims to obtain a "front view" and is approximated here by the maximum-area projection; it is the projection on the XOY plane in fig. 4. The second projection, I2, aims to obtain a "top view" and is approximated here by the minimum-area projection; it is the projection on the XOZ plane in fig. 4. In fig. 4 these two projection directions are orthogonal, but in practice this condition is not necessarily satisfied, so the angle between the two projection directions is itself one of the features. The third projection, I3, aims to obtain a "side view": after I1 and I2 are obtained, the object is projected again in the direction orthogonal to both of their projection directions, giving I3, the projection on the YOZ plane in fig. 4.
It should be noted that projections along the X, Y, and Z directions of fig. 4 could yield six images (forward and backward along each axis). Since fine details have already been removed during three-dimensional surface segmentation, the forward and backward projections are very similar, so only three projections are used here to reduce algorithmic complexity.
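A minimal Z-buffer-style depth projection along one axis might look as follows. The convention (gray value 0 for rays that miss the object, larger values for closer surfaces) matches the patent's description; the linear normalization itself is an illustrative assumption:

```python
import numpy as np

def depth_projection(volume_mask, axis=0):
    """Z-buffer style depth projection of a binary object volume:
    for each ray along `axis`, record the closeness of the first
    occupied voxel in (0, 1]; rays that hit nothing get 0."""
    mask = np.moveaxis(volume_mask.astype(bool), axis, 0)
    n = mask.shape[0]
    # Index of the first occupied voxel along each ray (n if none).
    first = np.where(mask.any(axis=0), mask.argmax(axis=0), n)
    return np.where(first < n, (n - first) / n, 0.0)

# A single voxel one step into a 4-deep volume.
vol = np.zeros((4, 3, 3), dtype=bool)
vol[1, 1, 1] = True
img = depth_projection(vol, axis=0)
```

Projecting the same volume along three chosen axes yields the I1, I2, I3 images the text works with.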
Obtaining the maximum or minimum projection could be implemented by traversing all rotation angles, but that is too expensive. Here, following the rectilinearity algorithm, the first two projection directions are quickly estimated using a genetic algorithm.
The symmetry of I1 to I3 can reflect the self-similarity of the object and is an important shape feature. For convenience of calculation, I1 to I3 are first aligned by PCA (Principal Component Analysis) so that the divergence of each two-dimensional image along the x-axis is maximized, i.e., the up-down symmetry of the image is strongest. Hereinafter, I1 to I3 refer to the aligned images.
In step S54, the symmetry metric of each of the first to third depth projection images, the similarity metric between each pair, and the duty ratio and aspect ratio of each image are calculated.
In some embodiments, one or more of the shape features of I1 to I3 described below, or any combination thereof, are extracted as the shape feature parameters. In addition, the angle between the projection directions of I1 and I2 also reflects the shape of the object and is used as one of the feature parameters, as is the object volume, which reflects the size of the object.
The gray-scale range of a depth projection image is set to [0, 1], where a gray value of 0 represents infinity (no surface) and a non-zero value encodes the distance of the surface patch from the viewpoint, with larger values meaning closer surfaces. The features can be obtained as follows:
i) Obtain the up-down symmetry s_i of I1 to I3. Flip each I_i vertically to obtain I_i'; the up-down symmetry can then be defined as:

s_i = 1 − mean|I_i − I_i'|   (3)

That is, the up-down symmetry takes the average gray difference between the vertically flipped image and the original image as its standard.
ii) Obtain the duty ratio r_i and the aspect ratio a_i of I1, I2, I3.
The size of a depth projection image is defined by the viewport and does not reflect object properties; solving for the aspect ratio and duty ratio after alignment better describes the macroscopic characteristics of the object. Obtain the bounding rectangle of I_i, from which the aspect ratio a_i follows directly; then count the non-zero pixels and divide by the area of the bounding box to obtain the duty ratio r_i.
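These two box-based features can be computed in a few lines; the height-over-width convention for the aspect ratio is an assumption:

```python
import numpy as np

def bbox_features(img):
    """Aspect ratio and duty ratio of a depth image: fit the tight
    bounding box of the non-zero pixels, then
    aspect = box height / box width and
    duty = non-zero pixel count / box area."""
    ys, xs = np.nonzero(img)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    duty = len(ys) / (h * w)
    return h / w, duty

img = np.zeros((10, 10))
img[2:6, 3:5] = 0.5        # a fully filled 4x2 rectangle
aspect, duty = bbox_features(img)
```

A solid rectangle has duty ratio 1.0; sparser silhouettes (e.g. a bottle with a narrow neck) score lower.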
iii) Obtain the pairwise similarity of I1, I2, I3.
The bounding boxes of I1 and I2 were obtained in ii). Crop each image to its bounding box, scale the cropped I2 to the same size as the cropped I1, and also flip the scaled image vertically. The similarity p12 between I1 and I2 is then computed as in the symmetry measure of equation (3), except that the images are first size-normalized, and p12 is taken as the larger of the two similarity values (with and without the vertical flip). In the same way, the similarity p23 between I2 and I3 and the similarity p31 between I3 and I1 are obtained.
iv) The angle θ between the projection directions of I1 and I2, known from the depth projection process, is taken as a feature. The model volume V reflects the size of the object and is also taken as a feature.
Combining the above shape feature parameters yields a 14-dimensional shape feature vector F = (s1, s2, s3, p12, p23, p31, r1, r2, r3, a1, a2, a3, θ, V).
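Assembling the vector is a simple concatenation; the component ordering shown (3 symmetries, 3 pairwise similarities, 3 duty ratios, 3 aspect ratios, θ, V, totalling 14) is one consistent reading of the text:

```python
import numpy as np

def shape_feature_vector(sym, sim, duty, aspect, theta, volume):
    """Assemble the 14-dimensional shape feature vector F from the
    per-view and global features described in i) to iv)."""
    return np.concatenate([sym, sim, duty, aspect, [theta], [volume]])

F = shape_feature_vector(sym=[0.9, 0.8, 0.7],
                         sim=[0.6, 0.5, 0.4],
                         duty=[0.8, 0.7, 0.9],
                         aspect=[2.0, 1.0, 1.5],
                         theta=1.57, volume=350.0)
```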
in step S55, the first to third depth projection images are projected based on at least the respective symmetry metricsA similarity measure between two and a duty cycle and an aspect ratio to generate shape characteristic parameters of the object in the inspected luggage. For example, in some embodiments, the projection I, the duty cycle and the aspect ratio are calculated based on the symmetry value, the similarity value, the duty cycle and the aspect ratio calculated in I) to iv) above1And I2One or more of the angles of the two directions form a shape characteristic parameter.
In step S56, the shape feature parameters are classified with a classifier to obtain a quantifier description that represents the shape of the object.
After the feature vector F is obtained, constructing the classifier follows the general process of pattern recognition, and many kinds of classifiers can be used, such as linear classifiers, support vector machines, decision trees, neural networks, and ensemble classifiers. After training, shape classification of unknown targets can be performed. The embodiments implement the classifier using RF (Random Forest); many well-known function libraries, such as OpenCV, provide RF implementations, so the process is not detailed here.
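As a stand-in for the OpenCV implementation the text mentions, a Random Forest over 14-dimensional feature vectors can be sketched with scikit-learn. The class names are English renderings of the patent's ten quantifier classes, and the training data here is synthetic; real training uses manually labelled scans:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# English renderings of the ten quantifier classes (assumed mapping).
SHAPE_CLASSES = ["bag", "sheet", "block", "bottle", "can",
                 "tube", "root", "pack", "box", "piece"]

# Toy training set: rows are 14-D shape feature vectors.
rng = np.random.default_rng(0)
X_train = rng.random((50, 14))
y_train = rng.integers(0, len(SHAPE_CLASSES), 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify one unknown object's feature vector.
label = SHAPE_CLASSES[clf.predict(rng.random((1, 14)))[0]]
```

The predicted index maps back to a quantifier word, which then feeds the semantic description in step S57.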
It should be noted that, in the training set, each object obtained in the fifth step must be labeled, through manual image interpretation, as one of ten quantifier classes, rendered here as "bag", "sheet", "block", "bottle", "can", "tube", "root", "pack", "box", and "piece". The differences between these quantifier classes are briefly described below.
"bag" refers to a flat package, with aspect ratio being an important feature. Such as soft packaged milk, flat bagged homogeneous food products, etc.;
"sheet" refers to an object of very low thickness, the aspect ratio being an important feature. The thin books, the box fillers and the cutters are all in the range;
"block" refers to a relatively low-similarity, low-space object, such as a plastic bagged homogeneous object, that would form a "block" if not flat packaged;
the bottle refers to an article similar to a mineral water bottle, and the main projection similarity, the side projection similarity, the duty ratio and the height-width ratio are important characteristics;
"can" refers to a similar easy open can, similar to a bottle, but with a higher duty cycle and aspect ratio;
"root" refers to a long object, the aspect ratio being an important feature. Such as sausage, wood, iron pipe, all within this range;
"tubes" are objects that are shorter than "roots" and have good symmetry, such as similar facial cleanser, glue, etc.;
the box is a rectangular article with a certain thickness, and has a larger duty ratio than the block. Duty cycle, aspect ratio are their main characteristics, such as soap, many cosmetics, food have similar characteristics;
"bag" refers to a large object, volume being an important characteristic. Such as computers, very large and thick books, large objects that can be judged to be of other shapes are not in this category;
"A" is used broadly to mean "another" object.
It can be seen that this classification differs from everyday understanding; the cup shown in fig. 4, for example, is a "can" under the definitions above. The definitions are motivated by security inspection: solid explosives generally appear as "bags", "sheets", and "blocks"; liquids mainly appear as "bottles", "cans", and "tubes"; and controlled instruments mostly appear as "sheets" and "roots". The remaining classes, "pack", "box", and "piece", complement these common shapes.
In step S57, a semantic description including at least the quantifier description of the object is output.
After the semantic descriptions of the objects in the inspected baggage are obtained, the system can interact with the user in several ways. For example, object outlines can be displayed directly in the result to draw the user's attention; or, when the user clicks, the object can be extracted on screen and its description displayed, helping the user understand and further label it. In specific settings, the object semantics can be restricted and only objects meeting particular semantics highlighted, reducing the image inspector's workload and improving efficiency.
The position, weight, and shape of the object are now known, so the description is completed simply by computing the average atomic number and electron density (or, for single-energy CT, the linear attenuation coefficient). The individual items of information are then combined into the object's semantic description: "shape + weight + density + atomic number + position".
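Assembling that description is plain string formatting; the exact wording and units below are illustrative, only the ordering (shape, weight, density, atomic number, position) comes from the text:

```python
def semantic_description(shape, weight_g, density, atomic_number, position):
    """Combine the per-object quantities into the semantic description
    in the order: shape + weight + density + atomic number + position."""
    return (f"{shape}, {weight_g:.0f} g, density {density:.2f} g/cm3, "
            f"effective Z {atomic_number:.1f}, at {position}")

desc = semantic_description("bottle", 250, 1.00, 7.5, "(120, 80, 40)")
```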
FIG. 6 is a flow chart describing a method of displaying an object in a CT imaging system in accordance with another embodiment of the present invention. In this embodiment, automatic detection and description provide automatic assisted detection of objects in the inspected baggage. The object description result is a necessary supplement to human inspection, strengthens human-computer interaction, and has strong practical value in reducing the important problem of missed detections.
In step S61, tomographic data of the examined baggage is acquired using the CT imaging system.
In step S62, three-dimensional volume data of each object in the checked baggage is generated from the tomographic data. Then, in step S63, for each object, a semantic description including at least the quantifier description of the object is determined based on the three-dimensional volume data.
In step S64, a user selection of an object is received, and a semantic description of the selected object is presented while a three-dimensional image of the object is displayed.
For example, all detected object positions are marked in the display window, and when the inspector uses a tool such as a mouse to select a position within an object's extent, the complete semantic description of that object is displayed. The inspector can also select an object and add further detailed labels, enriching the semantic description. The presentation may additionally be restricted by a semantic constraint so that only matching objects are shown; for example, only objects whose shape is "bottle" and whose weight is 200 g or more are presented, and the position of a suspect object can be marked in the two-dimensional or three-dimensional image to assist interpretation. A selected object can also be highlighted while all other content is masked, so that only that object is displayed. Furthermore, the constraints of the above embodiments, such as volume-data thresholds and object-shape limits, can be tightened for automatic detection of specific objects such as explosives and contraband.
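A minimal sketch of the semantic-constraint display filter just described; the dictionary keys are our assumptions about how the per-object descriptions might be stored:

```python
def highlight_candidates(objects, shape="bottle", min_weight_g=200.0):
    """Keep only objects matching a semantic constraint, e.g. present
    only bottle-shaped objects weighing 200 g or more (the example
    constraint given in the text)."""
    return [obj for obj in objects
            if obj["shape"] == shape and obj["weight_g"] >= min_weight_g]
```

The display layer would then mark or highlight only the returned objects while masking the rest.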
In other embodiments, the process of generating a semantic description of the various objects in the checked baggage may refer to the embodiment described above in connection with FIG. 5.
According to embodiments of the invention, before the depth projection is performed, a three-dimensional model of each object in the inspected baggage can be created, from which shape features are then extracted for the security check. Fig. 7 is a flow chart describing a method of creating a three-dimensional model of an object in inspected baggage in a CT imaging system in accordance with another embodiment of the present invention.
As shown in fig. 7, in step S71, tomographic data of the baggage being examined is acquired using the CT imaging system. In step S72, the tomographic data is interpolated to generate three-dimensional volume data of the baggage to be examined. For example, in the dual energy case, the three-dimensional volume data includes density volume data and atomic number volume data. In the monoenergetic case, the three-dimensional volume data includes a linear attenuation coefficient.
After a series of DECT density maps and atomic number maps for consecutive slices is obtained, three-dimensional interpolation is applied to each so that the in-slice and inter-slice resolutions become consistent. Many known algorithms exist for three-dimensional interpolation; for example, both the commercial Intel IPP (Intel Integrated Performance Primitives) library and the open-source Kitware VTK (Visualization Toolkit) library provide this function. After interpolation, the two-dimensional tomographic data becomes three-dimensional volume data. Unless otherwise specified below, "volume data" refers to both the density volume data and the atomic number volume data.
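The patent points to Intel IPP and Kitware VTK for this step; purely as an illustration of the operation itself, linear interpolation along the slice axis can be sketched in plain NumPy (a stand-in, not the embodiment's implementation):

```python
import numpy as np

def isotropic_volume(slices, slice_spacing_mm, pixel_spacing_mm):
    """Stack 2-D tomograms and linearly interpolate along the slice
    axis so the inter-slice spacing matches the in-plane pixel spacing."""
    vol = np.stack(slices, axis=0).astype(float)   # (n_slices, H, W)
    n = vol.shape[0]
    n_out = int(round((n - 1) * slice_spacing_mm / pixel_spacing_mm)) + 1
    # fractional source position of every output slice
    pos = np.linspace(0.0, n - 1, n_out)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    w = (pos - lo)[:, None, None]
    return (1.0 - w) * vol[lo] + w * vol[hi]
```

In the dual-energy case this would be applied separately to the density slices and the atomic number slices.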
Next, a threshold is applied to the volume data to remove interference from incidental contents such as clothing. In a specific application this step can be omitted, but the computation then increases, many more "object" results are produced, and the results are less usable.
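A minimal sketch of this thresholding step; the threshold value is an assumption and would in practice depend on the scanner calibration:

```python
import numpy as np

def limit_by_threshold(density_vol, z_vol, min_density=0.3):
    """Zero out voxels below a density threshold to suppress low-density
    clutter such as clothing; both volumes are masked identically so the
    density and atomic number data stay aligned."""
    keep = density_vol >= min_density
    return np.where(keep, density_vol, 0.0), np.where(keep, z_vol, 0.0)
```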
After that, the volume data is filtered using three-dimensional bilateral filtering (bilateral filter). The embodiment uses a fast algorithm; the ITK (Insight Segmentation and Registration Toolkit) library can also be used for this purpose.
In step S73, unsupervised segmentation is applied to the three-dimensional volume data to obtain a plurality of segmented regions.
Two-dimensional segmentation algorithms are often based on 4/8-neighborhoods, gradient information and the like; here the corresponding processing must be extended to three dimensions, for example from a 4-neighborhood to a 6-neighborhood. Second, the segmentation involves both the density and the atomic number volume data; a weighted sum of the two can be used, or each voxel can be represented as a two-dimensional vector, to obtain a single segmentation result. Furthermore, the segmentation should aim for under-segmentation. Preferably, the Statistical Region Merging (SRM) algorithm is used and extended to three-dimensional processing for this purpose. SRM is a bottom-up clustering segmentation algorithm, and the embodiment adopts the following extensions:
1) the atomic number and the density are concatenated into a vector, so that each voxel of the volume data is a two-dimensional variable, and the norm of the difference vector of two such vectors replaces the gray-level difference;
2) the two-dimensional gradient is replaced with a three-dimensional gradient, and the pixel area of a region in two dimensions is replaced with the voxel volume of the region.
With this processing, unsupervised segmentation of DECT data is achieved. In addition, the complexity of the SRM segmentation result is controlled by a complexity parameter; setting a lower complexity value yields the desired under-segmentation.
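The two extensions can be illustrated with a toy three-dimensional region-merging sketch: each voxel carries a (density, atomic number) vector, 6-neighbour edges are visited in order of the difference-vector norm, and regions are merged through a union-find structure. The fixed merging threshold below is our simplification; real SRM uses a statistical merging predicate derived from region statistics:

```python
import numpy as np

def merge_regions_3d(density, z, merge_thresh=0.2):
    """Toy 3-D region merging in the spirit of the extended SRM: voxels
    are (density, Z) vectors, edges are 6-neighbour pairs sorted by
    difference-vector norm, and a union-find tracks region mean vectors."""
    shape = density.shape
    feat = np.stack([density, z], axis=-1).reshape(-1, 2).astype(float)
    n = feat.shape[0]
    parent = np.arange(n)
    mean = feat.copy()          # running mean vector per region root
    size = np.ones(n)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    idx = np.arange(n).reshape(shape)
    edges = []
    for axis in range(3):       # 6-neighbourhood: +/-1 along each axis
        a = np.take(idx, range(shape[axis] - 1), axis=axis).ravel()
        b = np.take(idx, range(1, shape[axis]), axis=axis).ravel()
        w = np.linalg.norm(feat[a] - feat[b], axis=1)
        edges.extend(zip(w, a, b))
    for w, a, b in sorted(edges, key=lambda e: e[0]):
        ra, rb = find(a), find(b)
        if ra != rb and np.linalg.norm(mean[ra] - mean[rb]) < merge_thresh:
            total = size[ra] + size[rb]
            mean[ra] = (mean[ra] * size[ra] + mean[rb] * size[rb]) / total
            size[ra] = total
            parent[rb] = ra
    return np.array([find(i) for i in range(n)]).reshape(shape)
```

On a volume containing two homogeneous halves of different material, this produces exactly two labels, one per half.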
In step S74, iso-surface extraction is performed on the plurality of divided regions to obtain corresponding iso-surfaces.
Iso-surface extraction is performed on each of the segmented regions obtained in step S73 to obtain the corresponding iso-surface. The embodiment uses the Marching Cubes algorithm.
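Marching Cubes implementations are widely available (for example in VTK, or `skimage.measure.marching_cubes`, which returns a triangle mesh). As a much cruder stand-in that still conveys the idea of extracting a region's surface, the boundary voxels of a binary region can be found as follows:

```python
import numpy as np

def boundary_voxels(mask):
    """Crude stand-in for iso-surface extraction: return the voxels of a
    binary region that touch the outside through a 6-neighbour face.
    Marching Cubes would instead emit a sub-voxel triangle mesh here."""
    padded = np.pad(mask, 1)                     # pad so edges count as outside
    interior = np.ones_like(padded, dtype=bool)
    for axis in range(3):
        interior &= np.roll(padded, 1, axis) & np.roll(padded, -1, axis)
    interior = interior[1:-1, 1:-1, 1:-1] & mask
    return mask & ~interior
```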
In step S75, three-dimensional surface segmentation is performed on the iso-surfaces to form a three-dimensional model of each object.
Since step S73 deliberately under-segments, several closely packed objects with similar material characteristics may remain in a single region; the three-dimensional segmentation result therefore needs to be refined by surface segmentation, for example with a Mesh Segmentation algorithm that divides the surface into a number of convex patches. This algorithm is supervised and requires the number of segments to be specified. In practice the number could be estimated by an unsupervised scheme similar to K-Means clustering, but our experiments show that such approaches are difficult to make work well, so the number of segments is fixed at 10. The 10 resulting patches are then merged under a star-convexity assumption: if the centers of patches A and B are a and b respectively, and the line segment from a to b lies inside the overall surface obtained in step S74 (or the proportion of its voxels falling outside is below a threshold), A and B are merged. Pairwise merging of the 10 patches gives the final segmentation result.
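The star-convex merge test can be sketched by sampling the segment between the two patch centres against the overall object mask. Names and the sampling scheme are ours; the tolerance corresponds to the "proportion of voxels outside" threshold mentioned above:

```python
import numpy as np

def should_merge(mask, centroid_a, centroid_b, outside_tol=0.1, samples=50):
    """Star-convex merge test: connect the two patch centres and merge
    if the segment stays inside the overall object mask, allowing a
    small fraction of sample points to fall outside."""
    t = np.linspace(0.0, 1.0, samples)[:, None]
    pts = np.asarray(centroid_a) * (1 - t) + np.asarray(centroid_b) * t
    ijk = np.round(pts).astype(int)              # nearest voxel per sample
    inside = mask[ijk[:, 0], ijk[:, 1], ijk[:, 2]]
    return bool((1.0 - inside.mean()) <= outside_tol)
```

Two patches belonging to one convex object pass the test; patches separated by a gap in the mask fail it and stay separate.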
According to the embodiment of the invention, the segmented result can be post-processed in three steps: hole filling, smoothing and model limitation. The first two are basic graphics operations and can be implemented with the function library of the open-source Kitware VTK (Visualization Toolkit), so they are not described further here. The model is then voxelized, filled with the density volume data values, and the surface area, volume and weight are computed. Model limitation means removing small objects, i.e., objects whose size, volume or weight falls below thresholds. This limitation serves two purposes: it removes noise objects, making the result more practical, and it discards local detail of larger objects, making the subsequent shape recognition more accurate. The specific thresholds depend on the resolution of the DECT device and are set according to the actual situation; for example, the weight threshold may be set to 50 grams.
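The weight-based model limitation can be sketched as follows; here each model is assumed to be already voxelized into a boolean mask, and the 50 g threshold is the example value from the text:

```python
import numpy as np

def prune_small_objects(models, density_vol, voxel_cm3, min_weight_g=50.0):
    """Drop noise objects: integrate the density volume data over each
    model's voxels to get its weight, and keep only objects whose
    weight reaches the threshold (device-dependent in practice)."""
    kept = []
    for mask in models:
        weight_g = float(density_vol[mask].sum()) * voxel_cm3
        if weight_g >= min_weight_g:
            kept.append((mask, weight_g))
    return kept
```

Analogous checks on surface area and volume would be added for the size and volume limits.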
The foregoing detailed description has set forth numerous embodiments of a method of inspecting an object, a display method, a method of creating a three-dimensional model, and an apparatus, using schematics, flowcharts, and/or examples. Where such diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of structures, hardware, software, firmware, or virtually any combination thereof. In one embodiment, portions of the subject matter described by embodiments of the invention may be implemented by Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing media used to actually carry out the distribution.
Examples of signal bearing media include, but are not limited to: recordable type media such as floppy disks, hard disk drives, Compact Disks (CDs), Digital Versatile Disks (DVDs), digital tapes, computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
While the present invention has been described with reference to several exemplary embodiments, it is understood that the terminology used is intended to be in the nature of words of description and illustration, rather than of limitation. As the present invention may be embodied in several forms without departing from the spirit or essential characteristics thereof, it should also be understood that the above-described embodiments are not limited by any of the details of the foregoing description, but rather should be construed broadly within its spirit and scope as defined in the appended claims, and therefore all changes and modifications that fall within the metes and bounds of the claims, or equivalences of such metes and bounds, are therefore intended to be embraced by the appended claims.

Claims (14)

1. A method of inspecting baggage in a CT imaging system, comprising the steps of:
acquiring tomographic data of the examined baggage using the CT imaging system;
generating three-dimensional volume data of at least one object in the inspected baggage from the tomographic data;
calculating a first depth projection image, a second depth projection image and a third depth projection image of the object in three directions based on the three-dimensional volume data, wherein a projection direction of the third depth projection image is orthogonal to projection directions of the first and second depth projection images;
calculating respective symmetry metric values, similarity metric values between every two, duty ratio and aspect ratio of the first depth projection image, the second depth projection image and the third depth projection image;
generating shape feature parameters of the object based on the respective symmetry metric, similarity metric between each two, and duty ratio and aspect ratio of the first through third depth projection images;
classifying the object with a classifier based on the shape feature parameters to obtain a quantifier description reflecting the shape of the object;
outputting a semantic description including at least the quantifier description of the object;
wherein the step of generating three-dimensional volume data of at least one object in the examined baggage from the tomographic data comprises:
interpolating the tomographic data to obtain three-dimensional volume data of the inspected baggage;
performing unsupervised segmentation on the three-dimensional volume data of the checked luggage to obtain a plurality of segmented regions;
extracting isosurface from the plurality of divided areas to obtain corresponding isosurface; and
performing three-dimensional surface segmentation on the isosurface to obtain three-dimensional volume data of each object in the checked luggage;
wherein the quantifier description includes at least one of: bags, sheets, blocks, bottles, jars, roots, boxes, bags, and individuals.
2. The method of claim 1, further comprising the steps of: calculating an included angle between projection directions of the first depth projection image and the second depth projection image;
wherein the shape feature parameters further include the included angle.
3. The method of claim 1, further comprising the steps of: calculating a volume value of the object based on the three-dimensional volume data;
wherein the shape feature parameter further comprises the volume value.
4. The method of claim 1 wherein the projection directions of the first depth projection image and the second depth projection image are orthogonal, approximating directions in which the projected area of the object is largest and smallest, respectively.
5. The method of claim 1, the semantic description further comprising at least one of a weight, a density, an atomic number, and a position of the object.
6. An apparatus for baggage inspection in a CT imaging system, comprising:
means for acquiring tomographic data of the baggage under inspection using the CT imaging system;
means for generating three-dimensional volume data of at least one object in the inspected baggage from the tomographic data;
means for calculating a first depth projection image, a second depth projection image, and a third depth projection image of the object in three directions based on the three-dimensional volume data, wherein a projection direction of the third depth projection image is orthogonal to projection directions of the first and second depth projection images;
means for calculating respective symmetry metric values, similarity metric values between every two, duty ratios and aspect ratios of the first depth projection image, the second depth projection image and the third depth projection image;
means for generating shape feature parameters of the object based on the respective measures of symmetry, similarity between each two, and duty ratio and aspect ratio of the first through third depth projection images;
means for classifying the object with a classifier based on the shape feature parameters to obtain a quantifier description reflecting the shape of the object;
means for outputting a semantic description including at least the quantifier description of the object;
wherein the means for generating three-dimensional volumetric data of at least one object in the inspected baggage from the tomographic data further comprises:
means for interpolating the tomographic data to obtain three-dimensional volume data of the inspected baggage;
means for unsupervised segmentation of the three-dimensional volume data of the inspected baggage into a plurality of segmented regions;
a device for extracting isosurface from the plurality of divided regions to obtain corresponding isosurface;
means for performing three-dimensional surface segmentation on said iso-surface to obtain a three-dimensional model of each object;
wherein the quantifier description includes at least one of: bags, sheets, blocks, bottles, jars, roots, boxes, bags, and individuals.
7. A method of displaying an object in a CT imaging system, comprising the steps of:
acquiring tomographic data of the examined baggage using the CT imaging system;
generating three-dimensional volume data for each object in the inspected baggage from the tomographic data;
for each object, determining a semantic description comprising a quantifier description of the object based on the three-dimensional volumetric data;
receiving a selection of a certain object by a user, and presenting the semantic description of the object while displaying a three-dimensional image of the selected object;
wherein the step of generating three-dimensional volume data for each object in the inspected baggage from the tomographic data comprises the steps of:
interpolating the tomographic data to obtain three-dimensional volume data of the inspected baggage;
performing unsupervised segmentation on the three-dimensional volume data of the checked luggage to obtain a plurality of segmented regions;
extracting isosurface from the plurality of divided areas to obtain corresponding isosurface;
carrying out three-dimensional surface segmentation on the isosurface to obtain a three-dimensional model of each object;
wherein the quantifier description includes at least one of: bags, sheets, blocks, bottles, jars, roots, boxes, bags, and individuals.
8. The method of claim 7, wherein determining, for each object, a semantic description of the object comprises:
calculating a first depth projection image, a second depth projection image and a third depth projection image of the object in three directions based on the three-dimensional volume data of the object, wherein the projection direction of the third depth projection image is orthogonal to the projection directions of the first and second depth projection images;
calculating respective symmetry metric values, similarity metric values between every two, duty ratio and aspect ratio of the first depth projection image, the second depth projection image and the third depth projection image;
generating shape feature parameters of the object based on the respective symmetry metric, similarity metric between each two, and duty ratio and aspect ratio of the first through third depth projection images;
classifying the object with a classifier based on the shape feature parameters to obtain a quantifier description reflecting the shape of the object;
outputting a semantic description including at least the quantifier description of the object.
9. The method of claim 7, the semantic description further comprising at least one of a weight, a density, an atomic number, and a position of the object.
10. The method of claim 8, further comprising the step of: calculating an included angle between projection directions of the first depth projection image and the second depth projection image;
wherein the shape feature parameters further include the included angle.
11. The method of claim 8, further comprising the step of: calculating a volume value of the object based on three-dimensional volume data of the object;
wherein the shape feature parameter further comprises the volume value.
12. The method of claim 8 wherein the projection directions of the first depth projection image and the second depth projection image are orthogonal, approximating directions in which the projected area of the object is largest and smallest, respectively.
13. The method of claim 7, wherein the selected object is highlighted while other objects are masked.
14. An apparatus for displaying an object in a CT imaging system, comprising:
means for acquiring tomographic data of the baggage under inspection using the CT imaging system;
means for generating three-dimensional volume data of at least one object in the inspected baggage from the tomographic data;
means for determining, for each object, a semantic description comprising a quantifier description of the object based on the three-dimensional volumetric data;
means for receiving a user selection of an object and presenting a semantic description of the selected object while displaying a three-dimensional image of the object;
wherein the means for generating three-dimensional volumetric data of at least one object in the examined baggage from the tomographic data comprises:
means for interpolating the tomographic data to obtain three-dimensional volume data of the inspected baggage;
means for unsupervised segmentation of the three-dimensional volume data of the inspected baggage into a plurality of segmented regions;
a device for extracting isosurface from the plurality of divided regions to obtain corresponding isosurface;
means for performing three-dimensional surface segmentation on said iso-surface to obtain a three-dimensional model of each object;
wherein the quantifier description includes at least one of: bags, sheets, blocks, bottles, jars, roots, boxes, bags, and individuals.
HK14109363.4A 2014-09-17 Methods for detecting objects, display methods and apparatuses HK1195943B (en)

Publications (2)

Publication Number Publication Date
HK1195943A HK1195943A (en) 2014-11-28
HK1195943B true HK1195943B (en) 2018-04-20
