WO2019178666A1 - System, apparatus and method for performing security screening at a checkpoint using x-ray and ct scanning devices and gui configured for use in connection with same
- Publication number
- WO2019178666A1 (PCT/CA2018/051226)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- item
- image data
- display
- screening station
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N23/00—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
- G01N23/02—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
- G01N23/04—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material
- G01N23/046—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material using tomography, e.g. computed tomography [CT]
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01V—GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
- G01V5/00—Prospecting or detecting by the use of ionising radiation, e.g. of natural or induced radioactivity
- G01V5/20—Detecting prohibited goods, e.g. weapons, explosives, hazardous substances, contraband or smuggled objects
- G01V5/22—Active interrogation, i.e. by irradiating objects or goods using external radiation sources, e.g. using gamma rays or cosmic rays
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2223/00—Investigating materials by wave or particle radiation
- G01N2223/60—Specific applications or type of materials
- G01N2223/643—Specific applications or type of materials object on conveyor
Definitions
- TITLE: SYSTEM, APPARATUS AND METHOD FOR PERFORMING SECURITY SCREENING AT A CHECKPOINT USING X-RAY AND CT SCANNING DEVICES AND GUI CONFIGURED FOR USE IN CONNECTION WITH SAME
- the present invention relates generally to security systems and, more particularly, to a security screening system for assisting screening operators in screening items, in particular in connection with carry-on luggage, and to a method and/or apparatus for improving the efficiency of security screening processes at security checkpoints of the type present at secured facilities.
- checkpoint security-screening systems make use of scanning devices that use penetrating radiation to scan items (such as pieces of carry-on luggage or other items) in order to obtain image data conveying information pertaining to the contents of the items.
- scanning devices generally include a conveyor on which the items are positioned, either directly or on a support such as a tray.
- the conveyor displaces the objects positioned thereon towards an inspection area, also referred to as the scanning tunnel, where the objects are subjected to penetrating radiation in order to generate image data conveying information on the contents and/or composition of the items.
- Typical scanning devices that may be used to provide image data in this context include X-ray scanners and computed tomography (CT)-scanners.
- X-ray scanners typically use penetrating radiation to generate one or more 2D images of items under inspection. For a given item, in the case of a single view X-ray scanner, one 2D image of the item is generated and in the case of a multi-view X-ray scanner, two or more 2D images of the item may be generated.
- CT- scanners typically use penetrating radiation to generate a plurality of “slices” of items under inspection in order to generate a 3D representation of the item.
- the scanning devices are in communication with one or more display devices on which images derived from the generated image data may be rendered through a displayed Graphical User Interface (GUI).
- human operators visually inspect the images, alone or with the assistance of information generated by one or more automated threat detection (ATD) tools, in order to assess whether the items may present a threat.
- an ATD tool may make use of an image processing algorithm to process image data associated with an item under inspection to identify shapes and/or materials that may indicate that the item is likely to present a potential threat (e.g. the item may include or hold one or more objects such as guns, knives, bottles of liquid or other objects that may be considered to present a potential threat).
- the human operator may perform a number of functions including manipulating the displayed image and assigning a threat level to the item under inspection (e.g. low, medium, high level of threat).
- the human operator may also use controls on the GUI to control the displacement of the items under inspection through the security checkpoint, for example by generating control signals for controlling switches of the conveyor.
- each scanning device includes one or more dedicated display devices connected thereto on which GUIs specifically configured to operate with that scanning device are used to render images for display to human operators and to provide tailored tools to manipulate the images and/or perform specific functions.
- a challenge with providing remote screening for security checkpoints having different types of scanning devices is that it is typically necessary to provide different types of GUIs adapted to the different devices to view the images generated and to allow for different functionality to be provided depending on the scanning device that was used to generate the image data.
- An added challenge arises when there are different types of scanning devices using different technologies that are used to generate image data at a security checkpoint using remote screening, in particular when there are some X-ray scanners and some CT scanners. Given the different types of image data generated and the different types of desired functionality/manipulation that are often expected when using CT scanners compared to X-ray scanners, different GUIs are essentially used for each type of device.
- a deficiency in using different GUIs in the context of providing remote screening capabilities for a security checkpoint is that they require human operators to be trained on different types of interfaces/tools, which increases the training time and thus increases the training costs for the operators.
- An alternative is to have different human operators for each of the different types of scanning devices each being trained on a specific GUI. However, this alternative increases the labor costs associated with providing security screening as additional staff would likely need to be hired.
- Systems and methods are disclosed for performing security screening for a security checkpoint.
- a GUI configured for use in performing the security screening is also disclosed.
- a single GUI is provided for screening different items scanned by different scanners.
- alternate image data conveying a 2D image of an item is obtained from 3D image data, and the 2D image is first displayed.
- a method for screening an item at a security checkpoint includes a checkpoint screening station.
- the method is implemented by a system including at least one programmable processor.
- the method includes receiving image data derived from scanning the item with penetrating radiation at the checkpoint screening station.
- the image data conveys a 3D image of the item.
- the method further includes processing the image data conveying the 3D image of the item to derive alternate image data conveying a 2D image of the item.
- the method further includes transmitting the derived alternate image data conveying the 2D image of the item for display on a display screen at a remote screening station.
- the remote screening station is in communication with the system over a computer network.
- the method further includes transmitting the image data conveying the 3D image of the item for display on the display screen at the remote screening station. Transmission of the image data conveying the 3D image of the item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the item.
- the image data conveying the 3D image of the item includes CT data
- the derived alternate image data conveys a simulated X-ray image of the item.
- the simulated X-ray image may be derived by applying simulation operations to the CT data, e.g. by projecting the CT data.
- the system hedges against latency associated with the transmittal and processing for rendering of 3D images by making the 2D image available for display faster than the 3D image would have been available.
- screening of 2D images through visual examination by a human operator has been observed in some circumstances to be faster than screening of 3D images, which may be due to the reduced complexity of the images; this may assist in improving screening efficiency/speed of inspection.
- Faster screening may in turn result in faster passenger flow through a checkpoint screening station that uses a CT scanner.
- a corresponding system is disclosed.
- the system is for screening an item at a security checkpoint, where the security checkpoint includes a checkpoint screening station.
- the system includes a memory to store image data derived from scanning the item with penetrating radiation at the checkpoint screening station.
- the image data conveys a 3D image of the item.
- the system further includes a processor in communication with the memory.
- the processor is programmed to: (i) process the image data conveying the 3D image of the item to derive alternate image data conveying a 2D image of the item; (ii) cause transmission of the derived alternate image data conveying the 2D image of the item for display on a display screen at a remote screening station, wherein the remote screening station is in communication with the system over a computer network; (iii) cause transmission of the image data conveying the 3D image of the item for display on the display screen at the remote screening station, wherein the transmission of the image data conveying the 3D image of the item is performed subsequent to or in parallel with the transmission of the derived alternate image data conveying the 2D image of the item.
- a method for screening items at a security checkpoint includes a first checkpoint screening station with a screening device of a first type (e.g. a CT scanner) and a second checkpoint screening station with a screening device of a second type (e.g. an X-ray scanner) that is distinct from the first type.
- the method is implemented by a system including at least one programmable processor.
- the method includes receiving first image data derived from scanning a first item with penetrating radiation at the first checkpoint screening station.
- the first image data conveys a 3D image of the first item.
- the method further includes processing the first image data conveying the 3D image of the first item to derive alternate image data conveying a 2D image of the first item.
- the method further includes transmitting the derived alternate image data conveying the 2D image of the first item for display on a display screen at a remote screening station.
- the remote screening station is in communication with the system over a computer network.
- the method further includes transmitting the image data conveying the 3D image of the first item for display on the display screen at the remote screening station. Transmission of the image data conveying the 3D image of the first item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the first item.
- the method further includes receiving second image data derived from scanning a second item with penetrating radiation at the second checkpoint screening station.
- the second image data conveys a 2D image of the second item.
- the method further includes transmitting the second image data conveying the 2D image of the second item for display on the display screen at the remote screening station.
- a first processor associated with the first checkpoint screening station performs the receiving of the first image data, the processing of the first image data, the transmitting of the derived alternate image data conveying the 2D image of the first item, and the transmitting of the image data conveying the 3D image of the first item.
- a second processor associated with the second checkpoint screening station performs the receiving of the second image data and the transmitting of the second image data conveying the 2D image of the second item.
- a system including at least one memory for storing first image data derived from scanning a first item with penetrating radiation at a first checkpoint screening station.
- the first image data conveys a 3D image of the first item.
- At least one processor is programmed to: (i) process the first image data conveying the 3D image of the first item to derive alternate image data conveying a 2D image of the first item; (ii) cause transmission of the derived alternate image data conveying the 2D image of the first item for display on a display screen at a remote screening station, wherein the remote screening station is in communication with the system over a computer network; (iii) cause transmission of the first image data conveying the 3D image of the first item for display on the display screen at the remote screening station, wherein transmission of the first image data conveying the 3D image of the first item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the first item.
- the at least one memory is also for storing second image data derived from scanning a second item with penetrating radiation at a second checkpoint screening station, the second image data conveying a 2D image of the second item.
- the at least one processor is further programmed to transmit the second image data conveying the 2D image of the second item for display on the display screen at the remote screening station.
- the at least one memory may be first and second separate memories. The first memory is associated with the first checkpoint screening station and stores the first image data. The second memory is associated with the second checkpoint screening station and stores the second image data.
- the at least one processor may be first and second processors.
- the first processor is programmed to process the first image data, cause transmission of the derived alternate image data conveying the 2D image of the first item, and cause transmission of the first image data conveying the 3D image of the first item.
- the second processor is programmed to transmit the second image data conveying the 2D image of the second item.
- a system including a remote screening station having a display screen for displaying images of items scanned at the security checkpoint.
- the system further includes a first computing device in network communication with both the remote screening station and a first checkpoint screening station.
- the first computing device includes: (i) a first memory for storing first image data derived from scanning a first item with penetrating radiation at the first checkpoint screening station, the first image data conveying a 3D image of the first item; (ii) a first processor programmed to: (1) process the first image data conveying the 3D image of the first item to derive alternate image data conveying a 2D image of the first item; (2) cause transmission of the derived alternate image data conveying the 2D image of the first item over the network for display on the display screen at the remote screening station; and (3) cause transmission of the first image data conveying the 3D image of the first item over the network for display on the display screen at the remote screening station, wherein transmission of the first image data conveying the 3D image of the first item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the first item.
- the system further includes a second computing device in network communication with both the remote screening station and a second checkpoint screening station.
- the second computing device includes: (i) a second memory for storing second image data derived from scanning a second item with penetrating radiation at the second checkpoint screening station, the second image data conveying a 2D image of the second item; and (ii) a second processor programmed to transmit the second image data conveying the 2D image of the second item over the network for display on the display screen at the remote screening station.
- a method for screening an item at a security checkpoint is implemented by a system including at least one programmable processor.
- the at least one programmable processor is configured for implementing a GUI on a display screen.
- the GUI is configured for: (i) displaying a 2D image of the item; (ii) providing an input object operable by an operator.
- the input object is configured to selectively acquire: (1) an enabled state in which the input object is able to receive a user request to display a 3D image of the item; and (2) a disabled state in which the input object is unable to receive the user request to display the 3D image of the item.
- the method includes displaying the 2D image of the item on the GUI and causing the input object to acquire the disabled state.
- the method further includes dynamically adapting the GUI to subsequently cause the input object to acquire the enabled state after a delay period.
- the delay period is based at least in part on receipt of image data conveying the 3D image of the item.
- the input object is configured to remain in the disabled state at least until the 3D image of the item is available for display on the GUI.
- the method includes dynamically adapting the GUI to display the 3D image of the item on the display screen. The method may be performed at a remote screening station.
- a human operator may be accustomed to screening 2D X-ray images from an X-ray scanning device, and therefore may be accustomed to using an associated GUI and user interface tools developed for screening such 2D images.
- a single GUI may now be provided to allow the operator to screen both 2D images and 3D images, where the 3D images may originate from a CT scanner, and where the 2D images may originate from an X-ray scanner or be a simulated X-ray image originating from CT data.
- tools and general interactions with the user may be implemented so as to provide a similar user experience whether the image of the scanned item to be screened originates from a 2D X-ray scanner or a 3D CT scanner.
- the delay period for dynamically adapting the GUI to cause the input object to acquire the enabled state may be based at least in part on receipt of image data conveying the 3D image and upon an intentional delay period.
- the intentional delay period may be used to encourage a human operator to screen an item using a displayed 2D simulated X-ray image, rather than using the 3D image of the item, which may in turn make the screening process faster.
- a system corresponding to the above described method includes a non-transitory memory for storing image data, a display screen, and at least one processor programmed to implement a GUI on the display screen.
- the GUI is configured for: (i) displaying a 2D image of the item; (ii) providing an input object operable by an operator, the input object being configured to selectively acquire: (1) an enabled state in which the input object is able to receive a user request to display a 3D image of the item; and (2) a disabled state in which the input object is unable to receive the user request to display the 3D image of the item.
- the processor is further programmed to display the 2D image of the item on the GUI, the input object being in the disabled state when the display of the 2D image is initiated. Following the display of the 2D image, the processor is further programmed to dynamically adapt the GUI to subsequently cause the input object to acquire the enabled state after a delay period. The delay period is based at least in part on receipt of image data conveying the 3D image of the item.
- the input object is configured to remain in the disabled state at least until the 3D image of the item is available for display on the GUI.
- the processor is further programmed to dynamically adapt the GUI to display the 3D image of the item on the display screen.
- a method for screening an item at a security checkpoint including a checkpoint screening station.
- the method is implemented by a system including at least one programmable processor.
- the method includes receiving image data derived from scanning the item with penetrating radiation at the checkpoint screening station, the image data conveying a 3D image of the item.
- the method further includes processing the image data conveying the 3D image of the item to derive alternate image data conveying a 2D image of the item.
- processing the image data includes defining a plurality of projection paths through the 3D image of the item, at least some projection paths through the 3D image in said plurality of projection paths extending along convergent or divergent axes; and projecting the image data conveying the 3D image of the item along the projection paths to derive the alternate image data conveying the 2D image of the item.
- the method further includes transmitting the derived alternate image data conveying the 2D image of the item for display on a display screen of a screening station.
- the method further includes transmitting the image data conveying the 3D image of the item for display on the display screen of the screening station, wherein transmission of the image data conveying the 3D image of the item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the item.
- the method above is not specific to a remote screening station and may in some implementations be used to assist with passenger throughput even when screening at a local screening station, e.g. at a lane. More specifically, the display of a 2D image first may encourage faster screening because a 2D image may be all that is needed for many items, and a 2D image may be inspected more quickly than a 3D image.
- a system for screening an item at a security checkpoint.
- the security checkpoint includes a checkpoint screening station.
- the system includes: (a) a memory to store image data derived from scanning the item with penetrating radiation at the checkpoint screening station, the image data conveying a 3D image of the item; and (b) a processor in communication with said memory.
- the processor is programmed to: (i) process the image data conveying the 3D image of the item to derive alternate image data conveying a 2D image of the item, wherein the processor is programmed to process the image data by defining a plurality of projection paths through the 3D image of the item, at least some projection paths through the 3D image in said plurality of projection paths extending along convergent or divergent axes; and projecting the image data conveying the 3D image of the item along the projection paths to derive the alternate image data conveying the 2D image of the item; (ii) cause transmission of the derived alternate image data conveying the 2D image of the item for display on a display screen of a screening station, wherein the screening station is in communication with the system over a computer network; and (iii) cause transmission of the image data conveying the 3D image of the item for display on the display screen of the screening station, wherein the transmission of the image data conveying the 3D image of the item is performed subsequent to or in parallel with the transmission of the derived alternate image data conveying the 2D image of the item.
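- As an illustration of the projection described above (and not the patented algorithm itself), the following minimal Python sketch computes a 2D image whose pixels are line integrals of a 3D density volume along rays diverging from a hypothetical virtual point source, analogous to the divergent beam of a physical X-ray source. The (z, y, x) volume layout, the function name project_divergent, and all parameter values are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def project_divergent(volume, source_dist=500.0, n_steps=256):
    """Approximate line integrals of a 3D volume (z, y, x) along rays that
    diverge from a virtual point source below the volume toward a detector
    plane at z = nz - 1 (one ray per detector pixel)."""
    nz, ny, nx = volume.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    src = np.array([-source_dist, cy, cx])             # point source under the volume
    dy, dx = np.meshgrid(np.arange(ny, dtype=float),
                         np.arange(nx, dtype=float), indexing="ij")
    dz = np.full_like(dy, nz - 1.0)                    # detector pixel z coordinates
    t = np.linspace(0.0, 1.0, n_steps)[:, None, None]  # ray parameter, source -> detector
    coords = np.stack([src[0] + t * (dz - src[0]),     # shape (3, n_steps, ny, nx)
                       src[1] + t * (dy - src[1]),
                       src[2] + t * (dx - src[2])])
    # Trilinear sampling of the volume along every ray; points outside read 0.
    samples = map_coordinates(volume, coords, order=1, mode="constant", cval=0.0)
    seg = np.sqrt((dz - src[0])**2 + (dy - src[1])**2 + (dx - src[2])**2) / n_steps
    return samples.sum(axis=0) * seg                   # 2D array of path integrals
```

A parallel projection (all rays along one axis) would instead simply sum the volume along that axis; the divergent variant above mimics the geometry of a real X-ray beam.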
- At least one processor to perform the methods disclosed herein.
- a system to perform the methods disclosed herein.
- at least one computer readable medium having stored thereon computer executable instructions that, when executed, cause at least one processor to perform at least part of one or more of the methods disclosed herein.
- FIG. 1 is a block diagram of a security checkpoint screening system, according to one embodiment
- FIG. 2 is a flowchart of a method performed by a security checkpoint screening station and remote screening station, according to one embodiment
- FIG. 3 is a block diagram of a security checkpoint screening system, according to another embodiment
- FIG. 4 is a flowchart of a method performed by a security checkpoint screening station and remote screening station, according to another embodiment
- FIG. 5 is a block diagram of a security checkpoint screening station, according to one embodiment
- FIG. 6 is a flowchart of a method performed by a local security checkpoint screening station, according to one embodiment
- FIG. 7 is a flowchart of a method performed by a local security checkpoint screening station, according to another embodiment
- FIG. 8 is a block diagram of a security checkpoint screening system, according to another embodiment.
- FIG. 9 is a method performed at a remote screening station, according to one embodiment.
- FIG. 10 illustrates an example of generating a flattened view from 3D data
- FIG. 11 illustrates an example of generating a projected view from 3D data
- FIG. 12 illustrates a k-th slice of reconstructed density data, according to one embodiment
- FIG. 13 illustrates one example of a flowchart for a colouring algorithm
- FIG. 14 illustrates determining projection coordinates of corners of a 3D region of interest (ROI), according to one embodiment
- FIG. 15 illustrates an example flow chart for the creation of at least one 2D image from 3D data
- FIGs. 16 to 38 illustrate example GUI displays at a screening station
- FIG. 39 illustrates an example flowchart for processing images of items scanned by an X-ray scanner
- FIG. 40 illustrates an example flowchart for processing images of items scanned by a CT scanner
- FIG. 41 illustrates an example of object removal
- FIG. 42 illustrates an example of scene removal
- FIG. 43 illustrates an example GUI display at a local screening station
- FIG. 44 illustrates an example system for performing security screening
- FIGs. 45 to 47 are flowcharts of example methods for screening items for a security checkpoint.
- the items under inspection are pieces of carry-on luggage. It is, however, to be appreciated that the concepts presented herein are applicable in situations where the items under inspection are objects other than pieces of carry-on luggage, for example containers of liquid, shoes, laptops, purses, wallets, keys or any other type of objects screened at a security checkpoint. Moreover, while the present application may refer to “carry-on luggage” in the context of certain embodiments of the inventions configured for airport security checkpoints, it is to be appreciated that the concepts presented may be used in the context of security checkpoints more generally and their use is not limited to security checkpoints at airports.
- FIG. 1 is a block diagram of a security checkpoint screening system 102, according to one embodiment.
- the system 102 includes a security checkpoint having three security checkpoint screening stations 104a, 104b, and 104c, although more or fewer may be present.
- Security checkpoint screening station 104a includes a pre-scan area 106a, a scanning device 108a, and a post-scan area 110a.
- the scanning device 108a is a computed tomography (CT) scanner, and will be referred to as CT scanner 108a.
- A scanner may also be referred to as a screening device.
- a conveyor in the pre-scan area 106a conveys hand luggage into the CT scanner 108a.
- Hand luggage may also be called “carry on luggage” or “cabin luggage”.
- the word “luggage” is used to describe any articles that are scanned, e.g. bags, personal items, jackets, etc.
- the CT scanner 108a performs a CT scan of each item.
- An “item” refers to a standalone piece of luggage, a standalone group of luggage, or a tray having one or more pieces of luggage therein.
- the CT scanner 108a scans the item with penetrating radiation and generates CT image data, which is three-dimensional (3D) data, e.g. reconstructed density and/or effective atomic number (“Zeff”) data.
- the CT image data from the CT scanner 108a conveys a 3D image of the item and is stored in memory 112a.
- the CT scanner 108a may optionally also provide automatic threat detection (ATD) results associated with the scanned item, in which case the ATD results are also stored in memory 112a.
- the term “ATD results” encompasses detection results that may be in the form of identified objects and/or identified regions. The results are automatically detected, but they may not inherently be threats, e.g. the ATD results may identify regions and/or objects of interest without necessarily presenting threats.
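- As an illustration only, the per-item data described above (3D density and Zeff volumes plus optional ATD results) might be held in a record along the following lines; the class name and fields are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class CTScanRecord:
    """Hypothetical per-item record stored in memory 112a after a CT scan."""
    density: np.ndarray                 # 3D reconstructed density volume, e.g. shape (nz, ny, nx)
    zeff: np.ndarray                    # 3D effective atomic number (Zeff) volume, same shape
    atd_results: Optional[list] = None  # optional regions/objects of interest from the ATD tool
```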
- a processor 114a is coupled to memory 112a and can process the data from the CT scanner 108a.
- a human screener may be located at the security checkpoint screening station 104a, and may access a user interface, including a graphical user interface (GUI) via a display device 116a to view images obtained from the CT scanner data.
- the GUI may be presented on one display monitor or a set of display monitors (e.g. one display monitor displaying a bottom view, and the other display monitor displaying a 3-D view). Therefore, the term “display device” (e.g. display device 116a) is not limited to a single monitor implementation, but may be implemented by multiple monitors. The same remark applies to the other display devices mentioned herein, e.g. display device 216 described later. Also, the term “display screen” is used interchangeably with “display device” herein.
- the post-scan area 110a includes two conveyor sections alongside one another, one to receive the cleared items, and the other to receive items flagged for secondary screening.
- a divider 118a may isolate the conveyor section receiving the items flagged for secondary screening. If an item is flagged for secondary screening, it may be manually diverted, or it may be electronically diverted via one or more switches that are automatically controlled, e.g. based on threat information associated with the item.
- the threat assessment information may be from the ATD, from a human screener, or from both.
- Security checkpoint screening stations 104b and 104c each include similar components as security checkpoint screening station 104a, with one notable exception. In each of security checkpoint screening stations 104b and 104c an X-ray scanner is used instead of a CT scanner. Specifically, security checkpoint screening station 104b includes scanning device 108b, which is an X-ray scanner and will be referred to as X-ray scanner 108b.
- the X-ray scanner 108b scans the item with penetrating radiation to generate X-ray image data, which is two-dimensional (2D) image data.
- the image data from the X-ray scanner 108b conveys a 2D image of the item and is stored in memory 112b.
- the X-ray scanner 108b may optionally also provide ATD results associated with the 2D X-ray image data for a scanned item, in which case the ATD results would also be stored in memory 112b.
- a processor 114b is coupled to memory 112b and can process the data from the X-ray scanner 108b.
- security checkpoint screening station 104c includes X-ray scanner 108c, and the image data from the X-ray scanner 108c is stored in memory 112c and processed by processor 114c.
- Security checkpoint screening stations 104b and 104c are illustrated as having X-ray scanning devices instead of a CT scanning device to emphasize the fact that there may be implementations in which only one or some of the security checkpoint screening stations have a CT scanner.
- CT scanning devices are typically more expensive than X-ray scanning devices, and so a CT scanner may only be present at some screening stations.
- Some embodiments provided below describe a single GUI that allows for the screening of images originating from both CT scanners and X-ray scanners. Other embodiments described below only pertain to the handling of 3D image data, e.g. CT data, and so in those embodiments it may optionally be that there are no X-ray scanners and only one or more CT scanners.
- the security checkpoint screening stations 104a-c are illustrated as each having the same physical layout, but this is not necessary. Also, the specific physical layout of an illustrated security checkpoint screening station is only an example, and variations are possible. For example, the use of two conveyor sections in the post-scan area is optional, and if the two conveyor sections exist, they do not have to be alongside one another.
- As further examples, the divider in the post-scan area (e.g. divider 118a) may be omitted, the local operator and/or local GUI (e.g. the GUI displayed on display device 116a) may be omitted, and the conveyor may be configured in a different way from what is illustrated.
- while each security checkpoint screening station 104a-c is illustrated as having its own respective processor and memory (e.g. processor 114a and memory 112a at station 104a), alternatively there may be a single centralized processor and/or memory. That is, instead of processors 114a-c and memories 112a-c, there may be a single centralized processor and/or memory that is connected to scanning devices 108a-c, e.g. via a computer network (not shown).
- Security checkpoint screening system 102 supports remote screening, and therefore a remote screening station 204 is also illustrated.
- the term “remote screening station” refers to a screening station that is not local to a particular security checkpoint screening station.
- a remote screening station may be in the same or another room as the security checkpoint screening stations 104a-c, in another (possibly adjacent) section of the airport, or it may be offsite, e.g. not even at the airport.
- a screening station adjacent to a particular lane at a security checkpoint may still be considered a remote screening station if it receives images originating from one or more scanners of other lanes. In such a case, the screening station is remote in relation to those other lanes.
- For example, consider an analysis station at a lane that receives images coming from one or more scanners at the checkpoint or airport other than the scanner at that lane; that analysis station would be considered to be a remote screening station in relation to the other lanes. Only one remote screening station 204 is shown in FIG. 1, but alternatively there may be a plurality of remote screening stations.
- the remote screening station 204 includes a local memory 212 and a processor 214 that processes received image data and ATD results (if any) and implements a user interface, including a GUI, on a display screen. The GUI is presented on the display device 216.
- the remote screening station 204 is configured to receive scanned images from the security checkpoint screening stations 104a-c via a computer network, which may be wired and/or wireless.
- a stippled line 252 is used to illustrate the network connection between the remote screening station 204 and the security checkpoint screening stations 104a-c.
- the processor at each security checkpoint screening station may communicate with the processor at the remote screening station.
- each of processors 114a-c may be coupled to a respective network interface (not illustrated) and may be configured to use their respective network interface to send data to and receive data from processor 214 over the network.
- the processor 214 may also be coupled to a respective network interface (not illustrated) and be configured to use the network interface to send data to and/or receive data from processors 114a-c.
- while processor 214 and memory 212 are illustrated as part of remote screening station 204, in some implementations the processor 214 and/or the memory 212 may be centrally located, with the remote display device 216 connected to the processor 214 over a network.
- the processor 214 and/or memory 212 may also be the processor and/or memory for other remote screening stations (not illustrated). Therefore, when the term “remote screening station” is used herein, it also encompasses an implementation in which the display device of the remote screening station is coupled to a processor and/or memory that may not be located in the vicinity of the display device.
- the processor and/or memory may also be part of other remote screening stations and/or perform other functions.
- When an item is scanned by CT scanner 108a, the CT scanner 108a generates CT data in the form of three-dimensional (3D) data, e.g. reconstructed density and Zeff data. The amount of data generated is more than that generated by an X-ray scanner. This is because the CT scanner 108a provides 3D data to render a 3D image of the scanned item, whereas an X-ray scanner provides only 2D data to render a 2D image of the scanned item. A technical problem therefore arises from the presence of a CT scanner as compared to an X-ray scanner.
- the increased amount of data associated with a 3D CT scan of an item, compared to a 2D X-ray scan, means that it takes longer for a processor to perform processing for rendering the 3D CT scanned item on a GUI of a display device, compared to processing for rendering a 2D X-ray image of the item.
- the processing for rendering includes any processing for preparing the data for rendering. Additionally, it takes longer to transmit the 3D CT scanned data over the network to the remote screening station 204, compared to transmitting a 2D X-ray image of the item.
- Therefore, a 2D image of a scanned item (e.g. a 2D image originating from an X-ray scanner) may be displayed faster on a GUI of a display device and may be transmitted faster to the remote screening station 204 than a 3D image rendered from a CT scan of the item.
- the relative delay associated with transmitting and/or processing for rendering a 3D image of a CT scanned item is undesirable because it may delay the flow of passengers through the security checkpoint screening stations. For example, it is undesirable to have the flow of passengers through security checkpoint screening station 104a be slower than the flow of passengers through security checkpoint screening stations 104b-c.
- some embodiments disclosed below provide particular systems and methods for processing, transmitting, and displaying data from a CT scanner.
- simulation operations are performed on the CT data conveying the 3D image of the item in order to derive alternate image data conveying a 2D image of the item.
- the 2D image may be a simulated X-ray image of the item.
- the simulated X-ray image is generated from the CT 3D image data by projecting the 3D image data to obtain a 2D image.
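- As a concrete illustration of this step, the following sketch derives a simulated X-ray image from a CT density volume by a simple parallel projection combined with a Beer-Lambert style attenuation model. It is a minimal example under assumed conventions (a numpy volume indexed (z, y, x) and an illustrative attenuation scale mu); the patent does not prescribe this particular formula.

```python
import numpy as np

def simulate_xray(density, mu=0.02, axis=0):
    """Derive a 2D simulated X-ray image from a 3D CT density volume by
    parallel projection along one axis followed by a Beer-Lambert style
    attenuation model (mu is an assumed density-to-attenuation scale)."""
    path_integral = density.sum(axis=axis)        # project the volume to 2D
    transmission = np.exp(-mu * path_integral)    # fraction of radiation passing through
    # Security X-ray displays are conventionally bright where little material
    # is present, so transmission maps directly to pixel brightness.
    return (transmission * 255.0).astype(np.uint8)

# e.g. a bottom view and a side view, as in the two-view example described below:
# bottom = simulate_xray(ct_volume, axis=0); side = simulate_xray(ct_volume, axis=1)
```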
- the alternate image data conveying the 2D image of the item is first transmitted to the remote screening station and displayed on the GUI of the remote screening station.
- FIG. 2 is a flowchart of a method performed by security checkpoint screening station 104a and remote screening station 204, according to one embodiment.
- In step 262, an item is scanned by CT scanner 108a.
- In step 264, the data from the CT scanner 108a that is associated with the scanned item is received and stored in memory 112a at the security checkpoint screening station 104a.
- the data includes 3D data used for rendering a 3D image of the scanned item (e.g. the 3D data may comprise reconstructed density and Zeff data), and optionally also includes ATD data.
- the ATD data may have been generated by the CT scanner 108a when scanning the item.
- alternatively, the ATD data is generated by processor 114a or received from another external computing module.
- the processor 114a processes the 3D data used for rendering the 3D image of the scanned item in order to generate at least one corresponding 2D image.
- the at least one corresponding 2D image may be a simulated X-ray image.
- two 2D images may be generated, e.g. a 2D bottom view of the scanned item and a 2D side view of the scanned item.
- a material mask is also generated for each 2D image. Different ways in which to generate the at least one 2D image from the 3D CT image data are explained later. However, the exact method used is implementation specific.
- At least one 2D image is generated that is a projected view of the 3D image, and optionally has the “look and feel” of a 2D X-ray image generated by an X-ray scanner, such as a specific model of an X-ray scanner.
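- FIG. 13 refers to a colouring algorithm; one plausible, simplified way to give the projected image the look and feel of an X-ray display is to tint the grayscale attenuation image with a material mask derived from projected Zeff values. The palette and Zeff thresholds below follow the common security X-ray convention (orange for organic, green for inorganic/mixed, blue for metallic) and are assumptions, not values taken from the patent.

```python
import numpy as np

# Assumed palette and class boundaries; illustrative only.
PALETTE = {0: (255, 165, 0), 1: (0, 200, 0), 2: (0, 90, 255)}
ZEFF_BINS = [10.0, 18.0]   # material class boundaries on projected Zeff

def colour_simulated_xray(intensity, zeff_projection):
    """Tint a grayscale simulated X-ray image (uint8) using a per-pixel
    material mask derived from a projected Zeff image of the same shape."""
    mask = np.digitize(zeff_projection, bins=ZEFF_BINS)    # class 0, 1 or 2
    rgb = np.zeros(intensity.shape + (3,), dtype=np.uint8)
    for cls, colour in PALETTE.items():
        sel = mask == cls
        # Scale the class colour by the transmission intensity at each pixel.
        rgb[sel] = np.outer(intensity[sel] / 255.0, colour).astype(np.uint8)
    return rgb
```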
- In step 268, the at least one 2D image data is then transmitted to the remote screening station 204, possibly along with the ATD data.
- the 3D data used for rendering the 3D image of the scanned item is transmitted to the remote screening station 204.
- Transmitting the 3D data is relatively slow due to the quantity of data, and therefore is shown as beginning at step 270 and as finishing at step 272.
- the transmission of the 3D data used to render the 3D image will typically be slower than transmitting one or a small subset (e.g. two or three) of 2D images.
- step 270 may occur in parallel to step 268. More generally, the 3D data may be sent to the remote screening station in parallel to the at least one 2D image data. In such a scenario, it is expected that the at least one 2D image data is finished being received before the 3D data is finished being received.
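- A minimal sketch of this transmission ordering follows: the small 2D payload(s) are sent first, and the bulky 3D payload is sent in a background thread so the two transfers proceed in parallel. The `send` callable is an assumed stand-in for whatever network mechanism (socket, message queue) a given implementation uses.

```python
import threading

def transmit_scan(send, two_d_images, three_d_data):
    """Sketch of steps 268-272: send the small 2D image payload(s) first so
    they can be displayed almost immediately, then stream the bulky 3D data
    in a background thread so both transfers proceed in parallel."""
    for image in two_d_images:
        send("2d_image", image)               # step 268: small, fast payloads
    worker = threading.Thread(target=send, args=("3d_data", three_d_data))
    worker.start()                            # steps 270-272: slow 3D transfer
    return worker                             # caller may join() to detect completion
```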
- In step 272, the transmission of the 3D data is complete, and in step 274 the 3D data is received and stored in memory 212 of the remote screening station 204.
- the processor 214 processes the 3D data to render or load a 3D image for possible viewing on the GUI of the display device 216.
- Step 276 may take a non-negligible amount of time, e.g. a few seconds.
- In step 280, the at least one 2D image data is received at the remote screening station 204, and it is stored in memory 212 in step 282.
- In step 284, the processor 214 causes the at least one 2D image to be displayed on the GUI of the display device 216.
- it may not be necessary for the remote human screener to view the 3D image on the GUI of display device 216. For example, if there is clearly nothing of concern shown in the 2D image(s) displayed, then the item may be cleared without the human screener viewing the corresponding 3D image. However, in other cases, the human screener may want to display the corresponding 3D image. Therefore, the method of FIG. 2 includes the following additional optional steps (shown in stippled lines).
- the processor 214 receives an input originating from the user interface of the remote screening station 204.
- the input indicates that the 3D image is to be viewed on the GUI of the display device 216.
- the way in which the input is received is implementation specific, e.g. the operator (human screener) at the remote screening station 204 may select a button on a touch screen display, or may provide the selection via keyboard or mouse.
- the GUI may include an input object configured to receive the specific user request from the operator. For example, the operator may submit the specific user request through the input object by selection via a touch screen or using a keyboard, mouse, or other physical device that is part of the user interface.
- the input object may be a button or other object on the GUI that may be selected using a touch screen, keyboard, mouse, or other physical device that is part of the user interface.
- the processor 214 causes the rendered 3D image to be retrieved from memory 212 and viewed on the GUI of the display device 216.
- the user interface at the remote screening station 204 may be configured to prevent the human screener from inputting a command requesting display of the 3D image until the 3D image is available to be displayed (e.g. until completion of step 274 or 276).
- the GUI of the display device 216 may display an element (e.g. button) that is only rendered selectable once the 3D image is available to be displayed.
- the GUI may be configured for selectively enabling and disabling the input object through which the request is made to view the 3D image of the item.
- the GUI may be configured for selectively causing the input object to acquire one of an enabled state and a disabled state based on the 3D image of the item being available for display at the remote screening station.
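- As a sketch of this selectively enabled input object, the following uses Python's standard tkinter toolkit (an assumption; the patent does not name a GUI framework). The "View 3D image" button starts in the disabled state and acquires the enabled state only once the rendered 3D image becomes available; the class and method names are illustrative.

```python
import tkinter as tk

class RemoteScreeningGUI:
    """Sketch of the selectively enabled/disabled input object."""

    def __init__(self, root: tk.Tk):
        # The input object starts in the disabled state: the user request to
        # display the 3D image cannot be received yet.
        self.view_3d_button = tk.Button(root, text="View 3D image",
                                        state=tk.DISABLED,
                                        command=self.display_3d_image)
        self.view_3d_button.pack()
        self.rendered_3d = None

    def on_3d_image_ready(self, rendered_3d):
        # Called once the 3D data has been received and processed for
        # rendering (e.g. after steps 274/276): the input object acquires
        # its enabled state.
        self.rendered_3d = rendered_3d
        self.view_3d_button.config(state=tk.NORMAL)

    def display_3d_image(self):
        # Retrieve the rendered 3D image from memory and show it (placeholder).
        print("displaying", self.rendered_3d)
```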
- the computer functionality of the security checkpoint screening system 102 is improved in the manner described above.
- the delay associated with transmitting and/or processing for rendering the 3D image from the 3D data may be mitigated by first generating at least one 2D image from the 3D data, and then transmitting and displaying the at least one 2D image while the 3D data is being transmitted and rendered.
- FIG. 3 illustrates a variation of FIG. 1 in which the remote screening station 204 serves only two security checkpoint screening stations, each having a CT scanner.
- the illustration of two security checkpoint screening stations is only an example, e.g. there may be only one or more than two security checkpoint screening stations.
- the specific physical layout of each security checkpoint screening station in FIG. 3 is not relevant and may be modified.
- FIG. 4 is a variation of FIG. 2 in which step 276 is not performed until an input is received requesting that the 3D image be displayed.
- in FIG. 4, a user input control item (e.g. an input object such as a button) may be used to provide the input requesting display of the 3D image.
- steps 286, 276’, and 288 of FIG. 4 are illustrated in stippled lines in order to show that they are optional, i.e. only executed if the human screener decides to view the 3D image.
- FIG. 2 and/or 4 may also be adapted for a situation in which the screening station is not remote, but is located at a security checkpoint screening station, e.g. as in FIG. 5, which shows a local screening station 302.
- An additional remote screening station may or may not be present and therefore is not illustrated in FIG. 5.
- FIG. 6 illustrates the method of FIG. 2 adapted for operation with local screening station 302 of FIG. 5.
- an item is scanned by CT scanner 108.
- the data from the CT scanner 108 that is associated with the scanned item is stored in memory 112.
- the data includes 3D data used for rendering a 3D image of the scanned item (e.g. the 3D data may comprise reconstructed density and Zeff data), and optionally also includes ATD data.
- the ATD data may have been generated by the CT scanner 108 when scanning the item.
- the ATD data is generated by processor 114.
- the processor 114 processes the 3D data used for rendering the 3D image of the scanned item in order to generate at least one corresponding 2D image.
- the at least one corresponding 2D image may be a simulated X-ray image.
- two 2D images may be generated, e.g. a 2D bottom view of the scanned item and a 2D side view of the scanned item.
- the at least one 2D image is stored in memory 112.
- the at least one 2D image is displayed on the GUI of the display device 116.
- the processor 114 processes the 3D data to render a 3D image for possible viewing on the GUI of the display device 116.
- the method of FIG. 6 includes the following additional optional steps (shown in stippled lines).
- the processor 114 receives an input originating from the user interface of the local screening station. The input indicates that the 3D image is to be viewed on the GUI of the display device 116.
- the processor 114 causes the rendered 3D image to be retrieved from memory 112 and viewed on the GUI of the display device 116.
- the delay in sending the 3D CT data over the network to a remote screening station is not relevant in FIG. 6 because the display device 116 is local.
- the method of FIG. 6 still provides technical benefits, e.g.: (1) the 2D image(s) obtained from the 3D data may be displayed first while the 3D image is being processed for rendering to be available for display on the GUI of the display device 116; and (2) the 2D image is generated and displayed because it may be more quickly loaded and/or more quickly viewed by a human screener compared to the 3D image, and/or the human screener may be more familiar with/comfortable with viewing 2D images, which may simulate X-ray images from a classical X-ray machine.
- FIG. 7 illustrates a variation of FIG. 6 in which step 372 of FIG. 6 is optional and is performed after step 374.
- the security checkpoint screening system 102 may include both analysis workstations and recheck workstations.
- An example is illustrated in FIG. 8. Only one security checkpoint screening station 104a is illustrated in FIG. 8, but more may be present in an actual implementation.
- Analysis workstations 204a-c and 303 are used to perform primary screening analysis. Analysis workstations 204a-c are remote screening stations, whereas analysis workstation 303 is a local screening station because it is installed near a lane and only used to screen images coming from that lane.
- analysis workstation 303 may instead be a remote screening station if it is not local to a particular security checkpoint screening station, for example if it receives images originating from different lanes/scanning devices.
- analysis workstation 303 may be a remote screening station located in a separate room or separate area adjacent to the lanes.
- Workstations 305 and 307 are recheck workstations, e.g. to display the image of a scanned item plus the results of the screening performed at the analysis stations (e.g. regions of interest (ROIs) created by the human screener at the analysis workstations).
- workstation 305 is a main recheck station and workstation 307 is a secondary recheck station.
- the computing device implementing the main recheck station 305 may also be used to implement other processing functionality of the security checkpoint screening system 102; for example, it may be programmed to gather data from the scanning device and perform processing of the data prior to screening, e.g. it performs generation of the 2D image from 3D data originating from a CT scanner as described elsewhere in the present document. Alternatively, such functionality may be implemented by a separate physical computing device.
- a network switch 309 is used to facilitate network communication between the components depicted in FIG. 8. In FIG. 8, 3D CT data and/or corresponding 2D simulated X-ray images and/or data from an X-ray scanner may possibly be communicated to any one, some, or all of the workstations, depending upon the implementation.
- the 3D data originating from the CT scanner and associated with a scanned item is processed in order to generate at least one corresponding 2D image, e.g. a simulated X-ray image.
- the 2D image is then displayed prior to the rendered 3D image, and displaying the 3D image is optional and at the discretion of the operator (the human screener).
- This approach may be beneficial for several reasons: (1) the screening process may be more efficient by first viewing the 2D image(s), and only requesting the 3D image when necessary; (2) processing for rendering the 3D image takes time (delay), and while this delay is being incurred, the 2D image(s) may be viewed; and (3) human screeners may be more used to screening and viewing 2D images compared to 3D images, because a 2D image is closer to an image from a classical X-ray machine, and so may screen 2D images faster.
- the 3D image generated from the 3D data from the CT scanner is beneficial in that the 3D image may convey information in a way that provides more insight compared to just 2D image(s), e.g. compared to 2D X-ray images from a classical X-ray machine.
- viewing and manipulating the 3D image by the human screener may increase screening time. Not only does the 3D image need to be processed for rendering, which takes time, but once rendered the human screener may also be more likely to spend additional time viewing the 3D image, e.g. rotating it on the screen to view it at all angles. Viewing the 3D image serves an important security function, but for many scanned items it is not necessary.
- the 3D image would primarily only be used when the 2D image(s) shows a possible suspicious item that the human screener wants to look at in more detail, and the human screener determines that the 3D image would assist. For example, just the 2D image(s) should be adequate for a scanned tray having only a sweater in it, whereas the 3D image may be consulted if the scanned tray has unusual objects that pose a possible threat.
- the availability of the 3D image to the human screener, e.g. the duration of time the human screener must wait until the 3D image is made available to be displayed on the GUI, may be controlled to encourage efficient screening.
- an item is scanned by the CT scanner 108a, and data associated with the scanned item is stored in memory 112a.
- the CT data includes 3D data used for rendering a 3D image of the scanned item.
- the processor 114a processes the 3D data in order to generate at least one corresponding 2D image, e.g. a simulated X-ray image.
- the image data conveying the corresponding 2D image is transmitted to the remote screening station 204 and the 2D image is displayed on the display 216 of the remote screening station 204.
- the 3D data is also transmitted to the remote screening station 204, stored in memory 212, and the 3D data is processed (by the processor 214) for rendering a 3D image so that it is available for possible viewing on the display 216.
- the processor 214 waits a further intentional delay before allowing the 3D image to be displayed on the display 216.
- the GUI of the display 216 may include a button that, when selected by the human screener, instructs the processor 214 to display the rendered 3D image on the display 216. The button is disabled when the 3D image is not available for display, i.e. the human screener cannot select the button when the 3D image is not even available for display.
- the processor 214 may still keep the button disabled (not selectable) for an additional period of time, i.e. an intentional delay.
- This intentional delay may encourage the human screener to just use the currently displayed 2D image(s).
- the intentional delay may be configurable, e.g. by the operator (human screener) at the remote screening station and/or by a manager, supervisor, or system administrator.
- the intentional delay may be configured through the remote screening station user interface or another user interface.
- a configuration tool may be used to manage all or many of the configurations of the system, and the user interface that is part of the configuration tool may be used to configure the intentional delay.
- the GUI of the remote screening station may be configured for selectively enabling and disabling an input object (e.g. a button) through which the request is made to view the 3D image of the item.
- the GUI is then configured for selectively causing the input object to acquire one of an enabled state and a disabled state based on the 3D image being available for display and based on any intentional delay.
- the input object may acquire the enabled state upon completion of the added intentional delay.
- FIG. 16 is discussed later.
- the intentional delay added by the processor 214 before allowing the 3D image to be displayed may be adjustable dependent upon different factors. For example, in one embodiment, if the 3D image originates from a general passenger lane, then the added intentional delay is zero seconds, e.g. the 3D image is available to be selected and displayed as soon as the 3D image is ready for rendering; whereas if the 3D image originates from a low security risk lane (e.g. a security lane that is only for employees of the airport and airlines, or a security lane that is for trusted travellers, such as NEXUS card holders), then the added intentional delay is five seconds, e.g. the 3D image is available to be selected and displayed five seconds after the 3D image is ready for rendering.
- the low security risk lane should have fewer potential security threats compared to a general passenger lane, and so unnecessary use of the 3D image is discouraged for scanned items originating from the low security risk lane by making the human screener wait an additional five seconds to view the 3D image.
- as another example, if the 3D image is a scan of a relatively complex item (e.g. a suitcase), then the added intentional delay is zero seconds, whereas if the 3D image is a scan of a relatively simple item (e.g. a coat), then the added intentional delay is five seconds.
- the added intentional delay value of five seconds is just an example.
- the actual added intentional delay value would be implementation specific.
- the 3D image is first processed for rendering so that it is ready for display, and then the added intentional delay is applied.
- the added intentional delay is first added, and upon completion of the added intentional delay the human screener is provided with the option, at the user interface, of being able to request viewing of the 3D image. Then, only if the human screener requests viewing of the 3D image is the 3D image processed for rendering and rendered for display.
- the human screener may be immediately provided with the option, at the user interface, of being able to request viewing of the 3D image. If the human screener requests to view the 3D image, then the added intentional delay is incurred before the 3D image is actually presented on the display.
- the added intentional delay described above is not implemented for X-ray images originating from X-ray scanners (if there are any X-ray scanners in the security checkpoint system) because X-ray scanners do not generate 3D image data.
- the added intentional delay may be different for images originating from different CT scanners. For example, if a first CT scanner is in a general passenger lane and a second CT scanner is in a trusted passenger lane, then no added delay may be incurred before allowing for display of 3D images originating from the first CT scanner, whereas an added delay of five seconds may be incurred before allowing for display of 3D images originating from the second CT scanner.
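- by way of a non-limiting illustration, the selection of the intentional delay based on the originating lane or scanner might be sketched in Python as follows; the lane identifiers, delay values, and function name are hypothetical assumptions, not part of any specific implementation:

    # Minimal sketch: choose the intentional delay (in seconds) that is applied
    # before the 3D image may be displayed, based on the lane the scan came from.
    INTENTIONAL_DELAY_BY_LANE = {
        "general_passenger": 0.0,   # no added delay for a general passenger lane
        "trusted_traveller": 5.0,   # e.g. a lane for NEXUS card holders
        "employee": 5.0,            # e.g. a lane for airport/airline employees
    }

    def intentional_delay_seconds(lane_id: str) -> float:
        # Fall back to zero added delay if the lane is not configured.
        return INTENTIONAL_DELAY_BY_LANE.get(lane_id, 0.0)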
- FIG. 9 is a method performed at remote screening station 204, according to one embodiment.
- 2D image data is received at the remote screening station 204 and stored in memory 212.
- the 2D image data was generated from 3D data originating from a CT scanner.
- the 2D image data may be a simulated X-ray image.
- the 2D image data is received over a computer network from security checkpoint station 104a.
- at step 424, the processor 214 causes the 2D image to be displayed on the GUI of the display device 216.
- at step 426, the corresponding 3D data is received at the remote screening station 204 and stored in memory 212.
- at step 428, the processor 214 processes the 3D data to prepare for rendering a 3D image that can be displayed on the GUI of the display device 216. The processing may include decoding.
- at step 430, the processor 214 waits an added intentional delay of x seconds before modifying the user interface of the remote screening station to allow the human screener to request display of the 3D image.
- at step 432, once the added delay of x seconds has expired, the processor 214 modifies the user interface to allow the human screener to request display of the 3D image.
- at step 434, the processor 214 receives an input originating from the user interface of the remote screening station 204.
- the input indicates that the 3D image is to be viewed on the GUI of the display device 216.
- at step 436, the processor 214 causes a rendered 3D image to be displayed and viewed on the GUI of the display device 216.
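- by way of a non-limiting illustration, steps 420 to 436 might be sketched in Python as follows, assuming a hypothetical station object exposing the indicated helper methods (all names are assumptions for illustration only):

    import time

    def screen_item(station, scan):
        # Steps 420-424: receive the 2D image data first and display it.
        image_2d = station.receive_2d_image(scan)
        station.display(image_2d)

        # Steps 426-428: receive the 3D data and prepare it for rendering
        # (the preparation may include decoding).
        data_3d = station.receive_3d_data(scan)
        renderable_3d = station.prepare_render(data_3d)

        # Steps 430-432: wait the added intentional delay of x seconds, then
        # modify the user interface to allow a request for the 3D image.
        time.sleep(station.intentional_delay_seconds)
        station.enable_3d_button()

        # Steps 434-436: display the rendered 3D image only if requested.
        if station.wait_for_3d_request():
            station.display(renderable_3d)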
- the implementation of an intentional delay is not specific to remote screening applications. The intentional delay may be implemented at a local screening station, e.g. in the embodiment described in relation to FIG. 5, if desired.
- the 3D data originating from the CT scanner is processed in order to generate at least one corresponding 2D image, e.g. a simulated X-ray image.
- One way to generate a 2D image from the 3D data is to obtain the portion of the 3D data corresponding to a view of the item from a particular perspective (e.g. a top view of the item), and then “flattening” it, i.e. creating a flattened view by having all projection paths parallel to one another.
- the projection is typically performed along one of the main axes of the Cartesian coordinates system (x, y or z).
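- by way of a non-limiting illustration, a flattened view with parallel projection paths might be sketched in Python (using numpy) as follows; the axis convention is an assumption for illustration only:

    import numpy as np

    def flattened_view(volume: np.ndarray, axis: int = 0) -> np.ndarray:
        # Summing the voxel values along one Cartesian axis approximates
        # integrating the density along parallel projection paths, which
        # yields the flattened 2D view described above.
        return volume.sum(axis=axis)

    # Example: for a cube indexed as (z, y, x), summing along the y axis
    # (axis=1) would give a top view of the item.
    # top_view = flattened_view(density_cube, axis=1)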
- FIG. 10 illustrates an example of generating a flattened view from 3D data.
- the scanned item 442 is represented by 3D data originating from the CT scanner.
- a projection process 444 is applied to a 2D slice (e.g. using parallel projection paths), and the projection of one slice provides the values for one column of the flattened view.
- the image data conveying the 3D image of the item may be processed to derive alternate image data conveying a 2D image of the item, where the 2D image is a simulated X-ray image.
- the simulated X-ray image may be derived by applying simulation operations on the 3D CT data.
- the simulated X-ray image may be a projected view of the 3D image.
- Such a 2D image will be referred to as a “projected view”, or alternatively as a “2D projected view”, “projected image”, or “2D projected image”.
- the simulation operations may involve generating the projection using non-parallel projection paths, in other words projection paths that extend along convergent (or divergent) axes, e.g. as described in relation to FIG. 11 below.
- FIG. 11 illustrates an example of generating a projected view from 3D data originating from a CT scanner, in order to result in a simulated X-ray image.
- the simulated X-ray image may resemble a classical X-ray image.
- the scanned item 442 is represented by 3D data originating from the CT scanner.
- An X-ray source 450 is simulated, and an array of X-ray detectors 452 is also simulated.
- the X-ray detectors 452 may also be called X-ray sensors.
- the projection paths 446 extend from the simulated X-ray source 450 to each of the simulated X-ray sensors 452.
- the position of the simulated source 450 and of the simulated sensors 452 relative to the scene is substantially the same as that of a real X-ray source and real X-ray detectors in a specific classical X-ray machine, to thereby simulate a classical X-ray image.
- the projection paths 446 are not parallel to one another but rather extend along axes that diverge from one another as they move away from the simulated X-ray source 450 towards the simulated sensors 452 (or alternatively axes that converge at the simulated X-ray source 450).
- a projection process 444 is applied to a 2D slice using the non-parallel projection paths 446.
- the 3D data is projected into 2D.
- the projection process is performed for each of the slices of the object, i.e. the projection of one slice, as shown in FIG. 11, provides the values for one column of the 2D projected view.
- the example projection algorithm comprises three steps: generate the 2D image corresponding to the X-ray attenuation image; generate the 2D image corresponding to the projected Zeff image; and use standard coloring algorithms to generate the color image.
- Generation of the X-ray attenuation image:
- the X-ray attenuation image simulates the X-ray intensity received at the simulated detector locations from a simulated mono energetic source for the analyzed scene.
- u is the linear X-ray attenuation coefficient, i.e. one of the two quantities that are reconstructed by the CT process.
- one output of the CT scan (i.e. one part of the 3D data) is the density data: a cube of voxels providing the value of u (or a value proportional to it) for each location of the reconstructed volume.
- Each pixel of the X-ray attenuation image is the result of Equation #2 for one specific projection path.
- the pixel of the k-th column and s-th line of the X-ray attenuation image corresponds to the attenuation between the simulated source and the s-th simulated sensor for the k-th slice of the reconstructed data.
- FIG. 12 illustrates the k-th slice of the reconstructed density data, according to one embodiment.
- the slice is a grid of I × J square or rectangular voxels 454.
- the computed X-ray attenuation I_ks, corresponding to the value of the X-ray attenuation image at the coordinates (k, s), is:

    I_ks = I_0 · exp( − Σ_{i,j} u_ij · t_ij )   (Equation #3)

where u_ij is the value of u for the voxel having the coordinates (i, j) in the k-th slice of the density data, t_ij is the length of the intersection of the projection path with that voxel, and I_0 is the intensity of the simulated mono energetic source.
- the X-ray attenuation image is then computed by applying Equation #3 for every sensor and for each slice of the density data.
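- by way of a non-limiting illustration, Equation #3 might be sketched in Python (using numpy) as follows, assuming the path/voxel intersection lengths t_ij have already been precomputed from the simulated geometry:

    import numpy as np

    def attenuation_image(density, lengths, i0=1.0):
        # density  -- 3D array of u values, indexed as (slice k, i, j)
        # lengths  -- lengths[s] is a 2D array of t_ij values for sensor s
        # i0       -- intensity of the simulated mono energetic source
        n_slices = density.shape[0]
        n_sensors = len(lengths)
        image = np.empty((n_slices, n_sensors))
        for k in range(n_slices):
            for s in range(n_sensors):
                # Beer-Lambert attenuation: I_ks = I_0 * exp(-sum u_ij * t_ij)
                image[k, s] = i0 * np.exp(-(density[k] * lengths[s]).sum())
        return image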
- Z effective (Zeff) data: Another quantity typically generated by a CT scanner is Z effective (Zeff) data. That is, the 3D data originating from a CT scanner in relation to an item scanned by the CT scanner typically includes Zeff data. For example, the CT scanner may output a cube of voxels providing the Zeff value for each location of the reconstructed volume.
- the resolution of the Zeff reconstruction may differ from the resolution of the density data, e.g. the voxel sizes may differ.
- some CT machines may not reconstruct the Zeff data for the whole volume or may not reconstruct the Zeff data at all. In that case, in some embodiments, the Zeff data is approximated from the density data alone, e.g. a Zeff value is associated with each possible density value.
- the projected Zeff image is a 2D image representing the Zeff of the scanned item along the projection path.
- the projection paths are the same as for the density projection.
- the formula used for the effective atomic number of a compound is:

    Zeff = ( Σ_i a_i · z_i^p )^(1/p)   (Equation #4)

where ρ is the density of the compound, a_i is the fractional number of electrons per gram contributed by the i-th atomic element of the chemical formula of the compound (i.e. the relative weight of that element's electrons), z_i is the atomic number of the i-th element, and p is equal to 2.78.
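- by way of a non-limiting illustration, Equation #4 might be sketched in Python as follows; the example values are assumptions for illustration only:

    def compound_zeff(fractions, atomic_numbers, p=2.78):
        # Power mean of the atomic numbers z_i weighted by the fractional
        # electron contributions a_i, with exponent p = 2.78.
        num = sum(a * z ** p for a, z in zip(fractions, atomic_numbers))
        den = sum(fractions)
        return (num / den) ** (1.0 / p)

    # Example: water (H2O), where hydrogen contributes 2 of the 10 electrons
    # per molecule and oxygen the other 8; the result is roughly 7.4.
    # compound_zeff([0.2, 0.8], [1, 8])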
- the projected Zeff, called Z_ks, for the projection path going from the simulated source to the s-th simulated sensor for the k-th slice of the Zeff data, corresponding to the value of the projected Zeff image at the coordinates (k, s), can be approximated as:

    Z_ks ≈ ( Σ_{i,j} u_ij · t_ij · z_ij^p / Σ_{i,j} u_ij · t_ij )^(1/p)   (Equation #5)

where z_ij is the value of Zeff for the voxel having the coordinates (i, j) in the k-th slice of the Zeff data, and u_ij and t_ij are as defined for Equation #3.
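- by way of a non-limiting illustration, one pixel of the projected Zeff image per Equation #5 might be sketched in Python (using numpy) as follows, under the same assumptions as the attenuation sketch above:

    import numpy as np

    P = 2.78  # exponent used for effective atomic number computations

    def projected_zeff_pixel(density_slice, zeff_slice, lengths_s):
        # Density-weighted power mean of the voxel Zeff values along one
        # projection path: the weights are u_ij * t_ij, per Equation #5.
        w = density_slice * lengths_s
        return (np.sum(w * zeff_slice ** P) / np.sum(w)) ** (1.0 / P)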
- the standard output of a classical X-ray machine is a pair of X-ray attenuation images (high energy and low energy) from which a projected Zeff image can be computed. Coloring algorithms are applied to these sets of images to create the colored image.
- FIG. 13 illustrates one example of a flowchart for a colouring algorithm. Different variations are possible.
- a step that generally differs from typical algorithms used with classical X-ray images is the step of interpolation.
- the interpolation step operates by matching the aspect ratio of the colored image with that of the images generated by the classical X-ray system being simulated.
- this is needed because the resolution along the conveyor belt direction of a CT scanner typically differs from that of a classical X-ray machine.
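- by way of a non-limiting illustration, the interpolation step might be sketched in Python as follows, assuming scipy is available and that a 2D image only needs resampling along the conveyor belt (column) direction:

    import numpy as np
    from scipy.ndimage import zoom

    def match_aspect_ratio(image: np.ndarray, target_width: int) -> np.ndarray:
        # Resample the columns of a 2D image so its aspect ratio matches
        # that of the simulated classical X-ray system.
        factor = target_width / image.shape[1]
        return zoom(image, (1.0, factor), order=1)  # order=1: linear interpolation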
- the 3D density and Zeff data may be processed using simulation operations to produce two sets of 2D simulated images, e.g. to produce two sets of projected density and projected Zeff images.
- the projections may emulate the views generated by a classical X-ray scanner.
- the projected images may be processed to create sets of colour and material mask images.
- the colour image may be the one displayed on the GUI in“normal” or“default” mode, and the material mask may be used by some image enhancement tools.
- the colour image and material mask may be stored in memory to be ready to be sent to other viewing stations, e.g. to one or more remote screening stations.
- X-ray simulation is also discussed in International PCT application PCT/CA2009/000811, which is published as WO2010091493, and which is incorporated herein by reference.
- a set of simulated 2D images comprising multiple simulated 2D images may be derived from a same set of 3D image data, wherein respective simulated 2D images in the set may be associated with different simulated X-ray sources and/or different positions (angles and/or distances) for the simulated X-ray sources and detectors.
- the set of simulated 2D images may allow simulating the behavior of a multi-view X-ray system in which there are multiple X-ray sources and detectors positioned at different distances/angles from the item being inspected.
- the amount of time required to screen an item may be reduced if the visual inspection is performed first based on the display of a 2D image without the need for rendering and viewing the 3D image of the item.
- 3D region of interest (ROI) projection into 2D may also be performed.
- a 3D ROI (or 3D rectangular bounding box) defines a 3D subspace in a 3D scene. In some embodiments, it may be described by eight corners having a specific set of 3D coordinates (x, y, z). Once projected in the 2D image, the 3D ROI will become a 2D ROI (a rectangle) defining a sub-region in the 2D image.
- each column corresponds to a specific slice (or z value) in the CT data, and each line corresponds to a simulated sensor.
- the projection process of a 3D ROI may therefore comprise computing the projection coordinates (x_p, y_p) in the 2D image of each of the eight corners of the 3D ROI, e.g. as follows:
- the column (y_p) coordinate is directly given by the slice number, which is the z coordinate in the CT data.
- Finding the line (x_p) coordinate consists of finding the simulated sensor for which the projection path is the closest to the corner position. Basic mathematics is used to compute the distance between a point (the corner position) and a line (the projection path).
- FIG. 14 illustrates determining the projection coordinates of corners of the 3D ROI, according to one embodiment.
- Projection paths 472 and 474 are the paths that are closest to the corner positions of the 3D ROI 476 at the illustrated slice. These dictate the projection coordinates used to generate the 2D ROI.
- the width of the rectangle is equal to the difference between the maximal x value among all the projected corners and x_p.
- the height of the rectangle is equal to the difference between the maximal y value among all the projected corners and y_p.
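- by way of a non-limiting illustration, the projection of a 3D ROI into a 2D ROI might be sketched in Python as follows; closest_sensor and slice_number are hypothetical helpers standing in for the geometric computations described above:

    def project_roi(corners_3d, closest_sensor, slice_number):
        # For each of the eight 3D corners: the line coordinate (x) is the
        # index of the simulated sensor whose projection path is closest,
        # and the column coordinate (y) is the slice number (z value).
        pts = [(closest_sensor(c), slice_number(c)) for c in corners_3d]
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        x_p, y_p = min(xs), min(ys)   # one corner of the 2D rectangle
        width = max(xs) - x_p
        height = max(ys) - y_p
        return x_p, y_p, width, height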
- the ATD results, when present, may also be projected in the manner described above to generate one or more 2D ROIs in one or more of the 2D generated image(s).
- FIG. 15 illustrates an example flow chart for the generation of at least one 2D simulated X-ray image from 3D data originating from a CT scanner for a scanned item.
- the operations of FIG. 15 may be performed by the processor at the security checkpoint screening station (e.g. by processor 114a).
- operations are performed relating to receiving the 3D data originating from the CT scanner for the scanned item.
- the operations include: receiving reconstructed density and Zeff data (two sets of 3D data); storing the 3D data in memory, e.g. in cache; and sending the 3D data to the projection algorithms. If the 3D data needs to be sent to any viewing stations, it is sent from the memory, e.g. from the cache.
- at box 488, operations are performed in relation to receiving ATD results associated with the scanned item.
- Box 488 is optional, e.g. if ATD is not performed then box 488 is omitted.
- the operations in box 488 include: processing and storing the ATD data in memory, e.g. in a database, so that it is ready to be sent to other viewing stations as needed; projected ROIs for 2D views are also computed from any received 3D ROIs and stored in the database.
- the ATD results received as per box 488 are received asynchronously from the 3D data.
- at box 490, operations are performed in relation to generating the 2D projected view(s) from the 3D data.
- the operations in box 490 include: processing the 3D density and Zeff data to produce at least one set of projected density and projected Zeff images, where the projections emulate the views generated by a classical X-ray scanner; processing the projected images to create at least one set of colour and material mask images, where the colour image is displayed on the GUI in normal mode, and the material mask is used by some image enhancement tools; and storing the colour image and material mask in memory, e.g. in a database, so that they are ready to send to any viewing stations.
- the density and Zeff 3D data may be preprocessed (e.g. filtered, thresholded, corrected) to try to improve the image quality.
- a 2D projected view is created from the 3D data.
- the 2D projected view may be created by applying transformations (e.g. angles of the projection path, colouring, type of transparency) that make the projected view resemble a 2D X-ray image, rather than a flattened-out 3D image.
- (A) Receive two 3D datasets (cubes of data) representing the density and the Zeff of the scanned item. These cubes provide the information about the physical nature of the scanned item in each of the 3D points.
- (B) Use the physics of X-rays (actual and approximation equations) to simulate the images that would be generated if the item were scanned in a classical X-ray scanner. For example, knowing the geometry of the classical X-ray scanner (“the real machine”), and referring back to FIG. 11: i. Conceptually place a source 450 at the same relative position from the data cube 442 as in the real machine. ii. Project the data cube along the projection paths; each plane of the cube leads to a column of the simulated image. iii. Horizontal interpolation can then be used to have the same horizontal resolution as the simulated machine. iv. These projected images use the same algorithms (or very similar algorithms) as the ones used with a classical X-ray machine to generate the color images (and create the material mask) that are displayed on the GUI.
- the result is the projected image. It emulates the image one would have from an actual X-ray machine.
- the process may be repeated to create two projected views (e.g. bottom view and side view).
- (A) 3D bounding boxes are deduced from the 3D ROIs, as follows: a. 3D ROIs take the form of masks: the processor receives the list of voxel coordinates (one voxel is one point in the cube, i.e. a “3D pixel”) defining the object of interest. b. Bounding boxes can be found by finding the minimal and maximal values for each of the x, y and z coordinates. c. 3D bounding boxes can then be defined by eight sets of (x, y, z) coordinates (one set for each of the 3D bounding box corners). Note: edges of the bounding box are parallel to the x, y and z axes. d. Note: in one implementation, the ROIs may be identified by ATD algorithms or identified by a user. In either case, the bounding boxes may be deduced from the voxel coordinates.
- (B) Each of the eight corners of the 3D bounding box is projected into the 2D image, e.g. as described above in relation to FIG. 14. (C) A new 2D bounding box is deduced by computing the minimal and maximal x and y values of each of the eight projected corners.
- the 2D bounding box can be defined by four pairs of (x, y) coordinates. Edges of the bounding box are parallel to the x and y axes in the 2D image.
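- by way of a non-limiting illustration, deducing a 3D bounding box from a voxel mask might be sketched in Python (using numpy) as follows:

    import numpy as np

    def bounding_box_3d(voxel_coords: np.ndarray):
        # voxel_coords: an (N, 3) array of (x, y, z) coordinates defining
        # the object of interest.
        lo = voxel_coords.min(axis=0)   # minimal x, y and z values
        hi = voxel_coords.max(axis=0)   # maximal x, y and z values
        # The eight corners are every combination of min/max per axis;
        # edges of the box are parallel to the x, y and z axes.
        return [(x, y, z) for x in (lo[0], hi[0])
                          for y in (lo[1], hi[1])
                          for z in (lo[2], hi[2])]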
- the remote screening station 204 is configured to receive both 2D data from X-ray scanners 108b and 108c, and 3D data from CT scanner 108a.
- the 3D data may or may not be processed in the manner described earlier (e.g. in relation to FIG. 2).
- the 3D data may be sent after the corresponding 2D image(s) data is generated and sent to the remote screening station 204 (as in FIG. 2), or the 3D data may be sent first without necessarily generating and sending corresponding 2D image(s) data, or the corresponding 2D image(s) and the 3D image may be sent in parallel.
- a human screener at the remote screening station 204 is conventionally accustomed to screening 2D X-ray images from an X-ray scanning device, and is therefore accustomed to using an associated GUI and user interface tools developed for screening such 2D images.
- the computer functionality is improved to allow for the GUI and user interface tools conventionally used for screening 2D classical X-ray images to be enhanced to also accommodate the additional screening of items scanned by a CT scanner.
- a common GUI is provided for screening a sequence of items, where some of the items were scanned by an X- ray scanner and are therefore associated with 2D data, and where others of the items were scanned by a CT scanner and are therefore associated with 3D data.
- a single user interface including a single GUI on display device 216 is used for the screening of items scanned by either an X-ray or CT scanner.
- the same GUI, tools, and general behaviour may be implemented whether the scanned item to be screened originates from a 2D X-ray scanner or a 3D CT scanner.
- FIG. 16 illustrates one example of a GUI 502, e.g. which may be generated by processor 214 and displayed on display device 216 of the remote screening station 204.
- the GUI 502 illustrates 2D images (side and bottom view) of an item that was scanned by an X-ray scanner.
- Buttons 506 are disabled or “greyed out” because buttons 506 are specific to items scanned by a CT scanner where display of a 3D image is available.
- Buttons 506 comprise the following buttons, which will be explained later: “3D”, “bottom view preset”, “side view preset”, and “itemize”. Other buttons are not disabled and allow for user selection of enhancement-related tools, such as (but not limited to):
- “organic”, which shows organic material only in the image, e.g. organic material is displayed in orange, and the rest is grayed out in the image;
- “inorganic”, which shows inorganic material only in the image, e.g. inorganic material is displayed in green, and the rest is grayed out in the image;
- The “show threats” button (sometimes instead called “hide threats”), which is illustrated on the GUI 502, is a button that allows for objects of interest (e.g. laptops, bottles, metal bars of the types used in carry-on luggage) to be removed from the displayed images to reveal content previously obstructed by the objects of interest. Removal of objects of interest is described in more detail later, e.g. in relation to FIG. 41, which shows an example of removal of an object of interest. It is to be appreciated that while the button is labelled “show threats” in the GUI 502, the objects of interest may not inherently be threats, but may be typical objects of interest, such as, without being limited to, electronic devices such as laptops, tablets, phones, cameras and the like.
- buttons may be arranged differently on the display.
- The “show bag” button may be used to enable/disable the hide bag functionality.
- An example of the hide bag functionality is scene removal, i.e. removing the entire scene except for the detected object(s) of interest. Scene removal is described in more detail later, e.g. in relation to FIG. 42.
- the “camera” button may be enabled most or all of the time to allow for a camera image to be displayed.
- FIG. 17 illustrates the GUI 502 displaying an item that has been scanned by a CT scanner. 3D data is available to display a 3D image of the item, and so the “3D” button 522 becomes enabled, i.e. available for selection. As discussed earlier (e.g. in relation to FIG. 9), in some embodiments an additional intentional delay may be enforced by the processor after the 3D image is available to display, but before the “3D” button 522 becomes enabled for selection.
- the GUI 502 is currently only displaying 2D projected views generated from the 3D data.
- the 2D projected views were generated in the manner explained earlier. Specifically, a 2D bottom view 532 and 2D side view 534 are illustrated in FIG. 17.
- These 2D views 532 and 534 are simulated X-ray images in the form of projected views emulating views generated from an X-ray scanner. Example ways to generate such 2D views are described earlier.
- only the 2D simulated X-ray images are displayed, not the 3D view.
- the human screener may want to display the corresponding 3D image. If that is the case, the human screener may select the “3D” button 522, which causes the 3D image to be displayed, as shown in FIG. 18.
- FIG. 18 shows the 2D side view 534 replaced with the 3D image 538. If the display device 216 had enough display screen space (e.g. multiple or larger screens), the 3D image 538 could instead be displayed in addition to the 2D images shown in FIG. 17. Because the 3D image 538 is now displayed, the following three buttons 540, which were previously disabled (“greyed-out”), are now enabled: i. “bottom view preset”, which when selected resets the orientation of the 3D image 538 to bottom view (i.e. the item is viewed in 3D from below); ii. “side view preset”, which when selected resets the orientation of the 3D image 538 to side view (i.e. the item is viewed in 3D from the side); iii. “itemize”, which when selected causes the 3D image of the item to look like it has been segmented.
- FIG. 19 shows another example for the case in which“bottom view preset” is selected (item viewed from below).
- FIG. 21 shows another example for the case in which “side view preset” is selected (item viewed from side).
- FIGs. 22 to 35 each illustrate an example of an image processing operation performed on images derived from penetrating radiation.
- the images are greyscale digital photograph images. They are not conducive to replacement by black and white line drawings, and it would detract from clarity to replace them with black and white line drawings. They are simply to illustrate examples of some of the image enhancement operations that may be possible in some embodiments. A person skilled in the art would find the images clear in view of the description and context.
- FIG. 22 illustrates an example of the “invert” operation mentioned above.
- FIG. 23 illustrates an example of the “super clear” operation mentioned above.
- FIG. 24 illustrates an example of the “high penetration” operation mentioned above.
- FIG. 25 illustrates an example of the “organic” operation mentioned above.
- FIG. 26 illustrates an example of the “metallic” operation mentioned above.
- FIG. 27 illustrates an example of the “inorganic” operation mentioned above.
- FIG. 28 illustrates an example of the “grey scale” operation mentioned above.
- FIG. 29 illustrates an example of the “dynamic range” operation mentioned above, for a low value.
- FIG. 30 illustrates an example of the “dynamic range” operation mentioned above, for a high value.
- FIG. 31 illustrates an example of the “brightness” operation mentioned above, for a low value.
- FIG. 32 illustrates an example of the “brightness” operation mentioned above, for a high value.
- FIG. 33 illustrates an example of the “contrast” operation mentioned above, for a low value.
- FIG. 34 illustrates an example of the “contrast” operation mentioned above, for a high value. Note that for contrast and brightness, a low value means that the contrast/brightness is lower than in the normal image (and the contrary for a high value). For dynamic range, a low value means that the range of stretched values covers the low values (below the average), and the contrary for a high value.
- FIG. 35 illustrates an example of the “sharpening” operation mentioned above.
- the same rulers may be used in the 2D and 3D views (e.g. on the sides of the screens). The origin may be the bottom left corner.
- the rulers may be configurable in inches or cm.
- the center of rotation may be based on mouse position and depth position of the items.
- when a ROI is identified in 3D it is automatically projected on the 2D view as well (e.g. FIG. 36 described below).
- bounding boxes are created around the whole scene in 2D and 3D (e.g. FIG. 38 described below).
- FIG. 36 illustrates an example of a ROI 602 created in the 3D image being projected onto the 2D image.
- the ROI 602 in the 3D image may be created by the screener, e.g. by using user operable controls provided through the GUI to allow a user to select from a displayed 3D image an object of interest.
- FIG. 37 illustrates an example of a ROI 604 identified by a user in the 2D image (e.g. by placing a box around the area of interest), but the ROI is not automatically identified by the processor in the 3D image.
- FIG. 38 illustrates that a ROI 606 placed around the whole item will show in both the 2D view and the 3D view.
- An example flowchart for processing images of items scanned by an X-ray scanner (“2D scenes”) is shown in FIG. 39.
- the 2D scenes (e.g. colour images and detection results) are received at the remote screening station.
- the received 2D images are stored in memory at the remote screening station.
- the images may be received from memory at a security checkpoint screening station.
- all generic/conventional image enhancement tools are available on the GUI; the “3D view” button and buttons associated with 3D specific tools are disabled, e.g. not visible or grayed out; different screen setups on the GUI are possible, e.g. if the remote screening station only has one screen then the bottom view may be displayed, if the remote screening station has two screens then the bottom and side views may be displayed, and if the remote screening station has three screens then the bottom view, side view, and camera image may be displayed.
- scene manipulation may be performed by the human screener using the user interface.
- zoom and pan are performed independently on each displayed view; any generic/conventional image enhancement tools are applied equivalently to all displayed views; threats/suspicious items are identified with bounding boxes; when the user adds a bounding box in one of the views, no bounding boxes are created in the other views; when the user deletes a bounding box that is linked to bounding boxes in the other views, all linked boxes are deleted.
- threat results (e.g. threat bounding boxes, threat type) that are entered by the user at the remote screening station are transmitted back to the memory at the security checkpoint screening station.
- An example flowchart for processing images of items scanned by a CT scanner (“3D scenes”) is shown in FIG. 40.
- the corresponding 2D scenes (e.g. colour images and material masks) are received at the remote screening station.
- the received 2D images are stored in memory at the remote screening station.
- the images may be received from memory at a security checkpoint screening station.
- buttons associated with 3D specific tools are disabled (greyed out) because the 3D data is not yet received at the remote screening station; different screen setups on the GUI are possible, e.g. if the remote screening station only has one screen then the bottom view may be displayed, if the remote screening station has two screens then the bottom and side views may be displayed, and if the remote screening station has three screens then the bottom view, side view, and camera image may be displayed. If not all views are displayed simultaneously (e.g. the remote screening station only has one screen), then “view” button(s) on the GUI may be selected to display a specific view or to cycle through the display of the different views.
- the 3D data is received at the remote screening station.
- the 3D scene data may be received from the memory at the security screening checkpoint.
- the “3D view” button becomes available (no longer greyed out), so that it may be selected to begin rendering of the 3D image; however, the buttons associated with 3D specific tools are still disabled (e.g. grayed out) because the rendered 3D image is not displayed; the 3D data is stored in memory at the remote station.
- the “3D view” button is selected by the human screener at the user interface.
- the 3D data is loaded in the rendering engine and is rendered and displayed; and the buttons associated with 3D specific tools become available to be selected.
- at box 672, scene manipulation occurs.
- zoom and pan are performed independently on each displayed view; any generic/conventional image enhancement tools will be applied equivalently in all displayed views (2D and/or 3D); threats/suspicious items are identified with bounding boxes; bounding boxes of automatically detected threats are visible in both 2D and 3D; when the user adds a bounding box in 3D, it creates automatically projected bounding boxes in 2D; when the user adds a bounding box in one of the 2D views, no bounding boxes are created in the other views (2D and 3D); when the user deletes a bounding box that is linked to bounding boxes in the other views, all linked boxes are deleted; and when the user changes a threat type of a bounding box that is linked to bounding boxes in the other views, all linked boxes are changed.
- threat results (e.g. threat bounding boxes, threat type) that are entered by the user at the remote screening station are transmitted back to the memory at the security checkpoint screening station.
- a “bag deconstruction” operation may be performed.
- FIG. 41 illustrates an example of object removal.
- Object removal from a 3D image is shown at 712.
- Detected object 714 is removed from the 3D image scene so that the human screener may view the rest of the scene without having parts hidden by the object 714.
- object 714 may have been identified by ATD, which is why object 714 is surrounded by a bounding box 716 in the 3D image.
- the ATD operations may have been performed by the CT scanner or by the processor at the local or remote screening station, or performed by another external module.
- object 714 does not have to be an object identified by ATD, e.g. the object 714 may be an object of interest, such as a laptop, identified by the human screener.
- the human screener may use the user interface to insert the cube 716 around the object 714.
- Object removal from a 2D image is shown at 718.
- Object 720 is removed from the 2D image so that the human screener may view the rest of the scene without having parts hidden by the object 720.
- FIG. 42 illustrates an example of scene removal.
- Scene removal from a 3D image is shown at 722.
- Object 714 is a detected object of interest and so the surrounding 3D image scene is removed.
- Scene removal from a 2D image is shown at 724.
- Object 720 is a detected object of interest and so the surrounding 2D image scene is removed.
- when a bag deconstruction operation (such as object removal or scene removal) is performed on one view of the scanned item, the same operation is automatically performed on one or more other views of the scanned item. For example, if the GUI displays both a 3D image of the item and a 2D projected view of the item, such as in FIG. 18, and if an object of interest is removed from the 3D image, then the object of interest will automatically be removed from the 2D projected view. Methods for implementing this operation are described later.
- in some embodiments, when an element is removed from the image, it is not displayed on another part of the screen, which has the benefit of saving screen real estate.
- if the human screener uses the user interface to request that object 714 be removed from the scene, e.g. by selecting object 714 and selecting an “erase” button on the GUI, then the object 714 is removed from the displayed image and is not displayed anywhere else on the screen.
- similarly, if the human screener uses the user interface to request that the scene around object 714 be removed, then the scene is removed from the displayed image.
- the object 714 is not extracted from the scene and displayed elsewhere. In this way, the bag deconstruction functionality does not affect the screen real estate; it only removes the selected elements from the currently displayed image. There is no dedicated part of the screen for the removed objects, and there is no exploded view generated.
- a bag deconstruction operation that is performed in the 3D view is also performed in the corresponding 2D view.
- the 3D data from a CT scanner may be processed to allow the human screener to remove an object from the 3D image, e.g. remove a laptop from the 3D image and view the image without the laptop.
- the at least one corresponding 2D image that was generated from the 3D data is regenerated by a processor with the object also removed.
- Such an operation may be performed by processor 214 if the screener at the remote station requests that the object be removed from the scene.
- such an operation may be performed by a local processor at the security checkpoint screening station if a local screener at the security checkpoint screening station requests that the object be removed from the scene.
- the input used to apply the bag deconstruction functionality is the set of 3D coordinates in the CT data corresponding to the detected objects, e.g. the voxels corresponding to the detected objects must be known. One set of coordinates is needed for each detected item.
- the direct approach consists of setting to zero all the voxels corresponding to the objects to be removed in the density data originating from the CT scanner, and then regenerating the 2D projected view with the removed objects by reapplying the projection and coloring algorithms described earlier.
- This operation may be done dynamically, e.g. when the human screener selects a button on the GUI, or the operation may be precomputed, i.e. performed in advance and stored, in which case all possible combinations of displayed items may need to be precomputed.
- the voxels corresponding to the objects to be removed are all the voxels of the scene that do not belong to the detected object(s) remaining in the scene.
- (A) Receive the mask defining the object(s) to be removed from the scene, e.g. receive the list(s) of voxel coordinates defining the object of interest, where one voxel is one point in the cube.
- the process may be performed dynamically, e.g. in response to the user requesting the object be removed (e.g. when the user selects the “erase” button).
- the process may be performed as a preprocessing step and the results kept for future use, e.g. the preprocessing may be performed on one of the screening stations (e.g. by processor 114a) or by a computing station not used for screening, and then the results sent to viewing stations, or the preprocessing may be performed on the viewing stations on reception of the 3D data.
- when preprocessing, all possible combinations (if there is more than one object that can be deleted) may be computed.
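- by way of a non-limiting illustration, the direct approach might be sketched in Python (using numpy) as follows; project_and_color is passed in as a stand-in for the projection and coloring pipeline described earlier:

    import numpy as np

    def remove_object_direct(density, zeff, object_mask, project_and_color):
        # object_mask: boolean array, True for the voxels of the object(s)
        # to be removed (for scene removal, pass the inverted object mask).
        density_mod = np.where(object_mask, 0.0, density)
        zeff_mod = np.where(object_mask, 0.0, zeff)
        # Regenerate the 2D projected view from the modified cubes.
        return project_and_color(density_mod, zeff_mod)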
- the layer approach consists of computing a “pre-density” layer and a “pre-Zeff” layer for each of the detected objects and for the rest of the scene.
- the 2D colored image with removed objects is then computed from the layers. If N objects have been detected in a scene, then N + 1 pairs of layer images are computed (one for each object and one for the rest of the scene).
- the pre-density layer and the pre-Zeff layer of one detected object are computed as follows:
- compute the pre-density layer by applying the following equation to the modified CT data:

    pD_ks = Σ_{i,j} u_ij · t_ij   (Equation #6)

where the notation in Equation #6 is defined earlier in relation to Equation #3, and the sum runs over the voxels belonging to the detected object.
- compute the pre-Zeff layer by applying the following equation to the modified CT data:

    pZ_ks = Σ_{i,j} u_ij · t_ij · z_ij^p   (Equation #7)

where the notation of Equation #7 is defined earlier in relation to Equation #5.
- the values pD^0 and pZ^0 are used to respectively denote the pre-density and pre-Zeff layers corresponding to all of the scene but the detected objects.
- the X-ray attenuation image, called I and defined by Equation #3, and the projected Zeff image, called Z and defined by Equation #5, can be computed from the pre-density and pre-Zeff layers in the same manner as Equations #8 and #9 below, by summing over all of the N + 1 layers.
- to apply the bag deconstruction functionalities (e.g. the object removal and scene removal functionalities), the following operations may be performed:
- (1) compute the modified X-ray attenuation image corresponding to the modified scene using Equation #8, by summing only the needed layers:

    I_ks = I_0 · exp( − Σ_{n∈S} pD^n_ks )   (Equation #8)

where S denotes the set of layers to be kept (e.g. all layers except the one corresponding to the removed object).
- (2) compute the modified projected Zeff image corresponding to the modified scene using Equation #9, by summing only the needed layers:

    Z_ks ≈ ( Σ_{n∈S} pZ^n_ks / Σ_{n∈S} pD^n_ks )^(1/p)   (Equation #9)
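- by way of a non-limiting illustration, recombining the precomputed layers per Equations #8 and #9 might be sketched in Python (using numpy) as follows:

    import numpy as np

    P = 2.78

    def combine_layers(pre_density, pre_zeff, keep, i0=1.0):
        # pre_density[n] and pre_zeff[n] are the 2D layer images for object n
        # (index 0 being the rest of the scene); `keep` lists the layers to
        # retain, e.g. every layer except the removed object's.
        pd = sum(pre_density[n] for n in keep)
        pz = sum(pre_zeff[n] for n in keep)
        attenuation = i0 * np.exp(-pd)                      # Equation #8
        zeff = (pz / np.maximum(pd, 1e-12)) ** (1.0 / P)    # Equation #9
        return attenuation, zeff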
- the layer approach typically uses much less processing power.
- with the direct approach, the complete projection process (which is costly from a computation point of view) must be done each time an object is removed, which is not the case with the layer approach. Therefore, in general, the layer approach will lead to a faster generation of the image.
- Other possible advantages of the layer approach include: less processing power needed at the remote station, which may save cost; and deconstruction may be faster to be applied, which may result in a better user experience.
- N + 1 variations of the original data cubes (density and Zeff) are created: i) one where all the voxels corresponding to objects are set to zero; ii) one cube for each object where all the voxels but the ones of the object are set to zero.
- the N + 1 cube creation may be conceptual: in other words the processor does not have to actually have N + 1 sets of cubes in memory.
- (C) Use the physics of X-rays (actual and approximations) to generate the projected images for each of the cubes, e.g. reapply, for each of the N + 1 pairs of cubes, the simulation process described for the generation of the projected image emulating an actual classical X-ray machine. This creates N + 1 pairs of layers or preDensity and preZeff images.
- the method would typically be performed at the station receiving the data from the CT scanner, e.g. by processor 114a, as it may significantly lower the needed processing power on all other viewing stations.
- one option is to transmit to the viewing stations all the N + 1 pairs of layers or preDensity and preZeff images.
- Another option is to transmit to the viewing stations all the N + 1 pairs of layers or preDensity and preZeff images plus the original (the whole) color image and material mask.
- the main images could be computed as was done previously or based on the layers.
- a user operable input object of a GUI may receive a request that an object in a displayed image be removed from the displayed image.
- the object may be removed from the displayed image to reveal content previously obstructed/hidden by the object.
- the object is also removed from the corresponding 2D image(s).
- removing the object from the displayed image comprises: (1) obtaining modified 3D data (e.g. CT data) by setting to zero voxels in the 3D data that correspond to the object, and (2) regenerating the 2D image using the modified 3D data.
- the image data conveying the 3D image may be processed to derive alternate image data conveying the 2D image, where the image data conveying the 3D image is modified by setting voxel values to zero that correspond to the object.
- removing the object from the displayed image may instead follow the“layer” approach discussed above.
- the object to be removed may be a first object, and the method may include:
- (1) computing a plurality of layers, each layer corresponding to a respective different object in the item, and one of the layers corresponding to the first object; (2) each layer is computed by modifying the image data conveying the 3D image, in order to set to zero voxels that do not correspond to the respective object;
- (3) removing the first object from the displayed image comprises combining the plurality of layers, not including the layer corresponding to the first object, to form the displayed image.
- each layer is projected into 2D, thereby generating a plurality of 2D projected layers.
- Generating the 2D simulated image with the first object removed comprises combining the plurality of 2D projected layers, not including the 2D projected layer corresponding to the first object.
- the center of rotation may be based on view port (screen) center and depth position of the items.
- all views may be linked for the horizontal position. For example, upon zooming in on one view, all views may be zoomed in on the same horizontal position. As another example, upon panning in one view, all views may be panned to the same horizontal position.
- the user interface (e.g. GUI 502) described above is primarily described in the context of a remote screening station.
- a screening station at a security checkpoint, e.g. station 302 in FIG. 5 and/or the local stations having processors 114a-c respectively in FIG. 1, may use the same (or substantially the same) user interface.
- the images displayed may only be associated with items scanned at that screening station, e.g. from the CT scanner 108.
- the user interface may be enhanced to also display a sequence of thumbnail images of items scanned by the scanner. An example is illustrated in FIG. 43, which shows a GUI substantially the same as GUI 502 described earlier, but with a sequence of thumbnail images 588 also displayed.
- the sequence of thumbnail images 588 may assist the human screener in identifying the item to be checked.
- the whole display/manipulation process starts when the scene of the item is selected in the user interface; depending on the configuration, a thumbnail presenting the camera image or colour 2D bottom view may be displayed; and if the same computer (processor and memory) is being used for the screening as the computer that received the data from the scanner (e.g. from the CT scanner), then all the data transfer is done locally.
- the analysis results (e.g. bounding boxes of threat items) are created at the analysis station, such as at the remote screening station.
- 2D and 3D scenes are displayed using the exact same user interface (UI) on a same machine in a same session, e.g. receive a 2D scene, then a 3D scene, then a 2D scene, etc.
- Projected 2D views aim to match existing standard X-ray images, and processing tools (e.g. for image enhancement) may be applied in real time equivalently in 2D and 3D. This may allow screeners who are used to screening 2D images to more easily transition to 3D screening.
- the processors disclosed herein may each be implemented by one or more general-purpose processors that execute instructions stored in a memory.
- the instructions when executed, cause the processor to perform the operations described herein as applicable, e.g. receiving the data from a scanner, transmitting/receiving the data over the network to/from the remote screening station, generating the at least one 2D image from the 3D data, performing the 2D projection views from the 3D data, ROI projection, rendering the 3D data for display, bag deconstruction, etc.
- processors disclosed herein may be implemented using dedicated circuitry, such as an application specific integrated circuit (ASIC), a graphics processing unit (GPU), or a programmed field programmable gate array (FPGA) for performing the operations of the processor.
- FIG. 44 illustrates an example system 802 for performing security screening.
- the system 802 is a computing device and includes at least one memory 804, at least one programmable processor 806, and a computer network interface 808.
- the processor 806 may be programmed to implement the methods described herein.
- the memory 804 may store image data 810 derived from scanning an item with penetrating radiation at a checkpoint screening station.
- the image data conveys a 3D image of the item. Examples of image data conveying a 3D image of the item include density and/or Zeff data. For example, there may be two blocks of 3D data from a CT scanner: one for density and one for Zeff, although this is not necessary. For example, there may only be density data.
- the image data that conveys the 3D image of the item comprises data used to obtain the 3D image of the item.
- the 3D image data 810 may have been received through the network interface 808 from the scanning device at the checkpoint screening station.
- the processor 806 may be programmed to process the 3D image data 810 to derive alternate image data conveying a 2D image of the item, e.g. a simulated X-ray image.
- the processor 806 may be programmed to cause transmission of the alternate image data conveying the 2D image and/or the image data conveying the 3D image for display on a display device.
- the display device may be at a remote screening station. The transmission may occur through the network interface 808.
- the system 802 may be a computing device in network communication with the CT scanner 108a and the remote screening station 204.
- the memory 804 may be memory 112a
- the processor 806 may be processor 114a.
- FIG. 45 is a flowchart of an example method for screening an item at a security checkpoint.
- the security checkpoint includes a checkpoint screening station.
- the method may be implemented by a system having at least one programmable processor, e.g. system 802 having processor 806.
- Step 1002 includes receiving image data derived from scanning the item with penetrating radiation at the checkpoint screening station.
- the image data conveys a 3D image of the item.
- Step 1004 includes processing the image data conveying the 3D image of the item to derive alternate image data conveying a 2D image of the item.
- Step 1006 includes transmitting the derived alternate image data conveying the 2D image of the item for display on a display screen.
- Step 1008 includes transmitting the image data conveying the 3D image of the item for display on the display screen (at the local or remote screening station).
- transmission of the image data conveying the 3D image of the item may be performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the item. That is, in some embodiments transmission of the 2D and 3D image data may occur in parallel, or the 3D image data may be transmitted before the 2D image data. But in general, the transmission of the derived alternate image data conveying the 2D image is complete before the transmission of the image data conveying the 3D image is complete.
- the image data conveying the 3D image of the item includes CT data.
- the alternate image data conveys a simulated X-ray image of the item.
- the simulated X-ray image is derived by applying simulation operations to the CT data.
- the simulation operations may include projection operations.
- the method includes displaying the 2D image of the item on the display screen without displaying the 3D image of the item.
- the displaying may occur at the remote screening station.
- the method includes displaying the 2D image of the item on the display screen prior to the 3D image of the item being ready to be displayed.
- the method includes receiving an input provided by an operator (e.g. at the remote screening station), where the input requests that the 3D image of the item be displayed on the display screen.
- the method in response to receipt of the input, includes displaying the 3D image of the item.
- the 3D image of the item is displayed concurrently with the 2D image of the item.
- the method includes receiving an input provided by an operator (e.g. at the remote screening station), where the input requests that the 3D image of the item be displayed on the display screen.
- the method may include modifying the display screen at the remote screening station to: display the 3D image of the item; and cease displaying the 2D image of the item.
- the method includes directing the remote screening station to implement a GUI.
- the GUI may be configured for: displaying the 2D image of the item; providing an input object configured to receive a request from an operator of the remote screening station to display the 3D image of the item; and in response to a specific user request to display the 3D image of the item through the input object, adapting the GUI to display the 3D image of the item.
- the 3D image of the item is displayed concurrently with the 2D image of the item in response to the specific user request to display the 3D image of the item.
- the 3D image of the item is displayed instead of the 2D image of the item in response to the specific user request to display the 3D image of the item.
- the GUI is configured for selectively causing the input object to acquire one of an enabled state and a disabled state at least in part based on the 3D image of the item being available for display at the remote screening station.
- the GUI is configured for causing the input object to acquire the enabled state following an intentional delay period.
- the intentional delay period may be measured from the displaying of the 2D image of the item.
- the intentional delay period may be measured from the 3D image of the item being available for display at the remote screening station.
- the intentional delay period may be configurable.
- the GUI may be configured for: providing a user operable input object configured to receive a delay period duration, and in response to receipt of a specific delay period duration, configuring the intentional delay period based upon the received delay period duration.
- the GUI may be configured for: (a) providing an image manipulation input object configured to receive a request from the operator of the remote screening station to remove a component shown in the 3D image of the item; and (b) in response to a specific user request to remove a specific component shown in the 3D image of the item, adapting the GUI to display an altered version of the 3D image of the item in which the specific component is omitted to reveal contents of the item previously obstructed by the specific component.
- the GUI may also or instead be adapted to display an altered version of the 2D image of the item in which the specific component is omitted.
- an input may be received requesting that an object be removed from a displayed image of the item, and in response removing the object from the displayed image to reveal content previously blocked by the object.
- removing the object from the displayed image may include obtaining modified CT data by setting to zero voxels in the CT data that correspond to the object, and regenerating the 2D image of the item using the modified CT data.
- the object is a first object
- the method includes computing a plurality of layers, each layer corresponding to a respective different object in the item, and one of the layers corresponding to the first object. Each layer may be computed using the CT data modified by setting to zero voxels in the CT data that do not correspond to the respective object.
- Removing the first object from the displayed image may include combining the plurality of layers, not including the layer corresponding to the first object, to form the displayed image.
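As an illustrative sketch only of the voxel-zeroing and layer-combination approach just described: here the CT data is a 3D NumPy array, and `labels` is an assumed pre-computed segmentation volume (e.g. produced by an ATD engine) in which voxels of the k-th object carry the value k. None of these names come from the document.

```python
import numpy as np

def compute_layers(ct: np.ndarray, labels: np.ndarray, n_objects: int) -> list:
    """One layer per object: voxels NOT belonging to that object set to zero."""
    return [np.where(labels == k, ct, 0.0) for k in range(1, n_objects + 1)]

def project(volume: np.ndarray) -> np.ndarray:
    """Simplified parallel projection standing in for the 2D image generation."""
    return volume.sum(axis=0)

def image_without_object(layers: list, removed: int) -> np.ndarray:
    """Combine every layer except the removed object's layer, then re-project,
    revealing content previously blocked by that object.
    Assumes at least one layer remains after removal."""
    kept = [layer for k, layer in enumerate(layers, start=1) if k != removed]
    return project(np.sum(kept, axis=0))
```

An equivalent single-object removal, as stated above, is to zero the voxels of the removed object directly in the CT data and regenerate the 2D image from the modified data.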
- processing the image data conveying the 3D image of the item to derive the alternate image data conveying the 2D image of the item includes: defining a plurality of projection paths through the 3D image of the item; and projecting the image data conveying the 3D image of the item along the projection paths to derive the alternate image data conveying the 2D image of the item.
- at least some projection paths in the plurality of projection paths are non-parallel projection paths, in that they extend along axes that diverge from one another and/or that converge at a same point.
- the non-parallel projection paths originate from (or converge at) a same starting point but end at different ones of a set of end points.
- the starting point corresponds to a simulated penetrating radiation source.
- the set of end points corresponds to a set of simulated radiation sensors.
- the image data conveying the 3D image of the item includes CT data conveying 3D density data and 3D Zeff data.
- the image data conveying the 3D image of the item includes CT data conveying 3D density data and not conveying Zeff data.
- the CT data includes a plurality of slices of CT data.
- projecting the image data conveying the 3D image of the item along the projection paths includes projecting slices in the plurality of slices of CT data along the projection paths.
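The following is a minimal sketch, under assumptions not stated in the document (nearest-neighbour sampling and a unit-spaced voxel grid), of projecting a 3D density volume along non-parallel paths that originate at a simulated point source and end at a set of simulated detector positions; the function and parameter names are illustrative.

```python
import numpy as np

def simulate_xray(ct: np.ndarray, source, detector_points,
                  n_samples: int = 256) -> np.ndarray:
    """Approximate line integrals of `ct` along rays from `source` (a 3D
    point) to each detector end point, yielding one projected value per
    simulated detector. The same projection applies to a 3D Zeff volume."""
    source = np.asarray(source, dtype=float)
    shape = np.array(ct.shape)
    out = np.empty(len(detector_points))
    for i, end in enumerate(np.asarray(detector_points, dtype=float)):
        ts = np.linspace(0.0, 1.0, n_samples)
        pts = source + ts[:, None] * (end - source)       # samples on the ray
        idx = np.rint(pts).astype(int)                    # nearest voxel
        ok = np.all((idx >= 0) & (idx < shape), axis=1)   # inside the volume
        vals = ct[idx[ok, 0], idx[ok, 1], idx[ok, 2]]
        # Riemann-sum approximation of the line integral along the path.
        out[i] = vals.sum() * np.linalg.norm(end - source) / n_samples
    return out
```

A grid of detector end points behind the item, paired with a single source point in front of it, yields the divergent (fan- or cone-beam) paths described above; when the CT data comes as slices, each slice contributes the voxels the rays traverse within it.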
- the image data conveying the 3D image of the item further includes information conveying a region of interest (ROI) in the 3D image.
- processing the image data conveying the 3D image of the item to derive the alternate image data conveying the 2D image of the item includes processing the information conveying the ROI in the 3D image to derive information conveying a corresponding ROI in the 2D image.
- processing the information conveying the ROI in the 3D image to derive information conveying the corresponding ROI in the 2D image includes performing operations including: (a) defining the ROI in the 3D image by defining a plurality of 3D coordinates, at least some of the 3D coordinates corresponding to edges or corners of a 3D region, where the 3D region defines the ROI in the 3D image; and (b) projecting the plurality of 3D coordinates into a two-dimensional space to obtain the corresponding ROI in the 2D image.
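A minimal sketch of operations (a) and (b) above, assuming the same point-source geometry as in the previous sketch; the projection onto the z = 0 plane and all names are illustrative, not prescribed by the document.

```python
import numpy as np

def make_perspective(source):
    """Project a 3D point onto the z = 0 plane along the ray from `source`.
    Assumes no corner lies at the source depth (sz != z)."""
    sx, sy, sz = source
    def project_point(p):
        x, y, z = p
        t = sz / (sz - z)                 # similar-triangles scale factor
        return sx + t * (x - sx), sy + t * (y - sy)
    return project_point

def roi_2d_from_3d(corners_3d, project_point):
    """Project the ROI's 3D corner coordinates and take the minimal and
    maximal x and y values to obtain the corresponding 2D bounding box."""
    pts = np.array([project_point(c) for c in corners_3d])
    (xmin, ymin), (xmax, ymax) = pts.min(axis=0), pts.max(axis=0)
    return xmin, ymin, xmax, ymax
```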
- FIG. 46 is a flowchart of an example method for screening items at a security checkpoint.
- the security checkpoint includes a first checkpoint screening station with a screening device of a first type and a second checkpoint screening station with a screening device of a second type distinct from the first type.
- the method is implemented by a system including at least one programmable processor.
- Step 1022 includes receiving first image data derived from scanning a first item with penetrating radiation at the first checkpoint screening station.
- the first image data conveys a 3D image of the first item. Examples of image data conveying a 3D image of an item include density and/or Zeff data.
- Step 1024 includes processing the first image data conveying the 3D image of the first item to derive alternate image data conveying a 2D image of the first item.
- Step 1026 includes transmitting the derived alternate image data conveying the 2D image of the first item for display on a display screen at a remote screening station.
- the remote screening station is in communication with the system over a computer network.
- Step 1028 includes also transmitting the first image data conveying the 3D image of the first item for display on the display screen at the remote screening station.
- transmission of the first image data conveying the 3D image of the first item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the first item.
- Step 1030 includes receiving second image data derived from scanning a second item with penetrating radiation at the second checkpoint screening station.
- the second image data conveys a 2D image of the second item.
- the second image data may include a pair of black-and-white images (or a pair of black-and-white images generated, or to be generated, from a continuous stream of data).
- Step 1032 includes transmitting the second image data conveying the 2D image of the second item for display on the display screen at the remote screening station, e.g. once the analysis of the first item is complete.
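For illustration only, the ordering constraint in steps 1026 and 1028 (the small derived 2D image data is sent first so it can be displayed quickly, with the voluminous 3D data following subsequently or in parallel) could be realized with a prioritized background sender along these lines. `send_bytes` is a placeholder for whatever transport links the checkpoint to the remote screening station; none of these names come from the document.

```python
import queue
import threading

def make_prioritized_sender(send_bytes):
    """Returns an enqueue function; queued 2D payloads are always
    transmitted before queued 3D payloads on the same connection."""
    q = queue.PriorityQueue()
    seq = 0  # tie-breaker keeping same-priority payloads in FIFO order

    def worker():
        while True:
            _priority, _seq, payload = q.get()
            send_bytes(payload)  # placeholder: network write happens here
            q.task_done()

    threading.Thread(target=worker, daemon=True).start()

    def enqueue(payload: bytes, is_3d: bool) -> None:
        nonlocal seq
        q.put((1 if is_3d else 0, seq, payload))  # 0 = 2D (higher priority)
        seq += 1

    return enqueue
```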
- the same processor performs all steps.
- a first processor (e.g. processor H4a) associated with the first checkpoint screening station performs the receiving of the first image data, the processing of the first image data, the transmitting of the derived alternate image data conveying the 2D image of the first item, and the transmitting of the first image data conveying the 3D image of the first item.
- a second processor (e.g. processor H4b) associated with the second checkpoint screening station performs the receiving of the second image data and the transmitting of the second image data conveying the 2D image of the second item.
- the first image data includes CT data.
- the derived alternate image data conveys a simulated X-ray image of the first item.
- the simulated X-ray image is derived by applying simulation operations to the CT data.
- the second image data includes X-ray image data (e.g. the second screening device may be an X-ray scanner).
- the method includes: at the remote screening station, displaying the 2D image of the first item on the display screen without displaying the 3D image of the first item.
- the method includes: at the remote screening station, displaying the 2D image of the first item on the display screen prior to the 3D image of the first item being ready to be displayed at the remote screening station.
- FIG. 47 is a flowchart of another example method for screening items for a security checkpoint.
- the method is implemented by a system having at least one programmable processor.
- the system may be part of a remote screening station.
- the at least one programmable processor is configured for performing the method steps.
- a GUI is implemented on a display screen.
- the GUI is configured for: (i) displaying a 2D image of the item, and (ii) providing an input object operable by an operator.
- the input object is configured to selectively acquire: (1) an enabled state in which the input object is able to receive a user request to display a 3D image of the item; and (2) a disabled state in which the input object is unable to receive the user request to display the 3D image of the item.
- Step 1054 includes displaying the 2D image of the item on the GUI and causing the input object to acquire the disabled state.
- Step 1056 includes dynamically adapting the GUI to subsequently cause the input object to acquire the enabled state after a delay period.
- the delay period is based at least in part on receipt of image data conveying the 3D image of the item. For example, the delay period may expire upon receipt of the image data conveying the 3D image of the item and after the 3D image is rendered and available to be displayed.
- the input object is configured to remain in the disabled state at least until the 3D image of the item is available for display on the GUI.
- Step 1058 includes, in response to receipt through the input object of a specific user request to display the 3D image of the item, dynamically adapting the GUI to display the 3D image of the item on the display screen.
- the GUI may be configured to display the 3D image of the item only after the 2D image of the item has been displayed.
- the 3D image of the item is displayed concurrently with the 2D image of the item in response to the user request to display the 3D image of the item.
- the 3D image of the item is displayed instead of the 2D image of the item in response to the user request to display the 3D image of the item.
- the delay period for dynamically adapting the GUI to cause the input object to acquire the enabled state may be based at least in part on receipt of image data conveying the 3D image and upon an intentional delay period.
- the intentional delay period is measured from the displaying of the 2D image of the item.
- the intentional delay period is measured from the 3D image of the item being available for display at the remote screening station.
- the intentional delay period is configurable.
- the GUI is configured for: (a) providing another user operable input object configured to receive a delay period duration; and (b) in response to receipt of a specific delay period duration, configuring the intentional delay period based upon the received delay period duration.
- the image data conveying the 3D image of the item may include CT data and the 2D image of the item may include simulated X-ray image data.
- Example 1 A method comprising: scanning an item with a CT scanner to obtain data used for displaying a 3D image of the item; generating a 2D image from the data; displaying the 2D image.
- Example 2 The method of example 1, further comprising transmitting the 2D image to a remote screening station, and wherein the displaying the 2D image occurs at the remote screening station.
- Example 3 The method of example 2, further comprising transmitting the data to the remote screening station after transmitting the 2D image, or in parallel with transmitting the 2D image.
- Example 4 The method of any one of examples 1 to 3, further comprising displaying the 2D image and not displaying the 3D image of the item.
- Example 5 The method of example 4, further comprising receiving an input at a user interface, the input requesting that the 3D image be displayed, and in response displaying the 3D image.
- Example 6 The method of example 5, wherein the input is not activated until the 3D image is available for display.
- Example 7 The method of example 6, wherein the 3D image is available for display once it is received at a remote screening station.
- Example 8 The method of example 6 or 7, wherein the 3D image is available for display once it is rendered from the data.
- Example 9 The method of any one of examples 5 to 8, wherein the 3D image is displayed concurrently with the 2D image.
- Example 10 The method of any one of examples 1 to 9, wherein the data is 3D data.
- Example 11 The method of example 10, wherein the 3D data comprises reconstructed density and/or Zeff data.
- Example 12 The method of any one of examples 1 to 11, wherein generating the 2D image from the data comprises generating a 2D projected view of the 3D image.
- Example 13 The method of any one of examples 1 to 12, wherein the 2D image emulates or resembles a 2D image that would be produced by an X-ray scanner.
- Example 14 The method of any one of examples 1 to 13, wherein generating the 2D image from the data comprises: processing 3D density and 3D Zeff data to produce projected images comprising a projected density image and a projected Zeff image.
- Example 15 The method of example 14, wherein the projected images are processed to create colour and material mask images.
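A minimal sketch of one way the projected density and Zeff images of examples 14 and 15 might be turned into a material mask and colour image, using the conventional orange/green/blue material colouring of X-ray displays. The Zeff band edges below are illustrative placeholders, not values from the document; real systems calibrate them.

```python
import numpy as np

# Illustrative Zeff band edges only; real systems calibrate these values.
ORGANIC_MAX_ZEFF = 10.0
INORGANIC_MAX_ZEFF = 18.0

def colour_and_material_mask(proj_density: np.ndarray, proj_zeff: np.ndarray):
    """Classify each projected pixel as organic (0), inorganic (1) or
    metal (2) from its Zeff, then colour it with brightness driven by the
    projected density (denser -> darker, as on an X-ray display)."""
    mask = np.select(
        [proj_zeff < ORGANIC_MAX_ZEFF, proj_zeff < INORGANIC_MAX_ZEFF],
        [0, 1], default=2)
    palette = np.array([[1.0, 0.6, 0.2],    # organic   -> orange
                        [0.2, 0.8, 0.2],    # inorganic -> green
                        [0.2, 0.4, 1.0]])   # metal     -> blue
    brightness = np.exp(-proj_density)[..., np.newaxis]
    return palette[mask] * brightness, mask
```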
- Example 16 The method of any one of examples 1 to 15, wherein generating the 2D image from the data comprises: applying at least one transformation to generate a 2D projected view.
- Example 17 The method of any one of examples 1 to 16, wherein generating the 2D image from the data comprises: obtaining two 3D datasets, e.g. representing density and the Zeff of the item scanned by the CT scanner; simulating an image that would be generated if the item was scanned by an X-ray scanner.
- Example 18 The method of example 17, wherein the simulating comprises a processor: conceptually placing a source at a same relative position from a data cube; conceptually placing an array of X-ray detectors at the same relative position from the data cube; and simulating what would be the measured data in an X-ray detector to generate a projected image, and optionally generating a colour image and/or material mask for display as part of the 2D image.
- Example 19 The method of any one of examples 1 to 18, further comprising obtaining ATD information associated with the item.
- Example 20 The method of example 19, wherein the ATD information is received from the CT scanner.
- Example 21 The method of example 19 or example 20, wherein the ATD information indicates a region of interest (ROI).
- Example 22 The method of any one of examples 1 to 20, further comprising obtaining a ROI, e.g. from ATD information and/or from a user input.
- Example 23 The method of example 21 or example 22, comprising projecting the ROI in the 2D image.
- Example 24 The method of example 23, wherein the ROI comprises a bounding box.
- Example 25 The method of example 24, wherein projecting the ROI in the 2D image comprises computing a 2D bounding box, and optionally wherein computing the 2D bounding box comprises computing the minimal and maximal x and y values of projected corners.
- Example 26 The method of any one of examples 1 to 25, further comprising receiving an input indicating that an object in the 3D image is to be removed, and removing the object from the 2D image.
- Example 27 The method of example 26, wherein removing the object from the 2D image comprises removing the object from the 3D image to obtain a modified 3D image, and then generating a 2D projection of the modified 3D image.
- Example 28 The method of any one of examples 1 to 27, further comprising scanning another item with an X-ray scanner and displaying, on the same GUI as the 2D image, the image from the X-ray scanner.
- Example 29 The method of any one of examples 1 to 28, wherein the image generated from the data is a first 2D image, and the method further comprises generating a second 2D image from the data.
- Example 30 The method of example 29, wherein the first 2D image is a bottom view and the second 2D image is a side view, or wherein the first 2D image is a side view and the second 2D image is a bottom view.
- Example 31 The method of example 30, wherein at least one of the first 2D image and the second 2D image is displayed on a display at the same time as the 3D image.
- Example 32 A method comprising: receiving data from a CT scanner, the data used for generating a 3D image of an item scanned by the CT scanner; generating a 2D image from the data.
- Example 33 The method of example 32, further comprising transmitting the 2D image to a remote screening station.
- Example 34 The method of example 33, further comprising transmitting the data to the remote screening station after transmitting the 2D image or in parallel to transmitting the 2D image.
- Example 35 The method of any one of examples 32 to 34, wherein the data is 3D data.
- Example 36 The method of example 35, wherein the 3D data comprises reconstructed density and/or Zeff data.
- Example 37 The method of any one of examples 32 to 36, wherein generating the 2D image from the data comprises generating a 2D projected view of the 3D image.
- Example 38 The method of any one of examples 32 to 37, wherein the 2D image emulates or resembles a 2D image that would be produced by an X-ray scanner.
- Example 39 The method of any one of examples 32 to 38, wherein generating the 2D image from the data comprises: processing 3D density and 3D Zeff data to produce projected images comprising a projected density image and a projected Zeff image.
- Example 40 The method of example 39, wherein the projected images are processed to create colour and material mask images.
- Example 41 The method of any one of examples 32 to 40, wherein generating the 2D image from the data comprises: applying at least one transformation to generate a 2D projected view.
- Example 42 The method of any one of examples 32 to 41, wherein generating the 2D image from the data comprises: obtaining two 3D datasets, e.g. representing density and the Zeff of the item scanned by the CT scanner; simulating an image that would be generated if the item was scanned by an X-ray scanner.
- Example 43 The method of example 42, wherein the simulating comprises a processor: conceptually placing a source at a same relative position from a data cube; conceptually placing an array of X-ray detectors at the same relative position from the data cube; and simulating what would be the measured data in an X-ray detector to generate a projected image, and optionally generating a colour image and/or material mask for display as part of the 2D image.
- Example 44 The method of any one of examples 32 to 43, further comprising obtaining, from the CT scanner, ATD information associated with the item.
- Example 45 The method of any one of examples 32 to 44, wherein the 2D image generated from the data is a first 2D image, and the method further comprises generating a second 2D image from the data.
- Example 46 The method of example 45, wherein the first 2D image is a bottom view and the second 2D image is a side view, or wherein the first 2D image is a side view and the second 2D image is a bottom view.
- Example 47 The method of example 46, wherein the first 2D image and the second 2D image are transmitted to a remote screening station before the data used for generating the 3D image.
- Example 48 A method comprising: receiving 2D image data that was generated from data obtained by a CT scanner; displaying the 2D image.
- Example 49 The method of example 48, further comprising receiving the 2D image from a security checkpoint screening station, and wherein the displaying the 2D image occurs at a remote screening station.
- Example 50 The method of example 49, further comprising receiving the data obtained by the CT scanner, at the remote screening station, after receiving the 2D image; wherein the data obtained by the CT scanner is used for displaying a 3D image of the item.
- Example 51 The method of any one of examples 48 to 50, further comprising displaying the 2D image and not displaying the 3D image of the item.
- Example 52 The method of example 51, further comprising receiving an input at a user interface, the input requesting that the 3D image be displayed, and in response displaying the 3D image.
- Example 53 The method of example 52, wherein the input is not activated until the 3D image is available for display.
- Example 54 The method of example 53, wherein the 3D image is available for display once it is received at the remote screening station.
- Example 55 The method of example 53 or 54, wherein the 3D image is available for display once it is rendered from the data.
- Example 56 The method of any one of examples 52 to 55, wherein the 3D image is displayed concurrently with the 2D image.
- Example 57 The method of any one of examples 48 to 56, wherein the data is 3D data.
- Example 58 The method of example 57, wherein the 3D data comprises reconstructed density and/or Zeff data.
- Example 59 The method of any one of examples 48 to 58, wherein the 2D image is a 2D projected view of the 3D image.
- Example 60 The method of any one of examples 48 to 59, wherein the 2D image emulates or resembles a 2D image that would be produced by an X-ray scanner.
- Example 61 The method of any one of examples 48 to 60, wherein the 2D image was generated from the data by: processing 3D density and 3D Zeff data to produce projected images comprising a projected density image and a projected Zeff image.
- Example 62 The method of example 61, wherein the projected images are processed to create colour and material mask images.
- Example 63 The method of any one of examples 48 to 62, wherein the 2D image was generated from the data by: applying at least one transformation to generate a 2D projected view.
- Example 64 The method of any one of examples 48 to 63, wherein the 2D image was generated from the data by: obtaining two 3D datasets, e.g. representing density and the Zeff of the item scanned by the CT scanner; simulating an image that would be generated if the item was scanned by an X-ray scanner.
- Example 65 The method of example 64, wherein the simulating comprises a processor: conceptually placing a source at a same relative position from a data cube; conceptually placing an array of X-ray detectors at the same relative position from the data cube; and simulating what would be the measured data in an X-ray detector to generate a projected image, and optionally generating a colour image and/or material mask for display as part of the 2D image.
- Example 66 The method of any one of examples 48 to 65, further comprising receiving ATD information associated with the item.
- Example 67 The method of example 66, wherein the ATD information originates from the CT scanner.
- Example 68 The method of example 66 or example 67, wherein the ATD information indicates a region of interest (ROI).
- Example 69 The method of any one of examples 48 to 67, further comprising receiving a ROI, e.g. from ATD information and/or from a user input.
- Example 70 The method of example 68 or example 69, wherein the ROI is projected in the 2D image.
- Example 71 The method of example 70, wherein the ROI comprises a bounding box.
- Example 72 The method of example 71, wherein projecting the ROI in the 2D image comprises computing a 2D bounding box, and optionally wherein computing the 2D bounding box comprises computing the minimal and maximal x and y values of projected corners.
- Example 73 The method of any one of examples 48 to 72, further comprising receiving an input indicating that an object in a 3D image is to be removed, and removing the object from the 2D image.
- Example 74 The method of example 73, wherein removing the object from the 2D image comprises removing the object from the 3D image to obtain a modified 3D image, and then generating a 2D projection of the modified 3D image.
- Example 75 The method of any one of examples 48 to 74, further comprising receiving another 2D image associated with the scanning of another item from an X-ray scanner.
- Example 76 The method of example 75, wherein the image from the X-ray scanner is displayed on the same GUI as the image originating from the CT scanner.
- Example 77 The method of any one of examples 48 to 76, wherein the 2D image generated from the data is a first 2D image, and the method further comprises receiving a second 2D image generated from the data.
- Example 78 The method of example 77, wherein the first 2D image is a bottom view and the second 2D image is a side view, or wherein the first 2D image is a side view and the second 2D image is a bottom view.
- Example 79 The method of example 78, wherein at least one of the first 2D image and the second 2D image is displayed on a display at the same time as the 3D image.
- Example 80 A method comprising: receiving and displaying a first 2D image of a first item that was scanned by an X-ray scanner; receiving and displaying a second 2D image of a second item that was scanned by a CT scanner; optionally wherein the method is performed at a remote screening station.
- Example 81 The method of example 80, wherein the first 2D image is no longer displayed when the second 2D image is displayed.
- Example 82 The method of example 80 or 81, further comprising receiving the first 2D image from a first security checkpoint screening station, and receiving the second 2D image from a second, different security checkpoint screening station.
- Example 83 The method of any one of examples 80 to 82, further comprising: receiving 3D data associated with the second item that was scanned by the CT scanner, wherein the 3D data is for displaying a 3D image of the second item.
- Example 84 The method of example 83, wherein the 3D data is received after the second 2D image.
- Example 85 The method of example 83 or 84, further comprising displaying the second 2D image and not displaying the 3D image.
- Example 86 The method of 85, further comprising receiving an input at a user interface, the input requesting that the 3D image be displayed, and in response displaying the 3D image.
- Example 87 The method of example 86, wherein the input is not activated until the 3D image is available for display.
- Example 88 The method of example 87, wherein the 3D image is available for display once it is received at the remote screening station.
- Example 89 The method of example 87 or 88, wherein the 3D image is available for display once it is rendered from the 3D data.
- Example 90 The method of any one of examples 86 to 89, wherein the 3D image is displayed concurrently with the second 2D image.
- Example 91 The method of any one of examples 83 to 90, wherein the 3D data comprises reconstructed density and/or Zeff data.
- Example 92 The method of any one of examples 83 to 91, wherein the second 2D image is a 2D projected view of the 3D image; optionally wherein the second 2D image emulates or resembles a 2D image that would be produced by an X-ray scanner; optionally wherein the second 2D image was generated from the data from the CT scanner by: processing 3D density and 3D Zeff data to produce projected images comprising a projected density image and a projected Zeff image; optionally wherein the projected images are processed to create colour and material mask images.
- Example 93 The method of any one of examples 80 to 92, wherein the second 2D image was generated from data from the CT scanner by applying at least one transformation to the data to generate a 2D projected view.
- Example 94 The method of any one of examples 80 to 93, further comprising receiving a ROI associated with the second scanned item, e.g. from ATD information and/or from a user input; and optionally wherein the ROI is projected in the second 2D image; and optionally wherein the ROI comprises a bounding box, and optionally wherein projecting the ROI in the second 2D image comprises computing a 2D bounding box, and optionally wherein computing the 2D bounding box comprises computing the minimal and maximal x and y values of projected corners.
- Example 95 The method of any one of examples 80 to 94, further comprising receiving an input indicating that an object in a 3D image is to be removed, and removing the object from the second 2D image.
- Example 96 The method of example 95, wherein removing the object from the second 2D image comprises removing the object from the 3D image to obtain a modified 3D image, and then generating a 2D projection of the modified 3D image.
- Example 97 At least one processor configured to perform the method of any one of examples 1 to 96.
- Example 98 A system to perform the method of any one of examples 1 to 96.
- Example 99 At least one computer readable medium having stored thereon computer executable instructions that, when executed, cause at least one processor to perform the method of any one of examples 1 to 96.
- Example 100 A system for use in screening pieces of carry-on luggage at a security checkpoint.
- the system comprises: (a) at least two scanning devices for scanning the pieces of carry-on luggage with penetrating radiation to derive image data associated with the pieces of carry-on luggage, wherein the at least two scanning devices include: (i) an X-ray scanner configured for generating X-ray image data associated with at least some of the pieces of carry-on luggage; (ii) a CT scanner configured for generating CT image data associated with at least some other one of the pieces of carry-on luggage; (b) a screening station in communication with said at least two scanning devices, said screening station implementing a GUI module configured for displaying: (i) images derived from the X-ray image data associated with at least some of the pieces of carry-on luggage; and (ii) images derived from CT image data associated with at least some other one of the pieces of carry-on luggage; wherein the GUI is configured for providing at least one user operable control configured for manipulating both (i) images derived from the X-ray image data and (ii) images derived from the CT image data.
- the images derived from CT image data associated with at least some other one of the pieces of carry-on luggage include at least one simulated X-ray image derived by processing the CT image data.
- the graphical user interface module is further configured for: (a) providing a user interface tool for allowing the human operator to provide at the remote screening station threat assessment information associated with the image being displayed; (b) in response to receipt of threat assessment information provided by the human operator, causing the threat assessment information provided by the human operator to be conveyed to an on-site screening technician associated with the one of the at least two scanning devices.
- Example 101 A system for use in screening pieces of carry-on luggage at a security checkpoint.
- the system comprises at least two scanners for scanning the pieces of carry-on luggage with penetrating radiation to derive image data, wherein at least one of the at least two scanners is an X-ray scanner and another one of the at least two scanners is a CT scanner.
- the system also comprises a computing device including an input for receiving the image data and implementing a GUI, the GUI being configured for receiving and processing X-ray image data and CT image data.
- Example 102 A method for screening a plurality of items at a security checkpoint, the security checkpoint including at least two scanning devices including at least one X-ray scanner and at least one CT scanner. The method comprises: (a) scanning a first item amongst the plurality of items to be screened at the security checkpoint using the X-ray scanner to generate X-ray image data conveying information on the contents of the first item; (b) scanning a second item distinct from the first item amongst the plurality of items to be screened at the security checkpoint using the CT scanner to generate CT image data conveying information on the contents of the second item; (c) transmitting the X-ray image data conveying information on the contents of the first item to a screening station for display on a GUI for visual inspection by a human operator; (d) transmitting image data derived from the CT image data conveying information on the contents of the second item to the screening station for display on the GUI for visual inspection by a human operator, wherein transmitting the image data derived from the CT image data includes: transmitting alternate image data conveying a 2D image of the second item, derived by processing the CT image data, prior to or in parallel with transmission of the CT image data conveying a 3D image of the second item.
- the screening station is a remote screening station located remotely from the X- ray scanner and the CT scanner.
- local display devices associated with respective ones of the at least two scanning devices are provided for conveying threat assessment information to on-site screening technicians associated with the scanning devices.
- the threat assessment information provided by a human operator at the remote screening station is conveyed to the on-site screening technician associated with one of the at least two scanning devices through an associated one of the local display devices.
- the threat assessment information indicates to the on-site screening technician whether a piece of luggage is marked as “clear” or marked for further inspection.
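Purely as a sketch of conveying the remote operator's decision to the on-site technician's local display, with an assumed JSON message whose field names do not come from the document:

```python
import json
import time

def assessment_message(item_id: str, decision: str, operator_id: str) -> bytes:
    """Encode a remote operator's threat assessment ('clear' or 'search')
    for transmission to the local display device at the originating
    scanning device."""
    if decision not in ("clear", "search"):
        raise ValueError("decision must be 'clear' or 'search'")
    return json.dumps({
        "item": item_id,
        "decision": decision,
        "operator": operator_id,
        "timestamp": time.time(),
    }).encode("utf-8")
```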
- the method may further comprise determining whether to subject respective ones of the images derived by the at least two scanning devices to a visual inspection by the human operator at the remote screening station, wherein the determining is made at least in part based on results obtained by using an automated threat detection engine.
- the processor is further programmed to cause at least some of the images derived by the at least two scanning devices to bypass visual inspection by the human operator at the remote screening station.
- the images displayed at the remote screening station are associated with results obtained by applying an ATD operation, so that, “on demand”, the human operator views both the image of the piece of luggage and the associated ATD results.
- any module, component, or device exemplified herein that executes instructions may include or otherwise have access to a non-transitory computer/processor readable storage medium or media for storage of information, such as computer/processor readable instructions, data structures, program modules, and/or other data.
- Examples of non-transitory computer/processor readable storage media include magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices; optical disks such as compact disc read-only memory (CD-ROM), digital video discs or digital versatile discs (DVDs), Blu-ray Disc™, or other optical storage; and volatile and non-volatile, removable and non-removable media implemented in any method or technology, including random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, or other memory technology. Any such non-transitory computer/processor storage media may be part of a device or accessible or connectable thereto. Any application or module herein described may be implemented using computer/processor readable/executable instructions that may be stored or otherwise held by such non-transitory computer/processor readable storage media.
Abstract
A method, and associated system and computer program product, for screening an item at a security checkpoint including a checkpoint screening station are presented. Image data derived from scanning the item with penetrating radiation is received, the image data conveying a three-dimensional (3D) image of the item. The image data is processed to derive alternate image data conveying a two-dimensional (2D) image of the item. The image data conveying the 2D and 3D images may be transmitted for display on a display screen at a remote screening station by transmitting the derived alternate (2D) image data prior to (or at least in parallel with) the 3D data, thereby making the 2D image available for display sooner than the 3D image would have been available. Optionally, the 2D image may first be displayed before providing the operator with an option to display the 3D image.
Description
TITLE: SYSTEM, APPARATUS AND METHOD FOR PERFORMING SECURITY
SCREENING AT A CHECKPOINT USING X-RAY AND CT SCANNING DEVICES AND GUI CONFIGURED FOR USE IN CONNECTION WITH SAME
CROSS-REFERENCE TO RELATED APPLICATION
[1] For the purposes of the United States, this application claims the benefit of priority under 35 USC § 119(e) based on co-pending U.S. Provisional Patent Application Serial No. 62/645,052, which was filed on March 19, 2018 and which is presently pending. The contents of the above-noted application are incorporated herein by reference.
FIELD
[2] The present invention relates generally to security systems and, more particularly, to a security screening system for assisting screening operators in screening items, in particular in connection with carry-on luggage, and to a method and/or apparatus for improving the efficiency of security screening processes at security checkpoints of the type present at secured facilities.
BACKGROUND
[3] Security in airports, train stations, ports, mail sorting facilities, office buildings and other public and/or private venues is becoming increasingly important, particularly in light of recent violent events.
[4] Typically, checkpoint security-screening systems make use of scanning devices that use penetrating radiation to scan items (such as pieces of carry-on luggage or other items) in order to obtain image data conveying information pertaining to the contents of the items. Such scanning devices generally include a conveyor on which the items are positioned, either directly or on a support such as a tray. The conveyor displaces the objects positioned thereon towards an inspection area, also referred to as the scanning tunnel, where the objects are subjected to penetrating radiation in order to generate image data conveying information on the contents and/or composition of the items.
[5] Typical scanning devices that may be used to provide image data in this context include X-ray scanners and computed tomography (CT) scanners. X-ray scanners typically use penetrating radiation to generate one or more 2D images of items under inspection. For a given item, in the case of a single-view X-ray scanner, one 2D image of the item is generated and in the case of a multi-view X-ray scanner, two or more 2D images of the item may be generated. CT scanners typically use penetrating radiation to generate a plurality of “slices” of items under inspection in order to generate a 3D representation of the item.
[6] The scanning devices are in communication with one or more display devices on which images derived from the generated image data may be rendered through a displayed Graphical User Interface (GUI). Using the GUIs, human operators visually inspect the images, alone or with the assistance of information generated by one or more automated threat detection (ATD) tools, in order to assess whether the items may present a threat. For example, an ATD tool may make use of an image processing algorithm to process image data associated with an item under inspection to identify shapes and/or materials that may indicate that the item is likely to present a potential threat (e.g. the item may include or hold one or more objects such as guns, knives, bottles of liquid or other objects that may be considered to present a potential threat). Through control elements provided on the GUI, the human operator may perform a number of functions including manipulating the displayed image and assigning a threat level to the item under inspection (e.g. low, medium, high level of threat). In some cases, the human operator may also use controls on the GUI to control the displacement of the items under inspection through the security checkpoint, for example by generating control signal for controlling switches of the conveyor.
[7] In traditional security checkpoints, each scanning device includes one or more dedicated display devices connected thereto on which GUIs specifically configured to operate with that scanning device are used to render images for display to human operators and to provide tailored tools to manipulate the images and/or perform specific functions.
[8] In recent years at security checkpoints, for example at airports, in view of improving throughput while seeking to concurrently control costs and provide high screening quality/accuracy, operators have been turning to remote screening systems. Generally speaking,
in remote screening systems, image data is derived by scanning the items using one or more scanning devices, and the display device(s) on which the images are displayed for visual inspection is (are) located remotely from the one or more scanning devices. This provides a number of advantages including, without being limited to, allowing images originating from multiple devices to be pooled and presented on a GUI displayed on a same display device in a centralized location, enabling a reduction in the number of screeners. A specific example of a method and system for use in performing security screening providing remote screening system capabilities is described in U.S. Patent No. 9,014,425 issued on April 21, 2015. The contents of the aforementioned patent are incorporated herein by reference.
[9] In practical security checkpoints, it is not uncommon for different scanners made by different manufacturers and/or using different technology (e.g. single-view X-ray, multi-view X-ray and CT) to be used alongside each other. In particular, a current trend for airport security checkpoints is to replace existing X-ray scanners with CT scanners. However, due to the costs of the new equipment, this transition is likely to be performed in steps over the span of several years. As such, it is expected that airport security checkpoints will, for the next several years, need to perform screening with both X-ray scanners and CT scanners operating side-by-side.
[10] A challenge with providing remote screening for security checkpoints having different types of scanning devices (made by different manufacturers and/or using different technology) is that it is typically necessary to provide different types of GUIs adapted to the different devices to view the images generated and to allow for different functionality to be provided depending on the scanning device that was used to generate the image data. An added challenge arises when there are different types of scanning devices using different technologies that are used to generate image data at a security checkpoint using remote screening, in particular when there are some X-ray scanners and some CT scanners. Given the different types of image data generated and the different types of desired functionality/manipulation that are often expected when using CT scanners compared to X-ray scanners, different GUIs are essentially used for each type of device. A deficiency in using different GUIs in the context of providing remote screening capabilities for a security checkpoint is that they require human operators to be trained on different types of interfaces/tools, which increases the training time and thus increases
the training costs for the operators. An alternative is to have different human operators for each of the different types of scanning devices each being trained on a specific GUI. However, this alternative increases the labor costs associated with providing security screening as additional staff would likely need to be hired.
[11] To date, no suitable approach has been presented for providing remote screening capabilities for a security checkpoint in which different types of scanning devices, for example some X-ray scanners and some CT scanners, are used to generate image data.
[12] Another challenge arises from the use of CT scanners at security checkpoints for screening carry-on luggage. In particular, while data generated using CT scanners is considered to provide more complete information on the items under inspection, the data generated using CT scanners is voluminous compared to conventional (2D) X-ray image data, requiring increased bandwidth for transmittal of data and requiring significant processing power in order to render images that can be displayed and manipulated. As a result, there are some increased delays when using CT image data compared to the use of conventional (2D) X-ray image data. While for most applications, such as checked luggage applications, such latency may not materially impact the performance of the overall screening system, this is not the case for a security checkpoint checking carry-on luggage where passengers must wait in queue to be screened. The above deficiencies pertaining to latency and delays arise in connection with the use of CT scanners irrespective of whether or not remote screening functionality is provided at the checkpoint. However, the delay effects are compounded with remote screening functionality due to the requirements to transmit large amounts of data (the CT image data) over a computer network.
[13] In view of the above, there is a need in the industry for providing an improved security checkpoint screening system, including one that may provide remote screening capabilities, that addresses at least some of the deficiencies of existing screening systems.
SUMMARY
[14] Systems and methods are disclosed for performing security screening for a security checkpoint. A GUI configured for use in performing the security screening is also
disclosed. In some aspects, a single GUI is provided for screening different items scanned by different scanners. In some aspects, alternate image data conveying a 2D image of an item is obtained from 3D image data, and the 2D image is first displayed.
[15] In accordance with one aspect, there is provided a method for screening an item at a security checkpoint. The security checkpoint includes a checkpoint screening station. The method is implemented by a system including at least one programmable processor. The method includes receiving image data derived from scanning the item with penetrating radiation at the checkpoint screening station. The image data conveys a 3D image of the item. The method further includes processing the image data conveying the 3D image of the item to derive alternate image data conveying a 2D image of the item. The method further includes transmitting the derived alternate image data conveying the 2D image of the item for display on a display screen at a remote screening station. The remote screening station is in communication with the system over a computer network. The method further includes transmitting the image data conveying the 3D image of the item for display on the display screen at the remote screening station. Transmission of the image data conveying the 3D image of the item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the item.
[16] In some implementations, the image data conveying the 3D image of the item includes CT data, and the derived alternate image data conveys a simulated X-ray image of the item. The simulated X-ray image may be derived by applying simulation operations to the CT data, e.g. by projecting the CT data.
[17] By transmitting the derived alternate image data conveying the 2D image first (or at least in parallel with transmission of the image data conveying the 3D image), the system hedges against latency associated with the transmittal and processing for rendering of 3D images by making the 2D image available for display faster than the 3D image would have been available. In addition, screening of 2D images through visual examination by a human operator has been observed in some circumstances to be faster than screening of 3D images which may be due to the reduced complexity of the images, which may assist in improving screening
efficiency/speed of inspection. Faster screening may in turn result in faster passenger flow through a checkpoint screening station that uses a CT scanner.
[18] In another aspect, a corresponding system is disclosed. The system is for screening an item at a security checkpoint, where the security checkpoint includes a checkpoint screening station. The system includes a memory to store image data derived from scanning the item with penetrating radiation at the checkpoint screening station. The image data conveys a 3D image of the item. The system further includes a processor in communication with the memory. The processor is programmed to: (i) process the image data conveying the 3D image of the item to derive alternate image data conveying a 2D image of the item; (ii) cause transmission of the derived alternate image data conveying the 2D image of the item for display on a display screen at a remote screening station, wherein the remote screening station is in communication with the system over a computer network; (iii) cause transmission of the image data conveying the 3D image of the item for display on the display screen at the remote screening station, wherein the transmission of the image data conveying the 3D image of the item is performed subsequent to or in parallel with the transmission of the derived alternate image data conveying the 2D image of the item.
[19] In accordance with another aspect, there is provided a method for screening items at a security checkpoint. The security checkpoint includes a first checkpoint screening station with a screening device of a first type (e.g. a CT scanner) and a second checkpoint screening station with a screening device of a second type (e.g. an X-ray scanner) that is distinct from the first type. The method is implemented by a system including at least one programmable processor. The method includes receiving first image data derived from scanning a first item with penetrating radiation at the first checkpoint screening station. The first image data conveys a 3D image of the first item. The method further includes processing the first image data conveying the 3D image of the first item to derive alternate image data conveying a 2D image of the first item. The method further includes transmitting the derived alternate image data conveying the 2D image of the first item for display on a display screen at a remote screening station. The remote screening station is in communication with the system over a computer network. The method further includes transmitting the image data conveying the 3D image of the
first item for display on the display screen at the remote screening station. Transmission of the image data conveying the 3D image of the first item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the first item. The method further includes receiving second image data derived from scanning a second item with penetrating radiation at the second checkpoint screening station. The second image data conveys a 2D image of the second item. The method further includes transmitting the second image data conveying the 2D image of the second item for display on the display screen at the remote screening station.
[20] In some implementations, a first processor associated with the first checkpoint screening station performs the receiving of the first image data, the processing of the first image data, the transmitting of the derived alternate image data conveying the 2D image of the first item, and the transmitting of the image data conveying the 3D image of the first item. A second processor associated with the second checkpoint screening station performs the receiving the second image data and transmitting the second image data conveying the 2D image of the second item.
[21] Systems corresponding to the method above are also provided.
[22] In another aspect, a system is provided including at least one memory for storing first image data derived from scanning a first item with penetrating radiation at a first checkpoint screening station. The first image data conveys a 3D image of the first item. At least one processor is programmed to: (i) process the first image data conveying the 3D image of the first item to derive alternate image data conveying a 2D image of the first item; (ii) cause transmission of the derived alternate image data conveying the 2D image of the first item for display on a display screen at a remote screening station, wherein the remote screening station is in communication with the system over a computer network; (iii) cause transmission of the first image data conveying the 3D image of the first item for display on the display screen at the remote screening station, wherein transmission of the first image data conveying the 3D image of the first item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the first item. The at least one memory is also for storing second image data derived from scanning a second item with penetrating radiation at a second
checkpoint screening station, the second image data conveying a 2D image of the second item. The at least one processor is further programmed to transmit the second image data conveying the 2D image of the second item for display on the display screen at the remote screening station. In some implementations, the at least one memory may be first and second separate memories. The first memory is associated with the first checkpoint screening station and stores the first image data. The second memory is associated with the second checkpoint screening station and stores the second image data. In some implementations, the at least one processor may be first and second processors. The first processor is programmed to process the first image data, cause transmission of the derived alternate image data conveying the 2D image of the first item, and cause transmission of the first image data conveying the 3D image of the first item. The second processor is programmed to transmit the second image data conveying the 2D image of the second item.
[23] In another aspect, a system is provided including a remote screening station having a display screen for displaying images of items scanned at the security checkpoint. The system further includes a first computing device in network communication with both the remote screening station and a first checkpoint screening station. The first computing device includes: (i) a first memory for storing first image data derived from scanning a first item with penetrating radiation at the first checkpoint screening station, the first image data conveying a 3D image of the first item; (ii) a first processor programmed to: (1) process the first image data conveying the 3D image of the first item to derive alternate image data conveying a 2D image of the first item; (2) cause transmission of the derived alternate image data conveying the 2D image of the first item over the network for display on the display screen at the remote screening station; and (3) cause transmission of the first image data conveying the 3D image of the first item over the network for display on the display screen at the remote screening station, wherein transmission of the first image data conveying the 3D image of the first item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the first item. The system further includes a second computing device in network communication with both the remote screening station and a second checkpoint screening station. The second computing device includes: (i) a second memory for storing second image data derived from scanning a second item with penetrating radiation at the second checkpoint screening station, the
second image data conveying a 2D image of the second item; and (ii) a second processor programmed to transmit the second image data conveying the 2D image of the second item over the network for display on the display screen at the remote screening station.
[24] In accordance with another aspect, there is provided a method for screening an item at a security checkpoint. The method is implemented by a system including at least one programmable processor. The at least one programmable processor is configured for implementing a GUI on a display screen. The GUI is configured for: (i) displaying a 2D image of the item; (ii) providing an input object operable by an operator. The input object is configured to selectively acquire: (1) an enabled state in which the input object is able to receive a user request to display a 3D image of the item; and (2) a disabled state in which the input object is unable to receive the user request to display the 3D image of the item. The method includes displaying the 2D image of the item on the GUI and causing the input object to acquire the disabled state. The method further includes dynamically adapting the GUI to subsequently cause the input object to acquire the enabled state after a delay period. The delay period is based at least in part on receipt of image data conveying the 3D image of the item. The input object is configured to remain in the disabled state at least until the 3D image of the item is available for display on the GUI. In response to receipt through the input object of a specific user request to display the 3D image of the item, the method includes dynamically adapting the GUI to display the 3D image of the item on the display screen. The method may be performed at a remote screening station.
[25] Conventionally, a human operator may be accustomed to screening 2D X-ray images from an X-ray scanning device, and therefore may be accustomed to using an associated GUI and user interface tools developed for screening such 2D images. In accordance with some embodiments presented, a single GUI may now be provided to allow the operator to screen both 2D images and 3D images, where the 3D images may originate from a CT scanner, and where the 2D images may originate from an X-ray scanner or be a simulated X-ray image originating from CT data. In that same GUI, tools and general interactions with the user may be implemented so as to provide a similar user experience whether the image of the scanned item to be screened originates from a 2D X-ray scanner or a 3D CT scanner.
[26] In some implementations, the delay period for dynamically adapting the GUI to cause the input object to acquire the enabled state may be based at least in part on receipt of image data conveying the 3D image and upon an intentional delay period. The intentional delay period may be used to encourage a human operator to screen an item using a displayed 2D simulated X-ray image, rather than using the 3D image of the item, which may in turn make the screening process faster.
[27] In accordance with another aspect, a system corresponding to the above described method is also disclosed. The system includes a non-transitory memory for storing image data, a display screen, and at least one processor programmed to implement a GUI on the display screen. The GUI is configured for: (i) displaying a 2D image of the item; (ii) providing an input object operable by an operator, the input object being configured to selectively acquire: (1) an enabled state in which the input object is able to receive a user request to display a 3D image of the item; and (2) a disabled state in which the input object is unable to receive the user request to display the 3D image of the item. The processor is further programmed to display the 2D image of the item on the GUI, the input object being in the disabled state when the display of the 2D image is initiated. Following the display of the 2D image, the processor is further programmed to dynamically adapt the GUI to subsequently cause the input object to acquire the enabled state after a delay period. The delay period is based at least in part on receipt of image data conveying the 3D image of the item. The input object is configured to remain in the disabled state at least until the 3D image of the item is available for display on the GUI. In response to receipt through the input object of a specific user request to display the 3D image of the item, the processor is further programmed to dynamically adapt the GUI to display the 3D image of the item on the display screen.
[28] In accordance with another aspect, there is provided a method for screening an item at a security checkpoint including a checkpoint screening station. The method is implemented by a system including at least one programmable processor. The method includes receiving image data derived from scanning the item with penetrating radiation at the checkpoint screening station, the image data conveying a 3D image of the item. The method further includes processing the image data conveying the 3D image of the item to derive alternate image data
conveying a 2D image of the item. In one embodiment, processing the image data includes defining a plurality of projection paths through the 3D image of the item, at least some projection paths through the 3D image in said plurality of projection paths extending along convergent or divergent axes; and projecting the image data conveying the 3D image of the item along the projection paths to derive the alternate image data conveying the 2D image of the item. The method further includes transmitting the derived alternate image data conveying the 2D image of the item for display on a display screen of a screening station. The method further includes transmitting the image data conveying the 3D image of the item for display on the display screen of the screening station, wherein transmission of the image data conveying the 3D image of the item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the item.
[29] The method above is not specific to a remote screening station and may in some implementations be used to assist with passenger throughput even when screening at a local screening station, e.g. at a lane. More specifically, the display of a 2D image first may encourage faster screening because a 2D image may be all that is needed for many items, and a 2D image may be inspected more quickly than a 3D image.
[30] In another aspect, a system is also disclosed for screening an item at a security checkpoint. The security checkpoint includes a checkpoint screening station. The system includes: (a) a memory to store image data derived from scanning the item with penetrating radiation at the checkpoint screening station, the image data conveying a 3D image of the item; and (b) a processor in communication with said memory. The processor is programmed to: (i) process the image data conveying the 3D image of the item to derive alternate image data conveying a 2D image of the item, wherein the processor is programmed to process the image data by defining a plurality of projection paths through the 3D image of the item, at least some projection paths through the 3D image in said plurality of projection paths extending along convergent or divergent axes; and projecting the image data conveying the 3D image of the item along the projection paths to derive the alternate image data conveying the 2D image of the item; (ii) cause transmission of the derived alternate image data conveying the 2D image of the item for display on a display screen of a screening station, wherein the screening station is in
communication with the system over a computer network; and (iii) cause transmission of the image data conveying the 3D image of the item for display on the display screen of the screening station, wherein the transmission of the image data conveying the 3D image of the item is performed subsequent to or in parallel with the transmission of the derived alternate image data conveying the 2D image of the item.
[31] In accordance with another aspect, there is provided at least one processor to perform the methods disclosed herein. In another aspect, there is provided a system to perform the methods disclosed herein. In another aspect, there is provided at least one computer readable medium having stored thereon computer executable instructions that, when executed, cause at least one processor to perform at least part of one or more of the methods disclosed herein.
[32] All features of embodiments which are described in this disclosure and are not mutually exclusive can be combined with one another. Elements of one embodiment can be utilized in the other embodiments without further mention.
[33] Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[34] A detailed description of specific exemplary embodiments is provided herein below with reference to the accompanying drawings in which:
[35] FIG. 1 is a block diagram of a security checkpoint screening system, according to one embodiment;
[36] FIG. 2 is a flowchart of a method performed by a security checkpoint screening station and remote screening station, according to one embodiment;
[37] FIG. 3 is a block diagram of a security checkpoint screening system, according to another embodiment;
[38] FIG. 4 is a flowchart of a method performed by a security checkpoint screening station and remote screening station, according to another embodiment;
[39] FIG. 5 is a block diagram of a security checkpoint screening station, according to one embodiment;
[40] FIG. 6 is a flowchart of a method performed by a local security checkpoint screening station, according to one embodiment;
[41] FIG. 7 is a flowchart of a method performed by a local security checkpoint screening station, according to another embodiment;
[42] FIG. 8 is a block diagram of a security checkpoint screening system, according to another embodiment;
[43] FIG. 9 is a flowchart of a method performed at a remote screening station, according to one embodiment;
[44] FIG. 10 illustrates an example of generating a flattened view from 3D data;
[45] FIG. 11 illustrates an example of generating a projected view from 3D data;
[46] FIG. 12 illustrates a kth slice of reconstructed density data, according to one embodiment;
[47] FIG. 13 illustrates one example of a flowchart for a colouring algorithm;
[48] FIG. 14 illustrates determining projection coordinates of corners of a 3D region of interest (ROI), according to one embodiment;
[49] FIG. 15 illustrates an example flow chart for the creation of at least one 2D image from 3D data;
[50] FIGs. 16 to 38 illustrate example GUI displays at a screening station;
[51] FIG. 39 illustrates an example flowchart for processing images of items scanned by an X-ray scanner;
[52] FIG. 40 illustrates an example flowchart for processing images of items scanned by a CT scanner;
[53] FIG. 41 illustrates an example of object removal;
[54] FIG. 42 illustrates an example of scene removal;
[55] FIG. 43 illustrates an example GUI display at a local screening station;
[56] FIG. 44 illustrates an example system for performing security screening; and
[57] FIGs. 45 to 47 are flowcharts of example methods for screening items for a security checkpoint.
[58] In the drawings, exemplary embodiments are illustrated by way of example. It is to be expressly understood that the description and drawings are only for the purpose of illustrating certain embodiments and are an aid for understanding. They are not intended to be a definition of the limits of the invention.
DETAILED DESCRIPTION
[59] A detailed description of one or more specific embodiments of the invention is provided below along with accompanying Figures that illustrate principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any specific embodiment. The scope of the invention is limited only by the claims. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of describing non-limiting examples and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in great detail so that the invention is not unnecessarily obscured.
[60] For illustrative purposes, specific example embodiments will now be explained in greater detail below in conjunction with the figures. In a specific example, the items under inspection are pieces of carry-on luggage. It is, however, to be appreciated that the concepts presented herein are applicable in situations where the items under inspection are objects other
than pieces of carry-on luggage, for example containers of liquid, shoes, laptops, purses, wallets, keys or any other type of objects screened at a security checkpoint. Moreover, while the present application may refer to “carry-on luggage” in the context of certain embodiments of the inventions configured for airport security checkpoints, it is to be appreciated that the concepts presented may be used in the context of security checkpoints more generally and their use is not limited to security checkpoints at airports.
[61] FIG. 1 is a block diagram of a security checkpoint screening system 102, according to one embodiment. The system 102 includes a security checkpoint having three security checkpoint screening stations 104a, 104b, and 104c, although more or fewer may be present.
[62] Security checkpoint screening station 104a includes a pre-scan area 106a, a scanning device 108a, and a post-scan area 110a. The scanning device 108a is a computed tomography (CT) scanner, and will be referred to as CT scanner 108a. A scanner may also be referred to as a screening device.
[63] A conveyor in the pre-scan area 106a conveys hand luggage into the CT scanner 108a. Hand luggage may also be called “carry-on luggage” or “cabin luggage”. The word “luggage” is used to describe any articles that are scanned, e.g. bags, personal items, jackets, etc.
[64] The CT scanner 108a performs a CT scan of each item. An “item” refers to a standalone piece of luggage, a standalone group of luggage, or a tray having one or more pieces of luggage therein. For a given item that is scanned, the CT scanner 108a scans the item with penetrating radiation and generates CT image data, which is three-dimensional (3D) data, e.g. reconstructed density and/or effective atomic number (“Zeff”) data. For example, there may be two sets of 3D data generated by the CT scanner 108a for an item: a first set representing the density data, and a second set representing the Zeff data. In other implementations, there may only be density data generated for the item. In other implementations, there may be density data and partial Zeff data generated for the item (e.g. Zeff data for only some regions of the complete volume).
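By way of non-limiting illustration, the 3D outputs described above might be held together in a structure along the following lines. This is a Python sketch only; the class and field names are assumptions of this rendering and not part of any scanner interface:

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class CTScanData:
    """Hypothetical container for the 3D data generated for one scanned item.

    density: voxel cube of reconstructed density values, one value per
             location of the reconstructed volume.
    zeff:    voxel cube of effective atomic number ("Zeff") values, or None
             when the scanner reconstructs density only; partial Zeff data
             could be represented with NaN outside the reconstructed regions.
    """
    density: np.ndarray                # shape: (n_slices, n_rows, n_cols)
    zeff: Optional[np.ndarray] = None  # same shape as density when present
```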
[65] The CT image data from the CT scanner 108a conveys a 3D image of the item and is stored in memory 112a. The CT scanner 108a may optionally also provide automatic threat detection (ATD) results associated with the scanned item, in which case the ATD results are also stored in memory 112a. It will be appreciated that the term “ATD results”, as used herein, encompasses detection results that may be in the form of identified objects and/or identified regions. The results are automatically detected, but they may not inherently be threats, e.g. the ATD results may identify regions and/or objects of interest without necessarily presenting threats.
[66] A processor 114a is coupled to memory 112a and can process the data from the CT scanner 108a. A human screener may be located at the security checkpoint screening station 104a, and may access a user interface, including a graphical user interface (GUI) via a display device 116a to view images obtained from the CT scanner data. The GUI may be presented on one display monitor or a set of display monitors (e.g. one display monitor displaying a bottom view, and the other display monitor displaying a 3D view). Therefore, the term “display device” (e.g. display device 116a) is not limited to a single monitor implementation, but may be implemented by multiple monitors. The same remark applies to the other display devices mentioned herein, e.g. display device 216 described later. Also, the term “display screen” is used interchangeably with “display device” herein.
[67] The post-scan area 110a includes two conveyor sections alongside one another, one to receive the cleared items, and the other to receive items flagged for secondary screening. A divider 118a may isolate the conveyor section receiving the items flagged for secondary screening. If an item is flagged for secondary screening, it may be manually diverted, or it may be electronically diverted via one or more switches that are automatically controlled, e.g. based on threat information associated with the item. The threat assessment information may be from the ATD, from a human screener, or from both. It is to be appreciated that while the embodiment of FIG. 1 has shown a local display device 116a as being positioned close to the exit of scanning device 108a, in alternative embodiments, local display device 116a may be positioned further on in the displacement flow of the screening station, for example closer to the conveyor portion associated with secondary screening.
[68] Security checkpoint screening stations 104b and 104c each include similar components as security checkpoint screening station 104a, with one notable exception. In each of security checkpoint screening stations 104b and 104c an X-ray scanner is used instead of a CT scanner. Specifically, security checkpoint screening station 104b includes scanning device 108b, which is an X-ray scanner and will be referred to as X-ray scanner 108b. For a given item that is scanned, the X-ray scanner 108b scans the item with penetrating radiation to generate X-ray image data, which is two-dimensional (2D) image data. The image data from the X-ray scanner 108b conveys a 2D image of the item and is stored in memory 112b. The X-ray scanner 108b may optionally also provide ATD results associated with the 2D X-ray image data for a scanned item, in which case the ATD results would also be stored in memory 112b. A processor 114b is coupled to memory 112b and can process the data from the X-ray scanner 108b.
[69] Similarly, security checkpoint screening station 104c includes X-ray scanner 108c, and the image data from the X-ray scanner 108c is stored in memory 112c and processed by processor 114c.
[70] Security checkpoint screening stations 104b and 104c are illustrated as having X-ray scanning devices instead of a CT scanning device to emphasize the fact that there may be implementations in which only one or some of the security checkpoint screening stations have a CT scanner. CT scanning devices are typically more expensive than X-ray scanning devices, and so a CT scanner may only be present at some screening stations. Some embodiments provided below describe a single GUI that allows for the screening of images originating from both CT scanners and X-ray scanners. Other embodiments described below only pertain to the handling of 3D image data, e.g. CT data, and so in those embodiments it may optionally be that there are no X-ray scanners and only one or more CT scanners.
[71] The security checkpoint screening stations 104a-c are illustrated as each having the same physical layout, but this is not necessary. Also, the specific physical layout of an illustrated security checkpoint screening station is only an example, and variations are possible. For example, the use of two conveyor sections in the post-scan area is optional, and if the two conveyor sections exist, they do not have to be alongside one another. The divider in the post-scan area (e.g. divider 118a), if even present, may have a different configuration. The local operator and/or local GUI (e.g. the GUI displayed on display device 116a) is optional. The conveyor may be configured in a different way from what is illustrated. Also, although each security checkpoint screening station 104a-c is illustrated as having its own respective processor and memory (e.g. processor 114a and memory 112a at station 104a), alternatively there may be a single centralized processor and/or memory. That is, instead of processors 114a-c and memories 112a-c, there may be a single centralized processor and/or memory that is connected to scanning devices 108a-c, e.g. via a computer network (not shown).
[72] Security checkpoint screening system 102 supports remote screening, and therefore a remote screening station 204 is also illustrated. The term “remote screening station” refers to a screening station that is not local to a particular security checkpoint screening station. A remote screening station may be in the same or another room as the security checkpoint screening stations 104a-c, in another (possibly adjacent) section of the airport, or it may be offsite, e.g. not even at the airport. A screening station adjacent to a particular lane at a security checkpoint may still be considered a remote screening station if it receives images originating from one or more scanners of other lanes. In such a case, the screening station is remote in relation to those other lanes. For example, in one practical implementation there may be an analysis station at a lane that receives images coming from one or more scanners at the checkpoint or airport other than the scanner at that lane. In other words, that analysis station would be considered to be a remote screening station in relation to the other lanes. Only one remote screening station 204 is shown in FIG. 1, but alternatively there may be a plurality of remote screening stations.
[73] The remote screening station 204 includes a local memory 212 and a processor 214 that processes received image data and ATD results (if any) and implements a user interface, including a GUI on a display screen. The GUI is presented on the display device 216. The remote screening station 204 is configured to receive scanned images from the security checkpoint screening stations 104a-c via a computer network, which may be wired and/or wireless. A stippled line 252 is used to illustrate the network connection between the remote screening station 204 and the security checkpoint screening stations 104a-c. The processor at each security checkpoint screening station may communicate with the processor at the remote
screening station. For example, each of processors 114a-c may be coupled to a respective network interface (not illustrated) and may be configured to use their respective network interface to send data to and receive data from processor 214 over the network. The processor 214 may also be coupled to a respective network interface (not illustrated) and be configured to use the network interface to send data to and/or receive data from processors 114a-c.
[74] Although the processor 214 and memory 212 are illustrated as part of remote screening station 204, in some implementations the processor 214 and/or the memory 212 may be centrally located, with the remote display device 216 connected to the processor 214 over a network. The processor 214 and/or memory 212 may also be the processor and/or memory for other remote screening stations (not illustrated). Therefore, when the term “remote screening station” is being used herein, it also encompasses an implementation in which the display device of the remote screening station is coupled to a processor and/or memory that may not be located in the vicinity of the display device. The processor and/or memory may also be part of other remote screening stations and/or perform other functions.
[75] In operation, when an item that is scanned by any one of scanners 108a-c is to be reviewed by an operator at remote screening station 204, then the scanned data is sent over the computer network and received at remote screening station 204, stored in memory 212 at the remote screening station, and then processed by the processor 214 to be rendered on the GUI on the display device 216 of the remote screening station.
Processing of items scanned by a CT scanner
[76] When an item is scanned by CT scanner 108a, the CT scanner 108a generates CT data in the form of three-dimensional (3D) data, e.g. reconstructed density and Zeff data. The amount of data generated is more than that generated by an X-ray scanner. This is because the CT scanner 108a provides 3D data to render a 3D image of the scanned item, whereas an X-ray scanner provides only 2D data to render a 2D image of the scanned item. A technical problem arises from the presence of a CT scanner as compared to an X-ray scanner. Specifically, the increased amount of data associated with a 3D CT scan of an item, compared to a 2D X-ray scan, means that it takes longer for a processor to perform processing for rendering the 3D CT scanned item on a GUI of a display device, compared to processing for rendering a 2D X-ray image of
the item. The processing for rendering includes any processing for preparing the data for rendering. Additionally, it takes longer to transmit the 3D CT scanned data over the network to the remote screening station 204, compared to transmitting a 2D X-ray image of the item. Therefore, a 2D image of a scanned item (e.g. a 2D image originating from an X-ray scanner) may be displayed faster on a GUI of a display device and may be transmitted faster to the remote screening station 204 than a 3D image rendered from a CT scan of the item. The relative delay associated with transmitting and/or processing for rendering a 3D image of a CT scanned item is undesirable because it may delay the flow of passengers through the security checkpoint screening stations. For example, it is undesirable to have the flow of passengers through security checkpoint screening station 104a be slower than the flow of passengers through security checkpoint screening stations 104b-c.
[77] In view of the above, some embodiments disclosed below provide particular systems and methods for processing, transmitting, and displaying data from a CT scanner. In some embodiments, simulation operations are performed on the CT data conveying the 3D image of the item in order to derive alternate image data conveying a 2D image of the item. The 2D image may be a simulated X-ray image of the item. In some embodiments, the simulated X-ray image is generated from the CT 3D image data by projecting the 3D image data to obtain a 2D image. In some embodiments, the alternate image data conveying the 2D image of the item is first transmitted to the remote screening station and displayed on the GUI of the remote screening station.
[78] FIG. 2 is a flowchart of a method performed by security checkpoint screening station 104a and remote screening station 204, according to one embodiment. In step 262, an item is scanned by CT scanner 108a. In step 264, the data from the CT scanner 108a that is associated with the scanned item is received and stored in memory 112a at the security checkpoint screening station 104a. The data includes 3D data used for rendering a 3D image of the scanned item (e.g. the 3D data may comprise reconstructed density and Zeff data), and optionally also includes ATD data. The ATD data may have been generated by the CT scanner 108a when scanning the item. In other embodiments, the ATD data is generated by processor 114a or received from another external computing module. In step 266, the processor 114a
processes the 3D data used for rendering the 3D image of the scanned item in order to generate at least one corresponding 2D image. The at least one corresponding 2D image may be a simulated X-ray image. As one example, two 2D images may be generated, e.g. a 2D bottom view of the scanned item and a 2D side view of the scanned item. In some embodiments, a material mask is also generated for each 2D image. Different ways in which to generate the at least one 2D image from the 3D CT image data are explained later. However, the exact method used is implementation specific. In one example, at least one 2D image is generated that is a projected view of the 3D image, and optionally has the “look and feel” of a 2D X-ray image generated by an X-ray scanner, such as a specific model of an X-ray scanner.
[79] In step 268, the at least one 2D image data is then transmitted to the remote screening station 204, possibly along with the ATD data.
[80] In one embodiment, after the at least one 2D image data is finished being transmitted to the remote screening station 204, then the 3D data used for rendering the 3D image of the scanned item is transmitted to the remote screening station 204. Transmitting the 3D data is relatively slow due to the quantity of data, and therefore is shown as beginning at step 270 and as finishing at step 272. For any given transmission rate over the network, the transmission of the 3D data used to render the 3D image will typically be slower than transmitting one or a small subset (e.g. two or three) of 2D images.
[81] Alternatively, step 270 may occur in parallel to step 268. More generally, the 3D data may be sent to the remote screening station in parallel to the at least one 2D image data. In such a scenario, it is expected that the at least one 2D image data is finished being received before the 3D data is finished being received.
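As a rough sketch of this ordering of steps 268 and 270, the bulky 3D payload might be streamed on one connection while the small 2D payload is written on another, so that the 2D image data is expected to finish arriving first. The host, ports, and function names in the following Python sketch are hypothetical and not part of this disclosure:

```python
import socket
import threading

def send_scan_results(host: str, port_2d: int, port_3d: int,
                      image_2d: bytes, data_3d: bytes) -> None:
    """Start streaming the 3D data in parallel with the 2D image data.

    Because the 2D payload is far smaller, it is expected to finish
    arriving at the remote screening station before the 3D payload.
    """
    def push(port: int, payload: bytes) -> None:
        with socket.create_connection((host, port)) as conn:
            conn.sendall(payload)

    t = threading.Thread(target=push, args=(port_3d, data_3d))
    t.start()                       # 3D transfer proceeds in the background
    push(port_2d, image_2d)         # small 2D payload typically lands first
    t.join()
```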
[82] At step 272, the transmission of the 3D data is complete, and in step 274 the 3D data is received and stored in memory 212 of the remote screening station 204. In step 276, the processor 214 processes the 3D data to render or load a 3D image for possible viewing on the GUI of the display device 216. Step 276 may take a non-negligible amount of time, e.g. a few seconds.
[83] Meanwhile, in parallel to steps 270 to 276, the following steps are performed at the remote screening station 204. In step 280, the at least one 2D image data is received at the remote screening station 204, and it is stored in memory 212 in step 282. In step 284, the processor 214 causes the at least one 2D image to be displayed on the GUI of the display device 216.
[84] In many cases, it may not be necessary for the remote human screener to view the 3D image on the GUI of display device 216. For example, if there is clearly nothing of concern shown in the 2D image(s) displayed, then the item may be cleared without the human screener viewing the corresponding 3D image. However, in other cases, the human screener may want to display the corresponding 3D image. Therefore, the method of FIG. 2 includes the following additional optional steps (shown in stippled lines).
[85] In step 286, the processor 214 receives an input originating from the user interface of the remote screening station 204. The input indicates that the 3D image is to be viewed on the GUI of the display device 216. The way in which the input is received is implementation specific. For example, the GUI may include an input object configured to receive the specific user request from the operator (human screener); the input object may be a button or other object on the GUI that may be selected via a touch screen, or using a keyboard, mouse, or other physical device that is part of the user interface. In step 288, the processor 214 causes the rendered 3D image to be retrieved from memory 212 and viewed on the GUI of the display device 216.
[86] Because the human screener at the remote screening station 204 will likely not want to view the 3D image right away (if at all), this typically provides time for steps 272 to 276 to complete before step 286. In some embodiments, the user interface at the remote screening station 204 may be configured to prevent the human screener from inputting a command requesting display of the 3D image until the 3D image is available to be displayed (e.g. until completion of step 274 or 276). For example, the GUI of the display device 216 may display an
element (e.g. button) that is only rendered selectable once the 3D image is available to be displayed. In some embodiments, the GUI may be configured for selectively enabling and disabling the input object through which the request is made to view the 3D image of the item. For example, the GUI may be configured for selectively causing the input object to acquire one of an enabled state and a disabled state based on the 3D image of the item being available for display at the remote screening station.
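A minimal, toolkit-agnostic sketch of such an input object follows; it is illustrative only, and all names are assumptions of this rendering rather than any particular GUI framework:

```python
from typing import Callable

class View3DButton:
    """Input object that stays disabled until the 3D image is available."""

    def __init__(self, show_3d_image: Callable[[], None]) -> None:
        self._show_3d_image = show_3d_image
        self.enabled = False            # disabled state while only the 2D image is shown

    def on_3d_image_ready(self) -> None:
        """Called once the 3D image has been rendered/loaded (cf. step 276)."""
        self.enabled = True             # input object acquires the enabled state

    def on_click(self) -> None:
        # In the disabled state, the user request is simply not accepted.
        if self.enabled:
            self._show_3d_image()
```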
[87] By performing the method of FIG. 2, the computer functionality of the security checkpoint screening system 102 is improved in the manner described above. In particular, the delay associated with transmitting and/or processing for rendering the 3D image from the 3D data may be mitigated by first generating at least one 2D image from the 3D data, and then transmitting and displaying the at least one 2D image while the 3D data is being transmitted and rendered.
[88] The method of FIG. 2 is not specific to the embodiment described in relation to FIG. 1. For example, the method of FIG. 2 may operate in conjunction with a CT scanner at any security checkpoint screening station, regardless of the number of security checkpoint screening stations, and regardless of whether there are any X-ray scanning devices. For example, FIG. 3 illustrates a variation of FIG. 1 in which the remote screening station 204 serves only two security checkpoint screening stations, each having a CT scanner. The illustration of two security checkpoint screening stations is only an example, e.g. there may be only one or more than two security checkpoint screening stations. Also, the specific physical layout of each security checkpoint screening station in FIG. 3 (e.g. conveyor sections alongside each other, divider wall, etc.) is not relevant and may be modified.
[89] FIG. 4 is a variation of FIG. 2 in which step 276 is not performed until an input is received requesting that the 3D image be displayed. For example, at the completion of step 272 or 274, a user input control item (e.g. an input object such as a button) may be activated that, when selected, causes the 3D data to be rendered or loaded and then viewed, as per steps 286, 276’, and 288 of FIG. 4. Steps 286, 276’, and 288 of FIG. 4 are illustrated in stippled lines in order to show that they are optional, i.e. only executed if the human screener decides to view the 3D image.
[90] The method of FIGs. 2 and/or 4 may also be adapted for a situation in which the screening station is not remote, but is located at a security checkpoint screening station, e.g. as in FIG. 5, which shows a local screening station 302. An additional remote screening station may or may not be present and therefore is not illustrated in FIG. 5.
[91] FIG. 6 illustrates the method of FIG. 2 adapted for operation with local screening station 302 of FIG. 5. In step 362, an item is scanned by CT scanner 108. In step 364, the data from the CT scanner 108 that is associated with the scanned item is stored in memory 112. The data includes 3D data used for rendering a 3D image of the scanned item (e.g. the 3D data may comprise reconstructed density and Zeff data), and optionally also includes ATD data. The ATD data may have been generated by the CT scanner 108 when scanning the item. In other embodiments, the ATD data is generated by processor 114. In step 366, the processor 114 processes the 3D data used for rendering the 3D image of the scanned item in order to generate at least one corresponding 2D image. The at least one corresponding 2D image may be a simulated X-ray image. As one example, two 2D images may be generated, e.g. a 2D bottom view of the scanned item and a 2D side view of the scanned item. Different ways in which to generate the at least one 2D image from the 3D data are explained later. In step 368, the at least one 2D image is stored in memory 112. In step 370, the at least one 2D image is displayed on the GUI of the display device 116. In step 372, the processor 114 processes the 3D data to render a 3D image for possible viewing on the GUI of the display device 116.
[92] In some cases, it may not be necessary for the human screener to view the 3D image on the GUI of display device 116. However, in other cases, the human screener may want to display the corresponding 3D image. Therefore, the method of FIG. 6 includes the following additional optional steps (shown in stippled lines). In step 374, the processor 114 receives an input originating from the user interface of the local screening station. The input indicates that the 3D image is to be viewed on the GUI of the display device 116. In step 376, the processor 114 causes the rendered 3D image to be retrieved from memory 112 and viewed on the GUI of the display device 116.
[93] Note that the delay in sending the 3D CT data over the network to a remote screening station is not relevant in FIG. 6 because the display device 116 is local. However, the
method of FIG. 6 still provides technical benefits, e.g.: 1) the 2D image(s) obtained from the 3D data may be displayed first while the 3D image is being processed for rendering to be available for display on the GUI of the display device 116; and 2) the 2D image is generated and displayed because it may be more quickly loaded and/or more quickly viewed by a human screener compared to the 3D image, and/or the human screener may be more familiar with and comfortable with viewing 2D images, which may simulate X-ray images from a classical X-ray machine.
[94] FIG. 7 illustrates a variation of FIG. 6 in which step 372 of FIG. 6 is optional and is performed after step 374.
[95] In some implementations, the security checkpoint screening system 102 may include both analysis workstations and recheck workstations. An example is illustrated in FIG. 8. Only one security checkpoint screening station 104a is illustrated in FIG. 8, but more may be present in actual implementation. Analysis workstations 204a-c and 303 are used to perform primary screening analysis. Analysis workstations 204a-c are remote screening stations, whereas analysis workstation 303 is a local screening station because it is installed near a lane and only used to screen images coming from that lane. Alternatively, analysis workstation 303 may instead be a remote screening station if it is not local to a particular security checkpoint screening station, for example if it receives images originating from different lanes/scanning devices. For example, analysis workstation 303 may be a remote screening station located in a separate room or separate area adjacent to the lanes. Workstations 305 and 307 are recheck workstations, e.g. to display the image of a scanned item plus the results of the screening performed at the analysis stations (e.g. regions of interest (ROIs) created by the human screener at the analysis workstations).
[96] In one embodiment, workstation 305 is a main recheck station and workstation 307 is a secondary recheck station. In a non-limiting practical implementation, the computing device implementing the main recheck station 305 may also be used to implement other processing functionality of the security checkpoint screening system 102; for example, it may be programmed to gather data from the scanning device and perform processing of the data prior to screening, e.g. it performs generation of the 2D image from 3D data originating from a CT scanner as described elsewhere in the present document. Alternatively,
such functionality may be implemented by a separate physical computing device. A network switch 309 is used to facilitate network communication between the components depicted in FIG. 8. In FIG. 8, 3D CT data and/or corresponding 2D simulated X-ray images and/or data from an X-ray scanner may possibly be communicated to any one, some, or all of the workstations, depending upon the implementation.
Displaying the 2D simulated X-ray image before the 3D image
[97] In some embodiments described above, the 3D data originating from the CT scanner and associated with a scanned item is processed in order to generate at least one corresponding 2D image, e.g. a simulated X-ray image. The 2D image is then displayed prior to the rendered 3D image, and displaying the 3D image is optional and at the discretion of the operator (the human screener). This provides the following possible benefits: (1) the screening process may be more efficient by first viewing the 2D image(s), and only requesting the 3D image when necessary; (2) processing for rendering the 3D image takes time (delay), and while this delay is being incurred, the 2D image(s) may be viewed; (3) human screeners may be more used to screening and viewing 2D images compared to 3D images, because a 2D image is closer to an image from a classical X-ray machine, and so may screen 2D images faster.
[98] The 3D image generated from the 3D data from the CT scanner is beneficial in that the 3D image may convey information in a way that provides more insight compared to just 2D image(s), e.g. compared to 2D X-ray images from a classical X-ray machine. However, viewing and manipulating the 3D image by the human screener may increase screening time. Not only does the 3D image need to be processed for rendering, which takes time, but once rendered the human screener may also be more likely to spend additional time viewing the 3D image, e.g. rotating it on the screen to view it at all angles. Viewing the 3D image serves an important security function, but for many scanned items it is not necessary. Ideally, the 3D image would primarily only be used when the 2D image(s) shows a possible suspicious item that the human screener wants to look at in more detail, and the human screener determines that the 3D image would assist. For example, just the 2D image(s) should be adequate for a scanned tray having only a sweater in it, whereas the 3D image may be consulted if the scanned tray has unusual objects that pose a possible threat.
[99] In some embodiments, the availability of the 3D image to the human screener, e.g. the duration of time the human screener must wait until the 3D image is made available to be displayed on the GUI, may be controlled to encourage efficient screening.
[100] As one example, and with reference to FIG. 1, an item is scanned by the CT scanner 108a, and data associated with the scanned item is stored in memory 112a. The CT data includes 3D data used for rendering a 3D image of the scanned item. The processor 114a processes the 3D data in order to generate at least one corresponding 2D image, e.g. a simulated X-ray image. The image data conveying the corresponding 2D image is transmitted to the remote screening station 204 and the 2D image is displayed on the display 216 of the remote screening station 204. The 3D data is also transmitted to the remote screening station 204, stored in memory 212, and the 3D data is processed (by the processor 214) for rendering a 3D image so that it is available for possible viewing on the display 216. However, even though the 3D image is available for display on the display 216, the processor 214 waits for a further intentional delay before allowing the 3D image to be displayed on the display 216. For example, the GUI of the display 216 may include a button that, when selected by the human screener, instructs the processor 214 to display the rendered 3D image on the display 216. The button is disabled when the 3D image is not available for display, i.e. the human screener cannot select the button when the 3D image is not even available for display. However, even when the 3D image becomes available for display, the processor 214 may still keep the button disabled (not selectable) for an additional period of time, i.e. an intentional delay. This intentional delay may encourage the human screener to just use the currently displayed 2D image(s). The intentional delay may be configurable, e.g. by the operator (human screener) at the remote screening station and/or by a manager, supervisor, or system administrator. The intentional delay may be configured through the remote screening station user interface or another user interface. For example, a configuration tool may be used to manage all or many of the configurations of the system, and the user interface that is part of the configuration tool may be used to configure the intentional delay.
[101] In some embodiments, the GUI of the remote screening station may be configured for selectively enabling and disabling an input object (e.g. a button) through which the request is
made to view the 3D image of the item. The GUI is then configured for selectively causing the input object to acquire one of an enabled state and a disabled state based on the 3D image being available for display and based on any intentional delay. For example, the input object may acquire the enabled state upon completion of the added intentional delay. An example showing a button on the GUI disabled is illustrated in FIG. 16. FIG. 16 is discussed later.
[102] The intentional delay added by the processor 214 before allowing the 3D image to be displayed may be adjustable dependent upon different factors. For example, in one embodiment, if the 3D image originates from a general passenger lane, then the added intentional delay is zero seconds, e.g. the 3D image is available to be selected and displayed as soon as the 3D image is ready for rendering. Whereas if the 3D image originates from a low security risk lane (e.g. a security lane that is only for employees of the airport and airlines, or a security lane that is for trusted travellers, such as NEXUS card holders), then the added intentional delay is five seconds, e.g. the 3D image is available to be selected and displayed five seconds after the 3D image is ready for rendering. The low security risk lane should have fewer potential security threats compared to a general passenger lane, and so unnecessary use of the 3D image is discouraged for scanned items originating from the low security risk lane by making the human screener wait an additional five seconds to view the 3D image. In another example, if the 3D image is a scan of a relatively complex item (e.g. a suitcase), then the added intentional delay is zero seconds, whereas if the 3D image is a scan of a relatively simple item (e.g. a coat), then the added intentional delay is five seconds.
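For illustration, the per-lane intentional delay might be kept in a small configuration table and applied once the 3D image is ready. The values in the following Python sketch merely mirror the zero-second and five-second examples above; the table, function, and callback names are hypothetical:

```python
import time
from typing import Callable

# Hypothetical per-lane configuration; values mirror the examples in the text.
INTENTIONAL_DELAY_SECONDS = {
    "general": 0.0,    # general passenger lane: no added delay
    "low_risk": 5.0,   # employee / trusted-traveller lane
}

def enable_3d_view_after_delay(lane_type: str,
                               enable_input_object: Callable[[], None]) -> None:
    """Apply the configured intentional delay once the 3D image is ready,
    then let the 3D-view input object acquire the enabled state."""
    time.sleep(INTENTIONAL_DELAY_SECONDS.get(lane_type, 0.0))
    enable_input_object()
```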
[103] In the examples above, the added intentional delay value of five seconds is just an example. The actual added intentional delay value would be implementation specific.
[104] In some embodiments, the 3D image is first processed for rendering so that it is ready for display, and then the added intentional delay is applied. In other embodiments, the added intentional delay is first added, and upon completion of the added intentional delay the human screener is provided with the option, at the user interface, of being able to request viewing of the 3D image. Then, only if the human screener requests viewing of the 3D image is the 3D image processed for rendering and rendered for display. In other embodiments, the human screener may be immediately provided with the option, at the user interface, of being able
to request viewing of the 3D image. If the human screener requests to view the 3D image, then the added intentional delay is incurred before the 3D image is actually presented on the display.
[105] The added intentional delay described above is not implemented for X-ray images originating from X-ray scanners (if there are any X-ray scanners in the security checkpoint system) because X-ray scanners do not generate 3D image data. Also, the added intentional delay may be different for images originating from different CT scanners. For example, if a first CT scanner is in a general passenger lane and a second CT scanner is in a trusted passenger lane, then no added delay may be incurred before allowing for display of 3D images originating from the first CT scanner, whereas an added delay of five seconds may be incurred before allowing for display of 3D images originating from the second CT scanner.
[106] FIG. 9 illustrates a method performed at remote screening station 204, according to one embodiment. In step 422, 2D image data is received at the remote screening station 204 and stored in memory 212. The 2D image data was generated from 3D data originating from a CT scanner. For example, the 2D image data may be a simulated X-ray image. In one embodiment, the 2D image data is received over a computer network from security checkpoint screening station 104a.
[107] In step 424, the processor 214 causes the 2D image to be displayed on the GUI of the display device 216. In step 426, the corresponding 3D data is received at the remote screening station 204 and stored in memory 212. In step 428, the processor 214 processes the 3D data to prepare for rendering a 3D image that can be displayed on the GUI of the display device 216. The processing may include decoding. In step 430, the processor 214 waits an added intentional delay of x seconds before modifying the user interface of the remote screening station to allow the human screener to request display of the 3D image. In step 432, once the added delay of x seconds has expired, the processor 214 modifies the user interface to allow the human screener to request display of the 3D image. In step 434, the processor 214 receives an input originating from the user interface of the remote screening station 204. The input indicates that the 3D image is to be viewed on the GUI of the display device 216. In step 436, the processor 214 causes a rendered 3D image to be displayed and viewed on the GUI of the display device 216.
[108] The implementation of an intentional delay, as described in the examples above, is not specific to remote screening applications. The intentional delay may be implemented at a local screening station, e.g. in the embodiment described in relation to FIG. 5, if desired.
Generating the 2D simulated X-ray image from the 3D data
[109] In embodiments described above, the 3D data originating from the CT scanner is processed in order to generate at least one corresponding 2D image, e.g. a simulated X-ray image.
[110] One way to generate a 2D image from the 3D data is to obtain the portion of the 3D data corresponding to a view of the item from a particular perspective (e.g. top view of the item), and then “flattening” it, e.g. creating a flattened view by having all projection paths parallel to one another. To generate the flattened view, the projection is typically performed along one of the main axes of the Cartesian coordinate system (x, y or z). FIG. 10 illustrates an example of generating a flattened view from 3D data. The scanned item 442 is represented by 3D data originating from the CT scanner. A projection process 444 is applied to a 2D slice (e.g. perpendicular to the conveyor belt direction) using parallel projection paths 446. On each projection path the 3D data is projected into 2D. The projection process is performed for each of the slices of the object. The projection of one slice, as shown in FIG. 10, provides the values for one column of the 2D flattened image. When parallel projection paths are used, e.g. as in FIG. 10, the view generated will be referred to herein as a “flattened view”, which is different from the “projection views” described later.
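Because the parallel paths of FIG. 10 align with a main axis of the voxel grid, the projection reduces to a plain sum over voxels. The short Python sketch below illustrates this; the axis convention and names are assumptions of this rendering:

```python
import numpy as np

def flattened_view(density: np.ndarray, axis: int = 1) -> np.ndarray:
    """Generate a 2D flattened view from a 3D density cube using parallel
    projection paths along one Cartesian axis (cf. FIG. 10).

    density: voxel cube of values proportional to density (u),
             e.g. shaped (slices, rows, cols).
    """
    # Each parallel path crosses one voxel per unit step, so the projection
    # along a main axis reduces to a plain sum of voxel values.
    path_sums = density.sum(axis=axis)
    # A display mapping (e.g. the attenuation model of Equation #1 below)
    # can then be applied to turn accumulated sums into image intensities.
    return path_sums
```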
[111] The possible drawback of generating a 2D image that is a flattened view is that the 2D image will typically not have the “look and feel” of a 2D X-ray image that originates from a classical X-ray scanner. Most human screeners are conventionally used to viewing classical X-ray 2D images, and so improving the functionality of the computer to instead generate a corresponding 2D image from the 3D data that resembles a classical X-ray image is desirable. Specific example ways to do this are described below.
[112] In some embodiments, the image data conveying the 3D image of the item may be processed to derive alternate image data conveying a 2D image of the item, where the 2D image is a simulated X-ray image. The simulated X-ray image may be derived by applying simulation
operations on the 3D CT data. In some embodiments, the simulated X-ray image may be a projected view of the 3D image. Such a 2D image will be referred to as a “projected view”, or alternatively as a “2D projected view”, “projected image”, or “2D projected image”. The simulation operations may involve generating the projection using non-parallel projection paths, in other words projection paths that extend along convergent (or divergent) axes, e.g. as described in FIG. 11 below.
[113] FIG. 11 illustrates an example of generating a projected view from 3D data originating from a CT scanner, in order to result in a simulated X-ray image. The simulated X- ray image may resemble a classical X-ray image.
[114] In FIG. 11, the scanned item 442 is represented by 3D data originating from the CT scanner. An X-ray source 450 is simulated, and an array of X-ray detectors 452 is also simulated. The X-ray detectors 452 may also be called X-ray sensors. The projection paths 446 extend from the simulated X-ray source 450 to each of the simulated X-ray sensors 452. The position of the simulated source 450 and of the simulated sensors 452 relative to the scene is substantially the same as a real X-ray source and real X-ray detectors in a specific classical X-ray machine to thereby simulate a classical X-ray image. Note that the projection paths 446 are not parallel to one another but rather extend along axes that diverge from one another as they move away from the simulated X-ray source 450 towards the simulated sensors 452 (or alternatively axes that converge at the simulated X-ray source 450). In FIG. 11 a projection process 444 is applied to a 2D slice using the non-parallel projection paths 446. On each projection path the 3D data is projected into 2D. The projection process is performed for each of the slices of the object, i.e. the projection of one slice, as shown in FIG. 11, provides the values for one column of the 2D projected view.
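One non-authoritative way to approximate the per-slice projection of FIG. 11 in code is to sample each source-to-sensor segment at evenly spaced points rather than computing exact voxel intersection lengths. The following Python sketch rests on that nearest-neighbour simplification, and all names and the sampling scheme are assumptions of this rendering:

```python
import numpy as np

def project_slice(slice_u: np.ndarray, source: np.ndarray,
                  sensors: np.ndarray, n_samples: int = 512) -> np.ndarray:
    """Accumulate sum(u * t) along divergent paths from a simulated X-ray
    source to each simulated sensor for one slice (cf. FIG. 11).

    slice_u : (rows, cols) density values u for one reconstructed slice.
    source  : (2,) [row, col] position of the simulated source.
    sensors : (n_sensors, 2) [row, col] positions of the simulated sensors.
    """
    rows, cols = slice_u.shape
    t = np.linspace(0.0, 1.0, n_samples)[None, :, None]
    # Sample points along each source -> sensor segment: (n_sensors, S, 2).
    pts = source[None, None, :] + t * (sensors[:, None, :] - source[None, None, :])
    r = np.clip(np.rint(pts[..., 0]).astype(int), 0, rows - 1)
    c = np.clip(np.rint(pts[..., 1]).astype(int), 0, cols - 1)
    # Ignore samples falling outside the reconstructed slice.
    inside = ((pts[..., 0] >= 0) & (pts[..., 0] <= rows - 1) &
              (pts[..., 1] >= 0) & (pts[..., 1] <= cols - 1))
    values = np.where(inside, slice_u[r, c], 0.0)
    # Step length differs per path because the paths are not parallel.
    step = np.linalg.norm(sensors - source, axis=1) / (n_samples - 1)
    return values.sum(axis=1) * step          # one value per simulated sensor
```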
[115] One example projection algorithm for generating a 2D projected view is described as follows. It will be appreciated that the following is only a specific example, and that a different implementation may use a different or modified version of the algorithm below.
[116] The example projection algorithm comprises three steps: generate the 2D image corresponding to the X-ray attenuation image; generate the 2D image corresponding to the projected Zeff image; and use standard coloring algorithms to generate the color image.
1. Generation of the X-ray attenuation image:
[117] The X-ray attenuation image simulates the X-ray intensity received at the simulated detector locations from a simulated mono-energetic source for the analyzed scene. The output intensity $I_{out}$ of an X-ray having an input intensity $I_{in}$ that passes through a material having a thickness of $t$ is:

$$I_{out} = I_{in} e^{-ut} \quad \text{(Equation #1)}$$

where $u$ is a quantity proportional to the density of the material.
[118] If the X-ray passes through more than one material (say materials 1, 2, ..., K) the equation becomes:

$$I_{out} = I_{in} e^{-\sum_{m=1}^{K} u_m t_m} \quad \text{(Equation #2)}$$

where $u_m$ and $t_m$ are the values of $u$ and the thickness of the $m$th material, respectively.
[119] $u$ is one of the two quantities that are reconstructed by the CT process. In particular, one output of the CT scan (i.e. one part of the 3D data), generally called the density data, is a cube of voxels providing the value of $u$ (or a value proportional to it) for each location of the reconstructed volume.
[120] Each pixel of the X-ray attenuation image is the result of Equation #2 for one specific projection path. In particular, the pixel of the $k$th column and $s$th line of the X-ray attenuation image corresponds to the attenuation between the simulated source and the $s$th simulated sensor for the $k$th slice of the reconstructed data.
[121] FIG. 12 illustrates the $k$th slice of the reconstructed density data, according to one embodiment. In general, the slice is a grid of $I \times J$ square or rectangular voxels 454. For the projection path going from the simulated source 450 to the $s$th simulated sensor 456, the computed X-ray attenuation $I_{ks}$ corresponding to the value of the X-ray attenuation image at the coordinates $(k, s)$ is:

$$I_{ks} = I_{in} e^{-\sum_{i,j} u_{ij} t_{ij}} \quad \text{(Equation #3)}$$

where $u_{ij}$ is the value of $u$ for the voxel having the coordinates $(i, j)$ in the $k$th slice of the density data and $t_{ij}$ is the length of the intersection of the projection path with the voxel $(i, j)$.
[122] The X-ray attenuation image is then computed by applying Equation #3 for every sensor and for each slice of the density data.
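Assembling Equation #3 for every sensor and slice might then look as follows, assuming unit input intensity is acceptable and that a per-slice line-integral routine (for instance the sampling sketch above) supplies the accumulated sums of $u_{ij} t_{ij}$. The function and parameter names are illustrative:

```python
import numpy as np
from typing import Callable

def xray_attenuation_image(density: np.ndarray,
                           line_integral: Callable[[np.ndarray], np.ndarray],
                           i_in: float = 1.0) -> np.ndarray:
    """Build the simulated X-ray attenuation image of Equation #3.

    density: (n_slices, rows, cols) reconstructed density cube; column k
    of the output image comes from slice k of the density data.
    """
    columns = [i_in * np.exp(-line_integral(density[k]))
               for k in range(density.shape[0])]
    # Stack per-slice results so entry (s, k) is I_ks for sensor s, slice k.
    return np.stack(columns, axis=1)
```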
2. Generation of the projected Zeff image:
[123] Another quantity typically generated by a CT scanner is Z effective (Zeff) data. That is, the 3D data originating from a CT scanner in relation to an item scanned by the CT scanner typically includes Zeff data. For example, the CT scanner may output a cube of voxels providing the Zeff value for each location of the reconstructed volume.
[124] In some implementations, the resolution of the Zeff reconstruction may differ from the resolution of the density data, e.g. the voxel sizes may differ. Also, some CT machines may not reconstruct the Zeff data for the whole volume or may not reconstruct the Zeff data at all. In that case, in some embodiments, the Zeff data is approximated from the density data alone, e.g. a Zeff value is associated with all possible density values.
[125] The projected Zeff image is a 2D image representing the Zeff of the scanned item along the projection path. The projection paths are the same as for the density projection. In one embodiment, the formula used is:

$$Z_{eff} = \left( \sum_i \alpha_i z_i^{p} \right)^{1/p} \quad \text{(Equation #4)}$$

where $\alpha_i$ and $z_i$ are the relative weight (fractional number of electrons per gram) and atomic number of the $i$th atomic element of the chemical formula of the compound, respectively, and the exponent $p$ is equal to 2.78.
[126] Based on Equation #4, and given the available outputs of the CT scanner, the projected Zeff, called $Z_{ks}$, for the projection path going from the simulated source to the $s$th simulated sensor for the $k$th slice of the Zeff data, corresponding to the value of the projected Zeff image at the coordinates $(k, s)$, can be approximated as:

$$Z_{ks} = \left( \frac{\sum_{i,j} u_{ij} t_{ij} \, z_{ij}^{p}}{\sum_{i,j} u_{ij} t_{ij}} \right)^{1/p} \quad \text{(Equation #5)}$$

where $z_{ij}$ is the value of Zeff for the voxel having the coordinates $(i, j)$ in the $k$th slice of the Zeff data.
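Under the weighted-power-mean reading of Equation #5 above (an assumption of this rendering, with $p$ = 2.78), the projected Zeff for a single path could be sketched in Python as follows; this is not the patent's reference implementation:

```python
import numpy as np

def projected_zeff_for_path(u: np.ndarray, t: np.ndarray, z: np.ndarray,
                            p: float = 2.78) -> float:
    """Approximate Z_ks of Equation #5 for one projection path.

    u, t, z: per-voxel density values u_ij, intersection lengths t_ij and
    Zeff values z_ij for the voxels crossed by the path in one slice.
    """
    w = u * t                       # attenuation-weighted contribution per voxel
    return float((np.sum(w * z ** p) / np.sum(w)) ** (1.0 / p))
```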
3. Use standard coloring algorithms to generate the color image:
[127] The standard output of a classical X-ray machine is a pair of X-ray attenuation images (high energy and low energy) from which a projected Zeff image can be computed. Coloring algorithms are applied to these sets of images to create the colored image.
[128] In the CT context, once the X-ray attenuation and the projected Zeff images are computed, similar algorithms can be used to obtain the colored images. FIG. 13 illustrates one example of a flowchart for a colouring algorithm. Different variations are possible. A step that generally differs from typical algorithms used with classical X-ray images is the interpolation step. The interpolation step matches the aspect ratio of the colored image with that of the images generated by the classical X-ray system being simulated, since the resolution along the conveyor belt direction of a CT scanner typically differs from that of a classical X-ray scanner.
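As a minimal sketch of that interpolation step, the projected image can be resampled along the conveyor-belt (slice) axis to the column count of the simulated classical X-ray system. The linear interpolation and the function name are assumptions for this example.

```python
import numpy as np

def match_horizontal_resolution(image, target_columns):
    """Resample a (K, S) projected image along the slice axis (K) so its
    aspect ratio matches the simulated classical X-ray image."""
    K, S = image.shape
    src = np.linspace(0.0, 1.0, K)
    dst = np.linspace(0.0, 1.0, target_columns)
    # Interpolate each sensor line independently along the conveyor direction.
    return np.stack([np.interp(dst, src, image[:, s]) for s in range(S)], axis=1)
```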
[129] The algorithm described above for generating a 2D simulated X-ray image from 3D CT data is just an example. More generally, in some embodiments, the 3D density and Zeff data may be processed using simulation operations to produce two sets of 2D simulated images, e.g. to produce two sets of projected density and projected Zeff images. The projections may emulate the views generated by a classical X-ray scanner. In some embodiments, the projected images may be processed to create sets of colour and material mask images. The colour image may be the one displayed on the GUI in "normal" or "default" mode, and the material mask may be used by some image enhancement tools. The colour image and material mask may be stored in memory to be ready to be sent to other viewing stations, e.g. to one or more remote screening stations.
[130] Note that X-ray simulation is also discussed in International PCT application PCT/CA2009/000811, which is published as WO2010091493, and which is incorporated herein by reference.
[131] While the above description has focussed on deriving one simulated 2D image from 3D image data by performing a projection, it is to be appreciated that a set of simulated 2D images comprising multiple simulated 2D images may be derived from a same set of 3D image data, wherein respective simulated 2D images in the set may be associated with different simulated X-ray sources and/or different positions (angles and/or distances) for the simulated X-ray sources and detectors. The set of simulated 2D images may allow simulating the behavior of a multi-view X-ray system in which there are multiple X-ray sources and detectors positioned at different distances/angles from the item being inspected. This may allow for improved security screening of an item without necessarily having to view the 3D image of the item. As discussed above, the amount of time required to screen an item may be reduced if the visual inspection is performed first based on the display of a 2D image, without the need for rendering and viewing the 3D image of the item.
[132] In some embodiments, 3D region of interest (ROI) projection into 2D may also be performed.
[133] A 3D ROI (or 3D rectangular bounding box) defines a 3D subspace in a 3D scene. In some embodiments, it may be described by eight corners having a specific set of 3D coordinates (x, y, z). Once projected in the 2D image, the 3D ROI will become a 2D ROI (a rectangle) defining a sub-region in the 2D image.
[134] As described above, in the projected images (before the interpolation process), each column corresponds to a specific slice (or z value) in the CT data, and each line corresponds to a simulated sensor. The projection process of a 3D ROI may therefore comprise computing the projection coordinates (xp, yp) in the 2D image of each of the eight corners of the 3D ROI, e.g. as follows:
(1) The column (yp) coordinate is directly given by the slice number, which is the z coordinate in the CT data.
(2) Finding the line (xp) coordinate consists of finding the simulated sensor for which the projection path is the closest to the corner position. Basic mathematics are used to compute the distance between a point (the corner position) and a line (the projection path).
[135] FIG. 14 illustrates determining the projection coordinates of corners of the 3D ROI, according to one embodiment. Projection paths 472 and 474 are the paths that are closest to the corner positions of the 3D ROI 476 at the illustrated slice. These dictate the projection coordinates used to generate the 2D ROI.
[136] Once the eight corners of the 3D ROI have been projected, the 2D ROI is defined as follows (a code sketch follows the list):
(1) The origin of the rectangle (xp, yp) is determined from the minimal x and y values among all the projected corners.
(2) The width of the rectangle is equal to the difference between the maximal x value among all the projected corners and xp.
(3) The height of the rectangle is equal to the difference between the maximal y value among all the projected corners and yp.
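The following sketch illustrates both steps: projecting each corner of the 3D ROI (finding the nearest projection path per corner) and deriving the 2D rectangle from the eight projected corners. The point-to-line distance formula is standard; the function names and the 2D in-slice geometry are assumptions for this example.

```python
import numpy as np

def project_roi_corners(corners_3d, source, sensors):
    """Project the eight corners of a 3D ROI into the (pre-interpolation)
    2D image: the column (yp) is the slice index z; the line (xp) is the
    index of the sensor whose projection path passes closest to the corner."""
    source = np.asarray(source, dtype=float)
    dirs = np.asarray(sensors, dtype=float) - source        # path directions
    projected = []
    for (x, y, z) in corners_3d:
        v = np.array([x, y], dtype=float) - source          # source -> corner
        # Perpendicular point-to-line distance: |cross(d, v)| / |d|.
        dist = np.abs(dirs[:, 0] * v[1] - dirs[:, 1] * v[0]) / np.linalg.norm(dirs, axis=1)
        projected.append((int(np.argmin(dist)), int(z)))    # (xp, yp)
    return projected

def roi_2d_from_corners(projected):
    """Derive the 2D ROI rectangle from the eight projected corners."""
    xs = [xp for xp, _ in projected]
    ys = [yp for _, yp in projected]
    xp0, yp0 = min(xs), min(ys)                             # rectangle origin
    return (xp0, yp0), max(xs) - xp0, max(ys) - yp0         # origin, width, height
```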
[137] If interpolation is used for the creation of the color image, the coordinates and dimensions of the 2D ROI are modified accordingly to match the dimensions of the color image.
[138] The ATD results, when present, may also be projected in the manner described above to generate one or more 2D ROIs in one or more of the 2D generated image(s).
[139] FIG. 15 illustrates an example flow chart for the generation of at least one 2D simulated X-ray image from 3D data originating from a CT scanner for a scanned item. The operations of FIG. 15 may be performed by the processor at the security checkpoint screening station (e.g. by processor 114a).
[140] In box 486, operations are performed relating to receiving the 3D data originating from the CT scanner for the scanned item. The operations include: receiving reconstructed density and Zeff data (two sets of 3D data); storing the 3D data in memory, e.g. in cache; and sending the 3D data to projection algorithms. If the 3D data needs to be sent to any viewing stations, it is sent from the memory, e.g. from the cache.
[141] In box 488, operations are performed in relation to receiving ATD results associated with the scanned item. Box 488 is optional, e.g. if ATD is not performed then box 488 is omitted. The operations in box 488 include: processing and storing the ATD data in memory, e.g. in a database, so that it is ready to be sent to other viewing stations as needed; projected ROIs for 2D views are also computed from any received 3D ROIs and stored in the database. In some embodiments, the ATD results received as per box 488 are received asynchronously from the 3D data.
[142] In box 490, operations are performed in relation to generating the 2D projected view(s) from the 3D data. The operations in box 490 include: processing the 3D density and Zeff data to produce at least one set of projected density and projected Zeff images, where the projections emulate the view generated by a classical X-ray scanner; processing the projected images to create at least one set of colour and material mask images, where the colour image is displayed on the GUI in normal mode, and the material mask is used by some image enhancement tools; and storing the color image and material mask in memory, e.g. in a database, so that they are ready to send to any viewing stations. In some embodiments, before generating the 2D simulated images, the density and Zeff 3D data may be preprocessed (e.g. filtered, thresholded, corrected) to try to improve the image quality.
[143] As described above, in some embodiments a 2D projected view is created from the 3D data. The 2D projected view may be created by applying transformations (e.g. angles of the projection path, colouring, type of transparency) that make the projected view resemble a 2D X-ray image, and look familiar to screeners, instead of looking like a flattened-out 3D image.
[144] For completeness, an overview of how to generate a 2D projected view from the 3D data according to one embodiment, is as follows:
(A) Receive two 3D datasets (cubes of data) representing the density and the Zeff of the scanned item. These cubes provide the information about the physical nature of the scanned item in each of the 3D points.
(B) Use the physics of X-rays (actual and approximation equations) to simulate the images that would be generated if the item was scanned in a classical X-ray scanner. For example, knowing the geometry of the classical X-ray scanner ("the real machine"), and referring back to FIG. 11:
i. Conceptually place a source 450 at the same relative position from the data cube 442 as in the real machine.
ii. Conceptually place an array of X-ray detectors 452 at the same relative position from the data cube as in the real machine.
iii. Use the physics of X-rays to simulate what would be the measured data (intensity and Zeff) in an actual X-ray machine for each of the sensors:
1. Each plane of the cube leads to a column of the simulated image.
2. Horizontal interpolation can then be used to have the same horizontal resolution as the simulated machine.
iv. With these projected images, use the same algorithms (or very similar algorithms) to the ones used with a classical X-ray machine to generate the color images (and create the material mask) that are displayed on the GUI.
v. The result is the projected image. It emulates the image one would have from an actual X-ray machine. The process may be repeated to create two projected views (e.g. bottom view and side view).
[145] For completeness, an overview of how to perform 3D ROI projection in 2D to generate a ROI in a 2D projected image, according to one embodiment, is as follows:
(A) 3D bounding boxes are deduced from the 3D ROIs, as follows:
a. 3D ROIs take the form of masks: the processor receives the list of voxel coordinates (one voxel is one point in the cube, so a "3D pixel") defining the object of interest.
b. Bounding boxes can be found by finding the minimal and maximal values for each of the x, y and z coordinates.
c. 3D bounding boxes can then be defined by eight sets of (x, y, z) coordinates (one set for each of the 3D bounding box corners). Note: edges of the bounding box are parallel to the x, y and z axes.
d. Note, in one implementation, the ROIs may be identified by ATD algorithms or identified by a user. In any case, the bounding boxes may be deduced from voxel coordinates.
(B) Knowing the geometry of the classical X-ray machine to simulate, the relative position of the cube and the position of the bounding box corners in the cube, basic geometry equations are used to deduce the position of each of the corners in the projected images (in other words, each corner is projected in the 2D image).
(C) A new 2D bounding box is deduced by computing the minimal and maximal x and y values of each of the eight projected corners. The 2D bounding box can be defined by four pairs of (x, y) coordinates. Edges of the bounding box are parallel to the x and y axes in the 2D image.
Single common user interface for CT and X-ray images
[146] Returning to FIG. 1, the remote screening station 204 is configured to receive both 2D data from X-ray scanners 108b and 108c, and 3D data from CT scanner 108a. When an item is scanned by a CT scanner, the 3D data may or may not be processed in the manner described earlier (e.g. in relation to FIG. 2). For example, the 3D data may be sent after the corresponding 2D image(s) data is generated and sent to the remote screening station 204 (as in FIG. 2), or the 3D data may be sent first without necessarily generating and sending corresponding 2D image(s) data, or the corresponding 2D image(s) and the 3D image may be sent in parallel.
[147] In any case, a human screener at the remote screening station 204 is conventionally accustomed to screening 2D X-ray images from an X-ray scanning device, and is therefore accustomed to using an associated GUI and user interface tools developed for screening such 2D images. In some embodiments, the computer functionality is improved so that the GUI and user interface tools conventionally used for screening 2D classical X-ray images are enhanced to also accommodate the screening of items scanned by a CT scanner. A common GUI is provided for screening a sequence of items, where some of the items were scanned by an X-ray scanner and are therefore associated with 2D data, and where others of the items were scanned by a CT scanner and are therefore associated with 3D data.
[148] That is, in one embodiment, a single user interface including a single GUI on display device 216 is used for the screening of items scanned by either an X-ray or CT scanner. The same GUI, tools, and general behaviour may be implemented whether the scanned item to be screened originates from a 2D X-ray scanner or a 3D CT scanner.
[149] FIG. 16 illustrates one example of a GUI 502, e.g. which may be generated by processor 214 and displayed on display device 216 of the remote screening station 204. The GUI 502 illustrates 2D images (side and bottom view) of an item that was scanned by an X-ray scanner. Buttons 506 are disabled or "greyed out" because buttons 506 are specific to items scanned by a CT scanner where display of a 3D image is available. Buttons 506 comprise the following buttons, which will be explained later: "3D", "bottom view preset", "side view preset", and "itemize". Other buttons are not disabled and allow for user selection of enhancement-related tools, such as (but not limited to) the following (a filtering sketch follows the list):
(1) "organic", which shows organic material only in the image, e.g. organic material is displayed in orange, and the rest is grayed out in the image;
(2) "inorganic", which shows inorganic material only in the image, e.g. inorganic material is displayed in green, and the rest is grayed out in the image;
(3) "metallic", which shows metallic material only in the image, e.g. metallic material is displayed in blue, and the rest is grayed out in the image;
(4) "grey scale", which shows a grey scale version of the image;
(5) "invert", which provides colour inversion in the image;
(6) "high penetration", which acts to try to increase the visibility through dense material in the image;
(7) "super clear", which increases contrast and edge enhancements;
(8) "dynamic range", which stretches a dynamically set range of the image intensity;
(9) "brightness", which sets the image brightness;
(10) "contrast", which sets the image contrast;
(11) "sharpening", which provides dynamic edge enhancement.
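As a minimal sketch of how the material filters (1) to (3) might use the material mask, the selected material class keeps its colour while everything else is greyed out. The mask label encoding, the function name, and the greying method are assumptions for this example, not the patent's implementation.

```python
import numpy as np

# Hypothetical label encoding for the material mask.
ORGANIC, INORGANIC, METALLIC = 1, 2, 3

def apply_material_filter(color_image, material_mask, keep_label):
    """Show only one material class: pixels of the selected class keep their
    colour; all other pixels are replaced by their grey-scale intensity.

    color_image   : (H, W, 3) uint8 colour image.
    material_mask : (H, W) integer material label per pixel.
    """
    grey = color_image.mean(axis=2, keepdims=True)           # simple luminance
    greyed = np.repeat(grey, 3, axis=2).astype(color_image.dtype)
    keep = (material_mask == keep_label)[:, :, None]
    return np.where(keep, color_image, greyed)

# Usage: organic_view = apply_material_filter(image, mask, ORGANIC)
```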
[150] The "show threats" button (sometimes instead called "hide threats"), which is illustrated on the GUI 502, is a button that allows for objects of interest (e.g. laptops, bottles, metal bars of the types used in carry-on luggage) to be removed from the displayed images to reveal content previously obstructed by the objects of interest. Removal of objects of interest is described in more detail later, e.g. in relation to FIG. 41, which shows an example of removal of an object of interest. It is to be appreciated that while the button is labelled "show threats" in the GUI 502, the objects of interest may not inherently be threats, but may be typical objects of interest, such as, without being limited to, electronic devices such as laptops, tablets, phones, cameras and the like.
[151] Note that the GUI 502 is just an example and in alternative implementations, the GUI 502 may look different than what is illustrated. For example, in a variation, the buttons may be arranged differently on the display. In a variation, there may be more or fewer buttons. For example, there may be a "show bag" button beside the "show threats" button. The "show bag" button may be used to enable/disable the hide bag functionality. An example of the hide bag functionality is scene removal, i.e. removing the entire scene except for the detected object(s) of interest. Scene removal is described in more detail later, e.g. in relation to FIG. 42. In some variations, the "camera" button may be enabled most or all of the time to allow for a camera image to be displayed. In some variations, the small icons at the top right of the GUI (the icons used to close or minimize the window) may not be displayed. Many other variations are possible.
[152] FIG. 17 illustrates the GUI 502 displaying an item that has been scanned by a CT scanner. 3D data is available to display a 3D image of the item, and so the "3D" button 522 becomes enabled, i.e. available for selection. As discussed earlier (e.g. in relation to FIG. 9), in some embodiments an additional intentional delay may be enforced by the processor after the 3D image is available to display, but before the "3D" button 522 becomes enabled for selection.
[153] If the "3D" button is enabled and selected by the user, then the 3D image will be displayed. However, in FIG. 17, the GUI 502 is currently only displaying 2D projected views generated from the 3D data. The 2D projected views were generated in the manner explained earlier. Specifically, a 2D bottom view 532 and 2D side view 534 are illustrated in FIG. 17. These 2D views 532 and 534 are simulated X-ray images in the form of projected views emulating views generated from an X-ray scanner. Example ways to generate such 2D views are described earlier.
[154] In FIG. 17, only the 2D simulated X-ray images are displayed, not the 3D view. As mentioned earlier, in many cases it may not be necessary for the human screener to view the 3D image on the GUI 502. For example, if there is clearly nothing of concern shown in the displayed 2D images, then the item may be cleared without the human screener viewing the corresponding 3D image. However, in other cases, the human screener may want to display the corresponding 3D image. If that is the case, the human screener may select the "3D" button 522, which causes the 3D image to be displayed, as shown in FIG. 18. FIG. 18 shows the 2D side view 534 replaced with the 3D image 538. If the display device 216 had enough display screen space (e.g. three display screen monitors), the 3D image 538 may be displayed in addition to the 2D images shown in FIG. 17. Because the 3D image 538 is now displayed, the following three buttons 540, which were previously disabled ("greyed out"), are now enabled:
i. "bottom view preset", which when selected resets the orientation of the 3D image 538 to bottom view (i.e. the item is viewed in 3D from below).
ii. "side view preset", which when selected resets the orientation of the 3D image 538 to side view (i.e. the item is viewed in 3D from the side).
iii. "itemize", which when selected causes the 3D image of the item to look like it has been segmented. The objective is to try to highlight objects in the 3D image 538 of the item. An example is shown in FIG. 19. FIG. 20 shows another example for the case in which "bottom view preset" is selected (item viewed from below). FIG. 21 shows another example for the case in which "side view preset" is selected (item viewed from the side).
[155] When a 3D image of the item is present, the image enhancement options (1) to (11) mentioned above, when selected, are performed on both the 3D image of the item and the corresponding 2D image(s) (e.g. the 2D projected image(s)). Examples are illustrated in FIGs. 22 to 35. FIGs. 22 to 35 each illustrate an example of an image processing operation performed on images derived from penetrating radiation. The images are greyscale digital photograph images. They are not conducive to replacement by black and white line drawings, and it would detract from clarity to replace them with black and white line drawings. They are simply to illustrate examples of some of the image enhancement operations that may be possible in some embodiments. A person skilled in the art would find the images clear in view of the description and context.
[156] FIG. 22 illustrates an example of the "invert" operation mentioned above. FIG. 23 illustrates an example of the "super clear" operation mentioned above. FIG. 24 illustrates an example of the "high penetration" operation mentioned above. FIG. 25 illustrates an example of the "organic" operation mentioned above. FIG. 26 illustrates an example of the "metallic" operation mentioned above. FIG. 27 illustrates an example of the "inorganic" operation mentioned above. FIG. 28 illustrates an example of the "grey scale" operation mentioned above. FIG. 29 illustrates an example of the "dynamic range" operation mentioned above, for a low value. FIG. 30 illustrates an example of the "dynamic range" operation mentioned above, for a high value. FIG. 31 illustrates an example of the "brightness" operation mentioned above, for a low value. FIG. 32 illustrates an example of the "brightness" operation mentioned above, for a high value. FIG. 33 illustrates an example of the "contrast" operation mentioned above, for a low value. FIG. 34 illustrates an example of the "contrast" operation mentioned above, for a high value. Note that for contrast and brightness, a low value means that the contrast/brightness is lower than in the normal image (and the contrary for a high value). For dynamic range, a low value means that the stretched range comprises the low values (below the average), and the contrary for a high value. FIG. 35 illustrates an example of the "sharpening" operation mentioned above.
[157] In some embodiments, there may be the same rulers in the 2D and 3D views (e.g. on the sides of the screens). The origin may be the bottom left corner. The rulers may be configurable in inches or cm. In some embodiments, in the 3D view, the center of rotation may be based on mouse position and depth position of the items. In some embodiments, when a ROI is identified in 3D, it is automatically projected on the 2D view as well (e.g. FIG. 36 described below). In some embodiments, when the whole item/scene is tagged as a threat, bounding boxes are created around the whole scene in 2D and 3D (e.g. FIG. 38 described below).
[158] FIG. 36 illustrates an example of a ROI 602 created in the 3D image being projected onto the 2D image. The ROI 602 in the 3D image may be created by the screener, e.g. by using user operable controls provided through the GUI to allow a user to select from a displayed 3D image an object of interest. FIG. 37 illustrates an example of a ROI 604 identified by a user in the 2D image (e.g. by placing a box around the area of interest), but the ROI is not automatically identified by the processor in the 3D image. FIG. 38 illustrates that a ROI 606 placed around the whole item will show in both the 2D view and the 3D view.
[159] An example flowchart for processing images of items scanned by an X-ray scanner ("2D scenes") is shown in FIG. 39. In box 652, the 2D scenes (e.g. colour images and detection results), which are based on data from an X-ray scanner, are received at a remote screening station, and are displayed at the remote screening station. The received 2D images are stored in memory at the remote screening station. The images may be received from memory at a security checkpoint screening station. As mentioned in box 652: all generic/conventional image enhancement tools are available on the GUI; the "3D view" button and buttons associated with 3D specific tools are disabled, e.g. not visible or grayed out; different screen setups on the GUI are possible, e.g. if the remote screening station only has one screen then the bottom view may be displayed, if the remote screening station has two screens then the bottom and side views may be displayed, and if the remote screening station has three screens then the bottom view, side view, and camera image may be displayed.
[160] In box 654, scene manipulation may be performed by the human screener using the user interface. As mentioned in box 654: zoom and pan are performed independently on each displayed view; any generic/conventional image enhancement tools are applied equivalently to all displayed views; threats/suspicious items are identified with bounding boxes; when the user adds a bounding box in one of the views, no bounding boxes are created in the other views; when the user deletes a bounding box that is linked to bounding boxes in the other views, all linked boxes are deleted.
[161] In box 656, threat results (e.g. threat bounding boxes, threat type) that are entered by the user at the remote screening station are transmitted back to the memory at the security checkpoint screening station.
[162] An example flowchart for processing images of items scanned by a CT scanner ("3D scenes") is shown in FIG. 40. In box 666, the corresponding 2D scenes (e.g. colour images and material masks), which are generated from 3D data from a CT scanner, are received at a remote screening station, and are displayed at the remote screening station. The received 2D images are stored in memory at the remote screening station. The images may be received from memory at a security checkpoint screening station. As mentioned in box 666: all generic/conventional image enhancement tools are available for selection by the user (although the image enhancement tools would typically not be available for the camera image); the "3D view" button and buttons associated with 3D specific tools are disabled (greyed out) because the 3D data is not yet received at the remote screening station; different screen setups on the GUI are possible, e.g. if the remote screening station only has one screen then the bottom view may be displayed, if the remote screening station has two screens then the bottom and side views may be displayed, and if the remote screening station has three screens then the bottom view, side view, and camera image may be displayed. If not all views are displayed simultaneously (e.g. the remote screening station only has one screen), then "view" button(s) on the GUI may be selected to display a specific view or to cycle through the display of the different views.
[163] In box 668, the 3D data is received at the remote screening station. The 3D scene data may be received from the memory at the security screening checkpoint. As mentioned in box 668: the "3D view" button becomes available (no longer greyed out), so that it may be selected to begin rendering of the 3D image; however, the buttons associated with 3D specific tools are still disabled (e.g. grayed out) because the rendered 3D image is not displayed; and the 3D data is stored in memory at the remote station.
[164] In box 670, the "3D view" button is selected by the human screener at the user interface. In response: the 3D data is loaded in the rendering engine and is rendered and displayed; and the buttons associated with 3D specific tools become available to be selected.
[165] In box 672, scene manipulation occurs. As mentioned in box 672: zoom and pan are performed independently on each displayed view; any generic/conventional image enhancement tools will be applied equivalently in all displayed views (2D and/or 3D); threats/suspicious items are identified with bounding boxes; bounding boxes of automatically detected threats are visible in both 2D and 3D; when the user adds a bounding box in 3D, it creates automatically projected bounding boxes in 2D; when the user adds a bounding box in one of the 2D views, no bounding boxes are created in the other views (2D and 3D); when the user deletes a bounding box that is linked to bounding boxes in the other views, all linked boxes are deleted; and when the user changes a threat type of a bounding box that is linked to bounding boxes in the other views, all linked boxes are changed.
[166] In box 674, threat results (e.g. threat bounding boxes, threat type) that are entered by the user at the remote screening station are transmitted back to the memory at the security checkpoint screening station.
Bag Deconstruction
[167] In some embodiments, a "bag deconstruction" operation may be performed.
[168] One type of bag deconstruction operation is object removal, i.e. the human screener can remove detected objects (all the detected objects or one at a time) from the scene. FIG. 41 illustrates an example of object removal. Object removal from a 3D image is shown at 712. Detected object 714 is removed from the 3D image scene so that the human screener may view the rest of the scene without having parts hidden by the object 714. As an example, object 714 may have been identified by ATD, which is why object 714 is surrounded by a bounding box 716 in the 3D image. The ATD operations may have been performed by the CT scanner or by the processor at the local or remote screening station, or performed by another external module. However, object 714 does not have to be an object identified by ATD, e.g. the object 714 may be an object of interest, such as a laptop, identified by the human screener. In some embodiments, the human screener may use the user interface to insert the cube 716 around the object 714.
[169] Object removal from a 2D image is shown at 718. Object 720 is removed from the 2D image so that the human screener may view the rest of the scene without having parts hidden by the object 720.
[170] Another type of bag deconstruction operation is scene removal, i.e. the human screener can remove the entire scene except for the detected object(s) of interest. The detected object(s) may then be viewed at their actual location without being impacted by the rest of the scene. FIG. 42 illustrates an example of scene removal. Scene removal from a 3D image is shown at 722. Object 714 is a detected object of interest and so the surrounding 3D image scene is removed. Scene removal from a 2D image is shown at 724. Object 720 is a detected object of interest and so the surrounding 2D image scene is removed.
[171] In some embodiments, when a bag deconstruction operation such as object removal or scene removal is performed on one displayed view of the scanned item, then the same operation is automatically performed on one or more other views of the scanned item. For example, if the GUI displays both a 3D image of the item and a 2D projected view of the item, such as in FIG. 18, and if an object of interest is removed from the 3D image, then the object of interest will automatically be removed from the 2D projected view. Methods for implementing this operation are described later.
[172] In some embodiments, when an element is removed from the image, it is not displayed on another part of the screen, which has the benefit of saving screen real estate. For example, and with reference to FIG. 41, if the human screener uses the user interface to request that object 714 be removed from the scene, e.g. by selecting object 714 and selecting an "erase" button on the GUI, then the object 714 is removed from the displayed image and is not displayed anywhere else on the screen. As another example, and with reference to FIG. 42, if the human screener uses the user interface to request that the scene around object 714 be removed, then the scene is removed from the displayed image. The object 714 is not extracted from the scene and displayed elsewhere. In this way, the bag deconstruction functionality does not affect the screen real estate; it only removes the selected elements from the currently displayed image. There is not a dedicated part of the screen for the removed objects, and there is no exploded view generated.
[173] As mentioned above, in some embodiments a bag deconstruction operation that is performed in the 3D view is also performed in the corresponding 2D view. For example, the 3D data from a CT scanner may be processed to allow the human screener to remove an object from the 3D image, e.g. remove a laptop from the 3D image and view the image without the laptop. In some embodiments, when an object is removed from the 3D image, the at least one corresponding 2D image that was generated from the 3D data is regenerated by a processor with the object also removed. Such an operation may be performed by processor 214 if the screener at the remote station requests that the object be removed from the scene. Or such an operation may be performed by a local processor at the security checkpoint screening station if a local screener at the security checkpoint screening station requests that the object be removed from the scene.
[174] In some embodiments, the input used to apply the bag deconstruction functionality is the set of 3D coordinates in the CT data corresponding to the detected objects, e.g. the voxels corresponding to the detected objects must be known. One set of coordinates is needed for each detected item.
[175] Two different ways for applying the bag deconstruction functionalities in the 2D images are described below. The first way described will be referred to as "the direct approach", and the second way described will be referred to as "the layer-based approach".
Direct approach:
[176] In one embodiment, the direct approach consists of setting to zero all the voxels corresponding to the objects to be removed in the density data originating from the CT scanner, and then regenerating the 2D projected view, with the objects removed, by reapplying the projection and coloring algorithms described earlier. This operation may be done dynamically, e.g. when the human screener selects a button on the GUI, or the operation may be precomputed, i.e. performed in advance and stored, in which case all possible combinations of displayed items may need to be precomputed.
[177] For the scene removal functionality, the voxels corresponding to the objects to be removed are all the voxels of the scene that do not belong to the detected object(s) remaining in the scene.
[178] For the sake of completeness, an example specific method of the direct approach will now be described for regenerating a 2D image to omit an object removed from the 3D image. The following operations are performed by the processor:
(A) Receive the mask defining the object(s) to be removed from the scene, e.g. receive the list(s) of voxel coordinates defining the object of interest, where one voxel is one point in the cube.
(B) In the density cube, set the voxels corresponding to the object to zero. This may be done on a copy of the cube or directly in the used cube. In this latter case, the processor maintains a copy of the erased values.
(C) Reapply the 2D projection and coloring algorithms described earlier for creating a 2D projection of an image.
Notes regarding operations (A) to (C) directly above: In some embodiments, the process may be performed dynamically, e.g. in response to the user requesting the object be removed (e.g. when the user selects the "erase" button). In some embodiments, the process may be performed as a preprocessing step and the results kept for future use, e.g. the preprocessing may be performed on one of the screening stations (e.g. by processor 114a) or by a computing station not used for screening, and then the results sent to viewing stations, or the preprocessing may be performed on the viewing stations on reception of the 3D data. In the case of preprocessing, all possible combinations (if there is more than one object that can be deleted) may be computed.
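A minimal sketch of the direct approach, reusing the projection sketches above; the boolean-mask representation of the detected objects and the function names are assumptions for this example. Note how the full (costly) projection chain is re-run on every removal:

```python
import numpy as np

def remove_objects_direct(density, zeff, object_masks, removed, source, sensors):
    """Direct approach: zero out the voxels of the removed objects in the
    density data, then regenerate the 2D projections from scratch.

    object_masks : list of boolean (K, I, J) masks, one per detected object.
    removed      : indices of the objects to remove from the scene.
    """
    modified = density.copy()                 # keep the original cube intact
    for n in removed:
        modified[object_masks[n]] = 0.0       # erase the object's voxels
    # Reapply the complete projection chain (Equations #3 and #5); the
    # results are then fed to the coloring algorithms described earlier.
    attenuation = attenuation_image(modified, source, sensors)
    zeff_proj = projected_zeff_image(modified, zeff, source, sensors)
    return attenuation, zeff_proj
```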
Layer approach:
[179] In one embodiment, the layer approach consists of computing a "pre-density" layer and a "pre-Zeff" layer for each of the detected objects and for the rest of the scene. The 2D colored image with removed objects is then computed from the layers. If N objects have been detected in a scene, then N + 1 pairs of layer images are computed (one for each object and one for the rest of the scene).
[180] In one embodiment, the pre-density layer and the pre-Zeff layer of one detected object are computed as follows:
(1) In the density data originating from the CT scanner, set to zero all of the voxels not corresponding to the detected object.
(2) Compute the pre-density layer by applying the following equation to the modified CT data:

$pI^n_{ks} = \sum_{i,j} \mu_{ij} t_{ij}$ (Equation #6)

where the notation in Equation #6 is defined earlier in relation to Equation #3.
(3) Compute the pre-Zeff layer by applying the following equation to the modified CT data:

$pZ^n_{ks} = \sum_{i,j} \mu_{ij} t_{ij}\, z_{ij}^{\,p}$ (Equation #7)

where the notation of Equation #7 is defined earlier in relation to Equation #5.
[181] The values $pI^0$ and $pZ^0$ are used to respectively denote the pre-density and pre-Zeff layers corresponding to all of the scene but the detected objects. The values $pI^n$ and $pZ^n$, where n = 1, ..., N, are used to respectively denote the pre-density and pre-Zeff layers corresponding to each of the N detected objects. The X-ray attenuation image, called I and defined by Equation #3, and the projected Zeff image, called Z and defined by Equation #5, can be computed from the pre-density and pre-Zeff layers:

$I_{ks} = I_{in}\, e^{-\sum_{n=0}^{N} pI^n_{ks}}$ (Equation #8)

$Z_{ks} = \left( \frac{\sum_{n=0}^{N} pZ^n_{ks}}{\sum_{n=0}^{N} pI^n_{ks}} \right)^{1/p}$ (Equation #9)
[184] From these equations, the bag deconstruction functionalities (e.g. object removal and scene removal functionalities) based on layers may be applied as follows:
(1) Compute the modified X-ray attenuation image corresponding to the modified scene using Equation #8 by using only the needed layers. Example 1: if an object corresponding to n = 1 must be removed, then remove n = 1 from the sum of Equation #8. Example 2: if all the scene but the detected objects must be removed, then remove n = 0 from the sum of Equation #8.
(2) Compute the modified projected Zeff image corresponding to the modified scene using Equation #9 by using only the needed layers. Example 1: if an object corresponding to n = 1 must be removed, then remove n = 1 from the sum of Equation #9. Example 2: if all the scene but the detected objects must be removed, then remove n = 0 from the sum of Equation #9.
(3) Use the modified X-ray attenuation image and the modified projected Zeff image to generate the modified colored image. Generating a colored image is described earlier.
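A minimal sketch of the layer recombination described in steps (1) to (3) above; the stacked-array layout of the layers and the function name are assumptions for this example. Because only precomputed 2D layers are summed, no reprojection of the 3D data is needed:

```python
import numpy as np

def combine_layers(pre_density, pre_zeff, removed, i_in=1.0, p=2.78):
    """Recombine precomputed layers per Equations #8 and #9, skipping the
    removed ones.

    pre_density, pre_zeff : (N + 1, K, S) stacks; index 0 is the "rest of
                            the scene" layer, indices 1..N are the objects.
    removed               : set of layer indices to exclude (e.g. {1} to
                            remove object 1, or {0} for scene removal).
    """
    keep = [n for n in range(pre_density.shape[0]) if n not in removed]
    sum_pi = pre_density[keep].sum(axis=0)
    sum_pz = pre_zeff[keep].sum(axis=0)
    attenuation = i_in * np.exp(-sum_pi)                    # Equation #8
    denom = np.where(sum_pi > 0, sum_pi, 1.0)               # avoid divide-by-zero
    zeff = np.where(sum_pi > 0, (sum_pz / denom) ** (1.0 / p), 0.0)  # Equation #9
    return attenuation, zeff   # inputs to the coloring algorithms
```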
[185] Compared to the direct approach, the layer approach typically uses much less processing power. In particular, with the direct approach, the complete projection process (which is costly from a computation point of view) must be done each time an object is removed, which is not the case with the layer approach. Therefore, in general, the layer approach will lead to a faster generation of the image. Other possible advantages of the layer approach include: less processing power needed at the remote station, which may save cost; and deconstruction may be applied faster, which may result in a better user experience.
[186] For the sake of completeness, an overview of one example of the layer approach will now be described for regenerating a 2D image to omit an object removed from the 3D image. The following operations are performed by the processor:
(A) Receive the mask defining the object(s) to remove from the scene, e.g. receive the list(s) of voxel coordinates defining the object of interest, where one voxel is one point in the cube.
(B) If N different objects are received, then N + 1 variations are created for the original data cubes (density and Zeff):
i) one where all the voxels corresponding to objects are set to zero;
ii) one cube for each object, where all the voxels but the ones of the object are set to zero.
The N + 1 cube creation may be conceptual: in other words, the processor does not have to actually hold N + 1 sets of cubes in memory.
(C) Use the physics of X-rays (actual and approximations) to generate the projected images for each of the cubes, e.g. reapply, for each of the N + 1 pairs of cubes, the simulation process described for the generation of the projected image emulating an actual classical X-ray machine. This creates N + 1 pairs of layers or preDensity and preZeff images.
(D) To create a color image, where one or more than one object could have been deleted: (i) use the physics of X-rays (actual and approximations) to combine all the precomputed images but the ones of the deleted objects and create the resulting intensity and Zeff images; (ii) from the resulting images, use the same algorithms as the ones previously described to create the color images and the material masks.
Notes regarding operations (A) to (D) directly above: The method would typically be performed at the station receiving the data from the CT scanner, e.g. by processor 114a, as it may significantly lower the needed processing power on all other viewing stations. In this case, one option is to transmit to the viewing stations all the N + 1 pairs of layers or preDensity and preZeff images. Another option is to transmit to the viewing stations all the N + 1 pairs of layers or preDensity and preZeff images plus the original (the whole) color image and material mask. The main images could be computed as was done previously or based on the layers.
[187] In summary, in some embodiments a user operable input object of a GUI may receive a request that an object in a displayed image be removed from the displayed image. In response,
the object may be removed from the displayed image to reveal content previously obstructed/hidden by the object. In some embodiments, when the object is removed from the 3D image, the object is also removed from the corresponding 2D image(s).
[188] In some embodiments, removing the object from the displayed image comprises: (1) obtaining modified 3D data (e.g. CT data) by setting to zero voxels in the 3D data that correspond to the object, and (2) regenerating the 2D image using the modified 3D data. In particular, the image data conveying the 3D image may be processed to derive alternate image data conveying the 2D image, where the image data conveying the 3D image is modified by setting voxel values to zero that correspond to the object.
[189] In some embodiments, removing the object from the displayed image may instead follow the "layer" approach discussed above. For example, the object to be removed may be a first object, and the method may include:
(1) computing a plurality of layers, each layer corresponding to a respective different object, and one of the layers corresponding to the first object;
(2) each layer is computed by modifying the image data conveying the 3D image, in order to set to zero voxels that do not correspond to the respective object;
(3) removing the first object from the displayed image comprises combining the plurality of layers, not including the layer corresponding to the first object, to form the displayed image.
[190] In some embodiments, each layer is projected into 2D, thereby generating a plurality of 2D projected layers. Generating the 2D simulated image with the first object removed comprises combining the plurality of 2D projected layers, not including the 2D projected layer corresponding to the first object.
Other variations
[191] Other example image manipulation and/or enhancement features are as follows. In some embodiments, for the 3D image, the center of rotation may be based on the view port (screen) center and the depth position of the items. In some embodiments, all views may be linked for the horizontal position. For example, upon zooming in on one view, all views may be zoomed in on the same horizontal position. As another example, upon panning in one view, all views showing the horizontal pan are linked.
[192] The user interface (e.g. GUI 502) described above is primarily described in the context of a remote screening station. A screening station at a security checkpoint (e.g. station 302 in FIG. 5 and/or the local stations having processors 114a-c respectively in FIG. 1) may use the same (or substantially the same) user interface. The images displayed may only be associated with items scanned at that screening station, e.g. from the CT scanner 108. In some embodiments, the user interface may be enhanced to also display a sequence of thumbnail images of items scanned by the scanner. An example is illustrated in FIG. 43, which shows a GUI substantially the same as GUI 502 described earlier, but with a sequence of thumbnail images 588 also displayed. The sequence of thumbnail images 588 may assist the human screener in identifying the item to be checked. In one embodiment, as part of the workflow for the screener at the security checkpoint screening station: the whole display/manipulation process starts when the scene of the item is selected in the user interface; depending on the configuration, a thumbnail presenting the camera image or colour 2D bottom view may be displayed; and if the same computer (processor and memory) is being used for the screening as the computer that received the data from the scanner (e.g. from the CT scanner), then all the data transfer is done locally. However, the analysis results (e.g. bounding boxes of threat items created at the analysis station, such as at the remote screening station) may be received over a network.
[193] In some examples above, 2D and 3D scenes are displayed using the exact same user interface (UI) on a same machine in a same session, e.g. receive a 2D scene, then a 3D scene, then a 2D scene, etc. Projected 2D views aim to match existing standard X-ray images, and processing tools (e.g. for image enhancement) may be applied in real time equivalently in 2D and 3D. This may allow screeners used to screening 2D images to more easily transition to 3D screening.
[194] The processors disclosed herein (e.g. processors 114a, 114b, 114c, 214) may each be implemented by one or more general-purpose processors that execute instructions stored in a memory. The instructions, when executed, cause the processor to perform the operations described herein as applicable, e.g. receiving the data from a scanner, transmitting/receiving the
data over the network to/from the remote screening station, generating the at least one 2D image from the 3D data, performing the 2D projection views from the 3D data, ROI projection, rendering the 3D data for display, bag deconstruction, etc. Alternatively, some or all of the processors disclosed herein may be implemented using dedicated circuitry, such as an application specific integrated circuit (ASIC), a graphics processing unit (GPU), or a programmed field programmable gate array (FPGA) for performing the operations of the processor.
Additional embodiments
[195] FIG. 44 illustrates an example system 802 for performing security screening. The system 802 is a computing device and includes at least one memory 804, at least one programmable processor 806, and a computer network interface 808. The processor 806 may be programmed to implement the methods described herein. The memory 804 may store image data 810 derived from scanning an item with penetrating radiation at a checkpoint screening station. The image data conveys a 3D image of the item. Examples of image data conveying a 3D image of the item include density and/or Zeff data. For example, there may be two blocks of 3D data from a CT scanner: one for density and one for Zeff, although this is not necessary. For example, there may only be density data. The image data that conveys the 3D image of the item comprises data used to obtain the 3D image of the item. The 3D image data 810 may have been received through the network interface 808 from the scanning device at the checkpoint screening station. The processor 806 may be programmed to process the 3D image data 810 to derive alternate image data conveying a 2D image of the item, e.g. a simulated X-ray image. The processor 806 may be programmed to cause transmission of the alternate image data conveying the 2D image and/or the image data conveying the 3D image for display on a display device. The display device may be at a remote screening station. The transmission may occur through the network interface 808. As an example, the system 802 may be a computing device in network communication with the CT scanner 108a and the remote screening station 204. The memory 804 may be memory 112a, and the processor 806 may be processor 114a.
[196] FIG. 45 is a flowchart of an example method for screening an item at a security checkpoint. The security checkpoint includes a checkpoint screening station. The method may be
implemented by a system having at least one programmable processor, e.g. system 802 having processor 806. Step 1002 includes receiving image data derived from scanning the item with penetrating radiation at the checkpoint screening station. The image data conveys a 3D image of the item. Step 1004 includes processing the image data conveying the 3D image of the item to derive alternate image data conveying a 2D image of the item. Step 1006 includes transmitting the derived alternate image data conveying the 2D image of the item for display on a display screen. The display screen may be at a remote screening station, although this is not necessary, e.g. the display screen may be local to the checkpoint screening station. If the derived alternate image data conveying the 2D image is transmitted to the remote screening station, then the remote screening station may be in communication with the system over a computer network. Step 1008 includes transmitting the image data conveying the 3D image of the item for display on the display screen (at the local or remote screening station). In some embodiments, transmission of the image data conveying the 3D image of the item may be performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the item. That is, in some embodiments transmission of the 2D and 3D image data may occur in parallel, or the 3D image data may be transmitted before the 2D image data. But in general, the transmission of the derived alternate image data conveying the 2D image is complete before the transmission of the image data conveying the 3D image is complete.
[197] In some embodiments, the image data conveying the 3D image of the item includes CT data. In some embodiments, the alternate image data conveys a simulated X-ray image of the item. In some embodiments, the simulated X-ray image is derived by applying simulation operations to the CT data. The simulation operations may include projection operations.
[198] In some embodiments, the method includes displaying the 2D image of the item on the display screen without displaying the 3D image of the item. The displaying may occur at the remote screening station. In some embodiments, the method includes displaying the 2D image of the item on the display screen prior to the 3D image of the item being ready to be displayed. In some embodiments, the method includes receiving an input provided by an operator (e.g. at the remote screening station), where the input requests that the 3D image of the
item be displayed on the display screen. In some embodiments, in response to receipt of the input, the method includes displaying the 3D image of the item. In some embodiments, in response to receipt of the input requesting that the 3D image of the item be displayed on the display screen, the 3D image of the item is displayed concurrently with the 2D image of the item.
[199] In some embodiments, the method includes receiving an input provided by an operator (e.g. at the remote screening station), where the input requests that the 3D image of the item be displayed on the display screen. In response to receipt of the input, the method may include modifying the display screen at the remote screening station to: display the 3D image of the item; and cease displaying the 2D image of the item.
[200] In some embodiments, the method includes directing the remote screening station to implement a GUI. The GUI may be configured for: displaying the 2D image of the item; providing an input object configured to receive a request from an operator of the remote screening station to display the 3D image of the item; and in response to a specific user request to display the 3D image of the item through the input object, adapting the GUI to display the 3D image of the item. In some embodiments, the 3D image of the item is displayed concurrently with the 2D image of the item in response to the specific user request to display the 3D image of the item. In some embodiments, the 3D image of the item is displayed instead of the 2D image of the item in response to the specific user request to display the 3D image of the item. In some embodiments, the GUI is configured for selectively causing the input object to acquire one of an enabled state and a disabled state at least in part based on the 3D image of the item being available for display at the remote screening station. In some embodiments, the GUI is configured for causing the input object to acquire the enabled state following an intentional delay period. The intentional delay period may be measured from the displaying of the 2D image of the item. The intentional delay period may be measured from the 3D image of the item being available for display at the remote screening station. The intentional delay period may be configurable. In some embodiments, the GUI may be configured for: providing a user operable input object configured to receive a delay period duration, and in response to receipt of a specific delay period duration, configuring the intentional delay period based upon the received delay period duration.
[201] In some embodiments, the GUI may be configured for: (a) providing an image manipulation input object configured to receive a request from the operator of the remote screening station to remove a component shown in the 3D image of the item; and (b) in response to a specific user request to remove a specific component shown in the 3D image of the item, adapting the GUI to display an altered version of the 3D image of the item in which the specific component is omitted to reveal contents of the item previously obstructed by the specific component. The GUI may also or instead be adapted to display an altered version of the 2D image of the item in which the specific component is omitted.
[202] In some embodiments, an input may be received requesting that an object be removed from a displayed image of the item, and in response removing the object from the displayed image to reveal content previously blocked by the object. In some embodiments, removing the object from the displayed image may include obtaining modified CT data by setting to zero voxels in the CT data that correspond to the object, and regenerating the 2D image of the item using the modified CT data. In some embodiments, the object is a first object, and the method includes computing a plurality of layers, each layer corresponding to a respective different object in the item, and one of the layers corresponding to the first object. Each layer may be computed using the CT data modified by setting to zero voxels in the CT data that do not correspond to the respective object. Removing the first object from the displayed image may include combining the plurality of layers, not including the layer corresponding to the first object, to form the displayed image.
[203] In some embodiments, processing the image data conveying the 3D image of the item to derive the alternate image data conveying the 2D image of the item includes: defining a plurality of projection paths through the 3D image of the item; and projecting the image data conveying the 3D image of the item along the projection paths to derive the alternate image data conveying the 2D image of the item. In some embodiments, at least some projection paths in the plurality of projection paths are non-parallel projection paths, in that they extend along axes that diverge from one another and/or that converge at a same point. In some embodiments, the non-parallel projection paths originate from (or converge at) a same starting point but end at different ones of a set of end points. In some embodiments, the starting point corresponds to a simulated
penetrating radiation source. In some embodiments, the set of end points corresponds to a set of simulated radiation sensors. In some embodiments, the image data conveying the 3D image of the item includes CT data conveying 3D density data and 3D Zeff data. In some embodiments, the image data conveying the 3D image of the item includes CT data conveying 3D density data and not conveying Zeff data. In some embodiments, the CT data includes a plurality of slices of CT data. In some embodiments, projecting the image data conveying the 3D image of the item along the projection paths includes projecting slices in the plurality of slices of CT data along the projection paths.
[204] In some embodiments, the image data conveying the 3D image of the item further includes information conveying a region of interest (ROI) in the 3D image. In some embodiments, processing the image data conveying the 3D image of the item to derive the alternate image data conveying the 2D image of the item includes processing the information conveying the ROI in the 3D image to derive information conveying a corresponding ROI in the 2D image. In some embodiments, processing the information conveying the ROI in the 3D image to derive information conveying the corresponding ROI in the 2D image includes performing operations including: (a) defining the ROI in the 3D image by defining a plurality of 3D coordinates, at least some of the 3D coordinates corresponding to edges or corners of a 3D region, where the 3D region defines the ROI in the 3D image; and (b) projecting the plurality of 3D coordinates into a two dimensional space to obtain the corresponding ROI in the 2D image.
[205] FIG. 46 is a flowchart of an example method for screening items at a security checkpoint. The security checkpoint includes a first checkpoint screening station with a screening device of a first type and a second checkpoint screening station with a screening device of a second type distinct from the first type. The method is implemented by a system including at least one programmable processor. Step 1022 includes receiving first image data derived from scanning a first item with penetrating radiation at the first checkpoint screening station. The first image data conveys a 3D image of the first item. Examples of image data conveying a 3D image of an item include density and/or Zeff data. Step 1024 includes processing the first image data conveying the 3D image of the first item to derive alternate image data conveying a 2D image of the first item. Step 1026 includes transmitting the derived alternate
image data conveying the 2D image of the first item for display on a display screen at a remote screening station. The remote screening station is in communication with the system over a computer network. Step 1028 includes transmitting the first image data conveying the 3D image of the first item for display on the display screen at the remote screening station. In some embodiments, transmission of the first image data conveying the 3D image of the first item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the first item; in general, however, transmission of the derived alternate image data conveying the 2D image completes before transmission of the first image data conveying the 3D image. Step 1030 includes receiving second image data derived from scanning a second item with penetrating radiation at the second checkpoint screening station. The second image data conveys a 2D image of the second item. As an example, the second image data may include a pair of black and white images (or a pair of black and white images generated, or to be generated, from a continuous stream of data). Step 1032 includes transmitting the second image data conveying the 2D image of the second item for display on the display screen at the remote screening station, e.g. once the analysis of the first item is complete.
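By way of non-limiting illustration only, the ordering of the transmissions in steps 1026 and 1028 may be sketched as follows; connection.send is a hypothetical stand-in for whatever network API a given deployment uses.

    import threading

    def transmit_scan(connection, alternate_2d_bytes, ct_3d_bytes):
        # The compact 2D payload is sent first, so the remote screening
        # station can display it without waiting for the bulkier CT data.
        connection.send("2D", alternate_2d_bytes)
        # The 3D CT data follows in parallel on a background thread; the
        # small 2D transfer generally completes before the 3D transfer.
        threading.Thread(target=connection.send,
                         args=("3D", ct_3d_bytes)).start()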
[206] In some embodiments, the same processor performs all steps. In other embodiments, a first processor (e.g. processor H4a) associated with the first checkpoint screening station performs the receiving of the first image data, the processing of the first image data, the transmitting of the derived alternate image data conveying the 2D image of the first item, and the transmitting of the image data conveying the 3D image of the first item, and a second processor (e.g. H4b) associated with the second checkpoint screening station performs the receiving of the second image data and the transmitting of the image data conveying the 2D image of the second item.
[207] In some embodiments, the first image data includes CT data, and the derived alternate image data conveys a simulated X-ray image of the first item. In some embodiments, the simulated X-ray image is derived by applying simulation operations to the CT data. In some embodiments, the second image data includes X-ray image data (e.g. the second screening device may be an X-ray scanner).
[208] In some embodiments, the method includes: at the remote screening station, displaying the 2D image of the first item on the display screen without displaying the 3D image of the first item. In some embodiments, the method includes: at the remote screening station, displaying the 2D image of the first item on the display screen prior to the 3D image of the first item being ready to be displayed at the remote screening station.
[209] FIG. 47 is a flowchart of another example method for screening items at a security checkpoint. The method is implemented by a system having at least one programmable processor. The system may be part of a remote screening station. The at least one programmable processor is configured for performing the method steps. In step 1052, a GUI is implemented on a display screen. The GUI is configured for: (i) displaying a 2D image of the item, and (ii) providing an input object operable by an operator. The input object is configured to selectively acquire: (1) an enabled state in which the input object is able to receive a user request to display a 3D image of the item; and (2) a disabled state in which the input object is unable to receive the user request to display the 3D image of the item.
[210] Step 1054 includes displaying the 2D image of the item on the GUI and causing the input object to acquire the disabled state. Step 1056 includes dynamically adapting the GUI to subsequently cause the input object to acquire the enabled state after a delay period. In some embodiments, the delay period is based at least in part on receipt of image data conveying the 3D image of the item. For example, the delay period may expire upon receipt of the image data conveying the 3D image of the item and after the 3D image is rendered and available to be displayed. In some embodiments, the input object is configured to remain in the disabled state at least until the 3D image of the item is available for display on the GUI. Step 1058 includes, in response to receipt through the input object of a specific user request to display the 3D image of the item, dynamically adapting the GUI to display the 3D image of the item on the display screen.
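By way of non-limiting illustration only, the enabled/disabled behaviour of the input object in steps 1054 to 1058 may be sketched as follows; the class and method names are hypothetical, and a production GUI would schedule the state change with the toolkit's own timers rather than a blocking sleep.

    import time

    class View3DInputObject:
        def __init__(self, intentional_delay_s=0.0):
            self.enabled = False              # disabled state (step 1054)
            self.intentional_delay_s = intentional_delay_s
            self._shown_2d_at = None

        def on_2d_displayed(self):
            self._shown_2d_at = time.monotonic()

        def on_3d_image_ready(self):
            # Remain disabled at least until the 3D image is available,
            # plus any intentional delay measured from the 2D display.
            if self._shown_2d_at is not None:
                waited = time.monotonic() - self._shown_2d_at
                remaining = self.intentional_delay_s - waited
                if remaining > 0:
                    time.sleep(remaining)     # stand-in for a GUI timer
            self.enabled = True               # enabled state (step 1056)

        def on_request_3d(self, display_3d_image):
            if self.enabled:                  # requests ignored while disabled
                display_3d_image()            # step 1058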
[211] In some embodiments, the GUI may be configured to display the 3D image of the item only after the 2D image of the item has been displayed. In some embodiments, the 3D image of the item is displayed concurrently with the 2D image of the item in response to the user request to display the 3D image of the item. In some embodiments, the 3D image of the item is
displayed instead of the 2D image of the item in response to the user request to display the 3D image of the item.
[212] In some embodiments, the delay period for dynamically adapting the GUI to cause the input object to acquire the enabled state may be based at least in part on receipt of image data conveying the 3D image and upon an intentional delay period. In some embodiments, the intentional delay period is measured from the displaying of the 2D image of the item. In some embodiments, the intentional delay period is measured from the 3D image of the item being available for display at the remote screening station. In some embodiments, the intentional delay period is configurable. In some embodiments, the GUI is configured for: (a) providing another user operable input object configured to receive a delay period duration; and (b) in response to receipt of a specific delay period duration, configuring the intentional delay period based upon the received delay period duration.
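Continuing the illustrative sketch above, the further user operable input object of operations (a) and (b) might simply write the received duration into the same state object; the handler name is hypothetical.

    def on_delay_duration_entered(input_object, duration_s):
        # Configure the intentional delay period from the received duration.
        input_object.intentional_delay_s = duration_s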
[213] In some embodiments, the image data conveying the 3D image of the item may include CT data and the 2D image of the item may include simulated X-ray image data.
Examples
[214] In view of, and in addition to, the above, the following examples are disclosed.
[215] Example 1: A method comprising: scanning an item with a CT scanner to obtain data used for displaying a 3D image of the item; generating a 2D image from the data; displaying the 2D image.
[216] Example 2: The method of example 1, further comprising transmitting the 2D image to a remote screening station, and wherein the displaying the 2D image occurs at the remote screening station.
[217] Example 3: The method of example 2, further comprising transmitting the data to the remote screening station after, or in parallel with, transmitting the 2D image.
[218] Example 4: The method of any one of examples 1 to 3, further comprising displaying the 2D image and not displaying the 3D image of the item.
[219] Example 5: The method of example 4, further comprising receiving an input at a user interface, the input requesting that the 3D image be displayed, and in response displaying the 3D image.
[220] Example 6: The method of example 5, wherein the input is not activated until the 3D image is available for display.
[221] Example 7: The method of example 6, wherein the 3D image is available for display once it is received at a remote screening station.
[222] Example 8: The method of example 6 or 7, wherein the 3D image is available for display once it is rendered from the data.
[223] Example 9: The method of any one of examples 5 to 8, wherein the 3D image is displayed concurrently with the 2D image.
[224] Example 10: The method of any one of examples 1 to 9, wherein the data is 3D data.
[225] Example 11: The method of example 10, wherein the 3D data comprises reconstructed density and/or Zeff data.
[226] Example 12: The method of any one of examples 1 to 11, wherein generating the 2D image from the data comprises generating a 2D projected view of the 3D image.
[227] Example 13: The method of any one of examples 1 to 12, wherein the 2D image emulates or resembles a 2D image that would be produced by an X-ray scanner.
[228] Example 14: The method of any one of examples 1 to 13, wherein generating the
2D image from the data comprises: processing 3D density and 3D Zeff data to produce projected images comprising a projected density image and a projected Zeff image.
[229] Example 15: The method of example 14, wherein the projected images are processed to create colour and material mask images.
[230] Example 16: The method of any one of examples 1 to 15, wherein generating the 2D image from the data comprises: applying at least one transformation to generate a 2D projected view.
[231] Example 17: The method of any one of examples 1 to 16, wherein generating the 2D image from the data comprises: obtaining two 3D datasets, e.g. representing density and the Zeff of the item scanned by the CT scanner; simulating an image that would be generated if the item was scanned by an X-ray scanner.
[232] Example 18: The method of example 17, wherein the simulating comprises a processor: conceptually placing a source at a same relative position from a data cube; conceptually placing an array of x-ray detectors at the same relative position from the data cube; simulating the data that would be measured in an x-ray detector to generate a projected image, and optionally generating a colour image and/or material mask for display as part of the 2D image.
[233] Example 19: The method of any one of examples 1 to 18, further comprising obtaining ATD information associated with the item.
[234] Example 20: The method of example 19, wherein the ATD information is received from the CT scanner.
[235] Example 21: The method of example 19 or example 20, wherein the ATD information indicates a region of interest (ROI).
[236] Example 22: The method of any one of examples 1 to 20, further comprising obtaining a ROI, e.g. from ATD information and/or from a user input.
[237] Example 23: The method of example 21 or example 22, comprising projecting the ROI in the 2D image.
[238] Example 24: The method of example 23, wherein the ROI comprises a bounding box.
[239] Example 25: The method of example 24, wherein projecting the ROI in the 2D image comprises computing a 2D bounding box, and optionally wherein computing the 2D bounding box comprises computing the minimal and maximal x and y values of projected corners.
[240] Example 26: The method of any one of examples 1 to 25, further comprising receiving an input indicating that an object in the 3D image is to be removed, and removing the object from the 2D image.
[241] Example 27: The method of example 26, wherein removing the object from the 2D image comprises removing the object from the 3D image to obtain a modified 3D image, and then generating a 2D projection of the modified 3D image.
[242] Example 28: The method of any one of examples 1 to 27, further comprising scanning another item with an X-ray scanner and displaying, on the same GUI as the 2D image, the image from the X-ray scanner.
[243] Example 29: The method of any one of examples 1 to 28, wherein the image generated from the data is a first 2D image, and the method further comprises generating a second 2D image from the data.
[244] Example 30: The method of example 29, wherein the first 2D image is a bottom view and the second 2D image is a side view, or wherein the first 2D image is a side view and the second 2D image is a bottom view.
[245] Example 31: The method of example 30, wherein at least one of the first 2D image and the second 2D image is displayed on a display at the same time as the 3D image.
[246] Example 32: A method comprising: receiving data from a CT scanner, the data used for generating a 3D image of an item scanned by the CT scanner; generating a 2D image from the data.
[247] Example 33: The method of example 32, further comprising transmitting the 2D image to a remote screening station.
[248] Example 34: The method of example 33, further comprising transmitting the data to the remote screening station after transmitting the 2D image or in parallel to transmitting the 2D image.
[249] Example 35: The method of any one of examples 32 to 34, wherein the data is 3D data.
[250] Example 36: The method of example 35, wherein the 3D data comprises reconstructed density and/or Zeff data.
[251] Example 37: The method of any one of examples 32 to 36, wherein generating the 2D image from the data comprises generating a 2D projected view of the 3D image.
[252] Example 38: The method of any one of examples 32 to 37, wherein the 2D image emulates or resembles a 2D image that would be produced by an X-ray scanner.
[253] Example 39: The method of any one of examples 32 to 38, wherein generating the 2D image from the data comprises: processing 3D density and 3D Zeff data to produce projected images comprising a projected density image and a projected Zeff image.
[254] Example 40: The method of example 39, wherein the projected images are processed to create colour and material mask images.
[255] Example 41: The method of any one of examples 32 to 40, wherein generating the 2D image from the data comprises: applying at least one transformation to generate a 2D projected view.
[256] Example 42: The method of any one of examples 32 to 41, wherein generating the 2D image from the data comprises: obtaining two 3D datasets, e.g. representing density and the Zeff of the item scanned by the CT scanner; simulating an image that would be generated if the item was scanned by an X-ray scanner.
[257] Example 43: The method of example 42, wherein the simulating comprises a processor: conceptually placing a source at a same relative position from a data cube; conceptually placing an array of x-ray detectors at the same relative position from the data cube;
simulating the data that would be measured in an x-ray detector to generate a projected image, and optionally generating a colour image and/or material mask for display as part of the 2D image.
[258] Example 44: The method of any one of examples 32 to 43, further comprising obtaining, from the CT scanner, ATD information associated with the item.
[259] Example 45: The method of any one of examples 32 to 44, wherein the 2D image generated from the data is a first 2D image, and the method further comprises generating a second 2D image from the data.
[260] Example 46: The method of example 45, wherein the first 2D image is a bottom view and the second 2D image is a side view, or wherein the first 2D image is a side view and the second 2D image is a bottom view.
[261] Example 47: The method of example 46, wherein the first 2D image and the second 2D image are transmitted to a remote screening station before the data used for generating the 3D image.
[262] Example 48: A method comprising: receiving 2D image data that was generated from data obtained by a CT scanner; displaying the 2D image.
[263] Example 49: The method of example 48, further comprising receiving the 2D image from a security checkpoint screening station, and wherein the displaying the 2D image occurs at a remote screening station.
[264] Example 50: The method of example 49, further comprising receiving the data obtained by the CT scanner, at the remote screening station, after receiving the 2D image; wherein the data obtained by the CT scanner is used for displaying a 3D image of the item.
[265] Example 51: The method of any one of examples 48 to 50, further comprising displaying the 2D image and not displaying the 3D image of the item.
[266] Example 52: The method of example 51, further comprising receiving an input at a user interface, the input requesting that the 3D image be displayed, and in response displaying the 3D image.
[267] Example 53: The method of example 52, wherein the input is not activated until the 3D image is available for display.
[268] Example 54: The method of example 53, wherein the 3D image is available for display once it is received at the remote screening station.
[269] Example 55: The method of example 53 or 54, wherein the 3D image is available for display once it is rendered from the data.
[270] Example 56: The method of any one of examples 52 to 55, wherein the 3D image is displayed concurrently with the 2D image.
[271] Example 57: The method of any one of examples 48 to 56, wherein the data is 3D data.
[272] Example 58: The method of example 57, wherein the 3D data comprises reconstructed density and/or Zeff data.
[273] Example 59: The method of any one of examples 48 to 58, wherein the 2D image is a 2D projected view of the 3D image.
[274] Example 60: The method of any one of examples 48 to 59, wherein the 2D image emulates or resembles a 2D image that would be produced by an X-ray scanner.
[275] Example 61: The method of any one of examples 48 to 60, wherein the 2D image was generated from the data by: processing 3D density and 3D Zeff data to produce projected images comprising a projected density image and a projected Zeff image.
[276] Example 62: The method of example 61, wherein the projected images are processed to create colour and material mask images.
[277] Example 63: The method of any one of examples 48 to 62, wherein the 2D image was generated from the data by: applying at least one transformation to generate a 2D projected view.
[278] Example 64: The method of any one of examples 48 to 63, wherein the 2D image was generated from the data by: obtaining two 3D datasets, e.g. representing density and the Zeff
of the item scanned by the CT scanner; simulating an image that would be generated if the item was scanned by an X-ray scanner.
[279] Example 65: The method of example 64, wherein the simulating comprises a processor: conceptually placing a source at a same relative position from a data cube; conceptually placing an array of x-ray detectors at the same relative position from the data cube; simulating the data that would be measured in an x-ray detector to generate a projected image, and optionally generating a colour image and/or material mask for display as part of the 2D image.
[280] Example 66: The method of any one of examples 48 to 65, further comprising receiving ATD information associated with the item.
[281] Example 67: The method of example 66, wherein the ATD information originates from the CT scanner.
[282] Example 68: The method of example 66 or example 67, wherein the ATD information indicates a region of interest (ROI).
[283] Example 69: The method of any one of examples 48 to 67, further comprising receiving a ROI, e.g. from ATD information and/or from a user input.
[284] Example 70: The method of example 68 or example 69, wherein the ROI is projected in the 2D image.
[285] Example 71: The method of example 70, wherein the ROI comprises a bounding box.
[286] Example 72: The method of example 71, wherein projecting the ROI in the 2D image comprises computing a 2D bounding box, and optionally wherein computing the 2D bounding box comprises computing the minimal and maximal x and y values of projected corners.
[287] Example 73: The method of any one of examples 48 to 72, further comprising receiving an input indicating that an object in a 3D image is to be removed, and removing the object from the 2D image.
[288] Example 74: The method of example 73, wherein removing the object from the 2D image comprises removing the object from the 3D image to obtain a modified 3D image, and then generating a 2D projection of the modified 3D image.
[289] Example 75: The method of any one of examples 48 to 74, further comprising receiving another 2D image associated with the scanning of another item from an X-ray scanner.
[290] Example 76: The method of example 75, wherein the image from the X-ray scanner is displayed on the same GUI as the image originating from the CT scanner.
[291] Example 77: The method of any one of examples 48 to 76, wherein the 2D image generated from the data is a first 2D image, and the method further comprises receiving a second 2D image generated from the data.
[292] Example 78: The method of example 77, wherein the first 2D image is a bottom view and the second 2D image is a side view, or wherein the first 2D image is a side view and the second 2D image is a bottom view.
[293] Example 79: The method of example 78, wherein at least one of the first 2D image and the second 2D image is displayed on a display at the same time as the 3D image.
[294] Example 80: A method comprising: receiving and displaying a first 2D image of a first item that was scanned by an X-ray scanner; receiving and displaying a second 2D image of a second item that was scanned by a CT scanner; optionally wherein the method is performed at a remote screening station.
[295] Example 81: The method of example 80, wherein the first 2D image is no longer displayed when the second 2D image is displayed.
[296] Example 82: The method of example 80 or 81, further comprising receiving the first 2D image from a first security checkpoint screening station, and receiving the second 2D image from a second, different security checkpoint screening station.
[297] Example 83: The method of any one of examples 80 to 82, further comprising: receiving 3D data associated with the second item that was scanned by the CT scanner, wherein the 3D data is for displaying a 3D image of the second item.
[298] Example 84: The method of example 83, wherein the 3D data is received after the second 2D image.
[299] Example 85: The method of example 83 or 84, further comprising displaying the second 2D image and not displaying the 3D image.
[300] Example 86: The method of example 85, further comprising receiving an input at a user interface, the input requesting that the 3D image be displayed, and in response displaying the 3D image.
[301] Example 87: The method of example 86, wherein the input is not activated until the 3D image is available for display.
[302] Example 88: The method of example 87, wherein the 3D image is available for display once it is received at the remote screening station.
[303] Example 89: The method of example 87 or 88, wherein the 3D image is available for display once it is rendered from the 3D data.
[304] Example 90: The method of any one of examples 86 to 89, wherein the 3D image is displayed concurrently with the second 2D image.
[305] Example 91: The method of any one of examples 83 to 90, wherein the 3D data comprises reconstructed density and/or Zeff data.
[306] Example 92: The method of any one of examples 83 to 91, wherein the second 2D image is a 2D projected view of the 3D image; optionally wherein the second 2D image emulates or resembles a 2D image that would be produced by an X-ray scanner; optionally wherein the second 2D image was generated from the data from the CT scanner by: processing 3D density and 3D Zeff data to produce projected images comprising a projected density image and a
projected Zeff image; optionally wherein the projected images are processed to create colour and material mask images.
[307] Example 93: The method of any one of examples 80 to 92, wherein the second 2D image was generated from data from the CT scanner by applying at least one transformation to the data to generate a 2D projected view.
[308] Example 94: The method of any one of examples 80 to 93, further comprising receiving a ROI associated with the second scanned item, e.g. from ATD information and/or from a user input; and optionally wherein the ROI is projected in the second 2D image; and optionally wherein the ROI comprises a bounding box, and optionally wherein projecting the ROI in the second 2D image comprises computing a 2D bounding box, and optionally wherein computing the 2D bounding box comprises computing the minimal and maximal x and y values of projected corners.
[309] Example 95: The method of any one of examples 80 to 94, further comprising receiving an input indicating that an object in a 3D image is to be removed, and removing the object from the second 2D image.
[310] Example 96: The method of example 95, wherein removing the object from the second 2D image comprises removing the object from the 3D image to obtain a modified 3D image, and then generating a 2D projection of the modified 3D image.
[311] Example 97: At least one processor configured to perform the method of any one of examples 1 to 96.
[312] Example 98: A system to perform the method of any one of examples 1 to 96.
[313] Example 99: At least one computer readable medium having stored thereon computer executable instructions that, when executed, cause at least one processor to perform the method of any one of examples 1 to 96.
[314] Example 100: A system for use in screening pieces of carry-on luggage at a security checkpoint. The system comprises: (a) at least two scanning devices for scanning the pieces of carry-on luggage with penetrating radiation to derive image data associated with the
pieces of carry-on luggage, wherein the at least two scanning devices include: (i) an X-ray scanner configured for generating X-ray image data associated with at least some of the pieces of carry-on luggage; (ii) a CT scanner configured for generating CT image data associated with at least some other one of the pieces of carry-on luggage; (b) a screening station in communication with said at least two scanning devices, said screening station implementing a GUI module configured for displaying: (i) images derived from the X-ray image data associated with at least some of the pieces of carry-on luggage; and (ii) images derived from CT image data associated with at least some other one of the pieces of carry-on luggage; wherein the GUI is configured for providing at least one user operable control configured for manipulating both (i) images derived from the X-ray image data and (ii) images derived from CT image data. In some specific implementations, the images derived from CT image data associated with at least some other one of the pieces of carry-on luggage include at least one simulated X-ray image derived by processing the CT image data. In some specific implementations, the graphical user interface module is further configured for: (a) providing a user interface tool for allowing the human operator to provide at the remote screening station threat assessment information associated with the image being displayed; (b) in response to receipt of threat assessment information provided by the human operator, causing the threat assessment information provided by the human operator to be conveyed to an on-site screening technician associated with the one of the at least two scanning devices.
[315] Example 101: A system for use in screening pieces of carry-on luggage at a security checkpoint. The system comprises at least two scanners for scanning the pieces of carry-on luggage with penetrating radiation to derive image data, wherein at least one of the at least two scanners is an X-ray scanner and another one of the at least two scanners is a CT scanner. The system also comprises a computing device including an input for receiving the image data and implementing a GUI, the GUI being configured for receiving and processing X-ray image data and CT image data.
[316] Example 102: A method for screening a plurality of items at a security checkpoint, the security checkpoint including at least two scanning devices including at least one X-ray scanner and at least one CT scanner. The method comprises: (a) scanning a first item
amongst the plurality of items to be screened at the security checkpoint using the X-ray scanner to generate X-ray image data conveying information on the contents of the first item; (b) scanning a second item distinct from the first item amongst the plurality of items to be screened at the security checkpoint using the CT scanner to generate CT image data conveying information on the contents of the second item; (c) transmitting the X-ray image data conveying information on the contents of the first item to a screening station for display on a GUI for visual inspection by a human operator; (d) transmitting image data derived from the CT image data conveying information on the contents of the second item to the screening station for display on the GUI for visual inspection by a human operator, wherein transmitting the image data derived from the CT image data includes: (i) processing the CT image data to derive data conveying a simulated X-ray image of the second item; (ii) transmitting the data conveying the simulated X-ray image of the second item to the screening station for display on the GUI for visual inspection by the human operator; (iii) transmitting the CT image data to the screening station for display on the GUI for visual inspection by the human operator; (e) wherein the GUI at the screening station is configured for: (i) displaying an X-ray image derived from the X-ray image data conveying information on the contents of the first item for visual inspection by a human operator; (ii) displaying an X-ray image derived from the simulated X-ray image data conveying information on the contents of the second item for visual inspection by a human operator; (iii) presenting a user operable control for selectively causing an image derived from the CT image data to be displayed on the GUI, wherein the control is dynamically enabled in dependence on receipt of the CT image data at the screening station. In some specific practical implementations, the screening station is a remote screening station located remotely from the X-ray scanner and the CT scanner. In accordance with a specific implementation, local display devices associated with respective ones of the at least two scanning devices are provided for conveying threat assessment information to on-site screening technicians associated with the scanning devices. In accordance with a specific implementation, the threat assessment information provided by a human operator at the remote screening station is conveyed to the on-site screening technician associated with one of the at least two scanning devices through an associated one of the local display devices. In non-limiting examples of implementation, the threat assessment information indicates to the on-site screening technician whether a piece of
luggage is marked as "clear" or marked for further inspection. In accordance with a specific implementation, the method may further comprise determining whether to subject respective ones of the images derived by the at least two scanning devices to a visual inspection by the human operator at the remote screening station, wherein the determining is made at least in part based on results obtained by using an automated threat detection engine. In accordance with a specific example of implementation, the processor is further programmed to cause at least some of the images derived by the at least two scanning devices to bypass visual inspection by the human operator at the remote screening station. In some specific implementations, the images displayed at the remote screening station are associated with results obtained by applying an ATD operation, so that, "on demand", the human operator views both the image of the piece of luggage as well as the associated ATD results.
Conclusion
[317] Although the present invention has been described with reference to specific features and embodiments thereof, various modifications and combinations can be made thereto without departing from the invention. The description and drawings are, accordingly, to be regarded simply as an illustration of some embodiments of the invention as defined by the appended claims, and are contemplated to cover any and all modifications, variations, combinations or equivalents that fall within the scope of the present invention. Therefore, although the present invention and its advantages have been described in detail, various changes, substitutions and alterations can be made herein without departing from the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
[318] Moreover, any module, component, or device exemplified herein that executes instructions may include or otherwise have access to a non-transitory computer/processor readable storage medium or media for storage of information, such as computer/processor readable instructions, data structures, program modules, and/or other data. A non-exhaustive list of examples of non-transitory computer/processor readable storage media includes magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, optical disks such as compact disc read-only memory (CD-ROM), digital video discs or digital versatile discs (DVDs), Blu-ray Disc™, or other optical storage, volatile and non-volatile, removable and non-removable media implemented in any method or technology, random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology. Any such non-transitory computer/processor storage media may be part of a device or accessible or connectable thereto. Any application or module herein described may be implemented using computer/processor readable/executable instructions that may be stored or otherwise held by such non-transitory computer/processor readable storage media.
[319] The foregoing is considered as illustrative only of the principles of the invention. Since numerous modifications and changes will become readily apparent to those skilled in the art in light of the present description, it is not desired to limit the invention to the exact examples and embodiments shown and described, and accordingly, suitable modifications and equivalents may be resorted to. It will be understood by those of skill in the art that throughout the present specification, the term "a" used before a term encompasses embodiments containing one or more of what the term refers to. It will also be understood by those of skill in the art that throughout the present specification, the term "comprising", which is synonymous with "including," "containing," or "characterized by," is inclusive or open-ended and does not exclude additional, un-recited elements or method steps.
[320] Although various embodiments have been illustrated, this was for the purpose of describing, but not limiting, the invention. Various modifications will become apparent to those skilled in the art and are within the scope of this invention, which is defined more particularly by the attached claims.
[321] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention pertains. In the case of conflict, the present document, including definitions, will control.
[322] Although the present invention has been described in considerable detail with reference to certain embodiments thereof, variations and refinements are possible and will become apparent to the person skilled in the art in view of the present description. The invention is defined more particularly by the attached claims.
Claims
1. A method for screening an item at a security checkpoint including a checkpoint screening station, said method being implemented by a system including at least one programmable processor and comprising:
a. receiving image data derived from scanning the item with penetrating radiation at the checkpoint screening station, the image data conveying a three-dimensional (3D) image of the item;
b. processing the image data conveying the 3D image of the item to derive alternate image data conveying a two dimensional (2D) image of the item;
c. transmitting the derived alternate image data conveying the 2D image of the item for display on a display screen at a remote screening station, wherein the remote screening station is in communication with the system over a computer network;
d. transmitting the image data conveying the 3D image of the item for display on the display screen at the remote screening station, wherein transmission of the image data conveying the 3D image of the item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the item.
2. A method as defined in claim 1, wherein the image data conveying the 3D image of the item includes computed tomography (CT) data.
3. A method as defined in claim 2, wherein the alternate image data conveys a simulated X-ray image of the item and is derived by applying simulation operations to the CT data.
4. A method as defined in any one of claims 1 to 3, wherein said method comprises, at the remote screening station, displaying the 2D image of the item on the display screen without displaying the 3D image of the item.
5. A method as defined in any one of claims 1 to 4, wherein said method comprises, at the remote screening station, displaying the 2D image of the item on the display screen prior to the 3D image of the item being ready to be displayed at the remote screening station.
6. A method as defined in any one of claims 4 to 5, further comprising:
a. receiving an input provided by an operator at the remote screening station, the input requesting that the 3D image of the item be displayed on the display screen;
b. in response to receipt of the input, displaying the 3D image of the item.
7. A method as defined in claim 6, wherein in response to receipt of the input requesting that the 3D image of the item be displayed on the display screen, the 3D image of the item is displayed concurrently with the 2D image of the item.
8. A method as defined in any one of claims 4 to 5, further comprising:
a. receiving an input provided by an operator at the remote screening station, the input requesting that the 3D image of the item be displayed on the display screen;
b. in response to receipt of the input, modifying the display screen at the remote screening station to:
i. display the 3D image of the item;
ii. cease displaying the 2D image of the item.
9. A method as defined in any one of claims 1 to 5, further comprising directing the remote screening station to implement a Graphical User Interface (GUI), wherein the GUI is configured for:
a. displaying the 2D image of the item;
b. providing an input object configured to receive a request from an operator of the remote screening station to display the 3D image of the item;
c. in response to a specific user request to display the 3D image of the item through the input object, adapting the GUI to display the 3D image of the item.
10. A method as defined in claim 9, wherein the 3D image of the item is displayed concurrently with the 2D image of the item in response to the specific user request to display the 3D image of the item.
11. A method as defined in claim 9, wherein the 3D image of the item is displayed instead of the 2D image of the item in response to the specific user request to display the 3D image of the item.
12. A method as defined in any one of claims 9 to 11, wherein the GUI is configured for selectively causing the input object to acquire one of an enabled state and a disabled state at least in part based on the 3D image of the item being available for display at the remote screening station.
13. A method as defined in claim 12, wherein the GUI is configured for causing the input object to acquire the enabled state following an intentional delay period.
14. A method as defined in claim 13, wherein the intentional delay period is measured from the displaying of the 2D image of the item.
15. A method as defined in claim 13, wherein the intentional delay period is measured from the 3D image of the item being available for display at the remote screening station.
16. A method as defined in any one of claims 13 to 15, wherein the intentional delay period is configurable.
17. A method as defined in claim 16, wherein the GUI is configured for:
a. providing an input object configured to receive a delay period duration;
b. in response to receipt of a specific delay period duration, configuring the intentional delay period based upon the received delay period duration.
18. A method as defined in any one of claims 9 to 17, wherein the GUI is configured for:
a. providing an image manipulation input object configured to receive a request from the operator of the remote screening station to remove a component shown in the 3D image of the item;
b. in response to a specific user request to remove a specific component shown in the 3D image of the item:
i. adapting the GUI to display an altered version of the 3D image of the item in which the specific component is omitted to reveal contents of the item previously obstructed by the specific component;
ii. adapting the GUI to also display an altered version of the 2D image of the item in which the specific component is also omitted.
19. A method as defined in any one of claims 1 to 18, wherein processing the image data conveying the 3D image of the item to derive the alternate image data conveying the 2D image of the item includes:
a. defining a plurality of projection paths through the 3D image of the item; and
b. projecting the image data conveying the 3D image of the item along the projection paths to derive the alternate image data conveying the 2D image of the item.
20. A method as defined in claim 19, wherein at least some projection paths in the plurality of projection paths are non-parallel projection paths.
21. A method as defined in claim 20, wherein the non-parallel projection paths originate from a same starting point but end at different ones of a set of end points.
22. A method as defined in claim 21, wherein the starting point corresponds to a simulated penetrating radiation source, and wherein the set of end points corresponds to a set of simulated radiation sensors.
23. A method as defined in any one of claims 19 to 22, wherein the image data conveying the 3D image of the item includes computed tomography (CT) data conveying 3D density data and 3D Zeff data.
24. A method as defined in claim 23, wherein the CT data includes a plurality of slices of CT data, and wherein projecting the image data conveying the 3D image of the item along the projection paths includes projecting slices in the plurality of slices of CT data along the projection paths.
25. A method as defined in any one of claims 1 to 24, wherein the image data conveying the 3D image of the item further comprises information conveying a region of interest (ROI) in the 3D image, and wherein processing the image data conveying the 3D image of the item to derive the alternate image data conveying the 2D image of the item includes processing the information conveying the ROI in the 3D image to derive information conveying a corresponding ROI in the 2D image.
26. A method as defined in claim 25, wherein processing the information conveying the ROI in the 3D image to derive information conveying the corresponding ROI in the 2D image includes performing operations comprising:
a. defining the ROI in the 3D image by defining a plurality of 3D coordinates, at least some of the 3D coordinates corresponding to edges or corners of a 3D region, wherein the 3D region defines the ROI in the 3D image;
b. projecting the plurality of 3D coordinates into a two dimensional space to obtain the corresponding region of interest (ROI) in the 2D image.
27. A system for screening an item at a security checkpoint, the security checkpoint including a checkpoint screening station, said system comprising:
a. a memory to store image data derived from scanning the item with penetrating radiation at the checkpoint screening station, the image data conveying a three- dimensional (3D) image of the item;
b. a processor in communication with said memory, said processor being programmed to:
i. process the image data conveying the 3D image of the item to derive alternate image data conveying a two dimensional (2D) image of the item;
ii. cause transmission of the derived alternate image data conveying the 2D image of the item for display on a display screen at a remote screening station,
wherein the remote screening station is in communication with the system over a computer network;
iii. cause transmission of the image data conveying the 3D image of the item for display on the display screen at the remote screening station, wherein the transmission of the image data conveying the 3D image of the item is performed subsequent to or in parallel with the transmission of the derived alternate image data conveying the 2D image of the item.
28. A system as defined in claim 27, wherein the image data conveying the 3D image of the item includes computed tomography (CT) data, the alternate image data conveys a simulated X- ray image of the item, and wherein the processor is programmed to derive the alternate image data by applying simulation operations to the CT data.
29. A system as defined in any one of claims 27 to 28, further comprising the remote screening station, wherein the remote screening station includes a processor programmed to implement a Graphical User Interface (GUI) on the display screen and wherein the GUI is configured to display the 2D image of the item without displaying the 3D image of the item.
30. A system as defined in claim 29, wherein the GUI is configured by the processor at the remote screening station to:
a. display the 2D image of the item;
b. provide an input object configured to receive a request from an operator of the remote screening station to display the 3D image of the item;
c. in response to a specific user request to display the 3D image of the item through the input object, adapt the GUI to display the 3D image of the item.
31. A system as defined in claim 30, wherein the GUI is configured to display the 3D image of the item concurrently with the 2D image of the item in response to the specific user request to display the 3D image of the item.
32. A system as defined in any one of claims 27 to 31, further comprising a CT scanner to scan the item with the penetrating radiation at the checkpoint screening station.
33. A method for screening items at a security checkpoint including a first checkpoint screening station with a screening device of a first type and a second checkpoint screening station with a screening device of a second type distinct from the first type, said method being implemented by a system including at least one programmable processor and comprising:
a. receiving first image data derived from scanning a first item with penetrating radiation at the first checkpoint screening station, the first image data conveying a three-dimensional (3D) image of the first item;
b. processing the first image data conveying the 3D image of the first item to derive alternate image data conveying a two dimensional (2D) image of the first item;
c. transmitting the derived alternate image data conveying the 2D image of the first item for display on a display screen at a remote screening station, wherein the remote screening station is in communication with the system over a computer network;
d. transmitting the image data conveying the 3D image of the first item for display on the display screen at the remote screening station, wherein transmission of the image data conveying the 3D image of the first item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the first item;
e. receiving second image data derived from scanning a second item with penetrating radiation at the second checkpoint screening station, the second image data conveying a 2D image of the second item;
f. transmitting the second image data conveying the 2D image of the second item for display on the display screen at the remote screening station.
34. A method as defined in claim 33, wherein a first processor associated with the first checkpoint screening station performs the receiving of the first image data, the processing of the first image data, the transmitting of the derived alternate image data conveying the 2D
image of the first item, and the transmitting of the image data conveying the 3D image of the first item; and wherein a second processor associated with the second checkpoint screening station performs the receiving of the second image data and the transmitting of the second image data conveying the 2D image of the second item.
35. A method as defined in any one of claims 33 to 34, wherein the first image data includes computed tomography (CT) data, and wherein the derived alternate image data conveys a simulated X-ray image of the first item and is derived by applying simulation operations to the CT data.
36. A method as defined in claim 35, wherein the second image data includes X-ray image data.
37. A method as defined in any one of claims 33 to 36, wherein said method comprises, at the remote screening station, displaying the 2D image of the first item on the display screen without displaying the 3D image of the first item.
38. A method as defined in any one of claims 33 to 37, wherein said method comprises, at the remote screening station, displaying the 2D image of the first item on the display screen prior to the 3D image of the first item being ready to be displayed at the remote screening station.
39. A system for screening items at a security checkpoint, the system comprising:
a. at least one memory for storing first image data derived from scanning a first item with penetrating radiation at a first checkpoint screening station, the first image data conveying a three-dimensional (3D) image of the first item;
b. at least one processor programmed to:
i. process the first image data conveying the 3D image of the first item to derive alternate image data conveying a two dimensional (2D) image of the first item;
ii. cause transmission of the derived alternate image data conveying the 2D image of the first item for display on a display screen at a remote screening
station, wherein the remote screening station is in communication with the system over a computer network;
iii. cause transmission of the image data conveying the 3D image of the first item for display on the display screen at the remote screening station, wherein transmission of the image data conveying the 3D image of the first item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the first item;
c. the at least one memory also for storing second image data derived from scanning a second item with penetrating radiation at a second checkpoint screening station, the second image data conveying a 2D image of the second item;
d. the at least one processor further programmed to transmit the second image data conveying the 2D image of the second item for display on the display screen at the remote screening station.
40. A system as defined in claim 39, wherein the at least one memory comprises:
a. a first memory associated with the first checkpoint screening station and for storing the first image data;
b. and a second memory associated with the second checkpoint screening station and for storing the second image data;
and wherein the at least one processor comprises:
c. a first processor programmed to process the first image data, cause transmission of the derived alternate image data conveying the 2D image of the first item, and cause transmission of the image data conveying the 3D image of the first item; and
d. a second processor programmed to transmit the second image data conveying the 2D image of the second item.
41. A system as defined in any one of claims 39 to 40, wherein the first image data includes computed tomography (CT) data, wherein the derived alternate image data conveys a simulated X-ray image of the first item and is derived by applying simulation operations to the CT data, and wherein the second image data includes X-ray image data.
42. A system as defined in any one of claims 39 to 41, further comprising the remote screening station; and wherein the remote screening station includes a processor programmed to implement a Graphical User Interface (GUI) on the display; and wherein the GUI is configured to display the 2D image of the first item without displaying the 3D image of the first item.
43. A system as defined in any one of claims 39 to 42, said system further comprising:
a. the first checkpoint screening station including a CT scanner for deriving the first image data; and
b. the second checkpoint screening station including an X-ray scanner for deriving the second image data.
44. A system for screening items at a security checkpoint, the system comprising:
a. a remote screening station including a display screen for displaying images of the items scanned at the security checkpoint;
b. a first computing device in network communication with the remote screening station and with a first checkpoint screening station, wherein the first computing device includes:
i. a first memory for storing first image data derived from scanning a first item with penetrating radiation at the first checkpoint screening station, the first image data conveying a three-dimensional (3D) image of the first item;
ii. a first processor programmed to:
(1) process the first image data conveying the 3D image of the first item to derive alternate image data conveying a two dimensional (2D) image of the first item;
(2) cause transmission of the derived alternate image data conveying the 2D image of the first item over the network for display on the display screen at the remote screening station;
(3) cause transmission of the image data conveying the 3D image of the first item over the network for display on the display screen at the remote
screening station, wherein transmission of the image data conveying the 3D image of the first item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the first item;
c. a second computing device in network communication with the remote screening station and with a second checkpoint screening station, wherein the second computing device includes:
i. a second memory for storing second image data derived from scanning a second item with penetrating radiation at the second checkpoint screening station, the second image data conveying a 2D image of the second item;
ii. a second processor programmed to transmit the second image data conveying the 2D image of the second item over the network for display on the display screen at the remote screening station.
45. A method for screening an item at a security checkpoint, said method being implemented by a system including at least one programmable processor, said at least one programmable processor being configured for:
a. implementing a Graphical User Interface (GUI) on a display screen, wherein the GUI is configured for:
i. displaying a 2-dimensional (2D) image of the item;
ii. providing an input object operable by an operator, said input object being configured to selectively acquire: (1) an enabled state in which the input object is able to receive a user request to display a 3-dimensional (3D) image of the item; and (2) a disabled state in which the input object is unable to receive the user request to display the 3D image of the item;
b. displaying the 2D image of the item on the GUI and causing the input object to acquire the disabled state;
c. dynamically adapting the GUI to subsequently cause the input object to acquire the enabled state after a delay period, the delay period being based at least in part on receipt of image data conveying the 3D image of the item, wherein the input object is configured to remain in the disabled state at least until the 3D image of the item is available for display on the GUI;
d. in response to receipt through the input object of a specific user request to display the 3D image of the item, dynamically adapting the GUI to display the 3D image of the item on the display screen.
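Claim 45 is toolkit-agnostic. Purely as an illustrative sketch, the enabled/disabled input object and its delayed enablement can be modeled with tkinter widget states, assuming a hypothetical callback that fires when the 3D image data arrives:

```python
import tkinter as tk

INTENTIONAL_DELAY_MS = 2000  # configurable intentional delay (claims 49, 52)

root = tk.Tk()
label = tk.Label(root, text="2D image displayed (placeholder)")
label.pack()

def show_3d_image():
    # Stand-in for dynamically adapting the GUI to render the 3D image.
    label.config(text="3D image displayed (placeholder)")

# The input object starts in the disabled state while the 2D image shows.
btn_3d = tk.Button(root, text="View 3D image", state=tk.DISABLED,
                   command=show_3d_image)
btn_3d.pack()

def on_3d_data_received():
    # Called once the 3D image data has arrived and is ready for display.
    # Here the intentional delay runs from 3D availability (the claim 51
    # variant); claim 50 would instead measure it from the moment the
    # 2D image was displayed.
    root.after(INTENTIONAL_DELAY_MS,
               lambda: btn_3d.config(state=tk.NORMAL))

# Simulate the 3D data arriving 1.5 s after the 2D image is shown.
root.after(1500, on_3d_data_received)
root.mainloop()
```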
46. A method as defined in claim 45, wherein the GUI is configured to display the 3D image of the item only after the 2D image of the item has been displayed.
47. A method as defined in any one of claims 45 to 46, wherein the 3D image of the item is displayed concurrently with the 2D image of the item in response to the user request to display the 3D image of the item.
48. A method as defined in any one of claims 45 to 46, wherein the 3D image of the item is displayed instead of the 2D image of the item in response to the user request to display the 3D image of the item.
49. A method as defined in any one of claims 45 to 48, wherein the delay period for dynamically adapting the GUI to cause the input object to acquire the enabled state is based at least in part on receipt of image data conveying the 3D image and upon an intentional delay period.
50. A method as defined in claim 49, wherein the intentional delay period is measured from the displaying of the 2D image of the item.
51. A method as defined in claim 49, wherein the intentional delay period is measured from the 3D image of the item being available for display at the remote screening station.
52. A method as defined in any one of claims 49 to 51, wherein the intentional delay period is configurable.
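Claims 49 to 52 combine two conditions: the 3D data must have arrived, and an intentional, configurable delay must have elapsed, with claims 50 and 51 differing only in where that delay is measured from. A small illustrative computation of the earliest enable time under both variants (all names hypothetical):

```python
def enable_time(t_2d_displayed: float, t_3d_available: float,
                intentional_delay_s: float,
                measure_from_3d: bool = False) -> float:
    """Earliest time the 3D input object may acquire the enabled state.

    The input object can never enable before the 3D image is available;
    on top of that, the intentional delay runs either from the 2D display
    (claim 50) or from 3D availability (claim 51).
    """
    reference = t_3d_available if measure_from_3d else t_2d_displayed
    return max(t_3d_available, reference + intentional_delay_s)

# 2D shown at t=0 s, 3D ready at t=1.5 s, 2 s intentional delay:
print(enable_time(0.0, 1.5, 2.0, measure_from_3d=False))  # 2.0 (from 2D display)
print(enable_time(0.0, 1.5, 2.0, measure_from_3d=True))   # 3.5 (from 3D availability)
```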
53. A method as defined in claim 52, wherein the GUI is configured for:
a. providing another input object configured to receive a delay period duration;
b. in response to receipt of a specific delay period duration, configuring the intentional delay period based upon the received delay period duration.
54. A method as defined in any one of claims 45 to 53, wherein the image data conveying the 3D image of the item includes computed tomography (CT) data and wherein the 2D image of the item includes simulated X-ray image data.
55. A system for screening an item at a security checkpoint, the system comprising a non-transitory memory for storing image data, a display screen, and at least one processor programmed to:
a. implement a Graphical User Interface (GUI) on the display screen, wherein the GUI is configured for:
i. displaying a two-dimensional (2D) image of the item;
ii. providing an input object operable by an operator, said input object being configured to selectively acquire: (1) an enabled state in which the input object is able to receive a user request to display a three-dimensional (3D) image of the item; and (2) a disabled state in which the input object is unable to receive the user request to display the 3D image of the item;
b. display the 2D image of the item on the GUI, the input object being in the disabled state when the display of the 2D image is initiated;
c. following the display of the 2D image, dynamically adapt the GUI to subsequently cause the input object to acquire the enabled state after a delay period, the delay period being based at least in part on receipt of image data conveying the 3D image of the item, wherein the input object is configured to remain in the disabled state at least until the 3D image of the item is available for display on the GUI;
d. in response to receipt through the input object of a specific user request to display the 3D image of the item, dynamically adapt the GUI to display the 3D image of the item on the display screen.
56. A system as defined in claim 55, wherein the GUI is configured to display the 3D image of the item only after the 2D image of the item has been displayed.
57. A method for screening an item at a security checkpoint including a checkpoint screening station, said method being implemented by a system including at least one programmable processor and comprising:
a. receiving image data derived from scanning the item with penetrating radiation at the checkpoint screening station, the image data conveying a three-dimensional (3D) image of the item;
b. processing the image data conveying the 3D image of the item to derive alternate image data conveying a two-dimensional (2D) image of the item, wherein processing the image data comprises:
i. defining a plurality of projection paths through the 3D image of the item, at least some projection paths through the 3D image in said plurality of projection paths extending along convergent or divergent axes; and
ii. projecting the image data conveying the 3D image of the item along the projection paths to derive the alternate image data conveying the 2D image of the item;
c. transmitting the derived alternate image data conveying the 2D image of the item for display on a display screen of a screening station;
d. transmitting the image data conveying the 3D image of the item for display on the display screen of the screening station, wherein transmission of the image data conveying the 3D image of the item is performed subsequent to or in parallel with transmission of the derived alternate image data conveying the 2D image of the item.
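Claim 57's projection paths extend along convergent or divergent axes, i.e. a perspective (cone-beam-style) projection rather than a parallel one. An illustrative sketch sampling the volume along rays that diverge from a point source toward a detector plane; the ray geometry and sampling here are simplifying assumptions, not the patented procedure:

```python
import numpy as np

def divergent_projection(volume: np.ndarray, source_z: float,
                         detector_z: float, detector_px: int = 64,
                         n_samples: int = 128) -> np.ndarray:
    """Project a 3D volume onto a 2D detector along diverging rays.

    The volume occupies the unit cube [0,1)^3 with z as the beam axis.
    Each detector pixel defines a ray from a point source at
    (0.5, 0.5, source_z) to that pixel at z = detector_z; the volume
    is sampled along the ray and the in-volume samples are summed.
    """
    nx, ny, nz = volume.shape
    image = np.zeros((detector_px, detector_px))
    source = np.array([0.5, 0.5, source_z])
    for i in range(detector_px):
        for j in range(detector_px):
            pixel = np.array([(i + 0.5) / detector_px,
                              (j + 0.5) / detector_px, detector_z])
            ts = np.linspace(0.0, 1.0, n_samples)[:, None]
            pts = source + ts * (pixel - source)  # points along the diverging ray
            inside = ((pts >= 0.0) & (pts < 1.0)).all(axis=1)
            idx = (pts[inside] * [nx, ny, nz]).astype(int)
            image[i, j] = volume[idx[:, 0], idx[:, 1], idx[:, 2]].sum()
    return image

# Phantom: a dense cube; point source behind the volume, detector in front.
vol = np.zeros((32, 32, 32))
vol[12:20, 12:20, 12:20] = 1.0
img = divergent_projection(vol, source_z=-1.0, detector_z=2.0)
print(img.shape)  # (64, 64)
```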
58. A system for screening an item at a security checkpoint, the security checkpoint including a checkpoint screening station, said system comprising:
a. a memory to store image data derived from scanning the item with penetrating radiation at the checkpoint screening station, the image data conveying a three-dimensional (3D) image of the item;
b. a processor in communication with said memory, said processor being programmed to:
i. process the image data conveying the 3D image of the item to derive alternate image data conveying a two-dimensional (2D) image of the item, wherein the processor is programmed to process the image data by:
1. defining a plurality of projection paths through the 3D image of the item, at least some projection paths through the 3D image in said plurality of projection paths extending along convergent or divergent axes; and
2. projecting the image data conveying the 3D image of the item along the projection paths to derive the alternate image data conveying the 2D image of the item;
ii. cause transmission of the derived alternate image data conveying the 2D image of the item for display on a display screen of a screening station, wherein the screening station is in communication with the system over a computer network;
iii. cause transmission of the image data conveying the 3D image of the item for display on the display screen of the screening station, wherein the transmission of the image data conveying the 3D image of the item is performed subsequent to or in parallel with the transmission of the derived alternate image data conveying the 2D image of the item.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201862645052P | 2018-03-19 | 2018-03-19 | |
| US62/645,052 | 2018-03-19 | 2018-03-19 | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019178666A1 (en) | 2019-09-26 |
Family
ID=67988257
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CA2018/051226 (WO2019178666A1, ceased) | 2018-03-19 | 2018-09-28 | System, apparatus and method for performing security screening at a checkpoint using x-ray and ct scanning devices and gui configured for use in connection with same |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2019178666A1 (en) |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6707879B2 (en) * | 2001-04-03 | 2004-03-16 | L-3 Communications Security And Detection Systems | Remote baggage screening system, software and method |
| US20130120535A1 (en) * | 2011-11-11 | 2013-05-16 | Hongrae Cha | Three-dimensional image processing apparatus and electric power control method of the same |
| US9286538B1 (en) * | 2014-05-01 | 2016-03-15 | Hrl Laboratories, Llc | Adaptive 3D to 2D projection for different height slices and extraction of robust morphological features for 3D object recognition |
| WO2017005757A1 (en) * | 2015-07-06 | 2017-01-12 | Danmarks Tekniske Universitet | A method of security scanning of carry-on items, and a carry-on items security scanning system |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111598147A (en) * | 2020-04-28 | 2020-08-28 | 合肥格泉智能科技有限公司 | International express image checking device and system based on CT equipment |
| CN115165937A (en) * | 2022-05-23 | 2022-10-11 | 东软医疗系统股份有限公司 | Air correction method and device, storage medium and computer equipment |
| WO2024000251A1 (en) * | 2022-06-29 | 2024-01-04 | 京东方科技集团股份有限公司 | Display control module and display control method, and display apparatus |
| US12477244B2 (en) | 2022-06-29 | 2025-11-18 | Beijing Boe Technology Development Co., Ltd. | Display control apparatus, display control method and display device |
| CN119693935A (en) * | 2025-02-21 | 2025-03-25 | 杭州睿影科技有限公司 | Method and device for marking items |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2019178666A1 (en) | System, apparatus and method for performing security screening at a checkpoint using x-ray and ct scanning devices and gui configured for use in connection with same | |
| US10019833B2 (en) | Luggage visualization and virtual unpacking | |
| US20070297560A1 (en) | Method and system for electronic unpacking of baggage and cargo | |
| US9019795B2 (en) | Method of object tracking using sonar imaging | |
| JP4588736B2 (en) | Image processing method, apparatus, and program | |
| JP5763551B2 (en) | Apparatus and method for viewing objects | |
| CN101604458A (en) | The method that is used for the computer aided diagnosis results of display of pre-rendered | |
| WO2014145908A2 (en) | Method and pipeline processing system for facilitating responsive interaction | |
| US20130009957A1 (en) | Image processing system, image processing device, image processing method, and medical image diagnostic device | |
| US20050251038A1 (en) | Multiple volume exploration system and method | |
| CN112037324B (en) | Box image three-dimensional reconstruction method, computing device and storage medium | |
| US20250308147A1 (en) | Systems and methods for automated rendering | |
| CN107693039A (en) | X-ray detection device, cone-beam CT-systems and its imaging method | |
| CN110520900A (en) | Object projection in CT X-ray images | |
| KR20220043170A (en) | System and method for generating a 3D color representation of 2D grayscale images | |
| US12033268B2 (en) | Volumetric dynamic depth delineation | |
| US11967081B2 (en) | Information processing apparatus, non-transitory computer-readable storage medium, and information processing method | |
| US20180308255A1 (en) | Multiple Three-Dimensional (3-D) Inspection Renderings | |
| US11138791B2 (en) | Voxel to volumetric relationship | |
| US11113868B2 (en) | Rastered volume renderer and manipulator | |
| Khmelev et al. | Visualization system for a radio images | |
| Donatsch et al. | 3D conversion using vanishing points and image warping | |
| Song et al. | Three-dimensional electronic unpacking of packed bags using 3-D CT images | |
| Neubauer et al. | Novel volume visualisation of GPR data inspired by medical applications | |
| Ritz et al. | Seamless and non-repetitive 4D texture variation synthesis and real-time rendering for measured optical material behavior |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18910532; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18910532; Country of ref document: EP; Kind code of ref document: A1 |