GB2265784A - Video image motion classification and display - Google Patents
Video image motion classification and display Download PDFInfo
- Publication number
- GB2265784A GB2265784A GB9225180A GB9225180A GB2265784A GB 2265784 A GB2265784 A GB 2265784A GB 9225180 A GB9225180 A GB 9225180A GB 9225180 A GB9225180 A GB 9225180A GB 2265784 A GB2265784 A GB 2265784A
- Authority
- GB
- United Kingdom
- Prior art keywords
- motion
- pixel
- image
- filter
- flag
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 230000033001 locomotion Effects 0.000 title claims abstract description 156
- 230000004044 response Effects 0.000 claims abstract description 10
- 238000000034 method Methods 0.000 claims description 54
- 230000008859 change Effects 0.000 claims description 38
- 238000001514 detection method Methods 0.000 claims description 27
- 238000004040 coloring Methods 0.000 claims description 23
- 238000013461 design Methods 0.000 claims description 13
- 238000011156 evaluation Methods 0.000 claims description 11
- 238000004891 communication Methods 0.000 claims description 8
- 230000035945 sensitivity Effects 0.000 claims description 6
- 238000010586 diagram Methods 0.000 claims description 5
- 238000012545 processing Methods 0.000 claims description 5
- 230000000007 visual effect Effects 0.000 claims description 5
- 230000005540 biological transmission Effects 0.000 claims description 4
- 238000001228 spectrum Methods 0.000 claims description 3
- 230000003595 spectral effect Effects 0.000 claims description 2
- 238000000354 decomposition reaction Methods 0.000 claims 1
- 238000000605 extraction Methods 0.000 claims 1
- 238000009877 rendering Methods 0.000 claims 1
- 230000000979 retarding effect Effects 0.000 claims 1
- 238000012163 sequencing technique Methods 0.000 claims 1
- 230000009012 visual motion Effects 0.000 claims 1
- 230000004438 eyesight Effects 0.000 abstract description 4
- 230000002123 temporal effect Effects 0.000 description 47
- 239000013598 vector Substances 0.000 description 29
- 230000006870 function Effects 0.000 description 21
- 239000011159 matrix material Substances 0.000 description 19
- 230000008569 process Effects 0.000 description 18
- 230000011218 segmentation Effects 0.000 description 10
- 230000008901 benefit Effects 0.000 description 7
- 230000037361 pathway Effects 0.000 description 7
- 239000002131 composite material Substances 0.000 description 6
- 238000004458 analytical method Methods 0.000 description 5
- 230000001419 dependent effect Effects 0.000 description 5
- 238000001914 filtration Methods 0.000 description 5
- 238000012546 transfer Methods 0.000 description 5
- 230000009471 action Effects 0.000 description 4
- 238000012512 characterization method Methods 0.000 description 4
- 238000013507 mapping Methods 0.000 description 4
- 238000005070 sampling Methods 0.000 description 4
- 230000009977 dual effect Effects 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 230000008447 perception Effects 0.000 description 3
- 230000000737 periodic effect Effects 0.000 description 3
- 238000004422 calculation algorithm Methods 0.000 description 2
- 230000001149 cognitive effect Effects 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 2
- 238000005259 measurement Methods 0.000 description 2
- 238000005293 physical law Methods 0.000 description 2
- 230000009467 reduction Effects 0.000 description 2
- 230000009885 systemic effect Effects 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 241000191291 Abies alba Species 0.000 description 1
- VYZAMTAEIAYCRO-UHFFFAOYSA-N Chromium Chemical compound [Cr] VYZAMTAEIAYCRO-UHFFFAOYSA-N 0.000 description 1
- 235000006843 Elaeocarpus angustifolius Nutrition 0.000 description 1
- 240000008395 Elaeocarpus angustifolius Species 0.000 description 1
- 238000007476 Maximum Likelihood Methods 0.000 description 1
- 230000003044 adaptive effect Effects 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000008033 biological extinction Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000019771 cognition Effects 0.000 description 1
- 230000000052 comparative effect Effects 0.000 description 1
- 230000001010 compromised effect Effects 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- JHIVVAPYMSGYDF-UHFFFAOYSA-N cyclohexanone Chemical compound O=C1CCCCC1 JHIVVAPYMSGYDF-UHFFFAOYSA-N 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 230000003111 delayed effect Effects 0.000 description 1
- 230000002939 deleterious effect Effects 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 239000003814 drug Substances 0.000 description 1
- 238000003708 edge detection Methods 0.000 description 1
- 238000009472 formulation Methods 0.000 description 1
- 239000012634 fragment Substances 0.000 description 1
- 230000008571 general function Effects 0.000 description 1
- 239000011521 glass Substances 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 230000001939 inductive effect Effects 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000005192 partition Methods 0.000 description 1
- 238000011002 quantification Methods 0.000 description 1
- 230000000306 recurrent effect Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 230000001629 suppression Effects 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 238000010200 validation analysis Methods 0.000 description 1
- 230000016776 visual perception Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19634—Electrical details of the system, e.g. component blocks for carrying out specific functions
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19678—User interface
- G08B13/19691—Signalling events for better perception by user, e.g. indicating alarms by making display brighter, adding text, creating a sound
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/587—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
- H04N5/145—Movement estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/015—High-definition television systems
- H04N7/0152—High-definition television systems using spatial or temporal subsampling
- H04N7/0155—High-definition television systems using spatial or temporal subsampling using pixel blocks
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Television Systems (AREA)
Abstract
Motion in an input video image is classified (classes 14) according to whether motion occurs or not in two successive intraframe periods p, q. The colour and/or brightness of pixels within the displayed image is determined by the class of motion detected and may be used in surveillance systems to alert the viewer to motion or in machine/computer vision and guidance intelligence as used in robotics. The system uses a finite temporal-domain impulse-response filter (FT-DIR filter) for coding the motion. <IMAGE>
Description
2265784 F=::; Co Fi T F' T Ic-) N "%,ision Motion classification and E
Display " C 11 ainms The present invention primarily relates to a method of movement detection within a television display image to produce a coded screen overlay upon the determined luminance (or B&W) image area which is by this device highlighted in image, as additional brightness or colouring hue or both, in accordance with a catagorisation by four separately identified motion classes or types The purposes envisaged cover scene surveillance for trespass, intrusion, crime, security and/or reconnaissance for areas under guard or observation while using a scanned video, or other television system, of either the broadcast standard or other (e g CCTV) type These have either a digitised or a related digitisable pixel basis (see Fig 1 and 9 as the development).
The same capable method for the electronic perception of motion will provide suitable and convenient data for machine/computer vision and guidence intelligence as used in robotics and other forms of cybernetic or mechatronic systems technique Within a sensitive scene certainty of pixel quantity values the interpreted kinematic behaviour of the dynamic environment is impulse-modelled over the forest of spaced pixels, or sampled data points (Pk(n),x,y) for k(n) comprising a video frame (see Fig 3) in the period of a 3-frame cascade The digital FT-DIR-filter can thus specify a pixel data map in relationship to scene object and motion conditions, and so thus identify pixel categories for intelligence purposes in any context (see Fig 1, 2, & 3) The classified image areas correspond to class 3 indication in relating to positive or forward temporal difference where the image object will only just be within the later 'q' quanta intrafield (being the image number field, as existing between two temporally-adjacent video data frames, e-g k(n-1) and k(n)) and a class 2 indication in relating to negative or backward temporal difference where the image object will only just be in the earlier 'p' intrafield (see Fig 3).
The reference plane for both these statements upon the condition of the image object are taken about the central video frame position k(n) ( see Fig-1 & 3 again).
The image computation required in algorithm is derived from pixel data relating to a series of three video frames, in the time sequence of a data cascade and as related to the natural laws of physics found of kinematics.
The kinematic identifier number (m) of flag b(m), as the boolean quaternary to any pixel reference point (Pk(n), x, y) within the video frame dimensions, will enable a cybernetic determination of the motion catagory to be found of the video environment in the image point for robotic/machine intelligence; to be boolean decision dependent, under flag number control, on movement within the access field of the mechanism action.
3-frame cascade model of the scanned and pixelised video-signal allows the understanding of a theoretical description The circuit processes and image properties incorporated within the method are used to distinguish motion features of the objective environment as data and/or display.
The method proceeds by detecting picture change occurring at the interframe image point found between two successive pixel frames (e g analogue/NTSC/PAL, D/HD-NAC, fully-digital HDTV or future HD-Divine) Within a staticised 3-frame cascade held for a frame period interval these can produce resultant logical boolean or finite automata change variables 'The filtration of class categories can be, by a data processing scheme, under dedicated computer, multiple array or distributed microprocessor control ( see Fig 2, 3 & 4).
Thereby four clear and distinct classes of motion are evaluated from an analysis of the temporal intrafield for boolean variables as 'p' and 'q' to reason from two U Np.
A q = m) into one of four motion classification numbers (m) where Em Il to 4 l is for b(m) which is more generally known as a picture flag The fundamental acuity aperture consistent with the centre principle of the image filter performance need only be one pixel, pel or picture point element taken through three cross-sections taken laterally to the time-domain axis.
Thus the flag number b(m) in boolean form can be kept as a matrix component and hence a vector determinant within the video frame dimensions and metric of a frame array as lm I 1 to 4 l for either the picture display, by a segmentation process in 'pseudocolour', or alternatively for further procedure or communication within an information (IT) system such as a machine intelligence (CI/IKBS) process.
The motion class numbers or flags (as m of Fig 7 & 8) are categorically given to each and every pixel point, and can be used to singularly or by group segment the appropriate highlighted colour, or 'brightened' picture areas; as an overlay signal upon the luminance for a monochrome video picture This provides for the viewing of both the original kinematic scene detail in monochrome, with machine cognitive understandings of the objective phenomenon in motion by colour only Being in accordance with disclosed methods and procedure to form a single composite picture display of an observed scene In this way the enhanced kinematic reality and visual awareness of motion change modes in visual perception are more quickly brought to human attention, and human cognitive comprehension, by a motion-coded colouring in an overlay with a pixel segmentation of the scene area (see Fig 9).
The motion-coded colouring of an image using the pseudocolour overlay principle described can aid the analytical and inductive observation in such sciences and related studies as physics, medicine, astrophysics, astronomy and biological or life sciences Here the motion class determined of a stage, or phase of movement, or physical temporal change, can be brought to a normal visual reality, or alternatively presented in a suitable form for unit measurement or other quantification on a VDU, monitor or TV screen.
The objective image movements are classed as (flag 1) the object motion in continuation, (flag 2) the initial commencement or start of motion, (flag 3) the final stopping or cessation of motion, or (flag 4) the absence of a detectable motion event The first determination is the infinitessimal qualifier Sq / At, or quite simply Sq, where A t is the interframe periodic time based upon a pixel/digitised video signal (e g CCIR 601-1 and 709 recommendations, covering digital and high definition TV system standards) for television display A Sq/,Zt qualifier occurres at a physical edge or other object discontinuity Each pixel can produce an exclusive and individual flag number and so in an algorithm- based procedure will result in part with a colour-coded segment identification on a screen of the commonly-affected image areas under motion An observer's attention is thus drawn to such indicated disturbances for human intervention, command and/or control.
Additionally the method can provide the multichannel specific b'(m) flag, generated in pixel addressable order of the input data, in potentially being fully computer compatible to artificial intelligence AI/IKBS systems This can be a further or extended means of vision intelligence, security, caution and alert The scanned video signal, taken from the source object on the method input, following signal component sampling if and as necessary, for the Finite Temporal-Domain Impulse Response Filter (FT-DIR-filter) can be fed from the television camera Any video signal found sensitive to temporal change of a source object in the picture image on detection, in a camera tube, as an electrical response to whatever part of the electromagnetic, or sonic/ultrasonic spectrum (e g visible, IR, UV, etc), or other signalable source by a transducer, and exceptionally need only have a single scanning waveform or matrix pathway (e-g monochrome or luminance) as a initial basis Although normal colour signals (e.g RGB, Y & U/V or YCC, etc) can in video be individually and/or in combination provide the video input data for the 3-frame cascade property of the image This is also a physical phenomenological description of motion kinematics within the rectangular co-ordinates of the (Pk(n),x,y) video frame reference point, containing a pixel forest and using discrete mathematical algebra to physically specify the actual physical mapping function of the device The phenomenon is not based on highest fundamental frequency (hff) considerations.
The particular b'(m) flag can be found from a number of independent b(m) flags, each with a differing sensitivity in, say, originating signals by using various spectral wavelengths or hues Only a single flag number is thus given to any pixel position located within the display image area.
Eynchronous C O peration- The FT-DIR-filter described here is understandably related to the main types of digital signal processing filters Latter parts of the change filter relate to the common FIR type but an impulse response is found so the IIR type function also compares However the nearest relationship is with the lattice type, in a normal spatial plane for a temporal image A 11 data of the image cascade is ordinal to a commonly-related time axis but a recursive pixel function of 'q I upon I Pl provides two important temporal vectors to additionally define uncovering' class 2 and 'covering' class 3 motion conditions for any reference pixel The 'uncovering' condition is met by a temporal 'retroversal' vector and 'covering' by a temporal reversal' vector, each under mutually exclusive conditions.
The temporal image reversal vector (when m = 3) operates in the feedforward direction while that for the retroversal vector (when m = 2) operates in the feedback direction There are also physical definitions for the continuous or straight through 'entrant' class 1 and 'null' class 4 vectors (see Fig 2 & 3).
The vertical or normal frame rate (fit) of the video image scanning may be at the U K and European rate of 50 Hz, the U S rate of 60 Hz, or otherwise as may be required in advantage for a particular application However there is some additional advantage to be had for a communication channel by using ultra-low-data-rate methods where the actual 3-frame cascade has a sampling time of > 2 x E t, or fi/2 Hz, due to Nyquist criterion) Hence the frame period may need to be at a very low frequency of perhaps only several Hz Then each 3-frame sample may benefit in having a slightly longer observation time and also reduce communication data and thus analyser traffic density This may be important with remotely displayed screens were a very small time delay (of about A t) may be considered negligible The FT-DIR-filter properties are independent of image point (or spot) scan velocity, however image points or pixels need to be linearly corresponding with time- The essential coding method can be used with either frame field-interlacing (as for 625 line PAL System I Standard for broadcast television) or a simple and sequential single-frame scan without alternate line field interlace to produce the overall spatial-temporal image as a display raster.
The preferred number of pixel points in each row (x) and/or column (y) may be freely chosen within this given lattice filter system which may be considered advanced, or be in accord with any television system standard (analogue/digital), and may have any aspect ratio and spatial parameters for the pixelisation Within the mode of operation described there will be a slight error delay to be found at the extreme edges of the display screen if spatial filters are applied to the image field Additionally this may be due in part of the spatial ( al) parametricisation processes which are optionally included in this specification In some particular applications an assurance of reasonable certainty in the corrective perception and/or logical cognition of movement will suggest the better systemic use of digital image filters in the domain of spatial rather than the intrinsically essential and fundamental temporal relationships as described here These items will work with the already derived digitised data already abstracted from the intrafield.
With the use of multiplex analogue component (-MAC) based TV and video systems, using sampling, the pixel absolute difference (PAD-transform, see later) amplitude values are nyquist-criterion sampled subtractions (during time period L t) as the differences leaving the analogue (PAD-transform) as the temporal field component for the resultant The absolute difference limit transfer (CDLT-value, see later) is then more simply analogue at the value of comparison when a boolean logic variable of q can be evaluated for the component referenced pixel in the central temporal location of (Pk(n),x,y) However a digitally-based reference of the ADLT-value (see Fig 6 c) will prove to be both compatible and more stable whichever method appropriate, analogue or digital, is to be utilised for the PAD-transform, as finite field components for the comparison of data on input The PAD-transform component represents the given (p & q) intrafield information as data for following and more analytic functions.
The three exact pixel points lP(n-1),x, y), (P(n), x, y) and (P(n+ 1), x, y)l should each temporally be linearly and accurately superimposed (i e in a straight line) with time, giving precision to the picture display and accuracy to the segmentation location over the original scene point.
The given use of digital spatial-image filters will improve practical performances in removing some of the deleterious effect of noise in the video signal values and carrying waveform at intermediate points, within the FT-DIR-filter operation but they do not actually contribute to the fundamental principle, in essence, of the invention as detailed here.
although the motion classes to be captured are given as flag numbers of 1 to 4, any other algebraic alphabet or symbolic representation could alternatively be matched to also identify with the significance of the distinct classes, and be in keeping with the other elements of the information system described here; but in code this will maintain the same data transmission or access rate, on export, at a system interface.
m-otion-class related flag numbers, either b(m) or b'(m) which have been derived, by the FT-DIR-filter method given, may also be used to give motion class display and/or indication without the original scene picture detail This also may be produced by other semiconductor (e g led, laser-diode, etc) or gas plasma type display panels, etc.
Final indication with a single pixel basis from a k'(n) frame and/or a resultant as determined from data derived from a spatial group of such pixels, of a k'(n) segment divided frame, may be used Single 'led' type indications of motion class occurrence are a further possible feature (e g for indication of motion class 2 on the initial commencement/start of movement) of the image property for either single or group multiples of pixels over any screen space.
In accordance with the picture frame rate as the frequency (in Hz), the concurrent sequence of 'colouring' is used to update the central k'(n) frame to related motion class(es).
Consequently the indicative pseudocolour overlay, of a colour-coded image, provides an immediate identification of motion or movement-related change This is to be immediately obvious by use of a colour or hue change, at the near instant of recognition by the computation The specific technique may be automatically sentinel' in operation and so can improve the subjective content of surveillance presentation.
other uses for motion class determination, as given fundamentally by a basic FT-DIR-filter technique, for any envisageable purpose, are incorporated within this proposal.
Here two practical equipment implementations are discussed.
The first use of the flag number b'(m) or b(m) array is for scene image display, with motion class coding, and the second for intelligent machine vision utilising relational data.
-he o Cperational specification.
Fy way of example only some specific embodiments of the present invention will now be further described, with reference to the accompanying drawings, in which:- Fig 1 is a space diagram overview of the motion classification, input to output image with / t delay; Fig 2 is a diagrammatic view of the pixel image properties for a picture line-row in their temporal relation to motion class and image 'segmentation class'; Fi S 3 is a schematic view illustrating the digital signal properties, identified from a 3-frame cascade of the video picture, for each motion class giving a 'segmentation class'; Fig 4 is a chart showing image intrafield assignmenrt in relation to motion class number; Fig 5 a suggested computation specification to determine the DLT-vaiue parameter and characteristic; Fig E a/b/c/d/ in an individual series of staged processes illustrates the sequential scheme of intrafield processes upon which the overall system of the invention is both comprised; Pk(n-1) = Field-1, Pk(n) = Field-3, Pk(n+ 1) =
Field-5, intrafield 'p' is between fields 1/3 and 'q' 3/5.
Fig 7 is a partial illustrative circuit of the electronic device in data process and pathway detail for the FT-DIR-filter method; Fig 8 is a block diagram of the circuit processes originating from q intrafield determination to pixel flag evaluation and output; Fig-9 is a motion-coded diagram for a picture overlay as is given by a 'pseudocolour' display image.
General Frinciples of F D IR-F-ilter operation.
(see Fig 7 & 8) recursive or pulled-through' image method on the binary-boolean intrafield existentials of motion classification, by a minimum entropy coding upon binary data, is used to provide the quaternary data of b'(m) where lm Il 4 l is a number coded component, within an array, which from a matrix form gives a pseudocolour frame for screen display From this temporary coded storage a motion class colouring for a picture of segmented areas can be composed.
The temporal reference within the cascade is at the point of time occupied by the plane of frame k'(n) The appropriate flag number (m) is the source data used to colour the image when augmented onto the 'black and white' or monochrome picture contrast of the objective image by the luminance (Y) signal- The Image F-roperty Model (see Fig 2 & 3) From Fig 2 we can see that adjacent pixels of the temporal series found of k(n-1), k(n), and k(n+ 1) frames and show the duals of the binary boolean change conditions between the temporal pairs, with the illustrated elemental or simplest binary alternation in sequence as follows:- o or o -This is rather than a temporal pair of pixels with each showing each a matching (q S q) pixel value, as pixel point pairs during the 3-frame cascade sequence as now follows:- o o or The pixel temporal duals (o,-) are where a o represents a high picture (pixel) value on a motion event as can represent a low one The graphical patterns portrayed can represent each of the four class categories A definite (q > ) change detection results in a binary 'q' = 1 evaluation.
specific logical connectives define the discrete relationship between these difference ('q') and/or similarity (-q) pairs Combinatorially these provide a set of four equations in boolean logic in each defining a natural physical law, in a rigorous determination of combination for a motion class onto frame k(n) In configuration with the objective scene, and as deduced from the Fig 2 drawing, it may be seen how any actual picture cascade of linearly superimposed pixel values can, by rule, fit only one of the four motion classifications and thus specify the required performance of the filter.
The resultant data move is directly signified as a simple linear 'informational vector' (as a unit-length temporal vector, A t) between equivalent frame pixel co-ordinates (Pk(n),x,y) from either direction (+/ A t) onto k(n) Its detection is dependent upon illustrated Fig 2 image properties found of the 3-frame cascade, and as may be compared with Fig.
3 & 4.
These image data properties are computational and are derived from the two temporally-adjacent but contiguous intrafields (p & q) between the frame-scanning of the source object image These temporal picture vectors depict the minimum effective shift of data towards the central k(n) display frame for a local and/or distant screen.
The word 'pixel' may also represent any single spatial-temporal sample, or element, of a video picture signal which has linear correspondence through three points of the consecutive video frames For fully-analogue video the picture signal must necessarily be suitably sampled, and the component held at the pixel location The two intrafield booleans ('p' & 'q') for pixel informational changes (see Fig 3) to be brought to finite automata:L ocation of change existentials l a l & p I and l S l i ql between frames, are respectively each a singleton in both the pixel and the temporal location of their intrafields; within the 3-frame cascade, and representational model.
Filter E Data _jalue Filtering ( finite 'mean' and discrete median').
U.jnique singleton l 31 l transfers as a boolean variable flag-values can normally be lost at a pixel following the effective action of the mean' window filter on the topological surface of the PAD-transform forest The position and function of the 'mean' filter upon the digital component values of the individual PAD-transform outcomes is to the removing of extraneous small value error, noise and distortion products.
H ence the need for fourier spatial-temporal component filtering; 'mean' will depend upon the quality of the video signal input, and the associated system methods (e g.
analogue/digital) used in transposing the temporal PAD-transform values.
In method the PAD-transform process itself is cancelling of long-term periodic frequency, noise and some other deficiencies in the temporal domain which may therefore need only low filter co-efficients Both filter procedures operate upon a window or neighbourhood group of pixel organised components Later, on the discrete flag-number values, the median' filter can entirely remove spatially isolated fragments and single pixel flags on grounds of relative simplicity, assurrance of true change detection and a lower resultant demand upon any linking communication channel These additional filters follow in need the theory of a maximumlikelihood detection of significant picture change ( S and/or m) to be found in the video signal.
%/ideo signal C hange D etection ( see Fig 2, 3, & 4).
g, singleton occurrence (C) of pixel information in perspective through the 3-frame cascade can initially be assumed to be specific and analytic on a l 3 l & ql or to be later as l l p if as a change event ( 6) which subsequently, upon threshold or limit, gives a binary boolean 1 ' on existential proof Many intrafield scanning cycles take place in practice without a change event ( 6 q) so usually the recurrent binary-boolean evaluation is a 'o' for the preliminary or 'q' digital frame-store.
The identifier filter is for the discrete codification of appropriate motion attributes for each pixel and is based upon a finite linear element or vector basis between equivalent pixel point co-ordinates (Pk(n),x,y) of each of two adjacent and facing facing video frames These binaries are to be found between two serial frames about a central reference to the frame (kn) which is the simultaneous display frame for the four identifiable classes of image motion.
B y the assignment of a binary logic value, of either of l O '/'1 ', for each and every directly pixel-related location for both of the intrafields q and p as data, is stored a doublet' of respectively, the current and the previous intrafield evaluations, as two binary booleans, each in its array of a digital frame-store The connective logic between the distinct boolean variables 'p' and 'q' combinatorially gives the four variables of motion class, as the finite automata resultants, in the number set and range l 1-4 l The pixel address co-ordinate remains the same for each video frame through the 3-frame cascade within the finite time domain and continuity analysis An Initial o Cutline of the Image Fpicture Fjroperty (see Fig 2, 3, 4 & 9) as a temporal Karnaugh map to Venn diagram.
The cascade apparatus for the video lattice may also be more generally seen in the normal form of the (FIR) transversal section of a Finite (Temporal-Domain) Impulse Response Filter to identify image object motion class (as here a FT-DIR-filter), principally for the outcome as flags for the following four classes, used to define the kinematic attribute aspects of image processing purposes (Fig 7 & 8 later) and similar protocols.
The Notion-Attribute Inference Statements for the
Near-microscopic Pixels (Pk(n),x,y) as Conservative Axiomatic Postulates; upon a pair of temporally-related variables:
(compare Fig 3, 6 a/b/c/d, 7 & 8) class 1: picture information as data gives a class 1 number flag as b'( 1) in identification of motion existing in both intrafields given upon a conjunction of booleans q' and p' Ai single complete computation upon video frame k(n) is based upon these intrafields within the kinematic 3-frame cascade Hence a motion class flag '1 ' is placed as a boolean component into the flag array as b( 1) Class ( 1) identifies with the central part(s) of a body or object in the continuing motion of translation as analysed in intrafields q and p (with a suggested pseudocolour ( 1) of green Fig 9).
class 2: picture change or motion information as alternation data is detected conditionally in an intrafield by boolean p' alone within any 3-frame cascade (thence t l p = TRUE = '1 ' and I 16 q = FALSE = '0 ') Hence a motion class flag '2 ' is placed as a boolean component into the flag array as b( 2).
The class ( 2) identifies with the areas behind or at the rearward edge of a body or object in motion, as analysed in intrafields q and p (pseudocolour ( 2) suggestion red Fig 9) class 3: picture change or motion information as alternation data is detected conditionally in an intrafield by boolean q' alone within any 3-frame cascade (thence v l 6 q = TRUE = '1 ' and 3:16 p = FALSE = '0 ') Hence a motion class flag '3 ' is placed as a boolean component into the flag array as b( 3).
The class ( 3) identifies with the areas forward of or at the leading edge of a body, or object, in motion as analysed in intrafields q and p (pseudocolour ( 3) suggestion blue Fig 9) class 4: picture change or motion information as data is not equivalently detected bi-directionally in neither intrafield by boolean 'p' nor 'q' within any 3-frame cascade so leaving both logically false or '0 ' Hence a motion class flag 4 ' is placed as a boolean component into the flag array as b( 4) The class ( 4) identifies with a body or object analysed not to be in motion in either of intrafields p or q (pseudocolour ( 4) suggestion magenta Fig 9) n _ummerative Description of o Cbject p Motion as gased on
Image Input and onto AÄttributes (Fig 2, 3, 4 & 8) for F-rediction.
c Glass 1: the object is in continuous motion throughout at least part of both of the two referential and contiguous frame intervals, thence from both change events (I p and 6 q) which are identified within the 3-frames of video, in the cascade, hence intrafields giving first a value 'q', becoming one frame interval later the value 'p', each time having a value 1'.
class 2: the object image is in motion during the first interframe interval (hence intrafield 'p' from i p) but is stationary during the second interframe intrafield (q).
class 3: the object image is stationary in the first interframe intrafield (p) but is in motion during the second interframe interval (hence intrafield 'q' from 9 q).
class 4: the object image is stationary during both frame intervals (of intrafields 'p' and 'q', thence neither & p nor 9 q were effected) and with no detected change found within the cascade scene.
In this way an exclusive attribute formulation describing any image objectmotion state can be classified and this specifies the computational flag number designated in the digital frame-store which is referenced for k(n) It is by a computational reference to this four-level store of determinacy data that a survellience and/or security display overlay, or automata drive for a machine vision scheme, for a robotics or similar appliance, can be brought into effect (again maybe for an AI/IKBS type system) This specific ontological data from the flag array is also systemic for outbound assimilation at the export of m-values into a system elsewhere The motion class number categories as the determined image picture components in a quaternary data array can, in filter evaluated outcome, provide the outlet source matrix as the logic partition for a map onto other schemas of the overall system There any special significance of the motion class and indicative flag number b'(m) may be important.
The deliberate inclusion of temporal absolute difference mean' and also b(m) digital spatial filtering, within the specification, will reduce the flag number resolution limit.
But the contra advantage in eliminating unwanted spurious noise or spatial component distortion, in the video signal, is usually worth achieving Computationally flag-derived action, taken in subsequence, can thus be far more reliable The fundamental design principle as conceived here, within the FT-DIR-filter lattice, again remains intrinsic, regardless of where such other filtering is incorporated in any particular implementation.
group of final (b' (m), s) identical flag numbers lm I 1 to 4 l and of pixel row (x) with cardinality (s) is possible from unity to the greatest upper bound (typically where Cs Il to 722/1440 & beyond for any maximum spatial length sequencel of pixels, etc This as a set may be for CCIR 601-1 and 709 video recommendations, a digital video standard, and for a colour, or otherwise segmented, motion class display on the screen In realisation a chosen screen colour can be directly linked to each of the motion class flags; by a direct overlay with relatively little resolution error when co-sited with backcast registration.
-There are associated logical meanings by the following glossary of terms as applied over the 3-frame cascade:- (suggested pseudocolours are given for Fig 9) class 1 image 'fully motive'; colouring 1 coded (e g.
green).
class 2 image 'uncovering'; colouring 2 coded (e g red).
class 3 image 'covering'; colouring 3 coded (e g blue).
class 4 image 'stationary'; colouring 4 coded (e g.
magenta).
In machine finite automata, the motion class numbers ( 1 to 4) are created, stored and manipulated as a single identifiable binary word or boolean component b'(m) in an array matrix.
T-he Mrethod of o Dperation for a E Dynamic L attice :ensory Motion classifier and F=lag Generator (compare Fig 6 a/b/c/d/ and 8).
-The enumeration of motion class flag indication for any frame pixel point is evaluated over a time period which is sequenced by the 3-frame cascade held in a temporally- staticised, or time-locked video frame-store Only one intrafield variable, a boolean 'q', is under interframe computation at any one instance of time A linear identification, seeking a match by an entity mapping, is made between the coincident intercept of data sets for the pixel samples of frames (Pk(n+i),x,y) and (Pk(n),x,y) in the dedicated hardware of the filter The processes are completed for any intrafield, within the interframe period (/ t), so as to initially determine a value for binary boolean 'q'.
-he interframe periodic time as chosen (say 4 t) in any particular application is not important to the operating principle and processes of the method described here These however form a contiguous 3-frame cascade at the video frame cycle-per-second frequency (Hz) of the source signal The 3-frame sequence is utilised to designate a series as follows:preceding firstly k(n-1), then centrally k(n) and proceeding lastly k(n+ 1) frames (or fields), so for a convolution filter algorithm it quantitatively defines:- Pk(n-1) Pk(n) Pk(n+l) = b(m) these general function symbols are the interval intrafields (q and frame delayed p, respectively) between either of the two video frames of the picture signal It is across each interframe period (C) of the intrafield that a change detection of a 'q', to be the same value an intrafield interval later the 'p are analysed by a recursive convolution function This then is to evaluate b(m) for analytic elements from a simultaneous picture video image.
For equal pixel (Pk(n),x,y) co-ordination points between the k(n-1) preceding or first frame and the central or intermediate k(n) frame, see (Fig 6 a), an intrafield (PAD-transform) Pixel Absolute Difference transform calculation (magnitude differencing the lPk(n-1),x,yl and lPk(n),x,yl pixel component values alone) as a given modulus value of difference, is distinctively made between each of the respective luminance (Y) or, if wanted, the chrominance (U/V) picture data values for the pixel triad (P as Y & U/V, etc), see (Fig 6 b) If the determination of the intrafield
PAD-transform' value, as a discrete, finite temporal or fourier element, accedes an instantaneous Absolute Difference Limit Transfer-transform value (ADLT-value), for a pixel, a positive binary boolean is set (see Fig 5) The ADLT-value is held as an adaptive filter parameter of the pixels so a boolean logic transfer is set to a positive binary logic of one ('1 ') see (Fig 6 c) This can be a moving-average evaluation and be recurrently assessed upon the spatial centre (P(n), x, y) for each and every pixel These methods can include the consecutive line structure of any objective video image The on-going parameter value is spatially modified upon the interpixel basis that can be gained from a number of the immediately-processed pixel data values in a logic-gated circuit, and related to the overall dimensions of change, as may be determined by the video frame size (see Fig 5) An ultimate sensitivity may be limited by the quality of the video system as to the parameter standard on input, where the defects present in the signal may still be due to a variety of factors such as extaneous noise and distortion.
Temporal Image c hange-lDetection (Fig 2, 5, 7 & 8) (initially as l S Iq and temporally implicite follows 3 a 8 p) If quantitatively the finite fourier PAD-transform is significantly defined on threshold value as follows when:- PAD-transform ≥ ADLT-value, as threshold tested in scanning sequence order by retrospective and parametricised comparison, then 'q' and so alternatively later by a matrix transform 'p' both equal one ('1 ') Then a true detection (C) of image motion is made for the intrafield period A t, in a sampling process executed between the two video frames in respect to the central frame k(n) Hence respectively the event follows:- l E q = 'q', for a boolean detection of an intrafield pixel point, and so a frame interval A t later, without a further need for the identity detection, 21 8 p = 'p' has been proved by a partial change (C) in the interframe period by the detection method as already given Where primarily the quality lGq/A tl is found to be finite and true as a PAD-transform and is equal, or acceding greater than, a set ADLT-value parameter So a sharp edge is defined to uniquely exist l l Sql at the image location of assignment for a logical value of 'q' to be found Hence the infinitessimal in physical calculus for the frame period and intrafield of l 6 q/A tl translates into a unique (l) existance ( 1) to become coded by lR)Sql in finite computational automata at the point of perceivable change In motion recognition a catagory is primed with a logical notation connecting the value for q' with a previous value 'p' for the same pixel frame location (say for example (Pk(n),x,y)).
In a chosen temporal adjustment implicite to a specification these two existential variables (p & q) can be considered a logically connected pair within a complex lateral time domain under the inferences required of a unique quaternary variable or class number as an alternative motion descriptor.
The pixelwise ADLT-value is numerically a dynamically-weighted picture parameter having an appropriate characteristic in accordance with Y, U, or V component values, (see Fig 5, also 6 a/b/c/d/ which are a fully computed series) and hence is significant in data terms of image information aquisition and coding performance gained.
If any transfer process fails, maybe because of minimum scene or textural detail and/or lack of change in signal detail transferred, a conditional binary logic of zero C'O') subsumes in the computational code The image process can be repeated for every frame pixel co-ordinate point and this forms a temporally-dependent binary matrix This contains a tensor-field set as a component comprised of finite elements lx,y, & 'q'l in an initial data array with 'q' temporal which is for matrix manipulation according to the equation set to the final matrix-output array See Fig 6 b/c/ for the first picture intrafield inference for the boolean q'.
Conceptually this may be seen as being temporally between the first k(n-1) preceding and the second k(n) central frame but may be seen to be displaced into a complex plane from the main picture-signal forming the 3-frame cascade (Fig 1 & 2).
The first pair of video frames can provide a complex boolean Iq' from the intrafield between k Mn+ 1) and k(n), in the
3-frame triple cascade model By transferring this first binary boolean value of q I kept in a binary digital frame-store into a similar secondary intrafield 'p' store, the two intrafield lateral 'doublet' for cascade analysis under class axioms is set up (see Fig 6 c/d/) The now essential temporally recursive connection is provided by a frame-delay (q A, t) in a re-entrant transfer pathway comprising the 'doublet'.
Hence for the leading interframe interval the intrafield q processes of PAD-transform assignment and ADLT-value comparison can, on a validation test, give a 'O'/'l' value for the 'q' store With the advance of the proceeding video frame k(n+ 1) for further filter processing the 'q' values, representative of the preceding intrafield, are transferred as a recursive re-allocated to become the 'p' values as determined That is to say the circuit connectivity, for the value comparison with heuristic algebraic logic need only produce one component matix for a 'q' frame-store Obviously the q' intrafield component array is transferred at the frame-rate frequency (fi) of the video signal to be during frame interval E t the 'p' component in the array for the p intrafield matrix, the second part of the same axiom 'doublet' as derived from the 3-frame cascade (see Fig 6 b/c/).
BY the natural physical laws defining the characterisation of motion class lm I 1 to 4 l, the kinematic motion classifier, as the connective logic processer section, analyses the matrices of the 'p' and 'q' binaries, in their intrafield component array The recursively-based digital resultant is produced by a combinatorial algorithmic procedure in boolean proof testing and then allocating the class flag number b'(m) lm I 1 to 4 l on a computational rule basis.
-he operational derivation of motion class, as an image segmentation using the ADLT-value, may also be understood from the accompanying cross-sectional series of four digital video-frames as process cross-sections (see Figs 6 a/b/c/d/) Fig 6 a illustrates an overlay assembly for a row of pixel cross-sections of a horizontal table-tennis bat in normal vertical motion Fig 6 b represents the PAD-transform values derived from interframe luminance data, also known-as the finite fourier-difference of values for the two intrafields p and q Referring to Fig 6 c the intrafield digital value set q is here 'mean' magnitude filtered spatially to give a smoother pixel or surface contour characterisation Referring to Fig 6 d a fixed QDLT-value is shown drawn across a scanned frame-line or row of pixels through the smoothed PAD-transform values For the added requirement of a variable ADLT-value characteristic, the illustrated 'Luminance' scale equivalence for the intrafield booleans ('p' & 'q') would effectively move the line up or down in accordance with immediately preceding pixel values in the vicinity of the parametisation This existentially (:I) is intercept for any one pixel comparison (l) for ADLT-value So if the frame-line or row characteristic accedes the concurrent ADLT-value then a binary logic value of '1 ' is given for the intrafield boolean under a matrix evaluation of the
PAD-transform array A quick, instantaneous or fast sharp-edge spatial change (ax) detection is here the required objective.
The same computation process continues along the frame-line in being along the pixel row, and then down the pixel columns row-wise to complete the whole q intrafield to form the frame-store digits in scan sequence.
-he combinational control operation.
-he resulting 'doublet' relationship of the l'q' R 'p'l boolean pair evaluation is 'recursive' drawing on the 'p' and 'q' intrafields in separate and distinct time domains, with the same logical parameters, to form a dual image-motion detection scheme towards a motion class number for each and every computed screen-pixel point of the central source k(n) video frame The positive logic of motion detection by a one ('1 ') indicates an appropriately parametised (ADLT) detection (≥) of image motion having occurred within the picture intrafield period (as either a 'q' or later recursively a p' in Figs 1 & 2) The two tensor matrices of binary booleans (of 'p' and 'q' separately) are the outcome for each intrafield (PAD-transform) as a complex fourier variable, of finite temporal (A t) element, in a discrete evaluation.
The resultant tensor field lx,y, 'q'l R Ix,y, 'p'l, practically as a matrices 'doublet' of distinctly a p' and a q' binary component as lx,y,('p' A 'q')l With analytic logical recursion these give a flag (m) output of a 3-bit binary word for the boolean quaternary by the variable taxonomy of the connectives The interdependent result in the temporal interconnectivity of 'q' and 'p ' events is the output attribute in convolution of b(m) of the motion class filter (Figs 2 & 6 d) The combinatorial boolean lm I 1 to 4 l is a binary 3-bit word to form the component of a frame dimensioned matrix to be a current store for screen display or computer system interfacing (e g AI/IKBS).
-he flag number (RAN-type) store, as a matrix of frame dimensions and component cardinality containing lb'(m)Il to 4 l), can by a flag-number transcoder (R O N-type) device or otherwise, be used in further process to identify and replicate the necesary 'colouring' as the display hue and saturation for any pixel, line(s) of pixels (b'(m),s) or miscellaneous group of pixels, for a motion colouring overlay upon a luminance (b&w) contrast picture From this data a single composite k(n) frame can have two layers of data-based information with separate spatial-image boundaries as follows:- i) a first information layer presented as the normal luminance (Y) sigrnal as a (b&w) picture with contrast detail for the display in monochrom Te only from the input signal, ii) a second information layer of a (e g RGB or Y/U/V) chrominance signal of motionr-classified colouring detail, given by overlaying, is representative of the instantaneous motion state lmll to 4 l for a particular pixel and/or picture display area.
-he composite luminance picture can then have either a simple outline (e g 2-level binary), or a continuous analogue contrasted (b&w) luminance format to give a screen image.
Then a separate, pixel-based chrominence overlay of coloured segmentation areas which relate to dependent catagories of motion class, can be again coded to give a TV, monitor or VDU final display image for an observer.
For machine vision, or similar robotic/mechatronic purposes, the flag number b'(m) array, which relates to the image video-frame dimensions, can be used (of tensor format Ex,y,b'(m)l) for kinematic and thus dynamic or velocity referencing for robotic-machine guidence The spatially mapped areas of common motion-class catagory can provide a following system input or feedback indication of initial or resultant movement as a further basis for procedural action or development in the mode of operation There will be a minimum overall delay of one interframe period (A t, the time interval between two video frames) The Flag matrix as suggested is under continuous update of component values at frame-rate frequency (fx Hz).
-he most general case for a pixel-derived boolean attribute logic is, 'q' R 'p' = TRUE ('1 ') then by a combinatorial evaluation for b'(m) is bounded to the scalar range Em I 1 to 4 l The motion-detection booleans 'p' and 'q' taken in temporal sequence and catagory (see Figs 2 & 3) define motion class law within a characterisation of any 3-frame cascade.
If motion is found during intrafield q then by existential detection ( 1 l q = TRUE = '1 ') the boolean variable is set true as q' for logic '1 ' In practical circuit design the older 'p values are transferred in parallel from the forward time domain of the 'q' binary values in the matrix doublet' for all pixel points The binary boolean values of p and q' exist finitely in distinctly separate time domains between two different frames so the latter p intrafield variable is connectively pixel recursive upon the more current 'q' variable in the analytic circuit of the infinite filter.
The motion class equations for discrete pairing relations in defining the detection of motion-detection or finite impulse condition attributes which follow are a complete mathematical homogeneity in a distinctively discrete characterisation by four boolean relationships upon these two temporal binary variables.
mhe M 1 otion Classifier Filter section (see Fig 7 & 8); describing the data lines of the superposition connective network between the boolean logic values of 'p' & 'q' The motion class defining equations l 1 to 4 l for an intrafield and temporal doublet (p' & Iq') as the set of pixel co-ordinates (Pk(n),x,y) to determine class colouring.
The complete filter circuit is of a routed data continuity by number, in being dependently controlled upon the logical connectivity given by the intrinsic protocol rules of the equation set given here, as an analysis of the temporal code used for display image colouring, communication, or interpolation these m ay suggest onto' laws under the statements as follows:-
Canon class 1 In the normal instance of near-microscopic pixels, associated within the boundary of the intrafields, as having detected motion defining or both booleans 'p' and 'q', as being found true and logically positive, for when motion during an interframe period (A t) exists and is detected A class one motion flag ( 1) further defining a single picture frame k(n) is in evaluation of or bears the pixel definition:- (p /\ q) = TRUE = 1 for motion class 1 is when m = 1.
Nrjeither intrafield entity of informational data shows a coincidence on accedance to the threshold greater than (≥) for the pixel sets involved in k(n) to define an image dimension vector for a screen display which is bound within the discrete vector model.
-he block sections of the digital data filter ascertains the data system conditions for motion class detection The boundary conditions given by class 4 define an informational entrant' ( 1 l)vector, in existence during both the intrafields p and q From this data alone the overall extent and direction of the motion in full certainty cannot be found without also the points of commencement (class 2) and cessation (class 3) for a scene object in continuing motion, over the screen image pixel forest.
(anon (-lass 2 For when boolean p is found to be at logic one ( 1 '1) positive and boolean 'q' is found to be at logic zero ('O') negative, such that (nq) would be true, then a class two motion flag ( 2) is evaluated or defined for a pixel by:- (p A -n q) = TRUE = 2 for motion class 2 is when m = 2, or (p I q) = 2, a Schaeffer stroke connective or ordered AND NOT truth function.
C Only the second or q intrafield determines a coincidence identity on threshold for the PAD-transform data sets to define the bound points of the temporal 'retroversal' vector of the pixel sets This picture image property of 'uncovering' ( 2) critically defines the pixel locations where motion has commenced in having been detected for the time interval of the second intrafield q This is also the determined point of commencement for a temporal 'retroversal' information vector onto frame k(n), in the temporal domain determination The pixel 'imaging' point may well be contiguous to a number of continuing class 1 'entrant' vector and pixel point locations.
The vector pathway so defined is the actual line of a directed vector which may be over many frame intervals This derived information can allow the plotting on the display screen of a class-directed vector pathway of motion change, by way of the pseudocolour overlay The colouring image area formed can be easily understood by the observer using a VDU, monitor or TV display screen.
C-anon C(lass For when boolean 'q' is found to be at logic one ('1 ') positive and boolean 'p' is found to be at logic zero ('O') negative, or when (C-p) would be true, then a class three motion flag ( 3) is evaluated or defined for a pixel by:- ( p A q) = TRUE = 3 for motion class 3 is when m = 3, a ordered NOT AND truth function.
<Dnly the first or p intrafield determines the coincidence identity on threshold for the PAD-transform data sets to define the bound points of the temporal 'reversal' vector involved.
This picture image property of 'covering' ( 3) critically defines the image locations were motion has finished or ceased in having been detected during the time interval (A t) of intrafield q The resultant is a temporal 'reversal' vector which determines the end termination of any contiguous spatial pathway and thus is vector directed onto its location This pixel-bounded 'reversal' vector again defines data for screen display by a pseudocolour or segmentation overlay.
(-anon c(lass Residually when both 'p' and 'q' are found to be at logic zero (O') negative, then a motion class four ( 4) flag is evaluated or defined by:- (_ p A _ q) = TRUE = 4, or (p I q) = 4 for motion class 4 is when m = 4, a NEITHER NOR truth function.
With no change detection having operated during either of the intrafields p or q then a motion class 4 flag is given in nullity for the respective k(n) pixels to be so designated.
E Both the image intrafields p and q determine a coincidence identity on the data sets to define the bound of the temporal null' vector involved Here the temporal directions of the identity vectors in intrafields p and q would oppose each other.
Therefore no information to produce or direct a vector can can found from the video picture data, and hence the only inference is that at this picture point no motion or movement is found to exist The result is a pathway 'null' vector for the display of information by a pseudocolour segmentation overlay on the screen which may optionally be left blank under the nullity definition ( 4).
1-jence any pixel can be assigned a digital binary word in three bits (of a boolean variable) to represent the flag number using the logical connectives and functions as set out above The kinematic organisation of these four binary words is a principal objective in the overall design of the FT-DIR-filter here given A flag-number array has components of scalar values related 'onto' point references as an image pixel forest with rectangular co-ordinates organised as is the video frame The data input value comes from ontological functions, as determined by the laws of nature and the video signal characteristics Unique singleton identification of these digital functions gives the motion class quaternaries to be digitally implemented (see Fig 7).
In the description a mathematical law for úm Il to 4 l has been used to define the four motion classes based upon a pixel point reference (say Pk(n),x,y) which may carry the data for the luminance source information (Y) or any of the chrominence information signals (e g U/V, etc), as any video reference signal for whatever signal wavelength found of a object scene can be used to exploit the image property given by this specification The value of the flag number found of a particular signal input (here one of Y/U/V) must be separately determined and given a distinct flag number lm Il to 4 l from the available input object signals From this multiple but parallel and discrete function a single number value for b'(m) is then later evaluated by a minimum (m) value selection function or inferior value function taken from the inputs.
Method for M-ultiple \_;ideo-Eo-ignal Inputs; where each input may exceptionally have coding sensitivity to a distinct and different property of the source object.
-The use of a number of parallel, spatially coincident and temporally simultaneous information signals, with an originally analogue sampled and held, or multiple analogue component (-MAC type) basis, or digital, can be used (e g.
RGB, Y & U/V or YCC component triads) It may benefit if an appropriate priority function, based on the multiple availability of flag numbers (say Ym,Um, and Vm) for an exact reference of a common point, and is thus best related to the found numerical inferiority of the given group of (maybe three, or more) flag numbers This may be given by the following equation for the output matrix flag b'(m) as follows:- Channel Flag b'(m) = Inferior Value Function C Ym, Um, or Vml, or b'(m) = Inferior Value Function lRm, Gm, Bml; etc.
The number of selectable entities within this b'(m) function in compass can vary according to the video-signal aquisition method used; however any signal giving a perception or informational condition which is reliable upon its input information can be included within the above Inferior Value Function There must be an exact registration of pixel reference location for each motion flag included The selection outcome is a sensitive motion classification as best determined by individual FT-DIR-filter input subsystems for each input signal discrimination, according to the properties determined by each individual channel filter.
This optional final, or pictorially 'colour intercoded', flag b C'(m) from separate channels can more fully define the pseudocolour boundaries for a chrominance segmentation overlay The flag number b'(m) of the segmentation, in directly identifying with more of the background of scene detail than given of the picture luminance information alone, enhances the sensitivity of the method to motion events, resulting in only a change to one of the source input signals (of for example Y, U or V) The method of the temporal filter, in giving a motion-coded indication of an image object, can overcome the physiological and/or psychological confusion found when directly viewing a screen at a distance from the scene object, or viewing an object at a distance unaided.
The multi-inlet choice for composite m in description of selected possible video spectrum frequency or wavelength input to provide the signal(s) used in the filter(s) will depend upon the application and specification intended and thus the degree of complexity envisaged.
The composite m flag number as the determinate component in the b'(n) array can code the transmission for a distant matrix for a conformal allocation of data as a mapping or data flow onto a new k(n) frame from adjacent pixel data At a distance this further assimilation can produce or update both composite and intermediate images by a either a one bit '0 '/'1 ' or a four bit quaternary transmission code for a single, or a commonly identified group of pixels The necessary data can be exported or 'chained' from the flag matrix array to other systems Every point (Pk(n),x,y) gives a potential mapping,in being entity defined underaxiomatic premises as given, onto the logic universe of the video-frame as a flag b'(m); each with a cardinality of four in the integer range 1 to 4 inclusive The final flag can also be designated as the algebraic triple Eb'(m),x,yl set within a video-frame which has a pixel quantisation in scanning of the co-ordinates x -? across, and y down 4.
The binary digital approach has been to use structured methods for data in the logic design sections where information is numerically quantified from source object dynamics and connected upon the basis of physical objects, behaviour and events In this way the logical hardware items are detailed for a digital electronic-system design to provide an input aquisition of further information and for more than one envisaged application.
l:Design Supplication and Implementation Fxample.
F rom a flag number array, containing either of the b(m) or b'(m) components, the single '0 '/'1 ' bit quantum can represent the locational position of any motion class catagory, including a mutually exclusive but determinant option on classes 2 & 3 Such a quantum operator can be coded into vector constructs for later image reconstitution which may by a logic '0 '/'1 ' to direct reconstruction in the possible reconsitution of the central k(n) video-frame either locally, or when transmitted at a distance Only adjacent k(n) frame information need then be transmitted from source to destination allowing source k(n) frame suppression The choice of analytic motion class equations (by the statements for
Canons 1 to 4) for a true determination of a logic '1 ' from selected motion class flag values will effect the design recomposition.
e.g If a FT-DIR-filter when however gated lb( 1) and/or b( 2) alonel = TRUE then a logic '1 ' can direct representative data, in channel communication, from the frame pixel Pk(n+ 1) for a Pk(n) frame reconstruction If the criterion is not met then a logic '0 ' is representative for reconstitution (An uncovering' and 'fully-motive' class identification and/or correction see Fig 1 8 8 and specification "Bandwidth
Reduction or Datarate Conversion") The overall resultant registration in the temporal domain attained by the FT-DIR-filter is fast and true with the benefit of the quicker response times (as higher response extinction/cut-off frequencies) for (e g CCD) camera tubes and generally higher pixel-forest density.
The design changes for this revision in circuit configuration can change the flag matrix in content and operation A flag array b(m) component for any purpose can be the resultant produced by any class equation selected from the set of four, and be according to need This may demand modification to the motion classifier circuit to optimise the appropriate computation for the requisite flag values alone.
kjision Image ocation and spatial F-eriod Measurement.
E 3 y taking example of a single pixel reference the overall filter flag cycle of b'( 4) b'( 3) b'( 1) marks the leading edge of a moving edge or boundary for an object in motion.
There possibly then continues a temporal period of a b'( 1) flag condition (for say N x A t frame intervals), together with the appropriate spatial period, before a trailing edge or boundary cycle of b'( 1) b'( 2) b'( 4) Any given flag condition may exist in either the leading or the trailing edge detection condition for more than one 3-frame cascade period ( 2 x L t) The minimum spatial period across the pixel forest and the minimum temporal period through the frame interval Cs) between the b'( 3) b'( 1) leading edge change and the b'( 1) - b'( 2) trailing edge-change gives the necessary input data necessary to determine an indication on size and speed of scene and/or vision objects in motion These scene activities may be under human observation on a screen, or for AI/IKBS input or other cybernetic control which may be accessed by an external system and control The still image areas are always coded b'( 4) From such pixel data, within the 3-frame cascade, relative and comparative size measuration of objects which have undergone motion can be computed.
The most suitable equipment position for a FT-DIR-filter section and scheme within a system is dependent upon its intended function To benefit from the potential for digital data-reduction in performance a near-camera position is the likely best However with current TV system standards, or with scene image display and machine locational guidence as facilitation, the suggestion of a location may be compromised to be both near the screen and near the system controller The flag matrix outputs from the FT-DIR-filter can continue to drive a viewing screen with motion class information while also providing a direct interface access for the cybernetic data requirement of AI/IKBS automation whatever the motion classification.
with data identified by motion class for the screen colouring overlay and/ or the relocation of image information for a Pk(n) frame reconstruction, the b(m) or b'(m) value in binary can digitally be kept in the hardwear interface of a flag register store, b'(m) This completes the descriptive specification of an information gathering and using system (IGUS) giving an image ensemble language as a specific data and screen record (see Fig 1, 7 & 8).
The manufactured product design requires a family of logic gates upon a boolean algebra where the minimal transition or propagation times commensurate with a maximum reduction of gate circuitry 26/9/92 CL L /4 IM 15 1 - -rhe description specifies the method of the Finite
Temporal-Domain Impulse Filter as an electronic system for a device to provide on a screen the display of enhanced visual information of the image motion condition by a coded classification colouring using a number in the range El to 4 l A video signal input is provided by a television camera, preferably providing a digital signal (eg CCIR 601 system standard) An output is communicable and may be seen on a visual display unit (VDU), video monitor, or television set The view seen of the display image is defined by the movement coded of the source scene; in accordance with the motion type or class determination for indication by colouring on a television screen, or video picture monitor.
The retained monochrome or luminance image of the objective scene as claimed in claim I is coloured by hue CLLAIIS GB 9225180 0 codification, or more simply highlighted in brightness, having thus been catagorised primarily into one of the four motion class conditions lm Ilto 4 l, or dynamic image states, as follows, Image Motion Coding.
Flag Number (m) class 1:
Image Motion condition fully motive movement.
colouring (green) class 2:
C.lass 3:
ceasing or 'uncovering'.
commencing or 'covering'.
class,4 stationary, no movement.
(magenta or none) By separately identifying (class 2) 'uncovering' and (class 3) 'covering', from continuing motion the absolute direction, metric length, and time period or duration of an object motion in the image can be computed and made output as communicable data and/or displayed The change of direction in a scene object causes an image codification change between class numbers 2 & 3, and hence a screen colouring change in the display image is made from (red) (blue) I I I I I I I I I 1, I I
Claims (3)
- CLAIMS GB 9225180 0 -48- the associated flag The temporary retension in aframe-store of class
- 2 & 3 identified flags, can produce a colouring or brightness highlighting as an alerting indication of advancing ( 3) and retarding ( 2) motion and motion class 2/3 change in the source object.
- 3.- The image of the source object as in claim 2 can be displayed from an ordinary monochrome (or black and white) camera signal, from the normal luminance information, as for a grey-scale on amplitude, or picture contrast signal, together with the given additional colour overlay or highlighting' of a specifically coded hue The signal processing defines the colour information to be shown directly with the monochrome (B&W or luminance) picture.The motion synthesised colour information, as a 'false' colour or pseudocolour' rendering can represent one of the four classed motion conditions or states The computational extraction, from a video or television camera source, is for the motion class number l 1 to 4), as output by the Finite Temporal-Domain Impulse Response filter (FT-DIR-filter) with a video signal source.From claim 3 the three serial frames of video picture signal are kept in discrete digitised frame-stores of pixel values called a 3-frame cascade or triple kinematic CLAIMS GB 9225180 0 -49) trap Algebraically defined gate operations upon the digital operations are also characterised in binary on the bounded cascade data from signal information which supports the decomposition of the whole design.Temporally-adjacent signal frames have exactly opposite pixel locations which are subtracted through in difference to a single value of pixel absolute difference (-, e g.lPk(n+ 1) Pk(n)l) for the interim PAD-transform field.Where these difference values are found to accede a detection threshold (ADLT)-value, itself a computational design of an image parameter In parametricisation the ADLT-value attained determines the detection sensitivity and may automatically take a lower value for signals relatively free from noise and distortion The boolean variable given is a true detection of positional change so q' is made equal to one '1 ' in binary logic, otherwise a boolean nought '0 ' is found or subsumes on calculative omission The boolean value found is transferred to a second frame-store with a one-bit representation for each pixel point to later become simultaneously the respective p' value, one frame period later.- The characteristics of four canonical laws for flag (m) in mathematical logic used for modelling the design classification in claim 4, and in specifying the FT-DIR-filter design, are formulated upon the two CLAIMS GB 9225180 0 -50- intrafield boolean variables 'p' R q' which in themselves are computational motion detection values l'/'0 ' From both intrafield booleans 'p' and 'q' each of the four canonical laws is formulated in functional and connective logic, definable as a unique equation, for one of the four distinct motion class categories as given above and identified by a logicised flag' number, again representable as a binary word The flag number Ell 1 to 4 l is then transcoded to give an overlaid hue or colouring upon the background monochrome signal.i - -he device described may be made with control of the display switching, at option, between a normal colour television picture and the motion class coding llto 4 l of the monochrome image using a 'false' colour or ' pseudocolour' laid upon the exact pixel areas under the motion class determined by the filter The use of brightness highlighting will indicate for display one selected motion class as described.T-he motion classification by flag number upon the video signal can be made by using the luminance (Y), chrominance (RGB, or U/V), or another wavelength of the electromagnetic spectrum (e g IR & UV) giving an input CLAIMS GB 9225180 0 -51- signal, whether naturally visible to the human eye or not.M 1 ore than one FT-DIR-filter can be coupled for parallel operation with individually different spectral sensitivity to the change conditions of image motion, to produce 'q' the intrafield boolean The selection applied to the outputs is by a number priority function to give a single resultant flag The performance can be faster, more deterministic and accurate in detection of true motion change in the source object.98- n optional method using standard spatial filter techniques can be applied onto the intermediate PAD-transform discrete values, and/or a majority flag or a number sort filter to the values of the pixel flag produced on output The discrete dimensions of the pixel window size for the filter sequencing will depend upon display screen size and instrumental performance or resolution.1 O - The location of the FT-DIR-filter near the camera, for the flag or number output, provides for a simple communication transmission code at a low data rate, for the reconstruction operation of a reconsituted 'dynamic' LAIMS GB 9225180 0 -52- image upon monochrome at a distant display.11- n method of image coding for expicitely displaying motion class conditions as in claimed 10, wherein an empirical fast-transition or sharp-edge determination of a pixel (ADLT-value) Absolute Difference Limit Transfere - transform value is defined by a graph with a crucial function for motion threshold detection and consequential evaluation of the co-ordinated booleans lxy,'p'l or lx,y,'q'l from frame intrafields.1 2- method of image motion coding and visual highlighting by pixel brightness or colouring hue incorporating the principles and canon laws of a Finite Temporal-Domain Impulse Response Filter (FT-DIR-filter) for display locally or at a distance on a television screen or similar unit as given by the description text or set of drawings.13- method of image motion coding for display, computation, control, communication or any other purpose using any of the principles and canon laws of the Finite Temporal-Domain Impulse Response Filter (FT-DIR-filter) to produce transmissible coding as given by the description text or the drawing set.I., Ii CLAIMS GB 9225180 0 -53- 14- Fig: Schematic system diagram for Vision Motion Classification And Display incorporating the FT-DIR-filter.zz L CAMEIL -FILTER MAIN 4 UNIT CODI NG V v \I VISUAL- l NS Pkn Y MOHITOR U
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB9306533A GB2266638B (en) | 1992-04-01 | 1993-03-29 | Multiple visual display from motion classifications for digital TV |
| GB939318294A GB9318294D0 (en) | 1992-12-02 | 1993-09-03 | Digital impulse response filter system |
| GB939323875A GB9323875D0 (en) | 1992-12-02 | 1993-11-19 | Video image motion classification and display |
| GB9324678A GB2274371B (en) | 1992-12-02 | 1993-12-01 | Digital Impulse response filter system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB9207151A GB2265783B (en) | 1992-04-01 | 1992-04-01 | Bandwidth reduction employing a classification channel |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| GB9225180D0 GB9225180D0 (en) | 1993-01-20 |
| GB2265784A true GB2265784A (en) | 1993-10-06 |
| GB2265784B GB2265784B (en) | 1995-12-06 |
Family
ID=10713268
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB9207151A Expired - Fee Related GB2265783B (en) | 1992-04-01 | 1992-04-01 | Bandwidth reduction employing a classification channel |
| GB9225180A Expired - Fee Related GB2265784B (en) | 1992-04-01 | 1992-12-02 | Vision motion classification and display |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB9207151A Expired - Fee Related GB2265783B (en) | 1992-04-01 | 1992-04-01 | Bandwidth reduction employing a classification channel |
Country Status (1)
| Country | Link |
|---|---|
| GB (2) | GB2265783B (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2274371A (en) * | 1992-12-02 | 1994-07-20 | Kenneth Stanley Jones | Measurement and control using motion classification flags |
| GB2296401A (en) * | 1994-10-04 | 1996-06-26 | Kenneth Stanley Jones | Motion vector encoder using spatial majority filter |
| WO1997022949A1 (en) * | 1995-12-16 | 1997-06-26 | Paul Gordon Wilkins | Method for analysing the content of a video signal |
| RU2251735C2 (en) * | 2003-09-16 | 2005-05-10 | Курский государственный технический университет | Device for processing images |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2291306A (en) * | 1994-07-02 | 1996-01-17 | Kenneth Stanley Jones | Image motion flag or vector filter |
| JP2001507552A (en) * | 1997-10-29 | 2001-06-05 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Motion vector prediction and detection of hidden / exposed image parts |
| US6594313B1 (en) | 1998-12-23 | 2003-07-15 | Intel Corporation | Increased video playback framerate in low bit-rate video applications |
| GB2444993B (en) * | 2007-03-01 | 2011-09-07 | Kenneth Stanley Jones | Plastic digital video codec circuit |
| EP2697972B1 (en) | 2011-04-14 | 2015-01-14 | Dolby Laboratories Licensing Corporation | Image prediction based on primary color grading model |
| US10970881B2 (en) | 2018-12-21 | 2021-04-06 | Samsung Display Co., Ltd. | Fallback modes for display compression |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2022357A (en) * | 1978-05-30 | 1979-12-12 | British Broadcasting Corp | Method of and apparatus for movement portrayal with a raster e.g.television display |
| GB2031685A (en) * | 1978-10-04 | 1980-04-23 | Cbs Inc | Apparatus for displaying or recording paths of motion |
| GB2150724A (en) * | 1983-11-02 | 1985-07-03 | Christopher Hall | Surveillance system |
| EP0169273A1 (en) * | 1984-06-20 | 1986-01-29 | Siemens Aktiengesellschaft | X-ray diagnostic apparatus |
| WO1988000784A1 (en) * | 1986-07-15 | 1988-01-28 | Rca Corporation | Motion detector apparatus for responding to edge information contained in a television signal |
| US5091780A (en) * | 1990-05-09 | 1992-02-25 | Carnegie-Mellon University | A trainable security system emthod for the same |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE3408061A1 (en) * | 1984-03-05 | 1985-09-05 | ANT Nachrichtentechnik GmbH, 7150 Backnang | METHOD FOR MOTION-ADAPTIVE INTERPOLATION OF TELEVISION SEQUENCES AND APPLICATIONS OF THIS METHOD |
| US4890160A (en) * | 1986-03-19 | 1989-12-26 | British Broadcasting Corporation | TV picture motion vector measurement by correlation of pictures |
| GB2231743B (en) * | 1989-04-27 | 1993-10-20 | Sony Corp | Motion dependent video signal processing |
| GB9013642D0 (en) * | 1990-06-19 | 1990-08-08 | British Broadcasting Corp | Video signal processing |
| KR930701888A (en) * | 1990-09-20 | 1993-06-12 | 마머듀크 제임스 허시. 마이클 찰스 스티븐슨 | Video image processing method and apparatus |
| GB2253760B (en) * | 1991-02-01 | 1994-07-27 | British Broadcasting Corp | Video image processing |
| FR2675002B1 (en) * | 1991-04-05 | 1993-06-18 | Thomson Csf | METHOD FOR CLASSIFYING THE PIXELS OF AN IMAGE BELONGING TO A SEQUENCE OF MOVED IMAGES AND METHOD FOR TEMPORALLY INTERPOLATING IMAGES USING SAID CLASSIFICATION. |
| CA2087946A1 (en) * | 1991-05-24 | 1992-11-25 | Michael Burl | Video image processing |
-
1992
- 1992-04-01 GB GB9207151A patent/GB2265783B/en not_active Expired - Fee Related
- 1992-12-02 GB GB9225180A patent/GB2265784B/en not_active Expired - Fee Related
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2022357A (en) * | 1978-05-30 | 1979-12-12 | British Broadcasting Corp | Method of and apparatus for movement portrayal with a raster e.g.television display |
| GB2031685A (en) * | 1978-10-04 | 1980-04-23 | Cbs Inc | Apparatus for displaying or recording paths of motion |
| GB2150724A (en) * | 1983-11-02 | 1985-07-03 | Christopher Hall | Surveillance system |
| EP0169273A1 (en) * | 1984-06-20 | 1986-01-29 | Siemens Aktiengesellschaft | X-ray diagnostic apparatus |
| WO1988000784A1 (en) * | 1986-07-15 | 1988-01-28 | Rca Corporation | Motion detector apparatus for responding to edge information contained in a television signal |
| US5091780A (en) * | 1990-05-09 | 1992-02-25 | Carnegie-Mellon University | A trainable security system emthod for the same |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2274371A (en) * | 1992-12-02 | 1994-07-20 | Kenneth Stanley Jones | Measurement and control using motion classification flags |
| GB2274371B (en) * | 1992-12-02 | 1996-05-15 | Kenneth Stanley Jones | Digital Impulse response filter system |
| GB2296401A (en) * | 1994-10-04 | 1996-06-26 | Kenneth Stanley Jones | Motion vector encoder using spatial majority filter |
| GB2296401B (en) * | 1994-10-04 | 1998-10-14 | Kenneth Stanley Jones | Improved 'majority' filter |
| WO1997022949A1 (en) * | 1995-12-16 | 1997-06-26 | Paul Gordon Wilkins | Method for analysing the content of a video signal |
| RU2251735C2 (en) * | 2003-09-16 | 2005-05-10 | Курский государственный технический университет | Device for processing images |
Also Published As
| Publication number | Publication date |
|---|---|
| GB9207151D0 (en) | 1992-05-13 |
| GB2265783B (en) | 1996-05-29 |
| GB2265784B (en) | 1995-12-06 |
| GB9225180D0 (en) | 1993-01-20 |
| GB2265783A (en) | 1993-10-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8724894B1 (en) | Colorization of digital imagery | |
| EP0634873B1 (en) | Method to determine the motion vectors in small picture segments of a television picture | |
| US4661853A (en) | Interfield image motion detector for video signals | |
| JP3328934B2 (en) | Method and apparatus for fusing images | |
| CA2399106C (en) | System for automated screening of security cameras | |
| US7190725B2 (en) | System and methods for smoothing an input signal | |
| US5650828A (en) | Method and apparatus for detecting and thinning a contour image of objects | |
| JP4705959B2 (en) | Apparatus and method for creating image saliency map | |
| US20110085028A1 (en) | Methods and systems for object segmentation in digital images | |
| EP1313312A2 (en) | Method of edge based interpolation | |
| CN106408846A (en) | Image fire hazard detection method based on video monitoring platform | |
| EP0636299B1 (en) | A method for detecting and removing errors exceeding a specific contrast in digital video signals | |
| Koschan et al. | A comparison of median filter techniques for noise removal in color images | |
| GB2265784A (en) | Video image motion classification and display | |
| KR950009704B1 (en) | Collor processing device | |
| WO2008150454A1 (en) | Method for detecting water regions in video | |
| KR20030021252A (en) | Object tracking based on color distribution | |
| US20030137592A1 (en) | Memory with interaction between data in a memory cell | |
| Koschan | Using perceptual attributes to obtain dense depth maps | |
| Pavel et al. | Model-based sensor fusion for aviation | |
| Ismael | Comparative study for different color spaces of image segmentation based on Prewitt edge detection technique | |
| GB2303015A (en) | Digital video image-response predictor filter system | |
| EP4439480A1 (en) | Method and image-processing device for detecting a reflection of an identified object in an image frame | |
| Alhaidari et al. | Motion detection in digital video recording format with static background | |
| Zeng et al. | Evaluation of color categorization for representing vehicle colors |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 746 | Register noted 'licences of right' (sect. 46/1977) |
Effective date: 19990310 |
|
| PCNP | Patent ceased through non-payment of renewal fee |
Effective date: 19991202 |