
WO2008073563A1 - Method and system for gaze estimation - Google Patents

Method and system for gaze estimation

Info

Publication number
WO2008073563A1
Authority
WO
WIPO (PCT)
Prior art keywords
interest
region
image capturing
video sequence
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2007/081023
Other languages
English (en)
Inventor
Xiaoming Liu
Nils Oliver Krahnstoever
A.G. Amitha Perera
Anthony J. Hoogs
Peter Tu
Gianfranco Doretto
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NBCUniversal Media LLC
Original Assignee
NBC Universal Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NBC Universal Inc filed Critical NBC Universal Inc
Publication of WO2008073563A1 publication Critical patent/WO2008073563A1/fr
Priority to US12/474,962 priority Critical patent/US20090290753A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models

Definitions

  • the present disclosure relates, generally, to gaze estimation.
  • a system and method are disclosed for determining and presenting an estimate of the gaze of a subject in a video sequence of captured images.
  • viewing of the video may allow a viewer to see the event from the perspective and location of the subject even though the viewer did not witness the event in person as it occurred. While the video may sufficiently capture and present the event, the presentation of the event may be enhanced to increase the viewing pleasure of the viewer.
  • an on-air commentator may provide commentary in conjunction with a video broadcast in an effort to convey additional knowledge and information regarding the event to the viewer. It is noted, however, that care is needed by the on-air commentator not to say so much as to, for example, distract from the video broadcast.
  • a method including capturing a video sequence of images with an image capturing system, designating at least one landmark in a region of interest of the captured video sequence, fitting, based on the at least one landmark, a model of the region of interest to the region of interest in the captured video sequence, and determining a pose parameter for the model fitted to the region of interest may be provided.
  • the pose parameter includes an estimation of a gaze of a subject associated with the region of interest.
  • the method may further include determining the pose parameter for the model over a period of time.
  • extracted data may be associated with the pose parameter of the region of interest.
  • the extracted data may be presented in a user-viewable format.
  • the image capturing system may be calibrated relative to a location of the image capturing system, which includes determining geometrical information associated with that location.
  • the geometrical information may include at least one of information regarding a specification of the image capturing system components, the location of the image capturing system with respect to an area captured in the video sequence, and a pan, tilt, and roll parameter.
  • a system including at least one image capturing device and a processor may be provided to implement the methods disclosed herein.
  • program instructions or code may be provided on a tangible media for execution by a system or device (e.g., processor) to implement some methods herein.
  • FIG. 1 is an illustrative depiction of an image captured by an image capturing system, including gaze estimation overlays, in accordance with some embodiments herein;
  • FIG. 2 is an illustrative depiction of a re-visualization of an image captured by an image capturing system, including gaze estimation overlays, in accordance with some embodiments herein;
  • FIG. 3 is an illustrative depiction of an image captured by an image capturing system, including a display area, in accordance with some embodiments herein;
  • FIG. 4 is an exemplary illustration of a number of models, in accordance herewith;
  • FIG. 5 is an exemplary depiction of a number of models, in accordance herewith;
  • FIG. 6A is an exemplary illustration of an image captured by an image capturing system, in accordance herewith;
  • FIG. 6B is an illustrative depiction of a model used, for example, in association with the captured image of FIG. 6A, in accordance herewith;
  • FIG. 7 is an illustrative graphical representation, in accordance with aspects herein.
  • FIG. 8 is an illustrative depiction of a captured image, including visualizations, in accordance with some embodiments herein;
  • FIG. 9 is an exemplary depiction of a gaze estimation overlay for a video image, in accordance with some embodiments herein.
  • the present disclosure relates to video visualization.
  • some embodiments herein provide a method, system, apparatus, and program instructions for gaze estimation of an individual captured by a video system.
  • a machine based gaze estimation process and system determines an estimate of the gaze direction of an individual captured on a video sequence. Some embodiments further provide a visual presentation of the gaze estimation.
  • the visual presentation or visualization of the gaze estimation may be provided alone or in combination with a video sequence and in a variety of formats.
  • a computer vision algorithm estimates the gaze of a subject individual. Portions of the process of estimating the gaze of the individual may be accomplished manually, semi-manually, semi-automatically, or automatically.
  • the gaze estimation process may, in general, comprise two processing stages.
  • a first stage includes a training stage wherein a number of landmarks on a region(s) of interest are labeled.
  • the landmark labeling operation may include manually designating the region(s) of interest given a sequence of video images.
  • the region of interest includes the head of the subject individual for whom the gaze estimation is being determined.
  • a shape model may be used to represent the shape of a region of interest (i.e., the head of a subject individual).
  • the shape model may be associated with appearance information such as, for example, texture information.
  • the shape model may be an Active Appearance Model (AAM) using, for example, two subspace models; a deformable model; or a rigid model.
  • the term AAM should be understood to be one of many model representations that can be used to estimate gaze direction.
  • the model may be automatically fitted to the region of interest (i.e., the head) of the subject individual in a video sequence.
  • the model may automatically correlate a mesh model to the subject individual's head in each frame of a video sequence by estimating the shape and appearance for the subject individual. Based on the resulting shape parameter(s), an estimation of the gaze of the subject individual may be determined for each frame of the video sequence (a minimal pose-fitting sketch appears after this section).
  • the gaze estimation methods disclosed herein may efficiently provide gaze estimation in real time.
  • gaze estimation in accordance with the present disclosure may be performed substantially concurrent with the capture of video sequences such that gaze estimation data relating to the captured video sequences is available for presentation, visualization and otherwise, in real time coincident with a live broadcast of the video sequences.
  • the images used to learn an AAM may be, in some embodiments, relatively few as compared to the applicability of the AAM. For example, nine (9) images may be used to learn an AAM that in turn is used to estimate the gaze for about one hundred (100) frames of video.
  • the gaze estimation methods disclosed herein may provide gaze estimation data even in an instance where low resolution video is used as a basis for the gaze estimation processing.
  • the methods herein may be effectively used with low resolution video.
  • the gaze estimation herein may be extended to subject individuals having at least a portion of their face obscured.
  • the gaze estimation methods, systems, and related implementations herein may be used to provide gaze estimation for subject individuals captured on video while participating in various contexts and sporting events wherein the face and head of the subject individual are visually obscured, such as in football, hockey, and other activities where a helmet is worn.
  • the gaze direction of a football player may be provided as an overlay in broadcast video footage, in real time or subsequently (e.g., a replay).
  • on-air commentators may offer, for example, on-air analysis of a quarterback's decision process before and/or during a football play by visually showing the broadcast viewers via gaze estimation overlays how and when the quarterback scans the football field and looks at different receivers and/or defenders before making a football pass.
  • Gaze estimation overlays may be obtained using a variety of techniques, ranging from a completely manual technique performed by a graphics artist, without any tools requiring specialized skills or knowledge from the domain of computer vision, to a fully automatic process that relies on significant technology from the realm of computer vision.
  • an individual such as, for example, a graphic artist or special effects artist may visually inspect a sequence of video and manually draw lines in every video frame to visually indicate the gaze direction of the football player.
  • an on-air commentator may use a broadcast tool/process (e.g., a Telestrator®) to manually draw overlays into the broadcast that indicate gaze direction. In this manner, a gaze estimation visualization may be provided.
  • an operator may manually inspect and draw gaze direction estimation indicators (e.g., lines, highlights, etc.) on certain frames of a sequence of video.
  • the certain frames may be every few "key" frames of video in the footage.
  • An interpolation operation may be performed on the non-key frames to obtain gaze direction estimates for every frame of the video (a minimal interpolation sketch appears after this section).
  • an operator may use a special tool to improve upon the accuracy and/or efficiency of the manual gaze direction estimation process in frames or key-frames.
  • a tool may display a graphical model of a football player's helmet or an athlete's head, represented by points and/or lines.
  • the graphical (i.e., virtual) model may be displayed on a display screen and, using a suitable graphical user interface, the location, scale, and pose of the model may be manipulated until there is a good visual match between the virtual model and the true helmet of the subject football player. Accordingly, the gaze direction of the subject player in the video footage would correspond to the pose of the virtual football helmet or head of the subject after alignment.
  • a model of the football helmet or head may be a 3-D model that closely approximates or resembles an actual football helmet.
  • a model of the football helmet or head may be a 2-D model that resembles the projection of an actual football helmet.
  • Pose and shape parameters of the helmet or head model may be used to represent 3-D location and 3-D pose, or more abstract shape and appearance parameters may be used that describe the deformation of a 2-D model helmet in a 2-D image.
  • the gaze estimation capture tool may further use knowledge about a broadcast camera that recorded the video footage.
  • the location of the camera with respect to the field, the pan, tilt and roll of the camera, the focal length, the zoom factor, and other parameters and characteristics of the camera may be used to effectuate some gaze estimations, in accordance herewith.
  • This camera knowledge may define certain constraints regarding the possible locations of the virtual helmet/head in the video imagery, thereby aiding the operator in the alignment process between the virtual model helmet and the captured video footage (a minimal sketch of such a constraint appears after this section). The constraints arise because the helmet/head of the subject is, in practical terms, typically limited to between about 10 cm and about 250 cm above the football field and is typically limited to a fixed range of poses (i.e., a human primarily pans and tilts).
  • the gaze estimation capture tool may use multiple viewing angles of a football player. Given accurate camera information for multiple viewing angles, the operator may perform the alignment process between the virtual model and the actual video footage based on multiple viewing directions simultaneously, thereby making such alignment processes more accurate and more robust.
  • a semi-automatic approach for providing a gaze estimation overlay includes associating a virtual model of the helmet/head of the subject individual with appearance information such as, for example, "image texture".
  • the appearance information facilitates the generation of a virtual football helmet/head that appears substantially similar to the actual video captured helmet/head in the broadcast footage.
  • the alignment between the virtual helmet and the image of the helmet may be automated.
  • an operator may initially bring the virtual helmet into an approximate alignment with the actual (i.e., real) helmet, and an optimization algorithm may further refine the location and pose parameters of the virtual helmet in order to maximize a similarity between the video footage's real helmet and the virtual helmet (a minimal refinement sketch appears after this section).
  • the automatic refinement may be selectively or exclusively performed with shape information (i.e., without appearance information in some instances) by performing a manual or purely shape-based alignment once, followed by an acquisition of appearance information from the video footage (e.g., texture information is mapped from the broadcast footage onto the virtual model of the helmet). Subsequent alignments may then be performed using the acquired appearance information.
  • the amount and degree of operator intervention may be further reduced to a single rough alignment between the virtual helmet and the helmet/head of the broadcast footage by using the automatic pose refinement incrementally. For example, after an alignment has been established for one frame, subsequent alignments may be obtained by maximizing the similarity between the model and the captured imagery, as described hereinabove.
  • the detector may include an algorithm that automatically determines the location of the subject or subject body (e.g., helmet or head) in a sequence of video images. In some embodiments, the detector may also include determining at least a rough pose of an object or person in a video image.
  • one or more cameras may be used to capture the video. It should be appreciated that the use of more than one camera to yield video containing multiple viewing angles of a scene may contribute to providing a gaze direction estimation that is more accurate than a single camera/single viewing angle approach. Furthermore, knowledge regarding the camera parameters may be obtained from optoelectronic devices attached to the broadcast cameras or via computer vision means that match 2-D image points with 3-D world coordinate points of a video-captured environment (e.g., a football field).
  • FIG. 1 is an exemplary illustration of a video image 100 including gaze estimation overlay 105.
  • the gaze estimation presents a visualization of the field of vision of player 150 at a given instant in time.
  • the gaze estimation overlay includes boundaries 110, 115, 120, and 125 that define the subject player's field of vision in the video scene. Boundary marking 130 further defines the field of vision. Gaze estimation overlay 105 may be obtained using one or more of the gaze estimation techniques disclosed herein.
  • FIG. 2 provides an exemplary illustration of video image 200, including gaze estimation overlay 205.
  • Gaze estimation overlay 205 is provided in conjunction with other visualizations such as telemetry components 240, 245 that provide details of subjects in the video.
  • Gaze estimation overlay 205 includes boundaries 210, 215, 220, 225, and 230 that define the boundaries of the subject player's (250) field of vision in the video scene.
  • Gaze estimation overlay 205 may be continuously updated as video 200 changes to provide an accurate, real time visualization of the gaze direction of player 250.
  • a directional icon 235 is provided to inform viewers of the frame of reference used in the determination and/or presentation of the gaze estimation overlay.
  • FIG. 3 provides an exemplary video image 300 including a display area 305 on video image 310.
  • Display area 305 may be used to display textual and/or descriptive information regarding a gaze estimation determination for video image 310.
  • gaze estimation may be performed for a player in video image 310 but instead of an overlay being generated and visualized thereon, display area 305 may be used to display textual and/or descriptive information regarding the gaze estimation.
  • the textual and/or descriptive information may include a gaze angle, rate of change in the gaze angle, maximum distance downfield included in the gaze estimation, and other gaze related information.
  • FIG. 4 is an illustrative depiction of a number of mesh models for determining a gaze estimation. Arrows on the mesh model provide, for example, an indication of the direction of the gaze for the model.
  • FIG. 5 is an illustrative depiction of a number of visualization models for determining a gaze estimation.
  • FIG. 6A is an illustrative depiction of a video image 600 including a helmet 605 worn by football player 610. That is, helmet 605 is the actual or real helmet shown in the video.
  • FIG. 6B is a depiction of a mesh model 615 that has been aligned with helmet 605. The alignment of mesh model 615 with helmet 605 may be accomplished using one or more of the techniques disclosed herein.
  • FIG. 7 provides an exemplary graphical presentation 700 relating to a gaze estimation for a video image. Section 705 includes graph line 715 that tracks or represents the gaze direction (i.e., angle) over a period of time.
  • Section 710 of graph 700 includes a segment of the video including the helmet of the player whose gaze is being determined and corresponds to the line graph in section 705.
  • FIG. 8 is an illustration 800 of video image 805 including visualizations of subject detections.
  • a detector method and/or system may be used to detect, in real time or subsequent thereto, the helmet/head of interested subjects (e.g., football players) in video image 805.
  • graphic overlays 810, 815, and 820 visually indicate the detected helmets/heads of, for example, three players.
  • graphic overlays 810, 815, and 820 may be visualized to indicate the players in the field of vision of another player, such as the quarterback in video image 805. In this manner, gaze estimation data is also provided to a viewer.
  • FIG. 9 is an exemplary depiction 900 of a gaze estimation overlay for a video image.
  • the gaze estimation is provided and associated with player 905.
  • the player's jersey number is provided at 915, in close proximity with graphic overlay 910 that tracks the player's helmet.
  • Graphic overlay 910 may be obtained using, though not necessarily requiring, an automatic helmet detector method and system.
  • the gaze direction of player 905 is visualized by a center line 930 and boundaries 920 and 925.
  • boundaries 925 and 920 may be based on a theoretical or even an estimated range of vision for player 905.
  • boundaries 925 and 920 may be offset from center line 930 based on a calculation using data specific to the actual range of vision for player 905.
  • Display area 935 includes graphical information relating to player 905.
  • the information shown relates to the position of the player relative to a reference point on the field (e.g., the line of scrimmage), and the velocity and acceleration of player 905. Also included is the gaze direction (0°) for the player. It should be appreciated that additional, alternative, or fewer data may be provided in display area 935.
  • gaze overlay information, including the visualization of same, may be presented as lines (solid, dashed, colored, wavy, flashing, etc.) in a 2-D presentation or a 3-D presentation that includes height (up and down), width (side-to-side), and depth (near to far) aspects of an estimated and determined field of vision (a minimal 2-D overlay-drawing sketch appears after this section).
  • the 3-D presentation may resemble a "cone of vision".
  • the gaze overlay information may be provided on-screen with a sequence of video images as graphical or textual descriptions.
  • a frame of reference for the gaze estimation may be presented as and include, for example, a line graph, a circle graph with indications of the gaze estimation therein, a coordinate system, ruler(s), a grid, a gaze angle and time graph, and other visual indicators.
  • an angular velocity indicative of a rate at which a subject individual changes their gaze direction may be provided.
  • gaze estimation may be presented on a video image in a split-screen presentation wherein one screen area displays the video without the gaze estimation overlay and another screen displays the video with the gaze estimation overlay.
  • an indication of a gaze estimation may be presented or associated with or in a computer-generated display or computer visualization (e.g., a PC-based game image, a console game image, etc.).
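
The following is a minimal, illustrative sketch of the pose-fitting step referenced above (see the bullet on correlating a model to the subject's head and reading off a pose parameter). It is a hedged sketch, not the AAM-based method of the disclosure: it fits a small rigid 3-D helmet/head model to labeled 2-D landmarks with OpenCV's generic solvePnP routine and reads a yaw angle off the resulting rotation as the gaze estimate. The model points, camera intrinsics, and function name are illustrative assumptions.

```python
import numpy as np
import cv2

# Assumed 3-D landmark positions on a generic helmet/head, in a head-centred frame
# (metres). These six points are illustrative, not values from the disclosure.
MODEL_POINTS_3D = np.array([
    [ 0.00,  0.00,  0.10],   # facemask / nose centre
    [ 0.00, -0.08,  0.08],   # chin / chin strap
    [-0.09,  0.00, -0.02],   # left ear hole
    [ 0.09,  0.00, -0.02],   # right ear hole
    [-0.05,  0.04,  0.08],   # left eye region
    [ 0.05,  0.04,  0.08],   # right eye region
], dtype=np.float64)

def estimate_gaze_yaw(image_points_2d, focal_length_px, image_size):
    """Fit the rigid model to six labeled 2-D landmarks (same order as above)
    and return a yaw angle, in degrees, relative to the camera's optical axis."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    camera_matrix = np.array([[focal_length_px, 0, cx],
                              [0, focal_length_px, cy],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))          # assume negligible lens distortion
    ok, rvec, _tvec = cv2.solvePnP(MODEL_POINTS_3D,
                                   np.asarray(image_points_2d, dtype=np.float64),
                                   camera_matrix, dist_coeffs,
                                   flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)                 # 3x3 rotation matrix
    forward = rotation @ np.array([0.0, 0.0, 1.0])    # model's "looking" axis in camera frame
    return float(np.degrees(np.arctan2(forward[0], forward[2])))
```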
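
Next, a minimal sketch of the key-frame interpolation mentioned above (an operator supplies gaze angles on a few key frames; non-key frames are filled in by interpolation), together with the angular velocity of the gaze discussed later in the description. The function name, frame rate, and sample values are assumptions for illustration only.

```python
import numpy as np

def interpolate_gaze_angles(key_frames, key_angles_deg, all_frames, fps=30.0):
    """Interpolate operator-supplied gaze angles (degrees) from key frames to every
    frame, and report the rate of change of the gaze direction.

    key_frames     -- sorted frame indices where an operator aligned the model
    key_angles_deg -- gaze angle at each key frame, in degrees
    all_frames     -- unit-spaced frame indices for which angles are wanted
    """
    # Unwrap so interpolation never takes the "long way" around 360 degrees.
    unwrapped = np.degrees(np.unwrap(np.radians(key_angles_deg)))
    angles = np.interp(all_frames, key_frames, unwrapped)
    # Angular velocity in degrees per second, e.g. how quickly a quarterback scans the field.
    angular_velocity = np.gradient(angles) * fps
    return angles % 360.0, angular_velocity

# Example: key frames every 10 frames, angles queried for every frame in between.
frames = np.arange(0, 31)
angles, rate = interpolate_gaze_angles([0, 10, 20, 30], [350.0, 10.0, 45.0, 40.0], frames)
```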
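
The camera-knowledge constraint discussed above (a helmet is, in practical terms, roughly 10 cm to 250 cm above the field) can be turned into an image-space search band once the camera's mounting height, tilt, and focal length are known. The sketch below is a deliberately simplified side-view pinhole approximation that assumes a camera with tilt only (no pan or roll); all numeric values are illustrative assumptions.

```python
import numpy as np

def image_row_for_height(h_m, dist_m, cam_height_m, tilt_deg, focal_px, cy):
    """Pixel row at which a point h_m metres above the field, dist_m metres in front of a
    camera mounted cam_height_m above the field and tilted down by tilt_deg, appears."""
    angle_to_point = np.arctan2(cam_height_m - h_m, dist_m)   # angle below horizontal
    angle_in_image = angle_to_point - np.radians(tilt_deg)    # angle below the optical axis
    return cy + focal_px * np.tan(angle_in_image)             # image rows grow downward

# Constraint from the description: for a player roughly 40 m away, the helmet can only
# appear inside the band of rows spanned by heights of 0.10 m and 2.50 m.
cy, f = 540.0, 2000.0
row_low = image_row_for_height(0.10, 40.0, 12.0, 15.0, f, cy)    # helmet near the ground
row_high = image_row_for_height(2.50, 40.0, 12.0, 15.0, f, cy)   # helmet at 2.5 m
print(f"search for the helmet between image rows {row_high:.0f} and {row_low:.0f}")
```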
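
A minimal sketch of the similarity-maximization refinement referenced above: after a rough manual alignment, an optimizer adjusts the placement of an appearance template (texture acquired from an earlier aligned frame) so as to minimize its difference from the footage. The disclosure contemplates refining full location and pose parameters; this hedged sketch restricts itself to translation and scale and uses a generic Nelder-Mead optimizer as an assumed stand-in.

```python
import numpy as np
import cv2
from scipy.optimize import minimize

def refine_alignment(frame_gray, template_gray, init_params):
    """Refine (x, y, scale) of a helmet appearance template against a grayscale frame
    by minimizing the mean squared difference (i.e., maximizing similarity)."""
    th, tw = template_gray.shape

    def cost(params):
        x, y, s = params
        w, h = int(round(tw * s)), int(round(th * s))
        x0, y0 = int(round(x)), int(round(y))
        if (w < 4 or h < 4 or x0 < 0 or y0 < 0 or
                x0 + w > frame_gray.shape[1] or y0 + h > frame_gray.shape[0]):
            return np.inf                      # candidate placement falls outside the frame
        patch = frame_gray[y0:y0 + h, x0:x0 + w].astype(np.float64)
        resized = cv2.resize(template_gray, (w, h)).astype(np.float64)
        return float(np.mean((patch - resized) ** 2))

    # A production implementation would use sub-pixel warping and a gradient-based
    # fitter; Nelder-Mead on a rounded cost is only meant to illustrate the idea.
    result = minimize(cost, np.asarray(init_params, dtype=float), method="Nelder-Mead")
    return result.x   # refined (x, y, scale)
```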
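
Finally, a minimal sketch of a 2-D gaze-estimation overlay of the general kind shown in FIGS. 1, 2, and 9: a center line plus two boundary lines offset by half of an assumed field-of-view angle, drawn from the tracked helmet position. Colors, line widths, and the field-of-view value are illustrative assumptions rather than values from the disclosure.

```python
import numpy as np
import cv2

def draw_gaze_overlay(frame, helmet_xy, gaze_angle_deg, fov_deg=60.0, length_px=250):
    """Draw a centre line and two boundary lines (a simple 2-D 'cone of vision')."""
    out = frame.copy()
    x0, y0 = int(helmet_xy[0]), int(helmet_xy[1])
    for offset, colour, thickness in [(-fov_deg / 2, (0, 255, 255), 2),   # one boundary
                                      (0.0,          (0, 0, 255),   3),   # centre line
                                      ( fov_deg / 2, (0, 255, 255), 2)]:  # other boundary
        a = np.radians(gaze_angle_deg + offset)
        x1 = int(x0 + length_px * np.cos(a))
        y1 = int(y0 - length_px * np.sin(a))   # image y grows downward
        cv2.line(out, (x0, y0), (x1, y1), colour, thickness)
    cv2.putText(out, f"gaze {gaze_angle_deg:.0f} deg", (x0 + 10, y0 - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2)
    return out
```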

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and system are disclosed, the method including capturing a video sequence of images with an image capturing system, designating at least one landmark in a region of interest of the captured video sequence, fitting a model of the region of interest to the region of interest in the captured video sequence, and determining a pose parameter for the model fitted to the region of interest.
PCT/US2007/081023 2006-12-08 2007-10-11 Method and system for gaze estimation Ceased WO2008073563A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/474,962 US20090290753A1 (en) 2007-10-11 2009-05-29 Method and system for gaze estimation

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US86921606P 2006-12-08 2006-12-08
US60/869,216 2006-12-08
US75203007A 2007-05-22 2007-05-22
US11/752,030 2007-05-22

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/474,962 Continuation-In-Part US20090290753A1 (en) 2007-10-11 2009-05-29 Method and system for gaze estimation

Publications (1)

Publication Number Publication Date
WO2008073563A1 true WO2008073563A1 (fr) 2008-06-19

Family

ID=39251346

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/081023 Ceased WO2008073563A1 (fr) 2006-12-08 2007-10-11 Method and system for gaze estimation

Country Status (1)

Country Link
WO (1) WO2008073563A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999053430A1 (fr) * 1998-04-13 1999-10-21 Eyematic Interfaces, Inc. Video architecture for describing the features of persons
US20030076980A1 * 2001-10-04 2003-04-24 Siemens Corporate Research, Inc. Coded visual markers for tracking and camera calibration in mobile computing systems
US20040002642A1 * 2002-07-01 2004-01-01 Doron Dekel Video pose tracking system and method
US20050073136A1 * 2002-10-15 2005-04-07 Volvo Technology Corporation Method and arrangement for interpreting a subject's head and eye activity
US20060110008A1 * 2003-11-14 2006-05-25 Roel Vertegaal Method and apparatus for calibration-free eye tracking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
UKITA N ET AL: "Extracting a gaze region with the history of view directions", PATTERN RECOGNITION, 2004. ICPR 2004. PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON CAMBRIDGE, UK AUG. 23-26, 2004, PISCATAWAY, NJ, USA,IEEE, vol. 4, 23 August 2004 (2004-08-23), pages 957 - 960, XP010724080, ISBN: 0-7695-2128-2 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009009253A1 (fr) * 2007-07-09 2009-01-15 General Electric Company Method and system for automatic pose and trajectory tracking in video
WO2011160114A1 (fr) * 2010-06-18 2011-12-22 Minx, Inc. Augmented reality
WO2013086739A1 (fr) * 2011-12-16 2013-06-20 Thomson Licensing Method and apparatus for generation of free viewpoint video in three dimensions
GB2584400A (en) * 2019-05-08 2020-12-09 Thirdeye Labs Ltd Processing captured images

Similar Documents

Publication Publication Date Title
US20090290753A1 (en) Method and system for gaze estimation
US10269177B2 (en) Headset removal in virtual, augmented, and mixed reality using an eye gaze database
EP2866201B1 (fr) Information processing apparatus and control method therefor
CN105678802B (zh) Method for recognizing two-dimensional images to generate three-dimensional information
US12382005B2 (en) Image processing apparatus, image processing method, and storage medium
CN101815227A (zh) Image processing apparatus and method
TW201322178A (zh) Method and system for augmented reality
KR101198557B1 (ko) System and method for generating 3D stereoscopic images reflecting viewing posture
JP2005198818A (ja) Learning support system and learning support method for body movements
CN105611267A (zh) Merging of real-world and virtual-world images based on depth and chrominance information
CN115797439A (zh) Flame spatial localization system and method based on binocular vision
TW201827788A (zh) Sensing device for calculating position information of a moving object, and sensing method using the same
WO2008073563A1 (fr) Method and system for gaze estimation
US20240037843A1 (en) Image processing apparatus, image processing system, image processing method, and storage medium
CN107368188B (zh) Foreground extraction method and system based on multiple spatial positioning in mediated reality
CN112989908A (zh) Data processing method and apparatus, program, and non-transitory storage medium
CN112416124A (zh) Dance posture feedback method and apparatus
CN119620855A (zh) Trajectory acquisition method, apparatus, and system
KR101189665B1 (ko) Golf putting result analysis system using a camera, and method thereof
CN116797643A (zh) Method for obtaining a user's gaze region in VR, storage medium, and electronic device
WO2019053790A1 (fr) Position coordinate calculation method and device
Paletta et al. An integrated system for 3D gaze recovery and semantic analysis of human attention
CN117710445B (zh) Target positioning method and apparatus applied to an AR device, and electronic device
US12501020B2 (en) Information processing apparatus and information processing method
JP2021051374A (ja) Shape data generation device, shape data generation method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07853935

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07853935

Country of ref document: EP

Kind code of ref document: A1