
WO2018117538A1 - Method for estimating lane information, and electronic device - Google Patents

Method for estimating lane information, and electronic device

Info

Publication number
WO2018117538A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
image
lane
information
estimating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2017/014810
Other languages
English (en)
Korean (ko)
Inventor
김지만
박찬종
양도준
이현우
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020170142567A external-priority patent/KR102480416B1/ko
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to US16/469,910 priority Critical patent/US11436744B2/en
Priority to EP17884410.6A priority patent/EP3543896B1/fr
Publication of WO2018117538A1 publication Critical patent/WO2018117538A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof

Definitions

  • Various embodiments relate to a method and an electronic device for estimating lane information, and more particularly, to a method and an electronic device for estimating and outputting lane information in bad weather.
  • Artificial intelligence (AI) technology consists of machine learning (deep learning) and elemental technologies that utilize machine learning.
  • Machine learning is an algorithmic technology that classifies and learns the characteristics of input data by itself.
  • Elemental technology is technology that simulates functions of human brain cognition and judgment by using machine learning algorithms such as deep learning, and consists of technical fields such as linguistic understanding, visual understanding, reasoning/prediction, knowledge representation, and motion control.
  • Linguistic understanding is a technology for recognizing and applying/processing human language and characters, and includes natural language processing, machine translation, dialogue systems, question answering, speech recognition/synthesis, and the like.
  • Visual understanding is a technology that recognizes and processes objects as human vision does, and includes object recognition, object tracking, image retrieval, person recognition, scene understanding, spatial understanding, and image enhancement.
  • Reasoning/prediction is a technology for judging information and logically inferring and predicting it, and includes knowledge/probability-based reasoning, optimization prediction, preference-based planning, and recommendation.
  • Knowledge representation is a technology that automatically processes human experience information into knowledge data, and includes knowledge construction (data generation/classification) and knowledge management (data utilization).
  • Motion control is a technology for controlling the autonomous driving of a vehicle and the movement of a robot, and includes movement control (navigation, collision, driving), operation control (action control), and the like.
  • Various embodiments may provide a method and apparatus for estimating lane information of a road on which a vehicle is driving in an environment including visual disturbances such as bad weather.
  • FIG. 1 is a schematic diagram illustrating an example of estimating lane information by an electronic device according to an exemplary embodiment.
  • FIG. 2 is a flowchart illustrating a method of operating an electronic device according to an exemplary embodiment.
  • FIG. 3 is a diagram for describing a method of converting an image by an electronic device according to an exemplary embodiment.
  • FIG. 4 is a diagram for describing a method of estimating lane information of a road, by an electronic device, according to an exemplary embodiment.
  • FIG. 5 is a diagram for describing a method of estimating lane information of a road, by an electronic device, according to an exemplary embodiment.
  • FIG. 6 is a diagram for describing a method of outputting guide information by an electronic device according to an exemplary embodiment.
  • FIGS. 7 and 8 are block diagrams illustrating a configuration of an electronic device according to an embodiment.
  • FIG. 9 is a block diagram of a processor according to an embodiment.
  • FIG. 10 is a block diagram of a data learner according to an exemplary embodiment.
  • FIG. 11 is a block diagram of a data recognizer according to an exemplary embodiment.
  • FIG. 12 illustrates an example of learning and recognizing data by interworking with an electronic device and a server, according to an exemplary embodiment.
  • According to an aspect, an electronic device includes a camera for photographing an image of the outside of a vehicle, a memory storing one or more instructions, and a processor that executes the one or more instructions stored in the memory, wherein the processor, by executing the one or more instructions, determines at least one object for estimating lane information from the captured image, estimates lane information of a road on which the vehicle is traveling in the image based on the distance between the determined at least one object and the vehicle and the vanishing point of the image, and outputs guide information for guiding driving of the vehicle based on the estimated lane information.
  • According to an aspect, a method of operating an electronic device includes obtaining an image of the outside of a vehicle, determining at least one object for estimating lane information from the obtained image, estimating lane information of a road on which the vehicle is driving based on the distance between the determined at least one object and the vehicle and the vanishing point of the image, and outputting guide information for guiding driving of the vehicle based on the estimated lane information.
  • a computer-readable recording medium includes a recording medium recording a program for executing the above-described method on a computer.
  • FIG. 1 is a schematic diagram illustrating an example of estimating lane information by an electronic device according to an exemplary embodiment.
  • the electronic device 100 is installed in the vehicle 110 to estimate lane information 103 of a driving road from an image 101 of the outside of the vehicle 110.
  • the lane information 103 means the lines that divide the road on which the vehicle 110 is traveling into lanes (for example, a first lane and a second lane in the case of a two-lane road).
  • the electronic device 100 may be a mobile or non-mobile electronic device that may be mounted on the vehicle 110.
  • the electronic device 100 may include a camera for acquiring the external image 101 and a display module for outputting guide information.
  • the electronic device 100 may control other devices such as a camera, a display 106, a local navigation, a GPS receiver, and the like included in the vehicle 110.
  • the electronic device 100 may receive data for estimating the lane information 103 or transmit guide information by communicating with other devices included in the vehicle 110.
  • the electronic device 100 may assist the safe driving of the vehicle 110 by estimating the lane information 103 of the driving road and outputting the guide information as shown in FIG. 1.
  • the electronic device 100 may acquire an image 101 of the outside of the driving vehicle 110 to estimate the lane information 103 of the driving road.
  • the electronic device 100 may estimate the lane information 103 using at least one object included in the acquired image 101.
  • the object is a subject included in the image, and refers to one subject that is recognized by being distinguished from other subjects in the image 101.
  • the image 101 may include at least one object, such as a guardrail 105 or a front vehicle 104.
  • the electronic device 100 may utilize information through analysis of the acquired image 101 such as lane number information of a driving road and a vanishing point of the image 101 to estimate the lane information 103.
  • the electronic device 100 may convert the acquired image by using a learning model so that the image acquired to estimate the lane information 103 has a visibility greater than or equal to a preset value.
  • the learning model for transforming the image may be based on learning according to deep neural network technology.
  • the learning model may be an artificial intelligence learning model.
  • the electronic device 100 may output guide information based on the estimated lane information 103.
  • the guide information means information for guiding driving of the vehicle.
  • the guide information may include lane information of a road, a driving speed of a vehicle, or danger warning information processed based on the same.
  • the electronic device 100 may include a display 106 displaying guide information.
  • the display 106 may include at least one of a head-up display, a mirror display, and a transparent display.
  • the electronic device 100 may control the driving of the vehicle 110 based on the autonomous driving system or the driving assistance system.
  • the electronic device 100 may be a smartphone, a tablet PC, a PC, a smart TV, a mobile phone, a personal digital assistant (PDA), a laptop, a media player, a micro server, a global positioning system (GPS) device, an e-book terminal, a digital broadcasting terminal, a navigation device, a kiosk, an MP3 player, a digital camera, a home appliance, or another computing device, but is not limited thereto.
  • the electronic device 100 may be a wearable device such as a watch, glasses, a hair band, and a ring having a display function and a data processing function.
  • the present invention is not limited thereto, and the electronic device 100 may include all kinds of devices capable of processing data and providing processed data.
  • the electronic device 100 obtains a front image of the driving vehicle 110 and estimates lane information 103 ahead of the vehicle 110 by using the acquired image, but is not limited thereto.
  • the electronic device 100 may obtain a rear image of the driving vehicle 110 and estimate the lane information 103 using the acquired image. In this case, the electronic device 100 may assist the driver in driving the vehicle by providing lane information of the rearward driving road of the vehicle 110.
  • Although the electronic device 100 is illustrated as a device separate from the vehicle 110, the present disclosure is not limited thereto, and the electronic device 100 may be implemented as a component integrated into and included in the vehicle 110.
  • the electronic device 100 may be implemented as a processor included in the vehicle 110.
  • the processor may include a micro controller unit (MCU) included in the vehicle 110.
  • the vehicle 110 may include a memory for storing data and lane information 103 necessary for the processor to operate, and a communication module capable of communicating with an external device.
  • FIG. 2 is a flowchart illustrating a method of operating an electronic device according to an exemplary embodiment.
  • the electronic device 100 acquires an image of the outside of the vehicle.
  • the external image of the vehicle may be an image of a part of the external space of the vehicle that can be detected by a camera or other sensor.
  • the external image of the vehicle may be an image representing the front or the rear of the vehicle, but is not limited thereto.
  • the electronic device 100 may acquire an image of the outside of the vehicle using a camera.
  • the camera may mean a pinhole camera, a stereo camera, an infrared camera, or a thermal imaging camera, but is not limited thereto.
  • the electronic device 100 may acquire an external image of the vehicle by using a camera provided in the electronic device 100, or may receive an external image of the vehicle from a photographing device outside the electronic device 100.
  • the electronic device 100 determines at least one object for estimating lane information from the obtained image. For example, the electronic device 100 may extract a plurality of objects as distinct objects by analyzing pixels included in the image. The electronic device 100 may use a learning model according to a deep neural network technology to extract an object from an image.
  • the electronic device 100 may determine at least one object for estimating lane information among the plurality of extracted objects. For example, the electronic device 100 may determine at least one object for estimating lane information by selecting a preset object from among a plurality of objects included in an image according to a lane information estimation method. For example, the electronic device 100 may determine at least one of the guard rail, the front driving vehicle, and the rear driving vehicle included in the image as at least one object for estimating lane information.
  • the electronic device 100 may use a learning model according to a deep neural network technology to determine at least one object for estimating lane information from the obtained image. For example, the electronic device 100 may improve the visibility of an image by using a learning model. Also, the electronic device 100 may determine at least one object for estimating lane information from an image having improved visibility.
  • the use of the learning model according to the deep neural network technology will be described later in detail with reference to FIG. 3.
  • the electronic device 100 estimates lane information of the road on which the vehicle is driving in the image, based on the distance between the determined object and the vehicle and the vanishing point of the image.
  • the electronic device 100 may measure the distance between the determined object and the vehicle. For example, the electronic device 100 may measure the distance between the determined object and the vehicle by using a distance sensor. Alternatively, the electronic device 100 may determine the distance between the determined object and the vehicle based on pre-stored experimental data for the specific type of object and the size of the specific object included in the acquired image.
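  • As an illustration of the size-based estimate above, a pinhole-camera relation can map the apparent pixel width of an object of a known type to a distance. The sketch below is a minimal example under assumed calibration values; the focal length, real-world widths, and function name are illustrative, not data from the disclosure.

```python
# Hedged sketch: distance from apparent pixel size under a pinhole-camera
# model. The calibration constants are assumed example values, not disclosed
# experimental data.

FOCAL_LENGTH_PX = 1000.0                      # assumed camera focal length, in pixels
KNOWN_WIDTHS_M = {"car": 1.8, "truck": 2.5}   # assumed real widths per object type

def estimate_distance_m(object_type: str, bbox_width_px: float) -> float:
    """Distance ~ (real width * focal length) / apparent pixel width."""
    return KNOWN_WIDTHS_M[object_type] * FOCAL_LENGTH_PX / bbox_width_px

# Example: a car spanning 90 px in the image is roughly 20 m ahead.
print(estimate_distance_m("car", 90.0))       # -> 20.0
```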
  • the electronic device 100 may predict a vanishing point of the image. For example, the electronic device 100 may predict the vanishing point by extracting straight lines along the lower end of a building or a guardrail among the objects included in the image. In detail, the electronic device 100 may predict, as the vanishing point, the single point where the extension lines of the plurality of straight lines extracted from the image converge.
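  • One common way to realize this prediction, shown below as a hedged sketch, is to treat the vanishing point as the least-squares intersection of the extracted straight lines; the line representation and example values are assumptions for illustration.

```python
import numpy as np

# Hedged sketch: the vanishing point as the least-squares intersection of the
# extracted straight lines. Each line is represented as (a, b, c) with
# a*x + b*y + c = 0; the example lines are assumed values.

def vanishing_point(lines: np.ndarray) -> np.ndarray:
    """Find the point p minimizing the residuals a*x + b*y + c over all lines."""
    A = lines[:, :2]                          # (n, 2) line normal vectors
    c = lines[:, 2]                           # (n,) line offsets
    p, *_ = np.linalg.lstsq(A, -c, rcond=None)
    return p                                  # (x, y) of the predicted vanishing point

# Example: three lines that all pass through (320, 180).
lines = np.array([
    [1.0, -1.0, -140.0],                      # x - y - 140 = 0
    [1.0,  1.0, -500.0],                      # x + y - 500 = 0
    [0.5, -1.0,   20.0],                      # 0.5x - y + 20 = 0
])
print(vanishing_point(lines))                 # -> approximately [320. 180.]
```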
  • the electronic device 100 may determine a road area based on the determined position of the object, and determine the lane width of each lane based on the distance between the determined object and the vehicle. Also, the electronic device 100 may estimate the lane information of the road by dividing the road area determined by the vanishing point of the image into the number of lanes.
  • the electronic device 100 may estimate the lane width based on the distance between the determined object and the vehicle.
  • the electronic device 100 may estimate the lane information of the road by extending a straight line having the estimated lane width by using the vanishing point of the image as a center point.
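  • The following sketch illustrates one possible reading of this step: lane boundaries are laid out at the image bottom according to the estimated lane widths, and each boundary is connected to the vanishing point as a straight line. All coordinates are assumed example values.

```python
# Hedged sketch: lane boundaries are placed along the image bottom at the
# estimated lane widths and each is connected to the vanishing point by a
# straight line. Coordinates and widths are assumed example values.

def lane_lines(vp, road_left_x, lane_widths_px, img_height):
    """Return one (bottom point, vanishing point) segment per lane boundary."""
    boundaries = [road_left_x]
    for width in lane_widths_px:              # accumulate widths left to right
        boundaries.append(boundaries[-1] + width)
    return [((x, img_height), vp) for x in boundaries]

# Example: a 4-lane road, 160 px per lane at the bottom of a 640x480 image.
for segment in lane_lines(vp=(320, 180), road_left_x=0,
                          lane_widths_px=[160, 160, 160, 160], img_height=480):
    print(segment)
```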
  • the electronic device 100 outputs guide information for guiding driving of the vehicle based on the estimated lane information.
  • the electronic device 100 may display the synthesized guide information on the road area of the acquired image.
  • the electronic device 100 may output guide information as a sound.
  • the electronic device 100 may determine whether a danger of the driving route of the vehicle occurs based on the guide information and the preset reference, and output the determination result as a sound.
  • FIG. 3 is a diagram for describing a method of converting an image by an electronic device according to an exemplary embodiment.
  • the electronic device 100 may obtain a learning model 321 based on a result of learning the relationship between a plurality of images of the same subject.
  • the plurality of images of the same subject may be a plurality of images photographing the same object at the same position and at the same angle.
  • the relationship between the plurality of images may be an error value between pixels included in the same position of the image.
  • the electronic device 100 may generate the learning model 321 by learning relationships between the plurality of pair images using the plurality of pair images.
  • the plurality of pair images means a plurality of images 311 including visual disturbances (e.g., low light, fog, yellow dust, etc.) and a corresponding plurality of images 312 without the visual disturbances.
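  • As a concrete illustration of the per-pixel relationship mentioned above, the error between a pair image can be computed as the mean squared difference of co-located pixels. A minimal sketch:

```python
import numpy as np

# Hedged sketch: quantify the relationship between a pair image as the mean
# squared error between pixels at the same position.

def pixelwise_mse(img_disturbed: np.ndarray, img_clean: np.ndarray) -> float:
    assert img_disturbed.shape == img_clean.shape, "pair images must be aligned"
    diff = img_disturbed.astype(np.float32) - img_clean.astype(np.float32)
    return float(np.mean(diff ** 2))

# Toy 2x2 grayscale pair: every pixel differs by 20, so the MSE is 400.
foggy = np.array([[120, 130], [125, 128]], dtype=np.uint8)
clear = np.array([[100, 110], [105, 108]], dtype=np.uint8)
print(pixelwise_mse(foggy, clear))            # -> 400.0
```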
  • the electronic device 100 may generate a second learning model 321 for image conversion by using a pre-generated first learning model (not shown).
  • the electronic device 100 may receive a first learning model for image analysis from another external electronic device (for example, a server including a plurality of image analysis models).
  • the electronic device 100 may retrain the first learning model to generate a second learning model 321 for image conversion.
  • the electronic device 100 may generate the second learning model 321 by repeatedly feeding back into the first learning model the error between the output image obtained by passing the image 311 that includes visual disturbances in one pair image through the first learning model, and the image that does not include visual disturbances in that pair image.
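  • A minimal sketch of such retraining, assuming a gradient-based fine-tuning realization in PyTorch (an assumed framework choice): the pixel error between the first model's output for the disturbed image and the clean image of the pair is repeatedly fed back through backpropagation. The tiny network and hyperparameters are placeholders, not the disclosed model.

```python
import torch
import torch.nn as nn

# Hedged sketch: the pre-trained first model is fine-tuned on pair images by
# repeatedly feeding back the per-pixel error between its output for the
# disturbed image and the disturbance-free image.

first_model = nn.Sequential(                 # stand-in for the first learning model
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(first_model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()                       # per-pixel error between the images

def finetune_step(disturbed: torch.Tensor, clean: torch.Tensor) -> float:
    """One retraining update: disturbed image in, pixel error fed back."""
    output = first_model(disturbed)          # pass the disturbed pair image through
    loss = loss_fn(output, clean)            # error against the clean pair image
    optimizer.zero_grad()
    loss.backward()                          # error propagated back into the model
    optimizer.step()
    return loss.item()

# Example with one random 64x64 pair (batch size 1).
disturbed, clean = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(finetune_step(disturbed, clean))
```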
  • the electronic device 100 may receive the learned learning model 321 from the outside.
  • the electronic device 100 may determine at least one object for estimating lane information from the converted image.
  • the embodiments described in step 204 of FIG. 2 may be applied.
  • the electronic device 100 may convert an acquired image, by utilizing the learning model 321 acquired in block 310, so that the acquired image has a visibility of a predetermined value or more. For example, the electronic device 100 may increase the visibility of an image including visual disturbance elements by using the learning model 321.
  • the learning model 321 may be a set of algorithms that identify and/or determine objects included in an image by extracting and using various attributes of the image, based on the results of statistical machine learning. For example, when it is determined that the visibility of the image does not exceed the preset value, the learning model 321 may perform image conversion based on the identification and/or determination of the objects in the image.
  • the training model 321 may be an end-to-end deep neural network model.
  • the end-to-end deep neural network model refers to a learning model that can convert an input image into an output image without post-processing.
  • the electronic device 100 may convert the first image 322 into the second image 323 by inputting the first image 322 including visual disturbances into the acquired learning model 321.
  • the second image 323 is an image having a visibility greater than or equal to a preset value.
  • the learning model 321 may identify each object included in the image and process the pixels included in the object for each object, thereby converting the visibility of the entire image to be greater than or equal to a preset value.
  • the preset value may be determined according to the learning model 321.
  • the preset value may change as the learning model is updated.
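  • The gating on the preset visibility value could look like the following sketch; RMS contrast is an assumed stand-in for the disclosed visibility measure, and the threshold is an illustrative value.

```python
import numpy as np

# Hedged sketch: convert only when a visibility score falls below the preset
# value. RMS contrast is an assumed proxy for the disclosed visibility measure;
# `learning_model` is any image-to-image converter (e.g., the fine-tuned model
# sketched earlier).

VISIBILITY_THRESHOLD = 0.2                   # assumed preset value

def rms_contrast(gray: np.ndarray) -> float:
    """Root-mean-square contrast of a grayscale image scaled to [0, 1]."""
    return float((gray.astype(np.float32) / 255.0).std())

def maybe_convert(gray: np.ndarray, learning_model):
    if rms_contrast(gray) < VISIBILITY_THRESHOLD:
        return learning_model(gray)          # end-to-end: image in, image out
    return gray                              # already visible enough

# Example: a flat, fog-like frame has zero contrast and triggers conversion.
foggy = np.full((480, 640), 180, dtype=np.uint8)
out = maybe_convert(foggy, lambda img: np.clip(img * 1.3, 0, 255).astype(np.uint8))
```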
  • by converting an image into an image having a visibility greater than or equal to a predetermined value using the learning model 321, the electronic device 100 can achieve higher precision than when estimating lane information through analysis of the original image that includes visual disturbance elements.
  • FIG. 4 is a diagram for describing a method of estimating lane information of a road, by an electronic device, according to an exemplary embodiment.
  • the electronic device 100 may estimate the lane information 405 based on the information obtained through the analysis of the acquired image and the lane number information 404 of the driving road.
  • the information obtained through analysis of the image may include at least one object, such as vanishing point, road area, guard rail, or front vehicle, but is not limited thereto.
  • the electronic device 100 may determine the road area 402 from the obtained image. For example, the electronic device 100 may determine the road area 402, through analysis of the image, based on the position and height of the guard rail 401 among the plurality of objects included in the image.
  • the electronic device 100 may obtain lane number information 404 of a road on which the vehicle is driving, based on the location information of the vehicle 400.
  • the electronic device 100 may obtain the lane number information 404 of the road on which the vehicle 400 is driving based on the position information of the vehicle 400 obtained through the global positioning system (GPS).
  • the electronic device 100 may obtain the lane number information 404 of the driving road from the local navigation 411 included in the vehicle 400.
  • the electronic device 100 may obtain the number of lanes 404 previously stored in the electronic device 100 based on the location information of the vehicle 400.
  • the electronic device 100 may estimate the lane widths 407-410 of each lane of the road on which the vehicle is driving, based on the distance between the front vehicle 406 and the vehicle 400 determined in the image, the determined road area 402, and the obtained lane number information 404. For example, when the lane number information 404 indicates four lanes, the electronic device 100 may divide the road area into four lanes. In addition, the electronic device 100 may determine the ratio that each lane divided by the lane number information 404 occupies in the entire road area, based on the distance between the front vehicle 406 and the vehicle 400.
  • For example, the electronic device 100 may determine the ratio between the first lane 407, the second lane 408, the third lane 409, and the fourth lane 410 as 1.1 : 1.2 : 1.0 : 1.1. In addition, the electronic device 100 may estimate each lane width 407-410 based on the determined ratio.
  • the electronic device 100 may estimate respective lane widths 407-410 with reference to previously stored experimental data values.
  • the pre-stored experimental data value may be ratio information of each lane width matching the distance between the front vehicle 406 and the vehicle 400 and the number of lanes when the total road area is 1.
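  • A hedged sketch of such a lookup: pre-stored ratio data, keyed by lane count and a coarse distance bucket, is scaled to the measured road area. The table entries are illustrative; the 1.1 : 1.2 : 1.0 : 1.1 row mirrors the example ratio above.

```python
# Hedged sketch: pre-stored ratio data, keyed by lane count and a coarse
# distance bucket, is scaled to the road area measured in pixels. All table
# entries are illustrative assumptions.

RATIO_TABLE = {                              # (lane_count, bucket) -> width ratios
    (4, "near"): [1.1, 1.2, 1.0, 1.1],
    (4, "far"):  [1.0, 1.0, 1.0, 1.0],
}

def lane_widths_px(road_width_px: float, lane_count: int, distance_m: float):
    bucket = "near" if distance_m < 30.0 else "far"     # assumed bucketing
    ratios = RATIO_TABLE[(lane_count, bucket)]
    total = sum(ratios)                      # normalize so the widths fill the road
    return [road_width_px * r / total for r in ratios]

# Example: a 640 px wide road area, 4 lanes, front vehicle 20 m ahead.
print(lane_widths_px(640.0, 4, 20.0))        # -> [160.0, 174.5..., 145.4..., 160.0]
```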
  • the electronic device 100 may predict the vanishing point 403 of the image.
  • the embodiments described in operation 206 of FIG. 2 may be applied to the method of determining the distance between the front vehicle 406 and the vehicle 400 and the method of predicting the vanishing point 403 of the image.
  • the electronic device 100 may estimate lane information 405 of the driving road based on the estimated lane widths 407-410 and the vanishing point 403 of the image. For example, the electronic device 100 may estimate the lane information 405 by extending straight lines that divide the lanes toward the vanishing point of the image, based on the lane widths 407-410 estimated with respect to the road area.
  • when the electronic device 100 utilizes the lane number information 404 stored in the vehicle 400, the electronic device 100 can estimate the lane information 405 by analyzing the acquired image without accessing a data server external to the vehicle 400.
  • FIG. 5 is a diagram for describing a method of estimating lane information of a road, by an electronic device, according to an exemplary embodiment.
  • the electronic device 100 may estimate lane information of a road based on the first front vehicle 502 located on the driving center line 501 of the vehicle 500.
  • the electronic device 100 may determine the driving center line 501 of the vehicle.
  • the driving center line 501 is the center line of the driving vehicle 500 and corresponds to the center line of the image obtained through the camera.
  • the electronic device 100 may determine the first front vehicle 502 located on the center line as at least one object for estimating lane information.
  • the electronic device 100 may estimate the first lane width 503 of the first lane in which the vehicle is driving, based on the distance between the first front vehicle 502 and the vehicle 500 and the vehicle width of the first front vehicle 502.
  • the vehicle width refers to the horizontal size of the first front vehicle detected in the acquired image.
  • the size of the vehicle width may be expressed in units of pixels.
  • the electronic device 100 may determine the distance between the first front vehicle 502 and the vehicle 500 based on the size of the first front vehicle 502 detected from the image, by using previously stored experimental data.
  • the electronic device 100 may estimate the sum of the vehicle width of the first front vehicle 502 and a first value as the first lane width 503 of the first lane.
  • the first value may be determined according to the distance between the first front vehicle 502 and the vehicle 500.
  • the first value may be a value set according to experimental data previously stored in the electronic device 100.
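  • A minimal sketch of this estimate, assuming the first value is read from a distance-indexed table standing in for the pre-stored experimental data:

```python
# Hedged sketch: the first lane width is the detected vehicle width plus a
# distance-dependent margin (the "first value"). The margin table is an assumed
# stand-in for the pre-stored experimental data.

FIRST_VALUE_PX = [(10.0, 80.0), (30.0, 40.0), (60.0, 20.0)]  # (max distance m, margin px)

def first_lane_width_px(vehicle_width_px: float, distance_m: float) -> float:
    for max_distance_m, margin_px in FIRST_VALUE_PX:
        if distance_m <= max_distance_m:
            return vehicle_width_px + margin_px
    return vehicle_width_px + 10.0           # assumed margin for distant vehicles

# Example: a 90 px wide front vehicle 20 m ahead -> 90 + 40 = 130 px lane width.
print(first_lane_width_px(90.0, 20.0))
```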
  • the electronic device 100 may estimate lane information 505 of the road on which the vehicle is driving, based on the estimated first lane width 503 and the vanishing point of the image. For example, the electronic device 100 may estimate the lane information 505 by extending straight lines that pass through the vanishing point of the image, based on the first lane width 503 estimated with respect to the driving center line 501 or the first front vehicle in the image.
  • the electronic device 100 may estimate the lane lines of lanes other than the lane in which the vehicle 500 is driving (for example, the second lane in which the second front vehicle 510 is driving) in a similar manner.
  • the electronic device 100 may determine the second front vehicle 510 not located on the driving center line as at least one object.
  • the electronic device 100 may estimate the second lane width of the second lane based on the distance between the second front vehicle 510 and the driving center line 501, the distance between the second front vehicle 510 and the vehicle 500, and the vehicle width of the second front vehicle.
  • the electronic device 100 may estimate the sum of the vehicle width of the second front vehicle 510 and a second value as the second lane width of the second lane.
  • the second value may be determined according to the distance between the second front vehicle 510 and the driving center line 501 and the distance between the second front vehicle 510 and the vehicle 500.
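  • A corresponding sketch for the second lane, assuming the second value grows with proximity and with lateral offset from the driving center line; the coefficients are purely illustrative:

```python
# Hedged sketch: for the second front vehicle, the margin (the "second value")
# depends on both its distance ahead and its lateral offset from the driving
# center line. The coefficients below are purely illustrative assumptions.

def second_lane_width_px(vehicle_width_px: float, distance_m: float,
                         offset_from_centerline_px: float) -> float:
    second_value = 800.0 / distance_m + 0.1 * abs(offset_from_centerline_px)
    return vehicle_width_px + second_value

# Example: an 80 px wide vehicle, 25 m ahead, 200 px left of the center line:
# 80 + 800/25 + 0.1*200 = 80 + 32 + 20 = 132 px.
print(second_lane_width_px(80.0, 25.0, -200.0))
```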
  • the electronic device 100 may estimate lane information of the second lane based on the estimated second lane width and the vanishing point of the image. For example, the electronic device 100 may estimate the lane line information of the second lane by extending a straight line that passes through the vanishing point of the image, based on the second lane width estimated with respect to the driving center line 501 or the second front vehicle in the image.
  • the electronic device 100 may output, as guide information, lane information including the lane information of the lane in which the second front vehicle is driving and the lane information 505 of the lane in which the first front vehicle is driving.
  • FIG. 6 is a diagram for describing a method of outputting guide information by an electronic device according to an exemplary embodiment.
  • the electronic device 100 may generate guide information based on the estimated lane information.
  • the guide information may be an image of estimated lane information.
  • the guide information may be processed by the electronic device 100 based on the estimated lane information and the driving information of the vehicle.
  • the electronic device 100 may generate guide information based on the estimated lane information and the driving speed of the vehicle.
  • the electronic device 100 may output imaged lane information.
  • the electronic device 100 may display the lane 602 of the driving road, based on the estimated lane information, on a display 601 included in the vehicle (e.g., a head-up display or a transparent display on the front of the vehicle).
  • the electronic device 100 may display the acquired image on the display 601 and highlight and display the estimated lane 602 on the obtained image.
  • the guide information may be hazard warning information processed using the estimated lane information.
  • the electronic device 100 may predict the driving path of the vehicle according to the driving direction, the speed, and the like of the vehicle, and determine whether there is a danger of lane departure by using the estimated lane information.
  • the electronic device 100 may output corresponding guide information as a sound.
  • the electronic device 100 may output danger warning information 603 generated based on lane information. For example, when it is determined that there is a lane departure risk, the electronic device 100 may output the danger warning information 603 to a display included in the vehicle. For example, the electronic device 100 may provide text, an image, or an animation indicating the danger warning information 603.
  • the electronic device 100 may determine whether the location of the vehicle is within a preset range based on the estimated lane information. In addition, the electronic device 100 may output guide information based on the determination result.
  • the preset range is a range of the position of the vehicle estimated when the vehicle is driven in one lane.
  • the electronic device 100 may use the center line of the image to determine whether the location of the vehicle is within the preset range. When the lane location estimated from the center line of the acquired image is outside the safety range, the electronic device 100 may determine that the driving path of the vehicle has left the lane.
  • the safety range may be preset in the electronic device 100 based on experimental data.
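  • A minimal sketch of this check, assuming the vehicle position is taken as the image center line and the safety range is a fixed pixel tolerance around the estimated lane center (both assumptions for illustration):

```python
# Hedged sketch: flag lane departure when the vehicle position, taken here as
# the image center line, drifts outside a safety range around the estimated
# lane center. The tolerance is an assumed stand-in for the experimental value.

SAFETY_TOLERANCE_PX = 60.0           # assumed half-width of the safety range

def departs_lane(image_center_x: float, lane_left_x: float,
                 lane_right_x: float) -> bool:
    lane_center_x = (lane_left_x + lane_right_x) / 2.0
    return abs(image_center_x - lane_center_x) > SAFETY_TOLERANCE_PX

# Example: the lane spans 240..400 px (center 320); a vehicle center at 395 px
# is 75 px off, beyond the 60 px tolerance, so departure is flagged.
print(departs_lane(395.0, 240.0, 400.0))   # -> True
```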
  • the electronic device 100 may output guide information such as danger warning information 603.
  • the driver may safely drive the vehicle even when there is a visual disturbance in the driving environment.
  • the guide information is output in various forms while the driver is driving the vehicle, thereby increasing the driving satisfaction of the driver.
  • FIGS. 7 and 8 are block diagrams illustrating a configuration of an electronic device according to an embodiment.
  • the electronic device 1000 may include a memory 1100, a display unit 1210, a camera 1610, and a processor 1300.
  • the electronic device 1000 may be implemented by more components than those illustrated in FIG. 7, or the electronic device 1000 may be implemented by fewer components than those illustrated in FIG. 7.
  • the electronic device 1000 may further include, in addition to the memory 1100, the display unit 1210, the camera 1610, and the processor 1300, an output unit 1200, a communication unit 1500, a sensing unit 1400, an A/V input unit 1600, and a user input unit 1700.
  • the memory 1100 may store a program for processing and controlling the processor 1300, and may store an image input to the electronic device 1000 or guide information output from the electronic device 1000. In addition, the memory 1100 may store specific information for determining whether to output the guide information.
  • the memory 1100 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, and an optical disk.
  • Programs stored in the memory 1100 may be classified into a plurality of modules according to their functions.
  • the programs stored in the memory 1100 may be classified into a UI module 1110, a touch screen module 1120, and a notification module 1130.
  • the UI module 1110 may provide a specialized UI, GUI, or the like that interworks with the electronic device 1000 for each application.
  • the touch screen module 1120 may detect a touch gesture on the user's touch screen and transmit information about the touch gesture to the processor 1300.
  • the touch screen module 1120 according to an embodiment may recognize and analyze a touch code.
  • the touch screen module 1120 may be configured as separate hardware including a controller.
  • the notification module 1130 may generate a signal for notifying occurrence of an event of the electronic device 1000. Examples of events occurring in the electronic apparatus 1000 may include call signal reception, message reception, key signal input, and schedule notification.
  • the notification module 1130 may output the notification signal in the form of a video signal through the display unit 1210, in the form of an audio signal through the sound output unit 1220, or in the form of a vibration signal through the vibration motor 1230.
  • the notification module 1130 may generate a signal for outputting guide information based on the estimated lane information.
  • the output unit 1200 may output an audio signal, a video signal, or a vibration signal, and may include a display unit 1210, a sound output unit 1220, and a vibration motor 1230.
  • the display unit 1210 displays and outputs information processed by the electronic apparatus 1000.
  • the display 1210 may output an image captured by the camera 1610.
  • the display unit 1210 may output the synthesized guide information generated by the processor 1300 to the captured image.
  • the display unit 1210 may display a user interface for executing an operation related to the response in response to a user input.
  • the sound output unit 1220 outputs audio data received from the communication unit 1500 or stored in the memory 1100.
  • the sound output unit 1220 outputs a sound signal related to a function (for example, a call signal reception sound, a message reception sound, and a notification sound) performed by the electronic device 1000.
  • the sound output unit 1220 may output guide information generated as a signal from the notification module 1130 as a sound signal under the control of the processor 1300.
  • the processor 1300 typically controls the overall operation of the electronic apparatus 1000.
  • the processor 1300 may execute programs stored in the memory 1100 to control the user input unit 1700, the output unit 1200, the sensing unit 1400, the communication unit 1500, and the A/V input unit 1600 overall.
  • the processor 1300 may perform the functions of the electronic apparatus 1000 described with reference to FIGS. 1 to 6 by executing programs stored in the memory 1100.
  • the processor 1300 may control the camera 1610 to capture an image of the outside of the vehicle by executing one or more instructions stored in the memory.
  • the processor 1300 may control the communicator 1500 to acquire an image of the outside of the vehicle by executing one or more instructions stored in the memory.
  • the processor 1300 may determine at least one object for estimating lane information from the captured image by executing one or more instructions stored in the memory. For example, the processor 1300 may determine a guard rail, a front vehicle, or a rear vehicle included in the image as at least one object for estimating lane information.
  • the processor 1300 may, by executing one or more instructions stored in the memory, estimate lane information of the road on which the vehicle is driving in the image based on the distance between the determined at least one object and the vehicle and the vanishing point of the image. For example, the processor 1300 may obtain the distance between the determined at least one object and the vehicle through the sensing unit 1400 by executing one or more instructions stored in the memory. In addition, the processor 1300 may predict the vanishing point by extracting straight lines along the lower end of a building or a guardrail among the objects included in the image.
  • the processor 1300 may determine a road area based on the distance between the determined object and the vehicle. In addition, the processor 1300 may estimate the lane information of the road by dividing the road area, determined using the vanishing point of the image, into the number of lanes.
  • the processor 1300 may estimate the lane width based on the distance between the determined object and the vehicle.
  • the processor 1300 may estimate the lane information of the road by extending straight lines having the estimated lane width, using the vanishing point of the image as a center point.
  • the processor 1300 may determine the road area from the captured image by executing one or more instructions stored in the memory. In addition, the processor 1300 may obtain lane number information of a road on which the vehicle is driving, based on the location information of the vehicle. For example, the processor 1300 may control the position sensor 1460 to acquire GPS information of the vehicle by executing one or more instructions stored in a memory. In addition, the processor 1300 may estimate the lane width in the image based on the determined distance between the at least one object and the vehicle, the determined road area, and the obtained lane number information. In addition, the processor 1300 may estimate lane information of the road on which the vehicle is traveling in the image based on the estimated lane width and the vanishing point of the image.
  • the processor 1300 may determine the driving center line of the vehicle by executing one or more instructions stored in the memory. In addition, the processor 1300 may determine the first front vehicle positioned on the driving center line as at least one object for estimating lane information by executing one or more instructions stored in the memory. In addition, the processor 1300 may, by executing one or more instructions stored in the memory, estimate the first lane width of the first lane in which the vehicle is driving in the image, based on the distance between the first front vehicle and the vehicle and the vehicle width of the first front vehicle. In addition, the processor 1300 may estimate lane information of the road on which the vehicle is traveling in the image, based on the estimated first lane width and the vanishing point of the image, by executing one or more instructions stored in the memory.
  • the processor 1300 may determine a second front vehicle that is not positioned on the driving center line as at least one object for estimating lane information. Further, the processor 1300 may further include a second lane of the second lane in which the second front vehicle is driving based on the distance between the second front vehicle and the driving centerline, the distance between the second front vehicle and the vehicle, and the vehicle width of the second front vehicle. The width can be estimated. In addition, the processor 1300 may estimate lane information of the road on which the vehicle is traveling in the image based on the estimated second lane width and the vanishing point of the image.
  • the processor 1300 may learn a relationship between the plurality of images by using the plurality of images of the same subject.
  • the processor 1300 may generate a learning model based on the learning result by executing one or more instructions stored in the memory.
  • the processor 1300 may convert the photographed image using the learning model by executing one or more instructions stored in the memory so that the photographed image has a visibility greater than or equal to a preset value.
  • the processor 1300 may estimate lane information of a road on which the vehicle is driven by utilizing the converted image by executing one or more instructions stored in a memory. For example, the processor 1300 may determine at least one object for estimating lane information from the converted image.
  • the processor 1300 may control the output unit 1200 to output guide information for guiding driving of the vehicle based on the estimated lane information by executing one or more instructions stored in the memory.
  • the processor 1300 may control the display unit 1210 to synthesize and display the guide information on the road area of the acquired image by executing one or more instructions stored in the memory.
  • the processor 1300 may control the sound output unit 1220 to output guide information by executing one or more instructions stored in a memory.
  • the processor 1300 may determine whether a risk on the driving path of the vehicle occurs, according to the guide information and a predetermined reference, by executing one or more instructions stored in the memory, and may control the sound output unit 1220 or the vibration motor 1230 to output the determination result.
  • the processor 1300 may generate guide information.
  • the processor 1300 may generate guide information based on the estimated lane information and the driving speed of the vehicle.
  • the processor 1300 may generate guide information by determining whether the location of the vehicle is within a preset range, based on the estimated lane information.
  • the sensing unit 1400 may detect a state of the electronic device 1000 or a state around the electronic device 1000 and transmit the detected information to the processor 1300.
  • the sensing unit 1400 may include a geomagnetic sensor 1410, an acceleration sensor 1420, a temperature/humidity sensor 1430, an infrared sensor 1440, a gyroscope sensor 1450, a position sensor (e.g., GPS) 1460, a barometric pressure sensor 1470, a proximity sensor 1480, and an RGB sensor 1490, but is not limited thereto. Since the functions of the respective sensors can be intuitively deduced by those skilled in the art from their names, detailed descriptions thereof will be omitted.
  • the sensing unit 1400 may measure a distance between at least one object determined from the captured image and the vehicle.
  • the communication unit 1500 may include one or more components that allow the electronic device 1000 to communicate with other devices (not shown) and a server (not shown).
  • the other device (not shown) may be a computing device such as the electronic device 1000 or a sensing device, but is not limited thereto.
  • the communicator 1500 may include a short range communicator 1510, a mobile communicator 1520, and a broadcast receiver 1530.
  • the short-range wireless communication unit 1510 may include a Bluetooth communication unit, a Bluetooth low energy (BLE) communication unit, a near field communication unit, a WLAN (Wi-Fi) communication unit, a Zigbee communication unit, an infrared data association (IrDA) communication unit, a Wi-Fi Direct (WFD) communication unit, an ultra wideband (UWB) communication unit, an Ant+ communication unit, and the like, but is not limited thereto.
  • the short range communication unit 1510 may receive lane number information through short range wireless communication from a navigation device included in the vehicle.
  • the mobile communication unit 1520 transmits and receives a radio signal with at least one of a base station, an external terminal, and a server on a mobile communication network.
  • the wireless signal may include various types of data according to transmission and reception of a voice call signal, a video call call signal, or a text / multimedia message.
  • the broadcast receiving unit 1530 receives a broadcast signal and / or broadcast related information from the outside through a broadcast channel.
  • the broadcast channel may include a satellite channel and a terrestrial channel. According to an embodiment of the present disclosure, the electronic device 1000 may not include the broadcast receiving unit 1530.
  • the A / V input unit 1600 is for inputting an audio signal or a video signal, and may include a camera 1610 and a microphone 1620.
  • the camera 1610 may obtain an image frame such as a still image or a moving image through an image sensor in a video call mode or a photographing mode.
  • the image captured by the image sensor may be processed by the processor 1300 or a separate image processor (not shown).
  • the camera 1610 may capture an image of the outside of the vehicle.
  • the camera 1610 may photograph a front image of a driving vehicle, but is not limited thereto.
  • the microphone 1620 receives an external sound signal and processes the external sound signal into electrical voice data.
  • the microphone 1620 may receive an acoustic signal from an external device or a user.
  • the microphone 1620 may receive a voice input of a user.
  • the microphone 1620 may use various noise removing algorithms for removing noise generated in the process of receiving an external sound signal.
  • the user input unit 1700 means a means for a user to input data for controlling the electronic apparatus 1000.
  • the user input unit 1700 may include a key pad, a dome switch, a touch pad (contact capacitive type, pressure resistive layer type, infrared sensing type, surface ultrasonic conduction type, integral tension measurement type, piezo effect type, etc.), a jog wheel, a jog switch, and the like, but is not limited thereto.
  • FIG. 9 is a block diagram of a processor according to an embodiment.
  • a processor 1300 may include a data learner 1310 and a data recognizer 1320.
  • the data learner 1310 may learn a criterion for increasing the visibility of an image.
  • the data learner 1310 may learn what data is used to increase the visibility of the image and how to increase the visibility of the image using the data.
  • the data learner 1310 acquires data to be used for learning and applies the acquired data to a data recognition model to be described later, thereby learning a criterion for increasing the visibility of an image.
  • the data recognizer 1320 may increase the visibility of the input image.
  • the data recognizer 1320 may increase the visibility of the input image by using the learned data recognition model.
  • the data recognizer 1320 may acquire predetermined data according to a criterion preset by learning, and use the data recognition model with the acquired data as an input value, thereby increasing the visibility of the input image.
  • the result value output by the data recognition model using the acquired data as an input value may be used to update the data recognition model.
  • At least one of the data learner 1310 and the data recognizer 1320 may be manufactured in the form of at least one hardware chip and mounted on the electronic device.
  • at least one of the data learner 1310 and the data recognizer 1320 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or may be manufactured as a part of an existing general purpose processor (e.g., a CPU or application processor) or a graphics dedicated processor (e.g., a GPU) and mounted on the aforementioned various electronic devices.
  • the data learner 1310 and the data recognizer 1320 may be mounted on one electronic device or may be mounted on separate electronic devices, respectively.
  • one of the data learner 1310 and the data recognizer 1320 may be included in the electronic device, and the other may be included in the server.
  • in this case, the data learner 1310 and the data recognizer 1320 may provide model information constructed by the data learner 1310 to the data recognizer 1320 via a wired or wireless connection, and data input to the data recognizer 1320 may be provided to the data learner 1310 as additional training data.
  • At least one of the data learner 1310 and the data recognizer 1320 may be implemented as a software module.
  • the software module may be stored in a non-transitory computer readable medium. In this case, at least one software module may be provided by an operating system (OS) or by a predetermined application. Alternatively, some of the at least one software module may be provided by an OS, and others may be provided by a predetermined application.
  • FIG. 10 is a block diagram of a data learner according to an exemplary embodiment.
  • the data learner 1310 may include a data acquirer 1310-1, a preprocessor 1310-2, a training data selector 1310-3, a model learner 1310-4, and a model evaluator 1310-5.
  • the data acquirer 1310-1 may acquire data necessary for determining a situation.
  • the data acquirer 1310-1 may acquire data necessary for learning for situation determination.
  • the data acquirer 1310-1 may receive a plurality of images of the same subject. For example, the data acquirer 1310-1 may receive images of the same road photographed at the same place at different times, dates, or seasons. The data acquirer 1310-1 may receive an image through a camera of the electronic device including the data learner 1310. Alternatively, the data acquirer 1310-1 may acquire data through an external device that can communicate with the electronic device.
  • the preprocessor 1310-2 may preprocess the acquired data so that the acquired data can be used for learning for situation determination.
  • the preprocessor 1310-2 may process the acquired data in a preset format so that the model learner 1310-4, which will be described later, uses the acquired data for learning for situation determination.
  • the preprocessor 1310-2 may divide the image in pixel units to analyze the obtained plurality of images.
  • the preprocessor 1310-2 may extract an object from each image to analyze the obtained plurality of images.
  • the preprocessor 1310-2 may process the extracted object as data.
  • the preprocessor 1310-2 may tag and classify an object or a pixel at a common position in the image.
  • the training data selector 1310-3 may select data required for learning from the preprocessed data.
  • the selected data may be provided to the model learner 1310-4.
  • the training data selector 1310-3 may select data necessary for learning from preprocessed data according to a predetermined criterion for determining a situation.
  • the training data selector 1310-3 may select data according to preset criteria by learning by the model learner 1310-4 to be described later.
  • the training data selector 1310-3 may select data required for learning from data processed by the preprocessor 1310-2.
  • the training data selector 1310-3 may select data corresponding to the specific object in order to learn a criterion for increasing the visibility of the specific object in the image.
  • the model learner 1310-4 may learn a criterion on how to determine a situation based on the training data. In addition, the model learner 1310-4 may learn a criterion about what training data should be used for situation determination.
  • the model learner 1310-4 may analyze characteristics of each object or pixel to learn a criterion for increasing the visibility of an image.
  • the model learner 1310-4 may learn a criterion for increasing the visibility of the image by analyzing the relationship between the images by using one pair image.
  • the model learner 1310-4 may analyze the relationship by extracting an error between the images by using one pair image.
  • the model learner 1310-4 may train the data recognition model used for situation determination using the training data.
  • the data recognition model may be a pre-built model.
  • the data recognition model may be a model built in advance by receiving basic training data (eg, a sample image).
  • the data recognition model may be constructed in consideration of the application field of the recognition model, the purpose of learning, or the computer performance of the device.
  • the data recognition model may be, for example, a model based on a neural network.
  • a model such as a deep neural network (DNN), a recurrent neural network (RNN), and a bidirectional recurrent deep neural network (BRDNN) may be used as the data recognition model, but is not limited thereto.
  • when there are a plurality of pre-built data recognition models, the model learner 1310-4 may determine a data recognition model having a high correlation between the input training data and the basic training data as the data recognition model to be trained.
  • in this case, the basic training data may be previously classified by data type, and the data recognition model may be pre-built by data type. For example, the basic training data may be classified based on various criteria such as the region where the training data was generated, the time at which the training data was generated, the size of the training data, the genre of the training data, the creator of the training data, and the types of objects in the training data.
  • the model learner 1310-4 may train the data recognition model using, for example, a learning algorithm including an error back-propagation method or a gradient descent method.
  • model learner 1310-4 may train the data recognition model through, for example, supervised learning using the training data as an input value.
  • the model learner 1310-4 may also train the data recognition model, for example, through unsupervised learning that finds a criterion for situation determination by learning, without separate guidance, the kinds of data necessary for situation determination.
  • the model learner 1310-4 may train the data recognition model, for example, through reinforcement learning using feedback on whether the result of the situation determination according to the learning is correct.
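  • As a concrete illustration of the supervised, gradient-based case named above, the following self-contained sketch performs one update: it computes the output error, backpropagates its gradient, and descends along it. A single linear layer stands in for the data recognition model; nothing here is specific to the disclosed model.

```python
import numpy as np

# Hedged sketch: one supervised update of the kind named above. The output
# error is computed, its gradient is backpropagated, and the weights descend
# along it.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) * 0.1            # model weights: 4 inputs -> 4 outputs

def train_step(x: np.ndarray, target: np.ndarray, lr: float = 0.01) -> float:
    """Gradient-descent update for the loss 0.5 * ||W x - target||^2."""
    global W
    y = W @ x                                # forward pass
    error = y - target                       # output error
    grad_W = np.outer(error, x)              # backpropagated gradient w.r.t. W
    W -= lr * grad_W                         # gradient descent step
    return float(0.5 * error @ error)        # loss before the update

x, target = rng.normal(size=4), rng.normal(size=4)
for _ in range(3):
    print(train_step(x, target))             # the loss shrinks step by step
```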
  • the model learner 1310-4 may store the trained data recognition model.
  • the model learner 1310-4 may store the learned data recognition model in a memory of an electronic device including the data recognizer 1320, which will be described later.
  • the model learner 1310-4 may store the learned data recognition model in a memory of a server connected to the electronic device through a wired or wireless network.
  • the memory in which the learned data recognition model is stored may store, for example, commands or data related to at least one other element of the electronic device.
  • the memory may also store software and / or programs.
  • the program may include, for example, a kernel, middleware, an application programming interface (API) and / or an application program (or “application”), and the like.
  • the model evaluator 1310-5 may input evaluation data into the data recognition model and, if the recognition result output for the evaluation data does not satisfy a predetermined criterion, may cause the model learner 1310-4 to relearn.
  • the evaluation data may be preset data for evaluating the data recognition model.
  • the evaluation data may include at least one pair of images.
  • the model evaluator 1310-5 may evaluate that the predetermined criterion is not satisfied when, among the recognition results of the learned data recognition model for the evaluation data, the number or ratio of inaccurate results exceeds a preset threshold. For example, when the predetermined criterion is defined as a ratio of 2%, and the learned data recognition model outputs incorrect recognition results for more than 20 of 1000 evaluation data items, the model evaluator 1310-5 may judge that the learned data recognition model is not suitable; a sketch of this check follows.
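  • a minimal sketch of this suitability check, mirroring the 2% example above (the function name and signature are illustrative):

```python
def model_is_suitable(num_incorrect: int, num_total: int,
                      threshold_ratio: float = 0.02) -> bool:
    """True if the error ratio over the evaluation data is acceptable."""
    return (num_incorrect / num_total) <= threshold_ratio

print(model_is_suitable(20, 1000))  # True: exactly 2%, still acceptable
print(model_is_suitable(21, 1000))  # False: above 2%, trigger relearning
```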
  • meanwhile, when there are a plurality of learned data recognition models, the model evaluator 1310-5 may evaluate whether each satisfies the predetermined criterion, and may determine a model that satisfies the criterion as the final data recognition model. In this case, when a plurality of models satisfy the criterion, the model evaluator 1310-5 may determine, as the final data recognition model, any one model or a preset number of models in descending order of evaluation score.
  • at least one of the data acquirer 1310-1, the preprocessor 1310-2, the training data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 in the data learner 1310 may be manufactured in the form of at least one hardware chip and mounted on the electronic device.
  • for example, at least one of them may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or may be manufactured as a part of an existing general-purpose processor (e.g., a CPU or an application processor) or a graphics-dedicated processor (e.g., a GPU) and mounted on the various electronic devices described above.
  • the data acquirer 1310-1, the preprocessor 1310-2, the training data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 may be mounted on a single electronic device, or may be mounted on separate electronic devices, respectively. For example, some of the data acquirer 1310-1, the preprocessor 1310-2, the training data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 may be included in the electronic device, and the rest may be included in the server.
  • at least one of the data acquirer 1310-1, the preprocessor 1310-2, the training data selector 1310-3, the model learner 1310-4, and the model evaluator 1310-5 may be implemented as a software module. When implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable medium. In this case, the at least one software module may be provided by an operating system (OS) or by a predetermined application; alternatively, some of it may be provided by the OS and the rest by a predetermined application.
  • FIG. 11 is a block diagram of a data recognizer 1320 according to an embodiment.
  • the data recognizer 1320 may include a data acquirer 1320-1, a preprocessor 1320-2, a recognition data selector 1320-3, a recognition result provider 1320-4, and a model updater 1320-5.
  • the data acquirer 1320-1 may acquire data necessary for situation determination, and the preprocessor 1320-2 may preprocess the acquired data so that the acquired data may be used for situation determination.
  • the preprocessor 1320-2 may process the acquired data into a preset format so that the recognition result provider 1320-4, which will be described later, can use the acquired data for situation determination; one possible preset format is sketched below.
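  • a minimal sketch of one such preset format (the 256x256 RGB CHW float tensor below is an assumption for illustration, not a format the disclosure fixes):

```python
import numpy as np
import torch
from PIL import Image

def preprocess(frame: Image.Image, size: int = 256) -> torch.Tensor:
    """Convert a raw camera frame into a normalized CHW float tensor."""
    rgb = frame.convert("RGB").resize((size, size))
    arr = np.asarray(rgb, dtype=np.float32) / 255.0  # HWC in [0, 1]
    return torch.from_numpy(arr).permute(2, 0, 1)    # CHW for the model
```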
  • the recognition data selector 1320-3 may select data required for situation determination from among the preprocessed data.
  • the selected data may be provided to the recognition result provider 1320-4.
  • the recognition data selector 1320-3 may select some or all of the preprocessed data according to a preset criterion for determining a situation.
  • the recognition data selector 1320-3 may select the data according to a criterion set by the learning of the model learner 1310-4 described above.
  • the recognition result providing unit 1320-4 may determine the situation by applying the selected data to the data recognition model.
  • the recognition result providing unit 1320-4 may provide a recognition result according to a recognition purpose of data.
  • the recognition result provider 1320-4 may apply the selected data to the data recognition model by using the data selected by the recognition data selector 1320-3 as an input value.
  • the recognition result may be determined by the data recognition model.
  • the recognition result for the input image may be provided as text, an image, or a command (for example, an application execution command, a module function execution command, etc.).
  • the recognition result providing unit 1320-4 may apply the image to the data recognition model to provide a result converted into an image that satisfies a preset visibility reference value.
  • the recognition result providing unit 1320-4 may provide a display function execution command for causing the display unit 1210 to output the converted image as the recognition result.
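  • one hedged inference sketch of this conversion step (the contrast-based visibility score and the 0.25 reference value are stand-ins for the patent's unspecified visibility metric):

```python
import torch

def enhance_if_visible(model: torch.nn.Module, frame: torch.Tensor,
                       visibility_ref: float = 0.25) -> torch.Tensor:
    """Return the converted frame only if it meets the reference value."""
    model.eval()
    with torch.no_grad():
        out = model(frame.unsqueeze(0)).squeeze(0)  # add/drop batch dim
    visibility = out.std().item()  # crude contrast-based stand-in score
    return out if visibility >= visibility_ref else frame
```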
  • the model updater 1320-5 may cause the data recognition model to be updated based on the evaluation of the recognition result provided by the recognition result provider 1320-4. For example, the model updater 1320-5 may provide the recognition result provided by the recognition result provider 1320-4 to the model learner 1310-4 so that the model learner 1310-4 can update the data recognition model accordingly.
  • at least one of the data acquirer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 in the data recognizer 1320 may be manufactured in the form of at least one hardware chip and mounted on the electronic device.
  • for example, at least one of them may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or as a part of an existing general-purpose processor (e.g., a CPU or an application processor) or a graphics-dedicated processor (e.g., a GPU), and mounted on the various electronic devices described above.
  • the data acquirer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 may be mounted on a single electronic device, or may be mounted on separate electronic devices, respectively.
  • for example, some of the data acquirer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 may be included in the electronic device, and the rest may be included in the server.
  • at least one of the data acquirer 1320-1, the preprocessor 1320-2, the recognition data selector 1320-3, the recognition result provider 1320-4, and the model updater 1320-5 may be implemented as a software module. When implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable medium.
  • in this case, the at least one software module may be provided by an operating system (OS) or by a predetermined application; alternatively, some of it may be provided by the OS and the rest by a predetermined application.
  • FIG. 12 illustrates an example in which the electronic apparatus 1000 and the server 2000 learn and recognize data by interworking with each other, according to an exemplary embodiment.
  • the server 2000 may learn a criterion for increasing the visibility of an image, and the electronic apparatus 1000 may determine a situation based on the learning result by the server 2000.
  • the model learner 2340 of the server 2000 may perform a function of the data learner 1310 illustrated in FIG. 10.
  • the model learner 2340 of the server 2000 may learn what data is used to increase the visibility of the image and how to increase the visibility of the image using the data.
  • the model learner 2340 acquires data to be used for learning and applies the acquired data to a data recognition model to be described later, thereby learning a criterion for increasing the visibility of an image.
  • the recognition result provider 1320-4 of the electronic apparatus 1000 may increase the visibility of the image by applying the data selected by the recognition data selector 1320-3 to the data recognition model generated by the server 2000.
  • for example, the recognition result provider 1320-4 may transmit the data selected by the recognition data selector 1320-3 to the server 2000, and the server 2000 may increase the visibility of the image by applying the received data to the data recognition model.
  • in this case, the recognition result provider 1320-4 may receive, from the server 2000, the image converted into an image having increased visibility.
  • alternatively, the recognition result provider 1320-4 of the electronic apparatus 1000 may receive the recognition model generated by the server 2000 from the server 2000 and increase the visibility of the image using the received recognition model.
  • in this case, the recognition result provider 1320-4 of the electronic apparatus 1000 may increase the visibility of the image by applying the data selected by the recognition data selector 1320-3 to the data recognition model received from the server 2000; a sketch of the transmit-to-server variant follows.
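  • a hedged sketch of the transmit-to-server variant described above (the endpoint URL and multipart field name are illustrative assumptions; the patent does not specify a transport protocol):

```python
import requests

def enhance_on_server(jpeg_bytes: bytes,
                      url: str = "https://example.com/enhance") -> bytes:
    """POST a camera frame to the server model; return the enhanced JPEG."""
    files = {"image": ("frame.jpg", jpeg_bytes, "image/jpeg")}
    resp = requests.post(url, files=files, timeout=5.0)
    resp.raise_for_status()
    return resp.content  # image converted by the server-side model
```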
  • Computer readable media can be any available media that can be accessed by a computer and includes both volatile and nonvolatile media, removable and non-removable media.
  • Computer readable media may include both computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Communication media typically includes computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or other transmission mechanism, and includes any information delivery media.
  • the term "unit" used herein may be a hardware component such as a processor or a circuit, and/or a software component executed by a hardware component such as a processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

Provided are: an artificial intelligence (AI) system for simulating functions of the human brain such as cognition and determination by using a machine learning algorithm such as deep learning; and an application thereof. Provided is an electronic device comprising: a camera for capturing an external image of a vehicle; and a processor for executing one or more instructions stored in a memory. By executing the one or more instructions, the processor determines at least one object for estimating lane information from the captured image, estimates the lane information of the road on which the vehicle is travelling in the image on the basis of the distance between the determined object(s) and the vehicle and the vanishing point of the image, and generates guide information for guiding the travel of the vehicle on the basis of the estimated lane information.
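As a toy illustration of the ground-plane geometry such an estimate can rely on (a flat-road, pinhole-camera sketch; the focal length and camera height are hypothetical calibration values, not values from the disclosure), a pixel row below the vanishing point maps to a ground distance, so the image rows of detected objects together with the vanishing point constrain where lane boundaries can lie:

```python
def ground_distance(y: float, y_vp: float, f: float = 800.0,
                    h: float = 1.4) -> float:
    """Distance (m) to the ground point imaged at pixel row y (pixels),
    given vanishing-point row y_vp, focal length f (pixels) and camera
    height h (m), assuming a level road and an unrotated camera."""
    if y <= y_vp:
        raise ValueError("row must lie below the vanishing point")
    return f * h / (y - y_vp)

print(round(ground_distance(y=500.0, y_vp=300.0), 1))  # 5.6 m ahead
```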
PCT/KR2017/014810 2016-12-23 2017-12-15 Method for estimating lane information, and electronic device Ceased WO2018117538A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/469,910 US11436744B2 (en) 2016-12-23 2017-12-15 Method for estimating lane information, and electronic device
EP17884410.6A EP3543896B1 (fr) Method for estimating lane information, and electronic device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20160178012 2016-12-23
KR10-2016-0178012 2016-12-23
KR1020170142567A KR102480416B1 (ko) Method and electronic device for estimating lane information
KR10-2017-0142567 2017-10-30

Publications (1)

Publication Number Publication Date
WO2018117538A1 true WO2018117538A1 (fr) 2018-06-28

Family

ID=62626749

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/014810 Ceased WO2018117538A1 (fr) Method for estimating lane information, and electronic device

Country Status (1)

Country Link
WO (1) WO2018117538A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130012629A * 2011-07-26 2013-02-05 Korea Advanced Institute of Science and Technology (KAIST) Augmented reality system for head-up display
KR20140071174A * 2012-12-03 2014-06-11 Hyundai Motor Company Lane guide apparatus for vehicle and method thereof
KR101582572B1 * 2013-12-24 2016-01-11 LG Electronics Inc. Driver assistance apparatus and vehicle including the same
KR20150084234A * 2014-01-13 2015-07-22 Hanwha Techwin Co., Ltd. System and method for detecting vehicle and lane position
KR101609303B1 * 2014-03-28 2016-04-20 Hanavision Tech Co., Ltd. Camera calibration method and apparatus therefor

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260918A * 2018-12-03 2020-06-09 Robert Bosch GmbH Method and server unit for locating a vehicle with lane-level accuracy
CN111369566A * 2018-12-25 2020-07-03 Hangzhou Hikvision Digital Technology Co., Ltd. Method, apparatus and device for determining the position of a road vanishing point, and storage medium
CN111369566B * 2018-12-25 2023-12-05 Hangzhou Hikvision Digital Technology Co., Ltd. Method, apparatus and device for determining the position of a road vanishing point, and storage medium
CN113574535A * 2019-03-13 2021-10-29 PSA Automobiles SA Training a neural network to assist in driving a vehicle by determining hard-to-observe boundaries
CN112149484A * 2019-06-28 2020-12-29 Baidu USA LLC Determining a vanishing point based on lane lines
CN115243932A * 2020-04-24 2022-10-25 StradVision, Inc. Method and device for calibrating a vehicle camera pitch, and method and device for continual learning of a vanishing-point estimation model therefor
CN111967301A * 2020-06-30 2020-11-20 Beijing Baidu Netcom Science and Technology Co., Ltd. Positioning and navigation method and apparatus, electronic device, and storage medium
US11679768B2 2020-10-19 2023-06-20 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for vehicle lane estimation
WO2023273344A1 * 2021-06-28 2023-01-05 Beijing Baidu Netcom Science and Technology Co., Ltd. Vehicle line-crossing recognition method and apparatus, electronic device, and storage medium
US12283114B2 (en) 2022-12-28 2025-04-22 Ford Global Technologies, Llc Vehicle lane boundary detection

Similar Documents

Publication Publication Date Title
WO2018117538A1 (fr) Procédé d'estimation d'informations de voie et dispositif électronique
WO2018212538A1 (fr) Dispositif électronique et procédé de détection d'événement de conduite de véhicule
WO2020231153A1 (fr) Dispositif électronique et procédé d'aide à la conduite d'un véhicule
WO2018117704A1 (fr) Appareil électronique et son procédé de fonctionnement
WO2020080773A1 (fr) Système et procédé de fourniture de contenu sur la base d'un graphe de connaissances
EP3602497A1 (fr) Dispositif électronique et procédé de détection d'événement de conduite de véhicule
WO2019172645A1 (fr) Dispositif électronique et procédé d'assistance à la conduite de véhicule
WO2019031714A1 (fr) Procédé et appareil de reconnaissance d'objet
WO2019027141A1 (fr) Dispositif électronique et procédé de commande du fonctionnement d'un véhicule
WO2020085694A1 (fr) Dispositif de capture d'image et procédé de commande associé
WO2019059505A1 (fr) Procédé et appareil de reconnaissance d'objet
KR102480416B1 (ko) 차선 정보를 추정하는 방법 및 전자 장치
WO2019151735A1 (fr) Procédé de gestion d'inspection visuelle et système d'inspection visuelle
WO2019168336A1 (fr) Appareil de conduite autonome et procédé associé
WO2018117428A1 (fr) Procédé et appareil de filtrage de vidéo
WO2019208950A1 (fr) Dispositif de robot mobile et procédé permettant de fournir un service à un utilisateur
WO2018128362A1 (fr) Appareil électronique et son procédé de fonctionnement
WO2018182153A1 (fr) Dispositif et procédé de reconnaissance d'objet figurant sur une image d'entrée
EP3539056A1 (fr) Appareil électronique et son procédé de fonctionnement
WO2019031825A1 (fr) Dispositif électronique et procédé de fonctionnement associé
WO2018143630A1 (fr) Dispositif et procédé de recommandation de produits
WO2019093819A1 (fr) Dispositif électronique et procédé de fonctionnement associé
WO2021206221A1 (fr) Appareil à intelligence artificielle utilisant une pluralité de couches de sortie et procédé pour celui-ci
WO2019124963A1 (fr) Dispositif et procédé de reconnaissance vocale
WO2019132410A1 (fr) Dispositif électronique et son procédé de commande

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17884410

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017884410

Country of ref document: EP

Effective date: 20190621