
WO2024111682A1 - Artificial intelligence device and control method therefor - Google Patents

Artificial intelligence device and control method therefor

Info

Publication number
WO2024111682A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
state transition
artificial intelligence
backup
controlling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2022/018418
Other languages
English (en)
Korean (ko)
Inventor
임진석
임정은
김대인
이정우
김종태
권강덕
박준석
이기열
홍창기
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Priority to PCT/KR2022/018418
Publication of WO2024111682A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/023 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time

Definitions

  • This disclosure relates to an artificial intelligence device for an automotive electronic system and a control method thereof.
  • In general, automotive electrical components use an MCU (microcontroller unit) to monitor power, temperature, and the like in order to ensure a stable operating environment for the automotive electronic system; when a problem occurs, a system reset is performed to prevent system errors and to transition the system to a safe state.
  • One object of this disclosure is to provide an artificial intelligence device for an automotive electronic system and a control method thereof.
  • Another object of the present disclosure is to monitor the operation of the automotive electronic system based on artificial intelligence so as to diagnose and predict its state, thereby preventing both data damage or loss caused by a state transition and the failure to change the system state at the appropriate time. This protects not only data important to system operation but also the system itself, ensuring stable operation.
  • According to an embodiment, a method for controlling the operation of an artificial intelligence device may include: monitoring first data regarding the operation of an electronic system; generating, based on artificial-intelligence pre-learned data, prediction data regarding a transition of the operating state of the electronic system according to the monitored first data; backing up preset second data based on the generated operating-state-transition prediction data of the electronic system; and controlling the transition of the operating state of the electronic system based on the prediction data.
  • According to an embodiment, an artificial intelligence operation control system includes a vehicle and an electronic system that controls the vehicle, wherein the electronic system may include an MCU that monitors first data related to system operation, generates, based on artificial-intelligence pre-learned data, prediction data regarding a transition of the operating state of the system according to the monitored first data, backs up preset second data based on the generated operating-state-transition prediction data of the system, and controls the operating-state transition of the system based on the prediction data.
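  • To make this flow concrete, the following minimal C sketch walks the claimed steps S101 to S107 in order. All names (monitor_first_data, predict_state_transition, backup_second_data, transition_operating_state) and the threshold logic are invented for illustration; the disclosure does not define this API.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical types sketching the claimed flow; not an API from the disclosure. */
typedef struct { float voltage_v, temp_c, clock_mhz; } FirstData;
typedef struct { bool transition_expected; uint32_t ms_until_transition; } Prediction;

/* S101: monitor first data (stubbed with fixed readings here). */
static FirstData monitor_first_data(void) {
    return (FirstData){ .voltage_v = 1.1f, .temp_c = 112.0f, .clock_mhz = 400.0f };
}

/* S103: generate prediction data from the pre-learned model (stubbed threshold). */
static Prediction predict_state_transition(FirstData d) {
    bool hot = d.temp_c > 110.0f;               /* placeholder for LSTM inference */
    return (Prediction){ .transition_expected = hot, .ms_until_transition = 100 };
}

/* S105: back up preset second data to non-volatile memory (stub). */
static void backup_second_data(void) { puts("backing up key data"); }

/* S107: control the operating-state transition, e.g. a system reset (stub). */
static void transition_operating_state(void) { puts("system reset"); }

int main(void) {
    FirstData d = monitor_first_data();
    Prediction p = predict_state_transition(d);
    if (p.transition_expected) {
        backup_second_data();              /* save data before the state changes */
        transition_operating_state();
    }
    return 0;
}
```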
  • FIG. 1 shows an AI device according to an embodiment of the present disclosure.
  • FIG. 2 shows an AI server according to an embodiment of the present disclosure.
  • FIG. 3 shows an AI system according to an embodiment of the present disclosure.
  • FIG. 4 shows an AI device according to another embodiment of the present disclosure.
  • FIG. 5 is a diagram illustrating an artificial intelligence-based automotive electronic system according to an embodiment of the present disclosure.
  • FIG. 6 is a block diagram illustrating an artificial intelligence-based operation control method between the MCU and AP of FIG. 5.
  • FIG. 7 is a block diagram illustrating an artificial intelligence-based operation control method within the MCU of FIG. 6.
  • FIG. 8 is a diagram illustrating operation monitoring according to an embodiment of the present disclosure.
  • FIGS. 9 and 10 are flowcharts illustrating an artificial intelligence-based operation control method according to an embodiment of the present disclosure.
  • Machine learning refers to the field of research into methodologies that define and solve various problems dealt with in the field of artificial intelligence.
  • Machine learning is also defined as an algorithm that improves the performance of a task through consistent experience.
  • An artificial neural network is a model used in machine learning, and may refer to a model with overall problem-solving ability that is composed of artificial neurons (nodes) forming a network through synaptic connections. An artificial neural network can be defined by the connection pattern between neurons in different layers, a learning process that updates model parameters, and an activation function that generates output values.
  • An artificial neural network may include an input layer, an output layer, and optionally one or more hidden layers. Each layer includes one or more neurons, and the artificial neural network may include synapses connecting neurons. In an artificial neural network, each neuron can output the function value of the activation function for the input signals, weight, and bias input through the synapse.
  • Model parameters refer to parameters determined through learning and include the weight of synaptic connections and the bias of neurons.
  • Hyperparameters refer to parameters that must be set before learning in a machine learning algorithm, and include learning rate, number of repetitions, mini-batch size, initialization function, etc.
  • the purpose of artificial neural network learning can be seen as determining model parameters that minimize the loss function.
  • the loss function can be used as an indicator to determine optimal model parameters in the learning process of an artificial neural network.
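  • As a toy illustration of determining model parameters by minimizing a loss function, the following C sketch fits a single weight to invented data by gradient descent on a mean-squared-error loss. The data, learning rate, and iteration count are arbitrary example choices, not anything from the disclosure.

```c
#include <stdio.h>

/* Fit y = w * x to toy data by gradient descent on the MSE loss. */
int main(void) {
    const double x[4] = {1, 2, 3, 4};
    const double y[4] = {2, 4, 6, 8};           /* generated with true weight 2  */
    double w = 0.0;                             /* model parameter to learn      */
    const double lr = 0.01;                     /* hyperparameter: learning rate */

    for (int epoch = 0; epoch < 200; ++epoch) { /* hyperparameter: repetitions   */
        double grad = 0.0;
        for (int i = 0; i < 4; ++i)
            grad += 2.0 * (w * x[i] - y[i]) * x[i] / 4.0;  /* d(MSE)/dw */
        w -= lr * grad;                         /* step toward lower loss        */
    }
    printf("learned w = %f\n", w);              /* converges near 2.0            */
    return 0;
}
```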
  • Machine learning can be classified into supervised learning, unsupervised learning, and reinforcement learning depending on the learning method.
  • Supervised learning refers to a method of training an artificial neural network with a label for the learning data given.
  • Here, a label may mean the correct answer (or result value) that the artificial neural network should infer when the training data is input to it.
  • Unsupervised learning can refer to a method of training an artificial neural network in a state where no labels for training data are given.
  • Reinforcement learning can refer to a learning method in which an agent defined within an environment learns to select an action or action sequence that maximizes the cumulative reward in each state.
  • machine learning implemented with a deep neural network that includes multiple hidden layers is also called deep learning, and deep learning is a part of machine learning.
  • machine learning is used to include deep learning.
  • Object detection models using machine learning include the one-stage YOLO (You Only Look Once) model and the two-stage Faster R-CNN (Regions with Convolutional Neural Networks) model.
  • the YOLO model is a model in which objects that exist in an image and their locations can be predicted by looking at the image only once.
  • The YOLO model divides the original image into grid cells of equal size. Then, for each cell, it predicts a designated number of bounding boxes of a predefined form centered on the cell, and calculates a confidence score based on them.
  • The Faster R-CNN model can detect objects faster than the R-CNN and Fast R-CNN models.
  • a feature map is extracted from the image through a CNN model. Based on the extracted feature map, a plurality of regions of interest (RoI) are extracted. RoI pooling is performed for each region of interest.
  • RoI pooling is the process of setting a grid over each extracted region of interest so as to fit a predetermined H x W size, and extracting a feature map from it.
  • a feature vector is extracted from a feature map having a size of H x W, and identification information of the object can be obtained from the feature vector.
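  • The following toy C sketch shows the idea of RoI max pooling: a region of the feature map is partitioned into an H x W grid and the maximum of each cell is kept, so the output has a fixed size regardless of the RoI size. The map size, RoI coordinates, and values are invented for illustration.

```c
#include <stdio.h>

#define FM 8          /* feature map is FM x FM */
#define H  2          /* output grid height     */
#define W  2          /* output grid width      */

/* Max-pool the RoI starting at (r0, c0) with the given size into an H x W grid. */
static void roi_pool(float fm[FM][FM], int r0, int c0, int rows, int cols,
                     float out[H][W]) {
    for (int i = 0; i < H; ++i) {
        for (int j = 0; j < W; ++j) {
            /* bounds of this grid cell inside the RoI (integer partition) */
            int rs = r0 + i * rows / H, re = r0 + (i + 1) * rows / H;
            int cs = c0 + j * cols / W, ce = c0 + (j + 1) * cols / W;
            float m = fm[rs][cs];
            for (int r = rs; r < re; ++r)
                for (int c = cs; c < ce; ++c)
                    if (fm[r][c] > m) m = fm[r][c];
            out[i][j] = m;
        }
    }
}

int main(void) {
    float fm[FM][FM] = {{0}};
    fm[2][3] = 5.0f;
    fm[5][5] = 7.0f;
    float out[H][W];
    roi_pool(fm, 2, 2, 4, 4, out);   /* RoI covers rows 2..5 and cols 2..5 */
    printf("%.1f %.1f / %.1f %.1f\n", out[0][0], out[0][1], out[1][0], out[1][1]);
    return 0;
}
```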
  • Extended Reality refers collectively to Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR).
  • VR technology provides real-world objects and backgrounds only as CG images, AR technology provides virtual CG images on top of images of real objects, and MR technology is a computer graphics technology that mixes and combines virtual objects into the real world.
  • MR technology is similar to AR technology in that it shows real objects and virtual objects together. However, in AR technology, virtual objects are used to complement real objects, whereas in MR technology, virtual objects and real objects are used equally.
  • XR technology can be applied to HMDs (Head-Mounted Displays), HUDs (Head-Up Displays), mobile phones, tablet PCs, laptops, desktops, TVs, digital signage, and the like, and a device to which XR technology is applied may be referred to as an XR device.
  • Figure 1 shows an AI device 100 according to an embodiment of the present disclosure.
  • The AI device 100 may be implemented as a fixed or mobile device such as a TV, projector, mobile phone, smartphone, desktop computer, laptop, digital broadcasting terminal, PDA (personal digital assistant), PMP (portable multimedia player), navigation device, tablet PC, wearable device, set-top box (STB), DMB receiver, radio, washing machine, refrigerator, digital signage, robot, or vehicle.
  • Referring to FIG. 1, the AI device 100 may include a communication unit 110, an input unit 120, a learning processor 130, a sensing unit 140, an output unit 150, a memory 170, a processor 180, and the like.
  • the communication unit 110 can transmit and receive data with external devices such as other AI devices (100a to 100e in FIG. 3) or the AI server 200 using wired or wireless communication technology.
  • the communication unit 110 may transmit and receive sensor information, user input, learning models, and control signals with external devices.
  • The communication technologies used by the communication unit 110 include GSM (Global System for Mobile communication), CDMA (Code Division Multiple Access), LTE (Long Term Evolution), 5G, 6G, WLAN (Wireless LAN), Wi-Fi (Wireless Fidelity), Bluetooth™, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), ZigBee, and NFC (Near Field Communication).
  • the input unit 120 can acquire various types of data.
  • the input unit 120 may include a camera for inputting video signals, a microphone for receiving audio signals, and a user input unit for receiving information from the user.
  • the camera or microphone may be treated as a sensor, and the signal obtained from the camera or microphone may be referred to as sensing data or sensor information.
  • the input unit 120 may acquire training data for model learning and input data to be used when obtaining an output using the learning model.
  • the input unit 120 may acquire unprocessed input data, and in this case, the processor 180 or the learning processor 130 may extract input features by preprocessing the input data.
  • the learning processor 130 can train a model composed of an artificial neural network using training data.
  • the learned artificial neural network may be referred to as a learning model.
  • a learning model can be used to infer a result value for new input data other than learning data, and the inferred value can be used as the basis for a decision to perform an operation.
  • the learning processor 130 may perform AI processing together with the learning processor 240 of the AI server 200.
  • the learning processor 130 may include memory integrated or implemented in the AI device 100.
  • the learning processor 130 may be implemented using the memory 170, an external memory directly coupled to the AI device 100, or a memory maintained in an external device.
  • the sensing unit 140 may use various sensors to obtain at least one of internal information of the AI device 100, information about the surrounding environment of the AI device 100, and user information.
  • The sensors included in the sensing unit 140 include a proximity sensor, illuminance sensor, acceleration sensor, magnetic sensor, gyro sensor, inertial sensor, RGB sensor, IR sensor, fingerprint recognition sensor, ultrasonic sensor, optical sensor, microphone, lidar, radar, and the like.
  • the output unit 150 may generate output related to vision, hearing, or tactile sensation.
  • the output unit 150 may include a display unit that outputs visual information, a speaker that outputs auditory information, and a haptic module that outputs tactile information.
  • the memory 170 may store data supporting various functions of the AI device 100.
  • the memory 170 may store input data, learning data, learning models, learning history, etc. obtained from the input unit 120.
  • the processor 180 may determine at least one executable operation of the AI device 100 based on information determined or generated using a data analysis algorithm or a machine learning algorithm. Additionally, the processor 180 may control the components of the AI device 100 to perform the determined operation.
  • At this time, the processor 180 may request, retrieve, receive, or utilize data from the learning processor 130 or the memory 170, and may control the components of the AI device 100 to execute a predicted operation or an operation determined to be desirable among the at least one executable operation.
  • the processor 180 may generate a control signal to control the external device and transmit the generated control signal to the external device.
  • the processor 180 may obtain intent information for user input and determine the user's request based on the obtained intent information.
  • At this time, the processor 180 may obtain intent information corresponding to the user input using at least one of an STT (Speech To Text) engine for converting voice input into a character string and an NLP (Natural Language Processing) engine for acquiring intent information from natural language.
  • At this time, at least one of the STT engine and the NLP engine may be composed at least in part of an artificial neural network trained according to a machine learning algorithm, and may be trained by the learning processor 130, by the learning processor 240 of the AI server 200, or by distributed processing between them.
  • The processor 180 collects history information, including the user's feedback on the operation of the AI device 100, and stores it in the memory 170 or the learning processor 130, or transmits it to an external device such as the AI server 200. The collected history information can be used to update the learning model.
  • the processor 180 may control at least some of the components of the AI device 100 to run an application program stored in the memory 170. Furthermore, the processor 180 may operate by combining two or more of the components included in the AI device 100 to run the application program.
  • Figure 2 shows an AI server 200 according to an embodiment of the present disclosure.
  • the AI server 200 may refer to a device that trains an artificial neural network using a machine learning algorithm or uses a learned artificial neural network.
  • the AI server 200 may be composed of a plurality of servers to perform distributed processing, and may be defined as a 5G network.
  • the AI server 200 may be included as a part of the AI device 100 and may perform at least part of the AI processing.
  • the AI server 200 may include a communication unit 210, a memory 230, a learning processor 240, and a processor 260.
  • the communication unit 210 can transmit and receive data with an external device such as the AI device 100.
  • Memory 230 may include a model storage unit 231.
  • the model storage unit 231 may store a model (or artificial neural network, 231a) that is being trained or has been learned through the learning processor 240.
  • the learning processor 240 can train the artificial neural network 231a using training data.
  • At this time, the learning model may be used while mounted on the AI server 200, or may be mounted on and used by an external device such as the AI device 100.
  • Learning models can be implemented in hardware, software, or a combination of hardware and software.
  • When part or all of the learning model is implemented as software, one or more instructions constituting the learning model may be stored in the memory 230.
  • the processor 260 may infer a result value for new input data using a learning model and generate a response or control command based on the inferred result value.
  • Figure 3 shows an AI system 1 according to an embodiment of the present disclosure.
  • Referring to FIG. 3, in the AI system 1, at least one of an AI server 200, a robot 100a, an autonomous vehicle 100b, an XR device 100c, a smartphone 100d, and a home appliance 100e is connected to a cloud network 10.
  • a robot 100a, an autonomous vehicle 100b, an XR device 100c, a smartphone 100d, or a home appliance 100e to which AI technology is applied may be referred to as AI devices 100a to 100e.
  • the cloud network 10 may constitute part of a cloud computing infrastructure or may refer to a network that exists within the cloud computing infrastructure.
  • the cloud network 10 may be configured using a 3G network, 4G or LTE network, or 5G network.
  • each device (100a to 100e, 200) constituting the AI system 1 may be connected to each other through the cloud network 10.
  • the devices 100a to 100e and 200 may communicate with each other through a base station, but may also communicate directly with each other without going through the base station.
  • the AI server 200 may include a server that performs AI processing and a server that performs calculations on big data.
  • The AI server 200 is connected through the cloud network 10 to at least one of the AI devices constituting the AI system 1, namely the robot 100a, the autonomous vehicle 100b, the XR device 100c, the smartphone 100d, and the home appliance 100e, and can assist at least part of the AI processing of the connected AI devices 100a to 100e.
  • the AI server 200 can train an artificial neural network according to a machine learning algorithm on behalf of the AI devices 100a to 100e, and directly store or transmit the learning model to the AI devices 100a to 100e.
  • At this time, the AI server 200 may receive input data from the AI devices 100a to 100e, infer a result value for the received input data using a learning model, and generate a response or control command based on the inferred result value and transmit it to the AI devices 100a to 100e.
  • Alternatively, the AI devices 100a to 100e may directly infer a result value for the input data using a learning model and generate a response or control command based on the inferred result value.
  • Hereinafter, various embodiments of the AI devices 100a to 100e to which the above-described technology is applied will be described.
  • the AI devices 100a to 100e shown in FIG. 3 can be viewed as specific examples of the AI device 100 shown in FIG. 1.
  • The XR device 100c applies AI technology and may be implemented as an HMD, a HUD provided in a vehicle, a television, a mobile phone, a smartphone, a computer, a wearable device, a home appliance, digital signage, a vehicle, a fixed robot, or a mobile robot.
  • The XR device 100c analyzes 3D point cloud data or image data acquired through various sensors or from external devices to generate position data and attribute data for 3D points, thereby obtaining information about the surrounding space or real objects, and can render and output the XR object to be displayed.
  • the XR device 100c may output an XR object containing additional information about the recognized object in correspondence to the recognized object.
  • the XR device 100c may perform the above operations using a learning model composed of at least one artificial neural network.
  • the XR device 100c can recognize a real-world object from 3D point cloud data or image data using a learning model, and provide information corresponding to the recognized real-world object.
  • At this time, the learning model may be trained directly on the XR device 100c or trained on an external device such as the AI server 200.
  • At this time, the XR device 100c may perform the operation by generating a result directly using the learning model, or may perform the operation by transmitting sensor information to an external device such as the AI server 200 and receiving the result generated accordingly.
  • FIG. 4 shows an AI device 100 according to another embodiment of the present disclosure.
  • Referring to FIG. 4, the input unit 120 may include a camera 121 for inputting video signals, a microphone 122 for receiving audio signals, and a user input unit 123 for receiving information from the user.
  • Voice data or image data collected by the input unit 120 may be analyzed and processed as a user's control command.
  • The input unit 120 is for inputting image information (or signals), audio information (or signals), data, or information input from the user; for the input of image information, the AI device 100 may be provided with one or more cameras 121.
  • the camera 121 processes image frames such as still images or moving images obtained by an image sensor in video call mode or shooting mode.
  • the processed image frame may be displayed on the display unit (151) or stored in the memory (170).
  • the microphone 122 processes external acoustic signals into electrical voice data.
  • Processed voice data can be utilized in various ways depending on the function (or application being executed) being performed by the AI device 100. Meanwhile, various noise removal algorithms may be applied to the microphone 122 to remove noise generated in the process of receiving an external acoustic signal.
  • The user input unit 123 is for receiving information from the user; when information is input through the user input unit 123, the processor 180 can control the operation of the AI device 100 to correspond to the input information.
  • The user input unit 123 may include a mechanical input means (or mechanical keys, for example, buttons, dome switches, a jog wheel, or a jog switch located on the front/rear or side of the terminal 100) and a touch input means. As an example, the touch input means may consist of a virtual key, soft key, or visual key displayed on a touch screen through software processing, or a touch key placed on a part other than the touch screen.
  • The output unit 150 may include at least one of a display unit 151, a sound output unit 152, a haptic module 153, and an optical output unit 154.
  • the display unit 151 displays (outputs) information processed by the AI device 100.
  • the display unit 151 may display execution screen information of an application running on the AI device 100, or UI (User Interface) and GUI (Graphic User Interface) information according to this execution screen information.
  • the display unit 151 can implement a touch screen by forming a layered structure or being integrated with the touch sensor.
  • This touch screen functions as a user input unit 123 that provides an input interface between the AI device 100 and the user, and can simultaneously provide an output interface between the terminal 100 and the user.
  • the audio output unit 152 may output audio data received from the communication unit 110 or stored in the memory 170 in call signal reception, call mode or recording mode, voice recognition mode, broadcast reception mode, etc.
  • the sound output unit 152 may include at least one of a receiver, a speaker, and a buzzer.
  • the haptic module 153 generates various tactile effects that the user can feel.
  • a representative example of a tactile effect generated by the haptic module 153 may be vibration.
  • the optical output unit 154 uses light from the light source of the AI device 100 to output a signal to notify that an event has occurred. Examples of events that occur in the AI device 100 may include receiving a message, receiving a call signal, missed call, alarm, schedule notification, receiving email, receiving information through an application, etc.
  • FIG. 5 is a diagram illustrating an artificial intelligence-based automotive electronic system according to an embodiment of the present disclosure.
  • FIG. 6 is a block diagram illustrating an artificial intelligence-based operation control method between the MCU 510 and AP 550 of FIG. 5.
  • FIG. 7 is a block diagram illustrating an artificial intelligence-based operation control method within the MCU 510 of FIG. 6.
  • FIG. 8 is a diagram illustrating operation monitoring according to an embodiment of the present disclosure.
  • Referring to FIG. 5, the artificial intelligence-based automotive electronic system may include a communication module, an MCU 510, an AP 550, a display 150, a power module, a memory, and the like.
  • Although each component is expressed as independent in FIG. 5, it may be combined with other components and implemented in module form, or vice versa.
  • The radio tuner can tune and receive radio signals, and a sound DSP (Digital Signal Processor) can process the sound signals.
  • the communication module may include at least one of an Ethernet-based A-ETH PHY and a CAN communication-based CAN transceiver, and may be responsible for external and/or internal communication.
  • the external input receiver can receive external inputs such as touch input, button input, and key input through various electronic devices as well as the in-vehicle display.
  • The external input received in this way can be processed through an ADC.
  • the external output unit may output signals (e.g., LED, Telltale, etc.) according to events that occur within the vehicle.
  • the power module (PMU: Power Monitoring Unit) can monitor and manage the power supplied to the automotive electronic system, MCU (510), etc.
  • the display 150 can output signals processed by the MCU 510, AP 550, etc. under the control of the display controller.
  • Components such as a memory (EEPROM: electrically erasable programmable read-only memory) and a watchdog may also be included.
  • The MCU 510 is the main processing unit of the vehicle and may be involved in all operations of the electronic system, such as collecting, processing, and managing electronic data.
  • the AP 550 is a component that actually processes various application data within the vehicle, and can perform data communication with the MCU 510, the main processing unit of the vehicle. AP 550 can monitor the operation of MCU 510.
  • Referring to FIG. 6, the MCU 510 may be configured to include a first processing unit 511, a Peri 512, ADCs 513 and 514, a first memory 515, a second memory 516, a second processing unit 517, and the like.
  • Figure 6 is only an example related to the present disclosure and the configuration of the MCU 510 according to the present disclosure is not limited thereto.
  • the MCU 510 may be implemented by further including at least one component not shown in FIG. 6, or vice versa.
  • Each component shown in FIG. 6 does not necessarily need to be implemented as an independent entity, and a plurality of components may be modularized depending on system design, etc.
  • each component does not necessarily include only one, but may include multiple elements.
  • each component of the MCU 510 is described as follows.
  • the first processing unit 511 is the main processing unit of the MCU 510 and can substantially process or control the operation of the MCU 510.
  • the first processing unit 511 is an example of a CPU (Central Processing Unit), but is not limited thereto.
  • The peripheral (Peri) 512 may process or control a reset operation for all or part of the system under the control of the first processing unit 511.
  • The Peri 512 provides various external interfaces for the first processing unit 511; for an external reset, a GPIO can be used to drive the reset IC of all or part of the system (board).
  • the ADCs 513 and 514 can acquire analog signals such as voltage and temperature, which are external operating environment factors, and can convert the analog signals obtained in this way into digital signals.
  • the ADC 513 can convert an analog signal related to the operating voltage of the system (or the voltage supplied from the power module) into a digital signal.
  • the ADC 514 can convert an analog signal regarding the operating temperature of the system into a digital signal.
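  • As a minimal sketch of this kind of conversion, the following C code turns raw ADC counts into a voltage and a temperature. The 12-bit resolution, 3.3 V reference, and linear sensor characteristic are invented example figures; the disclosure does not specify them.

```c
#include <stdint.h>
#include <stdio.h>

#define ADC_BITS 12
#define ADC_MAX  ((1u << ADC_BITS) - 1u)     /* 4095 for a 12-bit converter */
#define VREF     3.3f                        /* assumed reference voltage   */

/* Scale a raw sample linearly into volts. */
static float adc_to_volts(uint16_t raw) {
    return (float)raw * VREF / (float)ADC_MAX;
}

/* Hypothetical temperature sensor: 10 mV per degree with a 500 mV offset. */
static float adc_to_celsius(uint16_t raw) {
    float v = adc_to_volts(raw);
    return (v - 0.5f) / 0.010f;
}

int main(void) {
    uint16_t raw_v = 1365, raw_t = 2000;     /* example samples */
    printf("voltage: %.2f V\n", adc_to_volts(raw_v));    /* about 1.10 V */
    printf("temp:    %.1f C\n", adc_to_celsius(raw_t));  /* about 111 C  */
    return 0;
}
```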
  • The first memory 515 may store the environmental factor data (i.e., operating voltage, operating temperature, operating clock, etc.) obtained through the ADCs 513 and 514, or may be used as a working memory for the first processing unit 511.
  • The first memory 515 may be a volatile memory, for example an SRAM (Static Random Access Memory), but is not limited thereto.
  • the second memory 516 is a non-volatile memory that can be used to store program code, store important data, etc., and can be used to store important system variables in unstable operations.
  • the second memory 516 may store data transmitted from the first processing unit 511.
  • This second memory 516 is a non-volatile memory, for example, eFlash, but is not limited thereto.
  • the second memory 516 may store key data preset according to the present disclosure for backup, as will be described later.
  • The second processing unit 517 includes a core 518 and a Pred Control 519, and can process data related to the artificial intelligence-based operation of the automotive electronic system.
  • The second processing unit 517 may be an example of an NPU (Neural Processing Unit) or may be named an NPU, but is not limited thereto.
  • Hereinafter, for convenience of explanation, the operating temperature data of the system is used as an example, but the disclosure is not necessarily limited thereto. The operating temperature, operating voltage, operating clock, and the like of the system may be used individually or in combination in the present disclosure.
  • The first processing unit 511 may transmit setting signals, control signals, and the like related to the operation of the second processing unit 517; the second processing unit 517 may return a corresponding signal, and the returned content can be written to a register.
  • the first memory 515 can receive the system operating temperature and temporarily store it. Meanwhile, in relation to the present disclosure, the first memory 515 may transmit raw data of the system operating temperature to the second processing unit 517.
  • the first memory 515 can store data necessary when the second processing unit 517 uses an artificial intelligence-based learning model, and can transmit this to the second processing unit 517.
  • the first memory 515 may store operation activation/deactivation data of the artificial intelligence-based learning model of the second processing unit 517, and may transmit this to the second processing unit 517.
  • the second processing unit 517 may receive environmental element data from the first memory 515 as input and predict a value after a certain time based on data learned in advance.
  • The second processing unit 517 may include a core (NPU core) 518 and a Pred Control 519.
  • The core 518 operates using a Long Short-Term Memory (LSTM) learning model and may be used to determine the prediction value.
  • The Pred Control 519 sets the conditions for generating an event interrupt for each environmental element and generates the actual interrupt signal so that the first processing unit 511 can respond to an unstable state.
  • The conditions for the event interrupt, for example the time to be predicted and the reference value of an environmental element at which to generate an interrupt, can be set in the register of the first processing unit 511. For example, the prediction time may be set to 100 ms and the reference temperature to '110 degrees', so that an interrupt is generated when the value predicted by the core 518 meets this condition.
  • the deep learning prediction model may use an LSTM model suitable for predicting time series data of continuous attributes, but is not limited thereto.
  • the second processing unit 517 may include a deep learning prediction model, that is, an LSTM model.
  • For the LSTM model, how many sequences of raw input data it considers and how many output data it produces simultaneously may vary depending on the algorithm design.
  • Figure 7 shows an example of an LSTM model that uses four input data (x) and eight learned weight parameters (w) to generate three output data (y).
  • The continuity of the input and output data determines the latency from the initial input to the output, so it is desirable to select a different model depending on the application field.
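  • For orientation, the following C sketch implements one step of a scalar LSTM cell: input, forget, and output gates plus a candidate value update the cell state c and the hidden state h. The weights, input sequence, and normalization are toy values; a real model such as the one in Figure 7 would use learned, vector-valued parameters.

```c
#include <math.h>
#include <stdio.h>

typedef struct {
    float wi, wf, wo, wc;   /* input weights     */
    float ui, uf, uo, uc;   /* recurrent weights */
} LstmW;

static float sigmoidf(float x) { return 1.0f / (1.0f + expf(-x)); }

/* One LSTM step: gates decide what to add, what to forget, what to expose. */
static void lstm_step(const LstmW *w, float x, float *h, float *c) {
    float i = sigmoidf(w->wi * x + w->ui * *h);   /* input gate   */
    float f = sigmoidf(w->wf * x + w->uf * *h);   /* forget gate  */
    float o = sigmoidf(w->wo * x + w->uo * *h);   /* output gate  */
    float g = tanhf(w->wc * x + w->uc * *h);      /* candidate    */
    *c = f * *c + i * g;                          /* cell state   */
    *h = o * tanhf(*c);                           /* hidden state */
}

int main(void) {
    LstmW w = {0.5f, 0.5f, 0.5f, 0.5f, 0.1f, 0.1f, 0.1f, 0.1f};  /* toy weights */
    float temps[4] = {105.0f, 106.0f, 108.0f, 109.0f};           /* raw inputs  */
    float h = 0.0f, c = 0.0f;
    for (int t = 0; t < 4; ++t)
        lstm_step(&w, temps[t] / 120.0f, &h, &c);   /* crude normalization */
    printf("hidden state after sequence: %f\n", h); /* would feed the predictor */
    return 0;
}
```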
  • Setting and control of the second processing unit 517 can be done by the first processing unit 511 through a register interface; a specific condition for generating an interrupt is set, and when the condition is met an interrupt is generated and processed by the first processing unit 511.
  • The conditions for generating an interrupt can be adjusted by setting the detection time, detection reference value, and the like for the operating environment elements of the MCU 510. For example, if the prediction time for generating an interrupt is set to 100 ms and the predicted temperature value to 110 degrees, the operation can be controlled so that an interrupt is generated when the temperature value reaches 110 degrees in the y-n section corresponding to 100 ms.
  • When an interrupt occurs, the first processing unit 511 can be controlled to store important data of the software currently being processed, such as applications and programs, in the second memory 516, and actions corresponding to the severity of the event can be controlled. In relation to this operation control, the first processing unit 511 may partially stabilize the system through an internal reset of the MCU 510 and resets of specific peripheral components, or may send a full system-reset signal to the outside to reset the entire system into a safe state. However, the operation control is not necessarily limited to the above.
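  • A compact C sketch of this mechanism follows: hypothetical registers hold the prediction horizon and the threshold, a check raises an interrupt flag when the predicted value crosses the threshold, and the handler performs the backup-then-reset response. All register and function names are invented.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical condition registers written by the first processing unit. */
typedef struct {
    uint32_t horizon_ms;   /* prediction time, e.g. 100 ms  */
    float    threshold;    /* reference value, e.g. 110 deg */
} PredRegs;

static volatile bool irq_pending = false;

/* Raise the event interrupt when the model's prediction crosses the threshold. */
static void pred_check(const PredRegs *r, float predicted_value) {
    if (predicted_value >= r->threshold)
        irq_pending = true;
}

/* Response of the first processing unit: back up, then reset part or all. */
static void irq_handler(void) {
    puts("back up important data to non-volatile memory");
    puts("partial reset (peripherals) or full system reset");
}

int main(void) {
    PredRegs regs = { .horizon_ms = 100, .threshold = 110.0f };
    float predicted = 111.5f;        /* model output for t + horizon */
    pred_check(&regs, predicted);
    if (irq_pending)
        irq_handler();
    return 0;
}
```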
  • In FIG. 8(a), a graph of the operating temperature of the above-described system is shown.
  • The set temperature (Tj) for the system operating-state transition is assumed to be 120 degrees. As a result of monitoring, an immediate operating-state transition (i.e., a system reset) may occur at time t2; according to the present disclosure, by predicting the time point t2, the preset data can be backed up (or the backup started) to the second memory 516 at time t1, an arbitrary time before t2.
  • In FIG. 8(b), a graph of the system operating voltage is shown.
  • Here, 1.1 V is the reference voltage, and 0.95 V can be set as a voltage that can cause a change in the operating state of the system. Therefore, as a result of monitoring, an immediate operating-state transition (e.g., a system reset) may occur at time t2, so the time t2 is predicted and the preset data is backed up to the second memory 516 at time t1, an arbitrary time before t2.
  • In FIG. 8(c), a graph of the system operating clock is shown.
  • Here, clk1 is the bandwidth of the reference clock; if a lower clock bandwidth persists, it may cause a change in the operating state of the system. Accordingly, when monitoring shows that clocks with a bandwidth below clk1 repeat up to time t2, the start of an operating-state transition is predicted, and the preset data is backed up to the second memory 516 at time t1, an arbitrary time before t2.
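  • The following C sketch condenses the FIG. 8 examples: given a predicted transition time t2, the backup is started at t1, which is t2 minus a margin. The 120-degree and 0.95 V thresholds come from the description above; the clock threshold, the margin, and the function names are invented stand-ins.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { float temp_c, volt_v, clk_mhz; } Sample;

/* Would an operating-state transition be triggered by this sample? */
static bool transition_predicted(Sample s) {
    return s.temp_c >= 120.0f      /* Tj from FIG. 8(a)          */
        || s.volt_v <= 0.95f       /* droop limit from FIG. 8(b) */
        || s.clk_mhz < 100.0f;     /* stand-in for "below clk1"  */
}

int main(void) {
    const int t2_ms = 500;         /* predicted transition time t2 */
    const int margin_ms = 150;     /* time reserved for the backup */
    const int t1_ms = t2_ms - margin_ms;

    Sample s = { .temp_c = 121.0f, .volt_v = 1.1f, .clk_mhz = 400.0f };
    if (transition_predicted(s))
        printf("start backup at t1 = %d ms, before t2 = %d ms\n", t1_ms, t2_ms);
    return 0;
}
```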
  • FIGS. 9 and 10 are flowcharts illustrating an artificial intelligence-based operation control method according to an embodiment of the present disclosure.
  • the MCU 510 may monitor first data regarding system operation (S101).
  • the first data may represent data that can cause a state transition of the system, such as the above-described system operating voltage, operating temperature, and operating clock.
  • Next, the MCU 510 may generate prediction data regarding the transition of the operating state of the system according to the monitored first data, based on artificial-intelligence pre-learned data (S103).
  • the MCU 510 may back up preset second data based on the generated operation state transition prediction data of the system (S105).
  • the second data may represent key data that is preset as data to be stored in the second memory 516 according to the system operation state transition.
  • And, the MCU 510 may control the operating-state transition of the system based on the prediction data (S107).
  • FIG. 10 may explain, for example, the operation between steps S103 and S105 of FIG. 9 described above.
  • the MCU 510 can calculate the time remaining until the system operation state is changed (S201).
  • the MCU 510 can calculate the amount of data to be backed up and the expected time required for backup (S203).
  • In this case, since the backup target data can be set in advance, if its amount and the expected time required for backup have already been calculated and stored, this step can be replaced with an operation that retrieves them.
  • the MCU 510 may determine whether the time remaining until the system operation state transition calculated in step S201 is greater than the expected backup time in step S203 (S205).
  • As a result of the determination in step S205, if the time remaining until the system operating-state transition calculated in step S201 is greater than the expected backup time calculated in step S203, the MCU 510 proceeds with the backup as planned and may then return to step S105 of FIG. 9 described above to control the switching of the system operating state. In this case, an arbitrary time point for the backup may be determined and provided.
  • On the other hand, if the time remaining until the system operating-state transition calculated in step S201 is less than the expected backup time calculated in step S203, the MCU 510 determines that it is not sufficient to back up all of the predefined important data and must respond accordingly. In this case, an arbitrary point in time related to the backup progress may be determined and provided differently from the above.
  • According to one embodiment, the MCU 510 reclassifies the backup target data according to priority or weight, and can control so that only some of the data classified in this way is backed up (S205). For example, the MCU 510 may back up only the backup target data that has the highest priority or whose weight is equal to or greater than a threshold. According to another embodiment, the MCU 510 compares the classified data with the time remaining until the operating-state transition of the system calculated in step S201, and can control the backup so that the data that can be backed up within that time is backed up sequentially according to priority or weight.
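  • Here is a minimal C sketch of this decision, covering steps S201 to S205 plus the priority fallback: if everything fits into the remaining time, all data is backed up; otherwise items are backed up in priority order while the time budget lasts. The item table and all timing figures are invented.

```c
#include <stdio.h>

typedef struct { const char *name; int priority; int est_ms; } Item;

static void do_backup(const Item *it) { printf("backup: %s\n", it->name); }

int main(void) {
    Item items[3] = {                    /* pre-sorted by priority (0 = highest) */
        { "system variables", 0, 20 },
        { "app state",        1, 60 },
        { "diagnostic logs",  2, 80 },
    };
    int remaining_ms = 70;               /* S201: time left until transition */

    int total_ms = 0;                    /* S203: expected total backup time */
    for (int i = 0; i < 3; ++i)
        total_ms += items[i].est_ms;

    if (remaining_ms >= total_ms) {      /* S205: everything fits, back up all */
        for (int i = 0; i < 3; ++i)
            do_backup(&items[i]);
    } else {                             /* back up what fits, by priority */
        int budget_ms = remaining_ms;
        for (int i = 0; i < 3 && items[i].est_ms <= budget_ms; ++i) {
            do_backup(&items[i]);
            budget_ms -= items[i].est_ms;
        }
    }
    return 0;
}
```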
  • According to another embodiment, the MCU 510 may improve the data transfer speed for the backup by using other components, for example by using or switching to a data bus or a network that ensures a faster transfer speed, and may thereby attempt to back up all of the backup target data (or as much of it as possible).
  • the MCU 510 may control the event interrupt to occur earlier than a preset time according to the time remaining until the system operation state transition. Later, the MCU 510 may update the event interrupt setting time through feedback.
  • According to another embodiment, the MCU 510 may distribute the backup target data across a plurality of second memories 516 and attempt the backup in parallel, thereby reducing the backup time and ensuring that all target data is backed up.
  • the MCU 510 may manage system operation state transitions in more detail.
  • That is, rather than two stages such as stable and unstable, the MCU 510 may manage the system operating state in more detail, for example as a first unstable state, a second unstable state, and so on.
  • For example, when the MCU 510 manages the system operating state in four stages, such as stable, first unstable, second unstable, and third unstable, the MCU 510 can obtain predicted time data for each state, and in particular can control so that an event interrupt occurs or a data backup starts at the predicted time of the first and/or second unstable state. In this case as well, the amount of backup data and the backup time described above can be taken into account.
  • the MCU 510 generates a first event interrupt at the first instability state prediction time, and when the first event interrupt occurs, only data with the highest priority or weight greater than the threshold among backup target data can be backed up first.
  • the MCU 510 generates a second event interrupt at the second instability state prediction time, and when the second event interrupt occurs, the remaining backup target data can be sequentially backed up.
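  • A short C sketch of this staged handling follows: a four-stage state enumeration drives per-state actions, with the first event interrupt backing up only top-priority data and the second backing up the rest. The state names and handlers are illustrative only.

```c
#include <stdio.h>

typedef enum { STABLE, UNSTABLE_1, UNSTABLE_2, UNSTABLE_3 } SysState;

/* Per-state response: staged backups, then the transition itself. */
static void on_predicted_state(SysState s) {
    switch (s) {
    case STABLE:     break;
    case UNSTABLE_1: puts("IRQ1: back up highest-priority data"); break;
    case UNSTABLE_2: puts("IRQ2: back up remaining target data"); break;
    case UNSTABLE_3: puts("transition: reset to a safe state");   break;
    }
}

int main(void) {
    SysState predicted[4] = { STABLE, UNSTABLE_1, UNSTABLE_2, UNSTABLE_3 };
    for (int i = 0; i < 4; ++i)
        on_predicted_state(predicted[i]);
    return 0;
}
```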
  • This method can be used more efficiently when the amount of data to be backed up is large.
  • Alternatively, the backup target data may be stored at a predetermined period regardless of the predicted operating-state-transition time, and the data backup performed at the predicted transition time may then center on updates, which can also speed up the data backup.
  • Since the operating-state transition may occur before the predicted time point, it is desirable to determine the timing so that the data backup can be completed by setting a margin ahead of the predicted operating-state-transition time where possible.
  • The second data may include at least one of: data set as fundamentally most important for system operation, arbitrarily designated data, data essential for system stability or upgrades, and data to be reported to system users.
  • Alternatively, the second data may include at least one of: data predefined for backup in relation to the operation of the automotive electronic system, data currently being processed, data that is being processed but cannot be completed before the state transition occurs, and data that would be deleted when the state transition occurs.
  • At least one or all of the operations of the MCU 510 described in FIGS. 5 to 10 may be performed by the NPU or the AP 550.
  • According to at least one of the various embodiments of the present disclosure, the operation of the automotive electronic system can be monitored based on artificial intelligence to diagnose and predict its state, thereby preventing data loss due to state transitions. This not only protects data important to system operation as well as the system itself, ensuring stable operation, but also enables accurate responses to state transitions, minimizing system instability due to over- or under-response.
  • the above-described method can be implemented as processor-readable code on a program-recorded medium.
  • media that the processor can read include ROM, RAM, CD-ROM, magnetic tape, floppy disk, and optical data storage devices.
  • The display device described above is not limited to the configurations and methods of the above-described embodiments; the embodiments may be configured by selectively combining all or some of them so that various modifications can be made.
  • The artificial intelligence device and its control method predict unexpected changes in the operating state of the automotive electronic system in advance and determine and perform response actions, thereby preventing the loss of data important to system operation due to such changes. This can significantly increase the efficiency and stability of the system, and therefore has industrial applicability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Testing And Monitoring For Control Systems (AREA)

Abstract

Disclosed are an artificial intelligence device and a control method therefor. The method for controlling an operation of the artificial intelligence device, according to at least one of the various embodiments of the present disclosure, may comprise the steps of: monitoring first data regarding an operation of an electrical system; on the basis of artificial-intelligence pre-learned data, generating prediction data regarding an operating-state transition of the electrical system according to the monitored first data; backing up predefined second data on the basis of the generated prediction data regarding the operating-state transition of the electrical system; and controlling the operating-state transition of the electrical system on the basis of the prediction data.
PCT/KR2022/018418 2022-11-21 2022-11-21 Artificial intelligence device and control method therefor Ceased WO2024111682A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2022/018418 WO2024111682A1 (fr) 2022-11-21 2022-11-21 Artificial intelligence device and control method therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2022/018418 WO2024111682A1 (fr) 2022-11-21 2022-11-21 Artificial intelligence device and control method therefor

Publications (1)

Publication Number Publication Date
WO2024111682A1 (fr)

Family

ID=91196243

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/018418 Ceased WO2024111682A1 (fr) 2022-11-21 2022-11-21 Appareil d'intelligence artificielle et son procédé de commande

Country Status (1)

Country Link
WO (1) WO2024111682A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006243898A * 2005-03-01 2006-09-14 Mitsubishi Electric Corp Vehicle-mounted electronic control device
KR100697449B1 * 1998-08-10 2007-03-20 Siemens Aktiengesellschaft Control device
CN110494868A * 2017-04-28 2019-11-22 Hitachi Automotive Systems, Ltd. Vehicle electronic control device
JP2020013181A * 2018-07-13 2020-01-23 Mitsubishi Electric Corp Backup control device, in-vehicle device, and backup control program
KR20210023702A * 2019-08-22 2021-03-04 Toyota Jidosha Kabushiki Kaisha Vehicle learning control system, vehicle control device, and vehicle learning device



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22966543

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22966543

Country of ref document: EP

Kind code of ref document: A1