
CN120406886A - A virtualized screen projection method, device, equipment, storage medium and program product - Google Patents

A virtualized screen projection method, device, equipment, storage medium and program product

Info

Publication number
CN120406886A
Authority
CN
China
Prior art keywords
system control
domain
image
control domain
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510494851.4A
Other languages
Chinese (zh)
Inventor
刘亮
马力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ecarx Hubei Tech Co Ltd
Original Assignee
Ecarx Hubei Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ecarx Hubei Tech Co Ltd filed Critical Ecarx Hubei Tech Co Ltd
Priority to CN202510494851.4A priority Critical patent/CN120406886A/en
Publication of CN120406886A publication Critical patent/CN120406886A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/20Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
    • B60K35/21Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor using visual output, e.g. blinking lights or matrix displays
    • B60K35/23Head-up displays [HUD]
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/80Arrangements for controlling instruments
    • B60K35/81Arrangements for controlling instruments for controlling displays
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/4401Bootstrapping
    • G06F9/4411Configuring for operating with peripheral devices; Loading of device drivers
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/268Signal distribution or switching
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45579I/O management, e.g. providing access to device drivers or storage

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Transportation (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention discloses a virtualized screen projection method, device, equipment, storage medium and program product. The screen projection method includes: running a graphics synthesizer in a system control domain, and acquiring image data of different operating system domains across domains through the graphics synthesizer; transmitting the acquired image data of the different operating system domains to the back end of the system control domain; in the system control domain, transparently superposing the acquired image data of the different operating system domains according to the position sorting priority of the image data to form a composite image; and outputting the composite image to a target display screen for projection display. The invention determines the projection path and the image display sorting priority through the system control domain, and integrates the image data through the image projection driver and the graphics synthesizer driver, so that image data from multiple operating system domains can be superposed on one display screen for display. This avoids data transmission between the operating system domains and effectively improves the reliability of the system.

Description

Virtualized screen projection method, device, equipment, storage medium and program product
Technical Field
The present invention relates to the field of screen display technologies, and in particular, to a virtualized screen display method, apparatus, device, storage medium, and program product.
Background
In today's smart cockpit solutions, multiple operating systems must run concurrently to give users in different areas of the cockpit a diversified experience. For example, the driver's area may need a dedicated operating system for instrument display and vehicle state monitoring, while the passenger area may need a different operating system for entertainment. Increasing the number of Systems on Chip (SoCs) in the cabin to host these operating systems raises the hardware cost of the automobile; meanwhile, as chip manufacturing processes advance, modern SoCs have become powerful enough that dedicating a single SoC to a single operating system leaves hardware capability idle.
Many solutions have been developed to address this problem, particularly for running multiple operating systems on a single SoC, including LXC (Linux container) based multi-system solutions, Hypervisor (virtual machine monitor) based multi-system solutions, hardware-isolation based multi-system solutions, and so on. To let multiple systems access the same external devices, current in-vehicle systems mostly adopt a Type-1 Hypervisor to support the multiple systems. A Type-1 Hypervisor runs directly on the hardware, provides an independent running environment for each virtual machine, guarantees isolation between operating systems, and can effectively manage shared access to external devices. In the conventional map screen projection scheme under intelligent automobile virtualization (Hypervisor), several Activity components each correspond to a specific user interface and interact with the user (for example, one displays the instrument cluster and another displays the navigation map), and a graphics processing (SurfaceFlinger) process or a hardware composer (Hardware Composer, HWC) process then performs transparent superposition and composition, so that the navigation map is mapped into the blank area of the instrument display. In the virtualized environment, however, the instrument is displayed in the Cluster domain while map navigation is displayed in the Android domain; receiving Android-domain data on the Cluster side for superposed display is affected by cross-domain access and stability issues, occupies Cluster resources, and degrades the stability of the system.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, and provides a virtualized screen projection method and related apparatus.
In order to achieve the above object, in a first aspect, the present invention provides a virtualized screen projection method, including:
running a graphics synthesizer in a system control domain, and acquiring image data of different operating system domains across domains through the graphics synthesizer;
transmitting the acquired image data of the different operating system domains to the back end of the system control domain;
in the system control domain, transparently superposing the acquired image data of the different operating system domains according to the position sorting priority of the image data to form a composite image;
and outputting the composite image to a target display screen for projection display.
In some embodiments, running the graphics synthesizer in the system control domain and acquiring image data of different operating system domains across domains through the graphics synthesizer includes:
starting and loading a virtual front-end driver in an operating system domain, and creating an interface for transmitting image data;
starting and loading a virtual back-end driver in the system control domain, and creating an interface for receiving image data;
and running an image projection driver configured for the virtual front-end driver and the virtual back-end driver, based on the interaction requirements of the graphics application, to project the images of the plurality of operating system domains onto the system control domain in the virtualized environment.
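The front-end/back-end split in these steps can be sketched as a toy in-process model. The class and method names below (`VirtualFrontend`, `VirtualBackend`, `project`) are illustrative inventions for the sketch, not any real virtio or Hypervisor API:

```python
import queue

class VirtualBackend:
    """Runs in the system control domain; owns the received-image-data interface."""
    def __init__(self):
        self.rx = queue.Queue()  # stands in for the shared receive interface

    def receive(self):
        # Pull one (domain, frame) pair from a guest domain.
        return self.rx.get_nowait()

class VirtualFrontend:
    """Runs in an operating system domain; owns the transmit-image-data interface."""
    def __init__(self, domain, backend):
        self.domain = domain
        self.backend = backend

    def project(self, frame):
        # Cross-domain projection: hand the frame to the control-domain back end.
        self.backend.rx.put((self.domain, frame))
```

In a real system the queue would be replaced by shared memory pages negotiated between guest and host; the sketch only shows the direction of data flow.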
In some embodiments, transmitting the acquired image data of the different operating system domains to the back end of the system control domain includes:
reading the projection display requirements and running performance requirements of the SoC system, and obtaining the target usage scenario and target display mode;
starting and loading a graphics synthesizer driver based on the current system configuration requirements and running performance requirements;
and configuring the graphics rendering environment, and transmitting the data to the back end of the system control domain through an image virtualization transmission protocol.
In some of these embodiments, sorting by the position priority of the image data in the system control domain includes:
the graphics synthesizer driver receiving multiple pieces of image data, and parsing them to obtain the image display attributes;
the graphics synthesizer driver generating a position sorting priority for the different image data based on the display window range, the image overlay sequence, and the image rendering scene.
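The priority generation step can be illustrated with a small sort-key model. The field names and the weighting order below are assumptions made for the sketch; the patent only names the three inputs (window range, overlay sequence, rendering scene):

```python
from dataclasses import dataclass

@dataclass
class ImageLayer:
    source_domain: str   # e.g. "cluster" or "android"
    window_area: int     # display window range, in pixels (assumed unit)
    overlay_seq: int     # explicit overlay sequence; lower = farther back
    scene_boost: int     # extra weight for the current rendering scene

def position_priority(layer: ImageLayer) -> tuple:
    # Assumed ordering: explicit overlay sequence first, then scene weight,
    # then larger windows behind smaller ones so overlays stay visible.
    return (layer.overlay_seq, -layer.scene_boost, layer.window_area)

def sort_layers(layers: list) -> list:
    # Back-to-front order for the transparent superposition step.
    return sorted(layers, key=position_priority)
```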
In some embodiments, transparently superposing the acquired image data of the different operating system domains to form a composite image includes:
the graphics synthesizer driver transparently superposing the acquired image data of the different operating system domains based on the position sorting priority to form a composite image;
the system control domain configuring one or more virtual screens to distinguish the projection paths of the different virtual screens.
In some embodiments, transparently superposing the acquired image data of the different operating system domains to form a composite image includes:
the graphics synthesizer driver parsing the image display attributes based on the position sorting priority;
the graphics synthesizer driver performing image rendering and image superposition based on the image display attributes to form a transparently superposed image.
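The transparent superposition itself can be sketched as ordinary source-over alpha blending. This is a minimal per-pixel model of the operation, not the actual graphics synthesizer driver:

```python
def blend_pixel(dst, src, alpha):
    """Source-over compositing of one RGB pixel; alpha in [0.0, 1.0]."""
    return tuple(round(alpha * s + (1.0 - alpha) * d) for s, d in zip(src, dst))

def superpose(base, overlay, alpha=0.5):
    """Transparently superpose `overlay` onto `base` (same-sized 2D pixel grids)."""
    return [
        [blend_pixel(b, o, alpha) for b, o in zip(base_row, ov_row)]
        for base_row, ov_row in zip(base, overlay)
    ]
```

A hardware composer would perform the same arithmetic per plane in dedicated hardware; the sketch only fixes the math.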
In some of these embodiments, the virtual screens include:
a first virtual screen for displaying the transparently superposed image formed after processing;
one or more second virtual screens for displaying the images of the different operating system domains.
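The first/second virtual screen arrangement can be modelled as a simple routing table. The `vscreen` names and the sorted ordering are hypothetical choices for the sketch:

```python
def route_virtual_screens(composite, domain_images):
    """Map virtual screens to images: screen 0 shows the transparently
    superposed composite, screens 1..N mirror the individual domains."""
    screens = {"vscreen0": composite}
    for i, (domain, image) in enumerate(sorted(domain_images.items()), start=1):
        screens[f"vscreen{i}"] = image
    return screens
```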
In a second aspect, the present invention further provides a virtualized screen projection device, applied to a single system-on-chip (SoC), where the SoC includes a system control domain and a plurality of operating system domains, the system control domain is integrated with a virtual back-end driver and a graphics synthesizer driver and is connected to a plurality of display screens, and each operating system domain is integrated with a virtual front-end driver and is connected to the system control domain. The device includes:
a superposition projection module, configured to project the images of the operating system domains onto the system control domain in the virtualized environment through the virtual front-end drivers, transparently superpose the images through the virtual back-end driver and the graphics synthesizer driver, and output the result to a display screen;
a function display module, configured to project any single image from the plurality of operating system domains onto the system control domain in the virtualized environment through the virtual front-end driver, and output it to a display screen through the virtual back-end driver and the graphics synthesizer driver;
and a switching selection module, configured to switch and select, under different usage scenarios, the projection paths between the system control domain, the one or more operating system domains, and the display screens.
In a third aspect, an embodiment of the present application provides an electronic device, including:
A system-on-chip (SOC) comprising a system control domain and a plurality of operating system domains, wherein the system control domain is integrated with a virtual back-end driver and a graphic synthesizer driver and is connected with a plurality of display screens, each operating system domain is integrated with a virtual front-end driver and is connected with the system control domain, and
A memory and a processor communicatively coupled to the at least one SOC, wherein,
The memory stores computer-executable instructions executable by the at least one SOC;
the processor executes the computer-executable instructions stored in the memory, so that the processor performs the method of the first aspect and/or any of its possible implementations.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium in which computer-executable instructions are stored, the instructions, when executed by a processor, implementing the method of the first aspect and/or any of its possible implementations.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements the method of the first aspect and/or any of its possible implementations.
The invention has the following beneficial effects:
Embodiments of the invention provide a virtualized screen projection method, device, equipment, storage medium and program product, applied to a single system-on-chip (SoC). The SoC includes a system control domain and a plurality of operating system domains; the system control domain is integrated with a virtual back-end driver and a graphics synthesizer driver and is connected to a plurality of display screens, and each operating system domain is integrated with a virtual front-end driver and is connected to the system control domain. A graphics synthesizer runs in the system control domain and acquires image data of the different operating system domains across domains; the acquired image data is transmitted to the back end of the system control domain, transparently superposed there according to the position sorting priority of the image data to form a composite image, and the composite image is output to a target display screen for projection display. In this scheme, the system control domain determines the projection path and the image display sorting priority, and the image projection driver and the graphics synthesizer driver integrate the image data, so that image data from multiple operating system domains can be superposed on one display screen while the pre-composition images can still be shown on the display screens of their original projection paths. Data transmission between operating system domains is avoided, which effectively improves the reliability of the system.
Drawings
FIG. 1 is a first schematic flowchart of the virtualized screen projection method provided by the present application;
FIG. 2 is a second schematic flowchart of the virtualized screen projection method provided by the present application;
FIG. 3 is a third schematic flowchart of the virtualized screen projection method provided by the present application;
FIG. 4 is a fourth schematic flowchart of the virtualized screen projection method provided by the present application;
FIG. 5 is a first schematic structural diagram of the virtualized screen projection device provided by the present application;
FIG. 6 is a second schematic structural diagram of the virtualized screen projection device provided by the present application;
FIG. 7 is a third schematic structural diagram of the virtualized screen projection device provided by the present application;
FIG. 8 is a schematic structural diagram of the electronic device provided by the present application.
Legend description:
1: system-on-chip (SoC); 110: system control domain; 111: virtual back-end driver; 112: graphics synthesizer driver; 120: operating system domain; 121: virtual front-end driver; 2: display screen; 3: electronic device; 301: communication interface; 302: processor; 303: memory; 304: bus.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In today's intelligent cockpit solutions, as demands on the user experience grow, more and more operating systems are introduced into the cockpit to meet the functional and service needs of different areas. However, increasing the number of Systems on Chip (SoCs) in the cabin, while providing more processing power, also significantly increases the overall cost of the vehicle hardware. Moreover, with advances in manufacturing processes, the performance and functionality of modern SoCs keep improving, so running a single operating system on a single SoC can leave hardware capability idle. Many technical solutions have been developed to address this problem, in particular for running multiple operating systems on a single SoC; LXC-based, Hypervisor-based, and hardware-isolation-based multi-system solutions have been continuously developed and refined in this context. The goal of these schemes is to increase resource utilization while maintaining system independence and security. To let multiple systems access the same external devices, current in-vehicle systems mostly adopt a Type-1 Hypervisor to support the multiple systems.
A Type-1 Hypervisor is virtualization software that runs directly on the physical hardware, and is also known as a bare-metal hypervisor. It can directly access the hardware resources of the platform, including the processor, memory, network, and storage, and can therefore provide higher performance and stability. In the smart cockpit, a Type-1 Hypervisor can run multiple virtual machines, each hosting a different operating system and its applications. At the same time, it provides strong security isolation, ensuring that the virtual machines do not interfere with one another and improving the reliability and safety of the whole system.
Taking map screen projection as an example, the traditional scheme realizes the display of different functional user interfaces and the interaction with the user through multiple Activity components. One Activity component is dedicated to instrument-related information, including key vehicle parameters such as speed, engine speed, fuel level, and water temperature; another displays the navigation map and provides services such as route planning, real-time positioning, and surrounding geographic information. To display the instrument and the navigation map as one integrated interface, the system superposes and composes the two image interfaces by means of a graphics processing (SurfaceFlinger) process or a hardware composer (Hardware Composer, HWC) process. Because the instrument display usually lives in the Cluster domain while the map navigation display lives in the Android domain, and the two domains are relatively independent runtime environments, receiving Android-domain data in the Cluster domain for superposed display depends on the data transmission channel between the domains. Network delay and bandwidth limits can make the transmission untimely or incomplete, and the cross-domain transmission also occupies Cluster resources; excessive resource occupation degrades the performance of the Cluster domain and thus the stability of the whole system, for example through stuttering instrument display or delayed response. In extreme cases this can even affect safety-critical vehicle functions and pose a potential threat to driving safety. A screen projection method is therefore needed to solve these problems.
Please refer to the following examples:
Referring to FIG. 1 to FIG. 4, an embodiment of the virtualized screen projection method provided by the present invention includes:
S100, running a graphics synthesizer in the system control domain 110, and acquiring image data of the different operating system domains 120 across domains through the graphics synthesizer;
S200, transmitting the acquired image data of the different operating system domains to the back end of the system control domain 110;
S300, in the system control domain 110, transparently superposing the acquired image data of the different operating system domains 120 according to the position sorting priority of the image data to form a composite image;
and S400, outputting the composite image to a target display screen 2 for projection display.
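Steps S100 through S400 can be tied together in a minimal end-to-end sketch. Frames are modelled here as plain `(domain, priority, image)` tuples and the blend function is left pluggable, so this illustrates the flow rather than the actual driver stack:

```python
def virtualized_projection(frames, blend):
    """Compose frames that have already crossed into the control-domain
    back end (S100/S200) and return the image handed to the display (S400).

    frames: iterable of (domain, position_priority, image) tuples.
    blend:  function combining two images back-to-front.
    """
    # S300: order by position sorting priority, then transparently superpose.
    ordered = sorted(frames, key=lambda f: f[1])
    composite = ordered[0][2]
    for _, _, image in ordered[1:]:
        composite = blend(composite, image)
    # S400: the caller outputs the composite to the target display screen.
    return composite
```

With a real blend function this would be the alpha superposition step; here any associative combiner demonstrates the ordering.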
Specifically, the system control domain 110 and the operating system domains 120 run on the same system-on-chip (SoC) 1; the system control domain 110 is integrated with a virtual back-end driver 111 and a graphics synthesizer driver 112 and is connected to a plurality of display screens 2, and each operating system domain 120 is integrated with a virtual front-end driver 121 and is connected to the system control domain 110.
In detail, under intelligent automobile virtualization (Hypervisor), the system control domain 110 (Dom0) and the operating system domains 120 (Android domain and Cluster domain) have the following roles. In the virtualized environment, Dom0 is the control domain, the highest-privilege virtual machine, responsible for managing the other virtual machines (called guest domains); it generally runs at the highest privilege level and handles functions such as security policy, resource allocation, and fault recovery. The Cluster domain is the dashboard domain, mainly responsible for displaying vehicle information such as speed, engine speed, and fuel consumption, providing real-time vehicle state information to the driver through the dashboard. The Android domain is the user function domain that runs the Android operating system.
By way of example, the virtual back-end driver 111 may be a graphics driver such as virtio-gpu. Through emulation in QEMU, DMA transfer can be implemented by sharing common pages between guest and host, avoiding memory copies. This introduces a new virtual-machine display screen 2 refresh acceleration module that supports refreshing output pictures into the various virtual display screens 2 more promptly, reduces the path delay of rendering instructions, improves rendering efficiency, reduces CPU usage, greatly shortens the state synchronization delay, and improves response speed.
Illustratively, the graphics synthesizer driver 112 may be Weston, Wayland, Mesa3D, ANGLE, SwiftShader, etc.:
(1) In an embodiment of the application, the window of the navigation application, the window of the music playing application, the vehicle state information window, and so on can be efficiently composed and displayed through the Weston driver. For example, when superposing an environment map image and an instrument image into one scene, the Weston driver can compose the two images well, ensuring smooth display and a correct superposition effect, guaranteeing system safety, and preventing graphic interference between different applications;
(2) In an embodiment of the application, important driving information (such as vehicle speed and navigation directions) can be displayed quickly and accurately in graphic form on the front windshield display screen 2 through the Wayland driver, ensuring fast updating and accurate display of the graphics and providing the driver with timely, clear information;
(3) In an embodiment of the application, environmental data collected by sensors can be converted into a 3D graphic through Mesa3D-driven rendering of the environment model around the vehicle, and then superposed with the vehicle state information in the instrument image. For example, in an automatic parking scenario, environmental information such as parking spaces and obstacles around the vehicle is rendered and composed with the vehicle's instrument image (such as the reversing radar display and gear information), providing the driver with intuitive parking assistance.
It can be understood that, in the intelligent cabin scenario, in order to solve the communication delay, high resource occupation, and poor system stability of traditional cross-domain screen projection (such as superposing the navigation map of the Android domain onto the instrument data of the Cluster domain), the application optimizes the flow by introducing a centralized graphics processing scheme based on the system control domain 110 and the graphics synthesizer. The scheme uses the direct hardware management capability of the Type-1 Hypervisor: a graphics synthesizer running in the system control domain 110 can directly access the graphics buffer data of each operating system domain 120 (such as the Android domain and the Cluster domain) without relying on cross-domain communication, complete the transparent superposition and composition according to priority, and finally output the result to the display screen 2. This avoids the Cluster resource occupation and network delay caused by cross-domain data transmission, guarantees safety through the hardware isolation of the Hypervisor, significantly improves the efficiency of graphics composition and the stability of the system, and solves the stuttering, slow response, and even safety risks that can arise in the traditional multi-Activity or HWC cross-domain composition scheme.
With continued reference to fig. 2, in some embodiments, step S100 includes:
S110, starting and loading the virtual front-end driver 121 in the operating system domain 120, and creating an interface for sending image data;
S120, starting and loading the virtual back-end driver 111 in the system control domain 110, and creating an interface for receiving image data;
S130, running the image projection drivers configured for the virtual front-end driver 121 and the virtual back-end driver 111 based on the interaction requirements of the graphics applications, and projecting the images of the plurality of operating system domains 120 to the system control domain 110 in the virtualized environment.
Illustratively, in the system control domain 110, the virtual back-end driver 111 is responsible for communicating with the operating system domains 120 and receiving image data from them. After starting, it initializes its own parameters and resources, adapts to and registers with the other components of the system control domain 110, and creates the interface for receiving image data, thereby unifying the format, protocol and mode of image data transmission from the operating system domains 120, and specifying the data packet size, transmission frequency and error checking mode. When the virtual front-end driver 121 of an operating system domain 120 is to send image data, it encapsulates and sends the data as required by this interface. Creating the interface also involves allocating resources such as corresponding port numbers or memory addresses in the system control domain 110 so that the data can be received accurately.
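The packet size and error-checking requirements of such a receive interface can be sketched with a minimal header check. This is an illustrative assumption only: the field names, the magic value and the additive checksum are invented for the sketch, and the real virtio-gpu protocol defines its own structures.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical image-packet header for the receive interface created by
 * the virtual back-end driver. All fields are illustrative. */
typedef struct {
    uint32_t magic;       /* fixed marker identifying an image packet */
    uint32_t seq;         /* sequence number for ordering/loss detection */
    uint32_t payload_len; /* bytes of pixel data following the header */
    uint32_t checksum;    /* additive checksum over the payload */
} img_packet_hdr;

#define IMG_MAGIC 0x494D4721u /* "IMG!" */

/* Additive checksum used for error checking on receive. */
uint32_t img_checksum(const uint8_t *payload, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += payload[i];
    return sum;
}

/* Validate a received packet against the header the front end filled in.
 * Returns 1 when the packet is acceptable, 0 otherwise. */
int img_packet_valid(const img_packet_hdr *hdr,
                     const uint8_t *payload, size_t len) {
    if (hdr->magic != IMG_MAGIC) return 0;
    if (hdr->payload_len != len) return 0;
    return img_checksum(payload, len) == hdr->checksum;
}
```

With such a check, the back end can reject any frame whose header or payload was corrupted in transit before handing it to the synthesizer.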
Further, different application scenarios and hardware capabilities place different requirements on graphics processing, and the system needs to take the limitations of hardware resources, such as CPU and GPU performance and memory capacity, into account. If the hardware resources are limited but a large amount of image data and complex graphics effects need to be processed, a graphics synthesizer driver 112 that uses the hardware resources efficiently needs to be selected. Taking the display of a navigation map as an example, operations such as zooming and panning of the map need to be supported, and the characters and icons on the map need to be clearly readable; in the display of a vehicle instrument, the various dashboard pointers and numbers need to be drawn accurately and updated in real time. Illustratively, the Wayland and Weston drivers can be used to handle the graphics synthesis process, so that the acceleration functions of modern graphics hardware can be fully utilized, the occupation of system resources is reduced, and the frame rate and fluency of the graphics display are improved.
Further, in the operating system domain 120, the function of the virtual front-end driver 121 (e.g., the virtio-gpu front-end driver) is to prepare the image data in its own operating system domain 120 and send it to the system control domain 110. After the virtual front-end driver 121 is started and loaded, it interacts with the graphics applications in the operating system domain 120 to obtain the generation and update information of the image data. Specifically, the image data to be sent is encapsulated and formatted according to the specification of the receiving interface of the virtual back-end driver 111, and creating the sending interface involves configuring the data transmission mode, for example transmission over a network protocol or through shared memory. If a network protocol is adopted, parameters such as the network address and port number are set; if the shared-memory mode is adopted, a corresponding memory area is allocated in the operating system domain 120 and a memory mapping is established with the system control domain 110, so that the image data can be sent to the system control domain 110 accurately and efficiently.
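The shared-memory option can be sketched with POSIX shared memory, which stands in here for the hypervisor-established mapping between a guest domain and the control domain. This is a sketch under assumptions: the object name "/vgpu_frame0", the fixed frame size, and the create/attach split are invented for illustration and are not part of the virtio-gpu interface.

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define FRAME_BYTES 4096 /* illustrative fixed frame-buffer size */

/* Map the shared frame region. The "front end" calls with create=1 to
 * allocate it; the "back end" calls with create=0 to attach to the same
 * region, modeling the cross-domain memory mapping described above. */
void *map_frame_region(int create) {
    int flags = create ? (O_CREAT | O_RDWR) : O_RDWR;
    int fd = shm_open("/vgpu_frame0", flags, 0600);
    if (fd < 0) return NULL;
    if (create && ftruncate(fd, FRAME_BYTES) != 0) { close(fd); return NULL; }
    void *p = mmap(NULL, FRAME_BYTES, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd); /* the mapping stays valid after the descriptor is closed */
    return p == MAP_FAILED ? NULL : p;
}
```

Data written through the first mapping is immediately visible through the second, which is what lets the back end read a frame without any copy across the transport.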
Further, graphics applications have a variety of different interaction requirements. For example, a navigation application needs to update the map display in real time according to changes in the user's position and respond to user operations such as zooming and searching, while a vehicle instrument application needs to update the display of information such as vehicle speed and fuel level in real time according to the data of the vehicle sensors. When the virtual front-end driver 121 receives a notification that the graphics application data has changed, it acquires the image data from the application, encapsulates it according to the specification of the previously created sending interface, and projects the data to the system control domain 110 through the virtualized environment.
It can be understood that when the graphics synthesizer obtains image data of different operating system domains 120 across domains, a virtual front-end driver 121 (e.g., the virtio-gpu front-end driver) is initialized in each operating system domain 120 (e.g., the Android domain and the Cluster domain). The driver interacts with the graphics applications in its domain (e.g., navigation, the instrument program, etc.) and creates a sending interface according to a preset specification (including the data format, packaging manner, etc.); if the shared-memory mode is adopted, a memory area needs to be allocated in advance and a mapping established with the control domain. The virtual back-end driver 111 is started in the system control domain 110, the receiving interface parameters (such as the data packet size and checking mechanism) are configured uniformly, and ports or memory resources are allocated to ensure complete matching with the transmission protocol of the front-end driver (such as a network protocol or memory passthrough). Finally, based on the real-time requirements of the applications (such as the high frame rate required by dynamic navigation zooming and the microsecond-level updates required by instrument data), the multi-domain image streams are projected to the control domain through the virtualization layer (e.g., a Type-1 Hypervisor) and mixed by the hardware-accelerated synthesizer (e.g., the Wayland or Weston driver) according to the scene priority, for example semi-transparently superimposing the map layer on the dashboard layer; the result is output to the display screen 2. In this way, high-frame-rate, low-delay fusion of multi-system pictures is realized in a resource-limited environment, and the real-time display of key information (such as speed warnings, traffic-camera alerts and road construction) is ensured.
With continued reference to fig. 3, in some embodiments, step S200 includes:
S210, reading projection display requirements and running performance requirements of an SOC system, and acquiring a target use scene and a target display mode;
S220, starting and loading the graphic synthesizer driver 112 based on the current system configuration requirement and the current operation performance requirement;
S230, configuring a graphic rendering environment, and transmitting data to the back end of the system control domain 110 through an image virtualization transmission protocol.
For example, one operating system domain 120 is responsible for functions related to the driving of the vehicle, such as the instrument display and the driving assistance system; it needs to display the various vehicle parameters and road condition information accurately and in real time, and therefore has high requirements on the accuracy and stability of the graphics. Another one or more operating system domains 120 focus on entertainment functions, such as multimedia playback, games and map projection, and accordingly need to support high resolution, smooth graphics rendering and rich color expression. Furthermore, the overall architecture and resource allocation of the vehicle system need to be considered between the system control domain 110 and the operating system domains 120 (for example, the communication bandwidth between the system control domain 110 and the operating system domains 120 and the allocation of memory resources affect the efficiency and effect of graphics processing), so the hardware resources that the system can provide (such as CPU processing capability, GPU graphics processing capability, and memory capacity and bandwidth) need to be acquired and analyzed in order to select a suitable graphics processing strategy and driver.
Further, the performance requirements may involve a number of considerations. The first is the frame rate of the graphics processing: some dynamic graphics displays (e.g., navigation map updates during vehicle travel, animation effects in games, etc.) may require a certain frame rate to be maintained to ensure the smoothness of the picture. The second is the response time: when the user operates the graphical interface (e.g., clicking a button, sliding the screen, etc.), the system needs to respond quickly to provide a good user experience. The last is the graphics quality (e.g., image clarity, resolution, color accuracy, etc.).
Further, different usage scenarios may be determined according to the application field and the functions of the system. In intelligent automobiles, common usage scenarios include the driving scenario, the parking scenario and the entertainment scenario. The driving scenario mainly focuses on the display of the navigation map, vehicle state information and the like; this information needs to be presented to the driver in a simple and intuitive manner without distracting attention. The parking scenario focuses on the display of the surroundings of the vehicle, such as the reversing image and parking auxiliary lines, to help the driver park safely. The entertainment scenario focuses on requirements such as watching videos and playing games, where the demands on graphics processing capability and display effect are high in order to provide a better entertainment experience.
It can be understood that when multi-scenario graphics processing is optimized through intelligent scheduling and dynamic configuration, the hardware resource state of the SoC (including CPU/GPU load, memory bandwidth, etc.) is first collected in real time and the current usage scenario is analyzed (for example, the driving mode needs to guarantee the instrument display with priority, while the entertainment mode focuses on multimedia performance), in combination with the display mode requirements (for example, the instrument requires stable 60 fps output, and navigation needs to support dynamic zooming). Then an adapted graphics synthesizer driver 112 is loaded dynamically according to the analysis result (for example, the driving scenario selects a low-delay Wayland synthesizer, while the entertainment scenario enables a Weston driver supporting Vulkan acceleration), together with a corresponding resource allocation strategy (for example, GPU computing units are reserved for the instrument domain, and more display memory is allocated to the entertainment domain). Finally, the image data of each domain is transmitted to the back end of the system control domain 110 through an optimized virtualized transmission protocol (for example, a zero-copy shared-memory scheme based on DMA-BUF), mixed in real time by the synthesizer according to the scene priority (for example, semitransparent HUD information is superimposed on the dashboard in the driving mode, and 4K video and game pictures are synthesized in the entertainment mode), and output to the display screen 2. The method can ensure the real-time performance of key driving information while maximizing the graphics expressiveness of the entertainment system through scene-aware dynamic resource allocation and hardware-accelerated transmission.
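The scene-to-driver selection step above can be sketched as a simple lookup. The scene names, frame-rate targets, driver module names ("wayland-lowlat", "weston-vulkan") and the reservation flag are all assumptions for the sketch; a real system would derive them from its configuration service.

```c
#include <assert.h>

/* Illustrative mapping from the current usage scenario to a synthesizer
 * configuration, modeling the dynamic-loading decision described above. */
typedef enum { SCENE_DRIVING, SCENE_PARKING, SCENE_ENTERTAINMENT } scene_t;

typedef struct {
    const char *compositor; /* driver module selected for the scene */
    int target_fps;         /* frame-rate goal for the scene */
    int gpu_reserved;       /* 1 if GPU units are reserved for the meter */
} gfx_config;

gfx_config select_gfx_config(scene_t scene) {
    switch (scene) {
    case SCENE_DRIVING:       return (gfx_config){"wayland-lowlat", 60, 1};
    case SCENE_PARKING:       return (gfx_config){"wayland-lowlat", 30, 1};
    case SCENE_ENTERTAINMENT: return (gfx_config){"weston-vulkan",  60, 0};
    }
    /* Conservative default: keep the safety-critical meter guaranteed. */
    return (gfx_config){"wayland-lowlat", 30, 1};
}
```

Keeping the policy in one pure function like this makes the scheduling decision easy to test independently of the drivers it loads.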
With continued reference to fig. 4, in some embodiments, step S300 includes:
S310, the graphic synthesizer driver 112 receives a plurality of image data and analyzes them to acquire the image display attributes;
S320, the graphic synthesizer driver 112 generates a position sorting priority for the different image data based on the display window range, the image superposition layer sequence and the image rendering scene.
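The position-sorting step can be sketched as ordering layers bottom-to-top before blending. The field names (an explicit z-order with a scene weight as tie-breaker) are assumptions for illustration; the real driver derives its priority from the display window range, layer sequence and rendering scene named above.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Illustrative layer descriptor for the position-sorting priority. */
typedef struct {
    int z_order;      /* higher value drawn later (on top) */
    int scene_weight; /* tie-breaker, e.g. meter > entertainment */
    int id;           /* which domain/surface the layer came from */
} layer_t;

static int layer_cmp(const void *a, const void *b) {
    const layer_t *la = a, *lb = b;
    if (la->z_order != lb->z_order)
        return la->z_order - lb->z_order;
    return la->scene_weight - lb->scene_weight;
}

/* Sort layers bottom-to-top so the synthesizer can blend them in order. */
void rank_layers(layer_t *layers, size_t n) {
    qsort(layers, n, sizeof(layer_t), layer_cmp);
}
```

After this pass, a painter's-algorithm loop over the array blends each layer over the accumulated result in the correct order.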
Illustratively, after the display attributes of the images are obtained, the graphics synthesizer driver 112 begins processing the images to obtain the transparent superimposed image. For color processing, the graphics synthesizer driver 112 can perform operations such as color correction and color space conversion (for example, adjusting parameters such as the brightness, contrast and saturation of the images) to ensure that the colors are displayed uniformly on the different display screens 2; it can also convert the images from one color space to another according to the different rendering scenes to meet specific display requirements. For illumination processing, it can simulate different illumination effects (for example, ambient light, direct light and shadow), and it can also perform operations such as mapping, filtering and compression on the textures of the images to improve their detail and realism. In the superposition process, the graphics synthesizer driver 112 needs to handle the transparency and blending mode of the images, where the transparency determines the degree to which the upper-layer image occludes the lower-layer image, and the blending mode determines how the colors of the upper and lower layers are mixed (for example, using alpha blending to realize a semitransparent effect, so that the colors of the two layers are mixed in a certain proportion). In addition, the graphics synthesizer driver 112 also needs to handle the edge blending and antialiasing of the images, to ensure that no obvious edge aliasing or unnatural transitions appear when the images are superimposed (for example, specific edge detection and antialiasing algorithms can make the image look more natural).
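The alpha-blending operation mentioned above is the classic source-over rule, out = src·a + dst·(1 − a), with alpha scaled to 0..255. A minimal per-channel sketch (a compositor applies this to each of R, G and B):

```c
#include <assert.h>
#include <stdint.h>

/* Source-over alpha blend for one 8-bit channel.
 * alpha = 255 keeps the source, alpha = 0 keeps the destination. */
uint8_t alpha_blend(uint8_t src, uint8_t dst, uint8_t alpha) {
    /* +127 rounds the 8-bit fixed-point division by 255 */
    return (uint8_t)((src * alpha + dst * (255 - alpha) + 127) / 255);
}
```

This is the proportional mixing described in the paragraph above: a half-transparent map layer (alpha ≈ 128) contributes roughly half its color over the dashboard layer beneath it.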
Further, upon receiving the transparent superimposed image generated by the graphics synthesizer driver 112, the system control domain 110 communicates with the display screen 2 through a specific interface and communication protocol to send the image data to the connected display screen 2 in a suitable format and at a suitable rate.
With continued reference to fig. 4, in some embodiments, step S300 further includes:
S330, the graphic synthesizer driver 112 performs transparent superposition processing on the acquired image data of the different operating system domains 120 based on the position sorting priority to form a synthesized image;
S340, the system control domain 110 configures one or more virtual screens to differentiate the projection paths of the different virtual screens;
S350, the graphic synthesizer driver 112 analyzes the image display attributes based on the position sorting priority;
S360, the graphic synthesizer driver 112 performs image rendering processing and image superposition processing based on the image display attributes to form a transparent superimposed image.
Referring to figs. 5-7, in the embodiment of the present application, taking a system control domain 110 (Dom0) and two operating system domains 120 (an instrument display domain and an environment map domain) projecting the instrument interface onto the environment map as an example: when the Linux graphics system runs on the system control domain 110 (Dom0), the virtual back-end driver 111 (virtio-gpu) and the graphics synthesizer driver 112 (Weston, Wayland, Mesa3D, ANGLE, SwiftShader, etc.) are started synchronously. Then, when the instrument display domain (Cluster domain) and the environment map domain (Android domain) are started, the graphics data streams of the two operating system domains 120 can be projected onto the system control domain 110 (Dom0) over the virtualized environment through the virtual front-end driver 121 (virtio-gpu) and output directly to the display screens 2 corresponding to the respective operating system domains 120. Further, to ensure that the normal Android image can still be displayed while the map is being projected, a second virtual screen is started synchronously in the environment map domain (Android domain); the graphics synthesizer driver 112 (for example Weston, Wayland, Mesa3D, ANGLE, SwiftShader) then transparently superimposes the image of the instrument display domain (Cluster domain) onto the image of the environment map domain (Android domain), and the superimposed image is output to the display screen 2 while the original images remain displayed on their own paths.
The virtual screens comprise a first virtual screen and one or more second virtual screens, where the first virtual screen is used for displaying the transparent superimposed image formed after processing, and the second virtual screens are used for displaying the images of the different operating system domains. It can be understood that the three domains (the Dom0 domain, the Cluster domain and the Android domain), two screens and one graphics synthesizer driver 112 used in the embodiment of the present application are merely one example of the implementation of the present application; in practical applications, other numbers of domains and screens, other kinds of image projection drivers, and other graphics synthesizer drivers 112 are also possible.
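The first-virtual-screen/second-virtual-screen split above can be sketched as a small routing table. The enum and struct names, the "overlay"/"domain" labels and the table-building helper are all invented for illustration; they are not part of any described interface.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative virtual-screen routing: the first screen carries the
 * composed overlay, further screens carry individual domain images. */
typedef enum { SRC_COMPOSED, SRC_DOMAIN } screen_src_t;

typedef struct {
    screen_src_t source;
    int domain_id;      /* valid only when source == SRC_DOMAIN */
    const char *label;
} virtual_screen;

/* Build a routing table: one composed screen plus n_domains domain
 * screens, writing at most cap entries. Returns the number written. */
size_t build_routes(virtual_screen *out, size_t cap, size_t n_domains) {
    size_t count = 0;
    if (count < cap)
        out[count++] = (virtual_screen){SRC_COMPOSED, -1, "overlay"};
    for (size_t d = 0; d < n_domains && count < cap; d++)
        out[count++] = (virtual_screen){SRC_DOMAIN, (int)d, "domain"};
    return count;
}
```

With two operating system domains this yields three routes, matching the three-domain, two-screen example: one superimposed picture plus one original picture per domain.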
The present invention also provides an embodiment of a virtualized screen projection device, which can be implemented in hardware and/or software and is applied to the same system-on-chip SOC 1, where the SOC includes a system control domain 110 and a plurality of operating system domains 120, the system control domain 110 is integrated with a virtual back-end driver 111 and a graphics synthesizer driver 112 and is connected to a plurality of display screens 2, and each operating system domain 120 is integrated with a virtual front-end driver 121 and is connected to the system control domain 110. The device includes:
The superposition screen-throwing module is used for throwing the images of the plurality of operating system domains 120 to the system control domain 110 in the virtualized environment through the virtual front end driver, and transparently superposing and outputting the plurality of images to the display screen 2 through the virtual back end driver 111 and the graphic synthesizer driver 112;
A function display module, configured to project any one of the plurality of operating system domains 120 onto the system control domain 110 in the virtualized environment through the virtual front end driver 121, and output the image to the display screen 2 through the virtual back end driver 111 and the graphic synthesizer driver 112;
A switching selection module, configured to switch and select the projection paths between the system control domain 110 and the one or more operating system domains 120 and the display screen 2 under different usage scenarios.
The superposition screen projection module can accurately project the images of the plurality of operating system domains 120 onto the system control domain 110 through the virtual front-end driver 121 in the virtualized environment, then finely process the multiple image data streams from the different operating system domains 120 by means of the cooperative work of the virtual back-end driver 111 and the graphics synthesizer driver 112 to obtain the transparent superimposed image, and finally output the synthesized image to the corresponding display screen 2. The function display module can select an image from one of the operating system domains 120, project it onto the system control domain 110 in the virtualized environment through the virtual front-end driver 121, and then output it independently to the display screen 2 of its original path through the virtual back-end driver 111 and the graphics synthesizer driver 112. The switching selection module adds flexibility and adaptability to the whole system: it can intelligently switch and select the paths between the system control domain 110, the one or more operating system domains 120 and the display screens 2 according to the different usage scenarios, ensuring that the image data is transmitted to the target display screen 2 accurately and efficiently.
Illustratively, corresponding image projection drivers are run in the system control domain 110 and the operating system domain 120 to project images of a plurality of operating system domains 120 to the system control domain 110 in a virtualized environment, corresponding graphic synthesizer drivers 112 are run in the system control domain 110 to project images of a plurality of operating system domains 120 onto one display screen 2 in a transparent overlay, and corresponding graphic synthesizer drivers 112 are run in the system control domain 110 to project images of one operating system domain 120 onto another display screen 2 individually. Through the above design, the system control domain 110 can not only efficiently manage and schedule image data of a plurality of operating system domains 120, but also realize integrated processing of the image data through the accurate image projection driver and the graphic synthesizer driver 112. The core advantage of this mechanism is that it effectively avoids direct data transfer between multiple operating system domains 120, significantly improving the reliability of the system. Meanwhile, the image data before combination can still be displayed on the respective projection paths through the system control domain 110, so that the real-time performance and the integrity of the information are ensured.
The embodiment of the present application also provides an electronic device 3. The electronic device 3 comprises a system-on-chip SOC 1, where the SOC includes a system control domain 110 and a plurality of operating system domains 120, the system control domain 110 is integrated with a virtual back-end driver 111 and a graphics synthesizer driver 112 and is connected to a plurality of display screens 2, and each operating system domain 120 is integrated with a virtual front-end driver 121 and is connected to the system control domain 110. The electronic device 3 further comprises a memory 303 and a processor 302 communicatively connected to the at least one SOC, where the memory 303 stores computer-executable instructions that can be executed by the at least one SOC, and the processor 302 executes the computer-executable instructions stored in the memory 303, so that the processor 302 performs the various possible implementations in the embodiments.
The electronic device 3 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device 3 may also represent various forms of mobile equipment such as cellular telephones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing equipment. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 8, the electronic device 3 specifically includes a communication interface 301, a processor 302, a memory 303 and a bus 304, where the communication interface 301, the processor 302 and the memory 303 communicate with each other through the bus 304. The processor 302 may perform the method described above by reading and executing the machine-executable instructions in the memory 303 corresponding to the control logic of the screen projection method; the details are referred to in the above embodiments and are not further elaborated here.
The memory 303 is used for storing a program, and the processor 302 executes the program after receiving an execution instruction. The memory 303 referred to in this disclosure may be any electronic, magnetic, optical or other physical storage device that can contain stored information, such as executable instructions, data, and the like. In particular, the memory 303 may be a RAM (Random Access Memory), a flash memory, a storage drive (e.g., a hard drive), any type of storage disk (e.g., an optical disk, a DVD, etc.), a similar storage medium, or a combination thereof. The communication connection between the system network element and at least one other network element is achieved through at least one communication interface 301 (which may be wired or wireless); the Internet, a wide area network, a local area network, a metropolitan area network, etc. may be used.
In some embodiments, the processor 302 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others. The buses may be divided into address buses, data buses, control buses, etc.
It should be understood that the connection relationship between the modules illustrated in the embodiment of the present application is only illustrative, and does not limit the structure of the electronic device 3. In other embodiments of the present application, the electronic device 3 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The processor 302 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the methods described above may be performed by integrated logic circuitry in hardware in the processor 302 or by instructions in software. The processor 302 may be an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors 302. The controller can generate operation control signals according to the instruction operation codes and the timing signals to complete the control of instruction fetching and instruction execution.
The electronic device 3 provided by the embodiment of the present application is based on the same technical concept as the screen projection method provided by the embodiment of the present application, and has the same beneficial effects as the method it adopts, runs or implements.
The embodiment of the present application further provides a computer readable storage medium, where computer executable instructions are stored, and when the processor 302 executes the computer executable instructions, the above-mentioned method is implemented.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
Embodiments of the present application also provide a computer program product comprising a computer program which, when executed by the processor 302, implements the method described above. The computer program comprises one or more computer instructions that, when loaded and executed on a computer, produce, in whole or in part, a process or function in accordance with embodiments of the present application. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, training device, or data center to another website, computer, training device, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave, etc.) means. The computer-readable storage medium can be any available medium that can be stored by a computer, or a data storage device such as a training device or data center that contains an integration of one or more available media. Usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk), among others.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable SOC, which may be a special or general purpose programmable SOC, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to an SOC of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the SOC, cause the functions/operations specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device 3 having a display device (e.g., a cathode ray tube or a liquid crystal display monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the electronic device 3. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer with a graphical user interface or web browser through which a user can interact with an implementation of the systems and techniques described here), or that includes any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), a blockchain network, and the Internet.
It should be noted that the foregoing description covers only preferred embodiments of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood that modifications, equivalent substitutions, and improvements to the technical solutions described in the foregoing embodiments may occur to those skilled in the art, and all such modifications, equivalent substitutions, and improvements are intended to fall within the spirit and principle of the present invention.

Claims (11)

1. A virtualized screen projection method, comprising:
operating a graphic synthesizer in a system control domain, and acquiring image data of different operating system domains in a cross-domain manner through the graphic synthesizer;
transmitting the acquired image data of the different operating system domains to a back end of the system control domain;
performing, in the system control domain, transparent superposition processing on the acquired image data of the different operating system domains according to a position sorting priority of the image data so as to form a composite image; and
outputting the composite image to a target display screen for screen projection display.
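The flow of claim 1 can be illustrated with a minimal sketch. This is not the patented implementation: the function and field names are hypothetical, and per-layer alpha ("over") blending is assumed as one common realization of the "transparent superposition processing" step.

```python
# Illustrative sketch of claim 1 (hypothetical, not the patented method):
# image layers acquired from different OS domains are overlaid in the
# system control domain in order of their position sorting priority.

def compose(layers):
    """layers: list of (priority, alpha, pixel) tuples; one scalar 'pixel'
    per layer for brevity. Lower priority is drawn first (bottom of stack)."""
    out = 0.0
    for _, alpha, pixel in sorted(layers, key=lambda l: l[0]):
        out = alpha * pixel + (1.0 - alpha) * out  # "over" blend
    return out

# Two domains: an opaque base image and a 50%-transparent overlay.
result = compose([(0, 1.0, 0.2), (1, 0.5, 0.8)])  # -> 0.5
```

Real graphic synthesizers blend full pixel buffers per channel, but the ordering-then-blending structure is the same.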
2. The virtualized screen projection method of claim 1, wherein the operating a graphic synthesizer in a system control domain and acquiring image data of different operating system domains in a cross-domain manner through the graphic synthesizer comprises:
starting and loading a virtual front-end driver in an operating system domain, and creating an interface for transmitting image data;
starting and loading a virtual back-end driver in the system control domain, and creating an interface for receiving image data; and
running image projection drivers configured correspondingly to the virtual front-end driver and the virtual back-end driver based on an interaction requirement of a graphic application program, and projecting images of a plurality of operating system domains to the system control domain in a virtualized environment.
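The front-end/back-end split of claim 2 can be sketched as two halves sharing one transport channel. The class and method names are hypothetical, and a Python queue stands in for the shared-memory transport that a real virtio-style driver pair would use; it only shows the interface shape, not the actual driver mechanism.

```python
# Sketch of the virtual front-end / back-end driver pair in claim 2
# (hypothetical names; a Queue stands in for the real cross-domain
# shared-memory transport).
from queue import Queue

class VirtualFrontEnd:
    """Runs in an operating system domain; exposes a send-image interface."""
    def __init__(self, channel: Queue):
        self.channel = channel

    def send_image(self, domain_id, frame):
        self.channel.put((domain_id, frame))

class VirtualBackEnd:
    """Runs in the system control domain; exposes a receive-image interface."""
    def __init__(self, channel: Queue):
        self.channel = channel

    def receive_image(self):
        return self.channel.get()

channel = Queue()
VirtualFrontEnd(channel).send_image("domain-A", b"frame-bytes")
received = VirtualBackEnd(channel).receive_image()
```

The point of the split is that each OS domain only links against the front-end interface, while the graphic synthesizer in the system control domain consumes everything through the single back-end interface.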
3. The virtualized screen projection method of claim 1, wherein the transmitting the acquired image data of the different operating system domains to the back end of the system control domain comprises:
reading a projection display requirement and an operating performance requirement of the SOC system, and acquiring a target use scene and a target display mode;
starting and loading a graphic synthesizer driver based on the current system configuration requirement and the current operating performance requirement; and
configuring a graphic rendering environment, and transmitting data to the back end of the system control domain through an image virtualization transmission protocol.
4. The virtualized screen projection method of claim 1, wherein the sorting, in the system control domain, by the position priority of the image data comprises:
receiving, by the graphic synthesizer driver, a plurality of pieces of image data, and parsing them to acquire image display attributes; and
generating, by the graphic synthesizer driver, a position sorting priority for the different pieces of image data based on a display window range, an image overlay sequence, and an image rendering scene.
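One way to realize the ranking of claim 4 is a composite sort key built from the three cited attributes. The attribute field names and scene ranks below are hypothetical illustrations, not values taken from the patent.

```python
# Sketch of claim 4: deriving a position sorting priority from parsed
# image display attributes (field names and scene ranks are hypothetical).

def sort_priority(attrs):
    """Rank layers: rendering scene first (e.g. HUD above navigation),
    then explicit overlay sequence, then window area as a tiebreaker
    (larger windows sink toward the bottom of the stack)."""
    scene_rank = {"background": 0, "navigation": 1, "hud": 2}
    w, h = attrs["window_range"]
    return (scene_rank[attrs["scene"]], attrs["overlay_seq"], -(w * h))

layers = [
    {"scene": "hud",        "overlay_seq": 0, "window_range": (400, 100)},
    {"scene": "background", "overlay_seq": 0, "window_range": (1920, 720)},
    {"scene": "navigation", "overlay_seq": 1, "window_range": (800, 600)},
]
ordered = sorted(layers, key=sort_priority)  # bottom-most layer first
```

Using a tuple key keeps the three criteria independent: the scene dominates, the overlay sequence breaks ties within a scene, and window area only matters when both of those are equal.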
5. The virtualized screen projection method of claim 1, wherein the performing transparent superposition processing on the acquired image data of the different operating system domains to form a composite image comprises:
performing, by the graphic synthesizer driver, transparent superposition processing on the acquired image data of the different operating system domains based on the position sorting priority to form a composite image; and
configuring, by the system control domain, one or more virtual screens to differentiate the projection paths of the different virtual screens.
6. The virtualized screen projection method of claim 5, wherein the performing transparent superposition processing on the acquired image data of the different operating system domains to form a composite image further comprises:
parsing, by the graphic synthesizer driver, the image display attributes based on the position sorting priority; and
performing, by the graphic synthesizer driver, image rendering processing and image superposition processing based on the image display attributes to form a transparently superposed image.
7. The virtualized screen projection method of claim 5, wherein the virtual screens comprise:
a first virtual screen for displaying the transparently superposed image formed after the processing; and
one or more second virtual screens for displaying images of different operating system domains.
8. A virtualized screen projection apparatus, applied to a single system-on-chip (SOC), the SOC comprising a system control domain and a plurality of operating system domains, wherein the system control domain is integrated with a virtual back-end driver and a graphic synthesizer driver and is connected with a plurality of display screens, and each operating system domain is integrated with a virtual front-end driver and is connected with the system control domain, the apparatus comprising:
an overlay projection module for projecting images of the plurality of operating system domains onto the system control domain in a virtualized environment through the virtual front-end drivers, transparently superposing the images through the virtual back-end driver and the graphic synthesizer driver, and outputting the result to the display screens;
a function display module for projecting any one image of the plurality of operating system domains to the system control domain in a virtualized environment through the virtual front-end driver and outputting the image to the display screen through the virtual back-end driver and the graphic synthesizer driver; and
a switching selection module for switching and selecting the projection paths between the system control domain, one or more operating system domains, and the display screens under different use scenes.
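The switching selection module of claim 8 amounts to a routing table from use scenes to projection paths. The scene names, screen names, and source labels below are hypothetical placeholders for whatever a concrete cockpit configuration would define.

```python
# Sketch of the switching selection module in claim 8: a routing table
# that maps a use scene to a projection source per display screen
# (all scene/screen/source names are hypothetical).

ROUTES = {
    "driving": {"main_screen": "composite", "passenger_screen": "domain-B"},
    "parked":  {"main_screen": "domain-A",  "passenger_screen": "domain-B"},
}

def select_path(scene, screen):
    """Return which source (the composite image or a single operating
    system domain) is projected onto the given screen in the given scene."""
    return ROUTES[scene][screen]

source = select_path("driving", "main_screen")  # -> "composite"
```

Keeping the routing declarative lets the system control domain switch paths at runtime (e.g. driving vs. parked) without touching the front-end or back-end drivers.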
9. An electronic device, the electronic device comprising:
a system-on-chip (SOC) comprising a system control domain and a plurality of operating system domains, wherein the system control domain is integrated with a virtual back-end driver and a graphic synthesizer driver and is connected with a plurality of display screens, and each operating system domain is integrated with a virtual front-end driver and is connected with the system control domain; and
a memory and a processor communicatively coupled to the at least one SOC, wherein
the memory stores computer-executable instructions executable by the at least one SOC; and
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the method of any one of claims 1-7.
10. A computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the method of any one of claims 1-7.
11. A computer program product comprising a computer program which, when executed by a processor, implements the method of any of claims 1-7.
CN202510494851.4A 2025-04-18 2025-04-18 A virtualized screen projection method, device, equipment, storage medium and program product Pending CN120406886A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510494851.4A CN120406886A (en) 2025-04-18 2025-04-18 A virtualized screen projection method, device, equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510494851.4A CN120406886A (en) 2025-04-18 2025-04-18 A virtualized screen projection method, device, equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN120406886A true CN120406886A (en) 2025-08-01

Family

ID=96513285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510494851.4A Pending CN120406886A (en) 2025-04-18 2025-04-18 A virtualized screen projection method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN120406886A (en)

Similar Documents

Publication Publication Date Title
AU2021339341B2 (en) Augmented reality-based display method, device, and storage medium
CN110347464B (en) User interface rendering method, device and medium of application program and electronic equipment
CN109388467A (en) Map information display method, device, computer equipment and storage medium
CN112644276B (en) Screen display method, vehicle and computer storage medium
CN114489543B (en) Multi-screen processing method and device for intelligent cabin, chip, vehicle and medium
JP7601497B2 (en) Page switching display method, device, storage medium and electronic device
CN114461064B (en) Virtual reality interaction methods, devices, equipment and storage media
CN106598514B (en) Method and system for switching virtual reality mode in terminal equipment
CN113313802B (en) Image rendering method, device and equipment and storage medium
CN111708587A (en) A device and method for realizing multi-mode application of vehicle air conditioning screen display
CN116088784B (en) Image projection method, device, electronic equipment, chip, storage medium and vehicle
JP2021190098A (en) Image preprocessing method, device, electronic apparatus, and storage medium
CN115243107A (en) Method, device, system, electronic equipment and medium for playing short video
JP4268142B2 (en) Drawing device
CN108196909A (en) A kind of implementation method based on the reversing of Android onboard system kernel
CN120406886A (en) A virtualized screen projection method, device, equipment, storage medium and program product
CN115767469B (en) Interaction system and method for vehicle-mounted terminal and mobile terminal, vehicle-mounted terminal and medium
WO2024044936A1 (en) Composition for layer roi processing
CN117557701B (en) Image rendering method and electronic device
CN115578299A (en) Image generation method, device, equipment and storage medium
CN113051032A (en) Application picture processing method, device and system
CN121340912A (en) Display method, intelligent cabin, vehicle, storage medium and computer program product
US20250371647A1 (en) Multi-operating system-based graphics display method and related apparatus
CN119025206A (en) Graphics display method, device, equipment and storage medium
CN119621234A (en) A performance optimization method and system based on virtual graphics card

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination