CN120803570A - Interface generation device and interface generation method - Google Patents

Interface generation device and interface generation method

Info

Publication number
CN120803570A
CN120803570A (application CN202410420835.6A)
Authority
CN
China
Prior art keywords
animation effect
task
layer
subtask
effect processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410420835.6A
Other languages
Chinese (zh)
Inventor
袁博
郭鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202410420835.6A priority Critical patent/CN120803570A/en
Priority to PCT/CN2025/082147 priority patent/WO2025214054A1/en
Publication of CN120803570A publication Critical patent/CN120803570A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the application disclose an interface generation device and an interface generation method. The interface generation device comprises a processor, a graphics processor, and an animation effect accelerator. The processor is configured to determine a first task, where the first task comprises a first subtask and a second subtask, the first subtask comprises a first graphics rendering task, and the second subtask comprises an animation effect processing task. The graphics processor is configured to acquire the first subtask and perform graphics rendering based on the first subtask to obtain a first layer. The animation effect accelerator is configured to acquire the animation effect processing task in the second subtask and perform animation effect processing based on the animation effect processing task to obtain a second layer, where the animation effect processing comprises at least one of blurring or rounded-corner processing. The embodiments of the application can improve interface generation efficiency.

Description

Interface generation device and interface generation method
Technical Field
The present application relates to the field of computer technologies, and in particular, to an interface generating device and an interface generating method.
Background
With the development of electronic technology, electronic devices play an ever larger role in users' daily lives. As screen parameters such as resolution and size increase, an electronic device can display more content. Before displaying an application's interface, however, the device must spend processor computing resources and memory storage resources to generate it. Current interface design trends toward spatial perception, depth perception, light and shadow, and the like to achieve high-quality interfaces. Under this trend, the electronic device must spend even more processor computing resources and memory storage resources to render the interfaces of multiple applications or multiple windows, which increases processor power consumption and reduces interface generation efficiency.
Therefore, an interface generation device and an interface generation method that improve interface generation efficiency are needed.
Disclosure of Invention
Embodiments of the application provide an interface generation device and an interface generation method for improving interface generation efficiency.
In a first aspect, an embodiment of the application provides an interface generation device comprising a processor, a graphics processor, and an animation effect accelerator. The processor is configured to determine a first task, where the first task comprises a first subtask and a second subtask, the first subtask comprises a first graphics rendering task, and the second subtask comprises an animation effect processing task. The graphics processor is configured to acquire the first subtask and perform graphics rendering based on the first subtask to obtain a first layer. The animation effect accelerator is configured to acquire the animation effect processing task in the second subtask and perform animation effect processing based on the animation effect processing task to obtain a second layer, where the animation effect processing comprises at least one of blurring or rounded-corner processing.
In the embodiments of the application, compared with a conventional interface generation device, an animation effect accelerator is added, and the processor, the graphics processor, and the animation effect accelerator cooperate to render the interface, improving rendering efficiency. In the conventional approach, the processor issues the entire interface generation task to the graphics processor, which executes both the graphics rendering task and the animation effect processing task; because animation effect processing involves a complex overall flow, the processor and graphics processor execute many instructions, power consumption is high, and the performance risk is large. In the application, the processor therefore determines a first task (i.e., an interface generation task) together with its first subtask and second subtask, where the first subtask indicates rendering a clear layer and the second subtask indicates rendering a layer after animation effect processing. The first subtask can be executed by the graphics processor and the second subtask by the animation effect accelerator; that is, the graphics processor only needs to execute the graphics rendering task and not the animation effect processing task, and the two can work in parallel. This reduces the workload of the processor and graphics processor and improves interface generation efficiency.
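The first-aspect split described above can be sketched in a few lines: the processor divides the first task into a plain rendering subtask for the graphics processor and an animation-effect subtask for the accelerator, and the two run concurrently. This is a hypothetical illustration only; all function and field names are ours, not from the patent, and Python threads stand in for hardware units.

```python
from concurrent.futures import ThreadPoolExecutor

def gpu_render(subtask):
    # Stand-in for graphics rendering: produce a "clear" first layer.
    return {"layer": "first", "content": subtask["draw_ops"]}

def aae_process(subtask):
    # Stand-in for the animation effect accelerator: keep only the
    # effects the patent names (blurring, rounded corners).
    effects = [e for e in subtask["effects"] if e in ("blur", "rounded_corner")]
    return {"layer": "second", "applied": effects}

def generate_interface(first_task):
    # The two subtasks execute in parallel, mirroring GPU/AAE concurrency.
    with ThreadPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(gpu_render, first_task["first_subtask"])
        f2 = pool.submit(aae_process, first_task["second_subtask"])
        return f1.result(), f2.result()

task = {
    "first_subtask": {"draw_ops": ["rect", "text"]},
    "second_subtask": {"effects": ["blur", "rounded_corner"]},
}
layer1, layer2 = generate_interface(task)
```

The point of the sketch is the shape of the split, not the arithmetic: neither unit ever sees the other's subtask.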
In some embodiments, the animation effect processing task indicates a layer to be processed, and the animation effect accelerator is specifically configured to perform animation effect processing on the layer to be processed based on the animation effect processing task to obtain the second layer.
In this embodiment, the animation effect accelerator may acquire the second subtask, read the to-be-processed layer stored in memory based on the animation effect processing task in the second subtask, and perform animation effect processing on it, such as rounded-corner processing and/or blurring. As described above, the graphics processor only needs to execute the graphics rendering task while the animation effect accelerator executes the animation effect processing task; the two can work in parallel, which reduces the workload of the processor and graphics processor and improves interface generation efficiency.
In some embodiments, the second subtask further comprises a second graphics rendering task. The graphics processor is further configured to acquire the second graphics rendering task in the second subtask and perform graphics rendering based on it to obtain a third layer, and the animation effect accelerator is specifically configured to perform animation effect processing on the third layer based on the animation effect processing task to obtain the second layer.
In this embodiment, after the graphics processor performs graphics rendering based on the second graphics rendering task to obtain the third layer, the animation effect accelerator may acquire the third layer and perform animation effect processing on it, such as rounded-corner processing and/or blurring, to obtain the second layer. The processor, graphics processor, and animation effect accelerator cooperate to render the interface: the processor determines the first task and the graphics rendering and animation effect processing tasks it contains, the graphics processor executes only the graphics rendering tasks, and the animation effect accelerator executes the animation effect processing tasks. Because the graphics processor and the animation effect accelerator can work in parallel, the workload of the processor and graphics processor is reduced and interface generation efficiency is improved.
In some embodiments, the interface generation device further comprises a task scheduler. The processor is further configured to send a first dependency relationship to the task scheduler, where the first dependency relationship indicates the dependency between the second graphics rendering task and the animation effect processing task when executing the second subtask, and the task scheduler is configured to schedule the graphics processor to execute the second graphics rendering task and schedule the animation effect accelerator to execute the animation effect processing task according to the first dependency relationship.
In this embodiment, the processor may send the first dependency relationship to the task scheduler. After receiving it, the task scheduler can store the first dependency relationship and, based on it, schedule the graphics processor to execute the second graphics rendering task to obtain the third layer and schedule the animation effect accelerator to execute the animation effect processing task to obtain the second layer. This avoids the long execution time, heavy load, and high complexity of having the processor itself issue the tasks, improving the task issuing speed and interface generation efficiency.
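The scheduling behaviour just described can be modelled as a tiny dependency table: the scheduler records that the animation effect task depends on the second rendering task, and only dispatches a task once its prerequisite has produced a result. This is a minimal sketch under our own naming; the real scheduler is a hardware unit driven by event notifications, not a Python class.

```python
class TaskScheduler:
    """Toy model: one prerequisite per task, results kept by name."""

    def __init__(self):
        self.deps = {}       # task name -> prerequisite task name
        self.results = {}    # task name -> produced layer

    def register_dependency(self, task, prerequisite):
        self.deps[task] = prerequisite

    def run(self, name, fn, *args):
        prereq = self.deps.get(name)
        if prereq is not None and prereq not in self.results:
            # Dependency not satisfied yet: refuse to dispatch.
            raise RuntimeError(f"{name} scheduled before {prereq} finished")
        self.results[name] = fn(*args)
        return self.results[name]

ts = TaskScheduler()
# "second render -> animation effect" mirrors the first dependency relationship.
ts.register_dependency("animation_effect", "second_render")

third_layer = ts.run("second_render", lambda: {"pixels": "third layer"})
second_layer = ts.run("animation_effect",
                      lambda layer: {"pixels": layer["pixels"], "blurred": True},
                      third_layer)
```

Reversing the two `run` calls would raise, which is exactly the ordering guarantee the dependency relationship exists to provide.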
In some embodiments, the interface generation device further comprises a display subsystem configured to receive the first layer and the second layer and compose them into an image to be displayed.
In this embodiment, the display subsystem can acquire multiple layers simultaneously, superimpose them on line to generate the image to be displayed, and then send the image to a display peripheral, so that the final dynamic interactive picture is shown on the screen. In the conventional approach, the graphics processor must superimpose the layers itself, and the display subsystem sends the resulting single layer to the display peripheral, which increases the graphics processor's power consumption and the memory read/write power consumption. Because the display subsystem here acquires multiple layers at once and superimposes them on line to obtain the image to be displayed, the power consumption of the graphics processor can be reduced.
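Layer superposition of this kind is, per pixel, the classic source-over blend. The patent does not specify the blend formula, so the sketch below assumes straight (non-premultiplied) alpha, out = src*a + dst*(1-a), applied bottom layer first:

```python
def blend_pixel(src, dst, alpha):
    # Source-over with straight alpha: out = src*a + dst*(1 - a).
    return tuple(round(s * alpha + d * (1 - alpha)) for s, d in zip(src, dst))

def compose(layers):
    """layers: list of (rgb_color, alpha), bottom-most first."""
    out = (0, 0, 0)
    for color, alpha in layers:
        out = blend_pixel(color, out, alpha)
    return out

# An opaque first (clear) layer under a half-transparent dark second layer,
# e.g. a blurred scrim produced by the animation effect accelerator:
final = compose([((200, 100, 50), 1.0), ((0, 0, 0), 0.5)])  # -> (100, 50, 25)
```

A hardware display subsystem does this per pixel in the scan-out path, which is why the layers never need to be written back through the GPU.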
In some embodiments, the interface generation device comprises a chip.
In this embodiment, the interface generation device comprises a chip. Compared with a conventional interface generation device, an animation effect accelerator is added, and the processor, graphics processor, and animation effect accelerator cooperate to render the interface, improving rendering efficiency.
In a second aspect, an embodiment of the application provides an interface generation method, comprising: determining, by a processor, a first task, where the first task comprises a first subtask and a second subtask, the first subtask comprises a first graphics rendering task, and the second subtask comprises an animation effect processing task; performing, by a graphics processor, graphics rendering based on the first subtask to obtain a first layer; and performing, by an animation effect accelerator, animation effect processing based on the animation effect processing task in the second subtask to obtain a second layer, where the animation effect processing comprises at least one of blurring or rounded-corner processing.
In some embodiments, the animation effect processing task indicates a layer to be processed, and performing, by the animation effect accelerator, animation effect processing based on the animation effect processing task in the second subtask to obtain the second layer comprises: performing, by the animation effect accelerator, animation effect processing on the layer to be processed based on the animation effect processing task to obtain the second layer.
In some embodiments, the second subtask further comprises a second graphics rendering task, and the method further comprises: acquiring, by the graphics processor, the second graphics rendering task in the second subtask and performing graphics rendering based on it to obtain a third layer. Performing, by the animation effect accelerator, animation effect processing based on the animation effect processing task in the second subtask to obtain the second layer comprises: performing, by the animation effect accelerator, animation effect processing on the third layer based on the animation effect processing task to obtain the second layer.
In some embodiments, the interface generation device further comprises a task scheduler, and the method further comprises: sending, by the processor, a first dependency relationship to the task scheduler, where the first dependency relationship indicates the dependency between the second graphics rendering task and the animation effect processing task when executing the second subtask; and scheduling, by the task scheduler according to the first dependency relationship, the graphics processor to execute the second graphics rendering task and the animation effect accelerator to execute the animation effect processing task.
In some embodiments, the interface generation device further comprises a display subsystem, and the method further comprises: receiving, by the display subsystem, the first layer and the second layer, and composing the first layer and the second layer into an image to be displayed.
In a third aspect, an embodiment of the present application provides an electronic device comprising the apparatus of any one of the first aspects and a memory, where the memory is configured to store computer program code comprising computer instructions, and the apparatus invokes and executes the computer instructions.
In a fourth aspect, the present application provides a computer storage medium storing a computer program which, when executed by a processor, implements the method of any one of the second aspects.
In a fifth aspect, the present application provides a chip system comprising a processor configured to support an electronic device in implementing the functions involved in the second aspect, for example, generating or processing information involved in the interface generation method described above. In one possible design, the chip system further includes a memory for holding the program instructions and data necessary for the electronic device. The chip system may consist of a chip, or may include a chip and other discrete devices.
In a sixth aspect, the present application provides a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of any of the second aspects above.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 2 illustrates a software architecture of an electronic device according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an interface generating device according to an embodiment of the present application.
Fig. 4 is a schematic diagram of dividing a target rendering tree into a plurality of rendering sub-trees according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a determining target rendering tree according to an embodiment of the present application.
Fig. 6 is a schematic workflow diagram of an interface generating device according to an embodiment of the present application.
Fig. 7 is a schematic diagram of another interface generating device according to an embodiment of the present application.
Fig. 8 is a schematic diagram of layer stacking according to an embodiment of the present application.
Fig. 9 is a schematic diagram of a fillet treatment according to an embodiment of the present application.
Fig. 10 is a schematic diagram of another layer stacking according to an embodiment of the present application.
Fig. 11 is a schematic workflow diagram of another interface generating device according to an embodiment of the present application.
Fig. 12 is a schematic flow chart of an interface generating device according to an embodiment of the present application.
Fig. 13 is a flowchart of an interface generating method according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims and drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The term "user interface (UI)" in the following embodiments of the present application is a media interface for interaction and information exchange between an application or operating system and a user; it converts between the internal form of information and a form acceptable to the user. A user interface is source code written in a specific computer language such as Java or the extensible markup language (XML); the interface source code is parsed and rendered on the electronic device and finally presented as content the user can recognize. A commonly used presentation form of a user interface is the graphical user interface (GUI), a graphically displayed user interface related to computer operations. It may consist of visual interface elements such as text, icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets displayed on the display of the electronic device.
For ease of understanding, related terms and related concepts related to the embodiments of the present application are described below. The terminology used in the description of the embodiments of the application herein is for the purpose of describing particular embodiments of the application only and is not intended to be limiting of the application.
The interface serves as a media interface for interaction and information exchange between an application and the user. Each time a vertical synchronization signal arrives, the electronic device needs to generate the interface of the foreground application. The frequency of the vertical synchronization signal is related to the refresh rate of the screen of the electronic device; for example, the two may be equal.
That is, each time the electronic device refreshes the content displayed on the screen, it needs to generate an interface for the foreground application, so that the newly generated interface is presented to the user at the moment of screen refresh.
The interface displayed by the electronic device may include the interfaces of one or more applications; that is, the electronic device needs to generate an interface for each of one or more applications and composite them to obtain the composited interface shown on the screen.
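The vertical-synchronization relationship above fixes the time budget for all of this work: if the signal frequency equals the refresh rate, the device has 1000 ms / refresh_rate to generate and composite each frame. A small illustrative calculation (the function name is ours):

```python
def frame_budget_ms(refresh_rate_hz):
    # Per-frame interface generation budget when vsync rate == refresh rate.
    return 1000.0 / refresh_rate_hz

budget_60 = frame_budget_ms(60)    # about 16.67 ms per frame
budget_120 = frame_budget_ms(120)  # about 8.33 ms per frame
```

This is why higher refresh rates sharpen the efficiency problem the application targets: halving the budget doubles the pressure on the processor and graphics processor.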
An exemplary electronic device provided in the following embodiments of the present application will first be described.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include, but is not limited to, various smart display devices such as a smartphone, a smart wearable device (e.g., a smart watch), a tablet computer, and a personal digital assistant. The electronic device may incorporate a chip or chipset, or a circuit board carrying a chip or chipset, which can operate under the necessary software driver. The chip or chipset, or the circuit board on which it is mounted, may include a processor 101, a graphics processor 102, an animation effect acceleration engine 103, a task scheduler 104, a display subsystem module 105, and an internal memory 106; it may further include interfaces, peripherals, and the like not shown in fig. 1. The processor 101, the graphics processor 102, the animation effect acceleration engine 103, the task scheduler 104, the display subsystem module 105, and the internal memory 106 may be connected by a bus.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device. In other embodiments of the application, the electronic device may include more or fewer components than illustrated, some components may be combined or split, or the components may be arranged differently.
The processor 101 (central processing unit, CPU) may run an operating system, a file system (e.g., a flash file system), application programs, and the like, control the hardware and software elements connected to it, and process data and perform operations. The processor 101 may load instructions or data stored in an external memory (not shown in fig. 1) into the internal memory 106, call the instructions or data into the processor 101 for operation, temporarily store the result in the internal memory 106 when the operation is completed, and store instructions or data requiring long-term storage into the external memory through the controller. Optionally, a memory may be provided in the processor 101 for storing instructions and data. In some embodiments, this memory is a cache, which holds instructions or data that the processor 101 has just used or uses cyclically. If the processor 101 needs an instruction or datum again, it can fetch it directly from the cache, avoiding repeated accesses, reducing latency, and thus improving system efficiency. In some embodiments, a rendering service (Render Service) and a plurality of applications, which may be foreground applications, run on the processor 101. Each of these applications may generate a rendering tree; the Render Service may fuse the rendering trees into a target rendering tree, divide the target rendering tree according to the displayed layer order into a plurality of rendering subtrees, and then render them. In the embodiment of the application, this unified rendering approach can improve interface generation efficiency.
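The fuse-then-split flow can be pictured with toy data: each application contributes a rendering tree tagged with its z-order, the Render Service fuses them under one root, and the target tree is split back into per-layer rendering subtrees for parallel rendering. The data shapes and names below are invented for illustration only; the patent's actual rendering-tree structure is shown in figs. 4 and 5.

```python
def fuse(render_trees):
    # Target rendering tree: one root whose children are the per-app
    # trees, ordered bottom-to-top by z (displayed layer order).
    return {"root": sorted(render_trees, key=lambda t: t["z"])}

def split_by_layer(target_tree):
    # One rendering subtree per displayed layer, bottom-most first;
    # each subtree can then be rendered in parallel on the GPU.
    return list(target_tree["root"])

apps = [
    {"app": "launcher",   "z": 0, "nodes": ["wallpaper"]},
    {"app": "status_bar", "z": 2, "nodes": ["clock", "battery"]},
    {"app": "mail",       "z": 1, "nodes": ["list", "toolbar"]},
]
subtrees = split_by_layer(fuse(apps))
order = [t["app"] for t in subtrees]   # bottom-to-top layer order
```

The useful property is that the split is by layer, so each subtree maps one-to-one onto a layer the display subsystem can later composite.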
The graphics processor 102 (graphics processing unit, GPU) is a microprocessor dedicated to drawing operations on personal computers, workstations, game consoles, and some mobile devices (e.g., tablet computers and smartphones). In some embodiments, the processor 101 may invoke the graphics processor 102 to render multiple rendering subtrees in parallel to obtain multiple sets of layer data, accelerating interface rendering and improving interface generation efficiency. The graphics processor 102 may be located inside or outside the processor 101; this is not specifically limited in the present disclosure.
The animation effect acceleration engine 103 (Animation Acceleration Engine, AAE) is a hardware module for accelerating the generation of animation effects. The animation effect acceleration engine 103 may integrate a downsampling (Down Scale) operator, an upsampling (Up Scale) operator, a blur operator, a gamut conversion (CSC) operator, a Tone Mapping/Inverse Tone Mapping operator, a rounded corner generation operator (Rounded Corner Generator, RCG), a layer overlay operator (OV), a color enhancement operator (Color Enhancement, CE), a read memory module (Read Direct Memory Access, RDMA), a write memory module (Write Direct Memory Access, WDMA), on-chip RAM, and the like. In the present application, the operators in the animation effect acceleration engine 103 can operate in a pipelined manner so that animation effect generation is accelerated. In some embodiments, the animation effect acceleration engine 103 may apply one or more effects, such as blurring or rounded corners, to a layer.
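To make the operator pipeline concrete, here is a toy chain over a tiny grayscale "layer" (a list of rows): downsample by 2, apply a 3-tap horizontal box blur, then a rounded-corner step that masks out a corner pixel. The operator names mirror those listed above, but the arithmetic is a deliberately simplified stand-in, not the AAE's actual algorithms.

```python
def downsample(img):
    # Down Scale: keep every second row and column.
    return [row[::2] for row in img[::2]]

def box_blur_h(img):
    # Blur: 3-tap horizontal box filter with edge replication.
    out = []
    for row in img:
        padded = [row[0]] + row + [row[-1]]
        out.append([(padded[i] + padded[i + 1] + padded[i + 2]) // 3
                    for i in range(len(row))])
    return out

def rounded_corner(img):
    # RCG stand-in: zero the top-left corner pixel as a 1-pixel "radius".
    img = [row[:] for row in img]
    img[0][0] = 0
    return img

def run_pipeline(img, ops):
    for op in ops:          # operators run back-to-back, mimicking the
        img = op(img)       # pipelined hardware datapath
    return img

layer = [[90, 90, 90, 90],
         [90, 90, 90, 90],
         [90, 90, 90, 90],
         [90, 90, 90, 90]]
result = run_pipeline(layer, [downsample, box_blur_h, rounded_corner])
```

In the hardware, each stage streams pixels to the next without a round trip through memory, which is where the acceleration over a GPU shader pass comes from.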
The task scheduler 104 (Task Scheduler, TS) may be used to schedule the graphics processor 102, the animation effect acceleration engine 103, and other modules. In the application, according to the configured task dependency rules, the task scheduler 104 can receive event notifications sent by the graphics processor 102 or the animation effect acceleration engine 103, send corresponding event notifications to the chip modules that need to execute subsequent tasks (such as the graphics processor 102 or the animation effect acceleration engine 103), and trigger those tasks. The graphics processor 102 and the animation effect acceleration engine 103 each integrate a hardware interface that connects to the task scheduler 104, through which they can send event notifications to each other. When software issues a task, the event notification and the corresponding task are configured into these hardware interfaces. Event notification between the task scheduler 104 and the graphics processor 102 or animation effect acceleration engine 103 may be implemented via a message bus, on-chip wiring, or the like.
The display subsystem module 105 (Display Subsystem, DSS) is a hardware module that may be responsible for retrieving pixel data from the internal memory 106 and sending it to a display peripheral, such as an LCD/OLED display screen or monitor. The display subsystem module 105 hardware may obtain pixel data and perform pixel operations such as color conversion and composition, and may also be responsible for encoding raw pixel data into standard display signals, such as HDMI or MIPI DPI, or display signal formats defined by DP/eDP, etc.
The internal memory 106 is typically a volatile memory that loses its stored contents when powered down, and is also referred to as memory (Memory) or main memory. The internal memory 106 in the present application includes readable and writable running memory, which is used for temporarily storing operation data of the processor 101, the graphics processor 102, and the animation effect acceleration engine 103, for exchanging data with an external memory or other external storage, and as a storage medium for temporary data of the operating system or other running programs. For example, an operating system running on the processor 101 transfers data to be operated on from the internal memory 106 to the processor 101 for computation, and when the computation is completed, the processor 101 writes the result back to the internal memory 106.
The internal memory 106 may include one or more of dynamic random access memory (Dynamic Random Access Memory, DRAM), static random access memory (Static Random Access Memory, SRAM), synchronous dynamic random access memory (Synchronous Dynamic Random Access Memory, SDRAM), and the like. DRAM further includes double data rate synchronous dynamic random access memory (Double Data Rate Synchronous Dynamic Random Access Memory, DDR SDRAM), abbreviated DDR, second-generation double data rate synchronous dynamic random access memory (DDR2), third-generation double data rate synchronous dynamic random access memory (DDR3), fourth-generation low-power double data rate synchronous dynamic random access memory (Low Power Double Data Rate 4, LPDDR4), fifth-generation low-power double data rate synchronous dynamic random access memory (Low Power Double Data Rate 5, LPDDR5), and the like.
It should be understood that the structure of the electronic device in fig. 1 is merely some exemplary implementations provided by the embodiments of the present application, and the structure of the electronic device in the embodiments of the present application includes, but is not limited to, the above implementation.
The following describes a software architecture of an electronic device provided by an embodiment of the present application.
Fig. 2 illustrates a software architecture of an electronic device according to an embodiment of the present application.
The software system of the electronic device may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. As shown in fig. 2, taking a software architecture as a hierarchical architecture as an example, a software architecture of an electronic device is illustrated.
The layered architecture divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the software system is divided into four layers, which are, from top to bottom, an application layer, a framework layer, a hardware abstraction layer, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include desktop, status bar, settings, phone calls, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The framework layer includes some predefined functions.
As shown in FIG. 2, the framework layer may include a window manager, a content provider, a view system, a UI library, a rendering service (Render Service), a memory allocation API, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may include one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The UI library is a common library of UI components that helps designers and developers work more efficiently and improves the professionalism and quality of the design.
A rendering service (Render Service) may be used to render the interfaces of applications: for example, it may receive a rendering tree sent by an application and then render the corresponding layer based on that rendering tree. The rendering service can also control the composition of multiple layers, and the like.
It should be noted that the rendering service is a software function module and corresponds to modules with different names in different embodiments or systems. For example, in the Android system, the rendering service corresponds to Android's SurfaceFlinger service, and in Harmony OS Next, it corresponds to the Render Service of Harmony OS Next.
The hardware abstraction layer (Hardware Abstraction Layer, HAL) is an interface layer located between the operating system kernel and upper-layer software, and its purpose is to abstract the hardware. The hardware abstraction layer is an abstraction interface over the device kernel drivers, providing higher-level Java API frameworks with application programming interfaces for accessing the underlying devices. The HAL may provide a standard interface that exposes device hardware functionality to the higher-level Java API framework. The HAL comprises a plurality of library modules, such as a camera HAL, an audio HAL, etc., where each library module implements an interface for a particular type of hardware component. When a framework-layer API requires access to the hardware of the device, the operating system loads the library module for that hardware component.
The kernel layer is a layer between hardware and software. The kernel layer contains at least the kernel mode driver (Kernel Mode Driver, KMD) of the display subsystem, the task scheduler KMD, the animation effect accelerator KMD, and the graphics processor KMD.
It will be appreciated that the software architecture of the electronic device in fig. 2 is merely some exemplary implementations provided by embodiments of the present application, including but not limited to the above implementations.
Referring to fig. 3, fig. 3 is a schematic diagram of an interface generating device according to an embodiment of the present application, where the interface generating device 20 may include a processor 201, a graphics processor 202, and an animation effect accelerator 203. Processor 201 in FIG. 3 may include some or all of the functionality of processor 101 in FIG. 1 described above, graphics processor 202 in FIG. 3 may include some or all of the functionality of graphics processor 102 in FIG. 1 described above, and animation effect accelerator 203 in FIG. 3 may include some or all of the functionality of animation effect acceleration engine 103 in FIG. 1 described above. The processor 201, the graphics processor 202, and the animation effect accelerator 203 may be connected by a bus.
The processor 201 is configured to determine a first task.
Specifically, the first task may be used to instruct rendering of the interface to be displayed of the current electronic device. The first task may include a plurality of subtasks, and the plurality of subtasks may include a first subtask and a second subtask. The first subtask comprises a first graphics rendering task, which may be used to indicate that a clear layer is rendered, and the second subtask comprises an animation effect processing task, which may be used to indicate that a layer after animation effect processing is rendered.
Alternatively, the processor 201 may have a rendering service and a plurality of applications running thereon. The rendering service may generate a target rendering tree based on the currently-to-be-displayed interfaces of the plurality of applications. The target rendering tree records all information for generating a frame interface of a plurality of application programs, and comprises a plurality of rendering nodes, wherein each rendering node in the plurality of rendering nodes can comprise rendering attributes and a drawing instruction list. Alternatively, the plurality of applications may be foreground applications.
Optionally, the rendering service may generate a first task based on the target rendering tree, where the first task is used to indicate to render the target rendering tree to obtain the interface to be displayed.
Optionally, the rendering service may divide the first task into a first subtask and a second subtask according to the task type, where the first subtask is used to indicate that a clear layer is rendered, and the second subtask is used to indicate that a layer after the animation effect is processed is rendered.
Optionally, the rendering service may divide the target rendering tree into a plurality of rendering subtrees, each of which may correspond to one of the first tasks. As shown in fig. 4, fig. 4 is a schematic diagram of dividing a target rendering tree into a plurality of rendering subtrees according to an embodiment of the present application. Assume that the main node of the target rendering tree is interface 1, and two applications, application 1 and application 2, need to be displayed on interface 1, where application 1 includes rendering node 1.1, rendering node 1.2, rendering node 1.3, and rendering node 1.4, and application 2 includes rendering node 2.1 and rendering node 2.2. To achieve the desired interface display effect, the layers rendered by rendering nodes 1.2, 1.3, and 1.4 need animation effect processing, such as rounded-corner processing and/or blurring, while the layers rendered by rendering nodes 1.1, 2.1, and 2.2 need no animation effect processing and only need to be rendered as clear layers. Accordingly, rendering nodes 1.2, 1.3, and 1.4 may be divided into one rendering subtree, and the other rendering nodes into another rendering subtree. The rendering service may divide the first task into a first subtask and a second subtask according to task type: the first subtask may be used to indicate rendering subtree 2, and the second subtask may be used to indicate rendering subtree 1.
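The split described above can be sketched as a simple partition over render nodes. The dict-based node representation and the `blur`/`rounded` attribute names are assumptions for illustration, not the actual Render Service data model:

```python
# Hypothetical sketch: partition render nodes into a subtree needing
# animation effect processing (blur / rounded corners) and a subtree of
# clear layers, mirroring the fig. 4 example.

def split_render_tree(nodes):
    effect_subtree = [n for n in nodes if n.get("blur") or n.get("rounded")]
    clear_subtree = [n for n in nodes if n not in effect_subtree]
    return clear_subtree, effect_subtree

nodes = [
    {"id": "1.1"},                                  # clear layer only
    {"id": "1.2", "blur": True},
    {"id": "1.3", "rounded": True},
    {"id": "1.4", "blur": True, "rounded": True},
    {"id": "2.1"},
    {"id": "2.2"},
]
clear, effect = split_render_tree(nodes)
```

With the fig. 4 node set, the clear subtree holds nodes 1.1, 2.1, and 2.2 and the effect subtree holds nodes 1.2, 1.3, and 1.4, matching the division in the text.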
Optionally, the rendering service may obtain the target rendering tree from a plurality of rendering trees respectively generated by a plurality of applications, where each application correspondingly generates one rendering tree. Referring to fig. 5, fig. 5 is a schematic diagram of target rendering tree determination provided by an embodiment of the present application. A plurality of applications run on the processor 201, and each of them may generate a rendering tree. Assume that application 1 and application 2 currently run on the processor 201: application 1 may generate rendering tree 1 according to its current content to be displayed, and rendering tree 1 may include a plurality of rendering nodes, such as rendering node 1.1, rendering node 1.2, rendering node 1.3, and rendering node 1.4; application 2 may generate rendering tree 2 according to its current content to be displayed, and rendering tree 2 may include a plurality of rendering nodes, such as rendering node 2.1 and rendering node 2.2. The rendering service may obtain rendering tree 1 and rendering tree 2, and then fuse them to obtain the target rendering tree. In the embodiment of the present application, applications do not need to render independently; unified rendering is performed by the rendering service, which improves the rendering efficiency of the interface.
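The fusion described above amounts to attaching each application's rendering tree under a common root node for the interface. A minimal sketch, with the dict representation being an assumption:

```python
# Minimal sketch of render-tree fusion: each application's rendering tree
# becomes a child of one root node representing the interface (fig. 5).

def fuse_render_trees(root_name, trees):
    # attach each application's rendering tree under one root node
    return {"node": root_name, "children": list(trees)}

tree1 = {"node": "app1", "children": ["1.1", "1.2", "1.3", "1.4"]}
tree2 = {"node": "app2", "children": ["2.1", "2.2"]}
target = fuse_render_trees("interface 1", [tree1, tree2])
```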
The graphics processor 202 is configured to obtain a first subtask, and perform graphics rendering based on the first subtask to obtain a first layer.
Specifically, the first layer includes at least one image. When the processor 201 determines the first task, the first subtask in the first task may be assigned to the graphics processor 202 for processing. The graphics processor 202 may be a dedicated graphics processing chip used to accelerate graphics rendering. After rendering on the graphics processor 202 is enabled, the graphics processor 202 may acquire the first subtask and perform graphics rendering based on it to obtain a clear layer, i.e., the first layer, so that the interface display speed can be increased and the burden on the processor 201 can be reduced.
The animation effect accelerator 203 is configured to acquire the animation effect processing task in the second subtask, and perform animation effect processing based on the animation effect processing task to obtain a second layer.
Specifically, the animation effect processing includes at least one of blurring or rounded-corner processing. Blurring may be understood as a process of reducing the sharpness of an image, such as applying a Gaussian blur to an image to obtain a blurred version of it. Blurring effects are widely used in the UI design of electronic devices, for example in the control center pull-down, the notification center pull-down, desktop application start-up, and the like. Rounded-corner processing may be understood as replacing sharp edges in an image with circular arcs, producing a shape with soft edges. Rounded rectangles are widely used in the interactive controls of electronic devices, as they are more pleasing to the human eye. The second layer includes at least one image. When the processor 201 determines the first task, the second subtask in the first task may be assigned to the animation effect accelerator 203 for processing. The animation effect accelerator 203 may be a dedicated animation effect processing chip used to accelerate the generation of graphic animation effects. After animation effect acceleration is enabled, the animation effect accelerator 203 may acquire the second subtask and generate the graphic animation effect based on it, obtaining an image after animation effect processing, i.e., the second layer, so that the interface display speed can be increased and the burden on the processor 201 and the graphics processor 202 can be reduced.
For example, as shown in fig. 6, fig. 6 is a schematic workflow diagram of an interface generating device according to an embodiment of the present application, where a rendering service, an application 1, and an application 2 may run on the processor 201. Application 1 may generate rendering tree 1 and application 2 may generate rendering tree 2. The rendering service may acquire rendering tree 1 and rendering tree 2, and may generate a target rendering tree based on them. Further, the rendering service may determine the first task, and the first subtask and the second subtask in the first task, based on the target rendering tree. Next, the processor 201 may issue the first subtask to the graphics processor 202 for processing, and the second subtask to the animation effect accelerator 203 for processing. The graphics processor 202 obtains the first subtask and generates the first layer based on the graphics rendering task in the first subtask, where the first layer may include at least one image. The animation effect accelerator 203 may obtain the second subtask and generate the second layer based on the animation effect processing task in the second subtask, where the second layer may include at least one image. In other words, the rendering service may split all the controls to be rendered onto different layers, including a foreground layer, a background layer, a video layer, and the like, based on the attribute settings of the controls and the hardware capabilities. The rendering service may issue the layers requiring animation effect acceleration to the hardware animation effect accelerator 203 for processing, and configure the animation effect accelerator 203 to write the processed result into an intermediate layer. The rendering service issues the layers that can be directly rendered to the graphics processor 202 for rendering.
The rendering service issues to the animation effect accelerator 203 the animation effect acceleration tasks that depend on rendering by the graphics processor 202, and the animation effect accelerator 203 writes the processed layers into the designated layers. In the present application, the processor 201, the graphics processor 202, and the animation effect accelerator 203 cooperate to render the interface, so that the display speed of the interface can be improved and the burden on the processor 201 and the graphics processor 202 can be reduced.
In some embodiments, referring to fig. 7, fig. 7 is a schematic diagram of another interface generating apparatus according to an embodiment of the present application. The interface generating apparatus 20 may further include a display subsystem 204, where the display subsystem 204 is configured to receive the first layer and the second layer and compose them into the image to be displayed.
Specifically, the display subsystem 204 is a hardware module. The display subsystem 204 may obtain multiple layers at the same time, superimpose them online to generate the image to be displayed, and then send the image to the display peripheral for display, so that the final dynamic interactive picture is shown on the screen. In the conventional technology, the graphics processor 202 is required to superimpose the multiple layers, after which the display subsystem 204 sends the single composed layer to the display peripheral; this increases the power consumption of the graphics processor 202 and also increases the memory read/write power consumption. In the embodiment of the present application, the display subsystem 204 may acquire multiple layers at the same time and superimpose them online to obtain the image to be displayed, thereby reducing the power consumption of the graphics processor 202.
For example, as shown in fig. 8, fig. 8 is a schematic diagram of a layer overlay provided by an embodiment of the present application. Assume that the graphics processor 202 is configured to render a clear main interface and a clear folder window interface, i.e., the first layer, where the first layer includes two images: the main interface may include icons of a plurality of applications, such as a clock, calendar, gallery, memo, and settings, and the folder window interface may include an application icon for smart home, an application icon for recording, an application icon for the application market, and the like. The animation effect accelerator 203 is configured to perform rounded-corner processing and background blurring on the folder 1 window in the main interface to obtain the second layer, which includes an image in which the background of the folder 1 window is blurred and the edges of the folder 1 window are rounded. After the graphics processor 202 generates the first layer and the animation effect accelerator 203 generates the second layer, the display subsystem 204 may acquire both layers at the same time and fuse them online to obtain the image to be displayed. Further, the display subsystem 204 may send the image to the screen of the electronic device for display; because the folder 1 window is blurred and rounded, the user can clearly notice that a plurality of applications are stored in folder 1.
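The online superposition described above is, at its core, per-pixel alpha blending of layers ordered back to front. A minimal sketch, with the one-value-per-pixel representation and a single per-layer alpha being simplifying assumptions:

```python
# Minimal sketch of layer superposition: blend layers back-to-front with
# straight (non-premultiplied) alpha.  Real DSS hardware blends per pixel
# and per channel; this uses one value per pixel for brevity.

def blend(dst, src, alpha):
    # standard "source over destination" with straight alpha
    return round(src * alpha + dst * (1.0 - alpha))

def compose(layers):
    # layers: list of (pixels, alpha), ordered back (first) to front (last)
    out = list(layers[0][0])
    for pixels, alpha in layers[1:]:
        out = [blend(d, s, alpha) for d, s in zip(out, pixels)]
    return out

# a fully opaque background blended with a half-transparent front layer
image = compose([([100, 100], 1.0), ([200, 0], 0.5)])
```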
In some embodiments, the animation effect processing task is used for indicating the layer to be processed, and the animation effect accelerator 203 is specifically used for performing animation effect processing on the layer to be processed based on the animation effect processing task to obtain the second layer.
Specifically, the layer to be processed may be a layer stored in the memory. The animation effect processing task in the second subtask may be used to indicate the layer to be processed. The animation effect accelerator 203 may acquire the second subtask, read the layer to be processed from the memory based on the animation effect processing task, and perform animation effect processing on it, such as rounded-corner processing and/or blurring. In the embodiment of the present application, the processor 201, the graphics processor 202, and the animation effect accelerator 203 cooperate to render the interface: the processor 201 determines the first task (i.e., the interface generation task) and the first and second subtasks within it, where the first subtask is used to indicate that a clear layer is rendered and the second subtask is used to indicate that a layer after animation effect processing is rendered. Further, the first subtask may be executed by the graphics processor 202 and the second subtask by the animation effect accelerator 203; that is, the graphics processor 202 need only execute graphics rendering tasks and no animation effect processing tasks, and the graphics processor 202 and the animation effect accelerator 203 may work in parallel, so that not only the workload of the processor 201 and the graphics processor 202 may be reduced, but the interface generation efficiency may also be improved.
Optionally, a fillet generation operator (Rounded Corner Generator, RCG) and a layer overlay operator (Overlay, OV) may be included in the animation effect accelerator 203. The RCG operator may generate rounded-corner masks (alpha masks) with different parameter configurations through a hardware-accelerated algorithm, and, in cooperation with the OV operator inside the animation effect accelerator 203, may implement layer-level rounded-corner processing.
Optionally, the parameters of the rounded-corner curve of the RCG operator are configured by software, and the alpha mask is computed and antialiased by hardware; the alpha mask only includes the alpha values of the rounded-corner transition region. For example, as shown in fig. 9, fig. 9 is a schematic diagram of rounded-corner processing provided in an embodiment of the present application: the OV operator may superimpose the alpha mask onto the layer that requires rounded corners, thereby achieving the transparency effect of the rounded-corner region. In the embodiment of the present application, the fillet generation operator is integrated in the animation effect accelerator 203, so that the rounded-corner mask can be generated rapidly, and, in cooperation with the OV operator, layer rounding can be implemented with high efficiency and low power consumption.
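What the RCG operator computes can be illustrated with a per-pixel coverage calculation for one quarter-circle corner: pixels fully inside the arc get alpha 1, pixels fully outside get alpha 0, and pixels on the arc boundary get a fractional alpha, which provides the antialiasing. This is a software sketch only; the 4×4 supersampling factor and the coverage method are assumptions, not the hardware algorithm:

```python
# Sketch of a rounded-corner (top-left) alpha-mask entry: fraction of a
# pixel's sub-samples that fall inside the quarter circle of radius r
# centered at (r, r).  Supersampled coverage approximates antialiasing.

def corner_alpha(x, y, r, samples=4):
    inside = 0
    for i in range(samples):
        for j in range(samples):
            sx = x + (i + 0.5) / samples       # sub-sample center
            sy = y + (j + 0.5) / samples
            if (sx - r) ** 2 + (sy - r) ** 2 <= r * r:
                inside += 1
    return inside / (samples * samples)        # alpha in [0, 1]
```

The OV operator would then multiply the layer's pixels by these alpha values, making the corner region transparent outside the arc and smooth along it.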
Optionally, the animation effect accelerator 203 may include a downsampling (Down Scale) operator and an upsampling (Up Scale) operator. The animation effect accelerator 203 can downsample a layer based on the downsampling operator and then upsample it based on the upsampling operator to blur the layer, which avoids the large amount of computation required when blurring a layer with a full blur algorithm and improves the efficiency of layer blurring.
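The downsample-then-upsample blur described above can be sketched in one dimension: averaging while downscaling discards high-frequency detail, and linear interpolation while upscaling smears what remains, so a sharp edge becomes a gradient without running a convolution kernel. The specific 2× factor and pair-averaging are illustrative assumptions:

```python
# 1D sketch of blur-by-resampling: 2x downsample (pair averaging, assumes
# an even-length input) followed by 2x upsample (linear interpolation).

def downsample(pixels):
    return [(pixels[i] + pixels[i + 1]) / 2 for i in range(0, len(pixels), 2)]

def upsample(pixels):
    out = []
    for i, p in enumerate(pixels):
        nxt = pixels[min(i + 1, len(pixels) - 1)]   # clamp at the edge
        out.append(p)
        out.append((p + nxt) / 2)                   # interpolated sample
    return out

blurred = upsample(downsample([0, 0, 100, 100]))    # hard edge -> gradient
```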
In some embodiments, the second subtask further comprises a second graphics rendering task. The graphics processor 202 is further configured to acquire the second graphics rendering task in the second subtask and perform graphics rendering based on it to obtain a third layer, and the animation effect accelerator 203 is specifically configured to perform animation effect processing on the third layer based on the animation effect processing task to obtain the second layer.
Specifically, the second subtask may include a second graphics rendering task and an animation effect processing task. The second graphics rendering task may be used to indicate that a clear layer, i.e., the third layer, is rendered. After the graphics processor 202 performs graphics rendering based on the second graphics rendering task to obtain the third layer, the animation effect accelerator 203 may obtain the third layer and perform animation effect processing on it, such as rounded-corner processing and/or blurring, to obtain the second layer. In the embodiment of the present application, the processor 201, the graphics processor 202, and the animation effect accelerator 203 cooperate to render the interface: the processor 201 determines the first task (i.e., the interface generation task) and the graphics rendering tasks and animation effect processing tasks it contains. Further, the graphics processor 202 may execute the graphics rendering tasks and the animation effect accelerator 203 may execute the animation effect processing tasks; that is, the graphics processor 202 need only execute graphics rendering tasks and no animation effect processing tasks, and the graphics processor 202 and the animation effect accelerator 203 may work in parallel, so that not only the workload of the processor 201 and the graphics processor 202 may be reduced, but the interface generation efficiency may also be improved.
Optionally, the display subsystem 204 may be configured to receive the first layer, the second layer, and the third layer, and to compose the first layer, the second layer, and the third layer into the image to be displayed.
Optionally, as shown in fig. 10, fig. 10 is a schematic diagram of another layer overlay provided in the embodiment of the present application. Assume that the graphics processor 202 is configured to render a clear folder window interface, i.e., the first layer, which includes one image; the folder window interface may include an application icon for smart home, an application icon for recording, an application icon for the application market, and the like. The memory may store a third layer, which may be the layer obtained after blurring the main interface, where the main interface may include icons of a plurality of applications, such as a clock, calendar, gallery, memo, and settings. The animation effect accelerator 203 is configured to perform rounded-corner processing and background blurring on the folder 1 window region of the third layer, i.e., of the main interface, to obtain the second layer, which includes an image in which the background of the folder 1 window is blurred and the edges of the folder 1 window are rounded. After the first layer, the second layer, and the third layer are obtained, the display subsystem 204 may acquire all three layers at the same time and fuse them online to obtain the image to be displayed. Further, the display subsystem 204 may send the image to the screen of the electronic device for display; because the folder 1 window is blurred and rounded and the background is blurred, the user can clearly notice the content in folder 1.
In some embodiments, referring to FIG. 11, FIG. 11 is a workflow diagram of another interface generating apparatus according to an embodiment of the present application. The interface generating apparatus 20 further includes a task scheduler 205, and the processor 201 is further configured to send a first dependency relationship to the task scheduler 205, where the first dependency relationship is used to indicate the dependency between executing the second graphics rendering task and the animation effect processing task in the second subtask. The task scheduler 205 is configured to schedule, according to the first dependency relationship, the graphics processor 202 to execute the second graphics rendering task and the animation effect accelerator 203 to execute the animation effect processing task.
Specifically, the task scheduler 205 may be a hardware module used to schedule the graphics processor 202 and the animation effect accelerator 203. The processor 201 may send the first dependency relationship to the task scheduler 205. After receiving the first dependency relationship, the task scheduler 205 may store it, and based on it schedule the graphics processor 202 to execute the second graphics rendering task to obtain the third layer, and schedule the animation effect accelerator 203 to execute the animation effect processing task to obtain the second layer. In the embodiment of the present application, the processor 201 is not required to schedule the graphics processor 202 and the animation effect accelerator 203, which avoids the long latency, heavy load, and high complexity that the processor 201 would otherwise incur when issuing the tasks, so that the task issuing speed and hence the interface generation efficiency can be improved.
For example, the task scheduler 205 may receive, according to the task dependency rules, event notifications sent by the graphics processor 202 or the animation effect accelerator 203, and send corresponding event notifications to the chip module that needs to execute the next task, for example the graphics processor 202 or the animation effect accelerator 203, triggering the corresponding task to be executed. The graphics processor 202 and the animation effect accelerator 203 may each integrate a hardware interface that interfaces with the task scheduler 205, through which they can send event notifications to each other. When the processor 201 issues a task, the event notification and the corresponding task may be configured into these hardware interfaces. Event notification between the task scheduler 205 and the graphics processor 202 or the animation effect accelerator 203 may be implemented through a message bus, an on-chip wire connection, or the like, and is not limited to the above description. The task dependency rules of the task scheduler 205 may be defined and implemented in various forms; any form that can implement the above process falls within the scope of the present application, and the specific task dependency description and the implementation of the task scheduler 205 are not limited.
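A software analogue of this dependency-driven scheduling might look like the following sketch. The real task scheduler 205 is hardware; the class, event names, and task names here are invented purely for illustration: each task declares the completion events it waits on, and a notification releases any task whose dependencies are now all satisfied.

```python
# Hypothetical sketch of dependency-driven task dispatch: tasks wait on
# completion events from the GPU/AAE; a notify() releases ready tasks.

class TaskScheduler:
    def __init__(self):
        self.waiting = []                 # (task_name, unmet dependency events)
        self.log = []                     # stand-in for triggering hardware

    def submit(self, name, deps):
        self.waiting.append((name, set(deps)))
        self._dispatch()

    def notify(self, event):
        # event notification from GPU/AAE: mark the dependency satisfied
        self.waiting = [(n, d - {event}) for n, d in self.waiting]
        self._dispatch()

    def _dispatch(self):
        ready = [n for n, d in self.waiting if not d]
        self.waiting = [(n, d) for n, d in self.waiting if d]
        self.log.extend(ready)

ts = TaskScheduler()
ts.submit("gpu_render_layer3", deps=[])                       # runs at once
ts.submit("aae_blur_layer3", deps=["gpu_render_layer3_done"]) # waits on GPU
ts.notify("gpu_render_layer3_done")                           # releases AAE task
```

This mirrors the fig. 11 flow: the rendering task is dispatched immediately, and the animation effect task is only triggered once the rendering-complete event arrives.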
In some embodiments, interface generation device 20 comprises a chip.
Specifically, the interface generating device 20 includes a chip. Compared with a conventional interface generating device, the animation effect accelerator 203 is added, and the processor 201, the graphics processor 202 and the animation effect accelerator 203 work cooperatively to render the interface, improving interface rendering efficiency.
For example, as shown in fig. 12, fig. 12 is a flowchart of an interface generating apparatus according to an embodiment of the present application. The rendering service obtains the attributes of all layers to be processed, or obtains the attributes of each node from the rendering tree; the relevant attributes include blur, rounded corners, and the layer order and overlay relationships between nodes. The rendering service constructs tasks according to the layer attributes and the supporting capability of the hardware, and analyzes the task dependencies. The rendering service invokes the AAE device driver kit (Device Driver Kit, DDK) and the GPU DDK to build the tasks distributed to the AAE and the GPU, respectively. The rendering service configures the dependencies of the tasks to the TS through the TS DDK, then submits the tasks to the AAE and GPU and waits for the AAE/GPU to complete. The AAE and GPU hardware execute the tasks under the scheduling of the TS hardware, and a hardware interrupt is returned to the CPU after the tasks are completed. The rendering service then calls the DSS and configures it to send all the layers for display in an online overlay manner, and the expected picture is finally displayed on the screen.
Specifically, the Harmony OS Next operating system is taken as an example. Harmony OS Next is an open-source next-generation operating system for mobile terminals (including mobile phones, watches, PCs, etc.) designed and developed by Huawei. It adopts a unified rendering architecture: the UI controls and rendering operations of all applications are submitted to a rendering service (Render Service) for unified rendering. Each frame, the Render Service obtains the render tree to be rendered, along with the windows and their attribute information that require blurring or corner rounding. The Render Service interacts with the AAE UMD interface through bidirectional negotiation, or splits the render tree into different rendering subtrees (sub-render trees) based on the specification capability of the AAE chip. The Render Service splits the render tree according to the optimal chip specification capability: render nodes that can be rendered into the same buffer are placed in the same sub-render tree, and render nodes that can be rendered in parallel are split into different rendering tasks, i.e., different sub-render trees. Taking window blurring as an example, the rendering of the blurred region does not affect the rendering of the uppermost window, so the background region can be split into one sub-render tree and the upper window into another. Next, the Render Service traverses each sub-render tree and converts it into drawing operations (Draw Operation, abbreviated as Draw Op); different sub-render trees may be traversed in parallel. The Render Service analyzes the dependency relationships between rendering tasks, calls the GPU DDK with the Draw Ops corresponding to graphics rendering tasks to construct GPU tasks, and calls the AAE DDK for tasks needing blur or rounded-corner processing to construct AAE tasks.
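The splitting rule described above (nodes that render into one buffer share a sub-render tree; independently renderable nodes go to different sub-render trees) can be sketched as follows. The dictionary-based node representation and the `group` tag are illustrative assumptions, not the Render Service data structures:

```python
# Illustrative sketch of splitting a render tree into sub-render trees.
# The dict-based nodes and the 'group' tag are assumptions for exposition,
# not the Render Service data structures.

def split_render_tree(nodes):
    """Nodes sharing a group render into one buffer (one sub-render tree);
    different groups may be rendered in parallel."""
    subtrees = {}
    for node in nodes:
        subtrees.setdefault(node["group"], []).append(node["name"])
    return subtrees

# Window-blur example: the blurred background region and the clear top
# window are independent, so they land in different sub-render trees.
render_tree = [
    {"name": "wallpaper",  "group": "background"},
    {"name": "app_window", "group": "background"},
    {"name": "top_window", "group": "top"},
]
subtrees = split_render_tree(render_tree)
```

In the window-blur example, the wallpaper and the occluded app window form the background sub-render tree, while the clear top window forms its own, so the two rendering tasks can proceed in parallel.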
The Render Service then configures the dependency relationships of the different tasks to the TS and submits the constructed tasks to the GPU and AAE for execution. The TS receives event notifications sent by the GPU or the AAE according to the configured task dependency rules and sends the corresponding event notifications to the chip module that needs to execute the next task, for example triggering the GPU or the AAE to execute the corresponding task; hardware interfaces integrated inside the GPU and the AAE that interface with the TS may exchange these event notifications. When the software issues a task, the event notification and the corresponding task are configured into those hardware interfaces. Next, the Render Service submits the finally rendered layers for display through the Display HAL layer interface, and the layers are configured by the DSS KMD to the DSS chip module for display. When the frame synchronization signal arrives, the DSS chip module sends the layers to the screen in an online overlay manner, and the animation effect picture is finally displayed on the screen. The AAE chip module may integrate a downsampling (Down Scale) operator, an upsampling (Up Scale) operator, a blur operator, a color gamut conversion (CSC) operator, a tone mapping/inverse tone mapping operator, a rounded corner generator (Rounded Corner Generator, RCG), a layer overlay operator (OV), a color enhancement (Color Enhancement, CE) operator, RDMA, WDMA, on-chip RAM, and the like. The RCG operator of the AAE can generate an alpha mask through a hardened algorithm and, in cooperation with the OV operator inside the AAE, realize picture-level rounded-corner processing.
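As an illustration of the RCG idea (generating an alpha mask for rounded corners that the OV operator then applies during overlay), the sketch below computes such a mask in pure Python. It is a software stand-in under simple assumptions (integer arc centers, binary alpha); the hardened RCG algorithm itself is not given in this text:

```python
# Pure-Python stand-in for the RCG/OV idea: build a binary alpha mask for a
# rounded rectangle. Integer arc centers and 0/1 alpha are simplifying
# assumptions; the hardened RCG algorithm is not described here.

def rounded_corner_mask(width, height, radius):
    """Return a height x width grid: 1 inside the rounded rect, 0 outside."""
    mask = [[1] * width for _ in range(height)]
    r = radius
    # (x range, y range, arc center) for each of the four corner squares
    corners = [
        (range(0, r),             range(0, r),              (r - 1, r - 1)),
        (range(width - r, width), range(0, r),              (width - r, r - 1)),
        (range(0, r),             range(height - r, height), (r - 1, height - r)),
        (range(width - r, width), range(height - r, height), (width - r, height - r)),
    ]
    for xs, ys, (cx, cy) in corners:
        for y in ys:
            for x in xs:
                # pixel inside the corner square but outside its arc -> transparent
                if (x - cx) ** 2 + (y - cy) ** 2 > r * r:
                    mask[y][x] = 0
    return mask

mask = rounded_corner_mask(8, 8, 4)
# The OV operator would multiply layer pixels by this mask during overlay.
```

The four corner pixels fall outside their arcs and become transparent, while the interior and the straight edges keep full alpha, which is exactly what the layer overlay needs for picture-level rounded corners.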
The AAE can integrate multiple identical operators and multiple channels (pipes), so that multiple tasks can be executed concurrently, and the link relationships of the operators inside the AAE can be configured dynamically between different tasks to realize more flexible functions. Further, the AAE may share on-chip RAM with the DSS, or share data through the system cache. In addition, when the software calculates that the throughput capacity exceeds the data rate required by the DSS for display, the software can configure the AAE to hand data over for display after processing N lines of a single layer (instead of the complete buffer data), with the hardware scheduler TS notifying the DSS. In this way, the data processed by the AAE and the data read by the DSS for display can circulate efficiently through on-chip storage, reducing display latency and saving DDR bandwidth in the whole flow.
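The chunked hand-off just described can be modeled with a generator: the AAE yields N processed lines at a time, and the consumer (standing in for the DSS scan-out, which the TS would notify) picks up each chunk as it becomes ready. All names and the uppercase "processing" step are illustrative assumptions:

```python
# Sketch of the chunked hand-off: the AAE emits N processed lines at a time,
# and the DSS consumes each chunk as it arrives instead of waiting for the
# complete buffer. Names and the uppercase 'processing' are illustrative.

def aae_process_chunks(layer_lines, n):
    """Yield chunks of n processed lines; each yield models a TS event
    notifying the DSS that the next chunk is ready for scan-out."""
    for i in range(0, len(layer_lines), n):
        chunk = layer_lines[i:i + n]
        yield [line.upper() for line in chunk]   # stand-in for blur/rounding

displayed = []
for chunk in aae_process_chunks(["a", "b", "c", "d", "e"], 2):
    displayed.extend(chunk)   # DSS scan-out of the ready chunk
```

Because each chunk is consumed as soon as it is produced, only N lines need to sit in on-chip storage at any moment, which is the source of the latency and DDR-bandwidth savings claimed above.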
Alternatively, the application is not limited to the Harmony OS Next system and can also be used in an Android system. In the per-frame preparation phase of SurfaceFlinger (the rendering service), the layers to be overlaid and the layers to be blurred or rounded may be marked for AAE processing by invoking the HWC interface, and then the multiple layers may be configured to the GPU or DSS for rendering. After the AAE and the GPU finish processing the layers, the multiple layers are configured to the DSS, which overlays them online for display, and the expected picture is displayed on the screen.
In summary, compared with a conventional interface generating device, the present application adds the animation effect accelerator 203, and the processor 201, the graphics processor 202 and the animation effect accelerator 203 work cooperatively to render the interface, improving rendering efficiency. In the conventional technology, the processor 201 directly issues the interface generation task to the graphics processor 202, which executes both the graphics rendering task and the animation effect processing task; the whole animation effect processing flow is complex, the number of instructions executed by the processor 201 and the graphics processor 202 is large, the power consumption is high, and the performance risk is large. In the present application, the processor 201 determines a first task (i.e., an interface generation task) comprising a first subtask and a second subtask, where the first subtask indicates rendering a clear layer and the second subtask indicates rendering a layer with animation effect processing. The first subtask may then be executed by the graphics processor 202 and the second subtask by the animation effect accelerator 203; that is, the graphics processor 202 only needs to execute graphics rendering tasks rather than animation effect processing tasks, and the graphics processor 202 and the animation effect accelerator 203 may work in parallel. This not only reduces the workload of the processor 201 and the graphics processor 202 but also improves interface generation efficiency.
Referring to fig. 13, fig. 13 is a flowchart of an interface generating method according to an embodiment of the present application. The method can be applied to the above-mentioned interface generating device and is described in detail below.
Step S301, determining a first task through the processor.
Specifically, the first task comprises a first subtask and a second subtask, wherein the first subtask comprises a first graphics rendering task, and the second subtask comprises an animation effect processing task.
Step S302, performing graphics rendering based on the first subtask through the graphics processor to obtain a first layer.
Step S303, performing animation effect processing based on the animation effect processing task in the second subtask through the animation effect accelerator to obtain a second layer.
Wherein the animation effect processing includes at least one of blur processing or rounded corner processing.
In some embodiments, the animation effect processing task is used to indicate a to-be-processed layer, and performing animation effect processing by the animation effect accelerator based on the animation effect processing task in the second subtask to obtain the second layer includes: performing, by the animation effect accelerator, the animation effect processing on the to-be-processed layer based on the animation effect processing task to obtain the second layer.
In some embodiments, the second subtask further comprises a second graphics rendering task, and the method further comprises: obtaining, by the graphics processor, the second graphics rendering task in the second subtask, and performing graphics rendering based on the second graphics rendering task to obtain a third layer. In this case, performing animation effect processing by the animation effect accelerator based on the animation effect processing task in the second subtask to obtain the second layer includes: performing, by the animation effect accelerator, the animation effect processing on the third layer based on the animation effect processing task to obtain the second layer.
In some embodiments, the interface generating device further comprises a task scheduler, and the method further comprises: sending, by the processor, a first dependency relationship to the task scheduler, the first dependency relationship indicating the dependency between executing the second graphics rendering task and the animation effect processing task in the second subtask; and scheduling, by the task scheduler according to the first dependency relationship, the graphics processor to execute the second graphics rendering task and the animation effect accelerator to execute the animation effect processing task.
In some embodiments, the interface generating device further comprises a display subsystem, and the method further comprises: receiving, by the display subsystem, the first layer and the second layer, and composing the first layer and the second layer into an image to be displayed.
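Steps S301 to S303, together with the display-subsystem composition, can be tied into a short sketch. Every function name here is a hypothetical stand-in for the hardware roles (CPU, GPU, AAE, DSS); the blur effect is carried as a tag rather than computed:

```python
# End-to-end sketch of steps S301-S303 plus display composition. All
# function names are hypothetical stand-ins for the CPU/GPU/AAE/DSS roles;
# the 'blur' effect is carried as a tag rather than computed.

def cpu_determine_tasks(frame):
    # Step S301: split the frame into the first subtask (clear layer) and
    # the second subtask (layer needing animation effect processing)
    return {"first_subtask": frame["clear"], "second_subtask": frame["effect"]}

def gpu_render(subtask):
    # Step S302: graphics rendering of the clear layer -> first layer
    return {"layer": subtask, "effect": None}

def aae_process(subtask):
    # Step S303: animation effect processing -> second layer
    return {"layer": subtask["content"], "effect": subtask["effect"]}

def dss_compose(first_layer, second_layer):
    # Display subsystem: online overlay into the image to be displayed
    return [first_layer["layer"], second_layer["layer"]]

tasks = cpu_determine_tasks({
    "clear": "status_bar",
    "effect": {"content": "background", "effect": "blur"},
})
first = gpu_render(tasks["first_subtask"])
second = aae_process(tasks["second_subtask"])
frame = dss_compose(first, second)
```

The point of the structure is that `gpu_render` and `aae_process` take disjoint subtasks, so on real hardware the two calls could run in parallel before the DSS composes their outputs.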
An embodiment of the application provides an electronic device, comprising any one of the above interface generating devices and a memory. The memory is configured to store computer program code, the computer program code comprises computer instructions, and the interface generating device calls and runs the computer instructions.
The present application provides a computer storage medium storing a computer program which, when executed by the interface generating apparatus, implements any one of the above interface generating methods.
The application provides a chip system, which comprises the above interface generating device and is used to support an electronic device in realizing the functions involved in the interface generating method, for example, generating or processing the information involved in the interface generating method. In one possible design, the chip system further includes a memory for holding the program instructions and data necessary for the electronic device. The chip system may consist of a chip, or may include a chip and other discrete devices.
The present application provides a computer program product comprising instructions which, when executed by a computer, cause the computer to perform any one of the interface generating methods described above.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of units described above is merely a division of logical functions, and there may be other manners of division in actual implementation, such as combining or integrating multiple units or components into another system, or omitting or not performing some features. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices or units, and may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server or a network device, and in particular may be a processor in the computer device) to perform all or part of the steps of the methods of the various embodiments of the present application. The storage medium may include various media capable of storing program code, such as a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM).
While the application has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the foregoing embodiments may be modified or equivalents may be substituted for some of the features thereof, and that the modifications or substitutions do not depart from the spirit and scope of the embodiments of the application.

Claims (12)

1. An interface generation device, comprising: a processor, configured to determine a first task, the first task comprising a first subtask and a second subtask, wherein the first subtask comprises a first graphics rendering task and the second subtask comprises an animation effect processing task; a graphics processor, configured to obtain the first subtask and perform graphics rendering based on the first subtask to obtain a first layer; and an animation effect accelerator, configured to obtain the animation effect processing task in the second subtask and perform animation effect processing based on the animation effect processing task to obtain a second layer, wherein the animation effect processing comprises at least one of blur processing or rounded corner processing.

2. The device according to claim 1, wherein the animation effect processing task is used to indicate a to-be-processed layer; and the animation effect accelerator is specifically configured to perform the animation effect processing on the to-be-processed layer based on the animation effect processing task to obtain the second layer.

3. The device according to claim 1, wherein the second subtask further comprises a second graphics rendering task; the graphics processor is further configured to obtain the second graphics rendering task in the second subtask and perform graphics rendering based on the second graphics rendering task to obtain a third layer; and the animation effect accelerator is specifically configured to perform the animation effect processing on the third layer based on the animation effect processing task to obtain the second layer.

4. The device according to claim 3, wherein the interface generation device further comprises a task scheduler; the processor is further configured to send a first dependency relationship to the task scheduler, the first dependency relationship being used to indicate the dependency between executing the second graphics rendering task and the animation effect processing task in the second subtask; and the task scheduler is configured to schedule, according to the first dependency relationship, the graphics processor to execute the second graphics rendering task and the animation effect accelerator to execute the animation effect processing task.

5. The device according to any one of claims 1-4, wherein the interface generation device further comprises: a display subsystem, configured to receive the first layer and the second layer and synthesize the first layer and the second layer into an image to be displayed.

6. The device according to any one of claims 1-5, wherein the interface generation device comprises a chip.

7. An interface generation method, comprising: determining a first task by a processor, the first task comprising a first subtask and a second subtask, wherein the first subtask comprises a first graphics rendering task and the second subtask comprises an animation effect processing task; performing graphics rendering based on the first subtask by a graphics processor to obtain a first layer; and performing animation effect processing based on the animation effect processing task in the second subtask by an animation effect accelerator to obtain a second layer, wherein the animation effect processing comprises at least one of blur processing or rounded corner processing.

8. The method according to claim 7, wherein the animation effect processing task is used to indicate a to-be-processed layer; and the performing animation effect processing based on the animation effect processing task in the second subtask by the animation effect accelerator to obtain a second layer comprises: performing, by the animation effect accelerator, the animation effect processing on the to-be-processed layer based on the animation effect processing task to obtain the second layer.

9. The method according to claim 7, wherein the second subtask further comprises a second graphics rendering task; the method further comprises: obtaining, by the graphics processor, the second graphics rendering task in the second subtask, and performing graphics rendering based on the second graphics rendering task to obtain a third layer; and the performing animation effect processing based on the animation effect processing task in the second subtask by the animation effect accelerator to obtain a second layer comprises: performing, by the animation effect accelerator, the animation effect processing on the third layer based on the animation effect processing task to obtain the second layer.

10. The method according to claim 9, wherein the interface generation device further comprises a task scheduler; the method further comprises: sending, by the processor, a first dependency relationship to the task scheduler, the first dependency relationship being used to indicate the dependency between executing the second graphics rendering task and the animation effect processing task in the second subtask; and scheduling, by the task scheduler according to the first dependency relationship, the graphics processor to execute the second graphics rendering task and the animation effect accelerator to execute the animation effect processing task.

11. The method according to any one of claims 7-10, wherein the interface generation device further comprises a display subsystem; the method further comprises: receiving, by the display subsystem, the first layer and the second layer, and synthesizing the first layer and the second layer into an image to be displayed.

12. An electronic device, comprising the device according to any one of claims 1-6 and a memory; the memory is configured to store computer program code, the computer program code comprises computer instructions, and the device calls and runs the computer instructions.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202410420835.6A CN120803570A (en) 2024-04-08 2024-04-08 Interface generation device and interface generation method
PCT/CN2025/082147 WO2025214054A1 (en) 2024-04-08 2025-03-12 Interface generation apparatus and interface generation method


Publications (1)

Publication Number Publication Date
CN120803570A true CN120803570A (en) 2025-10-17




Also Published As

Publication number Publication date
WO2025214054A1 (en) 2025-10-16


Legal Events

Date Code Title Description
PB01 Publication