Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable execution of the image processing method.
In some embodiments, server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may use client devices 101, 102, 103, 104, 105, and/or 106 to send raw images or configure image processing streams. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various classes and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux or Linux-like operating systems (e.g., GOOGLE Chrome OS), or include various mobile operating systems, such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, personal digital assistants (PDAs), and the like. Wearable devices may include head mounted displays (such as smart glasses) and other devices. The gaming system may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any of a variety of networks known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an Ethernet-based network, a token ring network, a Wide Area Network (WAN), the Internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., Bluetooth, WiFi), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture that involves virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. The cloud server is a host product in a cloud computing service system that overcomes the drawbacks of high management difficulty and weak service scalability present in traditional physical host and virtual private server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of databases 130 may be used to store information such as audio files and video files. Database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. Database 130 may be of different types. In some embodiments, the database used by server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
In the related art, an image processing flow may be represented by a graph structure composed of a plurality of functional nodes, so that a user can customize the image processing flow by configuring each functional node on a graphical user interface, and reproducibility of the image processing flow can be achieved at the same time. In this case, however, the original image is generally processed based only on its pixel data, without taking the color characteristics of the image into consideration, resulting in loss of the color characteristics of the original image and color shift in the image processing result.
Based on the above, the present disclosure provides an image processing method, when an image loading component is utilized to obtain an original image to be processed, a function of reading and caching color configuration information is integrated in the image loading component, and the color configuration information is cached while the image is loaded, so that the cached color configuration information can be directly reused when the image is displayed, the accuracy of an image processing result is ensured, and meanwhile, the image processing efficiency is improved.
Fig. 2 shows a flowchart of an image processing method 200 according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the method 200 includes:
Step S210, running an image loading component for an original image to be processed, the image loading component being configured to load the original image and cache color configuration information of the original image;
Step S220, reading current pixel data of a current image processing result in response to receiving an image display request;
Step S230, reading the color configuration information from the cache; and
Step S240, displaying the current image processing result by configuring the current pixel data based on the color configuration information.
By applying the image processing method 200, when the image loading component is utilized to acquire the original image to be processed, the image loading component is utilized to directly execute the steps of reading and caching the image color configuration information, namely, the image is loaded and the color configuration information can be cached, so that the cached color configuration information can be directly reused when the image display request is received, more accurate image display is realized based on the cached color configuration information, and color deviation of an image processing result is avoided.
In some examples, the image processing stream may also be built based on the image loading component and a plurality of image processing components, which are then run to implement the image processing operations based on the image processing stream. The image processing flow may be represented as a graph structure composed of a plurality of functional components; specifically, the logical timing of the operations of the respective functional components may be described using the connection relationships between the functional components in the graph structure. In some examples, the image processing components may include components capable of implementing various types of image processing functions, such as an image color transformation component, an image background removal component, an image sharpening component, and so forth.
In one example, the image processing stream may be composed of an image loading component, an image background removal component, and an image sharpening component, which are connected in sequence. In this case, the respective components may be sequentially arranged based on the image processing flow, the loaded original image is transferred to the image background removing component and the image sharpening component to perform corresponding processing after the image loading component is operated, and more accurate image display may be realized by using the color configuration information buffered by the image loading component when the image display request is received.
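In some examples, the sequentially connected graph described above can be sketched as follows (the component names and the dict-based graph encoding are illustrative assumptions; real components would operate on image data rather than on a trace list):

```python
# Each functional component is a node in the graph; the directed edges
# encode the logical timing of operations between the components.
def load_image(trace):        return trace + ["load"]
def remove_background(trace): return trace + ["remove_background"]
def sharpen(trace):           return trace + ["sharpen"]

# Sequentially connected graph: loading -> background removal -> sharpening.
GRAPH = {load_image: remove_background, remove_background: sharpen, sharpen: None}

def run_stream(entry, payload):
    """Run the components by following the connection relationships."""
    node = entry
    while node is not None:
        payload = node(payload)
        node = GRAPH[node]
    return payload
```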
In some examples, the image processing stream may be built using ComfyUI to enable customization and reproducibility of the image processing stream.
In some examples, the color configuration information of the image may be an ICC (International Color Consortium) color profile carried by the original image, i.e., a file in a standard format formulated by the ICC for maintaining color consistency between different devices and software. The ICC profile may be used to describe device color characteristics and includes the information required by a color management system; by applying the ICC profile for image display, different image display devices can achieve consistent color rendering, enabling users to view more accurate image processing results.
In some examples, the ICC color profile may include file header, tag table, color spectrum data, color space look-up table, and the like. The header may contain information such as the version of the file, the type of device (e.g., display, scanner, etc.), and the type of color space. The tag table may record tags used in the file and their corresponding values. The color spectrum data is used to describe the color response characteristics of the device. The color space look-up table may describe a mapping from one color space to another, based on which color management transformations may be implemented.
In some examples, the ICC color profile carried by the original image may be a color profile of a device or software used in the creation of the original image, or may be a profile for a particular output device (e.g., printer, particular type of display device) to enable the image display device to resolve color characteristics of the original image based thereon. For example, the image display device may convert color data of the original image into color data corresponding to one or more standard color spaces based on the color space described by the color configuration information of the original image, and map the color data corresponding to the standard color spaces to the color space of the device, so as to ensure that the image presents an accurate color on the image display device. This conversion process ensures accurate transfer and reproduction of colors between devices.
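In some examples, this conversion can be sketched with Pillow's ImageCms module (a sketch that assumes the embedded profile bytes are valid ICC data and that Pillow was built with littlecms support; sRGB is chosen here as the standard color space):

```python
import io
from PIL import Image, ImageCms

def to_srgb(im):
    """Map pixel data from the image's embedded ICC color space to sRGB,
    a common standard color space, so that downstream display devices
    render the image consistently."""
    icc_bytes = im.info.get("icc_profile")
    if not icc_bytes:
        return im  # no embedded profile: leave the pixel data untouched
    src = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
    dst = ImageCms.createProfile("sRGB")
    return ImageCms.profileToProfile(im, src, dst, outputMode="RGB")
```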
In one example, the color configuration information of the original image may record the color space characteristics corresponding to the original image, in which case, by configuring the pixel data to be displayed based on the color configuration information of the original image in the image display process, it may be ensured that the image display result can restore the color characteristics of the original image, so that the user can view more accurate image processing results.
It should be understood that the above is merely an example describing the content and usage of the color configuration information of the original image, and the present disclosure is not limited to the content and usage of the color configuration information of the image as long as the color characteristics of the original image can be restored by buffering and multiplexing the color configuration information.
In some examples, the user may trigger an image display request by configuring an image preview component or an image save component (i.e., a component that displays and saves the current image processing result output) in the image processing stream, and may also send the image display request directly using the image processing user interface.
According to some embodiments, the method 200 further comprises: determining a display type based on the image display request, wherein the display type comprises an image preview or an image output display; and determining a target storage address based on the display type. In this case, displaying the current image processing result in step S240 by configuring the current pixel data based on the color configuration information comprises: packaging the current pixel data and the color configuration information as a target image; storing the target image at the target storage address; and displaying the target image by invoking an image display program to access the target storage address. Thus, the storage address (for example, temporary storage or permanent storage) can be determined according to the image preview/image export storage node operated by the user, the pixel data and the color configuration information can be packaged and stored together, and an image display program supporting the color configuration function can then directly display the image with accurate colors.
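A minimal sketch of this embodiment, assuming the system temporary folder for previews and a persistent "output" folder for exports (both folder choices and the function names are hypothetical):

```python
import os
import tempfile
from PIL import Image

def target_address(display_type, filename):
    """Determine the target storage address from the display type:
    previews go to a temporary folder, output displays to a persistent one."""
    if display_type == "preview":
        base = tempfile.gettempdir()          # lifecycle-limited storage
    else:                                     # "output"
        base = os.path.join(os.getcwd(), "output")
        os.makedirs(base, exist_ok=True)
    return os.path.join(base, filename)

def package_and_store(pixels, icc_bytes, path):
    """Encapsulate the current pixel data together with the cached color
    configuration information as a single target image file."""
    kwargs = {"icc_profile": icc_bytes} if icc_bytes else {}
    pixels.save(path, **kwargs)
    return path  # an ICC-aware image display program can open this address
```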
In some examples, the image display program may be any of various types of image viewers or image editors capable of supporting the color configuration function. By applying this technical means, an image display program capable of reading and using the color configuration information can be conveniently and rapidly utilized to directly view the target image packaged with the color configuration information, improving the efficiency and convenience of image display.
As described above, the user may trigger the image display request by configuring the image preview node or the image output display node in the image processing stream, or may directly send the image display request using the image processing user interface; in either case, the type of the image display request may be directly determined based on the type of the user's operation.
According to some embodiments, wherein the determining a target storage address based on the display type includes determining the target storage address from a temporary folder in response to determining that the display type is an image preview, wherein a data lifecycle of the temporary folder does not exceed a time threshold. Thus, the image for previewing can be saved by using the temporary folder with limited data life cycle, and the storage space can be saved by using the automatic cleaning function for the temporary folder.
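The automatic cleaning of such a temporary folder can be sketched as an age-based sweep (the one-hour time threshold and the file-modification-time lifecycle test are illustrative assumptions):

```python
import os
import tempfile
import time

TIME_THRESHOLD_SECONDS = 3600  # assumed threshold; the disclosure leaves it open

def cleanup_temp_previews(folder=None, max_age=TIME_THRESHOLD_SECONDS):
    """Remove preview images whose age exceeds the data-lifecycle threshold,
    emulating the temporary folder's automatic cleaning function."""
    folder = folder or tempfile.gettempdir()
    now = time.time()
    removed = []
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > max_age:
            os.remove(path)
            removed.append(name)
    return removed
```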
In some examples, when the display type is an image output display, the current image processing result can be permanently stored based on a default storage address or a storage address configured by the user, so that the user can conveniently execute further processing or viewing.
According to some embodiments, the method 200 further comprises determining a storage compression level based on the display type, and wherein the encapsulating the current pixel data and the color configuration information into a target image comprises encapsulating the current pixel data and the color configuration information into a target image based on the storage compression level. Therefore, the storage compression level can be determined according to the image preview/image export storage node operated by the user, so that the quality (and hence the data size) of the target image is adapted to the user's needs and hardware resources are utilized reasonably.
In some examples, the user may also send configuration information storing the compression level at the same time as sending the image display request. For example, a user may directly configure or select a compression level of a target image on an image processing user interface, and further may compress and store the target image based on user configuration information to meet user needs.
According to some embodiments, the determining a storage compression level based on the display type includes determining that a compression rate corresponding to the storage compression level is not less than a compression rate threshold in response to determining that the display type is an image preview. Thus, the image for previewing can be lightweight-compressed, and the storage space and processing resources can be saved.
In some examples, a lightweight storage compression level corresponding to the image preview request may be preconfigured, that is, a compression mode that gives consideration to both image preview efficiency and image preview effect may be set for the image preview function, so that hardware resources are saved and the preview requirement of a user is satisfied.
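In some examples, the mapping from display type to compression level can be sketched as follows (the PNG `compress_level` scale of 0-9 and the threshold value of 6 are illustrative choices; the disclosure does not fix a particular codec or threshold):

```python
from PIL import Image

# Assumed threshold on the PNG compression level (0-9, where a higher
# level yields a higher compression rate); illustrative, not mandated.
COMPRESSION_LEVEL_THRESHOLD = 6

def storage_compression_level(display_type):
    # Previews are compressed at or above the threshold to save space;
    # output displays favor speed and quality with light compression.
    return COMPRESSION_LEVEL_THRESHOLD if display_type == "preview" else 1

def save_with_level(im, path, display_type):
    im.save(path, format="PNG",
            compress_level=storage_compression_level(display_type))
```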
According to some embodiments, the method 200 further includes returning the target storage address in response to determining that the display type is an image output display. Therefore, the node return value can be utilized to return the storage address, and the user can conveniently view and use the image processing result.
Fig. 3 shows a flow diagram of an image preview process 300 according to an exemplary embodiment of the present disclosure. As shown in fig. 3, the image preview flow 300 includes:
Step S301, configuring temporary file information and compression level. In this step, information such as a target storage address, a file prefix, and the like may be configured based on the system temporary folder to save the temporary image to be previewed in the temporary folder, saving storage space with an automatic cleaning function for the temporary folder. In this step, a lightweight compression level may also be set for the temporary image to be previewed, further saving storage space and processing resources.
Step S302, a storage address of a target image is acquired. On the basis of the configuration of the temporary file information by step S301, the target storage address of the temporary folder can be acquired in this step, and the temporary image for previewing can be stored.
Step S303, obtaining pixel data of the current image processing result.
Step S304, pixel data conversion.
In this example, the image processing flow may be implemented using Python libraries; in steps S302-S304, the image data in numpy array format may be converted into an Image object using the Pillow library.
Step S305, additional image information and color configuration information are acquired.
Step S306, packaging the target image.
In the above steps, the additional image information and the color configuration information may be passed as parameters to the Image object so as to be stored together.
Step S307, storing and displaying the target image.
The target image packaged with the color configuration information is stored in the temporary folder, so that it can be accessed and displayed by an image viewer, that is, the image preview is realized. Meanwhile, the temporary folder with a limited data lifecycle saves storage space and prevents the temporary preview image from occupying hardware resources for a long time.
Fig. 4 shows a flow diagram of an image processing procedure according to an exemplary embodiment of the present disclosure. As shown in fig. 4, the image processing stream includes an image loading component 410, an image processing component 420, an image saving component 430, and an image preview component 440.
Referring to fig. 4, the image loading component 410 operates as follows:
step S11, acquiring pixel data.
Step S12, pixel data conversion.
As previously described, in this example, the image processing flow may be implemented using Python libraries: an image may be read and converted to numpy array format using the Pillow library, facilitating various types of processing operations performed on that format. In this example, the A-channel (alpha) value of the RGBA image may further be acquired as mask information.
Step S13, obtaining color configuration information.
Step S14, the color configuration information is cached.
Image processing component 420 may include components capable of performing various types of image processing functions, such as an image color conversion component, an image background removal component, an image sharpening component, and the like.
The image save assembly 430 operates as follows:
step S31, a storage address of the target image is acquired.
Step S32, acquiring pixel data.
Step S33, pixel data conversion.
Step S34, additional image information and color configuration information are acquired.
And step S35, packaging the target image.
Step S36, storing the target image.
In this example, the specific implementation of the above steps S31 to S36 is similar to the specific implementation of the above steps S302 to S307, and will not be repeated here.
The image preview component 440 operates as follows:
step S41, configuring temporary file information.
Step S42, configuring and storing compression level.
In some examples, the image preview component 440 may inherit from the image save component 430, configuring the storage information and the storage compression level for temporary files on the basis of the target image packaging and storage functions implemented by the image save component 430, so as to enable compressed storage of the temporary image to be previewed and save storage space.
According to an aspect of the present disclosure, there is also provided an image processing apparatus. Fig. 5 shows a block diagram of an image processing apparatus 500 according to an exemplary embodiment of the present disclosure. As shown in fig. 5, the apparatus 500 includes:
An image loading unit 510 configured to run an image loading component for an original image to be processed, the image loading component being configured to load the original image and to cache color configuration information of the original image;
a first reading unit 520 configured to read current pixel data of a current image processing result in response to receiving an image display request;
a second reading unit 530 configured to read the color configuration information from the cache, and
A display unit 540 configured to display the current image processing result by configuring the current pixel data based on the color configuration information.
According to some embodiments, the apparatus 500 further comprises a first determining unit configured to determine a display type based on the image display request, wherein the display type comprises an image preview or an image output display, and a second determining unit configured to determine a target storage address based on the display type, and wherein the display unit 540 comprises a packaging subunit configured to package the current pixel data and the color configuration information as a target image, a storage subunit configured to store the target image at the target storage address, and a display subunit configured to display the target image by invoking an image display program to access the target storage address.
According to some embodiments, the second determining unit is configured to determine the target storage address from a temporary folder in response to determining that the display type is an image preview, wherein a data lifecycle of the temporary folder does not exceed a time threshold.
According to some embodiments, the apparatus 500 further comprises a third determining unit configured to determine a storage compression level based on the display type, and wherein the packaging subunit is configured to package the current pixel data and the color configuration information as a target image based on the storage compression level.
According to some embodiments, the third determining unit is configured to determine that the compression rate corresponding to the stored compression level is not less than a compression rate threshold in response to determining that the display type is an image preview.
According to some embodiments, the apparatus 500 further comprises a return unit configured to return the target storage address in response to determining that the display type is an image output display.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
According to another aspect of the present disclosure, there is also provided an electronic device comprising at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the image processing method described above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the above-described image processing method.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the above-mentioned image processing method.
Referring to fig. 6, a block diagram of an electronic device 600 that may be a server or a client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the device 600 are connected to the I/O interface 605, including an input unit 606, an output unit 607, a storage unit 608, and a communication unit 609. The input unit 606 may be any type of device capable of inputting information to the device 600; the input unit 606 may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 607 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. Storage unit 608 may include, but is not limited to, magnetic disks and optical disks. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth™ devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 601 may be any of a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, such as an image processing method. For example, in some embodiments, the image processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the image processing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the image processing method by any other suitable means (e.g., by means of firmware).
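By way of illustration only, an "image processing method" packaged as an ordinary computer program, as described above, might take the following form. The grayscale-conversion operation, the function name, and the nested-list image representation are assumptions chosen for this sketch; the disclosure does not specify any particular processing operation.

```python
# Hypothetical sketch: an image processing method implemented as a
# computer program entry point. The luminance weights (ITU-R BT.601)
# and the image representation are illustrative assumptions, not
# details taken from the disclosure.

def image_processing_method(image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to grayscale."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in image
    ]

if __name__ == "__main__":
    image = [[(255, 0, 0), (0, 255, 0)],
             [(0, 0, 255), (255, 255, 255)]]
    print(image_processing_method(image))  # [[76, 150], [29, 255]]
```

When such a program is stored on a machine-readable medium and loaded for execution, running its entry point corresponds to the computing unit 601 performing one or more steps of the method.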
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, operable to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), the Internet, and a blockchain network.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
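The client-server relationship described above can be illustrated with a minimal sketch in which two programs interact over a network connection. The echo protocol, the loopback address, and the use of a thread to stand in for a separate server machine are assumptions chosen for this example, not details of the disclosure.

```python
# Illustrative sketch of a client-server relationship: a server program
# and a client program interact through a network connection. A thread
# stands in for the remote server machine; in practice the two programs
# would run on separate computers connected by a communication network.
import socket
import threading

def serve_once(server_sock):
    """Accept one connection and echo the received bytes back."""
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)

def main():
    server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_sock.bind(("127.0.0.1", 0))   # let the OS pick a free port
    server_sock.listen(1)
    port = server_sock.getsockname()[1]

    # The "server" runs concurrently with the "client".
    t = threading.Thread(target=serve_once, args=(server_sock,))
    t.start()

    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(b"hello")
        reply = client.recv(1024)

    t.join()
    server_sock.close()
    return reply

if __name__ == "__main__":
    print(main())  # b'echo: hello'
```

The client-server relationship arises here, as in the paragraph above, solely from the two programs: one listens and responds, the other connects and requests.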
It should be appreciated that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed herein can be achieved, and no limitation is imposed herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples, but only by the claims as granted and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements thereof. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.