Disclosure of Invention
The present application provides an interface control method, a control system, an electronic device, and a storage medium, so that a user can control a first electronic device through a second electronic device instead of a key-type remote controller, which greatly facilitates user operation and improves the user experience.
In a first aspect, the present application provides an interface control method applied to a first electronic device, where the first electronic device includes a non-touch screen, and the method includes:
sending interface content to a second electronic device to enable the second electronic device to generate a screen projection interface, wherein the second electronic device comprises a touch screen;
receiving target coordinate information fed back by the second electronic device, and determining a target position based on the target coordinate information, wherein the target coordinate information is generated by the second electronic device based on a touch screen event triggered by a user on the screen projection interface;
acquiring a current focus area;
judging the position relation between the target position and the current focus area;
determining a target focus area based on the position relation, and simulating a confirmation key instruction in the target focus area;
and responding to the confirmation key instruction to finish the operation on the interface content.
This scheme greatly facilitates user operation and improves the user experience.
In the interface control method provided by the present application, the determining a target focal region based on the position relationship includes:
when the target position is located in the current focus area, determining the current focus area as a target focus area;
and when the target position is not located in the current focus area, moving the current focus area to a target focus area according to a preset rule.
In the interface control method provided by the present application, moving the current focus area to a target focus area according to a preset rule includes:
planning a shortest path between the current focus area and a target position according to a preset direction;
and moving the current focus area to a target focus area according to the shortest path.
In the interface control method provided by the present application, the method further includes:
sending configuration information to the second electronic device, so that the second electronic device generates a zoomed screen projection interface based on the configuration information and the interface content;
the configuration information includes resolution information, and the zoomed screen projection interface is obtained by the second electronic device zooming the interface content according to the resolution information and an equal proportion principle.
In the interface control method provided by the present application, the configuration information further includes gesture information, and the method further includes:
sending gesture information to the second electronic device to enable the second electronic device to generate a blank area;
receiving sliding track information sent by the second electronic device, wherein the sliding track information is generated by the second electronic device based on a touch screen event triggered by a user in the blank area;
when the sliding track information is successfully matched with a preset sliding track, determining a corresponding control instruction according to the successfully matched preset sliding track;
and responding to the control instruction, and executing corresponding operation on the interface content.
In the interface control method provided by the application, the configuration information further includes virtual key information; the method further comprises the following steps:
sending virtual key information to the second electronic device to generate a virtual key area on the second electronic device;
receiving a virtual key instruction sent by the second electronic device, wherein the virtual key instruction is generated by the second electronic device based on a user operation in the virtual key area;
and responding to the virtual key instruction, and executing corresponding operation on the interface content.
In a second aspect, an interface control system includes a first electronic device and a second electronic device, where the first electronic device is configured to execute the interface control method, and the second electronic device is configured to execute steps including:
receiving the interface content sent by the first electronic device and generating a screen projection interface;
generating coordinate information based on a touch screen event triggered by the user on the screen projection interface, and scaling the coordinate information to obtain target coordinate information;
and sending the target coordinate information to the first electronic device.
In the interface control system provided by the present application, the steps executed by the second electronic device further include:
receiving configuration information sent by the first electronic device, wherein the configuration information comprises resolution information;
zooming the interface content according to the resolution information and an equal proportion principle to generate a zoomed screen projection interface;
generating coordinate information based on a touch screen event triggered by the zoomed screen projection interface by a user, and zooming the coordinate information according to the equal proportion principle to obtain the target coordinate information;
and sending the target coordinate information to the first electronic equipment.
In a third aspect, the present application further provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the interface control method as described above.
In a fourth aspect, the present application further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the steps of the interface control method as described above.
Compared with the prior art, in the interface control method provided by the embodiment of the application, the first electronic device can determine the target position according to the target coordinate information, determine the target focus area according to the position relation between the target position and the current focus area, and simulate the confirmation key instruction in the target focus area, so that the electronic device without the touch screen function can be quickly and conveniently controlled, the user operation is greatly facilitated, and the experience of the user is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that for the convenience of clearly describing the technical solutions of the embodiments of the present application, the words "first", "second", and the like are used in the embodiments of the present application to distinguish the same items or similar items having substantially the same functions and actions. Those skilled in the art will appreciate that the terms "first," "second," etc. do not denote any order or quantity, nor do the terms "first," "second," etc. denote any order or importance.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The inventor of the present application found that a user needs a control device such as a remote controller to operate an electronic device that has no touch screen, and that operating existing remote controllers is cumbersome, resulting in a poor user experience. In view of this, an embodiment of the present application provides a method for controlling an electronic device. The method is applied to a first electronic device that includes a non-touch screen, and mainly includes: sending interface content to a second electronic device so that the second electronic device generates a screen projection interface, where the second electronic device includes a touch screen; receiving target coordinate information fed back by the second electronic device, where the target coordinate information is generated by the second electronic device based on a touch screen event triggered by a user on the screen projection interface, and determining a target position based on the target coordinate information; acquiring a current focus area; judging the position relationship between the target position and the current focus area; determining a target focus area based on the position relationship, and simulating a confirmation key instruction in the target focus area; and responding to the confirmation key instruction to complete the operation on the interface content.
It can be understood that the first electronic device in the embodiments of the present application may include a device without a touch screen, such as a smart television, a smart set-top box, a smart projector, and a smart speaker. That is, the first electronic device does not have a touch screen function.
The second electronic device may include a smart phone, tablet, or the like having a touch screen. That is, the second electronic device has a touch screen function.
Before the first electronic device sends interface content to the second electronic device, it needs to establish a connection and communicate with the second electronic device. The first electronic device and the second electronic device can be connected through a communication network, so that the two devices are in the same network.
The communication network may be a Local Area Network (LAN) or a Wide Area Network (WAN), such as the Internet. The communication network may be implemented using any known network communication protocol, wired or wireless, for example: Ethernet, Universal Serial Bus (USB), FireWire (IEEE 1394), Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Time-Division Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), Bluetooth, Wireless Fidelity (Wi-Fi), Near Field Communication (NFC), Voice over Internet Protocol (VoIP), a communication protocol supporting a network slice architecture, or any other suitable communication protocol. For example, in some embodiments, the first electronic device and the second electronic device may establish a Wi-Fi connection via the Wi-Fi protocol.
Some embodiments of the present application are described in detail below with reference to the following detailed description of the drawings. In the following embodiments, features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, an embodiment of the present application provides an interface control method, which is applied to a first electronic device, where the first electronic device includes a non-touch screen, and the method includes:
s100, sending interface content to second electronic equipment to enable the second electronic equipment to generate a screen projection interface, wherein the second electronic equipment comprises a touch screen.
It is understood that the interface content transmitted by the first electronic device may include, but is not limited to: a screen interface of the first electronic device, or video, audio, images, documents, games, etc. played by the first electronic device. As shown in fig. 2, fig. 2 illustrates a diagram of the interface content of a first electronic device. The first electronic device is a smart television; a function bar is arranged at the top of the screen of the smart television and may include a plurality of sub-function bars for the user to select, such as a recommendation bar, a movie bar, an education bar, a shopping bar, a game bar, and an application bar. When the cursor stays on a certain sub-function bar, all the application icons included in that sub-function bar are displayed below the function bar on the screen. The displayed application icons and the function bar together form the interface content.
For example, when the cursor stays at the "application bar", all application icons included in the "application bar" are displayed on the screen. At this time, the main interface formed by all application icons and function bars included in the "application bar" displayed on the screen is the interface content. Therefore, after receiving the screen projection, the second electronic device can display the interface content, that is, display the screen projection interface, on its own screen.
S200, receiving target coordinate information fed back by the second electronic device, and determining a target position based on the target coordinate information, wherein the target coordinate information is generated by the second electronic device based on a touch screen event triggered by the user on the screen projection interface.
Generally, the user may touch the touch screen of the second electronic device by a finger or a stylus to generate a touch screen event. The second electronic device determines the position touched by the user according to the touch screen event so as to obtain target coordinate information, and then the second electronic device sends the target coordinate information to the first electronic device. Then, the first electronic device determines a target position corresponding to the touch screen event position on the first electronic device according to the target coordinate information.
Taking fig. 2 as an example, the first electronic device is a smart television, the second electronic device is a smartphone, and the smart television sends the current interface content to the smartphone. Assuming that the user touches "Local Play" on the screen projection interface of the smartphone, the smartphone generates target coordinate information based on the touch screen event and sends the target coordinate information to the smart television. The smart television obtains the target coordinate information and finds the corresponding target position, namely the position of "Local Play" on the smart television, according to the target coordinate information.
S300, acquiring a current focus area;
the current focus area may be a position of a cursor of the first electronic device. Exemplarily, the first electronic device is taken as an example of an intelligent electronic device without a touch screen function; referring to fig. 2, the cursor in fig. 2 stays at the "local play", where the position of the "local play" is the current focus area, and in order to distinguish the cursor area from other areas, the electronic device may appropriately enlarge the cursor area, or configure different colors, etc. to complete the differentiated setting.
S400, judging the position relation between the target position and the current focus area;
when a user clicks a certain APP icon in a screen projection interface on the second electronic device, the second electronic device sends target coordinate information of the touch point to the first electronic device. The first electronic device judges a position area of the APP, on the first electronic device, required to be triggered by the user according to the target coordinate information, wherein the position area is often different from the current focus area, so that the position relationship between the target coordinate information and the current focus area needs to be determined.
S500, determining a target focus area based on the position relation, and simulating a confirmation key instruction in the target focus area;
wherein, based on the position relationship, determining a target focus area comprises:
when the target position is located in the current focus area, determining the current focus area as a target focus area;
and when the target position is not located in the current focus area, moving the current focus area to a target focus area according to a preset rule.
In general, the focus area in which the target position falls is the target focus area.
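As a minimal illustration of the judgment in S400 and S500 (not part of the application itself; the `FocusArea` structure and its field names are assumptions introduced for this sketch), determining whether the target position falls within a focus area reduces to a rectangle-containment test:

```python
from dataclasses import dataclass

@dataclass
class FocusArea:
    x: int       # top-left corner, in screen coordinates
    y: int
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        # The target position falls in this area if it lies inside the rectangle.
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)

def is_target_in_focus(current: FocusArea, px: int, py: int) -> bool:
    """True  -> the current focus area is itself the target focus area;
    False -> the focus area must be moved according to the preset rule."""
    return current.contains(px, py)
```

For example, for a focus area at (100, 100) of size 200 × 80, the point (150, 120) lies inside it, while (50, 120) does not.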
Wherein, the preset rule may include: when the target position (i.e. the converted coordinates) is not located in the focus area, the shortest path between the current focus area and the target position is planned according to the preset direction. And then, according to the shortest path, moving the current focus area to the target focus area.
In order to facilitate understanding of the application scenario of the above scheme, a smart television without a virtual key function on the market is taken as an example for description. In an embodiment of the present application:
referring to fig. 3, fig. 3 is a schematic structural diagram illustrating that the current focus area is moved to the target position in the embodiment of the present application. When the first electronic device determines that the target position is not in the current focus area, the first electronic device may determine the shortest path according to all tracks and directions of the current focus area moving to the target focus area, so that the first electronic device moves the position of the current focus area to the target focus area, and the target position falls into the target focus area.
As for the method of determining the shortest path, the first electronic device may determine the shortest path according to the number of moves required. Taking the directions in fig. 3 as an example, the paths for moving the current focus area to the target focus area include:
Path one: move down once and then move left twice.
Path two: move left twice, down once, and then right once.
At this time, the first electronic device may take the path with the smallest number of moves as the most suitable moving path, and therefore determines path one to be the shortest path. The first electronic device then simulates direction key instructions according to the track and moving directions of path one, so that the current focus area moves to the specified target position and coincides with the target focus area.
If the moving times of the paths are the same in the above scheme, the path with the minimum distance length may be used as the shortest path.
The calculation rule for the distance length may include:
and sequentially calculating the distance between the centers of each adjacent focus area along a path planned according to the moving times, and finally summing to obtain the distance length.
Taking path one in fig. 3 as an example, the length of path one may be:
sequentially calculating the distance between the center point of the current focus area and the center point of the lower focus area, the distance between the center point of the lower focus area (namely the focus area where the lower application icon is located) and the center point of the left adjacent focus area, and the distance between the center point of the left adjacent focus area and the center point of the target focus area; and summing all the calculated distances to obtain the length of path one.
For example, assuming that the moving times of the path one and the path two in fig. 3 are the same, the method for determining the shortest path further includes:
the method comprises the steps that first electronic equipment obtains a first distance length of a first path and a second distance length of a second path;
the first electronic device judges the size of the first distance length and the second distance length, and when the first distance length is smaller than the second distance length, the first path is the shortest path.
Of course, if the moving times of the first path and the second path are the same, and the first distance length and the second distance length are also the same, the first electronic device may randomly select the first path or the second path.
In addition, the shortest path may be selected according to the distance length of the movement. Still taking fig. 3 as an example, the path for moving the current focus area to the target focus area includes two paths, i.e., a first path and a second path. As is apparent from the figure, the first distance length of the first path is smaller than the second distance length of the second path, so that the first path is the shortest path. And if the distance lengths of the paths are the same, the path with the least number of movements can be used as the shortest path. If the moving distance length and the moving times are the same in the multiple paths, the first electronic device may randomly select a path as the shortest path.
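The selection rules above (fewest moves first, then smallest total distance, then a random choice) can be sketched as follows. Representing each candidate path as a list of focus-area center points is an assumption made for illustration, not something the application specifies:

```python
import math
import random

def path_length(centers):
    """Sum the distances between the centers of consecutive focus areas
    along a path, as described for the distance-length rule."""
    return sum(math.dist(a, b) for a, b in zip(centers, centers[1:]))

def choose_shortest_path(paths):
    """Pick the path with the fewest moves; break ties by total distance
    length, and choose randomly if both criteria are equal."""
    best = min((len(p) - 1, path_length(p)) for p in paths)
    candidates = [p for p in paths if (len(p) - 1, path_length(p)) == best]
    return random.choice(candidates)
```

With focus-area centers spaced 100 apart, a three-move path is preferred over a four-move path regardless of their distance lengths, matching the rule that the number of moves is compared first.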
After the target focus area is determined, the first electronic device may move the current focus area to the designated target focus area, and then the first electronic device may simulate a confirm key instruction in the target focus area.
S600, responding to the confirmation key instruction to finish the operation of the interface content.
When the first electronic device detects that the target position is within the target focus area, it immediately simulates a confirmation key instruction (namely a "confirm" key press), activating the APP corresponding to the target focus area and thereby completing the operation on the interface content of the first electronic device.
In addition, because the control modes of the first electronic device and the second electronic device differ, some APPs of the second electronic device cannot run directly on the first electronic device. For example, the mobile phone version of the Youku APP cannot run on a smart television, and a developer would need to separately develop a TV version of the Youku APP for the smart television, which increases the developer's development cost.
With the interface control method described above, the mobile phone version of the Youku APP can be used on the smart television, reducing the developer's development cost.
In an embodiment of the application, referring to fig. 2, when the user touches the Youku APP in the interface content (or the screen projection interface) of the smartphone, the smartphone enlarges the coordinate information of the Youku APP according to a second preset proportion to obtain target coordinate information, and sends the target coordinate information to the smart television. The smart television determines the target position touched by the user according to the target coordinate information and judges that the target position falls within the focus area of the Youku APP on the smart television screen. Since the target position is located in the current focus area, the smart television simulates a confirmation key instruction, namely opens the Youku APP, and the Youku APP on the smart television is opened accordingly.
In another embodiment of the present application, referring to fig. 2, if the user wants to return to the desktop of the smart television after watching a movie through the Youku APP on the smart television, the user can click the return control on the video playing interface of the Youku APP on the smartphone. After the user clicks the return control, the smartphone generates the corresponding coordinate information and converts it according to the equal proportion principle to obtain target coordinate information. The smart television receives the target coordinate information, determines the target position, and judges that the target position falls within the focus area of the "return control" on the smart television; that is, the target position is located in the current focus area, so the smart television simulates a confirmation key instruction and triggers the "return control" to execute the operation.
It should be emphasized that video playing APPs such as the Youku APP, Tencent Video APP, iQIYI APP, and bilibili APP all have a return control, or at least a return function, when playing video or audio.
Further, when the target position is not located within the current focus area, the situation may be as follows:
referring to fig. 2, the current focus area on the smart tv is the location of "local play". The user can click the Youkou video on the smart phone, and the smart phone generates target coordinate information and sends the target coordinate information to the smart television. The smart television can plan the shortest path between the target position and the position of the current focus area according to a preset rule. As can be seen from fig. 2, the paths total many, and the shortest path is: the method comprises the steps of local playing, internet access public class, fox searching video MAX and Youkou video in sequence.
Then, the smart television moves the current focus area to the target position information, that is, to the kuku video according to the planned shortest path. After the current focus area moves to the Youkou video, the intelligent television can simulate and confirm a key instruction, and the Youkou video is opened.
The inventor of the present application finds that, in most cases, the sizes of the first electronic device and the second electronic device are often different, and at this time, in order to ensure the experience of the user, the first electronic device may also send its own configuration information to the second electronic device.
When the configuration information of the first electronic device includes resolution information, the second electronic device receives the resolution information and displays the interface content in equal proportion, based on the resolution information and the size of its own screen, thereby obtaining the zoomed screen projection interface.
Displaying the interface content in equal proportion means that the ratio of the width to the height of the interface content of the first electronic device is the same as the ratio of the width to the height of the screen projection interface displayed on the second electronic device after projection. The zoomed interface content, namely the zoomed screen projection interface, is displayed on the second electronic device in equal proportion. The equal proportion principle keeps the ratio of width to height of the screen projection interface unchanged, prevents the screen projection interface from being deformed, and thus ensures the user experience.
Illustratively, when the first electronic device is a smart television, the second electronic device is a smartphone, and the resolution of the smart television is 1920 × 1080, the ratio of the width to the height of the interface content of the smart television may be recorded as 16/9. Due to the limitations of the smartphone screen, the smartphone displays the interface to be projected in equal proportion on its own screen; for example, the resolution of the interface content is reduced to 960 × 540, and the ratio of the width to the height of the screen projection interface is still 16/9.
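The equal-proportion reduction in the example above can be sketched as follows; the phone-side bounding size (960 × 720) is an assumption chosen only to reproduce the 960 × 540 result:

```python
def scale_equal_proportion(src_w: int, src_h: int, max_w: int, max_h: int):
    """Scale a source resolution to fit within (max_w, max_h) while keeping
    the width-to-height ratio unchanged (the equal proportion principle)."""
    factor = min(max_w / src_w, max_h / src_h)
    return round(src_w * factor), round(src_h * factor)
```

For instance, fitting the 1920 × 1080 television interface into a 960 × 720 region yields 960 × 540, and the 16/9 ratio is preserved.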
Furthermore, in order to ensure the aesthetic appearance, the screen projection interface can be positioned close to the middle position of the screen of the second electronic equipment.
In the above scheme, the second electronic device can generate coordinate information based on a touch screen event triggered by the user on the zoomed screen projection interface;
the second electronic device may convert the coordinate information to obtain target coordinate information, and send the target coordinate information to the first electronic device.
Specifically, when the user uses a finger or a stylus to operate in the screen projection interface and trigger a touch event, the second electronic device acquires the coordinate information of the touch event;
the second electronic device then converts the coordinate information to obtain the target coordinate information and sends it to the first electronic device.
Converting the coordinate information may include: and zooming the coordinate information according to an equal proportion principle to obtain target coordinate information.
When the smart television is projected to the smart phone, the size of the smart television is larger than that of the smart phone, so that the screen projection interface needs to be compressed according to an equal proportion principle, and the screen projection interface can be clearly displayed on the smart phone.
Correspondingly, if the content of the smart phone is projected on the smart television, the size of the smart television is larger than that of the smart phone, so that the screen projection interface needs to be amplified according to an equal proportion principle, and the screen projection interface can be clearly displayed on the smart television.
In order to facilitate understanding of the step of scaling the coordinate information according to the equal proportion principle, the following description takes a smart television and a smartphone as examples:
assuming that the relative coordinate information (x, y) of the position touched by the user is acquired on the smart phone (with the upper left corner of the touch area as a starting point), the relative coordinate information is converted into an absolute coordinate (nx, ny) (with the upper left corner of the whole screen as a starting point) according to an equal proportion principle, and the absolute coordinate is sent to the first electronic device.
It should be emphasized that the algorithm for converting the coordinate information is not limited in the embodiments of the present application.
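As a concrete sketch of the equal-proportion scaling described above: the function below maps a touch point, taken relative to the upper-left corner of the projection area, onto the first device's screen. The function name, parameter names, and the example resolutions are all illustrative assumptions; as noted, the application does not limit the conversion algorithm.

```python
def to_target_coordinates(x, y, area_size, target_size):
    """Convert a touch point (x, y), relative to the upper-left corner of
    the projection area on the second device, into coordinates on the
    first device's screen by equal-proportion scaling."""
    area_w, area_h = area_size
    target_w, target_h = target_size
    nx = x * target_w / area_w
    ny = y * target_h / area_h
    return nx, ny

# A 960x540 projection interface mapped onto a 1920x1080 television screen:
print(to_target_coordinates(480, 270, (960, 540), (1920, 1080)))  # (960.0, 540.0)
```

Because the scaling is linear in each axis, the same function serves both the compression case (television to phone) and the enlargement case (phone to television); only the two sizes swap roles.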
As a further aspect of the present invention, the configuration information may further include gesture information.
The first electronic device sends the gesture information to the second electronic device so that a blank area is generated on the second electronic device.
The first electronic device receives sliding track information sent by the second electronic device.
The first electronic device matches the sliding track information against preset sliding tracks; when a preset sliding track is matched successfully, the corresponding control instruction is determined from that preset sliding track. Preset sliding tracks are stored in the first electronic device in advance, and different preset sliding tracks correspond to different operation instructions.
The first electronic device may respond to the control instruction and execute the corresponding operation, thereby completing the operation on the interface content.
The preset sliding tracks may conflict with the sliding gestures inherent to the second electronic device. In view of this, the embodiment of the present application further includes:
the second electronic device judges whether the sliding track information corresponds to one of its own inherent sliding gestures or to a sliding track for controlling the first electronic device;
if the sliding track information corresponds to an inherent sliding gesture of the second electronic device, the second electronic device does not send the sliding track information; if it does not, the sliding track information is sent, and the first electronic device then matches it against the preset sliding tracks;
if the matching is successful, the first electronic device responds to the operation instruction triggered by the sliding track information and executes the corresponding operation.
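The forwarding decision on the second device and the matching step on the first device can be sketched as follows. The gesture names, the instruction names, and the use of simple string identifiers for tracks are all illustrative assumptions; the application does not limit how tracks are represented or matched.

```python
# Gestures the second device (phone) handles itself -- illustrative names.
INHERENT_TRACKS = {"swipe_up_from_bottom"}

# Preset tracks stored in the first device (television), mapped to
# operation instructions -- illustrative names.
PRESET_TRACKS = {
    "swipe_left_full": "PREVIOUS_PAGE",
    "swipe_right_full": "NEXT_PAGE",
    "swipe_down_half": "VOLUME_DOWN",
}

def should_forward(track_info):
    """Second device: forward the track to the first device only if it is
    not one of the phone's own inherent gestures."""
    return track_info not in INHERENT_TRACKS

def match_track(track_info):
    """First device: match the received track against the preset tracks and
    return the corresponding control instruction, or None if no match."""
    return PRESET_TRACKS.get(track_info)
```

With this split, an inherent gesture never leaves the phone, while any forwarded track that fails to match a preset track is simply ignored by the first device.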
The following describes the above scheme in detail by taking the first electronic device as a smart television, the second electronic device as a smart phone, and displaying a blank area on the smart phone as an example.
Referring to fig. 4, fig. 4 shows a screen projection interface displayed on a screen of a smartphone, where a space is left between a boundary of the screen projection interface and an adjacent screen boundary to form a blank area (i.e., a black frame portion in the drawing).
According to an embodiment of the application, preset sliding tracks are stored in the smart television; the smart television can determine the corresponding operation instruction from a preset sliding track and adjust the interface content accordingly. When the user slides from one end of the blank area to the other on the smart phone, the smart phone acquires the sliding track information and sends it to the smart television, which matches it against the preset sliding tracks. When the matching is successful, the smart television determines the control instruction corresponding to the matched track and executes the corresponding operation to adjust the interface content.
Meanwhile, if the sliding track corresponds to an inherent sliding gesture of the smart phone, the smart phone determines the corresponding control instruction itself, and the sliding track information is not sent to the smart television.
In addition, as another embodiment of the present application, the preset sliding track may be different from an inherent sliding track of the second electronic device, i.e., the smartphone.
Illustratively, when the user slides a finger within the screen projection interface, with both the starting point and the ending point of the sliding track located inside the screen projection interface, the track is used to control the content of the screen projection interface. When the starting point of the track is located in the blank area, or the starting point is in the screen projection interface but the ending point is in a blank area, the track is used to control the smart phone itself rather than the screen projection interface. In this way, conflicts between the smart phone's inherent sliding gestures and the sliding tracks that control the screen projection interface can be effectively avoided.
Furthermore, when the user needs to control the smart television through the smart phone, the user can slide a finger within the screen projection interface, so that the smart television determines the corresponding control instruction from the sliding track and performs the corresponding operation. If the user needs to operate the smart phone itself, for example to split the smart phone's screen, the sliding track for the split-screen operation instruction may be: the user's thumb is located in the screen projection interface, the index finger is located in the blank area, and the two fingers pinch toward each other. When the user slides on the smart phone's screen to form this track, the smart phone executes the split-screen operation.
In addition, the sliding track information can be configured manually, so that different sliding tracks correspond to different control instructions. For example, if a swipe from one end of the smart phone's screen toward the other covers more than half the screen width, the smart phone returns to its home page; if the swipe covers no more than half the screen width, the smart television returns to its home page.
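The half-screen rule just described might be expressed as follows; the function name and the instruction names are illustrative assumptions.

```python
def interpret_edge_swipe(distance, screen_width):
    """Map a swipe's length to the manually configured rule above:
    more than half the screen width -> the phone returns to its home page;
    otherwise -> the television returns to its home page."""
    if distance > screen_width / 2:
        return "PHONE_RETURN_HOME"
    return "TV_RETURN_HOME"

print(interpret_edge_swipe(700, 1080))  # PHONE_RETURN_HOME
print(interpret_edge_swipe(400, 1080))  # TV_RETURN_HOME
```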
As a further aspect of the present invention, the configuration information may further include virtual key information.
The first electronic device further sends the virtual key information to the second electronic device to generate a virtual key area on the second electronic device.
The first electronic device receives a virtual key instruction sent by the second electronic device, wherein the virtual key instruction is generated by the second electronic device based on the user's operation in the virtual key area.
The first electronic device responds to the virtual key instruction and executes the corresponding operation, thereby completing the operation on the interface content.
Correspondingly, in this further scheme, the second electronic device may display a virtual key area in addition to the screen projection interface and the blank area. The second electronic device can display the screen projection interface in the middle of its screen, with a blank area on one side of the screen projection interface and a virtual key area on the other. For example, referring to fig. 4, a virtual key area may be arranged on the left side of the screen projection interface in fig. 4 and may include a progress adjustment key and a volume adjustment key, while a blank area is arranged on the right side of the screen projection interface. When the user touches a virtual key in the virtual key area, a virtual key instruction corresponding to that key is generated and sent to the first electronic device, and the first electronic device executes the corresponding operation according to the virtual key instruction.
The virtual keys may include a direction adjustment control, a confirmation control, a volume adjustment control, and the like. When the user taps the direction adjustment control on the second electronic device, a move operation instruction is generated and sent to the first electronic device, and the first electronic device moves the cursor position according to the move operation instruction.
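The handling of a move operation instruction on the first device could be sketched like this; the one-step-per-press behavior, the direction names, and the coordinate convention (y grows downward) are assumptions for illustration.

```python
# Illustrative: one focus-grid step per direction-key press
# (y axis grows downward, as is usual for screen coordinates).
MOVES = {"left": (-1, 0), "right": (1, 0), "up": (0, -1), "down": (0, 1)}

def move_cursor(cursor, direction):
    """First device: shift the cursor position according to a move
    operation instruction generated by the direction adjustment control."""
    dx, dy = MOVES[direction]
    x, y = cursor
    return (x + dx, y + dy)

print(move_cursor((3, 5), "up"))  # (3, 4)
```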
Of course, the embodiment of the present application may also be applied to the case of multiple first electronic devices. Referring to fig. 5, fig. 5 is a schematic structural diagram of another screen projection interface displayed on the screen of a second electronic device according to an embodiment of the present disclosure. In fig. 5, the second electronic device is a smart phone, and the two first electronic devices are a smart television and a smart speaker. The smart phone can lay out the interface content according to the configuration information of the smart television and the smart speaker, combined with the size of its own screen.
Of course, if a plurality of first electronic devices without a touch screen project to one second electronic device, the second electronic device may likewise lay out the interface content according to the configuration information of each first electronic device, combined with the size of its own screen.
For example, suppose the configuration information of the smart television includes only resolution information, while the configuration information of the smart speaker includes resolution information, virtual key information, and gesture information. The user can project the interface content of the smart television and of the smart speaker to the smart phone simultaneously or in sequence. The smart phone can then divide its screen reasonably according to the configuration information of the two projecting devices and arrange the corresponding screen projection interfaces, virtual key area, and blank area. Taking the orientation of fig. 5 as an example, the screen projection interface corresponding to the smart television is displayed on the left side of the smart phone and the screen projection interface corresponding to the smart speaker on the right side; a virtual key area can be displayed above the smart speaker's screen projection interface, and a blank area below it.
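One very simple way the phone could divide its screen among several projecting devices is an equal-column split; this is a sketch only, since the application does not prescribe a layout algorithm, and the function name, the (x, y, width, height) region format, and the device names are all assumptions.

```python
def layout_columns(screen_size, device_names):
    """Split the phone screen into equal-width columns, one per projecting
    device, returning (x, y, width, height) regions keyed by device name."""
    width, height = screen_size
    col_w = width // len(device_names)
    return {name: (i * col_w, 0, col_w, height)
            for i, name in enumerate(device_names)}

# Two devices side by side, as in the fig. 5 arrangement:
print(layout_columns((1080, 2400), ["smart_tv", "smart_speaker"]))
```

A real layout would further subdivide each column according to that device's configuration information, e.g. reserving space for a virtual key area when virtual key information is present.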
In order to save bandwidth and ensure transmission efficiency, the first electronic device (e.g., a smart tv) may compress the interface content to a certain extent, so as to ensure that the picture outline in the interface content can be clearly displayed on the screen of the second electronic device (e.g., a smart phone).
It will be appreciated that the user may select the degree of compression of the interface content based on the actual circumstances.
An embodiment of the present application further provides an interface control system. The system includes a first electronic device and a second electronic device, where the first electronic device is configured to execute the interface control method described above, and the second electronic device is configured to execute steps including:
receiving the interface content sent by the first electronic device and generating a screen projection interface;
generating coordinate information based on a touch screen event of the user on the screen projection interface, and scaling the coordinate information to obtain target coordinate information;
and sending the target coordinate information to the first electronic device.
Further, the steps may also include: receiving configuration information sent by the first electronic device, wherein the configuration information includes resolution information;
scaling the interface content according to the resolution information and the equal-proportion principle to generate the screen projection interface;
generating coordinate information based on a touch screen event triggered by the user on the screen projection interface, and scaling the coordinate information according to the equal-proportion principle to obtain target coordinate information;
and sending the target coordinate information to the first electronic device.
Specifically, when the user operates with a finger or a stylus in the compressed screen projection interface and triggers a touch event, the second electronic device obtains the coordinate information of the touch event, converts it to generate the target coordinate information, and sends the target coordinate information to the first electronic device.
Of course, the configuration information may also include gesture information, and the second electronic device receives the gesture information and generates the blank area according to the gesture information.
The second electronic device acquires the sliding track information of the user in the blank area; when it judges that the sliding track information does not correspond to one of its own inherent gestures, it sends the sliding track information to the first electronic device, so that the first electronic device can execute the corresponding operation according to the sliding track information.
Further, the configuration information may further include virtual key information, and the second electronic device generates a corresponding virtual key region according to the virtual key information. The user clicks the virtual key in the virtual key area on the second electronic device, and the second electronic device can send the virtual key instruction to the first electronic device.
The embodiment of the application also provides an electronic device, shown in fig. 6, which is a schematic structural diagram of the electronic device. The electronic device 500 includes a processor (CPU, GPU, FPGA, etc.) 501, which can perform part or all of the processing in the embodiments shown in the drawings above according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (RAM) 503. Various programs and data necessary for the operation of the device 500 are also stored in the RAM 503. The processor 501, the ROM 502, and the RAM 503 are connected to one another by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read from it can be installed into the storage section 508 as needed.
In particular, according to embodiments of the present application, the method described above with reference to the figures may be implemented as a computer software program. For example, embodiments of the present application include a computer program product comprising a computer program tangibly embodied on a computer-readable medium, the computer program comprising program code for performing the methods shown in the figures. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation on the units or modules themselves.
As another embodiment, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the screen projection device in the above embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer-readable storage medium stores one or more programs, which are used by one or more processors to execute the control method of the electronic device described in the present application.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.