
WO2024069228A1 - A method for extending the display of a content on a first screen of a first device to a second screen of a second device, a device for enabling screen extension and a screen extension system - Google Patents


Info

Publication number
WO2024069228A1
Authority
WO
WIPO (PCT)
Prior art keywords
screen
movement
extension
images
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2023/000561
Other languages
French (fr)
Inventor
Nan Ye
Zhihong Guo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
Orange SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orange SA filed Critical Orange SA
Publication of WO2024069228A1 publication Critical patent/WO2024069228A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0464Positioning
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0492Change of orientation of the displayed image, e.g. upside-down, mirrored
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2356/00Detection of the display position w.r.t. other display screens
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/16Calculation or use of calculated indices related to luminance levels in display data
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/16Use of wireless transmission of display information

Definitions

  • the present invention relates to the field of image processing, in particular to a screen extension method, device and system.
  • screen extension technology has been developed to extend the display of a content on a first screen of a first device (such as a host device) to at least one additional screen of other additional device(s).
  • a first device such as a host device
  • the user can use multiple screens simultaneously for collaborative work, entertainment and other professional or individual purposes.
  • a user when a user wants to use a larger display size, in addition to a first screen, e.g. a screen of a mobile device, to get a better experience, he can combine more than one screen of additional device(s), e.g. a second screen of another mobile device, with the first screen into an integrated virtual screen for extending the display content of the first screen to the second screen.
  • additional device(s) e.g. a second screen of another mobile device
  • a display extension technology is already known, especially for PCs/laptops, which allows adding additional devices/screens to a host device in order to extend the screen onto the additional devices.
  • a first connected additional display is usually assigned by default a first predefined position, such as on the right side of the host computer for a right screen extension, while a second one is assigned by default a second predefined position, such as on the left side of the host computer for the left screen extension.
  • the order of connection of the additional display to the host computer defines by default the position of the additional display with respect to the host computer.
  • since the extension position of the additional display is assigned by default, when the actual position is different from the predefined position, the display content will not be correctly extended.
  • the first additional display is positioned on the left side of a host computer and is assigned by default a right position relatively to the host computer
  • the right extension display content will be incorrectly extended to the first additional display.
  • the second additional display is positioned on the right side of a host computer and is assigned by default a left position relatively to the host computer
  • the left extension display content will be incorrectly extended to the second additional display. Therefore, the extended display content is mismatched on the incorrect extension screen.
  • Patent application US 2021/0096802 discloses a computing system wherein data are collected by sensors, such as image sensors or positional sensors, to generate a 3D model of the real-world environment in which this computing system operates. After this 3D model has been generated, the position of a given display device of the system is calculated in this 3D model, and the displayed content is managed accordingly, without any user intervention.
  • the present invention aims thus to address these problems in the art to improve the overall performance and efficiency of screen extension, in a computationally simpler and more cost-effective way than the prior art systems.
  • a method for extending the display of a content on a first screen of a first device to a second screen of a second device which comprises:
  • the present invention provides an automatic way to correctly extend a screen, and therefore a user no longer needs to be aware in advance of the predefined connection order for screen extension and/or to manually adjust the screen extension configuration based on the actual screen positions.
  • the display content extension can be performed in a simple and cost-effective way, without having to build a 3D model of the real-world environment around both devices.
  • the second device comprises an on-device camera
  • the step of detecting the movement of the second device relatively to the first device comprises:
  • the on-device camera is a front camera.
  • the step of comparing the captured plurality of images comprises:
  • the displacement of the reference object is a translation and/or a rotation thereof.
  • the reference object is a user face in front of the on-device camera, especially when the on-device camera is a front camera.
  • the first device may also comprise an on-device camera, and the method further comprises:
  • the position of the second device is on the left, right, top or bottom side of the first device. That being said, other positions of the second device relatively to the first device can also be determined.
  • the method comprises, after the step of determining a position of the second device relatively to the first device based on the detected movement, connecting the second device to the first device in order to enable extending the display content on the first screen. Therefore, a position of the second device relatively to the first device can be learned before the actual display connection, which saves time in determining the extension position of the display content and further improves user experience.
  • the step of connecting the second device to the first device comprises:
  • connection can also be implemented, as long as it requires less system resources.
  • the second device is connected to the first device using a wired or wireless communication technology, for example a short-range wireless communication technology, such as Bluetooth.
  • a wired or wireless communication technology, for example a short-range wireless communication technology, such as Bluetooth.
  • the step of extending display content on the first screen of the first device to the second screen of the second device based on the detected position comprises transmitting extended content corresponding to the detected position to the second device for displaying said extended content on the second screen.
  • a device for enabling screen extension comprising:
  • a movement detection module for detecting a movement of said device relatively to another device
  • a position detection module for determining a position of said device relatively to said other device based on the detected movement
  • a display content extension module for extending display content on the screen of the other device to the screen of said device for enabling screen extension, based on the detected position.
  • Such a device provides an efficient and automatic screen extension.
  • the device as mentioned above comprises an on-device camera and the movement detection module comprises:
  • a screen extension system comprising a first device with a first screen and a second device as mentioned above, wherein the display content extension module of the second device is for extending display content on the first screen of the first device to the screen of the second device.
  • a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method as mentioned above.
  • Figure 1 is a flowchart illustrating an exemplary method according to the present invention
  • Figure 2 illustrates left and right screen extension according to the present invention
  • Figure 3 illustrates an example of detecting movement of the screen extension devices and determining a position of screen extension devices relatively to the first device based on the detected movement;
  • Figure 4 illustrates an alternative example of detecting movement of the screen extension devices and determining a position of screen extension devices relatively to the first device based on the detected movement
  • Figure 5 illustrates an example of connecting screen extension devices to the first device after detecting a position of screen extension devices based on the detected movement.
  • the method according to the present invention is adapted to be used on various kinds of devices including screens that can be connected for screen extension.
  • These devices include, but are not limited to, smartphones, tablets, in-car devices, augmented reality (AR)/virtual reality (VR) devices, laptops, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), artificial intelligence (AI) terminals and other terminal devices.
  • AR augmented reality
  • VR virtual reality
  • UMPC ultra-mobile personal computer
  • PDA personal digital assistants
  • AI artificial intelligence terminals and other terminal devices.
  • the embodiments of this application do not limit any specific type of these devices.
  • the first device contains a first screen
  • the second device contains a second screen, wherein the first screen is, for example, a host screen and the second screen is an additional screen to which the display content of the first screen can be extended.
  • the exemplary method comprises the following steps as shown in figure 1 :
  • a movement of the second device is detected, for example, by a movement detection module, such as a gyroscope, or an on-device camera, which will be detailed later;
  • movement detection module such as gyroscope, or an on-device camera, which will be detailed later;
  • a position of the second device relatively to the first device is determined based on the detected movement, such a position could be on the left, right, top or bottom side of the first device;
  • the display content on the first screen of the first device is extended to the second screen.
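The three steps above can be sketched as a minimal pipeline. This is an illustrative sketch only, not code from the patent: the three callables stand in for a movement detection module, a position detection module and a display content extension module, and all names are assumptions.

```python
# Minimal sketch of the three steps of figure 1. The callables stand in
# for the movement detection, position detection and display content
# extension modules; all names here are illustrative assumptions.

def screen_extension_flow(detect_movement, derive_position, extend_display):
    movement = detect_movement()           # step 1: e.g. gyroscope or on-device camera
    position = derive_position(movement)   # step 2: left/right/top/bottom of the first device
    extend_display(position)               # step 3: extend the first screen's content there
    return position
```

For instance, a detected right-to-left movement could be mapped to a left extension position before the content is transmitted.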
  • a first device 1 with a screen is present in the middle as a central host screen.
  • a second device 2 with a second screen, and possibly a third device 3 with a third screen, are also provided to connect to the first device for extending the display content of the host screen to the second and/or the third screens.
  • Before the screen extension, the first device 1 usually only displays content on its own screen, i.e. the first screen, as a host screen, which is for example stationary at a position.
  • the second device 2 and/or the third device 3 are moved closer to the first device 1, and their movements are detected. For example, when the second device 2 moves and approaches the left side of the first device 1 and/or the third device 3 moves and approaches the right side of the first device 1, these movements (especially the relative movements between the devices) are detected, for example by a movement detection module equipped on the second and the third devices or by another external movement detection module.
  • the positions of the second device 2 and the third device 3 relatively to the first device 1 are determined, for example by a position detection module equipped on the second and the third devices or by another external position detection module. As shown in figure 2, the position of the second device 2 is determined as being on the left side of the first device 1, and the position of the third device 3 is determined as being on the right side of the first device 1.
  • The movement detection concerns “target” device(s) (i.e. second device 2 or third device 3 in the example illustrated in figure 2, a “target” device being a device the screen of which one intends to extend a displayed content to) relatively to a “source” device (i.e. first device 1 in the example illustrated in figure 2, a “source” device being a device the screen of which a displayed content may be fully displayed on, i.e. without screen extension).
  • a relative position between devices is determined here (with possibly simple values such as “on the left side”, “on the right side”, “on the top side”, “on the bottom side”, etc.), not the absolute position of such a “target” device as it would be in a complex-to-build 3D model mapped to the real-world environment around the devices. It is also stressed here that the relative position of a “target” device with respect to a “source” device may be determined from the detected movement of the “target” device in a relatively straight-forward way.
  • the determination of the relative position of the second device 2 (respectively the third device 3) with respect to the first device 1 may comprise the verification as to whether the detected movement of the second device 2 (respectively the third device 3) is associated with a predetermined value of a relative position parameter which may define the position of the second device 2 (respectively the third device 3) relatively to the first device 1 .
  • this can be implemented by storing, in a memory of one of said devices (in particular of the “source” device 1 when it is in charge of deciding whether or not to extend its screen display to other devices), predetermined associations between possible movement values and possible relative position values of a “target” device. This way, when one of these possible movement values is detected for a given “target” device, the associated relative position value of this “target” device, with respect to the “source” device, is retrieved.
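Such stored associations between possible movement values and possible relative position values can be sketched as a simple lookup table. The movement labels and the mapping below are illustrative assumptions, not values from the patent; they follow the convention of figure 3, where a right-to-left displacement of the reference object indicates a device approaching from the left.

```python
from typing import Optional

# Predetermined associations between possible movement values of a
# "target" device and its relative position with respect to the
# "source" device. All labels are illustrative assumptions.
MOVEMENT_TO_POSITION = {
    "right_to_left": "left",    # reference object drifts left: device approached from the left
    "left_to_right": "right",   # reference object drifts right: device approached from the right
    "bottom_to_top": "top",
    "top_to_bottom": "bottom",
}

def relative_position(detected_movement: str) -> Optional[str]:
    """Retrieve the relative position associated with a detected
    movement, or None when no stored association matches."""
    return MOVEMENT_TO_POSITION.get(detected_movement)
```

A lookup of this kind is what keeps the determination computationally simple compared with building a 3D model of the environment.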
  • the display content on the first screen of the first device 1 may then be extended to the second screen and/or the third screen by a display content extension module with correct extension configurations. For instance, in the example illustrated in figure 2, after having determined that the relative position of second device 2 with respect to the first device 1 is “on the left side”, a left screen extension can be performed towards the second device 2, while after having determined that the relative position of third device 3 with respect to the first device 1 is “on the right side”, a right screen extension can be performed towards the third device 3.
  • the extension configuration can be automatically adjusted based on the actual relative position between the first device and the second and/or third device, without manual adjustments as required in the art.
  • the second device 2 and the third device 3 respectively comprise on-device cameras, preferably front cameras.
  • an exemplary screen extension on the second and the third devices will be discussed.
  • the screen extension on a different number of devices, i.e. two or more than three, can be implemented in a similar fashion according to the present invention.
  • the on-device cameras of the second device 2 and the third device 3 respectively capture a plurality of images, and then these captured images are respectively compared to detect the movement of the second and the third devices.
  • the second device 2 captures one image before its movement and another image after its movement, and these two images contain the same reference object with different positions, i.e. displacement, in the image.
  • the on-device camera is a front camera as shown in figures 3 and 4
  • the reference object is the portrait of a user who is in front of the device and is captured in the image.
  • a displacement of the reference object in the captured images is detected.
  • the position of the portrait in the images moves with the displacement, i.e. a translation movement in figure 3 and a rotation movement in figure 4.
  • the movement of the second device relatively to the first device can be determined.
  • the movement of the second device 2 is detected as approaching the left side of the first device 1 based on the displacement of the user’s portrait in the captured images, and therefore the position of the second device is then determined.
  • the third device 3 in figures 3 and 4 can be used to detect its movement and determine its position in a similar manner.
  • figures 3 and 4 show different types of the movement.
  • the second and the third devices 2, 3 make translational movements.
  • the second device 2 moves towards the first device from the left side and therefore the user’s portrait in the images has an obvious and detectable movement from right to left.
  • a right-to-left movement of the user’s portrait is detected, based on which a left extension position of the second device 2 is determined.
  • the third device 3 moves towards the first device from the right side and therefore the user’s portrait in the images has an obvious and detectable movement from left to right.
  • a left-to-right movement of the user’s portrait is detected, based on which a right extension position of the third device 3 is determined.
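The translational case can be sketched by comparing the horizontal coordinate of the reference object (e.g. the user's portrait) between an image captured before the movement and one captured after it. The normalised coordinates and the minimum-shift threshold below are assumptions for illustration, not values from the patent.

```python
from typing import Optional

def detect_translation(x_before: float, x_after: float,
                       min_shift: float = 0.1) -> Optional[str]:
    """Classify the horizontal displacement of the reference object
    between two captured images.

    x coordinates are assumed normalised to [0, 1] across the image
    width; min_shift is an illustrative threshold below which the
    displacement is considered too small to be conclusive.
    """
    shift = x_after - x_before
    if shift <= -min_shift:
        return "right_to_left"   # portrait moved leftwards in the images
    if shift >= min_shift:
        return "left_to_right"   # portrait moved rightwards in the images
    return None
```

As in figure 3, a right-to-left result would indicate a device approaching from the left side of the first device, hence a left extension position.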
  • the second and the third devices 2, 3 make rotational movements.
  • the second and the third devices rotate along a Z-axis.
  • the second device 2 rotates anti-clockwise and the user’s face in the portrait moves to the left part of the screen, and his left face directly faces the screen, which is captured by the front camera of the second device 2. Based on such a movement, the corresponding displacement is determined and a left extension position is then determined for the second device 2.
  • the third device 3 rotates clockwise and the user’s face in the portrait moves to the right part of the screen, and his right face directly faces the screen, which is captured by the front camera of the third device 3. Based on such a movement, the corresponding displacement is determined and a right extension position is then determined for the third device 3.
  • the translational and rotational movements may occur independently or simultaneously. Therefore, in an embodiment, in the present invention, both types of movements can be detected simultaneously, and one can be implemented as a complementary solution to the other, especially when the movement is not clear or stable.
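One way the two detections could complement each other is sketched below. The policy shown (require agreement when both cues are present, otherwise use whichever cue is available) is an assumption; the patent only states that one detection can complement the other when the movement is not clear or stable.

```python
from typing import Optional

def combine_cues(translation_cue: Optional[str],
                 rotation_cue: Optional[str]) -> Optional[str]:
    """Combine translation- and rotation-based movement estimates.

    Illustrative policy (an assumption): when both cues are available
    they must agree, otherwise the single available cue is used, and
    None means no reliable movement was detected.
    """
    if translation_cue and rotation_cue:
        return translation_cue if translation_cue == rotation_cue else None
    return translation_cue or rotation_cue
```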
  • any other objects can also be captured as reference object by either front or back on-device cameras, as long as the movement and displacement of such objects can be detected in the images.
  • the first device may also comprise an on-device camera, preferably a front camera.
  • the method according to the present invention further comprises:
  • an exemplary method comprises a step of connecting the second device to the first device in order to extend screen.
  • information about the detected movement is sent from the second/third device to the first device using either signalling messages or by establishing a temporary connection which is released after the information has been transmitted.
  • a data connection between the second/third device and the first device is then established, in order to enable the extension of the content displayed on the first screen of the first device to the screen(s) of the second/third device, i.e. by transmitting in this data connection data packets containing parts of the content to be displayed.
  • a single connection can be established beforehand between the second/third device and the first device, e.g. before the movement detection of this second/third device. Such a single connection may then be used for transmitting information about the detected movement of the second/third device as well as, when the screen extension is decided based on the relative position determination therefrom, data packets enabling the extension of the content displayed on the first screen of the first device to the screen(s) of the second/third device.
  • the step of connecting the second device to the first device comprises:
  • the above-mentioned connecting and transmitting steps are only performed when the first and second hash codes match each other, i.e. if they do not match, the second device does not connect with the first device, no information is thus transmitted to the first device and the screen of the first device is not extended to the screen of the second device.
  • these hash codes are compared in order to provide a similarity score, which can be compared to a similarity threshold above which both hash codes (thus both images they are derived from) are considered as matching.
  • a pHash algorithm can be used, which is known in the art and will not be discussed in detail in the present application.
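The hash comparison step can be illustrated with a simpler average hash standing in for the pHash named above (pHash additionally applies a discrete cosine transform before thresholding). The 8x8 grayscale thumbnail input and the bit-match similarity formula are assumptions for illustration.

```python
def average_hash(pixels_8x8):
    """64-bit hash of an 8x8 grayscale thumbnail: each bit records
    whether a pixel is brighter than the average. (A simpler stand-in
    for the pHash algorithm, which additionally uses a DCT.)"""
    avg = sum(pixels_8x8) / len(pixels_8x8)
    bits = 0
    for p in pixels_8x8:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hash_similarity(h1: int, h2: int, nbits: int = 64) -> float:
    """Similarity score in [0, 1]: the fraction of matching bits
    between two hash codes."""
    return 1.0 - bin(h1 ^ h2).count("1") / nbits
```

Two portrait images would then be considered as matching when this score exceeds the similarity threshold mentioned above.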
  • the first, the second and the third devices 1, 2, 3 may communicate via cables, such as a USB cable, or a wireless communication technology, especially a short-range wireless communication technology, such as the Bluetooth Advertising technology, to exchange information among these devices.
  • cables such as USB cable or wireless communication technology, especially a short range wireless communication technology, such as Bluetooth Advertising technology, to exchange the information among these devices.
  • the first device 1, which captures a portrait image via its on-device camera, puts a hash code of the captured portrait (e.g. “bc10abcde21efbcc” in figure 5) in the Protocol Data Unit (PDU) of the Bluetooth Advertising protocol. Then the first device 1 waits for the extension device (the second and/or the third devices 2, 3) to discover it and to inform it about the movements of the extension device.
  • a hash code of the captured portrait e.g. “bc10abcde21efbcc” in figure 5
  • PDU Protocol Data Unit
  • the extension devices e.g. the second and the third devices 2, 3, discover the first device nearby.
  • the extension device(s) will use the hash code in the candidate's PDU to compare it with the hash code of the portrait image captured by their own on-device camera(s) (e.g. “bc11abdde21efbb9” of the portrait image captured by the second device 2 and “bc11dadde22efbb2” of the portrait image captured by the third device 3, in figure 5). If the similarity of the two codes (e.g. 0.91 between the first and the second devices, and 0.94 between the first and the third devices) is higher than a threshold, e.g.
  • the extension device will consider that it is exactly the host screen (the first device) the extension devices are looking for. Then, the extension devices 2 and 3 may connect to the first device 1 and inform the latter about the movement direction of the portrait. According to this movement direction as received from the extension devices 2 and 3, the first device 1 will derive the extension position of the devices 2 and 3.
  • It is the extension device (i.e. the second and/or the third devices 2, 3) that detects its movement direction and the first device that derives the extension position from this movement direction, hence the sending of information about the movement direction from the extension device to the first device.
  • the position of this extension device does not have to be derived by the first device, but could also be derived by the extension device itself. In such a case, it is information about the position of the extension device, not about its movement direction, that is communicated to the first device, for the purpose of content transmission.
  • the position of the extension device relatively to the first device can also be derived by a third-party device/network server.
  • the extension device provides information about the movement direction to this third-party device/network server, which derives the extension device position relatively to the first device, then sends information about this position to the first device for the purpose of content transmission and screen extension.
  • the first device 1, which is for example a host device for the content to be displayed (i.e. the device storing or receiving this content, for instance a streamed video), extends its display content on its screen (i.e. the first screen) to the second screen and/or the third screen of the second device 2 and the third device 3. Since the extension position of the second device (e.g. left extension) and the third device (e.g. right extension) is already determined as mentioned above, there is no need to configure the extension position manually by the user. The following content transmission between the first device 1 and the extension devices 2, 3 can then proceed based on this automatically determined extension position configuration by means of any display content transmission technologies in the art.
  • the aforementioned examples according to the present invention can be implemented in many ways, such as program instructions for execution by a processor, as software modules, microcode, as computer program product on computer readable media, as logic circuits, as application specific integrated circuits, as firmware, etc.
  • the embodiments of the invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements.
  • the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the medium can be electronic, magnetic, optical, or a semiconductor system (or apparatus or device).
  • Examples of a computer-readable medium include, but are not limited to, a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a RAM, a read-only memory (ROM), a rigid magnetic disk, an optical disk, etc.
  • Current examples of optical disks include compact disk-read-only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention relates to a method for extending the display of a content on a first screen of a first device to a second screen of a second device, comprising: - detecting a movement of the second device; - determining a position of the second device relatively to the first device based on the detected movement; and - extending the display content on the first screen of the first device to the second screen of the second device based on the determined position. The invention improves the connection efficiency for screen extension on multiple devices.

Description

A METHOD FOR EXTENDING THE DISPLAY OF A CONTENT ON A FIRST SCREEN OF A FIRST DEVICE TO A SECOND SCREEN OF A SECOND DEVICE, A DEVICE FOR ENABLING SCREEN EXTENSION AND A SCREEN EXTENSION SYSTEM
FIELD OF INVENTION
The present invention relates to the field of image processing, in particular to a screen extension method, device and system.
BACKGROUND
With more and more screens of various devices being simultaneously used by a user, screen extension technology has been developed to extend the display of a content on a first screen of a first device (such as a host device) to at least one additional screen of other additional device(s). With such a technology, the user can use multiple screens simultaneously for collaborative work, entertainment and other professional or individual purposes.
For example, in a particular scenario, when a user wants to use a larger display size, in addition to a first screen, e.g. a screen of a mobile device, to get a better experience, he can combine more than one screen of additional device(s), e.g. a second screen of another mobile device, with the first screen into an integrated virtual screen for extending the display content of the first screen to the second screen.
To that end, a display extension technology is already known, especially for PCs/laptops, which allows adding additional devices/screens to a host device in order to extend the screen onto the additional devices. In such a prior art technology, for instance, when one connects an additional display to the host computer through, for example, a USB cable, a first connected additional display is usually assigned by default a first predefined position, such as on the right side of the host computer for a right screen extension, while a second one is assigned by default a second predefined position, such as on the left side of the host computer for a left screen extension. In other words, in the art, the order of connection of the additional display to the host computer defines by default the position of the additional display with respect to the host computer.
Although the abovementioned solution is well developed and widely used to extend screens, it has some drawbacks. Since the extension position of the additional display is assigned by default, when the actual position is different from the predefined position, the display content will not be correctly extended. For example, when the first additional display is positioned on the left side of a host computer but is assigned by default a right position relatively to the host computer, the right extension display content will be incorrectly extended to the first additional display. Similarly, when the second additional display is positioned on the right side of a host computer but is assigned by default a left position relatively to the host computer, the left extension display content will be incorrectly extended to the second additional display. The extended display content is therefore mismatched on the incorrect extension screen. To solve this problem, in the art, one can manually adjust the configuration on the host computer to correctly select the actual positions of the additional displays. Alternatively, one can manually change the physical positions or the connection order of the first and second additional displays, so as to match the predefined positions.
Therefore, in the art, to overcome the problem of misconfigured screen extension due to additional displays not being correctly positioned relatively to the host computer according to the predefined position arrangements mentioned above, the user needs to pay attention beforehand to the predefined position rules of the host computer and correctly connect the additional displays in the right order, needs to manually adjust the screen extension configuration on the host computer, or needs to manually reconnect the additional displays in the correct order or change their physical positions. Patent application US 2021/0096802, on the other hand, discloses a computing system wherein data are collected by sensors, such as image sensors or positional sensors, to generate a 3D model of the real-world environment in which this computing system operates. After this 3D model has been generated, the position of a given display device of the system is calculated in this 3D model, and the displayed content is managed accordingly, without any user intervention.
However, the process described in US 2021/0096802 is quite complex and requires powerful hardware in order to build a 3D model mapped to the real-world environment of the computing system: one should deploy 3D depth sensing cameras (which are powerful but extremely expensive) to achieve the best results, and in any case must deploy sufficiently powerful computing platforms (e.g. a PC with a powerful GPU) to process images from multiple cameras.
The present invention thus aims to address these problems in the art and to improve the overall performance and efficiency of screen extension, in a computationally simpler and more cost-effective way than the prior art systems.
SUMMARY
In this regard, according to one aspect of the invention, there is provided a method for extending the display of a content on a first screen of a first device to a second screen of a second device, which comprises:
- detecting a movement of the second device;
- determining a position of the second device relatively to the first device based on the detected movement; and
- extending the display content on the first screen of the first device to the second screen of the second device based on the determined position.
Such a method according to the present invention provides an automatic way to correctly extend a screen, so that a user no longer needs to be aware in advance of the predefined connection order for screen extension and/or to manually adjust the screen extension configuration based on the actual screen positions. In addition, since the position of the second device relatively to the first device is obtained in a simple way, from the detection of a movement of this second device, the display content extension can be performed in a simple and cost-effective way, without having to build a 3D model of the real-world environment around both devices.
In an embodiment, the second device comprises an on-device camera, and the step of detecting the movement of the second device relatively to the first device comprises:
- capturing a plurality of first images by the on-device camera of the second device;
- comparing the captured first images in order to detect the movement of the second device.
Therefore, ordinary on-device cameras can be cleverly used for an unexpected purpose.
In an embodiment, the on-device camera is a front camera.
Furthermore, the step of comparing the captured plurality of first images comprises:
- detecting a reference object in the captured images;
- detecting a displacement of the reference object in the captured images; and
- determining the movement of the second device relative to the first device based on the detected displacement of the reference object in the captured images.
Such an arrangement provides an efficient and simple way to detect the movement.
In an embodiment, the displacement of the reference object is a translation and/or a rotation thereof.
In an embodiment, the reference object is a user's face in front of the on-device camera, especially when the on-device camera is a front camera. Alternatively, the first device may also comprise an on-device camera, and the method further comprises:
- capturing a second image by the on-device camera of the first device; and
- verifying the determined position of the second device relatively to the first device, by comparing the second image with one of the first images; wherein the step of extending the display content is performed only if the verification of the determined position of the second device relatively to the first device outputs a positive result.
Such an arrangement provides more accurate movement detection and position determination.
In an embodiment, the position of the second device is on the left, right, top or bottom side of the first device. That being said, other positions of the second device relatively to the first device can also be determined.
In an embodiment, the method comprises, after the step of determining a position of the second device relatively to the first device based on the detected movement, connecting the second device to the first device in order to enable extending the display content on the first screen. Therefore, the position of the second device relatively to the first device can be learned before the actual display connection, which saves time in determining the extension position of the display content and further improves the user experience.
In an embodiment, the step of connecting the second device to the first device comprises:
- calculating a first hash code of an image captured by the first device;
- calculating a second hash code of one of the images captured by the second device;
- determining whether these first and second hash codes match each other; and
- when said first and second hash codes are matching, connecting the second device with the first device and transmitting, from the second device to the first device, an information about the detected movement and/or the determined position of the second device.
Alternatively, other methods for the connection can also be implemented, as long as they require few system resources.
Moreover, the second device is connected to the first device using a wired or wireless communication technology, for example a short-range wireless communication technology, such as Bluetooth.
In an embodiment, the step of extending the display content on the first screen of the first device to the second screen of the second device based on the detected position comprises transmitting extended content corresponding to the detected position to the second device for displaying said extended content on the second screen.
According to another aspect of the invention, there is provided a device for enabling screen extension, comprising:
- a screen;
- a movement detection module for detecting a movement of said device relatively to another device;
- a position detection module for determining a position of said device relatively to said other device based on the detected movement; and
- a display content extension module for extending display content on the screen of the other device to the screen of said device for enabling screen extension, based on the detected position.
Such a device provides an efficient and automatic screen extension.
In an embodiment, the device as mentioned above comprises an on-device camera and the movement detection module comprises:
- a module for capturing a plurality of images by the on-device camera;
- a module for comparing the captured images in order to detect the movement of the device for enabling screen extension.
According to yet another aspect of the invention, there is provided a screen extension system comprising a first device with a first screen and a second device as mentioned above, wherein the display content extension module of the second device is for extending display content on the first screen of the first device to the screen of the second device.
According to yet another aspect of the invention, there is provided a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method as mentioned above.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features and advantages of the present invention will appear in the description hereinafter, in reference to the appended drawings, where:
Figure 1 is a flowchart illustrating an exemplary method according to the present invention;
Figure 2 illustrates left and right screen extension according to the present invention;
Figure 3 illustrates an example of detecting movement of the screen extension devices and determining a position of screen extension devices relatively to the first device based on the detected movement;
Figure 4 illustrates an alternative example of detecting movement of the screen extension devices and determining a position of screen extension devices relatively to the first device based on the detected movement; and
Figure 5 illustrates an example of connecting screen extension devices to the first device after detecting a position of screen extension devices based on the detected movement.
DESCRIPTION OF EMBODIMENTS
Hereinafter, exemplary embodiments according to the present invention will be described in detail with reference to the accompanying drawings.
The method according to the present invention is adapted to be used on various kinds of devices including screens that can be connected for screen extension. These devices include, but are not limited to, smartphones, tablets, in-car devices, augmented reality (AR)/virtual reality (VR) devices, laptops, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), artificial intelligence (AI) terminals and other terminal devices. The embodiments of this application do not limit these devices to any specific type.
An exemplary method according to the present invention is now discussed with reference to the flowchart of figure 1. In this exemplary method, a first and a second device are used. The first device contains a first screen, and the second device contains a second screen, wherein the first screen is, for example, a host screen and the second screen is an additional screen to which the display content of the first screen can be extended.
The exemplary method comprises the following steps as shown in figure 1 :
- at step S1, a movement of the second device is detected, for example by a movement detection module, such as a gyroscope, or by an on-device camera, as will be detailed later;
- at step S2, once the movement is detected, a position of the second device relatively to the first device is determined based on the detected movement; such a position could be on the left, right, top or bottom side of the first device; and
- at step S3, based on the determined position, the display content on the first screen of the first device is extended to the second screen.
For example, as shown in figure 2, by virtue of the abovementioned method, a first device 1 with a screen is present in the middle as a central host screen. A second device 2 with a second screen, and possibly a third device 3 with a third screen, are also provided to connect to the first device for extending the display content of the host screen to the second and/or the third screens.
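For illustration only, the three steps S1 to S3 might be organised as in the following minimal Python sketch; all function names, movement values and the mapping used here are hypothetical assumptions, not part of the claimed method:

```python
# Hypothetical end-to-end sketch of steps S1 to S3.

def detect_movement(device):
    """S1: detect the second device's movement, e.g. from a gyroscope
    or from successive on-device camera frames (detection not shown)."""
    return device["sensed_movement"]

def determine_relative_position(movement):
    """S2: a device approaching a given side of the first device is
    assigned that side as its position relatively to the first device."""
    return {"approaching left side": "left",
            "approaching right side": "right"}.get(movement)

def extend_display(display_content, position):
    """S3: select the extension region of the display content that
    matches the determined relative position."""
    return display_content["extensions"][position]

device2 = {"sensed_movement": "approaching left side"}
content = {"extensions": {"left": "left extension region",
                          "right": "right extension region"}}
position = determine_relative_position(detect_movement(device2))
print(extend_display(content, position))  # -> left extension region
```
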
Before the screen extension, the first device 1 usually only displays content on its own screen, i.e. the first screen, as a host screen, which is for example stationary at a position. In order to extend the screen, the second device 2 and/or the third device 3 are moved closer to the first device 1, and their movements are detected. For example, when the second device 2 moves and approaches the left side of the first device 1 and/or the third device 3 moves and approaches the right side of the first device 1, these movements (especially the relative movements between the devices) are detected, for example by a movement detection module equipped on the second and the third devices or by another, external movement detection module.
Based on such a movement detection, the positions of the second device 2 and the third device 3 relatively to the first device 1 are determined, for example by a position detection module equipped on the second and the third devices or by another, external position detection module. As shown in figure 2, the position of the second device 2 is determined as being on the left side of the first device 1, and the position of the third device 3 is determined as being on the right side of the first device 1.
It is stressed here that what is determined at this stage is the position of one or more “target” device(s) (i.e. second device 2 or third device 3 in the example illustrated in figure 2, a “target” device being a device the screen of which one intends to extend a displayed content to) relatively to a “source” device (i.e. first device 1 in the example illustrated in figure 2, a “source” device being a device the screen of which a displayed content may be fully displayed on, i.e. without screen extension).
In other words, a relative position between devices is determined here (with possibly simple values such as “on the left side”, “on the right side”, “on the top side”, “on the bottom side”, etc.), not the absolute position of such a “target” device as it would be in a complex-to-build 3D model mapped to the real-world environment around the devices. It is also stressed here that the relative position of a “target” device with respect to a “source” device may be determined from the detected movement of the “target” device in a relatively straightforward way. In particular, the determination of the relative position of the second device 2 (respectively the third device 3) with respect to the first device 1 may comprise verifying whether the detected movement of the second device 2 (respectively the third device 3) is associated with a predetermined value of a relative position parameter which may define the position of the second device 2 (respectively the third device 3) relatively to the first device 1.
In particular, this can be implemented by storing, in a memory of one of said devices (in particular of the “source” device 1 when it is in charge of deciding whether or not to extend its screen display to other devices), predetermined associations between possible movement values and possible relative position values of a “target” device. This way, when one of these possible movement values is detected for a given “target” device, the associated relative position value of this “target” device, with respect to the “source” device, is retrieved.
The table below gives a non-limiting example of such a predefined association (wherein moving a 2nd device closer to a given side of the 1st device defines its position with respect to this 1st device), which enables a very simple way of implementing the determination of the relative position of a second device (e.g. device 2 or 3 in figure 2) with respect to a first device (device 1 in figure 2):
Detected movement of the 2nd device                  Position relatively to the 1st device
Moving closer to the left side of the 1st device     On the left side
Moving closer to the right side of the 1st device    On the right side
Moving closer to the top side of the 1st device      On the top side
Moving closer to the bottom side of the 1st device   On the bottom side
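Such a stored association between movement values and relative position values can be sketched as a simple lookup, as below; the string values used here are illustrative assumptions, not values taken from the publication:

```python
# Hypothetical association table stored in a memory of the "source" device:
# detected movement value -> relative position value of the "target" device.
MOVEMENT_TO_POSITION = {
    "closer_to_left_side": "on the left side",
    "closer_to_right_side": "on the right side",
    "closer_to_top_side": "on the top side",
    "closer_to_bottom_side": "on the bottom side",
}

def relative_position(detected_movement):
    """Retrieve the relative position associated with a detected movement,
    or None when the movement matches no predetermined value."""
    return MOVEMENT_TO_POSITION.get(detected_movement)

print(relative_position("closer_to_left_side"))  # -> on the left side
```
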
Accordingly, based on these determined relative positions, which correspond to the actual physical positions of the second and third devices 2, 3 relatively to the first device 1, the display content on the first screen of the first device 1 may then be extended to the second screen and/or the third screen by a display content extension module with correct extension configurations. For instance, in the example illustrated in figure 2, after having determined that the relative position of the second device 2 with respect to the first device 1 is “on the left side”, a left screen extension can be performed towards the second device 2, while after having determined that the relative position of the third device 3 with respect to the first device 1 is “on the right side”, a right screen extension can be performed towards the third device 3.
With such an arrangement, the extension configuration can be automatically adjusted based on the actual relative position between the first device and the second and/or third device, without manual adjustments as required in the art.
In order to detect the movement and determine the position of the second and/or third devices relatively to the first device, several movement detection technologies can be implemented according to the present invention. For example, a gyroscope, a G-Sensor or GPS can be used. More interestingly, in the present invention, another clever solution is proposed to detect the movement and determine the position by using an on-device camera on the device, which will be discussed below.
In one exemplary embodiment, the second device 2 and the third device 3 respectively comprise on-device cameras, preferably front cameras. As an example, an exemplary screen extension on the second and the third devices will be discussed hereinafter. However, screen extension with a different number of devices, i.e. two or more than three, can be implemented in a similar fashion according to the present invention.
In particular, to detect the movement of the second device 2 and the third device 3 relatively to the first device 1, the on-device cameras of the second device 2 and the third device 3 respectively capture a plurality of images, and then these captured images are respectively compared to detect the movement of the second and the third devices.
More particularly, as shown in figures 3 and 4, the second device 2 captures one image before its movement and another image after its movement, and these two images contain the same reference object at different positions, i.e. with a displacement, in the image. For example, when the on-device camera is a front camera as shown in figures 3 and 4, the reference object is the portrait of a user who is in front of the device and is captured in the image.
Afterwards, a displacement of the reference object in the captured images is detected. For example, in the two images respectively captured before and after the movement of the device 2, the position of the portrait in the images moves with the displacement, i.e. a translation movement in figure 3 and a rotation movement in figure 4. By calculating such a displacement, the movement of the second device relatively to the first device can be determined. For example, in figures 3 and 4, the movement of the second device 2 is detected as approaching the left side of the first device 1 based on the displacement of the user’s portrait in the captured images, and the position of the second device is then determined.
Similarly, the movement of the third device 3 in figures 3 and 4 can be detected and its position determined in a similar manner.
Moreover, regarding the movement of the devices, figures 3 and 4 show different types of movement.
In figure 3, the second and the third devices 2, 3 make translational movements. In particular, the second device 2 moves towards the first device from the left side and therefore the user’s portrait in the images has an obvious and detectable movement from right to left. With this characteristic, a right-to-left movement of the user’s portrait is detected, based on which a left extension position of the second device 2 is determined. Similarly, the third device 3 moves towards the first device from the right side and therefore the user’s portrait in the images has an obvious and detectable movement from left to right. With this characteristic, a left-to-right movement of the user’s portrait is detected, based on which a right extension position of the third device 3 is determined.
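The translational detection described above can be sketched as follows, representing the reference object (the user's portrait) only by the horizontal coordinate of its centre in each captured frame; the face-detection step itself is assumed to be provided by any standard detector and is not shown, and the pixel threshold and side mapping are illustrative assumptions:

```python
def portrait_displacement(centers_x, min_shift=20):
    """Classify the horizontal displacement of the reference object
    across successively captured frames (pixel x-coordinates, in
    capture order); total shifts below min_shift are treated as noise."""
    shift = centers_x[-1] - centers_x[0]
    if abs(shift) < min_shift:
        return None  # movement not clear or stable
    return "left_to_right" if shift > 0 else "right_to_left"

# In the example of figure 3, a right-to-left portrait displacement in
# the frames of the second device corresponds to a left extension
# position, and a left-to-right displacement to a right extension.
EXTENSION_SIDE = {"right_to_left": "left", "left_to_right": "right"}

direction = portrait_displacement([320, 250, 180])  # portrait drifting left
print(EXTENSION_SIDE[direction])  # -> left
```
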
In figure 4, the second and the third devices 2, 3 make rotational movements. In particular, the second and the third devices rotate about a Z-axis. The second device 2 rotates anti-clockwise, so that the user’s face in the portrait moves to the left part of the screen and his left face directly faces the screen, which is captured by the front camera of the second device 2. Based on such a movement, the corresponding displacement is determined and a left extension position is then determined for the second device 2. Similarly, the third device 3 rotates clockwise, so that the user’s face in the portrait moves to the right part of the screen and his right face directly faces the screen, which is captured by the front camera of the third device 3. Based on such a movement, the corresponding displacement is determined and a right extension position is then determined for the third device 3.
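The rotational case can be sketched in the same spirit, here from a hypothetical estimated yaw angle of the reference face about the Z-axis; the sign convention and noise threshold below are assumptions for illustration, not taken from the publication:

```python
def rotation_extension_side(yaw_degrees, min_angle=10.0):
    """Map an estimated device rotation about the Z-axis to an extension
    side: anti-clockwise (negative yaw, by the assumed convention)
    -> left extension, clockwise (positive yaw) -> right extension.
    Rotations smaller than min_angle degrees are ignored as noise."""
    if abs(yaw_degrees) < min_angle:
        return None
    return "right" if yaw_degrees > 0 else "left"

print(rotation_extension_side(-25.0))  # anti-clockwise -> left
```
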
The translational and rotational movements may occur independently or simultaneously. Therefore, in an embodiment of the present invention, both types of movement can be detected simultaneously, and one can be implemented as a complementary solution to the other, especially when the movement is not clear or stable.
Moreover, in addition to capturing the user’s portrait in front of the devices, any other objects can also be captured as reference objects by either front or back on-device cameras, as long as the movement and displacement of such objects can be detected in the images.
Alternatively, as shown in figures 3 and 4, the first device may also comprise an on-device camera, preferably a front camera. In this case, the method according to the present invention further comprises:
- capturing a second image by the on-device camera of the first device, wherein this second image is static, as the first device is usually used as a host device at a stationary position; and
- verifying the determined position of the second/third devices relatively to the first device, by comparing the second image with one of the first images;
The following step of extending the display content will be performed only if the verification of the determined position of the second device relatively to the first device outputs a positive result.
In an embodiment, after determining a position of the second/third device based on the detected movement, an exemplary method according to the present invention comprises a step of connecting the second device to the first device in order to extend the screen. In this embodiment, when the movement of the second/third device is detected by the second/third device while its relative position with respect to the first device is determined by the first device, an information about the detected movement is sent from the second/third device to the first device either using signalling messages or by establishing a temporary connection which is released after the information has been transmitted. When it is decided to extend the screen to the second/third device based on this information and the relative position determined therefrom, a data connection between the second/third device and the first device is then established, in order to enable the extension of the content displayed on the first screen of the first device to the screen(s) of the second/third device, i.e. by transmitting over this data connection data packets containing parts of the content to be displayed.
Alternatively, a single connection can be established beforehand between the second/third device and the first device, e.g. before the movement detection of this second/third device. Such a single connection may then be used for transmitting information about the detected movement of the second/third device as well as, when the screen extension is decided based on the relative position determined therefrom, data packets enabling the extension of the content displayed on the first screen of the first device to the screen(s) of the second/third device.
In an advantageous embodiment, the step of connecting the second device to the first device comprises:
- calculating a first hash code of an image captured by the first device;
- calculating a second hash code of one of the images captured by the second device;
- determining whether these first and second hash codes match each other;
- when said first and second hash codes are matching, connecting the second device with the first device and transmitting, from the second device to the first device, an information about the detected movement and/or the determined position of the second device.
In a specific implementation of this embodiment, the above-mentioned connecting and transmitting steps are only performed when the first and second hash codes match each other, i.e. if they do not match, the second device does not connect with the first device, no information is thus transmitted to the first device and the screen of the first device is not extended to the screen of the second device.
In particular, in order to determine whether the first and second hash codes match each other, these hash codes are compared in order to provide a similarity score, which can be compared to a similarity threshold above which both hash codes (thus both images they are derived from) are considered as matching. For performing such a comparison and similarity score calculation, a pHash algorithm can be used, which is known in the art and will not be discussed in detail in the present application.
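As an illustration of this hash comparison step, the sketch below uses a simplified average hash as a stand-in for the pHash algorithm mentioned above (a real pHash additionally applies a DCT before thresholding), operating on already-downscaled 8x8 grayscale grids, and derives a similarity score from the fraction of matching bits between the two 64-bit codes; the image data and the structure of the comparison are assumptions for illustration:

```python
def average_hash(grid):
    """Simplified perceptual hash of an 8x8 grayscale grid: each bit
    is 1 when the corresponding pixel is above the mean brightness."""
    pixels = [p for row in grid for p in row]
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

def similarity(hash_a, hash_b):
    """Similarity score: fraction of matching bits between two
    equal-length hash codes (1 minus the normalized Hamming distance)."""
    matches = sum(a == b for a, b in zip(hash_a, hash_b))
    return matches / len(hash_a)

# Two hypothetical 8x8 captures of the same portrait, the second one
# with slight brightness noise added to some pixels.
img_host = [[10 * r + c for c in range(8)] for r in range(8)]
img_ext = [[10 * r + c + (1 if (r + c) % 7 == 0 else 0)
            for c in range(8)] for r in range(8)]

score = similarity(average_hash(img_host), average_hash(img_ext))
if score > 0.85:  # similarity threshold, as in the example of figure 5
    print("hash codes match: connect and transmit movement information")
```
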
In a more particular example, as shown in figure 5, the first, the second and the third devices 1, 2, 3 may communicate via cables, such as a USB cable, or via a wireless communication technology, especially a short-range wireless communication technology, such as the Bluetooth Advertising technology, to exchange information among these devices.
According to one exemplary embodiment, the first device 1, which captures a portrait image via its on-device camera, puts a hash code of the captured portrait (e.g. “bc10abcde21efbcc” in figure 5) in the Protocol Data Unit (PDU) of the Bluetooth Advertising protocol. Then the first device 1 waits for the extension devices (the second and/or the third devices 2, 3) to discover it and inform it about their movements.
Afterwards, the extension devices, e.g. the second and the third devices 2, 3, discover the first device nearby. Once a candidate is found, the extension device(s) will compare the hash code in the candidate's PDU with the hash code of the portrait image captured by their own on-device camera(s) (e.g. “bc11abdde21efbb9” for the portrait image captured by the second device 2 and “bc11dadde22efbb2” for the portrait image captured by the third device 3, in figure 5). If the similarity of the two codes (e.g. 0.91 between the first and the second devices, and 0.94 between the first and the third devices) is higher than a threshold, e.g. 0.85, the extension device will consider that this is exactly the host screen (the first device) it is looking for. Then, the extension devices 2 and 3 may connect to the first device 1 and inform the latter about the movement direction of the portrait. From this movement direction as received from the extension devices 2 and 3, the first device 1 will derive the extension positions of the devices 2 and 3.
It should be noted that, in this embodiment, it is the extension device (i.e. the second and/or the third devices 2, 3) that detects its movement direction and the first device that derives the extension position from this movement direction, hence the sending of an information about the movement direction from the extension device to the first device.
However, alternatively, since the movement direction of the extension device is naturally detected by the extension device, the position of this extension device does not have to be derived by the first device, but could also be derived by the extension device itself. In such a case, it is an information about the position of the extension device, not about its movement direction, that is communicated to the first device, for the purpose of content transmission.
In another alternative, the position of the extension device relatively to the first device can also be derived by a third-party device/network server. In that case, the extension device provides an information about the movement direction to this third-party device/network server, which derives the extension device position relatively to the first device, then sends an information about this position to the first device for the purpose of content transmission and screen extension.
Afterwards, the first device 1, which is for example a host device for the content to be displayed (i.e. the device storing or receiving this content, for instance a streamed video), extends the display content on its screen (i.e. the first screen) to the second screen and/or the third screen of the second device 2 and the third device 3. Since the extension position of the second device (e.g. the left extension) and of the third device (e.g. the right extension) is already determined as mentioned above, there is no need for the user to configure the extension position manually. The subsequent content transmission between the first device 1 and the extension devices 2, 3 can proceed based on this automatically determined extension position configuration by means of any display content transmission technology in the art. As these technologies already exist and are known to the skilled person in the art, they will not be discussed.
Moreover, as is known to those skilled in the art, the aforementioned examples according to the present invention can be implemented in many ways, such as program instructions for execution by a processor, as software modules, microcode, as a computer program product on computer readable media, as logic circuits, as application specific integrated circuits, as firmware, etc. The embodiments of the invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
Furthermore, the embodiments of the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer, processing device, or any instruction execution system. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, or semiconductor system (or apparatus or device). Examples of a computer-readable medium include, but are not limited to, a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a RAM, a read-only memory (ROM), a rigid magnetic disk, an optical disk, etc. Current examples of optical disks include compact disk-read-only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
The embodiments described hereinabove are illustrations of this invention. Various modifications can be made to them without departing from the scope of the invention, which stems from the annexed claims. In particular, a specific example has been illustrated where the screen of a first device is extended to the screens of two other devices, but the invention is not limited to this specific example and can be applied to extend the screen of a first device to the screen(s) of a single or any number of secondary device(s).
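The automatic determination of the extension position described above can be illustrated with a short sketch. The following is a minimal, hypothetical illustration rather than the claimed implementation: it assumes that a reference object, such as the user's face, has already been located in two successive frames captured by the secondary device's camera, and classifies the displacement of its centroid into one of the four extension positions (left, right, top or bottom). The function name, the pixel threshold and the centroid representation are assumptions made for this sketch only.

```python
def classify_extension_position(start, end, min_shift=10):
    """Classify a secondary device's extension position from the apparent
    displacement of a reference object (e.g. the user's face) between two
    captured frames.

    start, end: (x, y) centroids of the reference object, in pixels,
    with the origin at the top-left of the frame.

    When the device is moved to the right of the host, the scene (and
    hence the reference object) appears to shift left in its camera
    frames, and vice versa; vertical movement behaves analogously.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if abs(dx) < min_shift and abs(dy) < min_shift:
        return None  # displacement too small to classify reliably
    if abs(dx) >= abs(dy):
        # object drifts left in the frame => device moved to the right
        return "right" if dx < 0 else "left"
    # object drifts up in the frame => device moved downwards
    return "bottom" if dy < 0 else "top"
```

For instance, a face centroid moving from (320, 240) to (250, 238) drifts left in the frame, so the secondary device is classified as the right-hand extension.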

Claims

1. A method for extending the display of a content on a first screen of a first device to a second screen of a second device, comprising:
- detecting a movement of the second device;
- determining a position of the second device relative to the first device based on the detected movement; and
- extending the display content on the first screen of the first device to the second screen of the second device based on the determined position.
2. The method according to claim 1, wherein the second device comprises an on-device camera, and wherein the step of detecting the movement of the second device relative to the first device comprises:
- capturing a plurality of first images by the on-device camera of the second device; and
- comparing the captured first images in order to detect the movement of the second device.
3. The method according to claim 2, wherein the step of comparing the captured first images comprises:
- detecting a reference object in the captured images;
- detecting a displacement of the reference object in the captured images; and
- determining the movement of the second device relative to the first device based on the detected displacement of the reference object in the captured images.
4. The method according to claim 3, wherein the displacement of the reference object is a translation and/or a rotation thereof.
5. The method according to claim 3 or 4, wherein the reference object is a user's face in front of the on-device camera.
6. The method according to any one of claims 2 to 5, wherein the first device comprises an on-device camera, the method further comprising:
- capturing a second image by the on-device camera of the first device; and
- verifying the determined position of the second device relative to the first device by comparing the second image with one of the first images; the step of extending the display content being performed only if the verification of the determined position of the second device relative to the first device outputs a positive result.
7. The method according to any one of claims 1 to 6, wherein the position of the second device is on the left, right, top or bottom side of the first device.
8. The method according to any one of claims 1 to 7, further comprising, after the step of determining a position of the second device relative to the first device based on the detected movement, connecting the second device to the first device in order to enable extending the display content on the first screen.
9. The method according to claim 6, further comprising a step of connecting the second device to the first device, said step comprising:
- calculating a first hash code of an image captured by the first device;
- calculating a second hash code of one of the images captured by the second device;
- determining whether said first and second hash codes match each other;
- when said first and second hash codes match, connecting the second device with the first device and transmitting, from the second device to the first device, information about the detected movement and/or the determined position of the second device.
10. The method according to claim 8 or 9, wherein the second device is connected to the first device using a short-range wireless communication technology.
11. The method according to any one of claims 1 to 10, wherein the step of extending the display content on the first screen of the first device to the second screen of the second device based on the determined position comprises transmitting extended content corresponding to the determined position to the second device for displaying said extended content on the second screen.
12. A device for enabling screen extension, comprising:
- a screen;
- a movement detection module for detecting a movement of said device relative to another device;
- a position detection module for determining a position of said device relative to said other device based on the detected movement; and
- a display content extension module for extending display content on the screen of the other device to the screen of said device for enabling screen extension, based on the determined position.
13. The device for enabling screen extension according to claim 12, wherein the device comprises an on-device camera and wherein the movement detection module comprises:
- a module for capturing a plurality of images by the on-device camera;
- a module for comparing the captured images in order to detect the movement of the device for enabling screen extension.
14. A screen extension system comprising a first device with a first screen and a second device according to claim 12 or 13, wherein the display content extension module of the second device is for extending display content on the first screen of the first device to the screen of the second device.
15. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of any one of claims 1 to 11.
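The hash-based connection step of claim 9 can be sketched as follows. This is an illustrative assumption rather than the claimed implementation: it uses a difference hash (dHash) of the kind described in the cited Krawetz article ("Kind of like that"), computed here over a small grayscale thumbnail supplied as a 2D list of intensities, and treats two hash codes as matching when their Hamming distance stays under a threshold. The function names, the thumbnail representation and the threshold value are assumptions made for this sketch only.

```python
def dhash(pixels):
    """Difference hash of a grayscale thumbnail given as a 2D list of
    intensities: each bit records whether a pixel is brighter than its
    right-hand neighbour, so the hash captures the image's gradient
    structure rather than exact pixel values."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hash codes."""
    return bin(a ^ b).count("1")

def hashes_match(h1, h2, max_distance=10):
    """Per claim 9: the devices are connected only when the first and
    second hash codes match, i.e. when their Hamming distance stays
    under a small threshold (two near-identical captures of the same
    scene yield near-identical difference hashes)."""
    return hamming(h1, h2) <= max_distance
```

Because dHash depends only on relative brightness between neighbouring pixels, two images of the same scene captured by the first and second devices under slightly different exposure still produce matching codes, which is what makes it suitable for this pairing check.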
PCT/IB2023/000561 2022-09-30 2023-09-29 A method for extending the display of a content on a first screen of a first device to a second screen of a second device, a device for enabling screen extension and a screen extension system Ceased WO2024069228A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2022/123513 2022-09-30
PCT/CN2022/123513 WO2024065775A1 (en) 2022-09-30 2022-09-30 A method for extending the display of a content on a first screen of a first device to a second screen of a second device, a device for enabling screen extension and a screen extension system

Publications (1)

Publication Number Publication Date
WO2024069228A1 true WO2024069228A1 (en) 2024-04-04

Family

ID=84047573

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2022/123513 Ceased WO2024065775A1 (en) 2022-09-30 2022-09-30 A method for extending the display of a content on a first screen of a first device to a second screen of a second device, a device for enabling screen extension and a screen extension system
PCT/IB2023/000561 Ceased WO2024069228A1 (en) 2022-09-30 2023-09-29 A method for extending the display of a content on a first screen of a first device to a second screen of a second device, a device for enabling screen extension and a screen extension system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/123513 Ceased WO2024065775A1 (en) 2022-09-30 2022-09-30 A method for extending the display of a content on a first screen of a first device to a second screen of a second device, a device for enabling screen extension and a screen extension system

Country Status (1)

Country Link
WO (2) WO2024065775A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160162240A1 (en) * 2013-07-29 2016-06-09 Samsung Electronics Co., Ltd. Method and apparatus for constructing multi-screen display
US20190103075A1 (en) * 2018-11-13 2019-04-04 Intel Corporation Reformatting image data using device sensing
US20210068202A1 (en) 2019-08-28 2021-03-04 Icancontrol Tech Co., Ltd Intelligent industrial internet of things system using two-way channel artificial neural network
US20210096802A1 (en) 2019-09-26 2021-04-01 Google Llc Device manager that utilizes physical position of display devices

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KRAWETZ NEAL: "Kind of like that", 21 January 2013 (2013-01-21), XP093141590, Retrieved from the Internet <URL:https://www.hackerfactor.com/blog/?/archives/529-Kind-of-Like-That.html> [retrieved on 20240314] *

Also Published As

Publication number Publication date
WO2024065775A1 (en) 2024-04-04

Similar Documents

Publication Publication Date Title
US9674507B2 (en) Monocular visual SLAM with general and panorama camera movements
CN105283905B (en) Use the robust tracking of Points And lines feature
US10659769B2 (en) Image processing apparatus, image processing method, and storage medium
JP7715460B2 (en) Avatar service providing method and system
CN110246147A (en) Vision inertia odometer method, vision inertia mileage counter device and mobile device
CN112802081B (en) Depth detection method and device, electronic equipment and storage medium
WO2018063608A1 (en) Place recognition algorithm
CN111325798B (en) Camera model correction method, device, AR implementation equipment and readable storage medium
US10769811B2 (en) Space coordinate converting server and method thereof
CN112258647B (en) Map reconstruction method and device, computer readable medium and electronic equipment
CN107832598A (en) Solve lock control method and Related product
US20130113952A1 (en) Information processing apparatus, information processing method, and program
CN117537800A (en) Map update method
WO2024069228A1 (en) A method for extending the display of a content on a first screen of a first device to a second screen of a second device, a device for enabling screen extension and a screen extension system
CN113489897B (en) Image processing method and related device
WO2013032785A1 (en) Line tracking with automatic model initialization by graph matching and cycle detection
WO2020114585A1 (en) Object location determination in frames of a video stream
CN111625101B (en) Display control method and device
EP3916522A1 (en) System, method, device and computer program product for connecting users to a persistent ar environment
US12462332B2 (en) Image processing apparatus, image processing method, and non-transitory computer readable medium for processing images including two image regions
CN115115530B (en) Image deblurring method, device, terminal equipment and medium
US10419666B1 (en) Multiple camera panoramic images
CN107690799A (en) The method, apparatus and server of a kind of data syn-chronization
CN114528476A (en) Data processing method and device based on panoramic display scene
CN114489912B (en) Method and device for adjusting visual angle of direction indicator, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 23798279; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 23798279; Country of ref document: EP; Kind code of ref document: A1)