
FI20245341A1 - Modifying video content for a receiving device - Google Patents

Modifying video content for a receiving device

Info

Publication number
FI20245341A1
Authority
FI
Finland
Prior art keywords
video content
classification
receiving device
computer
modified
Prior art date
Application number
FI20245341A
Other languages
Finnish (fi)
Swedish (sv)
Inventor
Mikko Spoof
Original Assignee
Advantage Holding Ltd
Priority date
Filing date
Publication date
Application filed by Advantage Holding Ltd
Priority to FI20245341A (published as FI20245341A1)
Priority to PCT/IB2025/053112 (published as WO2025202875A1)
Publication of FI20245341A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N 21/2668 Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 20/00 Arrangements for broadcast or for distribution combined with broadcast
    • H04H 20/10 Arrangements for replacing or switching information during the broadcast or the distribution
    • H04H 20/103 Transmitter-side switching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23418 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23424 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 21/234345 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25808 Management of client data
    • H04N 21/25841 Management of client data involving the geographical location of the client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N 21/41407 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/812 Monomedia components thereof involving advertisement data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/8146 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N 5/2723 Insertion of virtual advertisement; Replacing advertisements physical present in the scene by virtual advertisement
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25808 Management of client data
    • H04N 21/25825 Management of client data involving client display capabilities, e.g. screen resolution of a mobile phone
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N 21/647 Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N 21/64723 Monitoring of network processes or resources, e.g. monitoring of network load
    • H04N 21/64738 Monitoring network characteristics, e.g. bandwidth, congestion level

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Marketing (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Security & Cryptography (AREA)
  • Artificial Intelligence (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Disclosed is a computer-implemented method comprising obtaining captured video content (415) that is generated by capturing an activity, wherein the activity is defined by a formation (120, 122, 124, 126, 300) adjacent to a structure (110, 200, 310, 320, 330, 410), or comprised in the structure (110, 200, 310, 320, 330, 410), the structure (110, 200, 310, 320, 330, 410) being located in a geographical location, identifying elements comprised in the captured video content (415), determining for each identified element a classification, wherein a first classification is a classification for essential elements and a second classification is for replaceable elements, determining at least one participant of the activity as an element in the first classification, determining, based on a digital duplicate (250) of the structure (110, 200, 310, 320, 330, 410), at least one part of the structure (110, 200, 310, 320, 330, 410) as an element in the second classification, based on at least one parameter of a receiving device (260, 460), obtaining at least one additional element to replace the element in the second classification, generating modified video content (455) comprising the at least one additional element, that replaces the element in the second classification, and the element in the first classification, and providing the modified video content (455) to the receiving device (260, 460) for rendering.

Description

MODIFYING VIDEO CONTENT FOR A RECEIVING DEVICE
FIELD
The exemplary embodiments discussed in the present disclosure relate to modifying video content for a receiving device.
BACKGROUND
Many activities, such as sports events, occur in places that comprise a physical structure, such as a stadium, for the activity in front of a live audience. Yet, as the places for the audience are limited and not everyone has a chance to travel to the site, such events are often recorded, in other words, captured as video content that can then be provided to one or more receiving devices. The captured video content may be for example broadcast, and/or a receiving device may stream the captured video content from a server. This allows the activity to be watched by multiple viewers from various geographical locations and by using various receiving devices. As there may be many different types of receiving devices, various geolocations, and/or different viewers, the requirements regarding the characteristics of the video content may be different.
BRIEF DESCRIPTION
The scope of protection sought for various embodiments is defined by the independent claims. Dependent claims define further embodiments included in the scope of protection. Exemplary embodiments, if any, that do not fall into any scope of protection defined in the claims are to be considered as examples useful for understanding the scope of protection.
According to a first aspect there is provided a computer-implemented method comprising obtaining captured video content that is generated by capturing an activity, wherein the activity is defined by a formation adjacent to the structure, or comprised in the structure, the structure being located in a geographical location, identifying elements comprised in the captured video content, determining for each identified element a classification, wherein a first classification is a classification for essential elements and a second classification is for replaceable elements, determining at least one identified element as a participant of the activity and as an element in the first classification, determining at least one other identified element, based on a digital duplicate of the structure, as one part of the structure and as an element in the second classification, based on at least one parameter of a receiving device, obtaining at least one additional element to replace the element in the second classification, generating modified video content comprising the at least one additional element, that replaces the element in the second classification, and the element in the first classification, and providing the modified video content to the receiving device for rendering.
According to a second aspect there is provided a computing device comprising means for obtaining captured video content that is generated by capturing an activity, wherein the activity is defined by a formation adjacent to the structure, or comprised in the structure, the structure being located in a geographical location, identifying elements comprised in the captured video content, determining for each identified element a classification, wherein a first classification is a classification for essential elements and a second classification is for replaceable elements, determining at least one identified element as a participant of the activity and as an element in the first classification, determining at least one other identified element, based on a digital duplicate of the structure, as one part of the structure and as an element in the second classification, based on at least one parameter of a receiving device, obtaining at least one additional element to replace the element in the second classification, generating modified video content comprising the at least one additional element, that replaces the element in the second classification, and the element in the first classification, and providing the modified video content to the receiving device for rendering.
In some examples according to the second aspect, the means comprises at least one processor; and at least one memory including computer program code which, when executed by the at least one processor, causes the performance of the computing device.
According to a third aspect there is provided a computer program product comprising instructions, which, when executed by a computing device, cause the computing device to perform a computer-implemented method comprising at least the following: obtaining captured video content that is generated by capturing an activity, wherein the activity is defined by a formation adjacent to the structure, or comprised in the structure, the structure being located in a geographical location, identifying elements comprised in the captured video content, determining for each identified element a classification, wherein a first classification is a classification for essential elements and a second classification is for replaceable elements, determining at least one identified element as a participant of the activity and as an element in the first classification, determining at least one other identified element, based on a digital duplicate of the structure, as one part of the structure and as an element in the second classification, based on at least one parameter of a receiving device, obtaining at least one additional element to replace the element in the second classification, generating modified video content comprising the at least one additional element, that replaces the element in the second classification, and the element in the first classification, and providing the modified video content to the receiving device for rendering.
In some examples according to the third aspect, the computer program product is a software application.
According to a fourth aspect there is provided a non-volatile computer-readable medium comprising program instructions stored thereon which, when executed on a computing device, cause the computing device to perform a computer-implemented method comprising at least the following: obtaining captured video content that is generated by capturing an activity, wherein the activity is defined by a formation adjacent to the structure, or comprised in the structure, the structure being located in a geographical location, identifying elements comprised in the captured video content, determining for each identified element a classification, wherein a first classification is a classification for essential elements and a second classification is for replaceable elements, determining at least one identified element as a participant of the activity and as an element in the first classification, determining at least one other identified element, based on a digital duplicate of the structure, as one part of the structure and as an element in the second classification, based on at least one parameter of a receiving device, obtaining at least one additional element to replace the element in the second classification, generating modified video content comprising the at least one additional element, that replaces the element in the second classification, and the element in the first classification, and providing the modified video content to the receiving device for rendering.
In some examples according to the fourth aspect, the program instructions form a software application.
According to a fifth aspect there is provided a system comprising at least one video camera for capturing video content, a server configured to receive the captured video content and to generate, based on the captured video content, modified video content, and a receiving device, wherein the receiving device is configured to receive the modified video content, and wherein the system is configured to perform a computer-implemented method comprising obtaining captured video content that is generated by capturing an activity, wherein the activity is defined by a formation adjacent to the structure, or comprised in the structure, the structure being located in a geographical location, identifying elements comprised in the captured video content, determining for each identified element a classification, wherein a first classification is a classification for essential elements and a second classification is for replaceable elements, determining at least one identified element as a participant of the activity and as an element in the first classification, determining at least one other identified element, based on a digital duplicate of the structure, as one part of the structure and as an element in the second classification, based on at least one parameter of a receiving device, obtaining at least one additional element to replace the element in the second classification, generating modified video content comprising the at least one additional element, that replaces the element in the second classification, and the element in the first classification, and providing the modified video content to the receiving device for rendering.
BRIEF DESCRIPTION OF THE DRAWINGS
Some of the exemplary embodiments are discussed with reference to the figures in which:
FIG. 1 illustrates exemplary embodiments of activities and elements that may be captured as video content.
FIG. 2 illustrates an exemplary embodiment of a digital duplicate.
FIG. 3 illustrates an exemplary embodiment in which captured video content is optimized for a receiving device.
FIG. 4 illustrates an exemplary embodiment of a system in which an activity occurring at a structure is captured as video content, and the video content is then modified to be more optimal for a receiving device, which then streams the modified video content.
FIG. 5 illustrates a flow chart according to an exemplary embodiment.
O FIG. 6 illustrates an exemplary embodiment of a computing device.
DETAILED DESCRIPTION
As mentioned above, various activities can take place in a location that comprises a physical structure. As part of the physical structure, or adjacent to the physical structure, there may also be a formation that may be considered as defining the activity. For example, the structure may be a stadium and adjacent to the stadium there may be a formation that is a field for a certain sports-related activity. The formation may be for example a football field, an ice rink, an athletics track, a basketball court, or a baseball field. Thus, the formation may define boundaries and/or areas introducing meaning to the activity, thus bearing significance with respect to the activity, and therefore the formation can be understood as defining the activity. It is to be noted that while sports is mentioned as an example of an activity that may be defined by a formation having a structure adjacent to the formation, another example of such an activity is a musical activity such as a concert. For the musical activity there may then be a stage for the musicians, which may be considered as a formation, and the formation may be adjacent to the structure, which may be a physical venue. Alternatively, the formation may be comprised in the structure. It is to be noted that there may be further kinds of activities as well that take place in a location comprising a structure, the structure being associated with a formation, for example by being adjacent to the formation. For example, a play or an opera performance may also be considered as an activity.
Figure 1 illustrates an example of a structure 110, which may be for example a stadium, a hall, or any other suitable kind of physical structure. The structure 110 thus has fixed, physical elements that may for example host an audience as well as provide spaces for different functions such as preparing food, changing clothing, etc. On surfaces of the structure there may be messages. The messages may be general messages targeted at the audience, such as information, messages relating to the activity, such as names of players and/or scores of a game, advertisements, or any other messages intended to be seen by the audience. The messages may be fixed to the surfaces, or they may be replaceable, such as messages attached to the structure and/or displayed on a display attached to the structure. The structure may, in some examples, comprise parts that are dedicated for the messages.
Figure 1 also illustrates examples of formations. In this figure there are formations 120, 122, 124 and 126, which are examples aimed at helping to explain. Therefore, this is not an exhaustive listing of formations, and other formations may also exist. The formation 120 is a field for playing football. The field may thus be surrounded, at least partially, by a structure such as the structure 110. The formation 122 is a basket for basketball. The formation 122 may thus be a part of another formation, or it may be considered as an independent part of a formation that comprises two or more independent formations. The formation 124 is part of a golf course. The golf course may be considered as a formation, and the formation 124 may be a part of that formation, or it may be an independent formation that together with at least one other independent formation forms the formation of a golf course. The formation 126 illustrates a football goal. The goal may thus be an independent part of another formation, such as the formation 120, or it may be a part comprised in the formation 120.
In an activity occurring at a structure and defined by a formation, there are also those performing the activity, such as players, coaches, referees, and/or musicians, who may be referred to as participants. In figure 1 there are examples of participants. Participant 140 is a baseball player and participant 142 is a golf player. The participants may use equipment to perform the activity. The equipment may be for example a ball, such as the ball 132 used in basketball or the ball 134 used in football. The equipment may also comprise multiple parts, such as the equipment 130 that comprises both a baseball bat and a ball.
Since the activity may be interesting to see for others than those at the venue, the activity may be captured as video. In other words, a video may be recorded using a capturing device 150 such as a video camera. It is to be noted that more than one capturing device may be used to create the recording, which is the captured video. The recording may then be broadcast to receiving devices such as televisions. This allows people in different geographical locations to watch the activity. Additionally, or alternatively, the recording may be stored on a server, for example in a cloud computing-based service, from which it can be streamed to a receiving device that may be any suitable computing device capable of connecting to the Internet, retrieving video data, and causing the retrieved video data to be displayed on a display device that may be part of the computing device or connected to it. Retrieving the video data may be understood as downloading the video data. The downloading may be streaming, or it may be downloading and storing the video data such that the video data is rendered later.
As the captured video content, which may be understood as video data or video content, is viewed by a user using a receiving device that renders the video content to a display, the messages that are on the structure, such as the structure 110, may not be as relevant to the user watching the video content on a receiving device as they are to the audience at the venue, for example sitting on a chair of the structure. If the receiving device, and thereby the user, are in a different country or on a different continent, then the messages may not be relevant for the user, or the messages may be made so generic that they do not have a very clear target audience. Additionally, the captured video content may comprise further elements that are not relevant for a user in a different geographical location who wishes to watch the activity. Therefore, in the video content, different elements may have different priorities in terms of their relevance to the user. Based on such relevance, the elements may be divided into different classifications, which may be understood as different categories. For example, there may be elements that are essential to keep, such as the formation, the participants, and the equipment of the activity. Some elements may be replaceable, such as messages that may be replaced, partially or completely, with something that is more relevant to the user of a receiving device. The replaceable and the essential elements may both be further enhanced by, for example, modifying their appearance in terms of colour and/or superimposing additional data on them such that the additional data is rendered superimposed on the element. Thus, the elements that are essential may be considered to be in a first classification, and the elements that are replaceable may be considered to be in a second classification. Additionally, there may be elements in the video content that can be removed. The elements that can be removed may be for example such that the user watching the video content on the receiving device, either on the display of the receiving device or on a display connected to the receiving device, or using a combination of both, does not benefit from viewing such elements, and/or there is a greater benefit for the user if those elements are removed, such as the benefit of being able to reduce the amount of data comprised in the video content. Those elements may thus be considered as removable elements that may be elements in a third classification.
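The three-way division into essential, replaceable, and removable elements can be pictured as a simple mapping from detected element types to classifications. The labels and the rule table below are assumptions made for this sketch only; they are not taken from the disclosure.

```python
# Illustrative three-way classification of identified elements.
FIRST, SECOND, THIRD = "essential", "replaceable", "removable"

# Hypothetical mapping from detected element labels to classifications.
CLASSIFICATION_RULES = {
    "participant": FIRST,   # players, coaches, referees, musicians
    "equipment":   FIRST,   # ball, bat, club, ...
    "formation":   FIRST,   # field, rink, stage
    "message":     SECOND,  # advertisements and other texts on the structure
    "background":  THIRD,   # elements the remote viewer does not benefit from
}

def classify(label: str) -> str:
    # Default to essential so unknown elements are never removed by mistake.
    return CLASSIFICATION_RULES.get(label, FIRST)
```

Defaulting unknown labels to the first classification is a conservative choice in this sketch: an element is only replaced or removed when a rule explicitly says so.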
A digital duplicate may be understood as a digital representation of a product that is physical. A digital duplicate of a structure may thus be a digital representation, available digitally, of the actual physical structure. The digital duplicate may be an exact replica, in a digital format, of the physical structure, or it
may be sufficiently corresponding. For example, there may be modifications to the
physical structure that are not updated to the digital duplicate, while in some cases all modifications to the actual structure are also updated to the digital duplicate.
Thus, a digital twin may be understood as a type of a digital duplicate.

Figure 2 illustrates an exemplary embodiment of a digital duplicate. In this exemplary embodiment, there is a structure 200 that is for hosting an activity and which may comprise, or be adjacent to, a formation defining the activity. The structure 200 has a digital duplicate 250, which is a digital representation of the
O structure 200. The digital duplicate 250 is stored using any suitable memory capable of storing digital information. The memory may be transitory or non- transitory. [tis also to be noted that the digital duplicate may be replicated as many times as desired. The digital duplicate 250 may be updated regularly to mimic the modifications that have occurred in the structure 200. The updating may be performed regularly or in an ad hoc manner. It is also to be noted that there may be modifications to the structure 200 that are determined to be such that they are — not necessarily required to be updated to the digital duplicate 250.
The digital duplicate 250 may be accessible to a computer device such that it can be received by the computer device, which is a receiving device, and may also be referred to as a computing device. This may be enabled for example if the digital duplicate 250 is stored in a server, and the receiving device is connected to the server and downloads the digital duplicate 250. The server may be comprised in a cloud computing system, for example, or in any other computing system configured to provide server functionalities, which may also be understood as back-end services.
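The store-and-download arrangement can be sketched with an in-memory stand-in for the back-end service. In practice the download would be an HTTP transfer from a server; the class name, identifiers and dictionary format here are illustrative assumptions.

```python
# Minimal stand-in for a back-end service storing a digital duplicate.
class DuplicateStore:
    def __init__(self):
        self._store = {}

    def upload(self, structure_id, duplicate):
        self._store[structure_id] = duplicate

    def download(self, structure_id):
        # Return a copy, so the stored duplicate can be replicated as many
        # times as desired and is never mutated by a receiving device.
        return dict(self._store[structure_id])

store = DuplicateStore()
store.upload("structure-200", {"parts": ["part-110"], "version": 3})

copy_for_device = store.download("structure-200")
copy_for_device["version"] = 4  # local modification only
print(store.download("structure-200")["version"])
```

Returning copies mirrors the point in the text that the duplicate may be replicated freely while the stored original remains authoritative.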
As the digital duplicate is in a digital format, it may be modified using one or more software algorithms. Such modification(s) may be desirable for example if the structure 200 is to be rendered in a manner that is optimally suited to the receiving device. Then, using the one or more software algorithms, the digital duplicate 250 may for example be scaled to better fit the display on which the computing device renders the representation of the structure 200, which may also be referred to as the structure 200 comprised in captured video content.
Additionally, or alternatively, for example the hue or saturation of the representation of the structure 200 may be modified based on the capabilities of the display to render various colours and/or the ambient light surrounding the display. It is also to be noted that parts of the representation of the structure 200 may be removed or replaced in the rendered representation of the structure 200
by manipulating the digital duplicate 250 using any suitable software algorithms.
Additionally, or alternatively, additional digital representations of various elements may be superimposed to the manipulated digital duplicate 250, thus
achieving a rendering of a representation of the structure 200 such that there are superimposed elements on the structure 200 in the rendered representation of the structure 200.

As mentioned above, a receiving device may be any suitable computing device that is capable of rendering data, such as video content, on a display that
may be comprised in, and/or connected to, the receiving device. Thus, various receiving devices may be configured to render the received video content, which may comprise a representation of the structure 200, on different types of displays.
For example, the receiving device may be the receiving device 260 that is a mobile phone, which can be understood as a type of a mobile device. On a mobile phone the display area is limited and also the aspect ratio of the display may be different compared to, for example, a large screen television. Additionally, or alternatively, the memory and processing power available in the mobile phone may be limited, which may mean that there are fewer resources available for the rendering of the video content, and thus it may be desirable to reduce the amount of data in the video content for optimal rendering. Additionally, or alternatively, the quality of the connection to a server may affect how well the video content can be rendered, depending on how fast data can be streamed. Thus, if the amount of data comprised in the video content can be reduced, the amount of data to be streamed and rendered is consequently reduced, enabling better rendering of the video content. The mobile phone 260 may also be affected by its geographical location, which in this example is Asia 265. For example, the time zone in which the receiving device 260 is located may differ from that of the geographical location of the structure 200, thus affecting for example the ambient light and/or the language that the user of the receiving device would like to be used in the video content.
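The trade-off between device constraints and the amount of data to stream can be sketched as a small decision function. The threshold values and tier names below are illustrative assumptions, not values taken from any embodiment.

```python
def select_quality(display_width, bandwidth_mbps, memory_mb):
    """Pick a rendering tier from device constraints.

    Thresholds are illustrative assumptions only: a small display, limited
    memory or a slow connection all push towards the reduced tier, in which
    removable elements would be dropped to shrink the streamed data.
    """
    if display_width <= 1080 or memory_mb < 2048 or bandwidth_mbps < 5:
        return "reduced"   # strip removable elements, lower resolution
    if bandwidth_mbps < 20:
        return "standard"
    return "full"          # keep all non-removable elements at full quality

# A mobile phone with limited memory gets the reduced stream even on
# a fast connection, since rendering resources are the bottleneck.
print(select_quality(display_width=1080, bandwidth_mbps=50, memory_mb=1024))
```

Any one constraint being tight is enough to select the reduced tier, matching the text's point that display size, memory, processing power and connection quality can each independently motivate reducing the data in the video content.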
As another example, another receiving device may be connected to a head-mounted display 270 that may be configured to render the video content as virtual reality and/or augmented reality content. The head-mounted display may thus work optimally in a manner that is different from the receiving device 260, as the field of view available is different and some elements of the video content may therefore be rendered outside the field of view. Also, the head-mounted display may enable three or six degrees of freedom for the user experience and
thus, when rendering a representation of the structure 200, the representation
may be modified using the digital duplicate 250 such that the user may utilize the possibilities of exploring enabled by the head-mounted display 270. Additionally,
or alternatively, the head-mounted display 270 may have a geographical location that differs from that of the receiving device 260. In this example, the head-mounted display has a geographical location in North America 275, which affects for example the ambient light in case the video content is rendered at least substantially simultaneously with the receiving device 260. This may be the case
for example if there is an activity taking place at the location of the structure 200 and the video content captured is then streamed and rendered live by the receiving devices.
Another example of a different type of receiving device is a television 280. The television 280 may thus be a computing device that is a receiving device.
Alternatively, the television may be connected to a computing device that is the receiving device, causing the received video content to be rendered on the television 280. The television 280 has a different display, onto which the video content is rendered, than the receiving device 260 or the head-mounted display 270, and thus for an optimal rendering the representation of the structure 200 comprised in the video content may be modified differently, using the digital duplicate 250, than for the receiving device 260 or for the head-mounted display 270. The receiving device 280 may have yet another geographical location, such as the location 285 that is in Europe.
A receiving device may thus have different characteristics, which may be taken into account when modifying for example a representation of a structure based on a digital duplicate of the structure. The characteristics may include for example the geographical location, the capabilities of the receiving device, the time zone and/or the type of display onto which the video content is to be rendered. When the representation of the structure 200 comprised in a video content is modified, after the modification the video content may be understood as modified video content.
The representation of the structure 200 may be modified by modifying the digital duplicate 250 and then extracting the representation of the structure 200 from the video and replacing it with the modified digital duplicate 250.
Alternatively, the digital duplicate may be modified, and one or more parts of the modified digital duplicate are then inserted into the video content, replacing and/or adding to at least some parts of the representation of the structure 200. As another alternative, other elements of the video content may be recognized and
extracted such that they can then be inserted to the modified digital duplicate 250,
thus creating the modified video content.

The modified video content, which may be obtained using any of the
approaches discussed above, may also comprise other modifications such that the result is more optimal for the receiving device caused to render the modified video content. The characteristics that may be used to define the receiving device may comprise for example one or more of the following: the location of the receiving device, the processing capabilities of the receiving device, the weather conditions around the
receiving device at the time of rendering the modified video content, the time of the day when the modified video content is rendered, connectivity capabilities, the quality of the connection to the server from which the modified video content is downloaded,
historical data regarding activity performed by a user on the receiving device, data regarding a user currently identified as an active user of the receiving device, and/or the number of users watching the rendering of the modified video content. The characteristics may be identified using one or more parameters. One parameter may be for defining one characteristic, and/or one parameter may be used to define a plurality of characteristics. It is also to be noted that the parameters may have a hierarchical order with respect to each other. This may be beneficial in case two characteristics are conflicting in terms of how to modify the video content. The parameter that is higher in the hierarchical order may then prevail. For example, the location may be more important than the type of the receiving device, and therefore the video may be modified at least partly based on the location regardless of the type of the receiving device. Then, to continue the example, there may be further aspects of the video that are then modified depending on the type of the receiving device. So, while two different receiving devices that are located in the same geographical area receive the modified video content, there may be modifications that are primarily dependent on the location, and thus the same for both receiving devices, and there may also be modifications in the modified video content that are different for the receiving devices in the same geographical location, as the receiving devices are different, which is indicated by the characteristics of the receiving devices that differ from each other. Thus, the modifications made to the modified video content may be made for an individual receiving device, for a group of receiving devices, or for a combination of both.
The modifications may be made such that modification(s) made for an individual receiving device and modification(s) made for a group of receiving devices are made depending on the parameters of the receiving devices and the hierarchies among
the parameters. It is to be noted that it may be pre-determined what constitutes
the same geographical location, and it may be for example a town, a country and/or a continent.
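The hierarchical resolution of conflicting parameters can be sketched as follows. The parameter names and their ranking are illustrative assumptions; the point is only that the proposal tied to the higher-ranked parameter prevails.

```python
# Parameters in hierarchical order, highest priority first (illustrative).
HIERARCHY = ["location", "display_type", "processing_power"]

def resolve(proposals):
    """proposals: dict mapping parameter name -> proposed modification.

    Returns the proposals ordered by the hierarchy, so the first entry is
    the one that prevails when two characteristics conflict. Parameters
    not listed in HIERARCHY would raise; a real system would handle them.
    """
    ranked = sorted(proposals, key=HIERARCHY.index)
    return [proposals[name] for name in ranked]

proposals = {
    "display_type": "scale_for_small_screen",
    "location": "replace_messages_with_local_language",
}
ordered = resolve(proposals)
print(ordered[0])  # the location-based modification prevails
```

Applying the ordered list from lowest to highest priority (so higher-ranked modifications overwrite lower-ranked ones) would give the same prevailing result.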
Figure 3 illustrates an exemplary embodiment in which captured video content is optimized for a receiving device. In this exemplary embodiment, an activity is captured as the video content. The activity is a game of football in this exemplary embodiment. The activity occurs on a formation 300 that is the football field. The formation 300 is surrounded by a structure that in this exemplary
embodiment is a stadium. The structure has a digital duplicate that has been stored on a server. The structure comprises different parts, for example parts 310, 320 and 330, on which messages may be displayed. The messages may be displayed using a display, by attaching the messages to the structure, by painting the messages on the structure, and/or by using any other suitable means for displaying a message. The message may comprise text, images and/or video. The messages may also comprise other data viewed as a message, for example, data that is perceived as three-dimensional visual data. In this exemplary embodiment, the part 310 is configured to display visual data 315, the part 320 is configured to display visual data 325 and the part 330 is configured to display visual data 335.
As the activity is captured as video data, these messages are also captured in the video content. Additionally, the video content captured comprises the players 340 and 345, which may be understood as participants in the activity, as well as the ball 305, which may be understood as equipment for the activity, in other words, an activity equipment.
Thus, as the activity is captured as video content, the video content may be identified to comprise elements of different categories. In this exemplary embodiment, the digital duplicate may be utilized to identify the structure as well as the parts of the structure. The identifying may be performed using one or more software algorithms, which may be executed using any suitable computing device.
In this exemplary embodiment, the captured video content is transmitted to a server and then the server is configured to execute the algorithms and to identify the structures. Optionally, the identifying may comprise receiving user input from a user. Additionally, one or more software algorithms, and optionally also user input, may be used to identify the formation 300, the players 340 and 345 as well as the ball 305.
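The combination of automatic identification and optional user input can be sketched as below. The label mapping is an illustrative assumption; in practice the detected labels would come from an object-recognition algorithm, which is not shown here.

```python
# Hypothetical mapping from labels produced by an object-recognition
# algorithm to the element types used in this description (illustrative).
AUTO_LABELS = {"field": "formation", "person": "participant", "ball": "equipment"}

def identify(detected_label, user_override=None):
    """Identify an element from a detected label.

    Optional user input takes precedence over the automatic result,
    matching the text's note that identifying may comprise user input.
    """
    if user_override is not None:
        return user_override
    return AUTO_LABELS.get(detected_label, "unknown")

print(identify("person"))                             # automatic result
print(identify("banner", user_override="message"))    # user input prevails
```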
After identifying the elements of the video content, the elements may be classified into different classifications. It is to be noted that optionally machine
learning may be utilized in identifying and/or classifying the elements in the video
content. In this exemplary embodiment, the formation 300, the players 340 and 345 as well as the ball 305 are classified into a first classification, which in this
exemplary embodiment is a category for essential elements. The parts of the structure 310, 320 and 330, including their messages 315, 325 and 335, are classified as elements comprised in a second classification, which in this exemplary embodiment is a classification for elements that are replaceable.

In the first classification, the elements may be enhanced for example
visually, but the elements, and their movement, which may also be understood to include lack of movement by remaining in one place, are to be preserved in any modified video content. The enhancement may comprise for example modifying the colour in which the element in this classification is rendered such that the element may for example be better recognized in the video content. For example, if the field is green, a player with a red shirt may be modified such that the colour of the shirt can be distinguished by colour-blind users as well. Additionally, or alternatively, the visual appearance of the ball 305 may be enhanced such that in varying ambient light conditions that are associated with a receiving device, the ball is more recognizable in the video content, which may be useful in case the modified video content is then to be modified further, for example by executing a software application. Such additional modification may be performed for example by the receiving device.
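A colour enhancement of an essential element could be sketched as a hue shift in HSV space, using Python's standard `colorsys` module. The shift and saturation boost values are illustrative assumptions; a real system would choose them based on the display capabilities and ambient light.

```python
import colorsys

def enhance_for_visibility(rgb, hue_shift=0.15):
    """Shift an element's hue so it stands out better, e.g. against a
    green field for colour-blind users. Shift amount is an assumption."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + hue_shift) % 1.0           # rotate the hue
    s = min(1.0, s * 1.2)               # slightly boost saturation
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

# A red shirt is shifted to a hue that is easier to distinguish.
shifted = enhance_for_visibility((200, 30, 30))
print(shifted)
```

Only the element's pixels would be modified; the movement and shape of the essential element are preserved, as required for the first classification.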
In the second classification, the elements are replaceable, which may be understood such that at least part of the element may be removed or replaced with digital content, that may be understood as additional content, also referred to as replacing content. Such replacing content may be digitally generated, by the server for example, based on one or more parameters of the receiving device.
Additionally, or alternatively, the replacing content may be stored and fetched based on the one or more parameters of the receiving device. Additionally, or alternatively, the shape of the replacing element may be altered digitally with respect to the element it replaces.
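The two ways of obtaining replacing content, fetching stored content or generating it, can be sketched together. The library keys and the generated fallback string are illustrative assumptions.

```python
# Hypothetical library of pre-stored replacing content, keyed by
# receiving-device parameters (illustrative).
LIBRARY = {
    ("fi", "video"): "clip_fi.mp4",
    ("en", "image"): "banner_en.png",
}

def replacing_content(language, media_type):
    """Fetch stored replacing content matching the device parameters,
    or fall back to generating content for those parameters."""
    stored = LIBRARY.get((language, media_type))
    if stored is not None:
        return stored
    return f"generated:{language}:{media_type}"

print(replacing_content("fi", "video"))  # fetched from the library
print(replacing_content("de", "video"))  # no stored match, so generated
```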
Once the activity is captured, there is captured video content that can be shared with one or more receiving devices using any suitable data connectivity as discussed above. When modifying the video content, the video content may be modified to be more optimal for a receiving device and/or for a group of receiving devices. The group of receiving devices may be determined based on one or more parameters associated with a plurality of receiving devices. For example, receiving
devices may be grouped based on their geographical locations, the processing power
of the receiving devices, based on data regarding previous activities performed by the receiving devices, and/or the kind of display onto which the modified video
content is to be rendered. For example, one group may comprise all mobile phones in one country, another group may comprise all televisions in two different countries and so on.

In this exemplary embodiment, the video content is modified for one receiving device. As the structure has a digital duplicate, based on the digital
duplicate, the parts of the structure 310, 320, and 330 in the video content that comprise the messages 315, 325, and 335 may be recognized. In this exemplary embodiment, it is determined that the receiving device is in a different country than the structure and/or that the user identified as the user of the receiving device has a native language different from the language used in the messages.
Additionally, or alternatively, it may be determined that the receiving device has the capability to display more data and/or a different kind of data than what is comprised in the messages 315, 325, and 335. For example, if a message in the captured video content has still images, the replacing message in the modified video content may comprise moving images. Additionally, or alternatively, the content of the message may be different and thus the original message comprised in the captured video content may be removed and a new message inserted as a replacement into the modified video content. For example, in case the receiving device renders content for a head-mounted display that renders augmented reality and/or virtual reality content, and thus enables spatial computing, a message comprised in the video content may be replaced with such content that enables an immersive user experience.
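The message adaptation rules just described can be sketched as a small function. The device fields and the upgrade rules (still image to moving image, immersive content for spatial computing) follow the text; the exact field names and values are illustrative assumptions.

```python
def adapt_message(message, device):
    """Adapt a replaceable message to a receiving device's parameters."""
    adapted = dict(message)
    if device["language"] != message["language"]:
        adapted["language"] = device["language"]   # localized variant
    if device["supports_video"] and message["media"] == "still_image":
        adapted["media"] = "moving_image"          # richer media where capable
    if device.get("spatial_computing"):
        adapted["media"] = "immersive_3d"          # head-mounted displays
    return adapted

msg = {"language": "en", "media": "still_image"}
hmd = {"language": "fi", "supports_video": True, "spatial_computing": True}
print(adapt_message(msg, hmd))
```

A device without these capabilities would simply receive the original message unchanged, so the adaptation degrades gracefully.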
In this exemplary embodiment, the digital duplicate is used to recognize the parts 310, 320 and 330 of the structure and then to remove the messages 315, 325 and 335. The modification process then, based on the digital duplicate, creates and inserts into the modified video the parts 360, 370 and 380 such that the part 360 replaces the part 310, the part 370 replaces the part 320 and the part 380 replaces the part 330. The modified video may be obtained using any of the approaches discussed previously. Optionally, the whole structure may be removed and replaced using the digital duplicate, for example by inserting a copy of the digital duplicate into the modified video from which the original structure is removed. The inserted copy may then be modified for example in terms of shapes and/or dimensions as well as in terms of its appearance. For
example, based on the size of the display on which the modified video content is to
be rendered, the amount of the structure represented in the video content may differ such that the activity as such is optimally in focus while still allowing for
example the messages to be rendered. For example, there may be enough of the structure rendered such that the place is still recognizable to the user, or there may be more of the structure rendered than in the originally captured video content.

In this exemplary embodiment, the message 315 is replaced with the message 365, the message 325 is replaced with the message 375 and the message
335 is replaced with the message 385. In this exemplary embodiment, the content of the replacing messages relates to the same topic as the original messages, but the messages are targeted for the receiving device such that there is more data to be rendered, as the receiving device is capable of rendering more data, such as rendering more visual effects. The actual message may also be different such that it better matches the receiving device based on one or more of the parameters of the receiving device. For example, the content of the message may be modified based on the receiving device. The message that is relevant for one viewer may not be relevant for another, and/or the language of the messages may be different for the viewers to be able to understand the messages. The replacement messages may also be different depending on the time of the day or on the day of the week, such that for example on a Saturday morning there is a different message than on
Wednesday afternoon. This allows more relevant messages to be rendered for the user depending on the time, and thus the user experience may be modified based on whether the user is watching the video content as a live broadcast or stream, or later on. Then, for example, messages that are outdated can be automatically replaced with more up-to-date messages. As a further additional, or alternative, example, the receiving device may have one or more parameters indicating historical data regarding how a user identified as active on the receiving device has used the receiving device. For example, the user may have chosen a language other than the native language of the user, in which case the replacing message can be in that language; the historical data may have revealed the user's preference towards videos instead of still images or vice versa, as well as preferences in terms of topics and how much information the user wishes to have on any topic. Such historical data may be analysed using one or more software algorithms. It is to be noted that also one or more machine learning models may be trained to analyse the historical data and to provide a parameter defining the receiving device as an output. The historical data may then be indicated using one
or more parameters that are then used as a basis for determining the replacing
message(s).

The formation 300 may be part of the digital duplicate or it may be
separate. In case the formation 300 is part of the digital duplicate, then the digital duplicate may be used to recognize the formation. Otherwise, any suitable image recognition may be used to recognize the formation from the captured video content. In this exemplary embodiment, the formation 300 is part of the digital duplicate. As the formation 300 is an element in the captured video content that is
classified as essential, the formation 300 is to be present in the modified video content as well. Thus, in the modified video content the formation 350 is present and it corresponds to the formation 300. For example, when inserting generated element(s) into the video during the modification process, the formation 300 is kept and no replacing elements are inserted to replace the formation 300. However, optionally, the representation of the formation may be enhanced, thus resulting in the formation 350 in the modified video content. The enhancing may be understood as for example enhancing the visual appearance of the formation, such as modifying the saturation of the colours, modifying the contrast, modifying the hue and/or modifying the lighting. It is to be noted that in some exemplary embodiments there may be additional elements that are superimposed to the formation in the modified video content.
In addition to the formation, the players 340 and 345 as well as the ball 305 are identified as essential elements in the captured video content and thus, in the modified video, they should be presented in a manner corresponding to that of the originally captured video. A corresponding manner may be understood as a manner in which the movement, or the lack of movement, as well as the shape of the essential element, in the modified video content corresponds to the originally captured video content. Thus, the players 390 and 395 as well as the ball 355 are representations in the modified video content that correspond to those of the originally captured video. Optionally, the visual appearance of the players 390 and 395 as well as of the ball 355 may be modified with respect to the players 340 and 345 as well as the ball 305, and/or additional elements generated for the modified video content may be superimposed.
By having the replaceable elements replaced, removed or modified in some other manner, the activity itself is still rendered in a manner that corresponds to that of the originally captured video content. This allows a user to have the essential information, which is the activity and the topic of the video, preserved,
while the other elements may be modified such that the modified video content is
more optimal for the receiving device. Additionally, there may be elements in the captured video content that are removed from the modified video content without
replacing them. For example, parts of the structure may be removed. The audience may be removed completely or partially, and so on. The amount of removing and replacing performed may be dependent on one or more parameters of the receiving device.

Figure 4 illustrates an exemplary embodiment of a system 400 in which
an activity occurring at a structure 410 is captured as video content 415, and the video content 415 is then modified to be more optimal for a receiving device 460, which then streams the modified video content 455. In this exemplary embodiment, the activity occurs at the structure 410, for which a digital duplicate 430 is stored in a cloud-based service 420. The video content, after being captured using any suitable capturing means, is then transmitted to the back-end service 420, which in this exemplary embodiment is a cloud-based computing service. The cloud-based computing service comprises a unit 430, which may be understood as a logical unit, that stores a digital duplicate of the structure 410. It is to be noted that the digital duplicate may also be a copy of a digital duplicate stored in another location. The cloud-based service 420 also comprises a unit 440, which may be understood as a logical unit, for determining replacement data, which may be understood as replacement elements or additional elements, to enhance and/or replace elements in the captured video content 415 and, additionally or alternatively, for inserting new elements, which may also be understood as additional elements, into the video content. The cloud-based service also comprises a unit 450, which may be understood as a logical unit, for modifying the video content 415.
The video content may be modified by identifying elements comprised in the video content. The identifying may be performed using any suitable object recognition software. A trained machine learning model may also be used for identifying and classifying the elements. The elements identified may thus be classified into different categories such as discussed above in the context of the previous examples. Categories may also be understood as classifications. For example, there may be essential elements, which are classified into a first category, replaceable elements, which may be classified into a second category, and removable elements that are classified into a third category. Additionally, or alternatively, there may be a fourth category for elements that are not present in the video content 415 but which are provided by the replacement data unit 440 or
by any other suitable unit. Such elements may be understood as additional
elements, and they may be for example visual elements. Such visual elements may in some examples allow an immersive user experience. One or more additional
elements may also be used to replace one or more replaceable elements. The additional elements may be provided by generating them based on input received, or they may be fetched from a database. When fetching from the database, the input query may comprise one or more parameters of the receiving device 460, and optionally also information regarding for example one or more replaceable
elements identified from the video content 415. Using such an input query, the most optimal elements to be provided to one or more receiving devices may be identified and then added to a modified video content. Also, in case the additional elements are generated based on input, the input may also comprise one or more parameters of the receiving device 460 and/or information regarding identified elements.
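The input query described above can be sketched as a plain data structure combining device parameters with the identified replaceable elements. The field names are illustrative assumptions.

```python
def build_query(device_params, replaceable_elements):
    """Build the input query for fetching additional elements from a
    database: device parameters plus identified replaceable elements."""
    return {
        "device": {k: device_params[k] for k in sorted(device_params)},
        "replaceable": [e["name"] for e in replaceable_elements],
    }

query = build_query(
    {"location": "Asia", "display": "mobile"},
    [{"name": "message_310"}, {"name": "message_320"}],
)
print(query["replaceable"])
```

The same structure could serve as the input when additional elements are generated rather than fetched, since both paths take the device parameters and element information as input.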
For example, one of the parameters associated with the receiving device 460 may be data regarding historical activity. Such data may comprise for example information regarding how a user has used the Internet, which applications the user has spent time on, what topics have been of interest to the user and so on.
Such data may then be analysed, using for example a trained machine learning model, to identify one or more additional elements that are then determined to be optimal for the user and thus those additional elements should be obtained, by generating and/or fetching, and then inserted to the modified video content.
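A trained machine learning model could perform this analysis; as a minimal stand-in, the sketch below derives a preference parameter by simple frequency counting over historical activity records. The topic strings are illustrative assumptions.

```python
from collections import Counter

def preferred_topic(history):
    """Derive a single preference parameter from historical activity.

    history: list of topic strings from past user activity. Returns the
    most frequent topic, or None when there is no history to analyse.
    """
    if not history:
        return None
    return Counter(history).most_common(1)[0][0]

print(preferred_topic(["football", "cars", "football", "football", "travel"]))
```

The returned value plays the role of the output parameter mentioned in the text: it defines the receiving device and can then steer which additional elements are obtained for it.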
The unit 450, which is for modification of the video content, then generates the modified video content 455. The modified video content may be generated by modifying the video content 415 as described above. Alternatively, the modified video content 455 may be generated such that the digital duplicate of the structure 410 is first fetched and, based on one or more parameters of the receiving device 460, the digital duplicate may be modified to be suitable for the receiving device 460. Additionally, or alternatively, the digital duplicate may be modified based on the video content 415. For example, the lighting of the digital duplicate may be modified to correspond to the lighting of the structure 410. Additionally, or alternatively, the digital duplicate may be modified such that its representation corresponds to the representation of the structure 410 in the captured video content 415. For example, the parts of the structure 410 visible at a given time in the video content 415 may be used to determine how to represent the digital duplicate such that it corresponds to the representation of the structure 410. Yet, the correspondence may have some tolerance such that while the angle and/or
field of view towards the structure 410 are followed in the representation
generated using the digital duplicate, the amount of structure visible in the video content 415 may differ from the representation generated using the digital
duplicate. It is to be noted that in some examples the representation of the structure may be omitted in the generated modified video content. The generated video content may then focus purely on the activity captured.

In this exemplary embodiment, generating the modified video content comprises modifying the digital duplicate to mimic the representation of the
structure 410 in the captured video content 415 by following the field of view and angle of the representation of the structure 410 in the captured video content 415. Additionally, the lighting of the digital duplicate may be adjusted to correspond to that of the representation of the structure 410. Once the digital duplicate, which may also be a copy of a digital duplicate, is modified to be suitable for the modified video content 455 that is generated, elements may be inserted into the modified digital duplicate and thereby into the modified video content 455.
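The generation step just described, aligning the duplicate with the captured view and then inserting elements by category, can be sketched per frame as follows. All dictionary structures, field names and category strings are illustrative assumptions.

```python
def generate_frame(duplicate, captured_view, elements):
    """Build one modified frame from the digital duplicate.

    The duplicate is aligned to the captured view (angle, field of view,
    lighting), then elements are inserted by category: essential elements
    are kept, replaceable ones are swapped for their replacements, and
    removable ones are omitted.
    """
    frame = dict(duplicate)
    frame["angle"] = captured_view["angle"]
    frame["field_of_view"] = captured_view["field_of_view"]
    frame["lighting"] = captured_view["lighting"]
    frame["elements"] = [
        e.get("replacement", e)
        for e in elements
        if e["classification"] != "removable"
    ]
    return frame

frame = generate_frame(
    {"structure": "stadium"},
    {"angle": 35, "field_of_view": 90, "lighting": "evening"},
    [
        {"name": "ball", "classification": "essential"},
        {"name": "msg_315", "classification": "replaceable",
         "replacement": {"name": "msg_365", "classification": "replaceable"}},
        {"name": "crowd", "classification": "removable"},
    ],
)
print([e["name"] for e in frame["elements"]])
```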
Optionally, the essential elements are also enhanced in terms of their appearance.
Then, in case there are additional elements that are to replace replaceable elements, the replaceable elements are removed, and the additional elements are inserted instead. If there are elements determined as removable, those elements are omitted from the generated modified video content. Optionally, in case there are further additional elements to be added to the modified video content, which are intended to provide additional elements without necessarily replacing other elements completely or partly, those are also added. The additional elements may for example provide information that is determined as relevant based on one or more parameters of the receiving device 460. Such information may for example inform the user how the rendering of the modified video content may be altered, information that is additional information to the captured activity and which based on historical activity and/or display space available on the receiving device is determined as relevant to the receiving device, as well as provide any other information determined as relevant. The information may also comprise messages from a third party that are determined to be relevant for a user active on the receiving device 460. In some examples, this may help to provide more targeted content to the receiving device and thereby also to the user of the receiving device.
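The keep/replace/omit logic described in this passage can be sketched as follows. The element representation (dicts with `name` and `classification` keys), the classification labels and the function name are assumptions made for illustration, not part of the application.

```python
from typing import Dict, List, Optional

# Classifications used in the text: essential elements are kept, replaceable
# elements are swapped for additional elements, and removable elements are
# omitted from the modified video content.
ESSENTIAL, REPLACEABLE, REMOVABLE = "essential", "replaceable", "removable"


def build_modified_elements(elements: List[Dict],
                            replacements: Dict[str, str],
                            extra_info: Optional[List[str]] = None) -> List[str]:
    out = []
    for el in elements:
        cls = el["classification"]
        if cls == ESSENTIAL:
            out.append(el["name"])
        elif cls == REPLACEABLE:
            # Replaceable elements are removed and the additional element
            # determined for the receiving device is inserted instead.
            if el["name"] in replacements:
                out.append(replacements[el["name"]])
        # REMOVABLE elements are simply omitted.
    # Further additional elements (e.g. targeted information) are appended
    # without replacing anything.
    out.extend(extra_info or [])
    return out
```

In this sketch the replacement mapping and the extra information would be chosen upstream based on the parameters of the receiving device.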
It is to be noted that the user may in some examples comprise a plurality of users, who are watching the modified video content. The targeted information may be useful for example if the user is not technically savvy and assistance for more optimal rendering of the modified video content can be provided, and/or targeted commercial messaging can be enabled. Such targeted commercial messaging may for example guide the user to take certain interactive actions with respect to the receiving device, in other words, to provide suitable input to the receiving device to allow the user to perform actions prompted by the message. Such actions may comprise using a browser, performing purchases, sending messages, taking photos and so on.
Once the modified video content is generated, the modified video content may be transmitted to the receiving device 460, which in this exemplary embodiment is a computer, and the receiving device 460 is then configured to render the modified video content on its display and/or on a display connected to it.
Figure 5 illustrates a flow chart according to an exemplary embodiment.
In this exemplary embodiment, in block S1, video content is obtained, the video content being generated by capturing an activity. The activity may be for example an activity such as those discussed in the context of previous exemplary embodiments. Then, in block S2, elements comprised in the captured video content may be identified. The elements may be understood as entities, which may be understood as objects as well, that are present in the video content, such as structures, vehicles, people, animals, flowers, trees, sections in the ground and so on. After identifying the elements, using for example object recognition, in block S3, a classification is determined for each identified element.
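The per-element classification step can be sketched as a simple mapping from recognized labels to the classifications used in this disclosure. The label sets and the decision rules below are illustrative assumptions, not the classifier of the embodiment (which may, for example, be a trained machine-learning model).

```python
# Hypothetical label set; a real system would derive participants from the
# object recognition output and the captured activity.
PARTICIPANT_LABELS = {"person", "vehicle", "animal"}


def classify_element(label: str, matches_digital_duplicate: bool) -> str:
    """Map a recognized element to a classification (illustrative rules)."""
    if label in PARTICIPANT_LABELS:
        return "first"   # essential: a participant of the activity
    if matches_digital_duplicate:
        return "second"  # replaceable: part of the known structure
    return "third"       # removable: omitted from the modified content
```

The second branch stands in for the comparison against the digital duplicate of the structure described in block S5.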
Then, in block S4, at least one of the identified elements is determined as a participant of the activity and consequently as an essential element. The activity is performed by the participant(s) and thus to convey the activity, the participants are to be conveyed. In some examples, the formation may also be identified as an essential element that is to be conveyed, while in some other examples the formation may not be considered as an essential element. For example, if the activity comprises singing on a stage, the stage defines the singing activity at least partly, but it may not be necessary to convey the stage; instead, new additional content may be generated that can be used to replace the stage, thus allowing the singer to be placed in a different environment for the performance.
In block S5, based on the digital duplicate, at least part of the structure, which is an identified element, may be determined to be an element in a second classification, which is a classification for elements that may be replaced, at least partly, with different content, in other words, with one or more additional elements. In block S6, based on at least one parameter of a receiving device, at least one additional element is obtained to replace the element in the second classification that was determined in block S5. The obtaining may comprise generating and/or fetching, in other words, retrieving. Then, in block S7, a modified video content is generated, the modified video content comprising the at least one additional element, that replaces the element in the second classification, and the elements in the first classification. Finally, in block S8, the modified video content is provided to the receiving device for rendering.
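The flow of blocks S1 to S8 can be sketched end to end with in-memory stand-ins. Everything below (the data shapes, the `catalog` standing in for generating and/or fetching additional elements, and the use of a language parameter) is an assumption for the example, not the actual implementation.

```python
def run_pipeline(video_elements, duplicate_parts, device_params, catalog):
    """Illustrative sketch of blocks S1-S8 of the flow chart."""
    # S1-S2: the captured video content has already been reduced to
    # identified elements here.
    # S3-S4: participants of the activity form the first classification.
    first = [e for e in video_elements if e.get("participant")]
    # S5: parts of the structure known from the digital duplicate form the
    # second classification (replaceable).
    second = [e for e in video_elements if e["name"] in duplicate_parts]
    # S6: obtain additional elements for the receiving device; `catalog`
    # stands in for generating and/or fetching based on a device parameter.
    additional = [catalog[device_params["language"]] for _ in second]
    # S7: the modified content keeps the first classification and replaces
    # the second classification with the additional elements.
    modified = [e["name"] for e in first] + additional
    # S8: provide to the receiving device (here: simply return).
    return modified
```

For instance, with a singer on a stage and a device whose language parameter selects a localized banner, the stage would be replaced while the singer is retained.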
Figure 6 illustrates an exemplary embodiment of a device 600 that may be, or may be comprised in, a computing device or a computing device used for running a back-end service. This exemplary embodiment is compatible with the previous exemplary embodiments, and they may be combined in any suitable manner. In this exemplary embodiment, there is at least one processor 640, at least one memory 630, at least one connectivity unit 610 and at least one unit for receiving input and providing output 620. It is to be noted that the units described here are logical units and thus the actual implementation may vary. The at least one processor 640, the at least one memory 630, the at least one connectivity unit 610 and the at least one unit for receiving input and providing output 620 may be connected to each other.
The at least one processor 640 may also be referred to as a core, a central processing unit (CPU), a microprocessor or a graphics processing unit (GPU). A processor may be understood as an integrated circuit for performing calculations according to instructions provided using computer code. The at least one memory 630 may comprise volatile and/or non-volatile memory. Thus, the at least one memory 630 may be understood to be one block of memory or a combination of different blocks of memory. The memory may be for storing different types of data.
The at least one memory 630 also stores computer program instructions, for example in the form of an application and/or an operating system. The at least one memory 630 provides computer program instructions to the at least one processor 640 for executing, and the at least one processor 640 may then be configured to store data into the at least one memory 630. Some examples of memory are random access memories (RAMs), such as static RAM (SRAM) and dynamic RAM (DRAM), read-only memory (ROM), flash memories, optical discs, and magnetic computer storage devices, such as hard disk drives. The input and output unit 620 may allow user input, such as pressing a button, touch input and/or voice input, to be received by the device 600, and output, such as audio, haptic or visual output, to be provided to a user. The connectivity unit 610 allows a connection to be formed between the device 600 and another device. The connectivity unit may allow wireless and/or wired connections to be formed between the device 600 and other devices. Examples of connection types that may be supported by the connectivity unit 610 are cellular-communication-based connections, local area networks, Bluetooth connections, Wi-Fi connections, etc.
The present disclosure has been described above with reference to the exemplary embodiments. However, a person skilled in the art will understand that there may be embodiments that vary from the example embodiments discussed above within the scope of the claims. Thus, a skilled person will understand that the exemplary embodiments described above may, but are not required to, be combined with other exemplary embodiments in various manners.

Claims (15)

1. A computer-implemented method comprising:
obtaining captured video content that is generated by capturing an activity, wherein the activity is defined by a formation adjacent to a structure, or comprised in the structure, the structure being located in a geographical location;
identifying elements comprised in the captured video content;
determining for each identified element a classification, wherein a first classification is a classification for essential elements and a second classification is for replaceable elements;
determining at least one identified element as a participant of the activity and as an element in the first classification;
determining at least one other identified element, based on a digital duplicate of the structure, as one part of the structure and as an element in the second classification;
based on at least one parameter of a receiving device, obtaining at least one additional element to replace the element in the second classification;
generating modified video content comprising the at least one additional element, that replaces the element in the second classification, and the element in the first classification; and
providing the modified video content to the receiving device for rendering.
2. A computer-implemented method according to claim 1, wherein movement, or lack of movement, of the essential elements in the modified video content corresponds to movement, or lack of movement, of the essential elements in the captured video content.

3. A computer-implemented method according to claim 1 or 2, wherein the method further comprises enhancing visual appearance of one or more of the essential elements for the modified video content.

4. A computer-implemented method according to any previous claim, wherein the element in the second classification comprises a message that is replaced by another message comprised in the additional element.
5. A computer-implemented method according to any previous claim,
wherein the method further comprises modifying the digital duplicate to have a visual appearance that corresponds, at least partly, to the structure comprised in the captured video content and inserting the modified digital duplicate, at least partly, to the modified video content as an additional element replacing at least part of the structure.
6. A computer-implemented method according to any previous claim, wherein the method further comprises modifying the digital duplicate based on one or more parameters of a receiving device and inserting at least part of the modified digital duplicate to the modified video content.
7. A computer-implemented method according to claim 6, wherein the at least one additional element is rendered superimposed on the modified digital duplicate in the modified video content.
8. A computer-implemented method according to any previous claim, wherein obtaining the at least one additional element comprises generating or fetching the at least one additional element based on one or more parameters of the receiving device.
9. A computer-implemented method according to any previous claim, wherein the identified elements that are in the first classification are extracted from the captured video content and generating the modified video content comprises combining the extracted elements with at least part of the digital duplicate and the at least one additional element.
10. A computer-implemented method according to any previous claim, wherein the method further comprises identifying at least one element determined to be in a third classification, which is for elements that are removed from the modified video content, and removing the identified at least one element in the third classification from the modified video content.
11. A computer-implemented method according to any previous claim, wherein the classification is performed using a trained machine-learning model.
12. A computer-implemented method according to any previous claim,
wherein the at least one parameter of the receiving device comprises one or more of the following parameters: geographical location, language, user identified as active, processing capability of the receiving device, display onto which the receiving device renders the modified video content, quality of connection to a server, time of day, and/or historical activity performed by the receiving device.
13. A computer-implemented method according to any previous claim, wherein the receiving device is one of the following: a mobile phone, a tablet computer, a computer, or a television.
14. A computer program product comprising instructions, which, when executed by one or more computing devices, cause the one or more computing devices to perform a computer-implemented method according to any of claims 1 to 13.
15. A system comprising at least one video camera for capturing video content, a server configured to receive the captured video content and generate, based on the captured video content, a modified video content, and a receiving device, wherein the receiving device is configured to receive the modified video content, and wherein the system is configured to perform a computer-implemented method according to any of claims 1 to 13.
FI20245341A 2024-03-26 2024-03-26 Modifying video content for a receiving device FI20245341A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
FI20245341A FI20245341A1 (en) 2024-03-26 2024-03-26 Modifying video content for a receiving device
PCT/IB2025/053112 WO2025202875A1 (en) 2024-03-26 2025-03-25 Modifying video content for a receiving device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FI20245341A FI20245341A1 (en) 2024-03-26 2024-03-26 Modifying video content for a receiving device

Publications (1)

Publication Number Publication Date
FI20245341A1 true FI20245341A1 (en) 2025-09-27

Family

ID=97141888

Family Applications (1)

Application Number Title Priority Date Filing Date
FI20245341A FI20245341A1 (en) 2024-03-26 2024-03-26 Modifying video content for a receiving device

Country Status (2)

Country Link
FI (1) FI20245341A1 (en)
WO (1) WO2025202875A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090315978A1 (en) * 2006-06-02 2009-12-24 Eidgenossische Technische Hochschule Zurich Method and system for generating a 3d representation of a dynamically changing 3d scene
US20150297949A1 (en) * 2007-06-12 2015-10-22 Intheplay, Inc. Automatic sports broadcasting system
US20170323478A1 (en) * 2014-01-17 2017-11-09 Nokia Technologies Oy Method and apparatus for evaluating environmental structures for in-situ content augmentation
US20190088005A1 (en) * 2018-11-15 2019-03-21 Intel Corporation Lightweight View Dependent Rendering System for Mobile Devices
US20210012557A1 (en) * 2017-05-31 2021-01-14 LiveCGI, Inc. Systems and associated methods for creating a viewing experience
US20210144449A1 (en) * 2019-11-11 2021-05-13 José Antonio CRUZ MOYA Video processing and modification
US20230260219A1 (en) * 2022-02-17 2023-08-17 Rovi Guides, Inc. Systems and methods for displaying and adjusting virtual objects based on interactive and dynamic content
US11831965B1 (en) * 2022-07-06 2023-11-28 Streem, Llc Identifiable information redaction and/or replacement
US20240071006A1 (en) * 2022-08-31 2024-02-29 Snap Inc. Mixing and matching volumetric contents for new augmented reality experiences

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12266176B2 (en) * 2014-02-28 2025-04-01 Genius Sports Ss, Llc Data processing systems and methods for generating interactive user interfaces and interactive game systems based on spatiotemporal analysis of video content
WO2018057530A1 (en) * 2016-09-21 2018-03-29 GumGum, Inc. Machine learning models for identifying objects depicted in image or video data
US10325410B1 (en) * 2016-11-07 2019-06-18 Vulcan Inc. Augmented reality for enhancing sporting events
US10419790B2 (en) * 2018-01-19 2019-09-17 Infinite Designs, LLC System and method for video curation
US10967277B2 (en) * 2019-03-29 2021-04-06 Electronic Arts Inc. Automated player sponsorship system


Also Published As

Publication number Publication date
WO2025202875A1 (en) 2025-10-02


Legal Events

Date Code Title Description
PC Transfer of assignment of patent

Owner name: ADVANTAGE HOLDING LIMITED