
GB2552376A - Method and device for efficiently generating, based on a video flow, a plurality of video streams required by modules of a video surveillance system - Google Patents


Info

Publication number
GB2552376A
GB2552376A GB1612727.6A
Authority
GB
United Kingdom
Prior art keywords
video
video stream
strategy
stream
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1612727.6A
Other versions
GB201612727D0 (en)
GB2552376B (en)
Inventor
Sevin Julien
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB201612727A priority Critical patent/GB2552376B/en
Publication of GB201612727D0 publication Critical patent/GB201612727D0/en
Publication of GB2552376A publication Critical patent/GB2552376A/en
Application granted granted Critical
Publication of GB2552376B publication Critical patent/GB2552376B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/127Prioritisation of hardware or computational resources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/23439Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements for generating different versions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/637Control signals issued by the client directed to the server or network components
    • H04N21/6377Control signals issued by the client directed to the server or network components directed to server
    • H04N21/6379Control signals issued by the client directed to the server or network components directed to server directed to encoder, e.g. for requesting a lower encoding rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Generating, based on a single video flow, a plurality of video streams required by modules of a video-surveillance system, where the plurality of video streams comprises at least one current video stream and a target video stream to be generated. The plurality of streams is generated by forming candidate video flow strategies 430, estimating for each strategy a cost in terms of system resources 440, and then selecting the strategy with the smallest cost 450. The modules of the video-surveillance system include a viewing or display module, a recording or storage module and a video content analytics (VCA) module. The cost of a strategy is determined based on the network cost and the processing cost, calculated by taking into account the frame rate, resolution and compression rate of the video streams, and with respect to bandwidth limits.

Description

(54) Title of the Invention: Method and device for efficiently generating, based on a video flow, a plurality of video streams required by modules of a video surveillance system
(57) Abstract Title: Cost efficient generation of video streams for a CCTV system
[Drawings, sheets 1/7 to 7/7 (images GB2552376A_D0001 to GB2552376A_D0015), including Figures 1b, 2, 3, 5, 6d and 7d; reference numerals 150, 300, 710, 711, 712]
Method and device for efficiently generating, based on a video flow, a plurality of video streams required by modules of a video surveillance system
FIELD OF THE INVENTION
The present invention relates in general to video-surveillance systems, and in particular to a method, a device, a system and a computer program configured for optimizing the generation of plural video streams required by different devices of a video-surveillance system.
BACKGROUND OF THE INVENTION
A video surveillance system is a system comprising one or more cameras and associated means configured to transmit the video signal (or signals) through a closed circuit to devices that display, record, process, and/or analyse it.
A video surveillance system may have several purposes, such as crime prevention, surveillance of industrial processes, or traffic monitoring.
A video surveillance (VS) system has in general two main baseline functions: live viewing and recording (for the retrieval and viewing of video, such as for a post-event investigation). Concerning the live viewing, a human operator monitors one or several cameras, each camera corresponding to a given scene. In addition to these two main functions, there is also a growing use of Video Content Analytics (VCA) algorithms which process and analyse video in order to perform new tasks. For instance, typical VCA algorithms used in a video-surveillance system are face detection or recognition, people tracking, and licence plate reading.
Each function (recording, viewing, VCA processing) may require a respective video stream derived from a same video flow but with different characteristics.
The video flow comprises video data captured by a video camera in operation. It is formed by a set of consecutive images or frames. Relative to a video stream (as described as follows), a video flow can be considered as raw data (i.e., which is not encoded).
A video stream comprises data which corresponds to a result of the processing of the video flow (or another video stream) by an encoding module performing an encoding algorithm which is set with parameters such as video encoding standard, frame rate, resolution, and/or compression rate. The values of these parameters are a set of characteristics which defines the video stream.
For example, a video stream intended for a recording function (implemented in a recording server) is referred to as a “recording stream”. A video stream intended for a viewing function (implemented in a viewer) is referred to as a “viewing stream”. A video stream intended for a VCA processing function (implemented in a VCA server) is referred to as a “VCA stream”.
For the sake of illustration, an example is given as follows to illustrate the above three video streams generated from a same video flow but with different characteristics:
recording stream: processed in a low frame rate (5 fps) but a high resolution (1080p);
viewing stream: processed in a high frame rate (30 fps) but a medium resolution (720p); and
VCA stream: processed in a medium frame rate (15 fps) and a medium resolution (720p).
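The per-function profiles in the example above can be represented as plain data. The following sketch is purely illustrative (the names and the pixels-per-second indicator are assumptions, not part of the patent):

```python
# Hypothetical sketch: the example stream profiles as plain data.
# Field names and values are illustrative only.
STREAM_PROFILES = {
    "recording": {"frame_rate": 5,  "resolution": (1920, 1080)},  # low fps, high res
    "viewing":   {"frame_rate": 30, "resolution": (1280, 720)},   # high fps, medium res
    "vca":       {"frame_rate": 15, "resolution": (1280, 720)},   # medium fps, medium res
}

def pixels_per_second(profile):
    """Rough throughput indicator: frames per second times pixels per frame."""
    width, height = profile["resolution"]
    return profile["frame_rate"] * width * height

for name, profile in STREAM_PROFILES.items():
    print(name, pixels_per_second(profile))
```

Even this crude indicator shows that the three streams place very different loads on the system despite coming from the same video flow.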
As mentioned above, the characteristics required for each function (recording, viewing, VCA processing) are predetermined. The characteristics of a video stream are the minimum specifications required by a corresponding function/application (e.g., implemented by a recording server, viewer, or VCA server). This is particularly the case of an application embedded in a VCA server and configured to analyse a video stream received by the VCA server. As a matter of fact, the video quality required to count people/objects shown in a video stream is considerably lower than that required to identify a person/object shown in the same video stream.
As mentioned above, the way to generate a video stream with predetermined (target) characteristics is referred to as a “video stream strategy”. Two main types of video stream strategies are proposed in the state of the art. A first type of video stream strategies is to generate at least two video streams all directly from a camera of the VS system. A second type of video stream strategies consists in using a camera to generate only one video stream with a first set of characteristics, and using another device of the VS system different from the camera to generate another video stream with another set of characteristics (which can be considered as a “target” set of characteristics different from the first set of characteristics of the stream generated by the camera) by processing the video stream with the first set of characteristics.
In an example of the first type of video stream strategies, one single camera contains a plurality of encoders each of which is set up with a set of dedicated predetermined characteristics (e.g. encoding, frame rate (fps), resolution, and/or compression rate, etc). The plurality of encoders (e.g. n encoders wherein n is an integer greater than 1) are used to generate, based on the same video flow, n video streams which are respectively destined to a corresponding target device/server of the VS system. For example, three different video streams comprising a recording stream, a viewing stream and a VCA stream may be generated independently (and possibly sent simultaneously) by only one camera. Various known methods such as Time Division Multiple Access (TDMA) and Carrier Sense Multiple Access (CSMA) may be performed by the camera to send simultaneously the multiple video streams.
It is noted that a video stream directly generated by a camera is generated according to a video stream strategy which is, as mentioned above, referred to as a “camera video stream strategy”.
In addition, the first type of video stream strategies is particularly relevant when a camera embeds multi-streaming technology.
As for the second type of video stream strategies, the target video streams with the target sets of characteristics to be provided to a corresponding target device are not generated directly by a camera. Instead, the target video streams are produced by modifying a video stream received from a camera or from another device. For example, a VCA stream is a target video stream generated, for example, at recording server level (see Fig. 1b, recording server 120) by processing a recording stream received from a camera. In such a case, the recording server 120 duplicates the received recording stream (so there are two copies of the received recording stream), wherein one of the two copies is stored in the recording server 120 and the other is processed/modified to generate the VCA stream with the target (predetermined) characteristics to be sent to a VCA server.
It is noted that the processing performed by the recording server 120 to generate the target video stream (e.g. the above VCA stream) comprises, for example, a reduction of a frame rate and/or a reduction of resolution and/or an increase of a compression rate, depending on a target set of characteristics (i.e., minimal required characteristics for processing the target stream).
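One of the processing steps mentioned above, frame rate reduction, can be sketched as simple frame dropping. This is an illustrative sketch only, not the patent's implementation:

```python
# Illustrative sketch: reduce the frame rate of a decoded sequence by
# dropping frames, keeping roughly every (source_fps / target_fps)-th frame.
def reduce_frame_rate(frames, source_fps, target_fps):
    if target_fps >= source_fps:
        return list(frames)          # nothing to drop
    step = source_fps / target_fps   # e.g. 30 fps -> 15 fps gives step 2.0
    kept, next_pick = [], 0.0
    for index, frame in enumerate(frames):
        if index >= next_pick:
            kept.append(frame)
            next_pick += step
    return kept

# One second of a 30 fps stream reduced to 15 fps keeps every other frame.
print(len(reduce_frame_rate(range(30), 30, 15)))  # 15
```

Resolution reduction and compression rate increase would similarly be performed by rescaling and re-encoding the decoded stream before forwarding it to the target device.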
An alternative video stream strategy could be to generate the VCA stream also from the recording stream but at the VCA server level. Similarly, some other alternative video stream strategies can be defined.
For generating, from a single video flow, the multiple video streams required by the respective modules/devices of the VS system, different video stream strategies may consume different levels of system resources. Typically, a video surveillance system has limited system resources, such as processing resources and/or network resources. For example, the network infrastructure providing the communication paths used to transmit the video streams from one module of the VS system to another has a limited bandwidth.
A communication path comprises one or a plurality of links. A link of a communication path is used to connect two consecutive devices/modules of the VS system, and it offers a predetermined data rate or bandwidth. The devices/modules of the VS system are for instance a camera, a recording server, a viewer, or a VCA server. A link can be implemented as using a physical cable or a wireless connection.
Concerning the processing resources, each device/module of the VS system has limited resources in terms of processing resources, provided for example by its central processing unit (CPU) and/or memory. Consequently, if the resource constraints are not taken into account during the selection of video stream strategies to be used, some data of the video streams would be lost and the performance of the VS system could accordingly not be assured.
Each video stream strategy has its drawbacks and advantages in terms of processing resources and/or of network resources used by the VS system.
For example, a camera video stream strategy (the “first type of video stream strategies” mentioned above) makes it possible to reduce the processing resources consumed by a target device/server of the VS system, because no additional processing is required by the target device/server. On the other hand, it may result in a higher cost in terms of network resources, because all the video streams are generated by the same source device (the single camera) and all need to be transmitted over the network from the camera to different or similar target devices.
The second type of video stream strategies may result in a higher cost in terms of processing resources but a lower cost in terms of network resources. Since not all target video streams are generated by the same camera (meaning one or some of the target streams are generated by a recording server, a viewer and/or a VCA server), the network resources used to transmit the target video streams can be saved. On the other hand, additional computational processing needs to be performed in some devices of the VS system (e.g. a recording server, a viewer and/or a VCA server) to process the received video stream, which may result in a higher cost in terms of processing resources.
For a considered VS system, different types of resource constraints, such as network constraints and/or processing constraints, may affect the same video streams in different ways. For instance, concerning the network resources and constraints, the devices/modules of a VS system can be arranged according to a centralized deployment (i.e., centralized architecture) or a distributed deployment (i.e., distributed architecture). The two deployments may present different bottlenecks in the network infrastructure. The impact on the VS system resulting from such a bottleneck (a network constraint) can be very different depending on the deployment. The centralized and distributed deployments are illustrated in more detail in the paragraphs describing Figure 2.
As for the processing constraints, the devices/modules of a VS system can have different constraints which depend on the processing resources provided by the devices. These processing resources depend on characteristics of the electronic components used, such as their number of cores, their performance, and/or the amount of memory available.
It can thus be understood that depending on the considered VS systems (e.g., architecture), the network and processing resources and constraints can be very different. In addition, depending on the network and processing resources and constraints as well as on the video stream strategies selected for generating the target video streams (such as a VCA stream required by a VCA server), the performance of the VS systems can also be very different.
Conventionally, the selection of one or several video stream strategies is manually performed by the administrator of the VS system, without knowing and/or taking into account the resource constraints of the VS system whether in terms of network or processing. As a matter of fact, the manual selection of video stream strategies can be very inefficient.
For example, “trial-and-error” is a frequent way to configure a VS system until a workable set of video stream strategies is found. Such a manual selection may waste much time testing many useless sets of video stream strategies before finding a workable set which matches the resource constraints of the VS system. The useless sets of video stream strategies correspond to those strategies which cannot be applied due to a lack of resources (in terms of network and/or processing) of the VS system. As the scale of the VS system grows (e.g. as the number of devices/modules of the VS system increases), the number of video stream strategies to be tested increases considerably. The computational complexity may therefore result in a system overhead that becomes unaffordable.
Therefore, one of the objectives of the invention is to automatically determine an optimized set of video stream strategies to be used to generate the respective video streams required by the respective devices/modules of a video surveillance (VS) system.
The invention also makes it possible to automatically select among these sets of video stream strategies, the optimized set of video stream strategies to be applied to generate the respective video streams.
SUMMARY OF THE INVENTION
The present invention has been devised to address one or more of the foregoing concerns.
According to a first aspect of the invention, there is provided a method for generating, based on a single video flow, a plurality of video streams required by modules of a video-surveillance system; the plurality of video streams comprising at least one current video stream and a target video stream to be generated, the target video stream being distinct from the at least one current video stream. The method comprises forming candidate video flow strategies according to each of which the at least one current video stream and the target video stream can be generated; estimating, for each of the candidate video flow strategies, a cost in terms of system resources based on at least one of network features and of processing features of the video-surveillance system required to process the candidate video flow strategy; and selecting, from the candidate video flow strategies, a target video flow strategy which presents a smallest cost among the estimated costs of the candidate video flow strategies.
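The selection step described above — form candidates, estimate each cost, keep the cheapest — can be sketched as a minimal loop. The cost function here is a placeholder assumption; the patent leaves its exact form to the cost-estimation embodiments:

```python
# Minimal sketch of the selection step: estimate a cost for each candidate
# video flow strategy and keep the one with the smallest cost.
def select_strategy(candidates, estimate_cost):
    best, best_cost = None, float("inf")
    for strategy in candidates:
        cost = estimate_cost(strategy)
        if cost < best_cost:
            best, best_cost = strategy, cost
    return best, best_cost

# Hypothetical candidates and pre-computed costs, for illustration only.
candidates = ["camera-direct", "via-recording-server", "via-vca-server"]
costs = {"camera-direct": 9.0, "via-recording-server": 4.5, "via-vca-server": 6.0}
print(select_strategy(candidates, costs.get))  # ('via-recording-server', 4.5)
```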
The method of the invention makes it possible to automatically find the optimized video flow strategy to be used to generate the corresponding video streams; that is, the method optimizes the video surveillance (VS) system in terms of network and/or processing resources. Consequently, the invention makes it possible to increase the number of video streams to be generated for the corresponding modules/devices of the VS system while maintaining the same conditions of utilization as well as the same resource constraints of the VS system in terms of network and processing. In other words, the performance of the VS system can be largely increased without investment in new equipment.
Furthermore, compared to the conventional manual selection of video stream strategies, the method saves much time and is able to select, by taking into account the resource constraints, an optimal video flow strategy to be used to generate video streams. For example, the network deployment (e.g. a distributed or centralized deployment) is often complex, making it difficult for the operator of a VS system to master network considerations and to select an optimal video flow strategy while taking into account the network constraints, especially when the scale of the VS system is large.
Optional features of embodiments of the invention are defined in the appended claims. Some of these features are explained here below with reference to a method, while they can be transposed into features dedicated to a device according to embodiments of the invention.
In an embodiment, a candidate video flow strategy is formed by performing at least one of the following: adding to an existing video flow strategy a target video stream strategy according to which the target video stream is directly derived from the video flow; adding to an existing video flow strategy a target video stream strategy according to which the target video stream is derived from one of the current video streams; and modifying one or more existing video stream strategies of an existing video flow strategy, so as to generate, according to the modified video stream strategies, the current video stream and the target video stream.
In an embodiment, the candidate video flow strategies respectively comprise at least one video stream strategy, the video stream strategy comprising at least part of the following information on a corresponding video stream: a set of characteristics of the video stream which is generated according to the video stream strategy, the set of characteristics comprising at least one of a frame rate, a resolution value and a compression rate of the video stream; a type of processing to be performed to generate the video stream; an identifier of a source video stream from which the video stream is generated; and an identifier of one of the modules configured to process the video stream.
In an embodiment, the frame rate and the resolution value of the target video stream are respectively not greater than the frame rate and the resolution value of a source video stream used to generate the target video stream, and the compression rate of the target video stream is not smaller than the compression rate of the source video stream, wherein the source video stream is one of the current video streams.
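The derivability constraint stated above can be sketched as a simple feasibility check. Field names are illustrative assumptions; "compression_rate" is treated here as a value that may only increase when deriving a target stream from a source:

```python
# Sketch of the derivability constraint: a target stream can only be
# derived from a source whose frame rate and resolution are at least as
# high, and whose compression rate is no higher.
def can_derive(target, source):
    target_w, target_h = target["resolution"]
    source_w, source_h = source["resolution"]
    return (target["frame_rate"] <= source["frame_rate"]
            and target_w <= source_w and target_h <= source_h
            and target["compression_rate"] >= source["compression_rate"])

viewing = {"frame_rate": 30, "resolution": (1280, 720), "compression_rate": 0.5}
vca     = {"frame_rate": 15, "resolution": (1280, 720), "compression_rate": 0.6}
print(can_derive(vca, viewing))  # True: lower fps, same resolution, higher compression
print(can_derive(viewing, vca))  # False: would require raising the frame rate
```

Candidate strategies that fail this check can be discarded before any cost is estimated.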
In an embodiment, the cost of the candidate video flow strategy comprises a network cost of the candidate video flow strategy calculated based on at least one of the following: an input rate of the target video stream which is determined based on the target set of characteristics; an input rate of the at least one current video stream which is determined based on the corresponding set of characteristics; and the network features comprising bandwidth limits of links of a communication path used by the candidate video flow strategy for transmitting the video streams including the target video stream.
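One plausible reading of this network-cost estimate can be sketched as follows. The data layout and the choice of an infinite cost for infeasible strategies are assumptions for illustration, not the patent's formula:

```python
# Hedged sketch of a network-cost estimate: sum the stream rates injected
# on each link, and reject the strategy (infinite cost) when any link's
# bandwidth limit would be exceeded.
def network_cost(streams_per_link, bandwidth_limit):
    """streams_per_link: {link_id: [stream rates in Mbit/s]};
    bandwidth_limit: {link_id: link capacity in Mbit/s}."""
    total = 0.0
    for link, rates in streams_per_link.items():
        load = sum(rates)
        if load > bandwidth_limit[link]:
            return float("inf")  # strategy violates a bandwidth constraint
        total += load
    return total

links  = {"camera->recorder": [8.0, 2.0], "recorder->vca": [2.0]}
limits = {"camera->recorder": 100.0, "recorder->vca": 10.0}
print(network_cost(links, limits))  # 12.0
```

Bandwidth already occupied by unrelated traffic, as described in the next embodiment, would simply be subtracted from each link's available capacity before the check.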
In an embodiment, the network cost of the candidate video flow strategy is calculated by taking into account the bandwidth of said links occupied by the transmission of other data which are not related to the candidate video flow strategy.
In an embodiment, the cost of a candidate video flow strategy comprises a processing cost of the candidate video flow strategy calculated based on at least one of: the processing features of the modules which participate in the processing of the candidate video flow strategy; and the frame rate reduction, resolution reduction and/or compression rate increase resulting from the processing of the candidate video flow strategy by said modules.
In an embodiment, the processing cost of the candidate video flow strategy is calculated by taking into account processing loads placed on said modules for processing other data which are not related to the candidate video flow strategy.
In an embodiment, the cost of the candidate video flow strategy is determined based on the network cost and the processing cost.
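The text states that the overall cost combines the network cost and the processing cost but does not fix the combination formula; a weighted sum is one simple assumption:

```python
# Assumed combination of the two partial costs: a weighted sum. The
# weights let an administrator bias the selection towards saving either
# network or processing resources.
def total_cost(network, processing, w_net=1.0, w_proc=1.0):
    return w_net * network + w_proc * processing

print(total_cost(12.0, 3.0))             # 15.0 (equal weights)
print(total_cost(12.0, 3.0, w_proc=2.0)) # 18.0 (processing counted double)
```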
In an embodiment, the method further comprises, after the selecting step, a step of generating the target video stream with the target set of characteristics and the at least one current video stream according to the selected target video flow strategy.
In an embodiment, one of the modules is a video streaming device configured to generate the video flow, and the rest of the modules comprise a video content analytics (VCA) server, a recording server and/or a viewer.
According to a second aspect of the invention, there is provided a device for generating, based on a single video flow, a plurality of video streams required by modules of a video-surveillance system; the plurality of video streams comprising at least one current video stream and a target video stream to be generated, the target video stream being distinct from the at least one current video stream; the device comprising a processor configured for carrying out the steps of forming candidate video flow strategies according to each of which the at least one current video stream and the target video stream can be generated; estimating, for each of the candidate video flow strategies, a cost in terms of system resources based on at least one of network features and of processing features of the video-surveillance system required to process the candidate video flow strategy; and selecting, from the candidate video flow strategies, a target video flow strategy which presents a smallest cost among the estimated costs of the candidate video flow strategies.
The device of the invention makes it possible to automatically find the optimized video flow strategy to be used to generate the corresponding video streams, which means that the method optimizes the video surveillance (VS) system in terms of network and/or processing resources. Consequently, the invention makes it possible to increase the number of video streams to be generated for the corresponding modules/devices of the VS system while maintaining the same conditions of utilization as well as the same resource constraints of the VS system in terms of network and processing. In other words, the performance of the VS system can be largely increased without investment in new equipment.
Furthermore, compared to the conventional manual selection of video stream strategies, the device saves much time and is able to select, by taking into account the resource constraints, an optimal video flow strategy to be used to generate video streams. For example, the network deployment (e.g. a distributed or centralized deployment) is often too complex for the operator of a VS system to master the network considerations and to select an optimal video flow strategy while taking into account the network constraints, especially when the scale of the VS system is large.
In an embodiment, the processor is further configured for carrying out at least one of the following steps to form a candidate video flow strategy: adding to an existing video flow strategy a target video stream strategy according to which the target video stream is directly derived from the video flow; adding to an existing video flow strategy a target video stream strategy according to which the target video stream is derived from one of the current video streams; and modifying one or plural existing video stream strategies of an existing video flow strategy, so as to generate, according to the modified one or plural existing video stream strategies, the target video stream and the current video streams.
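The three candidate-forming operations can be sketched as follows. A video flow strategy is modelled here as a list of (source, note) pairs; these data shapes are illustrative assumptions only:

```python
def form_candidate_strategies(existing_strategy, current_streams):
    """Form candidate video flow strategies from an existing strategy
    using the three operations described above (a sketch)."""
    candidates = []
    # 1) add a target stream strategy derived directly from the video flow
    candidates.append(existing_strategy + [("video_flow", "direct")])
    # 2) add a target stream strategy derived from a current video stream
    for stream in current_streams:
        candidates.append(existing_strategy + [(stream, "derived")])
    # 3) modify an existing video stream strategy so that the target and
    #    current streams are generated according to the modified strategy
    for i, (source, _note) in enumerate(existing_strategy):
        modified = list(existing_strategy)
        modified[i] = (source, "modified")
        candidates.append(modified)
    return candidates
```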
The invention relates to a computer program product for a programmable apparatus, the computer program product comprising instructions for carrying out a method for generating a plurality of video streams required by a video-surveillance system as previously described, when the program is loaded and executed by a programmable apparatus.
The invention also relates to a non-transitory computer-readable medium storing instructions of a computer program for implementing a method for generating a plurality of video streams required by a video-surveillance system as previously described.
The non-transitory computer-readable medium may have features and advantages that are analogous to those set out above in relation to the method and the device.
At least parts of the methods according to the invention may be computer implemented. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to as a “module” or “system”. Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium, and in particular a suitable tangible carrier medium or suitable transient carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:
Figure 1 is an illustration of typical video streams in a video surveillance system;
Figure 2 is a schematic representation of an example of architecture of a video surveillance system;
Figure 3 is a schematic block diagram of a computing device which may be used for implementing one or more embodiments of the invention;
Figure 4 is a diagram explaining main steps 400 to 470 of the method according to an embodiment of the invention;
Figure 5a illustrates sub-steps 431 to 433 of the step 430 performed to generate a candidate video flow strategy based on an existing video flow strategy; and Figure 5b illustrates steps 510 to 530 of the step 440 performed to determine the cost of a test video flow strategy;
Figures 6a to 6d schematically illustrate examples of processing a video flow strategy to generate video streams from a given video flow generated by a camera of the VS system; and Figures 7a to 7d schematically illustrate an example of results of the steps of the method according to an embodiment of the invention.
Other particularities and advantages of the invention will also emerge from the following description.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
In the present document, the following words or expressions are used as hereafter stated.
- A video flow is defined as the video data captured by a video camera in operation. It is formed by a set of consecutive images or frames. Relative to a video stream (as described below), a video flow can be considered as raw data (i.e., data which is not encoded).
- A video stream comprises data which corresponds to the result of the processing of the video flow by an encoding module performing an encoding algorithm which is set with parameters such as a frame rate and/or resolution and/or compression rate.
- A video stream strategy is defined as a processing method to generate a video stream with given (predetermined) characteristics such as for example frame rate, resolution and/or compression rate. The processing method defines the devices to be used for the processing, and data related to the processing to be implemented by said devices.
In other words, a video stream strategy defines at least a part of a path of a communication network over which said data related to the video stream is to be transmitted, and the processing to be applied by at least one device of the path.
- A video flow strategy is defined as a set of video stream strategies applied, successively or in parallel, to one video flow, or to video streams resulting from the processing of said video flow.
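A video flow strategy, as defined above, may be modelled (purely for illustration; the class and field names are assumptions) as one video flow together with the collection of video stream strategies applied to it:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VideoFlowStrategy:
    """One video flow plus the set of video stream strategies applied
    to it, successively or in parallel (a sketch)."""
    flow_id: str
    stream_strategies: List[str] = field(default_factory=list)

    def add(self, stream_strategy: str) -> None:
        """Attach one more video stream strategy to this video flow."""
        self.stream_strategies.append(stream_strategy)
```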
- An administrator is the person who configures the video surveillance system.
In particular, he/she sets the required characteristics of all video streams which could be generated in an operation mode in the video surveillance system.
- An operator is the person who has to monitor scenes, each scene corresponding to a video flow issued from a given camera. Generally, he/she can select at any time any scene to display. However, contrary to the administrator, he/she cannot modify the characteristics of the flow corresponding to a scene. Typically, when an operator wishes to watch (display) a video generated by a given camera, he/she receives the video stream having a quality previously set by the administrator.
Figure 1 represents an illustration of the generation of video streams in a video surveillance system. The represented video surveillance system comprises several devices able to generate and process video streams. Figure 1 focuses on four of them: a camera 100 in Figure 1a, a recording server 120 in Figure 1b, a VCA server 150 in Figure 1c and a viewer 180 in Figure 1d.
A video stream strategy is a processing method to generate a video stream with given (predetermined) characteristics such as for example frame rate, resolution and/or compression rate. According to an embodiment, a video stream strategy can be characterized (formalized) by a triplet of three fields. The first field corresponds to the type of processing to be performed to generate the video stream. It is referred to as a “processing field”. The second field corresponds to the identification of a source video stream used to generate the video stream. It is referred to as an “input stream field”. The third field corresponds to the location (corresponding to a device of the video surveillance system) wherein the generation of the video stream is performed. It is referred to as a “location field”. According to another embodiment, in addition to the above-mentioned triplet of three fields, a video stream strategy may comprise further information such as a set of characteristics of the video stream comprising at least one of a frame rate, a resolution value and a compression rate of the video stream.
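The triplet formalization may be sketched as follows; the field names, types and the example values are assumptions made for illustration only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoStreamStrategy:
    """The triplet described above, plus the optional set of
    characteristics (a non-limiting sketch)."""
    processing: str                  # processing field
    input_stream: Optional[str]      # input stream field ("null" -> None)
    location: str                    # location field (device identifier)
    frame_rate: Optional[int] = None
    resolution: Optional[str] = None
    compression_rate: Optional[float] = None

# A camera video stream strategy: no input stream from outside the camera,
# so the input stream field is set to "null" (here, None).
camera_strategy = VideoStreamStrategy(
    processing="H264 encoding", input_stream=None, location="camera-100")
```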
In Figure 1a, a surveillance video-camera 100 captures a video flow of a given scene 105. The video flow captured may be processed by a set of independent video encoders 110, 111 and 112 embedded in the video-camera 100. Each video encoder is set up according to image parameters such as an encoding algorithm (typically H264 or MJPEG), a frame rate, a resolution, and a compression rate, and generates as output data a resulting flow referred to as a “video stream”.
The set-up of an encoder depends on the characteristics of the video flow (corresponding to the input data of the encoder) and on the characteristics of the video stream (corresponding to the output data of the encoder), which are related to the characteristics required for processing said stream in an optimal way.
With multiple encoders, independent video streams with different characteristics can be generated simultaneously from the same video flow by the camera 100. This corresponds to a target video stream strategy and it may be referred to as a “camera video stream strategy”.
For instance, the encoder 110 generates the video stream 130, the video encoder 111 generates the video stream 131 and the encoder 112 generates the video stream 132.
The three video streams 130, 131 and 132 output from the camera 100 may have different characteristics. Each stream output by a camera may be addressed to a predetermined device or module of the video surveillance system, referred to as a destination device defined by its function (e.g., recording, viewing, VCA processing).
Typically, in a video surveillance system, a destination device/module may be a recording server, a VCA server or a viewer. A video stream destined for a recording server is referred to as a “recording stream”. A video stream intended for a viewer is referred to as a “viewing stream”. A video stream intended for a VCA server is referred to as a “VCA stream”.
At camera level, the basic video stream strategy, referred to as a “camera video stream strategy”, can be formalized by a triplet comprising a processing field corresponding to the video encoding algorithm of the video flow with predetermined image parameters, an input stream field set to “null” (since no input stream from outside the camera is used) and a location field set to the identifier corresponding to the camera. An example of application of this strategy is given in Figure 6a which is described hereafter.
In Figure 1b, a recording server 120 receives an input stream 140 which may be a recording stream from a video-camera. The recording server may also receive VCA streams or viewing streams. The recording server 120 may be configured to process streams which are not recording streams by using an appropriate video stream strategy to generate a recording stream. However a recording server 120 may also generate and output streams of other types such as viewing or VCA streams. Consequently, several video stream strategies can be defined at the recording server level to process video streams from the received streams, and generate output streams 142.
According to an embodiment, it is possible to define six video stream strategies at the recording server level:
• A first video stream strategy may be defined to generate a viewing stream with an appropriate target set of characteristics by processing a received recording stream which has given characteristics;
• A second video stream strategy may be defined to generate a VCA stream with an appropriate target set of characteristics by processing a received recording stream with given characteristics. An example of this strategy is given in Figure 6b which is hereafter described. The second video stream strategy formalization comprises a processing field set to “processing of modification” (from a recording stream with given characteristics to a VCA stream with the target set of characteristics), an input stream field set to “recording stream” and a location field set to the identifier of the recording server;
• A third video stream strategy may be defined to generate a recording stream with an appropriate target set of characteristics by processing a received VCA stream having given characteristics. An example of application of this strategy is given in Figure 6d;
• A fourth video stream strategy may be defined to generate a viewing stream with an appropriate target set of characteristics by processing a received VCA stream with given characteristics. The fourth video stream strategy formalization comprises a processing field set to “processing of modification” (from a VCA stream with given characteristics to a viewing stream with the target set of characteristics), an input stream field set to “VCA stream” and a location field set to the identifier of the recording server;
• A fifth video stream strategy may be defined to generate a recording stream with an appropriate target set of characteristics by processing a received viewing stream having given characteristics;
• A sixth video stream strategy may be defined to generate a VCA stream with an appropriate target set of characteristics by processing a received viewing stream with given characteristics. The sixth video stream strategy formalization comprises a processing field set to “processing of modification” (from a viewing stream with given characteristics to a VCA stream with the target set of characteristics), an input stream field set to “viewing stream” and a location field set to the identifier of the recording server;
To perform such video stream strategies, the recording server 120 of Figure 1b contains a module 125 configured to process a received input video stream 140 having a first set of characteristics so as to generate a (modified) output video stream 142 having a second set of characteristics. The module 125 may perform different operations such as transcoding, reduction of frame rate, reduction of resolution, and/or modification of the compression rate. These operations may be combined. For instance a stream with a resolution of 720p and a frame rate of 10 fps may be generated from a stream with a resolution of 1080p and a frame rate of 30 fps. Each operation (processing for modification) has its own processing cost which depends on the type of the operation, the set of characteristics of the input video stream 140 and, above all, the target set of characteristics of the output video stream 142.
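The dependence of the processing cost on the operation type and on the input and target characteristics may be illustrated by the following toy model; the relative base costs and the pixel-rate scaling are pure assumptions of this sketch:

```python
# Hypothetical relative base costs per operation type.
OPERATION_BASE_COST = {
    "transcoding": 4.0,
    "frame_rate_reduction": 1.0,
    "resolution_reduction": 2.0,
    "compression_rate_change": 1.5,
}

def processing_cost(operation: str, input_pixel_rate: float,
                    output_pixel_rate: float) -> float:
    """Toy cost model: the cost depends on the operation type, on the
    input stream's characteristics (reduced here to a pixel rate) and
    on the target characteristics of the output stream."""
    return (OPERATION_BASE_COST[operation]
            * input_pixel_rate / max(output_pixel_rate, 1.0))
```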
In Figure 1c, a VCA server 150 receives an input stream 190 which may be a VCA stream generated by a camera or a recording server. The VCA server may also receive recording streams or viewing streams that may need to be processed (modified) so as to generate a VCA stream.
Different types of video stream strategies can be defined at the VCA server level to generate a VCA stream 171 from another stream, which is then analysed in a core module 165:
• A first video stream strategy may be defined to generate a VCA stream with an appropriate target set of characteristics from a received recording stream with given characteristics. An example of this first strategy is represented in Figure 6c, which is hereafter described. The first video stream strategy can be formalized by a triplet comprising a processing field set to “processing of modification” (from a recording stream with given characteristics to a VCA stream with the target set of characteristics), an input stream field set to “recording stream” and a location field set to the identifier of the VCA server.
• A second video stream strategy may be defined to generate a VCA stream with an appropriate target set of characteristics from a received viewing stream with given characteristics. The video stream strategy can be formalized by a triplet comprising a processing field set to “processing of modification”, an input stream field set to “viewing stream” and a location field set to the identifier of the VCA server.
To perform such video stream strategies, the VCA server 150 contains a module 160 able to process a received input video stream 170 with the first set of characteristics in order to generate a modified stream (a VCA stream 171) with second (target) characteristics. Different operations such as transcoding, reduction of frame rate, reduction of resolution, and modification of the compression rate may be performed by the module 160. These operations may be combined. Once processed, the VCA stream 171 is sent to the core module 165 of the VCA server (embedding the VCA algorithm), which is configured to analyze the VCA stream. Each operation (processing for modification) has its own processing cost which depends on the type of the operation, the characteristics of the input video stream (170) and, above all, the target set of characteristics of the output video stream (171).
In Figure 1d a viewer 180 receives an input stream 190 which is a viewing video stream to display. The viewer 180 contains a module 185 which is configured to decode a received video stream in order to re-generate the data flow (that may have been degraded) and to display it.
Figure 2 is a schematic representation of an example of architecture of a video surveillance system.
In the represented architecture, the video surveillance system 200 is composed of two remote sites 210 and 220, one central site 240 and a backbone network 230 which interconnects the remote sites and the central site. According to an example of the VS system 200, the backbone network may be a Wide Area Network (WAN) such as the Internet, corresponding to a data rate of about 50 Mb/s.
The first remote site 210 contains, in the represented example, a set of cameras 212 interconnected by a dedicated infrastructure network 205. The dedicated infrastructure network 205 of the first remote site 210 is typically a
Local Area Network (LAN) based on a hierarchical architecture with 10/100/1000 Mbps Gigabit Ethernet, RJ-45 using Ethernet switches. The set 212 contains cameras for example as described in reference to Figure 1a.
The second remote site 220 contains a set of cameras 222 and a set of recording servers 225 interconnected by a dedicated infrastructure network 215. The dedicated infrastructure network 215 of the second remote site 220 is typically a Local Area Network (LAN) based on a hierarchical architecture with 10/100/1000 Mbps Gigabit Ethernet, RJ-45 using Ethernet switches. The set 222 contains cameras for example as described in reference to Figure 1a. The set 225 contains recording servers for example as described in reference to Figure 1b.
The set of the recording servers 225 is typically configured to store the video flows of the set of cameras 222.
The central site 240 may comprise a Video Manager System (VMS) 250 configured to manage the video surveillance system, but also a system analyser 260 configured to monitor the resources of the video surveillance system, an auto-setting server 290 configured to determine the video stream strategies, a set of recording servers 270 configured to store the received video streams, a set of Video Content Analytics (VCA) servers 280 configured to analyse the received video streams and a set of viewers 285 configured to display received video streams, all the modules being interconnected by a dedicated infrastructure network 242. The dedicated infrastructure network 242 of the central site is typically a Local Area Network (LAN) based on a hierarchical architecture with 10/100/1000 Mbps Gigabit Ethernet, RJ-45 using Ethernet switches.
The network deployment of the video surveillance system represented in
Figure 2 is a non-limiting example of a system which can implement an embodiment of a method according to a first aspect of the invention, or that may be configured to form a video surveillance system according to another aspect of the invention.
Embodiments of the invention may be used for different types of network deployments such as a distributed deployment, a centralized deployment, or a combination of them. These two deployments may be similar in terms of the basic framework because both of them may comprise respectively a single central site 240 (which can also be called a “headquarters”), one or several remote sites, and the backbone network 230.
For instance, for a video surveillance system containing a central site 240 and one or several remote sites 210, the deployment is called “centralized deployment”. For a video surveillance system containing a central site 240 and one or several remote sites 220, the deployment is called “distributed deployment”. One of the differences between the distributed and centralized deployments is that the remote site 210 of the centralized deployment comprises only the cameras 212 while the remote site 220 of the distributed deployment comprises not only the cameras 222 but also other devices/modules of a VS system, such as the set of recording servers 225 as illustrated in Figure 2 and/or VCA servers. In other words, the recording servers/VCA servers according to the centralized deployment are no longer installed in the remote sites and are, instead, installed in the central site 240.
According to an example of the distributed deployment, the recording servers and/or VCA servers arranged to be installed in the remote site 220 can be embedded into the cameras (e.g. the cameras 222) of one or several of the remote sites (e.g. the remote site 220).
The network constraints of a centralized deployment and those of a distributed deployment can be different and arise in different locations. Furthermore, for example, compared to the LAN networks 205, 215 and 242, the backbone network 230 allows a smaller data rate and may thus be a bottleneck of the network infrastructure. The impacts on the VS system 200 resulting from such a bottleneck (as a network constraint) can be very different depending on the deployments.
The set of recording servers 270 may be configured to store video flows that are not already stored in a remote site. For instance, in the video surveillance system 200, the remote site 210 does not comprise recording servers and consequently, the set of recording servers 270 may be used to record video flows issued from the set of cameras 212. Typically, the set of recording servers 270 receives recording streams but, according to the video stream strategies selected by the invention, it may also receive viewing streams or VCA streams.
The set of VCA servers 280 comprises the VCA software modules which are configured to process video flows. Typically, the set of VCA servers 280 receives VCA streams, but according to the selected and implemented video stream strategies, it may also be provided with viewing streams or recording streams.
The Video Manager System (VMS) device 250 comprises the software module which configures, controls and manages the video surveillance system. It may be controlled via an administration interface. In particular, this administration interface comprises a list of the devices or modules of the video-surveillance system such as the cameras 212 and 222, the recording servers 270 and the VCA servers 280. Each device or module may be set up via the administration interface. The Video Manager System 250 also manages video streams (preferably all video streams) of the VS system. For instance, the characteristics of each video stream are set up via the Video Manager System 250. More particularly, the administrator initially configures a video stream strategy comprising the target set of characteristics for each video stream. These characteristics are fixed or modifiable.
The system analyser 260 monitors the available resources of the video surveillance system in terms of network and processing. In particular, the system analyser 260 centralises the processing resources capacity (e.g., the maximum supported processing load and/or memory capacity) of each device. In the same way, it retrieves the network resources capacity (i.e., the maximum supported bandwidth) of each communication link of the video surveillance system.
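What the system analyser 260 centralises may be sketched as a small registry of resource capacities; the class, its methods and the example identifiers are assumptions made for illustration:

```python
class SystemAnalyser:
    """Minimal registry of resource capacities: maximum supported
    processing load per device and maximum supported bandwidth per
    communication link (a sketch)."""
    def __init__(self):
        self.processing_capacity = {}  # device id -> max supported processing load
        self.bandwidth_capacity = {}   # link id -> max supported bandwidth

    def register_device(self, device_id, max_load):
        self.processing_capacity[device_id] = max_load

    def register_link(self, link_id, max_bandwidth):
        self.bandwidth_capacity[link_id] = max_bandwidth
```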
The viewers 285 are used to display video streams of the video surveillance system. When an operator wants to display a new video flow (generated by a camera covering a given scene), a new viewing stream is generated and a request, referred to as a new (video) stream request, is sent to the VMS 250.
The auto-setting server 290 may comprise the module implementing a method according to the invention. It is described more precisely in Figure 3.
Figure 3 is a schematic block diagram of a computing system adapted for implementing one or more embodiments of the invention. It may be embedded in the auto-setting server 290 as shown in the example architecture of Figure 2.
The represented computing device 300 comprises a communication bus connected to:
- a central processing unit (CPU) 310 such as a microprocessor;
- an input/output (I/O) module 320 used for receiving/sending data, such as a video stream generation request (defined in Figure 4) and new (video) stream request (defined in Figure 4), from/to external devices;
- a read only memory (ROM) 330 used for storing computer programs for implementing embodiments; for instance, a list of predetermined video flow strategies can be stored in the ROM 330;
- a hard disk (HD) 340;
- a random access memory (RAM) 350 used for storing the executable code of the method of embodiments of the invention as well as the registers adapted to record the necessary variables and parameters;
- a communication module (IHL) 370, typically connected to a communication network over which digital data to be processed are transmitted or received;
- a user interface (UI) 360 making it possible to configure input parameters to be used in a method according to the invention. The user interface 360 may be used by the administrator of the video surveillance system to configure it.
The executable code may be stored either in random access memory 350 (preferred option), on the hard disk 340 or on a removable digital medium such as for example a disk.
The central processing unit 310 is configured to control and direct the execution of the instructions or portions of software code of the program or programs according to embodiments of the invention, which instructions are stored in one of the aforementioned storage means. After powering on, the CPU 310 is capable of executing instructions from main RAM 350 relating to a software application after those instructions have been loaded from the program ROM 330 or the hard-disk (HD) 340 for example.
Figure 4 is a diagram illustrating a method of generating and selecting a new video flow strategy according to an embodiment of the invention.
A video flow strategy comprises a set of video stream strategies applied, successively or in parallel, to one video flow or to video streams resulting from the processing of said video flow.
The new video flow strategy to be generated will be used to process a plurality of video streams, one of the video streams being a (new) target video stream defined with a target set of characteristics. The steps 400 to 470 of the method make it possible to generate a new video flow strategy, following a “video stream generation request” for processing a (new) target video stream by a module of a video surveillance (VS) system 200. In other words, the reception of a “video stream generation request” by a module/device of the VS system leads to the generation of a new video flow strategy (which is different from the video flow strategy being currently applied).
The new video flow strategy is generated by taking into account the target set of characteristics associated to the target video stream to be generated and the resource constraints such as the network constraints and/or processing constraints of the VS system 200.
As a reminder, a video flow strategy is associated to a single video flow and corresponds to a set of video stream strategies which comprises one or several video stream strategies. Therefore, based on the video stream strategies of the new video flow strategy, the VS system 200 generates from a single video flow a plurality of video streams, one of the video streams being the (new) target video stream defined with the (new) set of target characteristics. It is noted that the video streams, including this (new) target video stream, result from a direct or indirect processing of the single video flow (as detailed in the following).
According to an embodiment, the steps 400 to 470 of the method are performed by the auto-setting server 290 of the VS system 200. According to another embodiment, the steps 400 to 470 are performed by the VMS 250. According to another embodiment different from the above-mentioned embodiments, some of the steps 400 to 470 can be dispatched between the Video Manager System (VMS) 250 and the auto-setting server 290, and in such a case, a dedicated communication interface is implemented between the VMS
250 and the auto-setting server 290.
The step 400 consists of obtaining a video stream generation request sent by the VMS 250 and/or the auto-setting server 290 after a previous configuration by an administrator. According to an embodiment, this video stream generation request comprises information to indicate which video flow is to be considered to generate the (new) target video stream, and the characteristics of the (new) target video stream. The above-mentioned video stream generation request comprises:
- a flow identifier to identify a video flow (i.e. generated by which camera), or a currently existing video stream generated based on said video flow, to be considered to generate the (new) target video stream;
- function information (optional): the function is either a recording function, a viewing function or a VCA processing function. Depending on the function information, the target video stream to be generated is either a recording stream, a viewing stream or a VCA stream;
- a destination device identifier: this information indicates which one of the modules of the VS system 200 requires the target video stream; and
- a set of characteristics: the requested target video stream is defined by its set of characteristics (as the “target set of characteristics” hereafter) such as its frame rate, resolution value and/or compression rate.
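The fields of the video stream generation request listed above may be sketched as follows; the field names, types and example values are assumptions of this illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoStreamGenerationRequest:
    """The request fields described above (a non-limiting sketch)."""
    flow_id: str                             # video flow (or existing stream) to consider
    destination_device_id: str               # module requiring the target video stream
    function: Optional[str] = None           # "recording", "viewing" or "vca" (optional)
    frame_rate: Optional[int] = None         # target set of
    resolution: Optional[str] = None         # characteristics
    compression_rate: Optional[float] = None
```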
The above-mentioned types of information are consistent with one another since the target set of characteristics is used to describe the target video stream to be transmitted to the destination device for the use of a corresponding function.
According to an embodiment, the video stream generation request may result from a “processing request” to process (e.g., display or analyse) a stream launched by the user (i.e. an operator/administrator) via a graphical user interface. The administrator sets up the video stream generation request to indicate that the requested target video stream is a recording stream, a VCA stream or a viewing stream, and from which camera's video flow it is derived, directly or indirectly. In such a case, the video stream generation request is generated and sent from the administration interface of the VMS 250 to the auto-setting server 290 and finally processed as illustrated by the following steps 410 to 470.
Alternatively, the video stream generation request corresponds to a processing request generated by the video-surveillance system itself, which can, for example, be launched after a predetermined amount of time or after an abnormal event has been detected by another camera.
The step 410 consists of determining whether an existing video flow strategy comprising a video stream strategy associated with the (new) target video stream has been previously defined.
According to an embodiment, the step 410 is performed by comparing the flow identifier of a current video flow (if it exists) with the flow identifier contained in the obtained video stream generation request. If a video flow currently exists and its flow identifier is equal to the flow identifier contained in the obtained video stream generation request, the following step is the step 430; otherwise, the step 420 is performed. In the former case, it is possible that the target set of characteristics contained in the video stream generation request is similar to the set of characteristics of an existing video stream which has been previously generated; the existing video stream is then simply addressed to another module (e.g. “recording server”) which is different from the module (e.g. “viewer”) that requires the (new) target video stream. In another case, the target set of characteristics contained in the video stream generation request is different from that of any existing video stream.
In either of the above-mentioned cases (i.e. whether an existing video stream has a set of characteristics similar to or different from the target set of characteristics), the existing video flow strategy (associated with the current video flow) is used to generate, during the following step 430 (as described below in detail), one or several candidate video flow strategies. The one or several candidate video flow strategies will possibly be used to generate the (new) target video stream together with the current one or several video streams. Otherwise, no usable video flow and video flow strategy currently exist, and the following step to be performed is the step 420.
The step 420 consists of initializing a new video flow strategy relative to the (new) target video stream. More precisely, at this step 420, one video flow is generated by the corresponding camera (i.e., based on the flow identifier contained in the obtained video stream generation request). A first target video stream strategy is then generated and added into the new video flow strategy. The first target video stream strategy comprises information to indicate how the (new) target video stream can be generated from the video flow, and the process to be applied directly by the camera generating said video flow.
As the (new) target video stream will be generated directly from the video flow according to the new video flow strategy obtained in the step 420, a preferable embodiment is that the camera generates the (new) target video stream according to the corresponding first target video stream strategy obtained in the step 420. In this case, this first target video stream strategy is referred to as a “camera video stream strategy”. As previously mentioned in the paragraphs illustrating Figure 1, the first target video stream strategy formalization is characterized by a processing field set to the encoding of the video flow with the target set of characteristics, an input stream field set to null (since there does not exist a current video stream that could be processed to generate the (new) video stream) and a location field set to the identifier of the camera (similar to the one indicated in the “video stream generation request”). All the information related to the video stream generation request (video flow identifier, function, destination device identifier, target set of characteristics) and the new video flow strategy comprising the first target video stream strategy are stored in the memory 350.
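The three-field formalization of a video stream strategy mentioned above (processing field, input stream field, location field) might be modelled as follows; the concrete values shown are illustrative, not prescribed by the description.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoStreamStrategy:
    processing: str              # e.g. encoding with the target characteristics
    input_stream: Optional[str]  # None (null) when derived from the raw video flow
    location: str                # identifier of the device applying the processing

# A "camera video stream strategy": the camera itself encodes the raw video
# flow with the target set of characteristics (no source stream, hence None).
camera_strategy = VideoStreamStrategy(
    processing="encode(frame_rate=10, resolution='1080p', compression=0.05)",
    input_stream=None,
    location="camera-710",
)
```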
According to another embodiment, an intermediate processing is applied by a module/device different from the video-camera for generating the target video stream. The video-camera generates a first video stream which is processed by an intermediate module/device to generate the target video stream.
The step 470 is then launched so as to process (e.g. generate, display) the (new) target video stream according to the new video flow strategy.
The step 430 is performed to determine, based on the existing video flow strategy identified in the step 410, one or a plurality of candidate video flow strategies which may be applied to generate the (new) target video stream.
It is noted that according to an embodiment, the existing video flow strategies have previously been stored in the memory 350, for example, during the step 420.
Each of the candidate video flow strategies is generated by adapting the existing video flow strategy (containing one or a plurality of existing video stream strategies) to the addition of the (new) target video stream. The adaptation corresponds to a modification of the current video flow strategy, and includes the adding of a new video stream strategy associated with the (new) target video stream and a modification of at least a part of the video stream strategies of the set.
According to an embodiment, the step 430 consists of the following substeps 431 to 433 (as illustrated in Figure 5a) performed to generate a candidate video flow strategy based on the existing video flow strategy:
- sub-step 431: generating a target video stream strategy which can be used to generate the (new) target video stream;
- sub-step 432: generating adapted video stream strategies by adapting the existing video stream strategies of the existing video flow strategy to the addition of the (new) target video stream strategy (the latter being obtained in the previous sub-step 431). During the adaptation performed in the sub-step 432, the current video stream strategies may be modified so that the current video streams and the (new) target video stream can be generated respectively according to the modified existing video stream strategies and the target video stream strategy; and
- sub-step 433: forming the candidate video flow strategy by including the target video stream strategy (obtained in the sub-step 431) along with the adapted video stream strategies (obtained in the sub-step 432).
According to one embodiment, all candidate video flow strategies to generate the video streams required (including the target video stream newly required at the step 400) are generated. The sub-steps 431 to 433 describe a way to generate these candidate video flow strategies.
For ease of illustration, the sub-steps 431 to 433 will be described using an embodiment in which the (new) target video stream to be generated is a VCA stream and the current video streams of the current video flow are recording stream(s).
According to an embodiment, the sub-step 431 of generating the target video stream strategy comprises firstly selecting the camera video stream strategy used to generate the target video stream as a candidate target video stream strategy.
Next, all video stream strategies which can be respectively used to generate a video stream from a current video stream, are also considered as candidate target video stream strategies. The target video stream may be generated based on any one of the candidate target video stream strategies. For example, according to the present embodiment, six video stream strategies defined at the recording server level and two video stream strategies defined at the VCA server level are considered as the candidate target video stream strategies.
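The enumeration described above — the camera video stream strategy first, then one candidate per current video stream that could serve as a source — can be sketched in simplified form as follows; the representation of a strategy as a tuple is an assumption made for illustration only.

```python
def enumerate_candidates(flow_id, current_streams):
    """Candidate target video stream strategies: generate the target stream
    directly at the camera, or derive it from any existing video stream."""
    candidates = [("camera", flow_id)]            # camera video stream strategy
    for stream_id in current_streams:
        candidates.append(("derive", stream_id))  # one candidate per source stream
    return candidates

print(enumerate_candidates("camera-710", ["recording-725"]))
# [('camera', 'camera-710'), ('derive', 'recording-725')]
```

In a real deployment there may be several such strategies per source stream (e.g. the six recording-server-level and two VCA-server-level strategies of the embodiment above); this sketch keeps one per source for brevity.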
However, some of the above candidate target video stream strategies may not be taken into account further, depending on the current video streams and/or on the target set of characteristics of the target video stream compared to the sets of characteristics of the other current video streams.
Firstly, it depends on the current video streams which are generated respectively according to the associated existing video stream strategies of the existing (current) video flow strategy. For instance, if there does not exist a viewing stream related to the considered current video flow (as the example illustrated in Figure 6), it is not possible to generate the newly requested video stream (such as the VCA stream) from a viewing stream which does not actually exist. Therefore the candidate target video stream strategies relative to a viewing stream are no longer considered.
Secondly, as mentioned above, it also depends on the target set of characteristics of the (new) target video stream compared to the sets of characteristics of the current video streams (which are generated according to the existing video flow strategy). Indeed, it is not possible to generate a (new) target video stream which offers a better quality “in terms of characteristics” than the current video stream from which the target video stream will be generated. For instance, the frame rate of the (new) target video stream must not be greater than that of the current video stream. The resolution of the target video stream must not be greater than that of the current video stream. Similarly, the compression rate of the target video stream must not be smaller than that of the current video stream.
If the considered current video stream does not present a quality better than or at least equal to that of the (new) target video stream, the candidate target video stream strategies which intend to use the considered current video stream to generate the target video stream are no longer considered. Otherwise, the candidate target video stream strategies are eligible to be selected as the target video stream strategy.
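The eligibility constraints above (frame rate and resolution not greater than those of the source stream, compression rate not smaller) amount to a simple test. In this sketch, characteristics are assumed to be given as numeric values, with resolution expressed as a pixel height; this representation is an assumption for illustration.

```python
def is_eligible_source(source, target):
    """Return True if `target` can be derived from `source`: a derived
    stream cannot gain frame rate or resolution, and cannot be less
    compressed than its source."""
    return (target["frame_rate"] <= source["frame_rate"]
            and target["resolution"] <= source["resolution"]
            and target["compression_rate"] >= source["compression_rate"])

recording = {"frame_rate": 10, "resolution": 1080, "compression_rate": 0.05}
vca       = {"frame_rate": 20, "resolution": 1080, "compression_rate": 0.05}

# A 20 fps VCA stream cannot be derived from a 10 fps recording stream,
# but the reverse derivation is possible.
print(is_eligible_source(recording, vca))   # False
print(is_eligible_source(vca, recording))   # True
```

This mirrors the example of Figures 7a to 7d, where the second candidate target video stream strategy is rejected for exactly this reason.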
At the end of the sub-step 431, one of the eligible target video stream strategies is selected to enter the following sub-step 432. As mentioned above, according to an embodiment, each of the eligible target video stream strategies can be selected to enter the following sub-step 432.
According to an embodiment, regardless of being selected or not, the candidate video stream strategies can be stored in the ROM 330.
The sub-step 432 consists of adapting the existing (current) video stream strategies of the existing (current) video flow strategy to the addition of the selected target video stream strategy. Each possible adaptation is taken into account and constitutes a corresponding set of adapted video stream strategies.
During the adaptation, it is possible that the existing video stream strategies remain the same.
On the other hand, it is also possible that the existing video stream strategies may be modified to be adapted to the target video stream strategy, so that the current video streams and the (new) target video stream can be generated respectively according to the modified existing video stream strategies and the target video stream strategy.
For example, one of the existing video stream strategies concerns a recording stream and corresponds to a camera video stream strategy. The target video stream strategy (determined in the sub-step 431) concerns the target video stream (a VCA stream in the present embodiment) and also corresponds to a camera video stream strategy. In such a case, the existing video stream strategy can be modified by setting the input stream field (indicating the source video stream, originally “null”) to “VCA stream”, as well as by setting the location field to “recording server”. In this way, instead of originally generating the current recording video stream at the camera level, the recording video stream can be generated at the recording server level from the VCA stream.
The sub-step 433 consists of forming a candidate video flow strategy by including the target video stream strategy (obtained in the sub-step 431) along with the adapted video stream strategies (obtained during the sub-step 432). According to the candidate video flow strategy formed in the sub-step 433, the current video streams as well as the target video stream will be able to be processed.
As mentioned above, at the end of the sub-step 431, there may be p eligible target video stream strategies which can be selected to enter the sub-step 432. Then, for each of the p eligible target video stream strategies, q candidate video flow strategies can be obtained at the end of sub-steps 432 and 433, wherein q is an integer greater than or equal to 1 and varies depending on the eligible target video stream strategy. The p × q candidate video flow strategies will be checked in the following steps 440 to 470 so as to select the one with the smallest estimated cost as the target video flow strategy.
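The selection over the p × q candidates performed in the steps 440 to 450 reduces to a minimum search that ignores candidates whose cost is infinite; a sketch, with the cost estimation function left as a parameter since it is detailed only later (Figure 5b).

```python
import math

def select_target_strategy(candidates, estimate_cost):
    """Return the candidate video flow strategy with the smallest finite
    estimated cost, or None when every candidate's cost is infinite
    (i.e. no candidate is affordable for the VS system)."""
    best, best_cost = None, math.inf
    for strategy in candidates:
        cost = estimate_cost(strategy)
        if cost < best_cost:
            best, best_cost = strategy, cost
    return best  # None triggers the "impossible request" notification

# Hypothetical costs for three candidate strategies; "B" exceeds capacity.
costs = {"A": 12.5, "B": math.inf, "C": 7.0}
print(select_target_strategy(costs, costs.get))  # C
```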
Before describing the steps 440 to 470 of an embodiment of the method of the invention, an example is given in Figures 7a to 7d to schematically illustrate the steps 400 to 430.
Figure 7a illustrates an initial status of the VS system of the present example. The VS system comprises one camera 710, one recording server 711 and one VCA server 712. At the current stage, no video streams are currently processed (e.g. generated and/or displayed) by any one of the camera 710, the recording server 711 and the VCA server 712, which are structurally and functionally similar to those of the VS system 200 as illustrated in Figure 2.
During the step 400, a video stream generation request is obtained by the VMS and/or the auto-setting server of the VS system. The video stream generation request comprises information about a (new) target video stream 725 to be generated; for example, a target set of characteristics (e.g. frame rate: 10 fps, resolution value: 1080p, compression rate: 0.05) of the target video stream 725, and the flow identifier which indicates that the target video stream
725 will be processed based on a video flow generated by the camera 710. According to the video stream generation request, it may be also known that the target video stream 725 is a recording stream required by the recording server 711.
The step 410 is then performed to determine if an existing video flow strategy has been previously defined. However, at the current stage as illustrated in Figure 7a, no current video stream is processed by the VS system and accordingly, there is no existing video flow strategy.
It is thus determined that the step 420 will then be performed.
The step 420 consists of initializing a new video flow strategy relative to the (new) target video stream 725 to be generated. More precisely, at this step 420, one video flow is generated by the camera 710 (i.e., based on the flow identifier contained in the obtained video stream generation request). Then, a target video stream strategy is generated and added into the new video flow strategy. The target video stream strategy comprises information to indicate how the (new) target video stream 725 can be generated from the video flow and processed directly by the camera 710.
The new video flow strategy comprising the target video stream strategy is considered as a target video flow strategy relative to the target video stream
725. The step 470 is then launched so that, as illustrated in Figure 7b, the target video stream 725 is generated by the camera 710 and transmitted to the recording server 711.
Then, another video stream generation request is obtained by the VMS and/or the auto-setting server to require generating a new target video stream
726. The new target video stream 726 is distinct from the previous target video stream (which is the recording stream 725). The current video stream generation request comprises information about the target video stream 726 to be generated; for example, a target set of characteristics (e.g. frame rate: 20 fps, resolution value: 1080p, compression rate: 0.05) of the target video stream
726, and the flow identifier which indicates that the target video stream 726 will be processed based on the video flow generated by the camera 710. According to the video stream generation request, it may be also known that the target video stream 726 is a VCA stream required by the VCA server 712.
By performing the step 410, it can be known that the flow identifier indicated in the video stream generation request is equal to that of the current video flow generated by the camera 710. At the current stage of the example, the previously defined target video flow strategy comprises only one video stream strategy used to generate the recording stream 725.
Then, several candidate video flow strategies are generated during the step 430. The existing video flow strategy (associated with the current video flow) is used as a basis to generate one or several candidate video flow strategies. One of the candidate video flow strategies will be selected and used to generate the (new) target video stream 726 together with the current recording stream 725 (steps 440 and 450).
The step 430 (comprising the sub-steps 431 to 433) is then performed to determine, based on the existing video flow strategy, one or a plurality of candidate video flow strategies (such as the two candidate video flow strategies illustrated in Figures 7c and 7d) which may be applied to generate the target video stream 726.
The sub-step 431 is performed to firstly select a camera video stream strategy used to process (e.g. generate) the target video stream 726, as a candidate target video stream strategy. With the existence of the current video flow, the camera 710 may directly process the video flow to generate the target video stream 726. Such a video stream strategy is a camera video stream strategy considered as a first candidate target video stream strategy.
On the other hand, since the target video stream 726 may possibly be generated based on any one of the video stream strategies contained in the existing video flow strategy, the existing video stream strategy (of the existing video flow strategy) used to generate the current recording stream 725 (as shown in Figure 7b) can be considered as a base stream strategy to define a second candidate target video stream strategy.
However, it is not possible to generate the target video stream 726 with a frame rate of 20 fps from the recording stream 725, which has a frame rate of 10 fps. Therefore, the second candidate target video stream strategy is not eligible. Only the first candidate target video stream strategy can be selected as the target video stream strategy to enter the following sub-step 432.
The sub-step 432 is performed to adapt the existing (current) video stream strategies of the existing video flow strategy to the addition of the selected target video stream strategy.
Therefore, a set of video stream strategies (comprising the adapted existing video stream strategies and the target video stream strategy) is obtained, according to which the current video streams as well as the target video stream will be able to be processed.
According to the present example, the existing video flow strategy comprises an existing video stream strategy which indicates the recording stream 725 is generated by the camera 710 and then sent to the recording server 711.
During the adaptation, in the case in which the existing video stream strategy remains the same, the non-modified existing video stream strategy and the target video stream strategy will form (in the sub-step 433) a candidate video flow strategy as illustrated in Figure 7c.
In another case, the existing video stream strategy is modified so that the recording stream 725 will be generated by the recording server 711 based on the target video stream 726. The modified existing video stream strategy and the target video stream strategy will form another candidate video flow strategy as illustrated in Figure 7d.
According to the candidate video flow strategy illustrated in Figure 7c, the camera 710 processes two video streams which comprise the recording stream 725 (generated based on the existing video stream strategy) and the target video stream 726 (generated based on the target video stream strategy). The generated recording stream 725 is transmitted from the camera 710 to the recording server 711. As for the target video stream 726, there may be several ways to transmit it to the VCA server 712, depending on the settings of the VS system. For example, the target video stream 726 can be transmitted directly from the camera 710 to the VCA server 712 without passing through the recording server 711, as illustrated in Figure 7c. Another way is to transmit the target video stream 726 from the camera 710 to the recording server 711, and the recording server 711 forwards the received target video stream 726 to the VCA server 712 (not illustrated).
According to the candidate video flow strategy illustrated in Figure 7d, the camera 710 processes only one video stream, which is the target video stream 726 (generated based on the target video stream strategy). The generated target video stream 726 is necessarily transmitted from the camera 710 to the recording server 711. The recording server 711 obtains (e.g. stores) a copy of the target video stream 726 and forwards it to the VCA server 712. The recording server 711 generates the recording stream 725 according to the modified existing video stream strategy and based on the received target video stream 726, i.e., it processes the received stream 726 (with the following set of characteristics: resolution value: 1080p, frame rate: 20 fps, compression rate: 0.05) to generate another stream 725 (with the following set of characteristics: resolution value: 1080p, frame rate: 10 fps, compression rate: 0.05).
Once the candidate video flow strategies are obtained (in the step 430), the step 440 is then performed to estimate the cost of each of the candidate video flow strategies generated during the several executions of the step 430. The step 440 comprises further steps 510 to 530 which will be illustrated later with reference to Figure 5b. During the step 440, for each of the candidate video flow strategies, a cost in terms of system resources is calculated, based on a network cost and/or a processing cost required by the candidate video flow strategy.
According to a preferred embodiment, the cost required by the candidate video flow strategy can be estimated based on the target set of characteristics and at least one of network features concerning the communication path used by the candidate video flow strategy and of processing features used to generate video streams according to the candidate video flow strategy.
Next, the step 450 consists of selecting from the candidate video flow strategies (generated at the step 430) a candidate video flow strategy which presents the smallest cost relative to the costs of the other candidate video flow strategies (the costs estimated in the step 440). The selected candidate video flow strategy with the smallest cost is set to be the target video flow strategy.
In a case where the cost of each of the considered candidate video flow strategies is not affordable for the VS system 200 (which means that the costs are all equal to an infinite value as described in the steps 510 to 530), none of the candidate video flow strategies is selected as the target video flow strategy. In addition, a notification is directly sent to the user interface 360 which indicates that it is impossible for the current VS system 200 to generate the requested target video stream with the target set of characteristics.
According to an embodiment, all information of the video stream generation request (comprising for example, as mentioned previously, the video flow identifier, the function information, the destination device identifier, and the target set of characteristics) and the corresponding selected target video flow strategy are stored in the memory 350.
Following the steps 450 or 420, the step 470 consists of sending to the
VMS 250 and/or other devices a request for processing (e.g. generating, displaying) the target video stream according to the selected target video flow strategy. According to the selected target video flow strategy, not only the target video stream but also the existing video streams (possibly re-generated based on adapted video stream strategies) can be processed (e.g. generated, displayed).
As a result, after the application of step 470, the (new) target video stream is processed (e.g., generated and/or displayed) with the required target set of characteristics (e.g. a given frame per second, a given resolution and/or a given compression rate using a given encoding algorithm). At the same time, the previously processed video streams are processed (e.g. generated and/or displayed), without their quality being decreased. However, the way to generate the previously processed video streams (if the previously processed video streams exist) may have changed.
Figure 5b describes the steps 510 to 530 of the step 440 performed to estimate the cost of a test video flow strategy which can be, for example, a candidate video flow strategy formed in the previous step 430. As mentioned above, for each of the candidate video flow strategies formed in the previous step 430, the steps 510 to 530 are performed to estimate a cost in terms of system resources required by the VS system 200 to generate video streams (comprising the requested target video stream) according to the candidate video flow strategy.
The test video flow strategy (e.g. candidate video flow strategy) comprises a set of test video stream strategies which are, for example, the target video stream strategy as well as the adapted video stream strategies of the candidate video flow strategy. Test video streams of the test video flow can be generated according to the test video stream strategies.
According to an embodiment, the cost in terms of system resources required for performing the test video flow strategy to generate the test video flow is estimated based on sets of characteristics of the test video streams, and at least one of network features concerning the communication path used by the test video flow strategy and of processing features used to generate the test video streams according to the test video flow strategy.
The step 510 consists of calculating a network cost of the test video flow strategy based on the sets of characteristics of the test video streams. According to another embodiment, the network cost is calculated based on not only the sets of characteristics of the test video streams, but also on the network features comprising bandwidth limits of links of the communication path used to transmit the test video streams.
The communication path used for transmitting the test video streams comprises a plurality of consecutive links (e.g. implemented as physical cables or wireless connections), wherein each of the links is used to connect two consecutive modules of the VS system 200 (e.g. for transmission of a test video stream from the source device to the destination device). The examples illustrated in Figure 6 and the related paragraphs describe in more detail embodiments in which the links of a communication path are used to transmit the video streams generated by a source device (e.g. camera, recording server, VCA server) to a destination device (recording server, VCA server).
The consumed link bandwidth taken for transmitting a test video stream via a link depends on its set of characteristics. For instance, with reference to Figure 1, the set of characteristics of a considered test video stream comprises the frame rate, the resolution, and the compression rate of the test video stream. According to an embodiment, the consumed link bandwidth required for the transmission of the test video stream via the corresponding link is determined as a function of the frame rate, the resolution and the compression rate of the encoding of the test video stream.
Consequently, a link network cost Nl of a link l is calculated by summing up the consumed link bandwidths of all test video streams which are generated according to the corresponding test video stream strategies and use the link l for transmission. Depending on the test video flow strategy, a link (such as the link l) can be used to transmit one or a plurality of test video streams.
According to an embodiment, the network cost of the test video flow strategy (required by the transmission of the corresponding test video flow) depends only on the link network cost Nl of each of the links l of the communication path. For instance, the network cost of the test video flow strategy is calculated by summing up all the link network costs Nl of the links l, l being a link included in at least one communication path of a test video stream.
According to another embodiment, the network cost of the test video flow strategy may also depend on the deployment (e.g. the above-mentioned distributed or centralized deployment) of the VS system 200. The link network costs Nl of the links l are respectively weighted by a corresponding weight value αl, the weight values αl assigned to the corresponding links l being determined by taking into account the network deployment of the VS system 200. In a preferred embodiment, the weight value αl assigned to the corresponding link l is proportional to the link bandwidth of the link l consumed for transmitting all data comprising the test video streams and other data (such as other video streams not relative to the test video flow strategy), while being inversely proportional to the network capacity of the link l (e.g. the link’s bandwidth limit).
According to an embodiment, the network capacities of the links l are predetermined and given by the system analyser 260. The network cost of the test video flow strategy is calculated by summing up all the link network costs Nl weighted by the corresponding weight values αl.
If, for a link l, the corresponding weight value αl assigned to the link l is greater than 1, it means that the link l’s bandwidth limit is exceeded. Therefore, the network cost of the test video flow strategy may be set to an infinite value to indicate that the test video flow strategy will not be processed in the step 450 and will not be selected as the target video flow strategy.
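The network cost computation of the step 510 might be sketched as follows. The per-stream bandwidth model and the way the weight αl is obtained (consumed bandwidth over link capacity) are assumptions consistent with the description above, not a definitive implementation.

```python
import math

BITS_PER_PIXEL = 24  # raw colour depth; illustrative assumption

def stream_bandwidth(chars):
    """Illustrative bandwidth model (bits/s): the raw pixel rate scaled by
    the compression rate (fraction of the raw size kept after encoding)."""
    return (chars["frame_rate"] * chars["pixels"]
            * BITS_PER_PIXEL * chars["compression_rate"])

def network_cost(links):
    """Step 510: sum the link network costs Nl weighted by αl; infinite
    when a link's bandwidth limit is exceeded (αl > 1)."""
    total = 0.0
    for link in links:
        n_l = sum(stream_bandwidth(c) for c in link["streams"])  # Nl
        alpha_l = link["consumed"] / link["capacity"]            # weight αl
        if alpha_l > 1:
            return math.inf  # bandwidth limit exceeded: strategy not selectable
        total += alpha_l * n_l
    return total

# One 1080p stream (10 fps, compression 0.05) on a link using 30 of 100 Mb/s.
hd = {"frame_rate": 10, "pixels": 1920 * 1080, "compression_rate": 0.05}
link = {"streams": [hd], "consumed": 30e6, "capacity": 100e6}
print(network_cost([link]) < math.inf)  # True
```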
The step 520 consists of calculating a processing cost of the test video flow strategy based on the processing resources of the devices/modules of the VS system used to generate the test video streams according to the test video flow strategy. The processing cost of the test video flow strategy also depends on a processing load placed on a device d (as indicated in the location field of a video stream strategy) for processing (such as generating) a test video stream according to the corresponding test video stream strategy. The device d can be, for example, a recording server, a viewer or a VCA server of the VS system 200.
According to an embodiment, the processing load can be a load average which represents the average system load over a predetermined period of time.
Consequently, for a device d included in at least one communication path used by a test video stream strategy of the test video flow strategy, its device processing cost Rd is calculated by summing up all processing loads placed on the device d for processing (such as generating) the test video streams according to the corresponding test video stream strategies. There can be one or a plurality of test video stream strategies to be processed by the device d.
According to an embodiment, the processing cost of the test video flow strategy depends only on the device processing cost of each of the devices d of the VS system. For instance, the processing cost of the test video flow strategy is calculated by summing up all device processing costs Rd.
According to another embodiment, the processing cost of the test video flow strategy may also depend on the processing deployment of the VS system 200. The device processing costs Rd of the devices/modules of the VS system 200 are respectively weighted by a corresponding weight value βd assigned to the corresponding device d, determined by taking into account the processing deployment of the VS system 200. In a preferred embodiment, the weight value βd assigned to the device d is proportional to a sum of processing loads placed on the device d for processing (e.g. generating, displaying) not only the test video stream strategies of the test video flow strategy but also other data (such as other video stream strategies not relative to the test video flow strategy), while being inversely proportional to the processing capacity of the device d (e.g. the processing limit of the device).
According to an embodiment, the processing capacities of the devices d are predetermined and given by the system analyser 260. The processing cost of the test video flow strategy is calculated by summing up all the device processing costs Rd weighted by the corresponding weight values βd. If, for a device d, the corresponding weight value βd assigned to the device d is greater than 1, the processing capacity of the device d is exceeded.
Therefore the processing cost of the test video flow strategy may be set to an infinite value to indicate that the test video flow strategy will not be processed in the step 450 and will not be selected as the target video flow strategy.
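The same disqualification logic can be sketched for the processing cost. Again the function name and `float('inf')` are illustrative assumptions, and the threshold check corresponds to the capacity-based weighting described in this passage:

```python
def processing_cost(device_costs, device_weights):
    """Sum the device processing costs R_d weighted by their weight
    values beta_d; a weight above 1 means the device's processing
    capacity is exceeded, so the strategy gets an infinite cost."""
    if any(beta > 1 for beta in device_weights.values()):
        return float('inf')
    return sum(device_weights[d] * cost for d, cost in device_costs.items())

# Hypothetical recording server and VCA server within their capacity:
print(processing_cost({'rec': 8, 'vca': 2}, {'rec': 0.5, 'vca': 1}))  # 6.0
```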
It is noted that the steps 510 and 520 can be performed sequentially or in parallel. In addition, the step 520 can also be performed prior to the step 510.
The step 530 consists of calculating a cost of the test video flow strategy in terms of system resources. The cost of the test video flow strategy is calculated according to the network cost calculated in the step 510 and the processing cost calculated in the step 520. According to an embodiment, the cost of the test video flow strategy is a summation of the network cost and the processing cost. According to another embodiment, the cost of the test video flow strategy is a summation of the network cost weighted by a network factor ν and the processing cost weighted by a processing factor γ. These two factors ν and γ assigned to the VS system 200 can be set by the operator of the VS system 200 via the user interface 360 in order to reflect the cost of using the network resources compared to the cost of using the processing resources of the VS system 200 (or vice-versa). If the processing cost or the network cost of the VS system 200 is equal to an infinite value, it means that the system capacity is exceeded and the cost of the test video flow strategy in this case is set to an infinite value.
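Combining the two costs as described can be sketched in a few lines (names assumed; an infinite network or processing cost propagates through the arithmetic automatically, marking the strategy as exceeding the system capacity):

```python
def strategy_cost(net_cost, proc_cost, nu=0.5, gamma=0.5):
    """Cost of a test video flow strategy in terms of system resources:
    the network cost weighted by the network factor nu plus the
    processing cost weighted by the processing factor gamma."""
    return nu * net_cost + gamma * proc_cost

print(strategy_cost(16, 10))            # 13.0
print(strategy_cost(float('inf'), 10))  # inf: system capacity exceeded
```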
Each of Figures 6a to 6d schematically illustrates an example of processing a video flow strategy to generate video streams from a given video flow generated by a camera of the VS system. This illustrates various contexts in which the invention may be implemented.
In each of Figures 6a to 6d, only one camera (610 in Figure 6a, 620 in Figure 6b, 630 in Figure 6c, 640 in Figure 6d), one recording server (611 in Figure 6a, 621 in Figure 6b, 631 in Figure 6c, 641 in Figure 6d) and one VCA server (612 in Figure 6a, 622 in Figure 6b, 632 in Figure 6c, 642 in Figure 6d) are considered. The camera, recording server and VCA server are structurally and functionally similar to those of the VS system 200 as illustrated in Figure 2.
As illustrated in each of Figures 6a to 6d, the camera generates a video flow. The camera, the recording server and the VCA server may be configured to process one or a plurality of video stream strategies of the video flow strategy to process (i.e. generate) a corresponding video stream required by the recording server or by the VCA server.
From the video flow generated by the camera, two video streams have to be generated, a recording stream and a VCA stream, each with a predetermined set of characteristics. In the illustrated VS system architecture, a stream generated by the camera is first sent to the recording server before it can reach the VCA server.
Figure 6a illustrates processing a video flow strategy VFS-A concerning the application of a so-called "camera video stream strategy". Two independent video streams are generated simultaneously with different sets of characteristics by the camera 610: a recording stream 615 intended for the recording server 611 and a VCA stream 616 intended for the VCA server 612, both being transmitted via a communication path comprising links L61a and L61b. The two video streams 615 and 616 are transmitted via the link L61a between the camera 610 and the recording server 611, and thus received by the recording server 611. The recording server 611 stores the recording stream 615 and forwards the VCA stream 617 to the VCA server 612, via the other link L61b between the recording server 611 and the VCA server 612. The VCA server 612 receives the VCA stream 617 and processes it.
Figure 6b illustrates processing a video flow strategy VFS-B to generate a VCA stream at the recording server with a "target set of characteristics", by processing the received recording stream having a given set of characteristics. The camera 620 generates, according to a video stream strategy of the video flow strategy, a recording stream 625 which is then transmitted via a link L62a and received by the recording server 621. The recording server 621 uses, and may also store, the received recording stream 625. The recording server 621 is configured to process the recording stream 625 according to another video stream strategy of the video flow strategy, so as to generate the VCA stream 626 with a predetermined set (the "target" set) of characteristics. The VCA stream 626 is then transmitted via a link L62b to the VCA server 622. The VCA server 622 receives the VCA stream 626 and processes it. The links L62a and L62b constitute a communication path used by the video flow strategy.
Figure 6c illustrates processing a video flow strategy VFS-C to generate a VCA stream at the VCA server with a target set of characteristics, from the received recording stream having a given set of characteristics. The camera 630 generates a recording stream 635 according to a video stream strategy of the video flow strategy. The recording server 631 receives the recording stream 635 transmitted via a link L63a. The recording server 631 duplicates the received recording stream 635 so that there are two recording streams 635a and 636. The recording stream 635a is stored in and may be used by the recording server 631, while the recording stream 636 is forwarded directly (no modification process is applied by the recording server 631) to the VCA server 632. The VCA server 632 receives the recording stream 636 transmitted via a link L63b. The VCA server 632 processes the recording stream 636, according to another video stream strategy of the video flow strategy, so as to generate the VCA stream. The VCA server 632 then performs the desired analysis on the VCA stream (by the VCA core algorithm). The links L63a and L63b constitute a communication path used by the video flow strategy.
Figure 6d illustrates processing a video flow strategy VFS-D to generate a recording stream at the recording server with a target set of characteristics from a received VCA stream having a given set of characteristics. The camera 640 generates a VCA stream 645 according to a video stream strategy of the video flow strategy. The recording server 641 receives the VCA stream 645 transmitted via a link L64a. The recording server 641 processes the VCA stream 645, according to another video stream strategy of the video flow strategy, so as to generate the recording stream with the target set of characteristics. Before generating the recording stream, the recording server 641 may duplicate the received VCA stream 645 so as to obtain a copy (646) of the VCA stream 645 and forward the copy 646 of the VCA stream 645 to the VCA server 642. The VCA server 642 receives via a link L64b the copy 646 of the VCA stream 645 and processes it. The links L64a and L64b constitute a communication path used by the video flow strategy.
Two examples of estimating the costs of candidate video flow strategies, such as the costs of the above-mentioned video flow strategies VFS-A to VFS-D (with reference to Figure 6), are given as follows.
As illustrated above, the video flow strategies VFS-A, VFS-B, VFS-C and VFS-D are used to generate a current video stream (required by a recording server) and a newly requested target video stream required by a VCA server.
In the first example, the newly requested target video stream is a VCA stream with a target set of characteristics (e.g. frame rate: 1 fps, resolution value: 720p, compression rate: 0.05). The current video stream required by the recording server is a recording stream with a corresponding set of characteristics (e.g. frame rate: 5 fps, resolution value: 720p, compression rate: 0.05) different from the target set of characteristics.
According to an embodiment, an input rate (also referred to as "data rate") of a video stream is determined as a function of the frame rate, the resolution and the compression rate of the encoding of the video stream. For example, the input rates of the recording video stream and the VCA stream are respectively 5 Mb/sec and 1 Mb/sec.
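The text does not give the exact rate model. One plausible sketch, in which the data rate grows with the frame rate, the pixel count and the compression rate of the encoding, roughly reproduces the 5 Mb/sec and 1 Mb/sec figures above; the 24 bits-per-pixel raw depth and the function name are assumptions:

```python
def input_rate_mbps(fps, width, height, compression, bits_per_pixel=24):
    """Assumed model: rate = frames/sec * raw bits per frame * compression."""
    return fps * width * height * bits_per_pixel * compression / 1e6

# 720p recording stream at 5 fps and VCA stream at 1 fps, compression 0.05:
print(input_rate_mbps(5, 1280, 720, 0.05))  # ~5.5 Mb/sec
print(input_rate_mbps(1, 1280, 720, 0.05))  # ~1.1 Mb/sec
```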
Therefore, for a link of a communication path, the link network cost can be calculated based on the input rates of the one or more video streams which are generated according to the video flow strategy and are transmitted via the link. For example, as illustrated in Table 1 below, for the video flow strategy VFS-A, the link network cost (NCR) of the link L61a between the camera and the recording server is 6 Mb/sec, equal to the sum of the input rates of the recording stream 615 and the VCA stream 616. For the video flow strategy VFS-C, the link network cost (NRV) of the link between the recording server and the VCA server is 5 Mb/sec, equal to the input rate of the only stream using the link L63b between the recording server 631 and the VCA server 632 (the recording stream 636). The rest of the link network costs can be found in the following Table 1. In addition, please note that the video flow strategy VFS-D is indicated as "Not possible" since it is not applicable to generate a recording stream presenting a data rate of 5 Mb/s from the VCA stream 645 presenting a data rate of 1 Mb/s.
Video Flow Strategy NCR (Mb/sec) NRV (Mb/sec)
VFS-A 6 1
VFS-B 5 1
VFS-C 5 5
VFS-D Not possible
Table 1: link network costs
It is noted that, as mentioned previously, for each of the video flow strategies VFS-A to VFS-D, the network cost can be estimated in a simplified way: the network cost of a video flow strategy is calculated by summing up all the link network costs NCR and NRV. In this simplified case, no further conditions, such as whether a link of the communication path is used only for transmitting the video streams generated according to the video flow strategy, are taken into account.
The following Table 2 illustrates a different way of calculating the network cost of a video flow strategy. For each of the video flow strategies VFS-A to VFS-D, Table 2 indicates a network cost of the video flow strategy which is a sum of the link network costs (e.g. NCR, NRV) weighted by corresponding link weight values (e.g. αCR, αRV) assigned to the links. As mentioned previously, the distributed deployment and the centralized deployment of a VS system may have different network constraints, which results in different link weight values for the VS system.
For example, the network cost of the video flow strategy VFS-B according to the distributed deployment is equal to 15 (Mb/sec), obtained by the following equation: 15 (Mb/sec) = 5 (Mb/sec) * 1 + 1 (Mb/sec) * 10. The rest of the network costs can be found in Table 2.
Video Flow Strategy Distributed deployment Network cost (Mb/sec) αCR = 1, αRV = 10 Centralized deployment Network cost (Mb/sec) αCR = 10, αRV = 1
VFS-A 16 61
VFS-B 15 51
VFS-C 55 55
VFS-D Not possible
Table 2: network costs
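As an illustration, the Table 2 figures can be reproduced from the Table 1 link network costs with a short sketch (the variable names are ours, not part of the specification; the dictionaries simply restate Tables 1 and 2):

```python
# Link network costs from Table 1 (Mb/sec): (NCR, NRV) per strategy.
link_costs = {'VFS-A': (6, 1), 'VFS-B': (5, 1), 'VFS-C': (5, 5)}

def weighted(ncr, nrv, a_cr, a_rv):
    # Network cost = NCR * alpha_CR + NRV * alpha_RV
    return ncr * a_cr + nrv * a_rv

distributed = {s: weighted(*c, a_cr=1, a_rv=10) for s, c in link_costs.items()}
centralized = {s: weighted(*c, a_cr=10, a_rv=1) for s, c in link_costs.items()}
print(distributed)  # {'VFS-A': 16, 'VFS-B': 15, 'VFS-C': 55}
print(centralized)  # {'VFS-A': 61, 'VFS-B': 51, 'VFS-C': 55}
```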
The following Table 3 lists, for each of the video flow strategies VFS-A to VFS-D, a device processing cost (e.g. RC, RR, RV) of each of the devices (e.g. camera, recording server, VCA server) which can be calculated based on the input rate of one or a plurality of video streams which are generated by the device according to the video flow strategy. According to a preferred embodiment, the device processing cost is calculated as a function of a frame rate reduction, such as a frame rate reduction from 5 fps (of a recording stream) to 1 fps (of a VCA stream).
Video Flow Strategy RC RR RV
VFS-A 1 0 0
VFS-B 0 8 0
VFS-C 0 0 8
VFS-D Not possible
Table 3: device processing costs
As mentioned previously, the device processing cost may correspond to the amount of resources (such as CPU processing time and the amount of memory used) required to execute the operations. For example, as indicated in Table 3, the value RR of the video flow strategy VFS-B is 8, which is calculated based on the reduction of the data rate from 5 Mb/sec to 1 Mb/sec by the recording server. The value RV of the video flow strategy VFS-C is 8, which is calculated based on the reduction of the data rate from 5 Mb/sec to 1 Mb/sec by the VCA server.
It is noted that, according to an embodiment, a normalization step can be performed on the values of the above-mentioned link network costs and network costs (expressed in Mb/sec, for example) and/or on the calculations of the device processing costs and resource costs, so as to obtain representative scores or percentage values. For example, the numerical values indicated in Table 3 are scores obtained after the normalization step. The numerical values indicated in Tables 1 and 2 can likewise be used as scores no longer expressed in Mb/sec, which is considered the result of the normalization step.
As for the processing cost of a video flow strategy, it can be estimated in a simplified way by summing up all the device processing costs RC, RR and RV of the video flow strategy. In this simplified case, no further conditions, such as whether a module/device of the VS system (e.g. the recording server or the VCA server) is used only for processing the video flow strategy, are taken into account.
The following Table 4 illustrates a different way of calculating the processing cost of a video flow strategy. For each of the video flow strategies VFS-A to VFS-D, Table 4 indicates a processing cost of the video flow strategy which is a sum of the device processing costs (e.g. RC, RR, RV) weighted by corresponding device weight values (e.g. βC, βR, βV) assigned to the devices. The device weight values βC, βR and βV are respectively assigned to the camera, the recording server and the VCA server.
Video Flow Strategy Processing cost βC = 10, βR = 1, βV = 5
VFS-A 10
VFS-B 8
VFS-C 40
VFS-D Not possible
Table 4: processing costs
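Likewise, the Table 4 processing costs follow from the Table 3 values and the device weight values (variable names are ours):

```python
# Device processing costs from Table 3: (RC, RR, RV) per strategy.
device_costs = {'VFS-A': (1, 0, 0), 'VFS-B': (0, 8, 0), 'VFS-C': (0, 0, 8)}
beta_c, beta_r, beta_v = 10, 1, 5  # weights: camera, recording server, VCA server

processing = {s: rc * beta_c + rr * beta_r + rv * beta_v
              for s, (rc, rr, rv) in device_costs.items()}
print(processing)  # {'VFS-A': 10, 'VFS-B': 8, 'VFS-C': 40}
```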
The following Table 5 lists, for each of the video flow strategies VFS-A to VFS-D, the cost in terms of network resources and processing resources, determined depending on the deployment (centralized or distributed) applied to the VS system.
According to an embodiment, the cost of the candidate video flow strategy in terms of system resources is a summation of the network cost and the processing cost.
According to another embodiment, the cost of the candidate video flow strategy in terms of system resources is a summation of the network cost weighted by a network factor ν and the processing cost weighted by a processing factor γ. In the present example, the network costs and processing costs used to calculate the costs of video flow strategies are those calculated in a weighted way as listed in Tables 2 and 4. In addition, in this example, the network factor ν and the processing factor γ are both set to 0.5.
For example, for the video flow strategy VFS-A performed by using the distributed deployment, the network cost is 16 and the processing cost is 10. The cost of the video flow strategy VFS-A using the distributed deployment is 13, obtained by the following equation: 13 = 16 (network cost) * 0.5 + 10 (processing cost) * 0.5.
For the video flow strategy VFS-A performed by using the centralized deployment, the network cost is 61 and the processing cost is 10. The cost of the video flow strategy VFS-A using the centralized deployment is equal to 35.5, obtained by the following equation: 35.5 = 61 (network cost) * 0.5 + 10 (processing cost) * 0.5. The rest of the costs of the video flow strategies can be found in the following Table 5.
Video Flow Strategy Distributed deployment Costs of video flow strategy Centralized deployment Costs of video flow strategy
VFS-A 13 35.5
VFS-B 11.5 29.5
VFS-C 47.5 47.5
VFS-D Not possible
Table 5: costs of video flow strategies
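The Table 5 costs and the resulting selection can be checked with a few lines, restating Tables 2 and 4 with ν = γ = 0.5:

```python
network = {'distributed': {'VFS-A': 16, 'VFS-B': 15, 'VFS-C': 55},
           'centralized': {'VFS-A': 61, 'VFS-B': 51, 'VFS-C': 55}}
processing = {'VFS-A': 10, 'VFS-B': 8, 'VFS-C': 40}
nu = gamma = 0.5

selected = {}
for deployment, net in network.items():
    costs = {s: nu * net[s] + gamma * processing[s] for s in net}
    selected[deployment] = min(costs, key=costs.get)  # smallest-cost strategy
print(selected)  # {'distributed': 'VFS-B', 'centralized': 'VFS-B'}
```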
Therefore, according to Table 5, the VS system applying the distributed deployment should select the video flow strategy VFS-B, which has the smallest cost value (11.5) relative to the rest of the video flow strategies. Likewise, when the VS system applies the centralized deployment, the video flow strategy VFS-B should also be selected, since its cost of 29.5 is smaller than the cost values of the rest of the video flow strategies.
It is noted that the above-mentioned normalization step is an optional step which helps simplify the calculation of the cost of a video flow strategy in terms of system resources.
The second example of estimating the costs of candidate video flow strategies, such as the costs of the above-mentioned video flow strategies VFS-A to VFS-D (with reference to Figure 6), is given as follows. In the second example, the newly requested target video stream is a VCA stream with a target set of characteristics (e.g. frame rate: 10 fps, resolution value: 720p, compression rate: 0.05). The current video stream required by the recording server is a recording stream with a corresponding set of characteristics (e.g. frame rate: 5 fps, resolution value: 720p, compression rate: 0.05) different from the target set of characteristics. The input rates (i.e. data rates) of the recording video stream and the VCA stream are respectively 5 Mb/sec and 10 Mb/sec.
The following Table 6 lists the link network costs NCR and NRV of each of the video flow strategies VFS-A to VFS-D. For example, for the video flow strategy VFS-A, the link network cost NCR of the link L61a between the camera and the recording server is 15 Mb/sec, equal to the sum of the input rates of the recording stream 615 and the VCA stream 616. In addition, please note that the video flow strategy VFS-B is indicated as "Not possible" since it is not applicable to generate a VCA stream presenting a frame rate of 10 fps from the recording stream 625 presenting a frame rate of 5 fps. Similarly, the video flow strategy VFS-C is also indicated as "Not possible". The rest of the link network costs can be found in Table 6.
Video Flow Strategy NCR (Mb/sec) NRV (Mb/sec)
VFS-A 15 10
VFS-B Not possible
VFS-C Not possible
VFS-D 10 10
Table 6: link network costs
For each of the video flow strategies VFS-A to VFS-D, the following Table 7 indicates a network cost of the video flow strategy which is a sum of the link network costs (e.g. NCR, NRV) weighted by corresponding link weight values (e.g. αCR, αRV) assigned to the links. For example, the network cost of the video flow strategy VFS-A according to the centralized deployment is equal to 160 (Mb/sec), obtained by the following equation: 160 (Mb/sec) = 15 (Mb/sec) * 10 + 10 (Mb/sec) * 1. The rest of the network costs can be found in Table 7.
Video Flow Strategy Distributed deployment Network cost (Mb/sec) αCR = 1, αRV = 10 Centralized deployment Network cost (Mb/sec) αCR = 10, αRV = 1
VFS-A 115 160
VFS-B Not possible
VFS-C Not possible
VFS-D 110 110
Table 7: network costs
Similar to Table 3 as mentioned above, the following Table 8 lists, for each of the video flow strategies VFS-A to VFS-D, a device processing cost (e.g. RC, RR, RV) of each of the devices (e.g. camera, recording server, VCA server). The device processing cost may be calculated, for example, as a function of a frame rate reduction. For example, as indicated in Table 8, the value RR of the video flow strategy VFS-D is 8, which is calculated based on the frame rate reduction from 10 fps to 5 fps by the recording server.
Video Flow Strategy RC RR RV
VFS-A 1 0 0
VFS-B Not possible
VFS-C Not possible
VFS-D 0 8 0
Table 8: device processing costs
Moreover, for each of the video flow strategies VFS-A to VFS-D, the following Table 9 indicates a processing cost of the video flow strategy which is a sum of the device processing costs (e.g. RC, RR, RV) weighted by corresponding device weight values (e.g. βC, βR, βV) assigned to the devices, such as the camera, the recording server and the VCA server.
Video Flow Strategy Processing cost βC = 10, βR = 1, βV = 5
VFS-A 10
VFS-B Not possible
VFS-C Not possible
VFS-D 8
Table 9: processing costs
The following Table 10 lists, for each of the video flow strategies VFS-A to VFS-D, the cost in terms of network resources and processing resources, determined depending on the deployment (centralized or distributed) applied to the VS system. The cost of the candidate video flow strategy in terms of system resources is a summation of the network cost weighted by a network factor ν and the processing cost weighted by a processing factor γ, wherein the network factor ν and the processing factor γ are both set to 0.5. In the present example, the network costs and processing costs used to calculate the costs of video flow strategies are those calculated in a weighted way as listed in Tables 7 and 9.
For example, for the video flow strategy VFS-A performed by using the centralized deployment, the network cost is 160 and the processing cost is 10. The cost of the video flow strategy VFS-A using the centralized deployment is 85, obtained by the following equation: 85 = 160 (network cost) * 0.5 + 10 (processing cost) * 0.5. The rest of the costs of the video flow strategies can be found in the following Table 10.
Video Flow Strategy Distributed deployment Costs of video flow strategy Centralized deployment Costs of video flow strategy
VFS-A 62.5 85
VFS-B Not possible
VFS-C Not possible
VFS-D 59 59
Table 10: costs of video flow strategies
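In this second example, the "Not possible" strategies can be modelled with an infinite cost (an assumption consistent with the "infinite value" used earlier), which keeps them out of the selection:

```python
INF = float('inf')  # stands in for "Not possible"
network = {'distributed': {'VFS-A': 115, 'VFS-B': INF, 'VFS-C': INF, 'VFS-D': 110},
           'centralized': {'VFS-A': 160, 'VFS-B': INF, 'VFS-C': INF, 'VFS-D': 110}}
processing = {'VFS-A': 10, 'VFS-B': INF, 'VFS-C': INF, 'VFS-D': 8}
nu = gamma = 0.5

selected = {}
for deployment, net in network.items():
    costs = {s: nu * net[s] + gamma * processing[s] for s in net}
    best = min(costs, key=costs.get)
    selected[deployment] = (best, costs[best])
print(selected)  # {'distributed': ('VFS-D', 59.0), 'centralized': ('VFS-D', 59.0)}
```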
Therefore, according to Table 10, no matter which one of the distributed and centralized deployments is applied to the VS system, the VS system should select the video flow strategy VFS-D, which has the smallest cost value (59) relative to the rest of the video flow strategies.
The method for generating the optimal video flow strategy used to generate video streams required by the devices/modules of a video-surveillance system, as described above, presents one or more of the advantages as follows.
The method of the invention makes it possible to automatically find the optimized video flow strategy to be used to generate the corresponding video streams, meaning that the method optimizes the video surveillance (VS) system in terms of network and/or processing resources. Consequently, the invention makes it possible to increase the number of video streams to be generated for the corresponding modules/devices of the VS system while maintaining the same conditions of utilization as well as the same resource constraints of the VS system in terms of network and processing. In other words, the performance of the VS system can be largely increased without investment in new equipment.
Furthermore, compared to the conventional manual selection of video stream strategies, the invention saves much time and is able to select, by taking into account the resource constraints, an optimal video flow strategy to be used to generate video streams. For example, the network deployment (e.g. the distributed or centralized deployment) often makes it complex for the operator of a VS system to master network considerations and to select an optimal video flow strategy while taking the network constraints into account, especially when the scale of the VS system is large.
In addition, the invention makes it possible to know whether the VS system can be required to generate a new video stream, to generate possible video flow strategies, to estimate the related costs spent by the possible video flow strategies, and to select the optimal video flow strategy to be used to generate the target video stream. The possible video flow strategies refer to video flow strategies which are in line with the resource constraints (such as the processing and network constraints) of the VS system.
Another aspect of the invention set forth below concerns a method for controlling the transmission of a video data generated by a video streaming device over a communication path, the method comprising:
- receiving a request for obtaining (or processing) a video data with first characteristics by a first module;
- receiving a request for obtaining (or processing) said video data with second characteristics by a second module, the second characteristics being different from the first characteristics, the first module being different from the second module, the first and/or second module being a Video Content Analytics server, a recording server or a viewer;
- determining a plurality of video flow strategies with the first module obtaining (or processing) the video data with first characteristics, and the second module obtaining (or processing) the video data with second characteristics;
- for each video flow strategy, determining a cost based on network characteristics of the communication path and/or processing characteristics; and
- selecting the configuration with a minimum cost.
Although the present invention has been described hereinabove with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications which lie within the scope of the present invention will be apparent to a person skilled in the art.
Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention as determined by the appended claims. In particular different features from different embodiments may be interchanged, where appropriate.

Claims (16)

1. A method for generating, based on a single video flow, a plurality of video streams required by modules of a video-surveillance system; the plurality of video streams comprising at least one current video stream and a target video stream to be generated, the target video stream being distinct from the at least one current video stream; the method comprising:
- forming candidate video flow strategies according to each of which the at least one current video stream and the target video stream can be generated;
- estimating, for each of the candidate video flow strategies, a cost in terms of system resources based on at least one of network features and of processing features of the video-surveillance system required to process the candidate video flow strategy; and
- selecting, from the candidate video flow strategies, a target video flow strategy which presents a smallest cost among the estimated costs of the candidate video flow strategies.
2. The method of claim 1, wherein a candidate video flow strategy is formed by performing at least one of the following:
- adding to an existing video flow strategy a target video stream strategy according to which the target video stream is directly derived from the video flow;
- adding to an existing video flow strategy a target video stream strategy according to which the target video stream is derived from one of the current video streams; and
- modifying one or plural existing video stream strategies of an existing video flow strategy, so as to generate, according to the modified one or plural existing video stream strategies, the current video stream and the target video stream.
3. The method of any one of claims 1 and 2, wherein the candidate video flow strategies comprise respectively at least one video stream strategy, the video stream strategy comprising at least part of the following information on a corresponding video stream:
- a set of characteristics of the video stream which is generated according to the video stream strategy, the set of characteristics comprising at least one of a frame rate, a resolution value and a compression rate of the video stream;
- a type of processing to be performed to generate the video stream;
- an identifier of a source video stream from which the video stream is generated; and
- an identifier of one of the modules configured to process the video stream.
4. The method of claim 3, wherein the frame rate and the resolution value of the target video stream are respectively not greater than the frame rate and the resolution value of a source video stream used to generate the target video stream, and the compression rate of the target video stream is not smaller than the compression rate of the source video stream, wherein the source video
stream is one of the current video streams.
5. The method of any one of claims 1 to 4, wherein the cost of the candidate video flow strategy comprises a network cost of the candidate video flow strategy calculated based on at least one of the following:
- an input rate of the target video stream which is determined based on the target set of characteristics;
- an input rate of the at least one current video stream which is determined based on the corresponding set of characteristics; and
- the network features comprising bandwidth limits of links of a communication path used by the candidate video flow strategy for transmitting the video streams including the target video stream.
6. The method of claim 5, wherein the network cost of the candidate video flow strategy is calculated by taking into account the link bandwidths of said links occupied by the transmission of other data which are not relative to the candidate video flow strategy.
7. The method of any one of claims 1 to 6, wherein the cost of a candidate video flow strategy comprises a processing cost of the candidate video flow strategy calculated based on at least one of:
- the processing features of the modules which participate in the processing of the candidate video flow strategy; and
- frame rate reduction, resolution reduction and/or compression rate increase resulting from the processing of the candidate video flow strategy by said modules.
8. The method of claim 7, wherein the processing cost of the candidate video flow strategy is calculated by taking into account processing loads placed on said modules for processing other data which are not relative to the candidate video flow strategy.
9. The method of any one of claims 7 and 8 depending on claim 5, wherein the cost of the candidate video flow strategy is determined based on the network cost and the processing cost.
10. The method of any one of claims 1 to 9, further comprising, after the step of selecting, a step of generating the target video stream with the target set of characteristics and the at least one current video stream according to the selected target video flow strategy.
11. The method of any one of claims 1 to 10, wherein one of the modules is a video streaming device configured to generate the video flow, and the rest of the modules comprise a video content analytics (VCA) server, a recording server and/or a viewer.
12. A computer program product for a programmable apparatus, the computer program product comprising instructions for carrying out each step of the method according to any one of claims 1 to 11 when the program is loaded and executed by a programmable apparatus.
13. A computer-readable storage medium storing instructions of a computer program for implementing the method according to any one of claims 1 to 11.
14. A device for generating, based on a single video flow, a plurality of video streams required by modules of a video-surveillance system; the plurality of video streams comprising at least one current video stream and a target video stream to be generated, the target video stream being distinct from the at least one current video stream; the device comprising a processor configured for carrying out the steps of:
- forming candidate video flow strategies according to each of which the at least one current video stream and the target video stream can be generated;
- estimating, for each of the candidate video flow strategies, a cost in terms of system resources based on at least one of network features and processing features of the video-surveillance system required to process the candidate video flow strategy; and
- selecting, from the candidate video flow strategies, a target video flow strategy which presents the smallest cost among the estimated costs of the candidate video flow strategies.
15. The device of claim 14, wherein the processor is further configured for carrying out at least one of the following steps to form a candidate video flow strategy:
- adding to an existing video flow strategy a target video stream strategy according to which the target video stream is directly derived from the video flow;
- adding to an existing video flow strategy a target video stream strategy according to which the target video stream is derived from one of the current video streams; and
- modifying one or more existing video stream strategies of an existing video flow strategy, so as to generate, according to the modified one or more existing video stream strategies, the target video stream and the current video streams.
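The first two formation options of claim 15 (deriving the target stream directly from the video flow, or deriving it from an existing current stream) amount to a simple enumeration over existing strategies. A sketch under that reading; the third option (modifying existing video stream strategies) is omitted for brevity, and the strategy representation and names are purely illustrative:

```python
def form_candidates(existing_strategy, current_streams):
    """Enumerate candidate video flow strategies for a new target stream.

    A strategy is modelled here as a list of (stream, source) pairs, where
    the source is either the raw video flow or another stream's name.
    """
    candidates = []
    # Option (a): derive the target stream directly from the video flow.
    candidates.append(existing_strategy + [("target", "video_flow")])
    # Option (b): derive the target stream from each current video stream.
    for stream in current_streams:
        candidates.append(existing_strategy + [("target", stream)])
    return candidates


existing = [("current1", "video_flow")]
print(len(form_candidates(existing, ["current1"])))  # 2 candidates
```

Each enumerated candidate would then be fed to the cost-estimation and selection steps of claim 14.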
16. A method for generating, based on a single video flow, a plurality of video streams required by modules of a video-surveillance system substantially as hereinbefore described with reference to, and as shown in, Figure 4 or in Figure 4 with Figure 5.
Intellectual Property Office
Application No: GB1612727.6 Examiner: Mr Rhys Miles
GB201612727A 2016-07-22 2016-07-22 Method and device for efficiently generating, based on a video flow, a plurality of video streams required by modules of a video surveillance system Active GB2552376B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB201612727A GB2552376B (en) 2016-07-22 2016-07-22 Method and device for efficiently generating, based on a video flow, a plurality of video streams required by modules of a video surveillance system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB201612727A GB2552376B (en) 2016-07-22 2016-07-22 Method and device for efficiently generating, based on a video flow, a plurality of video streams required by modules of a video surveillance system

Publications (3)

Publication Number Publication Date
GB201612727D0 GB201612727D0 (en) 2016-09-07
GB2552376A true GB2552376A (en) 2018-01-24
GB2552376B GB2552376B (en) 2020-01-01

Family

ID=56894539

Family Applications (1)

Application Number Title Priority Date Filing Date
GB201612727A Active GB2552376B (en) 2016-07-22 2016-07-22 Method and device for efficiently generating, based on a video flow, a plurality of video streams required by modules of a video surveillance system

Country Status (1)

Country Link
GB (1) GB2552376B (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120314127A1 (en) * 2011-06-09 2012-12-13 Inayat Syed Provisioning network resources responsive to video requirements of user equipment nodes

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111988560A (en) * 2019-05-22 2020-11-24 安讯士有限公司 Method and apparatus for encoding and streaming video sequences over multiple network connections
EP3742739A1 (en) * 2019-05-22 2020-11-25 Axis AB Method and devices for encoding and streaming a video sequence over a plurality of network connections
US11683510B2 (en) 2019-05-22 2023-06-20 Axis Ab Method and devices for encoding and streaming a video sequence over a plurality of network connections
CN113992980A (en) * 2020-07-09 2022-01-28 杭州海康威视数字技术股份有限公司 Generation method, device and equipment of attack code stream
CN113992980B (en) * 2020-07-09 2023-05-26 杭州海康威视数字技术股份有限公司 Method, device and equipment for generating attack code stream
EP3985974A1 (en) * 2020-10-13 2022-04-20 Axis AB An image processing device, a camera and a method for encoding a sequence of video images
US11477459B2 (en) 2020-10-13 2022-10-18 Axis Ab Image processing device, a camera and a method for encoding a sequence of video images

Also Published As

Publication number Publication date
GB201612727D0 (en) 2016-09-07
GB2552376B (en) 2020-01-01

Similar Documents

Publication Publication Date Title
US11503304B2 (en) Source-consistent techniques for predicting absolute perceptual video quality
Li et al. Streaming video over HTTP with consistent quality
US10530990B2 (en) Method for controlling a video-surveillance and corresponding video-surveillance system
JP6595287B2 (en) Monitoring system, monitoring method, analysis apparatus and analysis program
CN105827633B (en) Video transmission method and device
US10516856B2 (en) Network video recorder cluster and method of operation
US10321144B2 (en) Method and system for determining encoding parameters of video sources in large scale video surveillance systems
US20230247069A1 (en) Systems and Methods for Adaptive Video Conferencing
US11917327B2 (en) Dynamic resolution switching in live streams based on video quality assessment
CN115150592A (en) Audio and video transmission method, server and computer readable storage medium
US9071821B2 (en) Method and system for long term monitoring of video assets
US20090315886A1 (en) Method to prevent resource exhaustion while performing video rendering
GB2552376A (en) Method and device for efficiently generating, based on a video flow, a plurality of video streams required by modules of a video surveillance system
CN113660465A (en) Image processing method, device, readable medium and electronic device
US20190228625A1 (en) Prioritization of video sources
CN116781973B (en) Video encoding and decoding method and device, storage medium and electronic equipment
JP2015061293A (en) Camera system, master camera device and slave camera device
CN111836020A (en) Code stream transmission method and device in monitoring system and storage medium
GB2557617A (en) Method and device for managing video streams in a video surveillance system
US10237582B2 (en) Video stream processing method and video stream device thereof
CN110121051B (en) Distributed streaming media forwarding method and device based on network camera
KR20240002346A (en) Electronic apparatus for processing image using AI encoding/decoding and cotrol method thereof
HK40074531A (en) A bit rate adaptive method, device, computer equipment and storage medium
HK40074531B (en) A bit rate adaptive method, device, computer equipment and storage medium
Bošnjaković et al. Picture quality meter—No-reference video artifact detection tool