
CN106686466B - Video data positioning method - Google Patents

Video data positioning method

Info

Publication number
CN106686466B
CN106686466B CN201710016545.5A CN201710016545A CN106686466B CN 106686466 B CN106686466 B CN 106686466B CN 201710016545 A CN201710016545 A CN 201710016545A CN 106686466 B CN106686466 B CN 106686466B
Authority
CN
China
Prior art keywords
positioning
data
stag
client
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710016545.5A
Other languages
Chinese (zh)
Other versions
CN106686466A (en)
Inventor
黄丹丹
操勇
张�林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Core Ruishi Technology Co ltd
Original Assignee
Shenzhen Core Ruishi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Core Ruishi Technology Co Ltd filed Critical Shenzhen Core Ruishi Technology Co Ltd
Priority to CN201710016545.5A priority Critical patent/CN106686466B/en
Publication of CN106686466A publication Critical patent/CN106686466A/en
Application granted granted Critical
Publication of CN106686466B publication Critical patent/CN106686466B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a method for positioning video data, which comprises the following steps: adding a content identifier sTag to the frame header; and having the video monitoring device automatically increment the content identifier after each client positioning operation. According to the invention, when a relative timestamp is used in packaging the image data of H264/H265 video frames, i.e. when the frame header data is designed, the video data before and after positioning is marked by adding an sTag, so that the effect of remote playback positioning can be controlled conveniently and accurately, avoiding problems such as screen corruption and playback that does not match expectations.

Description

Video data positioning method
Technical Field
The invention relates to the technical field of video monitoring, in particular to a method for positioning video data.
Background
The current video monitoring field generally uses the H264/H265 coding standard to compress and code video images, so as to transmit higher-quality video images under the same network bandwidth.
The image types (also called frame types) encoded by the H264/H265 standard are mainly divided into two types: key frames (I frames for short) and reference frames (P frames for short). An I frame is characterized by containing all of the image information at the moment of encoding, so the current scene can be completely restored from this information alone.
In practical applications, one I frame and N P frames are usually encoded in a fixed period (such as 1 second): the I frame contains complete, independent image information, so its data volume is large, while a P frame only stores the part that has changed relative to the previous scene image, so its data volume is small.
In practice, manufacturers have adopted the method of adding a fixed-length block of data in front of the image data to store this type of per-frame information (such as the frame type); this block is generally referred to as a frame header.
In practical application, there are two methods for designing the timestamp: an absolute timestamp and a relative timestamp.
The advantage of an absolute timestamp is obvious: from this information we can know the exact time at which the frame image was generated, accurate to the millisecond. However, consider expressing a moment such as 12:00:00.500 on December 23, 2016. If we choose a 32-bit integer to express the number of milliseconds since 1970, the value is above 1450656000000 (about 46 years' worth of milliseconds), while the maximum value a 32-bit unsigned integer can express is 4294967295; we must therefore express it as a 64-bit integer, i.e. 8 bytes. This exposes at least three disadvantages: first, it uses 8 bytes, 4 bytes more than a relative timestamp; second, calculating the difference between the timestamps of two frames requires 64-bit arithmetic, which costs more than 32-bit arithmetic; and third, converting a 64-bit integer into a specific time consumes a large amount of computing resources, reducing overall performance.
A relative timestamp uses a 32-bit integer to express the number of milliseconds and counts from 0: the 1st frame is 0, and each subsequent frame's timestamp adds the frame interval (in milliseconds, here 40 milliseconds) to the previous frame's timestamp, and so on, so that the timestamps of the 3rd frame, the 4th frame and the Nth frame are 80, 120 and (N-1)×40 respectively.
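To make the arithmetic concrete, the following is a minimal sketch in C (an illustration, not part of the patent) that derives the relative timestamp from the frame index under the assumed fixed 40 ms frame interval and shows why a 32-bit counter that restarts at 0 is sufficient while milliseconds since 1970 are not.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Relative timestamp of the N-th frame (1-based), assuming the fixed
 * 40 ms frame interval used in the example above. */
static uint32_t relative_timestamp_ms(uint32_t frame_index)
{
    return (frame_index - 1u) * 40u;
}

int main(void)
{
    for (uint32_t n = 1; n <= 4; ++n)
        printf("frame %" PRIu32 " -> %" PRIu32 " ms\n",
               n, relative_timestamp_ms(n));

    /* A 32-bit millisecond counter that restarts at 0 only wraps after
     * about 49.7 days, whereas milliseconds since 1970 already exceed
     * its range and would need a 64-bit (8-byte) field. */
    printf("32-bit maximum: %" PRIu32 " ms (about %.1f days)\n",
           (uint32_t)UINT32_MAX, UINT32_MAX / 86400000.0);
    return 0;
}
```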
Table 1 shows the prior art frame header data structure:
TABLE 1
Frame type | Relative time stamp | Actual image data length
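For concreteness, a minimal C sketch of such a prior-art frame header is given below; the field widths and byte packing are assumptions chosen for illustration, since the table only fixes the fields and their order.

```c
#include <stdint.h>

/* Hypothetical layout of the prior-art frame header of Table 1: the
 * fields and their order follow the table, the widths are assumed. */
#pragma pack(push, 1)
typedef struct {
    uint8_t  frame_type;        /* e.g. 1 = I frame, 2 = P frame      */
    uint32_t relative_ts_ms;    /* relative timestamp in milliseconds */
    uint32_t payload_length;    /* actual image data length in bytes  */
} FrameHeaderV1;
#pragma pack(pop)

/* The compressed image data of the frame follows the header directly. */
```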
For convenience of illustration, we simplify the application here: only 1 key frame and 2 reference frames are encoded per second, and the video to be played is only 4 seconds long, so the whole video content can be described as shown in Table 2 below:
TABLE 2
Frame number:  1  2  3  4  5  6  7  8  9  10  11  12
Frame type:    I  P  P  I  P  P  I  P  P  I   P   P
For example, P5, I7 and P11 denote the 5th, 7th and 11th frames, whose types are P frame, I frame and P frame respectively.
The following describes the positioning details of the prior-art video transmission process.
1. The process in which the content played after positioning is inconsistent with expectations is shown in Table 3.
TABLE 3
(Table 3 is reproduced only as an image in the original publication.)
2. The process in which a corrupted (splash) screen is played after positioning is shown in Table 4.
TABLE 4
(Table 4 is reproduced only as an image in the original publication.)
Disclosure of Invention
In order to remedy the above-mentioned deficiencies, the present invention provides a method for positioning video data in which, when a relative timestamp is used during the packaging of H264/H265 video frame image data, i.e. when the frame header data is designed, an sTag is added to mark the video data before and after positioning, so as to conveniently and precisely control the effect of remote playback positioning and avoid problems such as screen corruption and playback that does not match expectations.
The invention provides a method for positioning video data, which comprises the following steps:
adding a content identifier sTag into a frame header;
the video monitoring device automatically incrementing the content identifier after each client positioning operation;
and, when the client receives the positioned video data, the client taking whether the content identifier sTag has changed as the basis for accepting or discarding frames.
In the foregoing method, the step of adding the content identifier sTag to the frame header includes: the content identifier sTag is used to indicate image data before and after positioning.
In the method, the step in which the video monitoring device automatically accumulates the content identifier after a client operation includes: if the sTag value of each frame before positioning is 0, the sTag value after positioning is 1.
In the method mentioned above, the step in which the client receives the next video data based on the most recently marked content identifier comprises:
the client discarding the data received before positioning.
The method specifically comprises the following steps:
the client sends a first request to a network layer;
the network layer sends the received first request to the monitoring device, which adds a content identifier sTag to the frame header, with the initialization value of the sTag being 0;
and the monitoring device sends the positioned image data to the client, where it is played, completing one positioning-and-playback cycle.
In the above method, the step in which the monitoring device sends the positioned image data to the client for playback, completing one positioning playback, further includes:
the client sends a second positioning request to the monitoring device;
the monitoring device marks the content identifier of the second positioning with an sTag value of 1;
and the client receives the content identifier of the second positioning and then discards the previously marked data, thereby completing one accumulation of the content identifier.
In the method described above, the step in which the client receives a new content identifier and then discards the previously marked data, thereby completing one accumulation of the content identifier, further includes:
the client sends a third positioning request to the monitoring device;
the monitoring device marks the content identifier of the third positioning with an sTag value of 2;
the network layer sends the data of the second positioning and the data of the third positioning to the client;
and the client inspects the received data, discards the data whose content identifier sTag value is 1, and plays the data whose content identifier sTag value is 2.
The invention has the following advantages: when a relative timestamp is used in packaging H264/H265 video frame image data, i.e. when the frame header data is designed, the video data before and after positioning is marked by adding an sTag, so that the effect of remote playback positioning can be controlled conveniently and accurately, avoiding problems such as screen corruption and playback that does not match expectations.
Drawings
The invention and its features, aspects and advantages will become more apparent from reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings. Like reference symbols in the various drawings indicate like elements. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Fig. 1 is a flowchart illustrating a method for locating video data according to the present invention.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The following is a detailed description of preferred embodiments of the invention; the invention is, however, capable of other embodiments in addition to those detailed herein.
The image content can be played accurately only if this distinction can be made, that is, only if all image data cached at the network layer that belongs to the data before positioning is discarded and cleared; in other words, a reliable frame-dropping strategy must be implemented to achieve this goal. In the present invention, it should be noted that references to the first time, the second time, etc. are made only for the convenience of distinguishing the order of positioning requests and are not intended as particular limitations.
Referring to fig. 1, the present invention provides a method for positioning video data, including the following steps:
step S1: a content tag sTag is added to the frame header, and the tag can clearly indicate the image data before and after positioning.
Step S2: the monitoring device must increment the sTag after each client positioning operation. For example, if the sTag value of every frame before positioning is 0, the sTag value after positioning is 1; even if frame data from before positioning is still cached at the network layer, its sTag value remains 0, so the client can discard it immediately, and the sTag value of the frame data after positioning necessarily differs from the previous value by 1.
Step S3: when the client receives the positioned video data, it uses the most recently marked content identifier as its basis; the frame data after positioning necessarily starts from an I frame, so no screen corruption occurs. The client judges each frame by whether its sTag has changed: video data whose sTag is unchanged is discarded, and video data whose sTag has changed is the data that matches expectations.
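As a minimal illustration of the discard rule in steps S2 and S3 (a sketch, not the patent's reference implementation; the names ClientSeekState, on_seek_issued and should_play_frame are hypothetical), the client only needs to remember the sTag value seen before its positioning request and drop every frame that still carries it:

```c
#include <stdbool.h>
#include <stdint.h>

/* Client-side bookkeeping for one playback session (hypothetical names). */
typedef struct {
    bool     seek_pending;       /* a positioning request is outstanding */
    uint32_t stag_before_seek;   /* sTag value observed before that seek */
} ClientSeekState;

/* Called when the client issues a positioning (seek) request. */
static void on_seek_issued(ClientSeekState *st, uint32_t current_stag)
{
    st->seek_pending = true;
    st->stag_before_seek = current_stag;
}

/* Decide whether a received frame is played or discarded. While a seek is
 * pending, every frame still carrying the old sTag belongs to the data
 * before positioning and is dropped; the first frame whose sTag has
 * changed ends the pending state and playback resumes from it (it is an
 * I frame, so no screen corruption occurs). */
static bool should_play_frame(ClientSeekState *st, uint32_t frame_stag)
{
    if (st->seek_pending) {
        if (frame_stag == st->stag_before_seek)
            return false;            /* stale pre-positioning frame */
        st->seek_pending = false;    /* positioning completed       */
    }
    return true;
}
```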
The key point of the invention is that an sTag mark is placed in the frame header data structure to distinguish frame image data before and after positioning. Compared with the prior-art frame header data, the frame header data structure improved by the method of the invention is shown schematically in Table 5;
TABLE 5
Frame type | Relative time stamp | sTag | Actual image data length
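A corresponding C sketch of the improved frame header of Table 5 follows; as before, the field widths and packing are assumptions made for illustration, since the table only fixes the set and order of fields.

```c
#include <stdint.h>

/* Hypothetical layout of the improved frame header of Table 5: identical
 * to the prior-art header except for the sTag field inserted between the
 * relative timestamp and the payload length; widths are assumed. */
#pragma pack(push, 1)
typedef struct {
    uint8_t  frame_type;        /* e.g. 1 = I frame, 2 = P frame          */
    uint32_t relative_ts_ms;    /* relative timestamp in milliseconds     */
    uint32_t stag;              /* content identifier sTag, incremented by
                                   the monitoring device on each seek     */
    uint32_t payload_length;    /* actual image data length in bytes      */
} FrameHeaderV2;
#pragma pack(pop)
```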
In a preferred but non-limiting embodiment of the present invention, the method for positioning video data specifically includes the following steps: the client sends a first request to the network layer; the network layer sends the received first request to the monitoring device, which adds a content identifier sTag to the frame header, with the initialization value of the sTag being 0; and the monitoring device sends the positioned image data to the client, where it is played, completing one positioning-and-playback cycle.
In a preferred but non-limiting embodiment of the present invention, the step in which the monitoring device sends the positioned image data to the client for playback, completing one positioning playback, further includes: the client sends a second positioning request to the monitoring device; the monitoring device marks the content identifier of the second positioning with an sTag value of 1; and the client receives the content identifier of the second positioning and then discards the previously marked data, thereby completing one accumulation of the content identifier.
In a preferred but non-limiting embodiment of the present invention, the step in which the client receives a new content identifier and then discards the previously marked data, thereby completing one accumulation of the content identifier, further comprises: the client sends a third positioning request to the monitoring device; the monitoring device marks the content identifier of the third positioning with an sTag value of 2; the network layer sends the data of the second positioning and the data of the third positioning to the client; and the client inspects the received data, discards the data whose content identifier sTag value is 1, and plays the data whose content identifier sTag value is 2. A specific example is provided below to further illustrate the present invention, as shown in Table 6.
TABLE 6
(Table 6 is reproduced only as images in the original publication.)
As can be seen from Table 6, the process of the present invention proceeds as follows. At time T0, the client sends a playback request (C->V) to the network layer, the network layer forwards the playback request to the monitoring device, and the monitoring device receives the playback request and prepares for playback. The monitoring device then completes the sending of 6 frames of images, of which the network layer forwards 4 frames of data and buffers 2 frames, namely P5 and P6; the sTag in the frame header structure of these 6 frames of data is 0. The client receives 4 frames and plays 2 frames of images, with P3 to be played next; the sTag in the frame header structure of these 4 frames of data is 0.
at time T2, the client initiates a request to locate to I7, the network layer forwards the location request to the monitoring device, the monitoring device receives the request and prepares, and the client immediately clears the unplayed 2 frames, i.e., P3, I4.
At time T3, the network layer forwards P5 (V->C) and buffers 1 frame, namely P6; the sTag of both the already-sent frames and the currently buffered frame is still 0. Because I4, an I frame, was discarded by the client at time T2 and the newly received P5 depends on I4, the client detects that the sTag of P5 is 0, i.e. that it belongs to the data before positioning, and discards it according to the established frame-dropping policy.
At time T4, the monitoring device completes the transmission of I7; since the sTag of the newly positioned data has been incremented once, its value is 1. The network layer completes the transmission of P6 to the client and buffers I7; the sTag of P6 is 0, while the sTag of the newly received I7 is 1. The client receives P6 and processes it in the same way as at T3, which is not repeated here.
At time T5, the network layer forwards I7, whose sTag is 1. The client receives I7 and detects that the sTag is 1 (the sTag before positioning was 0), i.e. the sTag has changed; it can therefore determine that this frame is already positioned data and should be played normally, and the positioning process completes normally.
At time T6, the monitoring device sends P8, P9 and I10 with sTag 1; because there is no new positioning, the sTag value remains unchanged. The network layer forwards P8 and P9 and buffers I10, all with sTag 1. The client receives P8 and P9, whose sTag is still 1 and thus unchanged, and plays them normally.
At time T7, the client initiates a request to position to I4, the network layer forwards the positioning request, and the monitoring device receives the request and prepares.
At time T8, the monitoring device sends I4, whose content identifier sTag is now 2. The network layer forwards I10 (sTag 1) and buffers I4 (sTag 2). The client receives I10; because I10 is an I frame, playing it directly would not corrupt the screen, but because its sTag value is 1 it is data from before the positioning, and the predetermined policy is to discard it, thereby avoiding playback that is inconsistent with expectations.
At time T9, the network layer forwards I4, whose content identifier sTag is 2, to the client. The client receives I4 and detects that the sTag value is 2, i.e. it has changed, indicating that the currently received data is already positioned data, so it can be played normally; thus a positioning process is successfully completed.
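The device-side bookkeeping implied by this walkthrough is equally small. The sketch below is an illustration under the same assumptions as the earlier ones (the names DeviceSession, device_on_seek and device_current_stag are hypothetical): the monitoring device keeps one sTag counter per playback session, increments it on every positioning request, and stamps the current value into each outgoing frame header.

```c
#include <stdint.h>

/* Per-playback-session state kept by the monitoring device
 * (hypothetical names, for illustration only). */
typedef struct {
    uint32_t current_stag;   /* initialized to 0 when playback starts */
} DeviceSession;

/* Called when a positioning (seek) request arrives from the client:
 * the first seek yields sTag 1, the second sTag 2, and so on, matching
 * the T2 and T7 requests in the walkthrough above. */
static void device_on_seek(DeviceSession *s)
{
    s->current_stag += 1;
}

/* Value to write into the sTag field of every outgoing frame header;
 * all frames sent before the next seek carry the same value. */
static uint32_t device_current_stag(const DeviceSession *s)
{
    return s->current_stag;
}
```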
The above is a description of the preferred embodiment of the invention. It should be understood that the invention is not limited to the particular embodiments described above; devices and structures not described in detail should be understood as being implemented in a manner common in the art. Those skilled in the art can make many possible variations and modifications to the disclosed embodiments, or modify them into equivalent embodiments, using the methods and techniques disclosed above, without departing from the spirit of the invention. Therefore, any simple modification, equivalent change or modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the scope of protection of the technical solution of the present invention.

Claims (1)

1. A method for locating video data, comprising the steps of:
in the case where relative timestamps are used in the encapsulation of the image data of the H264/H265 video frames, i.e. in the design of the frame header data,
the network layer adds a content identification sTag in the frame header, wherein the content identification sTag is used for indicating the video data before and after positioning;
the monitoring device automatically adds 1 to the content identifier after the client positioning operation, namely: if the sTag value of each frame before positioning is 0, the sTag value after positioning is 1;
after positioning, the client receives the next valid video data by taking the change of the content identifier as the basis for positioning, and discards video data whose content identifier is unchanged;
specifically:
The client sends a first request to a network layer;
the network layer sends the received first request to the monitoring equipment and adds a content identifier sTag to the frame header, wherein the initialization value of the sTag is 0;
the monitoring equipment sends the positioned video data to the client, where it is played, completing one positioning-and-playback cycle;
the client sends a second positioning request to the monitoring equipment;
the monitoring equipment marks the second positioning content identification as an sTag value of 1;
the client receives the video data of the second positioning with content identifier 1, and then discards the video data previously marked 0, thereby completing one accumulation of the content identifier;
the client sends a third positioning request to the monitoring equipment;
the monitoring equipment marks the third-time positioning content identification as an sTag value of 2;
the network layer sends the data of the second positioning and the data of the third positioning to the client;
and the client inspects the received data, discards the data whose content identifier sTag value is 1, and plays the data whose content identifier sTag value is 2.
CN201710016545.5A 2017-01-10 2017-01-10 Video data positioning method Active CN106686466B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710016545.5A CN106686466B (en) 2017-01-10 2017-01-10 Video data positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710016545.5A CN106686466B (en) 2017-01-10 2017-01-10 Video data positioning method

Publications (2)

Publication Number Publication Date
CN106686466A CN106686466A (en) 2017-05-17
CN106686466B true CN106686466B (en) 2020-10-23

Family

ID=58849275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710016545.5A Active CN106686466B (en) 2017-01-10 2017-01-10 Video data positioning method

Country Status (1)

Country Link
CN (1) CN106686466B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111813994B (en) * 2019-04-11 2024-06-07 阿里巴巴集团控股有限公司 Data processing and file playback method and device based on interactive whiteboard
CN110913273A (en) * 2019-11-27 2020-03-24 北京翔云颐康科技发展有限公司 Video live broadcasting method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101072361A (en) * 2007-06-22 2007-11-14 中兴通讯股份有限公司 Method for improving receiving performance of multimedia broadcasting terminal
CN201127093Y (en) * 2007-09-19 2008-10-01 中兴通讯股份有限公司 Mobile multimedia broadcast terminal
US8396369B1 (en) * 2007-09-28 2013-03-12 Aurora Networks, Inc. Method and system for propagating upstream cable modem signals and RF return video control signals over the same optical network
CN203104105U (en) * 2012-09-29 2013-07-31 上海市电力公司 Intelligent Video Monitoring System for Power Transmission and Transformation
CN105094705A (en) * 2015-07-27 2015-11-25 武汉兴图新科电子股份有限公司 Method for optimizing disk storage strategy

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101257647B (en) * 2007-02-28 2011-09-07 国家广播电影电视总局广播科学研究院 Method for transmiferring mobile multimedia broadcast electric business guide
CN102789804B (en) * 2011-05-17 2016-03-02 华为软件技术有限公司 Video broadcasting method, player, monitor supervision platform and audio/video player system
CN102802088B (en) * 2012-08-29 2015-04-15 上海天跃科技股份有限公司 Data transmission method based on real-time transmission protocol
CN103532923B (en) * 2012-11-14 2016-07-13 Tcl集团股份有限公司 A kind of real-time media stream transmission method and system
CN104581406A (en) * 2014-12-25 2015-04-29 桂林远望智能通信科技有限公司 Network video recording and playback system and method
CN105959310B (en) * 2016-07-01 2019-09-10 北京小米移动软件有限公司 Frame alignment method and apparatus


Also Published As

Publication number Publication date
CN106686466A (en) 2017-05-17

Similar Documents

Publication Publication Date Title
CN111135569B (en) Cloud game processing method and device, storage medium and electronic equipment
US8774413B2 (en) Method and apparatus for processing entitlement control message packets
US8442052B1 (en) Forward packet recovery
US9191158B2 (en) Communication apparatus, communication method and computer readable medium
US9100180B2 (en) Method, device and communication system for retransmission based on forward error correction
DE60128409T2 (en) Method and apparatus for decompressing packet header data
JP5875725B2 (en) Content reproduction information estimation apparatus, method, and program
KR20120042833A (en) Backward looking robust header compression receiver
EP2086174A1 (en) A method and system of multimedia service performance monitoring
EP2622819B1 (en) Determining loss of ip packets
CN104270684A (en) Video and audio data network transmission system and method oriented to real-time application
CN110958331A (en) Data transmission method and terminal
CN106686466B (en) Video data positioning method
KR20110090596A (en) Jitter Correction Method and Device
US20130007567A1 (en) Adaptive encoding and decoding for error protected packet-based frames
US20180255325A1 (en) Fault recovery of video bitstream in remote sessions
CN104410927A (en) Low-redundancy compensation method of video transmission packet loss in erasure channel
CN115942000B (en) H.264 format video stream transcoding method, device, equipment and medium
CN110740133A (en) network voting and election method and system based on RTMP protocol
JP7562485B2 (en) Streaming server, transmission method and program
CN106937168B (en) Video coding method, electronic equipment and system using long-term reference frame
CN101296166A (en) Method for measuring multimedia data based on index
CN105407351B (en) A kind of method and apparatus for rebuilding coding mode from Realtime Transport Protocol data packet
CN113973227A (en) Data processing efficiency optimization method and device
CN112087635A (en) Image coding control method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200923

Address after: 4 / F, B, Weiyu longbuji factory building, 2016 Xuegang Road, Gangtou community, Bantian street, Longgang District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen core Ruishi Technology Co.,Ltd.

Address before: 430000 Hubei city of Wuhan Province, East Lake new technology development zone two Road No. 1 International Business Center

Applicant before: WUHAN ZHUOWEI SHIXUN TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A video data location method

Effective date of registration: 20220613

Granted publication date: 20201023

Pledgee: Shenzhen high tech investment and financing Company limited by guarantee

Pledgor: Shenzhen core Ruishi Technology Co.,Ltd.

Registration number: Y2022980007613

PE01 Entry into force of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230928

Granted publication date: 20201023

Pledgee: Shenzhen high tech investment and financing Company limited by guarantee

Pledgor: Shenzhen core Ruishi Technology Co.,Ltd.

Registration number: Y2022980007613

PC01 Cancellation of the registration of the contract for pledge of patent right