
CN111931156A - Image processing method, image processing device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111931156A
CN111931156A
Authority
CN
China
Prior art keywords
verification
picture
instruction
pictures
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011094432.5A
Other languages
Chinese (zh)
Inventor
李繁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Youmi Technology Shenzhen Co ltd
Original Assignee
Youmi Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Youmi Technology Shenzhen Co ltd filed Critical Youmi Technology Shenzhen Co ltd
Priority to CN202011094432.5A priority Critical patent/CN111931156A/en
Publication of CN111931156A publication Critical patent/CN111931156A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/36User authentication by graphic or iconic representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring a plurality of pictures, wherein the plurality of pictures comprise a first frame picture and a reference picture, and the reference picture is a picture used to indicate that verification has passed; when a verification starting instruction is detected, playing the plurality of pictures frame by frame on an interactive interface, starting from the first frame picture and following a specified sequence; when a verification ending instruction is detected, acquiring an operation result corresponding to the verification ending instruction; and if the operation result meets the verification condition corresponding to the reference picture, the verification passes. In this way, verification can be completed through a simple user operation without invoking an input keyboard, which simplifies the operations required during verification and improves the user experience.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, captcha technology is widely used in the Internet field. By setting tasks that a human can easily perform but an automated program can hardly complete, it can distinguish whether the operating party is a machine or a real person, thereby preventing malicious use. Current verification codes are usually based on picture recognition: the user must interpret a processed picture containing abstract numbers, English letters, Chinese characters, or combinations of object images, and identity verification is completed only after the correct verification code is entered or selected. Such picture-recognition verification codes require the user to select and submit content through an input keyboard; the operation is cumbersome, and the user experience is poor.
Disclosure of Invention
The application provides an image processing method, an image processing apparatus, an electronic device, and a storage medium to address the above drawbacks.
In a first aspect, an embodiment of the present application provides an image processing method, which may include: acquiring a plurality of pictures, wherein the plurality of pictures comprise a first frame picture and a reference picture, and the reference picture is a picture used to indicate that verification has passed; when a verification starting instruction is detected, playing the plurality of pictures frame by frame on an interactive interface, starting from the first frame picture and following a specified sequence; when a verification ending instruction is detected, acquiring an operation result corresponding to the verification ending instruction; and if the operation result meets the verification condition corresponding to the reference picture, determining that the verification passes.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which may include: a picture acquisition module, configured to acquire a plurality of pictures, wherein the plurality of pictures comprise a first frame picture and a reference picture, and the reference picture is a picture used to indicate that verification has passed; a picture playing module, configured to play the plurality of pictures frame by frame on an interactive interface, starting from the first frame picture and following a specified sequence, when a verification starting instruction is detected; a result acquisition module, configured to acquire an operation result corresponding to a verification ending instruction when the verification ending instruction is detected; and a result judging module, configured to determine that the verification passes if the operation result meets the verification condition corresponding to the reference picture.
In a third aspect, an embodiment of the present application provides an electronic device, which may include: one or more processors; a memory; one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of the first aspect described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having program code stored therein, where the program code is called by a processor to execute the method according to the first aspect.
The embodiments of the application provide an image processing method, an image processing apparatus, an electronic device, and a storage medium. A plurality of pictures are acquired, comprising a first frame picture and a reference picture, where the reference picture is a picture used to indicate that verification has passed; when a verification starting instruction is detected, the plurality of pictures are played frame by frame on an interactive interface, starting from the first frame picture and following a specified sequence; when a verification ending instruction is detected, an operation result corresponding to the verification ending instruction is acquired; and if the operation result meets the verification condition corresponding to the reference picture, the verification passes. In this way, verification can be completed through a simple user operation without invoking an input keyboard, which simplifies the operations required during verification and improves the user experience.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 shows a schematic diagram of an application environment suitable for the embodiment of the present application.
Fig. 2 is a flowchart illustrating an image processing method according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating an image processing method according to another embodiment of the present application.
Fig. 4 shows a flow chart of an image processing method according to another embodiment of the present application.
Fig. 5 is a flowchart illustrating an image processing method according to still another embodiment of the present application.
Fig. 6 is a flowchart illustrating step S520 in an image processing method according to an embodiment of the present application.
Fig. 7 is a flowchart illustrating an image processing method according to still another embodiment of the present application.
Fig. 8(a) shows an interface schematic diagram provided in an embodiment of the present application.
Fig. 8(b) shows another interface schematic diagram provided in the embodiment of the present application.
Fig. 8(c) shows a schematic view of another interface provided in the embodiment of the present application.
FIG. 9 shows a block diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 10 shows a block diagram of an electronic device according to an embodiment of the present application for executing an image processing method according to an embodiment of the present application.
Fig. 11 illustrates a block diagram of a computer-readable storage medium for executing an image processing method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In computer networks such as the Internet, it is difficult for a server providing a service to avoid various malicious programs automatically accessing it or submitting data. When a computer program automatically performs operations that should be performed by a person, such as filling in a registration form or accessing resources, server resources are occupied and security problems may arise.
In order to distinguish whether a request received by the server was submitted by a person or by a computer program, the server adopts verification code (captcha) technology: a fully automatic Turing test for telling computers and humans apart, which distinguishes whether the user of the network service is a program or a real person. The verification code is provided by the server to the end user as a test to determine whether the end user is a computer program. Such tests are generally based on problems that computers currently cannot handle, or can handle only with difficulty, so that humans pass the test easily while computer programs cannot pass it, or can do so only at high cost.
Existing verification codes come in many varieties. Traditional verification codes require the end user to recognize a group of randomly generated letters, digits, or characters, and distinguish real users from malicious programs by the recognition result the user enters. Some verification codes provide a series of specially processed pictures and require the end user to click on the picture of the object described in a question. Other verification codes provide the user with a picture containing multiple Chinese characters and require the end user to click the characters in a specified order for verification. All of these picture-recognition verification codes share the problems that the image content is hard to recognize or that the user needs knowledge of particular fields; the user must spend considerable time and effort identifying the required verification code from the pictures, which leads to a poor user experience. Moreover, when a mobile terminal such as a mobile phone uses this method, entering the verification code through an input keyboard is inconvenient, and input errors easily make verification inefficient.
In view of the above defects, after long-term research the inventor devised the image processing method, image processing apparatus, terminal device, and storage medium of the embodiments of the present application, which can perform verification based on a picture verification code through a simple user operation. The verification code is easy to recognize and engaging, no input keyboard needs to be invoked, and the user's verification experience is improved.
In order to better understand an image processing method, an image processing apparatus, a terminal device, and a storage medium provided in the embodiments of the present application, an application environment suitable for the embodiments of the present application is described below.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application scenario of an image processing method according to an embodiment of the present application. The image processing method provided by the embodiments of the application can be applied to the polymorphic interaction system 10 shown in fig. 1. The polymorphic interaction system 10 includes a terminal device 100 and a server 200, the server 200 being communicatively coupled to the terminal device 100. The server 200 may be an independent server, a server cluster, a local server, or a cloud server, which is not specifically limited herein.
The terminal device 100 may be any of various terminal devices having a display screen and supporting data input, including but not limited to a smartphone, a tablet computer, a laptop computer, a desktop computer, a wearable terminal device, and the like. Specifically, data may be input through a voice module for voice, a character input module for text, an image input module for images, or a video input module for video provided on the terminal device 100; alternatively, a gesture recognition module provided on the terminal device 100 may allow the user to interact through gesture input.
In some embodiments, a client application may be installed on the terminal device 100, and the user may communicate with the server 200 through the client application (e.g., an APP or a WeChat applet). Specifically, a corresponding server application is installed on the server 200; the user may register a user account with the server 200 through the client application and communicate with the server 200 based on that account. For example, the user logs into the user account in the client application and, based on the account, inputs text, voice, image, or video information through the client application. After receiving the information input by the user, the client application may send it to the server 200, so that the server 200 can receive, process, and store it; the server 200 may also return corresponding output information to the terminal device 100 according to the received information.
In some embodiments, when the user communicates with the server 200 through the client application, the user may send a verification request to the server 200 through the terminal device 100. After receiving the verification request, the server 200 may send a verification code to the terminal device 100; the terminal device 100 displays the verification code, acquires the user's verification code operation information, and sends the operation information to the server 200. After receiving the verification code operation information sent by the terminal device 100, the server 200 determines a verification result from it and returns the result to the terminal device 100; only when verification succeeds does the terminal device 100 gain the right to access the server application. As one way, the server 200 may send both the verification code and the verification condition to the terminal device 100, so that the terminal device 100 can determine the verification result itself without sending the verification code operation information to the server 200.
In some embodiments, the server on which the server application is installed and the server used for verification may be the same server; after determining the verification result from the verification code operation information, the server 200 may further verify the user account information of the terminal device, such as a user name and a password. As another mode, the server on which the server application is installed and the server used for verification may be different servers; if the terminal device receives a verification result indicating success from the verification server, it may then send the user account information to the server application for authentication, thereby improving the security of the user account information.
The above application environments are only examples for facilitating understanding, and it is to be understood that the embodiments of the present application are not limited to the above application environments.
The following describes in detail an image processing method, an image processing apparatus, a terminal device, and a storage medium provided by embodiments of the present application with specific embodiments.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an image processing method according to an embodiment of the present application. The image processing method of this embodiment may be applied to a terminal device, which may be the above-described terminal device with a display screen or another image output apparatus. In a specific embodiment, the image processing method includes steps S210 to S240.
S210: and acquiring a plurality of pictures.
The terminal device may obtain a plurality of pictures for verification. The plurality of pictures may include a first frame picture and a reference picture, where the first frame picture is the first picture shown during frame-by-frame playback, the reference picture is a picture used to indicate that verification has passed, and the first frame picture and the reference picture may be different pictures. The plurality of pictures may be consecutive frames that form an animation or a video, for example the consecutive frames of an advertisement video; they may also be unrelated pictures, for example pictures of pupils, fruits, or buildings that have nothing to do with one another; or they may be pictures composed of several elements, where the elements may be numbers, letters, characters of various languages, musical symbols, and the like.
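As a rough illustration of the picture set described above, the following sketch shows one possible structure; the names `VerificationPictures`, `first`, and `reference` are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class VerificationPictures:
    """A sequence of pictures for verification (hypothetical structure).

    frames:    picture identifiers in their natural order
    first:     index of the first frame picture (where playback starts)
    reference: index of the reference picture ("verification passed")
    """
    frames: list
    first: int
    reference: int

    def __post_init__(self):
        # Per the description, the first frame picture and the reference
        # picture may not be the same picture.
        if self.first == self.reference:
            raise ValueError("reference picture must differ from the first frame picture")

# Example set: three landscape pictures plus a car picture used as reference.
pics = VerificationPictures(frames=["scenery1", "scenery2", "scenery3", "car"],
                            first=0, reference=3)
```

Whether the frames come from a server or a local database, only the indices of the first frame picture and the reference picture need to accompany them.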
In some embodiments, the multiple pictures may be pictures sent by the server to the terminal device after the server receives the verification request sent by the terminal device. In other embodiments, the multiple pictures may be pictures stored in a database of the terminal device; after obtaining the verification request, the terminal device may retrieve the multiple pictures from the database.
The verification request may be a request triggered by the user's operation behavior when the terminal device requests access to a web page or a service provided by the client application. As one way, the operation behavior may be a verification operation: the terminal device generates a verification request when verifying the user's identity, for example during a transaction, account registration, account login, or information transmission performed with the terminal device. As another way, the terminal device may also generate the verification request upon detecting an abnormal operation of the user, such as logging into the application too many times within a certain period, logging into the account from an unusual location, or repeatedly entering a wrong password; the abnormal operation is not limited herein.
In some embodiments, the terminal device may acquire the multiple pictures from a server or a database, and acquire a first frame picture and a reference picture of the multiple pictures. When the acquired multiple pictures are consecutive multiple frame images forming a dynamic picture or a video, the first frame picture may be a first frame picture corresponding to the dynamic picture or the video.
In other embodiments, the terminal device may acquire the still picture and the generation parameter, generate the multiple pictures based on the still picture and the generation parameter, and determine a first frame picture and a reference picture in the multiple pictures. Compared with obtaining multiple pictures, obtaining a static picture and generating parameters can greatly reduce the amount of transmitted data, and particularly, please refer to the following embodiments.
S220: when a verification starting instruction is detected, the multiple pictures are played frame by frame from the first picture on the interactive interface according to the specified sequence.
The verification starting instruction is used for instructing the terminal equipment to execute the operation of starting to play the plurality of pictures frame by frame from the first frame picture in the specified sequence.
The interactive interface may be an interface displayed on the terminal device and triggered, for verification purposes, by an action of the user. For example, when a user logs into a website through the terminal device and enters account information such as a user name and password, the website server performs a risk assessment on the account information. If it detects that the user is logging in from an unusual location, a risk may exist; the website's login controls can then be hidden and an interactive interface triggered to verify that the user is a real person rather than a computer program, thereby preventing malicious attacks by computer programs.
In some embodiments, before the interactive interface is displayed, component resources of the verification code matched with the interface configuration may also be obtained and loaded according to the interface configuration of the terminal device. The component resources of the verification code may include, but are not limited to, JS (JavaScript, a scripting language widely used for client-side Web development) resources, CSS (Cascading Style Sheets) resources, Flash resources, and the like. Specifically, different websites may set interface configurations as required, for example the background size of the interactive interface, the position of the interactive interface, and the display size of the verification code pictures; that is, for different terminal device interface configurations, the component resources of the verification code may differ.
As one mode, the sizes of the acquired pictures can be adjusted according to the component resources of the verification code so that the picture sizes correspond to the interactive interface and the pictures do not cross its border. Alternatively, the pictures may be obtained according to the component resources of the verification code. Specifically, the terminal device may send the interface configuration and the verification request to the verification code server, and the verification code server returns, through a preset algorithm, the corresponding component resources for the terminal device's interface configuration together with pictures adapted to that configuration, thereby avoiding pictures crossing or overlapping the border of the interactive interface.
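One way to adjust picture sizes so they stay inside the display area is uniform scaling. The patent does not fix a scaling rule, so the following is only a sketch under that assumption; the function name `fit_to_display` is hypothetical.

```python
def fit_to_display(pic_w, pic_h, area_w, area_h):
    """Scale a picture to fit inside the verification code display area
    while preserving its aspect ratio, so it cannot cross the border.
    Returns the scaled (width, height)."""
    scale = min(area_w / pic_w, area_h / pic_h)
    return int(pic_w * scale), int(pic_h * scale)

# An 800x600 picture shown in a 400x400 display area becomes 400x300.
scaled = fit_to_display(800, 600, 400, 400)
```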
In some embodiments, the interactive interface may include a verification code display area and an input area, where the verification code display area is the area of the interactive interface in which the verification code is displayed, and the input area is the area of the interactive interface in which the user's operation instructions are acquired. As one mode, the verification code display area and the input area may be the same area of the interactive interface; that is, the verification code picture is displayed in the area and the user's operation instructions are also acquired there. As another mode, the verification code display area and the input area can be different areas of the interactive interface; this mode prevents the user's input operation from covering the content displayed in the verification code display area.
As one mode, verification prompt information can be displayed on the interactive interface to tell the user what counts as passing verification. Specifically, the verification prompt information may be the reference picture displayed on the interactive interface, or a textual description of the reference picture displayed in a text box. For example, if the acquired pictures are three natural-landscape pictures and one car picture, and the reference picture is the car picture, a text box reading "press and hold the preset area until the picture becomes the car picture" may be displayed on the interactive interface, and the car picture itself may also be displayed.
Alternatively, the verification prompt may be played by voice. For example, when the interactive interface is displayed, an audio playing device plays the voice message "press and hold the preset area until the car picture is displayed on the interface" to tell the user what counts as passing verification.
The terminal device can monitor the user's behavior on the interactive interface to obtain the user's verification start instruction.
In some embodiments, the verification start instruction may represent the start of a continuous operation.
As one mode, the verification start instruction may be a point-touch operation received in a preset area of the interactive interface, where the point-touch operation may be a click or a touch and the preset area may be an input area preset on the interactive interface. For example, if the verification instruction is a continuous press on the input area of the interactive interface, the verification start instruction may be the detection of the user's finger touching the screen in the input area. Further, to prevent the user from triggering verification through an accidental touch, a touch operation lasting for a specified time may be used as the verification start instruction, where the specified time is a period starting from the moment the touch operation is detected.
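A minimal sketch of the duration check just described: a touch in the preset area only becomes a verification start instruction after it has persisted for a specified time. The 300 ms threshold is an assumed value; the patent only requires "a specified time".

```python
def is_verification_start(touch_down_ms, now_ms, in_preset_area,
                          required_ms=300):
    """Return True when a touch inside the preset area has lasted at
    least `required_ms` milliseconds since it was first detected."""
    return in_preset_area and (now_ms - touch_down_ms) >= required_ms
```

A brief tap or a touch outside the preset area is thus ignored rather than starting playback.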
Alternatively, the verification start instruction may be an action instruction meeting a preset condition, detected by a sensor or an image capture device. For example, the verification start instruction may be a gesture sensed by a distance sensor, such as the user's hand remaining within 10 centimeters of the sensor. Using an action instruction as the verification start instruction lets the user perform verification without touching the screen, for example when the user's hands are wet and touching the screen is inconvenient.
In other embodiments, the verification start instruction may be a non-persistent instruction, that is, an instruction whose duration is below a specified time threshold, which may be small; for example, the specified time threshold may be the time it takes the user to complete a single tap on the touch screen. The non-persistent verification start instruction may be a click instruction, an action instruction, or a voice instruction.
Specifically, an instruction meeting a first preset condition may be used as a verification start instruction, and an instruction meeting a second preset condition may be used as a verification end instruction, where the first preset condition is used to represent a preset verification start condition, and the second preset condition is used to represent a preset verification end condition after the verification start instruction is detected. The first preset condition and the second preset condition may be the same or different.
For example, when the verification instruction is a voice instruction, the first preset condition may be that the word "start" is detected in the voice, and the second preset condition may be that the word "end" is detected. When the terminal device detects, through its voice acquisition device, that the user's speech contains "start", it judges that the verification start instruction has been detected and plays the plurality of pictures frame by frame on the interactive interface, starting from the first frame picture and following the specified sequence. When it detects that the user's speech contains "end", the terminal device judges that the verification end instruction has been detected.
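The voice example above can be sketched as a small classifier: before a start instruction has been seen, only the first preset condition applies; afterwards, only the second. This is an illustration of the two-condition idea, not the patent's implementation, and the keyword matching is an assumed stand-in for real speech recognition.

```python
def classify_voice_instruction(utterance, started):
    """Map a recognized utterance to a verification instruction.

    `started` is True once a verification start instruction has already
    been detected; the first preset condition is "start" appearing in
    the utterance, the second is "end" appearing in it."""
    if not started and "start" in utterance:
        return "verification_start"
    if started and "end" in utterance:
        return "verification_end"
    return None
```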
When a verification starting instruction is detected, the plurality of pictures are played frame by frame on the interactive interface, starting from the first frame picture and following a specified sequence. The specified sequence represents the play order of the pictures, which may be forward or reverse, and may also include the display duration of each picture; the display durations of different pictures may be the same or different. It can be understood that when the pictures change continuously, the shorter each picture's display duration, the faster the pictures change.
In one approach, each of the multiple pictures may have a unique identifier, and the specified order may be used to characterize the playing order of each picture determined according to the picture identifier. As another mode, when the plurality of pictures are a plurality of frame images constituting a moving picture or a video, the specified order is a play order of the frame images corresponding to the moving picture or the video.
In some embodiments, the frame-by-frame playing of the multiple pictures starting from the first picture may be periodic; that is, when the multiple pictures have been played frame by frame in the specified order from the first picture to the last picture, they may be played again starting from the first picture, or played in reverse order from the last picture back to the first picture and then played again starting from the first picture. Compared with non-periodic playing, periodic playing has two advantages: on one hand, less picture data needs to be acquired, which reduces the amount of data transmitted over the network; on the other hand, the played pictures are easier for the user to identify, because when the user inadvertently misses the reference picture that matches the verification prompt information, the user can more easily identify it in the next period, which reduces the user's cognitive burden.
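The periodic playback described above can be sketched as a pure function that maps elapsed time to the index of the picture currently shown. This is an illustrative sketch, not part of the claimed method; the function name, parameters, and the ping-pong (reverse-order repeat) option are assumptions:

```python
def frame_at(elapsed_ms, num_frames, frame_ms, reverse_on_repeat=False):
    """Return the index of the picture shown `elapsed_ms` after playback starts.

    Hypothetical helper: pictures are played frame by frame in the specified
    order and repeated periodically. With `reverse_on_repeat`, playback
    ping-pongs (forward to the last picture, backward to the first, and so on),
    matching the reverse-order repeat variant described above.
    """
    step = elapsed_ms // frame_ms          # how many frames have elapsed
    if not reverse_on_repeat:
        return step % num_frames           # simple cyclic repeat
    # Ping-pong: one full cycle covers n frames forward plus n - 2 backward.
    cycle = 2 * num_frames - 2
    pos = step % cycle
    return pos if pos < num_frames else cycle - pos
```

With 4 pictures shown 100 ms each, simple cyclic playback yields indices 0, 1, 2, 3, 0, 1, …, while the ping-pong variant yields 0, 1, 2, 3, 2, 1, 0, 1, ….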
S230: and when the verification ending instruction is detected, acquiring an operation result corresponding to the verification ending instruction.
In some embodiments, before determining whether the verification end instruction is detected, it may be determined whether the time interval between the current time and the time when the verification start instruction was acquired is greater than a preset time threshold, where the preset time threshold is used to characterize the maximum period for which the verification end instruction is awaited after the verification start instruction is acquired. If the time interval is greater than the preset time threshold, multiple pictures may be acquired again for verification, or the user may be prompted to input the verification start instruction again. This avoids waiting indefinitely for the user to input a verification end instruction, which on one hand saves power consumption, and on the other hand prevents malicious attacks that occupy the website or the verification server for a long time.
The verification ending instruction corresponds to the verification starting instruction, and different verification starting instructions correspond to different verification ending instructions.
In some embodiments, when the verification start instruction is a start instruction used to characterize a persistent operation, the verification end instruction may be an end instruction used to characterize the end of that persistent operation.
As one mode, when the verification start instruction is an instruction of receiving a touch operation in a preset area of the interactive interface, the verification end instruction may be an instruction of stopping the touch operation after the touch operation lasts for a specified time, and the specified time may be a preset period of time starting from the verification start instruction. For example, the verification end instruction may be that the user releases after touching the screen for more than a specified time, i.e., that the user's finger is detected to be off the screen.
Alternatively, when the verification start instruction is an action instruction satisfying a preset condition, the verification end instruction may be an instruction representing that the action instruction ends after lasting for a specified time. For example, the verification end instruction may be an instruction to detect that the gesture motion of the user disappears after sensing the gesture motion for a certain period of time by the distance sensor.
In other embodiments, when the verification start instruction is a non-persistent instruction, the verification end instruction is a non-persistent instruction corresponding to the verification start instruction. The verification end instruction may be a non-persistent click instruction, a non-persistent action instruction, or a voice instruction.
As one mode, when the verification start instruction is a first touch instruction detected in a preset area of the interactive interface, a second touch instruction detected in the preset area is used as a verification end instruction. For example, after a first mouse click operation is detected in a preset area of the interactive interface, a plurality of pictures are played on the interactive interface frame by frame from a first picture according to a specified sequence, and a second mouse click operation detected in the preset area is used as a verification ending instruction.
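The two-tap scheme above (first touch in the preset area starts verification, second touch ends it) can be sketched as a small state machine. The class and method names are illustrative assumptions:

```python
class TapVerification:
    """Hypothetical state machine for non-persistent start/end instructions:
    the first tap inside the preset area is the verification start instruction,
    and the second tap inside the same area is the verification end instruction.
    """

    def __init__(self):
        self.started = False
        self.finished = False

    def on_tap(self, x, y, area):
        """`area` is (left, top, right, bottom) of the preset region."""
        left, top, right, bottom = area
        if not (left <= x <= right and top <= y <= bottom):
            return None  # taps outside the preset area are ignored
        if not self.started:
            self.started = True
            return "start"   # begin frame-by-frame playback here
        if not self.finished:
            self.finished = True
            return "end"     # capture the operation result here
        return None          # further taps have no effect
```

A mouse-click variant works identically: the caller feeds click coordinates into `on_tap` and reacts to the returned `"start"`/`"end"` events.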
When the verification ending instruction is detected, an operation result corresponding to the verification ending instruction may be obtained, where the operation result corresponding to the verification ending instruction may be a time interval between the detection of the verification starting instruction and the detection of the verification ending instruction, or may be a picture displayed on an interactive interface when the verification ending instruction is detected.
Receiving the user's verification end instruction and obtaining the operation result corresponding to the verification end instruction for verification requires no input keyboard to be called out, so the verification process of the verification code becomes simpler and less time-consuming. This facilitates the verification operation of users on various terminals, and especially of users of small-screen devices such as smart phones.
S240: and if the operation result meets the verification condition corresponding to the reference picture, the verification is passed.
After the operation result corresponding to the verification end instruction is obtained, the operation result can be judged according to the verification condition corresponding to the reference picture. In some embodiments, the terminal device may send the operation result to a server for verification, and the server determines whether the operation result satisfies a verification condition corresponding to the reference picture, and if the verification condition is satisfied, the server sends information that the verification is passed to the terminal device. In other embodiments, the verification condition may also be stored in a database local to the terminal device, and the terminal device determines whether the operation result passes the verification based on the verification condition corresponding to the reference picture.
Specifically, the verification conditions corresponding to different operation results may also be different. As one way, when the operation result corresponding to the verification end instruction is a time interval between the detection of the verification start instruction and the detection of the verification end instruction, the verification condition may be that the time interval satisfies a time threshold. Alternatively, when the operation result corresponding to the verification ending instruction is a picture displayed on the interactive interface when the verification ending instruction is detected, the verification condition may be matching with the picture. Specifically, please refer to the following embodiments, which are not described herein.
In some embodiments, when the obtained operation result does not satisfy the verification condition corresponding to the reference picture, the picture verification code may be updated. Optionally, after acquiring a plurality of pictures in step S210, if an instruction for switching the verification code input by the user is acquired, the picture verification code may also be updated.
In some embodiments, step S210 may be performed to retrieve a plurality of pictures for verification. In other embodiments, the verification condition corresponding to the currently displayed multiple pictures may also be updated, and the verification start instruction of the user may be obtained again. Furthermore, the number of times of updating the picture verification code in a specified time period can be recorded, and if the number of times of updating the picture verification code is greater than a specified value, the picture verification code can be updated according to a preset updating rule.
As one way, the preset update rule may be to use a picture that is more difficult to recognize as a verification code to increase the difficulty of verification. For example, pixels of the picture may be blurred, local images in the picture may be deformed, interference elements may be introduced into the picture, and so on. The preset update rule may also increase the difficulty of verification by setting a verification condition that is more difficult to satisfy. For example, a time threshold in the verification condition is reduced, etc.
Alternatively, the preset update rule may be to lock the current interactive interface for a period of time, and the user cannot perform the verification operation based on the interactive interface within the locked time. For example, when the number of times that the user updates the picture verification code is greater than the specified value of 5 times, the interactive interface may be locked for 1 minute, and "please wait for 1 minute and then perform verification" may be displayed in the form of a text box.
As another way, the preset update rule may be to increase the time interval required for updating the picture verification code and increase the difficulty of verification.
According to the image processing method provided by this embodiment of the application, multiple pictures are acquired; when a verification start instruction is detected, the multiple pictures are played frame by frame from the first picture on the interactive interface in the specified order; when a verification end instruction is detected, the operation result corresponding to the verification end instruction is acquired; and if the operation result satisfies the verification condition corresponding to the reference picture, the verification passes. Therefore, without calling out an input keyboard, whether the verification passes can be determined by acquiring the user's verification start instruction, verification end instruction, and the corresponding operation result, which simplifies the user operations required in the verification process and improves the user experience.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating an image processing method according to another embodiment of the present application, where an execution subject of the method may be a terminal device having a display screen or other image output apparatus, and the method includes: s310 to S340.
S310: and acquiring a plurality of pictures.
S320: when a verification starting instruction is detected, the multiple pictures are played frame by frame from the first picture on the interactive interface according to the specified sequence.
S330: when a verification end instruction is detected, a time interval between the detection of the verification start instruction and the detection of the verification end instruction is acquired.
In some embodiments, before determining whether the verification end instruction is detected, it may be determined whether a time interval between the current time and the time for obtaining the verification start instruction is greater than a preset time threshold.
When the verification end instruction is detected, a time interval between the detection of the verification start instruction and the detection of the verification end instruction can be acquired, and the time interval is used as an operation result corresponding to the verification end instruction.
In some embodiments, the time when the verification start instruction is detected may be recorded as a first time point, the time when the verification end instruction is detected may be recorded as a second time point, and the time interval may be obtained by calculating the time difference between the second time point and the first time point. For example, the verification instruction is a gesture action: the terminal device detects the user's gesture operation at 23:50:00, which is taken as the first time point; the terminal device senses that the user's gesture operation disappears at 23:51:20, which is taken as the second time point; the time difference between the second time point and the first time point is 1 minute 20 seconds, which is taken as the time interval between the detection of the verification start instruction and the detection of the verification end instruction.
In other embodiments, an internal timer may be used for timing when the verification start command is detected, and the timing is stopped when the verification end command is detected, and the timing time obtained by the timer is taken as the time interval. For example, the verification instruction is a continuous point touch operation, when the terminal device detects the point touch operation of the user, the timer starts to count time from 0.0 second, after 4.5 seconds, the user stops the point touch operation, and when the terminal device detects that the point touch operation of the user disappears, the timer stops counting time, and 4.5 seconds are taken as a time interval between the detection of the verification start instruction and the detection of the verification end instruction.
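The internal-timer approach can be sketched in a few lines. This is a minimal illustration; the class name is an assumption, and a monotonic clock is used so the measured interval is unaffected by wall-clock adjustments:

```python
import time


class VerificationTimer:
    """Minimal sketch of the internal-timer approach: start timing when the
    verification start instruction is detected, stop when the verification
    end instruction is detected, and report the elapsed interval in seconds.
    """

    def start(self):
        # time.monotonic() is immune to system clock changes, unlike time.time()
        self._t0 = time.monotonic()

    def stop(self):
        return time.monotonic() - self._t0  # interval between start and end
```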
S340: and if the difference value between the time interval and the reference time length is smaller than the specified error threshold value, the verification is passed.
The reference duration is used for representing the time required for playing from the first frame picture to the reference picture according to a specified sequence, and the reference duration can be determined according to the number of the acquired multiple pictures, the display duration corresponding to each picture and the picture playing sequence. It is understood that whether the user inputs the verification ending instruction when displaying the reference picture on the interactive interface can be judged according to the time interval, so as to judge whether the verification is passed.
In some embodiments, when the playing of the multiple pictures frame by frame starting from the first picture in the specified order is periodic cyclic playing, the number of complete playing periods that elapse between the detection of the verification start instruction and the detection of the verification end instruction may be acquired, and the reference duration may be determined according to that number. Specifically, let the length of one period be T; that is, the pictures are played from the first picture in the specified order, and after time T has elapsed, they are played from the first picture in the specified order again. Within one period, the time required to play from the first frame picture to the reference picture in the specified order is t, where 0 < t < T. If the number of complete periods elapsed between the detection of the verification start instruction and the detection of the verification end instruction is n, the reference duration is n × T + t.
The specified error threshold may be a preset specified value used to characterize the maximum difference between the time interval and the reference duration for which the verification passes. For example, if the reference duration is 10 seconds and the error threshold is 1 second, then when the acquired time interval is greater than 9 seconds and less than 11 seconds, it is determined that the difference between the time interval and the reference duration is less than the specified error threshold, and the verification passes. Determining whether the verification passes according to time is simple and easy to implement, which can reduce the time the user waits for a verification result after inputting the verification operation and improve verification efficiency.
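The timing check can be sketched as follows. All parameter names are assumptions for illustration: `period_t` is the period length T, `offset_t` is the within-period time t to reach the reference picture (0 < t < T), and `n_periods` is the number of complete periods elapsed:

```python
def verification_passes(interval, n_periods, period_t, offset_t, error_threshold):
    """Sketch of the timing-based check described above (assumed names).

    The reference duration is n_periods * period_t + offset_t; verification
    passes when the measured interval differs from the reference duration
    by less than the specified error threshold.
    """
    reference = n_periods * period_t + offset_t
    return abs(interval - reference) < error_threshold
```

For instance, with a period of 12 seconds, 10 seconds to reach the reference picture, no complete periods elapsed, and a 1-second threshold, an interval of 9.5 seconds passes while 11.5 seconds fails.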
In some embodiments, the difficulty of verification may be changed by changing the specified error threshold, and in particular, verification conditions that are more difficult to satisfy may be set by decreasing the specified error threshold, thereby increasing the difficulty of verification. For example, the reference time length when the user performs authentication for the first time is 10 seconds, the error threshold is 1 second, and after the user updates the authentication code for 5 times, the error threshold may be set to 0.5 second, and the user needs to input an authentication end instruction more accurately to pass the authentication.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the foregoing embodiments, and details are not described herein again.
According to the image processing method provided by the embodiment of the application, a plurality of pictures are obtained; when a verification starting instruction is detected, starting to play a plurality of pictures frame by frame from a first picture on an interactive interface according to a specified sequence; when a verification ending instruction is detected, acquiring a time interval between the detection of the verification starting instruction and the detection of the verification ending instruction; and if the difference value between the time interval and the reference time length is smaller than the specified error threshold value, the verification is passed. Therefore, the time interval between the user verification starting instruction and the verification ending instruction can be obtained, whether verification passes or not is judged by comparing the time interval with the reference time length, the judging mode is simple and easy to realize, and therefore the verification efficiency is improved.
Referring to fig. 4, fig. 4 is a flowchart illustrating an image processing method according to another embodiment of the present application, where an execution subject of the method may be a terminal device having a display screen or other image output device, and the method includes: s410 to S440.
S410: and acquiring a plurality of pictures.
S420: when a verification starting instruction is detected, the multiple pictures are played frame by frame from the first picture on the interactive interface according to the specified sequence.
S430: and when a verification ending instruction is detected, acquiring a picture displayed on the interactive interface when the verification ending instruction is detected.
In some embodiments, before determining whether the verification end instruction is detected, it may be determined whether a time interval between the current time and the time for obtaining the verification start instruction is greater than a preset time threshold.
As an implementation manner, when a verification start instruction is detected, a plurality of pictures are played on the interactive interface frame by frame from a first frame picture according to a specified sequence, and when a verification end instruction is detected, the playing of the pictures on the interactive interface can be stopped, and the pictures displayed on the interactive interface when the verification end instruction is detected are acquired. For example, 4 pictures are acquired, the picture numbers of which are 1 to 4 respectively, wherein the first frame of picture is the picture with the number 1, the reference picture is the picture with the number 3, the reference picture and a text prompt are displayed on an interactive interface of the terminal device until the picture with the number 3 is displayed on the interface, when the fact that a finger of a user touches a preset area on the screen is detected, the pictures are played frame by frame on the interactive interface from the picture with the number 1 according to the sequence of the numbers from small to large, and if the fact that the finger of the user leaves the preset area on the screen is detected when the picture with the number 3 is displayed on the interactive interface, the picture with the number 3 is acquired and is used as an operation result corresponding to the verification ending instruction.
S440: and if the picture displayed on the interactive interface is matched with the reference picture when the verification ending instruction is detected, the verification is passed.
And after the picture displayed on the interactive interface when the verification ending instruction is detected is obtained, judging whether the picture is matched with the reference picture, and if the picture is matched with the reference picture, passing the verification.
As one approach, whether the picture matches a reference picture may be determined based on an image algorithm. For example, whether the picture matches with the reference picture may be determined by matching the acquired picture with the reference picture on the basis of gray scale matching or feature matching by an image matching algorithm. For another example, when multiple pictures are played in a specified order, the positions of the verification points on the pictures dynamically change, and whether the acquired picture is matched with the reference picture can be determined by detecting whether the positions of the verification points displayed on the interactive interface at the time of the verification ending instruction coincide with the positions of the verification points on the reference picture.
As another mode, each picture displayed on the interactive interface has a unique identifier; when the verification end instruction is detected, the identifier of the picture displayed on the interactive interface at that moment can be acquired, and if the identifier is the same as the identifier of the reference picture, it is determined that the verification passes. Optionally, the identifiers of several pictures adjacent to the reference picture in the specified order (before and after it) may be acquired; if the identifier of the picture displayed on the interactive interface when the verification end instruction is detected indicates that it belongs to these adjacent pictures, the picture is determined to match the reference picture.
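The identifier-based matching, including the optional tolerance window of adjacent pictures, can be sketched as below; the function name, the `tolerance` parameter, and the use of a position lookup are illustrative assumptions:

```python
def matches_reference(shown_id, reference_id, picture_ids, tolerance=0):
    """Sketch of identifier-based matching (assumed names): the picture shown
    when the verification end instruction is detected matches the reference
    picture if its identifier equals the reference identifier, or optionally
    lies within `tolerance` positions of it in the specified playing order.
    """
    if shown_id == reference_id:
        return True
    # Map each identifier to its position in the specified playing order.
    order = {pid: i for i, pid in enumerate(picture_ids)}
    if shown_id not in order or reference_id not in order:
        return False
    return abs(order[shown_id] - order[reference_id]) <= tolerance
```

With `tolerance=0` only an exact identifier match passes; with `tolerance=1` the pictures immediately before and after the reference picture also pass.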
In some embodiments, verification conditions that are more difficult to satisfy may be set. For example, the difficulty of authentication may be increased by reducing the display time of the reference picture when the pictures are played frame by frame, and it is understood that the shorter the display time of the reference picture is, the more difficult it is for the user to input an authentication end instruction while the reference picture is displayed, and thus the more difficult it is to pass the authentication.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the foregoing embodiments, and details are not described herein again.
According to the image processing method provided by the embodiment of the application, a plurality of pictures are obtained; when a verification starting instruction is detected, starting to play a plurality of pictures frame by frame from a first picture on an interactive interface according to a specified sequence; when a verification ending instruction is detected, acquiring a picture displayed on an interactive interface when the verification ending instruction is detected; and if the picture displayed on the interactive interface is matched with the reference picture when the verification ending instruction is detected, the verification is passed. Therefore, whether the verification passes can be judged by judging whether the picture displayed on the interactive interface is matched with the reference picture when the verification ending instruction is sent, the difficulty of cracking the verification condition by a machine is increased, and the verification safety is improved.
Referring to fig. 5, fig. 5 is a flowchart illustrating an image processing method according to still another embodiment of the present application, where an execution subject of the method may be a terminal device having a display screen or other image output apparatus, and the method includes: s510 to S550.
S510: obtaining a static picture and generating parameters.
The static picture and the generation parameter can be sent to the terminal equipment by the server after the server receives a verification request sent by the terminal equipment; the still picture and the generation parameter may also be stored in the terminal device in advance.
The still picture may be a real picture obtained by shooting or a composite picture, the format of the still picture may be JPEG, PNG, BMP or RGB type, and the still picture may be a gray-scale picture or a color picture, which is not limited herein.
The generation parameters are used for representing parameters required for generating a plurality of pictures. In some embodiments, the generation parameters are parameters that can be used to generate an object that changes in a specified motion pattern. The motion mode may be various changes such as rotation, deformation, displacement, and color change, which is not limited herein. As one way, the object may be a partial image or a solid element in a still picture. For example, an elephant in a still picture containing the elephant, a cloud in a still sky picture containing the cloud, etc. Alternatively, the object may be a specified object generated from the generation parameters. For example, a vector element of a shape such as a pentagram, a circle, etc. generated by the generation parameter.
In some embodiments, the generation parameter may be a parameter required for changing the still picture, and in particular, the generation parameter may be a parameter for performing operations of picture cropping, size scaling, rotation, and the like on the still picture. For example, the still picture may be rotated clockwise by 90 degrees according to the generation parameter, so as to obtain the rotated still picture.
S520: and generating a plurality of pictures based on the static pictures and the generation parameters.
In some embodiments, the still picture may be changed according to the generation parameter to generate a plurality of pictures, for example, the still picture may be cut into 9 sub-pictures in a squared manner using two horizontal lines and two vertical lines according to the generation parameter, and each sub-picture is rotated according to the generation parameter to obtain the plurality of pictures, and one picture is determined as a first frame picture and one picture is determined as a reference picture in the plurality of pictures.
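The nine-square cut described above can be sketched by computing the bounding boxes of the 9 sub-pictures; the actual cropping and rotation of pixel data would then be applied per box (the function name and box format are assumptions):

```python
def nine_grid_boxes(width, height):
    """Sketch of the nine-square cut described above: two horizontal and two
    vertical cutting lines divide a width x height still picture into 9
    sub-picture boxes, each given as (left, top, right, bottom)."""
    xs = [0, width // 3, 2 * width // 3, width]   # vertical cut positions
    ys = [0, height // 3, 2 * height // 3, height]  # horizontal cut positions
    return [(xs[c], ys[r], xs[c + 1], ys[r + 1])
            for r in range(3) for c in range(3)]
```

Each box can then be cropped out, rotated according to the generation parameter, and pasted back to form one picture in the sequence.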
In some embodiments, one or more designated objects that change in a designated motion mode may be generated according to the generation parameters, and the generated designated objects are rendered on the still picture respectively by an image rendering technique to obtain multiple pictures in which the designated objects change on the still picture, where one or more designated objects may exist on one picture depending on the generation parameters. For example, when the designated object is a vector graphic of a pentagram shape and the still picture is a picture of the sky, one pentagram shape may be generated on the picture of the sky according to the generation parameter, or a plurality of pentagram shapes may be generated on the picture of the sky. As one mode, the designated object may be a discontinuously changing object. For example, the circle in the first picture is generated in the upper left corner of the still picture, while the circle in the second picture is generated in the upper right corner of the still picture.
In some embodiments, the image of the local area of the still picture may be changed based on the generation parameter to generate multiple pictures with continuously changing local areas corresponding to the still picture, and specifically, referring to fig. 6, step S520 may include:
s521: and generating a plurality of area images which continuously change according to the generation parameters.
The multiple area images can be generated based on local images of the static pictures or generated according to specified objects corresponding to the generation parameters; each region image corresponds to a display position of the still picture, the display position may be a position coordinate of the specified position of the region image corresponding to the still picture, and the specified position may be a center point of the region image or a vertex of the region image, which is not limited herein. The display position corresponding to the image of each region may be the same or different.
The multiple pictures may include a first frame picture and a reference picture, the first frame picture may be a still picture or a picture other than the still picture in the multiple pictures, and the reference picture is a picture used for representing that the verification passes in the multiple pictures.
In some embodiments, the local images in the still picture may be identified, and a plurality of local images that continuously change may be generated according to the generation parameter, and the plurality of local images may be regarded as a plurality of region images. For example, when the still picture is a picture including an elephant, the elephant in the still picture can be identified through an image identification algorithm, a plurality of area images with continuously changing motions of the elephant are generated according to the generation parameters, and the display position of each area image in the still picture is determined.
In some embodiments, a plurality of area images including the designated object that changes continuously may be generated according to the generation parameter, and each area image may include one or a plurality of designated objects. For example, the designated object is a circle, and a region image including a circle whose diameter gradually increases from 1 cm to 10 cm may be generated according to the generation parameter, that is, the diameter of the circle on the first region image is 1 cm, the diameter of the circle on the second region image is 1.5 cm, the diameter of the circle on the third region image is 2 cm, and so on, it is understood that the smaller the difference in diameter between adjacent pictures is, the smoother the dynamic effect when pictures are played frame by frame is.
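The gradually growing circle in the example above amounts to generating one region image per diameter value; a sketch of the diameter sequence (names and defaults are illustrative assumptions) shows how the step size controls smoothness:

```python
def circle_diameters(start=1.0, end=10.0, step=0.5):
    """Sketch of the gradually growing circle example above: one region image
    is generated per diameter, from `start` cm to `end` cm inclusive. A
    smaller step gives a smoother dynamic effect during frame-by-frame play.
    """
    n = int(round((end - start) / step)) + 1  # number of region images
    return [start + i * step for i in range(n)]
```

With the defaults this yields diameters 1.0, 1.5, 2.0, …, 10.0, matching the example's first three region images.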
In some embodiments, the image content of the plurality of region images may be constant, and the display position corresponding to each region image may be continuously changed. For example, the plurality of area images are images of a flower recognized in the still picture, and the display positions corresponding to the plurality of area images may be horizontally and gradually moved from the position of the flower in the still picture to the position on the right side of the still picture according to the generation parameter.
In other embodiments, the image content of the plurality of region images may be continuously changed, and the corresponding display position of each region image may be maintained. For example, an image of a flower recognized in a still picture may be generated, and a plurality of area images gradually blooming by a flower bud may be generated according to the generation parameter.
In still other embodiments, both the image content of the plurality of area images and their corresponding display positions may change continuously. For example, a plurality of area images in which the flower bud gradually blooms may be generated according to the generation parameter, and the display positions corresponding to these area images may move gradually and horizontally from the position of the flower in the still picture to a position on the right side of the still picture.
It can be understood that the smaller the difference between adjacent area images, the smoother the change between pictures and the better the visual effect when the pictures synthesized from the area images and the still picture are played frame by frame in the specified order. The difference between area images may be a difference in image content between the area images of pictures adjacent in the specified order, or a difference in display position. In practical applications, the number of area images to generate may be determined by the required visual effect.
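The trade-off described above can be made concrete with a small helper: if an attribute must change by some total amount and adjacent frames may differ by at most a chosen step, the minimum frame count follows directly (the function name and formula are illustrative assumptions):

```python
import math

def frames_needed(total_change, max_step):
    """Minimum number of area images so that adjacent images differ
    by no more than max_step in the changing attribute."""
    return math.ceil(total_change / max_step) + 1

# Diameter growing from 1 cm to 10 cm with at most 0.5 cm per frame:
n = frames_needed(9.0, 0.5)  # 19 area images
```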
S522: and synthesizing each region image with the static picture according to the display position corresponding to each region image to generate a plurality of pictures.
After the plurality of continuously changing area images are generated according to the generation parameters and the display position corresponding to each area image is obtained, each area image can be synthesized with the still picture at its display position to generate multiple pictures. Specifically, an area image may be rendered at its corresponding display position in the still picture to produce a picture containing both the area image and the content of the still picture. After the first picture is generated from the still picture, each subsequent picture may be generated from the previous one, so that a dynamic visual effect is produced when the generated pictures are played frame by frame.
One of the generated pictures can be used as the first frame picture and another as the reference picture; when the pictures are played frame by frame from the first frame picture in the specified order, a dynamically changing effect is presented visually.
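A minimal sketch of the synthesis step of S522, using character grids in place of real raster images (the grid representation and function names are assumptions made for illustration):

```python
def composite(still, region, pos):
    """Paste `region` (a small 2D grid) onto a copy of `still`
    at top-left position pos=(row, col), producing one frame."""
    frame = [row[:] for row in still]  # copy, so the still picture is unchanged
    r0, c0 = pos
    for r, row in enumerate(region):
        for c, pixel in enumerate(row):
            frame[r0 + r][c0 + c] = pixel
    return frame

def generate_frames(still, regions_with_pos):
    """One frame per (area image, display position) pair; played in
    order, the pasted region appears to change continuously."""
    return [composite(still, reg, pos) for reg, pos in regions_with_pos]

still = [["." for _ in range(6)] for _ in range(4)]
# Both the content and the display position of the region change:
regions = [([["o"]], (1, 1)), ([["o", "o"]], (1, 2))]
frames = generate_frames(still, regions)
```

A real implementation would operate on pixel buffers, but the structure — copy the still picture, render the area image at its display position, emit one picture per area image — is the same.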
S530: when a verification starting instruction is detected, the multiple pictures are played frame by frame from the first picture on the interactive interface according to the specified sequence.
S540: and when the verification ending instruction is detected, acquiring an operation result corresponding to the verification ending instruction.
S550: and if the operation result meets the verification condition corresponding to the reference picture, the verification is passed.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the foregoing embodiments, and details are not described herein again.
According to the image processing method provided by this embodiment of the application, a still picture and generation parameters are obtained; multiple pictures are generated based on the still picture and the generation parameters; when a verification start instruction is detected, the multiple pictures are played frame by frame, starting from the first frame picture, on the interactive interface in the specified order; when a verification end instruction is detected, the picture displayed on the interactive interface at that moment is obtained; and if that picture matches the reference picture, the verification passes. In this way, multiple pictures can be generated in real time from a still picture and generation parameters; compared with transmitting the multiple pictures themselves, the amount of data carried over the network can be greatly reduced, improving verification performance.
Referring to fig. 7, fig. 7 is a flowchart illustrating an image processing method according to still another embodiment of the present application, where an execution subject of the method may be a terminal device having a display screen or other image output apparatus, and the method includes: s610 to S660.
S610: obtaining a static picture and generating parameters.
The still picture may be a real picture obtained by shooting or a synthesized picture. For example, the still picture may be synthesized from a plurality of different vector graphics. The format of the still picture may be JPEG, PNG, BMP, or RGB, and the still picture may be a grayscale picture or a color picture; neither is limited herein.
The generating parameters may include a position parameter and a shape parameter, the shape parameter may be used to generate a plurality of region images, and the position parameter may be used to represent a display position of the still picture corresponding to each region image.
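One hypothetical way to bundle the two kinds of generation parameters described above (all field names and default values here are assumptions, not taken from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class ShapeParams:
    """Controls how the graphics in the graphic set change frame to frame."""
    kind: str = "circle"
    start_size_cm: float = 1.0
    end_size_cm: float = 3.0
    frames: int = 5

@dataclass
class GenerationParams:
    """Shape parameters drive the area images; position parameters give
    each area image a display position on the still picture."""
    shape: ShapeParams = field(default_factory=ShapeParams)
    positions: list = field(default_factory=lambda: [(0, 0)])

params = GenerationParams()
```

Transmitting a structure like this instead of the rendered frames is what lets the client generate the pictures locally.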
S620: and generating area images with continuously changed graphic shapes in the graphic set according to the shape parameters, and determining a display position of each area image corresponding to the static picture according to the position parameters.
The region image may be a plurality of images generated according to the shape parameter, the region image includes a graphic set, the region image in which the shapes of the graphics in the graphic set are continuously changed may be generated according to the shape parameter, and a display position of the still picture corresponding to each region image may be determined according to the position parameter.
The display position may be a position coordinate of the designated position of the region image corresponding to the still picture, and the designated position may be a center point of the region image or a vertex of the region image, which is not limited herein. The display position corresponding to the image of each region may be the same or different.
The graphic set may include one or more graphics. The graphic shape may be used to represent the appearance of a graphic, including shape attributes such as its size, rotation angle, and color. Specifically, a graphic may be a geometric figure such as a circle, a triangle, or a square, or may be of another type such as a letter, a character, or a symbol. The size of a graphic may be its length and width; the rotation angle may be a clockwise or counterclockwise rotation angle. The number of graphics in the graphic set and their shapes are not limited herein. The shapes of the individual graphics in the graphic set may be the same or different. For example, the graphic set may include one circle, or may include one circle and two triangles of different sizes.
The region images in which the shapes of the graphics in the graphic set change continuously may be generated based on the shape parameters; specifically, a plurality of region images in which the shape attributes of the graphics change continuously may be generated. As one way, a single shape attribute may be changed according to the shape parameters. For example, when the graphic is a circle, its size can be changed according to the shape parameter to generate a plurality of area images in which the radius of the circle gradually increases from 1 cm to 3 cm. Alternatively, multiple shape attributes may be changed at once. For example, when the graphic is an equilateral triangle, its size and color can be changed together according to the shape parameters to generate a plurality of area images in which the side length of the triangle gradually increases from 2 cm to 3 cm while its color gradually changes from red to green.
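The two-attribute triangle example (side length 2 cm → 3 cm, color red → green) amounts to linear interpolation of each shape attribute across the frames; a sketch, with the RGB encoding and frame count as assumptions:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def triangle_frames(n=5, size=(2.0, 3.0), color=((255, 0, 0), (0, 255, 0))):
    """Return n shape descriptors in which the triangle's side length
    and color both change gradually; adjacent frames differ only slightly."""
    out = []
    for i in range(n):
        t = i / (n - 1)  # 0.0 .. 1.0 across the sequence
        out.append({
            "side_cm": round(lerp(size[0], size[1], t), 3),
            "rgb": tuple(round(lerp(c0, c1, t)) for c0, c1 in zip(*color)),
        })
    return out

seq = triangle_frames()
```

Any other shape attribute (rotation angle, position) could be interpolated the same way; increasing `n` makes adjacent frames more similar and the playback smoother.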
In some embodiments, continuously changing area images that are difficult to recognize can be generated according to the shape parameters, increasing the difficulty of identifying the changing graphics and thus the difficulty of the verification. As one way, interference elements may be added to the region images, for example, some random dots or lines that are not part of the graphics. As another way, a plurality of graphics may partially overlap; for example, two triangles may share an overlapping region. As yet another way, graphics that are inherently hard to recognize may be generated, for example, a quadrangle whose sides differ greatly in length.
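The first interference technique — scattering random non-graphic dots over an area image — could be sketched as follows, again using a character grid as a stand-in for a raster image (the dot count and marker character are assumptions):

```python
import random

def add_interference(region, n_dots=10, seed=None):
    """Scatter n_dots single-pixel marks ('*') over a copy of `region`
    so the true graphics are harder to segment automatically."""
    rng = random.Random(seed)
    noisy = [row[:] for row in region]  # copy; the input region is untouched
    h, w = len(noisy), len(noisy[0])
    for _ in range(n_dots):
        noisy[rng.randrange(h)][rng.randrange(w)] = "*"
    return noisy

clean = [[" "] * 8 for _ in range(8)]
noisy = add_interference(clean, n_dots=5, seed=42)
```

Fixing the seed makes the interference reproducible for testing; in production the dots would be freshly randomized per verification.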
Compared with identifying local images in the still picture and generating multiple area images in which those local images change continuously, generating area images in which the graphic shapes in the graphic set change continuously according to the shape parameters does not require recognizing the still picture at all; changing a graphic shape is easier to implement than changing a local image, requires fewer computing resources, and is more efficient to generate.
S630: and synthesizing each region image with the static picture according to the display position corresponding to each region image to generate a plurality of pictures.
S640: when a verification starting instruction is detected, the multiple pictures are played frame by frame from the first picture on the interactive interface according to the specified sequence.
Referring to fig. 8(a), fig. 8(a) shows an interface schematic diagram provided in the embodiment of the present application. The interactive interface shown in fig. 8(a) includes a display area 710 for displaying the verification code; the display area 710 displays figures of different shapes such as a triangle, a trapezoid, a circle, and a five-pointed star. The verification prompt information 720, displayed in text form on the interactive interface, is used to represent the condition under which the reference picture passes verification; the verification prompt information 720 may also be displayed in picture form. For example, the verification prompt information 720 may be a schematic diagram of the condition for passing verification. An input area 730 for acquiring a user operation instruction is arranged below the verification code display area 710. Optionally, the verification code display area 710 and the input area 730 may be the same area, and the display positions of the two on the interactive interface are not limited.
Fig. 8(b) shows another interface schematic diagram provided in the embodiment of the present application. When a verification start instruction is detected, the multiple pictures are played frame by frame on the interactive interface shown in fig. 8(b) in the specified order. The area images in these pictures are circles 740 whose shape and position change periodically and continuously: the diameter of the circle 740 gradually increases from 0 to a certain value and then restarts from 0, and the position of the circle 740 also moves dynamically.
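The periodic change of circle 740 (diameter climbing from 0 to a maximum and then restarting from 0) is essentially a sawtooth function of time; a sketch, with the period and maximum diameter as assumed values:

```python
def circle_diameter(t, period=2.0, d_max=5.0):
    """Diameter at time t (seconds): grows linearly from 0 toward d_max
    over one period, then snaps back to 0 and repeats."""
    return (t % period) / period * d_max

# Over each period the diameter rises from 0 toward d_max, then restarts.
```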
Fig. 8(c) shows another interface schematic diagram provided in this embodiment. When the circle 740 in fig. 8(b) changes to the state of the circle 750 in fig. 8(c), there are 3 triangles within the changed circle on the interactive interface, which satisfies the condition in the verification prompt information 720. If a verification end instruction is then detected, that is, if it is detected that the user releases the mouse, the operation result corresponding to the verification end instruction is obtained, and whether the verification passes is determined according to that operation result.
S650: and when the verification ending instruction is detected, acquiring an operation result corresponding to the verification ending instruction.
S660: and if the operation result meets the verification condition corresponding to the reference picture, the verification is passed.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the foregoing embodiments, and details are not described herein again.
According to the image processing method provided by this embodiment of the application, a still picture and generation parameters are obtained; area images in which the graphic shapes in the graphic set change continuously are generated according to the shape parameters, and the display position of each area image on the still picture is determined according to the position parameters; each area image is synthesized with the still picture at its display position to generate multiple pictures; when a verification start instruction is detected, the multiple pictures are played frame by frame, starting from the first frame picture, on the interactive interface in the specified order; when a verification end instruction is detected, the picture displayed on the interactive interface at that moment is obtained; and if that picture matches the reference picture, the verification passes. Obtaining the multiple pictures by generating area images with continuously changing graphic shapes reduces, on the one hand, the amount of data transmitted over the network and the difficulty of generating the pictures; on the other hand, using pictures containing changing graphic shapes for verification also reduces the recognition difficulty for the user and improves the experience of the verification operation.
When a user requests, on a terminal device, access to a web page or a service provided by a client application, an interactive interface for verification may be triggered. A verification code display area and a verification code input area may be shown on this interface; a still image including a plurality of shapes may be displayed in the verification code display area, together with a textual description of the condition for passing verification, such as "long-press the area below until 3 triangles appear inside the changing circle". After the user presses the verification code input area, the circle in the verification code display area begins to change dynamically and periodically: its diameter gradually increases from 0, and once it reaches a certain value it restarts from 0. If the user releases the press at the moment 3 triangles appear inside the changing circle, the verification passes. If the verification fails, the still image shown in the display area can be updated, and the user can perform the verification operation again.
Referring to fig. 9, a block diagram of an image processing apparatus 900 according to an embodiment of the present application is shown, which may include: a picture acquiring module 910, a picture playing module 920, a result acquiring module 930, and a result determining module 940.
The picture obtaining module 910 is configured to obtain multiple pictures, where the multiple pictures include a first frame picture and a reference picture, and the reference picture is a picture used for representing that verification passes.
Further, the picture acquiring module 910 further includes: a parameter acquisition submodule and a picture generation submodule, wherein:
and the parameter acquisition submodule is used for acquiring the static picture and generating parameters.
And the picture generation submodule is used for generating the plurality of pictures based on the static pictures and the generation parameters.
Further, the picture generation sub-module further includes: a region image generating unit and a picture synthesizing unit, wherein:
and the area image generating unit is used for generating a plurality of continuously changed area images according to the generating parameters, and each area image corresponds to one display position of the static picture.
And the picture synthesis unit is used for synthesizing each area image and the static picture according to the display position corresponding to each area image so as to generate the multiple pictures.
Further, the region image includes a graphic set, the generation parameter includes a position parameter and a shape parameter, and the picture synthesis unit includes: a graph generation subunit, wherein:
and the graph generating subunit is used for generating the area images with continuously changed graph shapes in the graph set according to the shape parameters, and determining one display position of the static picture corresponding to each area image according to the position parameters.
The picture playing module 920 is configured to, when a verification start instruction is detected, start to play the multiple pictures frame by frame from the first picture on the interactive interface according to a specified sequence.
The result obtaining module 930 is configured to, when the verification ending instruction is detected, obtain an operation result corresponding to the verification ending instruction.
The result determining module 940 is configured to pass the verification if the operation result meets the verification condition corresponding to the reference picture.
Further, the operation result corresponding to the verification ending instruction is a time interval between the detection of the verification starting instruction and the detection of the verification ending instruction, and the result determining module 940 further includes: a time judgment sub-module, wherein:
and the time judgment submodule is used for passing the verification if the difference value between the time interval and the reference time length is smaller than a specified error threshold, wherein the reference time length is used for representing the time required by playing the first frame picture to the reference picture according to the specified sequence.
Further, the operation result corresponding to the verification ending instruction is a picture displayed on the interactive interface when the verification ending instruction is detected, and the result determining module 940 further includes: a picture judgment sub-module, wherein:
and the picture judgment sub-module is used for passing the verification if the picture displayed on the interactive interface is matched with the reference picture when the verification ending instruction is detected.
Further, in the image processing apparatus 900, the verification start instruction is an instruction for receiving a touch operation in a preset area of the interactive interface, and the verification end instruction is an instruction for stopping the touch operation after the touch operation lasts for a specified time.
The image processing apparatus provided in the embodiment of the present application is used to implement the corresponding image processing method in the foregoing method embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
It can be clearly understood by those skilled in the art that the image processing apparatus provided in the embodiment of the present application can implement each process in the foregoing method embodiment, and for convenience and brevity of description, the specific working processes of the apparatus and the modules described above may refer to corresponding processes in the foregoing method embodiment, and are not described herein again.
Referring to fig. 10, a block diagram of an electronic device according to an embodiment of the present application is shown. The electronic device 1000 may be any electronic device capable of running an application, such as a smart phone, a tablet computer, or an e-book reader. Specifically, in this embodiment of the application, the electronic device 1000 may be the terminal device 100 described above.
The electronic device 1000 in the present application may include one or more of the following components: a processor 1010, a memory 1020, and one or more applications, wherein the one or more applications may be stored in the memory 1020 and configured to be executed by the one or more processors 1010, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
Processor 1010 may include one or more processing cores. The processor 1010 connects various parts of the electronic device 1000 through various interfaces and lines, and performs the various functions of the electronic device 1000 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1020 and invoking data stored in the memory 1020. Optionally, the processor 1010 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1010 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 1010 but instead be implemented by a separate communication chip.
The Memory 1020 may include a Random Access Memory (RAM) or a Read-Only Memory (Read-Only Memory). The memory 1020 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 1020 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing various method embodiments described below, and the like. The data storage area may also store data created by the electronic device 1000 during use (e.g., phone book, audio-video data, chat log data), and the like.
Referring to fig. 11, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 1100 has stored therein program code that can be called by a processor to perform the method described in the above-described method embodiments.
The computer-readable storage medium 1100 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 1100 includes a non-volatile computer-readable storage medium. The computer readable storage medium 1100 has storage space for program code 1110 for performing any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 1110 may be compressed, for example, in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a plurality of pictures, wherein the plurality of pictures comprise a first frame picture and a reference picture, and the reference picture is a picture used for representing that verification passes;
when a verification starting instruction is detected, the plurality of pictures are played frame by frame starting from the first frame picture on an interactive interface according to a specified sequence;
when a verification ending instruction is detected, acquiring an operation result corresponding to the verification ending instruction;
and if the operation result meets the verification condition corresponding to the reference picture, the verification is passed.
2. The method according to claim 1, wherein an operation result corresponding to the verification ending instruction is a time interval between detection of the verification starting instruction and detection of the verification ending instruction, and if the operation result satisfies a verification condition corresponding to the reference picture, the verification is passed, including:
and if the difference value between the time interval and the reference time length is smaller than a specified error threshold value, the verification is passed, and the reference time length is used for representing the time required by playing the first frame picture to the reference picture according to the specified sequence.
3. The method according to claim 1, wherein an operation result corresponding to the verification end instruction is a picture displayed by the interactive interface when the verification end instruction is detected, and if the operation result satisfies a verification condition corresponding to the reference picture, the verification is passed, including:
and if the picture displayed on the interactive interface is matched with the reference picture when the verification ending instruction is detected, the verification is passed.
4. The method of claim 1, wherein the obtaining the plurality of pictures comprises:
obtaining a static picture and generating parameters;
and generating the plurality of pictures based on the static pictures and the generation parameters.
5. The method of claim 4, wherein generating the plurality of pictures based on the static picture and the generation parameters comprises:
generating a plurality of continuously changed area images according to the generation parameters, wherein each area image corresponds to one display position of the static picture;
and synthesizing each region image with the static picture according to the display position corresponding to each region image to generate the multiple pictures.
6. The method according to claim 5, wherein the region image comprises a graphic set, the generating parameters comprise a position parameter and a shape parameter, and the generating a plurality of continuously changing region images according to the generating parameters, each region image corresponding to a display position of the still picture comprises:
and generating the area images with continuously changed graphic shapes in the graphic set according to the shape parameters, and determining one display position of the static picture corresponding to each area image according to the position parameters.
7. The method according to any one of claims 1 to 6, wherein the verification start instruction is an instruction for receiving a touch operation in a preset area of the interactive interface, and the verification end instruction is an instruction for stopping the touch operation after the touch operation lasts for a specified time.
8. An image processing apparatus, characterized in that the apparatus comprises:
the picture acquisition module is used for acquiring a plurality of pictures, wherein the plurality of pictures comprise a first frame picture and a reference picture, and the reference picture is a picture used for representing that verification passes;
the picture playing module is used for starting to play the plurality of pictures frame by frame from the first picture on the interactive interface according to a specified sequence when a verification starting instruction is detected;
the result acquisition module is used for acquiring an operation result corresponding to the verification ending instruction when the verification ending instruction is detected;
and the result judging module is used for passing the verification if the operation result meets the verification condition corresponding to the reference picture.
9. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-7.
10. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 7.
CN202011094432.5A 2020-10-14 2020-10-14 Image processing method, image processing device, electronic equipment and storage medium Pending CN111931156A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011094432.5A CN111931156A (en) 2020-10-14 2020-10-14 Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111931156A true CN111931156A (en) 2020-11-13

Family

ID=73334503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011094432.5A Pending CN111931156A (en) 2020-10-14 2020-10-14 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111931156A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114238924A (en) * 2021-12-17 2022-03-25 北京达佳互联信息技术有限公司 Image resource verification method and device, electronic device and storage medium
CN114722376A (en) * 2022-03-11 2022-07-08 王宏宏 Click type dynamic verification code method
US12361111B2 (en) 2021-04-20 2025-07-15 National Tsing Hua University Verification method and verification apparatus based on attacking image style transfer

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080127302A1 (en) * 2006-08-22 2008-05-29 Fuji Xerox Co., Ltd. Motion and interaction based captchas



Similar Documents

Publication Publication Date Title
CN111767554B (en) Screen sharing method and device, storage medium and electronic equipment
US8141146B2 (en) Authentication server, authentication method and authentication program
US10218506B1 (en) Cross-device authentication
US8751628B2 (en) System and method for processing user interface events
TWI787211B (en) Verification method and device
CN105471808B Method for generating verification code, and method, apparatus and system for security verification
US10007776B1 (en) Systems and methods for distinguishing among human users and software robots
EP2410450A1 (en) Method for providing a challenge based on a content
US10127373B1 (en) Systems and methods for distinguishing among human users and software robots
CN111931156A (en) Image processing method, image processing device, electronic equipment and storage medium
CN103701600A (en) Input validation method and device
CN105354481B (en) Network verification method and network authentication server
CN103971045A (en) Click type verification code implementation method
US12301560B2 (en) Multi-factor authentication using symbols
CN114547581B (en) Method and apparatus for providing a captcha system
CN104811304B (en) Identity verification method and device
TW201734882A (en) Method for inputting verification information
CN110892677A (en) System and method for distinguishing human user from software robot
CN115130086A (en) A method, device and computer equipment for generating a dynamic verification code
JP6057471B2 (en) Authentication system and method using deformed graphic image
CN104699406A (en) Fingerprint triggering operation simulating method, device and terminal
JP2013254468A (en) Inversion tuning test method and access authentication method
JP6057377B2 (en) Authentication system and authentication method using electronic image tally
CN107272920A (en) Method and device for changing correspondence between keys and characters
CN110968849A (en) Verification method, intelligent terminal and device with storage function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201113