
CN110457400B - Event correlation method and device and storage device


Info

Publication number
CN110457400B
CN110457400B
Authority
CN
China
Prior art keywords
information
user
association
receiving
frames
Prior art date
Legal status
Active
Application number
CN201910601950.2A
Other languages
Chinese (zh)
Other versions
CN110457400A
Inventor
武楚荷
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201910601950.2A
Publication of CN110457400A
Application granted
Publication of CN110457400B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 Relational databases
    • G06F16/288 Entity relationship models

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the field of correlation methods, and in particular to an event correlation method, a correlation device and a storage device. The correlation method comprises the following steps: receiving content information of an event input by a user, and generating a first information frame; receiving behavior information, input by the user, that is triggered by the content information, and generating a second information frame; receiving a user's association instruction for different first information frames and second information frames; recording the association relationship between the different first information frames and second information frames the user selects to associate; and outputting and displaying the association relationship to the user. Because the first information frame is generated from the content information, the second information frame is generated from the behavior information, and the two kinds of frames form association relationships with each other, a user can later see at a glance, through the visualized association relationships, how all the information of an event is connected. This helps the user form a clear understanding of the event and makes the event easier to sort out and recall.

Description

Event correlation method and device and storage device
Technical Field
The present invention relates to the field of correlation methods, and in particular, to a correlation method, device and storage device for events.
Background
At present, handheld devices such as mobile phones and tablet computers are becoming widespread. To keep pace with the development of the information market and provide better services for daily life, people often use such handheld devices to record the information corresponding to events, so that the devices serve as reminders and memos.
However, the existing recording tools are limited to the native memo apps on handheld devices, which only allow a single event to be entered and recorded, with each event kept in an isolated state. For a single event, only a rough account of how the event developed can be recorded, together with all the related information the event triggers, such as the ideas it prompts. All of this information ends up as one or more plain passages of text, so when the user later consults it, the correlations between the pieces of information can only be pieced together one by one from memory, which makes for a very poor user experience.
Therefore, designing a method, an apparatus and a storage apparatus for associating events is one of the issues that those skilled in the art focus on.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide an event correlation method, correlation device and storage device that address the above defects in the prior art, namely that in conventional event recording the pieces of information belonging to an event cannot be associated with one another and can only be recorded in a scattered fashion.
In order to solve the technical problem, the invention provides an event correlation method, which comprises the following steps: receiving content information of an event input by a user, and generating a first information frame; receiving behavior information, input by the user, that is triggered by the content information, and generating a second information frame; receiving a user's association instruction for different first information frames and second information frames; recording the association relationship between the different first information frames and second information frames the user selects to associate; and outputting and displaying the association relationship to the user.
Further, the step of receiving content information of an event input by a user and generating a first information frame specifically includes: receiving a first enlargement instruction input by the user, and outputting an enlarged first display area to the user according to the first enlargement instruction; and receiving the content information of the event input by the user in the first display area, and generating the first information frame.
Further, the step of receiving behavior information, input by the user, that is triggered by the content information and generating a second information frame specifically includes: receiving a second enlargement instruction input by the user, and outputting an enlarged second display area to the user according to the second enlargement instruction; and receiving the behavior information, input by the user in the second display area, that is triggered by the content information, and generating the second information frame.
Still further, the step of recording the association relationship between the first information frame and the second information frame associated by the user selection further comprises: and generating an association relation library according to the association relation.
Further, the step of outputting and displaying the association relationship to the user specifically includes: selecting the association relationship in the association relationship library; and outputting an enlarged third display area to the user, wherein the third display area displays the association relationship.
Further, the step of generating an association relation library according to the association relation specifically includes: receiving an association label customized for the association relation by a user; recording the associated tag customized by a user; and storing the association labels and the association relations corresponding to the association labels in the association relation library.
Further, the step of outputting and displaying the association relationship to the user specifically includes: receiving a first-level selection instruction of a user for selecting a first-level associated label, acquiring a second-level associated label corresponding to the first-level associated label according to the first-level selection instruction, and outputting and displaying the second-level associated label to the user in the third display area; and receiving a second-level selection instruction of selecting a second-level associated label by a user, acquiring a plurality of association relations corresponding to the second-level associated label according to the second-level selection instruction, and outputting and displaying the association relations to the user in the third display area.
Still further, the step after receiving the user's association instruction for different first information frames and second information frames further includes: receiving remark information input by the user for the different first information frames and second information frames selected to be associated; and storing the remark information.
Further, the step of receiving remark information input by the user for the different first information frames and second information frames selected to be associated specifically includes: outputting question information according to a preset question path; and receiving remark information, fed back by the user in response to the question information, that is input for the different first information frames and second information frames selected to be associated.
Further, the step after receiving the behavior information induced by the content information and inputted by the user and generating the second information frame further includes: the first information frames and the second information frames are arranged in an array form according to a time axis, and the first information frames and the second information frames are respectively distributed on two sides.
Still further, the behavioral information is one or more of a thought item, a question item, or a next step plan item.
Furthermore, the association instruction is a line association instruction for connecting lines between different first information frames and different second information frames or a graphic association instruction for marking different first information frames and different second information frames with the same graphic.
The invention also provides an event correlation device, which comprises: a first information frame generating unit, configured to receive content information of an event input by a user and generate a first information frame; a second information frame generating unit, configured to receive behavior information, input by the user, that is triggered by the content information and generate a second information frame; an association instruction receiving unit, configured to receive a user's association instruction for different first information frames and second information frames; a recording unit, configured to record the association relationship between the different first information frames and second information frames the user selects to associate; and an output unit, configured to output and display the association relationship to the user.
Further, the first information frame generating unit specifically includes: a first display area output unit, configured to receive a first enlargement instruction input by the user and output an enlarged first display area to the user according to the first enlargement instruction; and a first input unit, configured to receive the content information of the event input by the user in the first display area and generate the first information frame.
Further, the second information frame generating unit specifically includes: a second display area output unit, configured to receive a second enlargement instruction input by the user and output an enlarged second display area to the user according to the second enlargement instruction; and a second input unit, configured to receive the behavior information, input by the user in the second display area, that is triggered by the content information, and generate the second information frame.
Still further, the association apparatus further includes: and the incidence relation library generating unit is used for generating an incidence relation library according to the incidence relation.
Furthermore, the output unit specifically includes: a selecting unit, configured to select the association relationship in the association relationship library; and a third display area output unit, configured to output an enlarged third display area to the user, wherein the third display area displays the association relationship.
Further, the association relation library generating unit specifically includes: the user-defined unit is used for receiving the associated tag which is defined by the user aiming at the associated relation; the associated tag recording unit is used for recording the associated tag customized by the user; a storage unit, configured to store the association tag and the association relationship corresponding to the association tag in the association relationship library.
Furthermore, the output unit specifically includes: the first-level selection instruction unit is used for receiving a first-level selection instruction for selecting a first-level associated label by a user, acquiring a second-level associated label corresponding to the first-level associated label according to the first-level selection instruction, and outputting and displaying the second-level associated label to the user in the third display area; and the second-level selection instruction unit is used for receiving a second-level selection instruction for selecting a second-level associated label by a user, acquiring a plurality of association relations corresponding to the second-level associated label according to the second-level selection instruction, and outputting and displaying the association relations to the user in the third display area.
Still further, the association apparatus further includes: the remark information receiving unit is used for receiving remark information input by a user aiming at different first information frames and second information frames which are selected to be associated; and the remark information storage unit is used for storing the remark information.
Still further, the remark information receiving unit includes: the question unit is used for outputting question information according to a preset question path; and the receiving unit is used for receiving remark information which is fed back by the user aiming at the question information and is input aiming at different first information frames and second information frames which are selected to be associated.
Still further, the association apparatus further includes: and the arrangement unit is used for arranging the first information frames and the second information frames in an array mode according to a time axis, and the first information frames and the second information frames are distributed on two sides respectively.
Still further, the behavioral information is one or more of a thought item, a question item, or a next step plan item.
Furthermore, the association instruction is a line association instruction for connecting lines between different first information frames and different second information frames or a graphic association instruction for marking different first information frames and different second information frames with the same graphic.
The invention also provides an event correlation device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps of the correlation method when executing the computer program.
The invention also provides a storage device storing a computer program executable to implement the steps of the associated method as described above.
Compared with the prior art, in the event correlation method, device and storage device of the invention, the first information frame is generated from the content information, the second information frame is generated from the behavior information, and the two kinds of frames form association relationships with each other. A user can later see at a glance, through the visualized association relationships, how all the information of an event is connected, which helps the user form a clear understanding of the event and makes the event easier to sort out and recall.
Drawings
The invention will be further described with reference to the following drawings and examples, in which:
FIG. 1 is a flow diagram of a method of correlating events of the present invention;
FIG. 2 is a block flow diagram of the present invention for generating a first information box;
FIG. 3 is a block flow diagram of the present invention for generating a second message box;
FIG. 4 is a block flow diagram of the present invention for generating an associative relational library;
FIG. 5 is a block flow diagram of the present invention showing an association relationship;
FIG. 6 is a block diagram of a flow of customizing an associated tag of the present invention;
FIG. 7 is a block diagram of a flow of displaying an association sequentially via a first level selection instruction and a second level selection instruction in accordance with the present invention;
FIG. 8 is a block diagram of a process for arranging a first information frame and a second information frame in accordance with the present invention;
FIG. 9 is a display interface of the association method of the present invention;
FIG. 10(a) is a display interface of the present invention for editing an experience card;
FIG. 10(b) is a display interface for editing cognitive cards in accordance with the present invention;
FIG. 11(a) is a display interface of the associative relational library of the present invention;
FIG. 11(b) is a display interface of an association relationship of the present invention;
FIG. 12 is a block diagram of the structure of the association device of the present invention;
fig. 13 is a detailed configuration block diagram of the first information frame generating unit of the present invention;
fig. 14 is a detailed configuration block diagram of a second information frame generating unit of the present invention;
FIG. 15 is a block diagram showing the structure of the association relation library generating unit of the present invention;
FIG. 16 is a block diagram showing the detailed structure of the output unit of the present invention;
FIG. 17 is a block diagram showing the detailed structure of the association relation library generating unit of the present invention;
FIG. 18 is a block diagram of the structure of a first level select instruction unit and a second level select instruction unit of the present invention;
fig. 19 is a block diagram showing the structure of an arrangement unit according to the present invention.
Detailed Description
The preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
As shown in fig. 1 to 11, the present invention provides a preferred embodiment of an event correlation method.
Specifically, referring to fig. 1, an event correlation method includes the following steps:
step 1, receiving content information of an event input by a user and generating a first information frame;
step 2, receiving behavior information which is input by a user and is caused by the content information, and generating a second information frame;
step 3, receiving a user's association instruction for different first information frames and second information frames;
step 4, recording the association relationship between the first information frames and second information frames the user selects to associate;
and step 6, outputting and displaying the association relationship to the user (step 5, which is optional, is described below).
The system receives the content information input by the user and generates a first information frame according to the content information, and the content information and the first information frame are in one-to-one correspondence. The user inputs behavior information caused by the content information in the system, the system receives the behavior information input by the user and generates a second information frame according to the behavior information, and the behavior information and the second information frame are in one-to-one correspondence. After the first information frame and the second information frame are generated, a user inputs an association instruction according to the association between different first information frames and different second information frames, the system receives the association instruction input by the user, records the association relationship between different first information frames and different second information frames which the user needs to select and associate according to the association instruction, and outputs and displays the association relationship to the user.
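For illustration only, the flow of steps 1 to 4 and 6 can be pictured as a small data model; this sketch is not part of the original disclosure, and all names in it (InfoFrame, EventBoard, AssociationRelation, and so on) are assumptions introduced here.

```kotlin
// Minimal sketch of the two-frame model described above; names are illustrative.
data class InfoFrame(
    val id: Int,
    val kind: Kind,          // CONTENT -> first information frame, BEHAVIOR -> second information frame
    val text: String
) {
    enum class Kind { CONTENT, BEHAVIOR }
}

// An association relationship recorded between a first and a second information frame.
data class AssociationRelation(val firstFrameId: Int, val secondFrameId: Int)

class EventBoard {
    private var nextId = 0
    private val frames = mutableListOf<InfoFrame>()
    private val relations = mutableListOf<AssociationRelation>()

    // Step 1: receive content information and generate a first information frame.
    fun addContent(text: String) = addFrame(InfoFrame.Kind.CONTENT, text)

    // Step 2: receive behavior information and generate a second information frame.
    fun addBehavior(text: String) = addFrame(InfoFrame.Kind.BEHAVIOR, text)

    private fun addFrame(kind: InfoFrame.Kind, text: String): InfoFrame {
        val frame = InfoFrame(nextId++, kind, text)
        frames += frame
        return frame
    }

    // Steps 3-4: receive an association instruction and record the relationship.
    fun associate(first: InfoFrame, second: InfoFrame) {
        require(first.kind == InfoFrame.Kind.CONTENT && second.kind == InfoFrame.Kind.BEHAVIOR)
        relations += AssociationRelation(first.id, second.id)
    }

    // Step 6: output the recorded association relationships for display.
    fun associations(): List<Pair<InfoFrame, InfoFrame>> =
        relations.map { r ->
            frames.first { it.id == r.firstFrameId } to frames.first { it.id == r.secondFrameId }
        }
}

fun main() {
    val board = EventBoard()
    val content = board.addContent("Content information A")
    val behavior = board.addBehavior("Behavior information A")
    board.associate(content, behavior)
    board.associations().forEach { (c, b) -> println("${c.text} <-> ${b.text}") }
}
```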
Referring to fig. 9, a display interface after the association method is performed. In practical cases, the experience card in the figure is the first information frame, the content information input by the user is located at the corresponding position of the first information frame, the cognitive card in the figure is the second information frame, and the behavior information input by the user is located at the corresponding position of the second information frame. Subsequently, if the user thinks that the content information A is related to the behavior information A, generating a correlation instruction between a first information frame corresponding to the content information A and a second information frame corresponding to the behavior information A; considering that the behavior information A is related to the content information C, generating a correlation instruction between a second information frame corresponding to the behavior information A and a first information frame corresponding to the content information C; and if the content information C is associated with the behavior information D, generating an association instruction between a first information frame corresponding to the content information C and a second information frame corresponding to the behavior information D.
Since the external content information in the first information frame often influences the internal behavior information in the second information frame, and that behavior information in turn influences subsequent content information, the user only needs to input the most important content information and behavior information; the dual-information-frame approach spares the user from entering other, unnecessary information, and later allows the associated content information and behavior information to be selected more intuitively and quickly, so that the association relationships finally formed are concise and well ordered. Through this method, the first information frames and second information frames can be associated with one another and the association relationships output and displayed to the user. A later reader therefore only needs to view the association relationships to understand how the first and second information frames are related, to follow the development of the whole event clearly, and to sort out and recall the event with ease, so that the user understands how each important event was handled and comes to know himself or herself better. Moreover, by summarizing and reflecting on each association relationship, the user can plan future development more wisely and grow further. Events occur constantly and affect the user's emotions, yet the emotion an event triggers in the moment is often not a rational response to it. With this association method, the user records each piece of information about an event and associates the pieces with one another, which makes it possible to view the event more objectively, from a third-party perspective, gain a clearer understanding of it, and apply that understanding to life and work, helping the user grow and take charge of his or her own life through applied insight.
It should be noted that, because each person's cognitive level and surrounding resources differ, many people fail to see the possibilities in things, or fail to see the resources around them, because of the limits of their own cognition, and so miss out on opportunities. The association method can help people connect those resources with their personal growth, and can also let users see a wider range of possibilities through other people's growth paths, thereby helping them achieve better personal growth.
It should be noted that the content information and the behavior information may be at least one of text, voice, picture, and video.
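As a hedged illustration of this note (not part of the original disclosure), the mixed media types could be modelled as a small sealed hierarchy; MediaItem, InformationPayload, the file URI and the other names below are assumptions.

```kotlin
// Illustrative only: one possible way to represent information that may be
// text, voice, picture, or video (or any combination of these).
sealed interface MediaItem
data class TextItem(val text: String) : MediaItem
data class VoiceItem(val audioUri: String) : MediaItem
data class PictureItem(val imageUri: String) : MediaItem
data class VideoItem(val videoUri: String) : MediaItem

// Content or behavior information is then simply a non-empty list of media items.
data class InformationPayload(val items: List<MediaItem>) {
    init { require(items.isNotEmpty()) { "at least one media item is required" } }
}

fun main() {
    val payload = InformationPayload(
        listOf(TextItem("Met the client"), PictureItem("file:///photos/meeting.jpg"))
    )
    println(payload.items.size)   // 2
}
```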
Still more specifically, referring to fig. 2, the step of receiving content information of an event input by a user and generating a first information frame specifically includes:
step 11, receiving a first enlargement instruction input by the user, and outputting an enlarged first display area to the user according to the first enlargement instruction;
and step 12, receiving the content information of the event input by the user in the first display area, and generating the first information frame.
When the user needs to input content information, the user issues a first enlargement instruction to the system as required. The system receives the first enlargement instruction input by the user and outputs an enlarged first display area to the user according to the first enlargement instruction. The user then inputs the content information of the event in the first display area. The system receives the content information input by the user and generates a first information frame from it.
Referring to fig. 9 and 10(a), the user may select the experience card, that is, input the first enlargement instruction. The system then displays the enlarged first display area shown in fig. 10(a) to the user in accordance with the first enlargement instruction, and the user inputs the content information in this first display area.
Because the content information the user needs to input is often lengthy and the interface of the terminal device is limited in size, it is inconvenient for the user to input one or more pieces of content information in the same interface. Therefore, when the user needs to input content information, the system outputs and displays the enlarged first display area, which occupies the whole interface of the terminal device, making input convenient. Even when a single piece of content information requires a long textual description, the user can still enter it quickly.
More specifically, referring to fig. 3, the step of receiving behavior information, which is input by a user and is caused by the content information, and generating a second information frame specifically includes:
step 21, receiving a second enlargement instruction input by the user, and outputting an enlarged second display area to the user according to the second enlargement instruction;
and step 22, receiving the behavior information, input by the user in the second display area, that is triggered by the content information, and generating the second information frame.
When the user needs to input behavior information, the user issues a second enlargement instruction to the system as required. The system receives the second enlargement instruction input by the user and outputs an enlarged second display area to the user according to the second enlargement instruction. The user then inputs the behavior information triggered by the content information in the second display area. The system receives the behavior information input by the user and generates a second information frame from it.
Referring to fig. 9 and 10(b), the user may select the cognitive card, that is, input the second enlargement instruction. The system then displays the enlarged second display area shown in fig. 10(b) to the user in accordance with the second enlargement instruction, and the user inputs the behavior information in this second display area.
Because the behavior information the user needs to input is often lengthy and the interface of the terminal device is limited in size, it is inconvenient for the user to input one or more pieces of behavior information in the same interface. Therefore, when the user needs to input behavior information, the system outputs and displays the enlarged second display area, which occupies the whole interface of the terminal device, making input convenient. Even when a single piece of behavior information requires a long textual description, the user can still enter it quickly.
Further, referring to fig. 4, the step after the recording of the association relationship between the first information frame and the second information frame, which are associated by the user selection, further includes:
and 5, generating an association relation library according to the association relation.
After the system records the incidence relation between the first information frame and the second information frame selected by the user, an incidence relation library is generated according to the incidence relation. It should be noted that, after each time the user selects an association relationship, the system automatically adds the association relationship to the association relationship library.
Referring to fig. 11(a), it is a display interface of the generated association relation library, and all the recorded association relations are included in the association relation library. After the association relation library is output and displayed to the user, the user can acquire the association relation between the first information frame and the second information frame which are recorded arbitrarily in the association relation library.
Specifically, referring to fig. 5, the step of outputting and displaying the association relation to the user specifically includes:
step 61, selecting the incidence relation in the incidence relation library;
and step 62, outputting an enlarged third display area to the user, wherein the third display area displays the association relationship.
After the association relationship library has been generated, the user can select any recorded association relationship in it. On receiving the association relationship selected by the user, the system automatically outputs an enlarged third display area to the user, and the selected association relationship is displayed in that third display area.
Because an association relationship involves several pieces of content information and behavior information that are associated with one another, displaying all association relationships in one display interface would make it hard for the user to sort through the one that is needed. Presenting the association relationship selected by the user on its own in the third display area therefore lets the user grasp the required association relationship, that is, the required correlation between first and second information frames, more intuitively and quickly.
The corresponding identifier of the association library may be added at any position in fig. 9, and after the user selects the corresponding identifier of the association library, a display interface of the association library shown in fig. 11(a) pops up. Subsequently, the user can select any recorded association in the association library, and referring to fig. 11(b), the system displays the selected association again in the third display area.
Specifically, referring to fig. 6, the step of generating an association relation library according to the association relation specifically includes:
step 51, receiving the self-defined association tag of the user aiming at the association relation;
step 52, recording the associated tag customized by the user;
and 53, storing the association tag and the association relation corresponding to the association tag into the association relation library.
The user defines a custom tag for the association relationship, that is, names the association relationship to generate the association tag; association relationships and association tags correspond one to one. The system receives the association tag defined by the user, records it, and stores the association tag together with the corresponding association relationship in the association relationship library.
Since an association relationship involves several pieces of content information and behavior information that are associated with one another, displaying all association relationships in the display interface of the terminal device, that is, displaying all the content information and behavior information they contain, would make the association relationships hard for the user to sort through because of the limited size of that interface. The user therefore defines a tag for each association relationship, and only the association tags need to be shown in the display interface. Because association relationships and association tags correspond one to one, the user only has to select an association tag to obtain the corresponding association relationship, that is, the required associated first information frame and second information frame.
Referring to fig. 11(a), in the generated association relation library, a plurality of association relations are included, and a user can customize an association tag for each association relation. Referring to fig. 11(b), the subsequent user selects the required association relationship, and only needs to select the corresponding association tag, so that the association relationship can be obtained.
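A possible reading of steps 51 to 53 is sketched below; it is illustrative only, and TaggedAssociationLibrary and the example tag name are assumptions rather than the patent's implementation.

```kotlin
// Illustrative sketch: each association relationship gets exactly one
// user-defined tag, and the library is keyed by that tag so that only the
// tags need to be shown in the (small) display interface.
data class Association(val contentText: String, val behaviorText: String)

class TaggedAssociationLibrary {
    private val byTag = linkedMapOf<String, Association>()   // tag -> association, one-to-one

    // Steps 51-53: receive the user-defined tag, record it, and store the pair.
    fun store(tag: String, association: Association) {
        require(tag !in byTag) { "tags are one-to-one with associations" }
        byTag[tag] = association
    }

    // Only the tags are listed in the interface.
    fun tags(): List<String> = byTag.keys.toList()

    // Selecting a tag retrieves the corresponding association for display.
    fun lookup(tag: String): Association? = byTag[tag]
}

fun main() {
    val library = TaggedAssociationLibrary()
    library.store("Project kickoff", Association("Content information A", "Behavior information A"))
    println(library.tags())                 // [Project kickoff]
    println(library.lookup("Project kickoff"))
}
```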
Still more specifically, referring to fig. 7, the step of outputting and displaying the association relation to the user specifically includes:
step 610, receiving a first-level selection instruction of a user for selecting a first-level associated tag, acquiring a second-level associated tag corresponding to the first-level associated tag according to the first-level selection instruction, and outputting and displaying the second-level associated tag to the user in the third display area;
and step 620, receiving a second-level selection instruction of selecting a second-level associated tag by the user, acquiring a plurality of associated relationships corresponding to the second-level associated tag according to the second-level selection instruction, and outputting and displaying the associated relationships to the user in the third display area.
The associated labels comprise first-level associated labels and second-level associated labels with the priority lower than that of the first-level associated labels, the first-level associated labels comprise a plurality of second-level associated labels, and the second-level associated labels comprise a plurality of association relations. The user can input a first-level selection instruction first, and a second-level associated label corresponding to the first-level associated label is obtained according to the first-level selection instruction. The second level associated tab is then presented to the user output in a third display area. Then, the user inputs a second-level selection instruction, and acquires a plurality of association relations corresponding to the second-level association labels according to the second-level selection instruction. Subsequently, the plurality of associations is presented to the user output, also in the third display area.
Therefore, through multi-layer classification, when the first information frame and the second information frame are too many and the formed association relationship is too many, a user can classify and summarize according to the first-level association label and the second-level association label, and then perform layer-by-layer screening on the first-level association label and the second-level association label to finally obtain the desired association relationship.
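The layer-by-layer screening can be pictured, purely as an assumption-laden sketch, as a two-level map from tags to association relationships; TagHierarchy and the example tag names below are invented for illustration.

```kotlin
// Illustrative sketch of the two-level tag hierarchy: a first-level tag groups
// several second-level tags, and each second-level tag groups several
// association relationships.
data class Association(val label: String)

class TagHierarchy {
    // first-level tag -> (second-level tag -> associations)
    private val tree = linkedMapOf<String, LinkedHashMap<String, MutableList<Association>>>()

    fun add(firstLevel: String, secondLevel: String, association: Association) {
        tree.getOrPut(firstLevel) { linkedMapOf() }
            .getOrPut(secondLevel) { mutableListOf() }
            .add(association)
    }

    // First-level selection instruction: returns the second-level tags to display.
    fun secondLevelTags(firstLevel: String): List<String> =
        tree[firstLevel]?.keys?.toList() ?: emptyList()

    // Second-level selection instruction: returns the associations to display.
    fun associations(firstLevel: String, secondLevel: String): List<Association> =
        tree[firstLevel]?.get(secondLevel)?.toList() ?: emptyList()
}

fun main() {
    val tags = TagHierarchy()
    tags.add("Work", "Project X", Association("kickoff meeting <-> next-step plan"))
    println(tags.secondLevelTags("Work"))            // [Project X]
    println(tags.associations("Work", "Project X"))
}
```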
Further, the step after receiving the association instruction between the user and different first information frames and second information frames further includes:
step 31, receiving remark information input by a user aiming at different first information frames and second information frames selected to be associated;
and step 32, storing the remark information.
After selecting a first information frame and a second information frame and generating the association instruction, the user inputs remark information for the associated first and second information frames. The remark information includes the reason for the association, that is, it explains why the selected first information frame and second information frame need to be associated. The system receives the remark information input by the user and stores it.
In this way, when the user later views the association, he or she can see clearly why the first information frame and the second information frame were associated at the time, and understand the development of the event more clearly.
Specifically, the step of receiving remark information input by the user for selecting different associated first information frames and second information frames specifically includes:
311, outputting question information according to a preset question path;
and step 312, receiving remark information fed back by the user for the question information and input for selecting different associated first information frames and second information frames.
A question path is preset in the system. When the user needs to input remark information, the system outputs question information to the user according to the question path. The user then feeds back remark information in response to the question information, and the system receives the remark information input by the user.
For example, two questions may be preset: first, why are the selected first information frame and second information frame associated; and second, what benefit does associating the selected first and second information frames bring you. After the user selects and associates a first information frame and a second information frame, the system automatically outputs the first question, the user feeds back a first response to it, the system then automatically outputs the second question, and the user feeds back a second response; the first and second responses together form the remark information. A later reader of the remark information can thus see clearly why the first information frame and the second information frame were associated at the time and gain a clearer understanding of how the event developed.
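The question-path mechanism could look roughly like the following sketch, which is illustrative only; QuestionPath, collectRemark and the stubbed answers are assumptions, and the two questions merely paraphrase the example above.

```kotlin
// Illustrative sketch of the preset question path: the system asks a fixed
// sequence of questions and the answers together form the remark information.
data class Remark(val answers: Map<String, String>)

class QuestionPath(private val questions: List<String>) {
    // Asks each question in order and collects the user's feedback.
    fun collectRemark(answer: (question: String) -> String): Remark =
        Remark(questions.associateWith { answer(it) })
}

fun main() {
    val path = QuestionPath(
        listOf(
            "Why are the selected first and second information frames associated?",
            "What benefit does associating them give you?"
        )
    )
    // In a real app the answers would come from user input; here they are stubbed.
    val remark = path.collectRemark { question -> "answer to: $question" }
    remark.answers.forEach { (q, a) -> println("$q -> $a") }
}
```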
Further, referring to fig. 8, the step after receiving behavior information induced by the content information and input by a user and generating a second information frame further includes:
step 23, arranging the first information frames and the second information frames in an array according to a time axis, with the first information frames and the second information frames distributed on two sides respectively.
The system is provided with a time axis. The time axis is either an actual timeline formed by the points in time at which the first information frames and second information frames were generated, or an occurrence timeline formed by the points in time at which the content information in the first information frames and the behavior information in the second information frames took place. Referring to fig. 9, the system arranges the first information frames and the second information frames in an array along this time axis. To make them easy for the user to tell apart, the first information frames and the second information frames are distributed on two sides respectively, that is, they occupy opposing positions: they may lie on the two sides of the time axis, or on the same side of the time axis but in opposing positions.
Optionally, the behavior information is one or more of a thought item, a question item, a next-step plan item, or an execution item. A thought item is an idea triggered by the content information of the event, a question item is a question raised by the content information of the event, a plan item is a plan prompted by the content information of the event, and an execution item is something done at the time of the event, for example an action taken or words spoken at that moment.
Optionally, the association instruction is a line association instruction that draws a connecting line between different first information frames and second information frames; that is, the user selects different first information frames and second information frames in turn, and a connecting line is formed between the selected first information frame and second information frame. After the association result is displayed, the user can identify the associated first and second information frames by following the line.
Alternatively, the association instruction is a graphic association instruction that marks different first information frames and second information frames with the same graphic; that is, the user marks different first information frames and second information frames with the same graphic in turn, so that the selected first information frame and second information frame carry the same graphic mark. After the association result is displayed, the user can identify the associated first and second information frames by the shared graphic.
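The two kinds of association instruction could be represented, as an illustrative assumption only, by a small sealed type; AssociationInstruction and its subclasses below are invented names.

```kotlin
// Illustrative sketch of the two kinds of association instruction: either a
// connecting line between the two frames, or the same graphic mark on both.
sealed class AssociationInstruction {
    abstract val firstFrameId: Int
    abstract val secondFrameId: Int

    // A line drawn between the selected first and second information frames.
    data class Line(
        override val firstFrameId: Int,
        override val secondFrameId: Int
    ) : AssociationInstruction()

    // The same graphic (e.g. a star) marked on both selected frames.
    data class SameGraphic(
        override val firstFrameId: Int,
        override val secondFrameId: Int,
        val graphicName: String
    ) : AssociationInstruction()
}

fun describe(instruction: AssociationInstruction): String = when (instruction) {
    is AssociationInstruction.Line ->
        "connect frame ${instruction.firstFrameId} and frame ${instruction.secondFrameId} with a line"
    is AssociationInstruction.SameGraphic ->
        "mark frames ${instruction.firstFrameId} and ${instruction.secondFrameId} with '${instruction.graphicName}'"
}

fun main() {
    println(describe(AssociationInstruction.Line(1, 2)))
    println(describe(AssociationInstruction.SameGraphic(3, 4, "star")))
}
```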
As shown in fig. 9 to 19, the present invention further provides a preferred embodiment of an event correlation apparatus.
Specifically, referring to fig. 12, an event correlation apparatus includes:
a first information frame generating unit 1, configured to receive content information of an event input by a user, and generate a first information frame;
a second information frame generating unit 2, configured to receive behavior information, which is input by a user and is caused by the content information, and generate a second information frame;
an association instruction receiving unit 3, configured to receive a user's association instruction for different first information frames and second information frames;
the recording unit 4 is used for recording the association relation between different first information frames and second information frames selected and associated by a user;
and the output unit 6 is used for outputting and displaying the association relation to a user.
The system receives the content information input by the user through the first information frame generating unit 1, and generates a first information frame according to the content information, wherein the content information and the first information frame are in one-to-one correspondence. The user inputs behavior information caused by the content information in the system, the system receives the behavior information input by the user through the second information frame generating unit 2, and generates a second information frame according to the behavior information, wherein the behavior information and the second information frame are in one-to-one correspondence. After the first information frame and the second information frame are generated, a user inputs a correlation instruction according to the correlation between different first information frames and different second information frames, the system receives the correlation instruction input by the user through the correlation instruction receiving unit 3, the recording unit 4 records the correlation between different first information frames and different second information frames which are required to be selected by the user according to the correlation instruction, and the output unit 6 outputs and displays the correlation to the user.
Referring to fig. 9, a display interface after the association method is performed. In practical cases, the experience card in the figure is the first information frame, the content information input by the user is located at the corresponding position of the first information frame, the cognitive card in the figure is the second information frame, and the behavior information input by the user is located at the corresponding position of the second information frame. Subsequently, if the user thinks that the content information A is related to the behavior information A, generating a correlation instruction between a first information frame corresponding to the content information A and a second information frame corresponding to the behavior information A; considering that the behavior information A is related to the content information C, generating a correlation instruction between a second information frame corresponding to the behavior information A and a first information frame corresponding to the content information C; and if the content information C is associated with the behavior information D, generating an association instruction between a first information frame corresponding to the content information C and a second information frame corresponding to the behavior information D.
Since the external content information in the first information frame often influences the internal behavior information in the second information frame, and that behavior information in turn influences subsequent content information, the user only needs to input the most important content information and behavior information; the dual-information-frame approach spares the user from entering other, unnecessary information, and later allows the associated content information and behavior information to be selected more intuitively and quickly, so that the association relationships finally formed are concise and well ordered. Through this device, the first information frames and second information frames can be associated with one another and the association relationships output and displayed to the user. A later reader therefore only needs to view the association relationships to understand how the first and second information frames are related, to follow the development of the whole event clearly, and to sort out and recall the event with ease, so that the user understands how each important event was handled and comes to know himself or herself better. Moreover, by summarizing and reflecting on each association relationship, the user can plan future development more wisely and grow further. Events occur constantly and affect the user's emotions, yet the emotion an event triggers in the moment is often not a rational response to it. With this correlation device, the user records each piece of information about an event and associates the pieces with one another, which makes it possible to view the event more objectively, from a third-party perspective, gain a clearer understanding of it, and apply that understanding to life and work, helping the user grow and take charge of his or her own life through applied insight.
It should be noted that, because each person's cognitive level and surrounding resources differ, many people fail to see the possibilities in things, or fail to see the resources around them, because of the limits of their own cognition, and so miss out on opportunities. The association device can help people connect those resources with their personal growth, and can also let users see a wider range of possibilities through other people's growth paths, thereby helping them achieve better personal growth.
It should be noted that the content information and the behavior information may be at least one of text, voice, picture, and video.
Still more specifically, referring to fig. 13, the first information frame generating unit 1 specifically includes:
a first display area output unit 11, configured to receive a first enlargement instruction input by the user and output an enlarged first display area to the user according to the first enlargement instruction;
a first input unit 12, configured to receive content information of an event input by the user in the first display area and generate the first information frame.
When the user needs to input content information, the user issues a first enlargement instruction to the system as required. Through the first display area output unit 11, the system receives the first enlargement instruction input by the user and outputs an enlarged first display area to the user according to that instruction. The user then inputs the content information of the event in the first display area. Through the first input unit 12, the system receives the content information input by the user and generates a first information frame from it.
Referring to fig. 9 and 10(a), the user may select the experience card, that is, input the first enlargement instruction. The system then displays the enlarged first display area shown in fig. 10(a) to the user in accordance with the first enlargement instruction, and the user inputs the content information in this first display area.
Because the content information the user needs to input is often lengthy and the interface of the terminal device is limited in size, it is inconvenient for the user to input one or more pieces of content information in the same interface. Therefore, when the user needs to input content information, the system outputs and displays the enlarged first display area, which occupies the whole interface of the terminal device, making input convenient. Even when a single piece of content information requires a long textual description, the user can still enter it quickly.
More specifically, referring to fig. 14, the second information frame generating unit 2 specifically includes:
a second display area output unit 21, configured to receive a second enlargement instruction input by the user and output an enlarged second display area to the user according to the second enlargement instruction;
a second input unit 22, configured to receive behavior information, input by the user in the second display area, that is triggered by the content information, and generate the second information frame.
When the user needs to input behavior information, the user issues a second enlargement instruction to the system as required. Through the second display area output unit 21, the system receives the second enlargement instruction input by the user and outputs an enlarged second display area to the user according to that instruction. The user then inputs the behavior information triggered by the content information in the second display area. Through the second input unit 22, the system receives the behavior information input by the user and generates a second information frame from it.
Referring to fig. 9 and 10(b), the user may select the cognitive card, that is, input the second enlargement instruction. The system then displays the enlarged second display area shown in fig. 10(b) to the user in accordance with the second enlargement instruction, and the user inputs the behavior information in this second display area.
Because the behavior information the user needs to input is often lengthy and the interface of the terminal device is limited in size, it is inconvenient for the user to input one or more pieces of behavior information in the same interface. Therefore, when the user needs to input behavior information, the system outputs and displays the enlarged second display area, which occupies the whole interface of the terminal device, making input convenient. Even when a single piece of behavior information requires a long textual description, the user can still enter it quickly.
Further, referring to fig. 15, the association apparatus further includes:
and the incidence relation library generating unit 5 is used for generating an incidence relation library according to the incidence relation.
After the system records the association relationship between the first information frame and the second information frame selected by the user, the system generates an association relationship library according to the association relationship through the association relationship library generating unit 5. It should be noted that, after each time the user selects an association relationship, the system automatically adds the association relationship to the association relationship library.
Referring to fig. 11(a), it is a display interface of the generated association relation library, and all the recorded association relations are included in the association relation library. After the association relation library is output and displayed to the user, the user can acquire the association relation between the first information frame and the second information frame which are recorded arbitrarily in the association relation library.
Specifically, referring to fig. 16, the output unit 6 specifically includes:
a selecting unit 61, configured to select the association relationship in the association relationship library;
a third display area output unit 62, configured to output the enlarged third display area to the user, where the third display area displays the association relationship.
After the association library is generated, the user can select any recorded association in the association library by the selection unit 61. Through the third display area output unit 62, after receiving the association selected by the user, the system automatically outputs the enlarged third display area to the user, and the association is displayed in the third display area.
Because an association relationship involves several pieces of content information and behavior information that are associated with one another, displaying all association relationships in one display interface would make it hard for the user to sort through the one that is needed. Presenting the association relationship selected by the user on its own in the third display area therefore lets the user grasp the required association relationship, that is, the required correlation between first and second information frames, more intuitively and quickly.
An identifier corresponding to the association relation library may be added at any position in fig. 9; after the user selects this identifier, the display interface of the association relation library shown in fig. 11(a) pops up. The user can then select any recorded association relation in the library, and, referring to fig. 11(b), the system displays the selected association relation in the third display area.
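The selection-and-display step of the output unit can be sketched in the same spirit; the LIBRARY mapping and the show_in_third_display_area function below are hypothetical names used only to illustrate how a single selected relation might be rendered on its own.

```python
from typing import Dict, Tuple

# Hypothetical library contents: relation id -> (first frame text, second frame text).
LIBRARY: Dict[str, Tuple[str, str]] = {
    "rel-001": ("Meeting moved to Friday", "Reschedule the report draft"),
    "rel-002": ("Budget approved", "Order the new equipment"),
}


def show_in_third_display_area(relation_id: str) -> str:
    """Render only the selected association relation, as the enlarged third display area would."""
    first, second = LIBRARY[relation_id]
    return ("[third display area]\n"
            f"  first information frame : {first}\n"
            f"  second information frame: {second}")


print(show_in_third_display_area("rel-001"))
```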
Specifically, referring to fig. 17, the association relation library generating unit 5 includes:
a customizing unit 51, configured to receive an association tag customized by the user for the association relation;
an association tag recording unit 52, configured to record the association tag customized by the user; and
a storage unit 53, configured to store the association tag and the association relation corresponding to the association tag in the association relation library.
The user customizes an association tag for each association relation, that is, names the association relation to generate the tag, so that association relations and association tags are in one-to-one correspondence. The system receives the association tag customized by the user through the customizing unit 51, records it through the association tag recording unit 52, and stores the association tag together with its corresponding association relation in the association relation library through the storage unit 53.
Since an association relation involves a plurality of pieces of content information and behavior information that are associated with one another, displaying every association relation in the display interface of the terminal device, that is, displaying all of the corresponding content information and behavior information, is constrained by the limited size of that interface and makes the relations hard for the user to sort out. By letting the user customize each association relation, only the association tags need to be displayed in the interface; because association relations and association tags are in one-to-one correspondence, the user obtains the association relation corresponding to a tag, namely the associated first information frame and second information frame, simply by selecting that tag.
Referring to fig. 11(a), the generated association relation library includes a plurality of association relations, and the user can customize an association tag for each of them. Referring to fig. 11(b), when the user later needs a particular association relation, selecting the corresponding association tag is sufficient to obtain it.
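A minimal sketch of the tag mechanism, assuming a simple dictionary keyed by user-defined tags; TaggedLibrary, store and lookup are illustrative names, and the duplicate-tag check is one possible way to enforce the one-to-one correspondence described above.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple


@dataclass
class TaggedLibrary:
    """Association relation library keyed by user-defined association tags (1:1 with relations)."""
    by_tag: Dict[str, Tuple[str, str]] = field(default_factory=dict)  # tag -> (first frame id, second frame id)

    def store(self, tag: str, first_id: str, second_id: str) -> None:
        if tag in self.by_tag:
            raise ValueError(f"tag '{tag}' is already in use")  # preserve the one-to-one correspondence
        self.by_tag[tag] = (first_id, second_id)

    def lookup(self, tag: str) -> Tuple[str, str]:
        """Selecting a tag is enough to obtain its association relation."""
        return self.by_tag[tag]


library = TaggedLibrary()
library.store("Friday meeting / report plan", "c1", "b1")
print(library.lookup("Friday meeting / report plan"))   # ('c1', 'b1')
```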
Still more specifically, referring to fig. 18, the output unit 6 includes:
a first-level selection instruction unit 610, configured to receive a first-level selection instruction by which the user selects a first-level association tag, obtain the second-level association tags corresponding to that first-level association tag according to the first-level selection instruction, and output and display the second-level association tags to the user in the third display area; and
a second-level selection instruction unit 620, configured to receive a second-level selection instruction by which the user selects a second-level association tag, obtain the plurality of association relations corresponding to that second-level association tag according to the second-level selection instruction, and output and display the association relations to the user in the third display area.
The association tags include first-level association tags and second-level association tags whose priority is lower than that of the first-level association tags; each first-level association tag comprises a plurality of second-level association tags, and each second-level association tag comprises a plurality of association relations. Through the first-level selection instruction unit 610, the user first inputs a first-level selection instruction, and the system obtains the second-level association tags corresponding to the selected first-level association tag according to that instruction. The second-level association tags are then output and displayed to the user in the third display area. Next, through the second-level selection instruction unit 620, the user inputs a second-level selection instruction, and the system obtains the plurality of association relations corresponding to the selected second-level association tag according to that instruction. These association relations are subsequently output and displayed to the user, also in the third display area.
Therefore, through this multi-level classification, when there are too many first information frames and second information frames, and consequently too many association relations, the user can classify and summarize them by first-level and second-level association tags, then screen layer by layer through those tags to finally reach the desired association relation.
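The layer-by-layer screening can be pictured as a nested index, sketched below under the assumption that relations are referenced by ids; TAG_INDEX, second_level_tags and relations_under are hypothetical names for illustration only.

```python
from typing import Dict, List

# Hypothetical two-level tag index: first-level tag -> second-level tag -> relation ids.
TAG_INDEX: Dict[str, Dict[str, List[str]]] = {
    "Work": {
        "Project A": ["rel-001", "rel-002"],
        "Project B": ["rel-003"],
    },
    "Personal": {
        "Health": ["rel-004"],
    },
}


def second_level_tags(first_level: str) -> List[str]:
    """First-level selection instruction: list the second-level tags under the chosen tag."""
    return list(TAG_INDEX[first_level].keys())


def relations_under(first_level: str, second_level: str) -> List[str]:
    """Second-level selection instruction: return the association relations under the chosen tag."""
    return TAG_INDEX[first_level][second_level]


print(second_level_tags("Work"))             # ['Project A', 'Project B']
print(relations_under("Work", "Project A"))  # ['rel-001', 'rel-002']
```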
Further, the association apparatus further includes:
a remark information receiving unit 31, configured to receive remark information input by the user for the different first information frames and second information frames selected to be associated; and
a remark information storage unit 32, configured to store the remark information.
After selecting the first information frame and the second information frame and generating the association instruction, the user inputs remark information for the associated first information frame and second information frame. The remark information includes the reason for the association, that is, it explains why the selected first information frame and second information frame need to be associated. The system receives the remark information input by the user through the remark information receiving unit 31 and stores it through the remark information storage unit 32.
Therefore, when the user later reviews the visualized association, the user can clearly see why the first information frame and the second information frame were associated at that time and gains a clearer understanding of how the event developed.
Specifically, the remark information receiving unit 31 includes:
a question unit 311, configured to output question information according to a preset question path;
a receiving unit 312, configured to receive remark information fed back by the user in response to the question information and input for the different first information frames and second information frames selected to be associated.
A question path may be preset; when the user needs to input remark information, the question unit 311 outputs question information to the user according to that path. Then, through the receiving unit 312, the user feeds back remark information in response to the question information, and the system receives the remark information the user inputs.
For example, the preset question path may consist of two questions: first, why are the selected first information frame and second information frame associated; second, what benefit does associating them bring you. After the user selects and associates the first information frame and the second information frame, the system automatically outputs the first question and the user feeds back a first response to it; the system then automatically outputs the second question and the user feeds back a second response to it. The first response and the second response together form the remark information. Later, on seeing the remark information, the user can clearly understand why the first information frame and the second information frame were associated at that time and gains a clearer view of how the event developed.
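A sketch of how such a question path might drive remark collection, assuming the two example questions above; QUESTION_PATH and collect_remark are illustrative names, and a real implementation would read the answers from the interface rather than from canned strings.

```python
from typing import Callable, List

# Hypothetical preset question path; the actual wording would be configured in the product.
QUESTION_PATH: List[str] = [
    "Why are the selected first information frame and second information frame associated?",
    "What benefit does associating them bring you?",
]


def collect_remark(answer_fn: Callable[[str], str]) -> str:
    """Walk the question path, gather one answer per question, and join them into remark information."""
    lines = []
    for question in QUESTION_PATH:
        answer = answer_fn(question)      # e.g. answer_fn = input in an interactive session
        lines.append(f"{question} {answer}")
    return "\n".join(lines)


# Example with canned answers instead of interactive input:
canned = iter(["Both concern the Friday deadline.", "It keeps the weekly plan consistent."])
print(collect_remark(lambda _q: next(canned)))
```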
Further, referring to fig. 19, the association apparatus further includes:
an arrangement unit 23, configured to arrange the first information frames and the second information frames in an array according to a time axis, with the first information frames and the second information frames distributed on two sides respectively.
The system is provided with a time axis. The time axis is either an actual timeline formed by the time points at which the first information frames and the second information frames are generated, or an occurrence timeline formed by the time points at which the content information in the first information frames and the behavior information in the second information frames occurred. Referring to fig. 9, the arrangement unit 23 arranges the first information frames and the second information frames in an array along the time axis. To help the user distinguish the first information frames from the second information frames, the two kinds of frames are distributed on two sides respectively, that is, placed at opposing positions: they may lie on opposite sides of the time axis, or on the same side of the time axis but still at opposing positions.
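One possible way to realize this arrangement is sketched below: frames are sorted by their time-axis position and assigned to a side by kind. TimedFrame and arrange are assumed names, and placing content frames on the left and behavior frames on the right is just one of the opposing layouts the description allows.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class TimedFrame:
    frame_id: str
    kind: str         # "content" (first information frame) or "behavior" (second information frame)
    timestamp: float  # position on the time axis


def arrange(frames: List[TimedFrame]) -> List[Tuple[str, str, str]]:
    """Sort frames along the time axis and place content frames and behavior frames on opposite sides."""
    ordered = sorted(frames, key=lambda f: f.timestamp)
    return [(f.frame_id, f.kind, "left" if f.kind == "content" else "right") for f in ordered]


layout = arrange([
    TimedFrame("b1", "behavior", 10.5),
    TimedFrame("c1", "content", 9.0),
])
print(layout)   # [('c1', 'content', 'left'), ('b1', 'behavior', 'right')]
```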
Optionally, the behavior information is one or more of a thought item, a question item, a next-step plan item, or an execution item. A thought item is an idea prompted by the event, a question item is a question raised by the event, a next-step plan item is a plan prompted by the event, and an execution item is something done at the time of the event, for example an action taken or words spoken at that time.
Optionally, the association instruction is a line association instruction for drawing a connecting line between different first information frames and second information frames; that is, the user may select different first information frames and second information frames in sequence, and a connecting line is formed between the selected first information frame and the selected second information frame. After the association result is displayed, the user can identify the associated first information frame and second information frame by following the line.
Alternatively, the association instruction is a graphic association instruction for marking different first information frames and second information frames with the same graphic; that is, the user may mark different first information frames and second information frames with the same graphic in sequence, so that the selected first information frame and the selected second information frame carry the same graphic mark. After the association result is displayed, the user can identify the associated first information frame and second information frame by the shared graphic.
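Both instruction types can be captured by a small data model, sketched below with assumed names (AssociationStyle, AssociationInstruction); the graphic field stands in for whatever shared marker the interface would actually use.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class AssociationStyle(Enum):
    """How an association instruction is expressed on screen."""
    LINE = auto()     # a connecting line drawn between the two frames
    GRAPHIC = auto()  # the same graphic mark placed on both frames


@dataclass
class AssociationInstruction:
    first_frame_id: str
    second_frame_id: str
    style: AssociationStyle
    graphic: Optional[str] = None   # the shared marker when style is GRAPHIC


instr = AssociationInstruction("c1", "b1", AssociationStyle.GRAPHIC, graphic="*")
print(instr.style.name, instr.graphic)   # GRAPHIC *
```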
The invention also provides an event correlation device, which comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the association method described above when executing the computer program.
The invention also provides a storage device storing a computer program executable to implement the steps of the association method as described above.
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (22)

1. A method for associating events, comprising the steps of:
receiving content information of an event input by a user, and generating a first information frame;
receiving behavior information which is input by a user and is caused by the content information, and generating a second information frame;
receiving an association instruction of a user aiming at different first information frames and second information frames;
recording the association relation between different first information frames and second information frames selected to be associated by a user;
generating an association relation library according to the association relation;
outputting and displaying the association relation to a user;
wherein the step of outputting and displaying the association relation to the user specifically includes:
selecting the association relation in the association relation library;
and outputting the enlarged third display area to the user, wherein the third display area displays the association relation.
2. The association method according to claim 1, wherein the step of receiving content information of an event input by a user and generating a first information frame specifically includes:
receiving a first enlargement instruction input by a user, and outputting an enlarged first display area to the user according to the first enlargement instruction;
and receiving the content information of the event input by the user in the first display area, and generating the first information frame.
3. The association method according to claim 1, wherein the step of receiving behavior information, which is input by a user and is caused by the content information, and generating the second information frame specifically includes:
receiving a second enlargement instruction input by a user, and outputting an enlarged second display area to the user according to the second enlargement instruction;
and receiving behavior information which is input by a user in the second display area and is caused by the content information, and generating the second information frame.
4. The association method according to claim 1, wherein the step of generating an association relation library according to the association relation specifically includes:
receiving an association tag customized for the association relation by a user;
recording the association tag customized by the user;
and storing the association tag and the association relation corresponding to the association tag in the association relation library.
5. The association method according to claim 4, wherein the step of outputting and displaying the association relation to the user specifically includes:
receiving a first-level selection instruction for selecting a first-level association tag by a user, acquiring a second-level association tag corresponding to the first-level association tag according to the first-level selection instruction, and outputting and displaying the second-level association tag to the user in the third display area;
and receiving a second-level selection instruction for selecting a second-level association tag by the user, acquiring a plurality of association relations corresponding to the second-level association tag according to the second-level selection instruction, and outputting and displaying the association relations to the user in the third display area.
6. The association method according to claim 1, wherein after the step of receiving an association instruction of a user aiming at different first information frames and second information frames, the method further comprises:
receiving remark information input by a user aiming at different first information frames and second information frames selected to be associated;
and storing the remark information.
7. The association method according to claim 6, wherein the step of receiving remark information input by a user for selecting different associated first information frames and second information frames specifically comprises:
outputting question information according to a preset question path;
receiving remark information fed back by the user for the question information and input for selecting the associated different first information frame and second information frame.
8. The association method according to claim 1, wherein the step after receiving the behavior information induced by the content information and inputted by the user and generating the second information frame further comprises:
the first information frames and the second information frames are arranged in an array form according to a time axis, and the first information frames and the second information frames are respectively distributed on two sides.
9. The association method according to claim 1, wherein the behavior information is one or more of a thought item, a question item, a next-step plan item, or an execution item.
10. The association method according to claim 1, wherein the association instruction is a line association instruction for connecting lines between different first information frames and different second information frames or a graphic association instruction for marking different first information frames and different second information frames with the same graphic.
11. An apparatus for associating events, comprising:
a first information frame generating unit, configured to receive content information of an event input by a user and generate a first information frame;
a second information frame generating unit, configured to receive behavior information which is input by the user and is caused by the content information, and generate a second information frame;
an association instruction receiving unit, configured to receive an association instruction of the user aiming at different first information frames and second information frames;
a recording unit, configured to record the association relation between different first information frames and second information frames selected to be associated by the user;
an association relation library generating unit, configured to generate an association relation library according to the association relation;
and an output unit, configured to output and display the association relation to the user;
wherein the output unit specifically includes:
a selecting unit, configured to select the association relation in the association relation library;
and a third display area output unit, configured to output the enlarged third display area to the user, wherein the third display area displays the association relation.
12. The association apparatus according to claim 11, wherein the first information frame generating unit specifically includes:
a first display area output unit, configured to receive a first enlargement instruction input by a user and output an enlarged first display area to the user according to the first enlargement instruction;
and a first input unit, configured to receive content information of the event input by the user in the first display area and generate the first information frame.
13. The association apparatus according to claim 11, wherein the second information frame generating unit specifically includes:
a second display area output unit, configured to receive a second enlargement instruction input by the user and output an enlarged second display area to the user according to the second enlargement instruction;
and a second input unit, configured to receive behavior information which is input by the user in the second display area and is caused by the content information, and generate the second information frame.
14. The association apparatus according to claim 11, wherein the association relation library generating unit specifically includes:
a customizing unit, configured to receive an association tag customized by the user for the association relation;
an association tag recording unit, configured to record the association tag customized by the user;
a storage unit, configured to store the association tag and the association relationship corresponding to the association tag in the association relationship library.
15. The correlation apparatus according to claim 14, wherein the output unit specifically comprises:
a first-level selection instruction unit, configured to receive a first-level selection instruction for selecting a first-level association tag by the user, acquire a second-level association tag corresponding to the first-level association tag according to the first-level selection instruction, and output and display the second-level association tag to the user in the third display area;
and a second-level selection instruction unit, configured to receive a second-level selection instruction for selecting a second-level association tag by the user, acquire a plurality of association relations corresponding to the second-level association tag according to the second-level selection instruction, and output and display the association relations to the user in the third display area.
16. The association apparatus as claimed in claim 11, wherein the association apparatus further comprises:
a remark information receiving unit, configured to receive remark information input by the user for the different first information frames and second information frames selected to be associated;
and a remark information storage unit, configured to store the remark information.
17. The association apparatus according to claim 16, wherein the remark information receiving unit includes:
a question unit, configured to output question information according to a preset question path;
and a receiving unit, configured to receive remark information which is fed back by the user for the question information and is input for the different first information frames and second information frames selected to be associated.
18. The association apparatus as claimed in claim 11, wherein the association apparatus further comprises:
an arrangement unit, configured to arrange the first information frames and the second information frames in an array according to a time axis, with the first information frames and the second information frames distributed on two sides respectively.
19. The association apparatus according to claim 11, wherein the behavior information is one or more of a thought item, a question item or a next step plan item.
20. The association apparatus according to claim 11, wherein the association instruction is a line association instruction for connecting lines between different first information frames and different second information frames or a graphic association instruction for marking different first information frames and different second information frames with the same graphic.
21. An apparatus for correlating events, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the association method according to any one of claims 1 to 10 when executing the computer program.
22. A storage device, characterized in that it stores a computer program executable to implement the steps of the association method according to any one of claims 1 to 10.
CN201910601950.2A 2019-07-05 2019-07-05 Event correlation method and device and storage device Active CN110457400B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910601950.2A CN110457400B (en) 2019-07-05 2019-07-05 Event correlation method and device and storage device

Publications (2)

Publication Number Publication Date
CN110457400A CN110457400A (en) 2019-11-15
CN110457400B (en) 2022-06-17

Family

ID=68482108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910601950.2A Active CN110457400B (en) 2019-07-05 2019-07-05 Event correlation method and device and storage device

Country Status (1)

Country Link
CN (1) CN110457400B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8126922B2 (en) * 2009-04-15 2012-02-28 Crieghton University Calendar system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1404593A (en) * 2000-12-18 2003-03-19 皇家菲利浦电子有限公司 Diary/calendar software application with personal and historical data
CN105094753A (en) * 2014-04-18 2015-11-25 阿里巴巴集团控股有限公司 Method, device, and system for drawing wireframe
CN107329665A (en) * 2017-07-04 2017-11-07 杭州哲信信息技术有限公司 A kind of journal record method based on smart machine
CN108600812A (en) * 2018-05-11 2018-09-28 青岛海信电器股份有限公司 A kind of method for displaying user interface and device based on calendar browsing
CN109033163A (en) * 2018-06-19 2018-12-18 珠海格力电器股份有限公司 Method and device for adding diary in calendar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Self-Service Financial Management System Based on Android; Qiao Zhaoyou; China Excellent Doctoral and Master's Dissertations Full-text Database (Master), Information Science and Technology Series; 2016-03-15; I138-2820 *

Also Published As

Publication number Publication date
CN110457400A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
KR101660271B1 (en) Metadata tagging system, image searching method, device, and method for tagging gesture
US9280255B2 (en) Structured displaying of visual elements
US9703771B2 (en) Automatic capture of information from audio data and computer operating context
KR20080073066A (en) Content management device and method
CN101345791B (en) Method and apparatus for date-based integrated processing of data in mobile terminal
WO2019242257A1 (en) Method and apparatus for adding diary to calendar
US20250166270A1 (en) Template-Based Virtual Background Management In Video Meetings
CN118044178A (en) Contextual messaging in video conferencing
KR20120135244A (en) Association of information entities along a time line
TWI582623B (en) File management system and method
CN110532048B (en) An event recording method, device and storage device
US20140272898A1 (en) System and method of providing compound answers to survey questions
CN109697242B (en) Photographing question searching method and device, storage medium and computing equipment
CN106462580A (en) Media organization
CN110457400B (en) Event correlation method and device and storage device
CN114745594A (en) Method, device, electronic device and storage medium for generating live playback video
US20180096358A1 (en) Methods and Systems for Managing User Experience Design
CN110457468B (en) Event classification method and device and storage device
CN107767156A (en) A kind of information input method, apparatus and system
US10338780B2 (en) System and method for graphical resources management and computer program product with application for graphical resources management
CN110471993B (en) Event correlation method and device and storage device
JP2021039618A (en) Information processing system, information processing device, information processing method and program
US11636440B2 (en) Electronic dynamic calendar system and operation method thereof
WO2022183814A1 (en) Voice annotation and use method and device for image, electronic device, and storage medium
JP2020057272A (en) Workshop support system and workshop support method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant