
CN119987604A - Content generation method and device, computing device, medium, system and program product - Google Patents

Info

Publication number
CN119987604A
Authority
CN
China
Prior art keywords
content
user
target
content material
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411998914.1A
Other languages
Chinese (zh)
Inventor
李嘉稷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yizi Shanghai Technology Co ltd
Original Assignee
Yizi Shanghai Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yizi Shanghai Technology Co ltd filed Critical Yizi Shanghai Technology Co ltd
Priority to CN202411998914.1A priority Critical patent/CN119987604A/en
Publication of CN119987604A publication Critical patent/CN119987604A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models
    • G06N 5/041 Abduction
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0475 Generative networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract


A content generation method and apparatus, a computing device, a medium, a system and a program product are provided. The content generation method comprises: displaying a user interaction interface, the user interaction interface comprising a canvas area; providing at least one element box in the canvas area; receiving first user intention information input by a user in a first element box; in response to the user's run operation on the first element box, obtaining first content material according to the first user intention information by using a pre-set intelligent agent corresponding to the first user intention information; and generating target content according to the first content material. The content generation method and apparatus, computing device, medium and system according to the present disclosure solve the problem that a linear dialogue between a user and an intelligent agent cannot meet the content generation requirements of some complex scenarios, and can improve the interactivity between the user and the intelligent agent to meet the requirements of more application scenarios.

Description

Content generation method and device, computing device, medium, system and program product
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a content generating method and apparatus, a computing apparatus, a medium, a system, and a program product.
Background
With the development of computer technology, automatic assistance systems can help people write content more efficiently. For example, they can generate various text contents, and such text can comprise elements in different formats, such as characters, pictures, tables and symbols.
In the related art, desired content may be generated through a dialogue between a user and an artificial intelligence (AI) based agent (e.g., a large language model). However, during such a dialogue the degree to which the user can intervene is very limited: the user can only converse linearly with the current agent, which may not meet the content generation requirements of some complex scenarios.
Disclosure of Invention
Exemplary embodiments of the present disclosure may solve at least the above problems.
According to a first aspect of the disclosure, a content generation method is provided. The content generation method comprises: displaying a user interaction interface, the user interaction interface comprising a canvas area; providing at least one element frame in the canvas area, wherein the element frame is used for interacting with a preset agent and the at least one element frame comprises a first element frame; receiving first user intention information input by a user in the first element frame; in response to an operation of the user on the first element frame, acquiring first content material according to the first user intention information by using the preset agent corresponding to the first user intention information; and generating target content according to the first content material.
Optionally, the providing at least one element box in the canvas area includes providing a first element box in the canvas area in response to a user operation to add the first element box in the canvas area.
Optionally, the content generation method further comprises displaying an operation control at a predetermined position near the first element frame, wherein the acquiring, in response to the operation of the user on the first element frame, the first content material according to the first user intention information by using the preset agent corresponding to the first user intention information comprises: in response to a click operation of the user on the operation control, acquiring the first content material according to the first user intention information by using the preset agent corresponding to the first user intention information.
Optionally, after the first content material is acquired, the method further comprises displaying the acquired first content material at a preset position near the first element frame, and displaying the association relation between the first element frame and the acquired first content material.
Optionally, the first user intention information includes target agent information, where the target agent is used to obtain the first content material.
Optionally, the generating the target content according to the first content material comprises: providing a robot dialogue window in which a dialogue box is arranged; receiving a content generation instruction input by the user through the dialogue box; and generating the target content according to the first content material based on the content generation instruction.
Optionally, the canvas area contains a plurality of acquired first content materials, and the generating the target content according to the first content material based on the content generation instruction comprises: if the content generation instruction designates specific ones of the plurality of first content materials to be used for generating the target content, generating the target content according to the designated first content materials; or, if the content generation instruction does not designate specific first content materials to be used for generating the target content, generating the target content according to all the first content materials in the canvas area.
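The selection rule in this embodiment (use the materials the instruction designates, otherwise use everything on the canvas) reduces to a small filter; the dictionary shape and `id` field below are assumptions made for illustration only:

```python
def pick_materials(referenced_ids, all_materials):
    """Use only the materials the instruction names; otherwise use them all."""
    if referenced_ids:
        return [m for m in all_materials if m["id"] in referenced_ids]
    return list(all_materials)

materials = [{"id": "m1", "text": "chart"}, {"id": "m2", "text": "summary"}]
specified = pick_materials({"m2"}, materials)   # instruction names m2 only
everything = pick_materials(set(), materials)   # no materials named
```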
Optionally, the providing at least one element frame in the canvas area comprises: in response to an instruction of the user to import a target template, displaying the target template in the canvas area, wherein the target template comprises a plurality of element frames including the first element frame and a second element frame; and the generating the target content according to the first content material comprises: receiving a content generation instruction input by the user in the second element frame; and, in response to an operation of the user on the second element frame, generating the target content according to the first content material by using a preset agent corresponding to the content generation instruction.
Optionally, in the target template, a user operation prompt message corresponding to each element frame is displayed at a preset position near the element frame.
Optionally, the acquiring the first content material according to the first user intention information by using the preset agent corresponding to the first user intention information comprises: acquiring a plurality of data resources according to the first user intention information by using the preset agent corresponding to the first user intention information, and displaying the plurality of data resources in the canvas area; and, in response to a selection operation of the user on target data resources among the plurality of data resources, taking the selected target data resources as the first content material.
Optionally, the first user intention information includes target workflow information, the target workflow is used for obtaining the first content material, the target workflow includes a plurality of subtasks and a sequential execution relationship among the plurality of subtasks, wherein the obtaining the first content material according to the first user intention information by using a preset agent corresponding to the first user intention information includes running the target workflow by using a preset agent corresponding to the target workflow to execute the corresponding subtasks according to the sequential execution relationship of the subtasks in the target workflow, and generating the first content material.
Optionally, the running the target workflow by using the preset agent corresponding to the target workflow, executing the corresponding subtasks according to the sequential execution relationship of the subtasks in the target workflow, and generating the first content material comprises generating the first content material by executing at least one subtask execution operation, wherein the subtask execution operation comprises: providing a current element frame; receiving current user intention information input by the user, the current user intention information being associated with the content material acquired by the previous operation; and, in response to an operation of the user on the current element frame, acquiring and displaying current content material by using a preset agent corresponding to the current user intention information, according to the current user intention information and the content material acquired by the previous operation. When the first subtask is executed, the current element frame is the first element frame, the current user intention information is the first user intention information, and the content material acquired by the previous operation is empty; when the last subtask is executed, the current content material is the first content material.
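The sequential subtask execution described above can be sketched as a simple loop in which each agent receives the previous operation's material (empty for the first subtask); the toy `agent_for` dispatcher is invented for illustration:

```python
def run_workflow(subtasks, agent_for):
    """Execute subtasks in their sequential order; each subtask's agent
    receives the current intent and the material from the previous
    operation (None for the first subtask)."""
    previous_material = None
    for intent in subtasks:
        agent = agent_for(intent)
        previous_material = agent(intent, previous_material)
    return previous_material  # material of the last subtask

# Toy agents: each step appends its intent to the running material.
def agent_for(intent):
    def agent(text, prev):
        return f"{prev} -> {text}" if prev else text
    return agent

result = run_workflow(["collect data", "draft outline", "write copy"], agent_for)
```

Under this toy dispatcher, `result` is the chained string produced by the final subtask, mirroring how the last subtask's output becomes the first content material.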
Optionally, the content generation method further comprises: providing a third element frame in the canvas area; receiving a program interface of a target program input by the user in the third element frame; displaying an interface of the target program in the third element frame in response to an operation of the user on the third element frame; and, in response to an access operation of the user on the target program in the third element frame, loading content in the target program into the canvas area, or uploading content in the canvas area to the target program.
Optionally, the content generation method further comprises generating second target content according to the content in the target program.
Optionally, after the first content material is acquired, the content generation method further comprises the steps of acquiring a second content material based on the first content material, and generating target content according to the first content material comprises the step of generating the target content according to the second content material.
Optionally, after the first content material is acquired, the content generation method further comprises displaying the first content material at a predetermined position near the first element frame and displaying the association relationship between the first element frame and the first content material, and the acquiring the second content material based on the first content material comprises: providing a fourth element frame in the canvas area in response to a fourth-element-frame adding instruction of the user; connecting the fourth element frame with the first content material in response to an association operation of the user on the fourth element frame and the first content material; receiving second user intention information input by the user in the fourth element frame; and, in response to an operation of the user on the fourth element frame, acquiring and displaying the second content material according to the first content material by using a preset agent corresponding to the second user intention information, and displaying the association relationship between the fourth element frame and the acquired second content material, so as to obtain an information stream of the second content material.
Optionally, the content generation method further comprises the steps of responding to the editing operation of the user on the information stream, adjusting the association relation, the user intention information or the content material information in the information stream based on the editing operation, and/or deleting the information stream in response to the deleting operation of the user on the information stream.
Optionally, the content generation method further comprises the step of responding to detail display operation of a user on a target element frame in the canvas area, and displaying relevant data and/or deduction logic corresponding to the target element frame.
Optionally, the content generation method further comprises: in response to an operation of the user to add one or more element frames in the canvas area, providing the one or more element frames in the canvas area; receiving a first editing operation of the user on the one or more element frames and/or a second editing operation on the association relationships among the element frames; and, in response to an instruction of the user to create a template, saving the one or more element frames as a template for generating content.
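Saving a group of element frames and their association relationships as a reusable template amounts to serializing the frame graph; the JSON layout below is a hypothetical sketch, not a format defined by the disclosure:

```python
import json
import os
import tempfile

def save_template(boxes, edges, path):
    """Persist element frames and their association relations as a template."""
    with open(path, "w") as f:
        json.dump({"boxes": boxes, "edges": edges}, f)

def load_template(path):
    """Restore a previously saved template from disk."""
    with open(path) as f:
        return json.load(f)

boxes = [{"id": "b1", "intent": "fetch trend data"},
         {"id": "b2", "intent": "write article from b1"}]
edges = [["b1", "b2"]]   # association: b1's material feeds b2

path = os.path.join(tempfile.mkdtemp(), "template.json")
save_template(boxes, edges, path)
restored = load_template(path)
```

Importing such a template would then recreate the frames and their connections on a fresh canvas, matching the template-import embodiment described earlier.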
According to a second aspect of the present disclosure, there is provided a content generating apparatus including: a presentation unit configured to present a user interaction interface including a canvas area; a providing unit configured to provide, in the canvas area, at least one element frame for interacting with a preset agent, the at least one element frame including a first element frame; a receiving unit configured to receive first user intention information input in the first element frame by a user; an obtaining unit configured to, in response to an operation of the user on the first element frame, obtain first content material according to the first user intention information by using the preset agent corresponding to the first user intention information; and a generating unit configured to generate target content according to the first content material.
Optionally, the providing unit is further configured to provide the first element frame in the canvas area in response to a user operation to add the first element frame in the canvas area.
Optionally, the content generating device further comprises a display unit, wherein the display unit is configured to display a running control at a preset position near the first element frame, and the acquisition unit is further configured to acquire the first content material according to the first user intention information by utilizing a preset agent corresponding to the first user intention information in response to a click operation of the running control by a user.
Optionally, the content generating apparatus further includes a relationship display unit, where the relationship display unit is further configured to display, after the acquiring unit acquires the first content material, the acquired first content material at a predetermined position near the first element frame, and display an association relationship between the first element frame and the acquired first content material.
Optionally, the first user intention information includes target agent information, where the target agent is used to obtain the first content material.
Optionally, the generating unit is further configured to provide a robot dialogue window, wherein a dialogue box is arranged in the robot dialogue window, receive a content generating instruction input by a user through the dialogue box, and generate the target content according to the first content material based on the content generating instruction.
Optionally, the canvas area contains a plurality of acquired first content materials, and the generating unit is further configured to: generate the target content according to the designated first content materials if the content generation instruction designates specific ones of the plurality of first content materials; or generate the target content according to all the first content materials in the canvas area if the content generation instruction does not designate specific first content materials.
Optionally, the providing unit is further configured to display the target template in the canvas area in response to an instruction of the user to import a target template, wherein the target template comprises a plurality of element frames including the first element frame and a second element frame; and the generating unit is further configured to receive a content generation instruction input by the user in the second element frame and, in response to a run operation of the user on the second element frame, generate the target content according to the first content material by using a preset agent corresponding to the content generation instruction.
Optionally, in the target template, a user operation prompt message corresponding to each element frame is displayed at a preset position near the element frame.
Optionally, the obtaining unit is further configured to obtain a plurality of data resources according to the first user intention information by using a preset agent corresponding to the first user intention information, and display the plurality of data resources in the canvas area, and respond to a user selection operation of a target data resource in the plurality of data resources, and take the selected target data resource as the first content material.
Optionally, the first user intention information includes target workflow information, the target workflow is used for acquiring the first content material, the target workflow includes a plurality of subtasks and a sequential execution relationship among the subtasks, and the acquiring unit is further configured to run the target workflow by using a preset agent corresponding to the target workflow, so as to execute the corresponding subtasks according to the sequential execution relationship of the subtasks in the target workflow, and generate the first content material.
Optionally, the obtaining unit is further configured to generate the first content material by executing at least one subtask execution operation, wherein the subtask execution operation comprises: providing a current element frame; receiving current user intention information input by the user, the current user intention information being associated with the content material obtained by the previous operation; and, in response to an operation of the user on the current element frame, obtaining and displaying current content material according to the current user intention information and the content material obtained by the previous operation, by using a preset agent corresponding to the current user intention information. When the first subtask is executed, the current element frame is the first element frame, the current user intention information is the first user intention information, and the content material obtained by the previous operation is empty; when the last subtask is executed, the current content material is the first content material.
Optionally, the content generating device further comprises a program access unit, wherein the program access unit is configured to provide a third element box in the canvas area, receive a program interface of a target program input by a user in the third element box, display an interface of the target program in the third element box in response to operation of the third element box by the user, and load content in the target program into the canvas area or upload the content in the canvas area to the target program in response to access operation of the target program in the third element box by the user.
Optionally, the generating unit is further configured to generate a second target content from the content in the target program.
Optionally, the obtaining unit is further configured to obtain a second content material based on the first content material after obtaining the first content material, wherein the generating unit is further configured to generate the target content according to the second content material.
Optionally, the content generating apparatus further comprises a relationship display unit configured to display the first content material at a predetermined position near the first element frame after the obtaining unit obtains the first content material, and to display the association relationship between the first element frame and the first content material; the obtaining unit is further configured to: provide a fourth element frame in the canvas area in response to a fourth-element-frame adding instruction of the user; connect the fourth element frame with the first content material in response to an association operation of the user on the fourth element frame and the first content material; receive second user intention information input by the user in the fourth element frame; and, in response to an operation of the user on the fourth element frame, obtain and display the second content material according to the first content material by using a preset agent corresponding to the second user intention information, and display the association relationship between the fourth element frame and the acquired second content material, so as to display an information stream of the second content material.
Optionally, the content generating device further comprises an information stream editing unit, wherein the information stream editing unit is configured to respond to the editing operation of a user on the information stream, adjust the association relation, the user intention information or the content material information in the information stream based on the editing operation, and/or respond to the deleting operation of the user on the information stream to delete the information stream.
Optionally, the content generating device further comprises a detail display unit, wherein the detail display unit is configured to respond to detail display operation of a user on a target element box in the canvas area, and display related data and/or deduction logic corresponding to the target element box.
Optionally, the content generating device further comprises a template generating unit, wherein the template generating unit is configured to respond to the operation of adding one or more element frames in the canvas area by a user, provide the one or more element frames in the canvas area, receive the first editing operation of the one or more element frames and/or the second editing operation of the association relation between the element frames by the user, and store the one or more element frames as templates for generating the content in response to the instruction of creating the template by the user.
According to a third aspect of the present disclosure, there is provided a computing device comprising a processor, a memory for storing the processor-executable instructions, wherein the processor-executable instructions, when executed by the processor, cause the processor to perform a content generation method according to an embodiment of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed by at least one computing device, cause the at least one computing device to perform a content generation method according to an embodiment of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a system comprising at least one computing device and at least one storage device storing instructions that, when executed by the at least one computing device, cause the at least one computing device to perform a content generation method according to an embodiment of the present disclosure.
According to a sixth aspect of the present disclosure, there is provided a computer program product comprising instructions which, when executed by at least one computing device, cause the at least one computing device to perform a content generation method as described in embodiments of the present disclosure.
According to the content generation method, apparatus, computing device, medium and system of the exemplary embodiments of the present disclosure, at least one element frame can be provided in the canvas area of a user interaction interface, first user intention information input by the user in the first element frame is received, and, in response to an operation of the user on the first element frame, first content material is generated according to the first user intention information by using the corresponding agent, so that target content is generated according to the first content material. In this way, the user can first generate content material related to the user's intention by interacting with the first element frame, and then complete the generation of the target content using such material. This improves the interactivity between the user and the agents, avoids the excessive limitation that a single linear dialogue interaction imposes on the content generation process, increases the flexibility of that process, and meets the requirements of more application scenarios.
Drawings
These and/or other aspects and advantages of the present disclosure will become apparent from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flowchart illustrating a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 2 is a schematic diagram illustrating a user interaction interface in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 3 and 4 are diagrams illustrating introduction of a target template in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 5 is a schematic diagram illustrating a target template in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 6 is a schematic diagram illustrating input of target agent information and target workflow information in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 7 is a schematic diagram illustrating generation of a picture in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 8 is a schematic diagram illustrating generation of text in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 9 is a schematic diagram illustrating presentation data resources in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 10A to 10G are diagrams illustrating generation of target content based on a target workflow in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 11 is a schematic diagram illustrating marking of data resources in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 12 is a schematic diagram illustrating a content generation instruction in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 13A and 13B are diagrams illustrating acquisition of a second content material in a content generating method according to an exemplary embodiment of the present disclosure.
Fig. 14 is a schematic diagram illustrating access to a target program in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 15 is a schematic diagram illustrating an editing operation on an information stream in a content generating method according to an exemplary embodiment of the present disclosure.
Fig. 16 is a schematic diagram showing a presentation interface of a detail presentation operation in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 17 is a schematic diagram illustrating prediction information in a content generation method according to an exemplary embodiment of the present disclosure.
Fig. 18 is a block diagram illustrating a content generating apparatus according to an exemplary embodiment of the present disclosure.
Detailed Description
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of the embodiments of the disclosure defined by the claims and their equivalents. Various specific details are included to aid understanding, but are merely to be considered exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
It should be noted that, in this disclosure, "at least one of the items" covers three parallel cases: "any one of the items", "a combination of any of the items", and "all of the items". For example, "comprising at least one of A and B" covers the three parallel cases of (1) comprising A, (2) comprising B, and (3) comprising A and B. Likewise, "at least one of the first step and the second step is executed" indicates three parallel cases: (1) the first step is executed, (2) the second step is executed, and (3) both the first step and the second step are executed.
A content generating method, a content generating apparatus, a computing apparatus, a computer-readable storage medium, and a system including at least one computing apparatus and at least one storage apparatus storing instructions according to an exemplary embodiment of the present disclosure are described below with reference to the accompanying drawings.
In a first aspect, exemplary embodiments of the present disclosure propose a content generation method to solve at least one of the problems in the related art.
The execution subject of the content generation method according to the exemplary embodiment of the present disclosure may be any type of computing device capable of performing content generation, for example, the computing device may be loaded with a content generation platform with which a user may interact to generate target content according to the content generation method according to the exemplary embodiment of the present disclosure. The computing device may be, but is not limited to, a desktop computer, a notebook computer, a tablet computer, a personal digital assistant, a smart phone, or other device capable of performing content generation.
Fig. 1 is a flowchart illustrating a content generation method according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the content generation method may include:
at step S110, a user interaction interface may be presented, the user interaction interface comprising a canvas area.
At step S120, at least one element box may be provided in the canvas area.
Here, the canvas area may be used for the user to interact with agents and may present information about the user's interaction with the agents. An element box may be used to interact with a preset agent, for example, to receive instructions or information input by the user, or to present content or material produced by the agent to the user. Further, the position of an element box in the canvas area may be adjusted arbitrarily by the user, who may move it to any position in the canvas area through a drag operation. Further, the agents described herein may be built on an agent architecture such as, but not limited to, AutoGPT, CAMEL, or MetaGPT.
As an example, an element box may receive instructions or information entered by the user, and, in response to a run operation performed by the user on the element box, the agent may perform the corresponding task according to the user's input in the element box. As an example, in response to the agent completing the corresponding task according to the user's input, a new element box may be generated in the canvas area to present the content or material produced by the agent.
An element box may also be referred to herein as a container, which may be, for example, a rich-media frame. One or more blocks may be contained in an element box, and the blocks in an element box may have different data types. The data types of the blocks may include, for example, but are not limited to, plain text, the lightweight markup language Markdown, files in formats such as Excel, PDF, CSV, and MP, pictures, code, links, and hypertext markup language (HTML) pages.
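The container-and-block structure described above can be sketched as follows. This is a minimal illustrative model only, not the disclosed implementation; all class, field, and type names are hypothetical, and the set of block types merely mirrors the examples named in the text:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical block types mirroring the examples named in the text.
BLOCK_TYPES = {"text", "markdown", "excel", "pdf", "csv", "image",
               "code", "link", "html"}

@dataclass
class Block:
    """One typed unit of content inside an element box."""
    kind: str     # one of BLOCK_TYPES
    payload: str  # raw content or a file/resource reference

    def __post_init__(self):
        if self.kind not in BLOCK_TYPES:
            raise ValueError(f"unsupported block type: {self.kind}")

@dataclass
class ElementBox:
    """A draggable container in the canvas area holding one or more blocks."""
    box_id: str
    position: Tuple[int, int] = (0, 0)  # canvas coordinates
    blocks: List[Block] = field(default_factory=list)

    def move_to(self, x: int, y: int) -> None:
        # The user may drag a box to any position in the canvas area.
        self.position = (x, y)
```

A single box can thus mix blocks of different data types (for example a Markdown block next to an image block), which matches the statement that the blocks in one container need not share a data type.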
In embodiments of the present disclosure, the element boxes in the canvas area may include a first element box. The first element box may be for receiving first user intent information entered by a user. Several examples of providing the first element box are given below.
In an example, in step S120, a first element box may be provided in a canvas area in response to a user' S operation to add the first element box in the canvas area.
For example, fig. 2 illustrates an example of a user interaction interface. As shown in fig. 2, the user interaction interface may include a canvas area 210 and a toolbar 220, where the toolbar 220 may include an element box generation control 221. According to embodiments of the present disclosure, a first element box, such as element box 231 in fig. 2, may be added in the canvas area in response to the user triggering the element box generation control 221 in the toolbar 220. It will be appreciated that the user interaction interface of fig. 2 is merely an example, and the styles of the toolbar 220 and the element box generation control 221 in an actual product are not limited.
Here, in response to a user's operation to add the first element frame multiple times, multiple first element frames may be provided in the canvas area, which may each be independently operated, e.g., independently run, by the user.
In another example, step S120 may include presenting a target template in the canvas area in response to a user instruction to introduce the target template, where the target template may include a plurality of element boxes including the first element box and a second element box, such that the first element box is provided in the target template. Here, the second element box may be used to receive a content generation instruction input by the user (to be described in detail below).
Here, the target templates may be content generation templates set in advance, each of the templates may be preset with element frames for generating content, and different templates may be used for generating different types of content, and thus the number of element frames, functions, association relations between element frames, and the like contained in the different templates may be different.
For example, fig. 3 and 4 illustrate an example interface for importing templates according to embodiments of the present disclosure. As shown in fig. 3, the user interaction interface may further include a management option bar 240, and the management option bar 240 may include a document management option 241, a template management option 242, and the like. The document management option 241 may be used to query and manage historical documents, such as generated content material, generated content, and documents created or uploaded by the user. The template management option 242 may be used to query and manage preset content generation templates. In response to a selection operation of the template management option 242 by the user, the preset content generation templates may be presented in the user interaction interface, as shown in fig. 4. In response to a selection operation of a target template among the preset content generation templates by the user, the target template may be presented in the canvas area, as shown in fig. 5, in which a first element box, such as element box 232 in fig. 5, may be provided.
Although several examples of providing the first element box are given above, embodiments of the present disclosure are not limited thereto, and the first element box may be provided in other manners; for example, it may be drawn by the user by boxing or dragging a graphic in the canvas area, or it may be generated by an agent in the canvas area during a dialogue between the user and the agent (e.g., a dialogue robot to be described below).
Further, as an example, the content generation method further includes: providing one or more element boxes in the canvas area in response to a user operation of adding the one or more element boxes in the canvas area; receiving a first editing operation performed by the user on the one or more element boxes themselves and/or a second editing operation on the association relationships between the element boxes; and saving the one or more element boxes as a template for generating content in response to a user instruction to create the template. In this way, the user is free to create templates for subsequent introduction, thereby generating related content more quickly.
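Saving the edited element boxes and their associations as a reusable template might be sketched as a simple serialization round trip; this is an illustrative assumption only (the disclosure does not specify a storage format), and every function name and field is hypothetical:

```python
import json
from typing import Dict, List, Tuple

def save_template(name: str,
                  boxes: List[Dict],
                  links: List[Tuple[str, str]]) -> str:
    """Serialize the element boxes (after the user's first editing
    operation) and the directed association relationships between them
    (source_box_id, target_box_id pairs, after the second editing
    operation) as one reusable content generation template."""
    template = {
        "name": name,
        "boxes": boxes,                     # per-box settings after user edits
        "links": [list(l) for l in links],  # association relationships
    }
    return json.dumps(template)

def load_template(blob: str) -> Dict:
    """Re-introduce a saved template, e.g. for presenting its element
    boxes in the canvas area."""
    return json.loads(blob)
```

Note that JSON serialization turns the link tuples into lists; a real implementation would also need to persist each box's position and block contents.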
Referring back to fig. 1, at step S130, first user intention information input by a user in a first element box may be received.
Here, the first user intention information may be information related to a content material or a target content that the user desires to generate.
The first user-intention information may include user-defined content, which may include, for example, but is not limited to, keywords related to content material or target content that the user desires to generate, sentences expressing user intention, and the like.
As an example, the first user intention information may further include at least one of target agent information, target workflow information, and content material information generated in advance. The target agent information, the target workflow information, or the pre-generated content material information may form first user intention information together with the user-defined content.
The target agent information may be used to designate any of the preset agents, and the target agent may be used to obtain the first content material.
As an example, a plurality of agents may be preset, each agent may perform a different task, and a user may input information of a target agent capable of performing a current task in the first element box according to a need.
For example, the target agent information may be input by displaying information of an agent set in advance at the first element frame in response to an insertion operation of the user with respect to the first element frame, and inputting the target agent information in the first element frame in response to a selection operation of the agent by the user.
The target workflow information may be used to specify any of the preset workflows, and the target workflow may be used to obtain the first content material.
As an example, a plurality of workflows may be preset, each workflow may perform a different task, and a user may input information of a target workflow capable of performing a current task in a first element box according to a requirement.
Similarly to the input of the target agent information, the target workflow information may be input by displaying information of a workflow set in advance at the first element frame in response to an insertion operation of the user with respect to the first element frame, and inputting the target workflow information in the first element frame in response to a selection operation of the workflow by the user.
For example, as shown in fig. 6, the user may perform an insertion operation by, for example, right-clicking a region within the first element box with a mouse. In response to the insertion operation, information on the selectable preset agents and preset workflows may be presented at the first element box, for example, in a downward-expanding list. The preset agents may include a search header view agent, a picture search agent, a picture generation agent, an upstream and downstream word agent, a marking agent, and the like, and the preset workflows may include a smart card workflow, a screening damard workflow, and the like.
The user can perform selection operation by clicking on the target agent or the target workflow, and input target agent information or target workflow information in the first element box in response to the selection operation of the user.
The target agent information may include, for example, an introduction prompt for indicating the introduction of an agent, for example a preset symbol such as "@", and an agent identifier for indicating the introduced agent, for example the name of the agent. Taking the picture generation agent in fig. 6 as an example, the target agent information input in the first element box may be, for example, "@picture generation". Similarly, the target workflow information may include an introduction prompt and a workflow identifier indicating the introduced workflow, which may be, for example, the name of the workflow. Taking the smart card workflow in fig. 6 as an example, the target workflow information input in the first element box may be, for example, "@smart card workflow".
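Separating the user-defined content from the "@"-introduced identifiers could be done as below. This is a toy sketch under the simplifying assumption that each "@" segment names exactly one preset identifier; the registries and all function names are hypothetical, not part of the disclosure:

```python
from typing import Dict, List

# Hypothetical registries of preset agents and workflows.
AGENTS = {"picture generation", "picture search", "marking"}
WORKFLOWS = {"smart card workflow"}

def parse_intent(text: str) -> Dict[str, object]:
    """Split element-box input into user-defined content plus any target
    agent or target workflow introduced with the '@' prompt symbol."""
    parts = text.split("@")
    content = parts[0].strip()
    agents: List[str] = []
    workflows: List[str] = []
    for raw in parts[1:]:
        name = raw.strip()
        if name in AGENTS:
            agents.append(name)
        elif name in WORKFLOWS:
            workflows.append(name)
        else:
            # Not a known identifier: keep it as ordinary content.
            content = (content + " @" + name).strip()
    return {"agents": agents, "workflows": workflows, "content": content}
```

For instance, the input "generate a picture of a car in a 4s store @picture generation" would yield the picture generation agent as the target agent and the leading text as the user-defined content.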
The content material information generated in advance may be content material or content generated before the current target content generation process is performed. In embodiments of the present disclosure, content material generated in the middle of generating content and the final generated target content are both stored as documents, such as may be found and managed in the document management options 241 described hereinabove.
For example, the pre-generated content material information may be input by presenting information on the generated content material at the first element box in response to an insertion operation performed by the user with respect to the first element box, and inputting the generated content material information in the first element box in response to a selection operation of the generated content material by the user. The generated content material information may include, for example, an introduction prompt and a content material identifier.
At step S140, in response to an operation performed by the user on the first element box, the first content material may be acquired according to the first user intention information, using a preset agent corresponding to the first user intention information.
Here, the operation performed by the user with respect to the first element frame may be used to trigger the corresponding agent to perform the step of acquiring the first content material according to the first user intention information.
By way of example, the content generation method further includes displaying a run control at a predetermined location near the first element box. In this example, the step S140 may include acquiring the first content material from the first user intention information using an agent corresponding to the first user intention information set in advance in response to a click operation of the run control by the user.
For example, in response to a clicking operation of the first element frame by the user, a running control may be displayed near the first element frame, and after the user clicks the running control, a preset agent corresponding to the first user intention information may be triggered to acquire the first content material. As shown in FIG. 2, in response to a user clicking on the first element box 231, a component bar including a run control 271 may be presented, and in response to a user clicking on the run control 271, an agent may be triggered to acquire the first content material.
However, the operation of the user on the first element frame is not limited to the above example, and the operation control may be displayed by, for example, hovering a cursor in the area of the first element frame by the user, or the operation may be triggered by, for example, double-clicking the first element frame.
In this step S140, the agent corresponding to the first user intention information may refer to an agent capable of recognizing the first user intention information and acquiring the corresponding first content material. Taking the first user intention information described above as an example: in the case where the first user intention information includes target agent information or target workflow information, the agent corresponding to the first user intention information may be the target agent or an agent related to the target workflow; in the case where the first user intention information includes only user-defined content, without target agent information or target workflow information, the agent corresponding to the first user intention information may be a preset default agent. Here, the target agent, the agent related to the target workflow, and the preset default agent may be the same or different.
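The dispatch rule just described can be sketched as follows; the `Agent` stand-in and the registry names are illustrative assumptions only, since the disclosure specifies the selection rule but not its implementation:

```python
class Agent:
    """Toy stand-in for a preset agent."""
    def __init__(self, name: str):
        self.name = name

    def run(self, intent: dict) -> str:
        # Placeholder for the agent's real task execution.
        return f"{self.name}:{intent.get('content', '')}"

# Hypothetical registries.
DEFAULT_AGENT = Agent("default")
AGENT_REGISTRY = {"picture generation": Agent("picture generation")}
WORKFLOW_AGENTS = {"smart card workflow": Agent("smart card runner")}

def resolve_agent(intent: dict) -> Agent:
    """Pick the agent corresponding to the first user intention info:
    a named target agent first, else the target workflow's agent,
    else the preset default agent for plain user-defined content."""
    if intent.get("agents"):        # target agent introduced with '@'
        return AGENT_REGISTRY[intent["agents"][0]]
    if intent.get("workflows"):     # target workflow introduced with '@'
        return WORKFLOW_AGENTS[intent["workflows"][0]]
    return DEFAULT_AGENT            # only user-defined content
```

The three registry entries may of course point to the same underlying agent, consistent with the note that the target agent, the workflow-related agent, and the default agent may be the same or different.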
The above agent may acquire the first content material according to the first user intention information, and several examples of acquiring the first content material using the agent will be given below.
In an example, in a case where the first user intention information includes user-defined content and target agent information, the target agent may be utilized to perform a preset task corresponding to the target agent for the user-defined content, and acquire the first content material. For example, as shown in FIG. 7, a picture generation agent may be utilized to generate a picture related to user-defined content "generate a picture of an automobile in a 4s store, photo level".
In another example, in a case where the first user intention information includes user-defined content, pre-generated content material information, and target agent information, the target agent may be used to perform a preset task corresponding to the target agent on the user-defined content and the pre-generated content material, and acquire the first content material. For example, as shown in fig. 8, an investigation report generation agent may be used to generate text related to the pre-generated content material "mena search presentation" and the user-defined content "what does this document say? summarize it in two sentences".
As an example, after the first content material is acquired, the content generating method may further include displaying the acquired first content material at a predetermined position near the first element frame, and displaying an association relationship of the first element frame and the acquired first content material.
Specifically, in response to the agent generating the first content material, a new element box may be presented at a predetermined location in the vicinity of the first element box to present the first content material (e.g., the picture of fig. 7 or the text of fig. 8) in the element box, and the first element box may be connected to the element box by a graphic such as an arrow or a connection line to present the relationship between the first element box and the first content material.
In this way, the information flow can be visually presented, so that the user can intuitively and quickly locate the relationship between the user intention input by the user and the content responded by the agent.
While examples of directly taking content material acquired by an agent as the first content material are described above, in embodiments of the present disclosure, user interactions with the content material may also be received after the agent acquires the content material to determine the final first content material.
The step of obtaining the first content material may include obtaining a plurality of data resources according to the first user intention information using a preset agent corresponding to the first user intention information and displaying the plurality of data resources in the canvas area, and taking the selected target data resource as the first content material in response to a user selection operation of the target data resource of the plurality of data resources.
For example, in the example of generating target content based on the target template shown in fig. 5, in response to the user inputting first user intention information, such as the keyword "camp", in the first element box 232, since no target agent is specified in the user intention information, data resources related to the keyword "camp" may be searched for by a preset default agent (e.g., an information collecting agent), and, as shown in fig. 9, the data resources may be presented to the user. In response to a selection operation of target data resources by the user, for example checking some of the data resources in fig. 9, the selected target data resources may be taken as the first content material.
By the method, after the intelligent agent acquires the primary content materials, the user can be allowed to further interact and participate in the selection process of the first content materials, so that the finally obtained first content materials are closer to the requirements of the user, and target content which accords with the user intention is better generated.
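The fetch-then-filter interaction above might be sketched as below; the callable-based design and all names are hypothetical assumptions, with the agent's search reduced to an injected function:

```python
from typing import Callable, Dict, List, Set

def select_first_content_material(
        search: Callable[[str], List[Dict]],
        keyword: str,
        checked_ids: Set[str]) -> List[Dict]:
    """Fetch candidate data resources for the keyword via an agent-like
    search callable, then keep only the resources the user checked;
    that selection becomes the first content material."""
    candidates = search(keyword)
    return [r for r in candidates if r["id"] in checked_ids]
```

Injecting the search function keeps the user-selection step independent of which preset agent (default or target) produced the candidates.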
Although the example process of exposing data resources for user selection is described above by taking a target template as an example, embodiments of the present disclosure are not limited thereto, and data resource exposure and selection may also be performed, for example, in the process of generating target content in a user-specified target workflow.
The process of acquiring the first content material based on the target workflow will be described in detail below. As described above, the first user intention information may include target workflow information, and the target workflow may include a plurality of subtasks and a sequential execution relationship between the plurality of subtasks, where the subtasks may be preset.
In an example of obtaining the first content material based on the target workflow, obtaining the first content material may include running the target workflow with an agent corresponding to the first user intention information set in advance to execute corresponding sub-tasks in an order execution relationship of sub-tasks in the target workflow, and generating the first content material.
Specifically, executing the corresponding sub-tasks in the sequential execution relationship of the sub-tasks in the target workflow, and generating the first content material may include generating the first content material by executing at least one sub-task execution operation.
The subtask execution operation may include: providing a current element box; receiving current user intention information input by the user, where the current user intention information is associated with the content material acquired by the previous operation; and, in response to an operation performed by the user on the current element box, acquiring and displaying the current content material according to the current user intention information and the content material acquired by the previous operation, using a preset agent corresponding to the current user intention information.
When the first subtask is executed, the current element box may be the first element box, the current user intention information may be the first user intention information, and the content material acquired by the previous operation is empty; in response to the operation performed by the user on the first element box, the content material of the first operation may be acquired and displayed according to the first user intention information, using a preset agent corresponding to the current user intention information. In each subsequent subtask execution operation, the content material of the current operation may be acquired and displayed based on the content material acquired and displayed in the previous subtask execution operation and the current user intention information. Here, the current user intention information may be associated with the content material acquired and presented by the previous subtask execution operation, for example, via the user's selection among that content material.
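The sequential chaining of subtasks can be sketched as a simple fold over the ordered steps; this is an illustrative reduction under the assumption that each subtask can be modeled as a function of (previous material, current user intention), with all names hypothetical:

```python
from typing import Callable, List, Optional

# A subtask: (material from the previous operation, current user
# intention information) -> new content material.
Step = Callable[[Optional[object], object], object]

def run_workflow(steps: List[Step], user_inputs: List[object]) -> object:
    """Execute subtasks in their sequential execution relationship.
    Each subtask receives the material produced by the previous
    operation (None for the first subtask) and the current user
    intention information, and returns new content material; the
    last result is the first content material."""
    material = None
    for step, intent in zip(steps, user_inputs):
        material = step(material, intent)
    return material
```

A two-step chain like the keyword-analysis-then-selection flow of figs. 10A to 10B could then be expressed as two small step functions fed with the user's successive inputs.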
Fig. 10A to 10D illustrate an example process of acquiring the first content material based on a target workflow. As shown in fig. 10A, the user may input first user intention information in the first element box 233, which may include, for example, the pre-generated content material "mena search presentation" and the user-defined content "the target user is a young girl, with 'vehicle suitable for girls' as the subject". A preset keyword analysis agent may be used to obtain and present initial content material related to the user intention information, such as the keywords presented in element box 234 in fig. 10A.
In response to acquiring and presenting the content material of the previous subtask execution operation, such as the keywords described above, a current element box of the current subtask execution operation, such as element box 235, may be provided to receive current user intention information, which may include, for example, the user's selection among the keywords, as shown in fig. 10A. In response to the operation performed by the user on the current element box 235, the content material of the current operation may be acquired and displayed according to the current user intention information and the content material acquired by the previous operation, for example according to the user's selection and the selected keywords, using an agent corresponding to the current user intention information, for example an information collecting agent. For example, as shown in fig. 10B, online collected data resources, such as public resources on a social network platform, may be acquired and displayed.
In response to acquiring and presenting the content material of the previous subtask execution operation, such as the online collected data resources described above, a current element box of the current subtask execution operation, such as element box 236 shown in fig. 10C, may be provided to receive current user intention information, which may include, for example, the user's selection among the online collected data resources. In response to the operation performed by the user on the current element box 236, the content material of the current operation, for example the selected data resources, may be acquired and displayed according to the current user intention information and the content material acquired by the previous operation, for example according to the user's selection. As shown in fig. 10D, the data resources may then be marked, where marking may refer to adding tags to the data resources; the tagged data resources may be stored in the form of a table, for example as an Excel file.
In this example, the marked-up data resource may be used as the first content material ultimately generated for generating the target content.
By the method, the subtasks are executed one by one according to the sequence of the target workflow, and the user can be guided to finish the process of acquiring the first content material and generating the target content, so that the user can quickly realize content generation under the guidance of the workflow even under the condition that the user does not know the complete flow of the content material and the content generation.
In addition, it should be noted that any subtask in the workflow may be performed as a separate content material generation process. The workflow may be regarded as agents capable of performing various tasks in an orderly manner to collectively complete the target content generation, and any agent performing a task in the workflow may be invoked separately to implement the corresponding function. For example, as shown in fig. 11, the user may, by inputting the marking agent information, invoke the marking agent to perform the data resource marking function, for example, marking a generated data resource specified by the user.
Further, in this step S140, the user may add a plurality of first element boxes in the canvas area, so as to input different first user intention information in each of the first element boxes. In response to a user operation on any target element box among the plurality of first element boxes, a preset agent corresponding to the first user intention information in that target element box may be used to acquire the corresponding first content material according to that first user intention information.
Here, interactions between the user and the different first element boxes are independent of each other, and accordingly, information streams generated by interactions between the user and the different first element boxes are also independent of each other, but the user can edit and integrate the information streams independent of each other, which will be described in detail below.
Referring back to fig. 1, in step S150, the target content may be generated from the first content material.
In this step, there may be one or more items of target content; for example, the target content may be presented in element boxes, and the target content selected by the user may be exported in batch in response to the user's selection of the target content.
In one example, the target content may be generated by receiving a content generation instruction input by a user in the second element box, and generating the target content according to the first content material by using an agent corresponding to the content generation instruction set in advance in response to a running operation of the user on the second element box.
Here, the content generation instruction may include, for example, but is not limited to, user-defined content, target agent information, pre-generated content material information, and the like. For example, in an example where the first user intention information includes target template information, as shown in fig. 5, the user may input a content generation instruction, such as pre-generated content material information, in the second element box 238. In response to a user operation on the second element box 238, an agent (e.g., a default agent) corresponding to the content generation instruction may generate target content, such as a plurality of product introduction documents, from the first content material and the pre-generated content material specified in the content generation instruction.
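The final combining step might look like the toy sketch below, which stands in for the document generation agent; the per-source candidate generation is purely an illustrative assumption (the disclosure does not state how candidates are derived), and all names are hypothetical:

```python
from typing import Dict, List

def generate_target_content(first_material: List[str],
                            instruction: Dict) -> List[str]:
    """Combine the first content material with any pre-generated
    material named in the content generation instruction and emit one
    candidate document per material item, as a toy stand-in for the
    document generation agent."""
    extra = instruction.get("materials", [])  # pre-generated material info
    sources = first_material + extra
    topic = instruction.get("content", "")    # user-defined content
    return [f"{topic}: draft based on {src}" for src in sources]
```

Producing several candidates matches the described flow in which multiple documents are presented and the user then selects the desired ones for export or saving.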
Further, as an example, in the target template, user operation prompt information corresponding to each element box may be displayed at a predetermined position near that element box, so as to prompt the user as to the function of each element box.
By generating content from a preset template in this manner, the entire flow for generating the target content can be visually displayed, so that the user can quickly generate the target content by following the guidance in the template, without having to master the complete interaction flow in depth. This makes the operation user-friendly and improves the efficiency with which the user obtains the desired target content.
In another example, where the first user intent information includes target workflow information, the first content material may be obtained based on the target workflow as described above, and the target content may likewise be generated from the first content material based on the target workflow.
Specifically, in the case where the first user intent information includes the target workflow information, in response to acquiring the first content material based on the target workflow, such as the marked data asset shown in fig. 10D, a second element box may be provided to receive the user's content generation instruction. For example, as shown in fig. 10E, the content generation instruction may include the pre-generated content material information "mena search presentation", and the first content material may include "marked data asset.xlsx". In response to the user inputting the above content generation instruction, target content may be generated from the first content material and the content material specified in the content generation instruction by using a preset agent corresponding to the content generation instruction, for example a document generation agent, and the target content may be presented; for example, as shown in fig. 10F, a plurality of documents may be presented. The user may select the desired target content among the presented target content (as shown in fig. 10G) to perform operations such as export or save.
In another example, where the first user intent information includes target agent information, the first content material may be acquired based on the target agent as described above, and the target content may be generated, according to the user interaction, based on the first content material acquired by one or more target agents.
Specifically, the step S150 may include providing a robot dialog window in which a dialog box is provided, receiving a content generation instruction input by a user through the dialog box, and generating target content from the first content material based on the content generation instruction.
Here, the content generation instruction may be, for example, information related to target content that the user desires to generate. The content generation instructions may include user-defined content, which may include, for example, but not limited to, keywords related to target content that the user desires to generate, sentences expressing user intent, and the like. As an example, the content generation instruction may further include at least one of target agent information, target workflow information, content material information generated in advance, for example, the user may specify that the target agent generates the target content by inputting the target agent information.
For example, as shown in FIG. 2, the user interaction interface may further include a robotic dialog window 250, within which a dialog box 251 is disposed. The dialog box 251 may receive a content generation instruction entered by the user, and the robotic dialog window 250 may, for example, be juxtaposed with the canvas area 210. In addition, the user interaction interface may include a dialog window evoking button 260; in response to the user clicking the dialog window evoking button 260, the robotic dialog window 250 may be presented in, or hidden from, the user interaction interface.
In accordance with embodiments of the present disclosure, a plurality of acquired first content materials may be contained in the canvas area; for example, the above process of acquiring the first content material may be performed multiple times to generate the plurality of first content materials. In this case, the step of generating the target content from the first content material based on the content generation instruction may include: if a specific first content material introduced by the user among the plurality of first content materials is specified in the content generation instruction, generating the target content from that specifically introduced first content material; or, if no such first content material is specified in the content generation instruction, generating the target content from all the first content materials in the canvas area.
Specifically, in the case where a specific first content material in the canvas area 210 is specified in the user's content generation instruction, the user may introduce any first content material in the canvas area by inputting first content material information in the dialog box (e.g., inputting an introduction prompt and the name of the first content material), or by dragging any first content material from the canvas area 210 into the dialog box. In this example, the content generation instruction may also include user-defined content, such as a question about the introduced content material, e.g., "what content is described in these materials".
In the event that no particular first content material is specified in the user's content generation instruction, as shown in FIG. 2 for example, the robotic conversation window 250 may further include a content generation component 252, such as the component "START WITH THIS doc". In response to the user triggering the content generation component 252, the target content may be generated based on all of the first content materials currently present in the canvas area 210.
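The selection logic described above can be sketched in a few lines. The function name and the instruction format (a dict with an optional `materials` key) are assumptions made only for illustration, not the actual instruction format of the platform:

```python
def select_materials(instruction: dict, canvas_materials: list[str]) -> list[str]:
    """Return the materials the target content should be generated from."""
    specified = instruction.get("materials")
    if specified:
        # Specific first content materials introduced by the user
        # (e.g. typed by name or dragged into the dialog box).
        return [m for m in canvas_materials if m in specified]
    # No specific material named: use everything currently in the canvas area,
    # as when the user triggers a component like "START WITH THIS doc".
    return list(canvas_materials)


canvas = ["report.docx", "stats.xlsx", "notes.md"]
assert select_materials({"materials": ["stats.xlsx"]}, canvas) == ["stats.xlsx"]
assert select_materials({}, canvas) == canvas
```

The fallback branch corresponds to the content generation component: when the instruction names nothing, all first content materials in the canvas area become the generation context.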
As an example, in the case where the content generation instruction includes neither target agent information nor target workflow information, the target content may be generated in the above process using a preset default agent. In the case where the content generation instruction includes target agent information or target workflow information, the target content may be generated using the target agent or the target workflow; for example, as shown in fig. 12, the content generation instruction may include the target agent information "@gpt" and the user-defined content "what is said in this document".
Further, as an example, besides generating target content, the robotic dialog window 250 may be used to optimize a generated first content material in the canvas area. For example, in response to the user introducing a target first content material in the dialog box and inputting optimization intent information, the target first content material may be optimized according to the optimization intent information to generate an optimized content material. Here, in response to generating the optimized content material, the optimized content material may also be presented in the robotic conversation window 250 and/or the canvas area 210.
By providing the robot dialog window in this manner, the user is allowed to generate target content more flexibly and may selectively introduce only part of the content materials in the canvas area for content generation, thereby improving the flexibility and interactivity of content generation.
In another example, after the first content material is acquired, the content generation method may further include acquiring a second content material based on the first content material, in which example, the step S150 may include generating the target content from the second content material.
For example, as described above, after the first content material is acquired, the first content material may be displayed at a predetermined position near the first element frame, and the association relationship of the first element frame and the first content material may be displayed.
In this case, the step of acquiring the second content material based on the first content material may include:
Providing a fourth element box in the canvas area in response to the user's fourth element box adding instruction;
Connecting the fourth element box with the first content material in response to the user's association operation on the fourth element box and the first content material;
Receiving second user intention information input by the user in the fourth element box;
Responding to the user's operation on the fourth element box by acquiring and displaying the second content material according to the first content material, using a preset agent corresponding to the second user intention information, and displaying the association relationship between the fourth element box and the acquired second content material, so as to present the information stream of acquiring the second content material.
Specifically, a fourth element box, such as element box 239 in FIG. 13A, may be added in the canvas area in a similar manner to the addition of the first element box described above.
The user's association operation of the fourth element box with the first content material may refer to forming a contextual relationship between the first content material and the fourth element box, so that corresponding processing can be performed on the first content material when the fourth element box is run. As an example, the association operation may include connecting the fourth element box with the first content material through a graphic such as an arrow or a connection line; as shown in fig. 13A, the first content material may be connected to the fourth element box by adding an arrow.
The second user intent information may include, for example, but is not limited to, user-defined content, target agent information, pre-generated content material information, and the like. For example, as shown in fig. 13A, the second user intention information may include the user-defined content "shortened up to within 100 words".
In response to the operation performed by the user on the fourth element box, an agent corresponding to the second user intention information may be utilized. For example, in the example of fig. 13A, no target agent is specified by the user, so a default agent may be used to acquire and display the second content material according to the first content material, and the association relationship between the fourth element box and the acquired second content material may be displayed, so as to present the information stream of acquiring the second content material. As shown in fig. 13B, the second content material may be presented in a new element box 2310, and the association of the fourth element box with the acquired second content material (e.g., element box 2310) may be represented graphically, for example by an arrow or a connection line. In this manner, the target content may be further generated based on the second content material.
In this way, after the first content material is acquired, the user can add a new element box as needed to further generate the second content material from the first content material. This makes the generation of content materials more flexible, gives the user a higher degree of freedom of operation, and allows the complete information stream to be visually displayed, so that the user can intuitively understand how the content material is generated.
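The chaining described above — a new element box taking an existing first content material as its context — can be sketched as follows. The function name and the toy "shorten" agent are illustrative assumptions; the real preset agents are not specified here.

```python
def run_chained_box(intent: str, context_material: str) -> str:
    """Run an element box whose context is an existing content material."""
    # Stand-in for dispatching to the preset agent matching `intent`.
    if intent.startswith("shorten"):
        limit = int(intent.split()[-1])  # e.g. "shorten to 100" -> 100 words
        return " ".join(context_material.split()[:limit])
    # Unknown intent: this toy agent passes the material through unchanged.
    return context_material


first_material = "word " * 250  # a long first content material
second_material = run_chained_box("shorten to 100", first_material)
assert len(second_material.split()) == 100
```

This mirrors the fig. 13A example, where a second-user-intention box connected to the first content material produces a shortened second content material in a new element box.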
In another example, the content generation method may further include:
providing a third element frame in the canvas area;
receiving, in the third element box, a program interface of a target program input by the user;
responding to a running operation of the user on the third element box by displaying an interface of the target program in the third element box;
responding to a user's access operation on the target program in the third element box by loading content in the target program into the canvas area, or uploading content in the canvas area to the target program.
Specifically, as shown in FIG. 14, a program interface of the target program, such as the url address of the program, may be input in the third element box 2311. In response to the user triggering a running operation on the third element box, the interface of the target program may be presented in the third element box, allowing the user to access the target program through the element box in the current interactive interface.
By way of example, the target program may be accessed through an element box in the current interactive interface by embedding the large model application as an inline frame capable of two-way communication in the interactive interface using a parent-child page communication technique, and by obtaining access and read rights to the target program through interface authorization.
Specifically, embodiments of the present disclosure support registering any third-party tool/webpage/application as a widget component (or data block) by url in the free canvas area. Unlike conventional embedded-webpage schemes in the related art, a widget registered from a third-party tool/webpage/application can be implemented as a component of the canvas area and can actually run in a workflow. This is achieved, on the one hand, by packaging the large model application as an iframe component capable of bidirectional communication in the interactive interface through the "parent-child page communication" technique, and, on the other hand, by granting the access and read rights of a third-party application such as Feishu to the content generation platform by way of interface authorization, so that the bidirectionally communicable iframe component can be realized.
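The registration-plus-authorization side of this scheme can be sketched with a minimal registry. This is a hedged illustration only: the class, the scope names, and the example url are assumptions, and the actual iframe/"parent-child page communication" wiring is browser-side and not modeled here.

```python
class WidgetRegistry:
    """Registers third-party tools by url with their authorized scopes."""

    def __init__(self):
        self._widgets = {}

    def register(self, url: str, granted_scopes: set[str]) -> None:
        # Scopes would be obtained through interface authorization.
        self._widgets[url] = {"scopes": set(granted_scopes)}

    def can(self, url: str, scope: str) -> bool:
        """Check whether the registered widget holds the given right."""
        widget = self._widgets.get(url)
        return widget is not None and scope in widget["scopes"]


registry = WidgetRegistry()
registry.register("https://example-tool.invalid/app", {"read", "access"})
assert registry.can("https://example-tool.invalid/app", "read")
assert not registry.can("https://example-tool.invalid/app", "write")
```

Before any canvas-side load or upload, the platform would consult such a registry so that only authorized rights (e.g. access, read) are exercised against the third-party program.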
In this example, the second target content may be generated from content in the target program.
By this method, an access interface of a third-party program can be provided, so that local content can be uploaded to the third-party program or content in the third-party program can be applied to the current content generation, making content generation more flexible.

The above describes an example process of generating content material and target content using the canvas area, in which the processes of acquiring the first content material, acquiring the second content material, and generating the target content all form an information stream.
Specifically, an information stream refers to the process of forming context information during the interaction between the user and the agent. Each information stream may include at least two element boxes, and one or more information streams may be formed in the canvas area. Different information streams are independent of each other: when a task in any information stream is executed, the agent can acquire only the context information in the current information stream and cannot acquire information in other information streams.
For the information streams in the canvas area, the content generation method may further include adjusting an association relationship, user intention information, or content material information in an information stream based on an editing operation in response to the user's editing operation on the information stream, and/or deleting an information stream in response to the user's deletion operation on the information stream.
In particular, in embodiments of the present disclosure, the user is allowed to adjust the contextual relationships inside information streams and to associate different information streams with each other. By way of example, the user's editing operations on an information stream may include, but are not limited to, adjusting the associations between element boxes in the stream, editing the content of each element box, and inserting or deleting element boxes at any position in the stream.
For example, as shown in FIG. 15, the canvas area may include a first information stream and a second information stream in parallel, and in response to a user editing operation on the information streams, element boxes 2312 in the first information stream and 2313 in the second information stream may be associated with a new element box 2314. In response to a user operation of element box 2314, target content may be generated from the first information stream and the second information stream.
In the session process of an existing large model such as ChatGPT, only linear interaction or conversation between the user and the agent is supported. If the user is not satisfied with a certain answer or produced content of the agent, the current session can only be re-executed; historical interaction information streams cannot be freely edited, modified, or combined. For complex content generation tasks, such a single linear, flat information flow cannot meet the requirements. In the process described here, the user can edit the information stream, for example by modifying the association relationships between element boxes in the stream, or by inserting or deleting element boxes at any position in the stream. This breaks through the linear interaction mode of existing large models and realizes a two-dimensional, editable information stream.
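The editable, non-linear streams can be pictured as a small directed graph of element boxes. In this hedged sketch (data structures and box ids are assumptions, with ids borrowed from the fig. 15 example), context for a box is the set of its ancestors: before editing, the two parallel streams are isolated, and associating boxes from both streams with a new box merges their contexts.

```python
def ancestors(edges: dict[str, list[str]], box: str) -> set[str]:
    """All upstream element boxes whose content forms the context of `box`."""
    seen: set[str] = set()
    stack = list(edges.get(box, []))
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(edges.get(parent, []))
    return seen


# `edges` maps each element box to its parents (the boxes it is associated with).
edges = {"2312": [], "2313": [], "2314": ["2312", "2313"]}

# Two parallel streams: neither terminal box sees the other...
assert ancestors(edges, "2312") == set()
assert ancestors(edges, "2313") == set()
# ...but the new element box 2314 draws context from both streams.
assert ancestors(edges, "2314") == {"2312", "2313"}
```

Inserting or deleting an element box is then just an edit to `edges`, which is what makes the information stream two-dimensional and freely recombinable rather than a single linear transcript.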
Further, in embodiments of the present disclosure, in response to a user's association operation on any two element boxes in the canvas area (including element boxes accepting user input and element boxes presenting content material or target content), an association relationship between the two element boxes may be established, forming context information between the contents of the two element boxes.
Furthermore, in embodiments of the present disclosure, all element boxes on the canvas support user wakeup for further interaction. For example, in response to an agent wakeup operation by the user on a target element box, such as clicking the wakeup control 272 "ask AI" shown in fig. 2, an operation corresponding to the specified agent may be performed on the content in the target element box using the agent specified in the wakeup operation. By way of example, operations corresponding to the specified agent may include, but are not limited to, improving grammar, interpreting selected region content, translating, rewriting composition language, improving composition, expanding text, compacting text, content writing, and the like.
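The wakeup mechanism amounts to dispatching a named operation to a handler applied to the box's content. In this illustrative sketch the handler table, its toy transformations, and the function name are all assumptions; the operation names mirror the examples listed above.

```python
# Toy handlers standing in for the real agent operations.
OPERATIONS = {
    "translate": lambda text: f"<translated>{text}</translated>",
    "compact text": lambda text: " ".join(text.split()[:5]) + " ...",
}


def wake_agent(operation: str, box_content: str) -> str:
    """Perform the operation named in the wakeup request on the box content."""
    try:
        return OPERATIONS[operation](box_content)
    except KeyError:
        raise ValueError(f"unsupported wakeup operation: {operation}")


assert wake_agent("translate", "hello") == "<translated>hello</translated>"
```

A real implementation would route each operation to the specified agent rather than a local lambda, but the dispatch-by-operation-name shape is the same.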
In addition, according to an embodiment of the disclosure, the content generation method may further comprise, in response to the user's detail display operation on a target element box in the canvas area, displaying relevant data and/or deduction logic corresponding to the target element box.
Specifically, as shown in fig. 2, the user may perform a detail presentation operation on any element box, for example by clicking the preset running control 273 "show Detail", whereupon the relevant data and/or deduction logic corresponding to the target element box may be presented. Here, the deduction logic may refer to the process or basis by which the content in the target element box was obtained; for example, as shown in fig. 16, the statistical data on which the content generation was based may be shown.
In this way, the user can understand the content logic and underlying data of the content generated by the agent, so as to judge whether the generated content is factually accurate or close to the current requirement.
In addition, according to an embodiment of the disclosure, the content generation method may further comprise, in response to the user's selection operation on a target element box in the canvas area, displaying at least one piece of prediction information corresponding to the target element box at a predetermined position near the target element box, wherein the prediction information includes a next operation the user may perform on the target element box or a question the user may ask about the target element box; and, in response to the user's selection operation on the prediction information, executing the operation corresponding to the selected prediction information or answering the question corresponding to the selected prediction information. As shown in fig. 17, the prediction information may include generating a mind map, and a mind map may be generated for the current element box in response to the user selecting that prediction information.
Further, according to embodiments of the present disclosure, the content generation method may further include, in response to an editing operation of the target element frame by another user, updating a result of the editing operation in a canvas area of the current user. In particular, multiple users may access the same canvas area simultaneously, and editing operations of element frames by different users may be synchronized. Thus, the method can support the online collaboration of multiple people to generate target content.
According to the content generation method, the degree of freedom of interaction between the user and the agent can be improved, and the user can interact with the output of the agent at any node in the content generation process, so that the generation of the content material is improved and optimized, and the generation of the target content is more in line with the expectations of the user.
In a second aspect, an exemplary embodiment of the present disclosure proposes a content generating apparatus, as shown in fig. 18, including a presentation unit 1810, a providing unit 1820, a receiving unit 1830, an acquiring unit 1840, and a generating unit 1850.
The presentation unit 1810 is configured to present a user interaction interface that includes a canvas area.
The providing unit 1820 is configured to provide at least one element box in the canvas area, wherein the element box is used for interacting with a preset agent, and the at least one element box comprises a first element box.
The receiving unit 1830 is configured to receive first user intention information input by a user in a first element box.
The acquisition unit 1840 is configured to, in response to the user's operation on the first element box, acquire the first content material according to the first user intention information by using a preset agent corresponding to the first user intention information.
The generation unit 1850 is configured to generate target content according to the first content material.
As an example, the providing unit 1820 is further configured to provide the first element box in the canvas area in response to a user operation to add the first element box in the canvas area.
The content generating apparatus further includes a display unit configured to display a running control at a predetermined position near the first element frame, and the acquisition unit is further configured to acquire the first content material from the first user intention information using an agent corresponding to the first user intention information set in advance in response to a click operation of the running control by the user.
As an example, the content generating apparatus further includes a relationship display unit further configured to display the acquired first content material at a predetermined position near the first element frame after the acquisition unit acquires the first content material, and display an association relationship of the first element frame and the acquired first content material.
As an example, the first user intention information includes target agent information, and the target agent is used to acquire the first content material.
As an example, the generating unit 1850 is further configured to provide a robot dialog window in which a dialog box is provided, receive a content generation instruction input by a user through the dialog box, and generate target content from the first content material based on the content generation instruction.
The canvas area includes a plurality of acquired first content materials, wherein the generating unit is further configured to generate target content according to the first content materials specifically introduced by the user if the first content materials specifically introduced by the user in the plurality of first content materials are specified in the content generation instruction, or generate target content according to all the first content materials in the canvas area if the first content materials specifically introduced by the user in the plurality of first content materials are not specified in the content generation instruction.
As an example, the providing unit 1820 is further configured to present the target template in the canvas area in response to an instruction of the user to introduce the target template, wherein the target template comprises a plurality of element boxes, the plurality of element boxes comprise a first element box and a second element box, wherein the generating unit is further configured to receive a content generating instruction input by the user in the second element box, and generate the target content according to the first content material by using an agent corresponding to the content generating instruction, which is preset, in response to a running operation of the user on the second element box.
As an example, in the target template, a user operation prompt corresponding to each element frame is displayed at a predetermined position near the element frame.
The obtaining unit 1840 is further configured to obtain a plurality of data resources according to the first user intention information by using a preset agent corresponding to the first user intention information, present the plurality of data resources in the canvas area, and, in response to the user's selection operation on a target data resource among the plurality of data resources, take the selected target data resource as the first content material.
The first user intention information includes, as an example, target workflow information, where the target workflow is used to obtain the first content material, and the target workflow includes a plurality of subtasks and a sequential execution relationship between the plurality of subtasks, where the obtaining unit 1840 is further configured to run the target workflow by using an agent set in advance and corresponding to the target workflow, so as to execute the corresponding subtasks according to the sequential execution relationship of the subtasks in the target workflow, and generate the first content material.
The obtaining unit 1840 is further configured to generate the first content material by performing at least one subtask execution operation. Each subtask execution operation includes: providing a current element box; receiving current user intention information input by the user, the current user intention information being associated with the content material obtained by the previous operation; and, in response to the user's operation on the current element box, acquiring and displaying the current content material according to the current user intention information and the content material obtained by the previous operation, by using a preset agent corresponding to the current user intention information. When the first subtask is performed, the current element box is the first element box, the current user intention information is the first user intention information, and the content material obtained by the previous operation is null; when the last subtask is performed, the current content material is the first content material.
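The sequential subtask execution described above can be sketched as a simple fold over the subtasks: each step's agent receives the previous step's content material (null for the first subtask), and the last step's output is the first content material. The function names and the string-building "agent" are illustrative assumptions only.

```python
def run_workflow(subtasks: list, intents: list[str]) -> str:
    """Execute subtasks in their sequential order, chaining content materials."""
    material = None  # content material obtained by the previous operation
    for subtask, intent in zip(subtasks, intents):
        # Stand-in for the preset agent corresponding to this intent.
        material = subtask(intent, material)
    return material  # the last subtask's output is the first content material


def collect(intent, prev):
    # Toy agent: records the chain of intents applied so far.
    return f"[{intent}]" if prev is None else f"{prev} -> [{intent}]"


result = run_workflow([collect, collect, collect],
                      ["find data", "clean data", "tag data"])
assert result == "[find data] -> [clean data] -> [tag data]"
```

The null-context first step and the chaining of each output into the next step's context correspond directly to the first- and last-subtask conditions stated above.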
As an example, the content generating apparatus further includes a program accessing unit configured to provide a third element frame in the canvas area, a program interface to receive a target program input by a user in the third element frame, to expose an interface of the target program in the third element frame in response to a running operation of the third element frame by the user, to load content in the target program to the canvas area in response to an accessing operation of the target program in the third element frame by the user, or to upload content in the canvas area to the target program.
As an example, the generation unit 1850 is further configured to generate second target content according to the content in the target program.
The obtaining unit 1840 is further configured to obtain a second content material based on the first content material after obtaining the first content material, wherein the generating unit is further configured to generate the target content from the second content material, as an example.
The content generating device further comprises a relation display unit configured to, after the acquisition unit acquires the first content material, display the first content material at a predetermined position near the first element box and display the association relationship between the first element box and the first content material. The acquisition unit is further configured to: provide a fourth element box in the canvas area in response to the user's fourth element box adding instruction; connect the fourth element box with the first content material in response to the user's association operation on the fourth element box and the first content material; receive second user intention information input by the user in the fourth element box; and, in response to the user's operation on the fourth element box, acquire and display the second content material according to the first content material by using a preset agent corresponding to the second user intention information, and display the association relationship between the fourth element box and the acquired second content material, so as to present the information stream of acquiring the second content material.
As an example, the content generating apparatus further includes an information stream editing unit configured to adjust association relationships in the information stream, user intention information, or content material information based on editing operations in response to editing operations of the information stream by a user, and/or delete the information stream in response to deletion operations of the information stream by the user.
As an example, the content generation apparatus further comprises a detail presentation unit configured to present the relevant data and/or deduction logic corresponding to the target element box in the canvas area in response to a detail presentation operation of the target element box by the user.
As an example, the content generating apparatus further includes a template generating unit configured to provide one or more element frames in the canvas area in response to an operation of adding the one or more element frames in the canvas area by a user, receive a first editing operation of the one or more element frames themselves and/or a second editing operation of an association relationship between the element frames by the user, and save the one or more element frames as a template for generating the content in response to an instruction of creating the template by the user.
On the other hand, each unit shown in fig. 18 may also be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the corresponding operations may be stored in a computer-readable medium, such as a storage medium, so that the processor can perform the corresponding operations by reading and executing the corresponding program code or code segments.
According to a third aspect of the present disclosure, there is provided a computing device comprising a processor and a memory for storing processor-executable instructions, wherein the processor-executable instructions, when executed by the processor, cause the processor to perform a content generation method according to an embodiment of the present disclosure.
In particular, the computing device may be deployed in a server or a client, or on a node device in a distributed network environment. Further, the computing device may be a personal computer, a tablet device, a personal digital assistant, a smart phone, a web application, or any other device capable of executing the above set of instructions.
Here, the computing device is not necessarily a single computing device, but may be any device or aggregate of circuits capable of executing the above-described instructions (or instruction set), alone or in combination. The computing device may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (e.g., via wireless transmission).
In the computing device, the processor may include a central processing unit (CPU), a graphics processing unit (GPU), a programmable logic device, a special-purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, the processor may also include an analog processor, a digital processor, a microprocessor, a multi-core processor, a processor array, a network processor, and the like.
Some operations described in the content generation method according to the exemplary embodiments of the present disclosure may be implemented in software, some may be implemented in hardware, and others may be implemented in a combination of software and hardware.
The processor may execute instructions or code stored in one of the memories, wherein the memories may also store data. The instructions and data may also be transmitted and received over a network via a network interface device, which may employ any known transmission protocol.
The memory may be integrated with the processor, for example, RAM or flash memory disposed within an integrated circuit microprocessor or the like. In addition, the memory may include a stand-alone device, such as an external disk drive, a storage array, or any other storage device usable by a database system. The memory and the processor may be operatively coupled or may communicate with each other, for example, through an I/O port, a network connection, etc., such that the processor is able to read files stored in the memory.
In addition, the computing device may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, or touch input device). All components of the computing device may be connected to each other via a bus and/or a network.
The content generation method according to the exemplary embodiments of the present disclosure may be described as various interconnected or coupled functional blocks or functional diagrams. However, these functional blocks or functional diagrams may equally be integrated into a single logic device or operated according to imprecise boundaries.
Accordingly, the content generation method described with reference to fig. 1 through 17 may be implemented by a system including at least one computing device and at least one storage device storing instructions.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed by at least one computing device, cause the at least one computing device to perform a content generation method according to an embodiment of the present disclosure.
According to an exemplary embodiment of the present disclosure, the at least one computing device is a computing device for performing a content generation method according to an exemplary embodiment of the present disclosure, and the storage device has stored therein a set of computer-executable instructions that, when executed by the at least one computing device, perform the content generation method described with reference to fig. 1 to 17.
According to a fifth aspect of the present disclosure, there is provided a system comprising at least one computing device and at least one storage device storing instructions that, when executed by the at least one computing device, cause the at least one computing device to perform a content generation method according to an embodiment of the present disclosure.
According to a sixth aspect of the present disclosure, there is provided a computer program product comprising instructions which, when executed by at least one computing device, cause the at least one computing device to perform a content generation method according to an embodiment of the present disclosure.
The foregoing description of exemplary embodiments of the present disclosure is illustrative only and not exhaustive, and the present disclosure is not limited to the exemplary embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. Accordingly, the scope of the present disclosure should be determined by the scope of the claims.

Claims (10)

1. A content generation method, characterized in that the content generation method comprises:
displaying a user interaction interface, wherein the user interaction interface comprises a canvas area;
providing at least one element frame in the canvas area, wherein the element frame is used for interacting with a preset agent, and the at least one element frame comprises a first element frame;
receiving first user intention information input by a user in the first element frame;
in response to an operation of the user on the first element frame, acquiring a first content material according to the first user intention information by using a preset agent corresponding to the first user intention information; and
generating target content according to the first content material.
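The core flow of claim 1 — route the user intention to a preset agent, acquire a first content material, then generate target content from it — can be sketched as follows. The keyword-based routing and all names (`AGENTS`, `route_intent`) are illustrative assumptions; the disclosure leaves the agent-matching strategy open:

```python
# Hypothetical registry of preset agents keyed by intent category.
AGENTS = {
    "search": lambda intent: f"material for: {intent}",
    "summarize": lambda intent: f"summary of: {intent}",
}

def route_intent(intent_text):
    """Pick the preset agent corresponding to the user intention
    (naive keyword match here, purely for illustration)."""
    for key, agent in AGENTS.items():
        if key in intent_text:
            return agent
    return AGENTS["search"]

def generate_target_content(intent_text):
    agent = route_intent(intent_text)
    first_material = agent(intent_text)  # acquire the first content material
    return f"target content built from [{first_material}]"

out = generate_target_content("search industry trends")
```

The two stages mirror the claim: material acquisition is delegated to the matched agent, and content generation consumes the acquired material.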
2. The content generation method according to claim 1, wherein the generating the target content from the first content material includes:
providing a robot dialogue window, wherein a dialogue box is arranged in the robot dialogue window;
Receiving a content generation instruction input by a user through the dialog box;
And generating the target content according to the first content material based on the content generation instruction.
3. The content generation method of claim 2, wherein the canvas area contains a plurality of the acquired first content materials;
The generating the target content according to the first content material based on the content generation instruction comprises:
if the content generation instruction specifies, from among the plurality of first content materials, the first content material introduced by the user for generating the target content, generating the target content according to the first content material specified by the user; or
if the content generation instruction does not specify, from among the plurality of first content materials, the first content material introduced by the user for generating the target content, generating the target content according to all the first content materials in the canvas area.
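The selection rule of claim 3 reduces to a simple branch: use only the materials the instruction names, otherwise fall back to every material on the canvas. A minimal sketch, with the dict-based material representation being an assumption for illustration:

```python
def select_materials(instruction_refs, canvas_materials):
    """If the content generation instruction names specific materials,
    use only those; otherwise use all materials in the canvas area."""
    if instruction_refs:
        return [m for m in canvas_materials if m["id"] in instruction_refs]
    return list(canvas_materials)

materials = [{"id": "m1"}, {"id": "m2"}, {"id": "m3"}]
picked = select_materials({"m2"}, materials)      # instruction names m2
everything = select_materials(set(), materials)   # instruction names nothing
```

Defaulting to all canvas materials keeps the dialog-box instruction short when the user wants everything considered.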
4. The content generation method of claim 1, wherein the providing at least one element box in the canvas area comprises:
in response to a user instruction to introduce a target template, displaying the target template in the canvas area, wherein the target template comprises a plurality of element frames, and the plurality of element frames comprise the first element frame and a second element frame;
the generating the target content according to the first content material comprises:
receiving a content generation instruction input by the user in the second element frame; and
in response to an operation of the user on the second element frame, generating the target content according to the first content material by using a preset agent corresponding to the content generation instruction.
5. The content generation method according to claim 1, wherein the first user intention information includes target workflow information, the target workflow is used for acquiring the first content material, and the target workflow includes a plurality of subtasks and a sequential execution relationship between the plurality of subtasks;
wherein the acquiring the first content material according to the first user intention information by using the preset agent corresponding to the first user intention information includes:
running the target workflow by using a preset agent corresponding to the target workflow, so as to execute the subtasks according to their sequential execution relation in the target workflow and generate the first content material.
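Claim 5's workflow — a set of subtasks plus a sequential execution relation — can be sketched as a pipeline runner that feeds each subtask's result into the next. The three example subtasks (`fetch`, `filter`, `render`) are hypothetical; the disclosure does not name concrete subtasks:

```python
def run_workflow(subtasks, order):
    """Execute subtasks according to their sequential execution
    relation, chaining each result into the next subtask, and
    return the final output as the first content material."""
    result = None
    for name in order:
        result = subtasks[name](result)
    return result

subtasks = {
    "fetch":  lambda _: ["raw item A", "raw item B"],
    "filter": lambda items: [i for i in items if "A" in i],
    "render": lambda items: "; ".join(items),
}
first_material = run_workflow(subtasks, ["fetch", "filter", "render"])
```

Encoding the order as an explicit list keeps the sequential execution relation separate from the subtask implementations, so the preset agent can rerun or reorder the workflow without touching the subtasks themselves.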
6. The content generation method according to claim 1, characterized in that the content generation method further comprises:
Providing a third element frame in the canvas area;
receiving, in the third element frame, a program interface of a target program input by a user;
in response to an operation of the user on the third element frame, displaying an interface of the target program in the third element frame; and
in response to an access operation of the user on the target program in the third element frame, loading content from the target program into the canvas area or uploading content from the canvas area to the target program.
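The two-way exchange in claim 6 — pulling content from an embedded target program into the canvas, or pushing canvas content to it — can be sketched minimally. `TargetProgram` and both helper functions are hypothetical stand-ins; the claim only fixes the direction of the data flow, not any API:

```python
class TargetProgram:
    """Hypothetical stand-in for an external program embedded via a
    program interface in the third element frame."""
    def __init__(self):
        self.contents = ["external doc"]

def load_into_canvas(program, canvas_items):
    # Access operation, direction 1: program -> canvas area.
    canvas_items.extend(program.contents)

def upload_from_canvas(program, canvas_items):
    # Access operation, direction 2: canvas area -> program.
    program.contents.extend(canvas_items)

canvas_items = ["canvas note"]
prog = TargetProgram()
load_into_canvas(prog, canvas_items)
```

Either direction is triggered by the user's access operation on the third element frame; the symmetric pair of helpers reflects the "loading … or uploading" alternatives in the claim.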
7. The content generation method according to claim 1, wherein after the first content material is acquired, the content generation method further comprises:
acquiring a second content material based on the first content material;
wherein the generating the target content according to the first content material comprises:
generating the target content according to the second content material.
8. A content generation device, characterized in that the content generation device comprises:
the display unit is configured to display a user interaction interface, wherein the user interaction interface comprises a canvas area;
a providing unit configured to provide at least one element frame in the canvas area, wherein the element frame is used for interacting with a preset agent, and the at least one element frame comprises a first element frame;
a receiving unit configured to receive first user intention information input by a user in the first element frame;
an acquisition unit configured to, in response to an operation of the user on the first element frame, acquire a first content material according to the first user intention information by using a preset agent corresponding to the first user intention information;
and the generating unit is configured to generate target content according to the first content material.
9. A computing device, the computing device comprising:
a processor;
a memory for storing processor-executable instructions,
Wherein the processor executable instructions, when executed by the processor, cause the processor to perform the content generation method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing instructions that, when executed by at least one computing device, cause the at least one computing device to perform the content generation method of any of claims 1-7.
CN202411998914.1A 2024-12-31 2024-12-31 Content generation method and device, computing device, medium, system and program product Pending CN119987604A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411998914.1A CN119987604A (en) 2024-12-31 2024-12-31 Content generation method and device, computing device, medium, system and program product

Publications (1)

Publication Number Publication Date
CN119987604A 2025-05-13

Family

ID=95641974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411998914.1A Pending CN119987604A (en) 2024-12-31 2024-12-31 Content generation method and device, computing device, medium, system and program product

Country Status (1)

Country Link
CN (1) CN119987604A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination