Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a convention should be interpreted in the sense in which one of skill in the art would commonly understand it (e.g., "a system having at least one of A, B and C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
In the technical solution of the present disclosure, the user information involved (including but not limited to personal information, image information, and device information such as location information) and the data involved (including but not limited to data for analysis, stored data, and displayed data) are information and data authorized by the user or fully authorized by all parties. The collection, storage, use, processing, transmission, provision, disclosure, and application of such data all comply with the relevant laws, regulations, and standards; necessary security measures are taken; public order and good custom are not violated; and corresponding operation entries are provided for the user to choose to authorize or refuse.
In scenarios where personal information is used to make automated decisions, the method, apparatus, and system provided by embodiments of the present disclosure provide corresponding operation entries for the user to choose to accept or reject the automated decision result; if the user chooses to reject, an expert decision flow is entered. The expression "automated decision" here refers to the activity of automatically analyzing and assessing an individual's behavioral habits, hobbies, or economic, health, or credit status by means of a computer program and making a decision accordingly. The expression "expert decision" here refers to the activity of making a decision by a person who specializes in a certain field of work, has specialized experience, knowledge, and skills, and has reached a certain level of expertise.
An embodiment of the present disclosure provides a virtual reality-based learning content generation method, which includes: generating learning content according to first user information; generating a virtual scene for learning based on the learning content and a real scene in which a user is located; acquiring second user information in real time during the user's learning process; and adjusting, in real time according to the second user information, the virtual scene and the part of the learning content that the user has not yet learned.
Fig. 1 schematically illustrates an application scenario diagram of a virtual reality-based learning content generation method, apparatus, device, medium and program product according to an embodiment of the present disclosure.
As shown in fig. 1, an application scenario 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and social platform software (by way of example only).
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the learning content generating method based on virtual reality provided in the embodiments of the present disclosure may be generally executed by the server 105. Accordingly, the learning content generation apparatus based on virtual reality provided by the embodiments of the present disclosure may be generally provided in the server 105. The virtual reality-based learning content generation method provided by the embodiments of the present disclosure may also be performed by a server or server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the learning content generating apparatus based on virtual reality provided by the embodiments of the present disclosure may also be provided in a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The virtual reality-based learning content generation method according to embodiments of the present disclosure will be described in detail below with reference to figs. 2 to 5, based on the scenario described in fig. 1.
Fig. 2 schematically illustrates a flowchart of a virtual reality-based learning content generation method according to an embodiment of the present disclosure.
As shown in fig. 2, the virtual reality-based learning content generation method of this embodiment includes operations S210 to S240.
In operation S210, learning content is generated according to first user information.
In some embodiments, the first user information may include, for example, basic information and behavior information of the user. The basic information of the user may include, for example, the user's age, gender, occupation, and the like. The behavior information of the user may include, for example, historical learning behavior, browsing records, search records, purchase records, click behavior, and the like, wherein the historical learning behavior may further include courses the user previously participated in, learning times, and the like.
By analyzing the basic information and behavior information of the user, the user's learning requirements and learning habits can be understood, and learning content matching the user's learning style and learning targets can be provided. This achieves personalized customization of the learning content, improves the degree of matching between the learning content and the user, and effectively improves user satisfaction.
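For concreteness, the first user information might be represented as plain data structures, as in the following minimal Python sketch; every type and field name here is an illustrative assumption, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class BasicInfo:
    """Basic information of the user (input to operation S210)."""
    age: int
    gender: str
    occupation: str

@dataclass
class BehaviorInfo:
    """Behavior information of the user."""
    learning_history: list[str] = field(default_factory=list)  # courses previously taken
    browsing_records: list[str] = field(default_factory=list)
    search_records: list[str] = field(default_factory=list)
    purchase_records: list[str] = field(default_factory=list)
    click_behavior: list[str] = field(default_factory=list)

@dataclass
class FirstUserInfo:
    basic: BasicInfo
    behavior: BehaviorInfo
```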
In operation S220, a virtual scene for learning is generated based on the learning content and the real scene in which the user is located.
In some embodiments, after the learning content has been generated for the user, a virtual scene matching the learning content is generated in combination with the real scene in which the user is located.
For example, when the learning content is language learning, the virtual scene can simulate a real language environment, such as a virtual foreign city scene, a business meeting scene, or a daily communication scene, and the user can also converse with virtual characters in the virtual scene, thereby serving the purpose of language learning and improving the user's listening and speaking abilities.
In operation S230, second user information is acquired in real time during the user's learning process.
In some embodiments, the second user information includes physiological information of the user, which may include, for example, heart rate information, electrodermal information, eye movement information, and the like. The second user information is acquired in real time by a virtual reality device worn by the user and can reflect physiological changes of the user in terms of perception, emotion, cognition, and the like.
The virtual reality device may include, for example, a head-mounted display, a hand controller, a stereo headset, and the like. The head-mounted display is used to present the virtual world to the user and typically includes a display screen, lenses, and sensors; it enables the user to see the virtual environment and may be used to gather the user's eye movement information. The hand controller enables the user to perform interactive operations, such as grabbing objects or clicking an operation interface, and a biosensor may be mounted on the hand controller to collect the user's physiological information, such as heart rate information and electrodermal information.
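The real-time acquisition in operation S230 might be sketched as a sampling loop over the device's sensors. The `device` object and its three `read_*` methods stand in for whatever API the actual virtual reality hardware SDK exposes; they, and the one-second sampling interval, are assumptions for illustration:

```python
import time
from dataclasses import dataclass

@dataclass
class SecondUserInfo:
    timestamp: float
    heart_rate: float        # from the biosensor on the hand controller
    skin_conductance: float  # electrodermal information
    gaze_dispersion: float   # eye movement information from the head-mounted display

def poll_physiology(device, interval_s: float = 1.0):
    """Yield second user information at a fixed sampling interval (hypothetical device API)."""
    while True:
        yield SecondUserInfo(
            timestamp=time.time(),
            heart_rate=device.read_heart_rate(),
            skin_conductance=device.read_skin_conductance(),
            gaze_dispersion=device.read_gaze_dispersion(),
        )
        time.sleep(interval_s)
```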
In operation S240, the learning content and the virtual scene are adjusted in real time according to the second user information.
In some embodiments, the second user information can effectively reflect the user's physiological changes during the learning process. For example, when the user is interested in certain content, the heart rate generally increases, and when the user feels bored or tired, the heart rate generally decreases. The user's current learning state can therefore be determined according to the second user information, and the learning content and the virtual scene can be adjusted in real time based on that state, so that the user always maintains an active learning state and a better learning effect is ensured.
In embodiments of the present disclosure, the user's consent or authorization must be obtained before any user information is acquired.
For example, before performing operations S210 and S230, a request for acquiring the first user information and a request for acquiring the second user information may be sent to the user. Only if the user agrees to both requests are the first user information and the second user information acquired and operations S210 and S230 performed.
Further, acquiring the second user information in real time during the user's learning process and adjusting the learning content and the virtual scene in real time based on it improves the flexibility of the learning mode: the learning content and the virtual scene can be adjusted in real time as the user's learning state changes, improving the user's learning experience and learning effect.
Fig. 3 schematically illustrates a flowchart of generating learning content from first user information according to an embodiment of the present disclosure.
As shown in fig. 3, generating learning content according to the first user information in this embodiment includes operations S310 to S330.
In operation S310, a user portrait is constructed based on user basic information and behavior information.
In some embodiments, the user basic information may include, for example, age, gender, geographic location, occupation, educational background, financial status, and the like, which can reflect the user's context and characteristics. The user behavior information may include the user's historical learning behavior, browsing records, search records, purchase records, click behavior, and the like, which can reflect the user's preferences, interests, and potential needs to some extent.
The collected user information is cleaned and integrated to remove duplicate, erroneous, or incomplete data; target features are then extracted from the cleaned information and analyzed, and a user portrait is constructed based on the analysis result. The user portrait describes the user's typical characteristics, preferences, and requirements.
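The cleaning and feature extraction could look like the following sketch, assuming the collected user information arrives as a pandas DataFrame; the column names and the single interest-distribution feature are illustrative assumptions:

```python
import pandas as pd

def build_user_portrait(raw: pd.DataFrame) -> dict:
    """Clean and integrate collected user records, then derive portrait features."""
    cleaned = (
        raw.drop_duplicates()                             # remove repeated records
           .dropna(subset=["user_id", "clicked_topic"])   # remove incomplete records
    )
    cleaned = cleaned[cleaned["age"].between(0, 120)]     # drop obviously erroneous ages

    # Target feature: the user's interest distribution over clicked topics.
    interest = cleaned.groupby("clicked_topic").size().div(len(cleaned)).to_dict()
    return {
        "age": int(cleaned["age"].iloc[0]),
        "interest_distribution": interest,
    }
```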
In operation S320, similar users are found from the user portraits and learning preferences of the similar users are acquired.
In some embodiments, other users whose characteristics and needs are similar to those of the current user may be matched using a recommendation system or a similarity calculation method. These similar users may share interests, learning habits, or professional backgrounds with the current user and can therefore serve as reference objects when generating personalized learning content.
The learning preferences of these similar users are acquired and included as one of the reference factors in learning content generation, which improves the accuracy of the personalized learning content generated for the current user.
In implementation, the similarity between users may be calculated using methods such as Euclidean distance or cosine similarity, so as to obtain the similar users of the current user.
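A minimal sketch of this similarity computation, assuming each user portrait has already been encoded as a numeric feature vector; the helper names and the `top_k` parameter are illustrative:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two user feature vectors (higher = more similar)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_similar_users(current: np.ndarray,
                       others: dict[str, np.ndarray],
                       top_k: int = 5) -> list[str]:
    """Return the IDs of the top_k users most similar to the current user.

    Euclidean distance (np.linalg.norm(a - b), lower = more similar) could be
    substituted for cosine similarity here.
    """
    ranked = sorted(others.items(),
                    key=lambda kv: cosine_similarity(current, kv[1]),
                    reverse=True)
    return [user_id for user_id, _ in ranked[:top_k]]
```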
In operation S330, learning content of the user is generated based on the user portraits and learning preferences of similar users.
In some embodiments, the learning content of the user is generated based on two dimensions, the user portrait and the learning preferences of similar users, which effectively improves the degree of matching between the learning content and the user and achieves personalized customization of the learning content.
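One plausible way to combine the two dimensions is a weighted merge of topic scores, as sketched below; the weighting scheme and the 0.7 default weight are assumptions, not values from the disclosure:

```python
def rank_learning_content(portrait_interest: dict[str, float],
                          similar_prefs: list[dict[str, float]],
                          weight_portrait: float = 0.7) -> list[str]:
    """Rank candidate topics from the user portrait and similar users' preferences."""
    scores: dict[str, float] = {}
    # Dimension 1: the current user's own portrait.
    for topic, s in portrait_interest.items():
        scores[topic] = scores.get(topic, 0.0) + weight_portrait * s
    # Dimension 2: learning preferences of similar users, shared equally.
    weight_each = (1.0 - weight_portrait) / max(len(similar_prefs), 1)
    for prefs in similar_prefs:
        for topic, s in prefs.items():
            scores[topic] = scores.get(topic, 0.0) + weight_each * s
    return sorted(scores, key=scores.get, reverse=True)
```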
Fig. 4 schematically illustrates a flowchart of generating a virtual scene for learning based on learning content and a user reality scene according to an embodiment of the present disclosure.
As shown in fig. 4, generating a virtual scene for learning based on learning content and a user reality scene in this embodiment includes operations S410 to S430.
In operation S410, a virtual scene to be generated is determined based on the learning content.
In some embodiments, after the learning content is determined, the type of virtual scene to be generated is determined based on the learning content. For example, when the learning content is business English, the corresponding virtual scene may be a corporate conference room; when the learning content is fund security, the corresponding virtual scene may be a financial trading platform, an investment platform, or the like.
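At its simplest, this content-to-scene decision could be a lookup table; the keys and scene names below are assumptions drawn from the examples above:

```python
# Illustrative mapping from learning-content type to virtual scene type (operation S410).
SCENE_BY_CONTENT = {
    "business_english": "corporate_conference_room",
    "fund_security": "financial_trading_platform",
    "daily_language": "foreign_city_street",
}

def scene_type_for(content_type: str) -> str:
    return SCENE_BY_CONTENT.get(content_type, "generic_classroom")  # assumed fallback
```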
In operation S420, object information of a real object in a real scene is acquired.
In some embodiments, the real scene may be any scene in reality, and the object information of a real object refers to any information related to that object in the real scene, which may include, for example, attribute information and position information. The attribute information includes the color, size, shape, and the like of the real object; the position information may include the coordinates of the real object in the current real scene, or the relative positional relationship between the real object and other real objects.
In operation S430, a virtual object corresponding to the real object is generated in the virtual scene based on the object information.
In some embodiments, after the object information of a real object is acquired, the virtual scene is generated based on that object information, and during generation the virtual scene is refined according to the object information so that the virtual scene and the real scene are fused with high quality. By acquiring the object information of real objects and generating corresponding virtual objects in the virtual scene, a combination of the virtual and the real is achieved, increasing the user's immersive experience.
Take a virtual scene of a corporate conference room as an example, where the real scene contains a table and several stools. Here the table and stools are the real objects; their sizes and shapes are their attribute information, and their relative positional relationship is their position information. The virtual scene may then be created with the table and stools as base points: the table corresponds to a conference room table in the virtual scene, and each stool corresponds to a conference room stool (the table and stools in the virtual scene being virtual objects). The rest of the virtual scene is unfolded around the positions of the table and stools, improving the degree of fusion between the virtual scene and the real scene.
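A minimal sketch of this real-to-virtual object generation, assuming the simplified attribute and position representation of operation S420; all type and field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class RealObject:
    name: str                             # e.g. "table", "stool"
    color: str                            # attribute information
    size: tuple[float, float, float]      # attribute information (w, d, h)
    position: tuple[float, float, float]  # position in the real scene

@dataclass
class VirtualObject:
    name: str
    size: tuple[float, float, float]
    position: tuple[float, float, float]

def to_virtual(real: RealObject, scene_style: str) -> VirtualObject:
    """Generate a virtual counterpart at the same pose, restyled for the scene."""
    return VirtualObject(name=f"{scene_style}_{real.name}",
                         size=real.size,
                         position=real.position)

# Usage: the real table becomes the conference room table at the same position.
table = RealObject("table", "brown", (1.6, 0.8, 0.75), (0.0, 0.0, 0.0))
virtual_table = to_virtual(table, "conference_room")
```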
The position or state of a virtual object changes correspondingly as the position or state of its real counterpart changes.
For example, in the virtual corporate meeting scene, a virtual water cup placed on the conference room table corresponds to a real water cup in the real scene. When the user drinks from the real cup, the water level in the real cup falls, and the water level of the virtual cup falls correspondingly, so that the state of the virtual object stays synchronized with the state of the real object, effectively improving the fusion of the virtual scene and the real scene.
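The synchronization itself might be a periodic update tick, as in this self-contained sketch; `TrackedObject` and the `water_level` state key are illustrative assumptions:

```python
class TrackedObject:
    """Minimal stand-in for a scene object with a pose and a state dictionary."""
    def __init__(self, name, position, state=None):
        self.name = name
        self.position = position
        self.state = state or {}

def sync_virtual_with_real(real_objects, virtual_objects):
    """One tick: mirror each real object's position and state onto its virtual counterpart."""
    virtual_by_name = {v.name: v for v in virtual_objects}
    for real in real_objects:
        virtual = virtual_by_name.get(real.name)
        if virtual is not None:
            virtual.position = real.position
            virtual.state = dict(real.state)

# Usage: the user drinks, the tracked real water level falls, and the next
# sync tick lowers the virtual cup's water level to match.
real_cup = TrackedObject("cup", (0.2, 0.8, 0.1), {"water_level": 0.4})
virtual_cup = TrackedObject("cup", (0.2, 0.8, 0.1), {"water_level": 0.9})
sync_virtual_with_real([real_cup], [virtual_cup])
assert virtual_cup.state["water_level"] == 0.4
```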
Fig. 5 schematically illustrates a flowchart of real-time adjustment of learning content and virtual scenes according to second user information according to an embodiment of the present disclosure.
As shown in fig. 5, the real-time adjustment of the learning content and the virtual scene according to the second user information in this embodiment includes operations S510 to S520.
In operation S510, the second user information is input into a pre-trained evaluation model, and the current state of the user is determined based on the pre-trained evaluation model.
In operation S520, the learning content and/or the virtual scene is adjusted in real time based on the current state of the user.
In some embodiments, the user's physiological information can reflect the user's learning state to some extent. For example, when the user is interested in certain content, the heart rate typically increases, and when the user feels bored or tired, it typically decreases. Likewise, when the user is interested in certain content, the eye movements usually concentrate on the key parts of that content, whereas when the user feels bored or tired, the eye movements become scattered or slow. The user's current learning state can therefore be evaluated from the physiological information, and the learning content and/or the virtual scene can be adjusted in real time according to that state.
The second user information acquired in real time is input into the pre-trained evaluation model, which extracts target features from it and evaluates the user state based on those features to obtain the user's current state. The learning content and/or the virtual scene are then adjusted in real time according to the current state, so that the learning content better meets the user's actual requirements and ability level, helping the user understand and absorb knowledge and improving the learning effect.
The evaluation model may include, for example, a decision tree model, a random forest model, a convolutional neural network model, and the like.
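As one concrete possibility, a random forest evaluation model over physiological target features might look like the sketch below. The three feature columns (mean heart rate, skin conductance, gaze dispersion), the two state labels, and the toy training data are all illustrative assumptions; a real model would be pre-trained on far more data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy training set: [heart_rate, skin_conductance, gaze_dispersion] per sample.
X_train = np.array([[82.0, 6.1, 0.12],
                    [64.0, 2.3, 0.55],
                    [88.0, 7.0, 0.09],
                    [60.0, 2.0, 0.61]])
y_train = np.array(["engaged", "fatigued", "engaged", "fatigued"])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def current_state(features: np.ndarray) -> str:
    """Evaluate the user's current learning state from real-time target features."""
    return str(model.predict(features.reshape(1, -1))[0])

print(current_state(np.array([85.0, 6.5, 0.10])))  # expected: "engaged" on this toy data
```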
In implementation, the current state of the user may reflect the user's attention level and thinking activeness at the current moment.
Based on the current state of the user, the learning content and/or the virtual scene are adjusted in real time, which comprises the following steps:
The difficulty of the learning content is adjusted according to the user's thinking activeness. Thinking activeness is positively correlated with the difficulty of the learning content: when the user's thinking activeness is higher than a preset threshold, the learning content is adjusted toward tasks or problems of higher difficulty. Correspondingly, after the learning content is adjusted in real time, the virtual scene is adjusted according to the adjusted learning content.
In addition, the virtual scene is adjusted appropriately based on the user's attention level, so that the user's attention can be maintained at a high level, thereby ensuring the learning effect.
When the user's attention level is below a preset threshold, active elements are added to the current virtual scene to raise the user's attention, helping the user refocus scattered attention on the learning content and improving the learning effect.
The active elements include any of color elements, dynamic elements, audio elements, and interactive elements. Color elements are bright, highly saturated, or high-contrast elements; dynamic elements include moving objects, dynamic backgrounds, and the like; audio elements may include background music, sound effects, and the like added to the current virtual scene; and interactive elements may include buttons, triggers, and the like that increase user engagement.
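Taken together, the two adjustment steps above might be sketched as the following threshold-based routine; the state dictionary keys and both threshold defaults are assumptions for illustration:

```python
ACTIVE_ELEMENT_TYPES = ("color", "dynamic", "audio", "interactive")

def adjust_in_real_time(state: dict, content: dict, scene: dict,
                        attention_threshold: float = 0.5,
                        activeness_threshold: float = 0.7) -> None:
    """Sketch of operation S520: mutate the content and scene based on the user state."""
    # Thinking activeness is positively correlated with content difficulty.
    if state["thinking_activeness"] > activeness_threshold:
        content["difficulty"] += 1         # switch to harder tasks or problems
        scene["theme"] = content["topic"]  # re-derive the scene from the adjusted content

    # Low attention: add an active element not yet present in the scene.
    if state["attention"] < attention_threshold:
        present = set(scene.setdefault("active_elements", []))
        for element in ACTIVE_ELEMENT_TYPES:
            if element not in present:
                scene["active_elements"].append(element)
                break
```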
Furthermore, the learning content generation method provided by embodiments of the present disclosure can adjust the learning content not only in real time based on the second user information but also based on the user's staged learning results during the current learning process. For example, after the user completes one learning stage, the user's learning result for that stage is determined based on the user's interactive feedback during the stage, and the subsequent learning content is adjusted based on that result. Learning stages may be divided by learning duration or by content type. The learning result reflects the user's grasp of the different knowledge points covered in the stage: when the user's grasp of a knowledge point is unsatisfactory, content related to that knowledge point may be added to the subsequent learning content to help the user master it as soon as possible.
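A minimal sketch of this stage-based adjustment, assuming per-knowledge-point mastery scores in [0, 1] derived from the user's interactive feedback; the 0.6 threshold and the "review:" prefix are illustrative:

```python
def adjust_after_stage(mastery: dict[str, float],
                       upcoming: list[str],
                       threshold: float = 0.6) -> list[str]:
    """Prepend remedial content for weakly mastered knowledge points."""
    weak = [point for point, score in mastery.items() if score < threshold]
    return [f"review:{point}" for point in weak] + upcoming

# Usage: a weak grasp of "compound interest" pulls a review item to the front.
plan = adjust_after_stage({"compound interest": 0.4, "budgeting": 0.9},
                          ["risk assessment", "diversification"])
# -> ["review:compound interest", "risk assessment", "diversification"]
```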
With the learning content generation method described above, the learning content and/or the virtual scene are adjusted in real time based on the user's current learning state. This guarantees personalization of the learning content, sets different learning content and/or virtual scenes for different learning states, effectively improves the user's learning efficiency, and yields good learning results.
Based on the above virtual reality-based learning content generation method, the present disclosure further provides a virtual reality-based learning content generation apparatus. The apparatus will be described in detail below in connection with fig. 6.
Fig. 6 schematically shows a block diagram of a learning content generation apparatus based on virtual reality according to an embodiment of the present disclosure.
As shown in fig. 6, the learning content generation device 600 based on virtual reality of this embodiment includes a first generation module 610, a second generation module 620, an acquisition module 630, and an adjustment module 640.
The first generation module 610 is configured to generate learning content according to the first user information. In an embodiment, the first generation module 610 may be configured to perform operation S210 described above, and details are not repeated here.
The second generation module 620 is configured to generate a virtual scene for learning based on the learning content and the real scene in which the user is located. In an embodiment, the second generation module 620 may be configured to perform operation S220 described above, and details are not repeated here.
The acquisition module 630 is configured to acquire the second user information in real time during the user's learning process. In an embodiment, the acquisition module 630 may be configured to perform operation S230 described above, and details are not repeated here.
The adjustment module 640 is configured to adjust the learning content and the virtual scene in real time according to the second user information. In an embodiment, the adjustment module 640 may be configured to perform operation S240 described above, and details are not repeated here.
According to embodiments of the present disclosure, any of the first generation module 610, the second generation module 620, the acquisition module 630, and the adjustment module 640 may be combined and implemented in one module, or any one of them may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the first generation module 610, the second generation module 620, the acquisition module 630, and the adjustment module 640 may be implemented at least in part as hardware circuitry, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on substrate, a system on package, or an application specific integrated circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging circuitry, or by any one of, or a suitable combination of, the three implementation manners of software, hardware, and firmware. Alternatively, at least one of the first generation module 610, the second generation module 620, the acquisition module 630, and the adjustment module 640 may be at least partially implemented as a computer program module which, when executed, performs the corresponding functions.
Fig. 7 schematically illustrates a block diagram of an electronic device adapted to implement a virtual reality-based learning content generation method, according to an embodiment of the disclosure.
As shown in fig. 7, an electronic device 700 according to an embodiment of the present disclosure includes a processor 701 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing different actions of the method flows according to embodiments of the disclosure.
In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. The processor 701 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 702 and/or the RAM 703. Note that the program may be stored in one or more memories other than the ROM 702 and the RAM 703. The processor 701 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 700 may further include an input/output (I/O) interface 705, which is also connected to the bus 704. The electronic device 700 may also include one or more of the following components connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output portion 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 708 including a hard disk and the like; and a communication portion 709 including a network interface card such as a LAN card or a modem. The communication portion 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read therefrom is installed into the storage portion 708 as needed.
The present disclosure also provides a computer-readable storage medium that may be included in the apparatus/device/system described in the above embodiments, or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 702 and/or RAM 703 and/or one or more memories other than ROM 702 and RAM 703 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. When the computer program product is run on a computer system, the program code causes the computer system to implement the virtual reality-based learning content generation method provided by embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 701. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed over a network medium in the form of a signal, downloaded and installed via the communication portion 709, and/or installed from the removable medium 711. The computer program may comprise program code that is transmitted using any appropriate network medium, including but not limited to wireless and wired media, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages and/or assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, the "C" language, and similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the present disclosure and/or in the claims may be combined and/or incorporated in various ways, even if such combinations or incorporations are not explicitly recited in the present disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or in the claims may be combined and/or incorporated in various ways without departing from the spirit and teachings of the present disclosure. All such combinations and/or incorporations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. These examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.