CN111359203B - Personalized railway VR scene interaction method - Google Patents


Info

Publication number
CN111359203B
Authority
CN
China
Prior art keywords
scene
user
time
browsing
old user
Prior art date
Legal status
Active
Application number
CN202010156431.2A
Other languages
Chinese (zh)
Other versions
CN111359203A (en)
Inventor
朱军
朱庆
李维炼
张天奕
任诗曼
党沛
Current Assignee
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date
Filing date
Publication date
Application filed by Southwest Jiaotong University
Priority to CN202010156431.2A
Publication of CN111359203A
Application granted
Publication of CN111359203B
Legal status: Active

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25 — Output arrangements for video game devices
    • A63F13/70 — Game security or game management aspects
    • A63F13/79 — Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract


The present invention discloses a personalized railway VR scene interaction method, which includes the following steps: S1, judge whether the user is a new user; if so, go to step S2, otherwise go to step S3; S2, recommend popular browsing scenes to the user, and proceed to step S5; S3, obtain the user's personal interests and preferences from the user's historical browsing records; S4, recommend browsing scenes according to the user's personal interests and preferences, and proceed to step S5; S5, display the optimal route corresponding to the browsing scene selected by the user; S6, the user browses the scene along the optimal route or a custom route. The invention abstracts the user's interest characteristics from a large amount of interaction information, deeply mines the user's preferences, and fully analyzes the user's multi-level perception and exploration needs for the railway, so as to provide the user with a personalized, humanized exploration plan and comprehensively improve the user's sense of experience and interaction efficiency in railway VR scenes.


Description

Personalized railway VR scene interaction method
Technical Field
The invention relates to the field of virtual reality, in particular to a personalized railway VR scene interaction method.
Background
Intellectualization is an important direction for the future development of railways worldwide. With the rapid development of railway technology in China, the range and depth of development and application of new intelligent technologies in the railway field are continuously expanding; in particular, the digital twin is an important hallmark of railway informatization and a new way to build intelligent railways.
One core problem to be solved in VR scene exploration and analysis is interaction, which is essential in a virtual scene for helping users capture target information more quickly, naturally, and efficiently. Existing game-oriented VR scene interaction emphasizes immersion and experience, and its interaction mode is single. A railway line, however, is long, contains many scene objects, and has complex spatial relationships, and the railway construction process involves users from different fields and professional backgrounds who care about different scene information. A single interaction mode therefore exposes different users to interference from extraneous factors during interaction and prevents them from accurately acquiring the information they are interested in, so interaction efficiency is low.
Disclosure of Invention
Aiming at the defects in the prior art, the personalized railway VR scene interaction method provided by the invention solves the problem of low interaction efficiency in the existing railway VR scene interaction process.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that:
a personalized railway VR scene interaction method is provided, which comprises the following steps:
S1, reading the login information of the user and judging whether the user is a new user; if so, entering step S2, otherwise entering step S3;
S2, recommending hot browsing scenes to the user on the VR visual interface, and entering step S5;
S3, acquiring the personal interests and preferences of the user according to the historical browsing records of the user;
S4, recommending browsing scenes on the VR visual interface according to the personal interests and preferences of the user, and entering step S5;
S5, displaying the corresponding optimal route on the VR visual display according to the browsing scene selected by the user;
and S6, the user browsing scenes along the optimal route or a user-defined route, completing the interaction of the personalized railway VR scene.
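The S1–S6 control flow above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function names, the recommender interface, and the toy data are all assumptions.

```python
# Hypothetical sketch of the S1-S6 flow; names and data shapes are assumptions.
from collections import Counter

def interact(user_id, history_db, recommender):
    """Return the scene list shown to the user on login (steps S1-S4)."""
    history = history_db.get(user_id)           # S1: read login info / history
    if not history:                             # no history -> new user
        scenes = recommender.popular_scenes()   # S2: hot scenes for new users
    else:                                       # old user
        prefs = recommender.preferences(history)     # S3: mine interests
        scenes = recommender.personal_scenes(prefs)  # S4: personalized list
    return scenes                               # S5/S6 happen after selection

class DemoRecommender:
    def popular_scenes(self):
        return ["bridge", "tunnel", "station"]
    def preferences(self, history):
        # most-visited scene categories first
        return [c for c, _ in Counter(history).most_common()]
    def personal_scenes(self, prefs):
        return prefs

db = {"alice": ["tunnel", "tunnel", "station"], "bob": []}
rec = DemoRecommender()
print(interact("bob", db, rec))    # new user -> popular scene list
print(interact("alice", db, rec))  # old user -> personalized list
```

The new-user branch (S2) falls back to popularity because no per-user signal exists yet; the old-user branch (S3–S4) is where the embedding-based mining of the later steps plugs in.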
Further, the specific method for determining whether the user is a new user in step S1 is as follows:
judging whether the user has a historical browsing record according to the login information of the user: if the user has a historical browsing record, the user is an old user, and if not, the user is a new user.
Further, the specific method for recommending hot browsing scenes to the new user on the VR visual interface in step S2 includes the following sub-steps:
S2-1, acquiring the user ID, the browsing scene spatial position, the browsing scene category, the focus object of attention, and the interaction time in the historical browsing records of the old users, wherein the interaction time comprises an entering time, a leaving time, and a staying time;
S2-2, for the i-th old user u_i, combining the user ID and the n browsing scene spatial positions into a vector in chronological order, (u_i, s_1^i, s_2^i, …, s_n^i), and taking this vector as the user scene information; where u_i is the user ID of the i-th old user and s_n^i is the n-th browsing scene spatial position of the i-th old user;
S2-3, for the M-th scene, combining the user ID of the i-th old user and the spatial positions of the scenes browsed within that scene into a vector in chronological order, (u_i, s_1^{i,M}, …, s_m^{i,M}), taking this vector as the track scene vector of the i-th old user in the M-th scene, and concatenating the track scene vectors of the i-th old user in all scenes in row order to obtain the track scene information of the i-th old user; where s_m^{i,M} denotes the m-th browsed scene spatial position of the i-th old user in the M-th scene;
S2-4, for the i-th old user u_i, constructing a position vector from the coordinates of the two positions at time k and time k+1, the browsing scene categories, and the focus objects of attention, ((x_k, y_k), c_k, o_k, (x_{k+1}, y_{k+1}), c_{k+1}, o_{k+1}), and concatenating all position vectors of the i-th old user u_i in row order to obtain the position scene information of u_i; where (x_k, y_k) is the coordinate of the position at time k, c_k is the browsing scene category at time k, o_k is the focus object of attention at time k, (x_{k+1}, y_{k+1}) is the coordinate of the position at time k+1, c_{k+1} is the browsing scene category at time k+1, and o_{k+1} is the focus object of attention at time k+1;
S2-5, for the i-th old user u_i, concatenating the user ID, the coordinates of the position, the entering time, the leaving time, and the staying time in row order to obtain the scene information vector of u_i at position f, (u_i, (x_f, y_f), t_f^in, t_f^out, t_f^stay); concatenating the scene information vectors of u_i at every position to obtain the initial time scene information corresponding to the i-th old user, and removing the entries whose staying time is less than 5 seconds to obtain the time scene information corresponding to the i-th old user; where (x_f, y_f) is the coordinate of the i-th old user at position f, and t_f^in, t_f^out, and t_f^stay are respectively the entering time, the leaving time, and the staying time of the i-th old user at position f;
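The four kinds of scene information built in steps S2-2 through S2-5 can be sketched roughly as plain records. The field names and record layouts below are illustrative assumptions; the patent only fixes which quantities enter each vector and the 5-second filter of S2-5.

```python
# Rough sketch of the four context records of steps S2-2..S2-5.
# Field names are illustrative; the 5-second filter follows S2-5.

def user_scene_info(user_id, positions):
    # S2-2: user ID followed by the n browsed positions, in time order
    return (user_id, *positions)

def track_scene_vector(user_id, positions_in_scene):
    # S2-3: one such vector per scene M, concatenated over all scenes
    return (user_id, *positions_in_scene)

def position_vector(p_k, c_k, o_k, p_k1, c_k1, o_k1):
    # S2-4: two consecutive samples of (coords, category, focus object)
    return (p_k, c_k, o_k, p_k1, c_k1, o_k1)

def time_scene_info(user_id, visits, min_stay=5.0):
    # S2-5: (user, coords, enter, leave, stay), dropping stays under 5 s
    return [(user_id, xy, t_in, t_out, t_stay)
            for xy, t_in, t_out, t_stay in visits
            if t_stay >= min_stay]

visits = [((10.0, 2.0), 0.0, 12.0, 12.0),   # kept: 12 s stay
          ((11.0, 2.5), 12.0, 15.0, 3.0)]   # dropped: 3 s stay
print(time_scene_info("u1", visits))
```

Filtering out sub-5-second stays before embedding removes positions the user merely passed through, so they do not pollute the preference signal.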
S2-6, projecting the user scene information, the track scene information, the position scene information, and the time scene information of the i-th old user into a low-dimensional vector space to obtain, respectively, a user scene embedding vector, a track scene embedding vector, a position scene embedding vector, and a time scene embedding vector, and according to the formula

V_i = (V_i^user + V_i^track + V_i^pos + V_i^time) / 4

obtaining the average scene embedding vector V_i of the i-th old user; where V_i^user is the user scene embedding vector of the i-th old user, V_i^track is the track scene embedding vector of the i-th old user, V_i^pos is the position scene embedding vector of the i-th old user, and V_i^time is the time scene embedding vector of the i-th old user;
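Reading S2-6's "average scene embedding vector" as the element-wise arithmetic mean of the four embeddings (an interpretation; the patent does not spell the formula out dimension-by-dimension), the computation is a one-liner once the embeddings exist:

```python
import numpy as np

# Assumed: the four context records have already been embedded into a common
# low-dimensional space (e.g. by a word2vec-style model); we only average them.
v_user  = np.array([0.2, 0.4, 0.0])
v_track = np.array([0.0, 0.4, 0.4])
v_pos   = np.array([0.4, 0.0, 0.4])
v_time  = np.array([0.2, 0.0, 0.0])

# V_i = (V_user + V_track + V_pos + V_time) / 4
v_avg = (v_user + v_track + v_pos + v_time) / 4.0
print(v_avg)  # average scene embedding V_i
```

Averaging keeps V_i in the same space as the four component embeddings, so it can feed the hierarchical-sampling softmax of S2-7 directly.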
S2-7, taking the average scene embedding vector V_i of the i-th old user as the input of a hierarchical-sampling softmax function, acquiring the recommendation probability of the i-th old user for each browsing scene spatial position, and thereby acquiring the recommendation probability of every old user for each browsing scene spatial position;
S2-8, according to the formula

P_M = (1/N_R) · Σ_{u_i ∈ U} Σ_{m=1}^{N} log Pr(m|i)

obtaining the probability P_M that all old users recommend the M-th scene, and thereby the probability that all old users recommend each scene; where U represents the set of all old users, R represents the set of trajectories generated by all old users, N_R is the total length of the trajectories generated by all old users, N is the total number of browsed scene spatial positions in the M-th scene, log(·) is the logarithmic function, and Pr(m|i) is the recommendation probability of the i-th old user for the m-th browsing scene spatial position;
S2-9, sorting the scenes in descending order of the recommendation probability of all old users, and pushing the sorted result to the VR visual interface of the new user as the popular browsing scene list.
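Steps S2-8 and S2-9 aggregate per-user probabilities into one score per scene and sort. A minimal sketch, treating the Pr(m|i) values as already computed and using a normalized log-sum as one reading of the S2-8 formula (the toy probabilities and N_R value are assumptions):

```python
import math

# Pr(m|i) per old user i for the positions m of each scene (toy values):
# scene -> list over users, each a list over positions of Pr(m|i)
pr = {
    "bridge": [[0.5, 0.3], [0.4, 0.4]],
    "tunnel": [[0.1, 0.1], [0.2, 0.1]],
}
N_R = 4  # total trajectory length over all old users (assumed)

def scene_score(probs_per_user):
    # (1/N_R) * sum over users and positions of log Pr(m|i)
    return sum(math.log(p) for user in probs_per_user for p in user) / N_R

# S2-9: descending order of aggregated probability -> hot-scene list
ranking = sorted(pr, key=lambda s: scene_score(pr[s]), reverse=True)
print(ranking)  # pushed to the new user's VR visual interface
```

Summing log-probabilities rather than raw probabilities keeps the aggregation numerically stable when trajectories are long, and the 1/N_R factor makes scores comparable across scenes visited by different numbers of users.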
Further, the specific method in step S3 includes the following sub-steps:
S3-1, acquiring the user ID, the browsing scene spatial position, the browsing scene category, the focus object of attention, and the interaction time in the historical browsing record of the user, wherein the interaction time comprises an entering time, a leaving time, and a staying time;
S3-2, combining the user ID and the n browsing scene spatial positions into a vector in chronological order, (u_i, s_1^i, s_2^i, …, s_n^i), and taking this vector as the user scene information; where u_i is the user ID of the old user and s_n^i is the n-th browsing scene spatial position of the old user;
S3-3, for the M-th scene, combining the user ID of the old user and the spatial positions of the scenes browsed within that scene into a vector in chronological order, (u_i, s_1^{i,M}, …, s_m^{i,M}), and taking this vector as the track scene information of the old user in the M-th scene; where s_m^{i,M} represents the m-th browsed scene spatial position of the old user in the M-th scene;
S3-4, constructing a position vector from the coordinates of the old user at the two positions at time k and time k+1, the browsing scene categories, and the focus objects of attention, ((x_k, y_k), c_k, o_k, (x_{k+1}, y_{k+1}), c_{k+1}, o_{k+1}), and concatenating all position vectors of the old user u_i in row order to obtain the position scene information of u_i; where (x_k, y_k) is the coordinate of the position at time k, c_k is the browsing scene category at time k, o_k is the focus object of attention at time k, (x_{k+1}, y_{k+1}) is the coordinate of the position at time k+1, c_{k+1} is the browsing scene category at time k+1, and o_{k+1} is the focus object of attention at time k+1;
S3-5, concatenating the user ID, the coordinates of the position, the entering time, the leaving time, and the staying time of the old user in row order to obtain the scene information vector of the old user u_i at position f, (u_i, (x_f, y_f), t_f^in, t_f^out, t_f^stay); concatenating the scene information vectors of u_i at every position to obtain the initial time scene information corresponding to the old user, and removing the entries whose staying time is less than 5 seconds to obtain the time scene information corresponding to the old user; where (x_f, y_f) is the coordinate of the old user at position f, and t_f^in, t_f^out, and t_f^stay are respectively the entering time, the leaving time, and the staying time of the old user at position f;
S3-6, projecting the user scene information, the track scene information, the position scene information, and the time scene information of the old user into a low-dimensional vector space to obtain, respectively, a user scene embedding vector, a track scene embedding vector, a position scene embedding vector, and a time scene embedding vector, and according to the formula

V_i = (V_i^user + V_i^track + V_i^pos + V_i^time) / 4

obtaining the average scene embedding vector V_i of the old user; where V_i^user is the user scene embedding vector of the old user, V_i^track is the track scene embedding vector of the old user, V_i^pos is the position scene embedding vector of the old user, and V_i^time is the time scene embedding vector of the old user;
S3-7, taking the average scene embedding vector V_i of the old user as the input of a hierarchical-sampling softmax function, and acquiring the recommendation probability of the old user for each browsing scene spatial position;
S3-8, according to the formula

P_M = (1/N_R) · Σ_{m=1}^{N} log Pr(m|i)

obtaining the probability P_M that the old user recommends the M-th scene, and thereby the probability that the old user recommends each scene; where R_i represents the set of trajectories generated by the old user, N_R is the length of the trajectories generated by the old user, N is the total number of browsed scene spatial positions in the M-th scene, log(·) is the logarithmic function, and Pr(m|i) is the recommendation probability of the old user for the m-th browsing scene spatial position;
S3-9, sorting the scenes in descending order of the recommendation probability of the old user, pushing the sorted result to the VR visual interface of the user as the user's personal interest and preference list, and entering step S5.
Further, the specific method of taking the average scene embedding vector V_i of the i-th old user as the input of the hierarchical-sampling softmax function in step S2-7 to acquire the recommendation probability of the i-th old user for each browsing scene spatial position is as follows:
according to the hierarchical-sampling softmax formula

Pr(m|i) = Π_p σ((1 − 2b) · V_i^T θ_{p−1}), with σ(x) = 1/(1 + exp(−x)),

where the product runs over the internal nodes p on the path from the root to the leaf of the m-th browsing scene spatial position, acquiring the recommendation probability Pr(m|i) of the i-th old user for the m-th browsing scene spatial position, and thereby the recommendation probability of the i-th old user for each browsing scene spatial position; where N_R is the length of the trajectory generated by the i-th old user; exp(·) is the exponential function with the natural constant e as its base; V_i^T is the transpose of the average scene embedding vector V_i of the i-th old user; θ_{p−1} is the parameter of the (p−1)-th node of the hierarchical-sampling softmax function; b is a value parameter: when the m-th browsing scene spatial position takes the left branch at the p-th node, b = 0, and when it takes the right branch at the p-th node, b = 1.
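Under this reading, each scene position is a leaf of a binary tree and Pr(m|i) is the product of branch probabilities along its root-to-leaf path. A sketch of that computation follows; the σ-based branch parameterization matches the b = 0/1 convention above, but the specific vectors and path are toy assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def path_probability(v_i, path):
    """Pr(m|i): product of branch probabilities along the root-to-leaf path.

    v_i  : average scene embedding vector of user i
    path : list of (theta, b) pairs, where theta is the internal-node
           parameter vector and b = 0 means left branch, b = 1 right branch.
    """
    prob = 1.0
    for theta, b in path:
        score = sum(a * c for a, c in zip(v_i, theta))  # V_i^T theta
        p_left = sigmoid(score)
        prob *= p_left if b == 0 else (1.0 - p_left)
    return prob

v = [0.2, 0.2, 0.2]
# Toy two-node path: left branch at the root, right branch at the next node.
path = [([1.0, 0.0, 0.0], 0), ([0.0, 1.0, 0.0], 1)]
p = path_probability(v, path)
print(p)
```

The practical payoff of the tree is cost: evaluating one leaf touches only O(log N) nodes instead of normalizing over all N scene positions as a flat softmax would.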
Further, the specific method of taking the average scene embedding vector V_i of the old user as the input of the hierarchical-sampling softmax function in step S3-7 to acquire the recommendation probability of the old user for each browsing scene spatial position is as follows:
according to the hierarchical-sampling softmax formula

Pr(m|i) = Π_p σ((1 − 2b) · V_i^T θ_{p−1}), with σ(x) = 1/(1 + exp(−x)),

where the product runs over the internal nodes p on the path from the root to the leaf of the m-th browsing scene spatial position, acquiring the recommendation probability Pr(m|i) of the old user for the m-th browsing scene spatial position, and thereby the recommendation probability of the old user for each browsing scene spatial position; where N_R is the length of the trajectory generated by the old user; exp(·) is the exponential function with the natural constant e as its base; V_i^T is the transpose of the average scene embedding vector V_i of the old user; θ_{p−1} is the parameter of the (p−1)-th node of the hierarchical-sampling softmax function; b is a value parameter: when the m-th browsing scene spatial position takes the left branch at the p-th node, b = 0, and when it takes the right branch at the p-th node, b = 1.
The invention has the beneficial effects that: the invention abstracts the user's interest characteristics from a large amount of interaction information, deeply mines the user's preferences, and fully analyzes the user's multi-level perception and exploration needs for the railway, so as to provide a personalized, humanized exploration scheme for the user, comprehensively improve the user's experience in railway VR scenes, and further improve the interaction efficiency of railway VR.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the present invention by those skilled in the art. It should be understood, however, that the present invention is not limited to the scope of the embodiments; to those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined in the appended claims, and everything produced using the inventive concept is protected.
As shown in fig. 1, the personalized railway VR scene interaction method includes the following steps:
S1, reading the login information of the user and judging whether the user is a new user; if so, entering step S2, otherwise entering step S3;
S2, recommending hot browsing scenes to the user on the VR visual interface, and entering step S5;
S3, acquiring the personal interests and preferences of the user according to the historical browsing records of the user;
S4, recommending browsing scenes on the VR visual interface according to the personal interests and preferences of the user, and entering step S5;
S5, displaying the corresponding optimal route on the VR visual display according to the browsing scene selected by the user;
and S6, the user browsing scenes along the optimal route or a user-defined route, completing the interaction of the personalized railway VR scene.
The specific method for determining whether the user is a new user in step S1 is as follows: judging whether the user has a historical browsing record according to the login information of the user: if the user has a historical browsing record, the user is an old user, and if not, the user is a new user.
The specific method for recommending hot browsing scenes to the new user on the VR visual interface in step S2 includes the following sub-steps:
S2-1, acquiring the user ID, the browsing scene spatial position, the browsing scene category, the focus object of attention, and the interaction time in the historical browsing records of the old users, wherein the interaction time comprises an entering time, a leaving time, and a staying time;
S2-2, for the i-th old user u_i, combining the user ID and the n browsing scene spatial positions into a vector in chronological order, (u_i, s_1^i, s_2^i, …, s_n^i), and taking this vector as the user scene information; where u_i is the user ID of the i-th old user and s_n^i is the n-th browsing scene spatial position of the i-th old user;
S2-3, for the M-th scene, combining the user ID of the i-th old user and the spatial positions of the scenes browsed within that scene into a vector in chronological order, (u_i, s_1^{i,M}, …, s_m^{i,M}), taking this vector as the track scene vector of the i-th old user in the M-th scene, and concatenating the track scene vectors of the i-th old user in all scenes in row order to obtain the track scene information of the i-th old user; where s_m^{i,M} denotes the m-th browsed scene spatial position of the i-th old user in the M-th scene;
S2-4, for the i-th old user u_i, constructing a position vector from the coordinates of the two positions at time k and time k+1, the browsing scene categories, and the focus objects of attention, ((x_k, y_k), c_k, o_k, (x_{k+1}, y_{k+1}), c_{k+1}, o_{k+1}), and concatenating all position vectors of the i-th old user u_i in row order to obtain the position scene information of u_i; where (x_k, y_k) is the coordinate of the position at time k, c_k is the browsing scene category at time k, o_k is the focus object of attention at time k, (x_{k+1}, y_{k+1}) is the coordinate of the position at time k+1, c_{k+1} is the browsing scene category at time k+1, and o_{k+1} is the focus object of attention at time k+1;
S2-5, for the i-th old user u_i, concatenating the user ID, the coordinates of the position, the entering time, the leaving time, and the staying time in row order to obtain the scene information vector of u_i at position f, (u_i, (x_f, y_f), t_f^in, t_f^out, t_f^stay); concatenating the scene information vectors of u_i at every position to obtain the initial time scene information corresponding to the i-th old user, and removing the entries whose staying time is less than 5 seconds to obtain the time scene information corresponding to the i-th old user; where (x_f, y_f) is the coordinate of the i-th old user at position f, and t_f^in, t_f^out, and t_f^stay are respectively the entering time, the leaving time, and the staying time of the i-th old user at position f;
S2-6, projecting the user scene information, the track scene information, the position scene information, and the time scene information of the i-th old user into a low-dimensional vector space to obtain, respectively, a user scene embedding vector, a track scene embedding vector, a position scene embedding vector, and a time scene embedding vector, and according to the formula

V_i = (V_i^user + V_i^track + V_i^pos + V_i^time) / 4

obtaining the average scene embedding vector V_i of the i-th old user; where V_i^user is the user scene embedding vector of the i-th old user, V_i^track is the track scene embedding vector of the i-th old user, V_i^pos is the position scene embedding vector of the i-th old user, and V_i^time is the time scene embedding vector of the i-th old user;
S2-7, taking the average scene embedding vector V_i of the i-th old user as the input of a hierarchical-sampling softmax function, acquiring the recommendation probability of the i-th old user for each browsing scene spatial position, and thereby acquiring the recommendation probability of every old user for each browsing scene spatial position;
S2-8, according to the formula

P_M = (1/N_R) · Σ_{u_i ∈ U} Σ_{m=1}^{N} log Pr(m|i)

obtaining the probability P_M that all old users recommend the M-th scene, and thereby the probability that all old users recommend each scene; where U represents the set of all old users, R represents the set of trajectories generated by all old users, N_R is the total length of the trajectories generated by all old users, N is the total number of browsed scene spatial positions in the M-th scene, log(·) is the logarithmic function, and Pr(m|i) is the recommendation probability of the i-th old user for the m-th browsing scene spatial position;
S2-9, sorting the scenes in descending order of the recommendation probability of all old users, and pushing the sorted result to the VR visual interface of the new user as the popular browsing scene list.
The specific method in step S3 includes the following substeps:
S3-1, acquiring the user ID, the browsing scene spatial position, the browsing scene category, the focus object of attention, and the interaction time in the historical browsing record of the user, wherein the interaction time comprises an entering time, a leaving time, and a staying time;
S3-2, combining the user ID and the n browsing scene spatial positions into a vector in chronological order, (u_i, s_1^i, s_2^i, …, s_n^i), and taking this vector as the user scene information; where u_i is the user ID of the old user and s_n^i is the n-th browsing scene spatial position of the old user;
S3-3, for the M-th scene, combining the user ID of the old user and the spatial positions of the scenes browsed within that scene into a vector in chronological order, (u_i, s_1^{i,M}, …, s_m^{i,M}), and taking this vector as the track scene information of the old user in the M-th scene; where s_m^{i,M} represents the m-th browsed scene spatial position of the old user in the M-th scene;
S3-4, constructing a position vector from the coordinates of the old user at the two positions at time k and time k+1, the browsing scene categories, and the focus objects of attention, ((x_k, y_k), c_k, o_k, (x_{k+1}, y_{k+1}), c_{k+1}, o_{k+1}), and concatenating all position vectors of the old user u_i in row order to obtain the position scene information of u_i; where (x_k, y_k) is the coordinate of the position at time k, c_k is the browsing scene category at time k, o_k is the focus object of attention at time k, (x_{k+1}, y_{k+1}) is the coordinate of the position at time k+1, c_{k+1} is the browsing scene category at time k+1, and o_{k+1} is the focus object of attention at time k+1;
S3-5, concatenating the user ID, the coordinates of the position, the entering time, the leaving time, and the staying time of the old user in row order to obtain the scene information vector of the old user u_i at position f, (u_i, (x_f, y_f), t_f^in, t_f^out, t_f^stay); concatenating the scene information vectors of u_i at every position to obtain the initial time scene information corresponding to the old user, and removing the entries whose staying time is less than 5 seconds to obtain the time scene information corresponding to the old user; where (x_f, y_f) is the coordinate of the old user at position f, and t_f^in, t_f^out, and t_f^stay are respectively the entering time, the leaving time, and the staying time of the old user at position f;
S3-6, projecting the user scene information, the track scene information, the position scene information, and the time scene information of the old user into a low-dimensional vector space to obtain, respectively, a user scene embedding vector, a track scene embedding vector, a position scene embedding vector, and a time scene embedding vector, and according to the formula

V_i = (V_i^user + V_i^track + V_i^pos + V_i^time) / 4

obtaining the average scene embedding vector V_i of the old user; where V_i^user is the user scene embedding vector of the old user, V_i^track is the track scene embedding vector of the old user, V_i^pos is the position scene embedding vector of the old user, and V_i^time is the time scene embedding vector of the old user;
S3-7, taking the average scene embedding vector V_i of the old user as the input of a hierarchical-sampling softmax function, and acquiring the recommendation probability of the old user for each browsing scene spatial position;
S3-8, according to the formula

P_M = (1/N_R) · Σ_{m=1}^{N} log Pr(m|i)

obtaining the probability P_M that the old user recommends the M-th scene, and thereby the probability that the old user recommends each scene; where R_i represents the set of trajectories generated by the old user, N_R is the length of the trajectories generated by the old user, N is the total number of browsed scene spatial positions in the M-th scene, log(·) is the logarithmic function, and Pr(m|i) is the recommendation probability of the old user for the m-th browsing scene spatial position;
S3-9, sorting the scenes in descending order of the recommendation probability of the old user, pushing the sorted result to the VR visual interface of the user as the user's personal interest and preference list, and entering step S5.
In one embodiment of the invention, the specific method of taking the average scene embedding vector V_avg^i of the i-th old user as the input of the hierarchical sampling softmax function and obtaining the i-th old user's recommendation probability for each browsing scene spatial position is as follows. According to the hierarchical sampling softmax formula:

Pr(m|i) = ∏_{p=1}^{N_R} exp((1-2b)·(V_avg^i)^T·θ_{p-1}) / (1 + exp((1-2b)·(V_avg^i)^T·θ_{p-1}))

the recommendation probability Pr(m|i) of the i-th old user for the m-th browsing scene spatial position is obtained, and from it the i-th old user's recommendation probability for every browsing scene spatial position; where N_R is the length of the trajectory generated by the i-th old user; exp(·) is the exponential function with the natural constant e as its base; (V_avg^i)^T is the transpose of the average scene embedding vector of the i-th old user; θ_{p-1} is the parameter of the (p-1)-th node of the hierarchical sampling softmax function; and b is a value parameter: b = 0 when the m-th browsing scene spatial position takes the left branch at the p-th node, and b = 1 when it takes the right branch at the p-th node.
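The branch-product structure of the hierarchical sampling softmax described above can be sketched as follows. The embedding v, the node parameters theta, and the two-node path are made-up illustrative values; the (1-2b) sign convention makes the left and right branch probabilities at each node sum to one.

```python
import math

# Sketch of the hierarchical sampling softmax: the probability of reaching
# leaf m is the product, over the internal nodes on its path, of a sigmoid
# branch probability. All numeric values below are illustrative.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def path_probability(v, path):
    """v: average scene embedding vector (list of floats);
    path: list of (theta, b) pairs, one per internal node on the way to
    leaf m, where b = 0 means the left branch is taken and b = 1 the right."""
    p = 1.0
    for theta, b in path:
        score = sum(t * x for t, x in zip(theta, v))  # theta^T v
        # (1 - 2b) flips the sign, so P(left) + P(right) = 1 at each node
        p *= sigmoid((1 - 2 * b) * score)
    return p

v = [0.5, -0.2, 0.1]
left = ([0.3, 0.4, -0.1], 0)
right = ([-0.2, 0.1, 0.5], 1)
pr_m = path_probability(v, [left, right])
print(round(pr_m, 4))
```

Because each node contributes a properly normalized binary choice, the leaf probabilities over the whole tree sum to one without computing a full softmax over all leaves.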
Each node in the hierarchically sampled binary tree structure is associated with an embedding vector used to compute the branch probabilities at that node. In this structure, each leaf can be reached from the first-layer node through a unique path. All parameters of the hierarchical sampling softmax function can be trained by stochastic gradient descent (SGD): the function iterates over all trajectory positions, computes the parameter gradients by the error back-propagation algorithm, and uses them to update the parameters until convergence.
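A minimal sketch of that SGD loop, assuming the standard sigmoid-based gradient for each node on an observed leaf's path; the vectors, learning rate, and iteration count are arbitrary illustrative choices, not values from the patent.

```python
import math

# SGD sketch for hierarchical-softmax node parameters: for one (embedding,
# path) training example, the gradient of -log Pr at a node with respect to
# theta is (sigmoid(theta^T v) - target) * v, where target = 1 for the left
# branch (b = 0) and 0 for the right branch (b = 1).
# All numeric values below are illustrative.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sgd_step(v, path, lr=0.1):
    """One stochastic-gradient update of the node parameters along a path.
    path: list of (theta, b); each theta list is updated in place."""
    for theta, b in path:
        score = sum(t * x for t, x in zip(theta, v))
        target = 1.0 - b              # b = 0 (left) -> 1, b = 1 (right) -> 0
        err = sigmoid(score) - target
        for j in range(len(theta)):
            theta[j] -= lr * err * v[j]

def path_logprob(v, path):
    return sum(
        math.log(sigmoid((1 - 2 * b) * sum(t * x for t, x in zip(theta, v))))
        for theta, b in path
    )

v = [0.5, -0.2, 0.1]
path = [([0.0, 0.0, 0.0], 0), ([0.0, 0.0, 0.0], 1)]
before = path_logprob(v, path)
for _ in range(50):
    sgd_step(v, path)
after = path_logprob(v, path)
print(after > before)  # the log-likelihood of the observed path increases
```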
In a specific implementation, the spatial position, spatial attitude, scaling and other parameters of devices such as the VR handle and VR helmet in the virtual scene can be recorded based on the spatial positioning and gravity-sensing capabilities of the VR hardware; the user's focus-of-attention information can be acquired using techniques such as ray focusing; finally, the invention can acquire the background data in the VR scene through an information acquisition technique, and process and optimize the acquired interaction information through an information processing technique.
Acquisition of handle operating parameters: first, the user operates a specific key on the VR handle according to his own needs, and can move within the scene by emitting a ray from the handle to a designated point. By detecting the handle state, the system captures the user's handle operations, records the handle's position, spatial attitude, scaling, usage time and other information in the virtual scene, and stores this information through data persistence.
Acquisition of the user's spatial position in the virtual scene: the spatial position of the head-mounted display of the VR hardware in the constructed scene is taken as the current user's position, and the user's position information in the virtual scene, comprising the X, Y, Z coordinates in the scene and a time record, is recorded at fixed time intervals.
Acquisition of the user's focus-of-attention information: the center coordinate of the head-mounted display's field of view is taken as the user's focus of attention. Specifically, a ray is cast from the center of the current user's head-mounted display; the ray collides with an object in the scene, and the first collision point is taken as the current user's focus of attention. The focus information comprises the X, Y, Z coordinates, the recording time, and information on the object at the focus.
Information acquisition and data processing: background data in the VR scene is acquired through the above techniques, and the acquired interaction information is preprocessed by the information processing technique. The processed data format comprises the user ID, browsing scene spatial position, browsing scene category, focus object, entry time, exit time, dwell time and other elements, and an interaction information data set is constructed on this basis. For example, after a user enters a VR scene, each time the user accesses a scene with the VR handle, one piece of interaction information {u, l, c, o, t1, t2, t3} is generated, where l can be represented by coordinates (X, Y, Z), i.e., {user ID, browsing scene spatial position, browsing scene category, focus-of-attention object, entry time, exit time, dwell time}: user u enters, at time t1, the focus object o in the scene of category c at spatial position l, leaves at time t2, and stays for t3 seconds. For example, {UID001, (X, Y, Z), station, security equipment, 2019-01-20/3:30pm, 2019-01-20/4:00pm, 1800s} indicates that the user with ID 001 entered the station at spatial position (X, Y, Z) at 3:30 pm on January 20, 2019, watched the security equipment, left at 4:00 pm on January 20, 2019, and stayed for 1800 seconds.
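The interaction record {u, l, c, o, t1, t2, t3} described above could be represented, for example, by the following Python dataclass. The class and field names are hypothetical, chosen only to mirror the listed elements; the patent specifies the elements, not a concrete schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical schema for one interaction record {u, l, c, o, t1, t2, t3}.
@dataclass
class InteractionRecord:
    user_id: str        # u
    position: tuple     # l = (X, Y, Z) browsing scene spatial position
    scene_category: str # c
    focus_object: str   # o
    enter_time: datetime  # t1
    leave_time: datetime  # t2

    @property
    def dwell_seconds(self) -> float:
        # t3 can be derived from t1 and t2
        return (self.leave_time - self.enter_time).total_seconds()

# The worked example from the text: user 001 watches the security
# equipment in the station from 3:30 pm to 4:00 pm on 2019-01-20.
rec = InteractionRecord(
    user_id="UID001",
    position=(10.0, 0.0, -3.5),  # made-up coordinates standing in for (X, Y, Z)
    scene_category="station",
    focus_object="security equipment",
    enter_time=datetime(2019, 1, 20, 15, 30),
    leave_time=datetime(2019, 1, 20, 16, 0),
)
print(rec.dwell_seconds)  # 1800.0, matching the 1800 s in the example
```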
In conclusion, the user's interest characteristics are abstracted from a large amount of interaction information, the user's preferences are deeply mined, and the user's multi-level perception and exploration needs regarding the railway are fully analyzed, so that a personalized and humanized exploration scheme is provided for the user, the user's experience in the railway VR scene is comprehensively improved, and the interaction efficiency of railway VR is further improved.

Claims (5)

1. A personalized railway VR scene interaction method is characterized by comprising the following steps:
s1, reading the login information of the user, judging whether the user is a new user, if so, entering a step S2, otherwise, entering a step S3;
s2, recommending hot browsing scenes to the VR visual interface for the user, and entering the step S5;
s3, acquiring personal interests and preferences of the user according to the historical browsing records of the user;
s4, recommending browsing scenes to a VR visual interface according to the personal interests and preferences of the user, and entering the step S5;
s5, displaying the corresponding optimal route through the VR visual display according to the browsing scene selected by the user;
s6, browsing scenes by the user according to the optimal route or the user-defined route, and completing interaction of the personalized railway VR scenes;
the specific method for recommending the hot browsing scene to the VR visual interface to the new user in the step S2 includes the following sub-steps:
s2-1, acquiring a user ID, a browsing scene space position, a browsing scene category, a focus object of attention and interaction time in the historical browsing records of the old user, wherein the interaction time comprises an entering time, a leaving time and a staying time;
S2-2, for the i-th old user u_i, combining the user ID and the n browsing scene spatial positions into a vector in chronological order, and taking this vector as the user scene information; wherein u_i is the user ID of the i-th old user, and the n-th component after the user ID is the n-th browsing scene spatial position of the i-th old user;
S2-3, for the M-th scene, combining the user ID of the i-th old user and the browsing scene spatial positions browsed within that scene into a vector in chronological order, taking this vector as the trajectory scene vector of the i-th old user in the M-th scene, and arranging the trajectory scene vectors of the i-th old user in each scene in sequence to obtain the trajectory scene information of the i-th old user; wherein the m-th component after the user ID represents the m-th browsing scene spatial position of the i-th old user in the M-th scene;
S2-4, for the i-th old user u_i, constructing a position vector from the coordinates of the two positions at time k and time k+1, the browsing scene categories and the focus-of-attention objects, and arranging all the position vectors of the i-th old user u_i in sequence to obtain the location scene information of the i-th old user u_i; wherein the position vector comprises the coordinates of the position at time k, the browsing scene category c_k at time k, the focus-of-attention object o_k at time k, the coordinates of the position at time k+1, the browsing scene category c_{k+1} at time k+1, and the focus-of-attention object o_{k+1} at time k+1;
S2-5, for the i-th old user u_i, arranging the user ID, the coordinates of the position, the entry time, the leaving time and the stay time in sequence to obtain the scene information vector of the i-th old user u_i at position f; arranging the scene information vectors of the i-th old user u_i at each position in sequence to obtain the initial time scene information corresponding to the i-th old user; and removing the data whose stay time is less than 5 seconds from the initial time scene information corresponding to the i-th old user, to obtain the time scene information corresponding to the i-th old user; wherein the scene information vector comprises the coordinates of the i-th old user at position f and the entry time, leaving time and stay time of the i-th old user at position f;
S2-6, projecting the user scene information, the trajectory scene information, the location scene information and the time scene information of the i-th old user into a low-dimensional vector space to obtain a user scene embedding vector, a trajectory scene embedding vector, a location scene embedding vector and a time scene embedding vector respectively, and according to the formula:

V_avg^i = (V_u^i + V_r^i + V_l^i + V_t^i) / 4

obtaining the average scene embedding vector V_avg^i of the i-th old user; wherein V_u^i is the user scene embedding vector of the i-th old user, V_r^i is the trajectory scene embedding vector of the i-th old user, V_l^i is the location scene embedding vector of the i-th old user, and V_t^i is the time scene embedding vector of the i-th old user;
S2-7, taking the average scene embedding vector V_avg^i of the i-th old user as the input of a hierarchical sampling softmax function, obtaining the i-th old user's recommendation probability for each browsing scene spatial position, and further obtaining each old user's recommendation probability for each browsing scene spatial position;
S2-8, according to the formula:

P_M = Σ_{u_i∈U} Σ_{R} (1/N_R) Σ_{m=1}^{N} log Pr(m|i)

obtaining the willingness value P_M of all the old users to recommend the M-th scene, and from it the willingness value of all the old users to recommend each scene; where U represents the set of all old users, R represents the set of trajectories generated by all the old users, N_R is the length of a trajectory generated by the old users, and N is the total number of browsing scene spatial positions in the M-th scene; log(·) is a logarithmic function, and Pr(m|i) is the i-th old user's recommendation probability for the m-th browsing scene spatial position;
S2-9, sorting the scenes in descending order of the recommendation willingness values of all the old users, and pushing the sorted result to the VR visual interface of the new user as the popular browsing scene list.
2. The personalized railway VR scene interaction method of claim 1, wherein the specific method for determining whether the user is a new user in step S1 is as follows:
and judging whether the user has a historical browsing record according to the login information of the user, wherein if the user has the historical browsing record, the user is an old user, and if the user does not have the historical browsing record, the user is a new user.
3. The personalized railway VR scene interaction method of claim 1, wherein the specific method in step S3 includes the following sub-steps:
s3-1, acquiring a user ID, a browsing scene spatial position, a browsing scene category, a focus object of attention and interaction time in the historical browsing record of the user, wherein the interaction time comprises an entering time, a leaving time and a staying time;
S3-2, combining the user ID and the n browsing scene spatial positions into a vector in chronological order, and taking this vector as the user scene information; wherein u_i is the user ID of the old user, and the n-th component after the user ID is the n-th browsing scene spatial position of the old user;
S3-3, for the M-th scene, combining the user ID of the old user and the browsing scene spatial positions browsed within that scene into a vector in chronological order, and taking this vector as the trajectory scene information of the old user in the M-th scene; wherein the m-th component after the user ID represents the m-th browsing scene spatial position of the old user in the M-th scene;
S3-4, constructing a position vector from the coordinates of the old user at the two positions at time k and time k+1, the browsing scene categories and the focus-of-attention objects, and arranging all the position vectors of the old user u_i in sequence to obtain the location scene information of the old user u_i; wherein the position vector comprises the coordinates of the position at time k, the browsing scene category c_k at time k, the focus-of-attention object o_k at time k, the coordinates of the position at time k+1, the browsing scene category c_{k+1} at time k+1, and the focus-of-attention object o_{k+1} at time k+1;
S3-5, arranging the user ID, the coordinates of the position, the entry time, the leaving time and the stay time of the old user in sequence to obtain the scene information vector of the old user u_i at position f; arranging the scene information vectors of the old user u_i at each position in sequence to obtain the initial time scene information corresponding to the old user; and removing the data whose stay time is less than 5 seconds from the initial time scene information corresponding to the old user, to obtain the time scene information corresponding to the old user; wherein the scene information vector comprises the coordinates of the old user at position f and the entry time, leaving time and stay time of the old user at position f;
S3-6, projecting the user scene information, the trajectory scene information, the location scene information and the time scene information of the i-th old user into a low-dimensional vector space to obtain a user scene embedding vector, a trajectory scene embedding vector, a location scene embedding vector and a time scene embedding vector respectively, and according to the formula:

V_avg^i = (V_u^i + V_r^i + V_l^i + V_t^i) / 4

obtaining the average scene embedding vector V_avg^i of the i-th old user; wherein V_u^i is the user scene embedding vector of the i-th old user, V_r^i is the trajectory scene embedding vector of the i-th old user, V_l^i is the location scene embedding vector of the i-th old user, and V_t^i is the time scene embedding vector of the i-th old user;
S3-7, taking the average scene embedding vector V_avg^i of the old user as the input of a hierarchical sampling softmax function, and obtaining the old user's recommendation probability for each browsing scene spatial position;
S3-8, according to the formula:

P_M = (1/N_R) · Σ_{R_i} Σ_{m=1}^{N} log Pr(m|i)

obtaining the willingness value P_M of the old user to recommend the M-th scene, and from it the old user's willingness value to recommend each scene; wherein R_i represents the set of trajectories generated by the old user, N_R is the length of a trajectory generated by the old user, and N is the total number of browsing scene spatial positions in the M-th scene; log(·) is a logarithmic function, and Pr(m|i) is the old user's recommendation probability for the m-th browsing scene spatial position;
S3-9, sorting the scenes in descending order of the old user's recommendation willingness values, pushing the sorted result to the user's VR visual interface as the user's personal interest and preference list, and proceeding to step S5.
4. The personalized railway VR scene interaction method of claim 1, wherein the specific method of taking the average scene embedding vector V_avg^i of the i-th old user as the input of the hierarchical sampling softmax function in step S2-7 and obtaining the i-th old user's recommendation probability for each browsing scene spatial position is as follows:
according to the hierarchical sampling softmax formula:

Pr(m|i) = ∏_{p=1}^{N_R} exp((1-2b)·(V_avg^i)^T·θ_{p-1}) / (1 + exp((1-2b)·(V_avg^i)^T·θ_{p-1}))

obtaining the recommendation probability Pr(m|i) of the i-th old user for the m-th browsing scene spatial position, and from it the i-th old user's recommendation probability for each browsing scene spatial position; wherein N_R is the length of the trajectory generated by the i-th old user; exp(·) is the exponential function with the natural constant e as its base; (V_avg^i)^T is the transpose of the average scene embedding vector of the i-th old user; θ_{p-1} is the parameter of the (p-1)-th node of the hierarchical sampling softmax function; and b is a value parameter: b = 0 when the m-th browsing scene spatial position takes the left branch at the p-th node, and b = 1 when the m-th browsing scene spatial position takes the right branch at the p-th node.
5. The personalized railway VR scene interaction method of claim 3, wherein the specific method of taking the average scene embedding vector V_avg^i of the old user as the input of the hierarchical sampling softmax function in step S3-7 and obtaining the old user's recommendation probability for each browsing scene spatial position is as follows:
according to the hierarchical sampling softmax formula:

Pr(m|i) = ∏_{p=1}^{N_R} exp((1-2b)·(V_avg^i)^T·θ_{p-1}) / (1 + exp((1-2b)·(V_avg^i)^T·θ_{p-1}))

obtaining the recommendation probability Pr(m|i) of the i-th old user for the m-th browsing scene spatial position, and from it the i-th old user's recommendation probability for each browsing scene spatial position; wherein N_R is the length of the trajectory generated by the i-th old user; exp(·) is the exponential function with the natural constant e as its base; (V_avg^i)^T is the transpose of the average scene embedding vector of the i-th old user; θ_{p-1} is the parameter of the (p-1)-th node of the hierarchical sampling softmax function; and b is a value parameter: b = 0 when the m-th browsing scene spatial position takes the left branch at the p-th node, and b = 1 when the m-th browsing scene spatial position takes the right branch at the p-th node.
CN202010156431.2A 2020-03-09 2020-03-09 Personalized railway VR scene interaction method Active CN111359203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010156431.2A CN111359203B (en) 2020-03-09 2020-03-09 Personalized railway VR scene interaction method

Publications (2)

Publication Number Publication Date
CN111359203A CN111359203A (en) 2020-07-03
CN111359203B true CN111359203B (en) 2021-09-28

Family

ID=71198381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010156431.2A Active CN111359203B (en) 2020-03-09 2020-03-09 Personalized railway VR scene interaction method

Country Status (1)

Country Link
CN (1) CN111359203B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112395323A (en) * 2020-09-18 2021-02-23 江苏园上园智能科技有限公司 Interaction method based on user experience data
CN113076436B (en) * 2021-04-09 2023-07-25 成都天翼空间科技有限公司 VR equipment theme background recommendation method and system
CN113704605B (en) * 2021-08-24 2024-08-23 山东库睿科技有限公司 Service information recommendation method and device, electronic equipment and medium
CN115587248A (en) * 2022-10-10 2023-01-10 上海人工智能创新中心 Site recommendation system and method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8065254B1 (en) * 2007-02-19 2011-11-22 Google Inc. Presenting a diversity of recommendations
CN104794207A (en) * 2015-04-23 2015-07-22 山东大学 Recommendation system based on cooperation and working method of recommendation system
CN108303108A (en) * 2017-12-05 2018-07-20 华南理工大学 A kind of personalized route recommendation method based on vehicle historical track
CN108733653A (en) * 2018-05-18 2018-11-02 华中科技大学 A kind of sentiment analysis method of the Skip-gram models based on fusion part of speech and semantic information
CN110738370A (en) * 2019-10-15 2020-01-31 南京航空航天大学 A Novel Moving Object Destination Prediction Algorithm
CN110807150A (en) * 2019-10-14 2020-02-18 腾讯科技(深圳)有限公司 Information processing method and apparatus, electronic device and computer-readable storage medium


Also Published As

Publication number Publication date
CN111359203A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN111359203B (en) Personalized railway VR scene interaction method
US20260010803A1 (en) System and method for predicting fine-grained adversarial multi-agent motion
CN109241454B (en) A point of interest recommendation method that integrates social network and image content
US12488445B2 (en) Automatic image quality evaluation
CN109086439A (en) Information recommendation method and device
CN116257704B (en) A point of interest recommendation method based on user spatiotemporal behavior and social information
CN109242043A (en) Method and apparatus for generating information prediction model
CN116250012A (en) Method, system and computer readable storage medium for image animation
JPWO2010084839A1 (en) Likelihood estimation apparatus, content distribution system, likelihood estimation method, and likelihood estimation program
CN113766310B (en) Video generation method, device, equipment and computer readable storage medium
CN115221354B (en) Video playing method, device, equipment and medium
CN110465089A (en) Map heuristic approach, device, medium and electronic equipment based on image recognition
CN113039561A (en) Aligning sequences by generating encoded representations of data items
CN116226521B (en) Hierarchical attention micro-video sequence recommendation method and device based on multi-scale modeling
CN116401400B (en) Model training methods and related equipment
US20250218123A1 (en) Systems and methods for enhanced virtual reality interaction
CN118365917A (en) Image sequence detection method and device, storage medium and electronic device
CN112434629B (en) Online time sequence action detection method and equipment
He et al. An Interactive System for Supporting Creative Exploration of Cinematic Composition Designs
CN116628310B (en) Content recommendation method, device, equipment, medium and computer program product
CN114119861B (en) Human body reconstruction network training, human body reconstruction, fitting method and related device
US20230334323A1 (en) Operation prediction apparatus, model training method for same, and operation prediction method
CN114861049B (en) Information recommendation model training method, information recommendation method, device and server
CN119025762B (en) A long- and short-term interest-aware collaborative recommendation method based on graph neural network
Ferrato Integrating indoor positioning, recommendation, and personalization to enhance museum visitor experiences

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant