Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a personalized railway VR scene interaction method that solves the problem of low interaction efficiency in existing railway VR scene interaction.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that:
a personalized railway VR scene interaction method is provided, which comprises the following steps:
S1, reading the login information of the user and judging whether the user is a new user; if so, entering step S2, otherwise entering step S3;
S2, recommending hot browsing scenes to the user's VR visual interface, and entering step S5;
S3, acquiring the personal interests and preferences of the user according to the user's historical browsing records;
S4, recommending browsing scenes to the VR visual interface according to the personal interests and preferences of the user, and entering step S5;
S5, displaying the corresponding optimal route through the VR visual display according to the browsing scene selected by the user;
S6, browsing scenes by the user according to the optimal route or a user-defined route, completing the interaction with the personalized railway VR scene.
Further, the specific method for determining whether the user is a new user in step S1 is as follows:
judging whether the user has a historical browsing record according to the user's login information; if the user has a historical browsing record, the user is an old user; otherwise, the user is a new user.
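The new-user check above amounts to a lookup of the user's historical browsing records. A minimal Python sketch, assuming a simple in-memory mapping from user IDs to record lists (the storage layer is hypothetical):

```python
def is_new_user(user_id: str, history_db: dict) -> bool:
    """A user is 'new' when no historical browsing record exists (step S1)."""
    return len(history_db.get(user_id, [])) == 0

# Illustrative storage: one recorded visit for UID001, nothing for UID002.
history_db = {"UID001": [{"scene": "station", "dwell_s": 1800}]}
assert is_new_user("UID002", history_db) is True
assert is_new_user("UID001", history_db) is False
```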
Further, the specific method for recommending hot browsing scenes to the new user's VR visual interface in step S2 includes the following sub-steps:
S2-1, acquiring the user ID, the browsing-scene spatial position, the browsing-scene category, the focus-of-attention object and the interaction time in the historical browsing records of the old users, wherein the interaction time includes an entry time, a leaving time and a staying time;
S2-2, for the i-th old user u_i, combining the user ID and the n browsing-scene spatial positions into a vector (u_i, l_1, l_2, ..., l_n) in chronological order, and taking the vector as the user scene information; wherein u_i is the user ID of the i-th old user, and l_n is the n-th browsing-scene spatial position of the i-th old user;
S2-3, for the M-th scene, combining the user ID of the i-th old user and the spatial positions of the browsing scenes browsed in that scene into a vector (u_i, l_1^M, ..., l_m^M) in chronological order, taking the vector as the trajectory scene vector of the i-th old user in the M-th scene, and arranging the trajectory scene vectors of the i-th old user in each scene in row order to obtain the trajectory scene information of the i-th old user; wherein l_m^M represents the m-th browsed-scene spatial position of the i-th old user in the M-th scene;
S2-4, for the i-th old user u_i, constructing a position vector (p_k, c_k, o_k, p_(k+1), c_(k+1), o_(k+1)) according to the coordinates of the two positions at time k and time k+1, the browsing-scene categories and the focus-of-attention objects, and arranging all the position vectors of the i-th old user u_i in row order to obtain the position scene information of the i-th old user u_i; wherein p_k is the coordinate of the position at time k, c_k is the browsing-scene category at time k, o_k is the focus-of-attention object at time k, p_(k+1) is the coordinate of the position at time k+1, c_(k+1) is the browsing-scene category at time k+1, and o_(k+1) is the focus-of-attention object at time k+1;
S2-5, for the i-th old user u_i, arranging the user ID, the coordinates of the position, the entry time, the leaving time and the staying time in row order to obtain the scene information vector (u_i, p_f, t1_f, t2_f, t3_f) of the i-th old user u_i at position f; arranging the scene information vectors of the i-th old user u_i at each position in sequence to obtain the initial time scene information corresponding to the i-th old user, and removing the data whose staying time is less than 5 seconds from the initial time scene information to obtain the time scene information corresponding to the i-th old user; wherein p_f is the coordinate of the i-th old user at position f, and t1_f, t2_f and t3_f are respectively the entry time, the leaving time and the staying time of the i-th old user at position f;
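Steps S2-2 through S2-5 above can be sketched as follows. This is an illustrative Python construction under assumed record field names (`pos`, `scene_id`, `category`, `focus`, `t_enter`, `t_leave`, `dwell_s`), not the patented implementation:

```python
from collections import defaultdict

def build_context(records):
    """Build the four kinds of scene information (steps S2-2..S2-5)
    from one old user's interaction records. Field names are assumptions."""
    uid = records[0]["user_id"]
    records = sorted(records, key=lambda r: r["t_enter"])  # chronological order
    # S2-2: user scene information = user ID + positions in time order
    user_info = [uid] + [r["pos"] for r in records]
    # S2-3: one trajectory scene vector per scene, then collected together
    by_scene = defaultdict(list)
    for r in records:
        by_scene[r["scene_id"]].append(r["pos"])
    traj_info = [[uid] + positions for positions in by_scene.values()]
    # S2-4: position vectors over consecutive time steps k and k+1
    pos_info = [
        (a["pos"], a["category"], a["focus"], b["pos"], b["category"], b["focus"])
        for a, b in zip(records, records[1:])
    ]
    # S2-5: time scene information, dropping stays shorter than 5 seconds
    time_info = [
        (uid, r["pos"], r["t_enter"], r["t_leave"], r["dwell_s"])
        for r in records if r["dwell_s"] >= 5
    ]
    return user_info, traj_info, pos_info, time_info
```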
S2-6, projecting the user scene information, the trajectory scene information, the position scene information and the time scene information of the i-th old user into a low-dimensional vector space to respectively obtain a user scene embedding vector, a trajectory scene embedding vector, a position scene embedding vector and a time scene embedding vector, and according to the formula:

V_i = (V_u + V_t + V_p + V_s) / 4

obtaining the average scene embedding vector V_i of the i-th old user; wherein V_u is the user scene embedding vector of the i-th old user, V_t is the trajectory scene embedding vector of the i-th old user, V_p is the position scene embedding vector of the i-th old user, and V_s is the time scene embedding vector of the i-th old user;
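The averaging in S2-6 is element-wise over the four embedding vectors. A minimal sketch, with plain Python lists standing in for the learned embeddings:

```python
def average_scene_embedding(v_user, v_traj, v_pos, v_time):
    """V_i = (V_u + V_t + V_p + V_s) / 4, element-wise (step S2-6)."""
    return [(a + b + c + d) / 4.0
            for a, b, c, d in zip(v_user, v_traj, v_pos, v_time)]

# Toy 2-dimensional embeddings purely for illustration.
assert average_scene_embedding([1, 2], [3, 4], [5, 6], [7, 8]) == [4.0, 5.0]
```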
S2-7, taking the average scene embedding vector V_i of the i-th old user as the input of a hierarchical-sampling softmax function, obtaining the recommendation probability of the i-th old user for each browsing-scene spatial position, and further obtaining the recommendation probability of each old user for each browsing-scene spatial position;
S2-8, according to the formula:

P_M = (1 / (|U| · N_R)) · Σ_(u_i ∈ U) Σ_(r ∈ R) Σ_(m=1..N) log Pr(m|i)

obtaining the probability P_M of all old users recommending the M-th scene, and further obtaining the probability of all old users recommending each scene; wherein U represents the set of all old users, R represents the set of trajectories generated by all old users, N_R is the length of the trajectories generated by all old users, and N is the total number of browsed-scene spatial positions in the M-th scene; log(·) is a logarithmic function, and Pr(m|i) is the recommendation probability of the i-th old user for the m-th browsing-scene spatial position;
S2-9, sorting the scenes in descending order of the recommendation probabilities of all old users, and pushing the sorted result to the new user's VR visual interface as a hot browsing scene list.
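The ranking in S2-9 can be sketched as a descending sort over the per-scene probabilities (the scene names are illustrative):

```python
def hot_scene_list(scene_probs):
    """Sort scenes by aggregate recommendation probability, descending (S2-9)."""
    return [scene for scene, _ in
            sorted(scene_probs.items(), key=lambda kv: kv[1], reverse=True)]

probs = {"station": 0.5, "bridge": 0.3, "tunnel": 0.2}
assert hot_scene_list(probs) == ["station", "bridge", "tunnel"]
```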
Further, the specific method in step S3 includes the following sub-steps:
S3-1, acquiring the user ID, the browsing-scene spatial position, the browsing-scene category, the focus-of-attention object and the interaction time in the user's historical browsing record, wherein the interaction time includes an entry time, a leaving time and a staying time;
S3-2, combining the user ID and the n browsing-scene spatial positions into a vector (u_i, l_1, l_2, ..., l_n) in chronological order, and taking the vector as the user scene information; wherein u_i is the user ID of the old user, and l_n is the n-th browsing-scene spatial position of the old user;
S3-3, for the M-th scene, combining the user ID of the old user and the spatial positions of the browsing scenes browsed in that scene into a vector (u_i, l_1^M, ..., l_m^M) in chronological order, and taking the vector as the trajectory scene information of the old user in the M-th scene; wherein l_m^M represents the m-th browsed-scene spatial position of the old user in the M-th scene;
S3-4, constructing a position vector (p_k, c_k, o_k, p_(k+1), c_(k+1), o_(k+1)) according to the coordinates of the old user at the two positions at time k and time k+1, the browsing-scene categories and the focus-of-attention objects, and arranging all the position vectors of the old user u_i in row order to obtain the position scene information of the old user u_i; wherein p_k is the coordinate of the position at time k, c_k is the browsing-scene category at time k, o_k is the focus-of-attention object at time k, p_(k+1) is the coordinate of the position at time k+1, c_(k+1) is the browsing-scene category at time k+1, and o_(k+1) is the focus-of-attention object at time k+1;
S3-5, arranging the old user's user ID, coordinates of the position, entry time, leaving time and staying time in row order to obtain the scene information vector (u_i, p_f, t1_f, t2_f, t3_f) of the old user u_i at position f; arranging the scene information vectors of the old user u_i at each position in sequence to obtain the initial time scene information corresponding to the old user, and removing the data whose staying time is less than 5 seconds from the initial time scene information to obtain the time scene information corresponding to the old user; wherein p_f is the coordinate of the old user at position f, and t1_f, t2_f and t3_f are respectively the entry time, the leaving time and the staying time of the old user at position f;
S3-6, projecting the user scene information, the trajectory scene information, the position scene information and the time scene information of the old user into a low-dimensional vector space to respectively obtain a user scene embedding vector, a trajectory scene embedding vector, a position scene embedding vector and a time scene embedding vector, and according to the formula:

V_i = (V_u + V_t + V_p + V_s) / 4

obtaining the average scene embedding vector V_i of the old user; wherein V_u is the user scene embedding vector of the old user, V_t is the trajectory scene embedding vector of the old user, V_p is the position scene embedding vector of the old user, and V_s is the time scene embedding vector of the old user;
S3-7, taking the average scene embedding vector V_i of the old user as the input of a hierarchical-sampling softmax function, and obtaining the recommendation probability of the old user for each browsing-scene spatial position;
S3-8, according to the formula:

P_M = (1 / N_R) · Σ_(r ∈ R_i) Σ_(m=1..N) log Pr(m|i)

obtaining the probability P_M of the old user recommending the M-th scene, and further obtaining the probability of the old user recommending each scene; wherein R_i represents the set of trajectories generated by the old user, N_R is the length of the trajectories generated by the old user, and N is the total number of browsed-scene spatial positions in the M-th scene; log(·) is a logarithmic function, and Pr(m|i) is the recommendation probability of the old user for the m-th browsing-scene spatial position;
S3-9, sorting the scenes in descending order of the old user's recommendation probabilities, pushing the sorted result to the user's VR visual interface as the user's personal interest and preference list, and entering step S5.
Further, the specific method in step S2-7 of taking the average scene embedding vector V_i of the i-th old user as the input of the hierarchical-sampling softmax function and obtaining the recommendation probability of the i-th old user for each browsing-scene spatial position is as follows:
according to the hierarchical-sampling softmax function formula:

Pr(m|i) = Π_p [1 / (1 + exp(-V_i^T · θ_(p-1)))]^(1-b) · [1 / (1 + exp(V_i^T · θ_(p-1)))]^b

obtaining the recommendation probability Pr(m|i) of the i-th old user for the m-th browsing-scene spatial position, and further obtaining the recommendation probability of the i-th old user for each browsing-scene spatial position; wherein the product is taken over the nodes p on the path from the root to the leaf corresponding to the m-th browsing-scene spatial position; N_R is the length of the trajectory generated by the i-th old user; exp(·) is an exponential function with the natural constant e as its base; V_i^T is the transpose of the average scene embedding vector V_i of the i-th old user; θ_(p-1) is the parameter of the (p-1)-th node of the hierarchical-sampling softmax function; b is a branch indicator: when the m-th browsing-scene spatial position takes the left branch at the p-th node, b = 0; when it takes the right branch at the p-th node, b = 1.
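Under the branch convention just described (b = 0 for the left branch, b = 1 for the right), the probability of a leaf in a hierarchical softmax tree is the product of the branch probabilities along its path. A minimal Python sketch, with the tree path supplied explicitly as (node parameter, branch) pairs — an illustrative form, not the patented parameterization:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def path_probability(v, path):
    """Pr(m|i) for a leaf reached via `path`: a list of (theta, b) pairs,
    where theta is the node's parameter vector and b is 0 for the left
    branch, 1 for the right branch."""
    prob = 1.0
    for theta, b in path:
        score = sum(vi * ti for vi, ti in zip(v, theta))  # V_i^T · theta
        prob *= sigmoid(score) if b == 0 else 1.0 - sigmoid(score)
    return prob

# With a zero node parameter, both branches are equally likely (p = 0.5).
assert abs(path_probability([1.0], [([0.0], 0)]) - 0.5) < 1e-9
```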
Further, the specific method in step S3-7 of taking the average scene embedding vector V_i of the old user as the input of the hierarchical-sampling softmax function and obtaining the recommendation probability of the old user for each browsing-scene spatial position is as follows:
according to the hierarchical-sampling softmax function formula:

Pr(m|i) = Π_p [1 / (1 + exp(-V_i^T · θ_(p-1)))]^(1-b) · [1 / (1 + exp(V_i^T · θ_(p-1)))]^b

obtaining the recommendation probability Pr(m|i) of the old user for the m-th browsing-scene spatial position, and further obtaining the recommendation probability of the old user for each browsing-scene spatial position; wherein the product is taken over the nodes p on the path from the root to the leaf corresponding to the m-th browsing-scene spatial position; N_R is the length of the trajectory generated by the old user; exp(·) is an exponential function with the natural constant e as its base; V_i^T is the transpose of the average scene embedding vector V_i of the old user; θ_(p-1) is the parameter of the (p-1)-th node of the hierarchical-sampling softmax function; b is a branch indicator: when the m-th browsing-scene spatial position takes the left branch at the p-th node, b = 0; when it takes the right branch at the p-th node, b = 1.
The invention has the following beneficial effects: the invention abstracts the interest characteristics of the user from a large amount of interaction information, deeply mines the user's preferences, and fully analyzes the user's multi-level perception and exploration needs for the railway, thereby providing the user with a personalized and humanized exploration scheme, comprehensively improving the user's experience in the railway VR scene, and further improving the interaction efficiency of the railway VR.
Detailed Description
The following description of the embodiments of the invention is provided to facilitate understanding of the invention by those skilled in the art. However, it should be understood that the invention is not limited to the scope of the embodiments; to those skilled in the art, various changes that remain within the spirit and scope of the invention as defined in the appended claims are apparent, and all inventions and creations made using the inventive concept fall under the protection of the invention.
As shown in fig. 1, the personalized railway VR scene interaction method includes the following steps:
S1, reading the login information of the user and judging whether the user is a new user; if so, entering step S2, otherwise entering step S3;
S2, recommending hot browsing scenes to the user's VR visual interface, and entering step S5;
S3, acquiring the personal interests and preferences of the user according to the user's historical browsing records;
S4, recommending browsing scenes to the VR visual interface according to the personal interests and preferences of the user, and entering step S5;
S5, displaying the corresponding optimal route through the VR visual display according to the browsing scene selected by the user;
S6, browsing scenes by the user according to the optimal route or a user-defined route, completing the interaction with the personalized railway VR scene.
The specific method for determining whether the user is a new user in step S1 is as follows: judging whether the user has a historical browsing record according to the user's login information; if the user has a historical browsing record, the user is an old user; otherwise, the user is a new user.
The specific method for recommending hot browsing scenes to the new user's VR visual interface in step S2 includes the following sub-steps:
S2-1, acquiring the user ID, the browsing-scene spatial position, the browsing-scene category, the focus-of-attention object and the interaction time in the historical browsing records of the old users, wherein the interaction time includes an entry time, a leaving time and a staying time;
S2-2, for the i-th old user u_i, combining the user ID and the n browsing-scene spatial positions into a vector (u_i, l_1, l_2, ..., l_n) in chronological order, and taking the vector as the user scene information; wherein u_i is the user ID of the i-th old user, and l_n is the n-th browsing-scene spatial position of the i-th old user;
S2-3, for the M-th scene, combining the user ID of the i-th old user and the spatial positions of the browsing scenes browsed in that scene into a vector (u_i, l_1^M, ..., l_m^M) in chronological order, taking the vector as the trajectory scene vector of the i-th old user in the M-th scene, and arranging the trajectory scene vectors of the i-th old user in each scene in row order to obtain the trajectory scene information of the i-th old user; wherein l_m^M represents the m-th browsed-scene spatial position of the i-th old user in the M-th scene;
S2-4, for the i-th old user u_i, constructing a position vector (p_k, c_k, o_k, p_(k+1), c_(k+1), o_(k+1)) according to the coordinates of the two positions at time k and time k+1, the browsing-scene categories and the focus-of-attention objects, and arranging all the position vectors of the i-th old user u_i in row order to obtain the position scene information of the i-th old user u_i; wherein p_k is the coordinate of the position at time k, c_k is the browsing-scene category at time k, o_k is the focus-of-attention object at time k, p_(k+1) is the coordinate of the position at time k+1, c_(k+1) is the browsing-scene category at time k+1, and o_(k+1) is the focus-of-attention object at time k+1;
S2-5, for the i-th old user u_i, arranging the user ID, the coordinates of the position, the entry time, the leaving time and the staying time in row order to obtain the scene information vector (u_i, p_f, t1_f, t2_f, t3_f) of the i-th old user u_i at position f; arranging the scene information vectors of the i-th old user u_i at each position in sequence to obtain the initial time scene information corresponding to the i-th old user, and removing the data whose staying time is less than 5 seconds from the initial time scene information to obtain the time scene information corresponding to the i-th old user; wherein p_f is the coordinate of the i-th old user at position f, and t1_f, t2_f and t3_f are respectively the entry time, the leaving time and the staying time of the i-th old user at position f;
S2-6, projecting the user scene information, the trajectory scene information, the position scene information and the time scene information of the i-th old user into a low-dimensional vector space to respectively obtain a user scene embedding vector, a trajectory scene embedding vector, a position scene embedding vector and a time scene embedding vector, and according to the formula:

V_i = (V_u + V_t + V_p + V_s) / 4

obtaining the average scene embedding vector V_i of the i-th old user; wherein V_u is the user scene embedding vector of the i-th old user, V_t is the trajectory scene embedding vector of the i-th old user, V_p is the position scene embedding vector of the i-th old user, and V_s is the time scene embedding vector of the i-th old user;
S2-7, taking the average scene embedding vector V_i of the i-th old user as the input of a hierarchical-sampling softmax function, obtaining the recommendation probability of the i-th old user for each browsing-scene spatial position, and further obtaining the recommendation probability of each old user for each browsing-scene spatial position;
S2-8, according to the formula:

P_M = (1 / (|U| · N_R)) · Σ_(u_i ∈ U) Σ_(r ∈ R) Σ_(m=1..N) log Pr(m|i)

obtaining the probability P_M of all old users recommending the M-th scene, and further obtaining the probability of all old users recommending each scene; wherein U represents the set of all old users, R represents the set of trajectories generated by all old users, N_R is the length of the trajectories generated by all old users, and N is the total number of browsed-scene spatial positions in the M-th scene; log(·) is a logarithmic function, and Pr(m|i) is the recommendation probability of the i-th old user for the m-th browsing-scene spatial position;
S2-9, sorting the scenes in descending order of the recommendation probabilities of all old users, and pushing the sorted result to the new user's VR visual interface as a hot browsing scene list.
The specific method in step S3 includes the following substeps:
S3-1, acquiring the user ID, the browsing-scene spatial position, the browsing-scene category, the focus-of-attention object and the interaction time in the user's historical browsing record, wherein the interaction time includes an entry time, a leaving time and a staying time;
S3-2, combining the user ID and the n browsing-scene spatial positions into a vector (u_i, l_1, l_2, ..., l_n) in chronological order, and taking the vector as the user scene information; wherein u_i is the user ID of the old user, and l_n is the n-th browsing-scene spatial position of the old user;
S3-3, for the M-th scene, combining the user ID of the old user and the spatial positions of the browsing scenes browsed in that scene into a vector (u_i, l_1^M, ..., l_m^M) in chronological order, and taking the vector as the trajectory scene information of the old user in the M-th scene; wherein l_m^M represents the m-th browsed-scene spatial position of the old user in the M-th scene;
S3-4, constructing a position vector (p_k, c_k, o_k, p_(k+1), c_(k+1), o_(k+1)) according to the coordinates of the old user at the two positions at time k and time k+1, the browsing-scene categories and the focus-of-attention objects, and arranging all the position vectors of the old user u_i in row order to obtain the position scene information of the old user u_i; wherein p_k is the coordinate of the position at time k, c_k is the browsing-scene category at time k, o_k is the focus-of-attention object at time k, p_(k+1) is the coordinate of the position at time k+1, c_(k+1) is the browsing-scene category at time k+1, and o_(k+1) is the focus-of-attention object at time k+1;
S3-5, arranging the old user's user ID, coordinates of the position, entry time, leaving time and staying time in row order to obtain the scene information vector (u_i, p_f, t1_f, t2_f, t3_f) of the old user u_i at position f; arranging the scene information vectors of the old user u_i at each position in sequence to obtain the initial time scene information corresponding to the old user, and removing the data whose staying time is less than 5 seconds from the initial time scene information to obtain the time scene information corresponding to the old user; wherein p_f is the coordinate of the old user at position f, and t1_f, t2_f and t3_f are respectively the entry time, the leaving time and the staying time of the old user at position f;
S3-6, projecting the user scene information, the trajectory scene information, the position scene information and the time scene information of the old user into a low-dimensional vector space to respectively obtain a user scene embedding vector, a trajectory scene embedding vector, a position scene embedding vector and a time scene embedding vector, and according to the formula:

V_i = (V_u + V_t + V_p + V_s) / 4

obtaining the average scene embedding vector V_i of the old user; wherein V_u is the user scene embedding vector of the old user, V_t is the trajectory scene embedding vector of the old user, V_p is the position scene embedding vector of the old user, and V_s is the time scene embedding vector of the old user;
S3-7, taking the average scene embedding vector V_i of the old user as the input of a hierarchical-sampling softmax function, and obtaining the recommendation probability of the old user for each browsing-scene spatial position;
S3-8, according to the formula:

P_M = (1 / N_R) · Σ_(r ∈ R_i) Σ_(m=1..N) log Pr(m|i)

obtaining the probability P_M of the old user recommending the M-th scene, and further obtaining the probability of the old user recommending each scene; wherein R_i represents the set of trajectories generated by the old user, N_R is the length of the trajectories generated by the old user, and N is the total number of browsed-scene spatial positions in the M-th scene; log(·) is a logarithmic function, and Pr(m|i) is the recommendation probability of the old user for the m-th browsing-scene spatial position;
S3-9, sorting the scenes in descending order of the old user's recommendation probabilities, pushing the sorted result to the user's VR visual interface as the user's personal interest and preference list, and entering step S5.
In one embodiment of the invention, the specific method of taking the average scene embedding vector V_i of the i-th old user as the input of the hierarchical-sampling softmax function and obtaining the recommendation probability of the i-th old user for each browsing-scene spatial position is as follows: according to the hierarchical-sampling softmax function formula:

Pr(m|i) = Π_p [1 / (1 + exp(-V_i^T · θ_(p-1)))]^(1-b) · [1 / (1 + exp(V_i^T · θ_(p-1)))]^b

obtaining the recommendation probability Pr(m|i) of the i-th old user for the m-th browsing-scene spatial position, and further obtaining the recommendation probability of the i-th old user for each browsing-scene spatial position; wherein the product is taken over the nodes p on the path from the root to the leaf corresponding to the m-th browsing-scene spatial position; N_R is the length of the trajectory generated by the i-th old user; exp(·) is an exponential function with the natural constant e as its base; V_i^T is the transpose of the average scene embedding vector V_i of the i-th old user; θ_(p-1) is the parameter of the (p-1)-th node of the hierarchical-sampling softmax function; b is a branch indicator: when the m-th browsing-scene spatial position takes the left branch at the p-th node, b = 0; when it takes the right branch at the p-th node, b = 1.
Each node in the hierarchically sampled binary tree structure is associated with an embedding vector used to compute the branch probabilities at that node. In this structure, each leaf can be reached from the root through a unique path. All parameters of the hierarchical-sampling softmax function can be trained by stochastic gradient descent (SGD): during training, the function iterates over all trajectory positions, computes the gradients of the parameters through the error back-propagation algorithm, and uses the gradients to update the parameters until convergence.
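One SGD step of this training loop can be sketched as follows for a single (node, branch) observation. Using the sigmoid branch probability p = σ(V_i^T · θ), the gradient of -log p with respect to the score is σ(V_i^T · θ) - (1 - b); the learning rate and the list-based vector representation are assumptions for illustration:

```python
import math

def sgd_update(v, theta, b, lr=0.05):
    """One SGD step on a single (node, branch) observation of the
    hierarchical softmax. b = 0 means the left branch was taken
    (probability sigma(score)), b = 1 the right branch (1 - sigma(score))."""
    score = sum(x * t for x, t in zip(v, theta))
    p_left = 1.0 / (1.0 + math.exp(-score))
    err = p_left - (1 - b)  # d(-log p)/d(score)
    new_theta = [t - lr * err * x for t, x in zip(theta, v)]
    new_v = [x - lr * err * t for x, t in zip(v, theta)]
    return new_v, new_theta
```

After an observation of the left branch (b = 0), the updated parameters raise the left-branch probability, which is the expected direction of the gradient step.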
In a specific implementation process, parameters such as the spatial position, spatial attitude and scaling of devices such as a VR handle and a VR headset in the virtual scene can first be recorded based on the spatial positioning and gravity-sensing technologies of the VR hardware; secondly, the user's focus-of-attention information can be obtained using technologies such as ray-cast focusing; finally, background data in the VR scene can be collected through an information acquisition technology, and the collected interaction information can be processed and optimized through an information processing technology.
Acquiring the operating parameters of the handle: the user first selects a specific key on the VR handle to operate according to his or her own needs, and can move within the scene by emitting a ray from the handle to a designated point. The system captures the user's handle operations by detecting the state of the handle, records information such as the position, spatial attitude, scaling and usage time of the handle in the virtual scene, and stores this information in the system through a data persistence technique.
Acquiring the user's spatial position data in the virtual scene: the spatial position of the head-mounted display of the VR hardware in the constructed scene is taken as the current user's position, and the user's position information in the virtual scene is recorded at fixed time intervals; the position information includes the X, Y and Z coordinates in the scene and a time record.
Acquiring the user's focus-of-attention information: the center coordinate in the field of view of the head-mounted display is taken as the user's focus of attention. Specifically, a ray is emitted from the center of the current user's head-mounted display; the ray collides with an object in the scene, and the first collision point is taken as the current user's focus of attention. The focus information includes the X, Y and Z coordinates, the recording time, and information about the object on which the focus rests.
Information acquisition and data processing: background data in the VR scene is collected through the above technologies, and the collected interaction information is preprocessed through an information processing technology. The processed data format includes elements such as the user ID, the browsed-scene spatial position, the browsed-scene category, the focus-of-attention object, the entry time, the leaving time and the staying time, and an interaction information data set is constructed on this basis. For example, after a user enters a VR scene, each time the user visits a scene with the VR handle, a piece of interaction information {u, l, c, o, t1, t2, t3} is generated, where l can be represented by the coordinates (X, Y, Z); that is, {user ID, browsed-scene spatial position, browsed-scene category, focus-of-attention object, entry time, leaving time, staying time}: user u enters the scene of category c at spatial position l at time t1, views focus object o, leaves at time t2, and stays for t3 seconds. For example, {UID001, (X, Y, Z), station, security equipment, 2019-01-20/3:30pm, 2019-01-20/4:00pm, 1800s} indicates that the user with user ID 001 entered the station at spatial position (X, Y, Z) at 3:30 pm on January 20, 2019, watched the security equipment, left at 4:00 pm on January 20, 2019, and stayed for 1800 seconds.
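The interaction record described above can be sketched as a small constructor that derives the staying time t3 from the entry and leaving timestamps (the timestamp format is an assumption):

```python
from datetime import datetime

def make_record(user_id, pos, category, focus, t_enter, t_leave):
    """Build one interaction record {u, l, c, o, t1, t2, t3}; the staying
    time t3 is derived from the entry/leaving timestamps."""
    fmt = "%Y-%m-%d %H:%M"  # assumed timestamp format
    t1 = datetime.strptime(t_enter, fmt)
    t2 = datetime.strptime(t_leave, fmt)
    return {
        "u": user_id, "l": pos, "c": category, "o": focus,
        "t1": t_enter, "t2": t_leave,
        "t3": int((t2 - t1).total_seconds()),
    }

rec = make_record("UID001", (10.0, 0.0, 25.5), "station", "security equipment",
                  "2019-01-20 15:30", "2019-01-20 16:00")
assert rec["t3"] == 1800  # 30 minutes = 1800 seconds, as in the example
```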
In conclusion, the invention abstracts the interest characteristics of the user from a large amount of interaction information, deeply mines the user's preferences, and fully analyzes the user's multi-level perception and exploration needs for the railway, thereby providing the user with a personalized and humanized exploration scheme, comprehensively improving the user's experience in the railway VR scene, and further improving the interaction efficiency of the railway VR.